Apple has broken Time Machine enough times that I would never consider using it anymore. Once upon a time it was really neat, with great Mac OS X integration and an amazing user interface and experience, but it's now clearly technology that Apple will probably drop entirely in favor of something less impressive altogether, like telling you to buy more iCloud storage.
Haven't the issues always been related to remote Time Machine? I have a USB drive I use and haven't heard of any issues with that setup. Am I missing something?
In the past, I've heard recommendations not to use remote Time Machine over SMB directly, but rather to create an APFS disk image on a remote server and then back up to that as if it's an external hard drive.
Supposedly, doing that eliminates a lot of the flakiness specific to SMB Time Machine. While I haven't tested it personally, I have used disk images over SMB on macOS Tahoe recently, and they actually work great (other than the normal underlying annoyances of SMB that everyone with a NAS is mostly used to at this point).
The new ASIF format for disk images added in Tahoe actually works very well for this sort of thing, and gives you the benefits of sparse bundle disk images without requiring specific support for them on the underlying file system.[1][2] As long as you're on a file system that supports sparse files (I think pretty much every currently used file system except FAT32, exFAT, and very old implementations of HFS+), you get almost native performance out of the disk image now. (Although, again, that's just fixing the disk image overhead, you still have to work around the usual SMB weirdness unless you can get another remote file system protocol working.)
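For reference, the classic sparse bundle flavor of that setup is only a few commands. A sketch, not gospel: the server name, share, and size here are made up, and I haven't personally verified the new ASIF creation flags in Tahoe, so this shows the older sparse bundle route:

    # Mount the SMB share, then create the image on it.
    open 'smb://nas.local/backups'

    hdiutil create -size 1t -type SPARSEBUNDLE -fs APFS \
        -volname TMBackup /Volumes/backups/TMBackup.sparsebundle

    # Attach the image and point Time Machine at the mounted volume.
    hdiutil attach /Volumes/backups/TMBackup.sparsebundle
    sudo tmutil setdestination /Volumes/TMBackup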
I tried moving to NFS, but the level of complexity of NFS auth is just comical. I gave up after trying to set up a Kerberos server on the Synology that I was trying to access. It's too much.
Using unauthenticated NFS, even on a local network, is too dodgy imo.
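To give a flavor of the "too much": even once the KDC, keytabs, and principals exist, the export and mount themselves look roughly like this (paths and hostname hypothetical, Linux client shown):

    # /etc/exports on the server: require Kerberos with privacy
    /volume1/backups  192.168.1.0/24(rw,sec=krb5p)

    # On the client, after kinit has fetched a ticket
    sudo mount -t nfs4 -o sec=krb5p nas.local:/volume1/backups /mnt/backups

And that's the easy part. Standing up the KDC and distributing keytabs is where it gets comical.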
I lose my Time Machine drive, like, every year or two.
Sometimes, Time Machine just goes stupid and I have to wipe the drive and start over. All of my efforts in the past to copy or repair or otherwise fix a Time Machine drive have ended in folly, so when it starts acting up, I just wipe it and start anew.
Other times, it's the drive itself, and I swap it out.
99% of the time, it Just Works. Wiping the drive is more annoying than catastrophic for me (99.9999% of the time I don't care about my 18-month-old data). It's mostly for local catastrophic fat-fingering on my part, and to make sure I have a solid backup after I do an OS update. I have Backblaze for the "why is there five feet of mud in my burning house" scenarios.
Outside of that, I've always been able to recover from it.
My wife has an SSD she plugs into her laptop for TM backup. That machine at most makes laps around the house, so it's not that big a deal for her.
Linux is definitely not a "full operating system."
Here's Linux built on GitHub Actions, with GRUB[1], and you can't do anything with it. I include a reference init that does nothing, per kernel.org. 17.8 MB image.
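For the curious, a "reference init that does nothing" really is about this small. A sketch of the same idea, not the exact script above (the bzImage path assumes you've already built a kernel):

    # A minimal static init; it must never exit or the kernel panics.
    cat > init.c <<'EOF'
    #include <unistd.h>
    int main(void) {
        for (;;)
            pause();   /* do nothing, forever */
    }
    EOF
    gcc -static -o init init.c

    # Pack it as /init in an initramfs and boot it under QEMU.
    echo init | cpio -o -H newc | gzip > initramfs.cpio.gz
    qemu-system-x86_64 -kernel arch/x86/boot/bzImage -initrd initramfs.cpio.gz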
GNU is, by every practical measure, everything else. People memed on Stallman for the whole GNU/Linux naming, but he's basically right. There's also Android/Linux, which another user mentioned, and some distributions that don't use a GNU userland at all.
But the vast majority of people are using GNU/Linux or some ecosystem derivative of it, like people using GNOME, which was formerly part of the GNU project.
I think Apple’s chip prowess is completely hampered by the fact that I’m buying hardware that is measurably less mine than the lesser x86 chips on the market that I can actually do whatever I want with.
I don’t really care how many hours their laptops last compared to Windows and Linux machines anymore.
I can’t put a price on user freedom. Even if I could, it’s far from negligible.
Apple has chipped away at user freedom for years. It's an entire tooling and infrastructure effort built from intentional strategy, not a marginal price difference.
Billions of dollars were invested in removing our ability to do common tasks.
People unfamiliar with Linux at a documentation level assume that because Linux is Linux, it must be pretty well documented. In reality, just building the thing and creating an init is an extremely poorly documented process for such mature software.
You're not missing anything. It's amazing Linux makes any progress at all, because the highest-touch points of the damn thing are basically completely undocumented.
And when documentation does exist, it's either out of date, written by some random maintainer describing a process no longer used, or by a third party who is obviously wrong or superfluous and has no idea what they're talking about.
Edit: Oh, it's a cultural issue, too. Almost everything revolving around Linux documentation is also an amateur shitshow. systemd, the init system (and so much more) that everyone uses? How do you build it and integrate it into a new image?
I don’t know. They don’t either. It’s assumed you’re already using it from a major distribution. There’s no documentation for it.
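To be fair, the compile itself is discoverable, since it's a stock meson project; it's everything after the install step, wiring it into a from-scratch image, that you're on your own for. Roughly (image path hypothetical):

    git clone https://github.com/systemd/systemd
    cd systemd
    meson setup build
    ninja -C build
    # The undocumented part starts here: the /usr layout, units,
    # generators, and presets your image actually needs.
    DESTDIR=/path/to/image ninja -C build install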
docs.kernel.org is generated from in-tree READMEs, docs, and type/struct/function definitions, making it a lot easier to read and browse documentation that would previously have required grepping the source code to find.
I realize the site also hosts some fairly out-of-date articles; there is room for improvement. Those hand-written articles start with an author and timestamp, so they're easy to filter.
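If you have a kernel tree checked out, you can render the same docs locally, which is handy for offline browsing (requires Sphinx):

    # From the root of a kernel source tree
    make htmldocs                            # output in Documentation/output/
    make SPHINXDIRS=filesystems htmldocs     # or build just one subsystem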
A small but significant detail that irritates me: you used to be able to search applications faster through the dedicated Applications overlay, but that now appears to be just a shortcut to Spotlight, which suffers from incredibly poor index planning.
In the past, when Spotlight was too slow to show me my most used applications by the first few letters, I'd bail and use Applications.
Now I'd have to use Finder, but opening that up would be slow enough that I'd almost need a desktop shortcut.
So, in essence, I have to hack around the most common functionality of using an application on an operating system, which is finding the damn thing. And this is supposed to be the most polished operating system on the market?
Apple frequently appears to be asleep at the wheel.
Yeah, I used to have a hot corner set up so that I could fling my mouse towards the upper left and then type the first letter or two of the app name, just like in GNOME.
Now that causes the screen to freeze for half a second (possibly my fault: I have 'reduce animations' switched on, but it seems to freeze the screen for the duration of the animation that would previously have played), then the colour wheel spins for a couple of seconds, and then it might finally respond to my keyboard input... but even then, it fails to find the app maybe 20% of the time. This is on a ~1-year-old M4 MacBook Pro with 36 GB RAM.
So for the past month I've been training myself to alt+tab round to the Finder window and navigate to the Applications folder from there.
I've never been much of a macOS fan, but this is shockingly poor: less of a papercut, more a wedge of smouldering bamboo shoved under my fingernails.
On the other side of the fence, I enjoy the new Spotlight-for-applications that opens when I hit the Touch Bar key (I still have an M1) that used to open Launchpad. It seems to sort programs by frequency, so it knows that I open Ghostty far more often than Ghostery, and typing "Gh" will bring me to Ghostty instead of Ghostery. In the old Launchpad, applications were always presented alphabetically when you began typing, so Ghostery was always selected instead of Ghostty. I had to type "gh", right arrow, enter before; now I just hit "gh", enter.
Tahoe's new Spotlight refresh includes an application-specific option (open Spotlight, then arrow/cursor to the right or press cmd+1), and it will only match on applications, which is indeed very fast compared to a full-blown Spotlight search...
except it doesn't match on Apple's built-in applications like Calendar or Screenshot.app, which makes it useless to me, since I don't mentally separate Apple apps from third-party ones when trying to find or search for apps.
I concede that this is the state of the art in secure deployments, but I'm from a different age, where people remoted into colocated hardware, or at least managed their VPSes without destroying them on every update.
As a result, I think developers are forgetting filesystem cleanliness, because if you destroy the entire instance, well, it's clean, isn't it?
It also results in people not knowing how to do basic sysadmin work, because everything becomes devops.
The bigger problem I have with this is that its logical conclusion is to use “distroless” operating system images with vmlinuz, an init, and the minimal set of binaries and filesystem structure you need for your specific deployment, and I rarely see anyone actually doing this.
Instead, people are using a hodgepodge of containers with significant management overhead that actually just sit on Ubuntu or something. Maybe Alpine. Or whatever Amazon distribution is used on EC2 now. Or, of course, like in this article, Fedora CoreOS.
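For concreteness, the container flavor of that end state is tiny; it's just rarely what people actually ship. A sketch, assuming a static Go binary (names hypothetical):

    # Build a fully static binary, then ship it on an empty base image.
    CGO_ENABLED=0 go build -o server .

    cat > Dockerfile <<'EOF'
    FROM scratch
    COPY server /server
    ENTRYPOINT ["/server"]
    EOF
    docker build -t server:distroless .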
One day, I will work with people who have a network issue and don’t know how to look up ports in use. Maybe that’s already the case, and I don’t know it.
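For the record, since apparently it bears repeating:

    # Linux: what's listening, and which process owns it
    ss -tlnp
    # macOS or older boxes
    sudo lsof -iTCP -sTCP:LISTEN -n -P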
> The bigger problem I have with this is that its logical conclusion is to use “distroless” operating system images with vmlinuz, an init, and the minimal set of binaries and filesystem structure you need for your specific deployment, and I rarely see anyone actually doing this.
In the few jobs I've had over 20 years, this is common in the embedded space, usually using Yocto. Really powerful, really obnoxious toolchain.
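The Yocto quick start really is this short; it's everything after it (layers, recipes, BSPs) that gets obnoxious:

    git clone https://git.yoctoproject.org/poky
    cd poky
    source oe-init-build-env          # drops you into build/
    bitbake core-image-minimal        # some hours later: a bootable minimal image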
What you describe is from the "pets" era of server deployment, and we are now deep into the "cattle" era. Train yourself on destroying and redeploying, and building observability into the stack from the outset, rather than managing a server through ssh. Every shop you go to professionally is going to work like this. Eventually, Linux desktops will work like this also, especially with all the work going into systemd to support movable home directories, immutable OS images with modular updates, and so forth.
I don't think this viewpoint is very pragmatic. "Pet" and "cattle" approaches solve different scales of problems. Shops should be adaptable to using either for the right job.
I already do this professionally, and when something is broken, we collectively as an industry have no idea why, except to roll back to a previous deployment, because we have no time for system introspection, nor do we really want to spend engineering hours figuring it out. Just nuke it.
The bigger joke is that everyone behaves like they have a ranch for all this cattle infrastructure.
In reality, the largest clients by revenue in the world have a PetSmart. And frankly, many of them have a fishbowl.
My GeForce2 MX 200/400 with an Athlon and 256 MB of RAM began to become useless in ~2002/2003 with the new DX9 games.
Doom 3? Missing textures. Half-Life 2? Maybe at 640x480. F.E.A.R.? More like L.A.U.G.H.
Times changed so fast (and on top of that, shitty console ports) that my PC didn't put up great numbers at home until 2009, with a new machine.
Although I began to play games like Angband, NetHack and the like in that era, and it opened an amazing libre/indie world that continues to this day.
And, yes, I replayed Deus Ex because it had tons of secrets and it ran on a potato. Perfectly playable at 800x600 at max settings.
What’s your point? Say everything you just said again, but with software engineering and Indians, instead of manufacturing and the Chinese, or textiles and Vietnam and Pakistan.
There’s no reason American cars need to exist either, they basically all perform worse dollar-for-dollar, feature-for-feature, than foreign cars.
In fact, let’s offshore everything. There’s no reason not to use Filipinos for McDonald’s and In-n-Out drive-thru speakers.
Let's all adopt the Chinese tang ping ("lying flat"). Lie down and die. Treat every effort of labor as replaceable and void of respect.
If China and India wanted to wage effortless war on the US, all they would have to do is stop exporting goods and labor to us.
Please read my comment again. This time, consider that our laws and regulations are not laws of physics or axioms of mathematics and are therefore able to be changed. The comment will make more sense in that light.
The only thing I've never understood about the HPV vaccination is that, for some reason, past a certain age as an adult in the United States, no primary care provider appears to recommend you get it in addition to your regular vaccination schedule.
Is the idea that you're married with a single partner, and the risk factor has dropped below a certain threshold where there's little reason to recommend it, since the likelihood is that you've already acquired HPV in your lifetime thus far?
Every other vaccination appears to be straightforward, besides HPV, and I don't know why. I've also never heard a clear answer from a physician.
Is it just that our vaccination schedules are out of date in the United States? This seems to be the most likely culprit to me.
I don't really have time to read it all, but the basic idea is as you said: the cost-benefit ratio is off. Expanding from something like the current policy to vaccinating people up to 45 years old would avert an extra 21k cases of cancer (compared to the base case of 1.4 million), so about an extra 1.5% of cases averted, while direct vaccination costs are estimated to increase from $44 billion to $57 billion (+29%).
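Back of the envelope: the marginal $13 billion buys 21k extra averted cases, roughly $620k per case, versus about $31k per case ($44B / 1.4M) in the base program, so the marginal dollar buys roughly 20x less prevention.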
The current guidance says "do not recommend" plus "consult your doctor." You should read that as "blanket vaccination as public policy is cost-inefficient in that age range," not "you as a 45-year-old should categorically not get the vaccine."