Out of curiosity, how recent is this? I haven't owned my own MacBook since the pre-Touch Bar era (I think the last model I had was the early 2015 Pro), and I had heard that Linux had gotten harder to boot since then due to newer firmware (I remember hearing that for a while the newer models had no wifi support on Linux, though I don't know how long that lasted). But at least up until then I was able to run a fairly typical Linux setup dual-booted alongside macOS. I was working from a blog post I found, but I think the order of the steps was roughly: turn off FileVault; shrink the main macOS partition by the amount I wanted to use for Linux; do the Linux install the way I normally do (one FAT ESP partition with rEFInd installed, then the rest as a LUKS volume with LVM root and swap); turn off SIP by booting the MacBook into recovery; boot into macOS and run the `bless` command to set the rEFInd partition as the default boot target; reboot back into recovery to turn SIP back on; and finally boot back into macOS and turn FileVault back on. Essentially, by temporarily turning off SIP and FileVault, I got Linux booting by default with my usual LUKS/LVM setup while still being able to select macOS from the rEFInd menu and boot it with the usual FileVault/SIP protections intact. Based on what I'd read about the efforts to support Linux on the new ARM MacBooks, and Apple seemingly not going out of their way to block them, I would have thought this method would still work, although maybe there's something I'm missing.
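For reference, the macOS-side commands for that procedure look roughly like this (a sketch, not a recipe: disk identifiers, volume names, and sizes are illustrative, `csrutil` has to be run from the Recovery terminal rather than normal macOS, and pre-APFS installs used `diskutil resizeVolume` instead of the container command shown):

```shell
# 1. Turn off FileVault before repartitioning (also doable in System Preferences)
sudo fdesetup disable

# 2. Shrink the main container to free space for Linux
#    (APFS-era syntax; disk identifier and target size are examples)
diskutil apfs resizeContainer disk0s2 400g

# 3. Install Linux into the freed space: FAT ESP with rEFInd, LUKS+LVM for the rest

# 4. Reboot into Recovery and run:  csrutil disable

# 5. Back in macOS, point the firmware at the ESP holding rEFInd
sudo bless --mount /Volumes/EFI --setBoot

# 6. Reboot into Recovery and run:  csrutil enable

# 7. Back in macOS, turn FileVault back on
sudo fdesetup enable
```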
Oh wow, that is a big change! Even though I know Microsoft influenced and pushed the UEFI standard, my first MacBook was my first introduction to EFI booting (since I had used legacy BIOS on my PCs up until college), so I've always associated it with Apple hardware.
Some snarky people would even say that modern Macs are in fact iPhones, just with a different form factor.
The unification of the operating systems for both devices continues with every macOS release. (And that's not only about the unification of the naming scheme…)
It absolutely can. Fedora uses HFS+ for the ESP on Macs because it integrates more cleanly into the Apple boot menu you get when you hold down Option at boot, but the firmware handles FAT just fine.
Just be a little careful with the state returns on Cash App. It has a bug in handling mortgage interest deductions if your mortgage is over $750,000 and the state allows the deduction on up to $1,000,000. Double-check by filing with other tax software and verifying the refund amount is the same. I used freetaxusa.com to get this right.
Cash App Taxes has a bug where the mortgage interest deduction is not handled properly between state and federal. If your mortgage is more than $750,000 and your state is California, or any state that allows the deduction on up to $1,000,000 of mortgage principal, then you will end up getting a lower refund.
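The mismatch is easy to see with the capped-deduction arithmetic. This is a simplified sketch (flat proration by the principal cap; the dollar amounts are made up, and real returns use the average-balance method, so this is illustration, not tax advice):

```shell
# Deductible interest is prorated by min(balance, cap) / balance.
interest=40000      # interest paid this year (example)
balance=1000000     # average mortgage balance (example, above both caps)

fed=$(( interest * 750000 / balance ))    # federal cap $750k  -> 30000
ca=$((  interest * 1000000 / balance ))   # CA cap      $1M    -> 40000

echo "federal-deductible: $fed"
echo "state-deductible: $ca"
```

A state return that reuses the federal $30,000 figure instead of recomputing against the state's $1,000,000 cap understates the state deduction by $10,000, which is exactly the lower-refund symptom described above.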
Cash App also couldn't file my EV credit correctly. Support was significantly worse than useless. I ended up going with FreeTaxUSA this year, but I've also seen bugs with them.
My dad used to tell me this story: in rural India in the 60s, the federal government would lay copper cables to bring electricity to a town. Some locals would take down all the cables overnight and sell them on the black market for some quick money.
For them, electricity wasn't that important compared to food on the table.
I hadn't thought about it for 15 years, and now, with some basic research, I couldn't find anything about either fiber optic or copper cable theft being used for fabric in Africa.
Nothing on Snopes, nothing in your wiki page. In my prior comment I felt either was equally likely,
so this did nothing to change what I might or might not believe. Thank you for the contribution about general metal theft, though.
It's absolutely a thing; even in developed countries people will steal cabling for scrap metal, even if it's getting rarer. Copper roofs on very old churches are also commonly stolen for scrap value.
The parent comment was "my dad used to tell me this story", "in rural India in the 60s"
It wasn't about not believing the possibility; it was about noticing that I'd heard a similar story that featured association and no source, and identifying that this is the same way all urban legends spread. I also said "or maybe I have been missing an entrepreneurial opportunity", completely and clearly signaling that I was open to believing once referenceable information was provided, which it immediately was by the first response, while ironically vilifying me instead of leaving it at the educational moment.
/rant If Fedora really loves Python, they should start showing their love by publishing all their distribution-related libraries to PyPI. blivet, the SELinux bindings: none of these are available on PyPI. Why do they expect everyone to use RPMs?
Personally, I can't stand it when I have to get language-specific packages from one of the many per-language package managers; that's the point of having a distribution - so it can be distributed.
Right, but the issue GP has with this is that the Python libraries that Fedora develops aren't available on other distros, and that submitting them to a Python repository would be an easy way to achieve that availability.
For the operating system, sure, use the OS packages.
For writing and deploying an application in a language? Never use the OS packages; manage it using the language's tools. My Python applications deploy into a virtualenv and install their dependencies using pip.
This. Make as big a mess as you like on your own box (your sysadmin can pave it when you're hopelessly confused), but everything in prod has to be registered with the one and only package manager because otherwise nobody will know where it's deployed or what its dependencies are or whether they're up to date. cpan/pip/gem/cargo/go get/hackage/melpa are not sysadmin problems.
> everything in prod has to be registered with the one and only package manager
No. Never. Not for any reason, ever. Never.
The language's packaging ecosystem and toolchain are:
* Tailored to the language, not the operating system, which means they're reproducible on multiple operating systems. This is important, since your developers are not running RHEL server as their laptop OS, and as a result they'll be using the language toolchain regardless of what your "sysadmin" does to the production environment.
* More likely to be up-to-date and/or updatable than distro-format packages. Unless you want to be running a two-year-old version of your libraries (or older), the only way you'll get distro-format packages is to build them yourself... which requires you to grab them from the language's package system, since that's where they get published, and maintain your own pipeline to re-package them into the distro format. Now you've injected additional moving parts into your systems where none were needed.
Distro packages are only for the base operating system and things like your HTTP daemon. For application code and dependencies, the distro packages should only be involved insofar as they bootstrap you to the point of being able to use the language's toolchain. Insisting on distro-format packages for the whole thing is the path to overcomplex builds/deploys and difficult-to-update codebases.
> because otherwise nobody will know where it's deployed or what its dependencies are or whether they're up to date
I can, at a glance, look at any application in production where I work and see what its full dependency tree is and whether those dependencies are up-to-date (and if not, whether they're just outdated or also subject to security advisories). Using things built on the language toolchain. Really. And this is not new cutting-edge technology here, we've had that capability for a good number of years now!
Even better, I can match up that information to what upstream actually publishes: if they say the bug I care about is fixed in version 3.1.4, I can upgrade to version 3.1.4. With distro packages, who knows? The distro might have backported the bugfix into 2.7.1 and bumped the patch number, for all I know. The more places I have to look to find out what's up-to-date and what versions I should use, the more opportunities I have to mess up. Reduce the number of places to look until it's one and only one: the upstream release notes, using upstream's versioning and upstream's packages.
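With Python, for example, that visibility comes from the standard tooling (a sketch; note that `pip-audit` is a separate PyPA tool, not part of pip itself, and the networked commands are shown as comments since they need index access):

```shell
# Full, pinned view of everything installed in this environment,
# using upstream's own version numbers
python3 -m pip list --format=freeze

# With network access, compare against the index and advisory databases:
#   python3 -m pip list --outdated
#   pip-audit        # separate tool: python3 -m pip install pip-audit
```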
Distro packages for the base OS, language packages for application and its dependencies. Deviate from this at your peril.
If we ship code that runs on SuSE 234 and RHEL 345, that's what we test on. If we run on our own hardware or AWS, we pick a distro and test on that. I don't care whether it builds or runs on a laptop; I don't write twitch games and interesting problems don't fit on a single machine anymore.
It's been a decade since I worked at such a tiny nascent company that all the software was written in just one language. Language packages almost never express dependencies on either system packages or other languages' packages, making "bootstrap to the point of being able to use all of the languages' toolchains" a manual process that lacks any guardrails. Nothing ensures you have an httpd version that's compatible with all your apps, because each of them just sort of assumes httpd is out there somewhere without saying anything about it. Staying on the right versions of shared libraries is even more error-prone, since the system package manager literally doesn't know you're using them.
If you want to read upstream security advisories and use such bleeding-edge software that even the bleeding-edge distros don't trust it yet, you're basically rolling your own distro that only exists on one machine in the world (because some languages' package managers aren't idempotent and symmetric) and is supported by nobody besides you. I'd rather delegate that to the people who specialize, because the best case is that I don't fuck it up too badly, I'll never add value that way.
> If we ship code that runs on SuSE 234 and RHEL 345, that's what we test on.
By all means run the test server as an environment identical to production. I've never said you shouldn't. But people do have to locally run the code on their laptops to do dev work.
> I don't care whether it builds or runs on a laptop
Good for you! Now, clean out your desk, because "build custom infrastructure to suit my workflow, but your workflow isn't important" is a clear admission that you don't ever get to work on my team, or probably anyone else's.
> I don't write twitch games and interesting problems don't fit on a single machine anymore.
Ah, so you only will work on "interesting" problems, and literally all possible problems you don't find "interesting" are in categories like "twitch games", to be insulted and belittled. It's a good thing you were already fired a paragraph ago, because you'd get fired for that too; turns out most companies don't have problems you'd consider "interesting". So sorry for that, but them's the breaks.
> It's been a decade since I worked at such a tiny nascent company that all the software was written in just one language.
With a head that big, what's your size in hats?
> Language packages almost never express dependencies on either system packages or other languages' packages
But to actually respond: how many single applications do you think the average company has which are written in, say, five or more different languages and must deploy the entire codebase to a single machine? You know, that single machine you refuse to work with, because it must be just for a "twitch game" or some other tiny puny baby child's toy of a program.
> If you want to read upstream security advisories and use such bleeding-edge software that even the bleeding-edge distros don't trust it yet
So sorry that I wanted to use the version with the feature that didn't make it in under the distro's freeze date. Guess we'll just wait ten years until the support contract expires and we're finally forced into an OS upgrade, then? Management will be very happy to hear that timeline, I'll bet they give you a promotion and a raise when they find out you're the one holding it up!
> you're basically rolling your own distro that only exists on one machine in the world
I have reproducible builds using language packaging toolchains. Turns out it's 2017 and we can do that now.
> I'd rather delegate that to the people who specialize, because the best case is that I don't fuck it up too badly, I'll never add value that way.
There are parts of this sentence that I agree with.
I don't mean to disparage twitch games, that's my go-to example of one of the last domains where it's cost-effective to get really good at living within customers' hardware constraints. But when prod is a growing distributed system, it's natural for tests to assume the same distributed system, and forcing those tests to sort-of run on a single box with the wrong kernel/fs/network config just trades in cheap hardware for expensive engineers doing work that doesn't make prod better.
When I write java, I can't run maven in prod and expect it to get native libraries into /usr/lib64 and the sysadmins' Python and Go plumbing and config files into ... wherever the hell that may live. So together we tweak a .spec file that not only provisions the entire machine correctly but answers questions about whether the entire machine is provisioned correctly, not just the java half. (We probably could make maven do all that, but the result would be worse in every conceivable way, and in most languages it's not even an option.)
If you want a rolling release distro, use one. Godspeed. But nobody's going to sell a support contract that covers random alpha builds published overnight. "We can't upgrade the distro and get the code we need, so we're smuggling in code that the distro doesn't trust yet" is just devs and sysadmins playing chicken over the fate of the project. The bleeding-edge vs supported argument should have been settled before going live.
OS package managers are so much worse than language package managers, though - no ability to install packages per-user or in a "local" environment, difficulty having multiple versions of the same package installed (a huge problem if you have a "diamond" where you depend transitively on two different versions of the same library), no IDE integration, limited introspectability, inconsistent testing standards...
If the answer to "are my dependencies installed and up to date?" is "well, yes and no, we can't tell which copy I'm actually using", I am not ready to go to prod.
> "diamond" where you depend transitively on two different versions of the same library
That's a trainwreck waiting to happen. It's not even worth testing, much less deploying.
> If the answer to "are my dependencies installed and up to date?" is "well, yes and no, we can't tell which copy I'm actually using", I am not ready to go to prod.
Language package managers are a lot better at that than OS package managers, IME. Much better for all deploys of version x to use the same version than for all deploys to host y to use the same version.
The case could be made for choosing to stick with well-tested and battle-hardened libraries instead of bleeding-edge, backward-compatibility-breaking releases.
Virtual environments tend to solve the problem in either case.