
Digital sanctions are long overdue.

They would be necessary just because of the amount of malicious traffic and abuse coming from Russia without any proper recourse. Why should we accept their traffic and play nice when Russia itself doesn't?


> See Google Summer of Code projects for a very practical example of how "just pay randos to work on issue X for cheap" can quite often end up in failure.

That potential for failure is there for any "subcontractors". I wonder if anyone has any stats on this.


I don't think this makes business sense in general.

I do however think that there are quite a few bugs that might be triaged as "easy" but if worked on would reveal much more serious problems. Which is why some random selection of "easy" issues should make it to work queues.


Makes business sense if you want to fill developers' downtime: instead of waiting for CR or QA feedback, they can pick up a small bug.

Working on two or more big features at the same time is not possible. But throw in some pebbles and a dev can take one on.


The business sense! Would someone please think of the business!

I've yet to find a business that really, truly knows what it wants. Whatever is "good for the business case" today could change overnight after the President reads some cockamamie article in Harvard Business Review, and again in two weeks after the CEO spends a weekend in Jackson Hole.


Don't forget Gradle ("GRADLE_USER_HOME") and OpenJDK ("-Djava.util.prefs.userRoot"); those litter $HOME too.
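
Both of those can at least be pointed elsewhere. A minimal sketch of what to put in a shell profile, assuming you want XDG-style target directories (the exact paths are just my preference):

    # Gradle honours GRADLE_USER_HOME for its caches and config
    export GRADLE_USER_HOME="${XDG_DATA_HOME:-$HOME/.local/share}/gradle"
    # Relocate java.util.prefs storage; JAVA_TOOL_OPTIONS is picked up by most JVMs
    export JAVA_TOOL_OPTIONS="-Djava.util.prefs.userRoot=${XDG_CONFIG_HOME:-$HOME/.config}/java"

It doesn't fix the defaults, but at least the litter ends up somewhere predictable.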


There are multiple reasons for this.

One being that it's _my_ $HOME, not some random developers'. I literally had more than 50 different dotfiles and dotfolders in my $HOME at some point. It was a garbage dump and I couldn't even identify the culprit with some of them. Simply disrespectful.

Then there's the issue of cleaning up leftovers and stale cache files. It shouldn't take a custom script cleaning up after every special snowflake that decided to use some arbitrarily-named directory in $HOME.

Not following the spec also makes backing up vital application state much much harder.

In the end, I made my $HOME not writeable so I could instantly find out if some software wants to take a dump. It turns out it's often simply unnecessary as well: the software doesn't even care, it just prints an error and continues.
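
If anyone wants to try the same experiment, a minimal sketch (existing subdirectories stay writable, only $HOME itself is locked; keep the revert handy because some software legitimately needs to create files there):

    # Drop the write bit on $HOME itself; its contents remain writable
    chmod u-w "$HOME"
    # Use the system as usual and watch what complains on stderr or in logs
    # Revert when done experimenting
    chmod u+w "$HOME"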


> It shouldn't take a custom script cleaning up after every special snowflake that decided to use some arbitrarily-named directory in $HOME.

Not to take away from your point, but I shall introduce you to systemd-tmpfiles.

No scripts needed; it can clean up for you if you keep a list of directories/files to clean up.
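
A minimal sketch of a per-user cleanup list, assuming a reasonably recent systemd (the entries are just examples; %h expands to your home directory):

    mkdir -p ~/.config/user-tmpfiles.d
    cat > ~/.config/user-tmpfiles.d/cleanup.conf <<'EOF'
    # Type  Path                 Mode  User  Group  Age
    e       %h/.cache/some-app   -     -     -      30d
    e       %h/.gradle/caches    -     -     -      30d
    EOF
    # Remove entries older than the configured age
    systemd-tmpfiles --user --clean

With the user instance of systemd-tmpfiles-clean.timer enabled, the cleanup then happens periodically without any custom scripts.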


> In the end, I made my $HOME not writeable so I could instantly find out if some software wants to take a dump

A brilliant idea, but goddamn what a shame it is that we have to do such things to keep our homes clean


Migration might be nontrivial but there's absolutely zero good excuse for creating _new_ noncompliant directories for 17 years.

There's a lot less to migrate if you don't wait that long.


I think most people are okay with software such as OpenSSH keeping its long-existing conventions. In the same way I don't think a lot of people mind ".bashrc" being where it is. It's manageable if there's just a few and they're well-known.

However this "exemption" does not and should not apply to anything newer. Things like Cargo, Snap, Steam, Jupyter, Ghidra, Gradle, none of those should be putting their stuff (especially temporary junk) directly and unsegmented into $HOME.

At some point I had more than 50 different dotfiles and dotfolders in my $HOME. It was unwieldy and nasty to look at. I couldn't even figure out what created some of those files because they were so generic.

Plain $HOME as the dumping ground simply does not scale beyond a select few.
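
For reference, the XDG Base Directory spec already gives each kind of file a place; a quick sketch of the defaults, each overridable with the corresponding environment variable:

    # Defaults used when the variable is unset
    echo "${XDG_CONFIG_HOME:-$HOME/.config}"      # user configuration
    echo "${XDG_DATA_HOME:-$HOME/.local/share}"   # application data
    echo "${XDG_STATE_HOME:-$HOME/.local/state}"  # logs, history, other state
    echo "${XDG_CACHE_HOME:-$HOME/.cache}"        # disposable caches

Anything following that split is trivially backed up (config, data, state) or skipped (cache) wholesale.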


>$HOME as the dumping ground

It's been a while since I used Windows, but I remember the "My Documents" folder being a trash pile of configs, save games, data files and whatnot, making it the worst place to actually store your documents.


Windows-oriented developers bring that mess to Macs, too, and it's incredibly aggravating. For over 25 years, Apple has had Documents/Pictures/Movies/Applications/Downloads/etc folders under the user's home folder, and the convention is predominantly that you never put non-hidden files or folders in the user's home directory. And you don't put application configuration in Documents, because that's what ~/Library is for.

Then ignorant developers who don't care about the platform throw random configuration folders in ~/ or ~/Documents, or think their app needs a central workspace folder for all of its projects, instead of letting you manage your own damn files.


It's just plain lazy devs. They do that crap on Windows too despite having conventions for where the stuff goes since Windows 98 (though Photos and Videos folders were introduced with XP, and Game Saves with Vista).

The folder for config is even older: CSIDL_APPDATA has been usable to get the path to the AppData folder since the Windows 95 update that added Internet Explorer 4.0.


how else will you remember where you stored your 3d objects


I actually use it for 3D objects for my printer. Stays nice and clean compared to My Documents.


    $ find ~ -maxdepth 1 -name '.??*'|wc -l
    435
[edit]

A sampling...

    $ (cd && find . -maxdepth 1 -name '.*'|sort -R|head)
    ./.texlive2023
    ./.stl
    ./.stp
    ./.repo_.gitconfig.json
    ./.xsel.log
    ./.msmtprc
    ./.fonts
    ./.bash_logout
    ./.steampath
    ./.compose-cache


It's not just running it, they have built on top of it. Embrace, extend, extinguish is exactly what the Rebble team is afraid of. If extinguished and Core goes bust, the community would be left holding the bag yet again. Rebble doesn't want that, why would they.


Isn’t EEE exactly what Rebble is doing?

They embraced the Pebble community with a copy of the App Store, extended it with their own weather APIs and the like, and now are trying to extinguish any ability for Core to implement their own solution without paying them more.


No. Core can absolutely implement their own, just not on top of their work.


But they wouldn't be extinguished? Core is literally offering to pay them per user and the OS is open sourced... how could they be extinguished under the deal as outlined?

Core could easily say "actually we won't support Rebble at all it's too complicated to maintain this relationship"... and Rebble would then only exist as long as people are willing to maintain the now decade-old original watches... which is a difficult task given the availability of superior hardware from the original manufacturer.

With the Core deal they could actually grow and they get a significantly longer lease on life even if the hardware company fails again.


I've seen you make the comment about the OS being open-sourced a lot. But this largely has nothing to do with the OS. This is a conversation about infrastructure and data. The concern (from what I gathered and will condense greatly) is that Core will take in all the current app data and infrastructure setup, duplicate it themselves, move themselves off of Rebble, and continue developing on it privately.


Which to be absurdly clear - is exactly what Rebble did to Pebble. They scraped the apps and are now mad that someone else could do the same to them.


I don't think it's equivalent. When Rebble did what they did, it was because Pebble was going under and they had no EOL plan. Rebble took it upon themselves to carry the torch without having been passed it.

If Core were to do the same thing here, it's not the same, because Rebble is still active. You can't kill what's already dead (Pebble), but Rebble is very much still alive.


It is not. If Core wants, they can take the old Pebble dump and start building on top of it like Rebble has. All is fair.


So Rebble wants to benefit from code they didn't write (Pebble apps)... but also wants to prevent Pebble from benefiting from code Pebble didn't write (Rebble updates to Pebble apps)?

This seems a little silly, no? Rent-seeking behavior for maintaining code they didn't write to begin with?


The fact that Core is not willing to just start from the old dump publicly available already shows that it's not just "rent-seeking". Core clearly wants what Rebble has spent significant effort in not just maintaining but also building.

They're entitled to it just because in some sense Core is a successor to Pebble? No, not really.


Of course it's rent-seeking, akin to squatting — Rebble took Pebble apps developed at no cost to the users, and then maintained them and added cost. In some cases they might actually be required by the licenses of individual apps to open source their maintenance.

No one's actually entitled to anything here on either end (legally), I see 0 work being done to actually contact the original authors to seek permission or licensing details.

AFAIK, there wasn't a blanket license that covered all apps in the ecosystem... so each app would vary. In the absence of a license all rights are held by the original developers.


> Rebble took Pebble apps developed at no cost to the users, and then maintained them and added cost.

Again, if that's all it were, Core could and should just take that old Pebble dump and use that. Why bother Rebble if they haven't done anything as you imply.


Why would Core agree to pay Rebble a per-user fee if they wanted to destroy them? They could just say "nope, you get nothing".

And how would this prevent Rebble from continuing to operate in the event that Pebble failed again?

Open sourcing the OS makes continuity in the event of a failure much easier for Rebble right?


> Things I learned to look out for:

Don't buy any recent Intels. Some Intel ThinkPads have accelerometers built-in just to throttle your PC to oblivion when it moves. Basically unusable in any moving vehicle such as a train. It's basically anti-portability baked-in.

When it doesn't throttle, it just has abysmal battery life compared to AMD Ryzen ThinkPads of the same generation. Both lose horribly to Apple's ARM chips though.

They also tend to have soldered WiFi modules, making it impossible to upgrade later when newer and better WiFi iterations come out. If that had been the case with a few of the older models I still have, they would be unusable at this point.

There are plenty of firmware bugs as well. For example plenty of Lenovo (especially Intel as far as I've seen) models have stuttery and freezing touchpads. Though the touchpads tend to be horrible anyways.

I'd say the older (5+ years old) generations might have had slightly better driver support or they're finally fixed at this point. But there's nothing I'd spend my money on if I can just as well install Asahi on an M-series laptop.


ThinkPads used to have accelerometers to protect the hard drives, so if you dropped the machine or treated it roughly, it could park the drive, protecting it from data loss.

People used to write Linux utilities that read these accelerometers, allowing for example to switch virtual desktops by physically smacking the machine on either side.
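
If memory serves, the old hdaps driver exposed the readings through sysfs, something along these lines (path recalled from memory, so treat it as an assumption):

    # (x,y) tilt/acceleration reading on ThinkPads with the hdaps driver loaded
    cat /sys/devices/platform/hdaps/position
    # Tools like hdapsd polled this to park the disk; the desktop-switching hacks
    # did the same and reacted to sudden spikes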


That’s horrifying.


Look, my setup works for me.



HDAPS

Hard Drive Active Protection System parked the heads in milliseconds, fast enough to handle a hard drop off a desk.


I think these also existed for Macs with HDDs; I recall seeing some very fun demos on YouTube.


Maybe what you are noticing is the "laptop on lap" detection? Check the bios, there was a "cool when on lap detected" mode on mine. Turn that off and re-test.


Yes, that's it, but there's no toggle to turn it off. Maybe it can be patched, but I don't want to fight my hardware like that.


> there's nothing I'd spend my money on if I can just as well install Asahi on an M-series laptop.

But such laptops don't work 100% with Asahi. Speakers and mic, external displays, fingerprint reader and suspend are the sore points I've read about, plus shorter battery life compared to when they run Apple's OS.


> Some Intel ThinkPads have accelerometers built-in just to throttle your PC to oblivion when it moves

Wtf? That sounds crazy, any sources?


This used to be a feature to protect spinning hard drives. Why this would exist today and why it would throttle anything is bizarre.


They don't want you to burn your testicles when keeping it in your lap.

https://download.lenovo.com/pccbbs/pubs/x1e_p1_gen5/html/htm...

> The Cool and Quiet on lap feature helps cool down your computer when it becomes hot. Any extended contact with your body, even through clothing, could cause discomfort. If you prefer using your computer on the lap, it is recommended that you enable the Cool and Quiet on lap feature in UEFI BIOS:

(it can be disabled on this laptop)

more: https://askubuntu.com/questions/1416567/disable-lap-mode-on-...
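
On Linux you can at least confirm it's this mechanism kicking in: the thinkpad_acpi driver exposes a lap-mode flag in sysfs (attribute path from memory, and it's read-only, it only reports what the firmware decided):

    # 1 = firmware thinks the machine is on a lap (and may throttle), 0 = not
    cat /sys/bus/platform/devices/thinkpad_acpi/dytc_lapmode
    # Watch it flip while moving the laptop around
    watch -n1 cat /sys/bus/platform/devices/thinkpad_acpi/dytc_lapmode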


Honestly, I was going to say this is ridiculous, but I've got an i7 13" laptop which I bought to do practically everything (personal coding projects, a bit of gaming, video editing, 3D modeling, etc.). I do find the heat of it quite uncomfortable after a short period of time on my lap. I was thinking about getting an M-series MacBook for messing around on the couch and building a desktop for many of those other tasks.

My work MacBook Pro on the other hand could do with the opposite sometimes. Burn a bit of battery to heat up the aluminium case, please!


In my experience, Intel and AMD ThinkPads of that era are about the same for battery life, but Intel always needs some kernel parameters set. Where I notice the biggest difference is that Intel's integrated graphics gets you better battery life than anything AMD, if your GPU needs are modest enough to be handled by it.


M1 and M2. But those are in an entirely different price bracket. I’d go so far as to say those are not comparable.


You can buy refurb M1s for $379 at Walmart.


Has a proprietary bootloader that Apple can lock in an OTA update. Also doesn't support Linux as well as Intel or AMD chipsets, unfortunately.


Last I heard asahi ran pretty well on M1/M2. Is that not the case?


It runs well, but battery life is quite a bit worse than on macOS.


That’s not particularly surprising to be honest. A lot of what makes Apple tech what it is is the concert between their hardware and software. Not trying to put it too poetically here, but that’s what it’s always seemed like to me.

In general, when I install Linux on an Apple device I just assume there isn't the same level of performance. I remember installing Mint on a 2016 Intel MacBook Pro and the limitations/cons didn't surprise me at all, because I just kind of expected it to perform at 70% of what I expected from macOS but with far more freedom/control. It ran very smoothly but you definitely lose a lot of functionality.


> A lot of what makes Apple tech what it is is the concert between their hardware and software.

That's very cute, but it's not why Apple laptops run Linux poorly.

Apple Silicon has terrible and inefficient support because Apple released no documentation of their hardware. The driver efforts are all reverse-engineered and likely crippled by Apple's hidden trade secrets. This is why even Qualcomm chips run Linux better than Apple Silicon; they release documentation. Apple refuses, because then they can smugly pride themselves on "integration" and other plainly false catechisms.

And on Intel/AMD, Apple was well known for up-tuning their ACPI tables to prevent thermal throttling before the junction temp. This was an absolutely terrible decision on Apple's behalf, and led to other OSes misbehaving alongside constant overheating on macOS: my Intel Macs were regularly idling ~10-20°C hotter than my other Intel machines.


>That's very cute, but it's not why Apple laptops run Linux poorly.

I have no doubt you have good information after this, but this sentence makes me not want to read any further.


Okay, that's your call. You can't phone Craig Federighi for the straight dope, so you're stuck hearing it from internet douches or product leads on prescription SSRIs.

And yes, your statement was a cutesy catechism with no actual evidence provided. A big reason why Apple tech doesn't work like a normal computer is Apple's rejection of standards that put hardware and software in-concert. ACPI is one such technology, per my last comment.


I don't think either of those is really true?

https://asahilinux.org/docs/platform/open-os-interop/


Literally the first step of the boot overview depends on a proprietary and irreplaceable Apple-controlled blob:

  iBoot2 loads the custom kernel, which is a build of m1n1
Apple decides whether or not m1n1 ever loads.


Only if you boot into macOS and connect it to the internet. iBoot2 never changes by itself; you, the user, decide if you want to boot into recovery or macOS and run an update.

So can Apple stop signing new iBoot2 versions? Sure! And that sucks. But it's a bit of FUD to claim that Apple at arbitrary points in time is going to brick your laptop with no option for you to prevent that.

Granted, if you boot both macOS and Asahi, then yes, you are in this predicament, but again, that is a choice. You can never connect macOS or recovery to the internet, or never boot them.


> You can never connect macOS or recovery to the internet, or never boot them

In other words, you're completely fucked if you brick your install. I consider iBoot a direct user-hostile downgrade from UEFI for this reason.

YMMV, but I would never trust my day-to-day on an iBoot machine. UEFI has no such limitations, and Apple is well-known for making controversial choices in OTA updates that users have no alternative to.


> In other words, you're completely fucked if you brick your install. I consider iBoot a direct user-hostile downgrade from UEFI for this reason.

That's a bit of a creative perspective, isn't it? You have no control over the UEFI implementation of your vendor, same can be said for AGESA and ME, as well as any FSP/BSP/BUP packages, BROM signatures or eFused CPUs. And on top of that, you'll have preloaded certificates (usually from Microsoft) that will expire at some point, and when they do and the vendor doesn't replace them, the machine might never boot again (in a UEFI configuration where SecureBoot cannot be disabled as was the case in this Fujitsu - that took a firmware upgrade that the vendor had to supply, which is the exception rather than the rule). For DIY builds this tends to be better, Framework also makes this a tad more reliable.

If anything, most OEM UEFI implementations come with an (x509) timer that, when it expires, bricks your machine. iBoot2 is just a bunch of files (including the signed boot policy) you can copy and keep around forever, with no lifetimer.

Now, if we wanted to escape all this, your only option is to either get really old hardware, or get non-x86 hardware that isn't Apple M-series or IBM. That means you're pretty much stuck with low-end ARM and lower-end RISC-V, unless you accept AGESA or Intel ME at which point coreboot becomes viable.


Basically your counterpoint is that I'm absolutely right to be concerned, but I'm wrong because UEFI can also be implemented with the same objectionable backdoors that Apple implements.

We're done here, have a nice day.


It's not a counterpoint, it's a display of your factually incorrect statement.


Except Apple Silicon notebooks are notably unbrickable[0]? You can always do https://docs.fedoraproject.org/en-US/fedora-asahi-remix/trou...

[0](through any user-accessible software action, obviously)


M1 mac minis or macbooks?


MacBook Air, though the $379 price does seem to be a Black Friday deal: https://www.walmart.com/ip/Restored-Apple-MacBook-Air-13-Lap...


Just note that listing is for an item from a third-party seller. Walmart's website includes listings from their third-party marketplace unless you explicitly filter them out.


Accelerometers aren't new; they were a feature 20 years ago to park platter hard disks.


Not only is it difficult to make an informed choice, it also incurs a maintenance cost. Cost which is often not paid, resulting in configuration that becomes increasingly sub-optimal as time passes and the SSL/TLS library is updated.

I'm fairly certain that when that generator was made (or article written), OpenSSL and similar already had ciphersuite presets one could use. So it is a bit odd that the generator is not enhancing those.

As an example, in the case of OpenSSL you can combine presets such as "HIGH" with your additional preferences. Such as avoiding non-PFS key exchanges, DoS risks, SHA1 phase out or less frequently used block ciphers. Result being something like "HIGH:!kRSA:!kEDH:!SHA1:!CAMELLIA:!ARIA". Optionally one can also bump up global "SECLEVEL" in OpenSSL's configuration.

Such a combination helps avoid issues like accidentally crippling operations when an ECC key(/cert) is used and someone forgot to allow ECDHE+ECDSA in addition to ECDHE+RSA. Nor does it accidentally disable strong ciphersuites using ChaCha20 that aren't as old.

Same goes for key exchange configuration. Quite a few servers that don't run Windows don't have EdDSA available; I suspect it's because they were set at some point and forgotten. Now such configuration also disables post-quantum hybrid key exchange algorithms.
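
To sanity-check what a policy string like that actually expands to, you can ask OpenSSL directly (output varies per OpenSSL version and build; example.com is a placeholder host):

    # List the <=TLS1.2 ciphersuites selected by the policy string
    openssl ciphers -v 'HIGH:!kRSA:!kEDH:!SHA1:!CAMELLIA:!ARIA'
    # Or check what a live server negotiates with it
    openssl s_client -connect example.com:443 -cipher 'HIGH:!kRSA:!kEDH:!SHA1:!CAMELLIA:!ARIA' </dev/null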

