Paradox-of-choice issues are overblown. Every Linux distro is a repackaging of the same core components and software. The PC platform is standardized for the most part; there isn't much commodity hardware that lacks support, and the popular hardware that does need particular support (Nvidia GPUs and their drivers) is catered to by any popular distro out there.
Users are mostly afraid of wasting time trying Linux (any Linux) and having to go back to Windows for reason X, Y, or Z that they didn't even know about. For my partner who doesn't game, reason Z is one particular feature of Microsoft Word (the shrinkwrap application, not 365 Copilot App or whatever) that isn't emulated by LibreOffice or Google Docs. For competitive PC gamers, it's kernel anti-cheat. The Linux desktop story in general has been to slowly whittle down these reasons until there really is no good excuse for users not to switch and for vendors not to support the OS, even through compatibility layers.
On the face of it, this is not true; net interest margin is still the main profit driver, followed by fees. Beyond that, retail deposit customers are a conversion funnel for more lucrative financial products such as credit cards and personal loans.
And besides that, banks need capital reserves in the form of customer deposits; if too much money flows out then they will have to either acquire customers or pause their real moneymaking activity (loans).
Your account doesn't make them significant money. Retail banking in general makes boatloads of money, and deposits are central to this now that we're out of zero-interest-rate-land.
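To put rough numbers on the spread (every figure below is invented for illustration):

    # Back-of-the-envelope net interest margin, with made-up numbers.
    deposits = 1_000_000_000   # retail deposits funding the loan book
    deposit_rate = 0.005       # 0.5% paid out to savers
    loan_rate = 0.065          # 6.5% earned lending those deposits out
    spread = deposits * (loan_rate - deposit_rate)
    print(f"${spread:,.0f} of interest spread")  # $60,000,000 per year

Multiply that across a national deposit base and "boatloads" is about right, even though no single account moves the needle.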
> And besides that, banks need capital reserves in the form of customer deposits
The USA's fractional reserve requirement is now 0%. The UK has gotten rid of its reserve requirement as well. In the UK, the limit on what a bank can loan out is determined more by the market cap of the bank (committed shareholder value). Cash is only strictly needed to cover ... customer deposits.
So in the UK, if a bank gets rid of customer deposits entirely, then it kind of doesn't need any cash anymore. It can just lend money out of thin air based on its total net worth (market cap).
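A toy comparison of the two constraints (all figures invented; strictly speaking regulators peg lending to the bank's regulatory capital rather than literal market cap, but the arithmetic has the same shape):

    # Reserve-requirement cap vs. capital-requirement cap on lending.
    # All figures invented for illustration.
    deposits = 500_000_000
    reserve_requirement = 0.0   # the US requirement since 2020
    capital = 80_000_000        # committed shareholder value
    capital_ratio = 0.08        # a Basel-style 8% minimum, for illustration

    loans_capped_by_reserves = deposits * (1 - reserve_requirement)
    loans_capped_by_capital = capital / capital_ratio
    print(f"reserve model allows: {loans_capped_by_reserves:,.0f}")  # 500,000,000
    print(f"capital model allows: {loans_capped_by_capital:,.0f}")   # 1,000,000,000

With the reserve requirement at 0%, deposits stop being the binding constraint; capital is.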
> Retail banking in general makes boatloads of money, and deposits are central to this now that we're out of zero-interest-rate-land.
Talking about banking in general is generally a huge mistake. While deposits may be central, retail deposits are irrelevant for the banks that do > 70% of banking.
The bulk of deposits at those banks don't come from individual retail customers, and thus the individual has no sticking power.
Citibank's interest rate on savings accounts is less than 1%, more like 0.1%, and there are account fees too. They're telling you loud and clear, "We don't want your money." You can't argue with that empirical evidence, straight from the horse's mouth.
Banks get their reserves by borrowing from the Fed or from other banks that may have deposits. The idea that banks get reserves from deposits is severely outdated. Deposits are free reserves to the bank, but the overhead of maintaining customer accounts makes them an unattractive option.
Banks do this because they have made their own requirement that the mobile device is a trust root that can authenticate the user. There are better, limited-purpose devices that can do this, but they are not popular/ubiquitous like smartphones, so here we are.
The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader, and finally through the system image. Crucially, the only part banks should care about is the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment, not just the secure hardware bit that already lives in your phone and can't be phished.
It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.
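A minimal sketch of what the bank's side could look like, assuming the private key is generated and kept inside the TPM/TEE and only the public half is enrolled. Here the key is created in software purely to keep the example self-contained, using Python's cryptography package:

    # Challenge-response with a device-bound key: the bank stores only the
    # public key from enrollment, then checks a signature over a fresh challenge.
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    device_key = ec.generate_private_key(ec.SECP256R1())  # would live in the TPM
    enrolled_pubkey = device_key.public_key()             # all the bank keeps

    challenge = os.urandom(32)  # fresh random challenge per login attempt
    signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        enrolled_pubkey.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("login approved: key possession proven, no OS attestation involved")
    except InvalidSignature:
        print("login denied")

Nothing in that flow needs Google to vouch for the bootloader or system image; it only needs the private key to be non-extractable.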
And this is because pilots are trained to keep their nose gear on the centerline, and there are relatively few aircraft types in use which receive the "heavy" after their flight number over ATC, so gear geometry is similar across them. The wheels are going to roll over the exact same tracks repeatedly.
> pilots are trained to keep their nose gear on the centerline
Funnily enough, I was learning to fly at a grass strip, and we were told to vary our positioning left and right on the runway for exactly this reason. In practice it meant that as we were taxiing to the runway, my instructor would tell me, "Today we are taking off left/right of center to avoid damaging the grass too much."
I remember when we organized our lives around television. On Saturday mornings it would be cartoons (including the first full-CGI television shows, ReBoot and Transformers: Beast Wars), Wednesday evenings would be Star Trek: TNG, Fridays would be the TGIF block of family shows (from an early-to-mid-90s USA perspective here). It felt like everyone watched the same thing, everyone had something to talk about from last night's episode, and there was a common connection over what we watched as entertainment.
We saw a resurgence of this connection with big-budget serials like Game of Thrones, but now every streaming service has their own must-watch thing and it's basically as if everyone had their own personal broadcast station showing something different. I don't know if old-school television was healthy for society or not, but I do have a feeling of missing out on that shared connection lately.
> but I do have a feeling of missing out on that shared connection lately
Mass media isolates individuals who don't have access to it. I grew up without a TV, and when TV was all my neighbors could talk about, I was left out, and everyone knew it.
While other children were in front of the television gaining "shared experience", I built forts in the woods with my siblings, explored the creek in homemade boats, learned to solder, read old books, wrote basic computer programs, launched model rockets, made up magic tricks. I had a great childhood, but I had a difficult time connecting with children whose only experiences were these shallow, shared experiences.
Now that media is no longer "shared", the fragmented content that people still consume has diminishing social value -- which in many cases was the only value it had. Which means there are fewer social consequences for people like me who choose not to partake.
Mass media even more so isolates individuals who DO have access to it.
Their "shared experience" is, actually, a debilitating addiction to flat, untouchable, and anti-democratic spectacle.
The last hundred years have seen our society drained of social capital, inescapably enthralled by corporate mediators. Mass media encourages a shift from "doing" to "watching." As we consume hand-tailored entertainment in private, we retreat from the public square.
Heavy television consumption is associated with lethargy and passivity, reinforcing an intolerance for unstructured time. This creates a "pseudoworld" where viewers feel a false sense of companionship—a parasocial connection with television personalities—that creates a feeling of intimacy while requiring (and offering) no actual reciprocity or effort.
Television, the "800-pound gorilla of leisure time," has privatized our existence. This privatization of leisure acts as a lethal competitor for scarce time, stealing hours that were once devoted to social interaction—the picnics, club meetings, and informal visiting that constitute the mētis or practical social knowledge of community life.
It feels like you're advocating that "unless everybody can form a shared connection through common culture, nobody should form a shared connection through common culture".
This is something I've been lamenting for a long time: the lack of shared culture. Sometimes a mega-hit briefly brings us together, but for the most part everyone has their own thing.
I miss the days when everyone had seen the same thing I had.
I found this the other day: https://www.youtube.com/watch?v=ksFhXFuRblg "NBC Nightly News, June 24, 1975" I strongly urge people to watch this, it's 30 minutes but there are many very illuminating insights within. One word for you: Exxon.
While I was young in 1975, I did watch ABC's version of the news with my grandparents, and continued up through high school. Then in the late 1980s I got on the Internet, and, well, you know the rest.
"Back Then", a high percentage of everybody I or my grandparents or my friends came into contact with watched one of ABC, NBC, or CBS news most nights. These three networks were a bit different, but they generally they all told the same basic stories as each other.
This was effectively our shared reality. Later in high school, as I became more politically focused, I could still talk to anybody, even people with completely opposite political views from mine. That's because we had a shared view of reality.
Today, tens of millions of people see the exact same footage of an officer-involved shooting, from many angles, and draw entirely different "factual" conclusions.
So yes, 50 years ago, we in the United States generally had a shared view of reality. That was good in a lot of ways, but it also essentially allowed a small set of people in power to convince a non-trivial percentage of the US population that Exxon was a friendly, family-oriented company that was really on your side.
Worth the trade-off? Hard to say, but at least "back then" it was possible, and even common, to have grounded political discussions with people "on the other side", and that's pretty valuable.
> "back then" it was possible, and even common, to have grounded political discussions with people "on the other side"
As long as that common ground fell within acceptable parameters; you couldn't talk too much about anything remotely socialist or anti-war.
"The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum."
I don't know if it's good or bad, but outside of some megahit films, people mostly don't regularly watch the same TV series. I don't even have live TV myself.
I hate whole seasons being dumped all at once for a big binge. It always feels like I'm plugging into the content trough and gorging myself to pass the hours.
You can't talk about a show with somebody until they're also done binging, so there's no fun discussion/speculation (the conversation is either "Did you watch that? Yeah. <conversation over>" or "You should watch this. <conversation over>").
Not trying to sidetrack, but a figure like that is data, not evidence. At the very minimum you need context which allows for interpretation; 9,078 positive author comments would be less impressive if Greptile made 1,000,000 comments in that time period, for example.
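To make the base-rate point concrete (both denominators below are hypothetical):

    # Same numerator, very different stories depending on the denominator.
    positive_comments = 9_078
    for total_comments in (1_000_000, 20_000):  # invented totals
        print(f"{positive_comments}/{total_comments} = {positive_comments / total_comments:.1%}")
    # 0.9% vs. 45.4%: the raw count doesn't interpret itself.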
Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
As an ordinary human with no investment in the game, I would not expect LLMs to magically work around the well-known physical phenomena that make submarines hard to track. I think there could be some ability to augment cybersecurity skill just through improved pattern-matching and search, hence real teams using it at Google and the like, but I don't think this translates well to attacks on real-world targets such as satellites or launch facilities. Maybe if someone hooked up Claude to a Ralph Wiggum loop and dumped cash into a prompt to try and "fire ze missiles", and it actually worked or got farther than the existing state-sponsored black-hat groups at doing the same thing to existing infrastructure, then I could be convinced otherwise.
> Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
Yeah, I've been feeling that as well. It's not a bad strategy at all, makes sense, good for business.
But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
If this is a way to scare up dollars (like any tyre commercial, selling fear), then he's run out of ceiling. And that's a sign that capability growth really is sigmoiding internally.
> But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
This is not new. Anthropic has raised these concerns in their system cards for previous versions of Opus/Sonnet. Maybe in slightly drier terms, and buried in a 100+ page PDF, but they have raised the risk of either:
a) a small group of bad actors with access to frontier models and the technical know-how (both how to bypass LLM/AI restrictions and how to make and source weapons) turning that into dirty bombs or small nuclear devices, and knowing where to deploy them.
b) the bigger, more sci-fi threat of a fleet of agents going rogue, maybe on the orders of a nation state, to do the same.
I think option a is much more frightening and likely. Option b makes for better sci-fi thrillers, and still could happen in 5-30ish(??) years.
I agree that it is not a good sign, but I think what is a worse sign is that CEOs and American leaders are not recognizing the biggest deterrent to nuclear engagement and war in general, which is globalism and economic interdependence. And hoarding AI like a weapons stockpile is not going to help.
The reality is, LLMs to date have not significantly impacted the economy nor been the driver of extensive job destruction. They don't want to believe that, and they don't want you to believe it either. So they'll keep saying "it's coming, it's coming" through fear mongering.
This fact pattern (reimplementing API functions for emulation or interoperability) tracks even more closely with the Connectix case than with Oracle. In Google v. Oracle, Google reimplemented a huge swath of the Java API surface so developers could reuse libraries, but actual applications still needed porting, so the interoperability rationale offered less protection from a fair use perspective; and even then, copying the APIs was still ruled to be fair use.
I just don't see how Microsoft could contort the facts to achieve a meaningfully different outcome. It doesn't matter if APIs are copyrightable if copying them is fair use for just about any purpose.
Ya, I don't know. The structure screams AI and it's a generic take, but I see no reason to assume anything beyond AI assistance, especially in regard to the styling.
Personally, I'd avoid having a second-to-last section anywhere near resembling "reality check" lead into a clearly marked "final thoughts"... Even if I followed that sort of structure, I would avoid "# Reality Check"-style formatting simply because of the implication it gives off.
Pretty much. I'm hesitant to dismiss something as AI just because of bullet points.
It's not any one thing, it's all of it combined. Very middling language, bold keywords, simple tricolon throughout, unnecessary em-dashes... If it's not AI, it's still mediocre writing with no substance behind it.