Hacker News: bhouston's comments

If you showed what our computers can do with the latest LLMs to someone 5 years ago, they would probably say it sure looks a lot like AGI.

We have to keep redefining AGI upwards, or nitpicking it, to show that we haven't achieved it.

I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.

We don't have clear ASI yet, but we definitely are in an AGI era.

I think we are missing ego/motivations in the AGI, and self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.


Ok, but it's not AGI. People five years ago would have been wrong. People who don't have all the information are often wrong about things.

ETA:

You updated your comment, which is fine but I wanted to reply to your points.

> I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.

I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful but they are glorified text predictors. Yes, they have more individual facts memorized than the average person, but that's not the same thing; Wikipedia, even before LLMs, also had many more facts than the average person, but you wouldn't say that Wikipedia is "smarter" than a human, because that doesn't make sense.

Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.

> We don't have clear ASI yet, but we definitely are in an AGI era.

Nah, not really.


> They're useful but they are glorified text predictors.

There is a long history of people arguing that intelligence is actually the ability to predict accurately.

https://www.explainablestartup.com/2017/06/why-prediction-is...
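For what it's worth, the "predictor" framing can be made concrete with a toy next-token model in Python (a bigram counter; purely illustrative, and obviously nothing like a real LLM's architecture, though the training objective has the same shape):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which word tends to follow it."""
    counts = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequent next word, or None if unseen."""
    if not counts[word]:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" (seen twice, vs "mat" once)
```

The debate is whether scaling this idea up, with transformers and trillions of tokens, crosses into "intelligence" or remains mere prediction.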

> Intelligence isn't just about memorizing facts, it's about reasoning.

Initially, LLMs were basically intuitive predictors, but with chain of thought and, more recently, agentic experimentation, we do have reasoning in our LLMs that is quite human-like.

That said, there is definitely a bias towards training-set material, but that is also the case with the large majority of humans.

For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance.

I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.


> There is a long history of people arguing that intelligence is actually the ability to predict accurately.

That page describes a few recent CS people in AI arguing that intelligence is being able to predict accurately, which is like carpenters declaring that all problems can be solved with a hammer.

AI "reasoning" is human-like in the sense that it is similar to how humans communicate reasoning, but that's not how humans mentally reason.


Like my father before me, I seem to have absorbed an ability to predict what comes next in movies and books. It's sometimes a fun parlor trick to annoy people who actually get genuine surprise out of these nearly deterministic plot twists. But, a bit like with LLMs, it is a superficial ability to follow the limited context that the writers' group is seemingly forced by contract to maintain.

Like my father before me, I've also gotten old enough to realize that some subset of people out there also behave like they are scripted by the same writers' group and production rules. I fear for the future where LLMs are on an equal footing because we choose to mimic them.


> There is a long history of people arguing that intelligence is actually the ability to predict accurately.

There sure is, and in psychological circles it appears there's an argument that that is not the case.

https://gwern.net/doc/psychology/linguistics/2024-fedorenko....

> Initially, LLMs were basically intuitive predictors, but with chain of thought and, more recently, agentic experimentation, we do have reasoning in our LLMs that is quite human-like.

If you handwave the details away, then sure, it's very human-like, though the reasoning models just kind of feed the dialog back to itself to get something more accurate. I use Claude Code like everyone else, and it will get stuck on the strangest details that humans actively wouldn't.

> For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance.

Tough to say since I haven't done it, though I suspect it wouldn't help much, since there's still basically no training data for advanced programs in these languages.

> I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.

Even if you're right about this being the AGI era, that doesn't mean that current models are AGI, at least not yet. It feels like you're actively trying to handwave away details.


> though the reasoning models just kind of feed the dialog back to itself to get something more accurate.

Much of our reasoning is based on stimulating our sensory organs, either via imagination (self-stimulation of our visual system) or via subvocalization (self-stimulation of our auditory system), etc.

> it will get stuck on the strangest details that humans actively wouldn't.

It isn't a human. It is AGI, not HGI.

> It feels like you're actively trying to handwave away details.

Maybe. I don't think so though.


What does AGI look like in your opinion?

Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.

Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.

Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.

It would've taken me probably weeks to figure that out without LLMs, instead of the 1 or 2 hours it took.

In that context, I have a hard time imagining what a "real" AGI system would look like, if not the current one.

Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.


> What does AGI look like in your opinion?

Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from GitHub would be cool.

Being able to learn new things would be another. LLMs don't learn; they're a pretrained model (it's in the name: GPT) into which you send inputs and get outputs. RAGs are cool, but they're not really "learning"; they're just eating a bit more context in order to give a facsimile of learning.
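The point about RAG can be made concrete with a toy sketch (the function names and the word-overlap retrieval heuristic here are illustrative, not any real library's API): the "learning" is just pasting retrieved text into the prompt, while the model's weights never change.

```python
def rag_answer(question, documents, llm):
    """Toy RAG: retrieve the document sharing the most words with
    the question, paste it into the prompt, call the frozen model."""
    q_words = set(question.lower().split())
    best = max(documents,
               key=lambda d: len(set(d.lower().split()) & q_words))
    prompt = f"Context:\n{best}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)  # `llm` is a stand-in for any completion function
```

Nothing persists between calls; the next question starts from the same frozen weights.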

Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.

I think that they're very neat, but ultimately pretty straightforward input-output functions.


Why should implementation matter at all? You should be able to classify a black box as AGI or not.

Well, I guess you lose artificial if there’s a human brain hidden in the box.


If we had AGI, we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples into text or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).

Why is it that LLMs could ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.
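The "verifiable reward" technique mentioned above can be sketched in a few lines: a candidate program is scored automatically by running it against known test cases, with no human labels involved. (An illustrative toy only; real pipelines sandbox the execution.)

```python
def verifiable_reward(candidate_src, tests):
    """Score a candidate `solve` function: 1.0 if it passes every
    test case, 0.0 otherwise (including crashes and syntax errors)."""
    ns = {}
    try:
        exec(candidate_src, ns)  # NOTE: no sandboxing in this toy
        return 1.0 if all(ns["solve"](*args) == expected
                          for args, expected in tests) else 0.0
    except Exception:
        return 0.0

good = "def solve(a, b):\n    return a + b"
bad = "def solve(a, b):\n    return a - b"
print(verifiable_reward(good, [((2, 3), 5)]))  # 1.0
print(verifiable_reward(bad, [((2, 3), 5)]))   # 0.0
```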


To add to this, previously one could argue that LLMs were on par with somewhat less intelligent humans and it was (at least I found) difficult to dispute. But now the frontier models can custom tailor explanations of technical subjects in the advanced undergraduate to graduate range. Simultaneously, I regularly catch them making what for a human of that level would be considered very odd errors in reasoning. When questioned about these inconsistencies they either display a hopeless lack of awareness or appear to attempt to deflect. They're also entirely incapable of learning from such an interaction. It feels like interacting with an empty vessel that presents an illusion of intelligence and produces genuinely useful output yet there's nothing behind the curtain so to speak.

> The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.

I’m really not sure how well a typical human would do writing Brainfuck. It’d take me a long time to write some pretty basic things in a bunch of those languages, and I’m a software engineer.


Yes, but you also wouldn't need a corpus of hundreds of thousands of projects to crib from. If it were truly able to "reason", then conceivably it could look at a language spec and learn how to express things in terms of Brainfuck.

They did for some problems. If you gave me five iterations at a problem like this in Brainfuck:

> "Read a string S and produce its run-length encoding: for each maximal block of identical characters, output the character followed immediately by the length of the block as a decimal integer. Concatenate all blocks and output the resulting string."

I'd do absolutely awfully at it.
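For reference, the task itself is only a few lines in a conventional language; the difficulty is entirely in expressing it in Brainfuck. A Python sketch:

```python
def run_length_encode(s: str) -> str:
    """For each maximal block of identical characters, output the
    character followed by the block length as a decimal integer."""
    if not s:
        return ""
    out = []
    prev, count = s[0], 1
    for ch in s[1:]:
        if ch == prev:
            count += 1
        else:
            out.append(f"{prev}{count}")
            prev, count = ch, 1
    out.append(f"{prev}{count}")  # flush the final block
    return "".join(out)

print(run_length_encode("aaabccdddd"))  # a3b1c2d4
```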

And to be clear, that's not "five runs from scratch repeatedly trying it"; it's five iterations, so at most five attempts at writing the solution and seeing the results.

I'd also note that when they can iterate with feedback from the output, they get it right much more often than in n zero-shot attempts. That doesn't seem to correlate well with a lack of reasoning to me.

Give them new frameworks or libraries and they can absolutely build things in them with some instructions or docs. So they're not just outputting previously seen things; the generalization is at the level of patterns, not verbatim words.

edit -

I play Clues by Sam, a logical-reasoning puzzle. The solutions are unlikely to be available online, and in this benchmark the cutoff date for training seems to be before the puzzle launched at all:

https://www.nicksypteras.com/blog/cbs-benchmark.html

Frankly just watching them debug something makes it hard for me to say there's no reasoning happening at all.


My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.

5 years ago we thought that language was the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.


> My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.

Wait, could you make your qualifiers specific here? Is your definition of AGI that it be able to perform/learn any intellectual task that is achievable by every human, or by any human?

Those are almost incomparably different standards. For the first, a nascent AGI would only need to perform a bit better than a "profound intellectual disability" level. For the second, AGI would need to be a real "Renaissance AGI," capable of advancing the frontiers of thought in every discipline, but at the same time every human would likely fail that bar.


Your true average human is someone like your barista at Starbucks. Try giving them a good math problem, or logic puzzle, or leetcode problem if you need some reminding of the standard reasoning capabilities of our species. LLMs cannot beat the best humans at practically anything, but average humans? Average humans are a much softer target than this thread seems to think.

Completely disagree. Inability to handle specific math or CS is a matter of training and experience, not reasoning and intelligence. The barista is quite capable of reasoning and learning feats the LLMs aren't close to.

Yeah, there appears to be this idea that "being smart" is the same thing as "knowing facts", which I don't think is realistic.

I know plenty of people who are considerably smarter than me, but don't know nearly as much as I do about computer science or obscure 90's video game trivia. Just because I know more facts than they do (at least in this very limited scope) doesn't mean that they're less capable of learning than I am.

As you said, a barista is very likely able to reason about and learn new things, which is not something an LLM can really do.


it's a matter of knowing the most practically important facts

I think it would be fairly easy to prove or disprove that 'AI as it is today knows more about any subject than 99% of HN'. But knowledge alone does not translate into intelligence and that's the problem: we don't have a really hard definition of what intelligence really is. There are many reasons for that (such as that it would require us to reconsider some of our past actions), but the fact remains.

So until we really once and for all nail down what intelligence is, you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.

The rate of change is a factor here. Arguably the current rate of change is very high compared with two decades ago, but compared with three years ago it feels as if we're already leveling off and we're more focused on tooling and infrastructure than on intelligence itself.

Intelligence may not actually have a proper definition at all, it seems to be an emergent phenomenon rather than something that you engineer for and there may well be many pathways to intelligence and many different kinds of intelligence.

What gets me about AI so far is that it can be amazing one minute and so incredibly stupid the next that it is cringeworthy. It gives me an idiot-savant kind of vibe rather than the feeling of an actual intelligent party. If it were really intelligent, I would expect it to learn as much or more from the interaction: to have a conversation with one party where it learns something useful, and then immediately apply that new bit of knowledge in all the other ones.

Humans don't need to be taught the same facts over and over again, though it may help with long term retention. We are able to reason about things based on very limited information and while we get stuff wrong - and frequently so - we usually also know quite precisely where the limits of our knowledge are, even if we don't always act like it.

To me it is one of those 'I'll know it when I see it' things, and without insulting anybody, including the baristas at Starbucks, I think it is perfectly possible to have a discussion about this and to accept that average humans all have different skills and specialties, and that some people work at Starbucks because they want to and others because they have to; it does not say anything per se about their intelligence or lack thereof. At the same time, you can be IQ 140 but still dumber than a Starbucks barista on what it takes to make someone feel comfortable and how to make coffee.


We seem to largely agree but I wanted to respond to this one bit:

> you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.

It's important to distinguish between "AI" and "AGI" here. I haven't seen many objections that the frontier models of the past year or so don't qualify as AI (whatever that might or might not mean) and the ones I have seen don't seem to hold much water.

However there's a constant stream of bogus claims presenting some new feat as "AGI" upon which each time we collectively stop and revise our working definition to close the latest loophole for something that is very obviously not AGI. Thus IMO legal loophole is a more fitting description than god of the gaps.

I do think we're nearing human level in general and have already exceeded it in specific tightly constrained domains but I don't think that was ever the common understanding of AGI. Go watch 80s movies and they've got humanoid robots walking around doing freeform housework while chatting with the homeowner. Meanwhile transferring dirty laundry from a hamper to the drum remains a cutting edge research problem for us, let alone wielding kitchen knives or handling things on the stovetop.


And yet if you asked that barista whether you should walk to the car wash or take your car there, they would never respond with "you should take a walk, it's healthier than driving" like almost every LLM did in a test I saw.

That is as basic as everyday reasoning gets and any human in modern society solves hundreds of problems like that every day without even thinking about it, but with LLMs it's a diceroll. Testing them with leetcode problems or logic puzzles is not going to prove much unless you first made sure none of those were in the training data to prevent pure memorization.


> If you showed what our computers can do with the latest LLMs to someone 5 years ago, they would probably say it sure looks a lot like AGI.

Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.

Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.


> Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.

To be fair, I am pretty sure Claude Code will download and run Stockfish if you task it to play chess with you. It's not like a human who read 100 books about chess but never played would be able to play well with their eyes closed and someone whispering the board position into their ear.


There are a lot of problems with this analogy, but even if you were to take a photo of the board after every move and send it to the model, it would still be unable to play competently.

It doesn't look anything like AGI and no one who knows what that means would be confused in any era.

Is it useful? Yes. Is it as smart as a person? Not even remotely. It can't even remember things it was already told 5 minutes ago. Sometimes even when they are still in the context window, uncompacted!


It doesn’t need to be human level, and if I walk into a room and forget why I went in, am I no longer a general intelligence?

If it doesn't need to be human level then what are we even talking about? AGI means human level. Everything else is AI

No, the big thing with AGI was that it was general. The AI things we made were extremely narrow: identifying things out of a set of classes, or route planning, or something similarly specific. We couldn't just hand the systems a new kind of task, often not even an extremely similar one. We've been making superhuman-level narrow AI things for many years, but for a long time even extremely basic and restricted worlds were still beyond what more general systems could do.

If LLMs are your first foray into what AI means and you were used to the term ML for everything else I could see how you'd think that, but AI for decades has referred to even very simple systems.


If AGI doesn't mean human level, then what does? As you say, every application of A* is in some way "AI", so we had this idea of "AGI" for something "actually intelligent". But maybe I'm wrong and AGI never meant that. What term does mean that?

> If you showed what our computers can do with the latest LLMs to someone 5 years ago, they would probably say it sure looks a lot like AGI.

But this is a CPU! It's not a GPU / TPU. Even if you think we've achieved AGI, this is not where the matrix multiplication magic happens. It's pure marketing hype.


I did AI back before it was cool, and I think we have AGI. IMO the whole distinction was from extremely narrow AI to general intelligence. A classifier for engine failure can only do that; a route planner can only do that…

Now we have things I can ask a pretty arbitrary question and they can answer it. Translate, understand nuance (the multitude of ways of parsing sentences; getting sarcasm was an unsolved problem), write code, go and read and find answers elsewhere, use tools… these aren’t one-trick ponies.

There are finer points to this where the level of autonomy or learning over time may be important parts to you but to me it was the generality that was the important part. And I think we’re clearly there.

AGI doesn’t have to be human level, and it doesn’t have to be equal to experts in every field all at once.


An interesting perspective: general, absolutely, just nowhere near superhuman in all kinds of tasks. Not even close to human in many. But intelligent? No doubt, far beyond any realistic expectation.

But that seems almost like an unavoidable trade-off. Fiction about the old "AI means logic!" type of AI is full of thought experiments where the logic imposes a limitation and those fictional challenges appear to be just what the AI we have excels at.


A human can think logically and reason; that's not to say they are smart or smarter. But LLMs cannot. You can convince an LLM that anything is correct and it will believe you. You can't convince a human of just anything.

I can't argue that LLMs don't know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about reasoning over that data with logic, including the fact that it may or may not be true.

I can run an LLM through absolutely incorrect data and tell it that data is 100% true, then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.


That just implies LLMs are suggestible. The same is true of children. As we get older and build a more complete world model in our heads, it's harder to get us to believe things which go against that model.

Tell a 5-yr old about Santa, and they will believe it sincerely. Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.

That's not because the 5-year old is dumber, but just because their life-experience ("training data") is much more limited.

Even so, trying to convince a modern LLM of something ridiculous is getting harder. I invite you to try telling ChatGPT or Gemini that the president died a week ago and was replaced by a body-double facsimile until January 2027, so that Vance can have a full term. I suspect you'll have significant difficulty.


> Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.

There's a plethora of people who convert to religion at an older age, and that seems far more far-fetched than Santa.


> There's a plethora of people who convert to religion at an older age, and that seems far more far-fetched than Santa.

Being in a religion doesn’t imply belief in deities; it only implies people want social connection. This is clearly visible in global religion statistics; there are countries where the majority of people identify as belonging to a religion, and at the same time only a small minority state they believe in a “God”. Norway is a decent example that I bumped into just yesterday. https://en.wikipedia.org/wiki/Religion_in_Norway


Sure.

But I bet you'd have a significantly easier time converting a child rather than a 30/40/50-yr old to a religion.

My point is that LLMs are suggestible, perhaps more so than the average adult, but less so than a child, I suspect. I don't think suggestibility really solves the problem of whether something is AGI or not. To me, on the contrary, it seems like to be intelligent and adaptable you need to be able to modify your world model. How easily you are fooled is a function of how mature / data-rich your existing world model is.


> LLMs are actually smarter than the majority of humans right now

I consider myself a bit of a misanthrope but this makes me an optimist by comparison.

Even stupid people are waaaaaay smarter than any LLM.

The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like ELIZA, only prettier. More useful, for sure. Still just a computer.


I really feel like we have not encountered the same stupid people. Most stupid people I know respond to every question with some form of will-not-attempt. What's 74 times 2? Use a calculator! Should I drive or walk to the car wash? Not my problem! How many R's in strawberry? Who cares! They'll lose to the LLM 100%.

The cheapest Aliexpress calculator can multiply much bigger numbers than I can in my head, and it can do it instantly. Does that mean that the calculator is “smarter” than me?

That's actually proving that they indeed are smarter than LLMs: by choosing not to waste time, water and energy on useless benchmarks.

> Still just a computer.

I don't believe in a separation of mind and spirit. So I do think that, fundamentally, barring a reliance on quantum effects in cognition (some have theorized this, but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.


I don't believe in "spirits" from the get go. I think it's certainly theoretically possible that we could mimic human thought with a computer (quantum or otherwise) but I do not think that the LLMs we have now are doing that. I'd say that what we have right now is "just a computer".

This doesn't mean they aren't useful, I like Claude a lot, but I don't buy that it's AGI.


The problem with definitions is that they are all wrong when you try to apply them outside mathematical models. Descriptive terms are more useful than normative ones when you are dealing with the real world. Their meaning naturally evolves when people understand the topic better.

General intelligence, as a description, covers many aspects of intelligence. I would say that the current AIs are almost but not quite generally intelligent. They still have severe deficiencies in learning and long-term memory. As a consequence, they tend to get worse rather than better with experience. To work around those deficiencies, people routinely discard the context and start over with a fresh instance.


No they aren't

ChatGPT Health failed hilariously badly at just spotting emergencies.

A few weeks ago, most of them failed hilariously badly at the question of whether you should drive or walk to the service station if you want to wash your car.


Idk about the health story, but in my use, ChatGPT has dramatically improved my understanding of my health issues and given sound and careful advice.

The second question sounds like a useless and artificial metric to judge on. The average person might miss such a “gotcha” logical quiz too, for the same reason - because they expect to be asked “is it walking distance.”

No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.

I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.


I have been checking organic and inorganic chemistry skills in ChatGPT Pro and it is absolutely, laughably bad. It sounds good and plausible, but it is comically wrong in so many ways.

Maybe you should think twice about whether the health issues advice it is giving you is legitimate.


It gave dangerous shitty advice to patients in critical conditions

https://www.bmj.com/content/392/bmj.s438


I would accuse you of nitpicking. My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection, to me, doesn't mean it isn't AGI.

>> My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection, to me, doesn't mean it isn't AGI.

In my experience, they contain more information than any human, but they are actually quite stupid. Reasoning is not something they do well at all. But even if I skip that, they cannot learn. Inference is separate from training, so they cannot learn new things other than trying to work with words in a context window, and even then they will only be able to mimic rather than extrapolate anything new.

It's not the lack of perfect, it's the lack of reasoning and learning.


I 100% agree that learning is missing. We make up for it in SKILLS.md and README.md files and RAGs of various types. And we train the LLMs to deal with these structures.

I've seen a lot of reasoning in the latest models while engaging in agentic coding. It is often decent at debugging and experimentation, but around 30% of the time it goes down wrong paths and just adds unnecessary complexity via misdiagnoses.


"look, it completely lied about params that don't exist in a CLI!"

AGI doesn't mean perfect. It means human-like, and the latest models are pretty human-like in terms of their fallibility and capabilities.

> I would argue that LLMs are actually smarter than the majority of humans right now

This (surprisingly common) view belies a wild misunderstanding of how LLMs work.


Nice! Is this what Composer 1.5 was based on?

This is a pretty significant hit to natural gas supply and will cut Qatar's income while boosting other producers like the USA, Russia, Norway, Australia and Canada:

https://en.wikipedia.org/wiki/List_of_countries_by_natural_g...

And remember that Iran's natural gas infrastructure was also hit yesterday. Thus there are two separate new supply constraints.


I think this is fundamentally misdiagnosing why Macs haven’t dominated. It is actually not about the monitor support but about:

- government and corporate bulk contracts (and this is usually a result of software only working on Windows)

- price (off-putting for most home users, and for corporate bulk buyers who cannot tell the difference)

- lack of high-end game support

That is why it doesn’t have more market share.

You are thinking too much about minor technical issues.


Is Meta Horizon Worlds working well on PC? I guess it is their attempt to compete with Roblox.

> Interesting, cutting way back in the product they renamed the whole company for.

It was clearly the wrong bet. He pumped something like $100B into the endeavour (Meta Quest / VR / Horizons) and it is just slowly dying as we speak. He has to give up on it, although I am sure it will be called a "pivot" into AR glasses.


> He pumped something like $100B into the endeavour (Meta Quest / VR / Horizons) and it is just slowly dying as we speak.

Literally never met anyone who used or liked the Horizon thing; VRChat in comparison is more popular and doesn't feel like a soulless corporate husk. They also have quite the variety of worlds, from party games to someone building a whole jet/chopper flight-combat simcade world; of course all of them are a bit jank, but there's lots of cool stuff and very expressive avatars.

Meta Quest, on the other hand, seems like a really good piece of tech - I still have my Quest 2 (because I'm broke as hell), but I enjoyed even that one, albeit maybe with a slightly more comfy head strap than the default one and the Virtual Desktop app cause their Link app doesn't support Intel Arc GPUs. The tracking is good, the experience of all sorts of stuff in VR is nice, games like H3VR or VTOL VR are great, as is Into The Radius VR! At the same time, I can see why it never saw super widespread adoption - tricky to develop for and also a somewhat limited audience.

Also the productivity situation just isn't there, closest I got to a good productivity setup (out of curiosity) was the Immersed app before they messed it all up by removing support for physical monitors - I could have my 4 physical monitors in VR surrounded by whatever I want and some virtual monitors and just lock in, it was kind of zen despite the technical limitations. It seems like people got promising tech in place... and then never really wrote good software to take advantage of it. Even Virtual Desktop has artificially enforced monitor limits in VR.

I hope VR tech continues to progress (especially lightweight headsets) no matter what happens to Meta.


Yeah, it was a bizarre decision. There isn't a clear ROI on games, and that's what Horizon Worlds has been the whole time. There's no equation that says a $100M game automatically makes 100x more than a $1M game on average. If anything the relationship is sub-linear. $100B just doesn't seem like the right size for a game investment.

It's supposed to be a Roblox competitor, which does print money, though probably not to the extent of how much they invested.

The problems are 2 fold:

People/kids don't want to put on a VR headset to play Roblox. I guess they're conceding this point by pivoting to mobile.

Meta is the opposite of cool. Real name requirements, only humanoid avatars, super corpo branding, etc. really seriously hold them back from competing with VRChat or Roblox. This one is terminal; it'll never be fixable as long as Meta is at the helm.


Even Roblox doesn’t print money if you look into that business. They print engagement but are still fighting tooth and nail to make a dime on it.

I can see Meta wanting the engagement though.


If they wanted a Roblox competitor, they could have bought Roblox for much less than the billions they spent.

Even now it's still less.


100B wasn't spent on a game. The RL org is much larger than Horizon Worlds, or even VR

It's not slowly dying, it was dead on arrival and never had any real traction

There are some really good AR glasses for a couple of hundred dollars. I think they are going to end up really cheap, not the $100 billion investment that Facebook needs.

Tbf I don't think they ever intended to make back their investments via the goggles. As near as I can tell the thought process was basically: "Real estate + fashion + live entertainment + art + etc is X quadrillion dollars. We could make The Virtual World and capture all that value. It would be irrational not to invest $100B!" Basically Pascal's Investment.

Any you'd recommend or can point me to good reviews for?

Okay, how long until Meta Quest is discontinued/sunset?

I believe there's no expectation of a Meta Quest 4, right?


In all seriousness, given component price increases etc, the Quest 3 remains an incredible deal for PC VR use. Aside from the foveated rendering, the lens/display specifications are very close to Valve's still to ship Steam Frame, which at this stage will almost certainly cost more than the Quest 3 does.

A 25 PPD VR headset for $499 with inside-out tracking plus controllers etc. is amazing value. I've never once used any of the Meta applications, I only use it for VR games on Steam.

I think there is a case to be made one should buy one while you still can, if you want a great value PC VR headset. It's still an excellent choice for stuff like sim racing as well.

I also think the Quest line of hardware is done for. They are clearly much more interested in the glasses lineup, products like the Ray Bans etc, none of which appear to use any of the Quest software stack.


If only it were made by almost[0] literally anyone else, I'd allow one into my house.

[0]"Almost" has plenty of room for any horrid exception someone might want to gotcha me with, so please don't.


Yeah, the Frame is gonna cost at least double now with component prices.

I was hoping to buy one (I've got 4 quests of different types) but nope not if it's > 1000€.


Meta has very recently had leaks of an upcoming lightweight headset. So maybe not a Quest 4 as a direct successor to the Quest 3, but a new headset is in the works.

Rumor has it that the focus of this new headset is AR, not VR.

So once again they're making a stupid business decision based on wishful thinking.

Exec 1: "Surely, people will want to wear this headset all day while they work! Because the only reason why anyone would NOT want to do that is the weight of the thing!"

Exec 2: "Exactly! Gaming makes us a lot of money—and it's the only reason anyone ever bought our VR headsets—but imagine how much more money we could be making from business customers/apps that currently have no need for such devices. If we build it, they will come though! Can there be any doubt?"

Exec 3: "Not to mention that the data we collect from gamers has almost no value! We need to be collecting intimate details about everyone's lives, not their best Beat Saber scores!"

Exec 4: "You know what? Let's get rid of the controllers entirely. Sure, they're absolutely 100% necessary for decent gaming but I seriously doubt the business applications of AR that we're pretending is a $100 billion market won't need it."

Exec 5: "I'm concerned that end users will be able to do what they want with OUR devices that we're so graciously selling them the privilege to use. We need to ensure they're NOT at all like generic PCs that allow anyone and everyone to run whatever software they want and attach 3rd party hardware. It's not like such capabilities of general purpose hardware were what set off the PC revolution or anything!"


I know a few folks with the raybans and they really like it. I do not understand why you would want an untrustworthy brand in your life in that fashion, but if they go to the in between I can see it taking off.

There are a lot of flavors of "pro-natalist". For example, Elon Musk is a "pro-natalist" but he seems to clearly favor white Christian people, and himself especially. Others are pro-natalist but have a general eugenics bent, rather than just white/Christian supremacy. And then others are pro-natalist in a more general sense, in that our culture in general should encourage at least rough replacement levels of fertility so that we avoid a population collapse.

Christian? Musk says he is not religious. He has said he is a "cultural christian" - a description also used of themselves by a lot of people ranging from Richard Dawkins to Anders Breivik.

https://www.csmonitor.com/USA/Society/2024/1218/elon-musk-cu...


That would still match "[favouring] white Christian people". Or at least that part matches the "Christian" part, the other stuff Musk associates with seems to suggest at least some racial (and not simply cultural) biases in his thinking, e.g. how he regards DEI as being a promotion of undeserving people rather than a way to give equal opportunities to deserving people who are demonstrably under-represented given their qualifications.

On that basis Richard Dawkins matches the Christian part.

It is entirely unimportant how Richard Dawkins is categorised, isn't it? Last I checked, the "pro-natalist" part isn't there for Dawkins, so how other things modify a pro-natalist stance doesn't connect to anything.

I am suggesting that a definition of "Christian" that includes Richard Dawkins is flawed.

You're the one who chose to combine Musk and Dawkins in the same group here with "cultural christian", that's absolutely a straw man if this is what you're doing.

I mean, your own link up there has a sub-heading of "Everyone has their own definition".

Especially when you're replying to "he seems to clear favor white Christian people and himself especially" rather than "is a Christian". Queen Victoria wasn't a feminist, neither.


You are saying they are Christian in the same sense of being "cultural Christians" rather than actual Christians. If you say one is a Christian, it follows that the other is a Christian.

The point is that given Musk is clearly not an actual Christian he cannot favour Christians "himself included".


I don't think those groups are as distinct as you're implying, certainly not in the US.

I think there is considerable overlap, in the form of people who believe in the "Great Replacement" conspiracy theory. Essentially "we need to make sure there are enough white babies so that white people can outbreed <insert preferred minority scapegoat>." That thought is inherently eugenicist because it implicitly holds that white people are "better" in some way. "Christian" is also often implicit in "white babies," especially in contrast to Muslim or Jewish people being a common choices of scapegoat.


I guess I would like a distinction because I personally would like to avoid population collapse, thus I am pro-natalist in wanting a replacement level fertility, and I would prefer if that fertility was well distributed rather than highly concentrated in the most conservative religious folk. I do fear what will happen if we continue to shrink, it has to stop somewhere.

That's why I specified the US, where the population is still growing, and the remnants/echoes of the baby boom aren't as stark. I don't think it looks like we're headed for population collapse, and if we are, it's far enough in the future to course correct pretty gently.

I have less insight into the culture of natalists in countries like Japan or South Korea where their population pyramids are heavily inverted. I don't know what they're doing to address their age demographic issues, nor do I have any ideas for what they should do.


The Christian "Quiverfull"[1] movement embodies that "Out-Breeding The Others" idea.

1: https://en.wikipedia.org/wiki/Quiverfull


Some seem to use it as cover for being predators. Musk exposed his genitals to an employee without their consent, in a confined space without ability to escape, for instance.

> is it not possible to say that both hamas and the IDF do terrible things?

I agree. Hamas and the IDF do terrible things - the ICC issued warrants for the leaders of both. This is why an external party has to impose a solution, and it should involve, in my opinion, separation (two states). Both parties are radicalized, at least for now, and need to be separated and allowed to manage their own affairs while allowing the other to exist.


[flagged]


100% not true. Abbas wants a Palestinian state beside Israel. This is what he has supported for a long time. And why many states in the last two years have recognized a Palestinian state: https://www.cbc.ca/news/world/mahmoud-abbas-palestinian-pres...

While there were rejectionists in the past, Netanyahu has been the rejectionist for the last two decades. He says so himself:

https://www.timesofisrael.com/netanyahu-boasts-of-thwarting-...

He and his cronies boosted Hamas in part to split the Palestinians so as to avoid having to negotiate a two-state solution:

https://www.timesofisrael.com/for-years-netanyahu-propped-up...


It's pointless to engage with the argument that one party didn't think it was offered enough, so it's right for the other party to offer even less. It never made any sense and it's just one of the myriad rhetorical tricks to twist, muddle and subvert the discussion.

Here is Bill Clinton on what the Palestinians were offered: https://x.com/itscarterhughes/status/2033758202685268373?s=4...

Then you didn't understand. Palestinians can be forced to accept a state with the 1967 borders and East Jerusalem as its capital, as is proven by the fact that they are forced to accept occupation and apartheid every single day.

But Clinton is of course lying as well as disparaging a whole people with racist remarks.


> Palestinians can be forced to accept a state with the 1967 borders

I don’t know what you mean. Palestinians should agree to accept a deal.

> forced to accept occupation and apartheid every single day.

No. Arabs in Israel have more rights than in any Arab country. Apartheid is illegal under Israeli law. You clearly have very little knowledge of Israel.

Arabs in Hamas and Hezbollah areas have bad lives because of their governments.

> Clinton’s racist remarks

Palestinian is a political identity created in the sixties. Racially these people identify as Arabs, just like the Arabs inside Israel. That's why they chant "Palestine will be Arab". People should know that as a minimum to participate in any discussion of the Middle East.


> I don’t know what you mean. Palestinians should agree to accept a deal.

No, the occupation is illegal and those who are illegally occupying the territory of other people must leave it, period.

Not even bothering to answer the rest of your fantasy propaganda.


> No, the occupation is illegal and those who are illegally occupying the territory of other people must leave it, period.

Why is it an occupation? Arabs declared war on Israel and lost Judea and Samaria.

Do you campaign that Poland and France illegally occupy parts of Germany and must leave the land? Or do you only do that when the state is Jewish?


> Abbas wants a Palestinian state beside Israel.

Have you researched what he says in Arabic, rather than English? In a 2014 interview on Egyptian TV (Arabic), Abbas stated he would never recognize Israel as a Jewish state and could not "close the door" to "refugees" wishing to return.

He's also insane: in 2023 at the UN (in a speech with Arabic elements echoed domestically), he denied proof of Jewish ties to Al-Aqsa/Temple Mount and accused Israel of lies akin to Goebbels propaganda. In April 2025, during a PLO Central Council meeting in Ramallah (televised in Arabic), he claimed the Quran places the Jewish Temples in Yemen, not Jerusalem.


[flagged]


[flagged]


> "Hitler did not kill the Jews because of their religion; he killed them because they were Jews. And we must remember that, because there are those who would do the same today if they could."

Benjamin Netanyahu, speaking at the World Holocaust Forum in 2020.

> "Hitler was not only a mass murderer, he was a master of deception. He deceived the world about his true intentions, and ultimately, his goal was to exterminate the Jewish people."

Benjamin Netanyahu, during a speech at Yad Vashem in 2012.

> "For many, Auschwitz is the ultimate symbol of evil. It is certainly that. The tattooed arms of those who passed under its infamous gates, the piles of shoes and eyeglasses seized from the dispossessed in their final moments, the gas chambers and crematoria that turned millions of people into ash, all these bear witness to the horrific depths to which humanity can sink."

- Benjamin Netanyahu's speech at the 5th World Holocaust Forum in 2020

I hope every Hitler-apologist and Holocaust-denier says and believes things like that.

Compare that to al-Husseini, a key figure worth learning about. He was an Arab leader who stoked anti-Jewish sentiment with religious propaganda about the Al-Aqsa Mosque (still going today, perpetuated by Hamas), possibly to distract from his corrupt management of religious endowments. Al-Husseini led anti-British and anti-Jewish violence for years. He contributed to what is a key turning point or escalation of the conflict, the Hebron Massacre in 1929.

A rumor that Jews planned to take control of the Al-Aqsa Mosque resulted in a violent pogrom against a Jewish community with a continuous presence of thousands of years. By the end the mob had killed 67 Jews. The rioters attacked homes, horrifically tortured and killed entire families, mutilated, raped, stabbed children, and enacted mass destruction on the Jewish quarter. In many ways it was eerily similar to October 7th. I was shocked when I read about it.

There were some noble people who saved lives from the mob of hundreds. One Arab man literally rode in on a white horse to protect some defenseless people.

What did al-Husseini say about the riot?

> "The massacre in Hebron was the result of a natural and justified reaction to the growing presence of Jews in Palestine. The Jews were responsible for the violence that broke out."

He blamed the victims.

> "Palestine is the land of the Arabs, and it will remain so. The Zionist movement is an illegal act and must be opposed by all means, including violence."

He advocated for violence.

> "The Jews have no place in Palestine and should leave."

al-Husseini had very close ties with Nazis. He broadcast Nazi propaganda over Arab radio from Berlin, urged Muslim and Arab populations to support Nazi efforts, and echoed their antisemitic ideology. He openly called for the destruction of Jewish communities in the Middle East. He helped recruit Muslim soldiers for the Waffen-SS, which is considered among the worst of the Nazi forces in terms of atrocities and war crimes.

And Abbas? An actual Holocaust denier and revisionist?

> "The Zionist movement cooperated with the Nazis in persecuting the Jews, and this is a well-known fact.”

Mahmoud Abbas, 1982, in his PhD thesis "The Other Side: The Secret Relationship Between Nazism and Zionism."

> "The number of Jews killed during the Holocaust is exaggerated."

Mahmoud Abbas, 2018

These are some examples of what Holocaust denial and revisionism typically sound like.


Here's a funny video to watch

https://youtu.be/f9HmkRYlVZw


> I hope every Hitler-apologist and Holocaust-denier says and believes things like that.

You’re right - he’s only a Hitler apologist and holocaust denier when it suits him. He’s not consistent about it.

He’ll tell this story to demonize current-day Palestinians and justify the violence done daily to them. He doesn’t just “advocate” for violence, he personally directs it, and he tells stories like this to make it worse.

He tells the other stories you mentioned to capitalize Jewish victimhood, silence critics, and distract from Israel’s crimes against the Palestinians, which have nothing to do with any of this.

> These are some examples of what Holocaust denial and revisionism typically sound like.

Not really. Both quotes from Abbas are very tame. They make him about as much a holocaust denier as Bibi is.

At any rate, they don’t make him or the Palestinian people any less of a partner for peace.

And even mentioning Abbas’ views on the holocaust, in the context of the subjugation, persecution and extermination of his own people for over a decade, is so incredibly cynical and cruel and pathetic, it should tell you everything you need to know about Israel’s position in the conflict.


The two-state solution doesn't need to be offered, it needs to be imposed. And it needs to be imposed on both parties, which means that Israel needs to be forced to withdraw within its legitimate borders.

Israel (with the West's participation and complicity) has been perfectly able to impose on Palestinians the settlements, the walls and the apartheid. Therefore Israel and the West will have no trouble imposing on Palestinians wider borders, withdrawal from settlements and the end of the occupation.


Yes. Another pattern you can observe throughout the years of the conflict, right now again in Lebanon: When Israel rejects an offer, they get a better offer. When everyone else in the region rejects an offer, they get a worse one.

The State of Palestine is recognized as a sovereign state by 157 of the 193 member states of the United Nations, or just over 80% of all UN members.

The only obstacle to the two-state solution is Israel and the US blocking it with powerful violence.

Please don't lie.


Countries recognising Palestine doesn't matter. Palestine wants one state and to kill all the Jews. They say so explicitly and repeatedly. Anyone who has done any research into the middle east knows this.

Your comment is a complete racist fabrication. You should be banned.

For those wondering, it is verifiable story, it is covered as fact in Israeli newspapers:

https://www.timesofisrael.com/israeli-forces-kill-west-bank-...

https://www.ynetnews.com/article/p7mq5k5bs

The main justification floated is that the car was "going fast" and thus made the undercover Israeli soldiers feel unsafe.

The New York Times describes it as such:

"Ali Bani Odeh’s wife and four young boys hadn’t seen him in a month and a half when he came home to Tammun, in the West Bank, from his construction job in Israel late on Friday to spend the last few days of Ramadan with his family.

On Saturday night, the boys persuaded him to take them out for a drive. Eid al-Fitr, the end of Ramadan, was coming, so there were new clothes to buy. The day’s fast had been broken, so there were sweets to be had, too.

They picked up fried doughnut holes in Tubas, saving them for later, but the clothing shop they went to in Nablus was closed. It was already past midnight, so they headed back to Tammun: Khaled, 11, the oldest, in the back with Mustafa, 8, and Muhammad, 5. Othman, 6, blind and incapable of walking or feeding himself, was in his mother’s lap in front.

As they rounded a corner slowly, a few minutes from home, young Khaled and Mustafa recounted on Sunday, their mother, Waad, 35, asked her husband to pull over and take Othman from her so she could get something from her bag on the floor. Suddenly, the boys said, they saw laser pointers shining on their family from every direction, heard their mother scream, heard their father say “God is great” — and then heard a deafening fusillade of gunfire."

https://www.nytimes.com/2026/03/15/world/middleeast/palestin...


The situation in the West Bank (and similar forces are at play in Gaza, too) remind me of what's wrong with American policing, at a far more extreme scale.

The people charged with enforcing the peace deploy lethal force with near impunity at the slightest "provocation" (a child throwing a stone, a car driving too fast); I wouldn't be surprised if IDF forces deployed to the West Bank are trained much like American police officers are, to operate in constant fear and perceive absolutely everything and everyone as a deadly threat to be neutralized. The soldiers themselves are raised in a culture with deeply racist undertones, making them all too ready to view any random Palestinian as a terrorist. Meanwhile, the bureaucracy that should be overseeing them works only to protect them. It's no surprise that things like this happen as often as they do.

Reform in the US is imaginable, I can and do believe, but it's much harder for me to imagine it in Israel - even much of the so-called left in Israel is too radicalized against Palestinians after 100 years of conflict, the Second Intifada, and October 7.


That's a huge problem (immediate, unjustified escalation to violence becoming the norm) and:

> The main justification floated is that the car was "going fast" and thus made the undercover Israeli soldiers feel unsafe.

"I feel unsafe" has become the catch-all excuse for everything in the recent decade. It's used to justify everything from Karen complaining about someone's behavior in public to people calling the cops on someone for looking at them wrong, to making a scene on a public bus, to police officers jumping the gun and escalating to violence, all the way to war crimes. When did "I feel unsafe" become this ultimate i-can-do-anything-and-avoid-responsibility card? Like a magic spell that you can cast before doing something crazy. It's like that old "He's coming right for us" South Park joke, but instead of being a joke it has real life and death consequences.


Most people will never interact with a cop on duty outside of a speeding ticket or some other mundane encounter. A major chunk of what many people think about police comes from TV and movies.

It's impossible to overstate the influence of Dragnet (the OG police procedural from the early 50s) alone on the widely held idea that police are mostly heroic and good. Police procedurals are still extremely popular, they overwhelmingly portray law enforcement in an extremely idealized way.

There are exceptions (The Wire, The Shield), but they are noteworthy in that the police are not heroes.


> When did "I feel unsafe" become this ultimate i-can-do-anything-and-avoid-responsibility card?

It only works if you deploy it against someone lower-status than you. The tactic is largely irrelevant and can be seamlessly replaced with any of a number of other tactics as needed. It's just enforcement of power hierarchies.


It really does only work when deployed against someone of a lower status. Just for example, if you imagine a stereotypical homeless man complaining that he felt unsafe against a stereotypical Karen, in the US there is no real chance he will be taken seriously, regardless of the circumstances. It is more or less the "Get this peasant away from me" of our time.

I found your comment to be very insightful and I appreciate it banannaise


Watching some of the endless examples of police abusing their powers, committing crimes, or obviously lying in the US on YouTube videos has removed any ability to just trust police. In the US, because the police can basically always escalate to violence on any occasion, they are just dangerous to be around.

I never thought about it until this horrible story at the top, but why don't soldiers have to have cameras recording their actions? Because war is a terrible thing and we don't want to have video of people murdering each other. But peacekeepers should have cameras.


> I wouldn't be surprised if IDF forces deployed to the West Bank are trained much like American police officers are

IDF trains them.

https://www.amnestyusa.org/blog/with-whom-are-many-u-s-polic...


David Simon and others have written extensively for decades about the problems with the Baltimore Police Department, and other departments around the country. They trace these problems back to the war on drugs and other purely American factors.

The Amnesty article that you're citing is a post hoc ergo propter hoc fallacy. The Baltimore Police Department did not need to learn about constitutional violations from the Israelis.


Everybody thinks the War on Drugs is about "keeping people safe". It never was, it was always about manufacturing a tool to oppress "others".

You can add The War On Terror to that list.

Where do think US police get all their fun toys to play with?

"How 9/11 helped to militarize American law enforcement": https://www.brookings.edu/articles/how-9-11-helped-to-milita...


Yep. But the War on Drugs has been around much longer and is more relevant to people's day-to-day lives. And people buy into it. I hear this all the time: "Sure, weed should be legal, and cocaine too because I like to party now and then, but the 'hard stuff' should definitely be illegal because it's dangerous".

To make matters worse -- people think that those who advocate against it are doing so because they want to do drugs (and some may) but it's a civil liberties issue and is the foundation for the militarization of the police.


The War on Drugs is more relevant to your day-to-day life, perhaps, but people in the Middle-east are also people, in case you forget that.

from that lens it was almost necessary to invent a pretense since people got all huffy about overt oppression at the end of Jim Crow.

That checks out. Although the history of "Warrior Policing" in the US predates this (going back to the 60s) and extends far beyond IDF training programs:

https://en.wikipedia.org/wiki/Warrior_policing


Pretty sure police brutality was invented way before Israel existed.

[flagged]


The strong/dominant beating up on the weak is as old as time, unfortunately. One doesn't always have to make that particular comparison, as it is a sensitive one. You can point to any major instance of colonization (by whomever) to see similar policies, and in the past it was even more brutal because there were no reporters (e.g. the Congo Free State had an estimated population decline of 75%: https://en.wikipedia.org/wiki/Atrocities_in_the_Congo_Free_S... .)

In the 1200's British colonizers invaded Ireland, in 1920's the same colonial oppressors were moved to Palestine. Arthur Balfour was Chief Secretary for Ireland from 1887 till 1891 and it was his idea to create a Jewish state in Palestine.

Ship out the Jews, radicalize the natives, have the two of them fight for hundreds of years. It couldn't be a more British idea.


It was absolutely not Balfour's idea to create a Jewish state in Palestine.

The Balfour declaration was from 1917. But the Zionists first started to move to the region in the hopes of establishing a homeland in the early 1880s, based on their belief that a Jewish state (anywhere; Argentina was another candidate) was necessary for their long-term survival due to the long history of antisemitism in Europe - getting worse by the day - and their (correct, it turned out!) fear that it could reach cataclysmic levels. It was very much their idea.

Balfour's declaration, which wasn't official law, didn't single-handedly dictate British policy for the next 30 years and 14 governments; people vastly overstate the importance of it. Britain did not "ship out" the Jews - most Jewish migrants to Mandatory Palestine were from Eastern Europe and came to Mandatory Palestine very much of their own volition, without British help. And in 1939 - just in time for the Holocaust - Britain cracked down hard on Jewish migration to Mandatory Palestine to try to quell Arab unrest; Jews continued to migrate illegally anyway, despite what the British wanted.

Of course Britain had its role in contributing to the violence in the region, but to characterize Israel as a British colony is to deny Jews agency. It is curiously antisemitic, even as it (implicitly) absolves them of some of the blame for how things have gone.


> people vastly overstate the importance of it.

Fascinating, thanks for pointing this out.

> to characterize Israel as a British colony is to deny Jews agency. It is curiously antisemitic, even as it (implicitly) absolves them of some of the blame for how things have gone.

Some hill to die on.


I'm Jewish (though not Israeli); my grandparents were among those Jews who fled to Mandatory Palestine against the British's and Arabs' wishes to escape the Holocaust. Kindly, I think I'm a better judge of the right hills to die on when it comes to this particular subject.

> I wouldn't be surprised if IDF forces deployed to the West Bank are trained much like American police officers are'

American police officers ARE trained much like IDF forces. By the IDF! https://jinsa.org/jinsa_program/homeland-security-program/


The IDF is a foreign occupation army, not the police.

At least in the US, the police come from much the same communities as they patrol, and there's some sort of democratic accountability. Don't like the police? You can vote for local government candidates who will implement reforms.

In the West Bank, Palestinians are subject to arbitrary violence at the hands of foreign soldiers. The IDF is not there to protect Palestinians. It's there to protect the Israeli settlers who are taking Palestinian land. If Palestinians don't like how the IDF behaves, tough luck. Palestinians can't vote in Israeli elections, so they have zero say in the government that exercises ultimate authority over their lives.

This is a fundamentally different situation from policing in the US.


[flagged]


Yes, American police use these kinds of justifications when innocent people are killed too. It's absurd (watch Surviving Edged Weapons [0] some time) either way.

The reality is, if you have soldiers mowing down children throwing rocks, mowing down families driving around, mowing down kids playing football, mowing down toddlers in their bedrooms, mowing down hundreds of people each year [1], you've over-indexed on vigilance and under-indexed on the value of human life. You're not trigger-ready, you're trigger-happy.

[0] https://www.youtube.com/watch?v=S6jhru-EqDA

[1] https://www.un.org/unispal/document/ohchr-press-release-17oc...


Leave it to RLM to enlighten us about the police brutality roiling America, and to entertain while doing it!

[flagged]


I'm going to repost and elaborate on a reply of mine that appears to be shadow dead with no explanation. This doesn't seem to be the usual result of disagreement flagging. The only problem I can see is that perhaps this did not meet the level of substantiveness expected from an HN comment (OTOH I don't see how what it was replying to would meet this either, and mine is at least coming from the direction of intellectual curiosity!)

"Perfect example of how no one thinks they're the villain in their own story"

To be clear, the comment I'm replying to is justifying "mowing down children throwing rocks, mowing down families driving around, mowing down kids playing football, mowing down toddlers in their bedrooms" based on some amorphous other "players" supposedly not valuing their own life (as a hypothetical soldier!). If this isn't a stark illustration of how individual people in a cycle of violence justify their own crimes to themselves, I don't know what is.

The position would make sense in the context of, say, a street mugging where the victim ends up shooting the assailant. It might make sense in the context of domestic policing where the subject of an arrest attacks the police (modulo the usual moral hazard wherein cops create pretexts to claim they were being attacked). But in the context of this article and the preceding comment, I don't see how it is anything but a rationalization for some pretty sick violence.


That's pretty crazy mental gymnastics. Palestinians have been attacking Israeli civilians forever. They strapped bombs under their kids' beds, etc. It's clear they don't value Israeli life, nor their own. They have been indoctrinated to hate Jews before birth. There is nothing controversial about it. Israel has been doing its best to avoid civilian deaths, the polar opposite of Palestinian behavior. Yes, mistakes have been made, but trying to equate the two is deliberate misinformation.

> Palestinians have been attacking Israeli civilians forever. They strapped bombs under their kids' beds

To the oppressed, everything is permissible.

> They have been indoctrinated to hate Jews before birth

"Fetuses are antisemitic" is a new one.


Please elaborate on what exactly you're calling "crazy mental gymnastics". Your followup points are merely textbook dehumanization of an entire group. So as I said, cycle of violence.

I already did. I don't think this can be clarified anymore for a person with an agenda.

The only thing I've said here is calling out your incitement to genocide. If that qualifies as an "agenda" to you, then I don't know that there is anything left to say.

At the bottom of article:

> Between 7 October 2023 and 15 March 2026, the UN's humanitarian affairs office, OCHA, says 1,071 Palestinians were killed in the West Bank, including at least 233 children.

Does that sound like genocide?

Meanwhile, https://en.wikipedia.org/wiki/October_7_attacks says 1,195 civilians and security forces were killed and 4,300 rockets were launched. How many people would that have been if Israel were jamming kumbaya?


I was not making the argument that the situation is genocide. Rather I was pointing out that your comments constitute incitement to genocide.

[flagged]


lol. That refrain has gotten pretty tired and even the mainstream is waking up to how preposterous it is.

Suffering horrific atrocities in your culture's past is not some license to commit your own new atrocities. Seriously, try applying your own rationalizations to the Palestinian perspective and see how that makes you feel - can October 7th be justified because "[Israelis] have been attacking [Palestinian] civilians forever" ? The answer is a resounding NO.

As I said, it's cycle of violence.


A professional looks at and understands the situation as it exists now. A professional is trained not to let fear control them. Your argument compellingly suggests that either these are not professionals, or they are professionals and are doing this on purpose.

The stats today clearly show the massive difference between the danger to Israeli personnel and the danger to Palestinians. Israel at this point has either failed to train professional forces that seek to de-escalate and avoid dangerous situations, or it is training forces to seek out situations where they can claim fear as a justification for murder.

So, pick. Either they are amateurs, in which case it is deplorable to put amateurs with this much force near a vulnerable population, or they are professionals trained to do exactly this: find ways to kill a vulnerable population and claim self-defense.

[flagged]


So what exactly did the 8 year old boy sat in the back of his parents car do wrong?

[flagged]


Again, what law was broken here? By anyone in the car? I'm struggling to understand how this wasn't outright execution.

Luck implies a lack of fault. Also, we probably shouldn't open fire on suspects fleeing from a heist either, kid or no kid. Extrajudicial justice is generally a bad thing; this is why PIT maneuvers exist. Allowing police to fire at moving vehicles is a universally bad idea, and one that's understood by most nations.

> I cannot wait for "kid" to be a number one accessory to bring to a heist then.

And when that happens then we can have a conversation. But as it is, you’re justifying slaughtering a family because of a story you invented.


Or in democratic societies we can insist that our "public servants" actually serve the public interest of law and order rather than merely using it as a pretext to be able to commit their own violent crimes.

Your rationalization is nothing more than a product of a failed society. Bringing it up as pragmatic advice might make sense, although still not for this incident where the "offense" seems to have been merely stopping a car on the side of the road. But invoking it as some universal value of "what ought" is a pure crab bucket mentality.


Then by your logic, every society on earth failed, because there are no places where you can act belligerently towards law enforcement and expect it to end well.

Correct, they failed. Cops are rightfully called all kinds of nice things in all countries. We are far from having what would be a non-failed society, but liberal democratic capitalist countries get much closer to success.

Perhaps by your obtusely applied system of logic, but not by mine. Societal values are ideals to be worked towards, not some sort of axiomatic foundation that pops into existence fully formed. The failed society condemnation pertains to your remark, which comes from a place of having given up on the idea that governments should be accountable to their citizens - aka authoritarianism.

I'll repeat the bit about professionals being trained to avoid and de-escalate. That is the point. I think the details of this and many similar incidents clearly show no attempt to de-escalate or avoid. That was the clear argument I made in my post, and I am re-emphasizing it now. This clear trend shows either malicious intent by professionals, or amateurs put in a situation they shouldn't have been allowed near, and those above them should be held accountable for it.

The IDF is not law enforcement. It's a foreign army. It treats Palestinians with utter contempt and has no problem with killing them. Its job is to protect Israeli settlers who are taking Palestinian land and to prevent the Palestinians from resisting Israeli rule.

Comparing the IDF to law enforcement in a democratic country is not relevant.


Their media has been nonstop hammering citizens with scary Muslim stories since the beginning of the country, every day since birth, with a density that suggests nothing else ever happened in the world.

Deprogramming is possible. Just don't try to argue it was their own idea; they know how hard it was rubbed in their faces.


A certain amount of politics should, and must, be tolerated on HN, because you cannot compartmentalize technology, politics, and morality.

No one, not even people who say they like technology but do not care about politics, should be able to live their life without knowing that we live in a world where six-year-old blind children are murdered with automatic assault rifles.

(For the same reason that no one should be able to live without knowing that Jews were once murdered in the millions in gas chambers.)


Technology IS politics.

Technology is a form of control. And in the capitalist system, this control is mostly exerted by private companies, to which the rules of democracy do not apply.

There must be guardrails.


Technology is not a form of control at all. Technology is the practical application of things you know, to achieve things that don't happen naturally. Here's what the wiki says:

> Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way.

By this definition, the earliest wooden and stone tools, use of fire, wheel, agriculture, housing and clothing were all legitimate technology. It's no more 'a form of control' than medical science, any form of economics and commerce or any arts are.

It's true that technology is being used as a tool of oppression. But there are several reasons for it. Controlling access to technology is one of the easiest ways to control a society, either by gatekeeping access to its building blocks or through draconian legislation. This is possible with, and done to, medical science and the arts too.

We can live quite comfortably without the 'modern technology' that only the rich can control. But we are subjected to peer pressure by statements like "you can't compete in this era without smartphones", "you will be jobless without AI", etc. And we fall for all of it without question. It enrages me when I suggest that people should choose freedom over convenience, and people reject it flippantly, citing market forces and supporting the abusive companies that make those products.

Mischaracterizing and vilifying technology in response to its hijacking like this will not serve us in any manner. People already respond negatively when they hear "technology". But it's a discipline that we must own, instead of merely being consumers of it. Technology is one of the components we need to fight back against control.


Stone tools, fire, the wheel and farming are forms of control. You learn that from prehistory; stone tools and fire provide the baseline for manufacturing, trade and warfare. Farming and transport creates a backbone for logistics and taxation. Each invention contributes to a greater degree of state-sanctioned control; "the people" rarely ever win.

The mischaracterization comes when people get comfortable assuming that technology cares about them. Your stone axe does not want to keep you alive; your iPhone has no self-preserving motivation to maintain privacy. Making these kinds of hopeful-but-foolish assumptions is how people become disenchanted with progress and associate it with evil.


Technology is *DEFINITELY* a form of control of humans over their environment / nature / peers

[flagged]


i've been on hn a long time, and if there's a prohibition against anything vaguely political if it can't be connected to technology, i've never known it.

I was shadowbanned for mentioning Iryna Zarutska. Most political topics can be connected to technology: technology, after all, is often how we hear of and discuss these things.

How did you realize you were shadowbanned?

I'm curious because I sometimes wonder, if that happened to me, would it affect the way in which I engage with this website?

FWIW, I often lurk, but sometimes engage (like right now). Perhaps it could happen to me and I would not realize it for a while...


When your karma stops changing one way or the other, that's the biggest giveaway.

See my response above. You can also log out and see if your comments are shown to a user who isn't logged in. If your comments are only shown to you when you are logged in, then you are shadowbanned.

Hey there, sorry for the late reply. It's as simple as logging out and viewing the page. If you can't see your comments, they are not being shown to anyone but you when you are logged in. I asked HN moderation about this; it has to do with being flagged and downvoted. I don't know the remaining details. I hope this helps. I practically begged HN to consider the ramifications of shadowbanning and the like. I understand the spam angle, but as a minority (who also has minority opinions) it makes me very sad that people are unwilling to believe that others dare to disagree with them.

It's not strictly tech. But tech tends to be both new and intellectual. Sometimes it can be an old phenomenon but also curious; people often just paste Wikipedia links here and they trend.

From the guidelines:

On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.


there is and always has been a strong prohibition of anything political on HN. it is widely and frequently discussed as the main problem with HN. Usually, a post like this would be removed very quickly

odd that i've never noticed

[flagged]


this comment is very pedantic

There are many objections that could be raised to both my tone and content there but I don't think "pedantic" is one of them.

It's certainly mocking. The subject matter is overly political. It's even plausible that it strays across the line laid down by the HN guidelines, although personally I think it's acceptable given the context. Something about fighting fire with fire.


[flagged]


Those stories aren't very visible because they are BS. Few people are dumb enough to think that a puppet government set up by an occupying force will improve their lives. Those dancers are either paid actors or not very bright.

The double standard I would like to see addressed is this: will any country have enough cojones to boycott the World Cup this year. My guess is no.


> Or much more most recently than WWII: not knowing that 1200 civilians were slaughtered by Hamas terrorists, whom palestinians did vote in power.

And if you want to go even more recently, check out what the IDF is doing in Gaza.


The crazy double standard is you telling absolutely verifiable lies and feeling completely fine and righteous about it.

Really? Verifiable lies? Please share your source to help us understand which one to trust.

AFAIK, his points are all well-documented and properly referenced on Wikipedia:

1) 1,200 people slaughtered, referenced over 450 times: https://en.wikipedia.org/wiki/October_7_attacks

2) The Islamic Republic of Iran slaughtered 30,000 unarmed civilians, referenced over 240 times: https://en.wikipedia.org/wiki/2026_Iran_massacres


Like the other commenter has said, the reason there are no such stories is that they would be hypocritical BS, and I'll add, designed to manufacture consent for unlawful military action against a sovereign nation.

The perennially genocidal occupying force controlling all aspects of Palestinian life including forcing them into a subsistence diet, "mowing the lawn" in Gaza every so often, shooting down peaceful unarmed protesters - some of them disabled - and all that before 7/Oct - has no right to complain about terrorism, for it's what it has inflicted on Palestinians for decades.


I'm wondering about the broader context here: Are stories like this rare or common? Are they increasing or decreasing in frequency?

Yeah, it is getting worse. This was written by Human Rights Watch three days ago, before this event:

https://www.hrw.org/news/2026/03/13/in-the-shadow-of-war-set...


[flagged]


> this is war 101

The west bank isn't at war with Israel. There wasn't some conflict or event that has justified these actions.

I wish people understood this better. Even if you could manage to justify what's happening in Gaza as "this is war", Gaza and the West Bank are separate entities with separate governments. The West Bank, in particular, is more like an Indian reservation in the US, with the Israeli government effectively exercising supremacy over all aspects of its government.

Theoretically, the IDF is supposed to be the police force for the West Bank. That's why it occupies it.


[flagged]


Wrong.

Gaza and the West Bank aren't countries; they have no autonomy. Palestine isn't a country: it was once where Israel now sits, but it hasn't been since the 40s.

Palestinians are people, much like Jews are people. Palestinians are the indigenous inhabitants of Israel, the West Bank, and Gaza.

Much like all Jews aren't responsible for the actions of Israel, all Palestinians aren't responsible for the actions of Hamas. Not even the residents of Gaza.


> Palestine isn't a country, it was once where Israel now sits, but hasn't been since the 40s.

In the 40s, the British were ruling Palestine as a mandate; I wouldn't really call that a country.


Fair enough. I should say that it was the name of the region, as they've basically never been fully autonomous in modern history. But prior to the establishment of Israel, they were basically just left alone by both the Ottomans and Britain.

[flagged]


What am I incorrect about?

> You can't then say that the West Bank is not responsible for what the rest of Palestine did.

Collective punishment is a war crime.


> this is war 101, every day.

Except this situation has been going on for 60 years, with Israel and the other Western states having absolutely no plans to change anything about it (except to make it even worse).


I don’t think anyone is going to forget about this

Completely deranged way of thinking, one that calls for some hard self-reflection.

> this is war 101

genocide 101


"The main justification floated is that the car was "going fast" and thus made the undercover Israeli soldiers feel unsafe."

Funny way of saying trying to run someone over.


I just know what type of person you are from this comment
