Hacker News

I'm convinced Worldcoin is just a massive biometric data gathering operation, with "we have a coin to save the world" as the cover story.

Altman (for some reason) needs a ton of biometric data, so they go around the world, especially to poorer places, offer folks funny money in exchange for their biometrics, and run off.

Set aside the moral and ethical questions and it's quite frankly a very good way to get your hands on this type of data.



HN has become quite kneejerk-cynic, but let me explain what Worldcoin is supposed to solve in Sam Altman's eyes.

#1: AGI is coming soon.

#1.a: Because of #1, there will be a flood of unverifiable bot activity on the internet, easily outnumbering real human interaction by 100x+. The only way to identify other humans is some sort of biometric verification process.

#1.b: Because of #1, the vast majority of people will become an economic net negative: competing with an easily replicated AGI, they can't produce more than they consume. Thus we need a Universal Basic Income (without the messy fraud) and a way to distribute it evenly, so that people can still enjoy a decent quality of life.

Sama wrote about these concerns on his blog about a decade ago. Or just believe it's "because money $".


Let’s be honest, Sam wanted an ATM of other peoples’ money and built one. If your name isn’t Sam Altman and you’re participating in Worldcoin, you’re an idiot.


I hate hearing these things described passively. AGI isn't "coming soon". It's not a storm or a season or a comet. People — specifically Sam Altman! — are building it. So when those same people say "hey, here's a solution to a problem that AGI will cause that also happens to make me a lot of money", we're right to be skeptical.


Rightly or wrongly, they believed AGI was inevitable and, left alone, would end up in the hands of profit-maximizers like Google or Facebook, so they needed to get there first under a non-profit. Hindsight shows this was a dumb, reckless move: they sped up timelines and the arms race by 3-10 years, and are showing no sign of the restraint that was supposed to be their raison d'etre.


And they're no longer really a non-profit as Microsoft is now getting ready to break the bank with their gazillion different copilots.


What I don't understand from Altman's point of view is that if AGI will bring such a disaster that we need to subject ourselves to yet another Bad Thing along the lines of WC, then why in the world is he trying so hard to make sure his dystopian worldview actually happens?


If it's inevitable, better to be first than late. Personally, I don't think there will be AGI anytime soon, or at all; and if there is, OpenAI is not the one that will achieve it... and if they do, they will not be the one controlling it.


> then why in the world is he trying so hard to make sure his dystopian worldview actually happens?

If he's also working under the assumption that AGI is inevitable, it would make sense to want to be first at it so it aligns with his values more, while also preparing for a post-AGI world.


I don't follow that logic. If he thinks the development of an AGI is inevitable, why would it make sense to be the first one to do it, when he also clearly thinks its existence is a grave danger? What does it matter who does it first? The results are the same regardless.

If I thought that AGI was anywhere close to imminent (which I don't), then this perspective seems to me like it has a great risk of being a self-fulfilling prophecy. Why risk being the one to bring the bad thing into existence? Wouldn't it make more sense to let someone else be the bad guy and instead focus on defense?


Because they believe the results are not the same regardless. AGI's impact on humanity will hinge on whether we are able to correctly impart human-loving values onto what is essentially an unhuman system, so-called "alignment." If we align AGI, it will make us obsolete but at least give us a good life. If we don't, and it has goals of its own and the superpower to subvert our attempts to thwart them, we will end up as ants are to Google. Not hated, but a tiny nuisance to be disregarded when a new data center needs to be built. The only defense is slowing down AI capabilities until alignment has been rigorously verified.


> Wouldn't it make more sense to let someone else be the bad guy and instead focus on defense?

Maybe an analogy is more like an unsafe building collapsing randomly versus a controlled demolition after getting everyone out? The thing is going to happen, but if you're the one to make it happen, you can prevent it from being as much of a disaster. Not all possible AGIs are equal; he sees it as ensuring that when there is an AGI, it's aligned to his values.

It's being defensive against bad-AGI by developing good-AGI first. It's not clear if it will work, but it's better than just hoping that setting up a good framework for UBI will keep you from being turned into paperclips.


Money.


> #1.a: Because of #1, there will be a flood of unverifiable bot activity on the internet, easily outnumbering real human interaction by 100x+. The only way to identify other humans is some sort of biometric verification process.

The intrinsic problem is that over the internet, you're basically receiving an "id=<hash of biometric data>" field, and how do you know that biometric data actually came from a scanner rather than from a hacker who stole the master biometric database? This is the intrinsic problem with any e-verification: there is no way to distinguish between a human initiating something on their computer and a program on that computer imitating a human initiating it, and if you have a problem that requires you to distinguish the two, well, you're out of luck.
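To make the replay problem concrete, here's a toy sketch (not Worldcoin's actual protocol; every name and data format below is hypothetical). The server only ever sees an opaque identifier, so a hash computed by a live scanner and the same hash replayed from a stolen database are byte-for-byte identical:

```python
import hashlib

def server_verify(submitted_id: str, registered_ids: set[str]) -> bool:
    # The server sees only opaque bytes; it cannot tell whether they were
    # produced by a live scanner or replayed by an attacker.
    return submitted_id in registered_ids

# A legitimate user's scanner hashes their iris template at registration...
iris_template = b"alice-iris-template"  # hypothetical raw biometric data
legit_id = hashlib.sha256(iris_template).hexdigest()
registered = {legit_id}

# ...but an attacker who stole the template computes the identical bytes.
stolen_id = hashlib.sha256(b"alice-iris-template").hexdigest()

print(server_verify(legit_id, registered))   # legitimate scan accepted
print(server_verify(stolen_id, registered))  # replayed hash accepted too
```

Both calls succeed, because from the server's side the two requests are indistinguishable: whatever cryptography sits in front of it, the trust ultimately bottoms out in the device that performed the scan.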


> #1.a: Because of #1, there will be a flood of unverifiable bot activity on the internet, easily outnumbering real human interaction by 100x+. The only way to identify other humans is some sort of biometric verification process.

This AGI is supposed to be super intelligent right?

I wonder if it can figure out 'Nice browsing history you have there Pete, would be a shame if your wife saw it. Now scan your eye for me please'.

I don't think this will help anything against a real AGI.


What prevents anyone with a sufficiently large dataset from using AI to generate fake iris prints that can't be distinguished from the real thing?


I believe it's access to the scanners used for registration. The whitepaper doesn't seem to address how they prevent illegitimate use of "The Orb", beyond revoking IDs, which sounds ripe for abuse.


Ah, so classic in crypto: Pretend you solved the problem by just kicking it down a level in the org chart.


At this point, given what happened to OpenAI, why would you believe anything Sam Altman ever said or wrote?


> Thus we need a Universal Basic Income (without the messy fraud)

Is this depiction of basic income as (inherently) fraudulent yours? Or Altman’s?


It’s mine, in that there is historically some amount of fraud in government distributions (hello PPP and EDD), and if UBI is required to survive, then 5% fraud means 5% go without anything. I imagine Sama came to similar conclusions, but I don't know of any explicit mentions.


Totally agree. There is no conspiracy.

Of course, 20 years from now if this is actually true well then we HAD to sell the biometric data at some point because a project of this size needs the funding.

We really had no choice in the matter at Worldcoin. We didn't want to sell the biometric data but we couldn't see any other way to get to UBI.


Step 1: Record biometrics from millions of poor people. Step 2: ??? Step 3: Profit.

It's a bit like the outlandish COVID vaccine conspiracy theories: Step 1: Vaccinate people and deploy 5G wireless networks. Step 2: ??? Step 3: Total power over people; world domination.


Well for one thing a database of biometric records gives you an immutable way to impersonate those people on any other platform that relies on the same biometrics.

But that's kind of beside the point. I wouldn't be at all confident that you or I thinking about this problem for a few minutes are going to uncover every possible abuse vector that a motivated nefarious actor will in the future.

Collecting this kind of private data for no clear benefit looks sketchy as hell.


Total speculation, but it could be used as training data for GPT-5 or whatever they are doing at OpenAI. I could totally see a scenario where they would want to train an AI on biometric data and then sell that to the govt.



