From a layman's point of view antimatter seems like an ideal spacecraft fuel. It's as energy dense as E = mc^2 allows, and if you have infrastructure to make it, the only input you need to produce it is electricity.
Being able to transport it seems like an important piece of that puzzle.
Production and storage would need to be scaled by many orders of magnitude, but that's merely an engineering problem...right?
The confinement scheme used here is likely a Penning Trap. Such devices are limited in the amount of antimatter they can store by the Brillouin limit. The energy stored will be no more than the magnetic energy of the field of the trap, and so much less than the explosive yield of a mass of TNT (say) equal to the mass of the trap.
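A rough order-of-magnitude sketch of that bound (all the concrete numbers here are assumptions for illustration: a 5 T magnet, about a liter of trapping volume, 100 kg of trap hardware):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
TNT = 4.184e6              # J per kg of TNT equivalent

B = 5.0          # tesla: a strong superconducting trap magnet (assumed)
volume = 1e-3    # m^3 of trapping volume (assumed, about a liter)
trap_mass = 100  # kg of trap hardware (assumed)

u_mag = B**2 / (2 * MU0)   # magnetic energy density, ~1e7 J/m^3
e_field = u_mag * volume   # total field energy, ~10 kJ
e_tnt = trap_mass * TNT    # chemical energy of a trap's mass of TNT, ~4e8 J

print(e_field / e_tnt)     # ~2.4e-5: the field energy is tiny by comparison
```

So even before hitting the Brillouin density limit, the energy budget of such a trap is tens of kilojoules, not kilotons.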
From a layman's point of view, I'm more interested in antimatter's potential as a weapon.
Not necessarily because I want to use it, but because I have a vague idea of what it's capable of, and what that would mean in the hands of certain groups capable of producing it.
> According to Michael Doser, a prominent particle physicist at CERN, "one 100th of a nanogram [of antimatter] costs as much as one kilogram of gold."
Those aren't comparable costs. The cost given for antimatter is the cost of producing it from nothing. The cost given for gold is the market price of buying gold that already exists.
Consider the cost of producing one kilogram of gold from nothing.
(Consider also the cost of ownership. Gold has a higher-than-average cost of ownership; you have to provide security or it will be stolen. Antimatter's cost of ownership is far, far beyond that.)
The relevant cost for the buyer is how much they need to pay to obtain the object. So far we haven't discovered any primordial antimatter deposits that we could mine, so creating it from scratch is the only way.
It’s a fundamentally different and riskier paradigm. Nuclear weapons at rest are inert, and can even be disarmed. If the lock falls off the gate at the compound, the nukes won’t spontaneously explode.
Antimatter is always “armed” and is only rendered safe by containment. If containment fails, it explodes. It’s more like keeping a massive stockpile of fluorine, but somehow worse and harder to contain.
Not to be dramatic, but wouldn't that level of destruction threaten all life on Earth? After the immediate destruction of the first county, extreme climate change would cause the same kind of problems as nuclear winter would, no?
Antimatter bombs are not a realistic technology. Aside from the unsolved technical issues - many, and fatal - no country has the GDP needed to make 1g of antimatter, which would make an explosion around 40kT.
We can't afford to blow ourselves up that way.
There are plenty of other ways we can afford, so antimatter isn't top of anyone's worries.
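The "1 g makes about 40 kT" figure above checks out with E = mc², remembering that the annihilating gram of ordinary matter doubles the released energy:

```python
C = 299_792_458.0          # speed of light, m/s
KILOTON = 4.184e12         # J per kiloton of TNT

m_antimatter = 1e-3        # kg: 1 gram of antimatter
# Annihilation converts the antimatter AND an equal mass of
# ordinary matter entirely to energy.
energy = 2 * m_antimatter * C**2
print(energy / KILOTON)    # ~43 kilotons, in line with "around 40 kT"
```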
When I visited CERN, they mentioned that there were some large number of protons in the ring at a time, and the runs would last a significant amount of wall clock time. (Don’t remember the exact numbers, but I think it was like 10^19 atoms of H, and days of wall clock)
The upshot was, it was likely that less than a mol of hydrogen had been run through the ring.
If humanity doesn't perish in the next hundred years and masters interplanetary spaceflight, an antimatter drive is the logical next step in propulsion after fusion.
Interstellar spaceflight will become (barely) feasible once spaceships capable of velocities between 0.02c and 0.1c are possible. Even assuming less-than-100% conversion efficiency, antimatter has enough energy density to provide this capability.
My memory is that constant 1g acceleration produces enough time dilation to make it to the edge of the known universe within a current human lifespan.
Now, it's true, there are some slight issues such as radiation, food storage/production, psychological effects, and any random space rock obliterating your craft, all of which could reasonably turn out to be enough to make it not work. We also don't have a fuel source that can provide 1g of constant acceleration for 80 years for a reasonably sized spaceship, though again my memory is that nothing prohibits it from a physics perspective. (This is where my knowledge/understanding gets prohibitively poor. I'm not sure how the math works if you stick a thousand ion drives on a spaceship that's already in space, or if you just need a huge snifter of compressed hydrogen, or if you can just use nuclear propulsion, but I'm pretty sure that antimatter would do it, if you could bring yourself to waste the money. But maybe we don't have a plausible way to contain it, so what do I know.)
Maybe I'm remembering wrong, or maybe I glossed over what's currently considered a physics, rather than engineering/economic/materials science problem, but that's what it looked like last I checked.
Alpha Centauri yes, the edge of the universe no :D
The edge of the observable universe is something like 46 billion light-years away; even at 0.9c that's 50 billion years of travel (22 billion years experienced by the traveller).
But yes, you can travel to places by constant acceleration; unfortunately what's reachable is still dwarfed by what's out of our reach.
Unfortunately, distant parts of the universe are also receding faster than the speed of light due to expansion, so you can't ever reach the edge.
If the craft could maintain a constant 1g acceleration (or more) the entire time, it is feasible for the traveler to get near the known edge, assuming we could make and utilize enough antimatter to do it, and that what we see as the edge from here is actually a recognizable edge once you are out there.
0.9c would be reached in only a year and a half for the traveler under constant 1g acceleration. After 2.5 years you would be at 0.99c, and at a bit over 3.5 years you would hit 0.999c with a 6x time dilation compared to Earth. After 6 years of acceleration it would be 0.99999c and Earth would be 200 years in the past. As you approach 12 years you would be going 0.9999999999c and Earth would have experienced almost 70,000 years. As you go past 16 years you would be in the millions of years, and past 20 years you would be in the billions of years.
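Those milestones follow from the standard relativistic rocket equations; here's a quick sketch (1 g proper acceleration assumed throughout):

```python
import math

C = 299_792_458.0          # speed of light, m/s
G = 9.80665                # 1 g proper acceleration, m/s^2
YEAR = 365.25 * 86400      # seconds per Julian year

def coast(tau_years):
    """Velocity (as a fraction of c) and Earth-frame elapsed years
    after tau_years of shipboard time at constant 1 g acceleration."""
    phi = G * tau_years * YEAR / C        # rapidity
    beta = math.tanh(phi)                 # v/c
    t_earth = (C / G) * math.sinh(phi) / YEAR
    return beta, t_earth
```

`coast(1.5)` gives roughly 0.91c after about 2.2 Earth-frame years, and by 12 shipboard years the Earth-frame clock has advanced on the order of 100,000 years, in line with the figures above.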
Of course doing that may only be feasible with anti-matter energy storage. The next best energy source is fusion energy but it is 2 orders of magnitude less dense. Perhaps some kind of ram scoop would make that route possible but that is going beyond just speculation because we don't know if you can feasibly capture random particles at that speed even assuming you didn't explode from just hitting them in the first place.
You don't need new physics for interstellar spaceflight - 16 km/s of dV is enough. You don't even need to go that much faster to slowly spread among the stars. There are a lot of smaller bodies all the way from the Sun to Alpha Centauri. As long as you hop between them within reasonable times, in a few thousand years you can become a true interstellar civilization, while traveling at much-slower-than-light velocity (similar to the Polynesian colonization of the Pacific).
> If you're ok with the looming threat of total annihilation.
Don't you have that problem with any energy-dense fuel? It's just that it doesn't get more dense than that, so you can be very space- and weight-efficient.
It's like everybody saying that a hydrogen car is a rolling bomb because of the energy stored in the hydrogen. Well, sure, but gasoline has just as much energy stored. Which is the whole point of fuel: to store energy. It's not like you are bringing 100x as much energy with you just because it's hydrogen. So that doesn't make an ICE car any less of a bomb...
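Roughly, per tank, that holds up. A sketch with ballpark textbook figures (the 50 L gasoline tank and 5 kg hydrogen tank sizes are assumptions, not data from the thread):

```python
# Approximate lower heating values and density, textbook ballpark figures.
H2_MJ_PER_KG = 120.0
GASOLINE_MJ_PER_KG = 44.0
GASOLINE_KG_PER_L = 0.75

# A 50 L gasoline tank vs. a typical ~5 kg fuel-cell hydrogen tank.
tank_gasoline = 50 * GASOLINE_KG_PER_L * GASOLINE_MJ_PER_KG  # 1650 MJ
tank_h2 = 5 * H2_MJ_PER_KG                                   # 600 MJ
print(tank_gasoline, tank_h2)  # same order of magnitude either way
```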
The difference is that antimatter annihilates with any normal matter that it comes into contact with. This means you can't just put it in a tank, the way you can with hydrogen. You can't e.g. combine it with some metal to make a metal hydride to make it safer to store, the way you can with hydrogen.
At an absolute minimum, you need extremely strong magnetic confinement and an extremely hard vacuum. And even then, you're going to get collisions with stray atoms and annihilation events which release gamma rays and other radiation products - although shielding is probably the least of your worries in this scenario.
A typical research lab at a university or large corporation can't make a vacuum hard enough to store even tiny quantities of antimatter for more than a few minutes, and they can't produce the magnetic confinement strength required to store macro quantities of it, either.
So the question with an antimatter-powered car is not if it's going to destroy the surrounding region and bathe it in hard radiation, but how many milliseconds (or less) it will take before that inevitably happens.
But probably luckily for us, this is all moot, because we have no way of producing enough antimatter for this to be an issue. If all the antimatter that's ever been created by humans annihilated simultaneously, only scientists monitoring their instruments closely enough would notice, because it's such a microscopic amount.
Edit: for perspective, you'd need about 7 billion times the 92 antiprotons transported in the truck in the story to match the energy released by a single grain of gunpowder.
How is it possible to make as hard of vacuum as they did? I assume it's not perfect, so what's the trick? Does the magnet setup create a volume that's simultaneously high probability for antimatter and low for everything else?
For this antimatter transport experiment, they only transported 92 antiprotons. To store and transport that, the requirements for the magnetic field and vacuum are many orders of magnitude lower than what would be needed for macro-scale quantities.
Also, if there was an accident and all those antiprotons annihilated, the consequences would be unnoticeable except to sensitive instruments. The energy involved is about one seven-billionth of the energy in a single grain of gunpowder.
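Back-of-envelope, assuming each antiproton annihilates with one proton, and assuming a grain of gunpowder holds a couple hundred joules (grain sizes vary, so treat that as illustrative):

```python
M_P = 1.67262192e-27   # proton mass, kg
C = 299_792_458.0      # speed of light, m/s

n = 92
# Each annihilation converts an antiproton plus a proton to energy.
e_annihilation = 2 * n * M_P * C**2     # ~2.8e-8 J
e_grain = 200.0                         # J, assumed for one gunpowder grain
print(e_grain / e_annihilation)         # ~7e9: billions of times more
```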
Liquid gasoline does not spontaneously explode like in an action movie. You could put a match in the fuel tank and (presuming infinite oxygen availability) it'd just start a small fire. Heck, it may even just give a little puff and then put out the match.
Antimatter in any sufficient fuel quantity, the moment it breaks confinement, will completely annihilate and release ALL its energy in a single moment, setting off a chain reaction with the remaining antimatter. It's like sitting on an armed nuclear bomb, where you rely on electrified, highly sophisticated containment equipment never failing a single time for months to years... in a radiation-heavy environment known for causing sophisticated electronics to have errors.
And, yes, hydrogen cars were looked at critically because of the perception that they can Hindenburg (I'm unsure if it's true or not). Which is a good example, because you don't particularly see any hydrogen blimps anymore; we stopped building them because they're dangerous.
Any compressed gas fuel is inherently dangerous. There's a video of a CNG-fueled bus falling off a lift and sending a fireball through the maintenance facility.
Batteries have some of these same risks: they store a lot of energy and it can be released very quickly under the wrong circumstances.
Average human threat perceptions simply aren't useful here. People will also make wild assumptions about what kind of catastrophic thing could happen in aviation and then happily get in their car to drive somewhere without a care in the world. In fact, no one thought about designing gasoline fuel tanks in a safe way before we had cars. Not even really until people started burning. If we're already thinking about transporting antimatter safely today, this kind of technology will probably have an even better track record than planes.
Antimatter reactions release roughly a billion times more energy per unit mass than conventional combustion. They surpass even nuclear explosions in energy release. That means even a small mishap becomes a large mishap.
Nuclear energy is limited to a little less than 1% of the energy release possible with antimatter, per mass.
The practical limit for nuclear energy is about 5 to 10 times less than that, because the theoretical limit corresponds to the transmutation of hydrogen into iron, coupled with the capture of the entire energy, which will not be achievable any time soon.
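The fractions involved, sketched from binding energies per nucleon (textbook values, treat as approximate):

```python
MEV_PER_AMU = 931.494   # rest energy of one atomic mass unit, MeV

# Fraction of the fuel's rest mass released as energy:
iron_limit = 8.79 / MEV_PER_AMU           # H -> Fe-56 limit, ~0.94%
dt_fusion = 17.6 / (5 * MEV_PER_AMU)      # D + T -> He-4 + n, ~0.38%
fission = 200.0 / (236 * MEV_PER_AMU)     # n + U-235, ~0.09%
annihilation = 1.0                        # matter + antimatter, 100%
```

So fusion sits a few times below the iron limit, fission roughly ten times below it, consistent with the "5 to 10 times less" estimate above.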
But there is an essential difference between nuclear energy and antimatter energy. Nuclear energy is stored in our environment; you just have to exploit it. Antimatter is a form of energy storage, so you need some other form of energy to make it. The energy efficiency of making antimatter is many orders of magnitude worse than the factor of less than 100 that separates nuclear energy from antimatter energy, and the mass of the confinement device needed to store antimatter is also orders of magnitude greater than the mass of the stored antimatter.
For now, there is absolutely no hope of ever using antimatter in practice for storing energy. Such a thing could be enabled only by technologies that we cannot yet imagine.
Despite the great technological progress of the last couple of centuries, it is hard to say that there have been many inventions that have never been imagined before. After all, already 3 millennia ago the god Hephaestus did his metal smith work with the help of intelligent artificial robots.
Yeah, but when you are talking about energy levels in the nuclear bomb range, the threat to the passenger stops going up. If I'm in a craft with one Hiroshima bomb of energy and another guy is in one with a million Tsar Bombas' worth of energy, we would both be obliterated before we knew anything was wrong, no matter how small the mishap.
Not familiar with the subject, so genuine question: HOW would antimatter be used as fuel? There is energy released in matter-antimatter annihilation, but where would the force to move a spacecraft come from?
> Various antiproton-powered rocket systems have been proposed. All of which rely on the particles released to supply direct thrust or to heat a working fluid by interparticle collisions or by heating a solid core first [14]. There is also the possibility to use the heated working fluid to generate electricity for electric propulsion systems [14].
> Following Fig. 9, beam core and plasma core configurations can produce direct thrust by directing the charged particles produced into an exhaust beam using a magnetic nozzle. Gas core systems use the energy released from the reaction to heat a gas that is exhausted for thrust. Finally, solid core configuration heats a metal core like Tungsten that acts as a heat exchanger to a propellant that is then exhausted from a regular nozzle.
my absolutely-non-expert guess is that it would work much like any other fuel? Combine with matter, get a lot of heat out of it, and use that in the best way we know.
I don't like antimatter because it's the most volatile fuel possible. If power is ever interrupted for any reason for any amount of time, the entire mass explodes.
A slightly less insane fuel source is a micro black hole. Drag a tiny black hole behind your ship and drip-feed it any kind of mass you come across. You still get >90% mass-energy efficiency which is far beyond anything else we know of.
Besides, one of the big problems with antimatter is that it's a battery, not a fuel source. We must first collect the unimaginable amount of energy and then process it into antimatter one particle at a time. If you build a ton of factories around a star you can get meaningful production. But a black hole drive can suck up interstellar gas or any asteroids you come across. Matter is easy to get. Don't ask where the micro black hole comes from.
Black holes have similar problems to antimatter. A micro black hole is pretty close to an ongoing antimatter explosion in terms of effects on its surroundings. If any part of your shielding fails, it irradiates you or melts you. Their radiation increases as they get smaller, and if not fed they're always getting smaller, until they "explode" (yes, but even more so) and disappear. So you still have the problem that if you don't maintain it just right, it will annihilate your ship. So, "less insane" is dubious IMO. (Still my favorite starship idea, though.)
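For a sense of scale, Hawking's photons-only formulas for a hypothetical micro black hole of a billion tonnes (the mass choice here is an assumption for illustration):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299_792_458.0        # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
YEAR = 365.25 * 86400    # seconds per year

M = 1e9  # kg: a "micro" black hole, roughly small-asteroid mass (assumed)

# Hawking radiation power and remaining evaporation lifetime.
power = HBAR * C**6 / (15360 * math.pi * G**2 * M**2)    # ~3.6e14 W
lifetime = 5120 * math.pi * G**2 * M**3 / (HBAR * C**4)  # seconds

print(power, lifetime / YEAR)  # hundreds of terawatts, a few thousand years
```

Hundreds of terawatts of radiated power from an object smaller than an atom is exactly the "ongoing antimatter explosion" the comment describes, and since power scales as 1/M², shrinking only makes it worse.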
To propel things in space you don't need energy as much as you need momentum. With antimatter you just have the momentum of the photons produced in the annihilation. I don't know if that's the best way of getting momentum. Might be.
We used to believe in freedom of speech and freedom of association.
Since the dawn of the Internet era, we've had a legal principle that platforms are relatively shielded from liability for what their users do.
It's the Internet. There's sexual content and sketchy characters on it. Occasionally people will encounter them -- even if they're under 18.
Anyone who grew up in the mid-1990s or later, think back to your own Internet usage when you were under 18. You probably found something NSFW or NSFL, dealt with it, and came out basically OK after applying your common sense. Maybe it was shocking and mildly traumatizing -- but having negative experiences is how we grow. Part of growing up is honing one's sense of "that link is staying blue" or "I'm not comfortable with this, it's time to GTFO". And it seems a lot safer if you encounter the sketchy side of humanity from the other side of a screen. Think about how a young person's exposure to the underbelly of humanity might have gone in pre-Internet times: Get invited to a party, find out it's in the bad part of town and there are a bunch of sketchy people there -- well, you're exposed to all kinds of physical risks. You can't leave the party as easily as you can put your phone down.
I stopped logging onto Facebook regularly around 2009; I only log in a couple times a year. I hate what Facebook has become in the past decade and a half.
But giving a site with millions of users a multi-hundred-million-dollar fine because some of those users behave badly seems...asinine.
If your kid is old enough and responsible enough to be given unsupervised Internet access, you'd better teach them how to deal with the skeevy stuff they might encounter.
That’s not really true. Pre-internet we had much stricter content controls. The Fairness Doctrine springs to mind, plus significant regulation of the movie industry.
Letting companies sell addiction has pretty significant negative externalities. That’s why we regulate gambling and drugs. Facebook sells addiction, so it makes sense to regulate it like we do drugs and gambling.
I'm pretty sure you can get rid of the 0xFFFFFFFF / p and get some more speedup by manually implementing the bit-array ops. You can get another boost by using the BSF instruction [1] to quickly scan for the next set bit. And you really only need to store odd numbers; storing the even numbers is just wasteful.
You can get even more speedup by taking into account cache effects. When you cross out all the multiples of 3 you use 512MB of bandwidth. Then when you cross out all multiples of 5 you use 512MB more. Then 512MB again when you cross out all multiples of 7. The fundamental problem is that you have many partially generated cache-sized chunks and you cycle through them in order with each prime. I'm pretty sure it's faster if you instead fully generate each chunk and then never access it again. So e.g. if your cache is 128k you create a 128k chunk and cross out multiples of 3, 5, 7, etc. for that 128k chunk. Then you do the next 128k chunk again crossing out multiples of 3, 5, 7, etc. That way you only use ~512MB of memory bandwidth in total instead of 512MB per prime number. (Actually it's only really that high for small primes, it starts becoming less once your primes get bigger than the number of bits in a cache line.)
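A sketch of that cache-blocked (segmented) approach in Python, illustrative only; a real speedup needs a compiled language plus the bit tricks above:

```python
import math

def segmented_sieve(limit, segment=1 << 17):
    """Sieve of Eratosthenes, processed in cache-sized segments.

    Each segment is fully sieved by every base prime before moving on,
    so the working set stays in cache instead of sweeping the whole
    array once per prime."""
    root = math.isqrt(limit)
    # Plain sieve for the base primes up to sqrt(limit).
    base = bytearray([1]) * (root + 1)
    base[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(root) + 1):
        if base[p]:
            base[p * p :: p] = bytearray(len(base[p * p :: p]))
    base_primes = [p for p in range(2, root + 1) if base[p]]

    primes = []
    for low in range(2, limit + 1, segment):
        high = min(low + segment, limit + 1)
        seg = bytearray([1]) * (high - low)
        for p in base_primes:
            # First multiple of p inside [low, high) needing crossing out.
            start = max(p * p, (low + p - 1) // p * p)
            if start < high:
                seg[start - low :: p] = bytearray(len(range(start, high, p)))
        primes.extend(low + i for i, flag in enumerate(seg) if flag)
    return primes
```

With `segment` set near the L1/L2 cache size, total memory traffic is roughly one pass over the array rather than one pass per small prime.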
A lot of people contributing to FOSS are volunteers. The calculus of working on stuff for free involves an assumption that your worst-case outcome is making $0. This act's punitive fines change the worst-case outcome to somewhere around -$9999999 or more.
If you work on any programming project at all in any capacity:
- Are you confident your work doesn't fall afoul of this?
- Are you confident they won't decide to come after you anyway for insane political, bureaucratic or "seeing-like-a-state" dysfunctions?
- Are you willing to bet millions of dollars in potential fines that your answers to the previous two questions are correct?
Just in case your answers to the parent post's three questions were "Yes, yes and yes" here are some additional questions:
- Have you ever uploaded a container to Dockerhub or Quay.io?
- Does that container have an OS inside it that has user accounts?
- Before you answered parent post's questions, did it occur to you that you might have to update your Docker images to comply?
- Did you remember on your own that you also have to delete or update older Docker images to comply, or did you not think of that until you read this question?
After you've answered these questions, please re-answer the parent post's questions.
Here's my suggestion for an implementation strategy:
- Keep the "Next" button greyed out until you add three forms of identification.
- Ask the user to take photos of their 3 forms of ID with a webcam. Ask the user to hold them in increasingly bizarre poses -- left hand, right hand, woven between your fingers, behind your ear, between your toes.
- Add an "accessibility" button. This button pops up a text box that advises you if you can't comply because you don't have hands, ears, feet or whatever (hey, some people don't and that's perfectly fine!) you can just use a picture of somebody else's body parts, and helpfully provides a menu of AI-generated pictures of human ears, hands, etc. for you to copy-paste.
- To preserve privacy, send the actual photos to /dev/null.
- The "verify the photo of my ID" button should check whether random.random() > 0.8. On average the user will require 5 tries per photo, or 15 tries total.
- Add a checkbox that says "I am not in the state of California". Upon clicking this checkbox the "Next" button becomes not grayed out and you can proceed without completing the identity checking process.
- If the user does not seem to have a webcam installed, all UI elements are grayed out except the "I am not in the state of California" checkbox.
- If the user is installing via command line, say "Are you in the state of California [y/n]?" If the answer does not start with 'N' or 'n', it will simply repeat the question.
- The list of acceptable identification shall be: Driver's license, learner's permit, Social Security card, library card, school identification, Boy / Girl Scout membership card, school yearbook photo, Burger King Kid's Club membership card, utility bill, ISP bill, Burger King receipt, Mahalo Rewards card, any receipt paid via credit card, birthday card, a photo of a printout of any email from OnlyFans, a photo of a DNS TXT record containing the string "CALIFORNIA", a photo of your X account with a blue check mark.
It seems like sound testing methodology to identify important theorems related to the code, prove them, and then verify the proof.
Verification gets sold as "bulletproof" but I'm skeptical for a couple reasons:
- How do you establish the relationship between the code and the theorem? A Lean theorem can be applied to zlib implemented in Lean, but what if you want to check zlib implemented in a normal programming language like C, JS, Zig, or whatever?
- How do you know the key properties mean what you think they mean? E.g. the theorem says "ZlibDecode.decompressSingle (ZlibEncode.compress data level) = .ok data" but it feels like it would be very easy to accidentally prove ∃ x s.t. decompress(compress(x)) == x while thinking you proved ∀ x, decompress(compress(x)) == x.
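The ∀-vs-∃ trap is easy to illustrate with a toy identity "codec" (hypothetical Lean definitions, not the actual zlib development):

```lean
-- Toy stand-ins for a real codec; both are the identity function.
def compress (xs : List Nat) : List Nat := xs
def decompress (xs : List Nat) : List Nat := xs

-- The property we actually want: round-tripping works for EVERY input.
theorem roundtrip_all (xs : List Nat) :
    decompress (compress xs) = xs := rfl

-- A far weaker statement, provable from a single witness (the empty list).
theorem roundtrip_some : ∃ xs : List Nat,
    decompress (compress xs) = xs := ⟨[], rfl⟩
```

Both theorems look superficially similar, but the second would be satisfied by a codec that only works on empty input, which is exactly the kind of misreading the bullet above worries about.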
I've tried Lean and Coq and...I don't really like them. The proofs use specialized programming languages. And they seem deliberately designed to require you to use a context explorer to have any hope of understanding the proof at all. OTOH a normal unit test is written in a general purpose programming language (usually the same one as the program being tested), I'm much more comfortable checking that a Claude-written unit test does what I think it's doing than a Claude-written Lean proof of correctness.
The article does not reveal to me how the existing code would be mapped to Lean and back. The impression from the zlib example is that I'd be expected to program in Lean. No way that's going to happen. The language is too complex for me and my average colleague. We're also not going to have two parallel implementations in an ordinary language and Lean and compare them with 'differential random testing' (see https://aws.amazon.com/blogs/opensource/lean-into-verified-s... someone linked in the discussion); that's just too taxing for bigger products, let alone that we typically don't have enough time to do one implementation right.
The gap is real: a succinct, expressive, powerful, and executable specification language for continuously verifying AI-generated programs. But I don't see how Lean alone closes it. If the author's intention was to attract a community to help build that out with Lean at the center, it's not clear to me where to even start. Since the author provided no hints or direction, I have a feeling it's not clear to them either.
An AI agent is doing some actions. Those actions must comply with "controls" like 'ALLOW transfer ON treasury WHERE amount <= 10000 AND currency = "USDC"' and provide public, auditable proof that actions complied with the spec. The action log seems to be verifiable via ZK proofs.
What's the application here? If you want to enforce that an agent's blockchain transactions follow some deterministic conditions, why not just give it access to a command-line tool (MCP / skill / whatever) that enforces your conditions?
If you want auditing of the agent's blockchain actions to be public, why not just make all your agent's actions go through an ordinary smart contract?
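For comparison, the "just wrap it in a tool" alternative is a few dozen lines. This hypothetical sketch (policy values, function names, and log format all made up) enforces the control deterministically and hash-chains a log so tampering is detectable:

```python
import hashlib
import json
import time

POLICY = {"max_amount": 10_000, "currency": "USDC"}  # assumed control

log: list = []
_prev_hash = "0" * 64  # genesis value for the hash chain

def request_transfer(amount: int, currency: str) -> bool:
    """Deterministically enforce the policy and append a chained log entry.

    Each entry commits to the previous entry's hash, so rewriting
    history is detectable by replaying the chain."""
    global _prev_hash
    allowed = (amount <= POLICY["max_amount"]
               and currency == POLICY["currency"])
    entry = {"ts": time.time(), "amount": amount, "currency": currency,
             "allowed": allowed, "prev": _prev_hash}
    _prev_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return allowed
```

The agent only ever calls `request_transfer`; the policy check and audit trail live outside its control, which is the property the heavier ZK machinery is also after.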
I don't mean to kill your enthusiasm for programming or AI. But this project...I'm sorry, but this project just isn't good. It's an over-engineered, vibe-coded "solution" in search of a problem.
This project is about a month old. I highly doubt one person produced 134 kloc in that time. I'm pretty sure a lot of it is vendorized dependencies and AI-generated code that's had minimal human review. Much of the documentation appears to be AI-generated as well.
On "why not a CLI tool / smart contract," for single-agent, single-system setups, you are completely right. Nobulex is for when a third party has to verify compliance independently across systems. But the current examples don't make that clear enough.
On the code, yes, heavily AI-assisted. I designed the architecture, AI helped implement it. I am 15 and in school, no team. The project has been through many iterations over several months, not one month.
On "solution in search of a problem," maybe. What would you consider worth solving in this space?
I don't understand how house prices are set. When I see a house that's $X I often think "$X seems reasonable" immediately followed by "$X/2 seems reasonable" and "$2X seems reasonable".
How does anyone ever set prices for real estate? Why do sellers have such a hard time cutting price 10% or 30% when the whole concept of "value of real estate" seems very nebulous and made-up, and each property is a completely unique combination of age, footage, state of repairs, location etc.?
It's a lot of factors that kind of organically all work to create the market: supply/demand/"what the market will bear", along with assessments of land and labor, and some self-reinforcement from the fact that many people don't actually own their home.
Land and labor are pretty easy to see: a brand-new house simply has a cost, and that cost helps dictate many things. Obviously nobody is paying $500K for something you could build from scratch for $50K.
Supply/demand: if nobody wants to move to an area, then no matter how nice your property is, it doesn't hold value. If the area is desirable, your property is worth more. Even with medium demand, if there's no supply, the price goes up, and it will go as high as the market will bear.
Then you have "I can't afford to sell for less": if you bought a $400k house and still owe $300k on it, you're not going to want to sell for less, otherwise it would literally cost you money. If the market slumps you might just stay put until things get better. With a lot of stuck people this could get better, because it will drive supply down until the market comes back to a level where people can afford to sell.
And all of these factors can change at an interval of a few dozen miles so each little market can have very different factors and prices.
To quote golden age Simpsons - "Gambling is the finest thing a person can do IF he's good at it" [0].
In aggregate, most participants lack the background and experience needed to hedge against risk and bet intelligently.
Gamifying a financial instrument (futures) that is supposed to be used as a risk management device and advertising it to the lowest common denominator, without providing the right checks and balances, will leave a large portion of bettors worse off.
Heck, I'm decently good at blackjack, poker, and backgammon because it's fun to apply probability theory principles, but I would never dare put my personal money on such an investment when I would do significantly better in aggregate with other more diversified personal investments.
Because we don't want people whose profession is maximally exploiting perverse incentive structures to flourish. A society that grants outsized rewards to bad faith citizens is bad for everyone. The more influence those cheaters have over the economy the worse off we all are.
You should not be able to get rich to the tune of a 600% daily return just because you're insider trading. That doesn't incentivize sharing your information with the market. On the contrary, it incentivizes delaying communicating your secret information until the last second to maximize the return on your unexpected information.
1. It gives people a reason to influence events toward outcomes that they can make money on rather than the best outcome. The geopolitical equivalent of going down in the fourth on purpose.
2. It encourages the leaking of classified information.