Is it though? Facebook and Nextdoor are free. That's incredibly hard to compete with.
I'd be interested in building something like this, but even at $100/year you can't really afford to advertise it, so I can't see how one builds distribution.
It sucks that cash-strapped community groups / rescue orgs / etc. all default to Facebook, but disrupting that requires a way to make money that isn't advertising, and I can't figure it out :shrug:
> user permissions/groups never come into the sandboxing discussions
Sometimes *nix user accounts for AI agent sandboxing do come up in discussions. At [0], HN user netcoyote linked to his sandvault tool [1], which "sandboxes AI agents in a MacOS limited user account".
Actually seems like a great idea IMO: lightweight, generic, and robust enough.
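For illustration, here's a minimal sketch of the general idea (not sandvault itself -- I haven't read its internals): run the agent as a dedicated low-privilege user, so it can only touch files that user owns. The "agent" username, paths, and CLI name are all made up.

    import subprocess

    # Sketch only: assumes a dedicated low-privilege user named "agent" exists
    # and owns just the one project checkout the AI is allowed to modify.
    AGENT_USER = "agent"                    # assumed username
    PROJECT_DIR = "/Users/agent/workspace"  # assumed directory owned by that user

    def run_as_agent(cmd: list[str]) -> int:
        """Run a command as the low-privilege user via sudo -u."""
        result = subprocess.run(["sudo", "-u", AGENT_USER, "--", *cmd], cwd=PROJECT_DIR)
        return result.returncode

    # e.g. launch an AI coding CLI (name made up) inside the restricted account:
    run_as_agent(["some-ai-coding-cli", "--project", PROJECT_DIR])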
Yeah, Ralph smells like a fresh rebranding of YOLO.
With YOLO on full-auto, you can give a wrapping rule/prompt that says more or less: "Given what I asked you to do as indicated in the TODO.md file, keep going until you are done, expanding and checking off the items, no matter what that means -- fix bugs, check work, expand the TODO. You are to complete the entire project correctly and fully yourself by looping and filling in what is missing or could be improved, until you find it is all completely done. Do not ask me anything, just do it with good judgement and iterating."
Which is simultaneously:
1. an effective way to spend tokens prodigiously
2. an excellent way to get something working 90% of the way there with minimal effort, if you've already set it up for success and the foreseeable outcomes are within acceptable parameters
3. a most excellent way to test how far fully autonomous development can go -- in particular, to test how good the "rest of" one's configuration/scaffolding/setup is for such "auto builds"
Setting aside origin stories, honestly it's very hard to tell whether Ralph and full-auto-YOLO before it are tightly coupled to some kind of "guerrilla marketing" effort (or whatever that's called these days), or really are organic phenomena. It almost doesn't matter.
The whole idea with auto-YOLO and Ralph seems to be that you loop a lot and see what you can get. Very low effort, surprisingly good results. Just minor variations in branding and implementation.
Either way, in my experience, auto-YOLO can actually work pretty well. 2025 proved to be cool in that regard.
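FWIW, the loop part is trivial to sketch. Assuming some agent CLI that accepts a prompt (the command name below is made up), and treating unchecked "- [ ]" items in TODO.md as the not-done signal:

    import subprocess
    from pathlib import Path

    PROMPT = (
        "Given what I asked you to do in TODO.md, keep going until you are done, "
        "expanding and checking off the items. Do not ask me anything."
    )

    def todo_remaining(path: str = "TODO.md") -> bool:
        """True while TODO.md still contains unchecked '- [ ]' items."""
        return "- [ ]" in Path(path).read_text()

    # The outer loop is the whole trick: re-invoke the agent until everything
    # is checked off, with a hard cap so it can't burn tokens forever.
    for iteration in range(50):
        if not todo_remaining():
            print(f"Done after {iteration} runs.")
            break
        subprocess.run(["some-agent-cli", "--prompt", PROMPT], check=False)

The iteration cap is the only real guardrail here; everything else is point 1 above (prodigious token spend).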
Install your OS of choice in a virtual machine, e.g. even hosted on your main machine.
Install the AI coding tool in the virtual machine.
Set up a shared folder between host+guest OS.
Only let the VM access files that are "safe" for it to access. Its own repo, in its own folder.
If you want to give the AI tool and VM internet access and tool access, limit what they can reach to things they're allowed to go haywire on. The whole internet and all OS tools are fine. But don't let the AI do "real things" on "real platforms" -- limit the scope of what it "works on" to development assets.
When deploying to staging or prod, copy/sync files out of the shared folder the AI develops in, and run them from there. But check them first for subterfuge.
So: don't give the AI access to "prod" configs/files/services/secrets, or to general personal/work data, etc. Keep those in entirely separate folders, not accessible from the development VM at all.
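A rough sketch of that copy-out-and-check step, assuming plain directories (all paths here are made up; the "review" is just a manual diff):

    import shutil
    import subprocess
    from pathlib import Path

    SHARED = Path("/mnt/vm-shared/project")  # folder the VM/agent writes into (assumed)
    STAGING = Path("/srv/staging/project")   # deploy target outside the VM's reach (assumed)

    def review_diff() -> None:
        """Eyeball what changed before anything runs outside the sandbox."""
        subprocess.run(["diff", "-ru", str(STAGING), str(SHARED)])

    def promote() -> None:
        """Copy the reviewed tree out of the shared folder into staging."""
        if STAGING.exists():
            shutil.rmtree(STAGING)
        # Skip VCS metadata and anything else that shouldn't escape the sandbox.
        shutil.copytree(SHARED, STAGING, ignore=shutil.ignore_patterns(".git", "*.env"))

    review_diff()  # manual inspection step: check for subterfuge first
    input("Looks clean? Press Enter to promote to staging...")
    promote()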
Did pretty much exactly that for an Apple-container-based sandbox: Coderunner [1]. You can use it to safely execute AI-generated code via an MCP at http://coderunner.local:8222
A fun fact about Apple containers [2]: they're more isolated than Docker containers, in that they don't share one VM across all containers.
I'd just do it over a Docker mount (or equivalent) to keep it a bit more lightweight. You can keep the LLM running locally, and teach it how to test/debug via instruction files.
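For example, a minimal sketch with the Docker Python SDK -- the image name and paths are placeholders, and networking is disabled so the sandboxed process only sees the mounted repo:

    import docker  # pip install docker

    client = docker.from_env()

    # Mount only the repo the agent may touch; nothing else from the host.
    # "my-agent-image" and both paths are placeholders.
    logs = client.containers.run(
        image="my-agent-image",
        command=["run-agent"],  # whatever starts your agent harness (made up)
        volumes={"/home/me/sandbox/repo": {"bind": "/workspace", "mode": "rw"}},
        working_dir="/workspace",
        network_mode="none",    # no network: the sandbox only sees the mount
        remove=True,            # clean up the container when it exits
    )
    print(logs.decode())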
Whether they do some kind of reasoning or not, they carry all the biases that come from their training/programming: what material was included, how that material is handled, etc.
AFAIAA, there's certainly not even a single frontier model, trained on "the internet", that is able to process information factually and in an unbiased manner.
So they're not really reasoning impartially, the way one might expect a computer to. They're regurgitating biases. In a word: parroting.
Can you coax a model into seeming fair via context? Sure. But the baseline would need to be reasoning ab initio to qualify as reasoning. Otherwise they are, again, parroting.
It's important not to misstate what is or is not "emergent reasoning", or else people will think we have something we don't, because some expert said so.
Disagree? Do you think there is at least one accessible frontier model trained on the internet that is not parroting the biases of its creators and users, and performs its own "emergent reasoning" (instead of just doing something that mimics it)? Then please link to it.
It's no fun to have old iCloud photos deleted unexpectedly. Apple has provided plenty of footguns, even if the losses really are user errors. For example: (1) device restores and (2) premium-subscription management fumbles.
Product idea: Apple should offer a paid service to restore the "old backups" of photos that are no longer accessible via the iCloud UI/API -- ones soft-removed for blowing past the subscription quota or whatever -- if Apple happens to have that data tucked away in cold storage somewhere.
Case in point: I had some c. 2016-era photos in iMessages that I thought I'd handled correctly so as not to lose them from iCloud, but recent checks via the iCloud API show they're apparently nowhere to be found. More than mildly irritating.
I should have used an iCloud photos backup tool like this much sooner.
Print what you want to keep onto archival paper with archival dyes. Everything else will atrophy.