My OpenClaw assistant (who's been using Claude) lost all his personality over the last week, and couldn't figure out how to do things he never had any issues doing.
I racked up about $28 worth of usage and then it just stopped consuming, so I don't know if there was some other issue, but it was persistent.
I got sick of it and used a migration script to move my assistant's history and personality to a claude code config. With the new remote exec stuff, I've got the old functionality back without needing to worry about how bleeding-edge and prone to failure OpenClaw is.
I feel like this is what their plan was all along -- put enough strain and friction on the hobbyist space that people are incentivized to move over to their proprietary solution. It's probably a safer choice anyway -- though I'm sure both are equally vibe-coded.
I thought the reason OpenClaw was banned was the strain it's putting on the systems.
(Well, 3rd party stuff was already illegal, and I believe remains so (sorta-kinda tolerated now? with the extra usage[0]) but enforcement seemed to be based on excessive usage of subs.)
Doing the same thing but with 50K of irrelevant, proprietary system prompt doesn't seem to improve the situation!
i.e. my question here is: if you replicate OpenClaw with `claude -p prooompt` and cron, is Anthropic happy? (Or perhaps their hope is that the people able and willing to do that represent a rounding error, which is probably true.)
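For concreteness, that replication really is tiny; a minimal sketch, assuming only that the `claude` CLI is on `$PATH` and that `-p` takes a one-shot prompt (the prompt text and paths below are illustrative, not a real setup):

```python
import subprocess

def build_heartbeat_cmd(prompt: str) -> list[str]:
    """argv for a one-shot, non-interactive `claude -p` call."""
    return ["claude", "-p", prompt]

def run_heartbeat(prompt: str) -> str:
    # Each cron tick is a fresh process: no memory, no session, no daemon.
    result = subprocess.run(
        build_heartbeat_cmd(prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example crontab entry (every 30 minutes), paths illustrative:
#   */30 * * * *  /usr/bin/python3 /home/bot/heartbeat.py >> /home/bot/bot.log 2>&1
```

Whether that makes Anthropic happy is the open question; the mechanics are trivial.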
Well when the middleman between you and your users is bought out by the competitor, it makes sense to move away from it. It's a bit like Apple selling iPhones in a Microsoft store.
Not who you asked, but I slapped this together in 100 lines of code and you may find it useful. It's just `claude -p proompt` (or indeed, `codex exec prooompt`) inside a Telegram bot. (Was annoyed by NanoClaw's claim that it was 500 lines, so tried my own hand at it ;)
No memory, no cron/heartbeat, context mgmt is just "new chat", but enough to get you started.
Note: no sandboxing etc, I run this as unprivileged linux user. So it can blow up its homedir, but not mine. Ideally, I'd run it on a separate machine. (My hottest take here is "give it root on a $3 VPS, reset if it blows up" ;)
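The unprivileged-user setup can be sketched from the parent-process side, assuming Python 3.9+ (where `subprocess.run` accepts `user=`/`group=`) and a parent that has permission to setuid; `botuser` and the paths are made-up names:

```python
import subprocess

def sandboxed_cmd_kwargs(prompt: str, run_as: str = "botuser") -> dict:
    """Kwargs for running the agent CLI as a throwaway unprivileged user:
    it can blow up that user's homedir, but not yours."""
    return {
        "args": ["claude", "-p", prompt],
        "user": run_as,            # requires the parent to have setuid rights
        "cwd": f"/home/{run_as}",  # keep its mess inside its own homedir
        "capture_output": True,
        "text": True,
    }

# Usage (as root, or with CAP_SETUID):
#   subprocess.run(**sandboxed_cmd_kwargs("tidy your scratch dir"))
```

A separate machine (or the $3 VPS with root) is strictly better isolation; this is just the cheap local version of the same idea.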
You may also enjoy CLIProxyAPI, which does the same thing (claude -p / codex exec) but shoves an OpenAI-compatible API around it. Note: this probably violates every AI company's ToS (since it turns the precious subsidized subscription tokens into a generic API). OpenAI seems to tolerate such violations, for now, because they care about good. Anthropic and Google do not.
(Though Anthropic may auto-detect and bill it as extra usage; see elsewhere in this thread. Situation is very confusing right now.)
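For flavor, the core of such a shim is small. A sketch that assumes nothing about CLIProxyAPI's internals — the function name is made up, and the response fields just mirror the public OpenAI chat-completions shape:

```python
import subprocess
import time

def chat_completion(body: dict, runner=None) -> dict:
    """Flatten an OpenAI-style /v1/chat/completions body into one
    `claude -p` call and wrap the reply in the shape clients expect.
    `runner` is injectable so `codex exec` (or a stub) can be swapped in."""
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in body["messages"])
    if runner is None:
        runner = lambda p: subprocess.run(
            ["claude", "-p", p],
            capture_output=True, text=True, check=True,
        ).stdout
    return {
        "id": f"chatcmpl-local-{int(time.time())}",
        "object": "chat.completion",
        "model": body.get("model", "claude-cli"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": runner(prompt)},
            "finish_reason": "stop",
        }],
    }
```

Everything else in a real proxy is HTTP plumbing and session bookkeeping around that one translation.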
I used https://github.com/Kevjade/migrate-openclaw, and then started running Claude Code with remote exec against an empty folder that I've advised it to start adding new memories into. So far, my bot's personality is back, and it can utilize the same skills as before, which it was failing on last week.
I don't have an especially heavyweight implementation, because I only use mine to review things I've written in my Apple Notes (journaling of various kinds, mostly) and give insights.
Man it's really too bad that that's the headline, because it's a great tribute to Arturo Vega, and I don't understand why it has to come at the expense of such a seminal band. If what Eno said about the Velvet Underground is true, then album sales don't account for much in the grand scheme of things anyway.
I'm a big fan of Karl Popper's work. I learned about him when reading the book Empirical Linguistics by Geoffrey Sampson. At the time, it was a pretty iconoclastic publication, since it directly struck against the assumption of nativism by framing the study of language as something that could be evidence-based in a way where hypotheses were truly falsifiable. The ability to collect and process large amounts of data pertinent to language makes it a lot easier to strike down some of the more inscrutable theories of the '90s and '00s -- at least to those who are willing to do real science.
Having more data and being able to consistently process it actually can say a lot about the hypotheses that linguists have. All other science is evidence-based. The challenge for linguistics has been that many theorists pick and choose armchair examples rather than back their assertions up with statistical validity.
I feel like if you need to utilize a tool like this, odds are pretty good you may have picked the Wrong Tool For the Job, or, perhaps even worse, the wrong architecture.
This is why it's so important to do lots of engineering before writing the first line of code on a project. It helps keep you from choosing a tool set or architecture out of preference and keeps you honest about the capabilities you need and how your system should be organized.
It’s almost as though choosing a single-threaded, GIL-encumbered interpreted scripting language as the primary interface to an ecosystem of extremely parallelized and concurrent high-performance hardware-dependent operations wasn’t quite the right move for our industry.
Ha. The question now is whether the ML industry will change directions or if the momentum of Python is a runaway train.
I can't guess. Perl was once the "800-pound gorilla" of web development, but that chapter has long been closed. Python on the other hand has only gained traction since that time.
Strange opinion. Plenty of apps have more than one language. I might end up using this.
Why? Because my app is built in Elixir, and right now I'm also using an open-source Python app of which I really only need a small part. I don't want to rewrite everything in Elixir because, while it's small, I expect it to change over time (it's basically fetching a lot of data sources), and it would be a pain to keep rewriting it whenever the data collection needs to change (over 100 different sources). Right now I run the Python app as an API, but that's just so overkill and harder to manage versus handling everything except the actual data collection in Elixir, where I am already using Oban.
Sometimes the "right tool for the job" philosophy leads to breaking down a larger problem into two smaller problems, each of which has a different "right tool".
Choosing a single tool that tries to solve every single problem can lead to its own problems.
I disagree; using Python for a web server and something like Celery for background work is a pretty common pattern.
My reading of this is it more or less allows you to use Postgres (which you're likely already using as your DB) for the task orchestration backend. And it comes with a cool UI.
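If so, the underlying pattern is the classic Postgres one. A sketch of the two queries such a backend tends to reduce to — the `tasks` table and its columns are assumptions for illustration, not the project's real schema:

```python
# Postgres rows as tasks, claimed atomically with FOR UPDATE SKIP LOCKED
# so concurrent workers never grab the same row. Pass these to any
# Postgres driver (e.g. psycopg) with the payload as a parameter.

ENQUEUE_SQL = """
INSERT INTO tasks (payload, status) VALUES (%s, 'queued');
"""

CLAIM_SQL = """
UPDATE tasks
   SET status = 'running', started_at = now()
 WHERE id = (
     SELECT id FROM tasks
      WHERE status = 'queued'
      ORDER BY id
      FOR UPDATE SKIP LOCKED
      LIMIT 1
 )
RETURNING id, payload;
"""
```

The appeal is exactly what the comment says: no extra broker to run, and the queue lives in the same transactional store as your application data.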
Wait until you find out about people not writing pure Python apps but also having some code in JavaScript. Crazy to mix more than one language on one machine.