These words may mean anything, from "make people extinct" to "make a shit ton of money for myself."
What we know for sure is that he is not committed to the people who trusted him or his project. Consider the project dead. He kind of fits the OpenAI mindset: those people also say the right words, use the right terms, and do what benefits them personally.
openclaw is an inevitable type of software (as are CLI agents, context-management software, new methodologies for structuring software for easier AI ingestion, etc.). The guy gambled, built it, and got it.
At this point I would not expect well-rounded software as a byproduct of huge investments and PR stunts. There will be something else after LLMs; I bet people are already working on it. But the current state of affairs with LLMs and all the fuss around them is far more perception-, PR-, and emotion-driven than intrinsically valuable.
The next inevitable step is LLM alchemy: people writing crazy-ass prompts full of incomprehensible text that somehow make the system work better than straight human-language prompts.
Back in the GPT-3 and 3.5 days, I experienced that the presence of any system message, even a one-word one, changed the output drastically for the worse. I have not verified this behaviour with recent models.
Since then I don't impose any system prompt on the users of my tg bot. This is so unusual compared to what others do that very few actually appreciate it. I'm happy I don't need to make a living from this project, so I can keep it ideologically clean: user control over system prompts, temperature, and top_p, plus a selection of the top barebones LLMs.
If exactly the same markdown had been written by some Joe from the internet, no one would have noticed it. So those stars exist not because of the quality or utility of the text.