The very fact that people are arguing with a non-existent author signals that whatever generated the content did a good enough job to fool them today. Tomorrow it will do a good enough job to fool you. I think the more important question is what this means in terms of what is really important and what we should invest in to remain anchored in what matters.
This got me thinking: I am not about to tilt at windmills, and the future will unfold as it will, but the idea of the "LLM as a compiler of ideas into high-level languages" can turn out to be quite dangerous. It is one thing to rely on, without being able to understand, the assembly output that a deterministic compiler produces from a C++ program. It is quite another to rely on but not fully understand (whether due to laziness or complexity) the C++ code that a giant, nondeterministic, intractable neural network generated. What is guaranteed is that the future will be interesting...
The way I'm keeping up with it (or deluding myself into believing I'm keeping up with it) is by maintaining rigorous testing and test standards. I have used LLMs to assist me in building C firmware for some hardware projects, but the scale of that work has been small enough that it can also be well tested. Anyway, part of the reason I was so much slower with Python is that I'm an expert in all the tech I used, having spent literal years of my life in the docs and reading books, etc., and I've read everything the LLM wrote to double-check it. I'm not as literate in Go, but it's not very complex, and given its static typing, I just trusted the LLM more than I did with Python. The React stack I am learning as I go, but the tooling is so good, and I understand the testing aspects, so it's the same story: I trusted the LLM more and have been more productive. Anyway, times are changing fast!
Even if this is true, a possible takeaway is that after the bubble bursts and the dust settles, AI's effect will be 17 times stronger than that of the Internet...
Personally, I think it will end up being much higher, but that doesn't mean I'm going to invest in it any time soon.
Watch out for Occam's Hacksaw: Any complex problem can be made to look simple by hacking away enough parts of it as "not essential", saying you'll handle them in version two.
Thank you. Was subscribed to it around 1981-1983. Eagerly waited every month for it to make its way across the Atlantic so I could dig into all the fascinating new technologies. I'm sure it had a great influence on my interests and eventual career.
I'm not saying this isn't the GPT-5 system prompt, but on what basis should I believe it? There is no backstory and no references. Searching for it turns up other candidates (e.g. https://github.com/guy915/LLM-System-Prompts/blob/main/ChatG...). How do you verify these claims?
While I can definitely see the ability to type faster as an advantage in some cases, I don't think I'll ever bother going through the process of learning it. After decades of software development, I can type fast enough for whatever I need without looking at the keyboard, and not once have I felt that the bottleneck of my productivity is the speed at which I type. Most of the time goes into thinking about how to do it right so that it doesn't have to be done again...
And with code generation becoming better all the time, I believe the abstraction layers where one will have to spend more time will get even higher.
The correct analogy to me is that being able to run fast will not help you that much in building a rocket to take you to the moon.
I'm open to changing my mind though if presented with a solid counter argument.
Where do you think morality fits into this game? It seems that we agree that underneath it all is unfathomable and ineffable magic. The question is how does this influence how you act in the game?
Morality is an evolved heuristic for resolving social conflicts that roughly approximates game-theoretic strategies, among other things. Morality also incorporates other cultural and religious artifacts, such as "don't eat meat on a Friday."
Ultimately, it comes down to our brain's social processing mechanisms, which don't have the tools to evaluate the correctness (or lack thereof) of our moral rules. Thus many of these rules survive in a vestigial capacity, though they may have served useful functions at the time they developed.
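The "roughly approximates game theoretical strategies" point can be made concrete with a toy sketch (my own illustration, not anything from the comment above): in an iterated prisoner's dilemma, a reciprocity rule like tit-for-tat, a crude stand-in for a moral norm such as "repay kindness, punish betrayal", does well against itself in repeated play, while unconditional defection grinds out a much lower mutual payoff.

```python
# Toy iterated prisoner's dilemma with standard payoffs:
# both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    # Each strategy sees only the opponent's past moves.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a)
        move_b = strat_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Over 100 rounds, two tit-for-tat players each score 300 (mutual cooperation every round), while tit-for-tat against an unconditional defector yields 99 vs. 104: the defector wins the pairing but lives in a low-payoff world. That asymmetry is the usual gloss on why reciprocity-style norms could be evolutionarily stable.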
I go back and forth on the usefulness of dwelling on morality, other than accepting it as a race condition/updater system/thing that happens. I have some fairly strong and unusual views on karma and bardo that would take a very long comment to get into, but I think Vedic/Vedanta (Advaita) thought is good, and I think this is a good doc: https://www.youtube.com/watch?v=VyPwBIOL7-8