If my compiler "went down" I could still think through the problem I was trying to solve, maybe even work out the code on paper. I could reach a point where I would be fairly confident that I had the problem solved, even though I lacked the ability to actually implement the solution.
If my LLM goes down, I have nothing. I guess I could imagine prompts that might get it to do what I want, but there's no guarantee that those would work once it's available again. No amount of thought on my part will get me any closer to the solution, if I'm relying on the LLM as my "compiler".
What stops you from thinking through the problem if an LLM goes down, as you still have its previously produced code in front of you? It's worse if a compiler goes down because you can't even build the program to begin with.
In my opinion, this sort of learned helplessness is harmful for engineers as a whole.
Yeah, I actually find writing the prompt itself to be such a useful mechanism for thinking through problems that I'll not infrequently find myself a couple of paragraphs in and decide to just delete everything I've written and take a new tack. Only when you're truly outsourcing your thinking to the AI will you run into the situation where the LLM being down means you can't actually work at all.
An interesting element here, I think, is that writing has always been a good way to force you to organize and confront your thoughts. I've liked working on writing-heavy projects, but in fast-moving environments writing things out before coding is easy to skip. Working with LLMs has sort of inverted that: you have to write to produce code with AI (usually, at least), and the more clarity of thought you put into the writing, the better the outcomes (usually).
Why couldn't you actually write out the documents and think through the problem? I think my interaction is inverted from yours: I have way more thinking and writing I can do to prep an agent than I do for a compiler, and it's more valuable for the final output.
I think if you're vibe coding to the extent that you don't even know the shapes of data your system works with (e.g. the schema if you use a database) you might be outsourcing a bit too much of your thinking.
This. When compilers came along, I believe a bunch of junior engineers just gave up entirely on understanding the shape of the assembly the compiler generated, which was a mistake given that early compilers weren't as effective as they are today. Today's vibe-coders are using this early AI tooling and giving up on understanding the shape, and similarly struggling.
> An interesting side effect might be that only people locked out from using LLMs will learn how to program in the future, as vibe coding doesn't teach you the fundamentals.
While thinking about/working with LLMs, I've been reminded more than once of Asimov's short story Profession (http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...). In it, no one goes to school: information is just dumped into your brain. You get an initial dump of the basics when you're a kid, and then later all the specialty information for your career (which is chosen for you, based on what your brain layout is most suited to).
The protagonist is one of a number of people who can't get the second dump; his brain just isn't wired right, so he's sent to a Home for the Feeble Minded to be with other people who have to learn the old-fashioned way.
Through various adventures he eventually realizes that everyone who was "taped" is incapable of learning new material at all. His Home for the Feeble Minded is in fact an Institute of Higher Studies, one of only a handful, which are responsible for all the invention and creation that sustains human progress.
> On a phone keyboard, sure, it's as hard as an accent sign (á, for example), difficult but not terrible. But on a keyboard? Yeah, no one is typing in Alt combos when literally any other construction will do.
For me, --- gets converted to an em-dash (—) while typing, if I have my input method (ELatin) enabled. I'm so used to typing it while working in LaTeX that I can easily slip it in elsewhere.
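The substitution itself is trivial; here's a minimal sketch in Python of the TeX-style dash mapping an input method like ELatin applies (the function is my own illustration, not ELatin's actual code):

    # TeX-style dash substitution, as an input method might apply it.
    # Order matters: map "---" before "--" so the em-dash rule wins.
    def tex_dashes(text: str) -> str:
        return text.replace("---", "\u2014").replace("--", "\u2013")

    print(tex_dashes("pages 3--5 --- see the appendix"))
    # pages 3–5 — see the appendix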
Correct; the ability of a model to reproduce source material verbatim does not necessarily make the model's existence illegal. However, using a model to do just that might very well present a legal liability for the user. I would be interested to see the extent to which models can "recite from memory" source code, e.g., from the various MS code leaks. Put another way, if I'm using LLM code generation extensively, do I need to run a filter on its output to ensure that I don't "accidentally" copy large chunks of the Windows codebase?
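As a rough illustration (not a legal safeguard), such a filter could start as shingled n-gram matching against an index of the code you're worried about. Everything here is an assumption for the sketch: the file names are hypothetical and the 12-token window is arbitrary.

    # Flag LLM output that shares long verbatim token runs with a known corpus.
    # A real filter would need proper tokenization, normalization, and a
    # scalable index; this is just the idea.
    def shingles(tokens, n=12):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def overlaps(generated: str, corpus: str, n=12) -> bool:
        return bool(shingles(generated.split(), n) & shingles(corpus.split(), n))

    leaked = open("leaked_source.c").read()   # hypothetical reference corpus
    llm_out = open("generated.c").read()      # hypothetical LLM output
    if overlaps(llm_out, leaked):
        print("verbatim overlap detected; review before use")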
I wonder why that would be? Presumably if the batteries are low then the pressure the machine "thinks" it's inflated the cuffs to is higher than the actual pressure...
Dance along with the characters of the new series, now streaming on $sponsor, and achieve a score of at least 6/10 to get another door unlock.
---
Your dance was not good enough. Try again, or buy a door unlock with the flash discount code "Distopia" for 99ct.
I miss TkDesk, which I discovered many years ago when I was first trying Linux, partly because it supports unlimited splits, not just two. In fact, if I'm remembering correctly, when navigating to a subdirectory the default was just to open it in a new split. You ended up with splits containing the full path from wherever you started to your eventual subdirectory (you could scroll the view of splits horizontally once there got to be too many).
This also lets you run QEMU over SSH, if you want. I use this in my assembly language course; towards the end I give an assignment to write Hello, World! as a 16-bit real mode MBR bootloader. Students can do the whole thing on our SSH server, including testing in QEMU (and even attaching GDB to it to debug), without needing to install anything locally.
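If you're curious what that looks like, here's a minimal sketch of the workflow as a Python wrapper (file names are placeholders; assumes nasm and qemu-system-i386 are installed on the server):

    # Assemble a boot sector, then boot it in QEMU with a terminal-friendly
    # display and the GDB stub enabled, so everything works over plain SSH.
    import subprocess

    subprocess.run(["nasm", "-f", "bin", "boot.asm", "-o", "boot.img"], check=True)
    subprocess.run([
        "qemu-system-i386",
        "-drive", "format=raw,file=boot.img",
        "-display", "curses",  # text-mode VGA rendered in the terminal
        "-s", "-S",            # GDB stub on :1234, start paused; attach with
                               #   gdb -ex "target remote :1234"
    ], check=True)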
Honestly, it's a large enough library, with enough weirdness, untested areas, footguns, and bugs, that I'd deem it just as valid as React, for example.
Why did tensor_parallel have output += mod instead of output = output + mod? (The += breaks backprop). Nobody tested it! A user had to notice it was broken and make a PR!
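For anyone wondering why += matters: PyTorch's autograd tracks in-place modifications with a version counter, and errors out when a tensor it saved for the backward pass has been mutated. A minimal repro (not the actual tensor_parallel code, just the general hazard):

    import torch

    x = torch.ones(3, requires_grad=True)
    y = torch.exp(x)    # exp saves its output for the backward pass
    y += 1              # in-place update bumps y's version counter
    y.sum().backward()  # RuntimeError: a variable needed for gradient
                        # computation has been modified by an inplace operation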
For a uni course I tried to fine-tune Gemma in a few days. It wasn't easy, because tutorials were often written against old versions of the HF libraries that now work differently. There's a lot of room for improvement; everything still seems kinda fresh, so it's a pain in the ass to deviate from simple walkthroughs to something tailored to your needs.
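For context, the happy path is only a handful of lines when the versions cooperate; the trouble is that the details drift. A rough sketch of a LoRA fine-tune with transformers + peft (model ID and hyperparameters are just illustrative):

    # Rough shape of a LoRA fine-tune. Argument names and trainer APIs shift
    # between library releases, which is exactly the tutorial-rot problem.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_id = "google/gemma-2b"  # illustrative; gated, needs HF access
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    lora = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights will train
    # ...then feed tokenized batches to transformers.Trainer as usual.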
I've found I benefit most from AI when I ask it questions about technical topics, like programming or using a device like a synthesizer or DAW software. There's a psychological effect I get, especially when I get an answer that says "that feature is not supported". I get the feeling that it's not my fault that something feels very difficult; I know WHY it is difficult when somebody tells me there is no easy way to do what I want, so I don't waste any more time trying to find the solution. I must look elsewhere then.
So I wonder: trying to learn AI and how to use it, shouldn't the AI itself be the best guide for understanding AI? Maybe not so much for the latest research or latest products, because AI is not yet trained on those, but sooner or later AI should feel as easy a subject as, say, JavaScript programming.