I suspect you’re misjudging the friend here. This sounds more like the famous “no brown M&Ms” clause in the Van Halen performance contract. As ridiculous as the request is, seeing it followed provides strong evidence that the rest of the requests, the more meaningful ones, are being followed too.
Sounds like the friend understands quite well how LLMs actually work and has found a clever way to be signaled when it’s starting to go off the rails.
It's also a common tactic for filtering inbound email.
Mention that people may optionally include some word like 'orange' in the subject line to tell you they've come via some place like your blog or wherever it may be, and have read carefully enough to notice this.
Of course, ironically, that trick's probably trivially broken now because of the use of LLMs in spam. But the point stands: it's an old trick.
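For what it's worth, a minimal sketch of that kind of filter (the mbox path and the 'orange' canary word are placeholders I picked for illustration, not anything the trick prescribes):

    # Sketch: bucket inbound mail by whether the canary word appears in the subject.
    import mailbox

    CANARY = "orange"  # hypothetical canary word

    def triage(path="inbox.mbox"):  # hypothetical mbox path
        for msg in mailbox.mbox(path):
            subject = msg.get("Subject") or ""
            bucket = "read-first" if CANARY in subject.lower() else "everything-else"
            print(f"{bucket}: {subject}")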
It's not so much a case of personal targeting or anything particularly deliberate.
LLMs are trained on the full internet. All relevant information gets compressed in the weights.
If your email and this instruction are linked on your site, that goes in there, and the LLM may with some probability decide it's appropriate to use it at inference time.
That's why 'tricks' like this may get broken to some degree by LLM spam, and trivially so when they do, with no special effort on the spammer's part. It's all baked into the model.
What previously would have required a degree of targeting that couldn't scale now requires none.
> I suspect you’re misjudging the friend here. This sounds more like the famous “no brown M&Ms” clause in the Van Halen performance contract. As ridiculous as the request is, seeing it followed provides strong evidence that the rest of the requests, the more meaningful ones, are being followed too.
I'd argue it's more like you've bought so much into the idea that this is reasonable that you're also willing to go to extreme lengths to retcon and pretend this is sane.
Imagine two different worlds: one where the tools that engineers use have a clear and reasonable way to detect and determine whether the generative subsystem is still on the rails provided by the controller.
And another where the interface is completely devoid of any basic introspection, and because it's a problematic mess all the way down, everyone invents some asinine trick that they believe provides some sort of signal as to whether or not the random noise generator has gone off the rails.
> Sounds like the friend understands quite well how LLMs actually work and has found a clever way to be signaled when it’s starting to go off the rails.
My point is that while it's a cute hack, if you step back and compare it objectively to what good engineering would look like, it's wild that so many people are willing to accept this interface as "functional" because it means they don't have to do the thinking required to produce the output the AI emits via the specific randomness function used.
Imagine these two worlds actually do exist, and instead of using the real interface that provides a clear bool answer to "has the generative system gone off the rails?", people *want* to be called Mr Tinkleberry.
Which world do you think this example lives in? You could convince me Mr Tinkleberry is a cute example of the latter, obviously... but it'd take effort to convince me that this reality is half reasonable, or that people who want to call themselves engineers should feel proud to be a part of it.
Before you try to strawman my argument: this isn't a gatekeeping argument. It's only a critical take on the interface options we have for understanding something that might as well be magic, because that serves the snake-oil sales much better.
> > Is the magic token machine working?
> Fuck I have no idea dude, ask it to call you a funny name, if it forgets the funny name it's probably broken, and you need to reset it
Yes, I enjoy working with these people and living in this world.
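To be concrete, the "check" in that exchange amounts to roughly this (a sketch; the canary name, the reply-checking helper, and the imagined is_on_rails() call are all invented for illustration):

    # The hack: plant a canary name in the persistent instructions file and
    # treat its absence from replies as a sign the context has gone off the rails.
    CANARY_NAME = "Mr Tinkleberry"  # hypothetical planted name

    def context_probably_intact(assistant_reply: str) -> bool:
        # No principled signal here, just string matching on a joke name.
        return CANARY_NAME.lower() in assistant_reply.lower()

    # What the first world would offer instead: a real introspection call,
    # something like session.is_on_rails() -> bool, which no current tool provides.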
It is kind of wild that not that long ago the general sentiment in software engineering (at least as observed on boards like this one) seemed to be about valuing systems that were understandable, introspectable, with tight feedback loops, within which we could compose layers of abstractions in meaningful and predictable ways (see for example the hugely popular - at the time - works of Chris Granger, Bret Victor, etc).
And now we've made a complete 180 and people are getting excited about proprietary black boxes and "vibe engineering" where you have to pretend like the computer is some amnesic schizophrenic being that you have to coerce into maybe doing your work for you, but you're never really sure whether it's working or not because who wants to read 8000 line code diffs every time you ask them to change something. And never mind if your feedback loops are multiple minutes long because you're waiting on some agent to execute some complex network+GPU bound workflow.
> You don’t think people are trying very hard to understand LLMs? We recognize the value of interpretability. It is just not an easy task.
I think you're arguing against a position tangential to both me and the person this directly replies to. It can be hard to use and understand something, but if you have a magic box and you can't tell whether it's working, it doesn't belong anywhere near the systems that other humans use. The people who use the code you're about to commit to whatever repo you're generating code for all deserve better than to be part of your unethical science experiment.
> It’s not the first time in human history that our ability to create things has exceeded our capacity to understand.
I don't agree this is a correct interpretation of the current state of generative transformer-based AI. But even if you wanted to try to convince me, my point would still be that this belongs in a research lab, not anywhere near prod. And that wouldn't be a controversial idea in the industry.
We used the steam engine for 100 years before we had a firm understanding of why it worked. We still don’t understand how ice skating works. We don’t have a physical understanding of semi-fluid flow in grain silos, but we’ve been using them since prehistory.
I could go on and on. The world around you is full of not well understood technology, as well as non deterministic processes. We know how to engineer around that.
> We used the steam engine for 100 years before we had a firm understanding of why it worked. We still don’t understand how ice skating works. We don’t have a physical understanding of semi-fluid flow in grain silos, but we’ve been using them since prehistory.
I don't think you and I are using the same definition for "firm understanding" or "how it works".
> I could go on and on. The world around you is full of not well understood technology, as well as non deterministic processes. We know how to engineer around that.
Again, you're sidestepping my argument so you can restate things that are technically correct but not really a point in and of themselves. I see people who want to call themselves software engineers throw code they clearly don't understand against the wall because the AI said so. There's a significant delta between knowing you can heat water to turn it into a gas with increased pressure that you can use to mechanically turn a wheel, versus "put wet liquid in jar, light fire, get magic spinny thing. If jar doesn't call you a funny name first, that's bad!"
> It doesn't belong anywhere near the systems that other humans use
Really for those of us who actually work in critical systems (emergency services in my case) - of course we're not going to start patching the core applications with vibe code.
But yeah, that frankenstein reporting script that half a dozen amateur hackers made a mess of over 20 years instead of refactoring and redesigning? That's prime fodder for this stuff. NOBODY wants to clean that stuff up by hand.
> Really for those of us who actually work in critical systems (emergency services in my case) - of course we're not going to start patching the core applications with vibe code.
I used to believe that no one would seriously consider this too... but I don't believe that this is a safe assumption anymore. You might be the exception, but there are many more people who don't consider the implications of turning over said intellectual control.
> But yeah, that frankenstein reporting script that half a dozen amateur hackers made a mess of over 20 years instead of refactoring and redesigning? That's prime fodder for this stuff. NOBODY wants to clean that stuff up by hand.
It's horrible, no one currently understands it, so let the AI do it, so that still, no one will understand it, but at least this one bug will be harder to trigger.
I don't agree that harder-to-trigger bugs are better than easy-to-trigger bugs. And from my view, the argument that "it's currently broken now, and hard to fix!" isn't exactly one I find compelling for leaving it that way.
> I used to believe that no one would seriously consider this too... but I don't believe that this is a safe assumption anymore. You might be the exception, but there are many more people who don't consider the implications of turning over said intellectual control.
Then they'll pay for it with their job etc. when something goes wrong with their systems. You need a different mindset in this particular segment of the industry - 99.999% uptime is everything (we've actually had 100% uptime for the past 6 years on our platform - chasing that last 0.001% is hard, and something will _eventually_ hit us).
> It's horrible, no one currently understands it, so let the AI do it, so that still, no one will understand it, but at least this one bug will be harder to trigger.
I think you're commenting without context. It's a particularly nasty Perl script that's been duct-taped to shell scripts and bolted hard onto a proprietary third-party application, and it needs to go - having Claude/GPT rewrite it in a modern language, and spending some time on it to have it design proper interfaces and APIs around where the script needs to interface with other things, would be the greatest thing that could happen to it when nobody wants to touch the code.
You still have the old code to test against, so have the agent run exhaustive testing on its implementation to prove that it's robust, or at least more so than the original. It's not rocket surgery.
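In other words, treat the old script as the oracle. A rough sketch of that kind of harness (the script names and sample inputs below are placeholders, not the actual system):

    # Sketch: differential testing of the rewrite against the legacy Perl script.
    import subprocess

    def run(cmd, input_file):
        return subprocess.run(cmd + [input_file], capture_output=True, text=True).stdout

    def test_against_legacy(inputs):
        for f in inputs:
            old = run(["perl", "legacy_report.pl"], f)   # hypothetical legacy script
            new = run(["python", "new_report.py"], f)    # hypothetical rewrite
            assert old == new, f"rewrite diverges from legacy behaviour on {f}"

    test_against_legacy(["sample1.csv", "sample2.csv"])  # placeholder sample data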
Your comment would be more useful if you could point us to some concrete tooling to improve interpretability that's been built in the ~3 years that LLM-assisted coding has been around.
This reads like you either have an idealized view of Real Engineering™, or used to work in a stable, extremely regulated area (e.g. civil engineering). I used to work in aerospace, and we had a lot of silly Mr Tinkleberry canaries. We didn't strictly rely on them because our job was "extremely regulated", to put it mildly, but they did save us some time.
There's a ton of pretty stable engineering subfields that involve a lot more intuition than rigor. A lot of things in EE are like that. Anything novel as well. That's how steam in the 19th century or aeronautics in the early 20th century felt. Or rocketry in the 1950s, for that matter. There's no need to be upset with the fact that some people want to hack explosive stuff together before it becomes a predictable glacier of Real Engineering.
> There's no need to be upset with the fact that some people want to hack explosive stuff together before it becomes a predictable glacier of Real Engineering.
You misunderstand me. I'm not upset that people are playing with explosives. I'm upset that my industry is playing with explosives that all read "front: face towards users".
And then, more upset that we're all seemingly ok with that.
The driving force of the enshittification of everything may be external, but the degradation clearly comes from engineers first. These broader industry trends only convince me it's not likely to get better anytime soon, and I don't like how everything is user-hostile.
Man, I hate this kind of HN comment that makes grand sweeping statements like “that's how it was with steam in the 19th century or rocketry in the 1950s”, because there's no way to tell whether you're just pulling these things out of your… to get internet points or actually have insightful parallels to make.
Could you please elaborate with concrete examples on how aeronautics in the 20th century felt like having a fictional friend in a text file for the token predictor?
We're not going to advance the discussion this way. I also hate this kind of HN comment that makes grand sweeping statements like "LLMs are like having a fictional friend in a text file for the token predictor", because there's no way to tell whether you're just pulling these things out of your... to get internet points or actually have insightful parallels to make.
Yes, during the Wright era aeronautics was absolutely dominated by tinkering, before the aerodynamics was figured out. It wouldn't pass the high standard of Real Engineering.
> Yes, during the Wright era aeronautics was absolutely dominated by tinkering, before the aerodynamics was figured out. It wouldn't pass the high standard of Real Engineering.
Remind me: did the Wright brothers start selling tickets to individuals telling them it was completely safe? Was step 2 of their research building a large passenger plane?
I originally wanted to avoid that specific flight analogy, because it felt a bit too reductive. But while we're being reductive, how about medicine too; the first smallpox vaccine was absolutely not well understood... would that origin story pass ethical review today? What do you think the pragmatics would be if the medical profession encouraged that specific kind of behavior?
> It wouldn't pass the high standard of Real Engineering.
I disagree; I think it 100% is real engineering. Engineering at its most basic is tricking physics into doing what you want. There's no more perfect example of that than heavier-than-air flight. But there's a critical difference between engineering research and experimenting on unwitting people. I don't think users need to know how the sausage is made. That applies equally to planes, bridges, medicine, and code. But the professionals absolutely must. It's disappointing watching the industry I'm a part of willingly eschew understanding to avoid a bit of effort. Such a thing is considered malpractice in "real professions".
Ideally, neither of you would wring your hands about the flavor or form of the argument, or poke fun at the gamified comment thread. But if you're gonna complain about whether comments add positively to the discussion, try to add something to it along with the complaints?
As a matter of fact, commercial passenger service started almost immediately once the tech was out of the fiction phase. The airships were large, highly experimental, barely controllable, hydrogen-filled death traps that were marketed as luxurious and safe. The first airliners also appeared as soon as big engines and large planes did (WWI disrupted this a bit). None of that was built on solid ground. The adoption was only constrained by industrial capacity and cost. Most large aircraft were more or less experimental up until the 50's, and aviation in general was unreliable until about the 80's.
I would say that right from the start everyone was pretty well aware of the unreliability of LLM-assisted coding, and nobody was experimenting on unwitting people or forcing them to adopt it.
>Engineering at its most basic is tricking physics into doing what you want.
Very well, then Mr Tinkleberry also passes the bar because it's exactly such a trick. That it irks you as a cheap hack that lacks rigor (which it does) is another matter.
> As a matter of fact, commercial passenger service started almost immediately once the tech was out of the fiction phase. The airships were large, highly experimental, barely controllable, hydrogen-filled death traps that were marketed as luxurious and safe.
And here, you've stumbled onto the exact thing I'm objecting to. I think the Hindenburg disaster was a bad thing, and software engineering shouldn't repeat those mistakes.
> Very well, then Mr Tinkleberry also passes the bar because it's exactly such a trick. That it irks you as a cheap hack that lacks rigor (which it does) is another matter.
Yes, this is what I said.
> there's a critical difference between engineering research and experimenting on unwitting people.
I use agents almost all day, and I do way more thinking than I used to; this is why I’m now more productive. There is little thinking required to produce output: typing requires very little thinking. The thinking is all in the planning… If the LLM output is bad in any given file, I simply step in and modify it, and obviously this is much faster than typing every character.
I’m spending more time planning and my planning is more comprehensive than it used to be. I’m spending less time producing output, my output is more plentiful and of equal quality. No generated code goes into my commits without me reviewing it. Where exactly is the problem here?
It feels like you’re blaming the AI engineers here, that they built it this way out of ignorance or something. Look into interpretability research. It is a hard problem!
I am blaming the developers who use AI because they're willing to sacrifice intellectual control in trade for something that I find has minimal value.
I agree it's likely to be a complex or intractable problem. But I don't enjoy watching my industry regress down the professionalism scale. Professionals don't choose tools whose workings they can't explain. If your solution to determining whether your tool is still functional is inventing an amusing name and trying to use that as the heuristic, because you have no better way to determine if it's still working correctly, that feels like it might be a problem, no?
I’m sorry you don’t like it. But this has very strong old-man-yells-at-cloud vibes. This train is moving, whether you want it to or not.
Professionals use tools that work, whether they know why it works is of little consequence. It took 100 years to explain the steam engine. That didn’t stop us from making factories and railroads.
> It took 100 years to explain the steam engine. That didn’t stop us from making factories and railroads.
You keep saying this: why do you believe it so strongly? Because I don't believe this is true. Why do you?
And then, even assuming it's completely true exactly as stated, shouldn't we have higher standards than that when dealing with things that people interact with? Boiler explosions are bad, right? And we should do everything we can to prove stuff works the way we want and expect? Do you think AI, as it's currently commonly used, helps do that?
Can you cite a section from this very long page that might convince me no one at the time understood how turning water into steam worked to create pressure?
If this is your industry, shouldn't you have a more reputable citation, maybe something published more formally? Something expected to stand up to peer review, instead of just a page on the internet?
> We should not be making Luddite calls to halt progress simply because our analytic capabilities haven’t caught up to our progress in engineering.
You've misunderstood my argument. I'm not making a Luddite call to halt progress; I'm objecting to my industry, which should behave like one made up of professionals, willingly sacrificing intellectual control over the things it's responsible for and advocating that others do the same, especially at the expense of users, which is what I see happening.
Anything that results in sacrificing the understanding of exactly how the thing you built works is bad and should be avoided. The source, whether AI or something else, doesn't matter as much as the result.
The steam engine is more than just boiling water. It is a thermodynamic cycle that exploits differences in the pressure curve between the expansion and contraction parts of the cycle, and the cooling of expanding gas, to turn a temperature difference (the steam) into physical force (work).
To really understand WHY a steam engine works, you need to understand the behavior of ideal gases (1787 - 1834) and entropy (1865). The ideal gas law is enough to perform the calculations needed to design a steam engine, but it was seen at the time as just as inscrutable: an empirical observation not derivable from physical principles. At least not until entropy was understood in 1865.
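For reference, the standard textbook forms behind those two points (the empirical gas law the designs relied on, and the entropy-era bound on turning a temperature difference into work):

    % ideal gas law: purely empirical until statistical mechanics explained it
    PV = nRT
    % Carnot bound on converting a temperature difference into work,
    % only derivable once entropy was understood
    \eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}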
James Watt invented his steam engine in 1765, exactly a hundred years before the theory of statistical mechanics that was required to explain why it worked, and prior to all of the gas laws except Boyle’s.