Humans understand rules to be commands with risks and consequences. They consciously weigh the benefits of breaking a rule against those risks and consequences. They also have their own needs, self-interests, and instincts for preservation and community.
LLMs don't do or have any of this. To them "rules" (just like all prompts) are just weights on a graph traversal used to output text.
Prompts don't guarantee anything. The LLM does not "understand" them, so it cannot fully adhere to them; they only improve the likelihood that it outputs what you want.
Never ever ever give an LLM access to something you can't afford to break. And stop thinking of them like people.
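The "likelihood, not guarantee" point can be shown with a toy sketch. This is not how any real model is wired; the two-token vocabulary, the logit values, and the labels are all made up for illustration. The only real mechanism here is softmax sampling: a prompt can shift the logits, but every token keeps nonzero probability.

```python
import math
import random

random.seed(0)  # deterministic for the demo

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from a softmax over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    r = random.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Hypothetical two-token world: index 0 = "comply", index 1 = "break the rule".
# Suppose a sternly worded rule in the prompt pushes the logits to this:
logits = [4.0, 1.0]  # compliance ends up ~95% likely, never 100%
counts = [0, 0]
for _ in range(10_000):
    counts[sample_next_token(logits)] += 1
print(counts)  # the "forbidden" token still gets sampled occasionally
```

Under these made-up numbers the rule-breaking token comes out roughly 5% of the time, which is the whole point: the prompt reshaped the distribution, it did not remove the outcome.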
> this law will make phones _worse_ for most people
I challenge you to give me an example of how this law might result in a phone that is worse for most people.
This law does not require a slide-off phone cover. It does not require a screwed-on backplate. It does not forbid the use of chemical adhesives. It does not stipulate how a phone should or shouldn't be designed.
It basically just requires the manufacturer to offer replacement batteries and to enable the replacement to be done with commercially available tools. I'd wager the overwhelming majority of phones are already compliant, pending availability of a replacement battery from the manufacturer.
I'm quite confident I could replace the battery on my Sony Xperia 1 iii with a heat gun and my basic iFixit toolkit.
If you're using a company's product to get advice or do work, you should probably expect that product to be heavily biased towards that company and its affiliates. It's not your own employee, who would presumably act with the best interests of your organization in mind. It's not even your own agent. If that's what you want, the product simply isn't for you.
I'm perfectly fine with Mozilla working on other things as long as those things are profitable or at least self-funded. As long as they are not leeching donated resources from Firefox or Thunderbird, I don't see a problem. However, I wish I had some kind of assurance that the money I donate to Mozilla would go to Firefox and not some other project like this.
It's pattern matching on training material. There is almost certainly an overlap between positivity and success in that material. Positive prompts push the pattern matching toward positivity and therefore toward the more successful material.
The training or system prompts have shoved the probabilities toward a space that tends to select “halt” sooner. You need to drag the probability weights around until they are less likely to reach “halt” so soon.
Nice language often sorta does this for whatever model(s) they looked at, and is also something people are likely to try. Probably lots and lots of nonsense token combos would work even better, but who’s gonna try sticking “gerontocratic green giant giraffes” on the end of their prompts to see if it helps?
Positive or negative language, being so generic, likely also avoids pulling the probabilities away from the correct topic. The nonsense suggestion above might only be ultra-effective when the topic is catalytic converters, for some reason, and push the thing into generating tokens about giraffes otherwise. How would you ever discover the dozens or thousands of more-effective but only-sometimes-effective nonsense token combos? You'd need automation and a lot of brute force, or some better way to analyze the LLM's internals.
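The "drag the probability weights away from halt" idea can be sketched in a few lines. Everything here is hypothetical: the three-token vocabulary, the logit values, and the bias magnitude are invented for illustration. The real-world analogue is a logit bias on the end-of-sequence token, which some inference APIs do expose; a prompt achieves something similar, just less directly.

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical three-token world: "more text", "new idea", and "halt" (EOS).
BASE = {"more": 2.0, "idea": 1.5, "halt": 2.5}

def halt_probability(bias=0.0):
    # A prompt (or an explicit logit bias) shifts the "halt" logit.
    # It lowers the chance of halting; it cannot forbid halting outright.
    vals = [BASE["more"], BASE["idea"], BASE["halt"] + bias]
    return softmax(vals)[2]

print(halt_probability(0.0))   # unbiased chance of halting now
print(halt_probability(-2.0))  # dragged away from "halt": lower, still nonzero
```

Biasing the halt logit down makes early stopping less likely at every step, which is exactly "dragging the weights around" — but since the probability never hits zero, the model can still stop early sometimes, matching the flaky behavior people observe.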
> But there’s no way you can make wholesale changes to a layout faster than a machine.
You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.
In most cases I've seen, it's because they get overwhelmed by sloppy contributions from developers who don't bother to review their AI's output. Code reviews are a lot of work.
Also “responsibility” and “accountability” mean little for anon contributors from the internet. You can ban them but a thousand more will still be spamming you with slop.
> LLMs don't do or have any of this. To them "rules" (just like all prompts) are just weights on a graph traversal used to output text.
They are not the same.