> But skills are not fundamentally different from *.instruction.md prompts in Copilot, or AGENT.md and its variations.
One of the best patterns I've seen is having an /ai-notes folder with files like 'adding-integration-tests.md' that contain specialized knowledge suitable for specific tasks. These "skills" can then be inserted/linked into prompts where I think they are relevant.
But these skills can't be static. For best results, I note what knowledge would make the AI better at the skill next time. Sometimes I ask the AI to propose new learnings to add to the relevant skill files, and I adopt the sensible ones while managing length carefully.
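To make this concrete, here is a minimal sketch of what one of those skill files might contain (entirely hypothetical: the conventions, paths, and command below are invented for illustration, not taken from any real project):

```markdown
<!-- /ai-notes/adding-integration-tests.md (hypothetical example) -->
# Adding integration tests

## Conventions
- Integration tests live in tests/integration/, one file per feature.
- Use the shared test-client fixture; never hit real external services.

## Learnings from past sessions
- The test database resets per module, not per test, so don't rely on row IDs.
- Seed data through fixtures/seed.sql, not inline inserts.

## Definition of done
- `make test-integration` passes locally; paste the run summary into the PR.
```

Keeping each file short and scoped to one task is what makes the "insert/link where relevant" step cheap.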
Skills are a great concept for specialized knowledge, but they really aren’t a groundbreaking idea. It’s just context engineering.
This is exactly what I do, and it works super well. Who would have thought that documenting your code helps both other developers and AI agents? (I'm being sarcastic.)
I would argue that many engineering "best practices" have become much more important much earlier in projects. Personally, I can deal with a lot of jank and lack of documentation in an early-stage codebase, but LLMs get lost so quickly, or they just multiply the jank faster than anyone ever could have in the past, making it much, much worse for both LLMs and humans.
Documentation, variable naming, automated tests, specs, type checks, linting. Anything the agent can bang its proverbial head against in a loop for a while without involving you every step of the way.
This might be one of the best things about the current AI boom. The agents give quick, frequent, cheap feedback on how effective the comments, code structure, and documentation are at helping a "new" junior engineer get started.
I like to think I'm above average in terms of having design docs alongside my code, having meaningful comments, etc. But playing with agents recently has pointed out several ways I could be doing better.
If I see an LLM having trouble with a library, I can feed its transcript into another agent and ask for actionable feedback on how to make the library easier to use. Which of course gets fed into a third agent to implement. It works really well for me. Nothing more satisfying than a satisfied customer.
I've done something similar. I ask agents to use CLIs, then I give them an "exit survey" on their experience along with feedback on improvements. Feels pretty meta.
That comment didn't read like AI generated content to me. It made useful points and explained them well. I would not expect even the best of the current batch of LLMs to produce an argument that coherent.
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
The user's comment history does read like generic LLM output. Look at the first lines of different comments:
> Interesting point about Cranelift! I've been following its development for a while, and it seems like there's always something new popping up.
> Interesting point about the color analysis! It kinda reminds me of how album art used to be such a significant part of music culture.
> Interesting point about the ESP32 and music playback! I've been tinkering with similar projects, and it’s wild how much potential these little devices have.
> We used to own tools that made us productive. Now we rent tools that make someone else profitable. Subscriptions are not about recurring value but recurring billing
> Meshtastic is interesting because it's basically "LoRa-first networking" instead of "internet with some radios attached." Most consumer radios are still stuck in the mental model of walkie-talkies, while Meshtastic treats RF as an IP-like transport layer you can script, automate, and extend. That flips the stack:
> This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
The repeated prefixes ("Interesting point about ...!") and the classic it's-this-not-that LLM pattern are definitely triggering my LLM suspicions.
I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Or maybe these are people who learned from an LLM that English is supposed to sound like this if you want to be permitted to communicate, a.k.a. "to be taken into consideration"! Which is wrong and also kinda sucks, but also it sucks and is wrong for a kinda non-obvious reason.
Or, bear with me here, maybe things aren't so far downhill yet, and these users just learned how English is supposed to sound from the same place where the LLMs learned how English is supposed to sound! Which is just the Internet.
AI hype is already ridiculous; the whole "are you using an AI to write your posts for you" paranoia is even more absurd. So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere. Just like most non-AI-generated thoughts, except perhaps the one which leads to the fridge.
Or maybe the 2-month-old account posting repetitive comments and using the exact patterns common to AI-generated comments is, actually, posting LLM-generated content.
> So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere.
FYI, spammers love LLM generated posting because it allows them to "season" accounts on sites like Hacker News and Reddit without much effort. Post enough plausible-sounding comments without getting caught and you have another account to use for your upvote army, which is a service you can now sell to desperate marketing people who promised their boss they'd get on the front page of HN. This was already a problem with manual accounts but it took a lot of work to generate the comments and content.
> I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Yes, if this is LLM then it definitely wouldn't be zero-shot. I'm still on the fence myself as I've seen similar writing patterns with Asperger's (specifically what used to be called Asperger's; not general autism spectrum) but those comments don't appear to show any of the other tells to me, so I'm not particularly confident one way or the other.
That's ye olde memetic "immune system" of the "onlygroup" (encapsulated ingroup kept unaware it's just an ingroup). "It don't sound like how we're taught, so we have no idea what it mean or why it there! Go back to Uncanny Valley!"
It's always enlightening to remember where Hans Asperger worked, and under what sociocultural circumstances that absolutely proverbial syndrome was first conceived.
GP evidently has some very subtle sort of expectations as to what authentic human expression must look like, which however seem to extend only as far as things like word choice and word order. (If that's all you ever notice about words, congrats, you're either a replicant or have a bad case of "learned literacy in USA" syndrome.)
This makes me want to point out that neither the means nor the purpose of the kind of communication which GP seems to implicitly expect (from random strangers) are even considered to be a real thing in many places and by many people.
I do happen to find that sort of thing way more coughinterestingcough than the whole "howdy stranger, are you AI or just a pseud" routine that HN posters seem to get such a huge kick out of.
Sure looks like one of the most basic moves of ideological manipulation: how about we solve the Turing Test "the wrong way around," by reducing the tester's ability to tell apart human from machine output instead of building a more convincing language machine? Yay, expectations subverted! (While, in reality, both happen simultaneously.)
Disclaimer: this post was written by a certified paperclip optimizer.
It's probably a list of bullet points or disjointed sentences fed to the LLM to clean up. Might be a non-English speaker using it to become fluent. I won't criticize it, but it's clearly LLM generated content.
That was literally the same thought that crossed my mind. I agree wholeheartedly, accusing everything and everyone of being AI is getting old fast. Part of me is happy that the skepticism takes hold quickly, but I don't think it's necessary for everyone to demonstrate that they are a good skeptic.
(and I suspect that plenty of people will remain credulous anyway, AI slop is going to be rough to deal with for the foreseeable future).
Spammers use AI comments to build reputation on a fleet of accounts for upvoting purposes.
That may or may not be what's happening with this account, but it's worth flagging accounts that generate a lot of questionable comments. If you look at that account's post history, there are a lot of familiar LLM patterns and repeated post fragments.
Yeah, you have a point... the comment - and their other comments, on average - seem to fit quite a specific pattern. It's hard to really draw a line between policing style and actually recognising AI-written content, though.
What makes you think that? It would need some prompt engineering if so, since ChatGPT won't write like that (bad capitalization, lazy quoting) unless you ask it to.
We finally have a blog that no one (yet) has accused of being ai generated, so obviously we just have to start accusing comments of being ai. Can't read for more than 2 seconds on this site without someone yelling "ai!".
For what it's worth, even if the parent comment was submitted directly by ChatGPT itself, your comment brought significantly less value to the conversation.
It's the natural response. AI fans are routinely injecting themselves into every conversation here to somehow talk about AI ("I bet an AI tool would have found the issue faster") and AI is forcing itself onto every product. Comments dissing anything that sounds even remotely like AI is the logical response of someone who is fed up.
Every other headline and conversation being about ai is super annoying.
But also, it's super annoying to sift through people saying "the word critical was used, this is obviously ai!". Not to mention it really fucking sucks when you're the person who wrote something and people start chanting "ai slop! ai slop!". Like, how am I going to prove it's not AI?
I can't wait until ai gets good enough that no one can tell the difference (or until ai completely busts and disappears, although that's unlikely), and we can go back to just commenting on whether something was interesting or educational or whatever, instead of analyzing how many em-dashes someone used pre-2020, extrapolating whether their latest post has one more em-dash than their average post, and getting our pitchforks out to chase them away.
LLMs will never get good enough that no one can tell the difference, because the technology is fundamentally incapable of it, nor will it ever completely disappear, because the technology has real use cases that can be run at a massive profit.
Since LLMs are here to stay, what we actually need is for humans to get better at recognising LLM slop, and stop allowing our communication spaces to be rotted by slop articles and slop comments. It's weird that people find this concept objectionable. It was historically a given that if a spambot posted a copy-pasted message, the comment would be flagged and removed. Now the spambot comments are randomly generated, and we're okay with it because it appears vaguely-but-not-actually-human-like. That conversations are devolving into this is actually the failure of HN moderation for allowing spambots to proliferate unscathed, rather than the users calling out the most blatantly obvious cases.
Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
The only spam I see in this chain is the flagged post by electric_muse.
It's actually kind of ironic you bring up copy-paste spam bots. Because people fucking love to copy-paste "ai slop" on every comment and article that uses any punctuation rarer than a period.
> Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
Yes: the original comment is unequivocally slop that genuinely gives me a headache to read.
It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell.
Humans don't needlessly use a colon in every single sentence they write: abusing punctuation like this is actually really fucking irritating.
Of course, it goes beyond the punctuation: there is zero substance to the actual output, either.
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
> Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
This stupid pattern of LLMs listing off jargon like they're buzzwords does not add to the conversation. Perhaps the usage of jargon lulls people into a false sense of believing that what is being said is deeply meaningful and intelligent. It is not. It is rot for your brain.
"it's not just x, it's y" is an ai pattern and you just said:
>"It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell."
So, I'm actually pretty sure you're just copy-pasting my comments into chatgpt to generate troll-slop replies, and I'd rather not converse with obvious ai slop.
Congratulations, you successfully picked up on a pattern when I was intentionally mimicking the tone of the original spambot content to point out how annoying it was. Why are you incapable of doing this with the original spambot comment?
Cultural acceptance of conversation with AI should've come from actual AI that is indistinguishable from humans; being forced to swallow recognizable, if not blatant, LLM slop and turn a blind eye feels unfair.
For those looking to quickly understand scope of impact:
> According to Bloomberg and CNN, citing sources, SitusAMC sent data breach notifications to several financial giants, including JPMorgan Chase, Citigroup, and Morgan Stanley. SitusAMC also counts pension funds and state governments as customers, according to its website.
There are important contexts outside of machines you control where installing or running CLI commands isn't possible. In those cases, skills won't help, but MCP will.
Hence why I said drastically rather than totally. There are still a few edge cases where it is worthwhile, but they are small and shrinking, especially with services providing UIs with VMs/containers for the model to use increasingly being a thing.
Agreed. Only provide the servers and tools needed for that job.
It would be silly to provide every employee access to GitHub, regardless of whether they need it. It’s just distracting and unnecessary risk. Yet people are over-provisioning MCPs like you would install apps on a phone.
Principle of least access applies here just as it does anywhere else.
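As a concrete sketch, a least-privilege setup in an MCP client config (using the common `mcpServers` shape found in, e.g., Claude Desktop's `claude_desktop_config.json`) would list only the one server a given job needs; the server name and package below are hypothetical:

```json
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "example-issue-tracker-mcp", "--read-only"],
      "env": {
        "TRACKER_TOKEN": "short-lived, read-scoped token for this task only"
      }
    }
  }
}
```

Everything else (GitHub, filesystem, shell) simply isn't in the config for that session, so the model can't reach it even when prompted.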
Which is really stupid. If I was going to an event and suddenly heard it was so dangerous there that the national guard had been deployed, I would not go to the event. Who would?
Guns, in one word. If you prefer longer answers, it's because police are not the rent-a-cop for private property.
There are definitely social situations where additional security is warranted, that should be clear to most Americans. That security has to come at the expense of those who finance contrived social situations on private property, though.
Uh, the police literally are the rent-a-cops - this entire thing was about him hiring off-duty cops to stand around, getting paid at cop overtime rates, with their guns, at his conference.
You're probably underestimating how much credit is available to people. Having money issues? Keep paying your car loan while you borrow money from Klarna for your DoorDash Chipotle.
I mean, they hide it as best they can. With big restaurants like Applebee's you'll see the "2 for $28" deal not priced at $28 on the app, so you can guesstimate the squeeze; but otherwise you kinda have to go straight to Starbucks or McDonald's, use the mobile app to order your "usual," and compare "here's what it looks like if I use DoorDash, here's what it looks like if I go myself" to find that the actual delivery cost is some $20-25 per order. Even worse, I'm pretty sure they test algorithms that selectively lower this for new customers, so that in the early days, when you're more aware of the cost, it seems like a steal.
Of course, you can arrive at the $20 just by thinking, "okay, I need someone to go do an errand for me, they'll have to drive to the restaurant, wait there for 15-20 minutes, and then bring it back... so it'll cost $15 for the hour of their time plus a few bucks of overhead for the platform plus a few bucks of messed-up-my-order insurance..."
Which gets us to 5 years from now, when the DoorDash killer comes out. It'll be called Kourier or something starting with a K, and it'll start by giving Target a way to call up some extra Target employees who are cross-trained in packaging orders for K. One person will pick up 10 carefully-packaged K-orders and take them all to the central delivery hub, where they'll get sorted into driverless cars that plot routes through some neighborhood with some 10 stops. It'll be marketed as a real Amazon-killer and fly under DoorDash's radar -- InstaCart might balk, but DoorDash won't. Until they reveal some pizza-delivery partnership, and suddenly within a year every restaurant has some K-employee working for them, whose job it is to batch orders down to the bikes that come by.
Sure, delivery times for Kourier will be 75, 80 minutes long at first. People won't mind, because you pay $4 for delivery instead of $20. And DoorDash/Amazon won't die; Amazon will just buy Kourier, and DoorDash will focus on more rural locales.
I'll be disappointed if it isn't like Snow Crash (1992):
> The Deliverator, in his distracted state, has allowed himself to get pooned. As in harpooned. It is a big round padded electromagnet on the end of an arachnofiber cable. It has just thunked onto the back of the Deliverator's car, and stuck. Ten feet behind him, the owner of this cursed device is surfing, taking him for a ride, skateboarding along like a water skier behind a boat.
> In the rearview, flashes of orange and blue. The parasite is not just a punk out having a good time. It is a businessman making money. The orange and blue coverall, bulging all over with sintered armorgel padding, is the uniform of a Kourier. A Kourier from RadiKS, Radikal Kourier Systems. Like a bicycle messenger, but a hundred times more irritating because they don't pedal under their own power -- they just latch on and slow you down.
And while tipping is technically optional, it's de facto required. The driver sees the total pay for a delivery before they accept it, and if it's too low, they'll reject it, and DoorDash will offer the delivery to another driver. If you don't tip, your delivery will keep getting rejected until it reaches some driver who's desperate enough. By that time, your food will likely have been made and left sitting for 30 minutes.
When the popular running theme of complaints is "it's impossible to do X because poor people all work 168 hours a week minimum", it's easy to excuse wasting your money to save time.
I think the most accurate part of your analogy is how fast the technology changes and renders yesterday’s product obsolete.
Just saw that the Audi e-tron GT has amazing deals on used models. Then I saw a new model coming out with a better battery, more power, better range, and more features. Suddenly last year's model is way less compelling.
True. At this point in time I'd only lease an EV. That being said, given that 100% of cars on the road won't be EVs by 2030, as some have tried to convince us they would be, I suspect the rate of innovation in EV land will slow as EV investment is greatly curtailed by the car companies.
Hey, did you read the article? The newsworthy point is that EVs depreciate faster than their gas counterparts.
But hey, that just means better used EV prices for the rest of us. You can get some high end gently used ones for a great price.
> For Tesla owners in the U.S., their 2023 Model Ys are worth 42% less than what they paid two years ago, while a Ford F-150 truck bought the same year depreciated just 20%. Older EV models depreciate even faster than newer ones.
I'm not sure I'd call an F-150 a "counterpart" to a Tesla Model Y, especially when the F-150 Lightning exists. I assume the pairing was chosen because an F-150 vs. F-150 Lightning comparison would disprove the premise of this article.