Sorry about that. I'm not a native speaker and asked GPT-4 to: "Create a engaging reply for HackerNews talking that this is a great model, and I really hope that they release a 13B and 34B version. As those sizes are way more capable and have a chance of finally surpassing the GPT 3.5. This would be a very nice decision for mind share, and their larger models that can rivalize gpt 4 can be keep private for commercialization."
I think this is what GPT-4 thinks an engaging comment for HN looks like.
AI outputs aren't necessarily bad for datasets, provided a human has verified them for quality and correctness (probably the case here, but not for SEO content farms).
This is something I've found LLMs nearly completely useless for. I gave a talk on uses of AI for gamedev and had some great material, but I couldn't get an LLM to write a blurb for the talk that wasn't vomit-inducing.
This isn't so much a problem with LLMs themselves as with the training data. The world is so inundated with meaningless marketing speak that when you try to get one to talk about a topic in even a slightly promotional manner, it produces something that fits in nicely with the existing drivel.
I have the same problem writing references for students and summarising my feedback to them. I find asking it to “write concisely and without waffle, like a brusque, British academic” helps a bit.
The comment implied that 13B and 34B models are coming.
This is interesting... You didn't have any malicious intent, which makes this a somewhat novel example of GPT-4 sneaking misinformation into an HN comment section.