Hacker News

Why is this comment written in a sports-podcast tone?


Sorry about that. I'm not a native speaker and asked GPT-4 to: "Create a engaging reply for HackerNews talking that this is a great model, and I really hope that they release a 13B and 34B version. As those sizes are way more capable and have a chance of finally surpassing the GPT 3.5. This would be a very nice decision for mind share, and their larger models that can rivalize gpt 4 can be keep private for commercialization."

I think this is what GPT-4 thinks an engaging comment for HN looks like.


I think your prompt was written well enough to not need GPT-4. Don't undersell yourself :)


I flagged it for being AI written. Even if you're not a native speaker, it's best to not have AI outputs polluting future datasets, anyway.


AI outputs are not necessarily bad for datasets, provided a human has verified their quality and correctness (probably the case here, but not for SEO content farms).


We've had flawed human outputs re-polluting future human learning for some time now.


That's actually really interesting, thanks for sharing. We're in for an interesting future hah.


This is the future we are choosing. https://youtu.be/Cn8Pua5rhj4?si=tOro1MLaOE525Q2O


(ha!)


Given it's the most upvoted on the thread at the moment, I think GPT-4 was on the money here :D


This is something I have found LLMs nearly completely useless for. I gave a talk on uses of AI for gamedev, and had some great material, but I couldn't get an LLM to write a blurb for the talk that wasn't vomit-inducing.

This isn't so much a problem with LLMs themselves as with the training data. The world is so inundated with meaningless marketing speak that when you try to get an LLM to talk about a topic in even a slightly promotional manner, it produces something that fits in nicely with the existing drivel.


I have the same problem writing references for students and summarising my feedback to them. I find asking to “write concisely and without waffle, like a brusque, British academic” helps a bit.
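The trick above amounts to pinning a fixed style instruction ahead of each task. A minimal sketch of that pattern, assuming a chat-style API that takes role-tagged messages (the helper function and exact wording are illustrative, not from the thread):

```python
# Sketch: steer an LLM away from marketing waffle with a fixed system prompt.
# STYLE wording is an assumption based on the comment above, not a quoted prompt.
STYLE = (
    "Write concisely and without waffle, like a brusque, British academic. "
    "No marketing language, no superlatives, no exclamation marks."
)

def build_messages(task: str) -> list[dict]:
    """Pair the fixed style instruction with the user's actual task."""
    return [
        {"role": "system", "content": STYLE},
        {"role": "user", "content": task},
    ]

# The result can then be passed to any chat-completion endpoint, e.g. something like
# client.chat.completions.create(model=..., messages=build_messages(task))
msgs = build_messages("Summarise my feedback to this student in three sentences.")
```

Keeping the style text in one constant also means every reference or summary you generate gets the same register, rather than re-describing the tone ad hoc each time.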


So basically you're admitting it's a prompting problem.


The comment implied that 13B and 34B models are coming.

This is interesting... You didn't have any malicious intent, hence this is a somewhat novel example of GPT-4 sneaking misinformation into an HN comment section.


It's Mistral!

Or are you Mistral?



