Hacker News: johnbarron's comments

I think it's them on video: https://youtu.be/Rx19zOzQeis

> But at the same time I agree that they aren't doing enough to surface the high-quality courses

They have forced the creators to agree to being scraped by AI, or otherwise not show up in their own top search results. Ironically, this has sealed their fate, and most top creators decided to move their content somewhere else.


Your comment is the equivalent of stating that Jeff Bezos and Andy Jassy do not really know their employees are carrying around urine bottles.

Mmm, no, I don’t think it’s equivalent. I think they know that if you make the work hard, some employees will have trouble keeping up and will do things like peeing in bottles. And they’re OK with that, because they think there are enough people who can keep up that they can push the weaker people out. I think they believe that the peeing in bottles is relatively rare. I’m unsure whether that’s right or not. It’s been reported that it happens, but I have no sense whether it’s common.

Wilful ignorance much?

Management is told it's a homebrew. Very commonly used by developers all over the industry.

This is Matt Garman, the ultimate MBA. Bonus for sure tied to tokens-per-quarter, which is the 2026 equivalent of measuring engineers by lines of code...

This is why AWS has been bleeding good engineers for years. What is left is starting to look like Boeing after the McDonnell Douglas merger...

They took out a quarter of their documentation pages' limited real estate with AI doc shorts that nobody asked for, nobody needs, and nobody can disable.


>> Anthropic using marketing to convince people their models are more advanced, better built, or that AI is a threat that needs to be regulated because only they have the answer? I’m shocked.

I remember when OpenAI was saying GPT-2 was too dangerous to release.


I remember when there was a guy at Google a few years ago who was convinced that they had an internal, sentient creature in their labs (I think maybe 4 years ago?).

If I’m not mistaken, after the media cycle, he lost his job for breaking confidentiality.

That was the opposite of marketing, Google really didn’t get how to turn this into a product until ChatGPT happened.


They most likely understood that it wasn't viable for anything. OpenAI just yolo'd it and now we're dealing with the fallout. I'm fairly certain that no management layer at Google is going to say yes to the "invest 5 billion to make 10 million" scheme that OpenAI and Anthropic are currently running.

"ChatGPT has over 900 million weekly active users worldwide. ... ChatGPT Plus has around 50 million paying subscribers"

What you have typed does not address anything the person you are responding to said.

Of those 50 million subscribers, how much do they pay and how much do they cost? That is the only relevant piece of information when discussing the investment and returns of OpenAI.


> "invest 5 billion to make 10 million"

Business is contextual, and is a game of numbers. If you agree, then there is a difference between "I made money selling lemon drinks in my driveway, but I sold a car to make room" versus "I have recurring revenue of 50 million x $80 USD per month, it is growing, and I am using cheap credit to build it." Numbers have meaning, and that larger recurring revenue cannot be matched in any way, no matter how much I spend. IIRC, ChatGPT is the fastest-adopted software in the history of the Internet.


Is it growing?

Don't they report annualized revenue, AKA the best month times 12? How is that comparable?


They have no moat.

I for one can't wait for the 10 million to go all the way to zero.

Google is the leader; they really don't want AI to be a success, since it only comes with a risk of disruption. They probably don't even really believe it's going to be that big of a deal. They are only in that game to hedge; sure, they will have wasted a trillion dollars if AI doesn't come through, but they will earn that back in 3-5 years. So why would they need to do deranged marketing stunts and sacrifice their credibility for that?

If OpenAI or Anthropic doesn't turn this into a trillion-dollar industry FAST, they are cooked. The strategy of building up fear around your product is risky, but necessary. There is simply no way to grow the AI business fast enough if they can't talk directly to the CEOs and bypass input from the employees, and baba yaga stories are perfect for that. Every time the CEO hears an employee say that AI isn't working great for him, he hears an employee who's scared for his job or for his life, dismisses it, and sends out a mandate that everyone needs to prompt an AI every time they so much as need to go to the toilet.


Context from 2019: https://en.wikipedia.org/wiki/GPT-2

>While previous OpenAI models had been made immediately available to the public, OpenAI initially refused to make a public release of GPT-2's source code when announcing it in February, citing the risk of malicious use;[8][5] limited access to the model (i.e. an interface that allowed input and provided output, not the source code itself) was allowed for selected press outlets on announcement.[8] One commonly-cited justification was that, since generated text was usually completely novel, it could be used by spammers to evade automated filters; OpenAI demonstrated a version of GPT-2 fine-tuned to "generate infinite positive – or negative – reviews of products".[8]

>Another justification was that GPT-2 could be used to generate text that was obscene or racist. Researchers such as Jeremy Howard warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".[18] ...


It's kind of funny watching the behavior on the forum of different groups with different beliefs.

"AI can't do anything harmful at all, kick this shit up to 11. It's all marketing, bla bla"

and

"My grandma gave away all her money to AI bots and is now starving in the street. My uncle murdered his wife and is trying to get married to GPT-4o. He thinks they are going to elope to a data center on a tropical island and live happily ever after".

I think the "AI can do no harm, it's marketing" people are really disconnected from reality, and any other product that behaved in the same manner would have been banned in most places.


Related: https://youtu.be/Ykvf3MunGf8?si=UEIMRdrMWUFF6V8Q

AI chatbots have caused real harm. They have tragically convinced and encouraged a number of people to commit suicide, to say nothing of scams. They are having a real effect on the social fabric of our society.

I don't understand what point the people who blame the dangers of AI on marketing are trying to make.


The sociocultural dangers weren't the danger they were referring to. Claude Mythos was purported to be so powerful that, if released to the public, it would result in all software being 0-dayed, so they could only give select important groups access. Curl's analysis said, ehh, it didn't really seem that much better.

Now, people who are getting negatively affected because they think AI is more real and more intelligent than it actually is, and get tricked by it: well, that is dangerous, but for different reasons.


> I remember when OpenAI was saying GPT-2 was too dangerous to release.

The world hasn’t ended yet, but has it improved?


"it can almost like write 2 paragraphs!" "It might be conscious" "this is basically AGI, we had to fire someone who spilled the beans"

I always thought he was fired for making crackpot statements to the press in reference to his professional capacity, thus creating bad PR and an embarrassing spectacle for his employer. Those seem like legitimate reasons to me.

An interesting question now is whether he had standard mental health issues, or if he was an early example of AI psychosis or whatever we call people who are falling in love with their AI chatbots because they tell them how smart they are.

Considering Richard Dawkins has recently succumbed to the same delusion, it is a reminder that no matter how intelligent someone may otherwise be, we are all human and have certain tendencies and blind spots; anthropomorphizing non-entities is one of them.

Richard Dawkins is 85, to be fair, just as Bernie Sanders was 84 when he made similar comments.

The other guy worked on Google's AI safety team, where one would expect he'd have a basic grasp of how the technology works before making outlandish claims.


One phenomenon that spooks me is when intelligent people believe in idiotic things.

It makes me wonder if there's a wrong turn in the road where I too might fall into the same pit.


Vigilance is warranted, I think.

I can't find it right now, but something came up a few years ago (probably on HN) about highly intelligent people being more adept at making up arguments to rationalize beliefs and actions that they had arrived at for other reasons entirely.

Sort of makes sense that wielding a more complex mind would offer more complex ways to go wrong, doesn't it?


And on balance, it can also mean that they make connections and see truth where others only see the facade. Both statements can be (and are) true, because highly intelligent people are still just people. Some people’s “delusions” are absolutely correct, and others’ “facts” are nothing more than anecdotes told to convince themselves of what they want to believe.

Sounds like “intelligence” isn’t the only defining metric for such behavior to occur, because this describes a lot of less intelligent people too. Though I suspect highly intelligent people are at least somewhat more likely to end up on the “correct” side of the facts.


As someone who watched one of my heroes fall for some stupid cult-like thing ten years ago and wondered the same thing, and then many years later fell for some dumb stuff myself: the answer is you probably will. Try to stay intellectually flexible; it'll be okay.

I am afraid of that, I wasn't joking.

I have seen people I consider much smarter than me fall for some very idiotic things. I certainly don't consider myself immune.

I think the advice to try to be intellectually flexible is a good one. Strive to learn new things, expose yourself earnestly to ideas that challenge your beliefs, exercise empathy, etc.


Good point.

Optimization on "Human Feedback", early exposure to high-effort experimental systems... I wouldn't be surprised if that turns into a bigger field than is generally recognized today.

Looking at it from the outside, I think it's still pretty hard to see how he came to end up in that position. But with a bit of individual vulnerability, arbitrary time to boil the frog slowly, and a fairly large number of people exposed, maybe it would be stranger for this not to happen to someone.


And Anthropic was founded by former high-ranking OpenAI employees, so they were accustomed to the classic "it's so dangerous we can't release it" trope.

It sounds like Mythos is good, but none of us know exactly how good, since they haven't released it yet. It also sounds like Anthropic is compute-starved, which is probably the biggest reason it hasn't had a public release.



>> Modern day piracy.

"Israel seizes Gaza aid ships in international waters" - https://www.reuters.com/world/middle-east/israel-begins-inte...


It still shocks me that there are people out there who treat this like watching sports: when my team does it, it's OK; when they do it, it's clear-cut wrong.



Great article.
