> “ The model, code-named Avocado, outperformed Meta’s previous A.I. model and did better than Google’s Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said.”
So in two months they will make it better than 3.1? But by then there will probably be even newer models. It would be great if we get another competing model, but it isn’t going to be easy for Meta.
I remember everyone wondering how they got faster inference speeds. Everyone was so focused on the memory throughput, but after simmering on it for a bit, I’m left wondering if the SSD speed plays a part in the speedup they noticed. Then again, it could also be the specific model.
- "The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn't."
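Finding those twenty-two endpoints is essentially a mechanical pass over the published API documentation. A minimal sketch of that check, assuming the docs are an OpenAPI 3.x spec already parsed into a dict (the two-endpoint spec below is a made-up example, not the API from the comment):

```python
# Hypothetical OpenAPI 3.x spec fragment: a global security requirement,
# one endpoint inheriting it, and one endpoint explicitly opting out.
spec = {
    "security": [{"apiKey": []}],  # global default: auth required
    "paths": {
        "/users": {"get": {}},                 # inherits the global auth
        "/health": {"get": {"security": []}},  # empty list = no auth needed
    },
}

def unauthenticated_endpoints(spec: dict) -> list[str]:
    """Return 'METHOD /path' strings for operations with no effective auth."""
    global_sec = spec.get("security", [])
    found = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            # An operation-level "security" key overrides the global default;
            # an empty list means the operation requires no authentication.
            effective = op.get("security", global_sec)
            if not effective:
                found.append(f"{method.upper()} {path}")
    return found

print(unauthenticated_endpoints(spec))  # ['GET /health']
```

This is the same inheritance rule the OpenAPI spec defines for `security`, which is why a fully documented API makes the unauthenticated subset trivial to enumerate.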
I’m tired of every AI capability at SaaS companies charging usage fees. I get it: they have their own token costs and want to make a profit on the feature. But it makes a $50-a-month product potentially hundreds if I’m not careful. For example, our customer chat tool with AI turned on would cost hundreds a month instead of the $39/mo we pay. We turned off the AI capability.
> I’m tired of every AI capability at SaaS companies charging usage fees. I get it: they have their own token costs and want to make a profit on the feature. But it makes a $50-a-month product potentially hundreds if I’m not careful.
Well, I understand how, as a user, you’d prefer your $50 subscription to include hundreds of dollars of subsidies for the cost of consuming third-party AI resources. But other than a startup spending VC money to buy a userbase and force the traditional unsubsidized competition out of business before milking its new monopoly hard to pay back investors, how do you expect a firm to justify that?
It might go that way - skip the higher-level languages, or use a language fit to purpose for AI. Right now we want the comfort of being able to read the code if we need to. I would love to see a study comparing the percentage of code reviewed a year ago versus today, with AI-generated code.