Ballmer hasn’t been around for a long, long time, not since the Red Ring of Death days. Ever since Satya took the reins, MBAs have filled upper and middle management, trying to take over open source so the sales guys had something to combat Red Hat with. Great for open source. Bad for Microsoft. However, Satya comes from the Cloud division, so he knows how to Cloud and do it well. Azure is a hit with the enterprise. Then along comes AI…
Microsoft lost its way with Windows Phone, Zune, the Xbox 360 RRoD, and Kinect. They haven’t had relevance outside of Windows (desktop) in the home for years, with the sole exception of Xbox.
They have pockets of excellence, where great engineers are doing great work. But outside those little pockets, no one knows about it.
I can't really respect the artist, though, after the assault on a random bystander in Stockholm in 2019, for which he was convicted. He got off too easy.
I've noticed that the sentence “Compliant with RVA23 excluding V extension” has apparently confused some reporters in the tech press lately.
It means that the UR-DP1000 chip would have been RVA23-compliant if only it had supported the V (Vector) extension. The Vector extension is mandatory in the RVA23 profile.
There are other chips out there even closer to RVA23 compliance, which have V but lack a couple of scalar extensions. Those missing instructions have been emulated in software using trap handlers, but with a significant performance penalty.
V is such a big extension, with so many instructions and so much extra state, that I don't think emulating it that way would be worth the effort.
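For context, here's a minimal sketch of what such trap-and-emulate looks like, assuming a bare-metal RV64 M-mode handler, with Zicond picked purely as an illustrative example of a small missing scalar extension (the entry stub that saves registers and reads mepc/mtval is omitted):

    #include <stdint.h>

    /* Integer register file x0..x31 as saved by the trap entry stub. */
    typedef struct { uint64_t x[32]; } regs_t;

    /* Try to emulate one faulting instruction word. `insn` comes from
     * mtval (implementation-dependent; some cores require loading it
     * from mepc instead). Returns 1 on success, after which the caller
     * must advance mepc by 4; returns 0 to report a genuinely illegal
     * instruction. */
    static int emulate_insn(uint32_t insn, regs_t *r)
    {
        uint32_t opcode = insn & 0x7f;
        uint32_t rd     = (insn >> 7)  & 0x1f;
        uint32_t funct3 = (insn >> 12) & 0x7;
        uint32_t rs1    = (insn >> 15) & 0x1f;
        uint32_t rs2    = (insn >> 20) & 0x1f;
        uint32_t funct7 = (insn >> 25) & 0x7f;

        if (opcode == 0x33 && funct7 == 0x07) {   /* Zicond group */
            uint64_t cond = r->x[rs2];
            uint64_t val  = r->x[rs1];
            if (funct3 == 0x5) {                  /* czero.eqz */
                if (rd) r->x[rd] = (cond == 0) ? 0 : val;
                return 1;
            }
            if (funct3 == 0x7) {                  /* czero.nez */
                if (rd) r->x[rd] = (cond != 0) ? 0 : val;
                return 1;
            }
        }
        return 0;
    }

Each trap costs a full exception round trip plus decode, hence the performance penalty; doing this for all of V would also mean keeping the vector register file and its CSRs in memory.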
> Those missing instructions have been emulated in software using trap handlers, but with a significant performance penalty.
This is a thing SoC vendors have done before without informing their customers until it was way too late. Quite a few players in that industry really do have shockingly poor ethical standards.
I'm not sure it's intentional. AWS doesn't list CPU features in its EC2 product documentation either. That doesn't necessarily mean they can disable CPU features on instances covered by existing customer contracts.
This is the sort of comment that makes people lose faith in HN.
There totally are cases where it's intentional, and no, they are not discussed on the internet, for obvious reasons. People in the industry will absolutely know what I'm on about.
I didn't intend to dismiss your experience. From the opposite (software) side, these things are hard to document, and unclear hardware-requirements documentation results from that complexity and (perhaps) unresolved internal conflicts.
The problems started when online interaction got mediated by corporations with profit motives that use dark patterns, automated systems, and algorithms to extract more revenue from their users.
Most of my real-life friends are people I first met online, or as a consequence of having met someone online. Those sites have mostly been run by enthusiasts, driven by some hobby, fandom or other interest. A couple of them have grown very popular, attracting many thousands of users, and have even served news and let vendors use the site to interact with their customers.
The communities that have thrived have made sure that discourse does not get poisoned: they have had active, strong but fair moderators. Many have strict rules against discussing politics or religion, but people sometimes need to discuss those things too, and being identifiable across contexts (e.g. between subreddits) can put people off doing so.
Also, where do you draw the line between what is an online community and what is "social media"?
I've avoided Facebook and X/Twitter, but I know genuine communities exist there too.
There is still a difference between "fetch this page for me and summarise" and "go find pages for me, and cross-reference".
And what makes you think that all AI agents using Tabstack would be directly controlled in real time with a 1:1 correspondence between human and agent, and not in some automated way?
I'm afraid that Tabstack would be powerful enough to bypass some existing countermeasures against scrapers, and that, once let in because of its lightweight mode, it could be used to scrape data it isn't supposed to have access to. I'd bet that someone will at least try.
Then there is the issue of which actions an agent is allowed to perform on behalf of a user. Many sites state in their Terms of Service that all actions must be done directly by a human, or that all submitted content must be human-generated and not come from a bot.
I suppose an AI agent could find and interpret the ToS, but that is error-prone and not the proper level to do it at. Some kind of formal declaration of what is allowed is necessary: robots.txt is such a formal declaration, but very coarsely grained.
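For reference, per-user-agent path rules are roughly all the granularity robots.txt offers (GPTBot here is just one real-world example of an AI crawler token):

    # Applies to one named crawler: may fetch nothing.
    User-agent: GPTBot
    Disallow: /

    # Everyone else: may fetch everything except /private/.
    User-agent: *
    Disallow: /private/

There is no way to distinguish, say, fetching a page on behalf of a live user from bulk collection for model training.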
There have been several disparate proposals for formats and protocols that are "robots.txt but for AI". I've seen that at least one of them allows different rules for AI agents and for machine-learning crawlers.
But these are too disparate, not widely known ... and completely ignored by scrapers anyway, so why bother.
Plastic does not have to be 100% recyclable for it to be economically viable.
However, plastic straws are so small that I'd think most of them get tossed anyway.
You can see the progress schedule here: https://review.video.fosdem.org/overview