If you read it carefully, you'll notice that the blog post misrepresents the AMD response.

The blog post title is "AMD won't fix", but the response that is quoted in the post doesn't actually say that! It says nothing about will or won't fix; it just says "out of scope", and it's pretty reasonable to interpret this as "out of scope for receiving a bug bounty".

It's pretty careless wording on the part of whoever wrote the response and just invites this kind of PR disaster, but on the substance of the vulnerability it doesn't suggest a problem.


The challenge is that this doesn't really work for community-developed software.

Let's say somebody uses this scheme for software they wrote. Would anybody else ever contribute significantly if the original author would benefit financially but they wouldn't?

Mediating the financial benefits through a non-profit might help, but (1) there's still a trust problem (who controls the non-profit?), and (2) that's a lot of overhead to set up when starting out, for a piece of software that may or may not become relevant.


It does. And it works best if you elect politicians who are willing to listen.

Was that Jane Street? I remember watching a presentation from someone there about such a system.

If not, any chance this tooling is openly available?


> Was that Jane Street?

yep


I think the closest such thing we have is "suggestions" on github and gitlab.

In the large, ideas can have a massive influence on what happens. This inevitability that you're expressing is itself one of those ideas.

Shifts of dominant ideas can only come about through discussions. And sure, individuals can't control what happens. That's unrealistic in a world of billions. But each of us is invariably putting a little bit of pressure in some direction. Ironically, you are doing that with your comment even while expressing the supposed futility of it. And overall, all these little pressures do add up.


How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources? It would require all these billions to collectively understand the problem and align towards a common goal. I don't think this was a design feature, but globalising the economy created hard dependencies, and the internet's global village created a common mindshare. It's now harder than ever to effect a revolution, because it would need to happen everywhere at the same time, with billions of people.

> How will this pressure add up and bubble up to the sociopaths whom we collectively allow to control most of the world's resources?

By things like: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act

and: https://www.scstatehouse.gov/sess126_2025-2026/bills/4583.ht... (I know nothing about South Carolina, this was just the first clear result from the search)


To be clear: the sociopaths, and the culture of resource domination that generates and enables them, are the real problem.

AI on its own is chaotic neutral.


Having used Claude Code in anger for a while now, I agree that given the state of these agents, we can't stop writing code by hand. They're just not good enough.

But that also doesn't mean they're useless. Giving comparatively tedious background tasks to the agents that I check in on once or twice an hour does feel genuinely useful to me today.

There's a balance to be found that's probably going to shift slowly over time.


To me the biggest benefit has been getting AI to write scripts that automate things for me that are tedious but don't need to be deployed. Those scripts don't have to be production-grade; they just have to work.

Similar experience. I just tried Claude for the first time last week, and I gave it very small tasks. "Create a data class myClass with these fields<•••> and set it up to generate a database table using micronaut data" was one example. I still have to know exactly what to do, but I find it very nice that I didn't have to remember how to configure Micronaut Data (which tbf is really easy); I just had to know that that's what I wanted to use. It's not as revolutionary as the hype, but it does increase productivity quite a bit, and I think it also makes programming more fun. I get to focus on what I want to build instead of trying to remember JDBC minutiae. Then I just sanity-check the generated code and trust that I'll spot mistakes in that JDBC code. It felt like the world's most intuitive abstraction layer between me and the keyboard, a pretty cool feeling.
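
For anyone curious, that kind of task with Micronaut Data (assuming the JDBC flavor) boils down to roughly the following. This is only a sketch; the fields and the dialect here are made up, and the real code would match whatever datasource you configure:

    // Entity + repository for Micronaut Data JDBC (Kotlin sketch).
    import io.micronaut.data.annotation.GeneratedValue
    import io.micronaut.data.annotation.Id
    import io.micronaut.data.annotation.MappedEntity
    import io.micronaut.data.jdbc.annotation.JdbcRepository
    import io.micronaut.data.model.query.builder.sql.Dialect
    import io.micronaut.data.repository.CrudRepository
    import java.time.Instant

    @MappedEntity // maps the class to a table (my_class under the default naming strategy)
    data class MyClass(
        @field:Id @GeneratedValue
        val id: Long? = null,
        val name: String,
        val createdAt: Instant
    )

    @JdbcRepository(dialect = Dialect.H2)
    interface MyClassRepository : CrudRepository<MyClass, Long>

    // The table itself usually comes from the datasource config
    // (e.g. schema-generate: CREATE_DROP in application.yml) or a migration tool.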

Just for fun, once I had played a bit with it like that, I just told it to finish the application with some vague Jira-epic level instructions on what I wanted in it and then fed it the errors it got.

It eventually managed to get something working but... Let's just say it's a good thing this was a toy project I did specifically to try out Claude, and not something anyone is going to use, much less maintain!


> Just for fun, once I had played a bit with it like that, I just told it to finish the application with some vague Jira-epic level instructions on what I wanted in it and then fed it the errors it got.

Would you finish the application with "some vague Jira-epic level instructions"? Or, even if you don't formally make tickets in Jira for them, do you go from vague Jira-epic-sized notions to ticket-sized items? If I had a mind-control helmet that forced you to just write code and didn't let you break down that Jira epic in your thoughts, do you think the code would be any good? I don't think mine would be.

So then, why does it seem reasonable that Claude would be any good given such a mental straitjacket? Use planning mode, the keyword "ultrathink", and the phrase "do not write code"; have it break down the vague Jira epic into ticket-sized items, then break those into sub-tickets that are byte-sized tasks, and only then have it get to work.
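
Concretely, the first prompt in that chain is something like the following (the exact wording is just an illustration, not a magic formula):

    Plan only, do not write code yet. ultrathink.
    1. Break the epic below into ticket-sized items.
    2. Split each ticket into small, independently verifiable tasks.
    3. Wait for my review of the plan before implementing anything.

    <paste the Jira epic here>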


I mean, I didn't really expect it to work, I just wanted to see what would happen. I'd had pretty good results thus far and wondered how far I could push it. Jira-epic-style prompts was, not surprisingly, pushing it too far.

It did manage to get the application working though, with only a couple of "this thing broke, plz fix" style prompts, and it did better at fulfilling my intention than I'd thought it would, given how vague I was.

My point was that if you're going to build an actual product, you should probably not use Claude in that way. Break the epics down into smaller, more manageable chunks, however, and Claude can do an amazing job! I'll definitely keep experimenting with it this way; it's way better than fully manual coding, or at least that's my initial impression after about a week of experimentation!


My personal reason for switching some years ago was the excellent remote session support via ssh.

I haven't reevaluated that choice in a while, but that plus LSP support (and to a lesser extent ML Auto-complete) are must-haves for me nowadays.


Datacenter GPU dies cannot be binned for GeForce because they lack fixed-function graphics features. Raytracing acceleration in particular must take up a non-trivial amount of area that you wouldn't want to spend on a datacenter die. Not to mention the data fabric is probably pretty different.


I'm not saying they're binning between data center parts and 3060s, but within gaming, and between gaming and RTX Pro cards, there's binning.

As you cut SMs from a die you move from the 3090 down the stack, for instance. That’s yield management right there.


The A40, L40S and Blackwell 6000 Pro Server have RT cores. 3 datacenter GPUs.

If you want binning in action, the RTX ones other than the top ones are it. Look for the A30 too, which I was surprised got no successor. Either they had better yields on Hopper or they didn't get enough from the A30...


> once you open the door, the situation resets

That's the root-cause error in your thinking.

The prizes aren't reshuffled, and the host's choice of door depends both on the player's choice and on information that is hidden from the player. There's no way you can treat that as a reset.
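
If the asymmetry isn't obvious, a quick simulation makes it visible (a rough sketch in Kotlin; door indexing and the trial count are arbitrary):

    // Monty Hall: compare "stay" vs "switch" over many simulated rounds.
    import kotlin.random.Random

    fun main() {
        val trials = 100_000
        var stayWins = 0
        var switchWins = 0
        repeat(trials) {
            val prize = Random.nextInt(3)  // where the car is (hidden from the player)
            val pick = Random.nextInt(3)   // player's initial choice
            // Host opens a goat door: never the player's pick, never the prize.
            val opened = (0..2).filter { it != pick && it != prize }.random()
            // Switching means taking the one remaining closed door.
            val switched = (0..2).first { it != pick && it != opened }
            if (pick == prize) stayWins++
            if (switched == prize) switchWins++
        }
        println("stay wins:   ${stayWins.toDouble() / trials}")   // ~0.33
        println("switch wins: ${switchWins.toDouble() / trials}") // ~0.67
    }

Switching wins exactly when the initial pick was wrong, which is 2/3 of the time; the host opening a door doesn't reset anything.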


> HPC platforms

GPUs don't reorder instructions at all.

