Hacker News: datagram's comments

The fact that we're calling $500 GPUs "midrange" is proof that Nvidia's strategy is working.


What strategy? They charge more because manufacturing costs are higher: cost per transistor hasn't changed much since 28nm [0], but chips have more and more transistors. What do you think that does to the price?

[0]: https://www.semiconductor-digest.com/moores-law-indeed-stopp...


The strategy of marketing an expensive product as a normal one? Obviously.

If your product can't be cheap, it's a luxury product, not a day-to-day one.


It's mid range. The range shifted.


I think my TNT2 Ultra was $200. But Nvidia had dozens of competitors back then. 89 when it was founded! Now: AMD…


AMD cards are fine from a raw performance perspective, but Nvidia has built themselves a moat of software/hardware features like ray-tracing, video encoding, CUDA, DLSS, etc where AMD's equivalents have simply not been as good.

With their current generation of cards AMD has caught up on all of those things except CUDA, and Intel is in a similar spot now that they've had time to improve their drivers, so it's pretty easy now to buy a non-Nvidia card without feeling like you're giving anything up.


AMD RT is still slower than Nvidia's.


I have no experience using it, so I might be wrong, but AMD has ROCm, which includes something called HIP that should be comparable to CUDA. I believe there's also a way to automatically translate CUDA calls into HIP, so it should work without you needing to modify your code.


Consumer-card ROCm support is straight-up garbage, and the CUDA compatibility project was also killed.

AMD doesn't care about consumers anymore either. All the money in AI.


> AMD doesn't care about consumers anymore either. All the money in AI.

I mean, this also describes the quality of NVIDIA cards. And their drivers have been broken for the last two decades if you're not using Windows.


AMD "has" ROCm just like Intel "has" AVX-512


> I think it also has a way to automatically translate CUDA calls

I suspect the thing you're referring to is ZLUDA[0], which allows you to run CUDA code on a range of non-Nvidia hardware (for some value of "run").

[0] https://github.com/vosen/ZLUDA


For an extremely flexible value of "run" that you would be extremely unwise to allow anywhere near a project whose success you have a stake in.


To quote The Dude: "Well ... ummm ... that's ... ahh ... just your opinion, man." There are people successfully running it in production, but of course, depending on your code, YMMV.


It's mostly about AI training at this point, and the software for that only supports CUDA well.


It's not too difficult to use the TypeScript type checker on JS files, so it's possible to reap most of those benefits without having to introduce a compilation step.
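A minimal sketch of what that looks like (the function name is illustrative): the `// @ts-check` pragma opts a plain .js file into TypeScript's checker, so any editor running the TS language service (or `tsc --checkJs --noEmit`) flags type errors without a build step.

```javascript
// @ts-check
// The pragma above opts this plain .js file into TypeScript's checker;
// editors using the TS language service will flag errors inline.

function double(n) {
  return n * 2;            // arithmetic result, so TS infers a number
}

const x = double(21);
// x.toUpperCase();        // checker flags: toUpperCase doesn't exist on number
console.log(x);            // 42
```

The file stays valid JavaScript throughout, so it runs unchanged in Node or the browser.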


In my experience, most of the benefits of Typescript come from type-checking across call boundaries, the point where type-related bugs are most likely to be introduced due to each side of the call often being in different locations and contexts. And you can't get those benefits without explicitly typing function parameters.


If you really don't want the compilation step, you can use JSDoc and get almost the best of both worlds (not everything in TS is supported by JSDoc, but most essentials are).
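For instance, a hedged sketch (the `User`/`greet` names are made up for illustration) of typing a function's parameters via JSDoc, which the TypeScript checker understands in plain .js files:

```javascript
// @ts-check

/**
 * @typedef {Object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  return `Hello, ${user.name} (${user.age})`;
}

const msg = greet({ name: "Ada", age: 36 });
// greet({ name: "Ada" });  // checker flags: property "age" is missing
console.log(msg);
```

This gives you the cross-call-boundary checking mentioned above while the code remains runnable JavaScript.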


I used to use a similar extension in Chrome called wasavi, but I got burned one too many times by bugs in the extension causing me to lose all of the text I had been writing.


GhostText provides a live 2-way sync, so there’s no risk of losing anything.


In the same boat atm; reaching out to GitHub support and hoping that the lack of API access is just a permissions mixup.

Luckily I was able to screenshot/copy the text for one of my projects before refreshing the page.

Agreed that the 1-year limit makes no sense; it's just a few bits of text.


Flexbox was designed for these kinds of layouts, so that would also be an option here.


There is LosslessCut[1], though it's only designed to handle trimming and not general re-encoding.

[1]: https://github.com/mifi/lossless-cut


if you re-encoded then it wouldn't be lossless anymore?


I was thinking about this the other day; it would be really interesting to have big corporations whose profits depend on public-domain data. We might actually see lobbying to decrease copyright terms, to counter companies like Disney trying to extend copyright until the end of time.


As someone with a YouTube channel, from looking at my metrics it's pretty clear that YouTube is being held afloat by a) the fact that non-technical users can't easily block YouTube ads on mobile devices, and b) YouTube Premium.

A single user depriving YouTube of their revenue is inconsequential sure, but when hundreds of millions of people do it (like with blocking ads on desktop) it obviously runs the risk of making the entire company unviable. Hosting videos for free is a great way to lose a lot of money.


I think if you're going to show off and do clever impractical things, a personal site is a pretty good place for it.

That being said, scrolling that page with a regular mouse is incredibly frustrating.

