
Google Translate simply doesn't use a GPU for translation. It's as simple as that. There is a huge jump in cost associated with using a GPU.


Google Translate uses TPUs, and has done so since it switched to neural models: https://cloud.google.com/blog/products/ai-machine-learning/a...


What are you even talking about?

You don’t just “use a GPU.” Software doesn’t get better by magically throwing it at a GPU. Moreover, you can’t just run any old software on a GPU; it has to be built for it (rough sketch below).

Even then, the GPU is just a speed increase, not a magic make-it-better box.

That’s like saying any random text editor would be a better translator if it ran on GPUs.

EDIT: Google also literally builds its own acceleration hardware, so suggesting that it can’t afford GPUs (which it already owns for GCP) for Google Translate is weird.
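
A minimal sketch of what “built for it” means, assuming PyTorch (the library choice is mine, not from the thread): the data and the operations have to be explicitly placed on the device, and only massively parallel numeric work actually benefits.

    # Code only uses a GPU if its data/ops are explicitly put there,
    # and only parallel numeric work (big matrix multiplies) gets faster.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A large matrix multiply maps well onto thousands of GPU cores...
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b  # runs on the GPU only because the tensors were created there

    # ...whereas ordinary sequential code (a text editor's string handling,
    # say) never touches the device and gains nothing from it existing.
    print(c.device)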


No... I mean LLMs use a GPU. Hence the difference in cost.

Changing Google Translate to use an LLM would make it much more expensive for Google to run.

This is why you see a different cost structure for using "AI". LLMs can't realistically use the CPU for anything serious.


You have it backwards. For any given amount of compute needed, GPUs are a lot cheaper than CPUs. But GPUs can't run all the code you can run on a CPU.

The reason models that use GPUs cost more to run is that they tend to be A LOT more compute-intensive. However, if you run inference for the same models on CPUs, it will be both much more expensive than on GPUs (or on specialized tensor silicon) and also slower.
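
A rough sketch of that point, assuming PyTorch and Hugging Face transformers, with "gpt2" as a stand-in small model (those choices are mine, purely for illustration): the same inference does the same compute on either device, but the wall-clock time differs dramatically, and that gap is where the cost difference comes from.

    # Same model, same inference, timed on CPU and (if present) GPU.
    import time
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # placeholder small model, purely illustrative
    tok = AutoTokenizer.from_pretrained(name)
    inputs = tok("Translate: hello world", return_tensors="pt")

    for device in ["cpu", "cuda"]:
        if device == "cuda" and not torch.cuda.is_available():
            continue
        model = AutoModelForCausalLM.from_pretrained(name).to(device)
        batch = {k: v.to(device) for k, v in inputs.items()}
        start = time.time()
        with torch.no_grad():
            model.generate(**batch, max_new_tokens=50)
        print(device, f"{time.time() - start:.2f}s")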



