There are some people on r/LocalLlama using it [0]. The consensus seems to be that while it does have more unified RAM for running models (up to half a terabyte), token generation speed can be slow enough that it might just be better to get an Nvidia or AMD machine.

[0] https://old.reddit.com/r/LocalLLaMA/search?q=mac+studio&rest...
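For intuition on why generation is slow despite all that RAM: single-stream decode is roughly memory-bandwidth-bound, since the weights have to be streamed through once per token. A minimal sketch of that back-of-envelope estimate (the bandwidth and model-size figures below are illustrative assumptions, not measurements):

  # Rough upper bound: tokens/sec ~ usable memory bandwidth / bytes read per token,
  # where bytes per token is approximately the quantized model weight size.

  def est_tokens_per_sec(params_b: float, bytes_per_param: float, bandwidth_gbs: float) -> float:
      """Upper-bound estimate of single-stream decode speed."""
      model_bytes_gb = params_b * bytes_per_param  # weights touched once per token
      return bandwidth_gbs / model_bytes_gb

  # Illustrative: a 70B model at 4-bit (~0.5 bytes/param) on ~800 GB/s unified memory.
  print(f"{est_tokens_per_sec(70, 0.5, 800):.1f} tok/s upper bound")  # ~22.9

Real throughput lands below this bound, and discrete GPUs with much higher memory bandwidth pull ahead on any model small enough to fit in their VRAM; the unified-RAM machines mainly win when the model doesn't fit anywhere else.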

Thanks for the link. I'll take a look.
