
For most AI/ML work you need a GPU. My Mac Studio is doing fine running LLMs locally, but if you want to do heavy processing yourself I would just go to the cloud.

I know the GPU benefits. I'm exploring options :)

If that's your main purpose I would not get one (but as a general development Mac it's really good).

@klaaz0r We cross-posted; I explained my main purpose in the second comment. Thoughts on that?

My use case would just be running the models. So think running 100B+ parameter chat LLMs locally.
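As a rough sanity check on that use case, here is a back-of-envelope memory estimate for a 100B-parameter model at a few common quantization levels. This is a sketch only: it assumes the weights dominate memory use and ignores KV cache, activations, and runtime overhead.

```python
# Approximate memory needed just to hold the weights of an LLM,
# ignoring KV cache and runtime overhead (a simplifying assumption).

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 2**30 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(100, bytes_per_param):.0f} GB")
# fp16: ~186 GB, int8: ~93 GB, int4: ~47 GB
```

So even aggressively quantized, a 100B+ model wants on the order of 50 GB of unified memory before any context is loaded, which is why high-memory Mac Studio configurations come up in these threads.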
