
What's the best Mac app for OpenAI?

Looking for a Mac app that lets me use my own OpenAI API key. Please recommend one if you use any.

I have found a few good ones: boltai.com and getbeam.ai. It seems like the BoltAI founder is shipping frequently and isn't too worried about the ChatGPT app, so he'll probably keep improving it, and I was thinking of buying it. Let me know if there's anything better than BoltAI.


@daniel_nguyenx is here on WIP, so you can get info about BoltAI from the source. 🙂

Hey @yogesh, Daniel from BoltAI here.

I can't really answer that question: there are dozens of AI clients now, and I've lost track of the competition.

But here are a few things about BoltAI I strongly believe in:

  • It supports more AI services & models than just OpenAI. My customers often alternate between GPT-4 and Claude 3 Opus for coding, use Groq for its insanely fast inference, or run a local model with Ollama. It's easy to switch in BoltAI.

  • It supports a "context-aware" AI Command. You highlight some content, press a keyboard shortcut, then ask AI about it. You can build your own AI tools with this: text summarization, content rewriting, translation…

  • Quickly take a screenshot and ask AI about it. Currently it supports both GPT-4 and Claude 3; the next version will add support for a local model (LLaVA).

  • A more powerful plugin system. This is still in progress, but imagine AI helping you with local tasks such as renaming files, photo/video manipulation, etc. It would be super powerful.

  • It’s one-time payment, not a subscription.

Happy to answer any questions 😊

BTW here is the list of all features with demo: docs.boltai.com/docs/features

Hey Daniel, just purchased the product. Is there a way to change all the assistants' model to GPT-4o? I see it's GPT-3.5 Turbo right now.

Yogesh, you can follow this guide to use a custom AI service or a different model. You can also bulk edit.

docs.boltai.com/docs/ai-comma…

BTW, the llama3 requests time out most of the time and are too slow. It only worked once or twice. When I run it in the terminal it's fast and works every time.

It worked again when I checked "stream response". I had unchecked it, and that's when it stopped working. But it is still slow compared to the terminal.
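For context, the difference the "stream response" toggle makes can be sketched like this. This is a hedged illustration against Ollama's native API on its default port 11434, not BoltAI's actual code: without streaming, the client gets nothing until the whole completion is generated, so a slow local model can blow past the HTTP read timeout; with streaming, tokens arrive as NDJSON lines and the connection stays alive.

```python
import json

# Ollama's default native endpoint (assumed default install, port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

# Non-streaming: one request, one response. Nothing comes back until the
# entire completion is done, so slow generation can hit the client timeout.
non_streaming_payload = {"model": "llama3:8b", "prompt": "Hello", "stream": False}

# Streaming: the server sends one JSON object per line as tokens are
# produced, so data flows continuously and timeouts rarely trigger.
streaming_payload = {"model": "llama3:8b", "prompt": "Hello", "stream": True}

def collect_stream(lines):
    """Reassemble a streamed Ollama response from its NDJSON chunks."""
    text = ""
    for line in lines:
        chunk = json.loads(line)
        text += chunk.get("response", "")
        if chunk.get("done"):
            break
    return text

# Simulated stream, in the shape Ollama emits:
simulated = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
print(collect_stream(simulated))  # -> Hello!
```

That would explain why unchecking the toggle made requests "stop working" while the terminal (which always streams) stayed fast.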

Are you using Ollama, or how are you running it?

Yes, I used Ollama with llama3 8B. The speed feels slow compared to using it directly from the terminal.

I see. BoltAI currently uses the OpenAI-compatible server from Ollama. Maybe that's why it's slower than querying the model directly.

I will do more benchmarking and maybe switch to direct connection in the future.
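For the curious, the two ways of talking to a local Ollama server look roughly like this. A sketch of the request shapes only, assuming a default install on localhost:11434; any speed difference between them is unverified here:

```python
import json

# Ollama serves two HTTP APIs on the same port (11434 by default).

# 1. The OpenAI-compatible endpoint (what BoltAI currently uses): any
#    OpenAI client can target it by overriding the base URL.
openai_compatible = {
    "url": "http://localhost:11434/v1/chat/completions",
    "payload": {
        "model": "llama3:8b",
        "messages": [{"role": "user", "content": "Hello"}],
    },
}

# 2. The native endpoint (the "direct connection" mentioned above).
native = {
    "url": "http://localhost:11434/api/chat",
    "payload": {
        "model": "llama3:8b",
        "messages": [{"role": "user", "content": "Hello"}],
    },
}

# Same model, same messages; only the route and response format differ,
# so any overhead would come from the compatibility layer, not the model.
for api in (openai_compatible, native):
    print(api["url"], json.dumps(api["payload"]))
```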

Maybe I'm late, but I want to add Raycast to the list. I've been trying it for a couple of months and I think it's great.