Thanks Ben. I looked into Property Hub but they all have a minimum 1-year contract. It's also hard to find apartments on Airbnb, and they cost 2x the price like you mentioned.
Found one a 10-minute walk from On Nut for around $550/mo: www.airbnb.com/rooms/12411785…. Do you think this is a good deal?
If it's making money, or if you have validated the idea, go for it (depends on how much your project is making and its potential). If not, you can wait.
Seems like you haven't launched it yet. The .com is available for sale, so you could start with other extensions and buy it later on.
The other option is to see if you can find a new name with an available .com.
You can also check WHOIS to find out when the domain was registered, when it expires, and how long the current owner has held it. If it's expiring soon and you think it might not be renewed, you can wait and register it for $10 when it drops.
btw the llama3 requests time out most of the time and it's too slow. It has only worked once or twice. When I run it in the terminal it's fast and always works.
It worked again when I checked "stream response". I had unchecked it and it stopped working. But it is still slow compared to the terminal.
Did you use Ollama or how are you using it?
Yes, I used Ollama llama3 8b. The speed feels slow compared to using it directly from the terminal.
I see. BoltAI currently uses the OpenAI-compatible server from Ollama. Maybe that's why it's slower than querying the model directly.
I will do more benchmarking and maybe switch to direct connection in the future.
Hey Daniel, just purchased the product. Is there a way to change all the assistants' model to GPT-4o? I see it's GPT-3.5 Turbo right now.
Yogesh, you can follow this guide to use a custom AI service or a different model. You can also bulk edit them.
Ended up spending $1000 on rent for a 1-bedroom apartment in Phra Khanong, right next to the BTS.