Similar todos

Finally install Ollama with Llama 3 #life
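
A minimal sketch of that first run, assuming Ollama is installed (on Linux: curl -fsSL https://ollama.com/install.sh | sh) and the model pulled with ollama pull llama3; the local server listens on port 11434 by default:

```python
import requests

# One-off completion against the local Ollama server.
# Assumes `ollama pull llama3` has already downloaded the model.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
```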

Run OpenLLM Dolly v2 on a local Linux server
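
A rough sketch of talking to such a server, assuming a recent OpenLLM that exposes an OpenAI-compatible endpoint on its default port 3000; the dolly-v2 model tag here is an assumption:

```python
import requests

# Query a running OpenLLM server on its default port. Both the port and
# the "dolly-v2" model tag are assumptions; use whatever the server was
# started with.
resp = requests.post(
    "http://localhost:3000/v1/chat/completions",
    json={
        "model": "dolly-v2",
        "messages": [{"role": "user", "content": "Say hello from the server."}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```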

Read up on LLM embeddings to start building something new with Ollama
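
A minimal sketch using Ollama's /api/embeddings endpoint; nomic-embed-text is just one embedding model you might have pulled locally:

```python
import math
import requests

def embed(text: str) -> list[float]:
    # Ask the local Ollama server for an embedding vector.
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: higher means the two texts are closer in meaning.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(embed("running llms locally"), embed("self-hosting language models")))
```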

Playing with Llama 2 locally and running it for the first time on my machine
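
The first run is usually just ollama run llama2 in a terminal; the same thing over the HTTP API, streaming tokens as they arrive, looks roughly like this:

```python
import json
import requests

# Stream a Llama 2 response token by token from the local Ollama server.
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Tell me a fun fact about llamas."},
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)  # one JSON object per line
            print(chunk.get("response", ""), end="", flush=True)
print()
```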

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

Starting up a local server #olisto

🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research
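
A sketch of that tool-calling flow against Ollama's /api/chat endpoint; the get_weather tool is a hypothetical example, not part of any API:

```python
import requests

# Declare a tool the model may call; get_weather is a made-up example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.3",
        "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
        "tools": tools,
        "stream": False,
    },
)
resp.raise_for_status()
# If the model decided to call a tool, the call arrives here instead of text.
print(resp.json()["message"].get("tool_calls"))
```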

Install LM Studio, download the OpenHermes model, and run the LLM on localhost #life
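
LM Studio's built-in server speaks the OpenAI API on localhost:1234 by default; a minimal sketch, with the model identifier as a placeholder for whatever the app lists:

```python
import requests

# Chat with the model LM Studio is serving. The "openhermes" identifier is
# a placeholder; copy the exact model name from the LM Studio UI.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "openhermes",
        "messages": [{"role": "user", "content": "Hello from localhost!"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```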

Try loading #monkeyisland in my own local LLM

Implemented Ollama local models #rabbitholes

More Ollama experimenting #research

Just finished and published a web interface for Ollama #ollamachat
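
Not that published interface, but a minimal sketch of the idea: a tiny Flask backend that forwards a prompt to the local Ollama server and returns the reply for a web page to render:

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

@app.post("/chat")
def chat():
    # Forward the browser's prompt to Ollama and hand back the reply.
    prompt = request.json["prompt"]
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
    )
    resp.raise_for_status()
    return jsonify(reply=resp.json()["message"]["content"])

if __name__ == "__main__":
    app.run(port=5000)
```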

Prototype a simple autocomplete using local Llama 2 via Ollama #aiplay
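
A sketch of such an autocomplete, assuming llama2 is already pulled; raw mode skips the chat template, and a low temperature plus a stop sequence keep completions short and predictable:

```python
import requests

def complete(prefix: str, max_tokens: int = 16) -> str:
    # Continue the user's text rather than answering it as a chat message.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",
            "prompt": prefix,
            "raw": True,       # plain continuation, no chat template
            "stream": False,
            "options": {
                "temperature": 0.2,
                "num_predict": max_tokens,
                "stop": ["\n"],
            },
        },
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(complete("The quickest way to run an LLM locally is"))
```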

Set up a new local WordPress install #olisto

Set up a new local WP install #olisto

#graphite Install quill and test

✏️ I wrote and published my notes on using the Ollama service micro.webology.dev/2024/06/11…

Installed Cody in Cursor so that I can use Llama 3.1 and Gemma 2 via Ollama #astronote #leifinlavida

Got llamacode working locally and it's really good

Ran some local LLM tests 🤖