Similar todos
Install Ollama on Windows Subsystem for Linux
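A quick way to confirm an install like this worked, assuming Ollama's default port (11434) and the `requests` library: list the locally pulled models via the documented tags endpoint.

```python
import requests

# GET /api/tags lists the models Ollama has pulled locally;
# an HTTP 200 here confirms the server is running inside WSL.
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model["name"])
```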

finally install ollama with llama3 #life
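Once `ollama pull llama3` has fetched the model, a minimal sketch of a one-off generation against Ollama's local REST API (the model name is assumed to match the pulled tag):

```python
import requests

# POST /api/generate returns the full completion when stream is False.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why run LLMs locally?", "stream": False},
)
print(resp.json()["response"])
```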

Run openllm dolly-v2 on a local Linux server

read up on LLM embeddings to start building something new with ollama
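A minimal sketch of the embedding step, assuming an embedding-capable model such as `nomic-embed-text` has been pulled; cosine similarity is the usual first building block on top of the vectors:

```python
import requests

def embed(text: str) -> list[float]:
    # POST /api/embeddings returns one vector for the given prompt.
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    return resp.json()["embedding"]

a, b = embed("local llms"), embed("running models on my own machine")

# Cosine similarity between the two vectors.
dot = sum(x * y for x, y in zip(a, b))
norm_a = sum(x * x for x in a) ** 0.5
norm_b = sum(y * y for y in b) ** 0.5
print(dot / (norm_a * norm_b))
```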

Playing with llama2 locally and running it for the first time on my machine

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

Starting up local server #olisto

set up ollama

Install LM Studio, download the OpenHermes model, and run the LLM on localhost #life
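LM Studio's local server speaks the OpenAI-compatible chat-completions format; a sketch of the localhost call, assuming the default port 1234 and that the model identifier matches whatever OpenHermes build was loaded:

```python
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "openhermes",  # assumed name; use the identifier LM Studio shows
        "messages": [{"role": "user", "content": "Say hello from localhost."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```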

try loading #monkeyisland in my own local LLM
Just finished and published a web interface for Ollama #ollamachat
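Not the #ollamachat code itself, but a minimal sketch of the shape such an interface takes: a small Flask endpoint that forwards a browser prompt to the local Ollama server (the model name and route here are assumptions):

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

@app.post("/chat")
def chat():
    # Forward the prompt to Ollama and hand the completion back as JSON.
    prompt = request.json["prompt"]
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    return jsonify(reply=resp.json()["response"])

if __name__ == "__main__":
    app.run(port=5000)
```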

⬆️ upgraded ollama and tried out some new features

prototype a simple autocomplete using local llama2 via Ollama #aiplay
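A sketch of the prototype idea: feed the text so far as the prompt and cap the continuation, assuming llama2 is pulled locally (`num_predict` and `stop` are standard Ollama generation options):

```python
import requests

def autocomplete(prefix: str) -> str:
    # Short, single-line continuation of the user's text.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",
            "prompt": prefix,
            "stream": False,
            "options": {"num_predict": 16, "stop": ["\n"]},
        },
    )
    return resp.json()["response"]

print(autocomplete("The quickest way to run a model locally is"))
```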

Shipped BoltAI v1.13.6, use AI Command with local LLMs via Ollama 🥳 #boltai

Set up new local WordPress install #olisto

Set up new local WP install #olisto

#graphite Install quill and test

✏️ I wrote and published my notes on using the Ollama service micro.webology.dev/2024/06/11…

installed cody in cursor, so that I can use llama3.1 and gemma2 via ollama #astronote #leifinlavida

got llamacode working locally and it's really good