Similar todos
Run openllm dolly-v2 on a local Linux server
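A rough sketch of what querying that server could look like, assuming OpenLLM 0.x's /v1/generate endpoint on its default port 3000 (newer OpenLLM releases serve an OpenAI-compatible API instead, so adjust the route to your version):

```python
# Hypothetical sketch: query a Dolly-v2 model started with
# `openllm start dolly-v2`. Assumes the OpenLLM 0.x HTTP API
# (/v1/generate on port 3000); newer versions differ.
import requests

resp = requests.post(
    "http://localhost:3000/v1/generate",
    json={
        "prompt": "Explain what Dolly-v2 is in one sentence.",
        "llm_config": {"max_new_tokens": 128},  # config shape per OpenLLM 0.x
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```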
read up on LLM embeddings to start building something new with Ollama
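A small starting point, assuming a local Ollama server on its default port 11434 and an embedding-capable model; the nomic-embed-text tag here is just an example:

```python
# Sketch: get embeddings from Ollama's /api/embeddings endpoint and
# compare two texts with cosine similarity.
import requests

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

print(cosine(embed("local llama server"), embed("running an LLM on my machine")))
```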
Playing with llama2 locally and running it for the first time on my machine
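A first run can be as small as streaming one completion from the local server; this sketch assumes Ollama's default port 11434 and the llama2 model tag:

```python
# First-run sketch: stream a completion from a local Ollama server
# (after `ollama pull llama2`). Ollama streams one JSON object per line.
import json
import requests

with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Say hello from my machine."},
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)
```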
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
Starting up local server #olisto
Try loading #monkeyisland in my own local LLM
Just finished and published a web interface for Ollama #ollamachat
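Not the #ollamachat code itself, but a minimal sketch of the core idea: a tiny Flask route that relays a chat request to a local Ollama server and streams the reply back. The route, port, and model tag are assumptions:

```python
# Minimal web-interface sketch: forward chat messages to Ollama's
# /api/chat endpoint and stream the response text to the browser.
import json
import requests
from flask import Flask, Response, request

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/chat"

@app.post("/chat")
def chat():
    payload = {
        "model": "llama2",  # illustrative model tag
        "messages": request.json["messages"],
        "stream": True,
    }
    upstream = requests.post(OLLAMA_URL, json=payload, stream=True)

    def relay():
        # Ollama streams one JSON object per line; forward each
        # chunk's message content as plain text.
        for line in upstream.iter_lines():
            if line:
                chunk = json.loads(line)
                yield chunk.get("message", {}).get("content", "")

    return Response(relay(), mimetype="text/plain")

if __name__ == "__main__":
    app.run(port=8080)
```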
⬆️ upgraded ollama and tried out some new features
prototype a simple autocomplete using local llama2 via Ollama #aiplay
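One way such a prototype could work: send the text typed so far as a raw prompt and ask for a short, low-temperature continuation. The model tag and option values below are illustrative, not tuned:

```python
# Naive autocomplete sketch against a local Ollama server: complete
# the user's prefix with a short, predictable suggestion.
import requests

def complete(prefix: str, model: str = "llama2") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prefix,
            "stream": False,
            "options": {
                "num_predict": 16,   # keep suggestions short
                "temperature": 0.2,  # keep suggestions predictable
                "stop": ["\n"],      # stop at end of line
            },
        },
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(complete("The quickest way to run a local LLM is "))
```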
Shipped BoltAI v1.13.6, use AI Command with local LLMs via Ollama 🥳 #boltai
Set up new local WordPress install #olisto
Set up new local WP install #olisto
#graphite Install quill and test
✏️ I wrote and published my notes on using the Ollama service micro.webology.dev/2024/06/11…
installed Cody in Cursor so that I can use llama3.1 and gemma2 via Ollama #astronote #leifinlavida
got llamacode working locally and it's really good