Similar todos
installed Cody in Cursor, so that I can use Llama 3.1 and Gemma 2 via Ollama #astronote #leifinlavida
check out Llama 3.1 #life
trying out Cursor #knifegeek
🤖 got llama-cpp running locally 🐍
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
switch #therapistai to Llama 3.1
got llamacode working locally and it's really good
Playing with Llama 2 locally and running it for the first time on my machine
Explore Llama 3 8B for embeddings
Giving Cursor.sh a test drive #chores
prototype a simple autocomplete using local Llama 2 via Ollama #aiplay
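A prototype like that can be as small as one POST to Ollama's `/api/generate` endpoint. The sketch below only builds the request body (the model name, option values, and prompt are illustrative assumptions); actually sending it is shown in the usage note so the snippet stands alone:

```python
import json

# Default local Ollama endpoint (assumption: stock install, no custom port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_autocomplete_payload(prefix: str, model: str = "llama2") -> dict:
    """Build a /api/generate request body asking the model to continue `prefix`."""
    return {
        "model": model,
        "prompt": prefix,
        "stream": False,  # one JSON response instead of NDJSON chunks
        "options": {
            "temperature": 0.2,  # low temperature keeps completions predictable
            "num_predict": 32,   # cap completion length, enough for autocomplete
        },
    }

payload = build_autocomplete_payload("def fibonacci(n):")
print(json.dumps(payload, indent=2))
```

Sending it would then be a single `requests.post(OLLAMA_URL, json=payload)` and reading `"response"` from the returned JSON.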
🤖 played with Aider and it's mostly working with Ollama + Llama 3.1 #research
realize #therapistai with Llama 3 70B actually understands WTF is going on now
Added support for function calling for Groq #boltai
trying to stream responses from Llama #autorepurposeai
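With `"stream": true`, Ollama's `/api/generate` emits newline-delimited JSON chunks, each carrying a `"response"` fragment and a `"done"` flag. A minimal accumulator (assuming that chunk shape; the sample chunks below are fabricated for illustration) looks like:

```python
import json

def accumulate_stream(ndjson_lines):
    """Join the `response` fragments from Ollama-style NDJSON stream chunks."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals the stream is finished
            break
    return "".join(parts)

# Simulated chunks in the shape /api/generate emits when stream=true
chunks = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
print(accumulate_stream(chunks))  # → Hello!
```

In a real client the same loop runs over `requests.post(url, json=payload, stream=True).iter_lines()` instead of a list.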
more fun with Llama 2, figuring out how to better control and predict stable output