Similar todos
installed cody in cursor, so that i can use llama3.1 and gemma2 via ollama #astronote #leifinlavida

check out Llama 3.1 #life

trying out cursor #knifegeek

🤖 got llama-cpp running locally 🐍

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

finally install ollama with llama3 #life

try client side web based Llama 3 in JS #life webllm.mlc.ai/

switch #therapistai to Llama 3.1

get amazed by cursor, late to the party? 🎉

got llamacode working locally and it's really good

Playing with llama2 locally and running it for the first time on my machine

Explore llama 3 8b for embeddings
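Exploring embeddings with a local model could look like this minimal sketch, assuming Ollama's `/api/embeddings` endpoint on its default port with a pulled `llama3:8b` model (the function names are my own, not from the original post):

```python
import json
import math
import urllib.request

EMBED_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local endpoint

def embed(text: str, model: str = "llama3:8b") -> list[float]:
    """Ask a locally running Ollama instance for an embedding vector."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        EMBED_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity for comparing two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

Comparing `cosine_similarity(embed(a), embed(b))` across document pairs is the usual way to sanity-check whether an 8B chat model produces usable embeddings.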

Giving Cursor.sh a test drive #chores

prototype a simple autocomplete using local llama2 via Ollama #aiplay
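A prototype like this might be as small as one request per keystroke pause, a sketch assuming Ollama's default `/api/generate` endpoint and payload shape (helper names are hypothetical):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_completion_request(prefix: str, model: str = "llama2", max_tokens: int = 32) -> dict:
    """Build a non-streaming /api/generate payload asking the model to
    continue `prefix` with a short, low-temperature completion."""
    return {
        "model": model,
        "prompt": prefix,
        "stream": False,
        "options": {"num_predict": max_tokens, "temperature": 0.2},
    }

def autocomplete(prefix: str) -> str:
    """Send the editor prefix to Ollama and return the suggested continuation."""
    payload = json.dumps(build_completion_request(prefix)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Capping `num_predict` keeps latency low enough for inline suggestions.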

🤖 played with Aider and it's mostly working with Ollama + Llama 3.1 #research

try groq.com/ #fajarsiddiq

realize #therapistai with Llama3-70B actually understands WTF is going on now
Added support for function calling for Groq #boltai
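Since Groq exposes an OpenAI-compatible chat completions API, function calling amounts to attaching JSON-schema tool definitions to the request. A minimal sketch, where the tool (`get_weather`) and the model name are my own illustrative assumptions:

```python
def weather_tool_spec() -> dict:
    """A hypothetical example tool, declared as an OpenAI-style function schema."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

def build_chat_request(user_msg: str, model: str = "llama-3.1-70b-versatile") -> dict:
    """Assemble a request body for Groq's /openai/v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [weather_tool_spec()],
        "tool_choice": "auto",
    }
```

With `tool_choice: "auto"`, the model decides per message whether to return a normal reply or a `tool_calls` entry with arguments matching the schema.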

trying to stream response from llama #autorepurposeai
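Streaming from a local llama boils down to parsing newline-delimited JSON: with `"stream": true`, Ollama's `/api/generate` emits one JSON object per line, each carrying a `"response"` fragment, with `"done": true` on the last. A minimal parser sketch (the function name is my own):

```python
import json
from typing import Iterable, Iterator

def stream_tokens(ndjson_lines: Iterable[bytes]) -> Iterator[str]:
    """Yield text fragments from an Ollama NDJSON stream, stopping at the
    final chunk marked "done": true."""
    for raw in ndjson_lines:
        if not raw.strip():
            continue  # skip keep-alive blank lines
        chunk = json.loads(raw)
        if chunk.get("response"):
            yield chunk["response"]
        if chunk.get("done"):
            break
```

In practice the lines come from iterating over the HTTP response body; printing each yielded fragment as it arrives gives the familiar typewriter effect.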

more fun with LLAMA2 and figuring out how to better control/predict stable output
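One lever for more stable output is pinning the sampling options: a sketch assuming Ollama's `options` fields (`temperature`, `seed`, `top_k`), with helper names of my own:

```python
def deterministic_options(seed: int = 42) -> dict:
    """Sampling options that pin down generation: temperature 0 makes decoding
    effectively greedy, and a fixed seed keeps any remaining sampling repeatable."""
    return {"temperature": 0, "seed": seed, "top_k": 1}

def build_request(prompt: str, model: str = "llama2") -> dict:
    """Request body for /api/generate with deterministic sampling applied."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": deterministic_options(),
    }
```

With identical prompts and options, repeated runs should produce the same output, which makes the model's behavior much easier to predict and test.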