Similar todos
implemented ollama local models #rabbitholes

🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research

📝 prototyped an llm-ollama plugin tonight. Models list and it talks to the right places, but prompts need more work.

Playing with llama2 locally and running it for the first time on my machine

try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/

read up on llm embedding to start building something new with ollama
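A minimal sketch of that embedding-with-Ollama starting point, using Ollama's documented `/api/embed` HTTP endpoint via the standard library; the model name `nomic-embed-text` is an assumption, and the cosine helper is just for comparing the vectors you get back:

```python
import json
import math
import urllib.request

# Ollama's local embedding endpoint (default port).
OLLAMA_URL = "http://localhost:11434/api/embed"


def embed(texts, model="nomic-embed-text"):
    """Return one embedding vector per input string (model name is an assumption)."""
    body = json.dumps({"model": model, "input": texts}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embeddings"]


def cosine(a, b):
    """Cosine similarity between two vectors, for ranking matches."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

With a local Ollama running, `embed(["some note", "another note"])` returns two vectors you can rank against a query vector with `cosine`.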

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

Test automating some translations with Llama.

installed cody in cursor so that i can use llama3.1 and gemma2 via ollama #astronote #leifinlavida

🤖 played with Aider and it mostly works with Ollama + Llama 3.1 #research

check out Llama 3.1 #life

finally installed ollama with llama3 #life

Install Ollama and run an LLM locally #life

got llamacode working locally and it's really good

🤖 played around with adding extra context in some local Ollama models. Trying to test out some real-world tasks I'm tired of doing. #research

🤖 played with Ollama's tool calling with Llama 3.2 to create a calendar management agent demo #research
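A sketch of what that tool-calling loop can look like against Ollama's `/api/chat` endpoint; `add_event` and its schema are hypothetical stand-ins for the calendar side of the demo:

```python
import json
import urllib.request

# Ollama's local chat endpoint (default port).
OLLAMA_URL = "http://localhost:11434/api/chat"


def add_event(title, date):
    """Hypothetical calendar action; a real agent would call a calendar API here."""
    return f"added '{title}' on {date}"


# Tool schema the model can choose to call (names here are illustrative).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "add_event",
        "description": "Add an event to the calendar",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["title", "date"],
        },
    },
}]


def build_request(prompt):
    """Build the JSON body for a non-streaming tool-calling chat request."""
    return {
        "model": "llama3.2",
        "messages": [{"role": "user", "content": prompt}],
        "tools": TOOLS,
        "stream": False,
    }


def run(prompt):
    """Send the prompt and dispatch any tool call the model makes."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    for call in reply["message"].get("tool_calls") or []:
        if call["function"]["name"] == "add_event":
            # Ollama returns the arguments as an object, so they unpack directly.
            return add_event(**call["function"]["arguments"])
    return reply["message"]["content"]
```

With Ollama running, `run("Put lunch with Sam on my calendar for 2025-03-01")` should route through `add_event` when the model decides to call the tool.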

more fun with LLAMA2 and figuring out how to better control/predict stable output

🤖 Updated some scripts to use Ollama's latest structured output with Llama 3.3 (latest), then fell back to Llama 3.2. Requests drop from over a minute with 3.3 down to 2 to 7 seconds with 3.2, and I can't see a difference in the results. For small projects, 3.2 is the better path. #research
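A sketch of the structured-output setup those scripts use: Ollama (0.5+) accepts a JSON schema in the `format` field of `/api/chat` and constrains the reply to it. The schema fields here are hypothetical, and the point of the post is that swapping the `model` string between `llama3.3` and `llama3.2` is the whole change:

```python
import json
import urllib.request

# Ollama's local chat endpoint (default port).
OLLAMA_URL = "http://localhost:11434/api/chat"

# Hypothetical example schema; the reply must be a JSON object matching it.
SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "sentiment": {"type": "string",
                      "enum": ["positive", "negative", "neutral"]},
    },
    "required": ["name", "sentiment"],
}


def build_request(prompt, model="llama3.2"):
    """Build a non-streaming chat request with schema-constrained output."""
    return {
        "model": model,  # swap to "llama3.3" for the slower, larger model
        "messages": [{"role": "user", "content": prompt}],
        "format": SCHEMA,
        "stream": False,
    }


def extract(prompt, model="llama3.2"):
    """Send the prompt and parse the schema-shaped JSON reply into a dict."""
    body = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return json.loads(reply["message"]["content"])
```

Because the output shape is fixed by the schema, downstream code stays identical regardless of which model handled the request.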

🤖 more work with Ollama and Llama 3.1, building a story writer as a good-enough demo. #research

🤖 Created an Ollama + Llama 3.2 version of my job parser to compare to ChatGPT. It's not bad at all, but not as good as GPT-4. #jobs