Similar todos
installed Cody in Cursor so that I can use Llama 3.1 and Gemma 2 via Ollama #astronote #leifinlavida

check out Llama 3.1 #life

🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research
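A minimal sketch of the local tool-calling loop this refers to, against Ollama's `/api/chat` endpoint (which accepts a `tools` list of JSON-schema function specs). The `get_weather` tool, the registry, and the hard-coded `reply` dict are illustrative stand-ins, not real Ollama output:

```python
import json

# Hypothetical local tool the model may choose to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real lookup

# JSON-schema function spec, in the shape Ollama's /api/chat `tools` field expects.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

REGISTRY = {"get_weather": get_weather}

def run_tool_calls(message: dict) -> list[str]:
    """Execute each tool call found in a chat response message."""
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        args = fn["arguments"]
        if isinstance(args, str):  # some clients return arguments as JSON text
            args = json.loads(args)
        results.append(REGISTRY[fn["name"]](**args))
    return results

# Illustrative shape of a tool-calling reply message (server call omitted here):
reply = {"tool_calls": [{"function": {"name": "get_weather",
                                      "arguments": {"city": "Oslo"}}}]}
print(run_tool_calls(reply))  # ['Sunny in Oslo']
```

In a full loop, each tool result would be appended back to the conversation as a `tool` role message and the chat re-sent so the model can compose a final answer.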

🤖 got llama-cpp running locally 🐍

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

finally install Ollama with Llama 3 #life

try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/

switch #therapistai to Llama 3.1

wrote a guide on llama 3.2 #getdeploying

#thecompaniesapi ran my phi3-128k flow using llama3.1 and the results are mind-blowing. It's insane how good Llama is at conserving context and original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI; about to merge a month of work and then hop on fine-tuning.

got llamacode working locally and it's really good

Playing with llama2 locally and running it for the first time on my machine

prototype a simple autocomplete using local llama2 via Ollama #aiplay
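A rough sketch of what such an autocomplete prototype could look like against Ollama's `/api/generate` endpoint on the default local port. The payload fields match Ollama's generate API; the model name, limits, and stop sequence are assumptions for illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def completion_request(prefix: str, model: str = "llama2") -> dict:
    """Build a single-shot, non-streaming completion payload for POST /api/generate."""
    return {
        "model": model,
        "prompt": prefix,
        "stream": False,
        "options": {
            "num_predict": 24,     # keep suggestions short
            "temperature": 0.2,    # low temperature for stable completions
            "stop": ["\n"],        # cut the suggestion at end of line
        },
    }

def autocomplete(prefix: str) -> str:
    """Send the prefix to a locally running `ollama serve` and return the suggestion."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(completion_request(prefix)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = completion_request("The capital of France is")
print(payload["options"]["stop"])  # ['\n']
```

An editor integration would call `autocomplete()` on a debounce timer with the text before the cursor as the prefix.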

🤖 played with Aider and it's mostly working with Ollama + Llama 3.1 #research

more fun with LLAMA2 and figuring out how to better control/predict stable output

FINALLY! Made the canvas work. First time using a combo of Llama 3.1, Claude Sonnet 3.5 and ChatGPT, but the trickiest parts were mostly solved by Llama 3.1. Looks like Claude is better for coding with more conventions, not more uncommon stuff like canvas, while I'm pleasantly surprised Llama 3.1 can deliver on it! Now, what should I call this new project... #indiejourney

🤖 Updated some scripts to use Ollama's latest structured output with Llama 3.3 (latest) and fell back to Llama 3.2. I drop from >1 minute with 3.3 down to 2 to 7 seconds per request with 3.2. I can't see a difference in the results. For small projects 3.2 is the better path. #research
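For context, Ollama's structured output works by passing a JSON schema in the `format` field of `/api/chat`, which constrains the model's reply to valid JSON. A minimal sketch; the schema, model name, and sample reply are illustrative, not taken from the scripts above:

```python
import json

# Illustrative schema the model's reply must conform to.
SUMMARY_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
}

def structured_chat_payload(text: str, model: str = "llama3.2") -> dict:
    """Payload for POST /api/chat; `format` carries the JSON schema
    that Ollama uses to constrain decoding."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": f"Summarize: {text}"}],
        "format": SUMMARY_SCHEMA,
        "stream": False,
    }

def parse_reply(content: str) -> dict:
    """The constrained reply arrives as JSON text in message.content."""
    data = json.loads(content)
    missing = [k for k in SUMMARY_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

print(parse_reply('{"title": "Ollama notes", "tags": ["llm"]}')["title"])
```

Swapping between 3.3 and 3.2 as described above is then just a change to the `model` argument, since the schema does the heavy lifting.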

use llama3 70b to create transcript summary #spectropic
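Long transcripts typically need a map-reduce pass: split into context-sized chunks, summarize each, then summarize the summaries. A sketch of the chunking side, assuming a word-count budget; the prompt wording and limits are illustrative, not the actual #spectropic pipeline:

```python
def chunk_text(text: str, max_words: int = 400) -> list[str]:
    """Split a transcript into word-bounded chunks that fit the context window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summary_prompt(chunk: str) -> str:
    """Per-chunk prompt; each would be sent to llama3:70b via Ollama."""
    return ("Summarize this transcript excerpt in 2-3 sentences, "
            "keeping speaker names and decisions:\n\n" + chunk)

# The per-chunk summaries are then concatenated and summarized once more
# in a final "reduce" call to produce the overall transcript summary.
chunks = chunk_text("word " * 1000)
print(len(chunks))  # 3 chunks of <= 400 words each
```

A token-based splitter would be more precise than word counts, but word counts are a workable first approximation.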

🤖 more working with Ollama and Llama 3.1 and working on a story writer as a good enough demo. #research

📝 prototyped an llm-ollama plugin tonight. models list and it talks to the right places. prompts need more work.