Similar todos
installed Cody in Cursor so that I can use llama3.1 and gemma2 via Ollama #astronote #leifinlavida
check out Llama 3.1 #life
🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research
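In case anyone wants to reproduce the local tool-calling setup: a minimal sketch using the Ollama Python client, assuming a recent ollama-python release; the model tag, tool schema, and the get_weather stub are illustrative placeholders, not the original setup.

```python
import ollama

def get_weather(city: str) -> str:
    # Stand-in tool; a real version would call a weather API.
    return f"Sunny and 22 degrees in {city}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="llama3.3",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# When the model wants a tool, it returns structured tool calls instead of prose.
for call in response.message.tool_calls or []:
    if call.function.name == "get_weather":
        print(get_weather(**call.function.arguments))
```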
🤖 got llama-cpp running locally 🐍
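For reference, a minimal sketch of loading a local GGUF model with llama-cpp-python; the model path and generation settings here are placeholder assumptions.

```python
from llama_cpp import Llama

# Point model_path at whatever GGUF file you have pulled locally.
llm = Llama(model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Q: Name three uses for a local LLM.\nA:",
    max_tokens=128,
    stop=["Q:"],
    echo=False,
)
print(out["choices"][0]["text"])
```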
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
switch #therapistai to Llama 3.1
wrote a guide on llama 3.2 #getdeploying
#thecompaniesapi ran my phi3-128k flow using llama3.1 and the results are mind-blowing; it's insane how good Llama is at preserving context and original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI; about to merge a month of work and then hop on fine-tuning
got llamacode working locally and it's really good
Playing with llama2 locally and running it for the first time on my machine
prototype a simple autocomplete using local llama2 via Ollama #aiplay
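A rough sketch of that autocomplete idea: send the text typed so far as a prompt and ask a local model for a short continuation through Ollama. The model tag and generation options are assumptions, not the original prototype.

```python
import ollama

def autocomplete(prefix: str, max_tokens: int = 20) -> str:
    # Low temperature keeps completions short and predictable.
    response = ollama.generate(
        model="llama2",
        prompt=prefix,
        options={"num_predict": max_tokens, "temperature": 0.2},
    )
    return response.response  # the raw completion text

print(autocomplete("The quickest way to parse JSON in Python is "))
```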
🤖 played with Aider and it's mostly working with Ollama + Llama 3.1 #research
more fun with LLAMA2, figuring out how to get more controlled, predictable output
FINALLY! Made the canvas work. First time using a combo of Llama 3.1, Claude Sonnet 3.5 and ChatGPT, but the trickiest parts were mostly solved by Llama 3.1. Looks like Claude is better at coding with well-established conventions rather than more uncommon stuff like canvas, while I'm pleasantly surprised Llama 3.1 can deliver on it! Now, what should I call this new project... #indiejourney
🤖 Updated some scripts to use Ollama's latest structured output with Llama 3.3 (latest) and fell back to Llama 3.2. I dropped from >1 minute per request with 3.3 down to 2 to 7 seconds with 3.2, and I can't see a difference in the results. For small projects 3.2 is the better path. #research
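A sketch of that structured-output pattern with the Ollama Python client and Pydantic, assuming a recent client and Ollama server; the Summary schema, file name, and model tags are illustrative, not the original scripts.

```python
import ollama
from pydantic import BaseModel

class Summary(BaseModel):
    title: str
    key_points: list[str]

def structured_summary(text: str, model: str = "llama3.2") -> Summary:
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": f"Summarize this text:\n\n{text}"}],
        # Constrain the reply to the Summary JSON schema (structured outputs).
        format=Summary.model_json_schema(),
    )
    return Summary.model_validate_json(response.message.content)

# llama3.2 answers in a few seconds per request; llama3.3 gave comparable
# quality for small jobs like this but took over a minute.
print(structured_summary(open("notes.txt", encoding="utf-8").read()))
```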
use llama3 70b to create transcript summaries #spectropic
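A sketch of what that summarization step could look like via Ollama: chunk the transcript, summarize each part, then summarize the summaries. The model tag, file name, chunk size, and prompts are assumptions, not the actual #spectropic pipeline.

```python
import ollama

MODEL = "llama3:70b"

def summarize(text: str, instruction: str) -> str:
    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.message.content

def summarize_transcript(transcript: str, chunk_chars: int = 8000) -> str:
    # Summarize long transcripts in chunks, then combine the partial summaries.
    chunks = [transcript[i:i + chunk_chars] for i in range(0, len(transcript), chunk_chars)]
    partials = [summarize(c, "Summarize this part of a meeting transcript:") for c in chunks]
    return summarize("\n\n".join(partials), "Combine these partial summaries into one concise summary:")

print(summarize_transcript(open("transcript.txt", encoding="utf-8").read()))
```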
🤖 more work with Ollama and Llama 3.1, building a story writer as a good-enough demo. #research
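A stripped-down sketch of the story-writer loop: stream tokens from a local model so the story appears as it is generated. Assumes a recent ollama-python client; the system prompt and premise are placeholders.

```python
import ollama

def write_story(premise: str, model: str = "llama3.1") -> str:
    story = []
    stream = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": "You are a concise short-story writer."},
            {"role": "user", "content": f"Write a short story about: {premise}"},
        ],
        stream=True,
    )
    # Each chunk carries the next piece of the reply; print as it arrives.
    for chunk in stream:
        piece = chunk.message.content
        story.append(piece)
        print(piece, end="", flush=True)
    return "".join(story)

write_story("a robot learning to garden")
```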
📝 prototyped an llm-ollama plugin tonight. models list and it talks to the right places; prompts need more work.
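A very rough sketch of the plugin shape, assuming the llm plugin hooks (register_models) and the ollama Python client; the class, model names, and prefix are placeholders, not the actual llm-ollama implementation.

```python
import llm
import ollama

@llm.hookimpl
def register_models(register):
    # In a real plugin this list would come from the Ollama server;
    # hardcoded here to keep the sketch self-contained.
    for name in ("llama3.1", "llama3.2"):
        register(OllamaModel(name))

class OllamaModel(llm.Model):
    can_stream = True

    def __init__(self, model_name):
        self.model_id = f"ollama:{model_name}"
        self.model_name = model_name

    def execute(self, prompt, stream, response, conversation):
        # Forward the prompt to the local Ollama server and yield text chunks.
        for chunk in ollama.chat(
            model=self.model_name,
            messages=[{"role": "user", "content": prompt.prompt}],
            stream=True,
        ):
            yield chunk.message.content
```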