✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
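
A minimal sketch of what that kind of local call looks like in Python, assuming Ollama is serving its default API on localhost:11434 (the function names here are mine, not from the post):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt, model="llama3.1"):
    """Build the JSON payload for a single non-streaming generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3.1"):
    """POST the prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

After `ollama pull llama3.1`, calling `generate("Why is the sky blue?")` should return the model's answer as a string.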

🤖 played with Aider and it's mostly working with Ollama + Llama 3.1 #research

🤖 Updated some scripts to use Ollama's latest structured output with Llama 3.3 (latest) and fell back to Llama 3.2. Requests dropped from >1 minute with 3.3 to 2 to 7 seconds with 3.2, and I can't see a difference in the results. For small projects, 3.2 is the better path. #research
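
For context, Ollama's structured output takes a JSON Schema in the request's `format` field; a sketch of the payload (the schema fields are an invented example, not from the original scripts):

```python
# Example JSON Schema the reply must conform to (field names invented
# for illustration).
SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
}

def build_chat_request(prompt, model="llama3.2"):
    """Payload for POST /api/chat asking for schema-constrained output."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "format": SCHEMA,  # constrains the model to emit matching JSON
        "stream": False,
    }

payload = build_chat_request("Summarize this repo as a title plus tags.")
```

Swapping `model` between "llama3.3" and "llama3.2" is the whole comparison; the payload shape stays the same.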

🤖 got llama-cpp running locally 🐍

🤖 played with Ollama's tool calling with Llama 3.2 to create a calendar management agent demo #research
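
A sketch of the shape such a demo takes with Ollama tool calling: a JSON tool schema the model can invoke, plus a dispatcher that runs whatever calls come back (the `add_event` tool is hypothetical):

```python
def add_event(date, title):
    """Hypothetical calendar action the model can trigger."""
    return f"added '{title}' on {date}"

# Tool schema passed to Ollama in the chat request's "tools" field.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "add_event",
        "description": "Add an event to the calendar",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string"},
                "title": {"type": "string"},
            },
            "required": ["date", "title"],
        },
    },
}]

REGISTRY = {"add_event": add_event}

def dispatch(tool_call):
    """Run one tool call from the model's response message."""
    fn = tool_call["function"]
    return REGISTRY[fn["name"]](**fn["arguments"])
```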

🤖 more working with Ollama and Llama 3.1 and working on a story writer as a good enough demo. #research
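
Ollama streams generations as one JSON object per line, so a story writer mostly just stitches the chunks back together; roughly:

```python
import json

def collect_stream(ndjson_lines):
    """Concatenate the incremental "response" chunks Ollama streams back
    (one JSON object per line) into the full generated text."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```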

check out Llama 3.1 #life

🤖 spent my evening writing a better console for some more advanced Ollama + Llama 3.1 projects. #research

try client side web based Llama 3 in JS #life webllm.mlc.ai/

finally install Ollama with Llama 3 #life

🤖 Created an Ollama + Llama 3.2 version of my job parser to compare to ChatGPT. It's not bad at all, but not as good as GPT-4. #jobs
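
One way to keep that comparison fair is to fix the prompt and swap only the model callable; a sketch (the template fields and helper name are mine):

```python
import json

PROMPT_TEMPLATE = (
    "Extract the job posting below into JSON with keys "
    '"title", "company", and "location". Posting:\n{posting}'
)

def parse_job(posting, ask_model):
    """Run the same extraction prompt through any model callable
    (Ollama, ChatGPT, ...) so the outputs can be compared side by side."""
    raw = ask_model(PROMPT_TEMPLATE.format(posting=posting))
    return json.loads(raw)
```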

🤖 spent some time getting Ollama and LangChain to work together. I hooked up tooling/function calling and noticed that I was only getting a match on the first function call. Kind of neat, but kind of a pain.
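
The "only the first function call" symptom usually goes away if the handler loops over every entry in the response's `tool_calls` list rather than taking just the head; a sketch against the Ollama-style message shape:

```python
def run_tool_calls(message, registry):
    """Execute every tool call in a chat response message, not just the
    first one -- iterating the whole list is the point of this sketch."""
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        results.append(registry[fn["name"]](**fn["arguments"]))
    return results
```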

🤖 played around with adding extra context in some local Ollama models. Trying to test out some real-world tasks I'm tired of doing. #research
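
Feeding extra context to a local model is mostly prompt assembly; one simple shape is to prepend it as a system message (the names here are mine):

```python
def build_messages(context_docs, question):
    """Prepend local context as a system message so the model can draw on it."""
    context = "\n\n".join(context_docs)
    return [
        {"role": "system",
         "content": "Answer using only this context:\n" + context},
        {"role": "user", "content": question},
    ]
```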

got llamacode working locally and it's really good

switch #therapistai to Llama 3.1

✏️ I wrote and published my notes on using the Ollama service micro.webology.dev/2024/06/11…

Playing with llama2 locally and running it for the first time on my machine

#thecompaniesapi ran my phi3-128k flow using llama3.1 and the results are mind-blowing; it's insane how good Llama is at conserving context and original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI. About to merge a month of work and then hop on fine-tuning.
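
When pushing thousands of tokens through a model like that, a common guard is to chunk the input to the context budget first; a rough sketch using a chars-per-token heuristic rather than a real tokenizer:

```python
def chunk_text(text, max_tokens=4000, chars_per_token=4):
    """Split text into pieces that fit a context budget, using a rough
    chars-per-token estimate (a heuristic, not a real tokenizer)."""
    limit = max_tokens * chars_per_token
    return [text[i:i + limit] for i in range(0, len(text), limit)]
```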

Just finished and published a web interface for Ollama #ollamachat

prototype a simple autocomplete using local llama2 via Ollama #aiplay
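
An autocomplete prompt wants only a few tokens and a hard stop; a sketch of the request shape against Ollama's generate endpoint (the option values are arbitrary choices, not from the prototype):

```python
def build_autocomplete_request(prefix, model="llama2"):
    """Payload for a short, low-latency completion: a handful of tokens,
    stop at the first newline, raw mode so the model continues the
    prefix verbatim instead of chatting."""
    return {
        "model": model,
        "prompt": prefix,
        "raw": True,
        "stream": False,
        "options": {"num_predict": 16, "stop": ["\n"], "temperature": 0.2},
    }
```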