Similar todos
Run openllm dollyv2 on a local Linux server
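
A minimal sketch of querying that server once it's up, assuming OpenLLM exposes its OpenAI-compatible endpoint locally; the port and model id below are assumptions to adjust against whatever `openllm` reports on startup.

```python
# Minimal sketch: query a locally running OpenLLM server for dollyv2.
# Assumes the server exposes an OpenAI-compatible endpoint on this machine;
# the port and model id are guesses, adjust to what `openllm` prints on start.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # assumed default OpenLLM port
    api_key="na",                         # local server, key is unused
)

resp = client.completions.create(
    model="databricks/dolly-v2-3b",       # assumed model id
    prompt="Explain what a pre-commit hook does in one sentence.",
    max_tokens=64,
)
print(resp.choices[0].text)
```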

installed Cody in Cursor so that I can use llama3.1 and gemma2 via Ollama #astronote #leifinlavida
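
For reference, the same models can also be exercised outside the editor through the Ollama Python client; a rough sketch, assuming `ollama serve` is running and both models have already been pulled.

```python
# Rough sketch: ask the same question to both local models via Ollama.
# Assumes `ollama serve` is running and `ollama pull llama3.1` /
# `ollama pull gemma2` have already been done.
import ollama

QUESTION = [{"role": "user", "content": "Summarize what a vector index is."}]

for model in ("llama3.1", "gemma2"):
    resp = ollama.chat(model=model, messages=QUESTION)
    print(f"--- {model} ---")
    print(resp["message"]["content"])
```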

switch #therapistai to Llama 3.1

try loading #monkeyisland in my own local LLM

test #therapistai with Llama 3-70B

#cloth gem unit tests

implemented ollama local models #rabbitholes

check out Llama 3.1 #life

plugin comparison #ekh

test OpenHermes 2.5 Mistral LLM
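
OpenHermes 2.5 Mistral is available as an Ollama model; a hedged sketch of streaming a test prompt through it, assuming it was pulled under the `openhermes` tag (the exact tag may differ).

```python
# Hedged sketch: stream a quick test prompt through OpenHermes 2.5 Mistral
# via Ollama. Assumes the model was pulled as `openhermes`; adjust the tag
# to whatever is installed locally.
import ollama

stream = ollama.chat(
    model="openhermes",
    messages=[{"role": "user", "content": "Give me three test prompts for a chat model."}],
    stream=True,
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```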

🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research
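
Not the exact setup from this note, but a hedged sketch of what local tool calling looks like with a recent ollama-python client; the `get_weather` tool and the `llama3.3` tag are illustrative assumptions.

```python
# Hedged sketch of local tool calling with a recent ollama-python client.
# The tool, its schema, and the `llama3.3` tag are illustrative assumptions.
import ollama

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = ollama.chat(
    model="llama3.3",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# A recent client returns structured tool calls on the message object.
for call in resp.message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```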

switch #therapistai fully to Llama 3-70B

Ran some local LLM tests 🤖

published a test package

testing hookmonitor #integratewp

🤖 spent some time getting Ollama and LangChain to work together. I hooked up tooling/function calling and noticed that I was only getting a match on the first function call. Kind of neat but kind of a pain.
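
A hedged sketch of this kind of Ollama + LangChain wiring, useful for checking whether more than the first tool call comes back; the model tag and the toy tools are assumptions, and how many calls the model emits per turn depends on the model itself.

```python
# Hedged sketch of Ollama + LangChain function calling. The `llama3.1` tag
# and the toy tools are assumptions; inspect every proposed call, not just
# the first one.
from langchain_ollama import ChatOllama
from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOllama(model="llama3.1")          # assumes a tool-capable local model
llm_with_tools = llm.bind_tools([add, multiply])

msg = llm_with_tools.invoke("What is 3 + 4, and what is 6 * 7?")

for call in msg.tool_calls:
    print(call["name"], call["args"])
```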

test LLM Studio #life

read the Llama Guard paper #aiplay

add inline testing to project #wobaka
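
The project's stack isn't stated here; as one possibility, a minimal sketch of inline tests via Python doctests, where the example in the docstring doubles as a test.

```python
# Minimal sketch of inline tests via doctest; assumes a Python module,
# since the project's actual stack isn't stated in the todo.
def normalize_email(address: str) -> str:
    """Lowercase and strip an email address.

    >>> normalize_email("  Jane@Example.COM ")
    'jane@example.com'
    """
    return address.strip().lower()


if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)
```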

add a pre-commit git hook that tests the Lambda code
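
A rough sketch of a plain `.git/hooks/pre-commit` script that runs the Lambda tests before each commit; the directory layout and the pytest invocation are assumptions, and the pre-commit framework's `.pre-commit-config.yaml` would be an alternative.

```python
#!/usr/bin/env python3
# Rough sketch of a .git/hooks/pre-commit script that runs the Lambda
# function's tests before allowing a commit. The lambda/ and tests/ paths
# and the pytest command are assumptions; make the hook executable with
# `chmod +x .git/hooks/pre-commit`.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "tests/", "-q"],
    cwd="lambda",  # assumed directory holding the Lambda handler and tests
)

# A non-zero exit code from the hook aborts the commit.
sys.exit(result.returncode)
```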