Similar todos
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
Implemented response streaming #gistreader
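A minimal sketch of the streaming pattern behind a todo like this: wrap each model token in a Server-Sent Events frame so the client can render the reply incrementally. `fake_model_tokens` is a stand-in for the real LLM stream, not part of any actual gistreader code.

```python
import json
from typing import Iterator

def fake_model_tokens() -> Iterator[str]:
    # Stand-in for a real LLM token stream.
    yield from ["Hello", ", ", "world", "!"]

def sse_events(tokens: Iterator[str]) -> Iterator[str]:
    # Wrap each token in an SSE "data:" frame; the browser side can
    # consume these with EventSource or a streaming fetch.
    for tok in tokens:
        yield f"data: {json.dumps({'token': tok})}\n\n"
    # Conventional sentinel so the client knows the stream ended.
    yield "data: [DONE]\n\n"

frames = list(sse_events(fake_model_tokens()))
```

The same generator can back any response object that accepts an iterator of strings.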
check out Llama 3.1 #life
try llama_index #autorepurposeai
try #therapistai with Replicate LLM stream
Check out llama life #life
finally install Ollama with Llama 3 #life
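Once Ollama is running, its local `/api/generate` endpoint streams newline-delimited JSON, one object per line with a `"response"` fragment until `"done"` is true. A sketch of reassembling the reply from such a stream; the sample lines below are illustrative, not captured output:

```python
import json
from typing import Iterable

def collect_ollama_stream(lines: Iterable[str]) -> str:
    # Each non-empty line is one JSON object from Ollama's streaming
    # response; concatenate the "response" fragments until done.
    out = []
    for line in lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        out.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(out)

# Illustrative stream fragments (not a real capture):
sample = [
    '{"model": "llama3", "response": "Hi", "done": false}',
    '{"model": "llama3", "response": " there", "done": false}',
    '{"model": "llama3", "response": "", "done": true}',
]
reply = collect_ollama_stream(sample)
```

In practice the lines would come from an HTTP response to `http://localhost:11434/api/generate` read incrementally.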
Playing with Llama 2 locally and running it for the first time on my machine
switch #therapistai to Llama 3.1
use llama3 70b to create transcript summary #spectropic
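One common way to keep a long transcript inside the model's context window is map-reduce summarization: summarize chunks, then summarize the summaries. A sketch with a stubbed `summarize` callable standing in for the Llama 3 70B request; the chunking and stub are illustrative, not the actual Spectropic pipeline:

```python
from typing import Callable, List

def chunk_text(text: str, max_chars: int) -> List[str]:
    # Naive fixed-size chunking; a real pipeline would split on
    # speaker turns or sentence boundaries instead.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def map_reduce_summary(text: str, summarize: Callable[[str], str],
                       max_chars: int = 200) -> str:
    # Map: summarize each chunk. Reduce: summarize the joined partials.
    partials = [summarize(c) for c in chunk_text(text, max_chars)]
    return summarize(" ".join(partials))

# Stub standing in for a call to the model:
stub = lambda t: t[:20]
result = map_reduce_summary("word " * 100, stub)
```

Swapping `stub` for a real client call is the only change needed to run this against an actual model.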
test #therapistai with Llama 3-70B
🔨 I have prompts working (but not streaming) with llm-ollama. It needs a lot of work and polish still.
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
Implement async streaming responses in Django #foxquery
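A sketch of the async side of this: an async generator yielding SSE frames, which (as of Django 4.2) `StreamingHttpResponse` can consume directly. Django itself is kept out of the runnable part; `fake_tokens` is a stand-in for an async LLM client:

```python
import asyncio
from typing import AsyncIterator

async def fake_tokens() -> AsyncIterator[str]:
    # Stand-in for an async LLM client; yields control between tokens.
    for tok in ["a", "b", "c"]:
        await asyncio.sleep(0)
        yield tok

async def event_stream() -> AsyncIterator[str]:
    # In a Django view this iterator would be passed straight to
    # StreamingHttpResponse(event_stream(),
    #                       content_type="text/event-stream").
    async for tok in fake_tokens():
        yield f"data: {tok}\n\n"

async def collect() -> list:
    return [frame async for frame in event_stream()]

frames = asyncio.run(collect())
```

Because nothing blocks between tokens, an ASGI server can interleave many such streams on one event loop.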
download Llama3-70B and Llama3-8B #life
more fun with Llama 2 and figuring out how to better control/predict stable output
Stream the main response from ai coach so it takes less time #mentalmodelsaicoach
realize #therapistai with Llama3-70B actually understands WTF is going on now
read llama guard paper #aiplay
deploy Llama3-70B on #therapistai for me