Similar todos
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
Implemented response streaming #gistreader
check out Llama 3.1 #life
try llama_index #autorepurposeai
try #therapistai with Replicate LLM stream
Check out llama life #life
Playing with Llama 2 locally and running it for the first time on my machine
switch #therapistai to Llama 3.1
use llama3 70b to create transcript summary #spectropic
test #therapistai with Llama 3-70B
🔨 I have prompts working (but not streaming) with llm-ollama. It needs a lot of work and polish still.
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
Implement async streaming responses in Django #foxquery
more fun with Llama 2, figuring out how to get more stable, predictable output
Stream the main response from ai coach so it takes less time #mentalmodelsaicoach
realize #therapistai with Llama3-70B actually understands WTF is going on now
read llama guard paper #aiplay
deploy Llama3-70B on #therapistai for me
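Several of the todos above involve running Llama locally through Ollama. A minimal sketch of what that looks like programmatically, assuming a local Ollama server on its default port (11434) with a `llama3.1` model pulled; the `/api/generate` endpoint streams JSON lines, each carrying a `response` text fragment and a `done` flag:

```python
# Sketch: streaming tokens from a local Ollama server.
# Assumes Ollama is running at localhost:11434 with llama3.1 pulled;
# endpoint and field names follow Ollama's JSON-lines streaming API.
import json
import urllib.request


def parse_ollama_chunks(lines):
    """Join the text fragments from Ollama's JSON-lines stream."""
    pieces = []
    for raw in lines:
        if not raw.strip():
            continue
        obj = json.loads(raw)
        pieces.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(pieces)


def stream_generate(prompt, model="llama3.1", host="http://localhost:11434"):
    """POST a prompt and yield raw JSON lines as they arrive."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            yield line.decode()
```

Printing each parsed fragment as it arrives, instead of waiting for the joined result, is what makes the response feel fast in an interactive UI.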
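Response streaming also shows up repeatedly above (in Django, with Replicate, for the AI coach). The core pattern is the same everywhere: an async generator yields chunks as the model produces them, so the client starts rendering before the full response exists. A stdlib-only sketch of that pattern (in Django 4.2+, a generator like this can be passed to `StreamingHttpResponse`, omitted here to keep the example self-contained):

```python
# Sketch of the async-streaming pattern behind a streaming LLM endpoint:
# yield chunks as they are produced rather than buffering the whole reply.
import asyncio


async def token_stream(tokens, delay=0.0):
    """Yield response chunks one at a time, as an LLM client would."""
    for tok in tokens:
        await asyncio.sleep(delay)  # stand-in for awaiting the next model chunk
        yield tok


async def consume(stream):
    """Collect a streamed response; a browser would render each chunk instead."""
    parts = []
    async for chunk in stream:
        parts.append(chunk)
    return "".join(parts)
```

The latency win is perceptual: total generation time is unchanged, but time-to-first-token drops from the full response duration to one chunk.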