Similar todos
check out Llama 3.1 #life
test #therapistai with Llama 3-70B
wrote a guide on Llama 3.2 #getdeploying
switch #therapistai fully to Llama3-70B
switch #therapistai to Llama 3.1
#thecompaniesapi ran my phi3-128k flow using Llama 3.1 and the results are mind-blowing; it's insane how well Llama preserves context and its original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI; about to merge a month of work and then hop on fine-tuning
🤖 Updated some scripts to use Ollama's latest structured output with Llama 3.3 (latest), falling back to Llama 3.2. Requests drop from over a minute with 3.3 to 2–7 seconds with 3.2, and I can't see a difference in the results. For small projects, 3.2 is the better path. #research
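A minimal sketch of the pattern that todo describes, assuming Ollama's `/api/chat` endpoint and its structured-output `format` field; the model names and fallback order come from the todo, while the schema and prompt are hypothetical examples:

```python
import json

# Hypothetical example schema; Ollama's structured output accepts a
# JSON Schema object in the request's "format" field.
SUMMARY_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
}

def build_chat_request(prompt: str, model: str = "llama3.3") -> dict:
    """Build a payload for Ollama's /api/chat with structured output."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "format": SUMMARY_SCHEMA,  # constrain the reply to the schema
        "stream": False,
    }

# Fallback order from the todo: try Llama 3.3, drop to 3.2 if too slow.
payloads = [build_chat_request("Summarize this post", m)
            for m in ("llama3.3", "llama3.2")]
print(json.dumps(payloads[0], indent=2))
```

Because the schema rides along in every request, swapping models only changes the `model` field; the structured shape of the reply stays the same, which is why the results are comparable across 3.3 and 3.2.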
Shared how to generate llms.txt with Astro in a few lines of code 👉 scalabledeveloper.com/posts/l…
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
talk about LLM fine-tuning, Alpaca, and ChatGPT being just an instruct-tuned text model #life fxtwitter.com/levelsio/status…
got Llama 3 on Groq working with Cursor 🤯
🤖 Tried out Llama 3.3 and the latest Ollama client for what feels like flawless local tool calling. #research
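A rough sketch of the local tool-calling loop that todo refers to, assuming the JSON-schema tool format Ollama's chat API accepts in its `tools` field; the `get_weather` function and its schema are made up for illustration:

```python
def get_weather(city: str) -> str:
    """Hypothetical local function the model may ask us to call."""
    return f"Sunny in {city}"

# Tool definition in the function-calling schema style.
GET_WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch_tool_call(call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    registry = {"get_weather": get_weather}
    fn = registry[call["function"]["name"]]
    return fn(**call["function"]["arguments"])

# Shape of a tool call as it appears in a chat response message.
example_call = {"function": {"name": "get_weather",
                             "arguments": {"city": "Berlin"}}}
print(dispatch_tool_call(example_call))  # prints "Sunny in Berlin"
```

The point of "flawless local tool calling" is that the whole loop (model, tool schema, and dispatch) runs on one machine; the model only emits the call structure, and your code executes it.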
🧑‍🔬 researching when Llama 2 is as good as or better than GPT-4, and when it falls short. Good read here www.anyscale.com/blog/llama-2…
tested a Llama model for scanning newsletters #sponsorgap
more fun with Llama 2, figuring out how to better control/predict stable output