Similar todos
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/

finally install ollama with llama3 #life
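A minimal sketch of a first smoke test once Ollama and llama3 are installed; the prompt and the use of the local REST endpoint on port 11434 are illustrative assumptions, not part of the original note:

```python
# Sketch only: first request against a local Ollama server
# (default endpoint http://localhost:11434) after `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
)
print(resp.json()["response"])
```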

got llama3 on groq working with cursor 🤯

got distracted with my writer AIs, testing llama3.1 against gemma2 #leifinlavida

#thecompaniesapi improving results from phi3 is a tedious task, but if it pays off we get rid of a ton of problems

#thecompaniesapi continue work on phi, lots of iterations, kinda exhausted lately

switch #therapistai fully to Llama3-70B

🤖 more work with Ollama and Llama 3.1, building a story writer as a good-enough demo. #research
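A rough sketch of what a minimal story-writer loop over a local Ollama server could look like; the `llama3.1` tag, the prompts, and the chapter steps are assumptions for illustration, not the actual demo:

```python
# Sketch only: story-writer loop against a local Ollama server, keeping
# chat history so each step builds on the previous output.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local endpoint
history = [{"role": "system", "content": "You are a concise fiction writer."}]

for step in ["Introduce the hero.", "Raise the stakes.", "Resolve the story."]:
    history.append({"role": "user", "content": step})
    reply = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.1", "messages": history, "stream": False},
    ).json()["message"]
    history.append(reply)  # reply is {"role": "assistant", "content": "..."}
    print(reply["content"])
```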

#thecompaniesapi big LLM results from our pipeline now get saved as JSON files to prepare datasets for our own models; optimizing the queue does not seem to be a finite task
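Roughly how "save results as JSON files for dataset prep" could be sketched; the JSONL layout, field names, and paths below are illustrative assumptions rather than the actual pipeline code:

```python
# Sketch only: append each LLM result as one JSON line so the files can
# later be assembled into a fine-tuning dataset. Paths and fields are made up.
import json
from pathlib import Path

def save_result(prompt: str, completion: str, out_dir: str = "datasets/raw") -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    record = {"prompt": prompt, "completion": completion}
    with open(Path(out_dir) / "llm_results.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

save_result("Describe ACME Corp in one line.", "ACME Corp makes anvils.")
```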

Playing with llama2 locally and running it for the first time on my machine

I used Claude 3.5 Project + Artifacts to help refactor most of the Ollama + Llama 3.1 #research project. I would call it a writing bot, but I'm not building it to automate writing. It's mostly a content wrapper around the Chat interface but it's good at generating code and building off of chat history.

#thecompaniesapi quantize phi3.5 to 4-bit to use it in our inference server; same model size but 128k context length instead of 4k, so I can now process huge chunks of text without relying on batching
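A sketch of one way to load a 4-bit phi3.5 with a long context; the Hugging Face checkpoint name, the bitsandbytes settings, and the summarization prompt are assumptions, not the inference-server code itself:

```python
# Sketch only: load Phi-3.5 in 4-bit via bitsandbytes so a 128k-token context
# fits in the same memory budget, then feed a long document in a single prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-3.5-mini-instruct"  # assumed checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

long_text = open("big_document.txt", encoding="utf-8").read()
inputs = tokenizer("Summarize:\n" + long_text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```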

use llama3 70b to create transcript summary #spectropic
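One way the transcript summary could be wired up; serving Llama 3 70B through a local Ollama `llama3:70b` tag and the bullet-point prompt are assumptions here, since the note doesn't say how the model is hosted:

```python
# Sketch only: summarize a transcript with Llama 3 70B served by Ollama.
# The llama3:70b tag and the prompt wording are assumptions for illustration.
import requests

def summarize_transcript(transcript: str) -> str:
    prompt = (
        "Summarize the following transcript in five bullet points:\n\n" + transcript
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3:70b", "prompt": prompt, "stream": False},
    )
    return resp.json()["response"]

print(summarize_transcript(open("transcript.txt", encoding="utf-8").read()))
```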