Similar todos
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
check out Llama 3.1 #life
Ollama is worth using if you have an M1/M2 Mac and want a speedy way to access the various llama2 models.
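For reference, a minimal sketch of what querying that local setup looks like, assuming a default Ollama install listening on port 11434 and a model already pulled with `ollama pull llama2` (the prompt is just a placeholder):

```ts
// Query a locally running Ollama server via its REST API.
// Assumes the Ollama daemon is on the default port (11434)
// and `ollama pull llama2` has already been run.
async function askLlama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // the full generated completion
}

askLlama("Why is the sky blue?").then(console.log);
```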
🤖 got llama-cpp running locally 🐍
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
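A minimal sketch of what in-browser inference with WebLLM looks like; the model ID string here is an assumption and would need to match an entry in WebLLM's model list:

```ts
// Run Llama 3 entirely client-side (WebGPU) via WebLLM.
// npm install @mlc-ai/web-llm
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Model ID is an assumption; check webllm.mlc.ai for exact names.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f16_1-MLC");
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello from the browser!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```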
fun Saturday: set up a local LLM coding assistant and local voice transcription on my M1, for use when Wi-Fi is unavailable
🤖 lots of AI research last night, including writing a functional story bot to wrap my head around how to apply step-by-step logic and get something meaningful out of llama2 #research
🖥 ordered a 32GB Mac Mini Pro so I can do more AI work and have a dedicated machine at home again #mylife
🦙 TestFlight the Llama Life iPhone app, send a review
got llamacode working locally and it's really good
more Mac Studio and Mac Mini Pro project setup to free up some cycles this week
livecode making an image uploader for the lovebox lambda #codewithswiz
Released BoltAI on Setapp #boltai
#thecompaniesapi ran my phi3-128k flow using llama3.1 and the results are mind-blowing; it's insane how good Llama is at preserving context and original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI; about to merge a month of work and then hop on fine-tuning
Playing with llama2 locally and running it for the first time on my machine
After being sherlocked by Apple, I went crazy and added 20+ new features to BoltAI. Just released a new version.
boltai.com/changelogs/v1.16