Similar todos
🤖 got llama-cpp running locally 🐍
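(For anyone wanting to reproduce this: a minimal sketch with the llama-cpp-python bindings — the GGUF path and prompt are placeholders, not the exact setup from this todo.)

```python
# Minimal llama-cpp-python sketch; model path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```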

Playing with llama2 locally and running it for the first time on my machine

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

fun Saturday: set up a local LLM coding assistant and local voice transcription on my M1, for use when wifi is unavailable
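(The todo doesn't name the tools; for the transcription half, a sketch assuming the open-source whisper package, which runs fully offline once the weights are cached — the model size and audio file are placeholders.)

```python
# Hypothetical local transcription setup using openai-whisper;
# "memo.wav" and the model size are placeholders.
import whisper

model = whisper.load_model("base.en")  # small enough to be quick on an M1
result = model.transcribe("memo.wav")
print(result["text"])
```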

livecode getting a HAI WORLD lolcode compiler working

played with symbex and llm to generate code. super cool and immediately useful
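(Roughly the shape of that pipeline: call the symbex CLI from Python to extract a symbol's source, then hand it to llm's Python API — the symbol name and model are placeholders.)

```python
# Sketch: extract a function's source with symbex, then ask an LLM
# for tests. "my_function" and the model name are placeholders.
import subprocess
import llm

source = subprocess.run(
    ["symbex", "my_function"],
    capture_output=True, text=True, check=True,
).stdout

model = llm.get_model("gpt-4o-mini")
print(model.prompt(f"Write pytest tests for this code:\n\n{source}").text())
```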

prototype a simple autocomplete using local llama2 via Ollama #aiplay
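(A minimal version of that prototype, hitting Ollama's documented REST endpoint on its default port — the prompt framing and token budget are placeholders.)

```python
# Tiny autocomplete sketch against a local Ollama server.
import requests

def complete(prefix: str, n_tokens: int = 20) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",
            "prompt": prefix,
            "stream": False,
            "options": {"num_predict": n_tokens},
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(complete("def fibonacci(n):"))
```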

check out Llama 3.1 #life

livecode trying out NextJS #codewithswiz

Ran some local LLM tests 🤖

finally got a local dev environment of my day job codebase to work on a Mac #life

livecode finishing the lovebox build #codewithswiz

livecode more webrtc stuff

Over the past three and a half days, I've dedicated all my time to developing a new product that leverages locally running Llama 3.1 for real-time AI responses. It's now available on Macs with M series chips – completely free, local, and incredibly fast. Get it: Snapbox.app #snapbox

got llama3 on groq working with cursor 🤯
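(The reason this works: Groq exposes an OpenAI-compatible endpoint, so any tool that lets you override the OpenAI base URL — Cursor does — can point at it. A sketch with the openai Python client; the key is a placeholder and the model name may have since changed.)

```python
# Groq speaks the OpenAI chat API, which is what Cursor talks to.
# API key is a placeholder; model names change over time.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_API_KEY",
)
chat = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[{"role": "user", "content": "say hi"}],
)
print(chat.choices[0].message.content)
```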
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/

livecode some auth work #threadcompiler

livecode having more webrtc fun

Install Ollama and run an LLM locally #life
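(For the "run it" part, a sketch with the official ollama Python package, assuming a model has already been pulled with `ollama pull llama3` — the model name is a placeholder.)

```python
# Sketch using the official ollama Python client; assumes a model
# (here llama3, a placeholder) has already been pulled.
import ollama

reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "why is the sky blue?"}],
)
print(reply["message"]["content"])
```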

try loading #monkeyisland in my own local LLM