Similar todos
check out Llama 3.1 #life

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
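For anyone following along, the simplest programmatic way to talk to a locally running Llama 3.1 is Ollama's REST endpoint. A minimal sketch, assuming `ollama serve` is running on its default port (11434) and `ollama pull llama3.1` has been done; the helper names here are my own:

```python
import json
import urllib.request

# Ollama's default local REST endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3.1") -> dict:
    """Request body for a single, non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_llama(prompt: str, model: str = "llama3.1") -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs the server running):
# print(ask_llama("Why is the sky blue? One sentence."))
```

Everything stays on-device, which is the whole appeal on an M-series Mac.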

switch #therapistai to Llama 3.1

Over the past three and a half days, I've dedicated all my time to developing a new product that leverages a locally running Llama 3.1 for real-time AI responses. It's now available on Macs with M-series chips – completely free, local, and incredibly fast. Get it: Snapbox.app #snapbox

download Llama3-70B and Llama3-8B #life

Playing with llama2 locally and running it for the first time on my machine

order MacBook Pro 16” M1 Max #life

buy mac mini m2 for video production #fajarsiddiq

Read up on MLX for Apple Silicon.

installed Cody in Cursor so that I can use Llama 3.1 and Gemma 2 via Ollama #astronote #leifinlavida

more fun with Llama 2 and figuring out how to better control/predict stable output
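If you're chasing stable output from Llama 2 through Ollama, the usual levers are the sampling options on the generate request: zero temperature for greedy decoding plus a fixed seed. A hedged sketch of the request body (the option names come from Ollama's `/api/generate` API; the function name is mine):

```python
import json

def deterministic_request(prompt: str, model: str = "llama2") -> str:
    """Build a /api/generate body that pins sampling for repeatable output."""
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0,  # greedy decoding: always pick the top token
            "seed": 42,        # fixed RNG seed keeps any residual sampling reproducible
        },
    }
    return json.dumps(body)

# POST the returned JSON to http://localhost:11434/api/generate
# against a running Ollama server.
```

With these pinned, the same prompt should give the same completion run after run, which makes the output much easier to predict and test against.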

try client side web based Llama 3 in JS #life webllm.mlc.ai/

Run OpenLLM with Dolly v2 on a local Linux server

got llamacode working locally and it's really good

fix simulator with M1 macbook #cardmapr

probably gonna order a macbook pro 14 M2

prototype a simple autocomplete using local llama2 via Ollama #aiplay
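A prototype like that can stay very small: send the text before the cursor to a local llama2 via Ollama, cap the completion length, and keep only the first line so the suggestion fits inline. A sketch under those assumptions (endpoint and model name are Ollama defaults; the helpers are hypothetical):

```python
import json
import urllib.request

def first_line(completion: str) -> str:
    """Trim a model completion to a single line for inline display."""
    return completion.split("\n", 1)[0].rstrip()

def suggest(prefix: str, max_tokens: int = 16) -> str:
    """Ask a local llama2 (via Ollama) for a short continuation of `prefix`."""
    body = json.dumps({
        "model": "llama2",
        "prompt": prefix,
        "stream": False,
        # num_predict caps completion length so suggestions stay snappy;
        # low temperature keeps them fairly deterministic.
        "options": {"num_predict": max_tokens, "temperature": 0.2},
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body, headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return first_line(json.loads(resp.read())["response"])

# Example (needs the server running):
# print(suggest("def fibonacci(n):"))
```

Latency on Apple Silicon is low enough that this feels usable even without streaming.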

Order macbook #life

Got Stable Diffusion XL working on M1 Macbook Pro.

format macbook #life