Similar todos
Install LM Studio, download the OpenHermes model, and run the LLM on localhost #life

Run openllm dollyv2 on a local Linux server

Starting up local server #olisto

try loading #monkeyisland in my own local LLM

get the app running locally #rikko

Install AI Toolkit on local #life

Install Ollama and run an LLM locally #life

get the AIE app running locally so I can dev the API #aiplay

try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/

set up a local dev environment to help a friend with his app. #life

prototype a simple autocomplete using local llama2 via Ollama #aiplay
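The autocomplete prototype above can be sketched in a few lines against Ollama's documented REST endpoint (localhost:11434 by default). This is a minimal sketch, assuming a pulled `llama2` model; `build_completion_request` is a helper name invented here, not part of Ollama.

```python
import json

def build_completion_request(prefix: str, model: str = "llama2") -> dict:
    """Build a JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prefix,       # text to be completed
        "stream": False,        # return a single response, not a stream
        "options": {
            "num_predict": 20,  # short completions suit autocomplete
            "temperature": 0.2, # low temperature for predictable suggestions
        },
    }

payload = build_completion_request("def fibonacci(n):")
body = json.dumps(payload).encode()

# To actually fetch a suggestion (requires `ollama serve` running locally):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate", data=body,
#     headers={"Content-Type": "application/json"},
# )
# suggestion = json.loads(urllib.request.urlopen(req).read())["response"]
```

Keeping `num_predict` small keeps round-trip latency low enough for inline suggestions.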

📝 prototyped an llm-ollama plugin tonight. models list correctly and it talks to the right endpoints. prompts need more work.

Set up local database #workalo

Over the past three and a half days, I've dedicated all my time to developing a new product that leverages a locally running Llama 3.1 for real-time AI responses. It's now available on Macs with M-series chips – completely free, local, and incredibly fast. Get it: Snapbox.app #snapbox

Ran some local LLM tests 🤖

work on setting up the system locally #labs

Get the app started #paperclip

✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…

set up local testing for pinger lambdas #hyperping

set up the project locally #lite