Similar todos
Install Ollama and run an LLM locally #life
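
A minimal sketch of the "run an LLM locally" part once Ollama is installed and serving on its default port (11434); the model name and prompt here are assumptions:

```python
import requests

# Ask the local Ollama server for a one-shot completion.
# Assumes `ollama pull llama2` has already been run; the model name is an example.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
```
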
Playing with Llama 2 locally and running it for the first time on my machine
Install LM Studio, download the OpenHermes model, and run the LLM on localhost #life
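
LM Studio serves downloaded models through an OpenAI-compatible endpoint (localhost:1234 by default), so a sketch like this can reach OpenHermes; the model identifier is an assumption and should match whatever LM Studio shows for your download:

```python
from openai import OpenAI

# Point the standard OpenAI client at LM Studio's local server.
# The api_key is ignored locally, but the client requires a value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

chat = client.chat.completions.create(
    model="openhermes-2.5-mistral-7b",  # hypothetical identifier
    messages=[{"role": "user", "content": "Say hello from localhost."}],
)
print(chat.choices[0].message.content)
```
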
Starting up local server #olisto
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
Get Stable Diffusion (open-source DALL-E clone) up and running on my Mac #life
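
On the Mac side, Hugging Face's diffusers library supports Apple Silicon through the "mps" device; a minimal sketch, assuming torch and diffusers are installed (the checkpoint and prompt are examples):

```python
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint and move it to Apple's Metal backend.
# The first run downloads several GB of weights.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```
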
Install Ollama on Windows Subsystem for Linux
try loading #monkeyisland into my own local LLM
Make a quick app to query the local server running the LLM with the OpenHermes model #life
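
A "quick app" in that spirit can be a few lines against the local server's OpenAI-compatible chat endpoint; the port (LM Studio's default 1234) and model identifier below are assumptions:

```python
import requests

def ask(prompt: str) -> str:
    # Query the local OpenAI-compatible chat endpoint; adjust the port for your setup.
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "openhermes-2.5-mistral-7b",  # hypothetical identifier
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Give me one productivity tip."))
```
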
try client-side, web-based Llama 3 in JS #life webllm.mlc.ai/
finally install Ollama with Llama 3 #life
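
With Ollama installed and the model pulled, the official ollama Python package reduces this to a few lines; a sketch, with the prompt as a placeholder:

```python
import ollama

# Assumes the Ollama daemon is running and `ollama pull llama3` has completed.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "What is new in Llama 3?"}],
)
print(response["message"]["content"])
```
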
download dbs from server to M1X for local dev #nomads
try the Jan.AI LLM app on macOS, local client-side #life
Ollama is worth using if you have an M1/M2 Mac and want a speedy way to access the various Llama 2 models.
deploy Llama3-70B on #therapistai for me
Ran some local LLM tests 🤖
connect to mlab
🤖 got llama-cpp running locally 🐍
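
In Python that usually means the llama-cpp-python bindings; a minimal sketch, with the GGUF path as a placeholder for any model file you have downloaded:

```python
from llama_cpp import Llama

# Model path is a placeholder; point it at a local GGUF file.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

out = llm("Q: Name three ways to run an LLM locally. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```
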
set up local testing for pinger lambdas #hyperping
setting up a local server for localization #apimocka