Similar todos
Make a quick app to request the local server running the LLM with the OpenHermes model #life
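A minimal sketch of such an app, assuming the local server exposes an OpenAI-compatible chat endpoint (the port 1234 default used by LM Studio and the model name are assumptions; adjust both for your setup):

```python
import json
import urllib.request

# Assumed: local server speaks the OpenAI chat-completions protocol.
BASE_URL = "http://localhost:1234/v1/chat/completions"
MODEL = "openhermes-2.5-mistral-7b"  # assumed model identifier

def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `ask(...)` only works once the server is running with the model loaded; `build_payload` can be tested offline.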
load OpenHermes 2.5 Mistral LLM #life
test OpenHermes 2.5 Mistral LLM
test LM Studio #life
Download LM Studio
Run openllm dollyv2 on a local Linux server
Install Ollama and run an LLM locally #life
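Once Ollama is installed it serves a REST API on its default port 11434; a small sketch of calling it from Python (the model name "llama2" is an assumption and must already be pulled with `ollama pull llama2`):

```python
import json
import urllib.request

# Assumed: a local Ollama install serving its default REST API.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request body for Ollama's API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server, return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

`generate("llama2", "why is the sky blue?")` needs the Ollama daemon running; the request builder works standalone.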
Download and install LM StudioAI
connect to mlab
try loading #monkeyisland in my own local LLM
Playing with llama2 locally and running it for the first time on my machine
✏️ wrote about running Llama 3.1 locally through Ollama on my Mac Studio. micro.webology.dev/2024/07/24…
Install AI Toolkit locally #life
work on setting up the system locally #labs
Starting up local server #olisto
field model installed #klimy
lms gui
Try Gemma 7b LLM via Replicate #life
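A hedged sketch of starting a Gemma 7B prediction through Replicate's HTTP predictions API; the version hash is a placeholder (look it up on the model's Replicate page), and a `REPLICATE_API_TOKEN` environment variable is assumed:

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"
GEMMA_VERSION = "<gemma-7b-version-hash>"  # placeholder, not a real hash

def build_prediction(prompt: str) -> dict:
    """Build the request body for a Replicate prediction."""
    return {"version": GEMMA_VERSION, "input": {"prompt": prompt}}

def create_prediction(prompt: str) -> dict:
    """Start a prediction; returns Replicate's JSON (poll it until done)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_prediction(prompt)).encode(),
        headers={
            "Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Predictions on Replicate are asynchronous, so the returned JSON describes a job to poll rather than the final text.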
📝 prototyped an llm-ollama plugin tonight. Model listing works and it talks to the right places; prompts need more work.
Ollama is worth using if you have an M1/M2 Mac and want a speedy way to access the various llama2 models.