Similar todos
prototype a simple autocomplete using local llama2 via Ollama #aiplay
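
A minimal sketch of what the autocomplete prototype above might look like, assuming a local Ollama daemon on its default port (11434) with the llama2 model already pulled; the function name, prompt, and token limit are illustrative:

```python
import requests  # pip install requests

def autocomplete(prefix: str) -> str:
    """Ask a locally running Ollama server to continue `prefix` with llama2."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",
            "prompt": prefix,
            "stream": False,                 # single JSON response instead of a token stream
            "options": {"num_predict": 32},  # short completions keep autocomplete snappy
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(autocomplete("def fibonacci(n):"))
```

Keeping num_predict small bounds per-request latency, which matters when a completion is requested on every pause in typing.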

design & deploy simple lapa #devsync

Create apollo server

📝 prototyped an llm-ollama plugin tonight. Models list correctly and it talks to the right endpoints; prompts still need more work.
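
For context on the plugin note above, this is roughly the shape of a model plugin for the llm CLI, sketched from memory of llm's plugin-author docs: a hook registers a model class whose execute method forwards to Ollama's local /api/generate endpoint. The class name and model id are illustrative, and the real llm-ollama plugin discovers installed models and does much more:

```python
import llm        # pip install llm
import requests   # pip install requests

class OllamaCompletion(llm.Model):
    can_stream = False  # keep the sketch simple: one blocking request, no streaming

    def __init__(self, name: str):
        self.model_id = f"ollama-{name}"
        self.name = name

    def execute(self, prompt, stream, response, conversation):
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": self.name, "prompt": prompt.prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        yield resp.json()["response"]

@llm.hookimpl
def register_models(register):
    # Register a single hard-coded model; illustrative only.
    register(OllamaCompletion("llama2"))
```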

Set up local database #workalo

create 'lil demo on lapa #devsync

set up a server with the basic app

Over the past three and a half days, I've dedicated all my time to developing a new product that leverages locally running Llama 3.1 for real-time AI responses. It's now available on Macs with M-series chips: completely free, local, and incredibly fast. Get it: Snapbox.app #snapbox

refactor lambdas to a `micro` app #accountableblogging

create small test project to learn laravel #life

launch lumen api #ssa

Try Gemma 7b LLM via Replicate #life
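
A minimal sketch of trying Gemma 7B (instruction-tuned) through Replicate's Python client; the model slug and the max_new_tokens input name are assumptions to verify against the model's page on replicate.com, and REPLICATE_API_TOKEN must be set in the environment:

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

# Model slug is an assumption; confirm the exact owner/name (and version) on replicate.com.
output = replicate.run(
    "google-deepmind/gemma-7b-it",
    input={
        "prompt": "Explain in two sentences why running an LLM locally can be useful.",
        "max_new_tokens": 128,  # assumed input name for this model
    },
)

# Language models on Replicate typically return the text in chunks; join them.
print("".join(output))
```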

Ran some local LLM tests 🤖

work on setting up the system locally #labs

work on client app

work on client app #life

Figure out how to deploy the Elixir app for #gitfuck

set up Lighthouse consensus client for Gnosis chain #side

Set up new Laravel app #onelovehiphop