Similar todos
prototype a simple autocomplete using local llama2 via Ollama #aiplay
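This todo mentions prototyping autocomplete against a local llama2 model via Ollama. A minimal sketch of what such a prototype might look like, assuming Ollama is running on its default port (localhost:11434) and using its real `/api/generate` endpoint; the `build_completion_request` helper and the prompt framing are illustrative guesses, not from the original todo.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_completion_request(prefix: str, model: str = "llama2") -> dict:
    """Shape a non-streaming request body for Ollama's /api/generate.

    The prompt framing is a guess at how an autocomplete prototype might
    ask the model to continue the user's text.
    """
    return {
        "model": model,
        "prompt": f"Continue this text with a short completion:\n{prefix}",
        "stream": False,  # one JSON response instead of a token stream
    }

def complete(prefix: str) -> str:
    """Send the request to a locally running Ollama and return its completion."""
    payload = json.dumps(build_completion_request(prefix)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a local Ollama with the llama2 model pulled.
    print(complete("def fibonacci(n):"))
```

The request shape can be tested without a running server, since the helper only builds the payload.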
design & deploy simple lapa #devsync
Create Apollo server
📝 prototyped an llm-ollama plugin tonight. models list and it talks to the right places. prompts need more work.
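The note above mentions getting the models list working against Ollama. A hedged sketch of how a plugin might fetch installed models, assuming Ollama's documented `GET /api/tags` endpoint on the default port; the `parse_model_names` helper is illustrative, not part of the actual plugin.

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's model-listing endpoint

def parse_model_names(tags_response: dict) -> list:
    """Pull just the model names out of an /api/tags response body."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models() -> list:
    """Ask a locally running Ollama which models are installed."""
    with urllib.request.urlopen(TAGS_URL) as resp:
        return parse_model_names(json.loads(resp.read()))
```

The parser can be exercised against a canned response in the shape `/api/tags` returns.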
Set up local database #workalo
create 'lil demo on lapa #devsync
setup a server with the basic app
Over the past three and a half days, I've dedicated all my time to developing a new product that leverages locally running Llama 3.1 for real-time AI responses.
It's now available on Macs with M-series chips – completely free, local, and incredibly fast.
Get it: Snapbox.app #snapbox
refactor lambdas to a `micro` app #accountableblogging
create small test project to learn laravel #life
launch lumen api #ssa
Ran some local LLM tests 🤖
work on setting up the system locally #labs
work on client app #life
Figure out how to deploy the Elixir app for #gitfuck
setup Lighthouse consensus client for Gnosis chain #side
Set up new Laravel app #onelovehiphop