Similar todos
ollama is worth using if you have an M1/M2 Mac and want a speedy way to access the various llama2 models.
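A minimal sketch of that workflow, assuming ollama is already installed (e.g. via `brew install ollama`); the model tag below is illustrative:

```shell
# Download a llama2 variant once, then run it locally.
# On Apple Silicon, inference runs GPU-accelerated out of the box.
ollama pull llama2:7b-chat
ollama run llama2:7b-chat "Explain WebRTC in one sentence"
```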

livecode even more WebRTC stuff

try Jan.AI, a local client-side LLM app for macOS #life

quick livecode; get useAuth almost working as a lib

livecode some basic gatsby setup #serverlesshandbook

livecode some research #learnwhileyoupoop

livecode some progress #threadcompiler

get a NoCode client #labs

finally install ollama with llama3 #life

#thecompaniesapi run my phi3-128k flow using llama3.1 and the results are mind-blowing; it's insane how good llama is at conserving context and original purpose even when supplied with thousands of tokens. Also shipped multiple hotfixes in the robot UI; about to merge a month of work and then hop on fine-tuning.

livecode progress on my little CMS #codewithswiz

more fun with llama2, figuring out how to better control/predict stable output

livecode messing around with Remix

livecode messing around with nextjs #codewithswiz