Similar todos
Publish new episode of "Next In AI", about the foundational Google paper revealing the secret of o1
open.spotify.com/episode/3Lcu…
[Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters] #nextinai
experiment with different scaling policies to optimize performance and cost #lyricallabs
#thecompaniesapi early LLM results are bonkers, but throughput (queries per second) is relatively slow; setting up results storage to train smaller models; one step at a time
#thecompaniesapi big LLM results from our pipeline now get saved as JSON files to prepare datasets for our own models; optimizing the queue does not seem to be a finite task
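A minimal sketch of the dataset-prep step above: appending each pipeline result as a JSONL record, the common format for fine-tuning datasets. The field names and file path are assumptions for illustration, not the actual #thecompaniesapi schema.

```python
import json
from pathlib import Path

def save_result(record: dict, out_path: str = "dataset/results.jsonl") -> None:
    """Append one pipeline result as a JSONL line (hypothetical schema)."""
    path = Path(out_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example record; field names are placeholders, not the real schema
save_result({"prompt": "Describe ACME Corp", "completion": "ACME Corp is ..."})
```

One record per line keeps the file append-only, so a slow queue can stream results in without rewriting the dataset.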
talk about LLM fine-tuning, Alpaca, and ChatGPT being just an instruction-tuned text model #life fxtwitter.com/levelsio/status…
draw scalability #serverlesshandbook
🧑‍🔬 researching when Llama 2 is as good as or better than GPT-4, and when it falls short. Good read here: www.anyscale.com/blog/llama-2…
Ran some local LLM tests 🤖
scale down test agents #teamci
NN experimentation #learning
write about scaling teams #blog2
wrote high scalability article
Today's reading: arxiv.org/pdf/2305.20050
cache recs for longer to save on model inference costs #pango
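The caching idea above can be sketched with a simple TTL cache: serve recommendations from memory until they expire, so model inference runs once per key per TTL window. Class, function, and parameter names are assumptions for illustration, not the real #pango code.

```python
import time

class TTLCache:
    """In-memory cache where entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds: float = 24 * 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=3600)

def get_recs(user_id, compute):
    """Return cached recs; only call the (expensive) model on a miss."""
    recs = cache.get(user_id)
    if recs is None:
        recs = compute(user_id)  # the costly inference call
        cache.set(user_id, recs)
    return recs
```

Raising the TTL trades freshness for cost: each extra hour of cache lifetime is an hour of inference calls you skip for repeat lookups.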