Similar todos
Publish new episode of "Next In AI" about the foundational Google paper revealing the secret of o1 open.spotify.com/episode/3Lcu… [Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters] #nextinai

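For context, the paper's core idea is spending more compute at inference time instead of on a bigger model; one strategy it analyzes is best-of-N sampling, where several candidate answers are drawn and a verifier keeps the best. A minimal sketch, with generate and score as hypothetical stand-ins for the model call and the verifier:

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled LLM completion.
    return f"candidate {random.randint(0, 999)} for {prompt!r}"

def score(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a verifier / reward model.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Spend n model calls at test time instead of using a larger model.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("What is 17 * 24?"))
```
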
experiment with different scaling policies to optimize performance and cost #lyricallabs

study Stanford's Alpaca instruction fine-tuning of LLaMA and its source data #life

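Background for this one: Alpaca fine-tunes LLaMA on ~52K instruction-following records, each an instruction/input/output triple rendered into a fixed prompt template. A minimal sketch of that format (the example record itself is made up):

```python
# One Alpaca-style training record (contents illustrative).
example = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "I loved this episode.",
    "output": "positive",
}

# The prompt template Alpaca uses for records that have an input field.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

# During fine-tuning the model learns to continue the prompt with the output.
print(PROMPT_WITH_INPUT.format(**example) + example["output"])
```
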
#thecompaniesapi early LLM results are bonkers but queries per second are relatively slow; setting up results storage to train smaller models; one step at a time

#thecompaniesapi big-LLM results from our pipeline now get saved as JSON files to prepare datasets for our own models; optimizing the queue does not seem to be a finite task

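On the storage step: a common pattern for this kind of dataset prep is appending each big-model result as one JSON object per line (JSONL), which most fine-tuning tooling ingests directly. A minimal sketch, with the field names and example values as assumptions:

```python
import json

def save_result(path: str, prompt: str, completion: str) -> None:
    # Append one pipeline result as a JSONL record for later fine-tuning.
    record = {"prompt": prompt, "completion": completion}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

save_result("dataset.jsonl", "Summarize: ACME Corp ...", "ACME Corp is a ...")
```
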
talk about LLM fine-tuning, Alpaca, and ChatGPT being just an instruction-tuned text model #life fxtwitter.com/levelsio/status…

draw scalability #serverlesshandbook

write about running a bigger AI experiment #blog2 #aiplay

🧑‍🔬 researching when Llama 2 is as good as or better than GPT-4 and when it isn't. Good read here: www.anyscale.com/blog/llama-2…

Ran some local LLM tests 🤖

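For anyone reproducing a test like this: one common route is a GGUF-quantized model with the llama-cpp-python bindings. A minimal sketch; the model path is a placeholder and the package must be installed separately:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a locally stored, quantized model (path is a placeholder).
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# Run one completion and print the generated text.
out = llm("Q: Name three uses of a local LLM.\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```
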
think about a potential scaling product

Determine which LLM quantization to use

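A back-of-envelope way to frame that decision: weight memory scales linearly with bits per parameter, so a 7B model needs roughly 14 GB at 16-bit but only ~3.5 GB at 4-bit, before KV-cache and runtime overhead. The arithmetic:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    # Weight-only footprint; ignores KV cache, activations, and runtime overhead.
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
```
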
scale down test agents #teamci

change fine-tuning strategy and data structure, and play with the new fine-tuned model #reva

NN experimentation #learning

test different models #avatarai

write about scaling teams #blog2

wrote high scalability article

Today's reading: arxiv.org/pdf/2305.20050 [Let's Verify Step by Step]

cache recs for longer to save on model inference costs #pango
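One simple way to implement the longer caching, sketched as an in-process TTL cache; the key layout, TTL value, and run_model_inference are all assumptions:

```python
import time

_cache: dict[str, tuple[float, list[str]]] = {}
TTL_SECONDS = 24 * 3600  # assumed: keep recs for a day instead of re-inferring

def run_model_inference(user_id: str) -> list[str]:
    # Hypothetical stand-in for the expensive recommendation-model call.
    return [f"rec-{i}-for-{user_id}" for i in range(3)]

def get_recs(user_id: str) -> list[str]:
    now = time.time()
    hit = _cache.get(user_id)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]  # fresh enough: skip the model call entirely
    recs = run_model_inference(user_id)
    _cache[user_id] = (now, recs)
    return recs

print(get_recs("u42"))
print(get_recs("u42"))  # second call served from cache
```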