Similar todos
#thecompaniesapi big LLM results from our pipeline now get saved as JSON files to prepare datasets for our own models; optimizing the queue does not seem to be a finite task
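A minimal sketch of that kind of result dump, assuming a hypothetical iterable of result dicts and output directory (the real pipeline's schema isn't shown in the note); each pipeline output is appended as one JSON line so the files can later be assembled into fine-tuning datasets.

```python
import json
from pathlib import Path

def dump_llm_results(results, out_dir="datasets/raw"):
    """Append LLM pipeline outputs to a JSONL file for later dataset prep.

    `results` is assumed (hypothetically) to be an iterable of dicts like
    {"prompt": ..., "completion": ..., "model": ...}.
    """
    out_path = Path(out_dir)
    out_path.mkdir(parents=True, exist_ok=True)
    with open(out_path / "llm_results.jsonl", "a", encoding="utf-8") as f:
        for record in results:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```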
#thecompaniesapi early LLM results are bonkers but the qop/s are relatively slow; setting up results storage to train smaller models; one step at a time
got an actually working dataset for clv #ml4all
provision a batch of L4 and L40S GPUs at Scaleway for #mirage since our account got validated and quotas were lifted
Work on #nichewit, pull in production datasets to iterate a bit faster locally
finish migrating #mirage Kubernetes Intel and NVIDIA GPU instances to Scaleway, getting latest-generation NVIDIA L40S + L4 GPUs, running much more smoothly now! (previously: old A40 and A16)
#thecompaniesapi work on dataset filtering & classification UI; we can now review outputs from Claude/GPT and directly grab the dataset to send for fine-tuning
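A rough sketch of that export step after review, assuming hypothetical `reviewed_rows` dicts with `input`, `output`, and `approved` fields (the real review UI's schema isn't given); approved Claude/GPT outputs are written as chat-style JSONL, a common fine-tuning format.

```python
import json

def export_reviewed_for_finetuning(reviewed_rows, out_file="finetune.jsonl"):
    """Keep only human-approved LLM outputs and write them as chat-style JSONL."""
    with open(out_file, "w", encoding="utf-8") as f:
        for row in reviewed_rows:
            if not row.get("approved"):
                continue  # skip outputs rejected in the review UI
            example = {
                "messages": [
                    {"role": "user", "content": row["input"]},
                    {"role": "assistant", "content": row["output"]},
                ]
            }
            f.write(json.dumps(example, ensure_ascii=False) + "\n")
```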
set up a more powerful VM for processing data & training models #mused
Updating Automatic1111, installing a new video GPU; today it's a day to learn how to train local models :B #dailywork
Chunk big dataset files and import them to the DB (Data Science project) #zg
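A minimal sketch of chunked import with pandas and SQLAlchemy, assuming a CSV source, a hypothetical table name, and a placeholder SQLite URL; each chunk is appended to the target table so the full file never has to fit in memory. The same call works against Postgres or MySQL with the matching SQLAlchemy URL.

```python
import pandas as pd
from sqlalchemy import create_engine

def import_in_chunks(csv_path, table_name, db_url="sqlite:///data.db", chunksize=50_000):
    """Stream a large CSV into a database table in fixed-size chunks."""
    engine = create_engine(db_url)
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        chunk.to_sql(table_name, engine, if_exists="append", index=False)
```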
#thecompaniesapi subdomain support ready for merge; swapping efforts back to AI to start creating the datasets; added new extraction steps for the last missing main table datapoints, we can now fill all our columns with a combination of website + AI extraction! next step: fine-tuning
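A small sketch of that column-filling idea, assuming two hypothetical DataFrames keyed by the same company id (the actual tables and join keys aren't shown): values from the website extraction are preferred, and the AI extraction fills whatever is still missing.

```python
import pandas as pd

def fill_columns(website_df: pd.DataFrame, ai_df: pd.DataFrame) -> pd.DataFrame:
    """Prefer website-extracted values; fall back to AI extraction where they are missing.

    Both frames are assumed to share the same index (e.g. a company id).
    """
    return website_df.combine_first(ai_df)
```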
Prepare datasets for demo