David Duwaer


@davidduwaer

Does a killer Billy Joel. Built the world's most advanced ORM for frontend so I can build apps faster. Irony noted, but it *does* work to that effect now.
Joined January 2025
I librarized my whole homegrown frontend Auth, Stripe connection, and ORM (the world's coolest) earlier this year, and today moved one of my Create React App apps over to it, replacing the copy-paste of it that used to live in that codebase (as I used to do everywhere). That system, which I call #lovabase, is now compatible with both partially server-side rendered and single-page React apps. I was able to delete a *lot* of code today while performance grew: I had made a few performance improvements to the library version of Lovabase since my last sync of all the copy-pastes across the different codebases.
Fixed a few bugs in my proprietary ORM
enlarged day headers in #happenlist list view
removed the "chevron right" on #happenlist list items – screen space is too valuable, and users who want more info still trivially discover that items are clickable
With the supervised scraper of #happenlist basically in place, I made it perform much better over the course of the day through many adjustments, some small, some big. Part of the improvements were to the supervision UI itself, to let me work faster, debug more easily, and spot problems sooner. In the second half of the day I started telling Claude Code, in a different source tree, to make adjustments to the iOS UI. This went very well thanks to the investments I made in the setup at the start of this month. Added more filterable categories and grouped events in the list view by day, so the date doesn't have to be repeated on each event.
algorithm is human-supervised now. all input is error. #happenlist
openai json schema STRICT mode is a superpower. Now added that you can trivially enable it by calling .strict() on your fluent llm call with #promptgun. It is still off by default because it puts limits on your json type (e.g. optional properties are not allowed)
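For reference, a minimal sketch of the raw OpenAI Chat Completions `response_format` payload that strict mode boils down to (field names are OpenAI's; how promptgun's `.strict()` builds this internally is an assumption, not shown):

```typescript
// Hedged sketch: constructing OpenAI's strict structured-output format.
// strictResponseFormat is an illustrative helper, not a promptgun API.
type JsonSchema = Record<string, unknown>;

function strictResponseFormat(name: string, schema: JsonSchema) {
  return {
    type: "json_schema" as const,
    json_schema: {
      name,
      strict: true, // constrains the model to emit exactly this shape
      schema,
    },
  };
}

// Strict mode's catch: every property must appear in `required` and
// additionalProperties must be false -- hence "optional properties
// are not allowed".
const eventSchema: JsonSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    startsAt: { type: "string" },
  },
  required: ["title", "startsAt"],
  additionalProperties: false,
};

const format = strictResponseFormat("event", eventSchema);
```

This object goes into the `response_format` field of a chat completion request; with `strict: true` the model's output is guaranteed to parse against the schema.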
#promptgun now fully switches to openai's native response_format json schema support whenever available. this saves a lot of tokens and makes promptgun way more reliable.
day job
"npm i promptgun" now allows you to add tools as simple inline callbacks, turning your single openai llm call into a real conversation, all type safe #promptgun
"npm i promptgun" now leverages openai native json schema support #promptgun
Drinking is bad folks, no progress today
Minimizing dependency on prompts that have to produce lists of stuff. #happenlist
Work slow. Feels like stagnation. Damn. And the LLM of the “agent” I’m building just behaving incredibly unpredictably. Dumbly. On steps that are supposed to be trivial. Terrible.
Walking the sail event with visiting family
made my lovabase (world's most advanced ORM, a typescript client-side ORM for hasura) automatically convert snake_case field names into camelCase property names for the resulting objects, because doing this in hasura for evvvvery single property is too time consuming
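The mapping itself is simple; a sketch of the snake_case-to-camelCase conversion applied per row (helper names are illustrative, not lovabase internals):

```typescript
// Hedged sketch: converting snake_case column keys from a Hasura row
// into camelCase properties on the resulting object.
function snakeToCamel(key: string): string {
  // "starts_at" -> "startsAt": uppercase the character after each underscore
  return key.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase());
}

function camelizeRow<T extends Record<string, unknown>>(row: T) {
  return Object.fromEntries(
    Object.entries(row).map(([k, v]) => [snakeToCamel(k), v]),
  );
}

const row = { event_id: 7, starts_at: "2025-08-01", title: "Sail" };
const obj = camelizeRow(row);
// obj: { eventId: 7, startsAt: "2025-08-01", title: "Sail" }
```

Doing this once in the client library replaces configuring a per-column name mapping in Hasura for every single field.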
data model for new part of application done. now claude is implementing the whole thing #happenlist
Made database schema for algorithm framework #happenlist, AI coding should go smoothly from there
added a job launcher and job log viewer to the admin panel. basically, i built the AI system, but with an AI-based app you really need a system around the AI system, or quality will be utter shite #happenlist
Claude Code has removed the token counter. Now it is harder to see if you're being throttled, and harder to debug why the hell things go so slow. Anyone else running into this today? #productivity