Promptgun


Type-safe LLM prompting. `npm install promptgun`

Supports:
- type-safe access and in-place docs for all OpenAI models
- type-safe output for every prompt that requests structured data
- type-safe streams: subscribe to JSON elements as soon as they stream in
- an extremely short, self-explanatory, discoverable, fluent syntax
- text prompts
- image prompts
- auto-retrying
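To make "type-safe output" concrete, here is the underlying pattern such a library automates, hand-rolled without Promptgun. The helper names (`parseTyped`, `isCity`) are invented for this sketch and are not Promptgun's actual API:

```typescript
// Hand-rolled illustration of typed structured output: parse the
// model's JSON reply and narrow it to a declared TypeScript type.
type City = { name: string; population: number };

// Stand-in for a raw LLM response body.
const raw = '{"name":"Oslo","population":709037}';

function parseTyped<T>(json: string, check: (v: unknown) => v is T): T {
  const value: unknown = JSON.parse(json);
  if (!check(value)) throw new Error("response does not match expected shape");
  return value;
}

// User-defined type guard doing the runtime validation.
const isCity = (v: unknown): v is City =>
  typeof v === "object" && v !== null &&
  typeof (v as City).name === "string" &&
  typeof (v as City).population === "number";

const city = parseTyped(raw, isCity); // city is a City, not `any`
console.log(city.name); // prints "Oslo"
```

The point of the library is that you never write the guard or the parse step yourself; the typed value falls out of the fluent call.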
Prior attempts' outputs and problems are now given to any .check callback #promptgun
All attempted LLM results (parsed and typed, if structured) are available on the error object thrown when a .check reaches its max attempt count #promptgun
Fixed: when a .check reaches its max attempt count and interrupts the LLM conversation by throwing an error, that error no longer triggers the retry mechanism for individual OpenAI calls #promptgun
Slight in-place correction: throwing in a .check clause now stops the prompt and escapes answer iteration #promptgun
Improved the docs of Promptgun a lot and added many missing pieces of information. If you use the OpenAI API from a TypeScript or JavaScript project, you'd do me a big favor by taking a look at #promptgun to see whether you can get it set up in 1 minute, or whether something is unclear
Made tools even simpler to add to your prompts in #promptgun. Go take a look if you use the OpenAI API in a Node.js project!
Solved several bugs with GPT-5 reasoning responses with #promptgun. Also, since I switched it to the Responses endpoint, tools had stopped working – this is now fixed.
Refined the .check(callback) clause in #promptgun, which allows you to check the LLM output, give it feedback, and force it to iterate, with only a few lines of code. Now out in v1.1.2.
Added a .check(callback) clause to Promptgun so you can check outputs and give the LLM error feedback – extending the conversation and letting the LLM iterate on its output with a single callback #promptgun
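The check-and-retry loop this clause wraps can be sketched as follows. Everything here is a stand-in: `callLLM` is a stub for a real OpenAI call, and `withCheck` is not Promptgun's implementation, just the mechanism the posts above describe (feedback fed back in, all attempts attached to the final error):

```typescript
// throw inside the check to reject the answer; the message becomes feedback
type Check<T> = (answer: T) => void;

async function withCheck<T>(
  callLLM: (feedback?: string) => Promise<T>,
  check: Check<T>,
  maxAttempts = 3,
): Promise<T> {
  const attempts: T[] = [];
  let feedback: string | undefined;
  for (let i = 0; i < maxAttempts; i++) {
    const answer = await callLLM(feedback);
    attempts.push(answer);
    try {
      check(answer);
      return answer; // check passed
    } catch (e) {
      feedback = e instanceof Error ? e.message : String(e);
    }
  }
  // Expose every attempted (typed) result on the thrown error.
  const err = new Error(`check failed after ${maxAttempts} attempts`);
  (err as Error & { attempts: T[] }).attempts = attempts;
  throw err;
}

// Demo: a stub "LLM" that answers wrongly until it receives feedback.
const stub = async (feedback?: string) => (feedback ? 42 : 7);

withCheck(stub, (n) => {
  if (n !== 42) throw new Error(`expected 42, got ${n}`);
}).then((n) => console.log("final answer:", n)); // prints "final answer: 42"
```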
OpenAI's JSON schema STRICT mode is a superpower. You can now trivially enable it by calling .strict() on your fluent LLM call with #promptgun. It is still off by default because it puts limits on your JSON type (e.g. optional properties are not allowed)
#promptgun now fully switches to openai's native response_format json schema support whenever available. this saves a lot of tokens and makes promptgun way more reliable.
"npm i promptgun" now allows you to add tools as simple inline callbacks, turning your single openai llm call into a real conversation, all type safe #promptgun
"npm i promptgun" now leverages openai native json schema support #promptgun
Further improved tool support in #promptgun and made #happenlist more intelligent with jt
Tools added, polishing edges #promptgun
Continuing tool support in #promptgun
Response type adherence in extended conversations much improved #promptgun
Improved the UI of the conversation-view cloud service for #promptgun
Added copying of conversations to Promptgun, so you can create conversation trees: save a conversation to a variable and continue it from that same point multiple times #promptgun
Built conversation support into `npm install promptgun`! Call `.createConversation()` on the AI client and you get a dedicated client for a conversation, where each new prompt is appended to the existing conversation internally, so you can drop conversation-management boilerplate! #promptgun
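The mechanism behind such a dedicated conversation client can be sketched in a few lines. Again a hand-rolled stand-in, not Promptgun's implementation: `sendToLLM` stubs the real API call, and `fork` shows how copying the history enables the conversation trees mentioned above:

```typescript
type Msg = { role: "user" | "assistant"; content: string };

function createConversation(
  sendToLLM: (history: Msg[]) => Promise<string>,
) {
  const history: Msg[] = [];
  const client = {
    // Each prompt appends to the internal history — no boilerplate
    // for threading past messages through every call.
    async prompt(text: string): Promise<string> {
      history.push({ role: "user", content: text });
      const reply = await sendToLLM(history);
      history.push({ role: "assistant", content: reply });
      return reply;
    },
    // Copy the history to branch from the same point multiple times.
    fork() {
      const copy = createConversation(sendToLLM);
      copy.history.push(...history);
      return copy;
    },
    history,
  };
  return client;
}

// Demo with an echoing stub model.
const convo = createConversation(
  async (h) => `echo: ${h[h.length - 1].content}`,
);
convo.prompt("hello").then((r) => console.log(r)); // prints "echo: hello"
```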