Hey folks,
We just pushed out something we’ve been using internally and thought it might be useful to others building with LLMs. It’s called Promposer: a dev tool for prompt engineering.
The idea is simple:
- You write and iterate on prompts (or bigger instruction sets).
- Add the task-specific context or tools they need.
- Then run simulations with test cases to see how the model behaves in edge cases before you ship.
- Finally, hook into the API for real-time thread review, so you can catch production issues early instead of chasing them down later.
Instead of the usual copy/paste trial-and-error loop, you keep everything in one place and run structured evaluations. It also works with multiple models and lets you compare outputs side by side.
There’s an IDE extension (VS Code compatible) if you want to stay inside your editor, and a web UI if you prefer the browser. We also exposed an API for anyone who wants to wire this into their own pipeline. Both cloud and privacy modes are available.
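To give a feel for what wiring the API into a pipeline could look like, here’s a hypothetical Python sketch. The endpoint path, field names, and auth scheme below are placeholders I made up for illustration, not the actual Promposer API — check the docs for the real schema.

```python
import json
import urllib.request

# Hypothetical endpoint for submitting a production thread for review;
# the real URL and schema live in the Promposer docs.
API_URL = "https://promposer.ai/api/v1/threads/review"

def build_review_request(thread_id: str, messages: list[dict], api_key: str) -> urllib.request.Request:
    """Package a production thread as a review request. All field names are illustrative."""
    payload = {"thread_id": thread_id, "messages": messages}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
        method="POST",
    )

# Build (but don't send) a request for a sample thread:
req = build_review_request(
    "thread-123",
    [{"role": "user", "content": "How do I reset my password?"}],
    api_key="YOUR_KEY",
)
```

The idea is just to show the shape of the integration: serialize each production thread and POST it for review as part of your existing logging or eval step.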
We built it because we were tired of manually testing prompts, tweaking context windows, and having no way to track what worked vs. what broke. If you’re doing prompt/context engineering or need to simulate tasks before pushing to production, it might save you some time.
Watch the video: “Supercharge prompt engineering with Promposer”
Site & docs are here: https://promposer.ai
Would love feedback from others hacking on prompt workflows.
