Two prompts walk into an LLM. The one that comes out more intact wins.
That’s promptwars.io — a multiplayer, async game I’ve been running for over a year. Simple premise, but it’s surprisingly hard to describe what the game actually is. It’s about manipulating chat models, about prompt-injection techniques, about taking control of an LLM’s output. Not quite a single clean concept, which turns out to be part of the appeal.
The Core War connection
If you know Core War, you’ll recognize the DNA. Both games are about battling for control over a virtual battlefield governed by rules very different from the physical world. In Core War, that battlefield was the memory of a von Neumann machine. In Prompt Wars, it’s the activation space of a language model.
Two prompts get concatenated, sent to an LLM, and scored based on the longest common subsequence between each prompt and the output. Highest score wins the round.
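A minimal sketch of the scoring, assuming a straightforward DP for subsequence length; the normalization (dividing by output length) is my guess, not necessarily what the game does:

```python
def lcs_length(a: str, b: str) -> int:
    # Classic dynamic programming for longest common subsequence,
    # O(len(a) * len(b)) time, O(len(b)) space.
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[-1]))
        prev = curr
    return prev[-1]

def score(prompt: str, output: str) -> float:
    # Hypothetical normalization: what fraction of the output
    # is "explained" by the prompt as a subsequence.
    return lcs_length(prompt, output) / max(len(output), 1)
```

Note that subsequence (not substring) matching means a prompt can score even when its text survives only in scattered fragments of the output.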
It’s emergent
The scoring creates interesting dynamics. Winning means getting your text reproduced in the output — so the higher you score, the more of your strategy you expose to opponents. And because battle outputs are themselves valid prompts, a winning output can be entered back into the arena, opening the door to evolutionary dynamics.
Different models behave differently. We run multiple arenas: GPT, Claude, Gemini. The Gemini arena now has multiple scoring leaderboards, including embedding-based scoring (VoyageAI) as an alternative to pure character-matching.
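To give a rough idea of the embedding-based alternative: compare prompt and output by cosine similarity of their embedding vectors instead of character matching. The toy `embed()` below (a character-frequency vector) is purely illustrative — the real arena uses VoyageAI embeddings:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding for illustration only: counts of letters a-z.
    # A real semantic embedding (e.g. VoyageAI) would replace this.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u: list[float], v: list[float]) -> float:
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def embedding_score(prompt: str, output: str) -> float:
    return cosine(embed(prompt), embed(output))
```

Embedding-based scoring rewards getting your *meaning* into the output rather than your literal characters, which changes which strategies win.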
It lasts
The game is designed so it can’t really be broken (Spiffing Brit style) or gamed into triviality. The core mechanic scales with model capability — more sophisticated models open up new strategic dimensions rather than breaking the game.
Source code: https://github.com/SupraSummus/prompt-wars
Curious what you think