You Are Probably Using 30% of Your AI Agent
Most developers install Claude Code, type a prompt, get code back. The code is usually good. Sometimes excellent. And yet something is off. The agent writes implementation before tests. It guesses at architecture instead of asking questions. It fixes symptoms instead of root causes. It works fast -- like an enthusiastic intern who skips the boring parts.
This is not the agent's fault. Large language models are trained to be helpful, which in practice means they rush to produce output. Ask for a feature and the agent starts writing code. It does not stop to ask what you actually need. It does not write a spec. It does not plan file structure. It certainly does not write a failing test first.
Jesse Vincent noticed this pattern and decided to fix it. Not by building a new model or tool, but by giving existing agents a set of rules to follow. The result: Superpowers, a framework of composable skills that turns your AI coding agent from a fast typist into a disciplined engineering partner. As of March 2026, it has over 89,000 GitHub stars, making it one of the fastest-growing developer tools in history.
The idea is disarmingly simple: if your agent is smart but undisciplined, give it discipline.
Who Made This and Why
Jesse Vincent is not new to tools developers depend on. He created Request Tracker (RT) in the 1990s. He managed Perl 6 from 2005 to 2008. He co-founded Keyboardio. He built K-9 Mail for Android, later acquired by Mozilla and rebranded Thunderbird for Android. The thread: Jesse builds infrastructure others rely on, and he obsesses over workflow.
Superpowers grew from Jesse's experience using Claude Code for serious development. The agent was capable but inconsistent. Left alone, it skipped tests, implemented features before understanding requirements, applied quick fixes to undiagnosed bugs. These are not AI problems. They are engineering discipline problems. A junior developer does the same things.
The insight: AI agents respond to structure. You cannot lecture them about best practices and expect compliance. But you can give them explicit step-by-step workflows and hard gates blocking progress until conditions are met. A skill saying "write tests first" gets ignored. A skill saying "NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST. Write code before the test? Delete it. Start over." gets followed.
The philosophical core: treat your AI agent like a powerful but undisciplined junior engineer. Give it the process guardrails that turn juniors into seniors.
The Core Skills
Superpowers ships with over a dozen skills organized into a complete development workflow. Each is a SKILL.md file with explicit instructions, hard gates, and process flows.
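The shape of a skill file is simple. A hypothetical sketch of the anatomy (the frontmatter fields and wording here are illustrative, not copied from the Superpowers repository):

```markdown
---
name: example-skill
description: One line describing when to trigger this skill, phrased so the agent can match it
---

# Example Skill

**Hard gate:** Do NOT proceed to implementation until every step below is complete.

## Process
1. Gather context: relevant files, docs, recent commits.
2. Ask clarifying questions, one at a time.
3. Present the result for approval before writing any code.
```

The description tells the agent when to activate; the body tells it exactly what to do once activated.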
Brainstorming: Explore Before Building
Activates before any creative work. Hard gate:
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it.
Forces the agent to explore project context first (reading files, docs, recent commits), ask clarifying questions one at a time, propose 2-3 approaches with trade-offs, present design in sections for approval, write a spec document. Only after approval does it transition to implementation.
Why this matters: most wasted work comes from building the wrong thing. A fast agent building the wrong thing loses more time than one that asks two questions first.
Superpowers 5 (early March 2026) added visual brainstorming -- HTML mockups in-browser instead of ASCII diagrams. When design involves visual elements, the agent offers a "visual companion" before clarifying questions.
Writing Plans: Spec Before Code
After design approval, breaks work into bite-sized tasks. Each task: 2-5 minutes of work, exact file paths, complete code context, verification steps. Written assuming the implementer has "zero context for your codebase and questionable taste."
Sounds harsh, but it is practical. When subagents execute tasks (below), each starts with fresh context. The plan must be detailed enough that a new agent knowing nothing can complete each step correctly.
Plans enforce DRY, YAGNI, and TDD. Every task includes what to test, how, expected output.
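A single plan task might look like this -- a hypothetical sketch of the shape, with made-up file paths and commands:

```markdown
### Task 7: Reject signups with a missing email field
- File: src/handlers/signup.ts
- Change: return 400 with an error body when `email` is absent
- Test first: POST /signup with no email; expect status 400
- Verify: `npm test -- signup` passes; no other tests touched
```

Exact paths, exact expected behavior, exact verification command -- nothing the implementer has to guess.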
Test-Driven Development: Test Before Implementation
The strictest skill. The "Iron Law":
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST. Write code before the test? Delete it. Start over. No exceptions.
Classic red-green-refactor: failing test, verify it fails for the right reason, minimum code to pass, verify all tests pass, refactor. Includes an anti-patterns reference cataloguing common TDD mistakes.
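The cycle is easiest to see in miniature. A Python sketch with a made-up `slugify` function (the function and its behavior are illustrative, not from Superpowers):

```python
# Step 1 (red): write the test first and run it before the code
# exists. It fails with NameError -- the "right reason" check:
# the test fails because the behavior is missing, not because
# the test itself is broken.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the minimum code that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: rerun the suite, then refactor with the test as a net.
test_slugify_lowercases_and_joins_words()
```

The point of watching the test fail first is that it proves the test can fail at all; a test that passes against missing code is worthless.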
This skill draws the strongest reactions. Some love it -- finally, the agent writes tests. Others resist -- they do not practice TDD themselves. But results speak. chardet shipped 7.0.0 using Superpowers: 41x faster, 96.8% accuracy, dozens of longstanding issues fixed. The comprehensive test suite covering 2,161 files across 99 encodings was a direct product of TDD.
Systematic Debugging: Diagnose Before Fixing
Four-phase process: root cause investigation, hypothesis formation, targeted fix, verification. Iron law:
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST.
Explicitly warns against skipping: "Use this ESPECIALLY when under time pressure. Emergencies make guessing tempting." Includes root-cause tracing, defense-in-depth analysis, condition-based waiting.
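Condition-based waiting replaces fixed sleeps in flaky tests with polling for the actual condition, so the test waits exactly as long as needed and fails loudly on timeout. A minimal Python sketch (the helper name and timings are assumptions, not Superpowers' code):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll until condition() is truthy instead of sleeping a fixed
    amount. Returns True on success; raises TimeoutError instead of
    letting the test pass (or fail) by luck."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

Usage: `wait_for(lambda: server.is_ready())` in place of `time.sleep(2)` and a prayer.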
Addresses the most common AI failure mode. Without guidance, an agent encountering a bug tries random fixes. If the first makes the error disappear, it declares victory. The debugging skill forces understanding before touching code.
Code Review: Verify Before Merging
Dispatches a separate subagent to review completed work. The reviewer gets precisely crafted evaluation context, never the implementer's session history. Prevents bias from knowing the implementer's reasoning.
Reviews check against implementation plan, report by severity, critical issues block progress. The receiving-code-review skill handles the other direction: responding to feedback without defensiveness or unrelated changes.
Subagent-Driven Development: Parallelize Independent Tasks
This is where Superpowers moves from "good practices" to "architectural innovation." Dispatches a fresh agent per task from the plan, two-stage review after each: spec compliance first, then code quality.
Each subagent starts clean. Receives only its task description and relevant context, not full conversation history. Prevents context pollution (accumulated context degrading judgment) and lets the coordinator manage many tasks without exhausting the context window.
Result: it is not uncommon for Claude to work autonomously for hours without deviating from the plan. The coordinator dispatches, reviews, handles failures, continues forward -- only escalating when something genuinely requires human judgment.
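The coordination pattern is, at its core, a dispatch-and-review loop. Everything below is illustrative pseudostructure -- `run_subagent` and `review` are stand-ins for whatever the platform's real dispatch APIs are, not Superpowers' actual implementation:

```python
def run_subagent(task):
    # Stand-in for the platform's subagent API. In the real pattern this
    # spawns a fresh agent that sees only the task description, never
    # the coordinator's conversation history.
    return {"task": task, "diff": f"<changes for task {task['id']}>"}

def review(result, aspect):
    # Stand-in reviewer: a separate subagent checks one aspect at a time.
    return {"aspect": aspect, "passed": True}

def coordinate(plan):
    """Dispatch each plan task to a fresh subagent, then run the
    two-stage review: spec compliance first, then code quality."""
    completed = []
    for task in plan:
        result = run_subagent(task)            # fresh context per task
        for aspect in ("spec compliance", "code quality"):
            if not review(result, aspect)["passed"]:
                result = run_subagent(task)    # redo on a failed review
        completed.append(result)
    return completed
```

The design choice doing the work here is that the coordinator's context holds only plan state and verdicts, never the implementation details -- which is why it can run for hours without degrading.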
The Philosophy: Rigid Where It Matters, Flexible Where It Does Not
Not all skills work the same way. Some are rigid with hard gates. Others flexible, providing guidance without enforcement. The distinction is deliberate.
TDD and debugging are rigid. Iron laws, explicit prohibitions, delete-and-restart consequences. These are domains where cutting corners causes compounding damage. Skipped test today becomes hours of regression debugging tomorrow. Uninvestigated root cause becomes three bugs downstream.
Brainstorming is structured but adaptive. Checklist and hard gate (no code before design approval), but questions and approaches vary by context. A todo app gets brief design. A distributed system gets thorough treatment.
Code review is advisory. Reports findings and severity. Human decides which to fix.
This is "explain why, not what." Each skill explains its reasoning: why tests must fail before passing, why root causes matter more than symptoms, why fresh context prevents drift. The agent follows because it understands principles, not because it was told to follow blindly.
Installation and Usage
Claude Code (Official Marketplace)
Since January 2026:
/plugin install superpowers@claude-plugins-official
Or community marketplace:
/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers@superpowers-marketplace
Cursor
/add-plugin superpowers
Or search "superpowers" in Cursor plugin marketplace.
Codex CLI
Tell Codex:
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.codex/INSTALL.md
Gemini CLI
gemini extensions install https://github.com/obra/superpowers
How Skills Trigger
Once installed, skills trigger automatically. Start a session, ask to build a feature. The agent detects task type and activates brainstorming. Approve design -- writing-plans activates. Start implementing -- TDD kicks in. Hit a bug -- systematic-debugging takes over.
Manual invocation works too: "Use the brainstorming skill to help me think through this." But automatic triggering is the point. You do not need to remember which skill to use. The framework handles orchestration.
Verify It Works
New session. Ask something that should trigger a skill -- "help me plan this feature" or "let's debug this issue." If installed correctly, the agent announces which skill it is using and follows structured process instead of jumping to code.
Cross-Agent Compatibility
Superpowers' most significant design decision: platform agnosticism. Works with Claude Code, Cursor, Codex CLI, OpenCode, Gemini CLI, Qwen Code, Goose CLI, and Auggie. Possible because skills are Markdown files, not platform-specific plugins. Any agent that reads SKILL.md follows the instructions.
This makes Superpowers a portable methodology. Team uses Claude Code, colleague prefers Codex CLI -- both run the same skills. Brainstorming, TDD enforcement, subagent coordination all transfer. The cross-agent skills ecosystem is converging on this model: knowledge encoded once, applied everywhere.
That said, Claude Code has the deepest integration. allowed-tools for sandboxing, automatic plugin updates, native subagent support mean some skills (particularly subagent-driven development) work best on Claude Code. Other agents get the core workflow without advanced orchestration.
Writing Your Own Skills on Top of Superpowers
Superpowers is a foundation, not a ceiling. Includes a writing-skills skill that teaches the agent how to create new skills following best practices. Meta, yes -- but it means you can extend the framework with domain-specific skills.
Common extensions:
- Deployment skills enforcing your release checklist
- ADR skills documenting design choices
- Security review skills checking compliance requirements
- Onboarding skills encoding tribal knowledge
Your custom skills compose with existing ones. Deployment skill can depend on code-review completing first. ADR skill can plug into brainstorming workflow.
For writing effective skills, see How to Write Your First SKILL.md. For design principles separating good skills from noise, read What Makes a Good Skill.
What Makes This Different from Good Prompts
Could you get the same results with detailed prompts? TDD instructions in CLAUDE.md, "always ask clarifying questions" in system prompt, skip the framework?
You could try. It would not work the same way.
A single prompt instruction is a suggestion. The agent follows it when convenient, ignores it under pressure. "Always write tests first" in CLAUDE.md works for the first three tasks. By the fourth, with the context getting long and the problem getting complex, tests quietly get skipped. You will not notice until something breaks.
A Superpowers skill is a process with enforcement. The TDD skill does not suggest tests first. It mandates with an iron law, includes instructions to delete code written before tests, structures every step around red-green-refactor. Speed limit sign versus speed bump. One informs. The other physically prevents.
The compounding effect matters. Individual skills are useful. TDD alone improves quality. Brainstorming alone reduces waste. The framework together changes how you work. Brainstorming produces a spec. Spec feeds into plan. Plan feeds into subagent development. Subagents follow TDD. Code review catches what TDD missed. Each skill's output is the next skill's input.
Think chess. Knowing individual moves makes you a beginner. Understanding opening theory, middlegame strategy, endgame technique makes you a player. Superpowers is not a collection of moves. It is a strategy for how moves fit together.
Getting Started: 5 Steps
Shortest path from "interested" to "using Superpowers productively":
1. Install on your preferred agent. Claude Code: /plugin install superpowers@claude-plugins-official.
2. Start a real task. Not a toy example. A feature you need or a bug you need to fix. The framework shines on real work.
3. Follow brainstorming. When the agent starts asking questions instead of writing code, resist "just build it." Answer the questions. Approve the design. Watch implementation clarity.
4. Let TDD run. First time the agent writes a failing test, watches it fail, writes minimal code to pass -- you feel the difference. Code from this cycle is smaller and more focused.
5. Review and extend. After your first project, you know which parts fit and which need adjustment. Write custom skills for what does not fit. The skill development workflow covers the process.
Superpowers is not a magic wand. It will not fix a bad plan or make a wrong architecture work. What it does is ensure that the easy mistakes -- rushing, skipping steps, not asking questions -- do not happen. For most projects, those easy mistakes cost the most time.
Open source under MIT. Actively maintained. Zero to 89,000 stars in five months. That trajectory does not happen unless the tool solves a real problem for a lot of people.
If you use an AI coding agent and have ever thought "this would be great if it just slowed down and did things properly" -- Superpowers is exactly that. Discipline for AI agents. Turns out, that is all they needed.