Twelve Tabs and No Answer
It is 11 PM on a Wednesday. You have been researching laptops for three hours. You have twelve browser tabs open -- four review sites, two Reddit threads, a YouTube comparison you watched at 2x speed, three spec sheets, a pricing page, and a tab you opened twenty minutes ago that you have already forgotten about. You are more confused now than when you started.
This is a familiar scene. Not just for laptops. For apartments. For job offers. For which project management tool your team should adopt. For whether to lease or buy a car. The pattern is always the same: you start researching, you accumulate information, and at some point the information starts working against you. Every new data point introduces a new tradeoff. Every review contradicts another review. You feel less certain with every tab you open.
The problem is not that you lack information. The problem is that you never defined what you actually care about.
Think of it like packing for a trip without knowing the destination. You throw in sandals and snow boots, sunscreen and a parka, a swimsuit and thermal underwear. You end up with a suitcase that weighs forty pounds and covers every climate on Earth. If someone had told you "you are going to Iceland in January," you would have packed in ten minutes. The destination -- the set of things you care about -- eliminates ninety percent of the decisions.
That is what this framework does. Before looking at a single option, you define your criteria and how much each one matters. Then you hand the research to an AI agent that evaluates every option against every criterion. What comes back is not a pile of tabs. It is a table. Scores, weights, reasoning, and a recommendation you can actually act on.
The Framework: Three Steps
The entire decision framework fits in three steps. You will build a reusable prompt template that works for any decision -- tech purchases, career moves, SaaS tools, rental apartments, anything with multiple options and multiple criteria.
1. Define your criteria and assign weights.
2. Feed the criteria and options to the AI agent for research.
3. Review the weighted comparison table and recommendation.
That is it. The power is in step one. Most people skip it entirely and go straight to browsing. That is why they end up with twelve tabs and no answer.
Step 1: Define What You Actually Care About
This is the hard part. Not because it is complicated, but because it forces honesty. You have to answer a question most people avoid: what do I actually value here?
Open a file. Call it decision-criteria.md. Write down the decision you are making, list every criterion that matters, and assign each one a weight from 1 to 5.
Here is an example -- choosing a laptop:
## Decision: Which laptop to buy for software development
## Options
- MacBook Pro 14" M4 Pro
- ThinkPad X1 Carbon Gen 12
- Framework Laptop 16
## Criteria (weight 1-5, where 5 = dealbreaker)
- Build quality and durability: 4
- Performance for compilation and local AI models: 5
- Battery life (real-world, not manufacturer claims): 4
- Keyboard quality: 3
- Repairability and upgradeability: 2
- Port selection (no dongle life): 3
- Price (total cost including accessories): 4
- Linux compatibility: 3
- Display quality: 3
- Weight and portability: 2
Notice what happens when you write this list. Decisions start making themselves. If "performance for compilation" is a 5 and "repairability" is a 2, you have already told yourself something important. You are not someone who will actually open the laptop and swap the RAM -- you just liked the idea of it. The weight forces the admission.
Here is another example -- choosing between job offers:
## Decision: Which job offer to accept
## Options
- Startup A (Series B, 40 people, remote)
- BigCorp B (public, 10,000 people, hybrid)
- Consultancy C (boutique, 200 people, on-site)
## Criteria (weight 1-5)
- Total compensation (salary + equity + benefits): 5
- Technical growth opportunity: 4
- Work-life balance: 4
- Team quality and engineering culture: 5
- Remote work flexibility: 3
- Job stability (runway / financial health): 4
- Product domain interest: 3
- Career brand value: 2
- Commute time: 2
The criteria file is your compass. Everything that follows is just the AI walking in the direction you pointed.
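If you want to reuse the criteria file programmatically -- say, to feed the weights into your own scripts -- parsing it is trivial. Here is a minimal sketch, assuming the `- Criterion name: weight` line format shown above (the function name and sample are illustrative, not part of any standard tool):

```python
import re

def parse_criteria(text):
    """Extract {criterion: weight} pairs from a decision-criteria.md file.

    Assumes the format used above: bullet lines ending in ': <number>'.
    Lines without a trailing numeric weight (headings, option names) are skipped.
    """
    criteria = {}
    for line in text.splitlines():
        m = re.match(r"-\s+(.+?):\s*(\d+)\s*$", line.strip())
        if m:
            criteria[m.group(1)] = int(m.group(2))
    return criteria

sample = """\
## Criteria (weight 1-5)
- Keyboard quality: 3
- Battery life (real-world, not manufacturer claims): 4
- Price (total cost including accessories): 4
"""
weights = parse_criteria(sample)
```

Option lines like `- Framework Laptop 16` do not match the `: <number>` suffix, so the same parser works on the whole file.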
Step 2: Let the AI Research Each Option
Now you hand the criteria file to Claude Code along with a prompt that tells it what to do. This is the prompt template:
claude -p "$(cat <<'EOF'
You are a structured decision analyst. I need to make a decision and I want
a rigorous, weighted comparison.
## The Decision
[paste from decision-criteria.md, or reference the file]
## Your Task
For each option, research and evaluate it against every criterion listed above.
For each (option, criterion) pair:
1. State the relevant facts (cite specifics: numbers, dates, sources)
2. Assign a score from 1-10 (1 = worst, 10 = best for this criterion)
3. Write one sentence explaining the score
Then:
4. Multiply each score by the criterion weight
5. Sum the weighted scores for each option
6. Output a comparison table: rows = criteria, columns = options, cells = score
7. Output a final ranking with total weighted scores
8. Write a recommendation (2-3 sentences) explaining why the top option wins
and what tradeoffs the person is accepting
## Rules
- Be specific. "Good performance" is not acceptable. "Compiles a 500K-line
C++ project in 47 seconds (benchmarked by Phoronix, Jan 2026)" is.
- If you cannot find reliable data for a criterion, say so explicitly.
Do not invent numbers.
- The recommendation must acknowledge the second-place option's strengths.
A good decision requires understanding what you are giving up.
EOF
)"
Run this from your terminal. Claude Code will spend a few minutes researching each option against each criterion. What comes back looks something like this (abbreviated):
## Weighted Comparison: Laptop Decision
| Criterion (Weight) | MacBook Pro M4 (Score) | ThinkPad X1 (Score) | Framework 16 (Score) |
|---------------------------------|------------------------|----------------------|----------------------|
| Build quality (4) | 9 (36) | 8 (32) | 6 (24) |
| Compilation performance (5) | 10 (50) | 7 (35) | 8 (40) |
| Battery life (4) | 9 (36) | 8 (32) | 5 (20) |
| Keyboard quality (3) | 7 (21) | 9 (27) | 7 (21) |
| Repairability (2) | 2 (4) | 4 (8) | 10 (20) |
| Port selection (3) | 7 (21) | 8 (24) | 9 (27) |
| Price (4) | 5 (20) | 6 (24) | 7 (28) |
| Linux compatibility (3) | 6 (18) | 9 (27) | 9 (27) |
| Display quality (3) | 10 (30) | 8 (24) | 7 (21) |
| Weight (2) | 7 (14) | 9 (18) | 4 (8) |
|---------------------------------|------------------------|----------------------|----------------------|
| **Total Weighted Score** | **250** | **251** | **236** |
## Recommendation
The ThinkPad X1 Carbon edges out the MacBook Pro by one weighted point,
driven by its keyboard, Linux support, and portability scores. However,
the margin is razor-thin. If compilation performance is truly your top
priority — and you weighted it as a 5 — the MacBook Pro's M4 Pro chip
is measurably faster. The real question is whether you will actually
run Linux on this machine. If the answer is "probably not," drop that
criterion's weight to 1 and re-run. The MacBook wins by five points.
That last paragraph is the key. The AI does not just give you a number. It tells you which criteria are swinging the result and invites you to pressure-test your own weights.
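The arithmetic behind the table is simple enough to check yourself: each cell is a 1-10 score, each weighted cell is score times weight, and each total is the column sum. A minimal sketch, with the scores and weights transcribed from the table above (score lists follow the table's row order):

```python
# Weights and scores transcribed from the comparison table above.
WEIGHTS = {"build": 4, "compile": 5, "battery": 4, "keyboard": 3, "repair": 2,
           "ports": 3, "price": 4, "linux": 3, "display": 3, "weight": 2}
SCORES = {
    "MacBook Pro M4": [9, 10, 9, 7, 2, 7, 5, 6, 10, 7],
    "ThinkPad X1":    [8, 7, 8, 9, 4, 8, 6, 9, 8, 9],
    "Framework 16":   [6, 8, 5, 7, 10, 9, 7, 9, 7, 4],
}

def weighted_total(scores):
    """Sum of score * weight, pairing each score with its criterion's weight."""
    return sum(s * w for s, w in zip(scores, WEIGHTS.values()))

totals = {name: weighted_total(s) for name, s in SCORES.items()}
# -> {'MacBook Pro M4': 250, 'ThinkPad X1': 251, 'Framework 16': 236}
```

Running this reproduces the table's totals exactly, which is the point: the AI's job is the research and the scoring judgments, but the aggregation is transparent arithmetic you can audit.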
Step 3: Pressure-Test and Iterate
The first run is never the final answer. It is the first draft. And like a first draft, it reveals things you did not see before.
Maybe you realize "career brand value" at weight 2 is dishonest -- you actually care about it more than you admitted. Change it to 4. Re-run.
Maybe the AI scored "job stability" for the startup at 4/10, and you think that is too harsh because you know the founder personally and the runway is 24 months. Override the score. Re-run.
The framework is not a black box. It is a conversation between you and your own priorities. The AI handles the research and the arithmetic. You handle the honesty.
claude -p "$(cat <<'EOF'
Re-run the laptop comparison with these updated weights:
- Repairability: 1 (I am being honest -- I will never open it)
- Linux compatibility: 1 (I am staying on macOS)
- Keyboard quality: 5 (I type 8 hours a day, this matters more than I thought)
Keep all other criteria and scores the same. Show the new totals and
whether the recommendation changes.
EOF
)"
This is the iterative loop. Define, research, review, adjust, re-run. Each cycle takes a few minutes. Three cycles and you have explored the decision space more thoroughly than twelve hours of tab-browsing ever could.
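You can also sanity-check a re-weighting locally before asking the AI to re-run anything. The sketch below applies the three weight changes from the re-run prompt above to the scores from the comparison table (Framework 16 omitted for brevity; the shortened criterion keys are my own):

```python
# Scores copied from the comparison table above (Framework 16 omitted).
SCORES = {
    "MacBook Pro M4": {"build": 9, "compile": 10, "battery": 9, "keyboard": 7,
                       "repair": 2, "ports": 7, "price": 5, "linux": 6,
                       "display": 10, "weight": 7},
    "ThinkPad X1": {"build": 8, "compile": 7, "battery": 8, "keyboard": 9,
                    "repair": 4, "ports": 8, "price": 6, "linux": 9,
                    "display": 8, "weight": 9},
}
ORIGINAL = {"build": 4, "compile": 5, "battery": 4, "keyboard": 3,
            "repair": 2, "ports": 3, "price": 4, "linux": 3,
            "display": 3, "weight": 2}
# The three changes from the re-run prompt: repairability and Linux drop
# to 1, keyboard quality rises to 5.
UPDATED = dict(ORIGINAL, repair=1, linux=1, keyboard=5)

def totals(weights):
    """Total weighted score per option under a given weight set."""
    return {name: sum(s[c] * w for c, w in weights.items())
            for name, s in SCORES.items()}

before, after = totals(ORIGINAL), totals(UPDATED)
# before: ThinkPad leads 251 to 250. after: the MacBook leads 250 to 247.
```

Three small weight changes flip the winner. That is exactly the sensitivity the iterative loop is designed to surface.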
Making It Reusable: The Template File
You will make this decision more than once. Save the prompt as a reusable template:
# decision-template.md
You are a structured decision analyst.
## Decision
{{DECISION_DESCRIPTION}}
## Options
{{OPTIONS_LIST}}
## Criteria (weight 1-5)
{{CRITERIA_WITH_WEIGHTS}}
## Task
For each (option, criterion) pair:
1. State relevant facts with specifics
2. Score 1-10
3. One sentence justification
Then:
4. Weighted score table (score x weight)
5. Final ranking by total weighted score
6. 2-3 sentence recommendation acknowledging tradeoffs
## Rules
- Cite specifics. No vague qualitative assessments.
- If data is unavailable, state "No reliable data found."
- Recommendation must acknowledge second-place strengths.
Now any decision is a ten-minute exercise:
# Fill in the template, then:
claude -p "$(cat decision-criteria.md)"
Choosing a SaaS tool for your team? Same template, different criteria -- focus on API quality, pricing per seat, data export options, SSO support. Deciding whether to rent or buy? Criteria become monthly cost, flexibility, tax implications, maintenance burden, neighborhood. The structure is always the same. Only the content changes.
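If you fill the template by hand, a text editor is all you need. If you find yourself doing it often, substitution is a one-liner per placeholder. A minimal sketch, assuming the `{{PLACEHOLDER}}` convention used in the template above (the field names and sample values are illustrative):

```python
TEMPLATE = """\
You are a structured decision analyst.
## Decision
{{DECISION_DESCRIPTION}}
## Options
{{OPTIONS_LIST}}
## Criteria (weight 1-5)
{{CRITERIA_WITH_WEIGHTS}}
"""

def fill_template(template, fields):
    """Replace each {{KEY}} placeholder with its value."""
    for key, value in fields.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = fill_template(TEMPLATE, {
    "DECISION_DESCRIPTION": "Which SaaS tool the team should adopt",
    "OPTIONS_LIST": "- Linear\n- Jira",
    "CRITERIA_WITH_WEIGHTS": "- API quality: 5\n- Price per seat: 4",
})
```

Write the result to `decision-criteria.md` and the one-line `claude` invocation above does the rest.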
Why This Works: The Psychology of Decision Paralysis
There is a reason twelve browser tabs make you more confused instead of less. Psychologists call it the paradox of choice. More options and more information do not lead to better decisions -- they lead to decision avoidance. You keep researching because making a choice feels risky, and researching feels productive. It is not. It is procrastination wearing a lab coat.
The framework breaks the cycle in three ways.
First, it forces you to declare your values before you see the options. This is backwards from how most people decide. Most people look at options first and try to figure out what they want based on what is available. That is like walking into a restaurant and deciding you are hungry for Italian because it is an Italian restaurant. Define the hunger first.
Second, it externalizes the comparison. When the tradeoffs live in your head, they feel overwhelming. When they live in a table with numbers, they feel manageable. The decision did not get simpler -- your view of it did.
Third, the weighted scoring makes tradeoffs explicit. You cannot score one option highest on every criterion. You have to admit that the cheap apartment has a long commute, or that the high-paying job has questionable work-life balance. The numbers force the acknowledgment that vibes-based browsing lets you avoid.
One More Thing: Group Decisions
The framework becomes even more valuable when multiple people need to agree. Instead of arguing in circles during a meeting, each person fills out their own criteria weights independently. Then you run the comparison once for each person's weights and see where the disagreement actually lives.
Usually it is not about the options. It is about the weights. One person values cost at 5 and flexibility at 2. Another values flexibility at 5 and cost at 2. Now you are having the real conversation -- not "which tool is better" but "what does this team actually prioritize." That is a conversation worth having.
claude -p "$(cat <<'EOF'
Run this decision comparison three times with different weight sets.
Show each result and then a summary of where the rankings diverge.
## Decision: Project management tool
## Options: Linear, Jira, Notion
## Weight Set A (Engineering Lead):
- GitHub integration: 5, API quality: 5, speed: 4, price: 2, customization: 3
## Weight Set B (Product Manager):
- Roadmap views: 5, stakeholder reporting: 4, customization: 5, price: 3, speed: 3
## Weight Set C (CEO):
- Price: 5, scalability: 4, vendor stability: 4, speed: 2, integration: 3
For each weight set, score all options on all criteria. Then show where
the three weight sets produce different winners and why.
EOF
)"
The output shows that Linear wins for the engineering lead, Notion wins for the product manager, and Jira wins for the CEO. Now the team knows the disagreement is not about tools -- it is about priorities. That is a forty-minute meeting instead of a four-hour one.
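The mechanics of the multi-stakeholder run are the same weighted sum applied once per weight profile. Here is a minimal sketch of that idea -- every score and weight below is hypothetical, chosen only to illustrate how the same score matrix can produce three different winners:

```python
# Hypothetical scores: same options, same criteria, one score matrix.
SCORES = {
    "Linear": {"integration": 9, "customization": 5, "price": 6,
               "speed": 9, "reporting": 5},
    "Jira": {"integration": 7, "customization": 8, "price": 8,
             "speed": 4, "reporting": 7},
    "Notion": {"integration": 5, "customization": 9, "price": 7,
               "speed": 6, "reporting": 8},
}
# Hypothetical weight profiles, one per stakeholder.
PROFILES = {
    "Engineering Lead": {"integration": 5, "speed": 4, "customization": 2,
                         "price": 2, "reporting": 1},
    "Product Manager": {"reporting": 5, "customization": 5, "speed": 3,
                        "price": 3, "integration": 2},
    "CEO": {"price": 5, "integration": 3, "customization": 2,
            "speed": 2, "reporting": 3},
}

def winner(weights):
    """Option with the highest total weighted score under one profile."""
    totals = {name: sum(s[c] * w for c, w in weights.items())
              for name, s in SCORES.items()}
    return max(totals, key=totals.get)

winners = {who: winner(w) for who, w in PROFILES.items()}
```

With these numbers, each stakeholder gets a different winner from identical scores. The facts never changed; only the weights did, which is why the follow-up conversation is about priorities, not tools.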
The Decision You Already Made
Here is the thing about this framework. By the time you finish defining your criteria and weights, you often already know the answer. The research confirms it. The table quantifies it. The recommendation validates it. But the real decision happened in step one, when you sat down and admitted what you actually care about.
That is the key insight. Decision paralysis is rarely about insufficient information. It is about undefined criteria. You do not need more browser tabs. You need ten minutes of honesty with a blank document and a numbered list.
The AI does not make the decision for you. It builds the scaffolding that lets you see the decision you have already made.
For a complete introduction to AI CLI tools, start with the beginner's guide. If you want to apply this same structured approach to competitive analysis with live web data, see the competitive analysis automation walkthrough.