What You Will Build
Three AI CLI tools. Three git worktrees. One full-stack task management app — backend, frontend, and database layer — scaffolded in parallel in under 30 minutes.
Here is the final result: a Node.js/Express API, a React frontend, and a PostgreSQL schema with migrations and seed data. Each layer was generated by a different AI agent running simultaneously. After merging the three branches, the app runs with docker compose up and serves a working task board at localhost:3000.
This article walks through the exact steps. Every command is copy-pasteable. The workflow generalizes to any full-stack project.
Why Three Tools Instead of One
Each AI CLI tool has a different strength profile. Using one tool for everything means you are paying premium rates for tasks that do not need premium reasoning, or accepting weaker output on tasks that do.
| Tool | Assigned Layer | Why This Match |
|---|---|---|
| Claude Code (Opus 4.6) | Backend API | Strongest at system design, multi-file architecture, error handling chains |
| Gemini CLI (free tier) | Frontend UI | Free tier handles component generation well; high iteration count costs nothing |
| Codex CLI (GPT-5.3-Codex) | Database + Tests | Excels at structured, well-defined tasks: schemas, migrations, seed scripts |
Claude Code is the most capable agent for complex, multi-step work. It is also the most expensive. Allocating it to the backend API — where architectural decisions cascade through the entire app — maximizes the return on that cost.
Gemini CLI's free tier gives you 1,000 model requests per day. Frontend work involves high iteration: generating components, adjusting layouts, tweaking styles. Burning free requests on this work instead of paid Claude Code capacity is the rational choice.
Codex CLI is strong at focused, template-driven tasks. Database schemas, migration files, and seed data have predictable structures. Codex CLI generates these reliably and quickly.
A full comparison of all three tools' capabilities and pricing is available in the 2026 AI CLI Tools Complete Guide.
Prerequisites
Tools to install:
- Claude Code — requires a Claude subscription (Pro $20/month or Max $100+/month). Install: curl -fsSL https://claude.ai/install.sh | bash
- Gemini CLI — free with any Google account. Install: npm install -g @google/gemini-cli
- Codex CLI — requires an OpenAI API key. Install: npm install -g @openai/codex
- Git 2.20+ — for worktree support
- Node.js 18+ and Docker — for running the final app
Time required: approximately 30 minutes for the full scaffold, plus 10-15 minutes for integration and testing.
Step 1: Create the Project and Worktrees
Start with an empty repository.
mkdir taskboard && cd taskboard
git init
echo "node_modules/" > .gitignore
git add .gitignore && git commit -m "init"
Create three worktrees, one per layer.
git worktree add ../taskboard-api feature/api
git worktree add ../taskboard-ui feature/ui
git worktree add ../taskboard-db feature/db
You now have four directories sharing one .git history:
~/taskboard → main branch (orchestration)
~/taskboard-api → feature/api (Claude Code)
~/taskboard-ui → feature/ui (Gemini CLI)
~/taskboard-db → feature/db (Codex CLI)
Each agent gets its own filesystem. No conflicts. No overwrites. For the full explanation of why worktrees are essential for multi-agent work, see the Git Worktree Multi-Agent Setup Guide.
Step 2: Write the Shared Spec
Before launching any agent, create a shared specification file. Every agent needs to agree on the data model and API contract. Without this, you get three layers that do not fit together.
Create SPEC.md in the main branch:
# Taskboard — Shared Specification
## Data Model
- **User**: id (uuid), email (unique), name, created_at
- **Task**: id (uuid), title, description (nullable), status (enum: todo/in_progress/done), priority (enum: low/medium/high), assignee_id (fk → User), created_at, updated_at
## API Endpoints
- GET /api/tasks — list all tasks, supports ?status= and ?assignee_id= filters
- POST /api/tasks — create task (title required, status defaults to "todo")
- PATCH /api/tasks/:id — update task fields
- DELETE /api/tasks/:id — soft delete (set deleted_at)
- GET /api/users — list all users
- POST /api/users — create user
## Conventions
- Backend: Node.js + Express + TypeScript
- Frontend: React 19 + Vite + TypeScript + Tailwind CSS 4
- Database: PostgreSQL 16 with Drizzle ORM
- All API responses: { data: T } on success, { error: string } on failure
- Port: API on 4000, frontend on 3000, DB on 5432
Commit this to main and ensure each worktree can access it:
git add SPEC.md && git commit -m "add shared spec"
Copy or symlink SPEC.md into each worktree so agents can reference it. Alternatively, instruct each agent to read it from the main branch.
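The response convention in the spec maps to a small discriminated union that all three layers can share. A minimal TypeScript sketch:

```typescript
// Shared response envelope matching the SPEC.md convention:
// { data: T } on success, { error: string } on failure.
type ApiResponse<T> = { data: T } | { error: string };

// Type guard the frontend can use to narrow a response.
function isError<T>(res: ApiResponse<T>): res is { error: string } {
  return "error" in res;
}

const ok: ApiResponse<number> = { data: 42 };
const fail: ApiResponse<number> = { error: "task not found" };

console.log(isError(ok));   // false
console.log(isError(fail)); // true
```

Putting a type like this in the spec (or in a shared file each agent reads) removes one common source of post-merge type mismatches.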
Step 3: Launch All Three Agents
Open three terminal panels. If you are using Termdock, drag to create a three-panel layout — one panel per agent. Each panel shows its agent's output in real time, and you can resize panels to focus on whichever agent needs attention.
Panel 1: Claude Code → Backend API
cd ~/taskboard-api
claude
> Read SPEC.md. Scaffold a complete Express + TypeScript backend API
> implementing every endpoint in the spec. Include:
> - src/ directory with routes, controllers, middleware, types
> - Error handling middleware with proper HTTP status codes
> - Input validation using zod
> - CORS configured for localhost:3000
> - Docker-compatible: reads DATABASE_URL from env
> - package.json with scripts: dev, build, start
> Do not implement database queries yet — use placeholder functions
> that return mock data matching the schema in SPEC.md.
> We will integrate the real DB layer after merge.
Claude Code will create the project structure, install dependencies, write the route handlers, add validation schemas, and set up error handling. This is multi-file architectural work — exactly where Claude Code's reasoning depth pays off.
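To make the validation contract concrete: the prompt asks the agent to use zod, but the rules it must enforce for POST /api/tasks boil down to something like this plain type-guard sketch (zod would express the same checks declaratively):

```typescript
// Create-task validation contract from SPEC.md:
// title required, status defaults to "todo".
type TaskStatus = "todo" | "in_progress" | "done";

interface CreateTaskInput {
  title: string;
  description?: string;
  status: TaskStatus;
}

function parseCreateTask(body: unknown): CreateTaskInput {
  const b = (body ?? {}) as Record<string, unknown>;
  if (typeof b.title !== "string" || b.title.trim() === "") {
    throw new Error("title is required"); // surfaces as a 400 via the error middleware
  }
  const status = (b.status ?? "todo") as TaskStatus; // default per spec
  if (!["todo", "in_progress", "done"].includes(status)) {
    throw new Error("invalid status");
  }
  return {
    title: b.title,
    status,
    description: typeof b.description === "string" ? b.description : undefined,
  };
}

console.log(parseCreateTask({ title: "Ship it" }).status); // "todo"
```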
Panel 2: Gemini CLI → Frontend UI
cd ~/taskboard-ui
gemini
> Read SPEC.md. Scaffold a React 19 + Vite + TypeScript frontend for
> the task management app. Include:
> - Vite config with proxy to localhost:4000/api
> - Components: TaskBoard, TaskCard, TaskForm, UserSelect, StatusFilter
> - Tailwind CSS 4 for styling — clean, minimal design
> - API client module using fetch, typed to match the spec endpoints
> - State management with React 19 use() and context
> - package.json with scripts: dev, build, preview
> Focus on a working Kanban-style board with three columns:
> Todo, In Progress, Done. Tasks are draggable between columns.
Gemini CLI handles the high-iteration component work. If a component looks wrong, you iterate for free. The model router sends straightforward component generation to Gemini Flash, keeping requests fast.
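The API client module the prompt asks for is small. The only non-obvious piece is mapping the optional filters onto the ?status= and ?assignee_id= params from the spec; a sketch (the Vite proxy forwards /api to localhost:4000):

```typescript
// Typed filter-to-query-string mapping for GET /api/tasks.
type TaskStatus = "todo" | "in_progress" | "done";

interface TaskFilters {
  status?: TaskStatus;
  assignee_id?: string;
}

function buildQuery(filters: TaskFilters): string {
  const params = new URLSearchParams();
  if (filters.status) params.set("status", filters.status);
  if (filters.assignee_id) params.set("assignee_id", filters.assignee_id);
  const qs = params.toString();
  return qs ? `?${qs}` : "";
}

async function listTasks(filters: TaskFilters = {}): Promise<unknown> {
  const res = await fetch(`/api/tasks${buildQuery(filters)}`);
  return res.json(); // { data: Task[] } per the spec convention
}

console.log(buildQuery({ status: "done", assignee_id: "u1" })); // "?status=done&assignee_id=u1"
```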
Panel 3: Codex CLI → Database + Tests
cd ~/taskboard-db
codex
> Read SPEC.md. Create the database layer:
> - Drizzle ORM schema matching the data model exactly
> - Migration files generated from the schema
> - Seed script that creates 3 users and 10 sample tasks
> - docker-compose.yml for PostgreSQL 16
> - Connection config reading DATABASE_URL from .env
> - Integration tests using vitest: test each API endpoint's
> expected DB behavior (CRUD operations, filters, soft delete)
> Structure: db/ directory with schema.ts, migrate.ts, seed.ts,
> and a tests/ directory.
Codex CLI generates structured, predictable output. Schema definitions, migration files, and test assertions are template-driven tasks where Codex CLI performs reliably.
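For reference, the schema Codex CLI is being asked to produce looks roughly like this. This is a sketch, not the generated output: it assumes drizzle-orm is installed, field names follow SPEC.md, and the priority default is an assumption the spec leaves open.

```typescript
import { pgTable, pgEnum, uuid, text, timestamp } from "drizzle-orm/pg-core";

export const statusEnum = pgEnum("status", ["todo", "in_progress", "done"]);
export const priorityEnum = pgEnum("priority", ["low", "medium", "high"]);

export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  name: text("name").notNull(),
  created_at: timestamp("created_at").defaultNow().notNull(),
});

export const tasks = pgTable("tasks", {
  id: uuid("id").primaryKey().defaultRandom(),
  title: text("title").notNull(),
  description: text("description"), // nullable per spec
  status: statusEnum("status").notNull().default("todo"),
  priority: priorityEnum("priority").notNull().default("medium"), // default assumed
  assignee_id: uuid("assignee_id").references(() => users.id),
  created_at: timestamp("created_at").defaultNow().notNull(),
  updated_at: timestamp("updated_at").defaultNow().notNull(),
  deleted_at: timestamp("deleted_at"), // soft delete per spec
});
```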
Step 4: Monitor and Adjust
With all three agents running simultaneously, your job shifts from writing code to reviewing output. Check each panel periodically.
Common issues to watch for:
- Type mismatches — if an agent invents a field name not in SPEC.md, correct it immediately. The shared spec exists to prevent this, but agents occasionally improvise.
- Port conflicts — verify each layer uses the ports defined in the spec.
- Dependency version mismatches — if both the API and DB layers install Drizzle ORM, ensure they pin the same version.
Most agents complete their scaffold in 5-10 minutes. Claude Code typically takes longest because the backend has more architectural decisions. Gemini CLI often finishes first because component generation is fast.
Step 5: Merge the Three Branches
Once all three agents finish, return to the main branch and merge.
cd ~/taskboard
# Merge the database layer first — it has no dependencies on the others
git merge feature/db --no-ff -m "merge: database schema, migrations, seed, tests"
# Merge the backend API
git merge feature/api --no-ff -m "merge: Express API with routes and validation"
# Merge the frontend
git merge feature/ui --no-ff -m "merge: React frontend with Kanban board"
If you followed the spec and each agent worked in its own directory (db/, src/, and a frontend root), these merges should be conflict-free. The layers occupy different file paths.
If conflicts occur: they are almost always in shared config files like package.json or tsconfig.json. Resolve manually — combine the dependencies from all three branches and keep the strictest TypeScript config.
Step 6: Integration Wiring
The three layers are now in one branch, but they are not yet connected. The backend has mock data functions, and the frontend has API calls pointing to localhost:4000. You need to wire the real database into the backend.
This is a focused task for Claude Code:
claude
> The backend in src/ uses placeholder functions for database queries.
> The database layer in db/ has the Drizzle schema and connection setup.
> Replace all placeholder functions with real Drizzle queries.
> Update the backend to import from db/schema.ts.
> Ensure the docker-compose.yml starts both the API and the database.
> Add a top-level package.json with a "dev" script that starts
> API + frontend concurrently.
Claude Code reads both layers, understands the contract between them, and wires them together. This is exactly the kind of multi-file, cross-boundary work that justifies using the strongest agent.
Step 7: Run and Verify
docker compose up -d db # Start PostgreSQL
npm run db:migrate # Run migrations
npm run db:seed # Load sample data
npm run dev # Start API + frontend concurrently
Open localhost:3000. You should see a Kanban board with three columns and 10 sample tasks. Drag a task from "Todo" to "In Progress" — the PATCH request hits the API, which updates the database, and the UI reflects the change.
Run the integration tests:
npm run test
The tests generated by Codex CLI validate each endpoint against real database state.
Workflow Summary
| Phase | Time | What Happens |
|---|---|---|
| Setup (repo + worktrees + spec) | 5 min | Create repo, 3 worktrees, shared SPEC.md |
| Parallel scaffold (3 agents) | 10 min | All 3 agents generate their layers simultaneously |
| Monitor + minor corrections | 5 min | Fix type mismatches, verify output |
| Merge 3 branches | 3 min | Sequential merge, resolve any config conflicts |
| Integration wiring | 7 min | Claude Code connects DB to API, adds docker-compose |
| Verify | 3 min | Start app, run tests |
| Total | ~33 min | Full-stack app from zero to working |
Without parallelization, the same work takes 60-90 minutes sequentially. The time savings compound with project complexity — a larger app with more layers benefits even more from parallel scaffolding.
The workflow outlined here follows the same parallel development pattern described in the dual-tool strategy guide, extended to three tools.
Adapting This to Your Own Projects
The three-tool split generalizes to any full-stack architecture. The principle is the same: assign each tool to the layer that matches its strength.
Different tech stacks: Replace Express with FastAPI, React with Svelte, PostgreSQL with MongoDB. The worktree structure and parallel workflow remain identical.
More layers: Add a fourth worktree for infrastructure (Terraform, CI/CD). Assign it to whichever agent handles template-driven config files best.
Fewer tools: If you only have Claude Code and Gemini CLI, assign the database layer to whichever agent finishes its primary task first. The dual-tool strategy covers this two-agent approach in detail.
Shared spec is mandatory. Without SPEC.md, your agents produce three layers with incompatible data models. The 5 minutes you spend writing the spec saves 30 minutes of post-merge debugging.
Troubleshooting
Gemini CLI hits rate limits mid-scaffold. The free tier allows 60 requests/minute and 1,000/day. A complex scaffold with many iterations can burn through requests quickly. Solution: use focused prompts that minimize agent exploration. If you hit the daily cap, the scaffold can be finished manually or with Claude Code. Note: starting March 25, 2026, Pro models on the free tier are being restricted — see the free AI CLI tools comparison for current limits.
Codex CLI sandbox blocks network access. Codex CLI runs in a sandboxed environment by default. If it needs to install npm packages, ensure your Codex configuration allows network access during setup, or pre-install dependencies before launching the agent.
Merge conflicts in package.json. This is the most common issue. Both the API and DB layers may create their own package.json. Solution: structure the project as a monorepo from the start — give each layer its own directory with its own package.json, and add a root-level package.json for shared scripts.
Agent ignores SPEC.md. Some agents prioritize their own conventions over the spec. If an agent generates a status field as a string instead of the specified enum, correct it in the prompt: "Use the exact field types from SPEC.md. Status must be an enum: todo, in_progress, done."
Running three agents in three terminal panels is the core of this workflow. The ability to see all three outputs simultaneously, resize panels to focus on the one that needs attention, and recover sessions if a terminal crashes — that is the difference between orchestrating agents effectively and constantly losing context.