Boris Cherny: 5 parallel Claudes + 5–10 browser sessions (Jan 2026) — concurrency as the multiplier
Boris Cherny's Claude Code Setup #2: Five parallel Claudes
Part 2 of 4 — breaking down the Claude Code creator's actual workflow with FRE|Nxt Labs production commentary.
The thread
"I run 5 Claudes in parallel in my terminal. I number my tabs 1-5, and use system notifications to know when a Claude needs input."
— Boris Cherny, January 2026
Plus 5–10 more Claude sessions in claude.ai/code in the browser. Plus mobile. Total concurrent workflow capacity: 10–15 Claude sessions at once.
From the opening post of his thread: "My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much."
What we heard
Two things stand out from Cherny's setup:
1. It's vanilla. The creator of Claude Code doesn't heavily customize it. No exotic hooks, no deep plugin stack. The leverage comes from how he uses the tool, not from configuring the tool. This is a rebuke to every team that has built a 500-line custom integration before trying the basic workflow.
2. Concurrency is the multiplier. Five terminals + 5–10 browser sessions is not a productivity gimmick — it's the entire insight. A single agent session waiting for input is dead time. Five sessions means at any moment, at least one is ready for review. The iTerm2 notification pattern is the UX primitive that makes it work; without notifications, you're just manually polling tabs.
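As a concrete stand-in for that notification primitive: the sketch below wraps any lane's long-running command so it rings the terminal bell and names the lane when it finishes. It assumes a POSIX shell and the ASCII bell character; `notify_done` and the lane convention are our illustration, not part of Cherny's setup or of iTerm2 itself.

```shell
# notify_done: run a command, then ring the terminal bell and report
# which lane finished. A portable stand-in for iTerm2's native
# notifications; the function name is ours, purely illustrative.
notify_done() {
  lane="$1"; shift
  "$@"; status=$?                  # run the lane's actual command
  printf '\a'                      # ASCII bell: audible "needs input" ping
  printf '[lane %s] finished (exit %d)\n' "$lane" "$status"
  return "$status"
}

# Example: lane 3 runs a slow task; you hear the bell on completion
# instead of manually polling the tab.
notify_done 3 sleep 1
```

The exit status passes through unchanged, so the wrapper composes with anything a lane already runs.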
What we actually do with this
We adapted Cherny's setup into what we call the 5-lane pattern, adjusted for consulting engagements:
| Lane | Purpose | Runtime |
|---|---|---|
| 1 | Current feature implementation | Claude Code terminal, main checkout |
| 2 | Parallel feature / bug fix | Claude Code terminal, separate git worktree |
| 3 | Test generation + eval updates | Claude Code terminal, separate worktree |
| 4 | Research / exploratory spike | claude.ai/code in browser |
| 5 | Slow background task (refactor, migration, doc generation) | Claude Code terminal, long-running |
Rules:
- Separate worktrees per lane, not branches — concurrent git state is the trap Cherny's setup avoids.
- Named terminal tabs (1–5) with system notifications — so you always know which lane needs input, not which tab was most recent.
- Opus for coding lanes, Haiku for test generation and research. Model choice per lane, not per session.
- Review gate between lanes, not parallel commits — only one lane can be in the process of committing at a time. Prevents merge conflicts.
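The worktree rule is the load-bearing one, so here is a minimal sketch of the lane layout. It builds a scratch repo so it runs standalone; the project name, lane paths, and branch names are placeholders, not Cherny's literal setup.

```shell
#!/usr/bin/env sh
set -eu

# Scratch repo so the sketch runs standalone; in practice you would
# start from your existing main checkout instead.
cd "$(mktemp -d)"
git init -q myproject && cd myproject
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# Lane 1 is the main checkout itself; lane 4 lives in the browser.
# Lanes 2, 3, and 5 each get an isolated worktree so concurrent
# sessions never fight over index, HEAD, or uncommitted files.
git worktree add ../myproject-lane2 -b lane2-bugfix
git worktree add ../myproject-lane3 -b lane3-tests
git worktree add ../myproject-lane5 -b lane5-refactor

# Four checkouts, one shared object store: each lane commits
# independently, and merges go through the single review gate.
git worktree list
```

Each worktree gets its own branch, index, and working files, which is exactly the isolation the rule asks for.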
Applied: 5-lane on InterviewLM
On the InterviewLM build week, we ran five concurrent lanes for three days straight:
- Lane 1: Interviewer persona agent implementation
- Lane 2: Evaluation agent + golden-set eval harness
- Lane 3: API surface + session routing
- Lane 4: Research on LangGraph checkpoint semantics (browser)
- Lane 5: Prompt caching optimization (long-running profile-and-iterate)
Without the pattern: linear, maybe 2× productivity from Claude Code. With the pattern: 5× on implementation, 3× on overall engagement throughput (limited by review and merge capacity — see entry #1's bottleneck diagnostic).
The cost: every engineer on the team had to rewire their mental model from "sequential focus" to "parallel supervision." Not every engineer adapts. The ones who did produced roughly 3× more shipped code per week than the ones who stuck with a single-session workflow.
The one thing to steal from this
Set up a separate git worktree for Claude Code this week (`git worktree add ../myproject-lane2 main`). Start two parallel sessions — one in the main checkout, one in the worktree. Notice what work becomes viable that wasn't before: running tests while coding, refactoring while feature-building, researching while implementing. Two sessions, not five, is the right starting experiment. Scale up from there only if you hit review capacity faster than you hit agent capacity.
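Spelled out end to end, with teardown, the starter experiment looks like this. It uses a scratch repo so it runs standalone; `myproject` and the lane path are placeholders, and in your real project you would run only the worktree commands from your existing checkout.

```shell
#!/usr/bin/env sh
set -eu

# Scratch repo so the sketch runs standalone; skip this setup in a
# real project and run the worktree commands from your checkout.
cd "$(mktemp -d)"
git init -q myproject && cd myproject
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# One extra checkout is enough for the two-session experiment:
# tab 1 stays here, tab 2 would cd into ../myproject-lane2.
git worktree add ../myproject-lane2

git worktree list        # two checkouts, shared history

# Teardown once the experiment is over; commits made in the lane
# survive on its branch.
git worktree remove ../myproject-lane2
```

With no commit-ish given, `git worktree add` creates a branch named after the path, so the lane's work stays addressable after the worktree is removed.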
Next in this series
#3 — CLAUDE.md as a living postmortem. Cherny's team adds to the file every time Claude does something wrong. Why this converts "AI tool" into "self-correcting team memory."
Quick answers
What do I get from this cable?
You get a dated field note that explains how we handle this AI-industry workflow in real Claude Code projects.
How much time should I budget?
Typical effort is about 6 minutes; the cable is marked intermediate.
How do I install the artifact?
This cable is guidance-only and does not ship an installable artifact.
How fresh is the guidance?
The cable is explicitly last verified on 2026-04-17, and includes source links for traceability.
More from @frenxt
Anthropic's Responsible Scaling Policy (Sep 2023) — safety as operating procedure
*A five-part series tracing Anthropic's public thinking through Dario Amodei's writing and the company's model spec — one foundational document per entry, each with FRE|Nxt Labs l…
Anthropic's "brilliant friend" spec — the product voice that defines Claude
*Part 2 of 5 — tracing Anthropic's public thinking with FRE|Nxt Labs production commentary.*
Dario Amodei's Machines of Loving Grace (Oct 2024) — planning against the upside case
*Part 3 of 5 — tracing Anthropic's public thinking with FRE|Nxt Labs production commentary.*