Anthropic's "brilliant friend" spec — the product voice that defines Claude
Inside Anthropic with Dario Amodei #2: The "brilliant friend" spec
Part 2 of 5 — tracing Anthropic's public thinking with FRE|Nxt Labs production commentary.
The source
"Claude can be like a brilliant friend who also has the knowledge of a doctor, lawyer, and financial advisor, who will speak frankly and from a place of genuine care and treat users like intelligent adults capable of deciding what is good for them."
Dario Amodei has consistently reinforced this framing in public appearances and in Machines of Loving Grace (entry #3 in this series). The constitution is where it became a design constraint, not just a philosophy.
What we heard
This is a product spec written as a value statement. It names the failure mode of nearly every AI product on the market: the assistant that hedges, disclaims, and refuses to give a direct answer.
The "brilliant friend" bar is: be as useful to this specific person in this specific moment as a deeply knowledgeable person who actually knows them would be. Notice what that test rules out — the generic answer, the five-bullet framework when someone asked a direct question, the liability disclaimer no one asked for.
If entry #1 (the Responsible Scaling Policy) was the safety floor, this is the product ceiling. The RSP says don't hurt people. The constitution says actually help them.
What we actually do with this
When we build AI advisor products for clients, we run every sampled response through a brilliant friend test:
- Would a knowledgeable friend say this, or would they say something more specific?
- Does the response contain any sentence that only exists to protect the AI from liability?
- Does the response assume the user needs hand-holding they didn't ask for?
If the answer to the second or third question is yes, we rewrite the system prompt. Concretely, we audit for:
- "I'm not a licensed professional" boilerplate in responses where it wasn't asked for
- "Please consult a human expert" on clear technical questions
- Bullet lists that expand a direct answer into a five-point framework
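A minimal sketch of how that audit could be scripted. The phrase list and function name are illustrative, not our production tooling; a real list is tuned per product.

```python
import re

# Illustrative hedge patterns -- tune this list for your own product.
HEDGE_PATTERNS = [
    r"i'?m not a licensed professional",
    r"please consult a (?:human|qualified) (?:expert|professional)",
    r"i (?:can't|cannot) provide (?:legal|medical|financial) advice",
]

def audit_response(text: str) -> list[str]:
    """Return the hedge phrases found in one sampled response."""
    lowered = text.lower()
    hits = []
    for pattern in HEDGE_PATTERNS:
        match = re.search(pattern, lowered)
        if match:
            hits.append(match.group(0))
    return hits

flags = audit_response("Please consult a human expert before changing your retry policy.")
print(flags)  # → ['please consult a human expert']
```

Any response that returns a non-empty list goes back into the system-prompt review queue.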
On the FRE|Nxt Labs AI advisor
Our /advisor endpoint on this site is a live experiment in this approach. The system prompt is written to answer the specific AI architecture question the user actually asked, not to explain what RAG is to someone who has already said they're using LangGraph.
We track three signals: % of responses under 150 words (target: 60%+), % of responses containing hedge phrases (target: under 5%), and % of sessions that convert to a discovery call.
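The first two signals can be computed offline from sampled transcripts; the conversion signal comes from session analytics, not response text. A sketch under that assumption (function name and sample strings are hypothetical):

```python
def response_metrics(responses: list[str], hedges: list[str]) -> dict[str, float]:
    """Percent of responses under 150 words and percent containing a hedge phrase."""
    n = len(responses)
    under_150 = sum(len(r.split()) < 150 for r in responses)
    hedged = sum(any(h in r.lower() for h in hedges) for r in responses)
    return {
        "pct_under_150_words": 100 * under_150 / n,
        "pct_with_hedges": 100 * hedged / n,
    }

sample = [
    "Use pgvector; you don't need a separate vector database at that scale.",
    "I'm not a licensed professional, but here are five considerations...",
]
print(response_metrics(sample, ["i'm not a licensed professional"]))
# → {'pct_under_150_words': 100.0, 'pct_with_hedges': 50.0}
```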
The brilliant friend doesn't pad. The brilliant friend answers.
The one thing to steal from this
Run 20 responses from your AI product through this filter: remove every sentence that wouldn't appear in a text message from a knowledgeable friend. What's left should be most of the response. If you lose more than 30% of the content, your system prompt is optimized for legal cover, not for the user.
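Once you have a filtered version of each response, the 30% threshold is mechanical to check. A sketch assuming a simple word-count measure of "content lost" (the example strings are invented):

```python
def content_lost(original: str, filtered: str) -> float:
    """Percent of words removed by the brilliant-friend filter."""
    before = len(original.split())
    after = len(filtered.split())
    return 100 * (before - after) / before

original = ("Kubernetes would work here. I'm not an expert on your stack, "
            "so please verify with your platform team. That said, a single "
            "ECS service is simpler for one container.")
filtered = ("Kubernetes would work here. That said, a single ECS service "
            "is simpler for one container.")
print(f"{content_lost(original, filtered):.0f}% removed")  # → 48% removed
```

A result over 30% flags the system prompt for rewrite.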
Next in this series
#3 — Machines of Loving Grace (Oct 2024). Dario's 14,000-word utopian essay: the "compressed 21st century," a "country of geniuses in a datacenter," and why the upside case is specific enough to plan against.
Quick answers
What do I get from this cable?
You get a dated field note that explains how we handle this AI-industry workflow in real Claude Code projects.
How much time should I budget?
Typical effort is about five minutes, and the cable is marked beginner.
How do I install the artifact?
This cable is guidance-only and does not ship an installable artifact.
How fresh is the guidance?
The cable is explicitly last verified on 2026-04-17, and includes source links for traceability.
More from @frenxt
Anthropic's Responsible Scaling Policy (Sep 2023) — safety as operating procedure
*A five-part series tracing Anthropic's public thinking through Dario Amodei's writing and the company's model spec — one foundational document per entry, each with FRE|Nxt Labs l…
Dario Amodei's Machines of Loving Grace (Oct 2024) — planning against the upside case
*Part 3 of 5 — tracing Anthropic's public thinking with FRE|Nxt Labs production commentary.*
Dario Amodei's Urgency of Interpretability (April 2025) — the unsolved problem in production
*Part 4 of 5 — tracing Anthropic's public thinking with FRE|Nxt Labs production commentary.*