2026-01-23 · Alireza Rezvani

Your CLAUDE.md Is Probably Wrong: 7 Mistakes Boris Cherny Never Makes

Tags: Claude Code, AI, Software Development, Developer Productivity, Anthropic

Boris Cherny's file is 2.5k tokens. Mine was 15k. After 3 weeks studying his workflow, I found the patterns most developers miss — and how to fix them fast.

My CLAUDE.md was 847 lines. I was proud of it — every edge case documented, every convention spelled out, every team preference captured in meticulous detail.

Then I saw Boris Cherny's file: 2.5k tokens.

That's roughly 100 lines. His team ships Claude Code itself. Mine was 8x longer and producing worse results.

Avoid Mistakes With CLAUDE.md | Image generated with Gemini 3 Pro

Note: AI tools assisted with research and editing. The testing, restructuring, and real-world examples are from my actual workflow.

For three weeks, I've been studying what Boris shared about his workflow. Not the flashy parts everyone covered — the 5 parallel sessions, the teleporting between terminals. The boring parts. The parts about how his team actually structures their CLAUDE.md.

What I found: most of us are making the same mistakes. And the fixes aren't complicated.

The Workflow That Broke the Internet

In early January 2026, Boris Cherny — Staff Engineer at Anthropic and creator of Claude Code — posted a thread on X about his development setup. What started as a casual terminal walkthrough became the most dissected developer workflow of the year.

"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang. Kyle McNease called it Anthropic's "ChatGPT moment."

But here's what most coverage missed.

The viral parts were the parallel sessions and the system notifications. The important part was one line buried in the thread:

"My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much."

His team's CLAUDE.md? 2.5k tokens. Checked into git. The whole team contributes. That's it.

No elaborate configurations. No 50-page manifesto. Just a focused file that earns every line.

Most of us are doing exactly what he avoids.

The 7 Anti-Patterns

1. The Context Stuffing Trap

What most devs do: Cram everything into CLAUDE.md. Every edge case, every historical decision, every "just in case" instruction. I've seen files hit 10,000+ tokens.

What Boris does: 2.5k tokens total. Concise. Focused.

Why it matters: Context rot is real. According to the NoLiMa benchmark, at 32,000 tokens, 11 out of 12 tested models dropped below 50% accuracy on recall tasks. Your CLAUDE.md loads on every conversation start. A bloated file consumes tokens before you even ask a question.

The fix is embarrassingly simple:

Bad (verbose): "When implementing authentication, always ensure you follow security best practices including input validation, proper error handling, secure token storage, and following our established patterns in the auth/ directory..."

Good (concise): "Auth: validate inputs, handle errors securely, follow auth/ patterns"

Same information. 80% fewer tokens.

I reduced my CLAUDE.md from 847 lines to 127. Token overhead dropped from ~15,000 to ~2,400. Sessions started faster. Claude stopped "forgetting" instructions mid-conversation.

2. The Static Memory Problem

What most devs do: Create CLAUDE.md once during project setup. Never touch it again.

What Boris does: Uses the @.claude tag on coworkers' PRs to add learnings. The file evolves with every code review.

Why it matters: Your codebase changes. Static instructions become wrong. Worse — the same mistakes repeat because Claude never learns from them.

Last month, my team had the same TypeScript strict mode violation flagged in four separate PRs. Same fix suggested by reviewers each time. That pattern should have been in CLAUDE.md after the first occurrence.

The fix:

  • Add a "Learnings" section that grows from PR reviews
  • Use the # shortcut during sessions to add instant updates (type # Always use named exports for utilities and it's added to memory)
  • Schedule monthly CLAUDE.md audits — delete what's stale, add what's missing

Your CLAUDE.md should be a living document, not a time capsule.
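For illustration, a "Learnings" section might look like this. The entries are hypothetical examples drawn from the situations above (the repeated strict-mode violation, the `#` shortcut for named exports); yours will come from your own PR reviews:

```markdown
## Learnings
<!-- Grows from PR reviews. Audit monthly; delete stale entries. -->
- TypeScript strict mode: no implicit `any` in test fixtures
- Always use named exports for utilities
- New patterns go here via the `#` shortcut or PR review notes
```

Each line should trace back to a real mistake the team saw more than once; if an entry never prevents a repeat, it gets deleted at the next audit.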

3. The Solo Configuration

What most devs do: Keep CLAUDE.md as a personal file. Maybe .gitignore it.

What Boris does: "Each team at Anthropic maintains a CLAUDE.md in git."

Why it matters: Different team members get inconsistent Claude behavior. Onboarding new devs means rebuilding context from scratch. Best practices discovered by one person don't propagate.

I watched a junior engineer spend two hours debugging a test setup that three seniors already knew was broken. The workaround existed — in someone's personal Claude configuration. Not shared. Not discoverable.

The fix:

project-root/
├── CLAUDE.md           # Shared, checked into git
├── CLAUDE.local.md     # Personal preferences, .gitignored
└── .claude/
    └── commands/       # Team slash commands

Establish a team ritual: updating CLAUDE.md is part of the PR process. Found a gotcha? Document it. Fixed a recurring issue? Add the pattern.
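To wire up the layout above, the only extra plumbing is keeping the personal file out of version control. A minimal `.gitignore` fragment (assuming the file names shown in the tree):

```
# .gitignore
CLAUDE.local.md
```

`CLAUDE.md` and `.claude/commands/` stay tracked, so every clone inherits the shared context automatically.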

4. The Plan Mode Skip

What most devs do: Jump straight to auto-accept edits. Let Claude start writing code immediately.

What Boris does: "If my goal is to write a Pull Request, I will use Plan mode, and go back and forth with Claude until I like its plan. From there, I switch into auto-accept edits mode and Claude can usually 1-shot it."

His exact words: "A good plan is really important!"

Why it matters: Auto-accept without planning means burning context on wrong directions. You end up with 15 files changed, realize the architecture is wrong, and have to start over.

Planning catches mistakes when they're cheap to fix.

The fix:

  • Default to Plan mode (Shift+Tab twice)
  • Iterate on the plan until it matches your mental model
  • Only then switch to auto-accept
  • Add planning instructions to CLAUDE.md:
## Workflow
- Start complex tasks in Plan mode
- Get plan approval before implementation
- Break large changes into reviewable chunks

5. The Missing Verification Loop

What most devs do: Let Claude write code, eyeball the output, ship it.

What Boris does: "Probably the most important thing to get great results out of Claude Code: give Claude a way to verify its work."

He's not exaggerating. Claude Code itself is tested by Claude — using browser automation to test the UI, iterating until the code works and the UX feels right.

Why it matters: Boris claims verification improves final quality "2–3x." Without it, Claude guesses whether code works. With it, Claude proves the code works.

The fix — add verification requirements to CLAUDE.md:

## Verification Requirements
- Run `npm test` after code changes
- Run `npm run typecheck` before marking complete
- For API changes, test with `curl` or Postman
- For UI changes, verify in browser before committing

Then Claude will actually run these checks. Not because you told it to in the prompt — because it's baked into project context.

6. The Dangerous Permissions Shortcut

What most devs do: Use --dangerously-skip-permissions because the permission prompts are annoying.

What Boris does: Uses /permissions to pre-allow common safe commands, shared across the team.

Why it matters: Skip-permissions removes all guardrails. One bad command and you're debugging why your database got dropped. Boris treats permissions as a team asset — shared, reviewable, versioned.

His framing: "Design boundaries you won't regret crossing automatically."

The fix: Instead of --dangerously-skip-permissions, use /permissions to allow:

  • npm run test:*
  • npm run build:*
  • git commit:*
  • git push:*
  • bun run format

Pre-allow the safe stuff. Keep the guardrails for everything else. Only use dangerous mode in sandboxed environments for long-running autonomous tasks.
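One way to make that allow-list a shared team asset is to check it into the project's `.claude/settings.json`. A sketch of what that could look like — verify the exact schema against your Claude Code version's settings documentation:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(npm run build:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)",
      "Bash(bun run format)"
    ]
  }
}
```

Because the file lives in git, permission changes go through code review like everything else, which is exactly the "shared, reviewable, versioned" framing above.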

7. The Format Drift

What most devs do: No automatic formatting. Inconsistent code style across AI-generated files. CI failures from linting issues.

What Boris does: PostToolUse hook to auto-format after every edit.

Why it matters: Claude's code is usually well-formatted, but inconsistencies creep in. Manual formatting interrupts flow. And nothing kills momentum like a CI failure on a formatting rule.

The fix — add a PostToolUse hook:

{
  "hooks": {
    "PostToolUse": [{
      "matcher": "Write|Edit",
      "hooks": [{
        "type": "command",
        "command": "npm run format || true"
      }]
    }]
  }
}

Now every file Claude touches gets auto-formatted. The `|| true` prevents format errors from blocking the session.
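If running the whole repo's formatter after every edit is slow, a variant can format only the touched file. Claude Code hooks receive the tool payload as JSON on stdin, so something along these lines should work — this assumes `jq` and Prettier are available, and the payload field name (`tool_input.file_path`) should be checked against your version's hooks documentation:

```json
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "Write|Edit",
      "hooks": [{
        "type": "command",
        "command": "jq -r '.tool_input.file_path' | xargs npx prettier --write || true"
      }]
    }]
  }
}
```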

What I Changed

Here's my CLAUDE.md transformation:

Before (excerpt):

## Authentication Guidelines
When working on authentication-related code, please ensure that you follow our established security practices. This includes but is not limited to: validating all user inputs before processing, implementing proper error handling that doesn't leak sensitive information, using secure token storage mechanisms as defined in our security documentation located in /docs/security/...
[continued for 40 more lines on auth alone]

After (excerpt):

## Auth
- Validate inputs, sanitize outputs
- Errors: no sensitive data in messages
- Tokens: use /lib/auth/tokenStore
- See /docs/security for edge cases

Same guidance. 90% fewer tokens.

The Real Productivity Gain

I want to be honest about what changed and what didn't.

What actually improved:

  • Consistency across the team
  • Fewer repeated mistakes
  • Faster onboarding (new devs inherit the context)
  • Less "why did Claude do that?" debugging

What didn't change much:

  • Raw coding speed (Claude was already fast)
  • Quality of individual outputs (the model matters more than config)

The real value: compound learning.

Each PR makes CLAUDE.md smarter. Each team member's discovery becomes shared knowledge. Over weeks, the gap between "Claude understands our codebase" and "Claude is guessing" widens.

Boris's workflow isn't magic. It's discipline.

The file is small because every line earns its place. The team contributes because the system rewards contribution. The verification loop exists because shipping broken code is expensive.

None of this is hard. It's just intentional.


Author: Alireza Rezvani (Reza) is a CTO building AI development systems for engineering teams.