
Claude Code /context: What's Eating Your Context Window?

What Is /context

After using Claude Code for a while, you've probably run into situations like these:

  • Mid-conversation, you suddenly get a warning that context is running low
  • After a few large file reads, Claude’s response quality noticeably drops
  • You have no idea what’s actually consuming your context space

/context is the “dashboard” for your context window. It uses a colored grid and detailed category breakdowns to show you what’s occupying your context, how much space each category takes, how much remains — and how to optimize.

How to Use It

In Claude Code interactive mode, type:

/context

No arguments needed. It immediately displays a complete analysis of your current context.

What You’ll See

Colored Grid

This is a visual representation of context usage:

Symbol       Meaning
(colored)    Used space (category >= 70% of its allocation)
(colored)    Used space (category < 70% of its allocation)
(colored)    Autocompact buffer
(dim)        Remaining free space

Grid size adapts to your terminal width and model context size:

  • 200K models: 10x10 grid (5x5 on narrow screens)
  • 1M models: 20x10 grid (5x10 on narrow screens)
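
The sizing rules above can be expressed as a small function. This is an illustrative sketch of the described behavior, not Claude Code's actual implementation:

```python
def grid_size(context_tokens: int, narrow_terminal: bool) -> tuple[int, int]:
    """Illustrative (cols, rows) for the /context usage grid.

    Mirrors the rules described in the article; the real Claude Code
    implementation is internal and may differ.
    """
    if context_tokens >= 1_000_000:                      # 1M models
        return (5, 10) if narrow_terminal else (20, 10)
    return (5, 5) if narrow_terminal else (10, 10)       # 200K models
```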

Category Breakdown

To the right of the grid is a detailed legend showing token counts and percentages for each category:

Category            Description
System Prompt       Base system instructions
System Tools        Built-in tool definitions (Bash, Edit, Read, etc.)
MCP Tools           MCP server tool definitions
Custom Agents       Custom agent definitions
Memory Files        CLAUDE.md and memory files
Skills              Loaded skill definitions
Messages            Conversation messages (user input, Claude replies, tool calls and results)
Free Space          Remaining available space
Autocompact Buffer  Reserved buffer for auto-compaction

Detailed Information

Below the statistics, you’ll see specific contents for each category:

  • MCP Tools: How many tools each MCP server has loaded vs. deferred
  • Custom Agents: Grouped by source (Project/User/Managed/Plugin/Built-in)
  • Memory Files: Each CLAUDE.md file’s path and token usage
  • Skills: Loaded skills listed

Each detail item includes a navigation hint such as /mcp, /agents, or /memory, so you can jump directly to the relevant management command.

Optimization Suggestions

/context also provides targeted optimization suggestions based on the analysis. For example:

  • Context over 80% full: “Use /compact now to control what gets kept”
  • Large tool call results: “Bash/Read/Grep returned a lot of content — consider narrowing your query”
  • Large memory files: “CLAUDE.md files are taking up significant space — consider trimming”
  • Autocompact disabled: “Enable it in /config or use /compact manually”

Context Window Fundamentals

Window Size

Mode      Size
Default   200,000 Tokens
1M Mode   1,000,000 Tokens (available for Sonnet 4.6 / Opus 4.6)

1M mode requires model support. You can enable it by adding a [1m] suffix to the model name (e.g., via /model).

What’s Inside the Context

A typical Claude Code session’s context consists of these parts:

┌─────────────────────────────────┐
│  System Prompt                  │  Fixed overhead
│  Tool Definitions               │  Fixed overhead
│  Memory Files (CLAUDE.md, etc.) │  Semi-fixed
│  Skills / Agents                │  Semi-fixed
├─────────────────────────────────┤
│  Message 1 (user input)         │
│  Message 2 (Claude reply)       │
│  Message 3 (tool call + result) │  Grows dynamically
│  Message 4 (user follow-up)     │
│  ...                            │
├─────────────────────────────────┤
│  Autocompact Buffer             │  Reserved 13,000 Tokens
│  Free Space                     │
└─────────────────────────────────┘

“Fixed overhead” exists in every conversation as baseline cost. The “dynamic” portion keeps growing as the conversation progresses. When it approaches the limit, compaction is needed.
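
The arithmetic behind that picture can be sketched in a few lines. The category names match /context's breakdown, but the token numbers here are made-up examples:

```python
# Hypothetical accounting sketch: the real numbers come from your session;
# these values are invented for illustration only.
WINDOW = 200_000              # default context window
AUTOCOMPACT_BUFFER = 13_000   # reserved for auto-compaction

usage = {
    "system_prompt": 3_000,     # fixed overhead
    "tool_definitions": 15_000, # fixed overhead
    "memory_files": 2_000,      # semi-fixed (CLAUDE.md)
    "messages": 40_000,         # grows as the conversation continues
}

used = sum(usage.values())
free = WINDOW - AUTOCOMPACT_BUFFER - used
print(f"used {used:,} / free {free:,} of {WINDOW:,}")
```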

What Consumes the Most Context

Based on real-world usage, the biggest context consumers are:

  1. Large file reads — reading a 1000-line file can consume thousands of tokens in one go
  2. Bash command output — npm install, git log, and build output tend to be very long
  3. Multiple tool call rounds — each tool call (input + output) accumulates
  4. MCP tool definitions — if you have many MCP servers loaded, tool definitions alone can take significant space
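
To build intuition for point 1, a common back-of-the-envelope heuristic is roughly 4 characters per token for English text and code. This is not Claude's actual tokenizer, so treat the result as an order-of-magnitude estimate:

```python
def rough_tokens(text: str) -> int:
    """Very rough token estimate (~4 chars per token for English/code).

    A common back-of-the-envelope heuristic, not Claude's actual
    tokenizer; real counts can differ substantially.
    """
    return max(1, len(text) // 4)

# A 1000-line file at ~60 characters per line:
file_text = ("x" * 60 + "\n") * 1000
print(rough_tokens(file_text))  # → 15250, consumed by a single Read
```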

How /context Relates to /compact

/context and /compact work as a pair:

Command    Purpose
/context   Diagnose — see what's consuming your context
/compact   Treat — compress conversation history, free up space

Typical workflow:

  1. Notice Claude’s responses getting slower or lower quality
  2. Run /context to check context usage
  3. If it’s nearly full, run /compact to compress
  4. Run /context again to confirm how much space was freed

Auto-Compaction

If you’ve enabled auto-compaction in /config, Claude Code automatically triggers compaction when context approaches its limit — no manual action needed.

The auto-compact trigger threshold is: effective context window - 13,000 Tokens.

The “Autocompact Buffer” shown in the /context grid represents this 13,000-token reserved space — it ensures there’s enough room for Claude to complete the current response when auto-compaction triggers.

If auto-compaction fails 3 consecutive times, it automatically stops (circuit breaker mechanism), and you’ll need to run /compact manually.
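
Putting the threshold and the circuit breaker together, the trigger logic described above can be sketched as follows. This is illustrative only; the real implementation is internal to Claude Code:

```python
AUTOCOMPACT_BUFFER = 13_000  # reserved space, per the /context grid
MAX_FAILURES = 3             # circuit breaker: stop after 3 consecutive failures

def should_autocompact(used_tokens: int, window: int, failures: int) -> bool:
    """Sketch of the auto-compact trigger described in the article."""
    if failures >= MAX_FAILURES:
        return False  # circuit breaker tripped; run /compact manually
    return used_tokens >= window - AUTOCOMPACT_BUFFER
```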

Context Indicators in the Status Line

Besides the /context command, Claude Code shows context information in real-time in two places:

Status Bar

The status bar at the bottom of the screen shows the current context usage percentage, keeping you informed of remaining space at all times.

Warning Above the Input Box

When context approaches its limit, a warning appears above the input box:

State                 Message
Autocompact enabled   "X% until auto-compact" (dimmed)
Autocompact disabled  "Context low (X% remaining) - Run /compact" (yellow/red warning)
1M upgrade available  Suggests switching to 1M context mode

Practical Tips

Tip 1: Check Context Regularly

During long sessions, run /context periodically to check usage. Don’t wait until Claude warns you about running low — by then, you may have already lost some important conversation history.

Tip 2: Control Tool Output Size

/context’s optimization suggestions tell you which tool calls consumed the most tokens. Common optimizations:

  • Specify line ranges when reading files instead of reading entire large files
  • Use | head or | tail to limit Bash command output
  • Use more precise patterns when searching to reduce result count
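
If you post-process command output yourself (for example, in a script whose output Claude will read), the same idea applies in code. A minimal sketch with a made-up helper name and default limit:

```python
def truncate_output(text: str, max_lines: int = 50) -> str:
    """Cap long tool output, like piping through `head` (illustrative)."""
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text
    kept = "\n".join(lines[:max_lines])
    return f"{kept}\n... ({len(lines) - max_lines} more lines truncated)"
```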

Tip 3: Trim Your CLAUDE.md

If /context shows Memory Files taking up significant space, your CLAUDE.md files might be too long. Keep only essential information and remove redundant content. Remember: CLAUDE.md is fixed overhead loaded in every conversation.

Tip 4: Use Deferred Loading

MCP tools support deferred loading. If you have many MCP servers loaded but don’t use all of them frequently, deferred loading can significantly reduce the tool definition footprint in your context. /context shows loaded vs. deferred tool counts separately.

Tip 5: Consider 1M Context

If you frequently run out of context, both Sonnet 4.6 and Opus 4.6 support 1M-token context windows. Switch to 1M mode in /model for five times the context space.

Of course, a larger context also means more token consumption and higher costs — choose based on your needs.

Final Thoughts

The context window is the “workbench” for your collaboration with Claude Code — the cleaner the surface, the higher the efficiency.

/context lets you see the state of this workbench: what’s taking up how much space, what can be cleaned up, and how long the remaining space will last. Think of it as a regular checkup — there may not always be a problem, but knowing the status is always better than not knowing.

If context is running low, compress with /compact. If it’s always insufficient, consider upgrading to 1M mode. If fixed overhead is too large, trim your CLAUDE.md and MCP configurations.

Managing your context well means managing the efficiency of your AI collaboration.
