Claude Code /context: What's Eating Your Context Window?
What Is /context
After using Claude Code for a while, you've probably run into situations like these:
- Mid-conversation, you suddenly get a warning that context is running low
- After a few large file reads, Claude’s response quality noticeably drops
- You have no idea what’s actually consuming your context space
/context is the “dashboard” for your context window. It uses a colored grid and detailed category breakdowns to show you what’s occupying your context, how much space each category takes, how much remains — and how to optimize.
How to Use It
In Claude Code interactive mode, type:
/context
No arguments needed. It immediately displays a complete analysis of your current context.
What You’ll See
Colored Grid
This is a visual representation of context usage:
| Symbol | Meaning |
|---|---|
| ⛁ (colored) | Used space (category >= 70% of its allocation) |
| ⛀ (colored) | Used space (category < 70% of its allocation) |
| ⛝ (colored) | Autocompact buffer |
| ⛶ (dim) | Remaining free space |
Grid size adapts to your terminal width and model context size:
- 200K models: 10x10 grid (5x5 on narrow screens)
- 1M models: 20x10 grid (5x10 on narrow screens)
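As a rough sketch, the sizing rules above amount to a small lookup. The cell counts mirror the figures quoted; the 80-column cutoff for a "narrow" terminal and the function name are assumptions, not Claude Code's actual logic:

```python
def grid_size(context_tokens: int, terminal_cols: int) -> tuple[int, int]:
    """Pick (width, height) of the usage grid.

    Mirrors the grid sizes quoted above; the 80-column threshold
    for "narrow" is a guess, not Claude Code's real cutoff.
    """
    narrow = terminal_cols < 80
    if context_tokens >= 1_000_000:           # 1M-context models
        return (5, 10) if narrow else (20, 10)
    return (5, 5) if narrow else (10, 10)     # 200K default
```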
Category Breakdown
To the right of the grid is a detailed legend showing token counts and percentages for each category:
| Category | Description |
|---|---|
| System Prompt | Base system instructions |
| System Tools | Built-in tool definitions (Bash, Edit, Read, etc.) |
| MCP Tools | MCP server tool definitions |
| Custom Agents | Custom agent definitions |
| Memory Files | CLAUDE.md and memory files |
| Skills | Loaded skill definitions |
| Messages | Conversation messages (user input, Claude replies, tool calls and results) |
| Free Space | Remaining available space |
| Autocompact Buffer | Reserved buffer for auto-compaction |
Detailed Information
Below the statistics, you’ll see specific contents for each category:
- MCP Tools: How many tools each MCP server has loaded vs. deferred
- Custom Agents: Grouped by source (Project/User/Managed/Plugin/Built-in)
- Memory Files: Each CLAUDE.md file’s path and token usage
- Skills: Loaded skills listed
Each detail item includes a navigation hint pointing to the relevant management command (/mcp, /agents, /memory), so you can jump straight there.
Optimization Suggestions
/context also provides targeted optimization suggestions based on the analysis. For example:
- Context over 80% full: “Use /compact now to control what gets kept”
- Large tool call results: “Bash/Read/Grep returned a lot of content — consider narrowing your query”
- Large memory files: “CLAUDE.md files are taking up significant space — consider trimming”
- Autocompact disabled: “Enable it in /config or use /compact manually”
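These rules of thumb could be sketched as a simple mapping from stats to hints. Only the 80% figure comes from the list above; the 5,000-token memory threshold and all names are illustrative assumptions:

```python
def suggest(usage_pct: float, memory_tokens: int, autocompact_on: bool) -> list[str]:
    """Map a few context stats to the kinds of hints /context prints.

    The 80% figure matches the text above; the 5,000-token memory
    threshold is an assumed value for illustration only.
    """
    hints = []
    if usage_pct > 80:
        hints.append("Use /compact now to control what gets kept")
    if memory_tokens > 5_000:  # assumed threshold
        hints.append("CLAUDE.md files are taking up significant space")
    if not autocompact_on:
        hints.append("Enable autocompact in /config or use /compact manually")
    return hints
```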
Context Window Fundamentals
Window Size
| Mode | Size |
|---|---|
| Default | 200,000 Tokens |
| 1M Mode | 1,000,000 Tokens (available for Sonnet 4.6 / Opus 4.6) |
1M mode requires model support. You can enable it by adding a [1m] suffix to the model name (e.g., via /model).
What’s Inside the Context
A typical Claude Code session’s context consists of these parts:
┌─────────────────────────────────┐
│ System Prompt │ Fixed overhead
│ Tool Definitions │ Fixed overhead
│ Memory Files (CLAUDE.md, etc.) │ Semi-fixed
│ Skills / Agents │ Semi-fixed
├─────────────────────────────────┤
│ Message 1 (user input) │
│ Message 2 (Claude reply) │
│ Message 3 (tool call + result)│ Grows dynamically
│ Message 4 (user follow-up) │
│ ... │
├─────────────────────────────────┤
│ Autocompact Buffer │ Reserved 13,000 Tokens
│ Free Space │
└─────────────────────────────────┘
“Fixed overhead” exists in every conversation as baseline cost. The “dynamic” portion keeps growing as the conversation progresses. When it approaches the limit, compaction is needed.
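The layout above is simple arithmetic: free space is whatever the fixed and dynamic parts leave over, minus the 13,000-token reserved buffer from the diagram. The category figures below are made-up sample numbers, not real measurements:

```python
AUTOCOMPACT_BUFFER = 13_000  # reserved tokens, per the diagram above

def free_space(window: int, categories: dict[str, int]) -> int:
    """Tokens still usable before the reserved buffer is reached."""
    used = sum(categories.values())
    return window - used - AUTOCOMPACT_BUFFER

# Sample 200K session (illustrative numbers only):
sample = {
    "system_prompt": 3_000,
    "tool_definitions": 12_000,
    "memory_files": 2_500,
    "messages": 40_000,
}
print(free_space(200_000, sample))  # 129500
```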
What Consumes the Most Context
Based on real-world usage, the biggest context consumers are:
- Large file reads — reading a 1000-line file can consume thousands of tokens in one go
- Bash command output — npm install, git log, and build output tend to be very long
- Multiple tool call rounds — each tool call (input + output) accumulates
- MCP tool definitions — if you have many MCP servers loaded, tool definitions alone can take significant space
How /context Relates to /compact
/context and /compact work as a pair:
| Command | Purpose |
|---|---|
| /context | Diagnose — see what's consuming your context |
| /compact | Treat — compress conversation history, free up space |
Typical workflow:
- Notice Claude’s responses getting slower or lower quality
- Run /context to check context usage
- If it's nearly full, run /compact to compress
- Run /context again to confirm how much space was freed
Auto-Compaction
If you’ve enabled auto-compaction in /config, Claude Code automatically triggers compaction when context approaches its limit — no manual action needed.
The auto-compact trigger threshold is: effective context window - 13,000 Tokens.
The “Autocompact Buffer” shown in the /context grid represents this 13,000-token reserved space — it ensures there’s enough room for Claude to complete the current response when auto-compaction triggers.
If auto-compaction fails 3 consecutive times, it automatically stops (circuit breaker mechanism), and you’ll need to run /compact manually.
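A minimal sketch of this trigger logic, using only the numbers stated above (the function and parameter names are hypothetical, not Claude Code's internals):

```python
AUTOCOMPACT_BUFFER = 13_000        # reserved tokens, per the text above
MAX_CONSECUTIVE_FAILURES = 3       # circuit-breaker limit from the text

def should_autocompact(used: int, window: int, failures: int) -> bool:
    """True when usage crosses (window - buffer) and the breaker hasn't tripped."""
    if failures >= MAX_CONSECUTIVE_FAILURES:
        return False  # circuit breaker tripped: run /compact manually
    return used >= window - AUTOCOMPACT_BUFFER
```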
Context Indicators in the Status Line
Besides the /context command, Claude Code shows context information in real-time in two places:
Status Bar
The status bar at the bottom of the screen shows the current context usage percentage, keeping you informed of remaining space at all times.
Warning Above the Input Box
When context approaches its limit, a warning appears above the input box:
| State | Message |
|---|---|
| Autocompact enabled | “X% until auto-compact” (dimmed) |
| Autocompact disabled | “Context low (X% remaining) - Run /compact” (yellow/red warning) |
| 1M upgrade available | Suggests switching to 1M context mode |
Practical Tips
Tip 1: Check Context Regularly
During long sessions, run /context periodically to check usage. Don’t wait until Claude warns you about running low — by then, you may have already lost some important conversation history.
Tip 2: Control Tool Output Size
/context’s optimization suggestions tell you which tool calls consumed the most tokens. Common optimizations:
- Specify line ranges when reading files instead of reading entire large files
- Use `| head` or `| tail` to limit Bash command output
- Use more precise patterns when searching to reduce result count
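The same idea, capping how much tool output ever enters the context, can be sketched as a small truncation helper. The ~4-characters-per-token ratio is a common rough heuristic, not an exact tokenizer, and the helper is purely illustrative:

```python
def cap_output(text: str, max_tokens: int = 500) -> str:
    """Truncate tool output to an approximate token budget.

    Uses the rough ~4 chars/token heuristic rather than a real
    tokenizer, so the budget is only approximate.
    """
    max_chars = max_tokens * 4
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + f"\n... [truncated, ~{max_tokens} tokens kept]"
```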
Tip 3: Trim Your CLAUDE.md
If /context shows Memory Files taking up significant space, your CLAUDE.md files might be too long. Keep only essential information and remove redundant content. Remember: CLAUDE.md is fixed overhead loaded in every conversation.
Tip 4: Use Deferred Loading
MCP tools support deferred loading. If you have many MCP servers loaded but don’t use all of them frequently, deferred loading can significantly reduce the tool definition footprint in your context. /context shows loaded vs. deferred tool counts separately.
Tip 5: Consider 1M Context
If you frequently run out of context, both Sonnet 4.6 and Opus 4.6 support 1M-token context windows. Switch to 1M mode in /model for five times the context space.
Of course, a larger context also means more token consumption and higher costs — choose based on your needs.
Final Thoughts
The context window is the “workbench” for your collaboration with Claude Code — the cleaner the surface, the higher the efficiency.
/context lets you see the state of this workbench: what’s taking up how much space, what can be cleaned up, and how long the remaining space will last. Think of it as a regular checkup — there may not always be a problem, but knowing the status is always better than not knowing.
If context is running low, compress with /compact. If it’s always insufficient, consider upgrading to 1M mode. If fixed overhead is too large, trim your CLAUDE.md and MCP configurations.
Managing your context well means managing the efficiency of your AI collaboration.