Claude Code /insights: Using AI to Analyze How You Use AI
Why /insights exists
You use Claude Code every day — writing code, fixing bugs, refactoring. But have you ever stopped to ask: how exactly are you using it? Which workflows feel effortless? Where do you keep getting stuck?
Most people never stop to reflect on this: you just keep using it, the same friction points persist, and good habits never get locked in.
/insights answers those questions for you.
What /insights does
/insights is Claude Code’s session analysis command. It scans all your locally stored Claude Code sessions, analyzes your usage patterns with Claude Opus, and generates an HTML report.
```
/insights
```
After running it, you’ll see a progress message while your sessions are analyzed, followed by the analysis output and the path to the saved report.
The five-phase analysis pipeline
Behind /insights is a complete data processing pipeline — not just a simple number aggregator.
Phase 1: Lightweight scan
Claude Code stores all sessions under ~/.claude/projects/ as .jsonl files, organized by project and session ID. Phase 1 does a filesystem scan reading only metadata, without loading full session content — it’s fast.
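The command’s source isn’t quoted here, but a metadata-only scan of that layout can be sketched in a few lines of Python (the directory structure is the one just described; the dictionary field names are illustrative):

```python
from pathlib import Path

def scan_session_metadata(root: Path) -> list[dict]:
    """Collect file metadata for every session without reading contents."""
    sessions = []
    for jsonl in root.glob("*/*.jsonl"):  # <project-hash>/<session-id>.jsonl
        st = jsonl.stat()  # metadata only: no session content is loaded
        sessions.append({
            "project": jsonl.parent.name,
            "session_id": jsonl.stem,
            "size_bytes": st.st_size,
            "modified": st.st_mtime,
        })
    # newest first, so later phases can prioritize recent sessions
    sessions.sort(key=lambda s: s["modified"], reverse=True)
    return sessions
```

Because only `stat()` is called per file, the scan stays fast even with hundreds of sessions on disk.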
Phase 2: Cache + parse
Analysis results are cached under ~/.claude/usage-data/:
- `session-meta/` — statistical summary for each session
- `facets/` — AI-extracted dimensions for each session
Only new sessions are re-parsed. A maximum of 200 sessions are analyzed, with the most recent ones prioritized when the limit is exceeded.
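A minimal sketch of that selection logic, assuming each scanned entry carries the session ID and modification time from Phase 1 (names are illustrative):

```python
MAX_SESSIONS = 200  # the cap described above

def select_sessions_to_parse(scanned: list[dict], cached_ids: set[str]) -> list[dict]:
    """Keep the newest MAX_SESSIONS sessions, then parse only the uncached ones."""
    newest = sorted(scanned, key=lambda s: s["modified"], reverse=True)[:MAX_SESSIONS]
    return [s for s in newest if s["session_id"] not in cached_ids]
```

Everything that already has a cache entry is skipped, which is why a second run is so much cheaper than the first.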
Phase 3: Session facet extraction (SessionFacets)
For each uncached session, Claude Opus is called to extract structured dimensions:
| Field | Meaning |
|---|---|
| `underlying_goal` | What you actually wanted to accomplish |
| `outcome` | Result (fully/mostly/partially achieved, or not achieved) |
| `brief_summary` | Session summary |
| `goal_categories` | Task classification — see table below |
| `user_satisfaction_counts` | Your satisfaction signals (happy/satisfied/dissatisfied/frustrated) |
| `claude_helpfulness` | How helpful Claude was (unhelpful → essential) |
| `friction_counts` | Count of each friction type |
| `friction_detail` | Specific description of friction |
| `primary_success` | Key success factor |
| `user_instructions_to_claude` | Instructions you gave Claude during the session |
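As a rough sketch, the extracted record might look like the following dataclass. The field names mirror the table above; the concrete Python types are an assumption:

```python
from dataclasses import dataclass

@dataclass
class SessionFacets:
    """Structured dimensions extracted per session (types are illustrative)."""
    underlying_goal: str
    outcome: str                       # "fully" / "mostly" / "partially" / "not achieved"
    brief_summary: str
    goal_categories: list[str]         # e.g. ["fix_bug", "write_tests"]
    user_satisfaction_counts: dict[str, int]
    claude_helpfulness: str            # "unhelpful" through "essential"
    friction_counts: dict[str, int]
    friction_detail: str
    primary_success: str
    user_instructions_to_claude: list[str]
```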
Goal categories:
| Category | Description |
|---|---|
| `debug_investigate` | Debugging / investigation |
| `implement_feature` | Implementing a new feature |
| `fix_bug` | Fixing a bug |
| `write_script_tool` | Writing a script or tool |
| `refactor_code` | Refactoring code |
| `configure_system` | System configuration |
| `create_pr_commit` | Creating a PR or commit |
| `analyze_data` | Data analysis |
| `understand_codebase` | Understanding a codebase |
| `write_tests` | Writing tests |
| `write_docs` | Writing documentation |
| `deploy_infra` | Deployment / infrastructure |
One key extraction rule: only count actions the user explicitly initiated — not work Claude decided to do autonomously. “Help me implement the login feature” counts; Claude independently browsing a few extra files does not.
Friction categories:
| Category | Meaning |
|---|---|
| `misunderstood_request` | Claude interpreted your intent incorrectly |
| `wrong_approach` | Right goal, wrong solution method |
| `buggy_code` | Generated code that didn’t work |
| `user_rejected_action` | You stopped Claude mid-action |
| `excessive_changes` | Over-engineered or changed too much |
Phase 4: Cross-session aggregation (AggregatedData)
Global statistics are aggregated across all sessions:
Basic stats:
- Total sessions, messages, and usage duration (hours)
- Total input/output token counts
- Days active, average messages per day
Code changes:
- Total lines added, removed, and files modified
Tool usage:
- Call count distribution for each tool
- Sessions that used Task Agent
- Sessions that used MCP
- Sessions that used Web Search / Web Fetch
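A toy version of this roll-up, assuming each session’s cached summary is a dict with per-session counts (the field names are illustrative, not the real schema):

```python
from collections import Counter

def aggregate(sessions: list[dict]) -> dict:
    """Roll per-session stats up into global totals."""
    totals = {
        "total_sessions": len(sessions),
        "total_messages": sum(s["messages"] for s in sessions),
        "lines_added": sum(s["lines_added"] for s in sessions),
        "lines_removed": sum(s["lines_removed"] for s in sessions),
    }
    tool_calls = Counter()
    for s in sessions:
        tool_calls.update(s["tool_calls"])  # e.g. {"Bash": 12, "Edit": 7}
    totals["tool_calls"] = tool_calls       # call-count distribution per tool
    return totals
```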
Collaboration patterns:
multi_clauding: detects whether you ran multiple Claude Code sessions simultaneously, based on timestamp overlap. It records the number of overlap events, the sessions involved, and the messages sent during the overlap.
Response times:
- Median and average time between Claude’s response and your next message
- Used to characterize whether you prefer rapid iteration or deliberate, thoughtful follow-ups
Time-of-day distribution:
- Records the hour of each user message, used to identify your peak usage hours
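Both metrics fall out of message timestamps directly. A sketch, assuming epoch-second timestamps (whether the real implementation uses UTC or local time for the hour histogram is not specified; UTC is used here):

```python
from collections import Counter
from datetime import datetime, timezone
from statistics import mean, median

def response_time_stats(pairs: list[tuple[float, float]]) -> dict[str, float]:
    """pairs: (claude_response_ts, next_user_message_ts) in epoch seconds."""
    gaps = [user_ts - claude_ts for claude_ts, user_ts in pairs]
    return {"median_s": median(gaps), "mean_s": mean(gaps)}

def hour_histogram(user_message_ts: list[float]) -> Counter:
    """Count user messages per hour of day (0-23)."""
    return Counter(datetime.fromtimestamp(ts, tz=timezone.utc).hour
                   for ts in user_message_ts)
```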
Phase 5: Parallel insight generation
The aggregated data and session summaries are fed to Claude Opus, which generates all report sections in parallel — each section is an independent API call with up to 8,192 output tokens.
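In Python terms, the fan-out might look like this, with `generate_section` standing in for the actual Opus API call (the section names below are a hypothetical encoding of the seven sections described next):

```python
import asyncio

SECTIONS = ["project_areas", "interaction_style", "what_works",
            "friction", "suggestions", "horizon", "fun_ending"]

async def generate_section(name: str, context: str) -> str:
    """Stand-in for one Opus API call (up to 8,192 output tokens in the real pipeline)."""
    await asyncio.sleep(0)  # placeholder for network latency
    return f"<{name} report section>"

async def generate_report(context: str) -> dict[str, str]:
    # every section is an independent call, issued concurrently
    bodies = await asyncio.gather(*(generate_section(s, context) for s in SECTIONS))
    return dict(zip(SECTIONS, bodies))
```

Because the sections are independent, total wall-clock time is roughly the slowest single call rather than the sum of all seven.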
The seven report sections
1. Project Areas
Identifies 4–5 categories of projects you worked on, each with a session count and 2–3 sentences describing what you worked on and how you used Claude Code for it.
2. Interaction Style
The most interesting section. Uses 2–3 paragraphs to analyze how you actually interact with Claude:
- Do you write detailed specs upfront, or iterate as you go?
- Do you interrupt Claude often, or let it finish before reviewing?
- What recurring interaction patterns show up?
Ends with a single sentence capturing your most distinctive interaction style.
3. What Works
Lists 3 workflows where you’re performing impressively well — with titles and descriptions written in second person (“you”), as if someone who knows your work is summarizing your strengths.
4. Friction Analysis
Lists 3 friction categories, each with:
- A sentence explaining what the friction is and what could be done differently
- 2 specific examples drawn from real sessions
This is one of the most valuable sections in the entire report — many habitual inefficiencies are invisible to you, but the session data captures them all.
5. Suggestions
Concrete suggestions across three dimensions:
CLAUDE.md additions: Based on instructions you’ve repeated across multiple sessions, these are rules worth hardcoding into your CLAUDE.md so you never have to say them again. For example, if you’ve told Claude “run the tests after making changes” in multiple sessions, that’s a prime candidate to add to CLAUDE.md.
Features to try: Selected from MCP, custom skills, Hooks, Headless mode, and Task Agents — the ones best suited to your current workflow, each with a copyable command or config snippet.
Usage pattern suggestions: Each suggestion comes with a ready-to-use prompt, making it easy to act on immediately.
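Detecting those repeat candidates for CLAUDE.md is essentially counting normalized instructions across sessions. A sketch, where the normalization and the `min_sessions` threshold are assumptions:

```python
from collections import Counter

def repeated_instructions(per_session_instructions: list[list[str]],
                          min_sessions: int = 3) -> list[str]:
    """Find instructions that recur across sessions: candidates for CLAUDE.md."""
    seen = Counter()
    for instructions in per_session_instructions:
        # count each distinct instruction at most once per session
        for inst in set(i.strip().lower() for i in instructions):
            seen[inst] += 1
    return [inst for inst, n in seen.most_common() if n >= min_sessions]
```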
6. On the Horizon
Based on your usage patterns, suggests 3 advanced directions you haven’t explored yet — autonomous workflows, parallel agents, test-driven development, and more — each with a “try this now” prompt.
7. Fun Ending
Finds one memorable or amusing moment from all your sessions, presented as a headline with brief context. Not a statistic — just a human touch to close the report.
The summary header
The report opens with a quantitative overview:
```
Sessions analyzed: 87 (of 142 scanned)
Total messages: 1,203
Total duration: 47.3 hours
Git commits: 234 | Git pushes: 89
Date range: 2026-01-15 → 2026-04-07
```
Where the report is saved
The HTML report is saved to:
`~/.claude/data/report.html`
After running /insights, the terminal outputs the file path. Open it in a browser to view the full formatted report.
Why Opus
/insights uses Claude Opus for all analysis — both Phase 3 facet extraction and Phase 5 insight generation.
The reason is straightforward: this task requires deep comprehension of large amounts of unstructured session data, pattern recognition, and causal inference across hundreds of conversations. That’s exactly what Opus is built for. Speed isn’t the priority here — report quality is.
Where data comes from and lives
/insights reads only local data. Nothing is uploaded to the cloud:
- Raw session data: `~/.claude/projects/<project-hash>/<session-id>.jsonl`
- Stats cache: `~/.claude/usage-data/session-meta/`
- AI analysis cache: `~/.claude/usage-data/facets/`
- Generated report: `~/.claude/data/report.html`
The caching system means the second run is much faster — only new sessions need re-analysis; everything else is read from cache.
Technical detail: lazy loading
The /insights implementation file is 113 KB and includes heavy HTML rendering dependencies. To avoid slowing down Claude Code’s startup time, this module uses lazy loading: it’s only imported when you actually run /insights, and carries zero startup overhead the rest of the time.
When to use it
A few practical scenarios:
Monthly retrospective: See what types of work dominated your month, how effective you were, and where friction concentrated.
Optimize your CLAUDE.md: Use the suggestions section to identify instructions you keep repeating to Claude and hardcode them — stop explaining the same things in every session.
Discover blind spots: The features_to_try section might surface features you’ve never used but that fit your workflow well — MCP for database access, Hooks for auto-formatting, Headless mode for CI integration.
Track output: Git commits, line count changes, and session duration provide a useful record of productivity over time.
Closing thoughts
/insights does something genuinely interesting: it uses AI to analyze how you collaborate with AI.
It’s not just counting numbers. It’s trying to understand your actual working patterns — what flows, what blocks, where to go next. Across seven sections, it tells you what you’re doing well, surfaces friction you didn’t realize was there, and hands you prompts you can copy and try immediately.
Understanding how you use your tools is the prerequisite for using them well.