
Claude Code /insights: Using AI to Analyze How You Use AI


Why /insights exists

You use Claude Code every day — writing code, fixing bugs, refactoring. But have you ever stopped to ask: how exactly are you using it? Which workflows feel effortless? Where do you keep getting stuck?

Most people never stop to reflect on this. You just keep using it, the same friction points persist, and good habits never get locked in.

/insights answers those questions for you.

What /insights does

/insights is Claude Code’s session analysis command. It scans all your locally stored Claude Code sessions, analyzes your usage patterns with Claude Opus, and generates an HTML report.

/insights

After running it, you'll see a progress message while your sessions are analyzed, followed by the analysis output and the path to the saved report.

The five-phase analysis pipeline

Behind /insights is a complete data processing pipeline — not just a simple number aggregator.

Phase 1: Lightweight scan

Claude Code stores all sessions under ~/.claude/projects/ as .jsonl files, organized by project and session ID. Phase 1 does a filesystem scan reading only metadata, without loading full session content — it’s fast.
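A metadata-only scan like this can be sketched in a few lines. This is my reconstruction, not the actual implementation; the returned field names are assumptions:

```python
from pathlib import Path

def scan_sessions(root: Path = Path.home() / ".claude" / "projects"):
    """Lightweight scan: collect per-session metadata from the filesystem
    without reading any session content."""
    sessions = []
    for jsonl in root.glob("*/*.jsonl"):
        stat = jsonl.stat()
        sessions.append({
            "project": jsonl.parent.name,  # project directory name
            "session_id": jsonl.stem,      # file name is the session ID
            "size_bytes": stat.st_size,    # cheap proxy for session length
            "modified": stat.st_mtime,     # used later to sort by recency
        })
    return sessions
```

Because only `stat()` is called per file, the scan stays fast even with hundreds of sessions on disk.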

Phase 2: Cache + parse

Analysis results are cached under ~/.claude/usage-data/:

  • session-meta/ — statistical summary for each session
  • facets/ — AI-extracted dimensions for each session

Only new sessions are re-parsed. A maximum of 200 sessions are analyzed, with the most recent ones prioritized when the limit is exceeded.
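The cap-and-cache logic described above might look roughly like this (a sketch under the assumption that each session record carries a `session_id` and a `modified` timestamp, as in the scan phase):

```python
def select_for_analysis(sessions, cached_ids, limit=200):
    """Decide which sessions need (re-)parsing: cap the working set at
    `limit`, most recent first, then skip anything already cached."""
    # When the cap is exceeded, the most recent sessions win
    recent_first = sorted(sessions, key=lambda s: s["modified"], reverse=True)
    capped = recent_first[:limit]
    # Only sessions absent from the cache are re-parsed
    return [s for s in capped if s["session_id"] not in cached_ids]
```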

Phase 3: Session facet extraction (SessionFacets)

For each uncached session, Claude Opus is called to extract structured dimensions:

  • underlying_goal: what you actually wanted to accomplish
  • outcome: result (fully / mostly / partially achieved, or not achieved)
  • brief_summary: short summary of the session
  • goal_categories: task classification (see the list below)
  • user_satisfaction_counts: your satisfaction signals (happy / satisfied / dissatisfied / frustrated)
  • claude_helpfulness: how helpful Claude was (unhelpful → essential)
  • friction_counts: count of each friction type
  • friction_detail: specific description of the friction
  • primary_success: the key success factor
  • user_instructions_to_claude: instructions you gave Claude during the session

Goal categories:

  • debug_investigate: debugging / investigation
  • implement_feature: implementing a new feature
  • fix_bug: fixing a bug
  • write_script_tool: writing a script or tool
  • refactor_code: refactoring code
  • configure_system: system configuration
  • create_pr_commit: creating a PR or commit
  • analyze_data: data analysis
  • understand_codebase: understanding a codebase
  • write_tests: writing tests
  • write_docs: writing documentation
  • deploy_infra: deployment / infrastructure

One key extraction rule: only count actions the user explicitly initiated — not work Claude decided to do autonomously. “Help me implement the login feature” counts; Claude independently browsing a few extra files does not.

Friction categories:

  • misunderstood_request: Claude interpreted your intent incorrectly
  • wrong_approach: right goal, wrong solution method
  • buggy_code: generated code that didn't work
  • user_rejected_action: you stopped Claude mid-action
  • excessive_changes: over-engineered or changed too much
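Taken together, the extracted dimensions could be modeled roughly as the following record. This is a hypothetical schema inferred from the field descriptions above; the real types aren't documented:

```python
from dataclasses import dataclass, field

@dataclass
class SessionFacets:
    """One session's AI-extracted dimensions (hypothetical schema)."""
    underlying_goal: str
    outcome: str                         # "fully" / "mostly" / "partially" / "not_achieved"
    brief_summary: str
    goal_categories: list[str] = field(default_factory=list)       # e.g. ["fix_bug"]
    user_satisfaction_counts: dict[str, int] = field(default_factory=dict)
    claude_helpfulness: str = "unknown"  # "unhelpful" ... "essential"
    friction_counts: dict[str, int] = field(default_factory=dict)  # e.g. {"buggy_code": 2}
    friction_detail: str = ""
    primary_success: str = ""
    user_instructions_to_claude: list[str] = field(default_factory=list)
```

One structured record per session is what makes the cross-session aggregation in the next phase cheap: counting, summing, and grouping over plain fields instead of re-reading raw transcripts.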

Phase 4: Cross-session aggregation (AggregatedData)

Global statistics are aggregated across all sessions:

Basic stats:

  • Total sessions, messages, and usage duration (hours)
  • Total input/output token counts
  • Days active, average messages per day

Code changes:

  • Total lines added, removed, and files modified

Tool usage:

  • Call count distribution for each tool
  • Sessions that used Task Agent
  • Sessions that used MCP
  • Sessions that used Web Search / Web Fetch

Collaboration patterns:

  • multi_clauding: detects whether you ran multiple Claude Code sessions simultaneously — determined by timestamp overlap, recording overlap event count, sessions involved, and messages sent during overlap
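Overlap detection of this kind reduces to an interval check over each session's (start, end) timestamps. A minimal sketch, not the actual implementation:

```python
def count_overlaps(intervals):
    """Count overlapping pairs among (start, end) session intervals and
    how many distinct sessions were involved -- the multi_clauding signal."""
    events = sorted(intervals)  # sort by start time
    overlaps = 0
    involved = set()
    for i, (start_a, end_a) in enumerate(events):
        for j in range(i + 1, len(events)):
            start_b, end_b = events[j]
            if start_b >= end_a:
                break  # later sessions start even later; no overlap possible
            overlaps += 1
            involved.update({(start_a, end_a), (start_b, end_b)})
    return overlaps, len(involved)
```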

Response times:

  • Median and average time between Claude’s response and your next message
  • Used to characterize whether you prefer rapid iteration or deliberate, thoughtful follow-ups

Time-of-day distribution:

  • Records the hour of each user message, used to identify your peak usage hours
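Both timing aggregations above can be computed in one pass over timestamped messages. A sketch, assuming each message is a `(role, unix_timestamp)` pair (the real session format isn't documented here):

```python
from statistics import mean, median

def timing_stats(message_times):
    """Response-gap stats and hour-of-day counts from (role, timestamp) pairs."""
    gaps = []
    prev = None
    for role, ts in message_times:
        # Gap between Claude's response and your next message
        if prev is not None and prev[0] == "assistant" and role == "user":
            gaps.append(ts - prev[1])
        prev = (role, ts)
    hour_counts = {}
    for role, ts in message_times:
        if role == "user":
            hour = int(ts // 3600) % 24  # hour of day from a unix timestamp (UTC)
            hour_counts[hour] = hour_counts.get(hour, 0) + 1
    return {
        "median_gap": median(gaps) if gaps else None,
        "mean_gap": mean(gaps) if gaps else None,
        "hour_counts": hour_counts,
    }
```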

Phase 5: Parallel insight generation

The aggregated data and session summaries are fed to Claude Opus, which generates all report sections in parallel — each section is an independent API call with up to 8,192 output tokens.
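Fan-out like this is the classic concurrent-requests pattern. A sketch with a placeholder in place of the real model call (section names and function names here are assumptions):

```python
import asyncio

SECTIONS = [
    "project_areas", "interaction_style", "what_works",
    "friction_analysis", "suggestions", "on_the_horizon", "fun_ending",
]

async def generate_section(name, aggregated_data):
    """Stand-in for one independent API call (up to 8,192 output tokens)."""
    await asyncio.sleep(0)  # represents the real model round trip
    return name, f"<{name} report text>"

async def generate_report(aggregated_data):
    # All sections are generated concurrently, one call each
    results = await asyncio.gather(
        *(generate_section(s, aggregated_data) for s in SECTIONS)
    )
    return dict(results)
```

Because the sections are independent, total wall-clock time is roughly that of the slowest single call rather than the sum of all seven.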

The seven report sections

1. Project Areas

Identifies 4–5 categories of projects you worked on, each with a session count and 2–3 sentences describing what you worked on and how you used Claude Code for it.

2. Interaction Style

The most interesting section. Uses 2–3 paragraphs to analyze how you actually interact with Claude:

  • Do you write detailed specs upfront, or iterate as you go?
  • Do you interrupt Claude often, or let it finish before reviewing?
  • What recurring interaction patterns show up?

Ends with a single sentence capturing your most distinctive interaction style.

3. What Works

Lists 3 workflows where you’re performing impressively well — with titles and descriptions written in second person (“you”), as if someone who knows your work is summarizing your strengths.

4. Friction Analysis

Lists 3 friction categories, each with:

  • A sentence explaining what the friction is and what could be done differently
  • 2 specific examples drawn from real sessions

This is one of the most valuable sections in the entire report — many habitual inefficiencies are invisible to you, but the session data captures them all.

5. Suggestions

Concrete suggestions across three dimensions:

CLAUDE.md additions: Based on instructions you’ve repeated across multiple sessions, these are rules worth hardcoding into your CLAUDE.md so you never have to say them again. For example, if you’ve told Claude “run the tests after making changes” in multiple sessions, that’s a prime candidate to add to CLAUDE.md.
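For that example, the hardcoded rule might look something like this in CLAUDE.md (illustrative wording, not actual generated output):

```markdown
## Workflow rules
- After making any code change, run the test suite and report the results.
```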

Features to try: Selected from MCP, custom skills, Hooks, Headless mode, and Task Agents — the ones best suited to your current workflow, each with a copyable command or config snippet.

Usage pattern suggestions: Each suggestion comes with a ready-to-use prompt, making it easy to act on immediately.

6. On the Horizon

Based on your usage patterns, suggests 3 advanced directions you haven’t explored yet — autonomous workflows, parallel agents, test-driven development, and more — each with a “try this now” prompt.

7. Fun Ending

Finds one memorable or amusing moment from all your sessions, presented as a headline with brief context. Not a statistic — just a human touch to close the report.

The summary header

The report opens with a quantitative overview:

Sessions analyzed:     87 (of 142 scanned)
Total messages:        1,203
Total duration:        47.3 hours
Git commits:           234  |  Git pushes: 89
Date range:            2026-01-15 → 2026-04-07

Where the report is saved

The HTML report is saved to:

~/.claude/data/report.html

After running /insights, the terminal outputs the file path. Open it in a browser to view the full formatted report.

Why Opus

/insights uses Claude Opus for all analysis — both Phase 3 facet extraction and Phase 5 insight generation.

The reason is straightforward: this task requires deep comprehension of large amounts of unstructured session data, pattern recognition, and causal inference across hundreds of conversations. That’s exactly what Opus is built for. Speed isn’t the priority here — report quality is.

Where data comes from and lives

/insights reads only local data. Nothing is uploaded to the cloud:

  • Raw session data: ~/.claude/projects/<project-hash>/<session-id>.jsonl
  • Stats cache: ~/.claude/usage-data/session-meta/
  • AI analysis cache: ~/.claude/usage-data/facets/
  • Generated report: ~/.claude/data/report.html

The caching system means the second run is much faster — only new sessions need re-analysis; everything else is read from cache.

Technical detail: lazy loading

The /insights implementation file is 113 KB and includes heavy HTML rendering dependencies. To avoid slowing down Claude Code’s startup time, this module uses lazy loading: it’s only imported when you actually run /insights, and carries zero startup overhead the rest of the time.
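The pattern generalizes to any heavy, rarely-used module. In Python it might look like this (the real module is TypeScript, and the names here are illustrative; `json` stands in for the heavy report dependency):

```python
import importlib

_insights_module = None

def get_insights_module(name="json"):
    """Import the heavy report module on first use only, then reuse it.

    Nothing is imported at startup; the cost is paid the first time the
    command actually runs, and subsequent calls hit the cached module.
    """
    global _insights_module
    if _insights_module is None:
        _insights_module = importlib.import_module(name)
    return _insights_module
```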

When to use it

A few practical scenarios:

Monthly retrospective: See what types of work dominated your month, how effective you were, and where friction concentrated.

Optimize your CLAUDE.md: Use the suggestions section to identify instructions you keep repeating to Claude and hardcode them — stop explaining the same things in every session.

Discover blind spots: The features_to_try section might surface features you’ve never used but that fit your workflow well — MCP for database access, Hooks for auto-formatting, Headless mode for CI integration.

Track output: Git commits, line count changes, and session duration provide a useful record of productivity over time.

Closing thoughts

/insights does something genuinely interesting: it uses AI to analyze how you collaborate with AI.

It’s not just counting numbers. It’s trying to understand your actual working patterns — what flows, what blocks, where to go next. Across seven sections, it tells you what you’re doing well, surfaces friction you didn’t realize was there, and hands you prompts you can copy and try immediately.

Understanding how you use your tools is the prerequisite for using them well.
