90 days of session data / Jan-Apr 2026

It's called Claude Code. I don't use it for code.

I'm not an engineer. I'm a co-founder who runs ops, strategy, research, sales, and infrastructure. I mined 4,053 sessions and 126,522 tool calls to see where the time actually goes. The answer surprised me.


I

82% of everything I do has nothing to do with writing code

I classified every single tool call across 4,053 sessions. Coding is 18% of the total. The rest is research, ops, writing, email, design, sales, analytics, and browser automation. Claude Code is a misnomer. It's an operating system.
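The mechanics of that classification are simple: walk the session logs, map each tool call to a category, count. A minimal sketch, assuming one tool-call record per log entry; the tool names and the category map here are illustrative stand-ins, not my actual taxonomy:

```python
from collections import Counter

# Illustrative tool -> category map (the real taxonomy is more granular).
CATEGORIES = {
    "Edit": "coding", "Write": "coding",
    "WebFetch": "research", "WebSearch": "research",
    "Bash": "ops",
}

def classify(calls):
    """Bucket each tool call; anything unmapped counts as 'other'."""
    return Counter(CATEGORIES.get(c["tool"], "other") for c in calls)

# Toy log; real session logs are JSONL, one tool call per line.
calls = [{"tool": t} for t in ["Edit", "Bash", "WebFetch", "Bash", "Gmail"]]
counts = classify(calls)
coding_share = counts["coding"] / sum(counts.values())
print(dict(counts), round(coding_share, 2))
# → {'coding': 1, 'ops': 2, 'research': 1, 'other': 1} 0.2
```

Run that over 126,522 real calls and the coding bucket lands at 18%.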

[Chart: not code vs. actual coding]

[Chart: where the 82% actually goes]


II

What does "not code" mean for a co-founder?

It means SSH-ing into servers. Querying Amplitude for conversion data. Drafting Gmail outreach. Running multi-model research swarms. Scraping competitor sites. Managing Figma designs. Pulling calendar context from Granola meetings. Operating a business through a terminal.

[Chart: what I type into the shell]

ssh leads with 10K calls. gws is a Google Workspace CLI; cdp.mjs automates Chrome.

[Chart: services I connected via MCP]

14 external services wired directly into the conversation.


III

I direct. 2,556 agents execute.

93% of web research is done by agents, not me. 75% of file reading. But I keep 81% of agent dispatching: I decide what to investigate, who to send, and which model to use. The agents do the legwork.

[Chart: me (main session) vs. delegated to agents]

[Chart: who I dispatch the most]


IV

Six things I actually said to Claude this month

These are real prompts from real sessions. Not demos. Not cherry-picked. The kind of thing I'd otherwise need a chief of staff, an analyst, or 4 hours of manual work for.

"how is the wallet app growing"
90 Amplitude queries pulled conversion funnels, swap volumes, and retention cohorts, cross-referenced with Omni data. The session then evolved: I asked for a Swap Optimization PRD, which was independently reviewed by a UX Architect, a Software Architect, and a Senior Developer agent. One prompt turned into an analytics report and a peer-reviewed product spec.
16h session · 90 Amplitude queries · Output: earn-funnel-report.html + Swap PRD
"should we rewrite the web app to native?"
72 agents dispatched. Bull, bear, and adversarial research gathered via Gemini, Grok, GPT-5.4, Kimi, and DeepSeek in parallel; the multi-model debates were synthesized into a PRD, which a Software Architect, a UX Researcher, and a Senior Developer each reviewed independently. Then 65 Amplitude queries showed iOS already converts 2.45x better through the existing WebView wrapper (26.6% vs 10.9%). Killed the rewrite. Redirected engineering toward the web-to-app acquisition funnel instead.
72 agents / 5 LLMs · 65 Amplitude queries · Output: mobile-conversion-analysis.md
"just spoke to Przemek, transcribed via MacWhisper"
Transcribed the co-founder call. I said "I promised Przemek a note on strategy for tomorrow's meeting, do it for me." It pulled context from Gmail threads, previous meeting transcripts, and vault notes, then drafted a strategy brief. Six revisions. Sent the same evening. That's a chief of staff.
4.4h session · Gmail + transcription · Output: przemek-workshop-strategy-note.md
"this is all giberrish and absolutely trash slides"
Board pipeline update deck. Started from a stale Google Slides link. Pulled the real data from the Obsidian vault, recent meeting transcripts, and email threads. Swept the vault for missed M&A target companies. Built a Python deck generator. Four iterations until the slides matched reality. Investor relations, from raw chaos to board-ready in one session.
3.4h session · Vault + transcripts + email · Output: q1-board-pipeline-update deck
"Triage all the emails from the last seven days"
Started as weekly email triage. Then I asked for a research swarm on three deal angles: Coinme/Sequence acquisition economics, Polygon's enterprise strategy, and regulatory exposure. 38 agents ran in parallel: each angle researched via Gemini and Grok, then debated across multiple models. Weekly admin turned into deal intelligence.
38 agents dispatched · Gmail + /research-swarm · Output: deal research briefs
"guide me step by step through the Day 1 execution plan"
Project Rocket GTM launch. Claude walked me through the morning routine: LinkedIn profile optimization, content scheduling, then automated Chrome to comment on 5 target posts. Set up LinkedIn Ads via the Campaign Manager UI. Created TLA creatives. Drafted 8 personalized outreach emails via Gmail. A full content marketing launch day, executed through the terminal.
69h session · Chrome CDP + LinkedIn + Gmail · Output: 8 email drafts + ad creatives live

None of these are coding tasks. They're the work a co-founder actually does: research, outreach, analytics, ops, follow-ups. Claude Code just happens to be the best interface for all of it.
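The research-swarm shape that recurs above (fan one question out across several models in parallel, then synthesize) is roughly this. A conceptual sketch only: the stubbed `research` function stands in for real agent dispatch, and the model and angle names are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

MODELS = ["gemini", "grok", "gpt", "kimi", "deepseek"]  # placeholder names
ANGLES = [
    "acquisition economics",
    "enterprise strategy",
    "regulatory exposure",
]

def research(model, angle):
    """Stub: the real system dispatches a research agent against `model`."""
    return f"{model} on {angle}"

def swarm(angles, models):
    """Fan every (angle, model) pair out in parallel, group results by angle."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(research, m, a): a for a in angles for m in models}
        briefs = {a: [] for a in angles}
        for fut, angle in futures.items():
            briefs[angle].append(fut.result())
    return briefs  # one stack of briefs per angle, ready for a synthesis pass

briefs = swarm(ANGLES, MODELS)
print(len(briefs), sum(len(v) for v in briefs.values()))  # 3 15
```

The debate step is just another fan-out: feed each angle's stack of briefs back to the models and ask them to attack each other's conclusions.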

V

How I got here in 3 phases

Week by week, the composition shifted. Early on, everything was Bash. Then MCP servers connected external services. Then agents and skills turned it into an orchestration layer.

[Chart: manual work vs. MCP integrations vs. agents vs. skills, week by week]

VI

The system I built around it

Claude Code is a runtime. I plugged in reusable workflows, specialized AI personas, and live integrations. Together they form a personal operating system for running a business.

• 55 custom skills: research swarms, ad audits, cold email writer, Chrome automation, meeting transcription, LinkedIn post writer.
• 156 agent definitions: from "Deal Strategist" to "UX Researcher" to "Legal Compliance Checker," each with specific expertise.
• 14 MCP servers: Gmail, Calendar, Figma, Amplitude, Chrome, Qdrant, Firecrawl, Apollo CRM, Playwright, Granola.
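For reference, project-scoped MCP servers in Claude Code are declared in a `.mcp.json` file under an `mcpServers` key. The sketch below writes a two-server example; the commands and package names are placeholders, not the launchers I actually run — substitute each service's real MCP server:

```python
import json

# Minimal project-scoped MCP config. "mcpServers" is the Claude Code
# convention; the commands/packages below are PLACEHOLDERS.
config = {
    "mcpServers": {
        "gmail": {"command": "npx", "args": ["-y", "example-gmail-mcp"]},
        "amplitude": {"command": "npx", "args": ["-y", "example-amplitude-mcp"]},
    }
}

with open(".mcp.json", "w") as f:
    json.dump(config, f, indent=2)

print(sorted(config["mcpServers"]))  # ['amplitude', 'gmail']
```

Once the file is in the project root, every session in that directory sees those services as native tools.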

[Chart: top skills I actually invoke]


VII

The raw numbers

Scale

• Total sessions: 4,053
• Tool calls: 126,522
• Agents dispatched: 2,556
• Conversation data: 2.6 GB
• Images shared: 1,430
• Sessions over 6 hours: 91
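Those totals imply a couple of per-session averages worth stating; pure arithmetic on the numbers above:

```python
# Per-session averages derived from the totals above.
sessions, tool_calls, agents = 4_053, 126_522, 2_556

calls_per_session = tool_calls / sessions   # tool calls per session
agents_per_session = agents / sessions      # agents dispatched per session

print(round(calls_per_session, 1), round(agents_per_session, 2))  # 31.2 0.63
```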

Models

• Opus 4.6: 149,712 (77%)
• Haiku 4.5: 24,878 (13%)
• Sonnet 4.6: 14,646 (8%)
• Sonnet 4.5: 3,358 (2%)
• Kimi K2.5 (via PAL): 885
• Opus 4.5: 613