Local analytics dashboard for AI coding assistants.
Tokens, costs, agent metrics and activity heatmap —
parsed straight from ~/.claude/
Real screenshots from a live agentistics instance.
One-click report — dark & light themes, shareable anywhere
Ready to see your own data?
Three commands, three modes — a fullscreen TUI for live metrics, a background OTel daemon, and a server with embedded React dashboard.
Input, output, cache-read, and cache-write tokens broken down for every session and every model. Understand exactly where your token budget goes and which cache strategies save you the most.
Real costs in USD and BRL with live exchange rates. Blended cost-per-token across your entire model mix. Per-model breakdown with exact Anthropic pricing so you know which model is costing what.
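The blended figure is simply total spend divided by total tokens across every model you used. A quick sketch with made-up numbers (not live Anthropic pricing):

```shell
# blended cost per million tokens, with illustrative numbers only
awk 'BEGIN { usd = 12.50; tokens = 4200000
             printf "blended USD/Mtok = %.2f\n", usd / tokens * 1e6 }'
# → blended USD/Mtok = 2.98
```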
Every Agent tool call is tracked individually: duration, token usage, cost, and detailed tool stats including file reads, edits, bash executions, and searches. Compare success rates per agent type across your sessions.
GitHub-style contribution heatmap of your AI coding activity across 52 weeks. Streak counter that tracks consecutive active days — without penalizing you for not having worked yet today. Intensity reflects token volume per day.
Token and cost distribution across every Claude model in your usage history. Filter by project to identify which workstreams lean most heavily on expensive models. Donut chart and per-model table with share percentages.
Reads directly from ~/.claude/ on your filesystem. No cloud sync, no account creation, no remote analytics, no telemetry. A single binary that parses JSONL files locally and serves everything from your machine.
Export your AI usage metrics to any OTel-compatible backend. Token counters, cost gauge, session count, streak days, git line stats, and per-tool-type call counts. Works with Prometheus, Grafana, Datadog, and any OTLP endpoint.
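As one possible sink, here is a minimal OpenTelemetry Collector configuration (an assumption about your observability setup, not something agentistics ships) that accepts OTLP over HTTP on port 4318 and exposes the metrics on a Prometheus scrape endpoint; the `prometheus` exporter requires the collector-contrib distribution:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # where agentop watch sends OTLP
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889       # Prometheus scrapes this port
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```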
One-click export of your full analytics report as a PDF. Includes token usage, cost breakdown, session history, model distribution, and agent metrics. Choose between dark and light themes — perfect for sharing with your team.
Single binary, no config, no dependencies. Drop it anywhere in your $PATH and start exploring your AI usage immediately.
One-line install for Linux/macOS. Or clone and build from source with Bun.
Starts the API on port 3001 and serves the full React dashboard. Your ~/.claude/ is read directly — no config needed.
Use agentop tui for a fullscreen terminal dashboard or agentop watch to stream OTel metrics to your observability stack.
# download the latest release
curl -fsSL https://github.com/blpsoares/agentistics/releases/latest/download/agentop-linux -o agentop
chmod +x agentop
sudo mv agentop /usr/local/bin/
# start the dashboard
agentop server
# open in browser
→ http://localhost:3001
# live TUI (separate terminal)
agentop tui
# OTel metrics export
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 agentop watch
Every session is a JSONL file. Agentistics parses them locally, caches the aggregated stats, extracts agent metrics, and streams live updates via SSE, all without ever touching a remote server.
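The kind of per-session aggregation involved can be sketched in plain shell. The toy file below only mimics the shape of the real ~/.claude/ schema; field names are illustrative assumptions:

```shell
# toy JSONL session file; one event per line, as in real session logs
cat > /tmp/session-demo.jsonl <<'EOF'
{"type":"assistant","usage":{"input_tokens":120,"output_tokens":45}}
{"type":"assistant","usage":{"input_tokens":300,"output_tokens":90}}
EOF

# sum input/output tokens with awk (crude string extraction, dependency-free)
awk -F'"input_tokens":' '
  NF > 1 { split($2, a, ","); in_t += a[1] }
  { n = split($0, b, "\"output_tokens\":")
    if (n > 1) { split(b[2], c, "}"); out_t += c[1] } }
  END { printf "input=%d output=%d\n", in_t, out_t }' /tmp/session-demo.jsonl
# → input=420 output=135
```

In practice a real parser would use a JSON library rather than field splitting; the point is that everything happens against local files.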
Open source · Local first · Built with Bun + React + TypeScript