Admin Operations Dashboard

Monitor platform health, revenue metrics, and system performance

Overview

The Admin Operations Dashboard gives platform administrators a real-time view of business metrics, system health, and performance data. Track MRR, subscriber growth, and churn rates; monitor system latency, data integrity, automated load test results, and AI autofix activity across Sentry errors, load tests, and CI workflows.

Accessing the Dashboard

  1. Log in with an admin account
  2. Navigate to /admin/operations/overview for business metrics
  3. Navigate to /admin/operations/health for system health

Overview Page

Revenue Metrics

The top of the overview page shows:

  • MRR (Monthly Recurring Revenue) - Current monthly revenue from active subscriptions
  • Total Subscribers - Number of active paying customers
  • New Subscribers This Month - Growth in the current billing period
  • Churn Rate - Percentage of subscribers who cancelled during the current billing period
  • Average Revenue Per User (ARPU) - MRR divided by total subscribers
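The derived metrics above follow directly from the base counts. A minimal sketch of the arithmetic (the interface and function names are illustrative assumptions, not the dashboard's actual API; the churn denominator in particular is an assumption):

```typescript
// Sketch of how the derived revenue metrics relate to the base counts.
// All names here are illustrative assumptions, not the dashboard's API.
interface RevenueSnapshot {
  mrr: number;              // Monthly Recurring Revenue
  totalSubscribers: number; // active paying customers
  cancellations: number;    // subscribers who cancelled this period
}

// ARPU = MRR / total subscribers (guarding against an empty subscriber base).
function arpu(s: RevenueSnapshot): number {
  return s.totalSubscribers === 0 ? 0 : s.mrr / s.totalSubscribers;
}

// Churn rate = cancellations as a percentage of the subscriber base at the
// start of the period (assumed here to be current subscribers + cancellations).
function churnRate(s: RevenueSnapshot): number {
  const startOfPeriod = s.totalSubscribers + s.cancellations;
  return startOfPeriod === 0 ? 0 : (s.cancellations / startOfPeriod) * 100;
}

const snapshot: RevenueSnapshot = { mrr: 5000, totalSubscribers: 100, cancellations: 5 };
console.log(arpu(snapshot));      // 50
console.log(churnRate(snapshot)); // ≈ 4.76
```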

Recent Activity

View recent platform activity:

  • New signups - Users who created accounts today
  • Active companies - Companies with recent activity
  • Jobs created - Work scheduled in the last 24 hours
  • Invoices sent - Billing activity

Health Page

System Health Metrics

Monitor core system performance:

  • Database latency - How fast database queries respond
  • API latency - How fast API endpoints respond
  • Record counts - Total records by model (Companies, Users, Tickets, Jobs, Invoices, Leads)
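Latency metrics like these boil down to timing a probe operation. A hedged sketch of the idea (the dashboard's real probes, which query the database or hit API endpoints, are assumptions here):

```typescript
// Sketch of a generic latency probe: time an operation and report elapsed
// milliseconds. The actual database/API probes are assumptions, so a
// trivial in-memory operation stands in for a health check below.
function measureLatencyMs(probe: () => void): number {
  const start = Date.now();
  probe();
  return Date.now() - start;
}

const ms = measureLatencyMs(() => {
  JSON.parse(JSON.stringify({ healthy: true }));
});
console.log(`probe completed in ${ms} ms`);
```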

Data Integrity Checks

Automated checks for data quality:

  • Orphaned company members - CompanyMember records with no matching user
  • Orphaned invoices - Invoice records with no matching company
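An orphan check is a simple anti-join: find child records whose parent no longer exists. An in-memory sketch of the first check (field names are assumptions inferred from the model names above, not the platform's real schema):

```typescript
// In-memory sketch of the "orphaned company members" check: a
// CompanyMember is orphaned when its userId matches no existing user.
// Field names are assumptions inferred from the model names above.
interface CompanyMember {
  id: number;
  userId: number;
}

function findOrphanedMembers(
  members: CompanyMember[],
  existingUserIds: Set<number>
): CompanyMember[] {
  return members.filter((m) => !existingUserIds.has(m.userId));
}

const members: CompanyMember[] = [
  { id: 1, userId: 10 },
  { id: 2, userId: 99 }, // user 99 no longer exists -> orphaned
];
console.log(findOrphanedMembers(members, new Set([10, 11])));
```

In production this would typically run as a single LEFT JOIN query rather than in application memory; the logic is the same.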

Each check shows:

  • Count of affected records
  • Severity level (OK, Warning, Error)
  • Color coding (green = OK, yellow = Warning, red = Error)
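A plausible sketch of how an affected-record count maps to a severity level and its color (the thresholds here are assumptions; the dashboard's actual cutoffs may differ):

```typescript
// Hypothetical mapping from an integrity-check count to a severity level.
// The thresholds are assumptions; the dashboard's actual cutoffs may differ.
type Severity = "ok" | "warning" | "error";

function severityFor(affectedCount: number, errorAbove = 10): Severity {
  if (affectedCount > errorAbove) return "error";
  if (affectedCount > 0) return "warning";
  return "ok";
}

// Color coding as described above.
const severityColor: Record<Severity, string> = {
  ok: "green",
  warning: "yellow",
  error: "red",
};

console.log(severityFor(0), severityColor[severityFor(0)]);   // ok green
console.log(severityFor(3), severityColor[severityFor(3)]);   // warning yellow
console.log(severityFor(25), severityColor[severityFor(25)]); // error red
```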

Load Test Results

View the last 5 automated load test runs:

  • Run date - When the test was executed
  • Duration - How long the test took in seconds
  • Virtual users - Number of simulated concurrent users
  • Average response time - Mean response across all requests
  • P95 response time - 95th percentile response time
  • Error rate - Percentage of failed requests
  • Pass/Fail status - Whether the test met performance thresholds
  • Summary - AI-generated overview of results
  • Diagnosis - Analysis of any issues detected
  • Action required - Whether manual intervention is needed
  • Autofix applied - If AI attempted to fix detected issues
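The average and P95 figures above can be derived from the raw per-request response times. A sketch using the nearest-rank percentile method (the platform's actual aggregation may differ):

```typescript
// Nearest-rank percentile: the smallest sample value such that at least
// p% of all samples are at or below it. This is one common convention;
// the platform's actual P95 calculation is an assumption here.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

function average(samples: number[]): number {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

// One slow outlier (480 ms) barely moves the mean but dominates the P95.
const responseTimesMs = [120, 95, 110, 480, 105, 130, 100, 115, 90, 125];
console.log(average(responseTimesMs));        // 147
console.log(percentile(responseTimesMs, 95)); // 480
```

This is why the dashboard reports both: the mean hides tail latency that P95 exposes.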

Expanding Error Samples

Load test rows can be expanded to show sample failed requests:

  1. Click the expand arrow on any load test row
  2. View detailed error samples including:
    • Endpoint - Which API route failed
    • Status code - HTTP status returned
    • Request URL - Full URL that was called
    • Response body - Error message or response content
    • Duration - How long the request took in milliseconds
  3. Click the collapse arrow to hide details
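The fields listed in step 2 suggest a simple record shape, which makes quick triage easy, for example grouping failures by status code. A sketch (the interface mirrors the list above, but the real payload shape is an assumption):

```typescript
// Illustrative shape of an expanded error sample; fields mirror the list
// above, but the real payload shape is an assumption.
interface ErrorSample {
  endpoint: string;    // API route that failed
  statusCode: number;  // HTTP status returned
  requestUrl: string;  // full URL that was called
  responseBody: string;
  durationMs: number;
}

// Quick triage helper: count failures per HTTP status code.
function countByStatus(samples: ErrorSample[]): Map<number, number> {
  const counts = new Map<number, number>();
  for (const s of samples) {
    counts.set(s.statusCode, (counts.get(s.statusCode) ?? 0) + 1);
  }
  return counts;
}

// Hypothetical samples for illustration only.
const samples: ErrorSample[] = [
  { endpoint: "/api/jobs", statusCode: 500, requestUrl: "https://example.com/api/jobs", responseBody: "Internal Server Error", durationMs: 1203 },
  { endpoint: "/api/jobs", statusCode: 500, requestUrl: "https://example.com/api/jobs", responseBody: "Internal Server Error", durationMs: 987 },
  { endpoint: "/api/invoices", statusCode: 429, requestUrl: "https://example.com/api/invoices", responseBody: "Too Many Requests", durationMs: 45 },
];
console.log(countByStatus(samples)); // Map { 500 => 2, 429 => 1 }
```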

Autofix Commit Links

If a load test triggered an autofix:

  • GitHub commit link appears in the row
  • Click the link to view the autofix commit on GitHub
  • Review code changes made by Claude AI

Autofix Activity Feed

The Autofix Activity card shows a unified feed of every Claude autofix attempt across the platform:

What It Shows

A chronological list of all autofix runs with:

  • Source badge - Where the autofix was triggered:
    • Sentry (purple) - Production error from Sentry monitoring
    • Load Test (blue) - Issue detected during weekly load test
    • CI (gray) - Code quality issue caught in CI pipeline
  • Status badge - Outcome of the autofix:
    • Fixed (green) - Claude successfully created and pushed a fix
    • Skipped (yellow) - No fix needed or autofix opted to skip
    • Failed (red) - Autofix encountered an error
  • Title - Truncated description of what was fixed (click row for full details)
  • Commit link - If fixed, links to the GitHub commit with Claude's changes
  • Timestamp - When the autofix attempt occurred
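The badges above imply a small, closed vocabulary for each feed entry. A sketch of the entry shape and badge colors (names and example values are assumptions, not the platform's schema):

```typescript
// Illustrative type for an autofix feed entry, assembled from the fields
// described above; names are assumptions, not the platform's schema.
type AutofixSource = "sentry" | "load_test" | "ci";
type AutofixStatus = "fixed" | "skipped" | "failed";

interface AutofixRun {
  source: AutofixSource;
  status: AutofixStatus;
  title: string;
  commitUrl?: string; // present only when status is "fixed"
  createdAt: Date;
}

// Badge colors as described above.
const sourceColor: Record<AutofixSource, string> = {
  sentry: "purple",
  load_test: "blue",
  ci: "gray",
};
const statusColor: Record<AutofixStatus, string> = {
  fixed: "green",
  skipped: "yellow",
  failed: "red",
};

// Hypothetical entry for illustration only.
const run: AutofixRun = {
  source: "load_test",
  status: "fixed",
  title: "Fix slow query on health endpoint",
  commitUrl: "https://github.com/example/repo/commit/abc123",
  createdAt: new Date(),
};
console.log(`${sourceColor[run.source]} / ${statusColor[run.status]}`); // blue / green
```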

Why This Matters

Instead of digging through GitHub Actions logs to find autofix results, you see every attempt in one place:

  • Track autofix effectiveness - How often Claude successfully fixes issues
  • Monitor failure patterns - Which types of issues autofix struggles with
  • Audit AI changes - Click through to review every code change Claude makes
  • Catch regressions - See if autofixes introduce new issues

Recent System Health Logs

View the last 5 system health log entries:

  • Type - Category of log (error, warning, info)
  • Message - Description of the event
  • Source - Where the log originated (API route, background job, etc.)
  • Created at - Timestamp of the log entry

Using the Dashboard

Monitoring System Health

Check the health page regularly to:

  • Verify database and API performance is acceptable
  • Identify data integrity issues before they cause problems
  • Review load test results to catch performance regressions
  • Monitor autofix activity to ensure AI is catching and fixing issues
  • Investigate system logs for errors or warnings

Taking Action

When issues are detected:

  • Red integrity checks - Fix orphaned records using admin tools or SQL
  • High latency - Investigate slow queries or API endpoints
  • Failed load tests - Review diagnosis and error samples, check if autofix addressed it
  • Failed autofixes - Click the commit link to see what Claude attempted, check logs for errors
  • System errors in logs - Click through to error detail pages for full stack traces

Refreshing Data

  • Click the refresh icon in the top-right to reload all metrics
  • Data is cached for performance, so refresh to see latest values
  • Most metrics update in real-time as activity occurs

Tips

  • Check the dashboard daily to catch issues early
  • Review load test results after each Sunday run to monitor trends
  • Monitor autofix activity to ensure Claude is successfully fixing production issues
  • Set up alerts for critical metrics if your monitoring system supports it
  • Compare week-over-week to identify performance trends
  • Expand load test rows to diagnose failures - error samples show exactly what went wrong
  • Click autofix commits to review Claude's code changes before they're deployed

Questions

Q: What do the different severity levels mean? A: OK (green) means no issues were detected, Warning (yellow) means attention may be needed, and Error (red) means immediate action is required.

Q: How often is data refreshed? A: Most metrics update in real-time. Load tests run weekly on Sundays. Click the refresh icon to reload cached data.

Q: What triggers an autofix? A: Autofixes are triggered by Sentry production errors, failed load test requests, or code quality issues detected in the CI pipeline. Each source has its own GitHub Actions workflow that calls Claude AI.

Q: Can I see full details of error samples? A: Yes - click the expand arrow on any load test row to see endpoint, status code, URL, response body, and duration for each failed request.

Q: Where do autofix commits go? A: Autofix commits are pushed directly to the main branch of the GitHub repository and deployed automatically if tests pass.

Q: What if an autofix fails? A: Failed autofixes are logged in the activity feed with a red "failed" badge. Click through to GitHub Actions logs to see the error, or check the error summary in the autofix run details.
