How We Calculate Status

A plain‑language walkthrough of how Is AI Down translates real user reports into clear, trustworthy AI platform status.

Methodology Overview

At a high level, Is AI Down watches how often people report problems for each AI platform, then looks at how those reports cluster over time. When reports spike in the same time window, we interpret that as a strong signal that something is wrong on the provider side.

User Reports → Time Windows → Status Engine

  • User Reports: community input, live signals
  • Time Windows: spike detection across 5 min / 1 h / 24 h
  • Status Engine: status plus confidence, human-readable output
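A minimal sketch of the first two stages, bucketing reports into those time windows. The `Report` class, its field names, and the exact window set are illustrative, not the production schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Report:
    platform: str
    category: str          # e.g. "timeout", "bad_response", "auth_failure"
    created_at: datetime

# Windows matching the 5 min / 1 h / 24 h buckets described above.
WINDOWS = {"5m": timedelta(minutes=5), "1h": timedelta(hours=1), "24h": timedelta(hours=24)}

def window_counts(reports, now=None):
    """Count one platform's reports falling inside each time window."""
    now = now or datetime.now(timezone.utc)
    return {
        name: sum(1 for r in reports if now - r.created_at <= span)
        for name, span in WINDOWS.items()
    }
```

The status engine then works from these per-window counts rather than from individual reports.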

What Data We Use

User Reports

Community-submitted reports about slow, failing, or unusual AI responses.

  • Structured reports with relevant technical details
  • Issue categories tailored to common AI problems (timeouts, bad responses, auth failures, etc.)
  • Spam and abuse protection on submissions
  • Verification measures to filter out low-quality reports
  • Real-time community feedback
  • Severity assessment for each report

User Reports Analysis

Structured analysis of community-submitted reports over multiple time windows.

  • Time-based spike detection (5 minutes, 1 hour, 24 hours)
  • Report volume analysis
  • Pattern recognition for mass outages
  • Category-based issue tracking
  • Geographic and platform breakdown
  • Real-time report processing
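Category-based tracking can be as simple as checking whether one issue category dominates recent reports, which hints that they describe the same underlying incident. Both thresholds here are illustrative placeholders:

```python
from collections import Counter

def dominant_category(categories, min_share=0.6, min_reports=5):
    """Return the dominant issue category among recent reports, or None.

    A category only counts as dominant when there are enough reports to
    matter (min_reports) and it covers most of them (min_share)."""
    if len(categories) < min_reports:
        return None
    category, count = Counter(categories).most_common(1)[0]
    return category if count / len(categories) >= min_share else None
```

A burst of reports that all say "timeout" reads very differently from the same volume spread evenly across categories.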

Advanced Analytics

Intelligent analysis of multiple data sources.

  • Multi-source data correlation
  • Pattern recognition and trend analysis
  • Automated status determination
  • Confidence scoring algorithms
  • Historical data comparison
  • Predictive status modeling

Status Levels

Likely Operational

Systems appear to be functioning normally with minimal reported issues.

When we use this label:

  • Minimal or no user reports detected
  • Low report volume across all time windows
  • No spike patterns in recent reports
  • Consistent low activity indicates normal operation
  • High confidence from comprehensive report analysis

Possible Issues

Moderate user-reported issues detected.

When we use this label:

  • Moderate community-reported issues
  • Multiple reports indicating potential problems
  • Medium confidence from user report analysis

Potential Problems

A small number of user reports whose pattern suggests emerging issues.

When we use this label:

  • Limited community reports detected
  • Report activity elevated relative to the platform's usual baseline
  • Medium confidence pending more data

Likely Down

Strong evidence of server outage with high user impact.

When we use this label:

  • Mass user reports detected in short timeframe
  • Significant community-reported issues
  • High report volume indicating widespread problems
  • High confidence from user report analysis

Monitoring Reports

User reports present but pattern analysis indicates isolated issues.

When we use this label:

  • A noticeable volume of community reports detected
  • Pattern analysis shows isolated or localized problems
  • Medium confidence pending more data
  • Continuous monitoring of situation
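As an illustration only, the five labels above could be derived from windowed report counts roughly like this. Every threshold is a placeholder, not the real model, and the real engine weighs more signals than two counts and a flag:

```python
def status_label(count_5m, count_1h, isolated=False):
    """Map recent report counts to a status label (illustrative thresholds).

    `isolated` stands in for the pattern analysis that distinguishes a
    localized problem from a platform-wide one."""
    if count_5m >= 20:
        return "Likely Down"          # mass reports in a short timeframe
    if count_1h >= 10:
        return "Monitoring Reports" if isolated else "Possible Issues"
    if count_1h >= 3:
        return "Potential Problems"   # limited but elevated activity
    return "Likely Operational"
```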

Confidence Levels

High Confidence

Strong report patterns confirm the status with consistent data.

  • High report volume with consistent timing and categories
  • Community feedback supports reported issues
  • Impact analysis shows widespread or localized effects
  • Time-based analysis shows clear patterns across multiple windows

Medium Confidence

Report patterns show some consistency but with minor discrepancies.

  • User reports present but pattern analysis shows mixed signals
  • Community feedback indicates issues but scattered reports
  • Limited impact observed
  • Time-based analysis shows inconsistent patterns

Low Confidence

Conflicting signals or insufficient data.

  • Very few user reports available
  • Pattern analysis shows conflicting signals
  • Community feedback is minimal or contradictory
  • Time-based analysis shows inconsistent patterns across windows
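One way to think about these levels: confidence is high when all time windows tell the same story (all quiet, or all elevated), and drops as the windows disagree. A sketch with made-up per-window thresholds:

```python
def confidence(counts_5m_1h_24h):
    """Rough confidence from agreement across the three time windows.

    Takes (5m, 1h, 24h) report counts; the per-window thresholds below
    are illustrative, not the production values."""
    elevated = [count >= threshold
                for count, threshold in zip(counts_5m_1h_24h, (5, 15, 50))]
    if all(elevated) or not any(elevated):
        return "high"    # every window agrees
    return "medium" if sum(elevated) >= 2 else "low"
```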

Under the Hood

Intelligent Calculation

  • User reports grouped into short, medium, and long time windows.
  • We look for sudden spikes compared to a rolling baseline.
  • We treat a few isolated reports very differently from a global wave.
  • Each check outputs both a status and a confidence score.
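The rolling-baseline comparison can be sketched as follows; the multiplier and absolute floor are illustrative values, chosen so that one or two stray reports on a quiet platform never register as a spike:

```python
def is_spike(current, history, factor=3.0, floor=5):
    """Compare the current window's report count to a rolling baseline.

    `history` holds recent per-window counts. A spike must beat both
    `factor` times the average baseline and an absolute `floor`."""
    baseline = sum(history) / len(history) if history else 0.0
    return current >= max(factor * baseline, floor)
```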

Real-Time Monitoring

  • New reports are processed continuously.
  • Status is recalculated frequently, not just on a fixed cron.
  • Dashboards update via WebSocket so you see changes as they happen.
  • We favor stability over jumping between states on tiny fluctuations.

How Often We Recalculate

Realtime-ish, Not Noisy

  • New reports are ingested immediately.
  • Status checks run frequently, with extra checks when spikes appear.
  • We smooth very short bursts so the status doesn't flicker.
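That smoothing behaves like simple hysteresis: a new status has to be observed several evaluations in a row before it is published. The class and the streak length here are illustrative:

```python
class SmoothedStatus:
    """Publish a new status only after it persists for `required`
    consecutive evaluations, so a one-off burst doesn't flip the page."""

    def __init__(self, initial="Likely Operational", required=3):
        self.current = initial
        self.required = required
        self._candidate = None
        self._streak = 0

    def update(self, observed):
        if observed == self.current:
            self._candidate, self._streak = None, 0   # back to steady state
        elif observed == self._candidate:
            self._streak += 1
            if self._streak >= self.required:
                self.current = observed               # change has persisted
                self._candidate, self._streak = None, 0
        else:
            self._candidate, self._streak = observed, 1
        return self.current
```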

Data Retention

  • We keep enough history to show daily/weekly patterns and incident history.
  • Raw reports are handled according to our privacy policy and may be aggregated over time.

Have Questions?

Transparency is a core feature of Is AI Down. If something about a status doesn't make sense, or you have ideas to improve the model, please reach out and we can iterate together.

Contact Us