
How to Measure If Your Status Page Is Actually Working

status pages · incident communication · metrics · customer trust

You stood up a status page. You added components. You post updates during outages. But there's a quiet question most teams never answer: is it actually working?

Not "is it online" — you'd know if your status page itself were down. The harder question is whether it's doing the job you built it for: keeping customers informed, deflecting support tickets, and building trust over time. Most teams have no idea, because they never set up a way to tell.

A status page isn't like a monitoring dashboard. Monitoring gives you clear signals — error rates, latency, queue depth. A status page's output is softer: fewer confused customers, calmer support inboxes, a track record that reads well six months later. Those are real outcomes, but they don't generate graphs on their own.

Here are the signals that actually tell you whether your status page is pulling its weight — and how to track them without building a metrics pipeline.

Signal 1: "Is it down?" ticket volume

This is the simplest and most honest metric. During your next outage, count the support tickets that ask some version of "is the service down?" or "are you aware of an issue?" Then divide by the total tickets during that window.

If the number is high — say, more than 30% of tickets are just people asking if you know — your status page is either not discoverable or not being updated fast enough. People are reaching out because they can't find the answer themselves.

If the number is low — under 10% — your status page is catching them before they reach for the support form. They checked, saw you were on it, and closed the tab.

The number won't reach zero, and that's fine. Some people will always email first. But the ratio should trend down over time as you improve discoverability and speed. Track it per-incident. You don't need a dashboard — a note in your incident doc or a tag in your support tool is enough.
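
If your support tool can tag tickets and export them, the ratio is one division. A minimal sketch in Python (the Ticket shape and the "is-it-down" tag are placeholders for whatever your tool actually exports):

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    tags: set[str]  # tags applied in your support tool

def is_it_down_ratio(tickets: list[Ticket], tag: str = "is-it-down") -> float:
    """Share of tickets in an incident window that just ask whether you're down."""
    if not tickets:
        return 0.0
    asking = sum(1 for t in tickets if tag in t.tags)
    return asking / len(tickets)

# Example: 4 of 11 tickets during the outage were "are you down?" checks
window = [Ticket("site down?", {"is-it-down"})] * 4 + [Ticket("billing bug", set())] * 7
print(f"{is_it_down_ratio(window):.0%}")  # 36% -- above the 30% warning line
```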

If you're not sure where to start, the next outage is your baseline. Count the "is it down" tickets. That's the number you're trying to shrink.

Signal 2: Time to first update

This isn't how fast you fix the problem. It's how fast you tell customers you know about it.

The gap between the moment customers start experiencing a problem and the moment you post "we're aware of an issue" is the most under-measured number in incident communication. Every minute of that gap is a minute customers spent wondering if anyone was paying attention.

A useful target for small teams: under five minutes from detection to a status page update. Not five minutes from root cause. Five minutes from the moment someone on your team confirms "something is definitely wrong."
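
Two timestamps per incident are enough to track this: when someone confirmed the problem, and when the first update went live. A sketch, assuming you log both in your incident doc (the timestamps below are invented):

```python
from datetime import datetime

def minutes_to_first_update(confirmed_at: str, posted_at: str) -> float:
    """Minutes between 'something is definitely wrong' and the first status post."""
    gap = datetime.fromisoformat(posted_at) - datetime.fromisoformat(confirmed_at)
    return gap.total_seconds() / 60

# One row per incident, pulled from your incident doc
incidents = [
    ("2024-05-02T09:14:00+00:00", "2024-05-02T09:18:00+00:00"),
    ("2024-05-20T16:01:00+00:00", "2024-05-20T16:43:00+00:00"),
]

for confirmed, posted in incidents:
    gap = minutes_to_first_update(confirmed, posted)
    print(f"{gap:5.1f} min  {'ok' if gap <= 5 else 'over the 5-minute target'}")
```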

If you're consistently hitting 10, 20, or 30 minutes — or worse, if there's no pattern because sometimes it's three minutes and sometimes it's an hour — the problem isn't speed. It's process. Someone doesn't know the status page is their job, or your alerting doesn't route to the person who can post, or the context switch from debugging to writing is too painful. Whatever the cause, the fix is cheaper than the trust you're burning in those silent windows.

Signal 3: Status page traffic during incidents

When something breaks, do people show up?

This one is easy to check if your status page has any kind of analytics — even a basic pageview counter. Look at traffic during your last three incidents. If traffic spikes when something goes wrong, your status page is discoverable and customers have learned to check it. If traffic stays flat during an outage, customers don't know it exists or don't trust it to tell them anything useful.

The shape of the spike is informative too. A sharp spike at the start of the incident that tapers off quickly means people checked, got what they needed, and left. A spike that sustains throughout the incident suggests people are refreshing because they don't trust the update cadence — they're waiting for news rather than subscribing and walking away.

Neither pattern is inherently bad, but they tell you different things about how to improve. A sustained spike says "add subscriptions and make them more visible." A quick spike says "discoverability is working, focus on update quality."
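
If your analytics can export pageviews in fixed intervals across an incident window, a rough heuristic can even classify the shape for you. A sketch with made-up thresholds; tune them against your own incidents:

```python
def spike_shape(pageviews: list[int]) -> str:
    """Crude classifier for incident-window traffic: compare the first third
    of the window to the rest. Input is pageviews per interval (say, 5 minutes)."""
    third = max(1, len(pageviews) // 3)
    early = sum(pageviews[:third]) / third
    late = sum(pageviews[third:]) / max(1, len(pageviews) - third)
    if early < 5:           # arbitrary floor: nobody showed up
        return "flat -- discoverability problem"
    if late < early * 0.4:  # arbitrary ratio: traffic fell off fast
        return "sharp spike -- focus on update quality"
    return "sustained -- make subscriptions more visible"

print(spike_shape([120, 95, 60, 22, 15, 10]))  # sharp spike
print(spike_shape([80, 85, 90, 88, 92, 95]))   # sustained
```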

Signal 4: Subscriber growth and churn

Subscribers are people who have explicitly opted in to hear from you during incidents. Every subscriber is likely one fewer support ticket during the next outage, because they'll get notified instead of having to check.

Track two numbers:

  • Net subscriber growth over time. Is the number going up? It should be. Not explosively — subscribers grow slowly as customers discover the option — but steadily.
  • Unsubscribe spikes around incidents. If you see a bump in unsubscribes after a specific outage, something about that incident's communication turned people off. Maybe you sent too many emails. Maybe the updates were frustratingly vague. The unsubscribes are telling you something — don't ignore them.

A flat subscriber count on a growing product means you're not making the subscription option visible enough. Add it to your onboarding emails. Mention it in support replies. Include it in your error pages. Subscribers are the compounding asset of status page communication — every one you add reduces future support load.
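
Both numbers fall out of a per-day export of subscribe and unsubscribe counts. Here's a sketch of the spike check; the three-day window and the 3x multiplier are arbitrary starting points, not rules:

```python
from datetime import date, timedelta

def unsubscribe_spike(unsubs_by_day: dict[date, int], incident_day: date,
                      window_days: int = 3, multiplier: float = 3.0) -> bool:
    """True if post-incident unsubscribes run well above the usual daily rate."""
    baseline_days = [d for d in unsubs_by_day if d < incident_day]
    if not baseline_days:
        return False
    baseline = sum(unsubs_by_day[d] for d in baseline_days) / len(baseline_days)
    window = [incident_day + timedelta(days=i) for i in range(window_days)]
    after = sum(unsubs_by_day.get(d, 0) for d in window) / window_days
    return after > baseline * multiplier

history = {date(2024, 5, d): 1 for d in range(1, 20)}  # ~1 unsubscribe/day
history[date(2024, 5, 20)] = 9                         # the outage
history[date(2024, 5, 21)] = 6
print(unsubscribe_spike(history, date(2024, 5, 20)))   # True -- dig into why
```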

Signal 5: Resolution update completeness

This one is qualitative, but it matters. Go back through your last five resolved incidents and read only the resolution updates. Can a customer who skipped the whole incident understand what happened, how long it lasted, and that you've taken it seriously?

If the answer is "no" for most of them — if your resolution updates are variations of "the issue has been resolved, thanks for your patience" — you're closing incidents without telling the story. The resolution update is the one customers remember. It's also the one that shows up in your incident history for months afterward, read by prospects evaluating whether to trust you.

A simple check: does each resolved incident include what broke, how long it lasted, and what changed to prevent recurrence? If not, the status page is doing half its job. It's informing customers in the moment but failing to build the track record that pays off over time.
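
If you want to make that review slightly more mechanical, a keyword pass can flag obviously thin resolution notes. A rough sketch: the cue lists are invented and will need tuning, and a hit is a prompt to read the update, not a verdict:

```python
REQUIRED = {
    "what broke": ["root cause", "caused by", "due to", "failed"],
    "duration":   ["minutes", "hours", "lasted", "until"],
    "prevention": ["prevent", "added", "fixed", "monitoring", "alert"],
}

def missing_elements(resolution_text: str) -> list[str]:
    """Return which of the three elements a resolution update appears to skip."""
    text = resolution_text.lower()
    return [name for name, cues in REQUIRED.items()
            if not any(cue in text for cue in cues)]

update = "The issue has been resolved. Thanks for your patience."
print(missing_elements(update))  # all three -- this update tells no story
```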

What you don't need to measure

Some things aren't worth tracking. Uptime percentage, for example, measures your service — not your communication. A status page at 99.9% uptime can still be terrible at its job during the 0.1%. The two numbers are related but distinct, and conflating them makes you think you're doing well when you're not.

You also don't need a dashboard for any of this. Most of these signals can be tracked with a note in your incident doc or a quick check every few months. The goal isn't a monitoring system for your monitoring system — it's a feedback loop that tells you whether the thing you built is doing what you built it for.

Where to start

Pick one signal. The easiest to start with is "is it down?" ticket volume — you already have the data in your support tool, and the number either moves in the right direction or it doesn't. Track it for your next three incidents. If it's dropping, your status page is working. If it isn't, you know where to focus.

The rest can wait. But pick something. A status page with no feedback loop is a guess. A status page with even one signal you're watching is a tool you can actually improve.


PageCalm helps small teams run status pages with AI-powered incident updates that sound human and ship fast. Subscriber notifications, Slack integration, and uptime history included. Try it free — no credit card required.

