
How to Communicate to Customers During a DDoS Attack: The Playbook

Customer communication dashboard during a DDoS attack showing status page updates and incident timeline
By: Abdulkader Safi
Software Engineer at DSRPT
9 min read

TL;DR

A DDoS attack isn't a data breach — nothing gets stolen, your site just goes dark. But how you communicate during those 40 minutes decides whether it's a blip or a three-week reputation hit. The playbook: break silence after 10-15 min of degradation, post a holding statement that acknowledges the issue without naming DDoS/vendors/ETAs, update every 30-60 min even if nothing's changed, brief your internal team on a separate channel, wait 60+ min of clean traffic before the all-clear, and host your status page off your main infrastructure. Prep the templates, owners, and tabletop exercises before you ever need them.

Your site is crawling. Or dead. Support inbox is exploding. Someone in the group chat just asked, "are we being hacked?"

Welcome to the first thirty minutes of a DDoS attack — the part nobody trains you for.

Here's the thing about DDoS attacks: the technical side usually gets handled by your hosting provider, a CDN, or a specialist mitigation service within the first hour. Most of it is automated now. What isn't automated — and what will actually determine how much trust you keep or lose — is how you talk to your customers while it's happening.

I've watched businesses turn a 40-minute DDoS into a three-week reputation problem because someone either went silent or posted something they shouldn't have. This playbook is the communication side of incident response. Save it. You'll need it.

Why DDoS Communication Is Different From Breach Communication

Let's clear this up first, because it matters for tone.

A DDoS attack floods your infrastructure with junk traffic so legitimate users can't get through. Nothing is stolen. No customer data touches the attacker. Your service is just… unavailable.

A data breach means attackers got inside and took something.

Two completely different incidents. Two completely different communication strategies. Two completely different legal obligations.

The good news: DDoS is the easier one to talk about. You don't have regulators breathing down your neck about disclosure windows. You don't have to explain what PII was exposed. You just have to be honest that the service is degraded and you're on it.

The bad news: because it's "just" an outage, a lot of teams either underplay it ("everything's fine!") or overplay it ("we've been attacked!"). Both kill trust. The sweet spot is boring, specific, and frequent.

If your security posture is generally shaky, brush up on the fundamentals first — we covered the basics in Zero Trust Architecture Explained for Non-Technical Business Owners. It's the other half of this conversation.

When to Break the Silence (Your Trigger Thresholds)

The mistake I see most often: teams wait too long to say something. They're hoping it resolves in five minutes so they never have to explain anything. Meanwhile, support tickets pile up, Twitter starts cooking, and the silence becomes the story.

Here are the thresholds that should trigger communication. Any one of them — not all of them (a monitoring sketch follows below):

  • Users see it. If even a sample of customers is hitting errors, slowness, or timeouts, the clock starts.
  • 10-15 minutes of sustained degradation. If it's not fixed by then, assume it's not getting fixed for a while.
  • Support ticket volume triples. Even if engineering swears it's isolated.
  • Someone external mentions it. Downdetector, a customer tweet, media pickup, an attacker bragging — once it's public, your silence becomes complicit.

On the last point: I had a client once where the attacker emailed their customers directly with a ransom note. They were still "monitoring internally" when their own users were forwarding the ransom email to them asking if it was real. Don't be that team.
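If you want to make the "any one of them" rule mechanical, you can encode the triggers directly in your monitoring. A minimal sketch in Python, assuming hypothetical signal names and your own data sources for tickets and external mentions:

from dataclasses import dataclass

@dataclass
class IncidentSignals:
    users_reporting_errors: bool  # any sample of customers hitting errors/timeouts
    degraded_minutes: int         # minutes of sustained degradation
    ticket_count: int             # support tickets in the current window
    ticket_baseline: int          # normal ticket volume for the same window
    external_mention: bool        # Downdetector, a tweet, press, attacker noise

def should_break_silence(s: IncidentSignals) -> bool:
    """Any one trigger is enough -- don't wait for all of them."""
    return (
        s.users_reporting_errors
        or s.degraded_minutes >= 10
        or (s.ticket_baseline > 0 and s.ticket_count >= 3 * s.ticket_baseline)
        or s.external_mention
    )

Wire the output to a page or a Slack alert, not to an automatic public post — a human still writes the message.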

The First Message: Your Holding Statement

The first message isn't supposed to explain anything. It's supposed to stop the bleeding — cut support volume, reassure people you're aware, and buy you time to actually do your job.

Here's a working template you can adapt in 30 seconds:

We're currently investigating an issue affecting access to [product/service].
Some users may experience slow loading or timeouts.

No customer data has been affected — this is a service availability issue.

Our team is actively working on resolution. We'll post another update
at [TIME] local. Thanks for your patience.

Four things this does right:

  1. Acknowledges the user's reality. "You're not crazy, yes it's slow."
  2. Preemptively kills the 'are we hacked?' panic by explicitly saying data is fine.
  3. Commits to a next update time — specific, not vague.
  4. Doesn't overpromise resolution. You said you'd update, not fix.

What not to say in the first message (a quick lint sketch follows this list):

  • The word "DDoS" — you don't know yet. Seriously. I've seen teams label outages as DDoS that turned out to be a misconfigured load balancer. Looks bad.
  • Any mitigation vendor names.
  • Specific fix ETAs. "Up in 15 minutes" is how you lose people when minute 16 arrives.
  • Anything that reveals attack vectors — the attacker is reading your status page too.
  • Blame. Not the upstream provider. Not the users. Not "unprecedented traffic."
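One way to enforce this under pressure: keep the banned phrases in a tiny lint script and run every draft through it before posting. A minimal sketch, with placeholder patterns you'd swap for your own list:

import re

# Example ban list -- replace with your own "do not send" phrases.
BANNED_PATTERNS = [
    r"\bddos\b",          # don't name the attack before it's confirmed
    r"\bup in \d+ ?min",  # specific fix ETAs
    r"\b\d+ ?[gt]bps\b",  # traffic volume numbers
    r"\bunprecedented\b", # emotional language
]

def lint_draft(draft: str) -> list[str]:
    """Return every banned pattern the draft trips; an empty list means it's clean."""
    return [p for p in BANNED_PATTERNS if re.search(p, draft, re.IGNORECASE)]

Thirty seconds to write, and it catches the mistakes people make at 2am.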

The Follow-Up Cadence That Keeps People Calm

Every 30 to 60 minutes. Write that down.

Even if there's nothing new. A "still mitigating, next update at 14:30" post is a thousand times better than silence. Silence is read as incompetence, a cover-up, or worse — that you've given up and gone home.

Template:

Update [TIME]: Our team has confirmed this is caused by abnormal inbound traffic
impacting [service]. Mitigation is in progress.

Some users may still see intermittent slowness. We'll post the next update
at [TIME + 45min], or sooner if resolved.

Notice what's not there:

  • Traffic volume numbers ("we're seeing 400Gbps") — this is a trophy for the attacker, not news for your customers.
  • Your mitigation provider's name — again, attacker info.
  • Internal blame or finger-pointing.
  • Emotional language. "Horrific," "massive," "unprecedented" — all fuel for headlines you don't want.

If the attack stretches past two hours, tighten your cadence, don't loosen it. Hour three is when customers start deciding whether they'll churn. Be present.
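If it helps to make the cadence explicit, here's this section's rule of thumb as a tiny function (the numbers are this article's defaults, not gospel):

def next_update_minutes(elapsed_minutes: int, status: str = "active") -> int:
    """How long until the next public update, per the cadence above."""
    if status == "monitoring":
        return 90             # you can stretch once you're in monitoring mode
    if elapsed_minutes > 120:
        return 30             # past hour two, tighten: customers are deciding whether to churn
    return 45                 # default: every 30-60 minutes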

Brief Your Own Team First — Here's How

Your support team, your sales team, your execs, and your social media person all need the same story. Every hour. Not the customers' story — the internal story, with enough context to answer questions confidently.

The playbook I give clients is a single Slack channel — call it #incident-active or similar — with these fixed posts every update cycle:

  1. Status: active / monitoring / resolved
  2. What customers see: (one sentence)
  3. What we're telling them: (link to current public post)
  4. What NOT to say publicly: (the technical details, vendors, volumes)
  5. Next internal update: (timestamp)

A support person should be able to open Slack, read the last message, and be current in 30 seconds. If they have to ask questions during the incident, your comms are broken.

Your sales team especially needs to know. There's nothing worse than a sales call going sideways because your AE didn't know the product was down.
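If your team lives in Slack, the five fixed posts are easy to automate with an incoming webhook. A minimal sketch, assuming you've already created a webhook for #incident-active (the URL below is a placeholder):

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."  # placeholder: your webhook

def post_internal_update(status: str, customers_see: str, public_link: str,
                         do_not_say: str, next_update: str) -> None:
    """Post the five fixed fields so anyone can get current in 30 seconds."""
    text = (
        f"*Status:* {status}\n"
        f"*What customers see:* {customers_see}\n"
        f"*What we're telling them:* {public_link}\n"
        f"*What NOT to say publicly:* {do_not_say}\n"
        f"*Next internal update:* {next_update}"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)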

The All-Clear Message (And Why You Shouldn't Rush It)

Here's where teams mess up. Traffic stabilizes. Someone cheers. The "we're back!" tweet goes out.

Twelve minutes later, the attack resumes. Your customers now think you're lying.

Wait 60 minutes minimum of clean, stable traffic before declaring resolution. I've seen 30-minute lulls. I've seen attackers who come back specifically when they see your all-clear post, because they know you'll look worse the second time.

The phased approach:

  • Monitoring phase (0-60 min after traffic normalizes): "Services appear to be recovering. We're continuing to monitor closely. Next update at [TIME]."
  • Resolution phase (60+ min clean): "We've confirmed service is fully restored. We're continuing to monitor. A full incident summary will be published within 48 hours."
  • Post-mortem (24-72 hours later): Written up, published publicly. What happened (at a high level), how long, what you're doing to improve. No scapegoating.

The post-mortem is where you win back trust. Short, honest, specific. Customers remember the write-up more than the outage.
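The 60-minute rule is also easy to enforce in code, so nobody fires the all-clear on adrenaline. A minimal sketch, assuming you track the timestamp of the last anomalous traffic (as a timezone-aware datetime):

from datetime import datetime, timedelta, timezone

CLEAN_WINDOW = timedelta(minutes=60)  # minimum clean traffic before "resolved"

def may_declare_resolved(last_anomaly_at: datetime) -> bool:
    """True only after 60+ minutes of clean, stable traffic."""
    return datetime.now(timezone.utc) - last_anomaly_at >= CLEAN_WINDOW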

Where to Post When Your Site Is Down

Your status page cannot live on the servers getting attacked. This is the rule.

Channels to have ready before anything ever happens:

  • Third-party hosted status page — Statuspage (Atlassian), Instatus, or Better Stack (formerly Better Uptime). None of these run on your infra. That's the whole point.
  • X/Twitter — still the fastest real-time comms channel for outages, like it or not.
  • LinkedIn — for B2B especially, execs check LinkedIn during incidents more than Twitter now.
  • Direct email — if you have a paid customer base or SLAs, send it. Don't assume they're watching your status page.
  • In-product banner — only works if part of the product is still serving, which it often is during a DDoS.
  • Support auto-responder update — change it immediately so incoming tickets get context before a human replies.

If you're running on shared hosting or a single VPS, your communication options shrink fast when things go south. This ties directly into infrastructure choices — how we pick the right hosting platform for each project affects not just uptime but your ability to talk during downtime.

What to Prepare Before It Ever Happens

Everything above is ten times harder to execute mid-attack if you haven't set it up in advance. The teams that handle DDoS well are the teams that rehearsed.

Prep checklist:

  1. Templates written and approved — holding statement, update, monitoring, resolution, post-mortem. Legal signs off once, you use them forever (see the sketch after this list).
  2. Status page live and tested — pointing to the real service with real monitors. Not a dormant account.
  3. Access list defined — who can post to the status page, who can tweet from the company account, who signs off on the post-mortem. Names, not roles.
  4. Escalation tree — who gets woken up at 3am, in what order.
  5. Tabletop exercise every 6 months — fake DDoS on a Tuesday afternoon, run the whole comms flow, time it, find the gaps.
  6. An "incident captain" role — one person whose only job during an incident is to run comms. Not engineering. Not leadership. A dedicated seat.
  7. A "do not send" list — phrases, claims, and numbers that are banned from public comms. Print it. Pin it.
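A simple way to store the approved templates is as fill-in-the-blank strings in version control, so nobody is prose-writing mid-incident. A minimal sketch using the holding statement from earlier (field names are illustrative; add the other four templates the same way):

from string import Template

# Approved once by legal/PR, reused forever.
HOLDING = Template(
    "We're currently investigating an issue affecting access to $service. "
    "Some users may experience slow loading or timeouts. "
    "No customer data has been affected -- this is a service availability issue. "
    "Our team is actively working on resolution. "
    "We'll post another update at $next_update local."
)

message = HOLDING.substitute(service="the dashboard", next_update="14:30")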

The thing nobody wants to hear: most teams skip this prep because DDoS feels unlikely. It used to be. It isn't anymore. Automated attack tools are cheap, tsunami-scale attacks are getting more common, and "we're too small to be a target" stopped being true years ago.

The edge computing and infrastructure decisions you make today heavily influence both your attack surface and your recovery speed. Worth thinking about before you need to.

What to Do Right Now

If you haven't done these three things, do them this week — not next quarter:

  1. Set up a third-party status page and point it at your critical endpoints. Instatus has a free tier. Statuspage integrates with PagerDuty. Pick one today.
  2. Write your five templates (holding, update, monitoring, resolution, post-mortem) and drop them in a shared doc. Draft them when you're calm, not when the site's on fire.
  3. Assign the incident captain role now — with a backup. Not at 2am when the attack is hitting.

Crisis comms isn't a content problem. It's an operations problem dressed up as a content problem. Solve it like ops — with runbooks, rehearsals, and named owners.

If you want a hand setting up the infrastructure and communication layer for your business before something goes sideways, that's literally what we do at DSRPT. Better to build the plan on a Tuesday than write it at 2am.

Frequently Asked Questions

Should we tell customers a DDoS attack is happening?

Yes — but only after 10-15 minutes of persistent degradation. A DDoS attack blocks access to your services, it doesn't steal data, so there's no legal reason to hide it. Acknowledging the incident reduces support tickets, builds trust, and prevents the rumor mill from filling the silence with worse theories. You don't need to name the attack vector or your mitigation vendor — just confirm there's an incident and you're on it.

How often should we send updates during a DDoS attack?

Every 30 to 60 minutes, minimum — even if nothing has changed. Silence is read as incompetence or worse, a cover-up. A "still mitigating, next update at X time" message is better than going dark for two hours. Once you declare monitoring mode, you can stretch updates to 90 minutes. Only declare full resolution after 60+ minutes of stable, clean traffic.

Where should we post updates if the main site is down?

Use channels that don't share infrastructure with the site under attack. That usually means: a third-party hosted status page (Statuspage, Instatus, Better Stack), your X/Twitter account, LinkedIn, and direct email to affected customers. Never host your status page on the same servers as your main app — if the app dies, your way to communicate dies with it. This is the single most common mistake teams make.

What should we avoid saying during a DDoS attack?

Don't blame users. Don't promise a specific resolution time ("up in 30 minutes") — attacks rarely follow your schedule. Don't name your mitigation vendor or reveal technical details that help the attacker. Don't downplay the impact if users are clearly affected. And don't stay silent — even "we're still investigating, next update in 45 minutes" beats nothing.

Do we need to prepare DDoS communication templates in advance?

Yes — writing crisis messages while the crisis is happening is how you end up with typos, tone problems, and legal issues. Draft templates for: the initial holding statement, ongoing updates, monitoring, resolution, and the post-mortem. Get them approved by legal and PR once, store them in a shared doc with fill-in-the-blank fields, and run a tabletop exercise twice a year so the team knows where to find them.
