It's Friday evening. A security researcher emails you: "I found your database backup in a public S3 bucket." Or worse: your monitoring alerts that someone is exfiltrating user data right now.
What you do in the next 24 hours will determine whether this is a contained incident or a company-ending disaster. Most startups don't have a plan. Here's yours.
The First 60 Minutes
Minutes 0-5: Confirm the incident
Don't panic. Verify the report. Check logs. Confirm this is a real security incident, not a false alarm or an overzealous scanner.
Minutes 5-15: Activate the response team
Call your incident response lead (usually the CTO at a startup). Get your senior engineer and a communications person on a call. Keep the incident out of your regular Slack channels, since the attacker may be reading them; use a locked-down private channel or the phone.
Minutes 15-30: Contain the threat
Stop the bleeding. Revoke compromised credentials. Block the attacker's IP/account. Take the affected service offline if necessary. Containment first, investigation second.
Minutes 30-60: Preserve evidence
Before you fix anything, preserve logs, snapshots, and forensic data. You'll need it for investigation, legal, and potentially law enforcement. Don't reboot servers or redeploy — you'll destroy evidence.
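One low-tech way to make preserved evidence defensible is to copy artifacts into a dedicated directory and record cryptographic hashes at collection time, so later tampering is detectable. A minimal Python sketch; the file names and manifest layout are illustrative, not a standard:

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(sources: list[Path], evidence_dir: Path) -> Path:
    """Copy files into an evidence directory and record SHA-256 hashes
    plus collection timestamps in a manifest (basic chain of custody)."""
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in sources:
        dest = evidence_dir / src.name
        shutil.copy2(src, dest)  # copy2 preserves file timestamps
        digest = hashlib.sha256(dest.read_bytes()).hexdigest()
        manifest.append({
            "file": src.name,
            "sha256": digest,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    manifest_path = evidence_dir / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest_path
```

Run it against logs, shell histories, and config files before anyone touches the affected hosts, and store the manifest somewhere the attacker cannot reach.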
🛑 The Biggest Mistake
The most common mistake is immediately fixing the vulnerability and redeploying without preserving evidence or understanding the full scope. You may close one hole while the attacker is still inside through another. Contain first, investigate fully, then remediate.
The Response Playbook
Phase 1: Contain (Hours 1-4)
Isolate
Isolate affected systems from the network. Don't destroy them.
Credentials
Rotate ALL credentials: API keys, database passwords, JWT secrets, OAuth client secrets, cloud IAM keys.
Access
Revoke all active sessions. Force re-authentication for all users.
Monitor
Set up enhanced monitoring on all remaining systems for signs of lateral movement.
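The rotation mechanics depend on your providers, but minting the replacement values should always use a CSPRNG, never anything hand-typed. A hedged sketch using Python's standard `secrets` module; the credential names are placeholders for your own inventory:

```python
import secrets

# Placeholder inventory: enumerate every credential class in YOUR stack.
CREDENTIAL_INVENTORY = [
    "api_key",
    "db_password",
    "jwt_secret",
    "oauth_client_secret",
    "cloud_iam_key",
]

def mint_replacements(names: list[str]) -> dict[str, str]:
    """Generate cryptographically strong replacement secrets.

    This only creates the new values; pushing them to your secrets
    manager, updating services, and invalidating the old values is
    provider-specific and should happen as atomically as possible.
    """
    return {name: secrets.token_urlsafe(32) for name in names}
```

Keeping the inventory in code also doubles as a checklist: if a credential class isn't in the list, nobody will remember to rotate it at 2 a.m.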
For Web3 incidents specifically:
- Pause the contract if you have a pause mechanism
- Move funds to a secure wallet if possible (guardian multisig)
- Contact bridge operators if cross-chain funds are at risk
- Monitor the attacker's wallet addresses for fund movement
- Contact exchanges to flag/freeze stolen funds
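Watching the attacker's addresses is easy to script. A chain-agnostic sketch: the balance lookup is injected (for example, web3.py's `w3.eth.get_balance` on Ethereum), so the function itself has no chain dependencies; the address strings and alerting hook-up are assumptions:

```python
from typing import Callable, Dict, Iterable

def detect_fund_movement(
    watched: Iterable[str],
    get_balance: Callable[[str], int],
    baseline: Dict[str, int],
) -> Dict[str, int]:
    """Compare current balances of watched (attacker) addresses against
    a baseline snapshot taken at containment time.

    Returns {address: delta} for every address whose balance changed;
    a positive delta means funds left the address, negative means
    funds flowed in.
    """
    moved = {}
    for addr in watched:
        current = get_balance(addr)
        delta = baseline[addr] - current
        if delta != 0:
            moved[addr] = delta
    return moved
```

Run it on a short polling interval and page the team on any non-empty result: once stolen funds start hopping through mixers, your window for exchange freezes closes fast.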
Phase 2: Investigate (Hours 4-48)
Investigation Questions
How did they get in?
Identify the initial access vector. Was it a code vulnerability? Compromised credentials? Social engineering? Supply chain? Knowing the entry point is essential for preventing re-entry.
What did they access?
Enumerate every system, database, file, and API the attacker touched. Don't assume the visible damage is the full extent. Check for backdoors, new accounts, modified configurations.
How long were they inside?
Review logs to determine when the initial compromise occurred. The longer the dwell time, the more data they likely accessed. The industry average time to detect a breach is 204 days, so assume they may have been inside for months, not hours.
Is the threat fully contained?
Are there persistence mechanisms — backdoor accounts, SSH keys, web shells, cron jobs, modified dependencies? Validate that containment is complete before moving to remediation.
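Once you have an indicator of compromise (an attacker IP, a rogue account name), dwell time becomes a log-scanning problem. A sketch, assuming one `ISO-8601-timestamp message` entry per line; adjust the parsing to your actual log format:

```python
from datetime import datetime
from typing import Iterable, Optional

def earliest_indicator(log_lines: Iterable[str], indicator: str) -> Optional[datetime]:
    """Return the timestamp of the earliest log line containing the
    indicator of compromise, or None if it never appears."""
    hits = []
    for line in log_lines:
        if indicator in line:
            timestamp, _, _ = line.partition(" ")
            hits.append(datetime.fromisoformat(timestamp))
    return min(hits) if hits else None
```

Dwell time is then the gap between that earliest hit and containment. Run the scan across every log source you preserved, not just the system where the breach was first noticed.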
Phase 3: Notify (Hours 24-72)
As we discussed in The Cost of Ignoring Security, notification is both a legal requirement and a trust obligation:
Legal requirements:
- GDPR: 72 hours to notify the supervisory authority
- CCPA: "Reasonable" timeframe to notify affected users
- HIPAA: 60 days to notify affected individuals
- State breach notification laws: vary, typically 30-90 days
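These clocks start at discovery, so compute concrete deadlines the moment the incident is confirmed. A sketch encoding only the fixed windows from the list above; CCPA and state laws are omitted because their windows aren't a single fixed number, and this is planning code, not legal advice:

```python
from datetime import datetime, timedelta, timezone

# Fixed statutory windows from the list above.
NOTIFICATION_WINDOWS = {
    "GDPR: supervisory authority": timedelta(hours=72),
    "HIPAA: affected individuals": timedelta(days=60),
}

def notification_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Map each regulation to its hard notification deadline,
    counted from the moment of discovery."""
    return {reg: discovered_at + window
            for reg, window in NOTIFICATION_WINDOWS.items()}
```

Post the computed deadlines in the incident channel: a visible "GDPR clock expires Monday 18:00 UTC" prevents the notification from slipping while everyone is heads-down on remediation.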
What to communicate:
- What happened (in plain language)
- What data was affected
- What you're doing about it
- What users should do (change passwords, monitor accounts)
- How to reach you for questions
⚠️ Don't Hide It
The coverup is always worse than the crime. Companies that disclose quickly and transparently recover trust faster than those that delay or minimize. Your users will find out eventually — better they hear it from you first.
Phase 4: Remediate (Days 3-14)
Quick Fixes (Do Now)
- Patch the specific vulnerability that was exploited
- Remove any backdoors or persistence mechanisms
- Reset all credentials and sessions
- Deploy enhanced monitoring and alerting
Systemic Fixes (Plan)
- Address the root cause (not just the symptom)
- Implement missing security controls
- Schedule a security audit of the full application
- Establish an ongoing security testing process
Phase 5: Post-Mortem (Weeks 2-3)
A blameless post-mortem is essential:
- Timeline — Reconstruct exactly what happened and when
- Root cause — What was the underlying vulnerability and why did it exist?
- Detection gap — Why didn't we catch this sooner? What monitoring was missing?
- Response evaluation — What went well and what needs improvement in our response?
- Action items — Specific, assigned, time-bound improvements
Building Incident Response Capability
As we recommended in Building a Security-First Culture, preparation is key:
Before an Incident
- Document your response plan — Who to call, what to do, in what order
- Know your data — What data do you store, where, and what's the regulatory impact if it's breached?
- Establish communication channels — Out-of-band communication (phone, Signal) for when you assume email/Slack is compromised
- Run a tabletop exercise — Simulate an incident once a year. Walk through your response plan with the actual people who'd respond.
- Engage an IR retainer — If you can afford it, have a professional incident response firm on retainer. You don't want to be shopping for help during a crisis.
Ongoing
- Monitor and alert — You can't respond to what you can't detect
- Regular testing — As we discussed in Bug Bounty vs Pentest vs Audit, continuous security testing reduces the window of vulnerability
- Keep logs — Centralized logging with sufficient retention (90+ days). Without logs, investigation is impossible.
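That retention guideline is worth checking automatically: alert when your oldest available log entry no longer covers the window an investigation would need. A minimal sketch, with the 90-day target taken from the guideline above:

```python
from datetime import datetime, timezone
from typing import Optional

def retention_shortfall_days(
    oldest_entry: datetime,
    now: Optional[datetime] = None,
    required_days: int = 90,
) -> int:
    """Return how many days short of the retention target the logs are.

    0 means the target is met; anything positive should page someone
    before an incident, not during one.
    """
    now = now or datetime.now(timezone.utc)
    covered = (now - oldest_entry).days
    return max(0, required_days - covered)
```

Feed it the timestamp of the oldest record in your centralized log store; a nightly check costs nothing and catches silent retention misconfigurations.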
- 204 days: average time to detect a breach
- 73 days: average time to contain a breach
- $1.2M: savings with an IR plan in place
- 54%: share of breaches found by external parties
The best incident response is prevention. Our Web Security Auditor and Smart Contract Auditor find vulnerabilities before attackers do. And if you want comprehensive coverage, our expert review combines AI automation with human expertise for maximum security assurance.