(Print) Use this randomly generated list as your call list when playing the game. There is no need to say the BINGO column name. Place a mark (an X, a checkmark, a dot, a tally mark, etc.) on each cell as you announce it, to keep track. Alternatively, you can cut out the items, place them in a bag, and pull words from the bag.
Got locked out mid-recovery
Restored successfully—into prod by mistake
Alert false positive
Found critical system on a personal laptop
Oncall during a holiday
Wrote a DR plan no one read
“It worked in dev”
Ran a failover, forgot the firewall rules
DR test passed... because no one actually tested anything
Created a backup strategy
Backup tape was corrupted
Fix required physical access (no one had keys)
Vendor said, “That’s not covered”
Started a DR drill—no one showed up
Got called mid-flight (tried to troubleshoot over airplane Wi-Fi)
Accidentally deleted data
Someone unplugged the “do not touch” server
Dependency failed silently
Ran chaos test in prod
Searched Teams/WhatsApp/Slack for the DR steps
Recovery took >1 day
Team used five different definitions of “RTO”
Cloud region down
Did a post-mortem
Ran a DR simulation game
Unreachable DNS
Conflicting recovery instructions
Backup password was changed but not shared
Applied the wrong config to prod
Alert fatigue
Discovered the backup drive was full
Tested DR... in prod by accident
Ignored an alert that was real this time
Confused dev and prod environments
Lost prod data (even a bit)
Deployment broke prod
Found passwords on a sticky note
Practiced failover
Network outage
Got called during dinner
Power came back... and then went out again
Found the backup in the wrong format
Couldn’t reach the primary contact
Called vendor support—hit voicemail
No runbook available
Restored from backup
DR test failed
Saw a mysterious cron job labeled “do not delete”
Logged into the wrong cloud account
Logged incident... to the wrong team
Discovered half the infra was never documented
Realized the DR test broke something else
Spent 2 hours debugging—then found it was a typo
DB migration failed
System alert missed because alert rule was too specific
The “hot site” was actually cold
Hit restore, regretted immediately
External service went down
Realized you were restoring the wrong day’s backup