Use this randomly generated list as your call list when playing the game. There is no need to say the BINGO column name. Place a mark (an X, a checkmark, a dot, a tally mark, etc.) on each cell as you announce it to keep track. You can also cut out each item, place the slips in a bag, and pull words from the bag.
Ran chaos test in prod
DB migration failed
Hit restore, regretted immediately
Found critical system on a personal laptop
Vendor said, “That’s not covered”
Forgot to test backup
Alert false positive
Saw a mysterious cron job labeled “do not delete”
Ran a DR simulation game
Alert fatigue
Discovered the backup drive was full
Cloud region down
Practiced failover
Couldn’t reach the primary contact
External service went down
DR plan included a retired employee
Misread an alert's severity
Deployment broke prod
Custom script failed with no logs
Confused dev and prod environments
Restored from backup
Wrote a DR plan no one read
Did a post-mortem
Discovered half the infra was never documented
Tested DR... in prod by accident
Spent 2 hours debugging—then found it was a typo
DR test failed
Backup tape was corrupted
Conflicting recovery instructions
Didn’t have a backup
Logged incident... to the wrong team
Accidentally deleted data
Called vendor support—hit voicemail
Woke up at midnight for a standby call
Applied the wrong config to prod
“It worked in dev”
Ignored an alert that was real this time
Backup ran... but didn’t include the database
Got locked out mid-recovery
Fix required physical access (no one had keys)
Missed a system alert because the alert rule was too specific
Someone unplugged the “do not touch” server
The “hot site” was actually cold
Backup password was changed but not shared
Power came back... and then went out again
Started a DR drill—no one showed up
No runbook available
Unreachable DNS
Network outage
Got called mid-flight (tried to troubleshoot over airplane Wi-Fi)
Realized the DR test broke something else
Found the backup in the wrong format
Deployed during a major incident
Found passwords on a sticky note
Searched Teams/WhatsApp/Slack for the DR steps
Ran a failover, forgot the firewall rules
Recovery took >1 day
Realized you were restoring the wrong day’s backup
Restored successfully—into prod by mistake
Logged into the wrong cloud account
Dependency failed silently
Got called during dinner
DR test passed... because no one actually tested anything