L3.22 | Scenario: Content moderation — CCA-F Exam Prep
A social platform launches AI moderation. False positives drop by 60%. Everyone celebrates.
Three months later, a senator's post criticizing a policy is flagged and removed. It takes 48 hours to restore. The story makes national news. The platform's stock drops 4%.
The AI was 99.5% accurate. But on 1 million posts per day, even a 0.5% error rate means 5,000 wrong decisions daily. One of those 5,000 happened to be a senator.
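The arithmetic above can be checked in a few lines (a minimal illustrative sketch; the function name is ours, not from any real moderation system):

```python
# Back-of-the-envelope: expected wrong decisions per day at a given
# accuracy and daily volume. Numbers match the scenario above.

def expected_errors(posts_per_day: int, accuracy: float) -> int:
    """Expected number of incorrect moderation decisions per day."""
    return round(posts_per_day * (1 - accuracy))

print(expected_errors(1_000_000, 0.995))  # -> 5000
```

The point of the calculation: at platform scale, a small relative error rate still produces an absolute number of mistakes large enough that some will land on high-profile accounts.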
The mystery: how do you design a system that is fast enough, accurate enough, and able to recover from its mistakes quickly enough?
