The old military aphorism that “the enemy gets a vote” is often forgotten in both Silicon Valley and Washington, D.C. This cliché is worth keeping in mind as Congress debates adjustments to Section 230 (§230) of the Communications Decency Act. For starters, Silicon Valley’s persistent failure to ground products in the knowledge that some users will deliberately abuse them, and the unsurprising abuse that results, motivates many in Washington to adjust §230’s liability protections. But policymakers, intent on taming platforms, must not inadvertently empower the truly dangerous actors (terrorists, child predators, hate groups) who will abuse any technology, and any legal recourse, created by adjustments to §230.
Harm manifests online, sometimes in world-changing ways. To some, this is evidence that the current regulatory regime should shift, and I grudgingly agree. But bad policy could very well make things worse, especially when it comes to high-severity, relatively low-prevalence harms like terrorism and hate. This paper distinguishes those issues from the sometimes-related but ultimately broader issue of misinformation that often manifests as higher-prevalence, lower-severity harms. On the former, policymakers should keep three core ideas in mind as they move forward.