How can society capture the benefits of artificial intelligence (AI) while minimizing the fraud, safety risks, and unemployment that AI could produce? Some AI experts argue that regulation may be premature given the technology's early stage, while others believe it must be implemented immediately to ensure AI systems are developed and deployed responsibly. Central to this debate are two implicit assumptions: that regulation, rather than market forces, primarily drives innovation outcomes, and that AI should be regulated in the same way as other potentially harmful products that are more fully developed.
Both assumptions are incorrect. When and how markets distort the direction of technological innovation in the presence of externalities and uncertainty, and when regulation is useful or harmful, are questions that have long been studied in contexts outside of AI. Promoting socially beneficial AI depends not just on technical and legal knowledge but also on lessons from economics and management about how the trajectories of new technologies unfold.