We consider an environment with substantial uncertainty about the potential negative external effects of AI algorithms. We find that subjecting algorithm implementation to regulatory approval, or alternatively holding developers liable for the adverse external impacts of their algorithms, is insufficient to implement the social optimum. When testing costs are low, however, combining mandatory beta testing for external effects with developer liability for negative external effects implements the social optimum, even when developers have limited liability.