Risk management practice, on which current AI governance and regulation are based, differs substantially from decision-making in the face of uncertainty.
Generative pretrained AI applications are complex, dynamic systems with uncertain outcomes, and they will be deployed in complex, dynamic human systems whose mechanisms are likewise poorly understood. The potential outcomes are therefore highly uncertain.
Regulation based on risk management cannot prevent harm arising from outcomes that cannot be known ex ante.
Some harm is inevitable as society learns about these new applications and their contexts of use. Rules that are use-case specific rather than generic, combined with a practice of redressing harm when it occurs, offer a principled way of enabling efficient development and deployment of AI applications.