Regulation of AI and corresponding explainability practices

Doctoral student, 2019


The legal regulation of AI presents a challenging new frontier. Common-law conceptions of foreseeability and reasonableness must find novel parallels in the creation and deployment of AI. The contribution of our work is twofold: first, we propose a new framework for categorising AI systems from a legal perspective; second, we relate how recent AI explainability research can be applied to help provide guarantees and regulate AI systems.

Our framework categorises AI systems along two dimensions: understanding of inputs and foreseeability of impact. The first dimension, understanding of inputs, concerns the depth of understanding of the data inputs and how an AI system should ideally learn the task at hand. The second dimension, foreseeability of impact, considers how the output of the AI system will affect the wider system in which it operates.

The recent trend in machine learning and AI research has been towards more complex models that achieve high performance on difficult tasks. This transition from simple to complex models has the unfortunate side effect of a loss of model understanding. When attempting to regulate AI, this lack of interpretability raises concerns about trust and guarantees of safe performance. This work reviews how explainability can be used in regulation in relation to our novel framework.
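As one illustration of how explainability research might feed into such regulation, below is a minimal sketch (not part of the poster itself) of a model-agnostic audit step: permutation feature importance computed with scikit-learn. The dataset, model, and feature names are hypothetical placeholders; the point is only that a post-hoc check of which inputs a complex model relies on could inform the "understanding of inputs" dimension of the framework.

```python
# Hedged sketch: a model-agnostic explainability check of the kind an auditor
# might run on an opaque model. All data, model choices, and names here are
# hypothetical placeholders, not the authors' method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a decision-making dataset (e.g. a credit or triage task).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "complex" model whose internal decision logic is not directly readable.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does held-out accuracy drop when each input
# is randomly permuted? Large drops flag inputs the model depends on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

A regulator could, for example, require that importance concentrated on a legally protected or clearly irrelevant input be justified or remediated before deployment; this is offered only as a sketch of the kind of check the abstract gestures at, not as the framework's prescribed procedure.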

Policy and governance