Governance and Regulation for AI
As AI becomes increasingly prominent in the daily lives of individuals and organisations, there is a need to ensure the correct policies and regulation are in place to reduce harms. Below are some resources that look at approaches to, and investigations into, AI policy and regulation.
General Approaches
Governments around the world are increasingly considering the harms of AI and how to govern and regulate its use effectively.
- The AI Safety Summit has been a prominent conference bringing together world leaders, companies, researchers and civil society groups to better understand how to develop safe and responsible AI. The focus has primarily been on frontier AI, but many of the principles extend to other techniques.
- The EU has been one of the first to pass comprehensive regulation on the use of AI, via the EU AI Act.
- The UK government is taking a pro-innovation approach to AI regulation and is considering five core principles to guide and inform the responsible development and use of AI. The Department for Science, Innovation and Technology (DSIT) will be responsible for overseeing the government's AI strategy, and the principles are expected to form the basis of implementation within other departments. These principles are:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Algorithmic Governance
Key to assessing, understanding and managing the risks from AI is knowing how the algorithms work, which other systems they interact with and what data they use. Some resources in this area are listed below, followed by a sketch of the kind of information an algorithm register might capture.
- The UK Government has an Algorithmic Transparency Recording Hub to support public sector organisations in providing clear information about the algorithmic tools they use and why they're using them. The underlying standard will become a requirement for all central government departments.
- Energy Systems Catapult have a report on Algorithmic Governance, which considers the role of registering algorithms and what data to collect.
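As a rough illustration only, a register entry of the kind described above might record what a tool does, what data it uses and which systems it connects to. The field names in this minimal sketch are assumptions for illustration, not the schema of the UK standard or of any published report:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmRecord:
    """Illustrative entry for a register of algorithmic tools.

    Field names are assumptions for illustration only, not any official schema.
    """
    name: str                       # what the tool is called
    owner: str                      # organisation or team responsible for it
    purpose: str                    # why the tool is used
    data_sources: List[str] = field(default_factory=list)       # what data it uses
    connected_systems: List[str] = field(default_factory=list)  # systems it interacts with
    decision_role: str = "advisory"   # e.g. advisory vs. fully automated
    human_oversight: bool = True      # whether a person reviews outputs
    contact_for_redress: str = ""     # route for contesting a decision

# Hypothetical example entry
record = AlgorithmRecord(
    name="Demand forecasting model",
    owner="Network operations team",
    purpose="Short-term electricity demand forecasting",
    data_sources=["smart meter aggregates", "weather forecasts"],
    connected_systems=["dispatch planning system"],
)
print(record)
```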
AI Risks
Appropriate governance and regulation will be required to assess, minimise, and manage the risks caused by the implementation of AI.
- Energy Systems Catapult have published a report on the potential negative impacts of AI on energy networks, including cascade risks, with recommendations on what may be required through regulation to reduce and mitigate such risks.
- The US Department of Energy also published a report on the Potential Benefits and Risks of Artificial Intelligence for Critical Energy Infrastructure.