About this Course

Illustration by Johnny Lighthands

This course is designed to help you understand the fundamentals of AI ethics and governance. It begins with an introduction to metaethics and normative theories. It then examines the practical ways AI systems can produce diverse harms to individuals, society, and even the biosphere, as well as the values that should be upheld when thinking about AI ethics.

The course then dives deeper into the following topics: AI sustainability through stakeholder engagement and impact assessment, AI fairness and bias mitigation, accountability and governance, explainability and transparency, and the CARE & ACT principles.

Throughout the course there will be time for Q&A, group discussions, case studies, and structured activities to deepen discussion and understanding of these concepts.

Who is this Guidebook For?

Primarily, this guidebook is for researchers with an active interest in the ethics and governance of data science and AI. This doesn't mean you have to be a data scientist or develop machine learning algorithms. You could also be an ethicist, sociologist, or someone with an interest in law and public policy.

This course includes practical, sometimes hands-on, activities designed to (a) encourage critical reflection and (b) help you build a practical understanding of the processes associated with effective and responsible engagement with AI systems. While they can be carried out as part of individual, self-directed learning, they are best suited to group discussion.

Learning Objectives

This guidebook has the following learning objectives:

  • Become familiar with some key concepts in practical ethics. In particular, understand the metaethical motivation behind our course, as well as the main families of normative ethical theories.
  • Understand the different kinds of harms that AI systems can create.
  • Explore the different values that underpin the thinking and reflection behind issues in AI ethics.
  • Understand the importance of AI sustainability and anticipatory reflection through a comprehension of the stakeholder engagement process (SEP) and stakeholder impact assessment (SIA).
  • Explore some of the ethical issues around AI systems, such as: fairness & bias mitigation, explainability & transparency, and accountability & governance.
  • Learn the importance and use of the CARE & ACT principles in developing AI systems in which ethics is considered iteratively, throughout the process.

Table of Contents

  • Introduction to Practical Ethics


    This chapter looks at foundational concepts of practical ethics, through two broad-brush introductions to: (i) metaethics and (ii) normative ethical theories.

    Go to chapter

  • AI Harms and Values


    This chapter looks at the different kinds of harms AI systems may cause, as well as the values that should be used as goals and objectives when thinking about the ethics of AI systems.

    Go to chapter

  • AI Sustainability and Stakeholder Engagement


    This chapter introduces the concept of AI sustainability and the importance of anticipatory reflection. We develop the notion of sustainability by looking at the Stakeholder Engagement Process (SEP) and the Stakeholder Impact Assessment (SIA).

    Go to chapter

  • Fairness, Bias Mitigation, Accountability, and Governance


    This chapter addresses various issues that arise from the use of AI systems: AI fairness and bias mitigation, as well as accountability and governance.

    Go to chapter

  • Transparency & Explainability and CARE & ACT Principles


    The concluding chapter starts with the importance of transparency and explainability in AI systems. We then go through what we call the CARE & ACT principles: (i) consider context, (ii) anticipate impacts, (iii) reflect on purpose, positionality, and power, (iv) engage inclusively, and (v) act responsibly and transparently.

    Go to chapter