About the TEA Platform
The Trustworthy and Ethical Assurance (TEA) platform is an open-source, community-oriented tool designed and developed by researchers at the Alan Turing Institute and the University of York to support the process of developing and communicating trustworthy and ethical assurance cases.
To better understand the purpose and motivation of the TEA platform, consider the following question:
Question
How could a team of researchers and developers provide assurance to their stakeholders or users that some ethical goal has been achieved over the course of designing, developing, and deploying a data-driven technology?
This is not an easy question to answer! As we pick it apart, we realise there are many more questions that need to be addressed:
- Which ethical goals are relevant to the technology (e.g. fairness, explainability, safety, sustainability)?
- How are these goals defined in the context of the project?
- How can a project team provide justified evidence that these goals have been achieved?
- Who should be engaged with as part of this process, and how should this engagement be structured?
What does the TEA Platform do?
The TEA platform helps project teams—including researchers, developers, decision-makers, managers, auditors, regulators, and users—answer these questions in a systematic manner. It achieves this through three interlocking features:
- An interactive tool for building assurance cases (accessible here)
- A set of educational resources that help users get the most out of the tool (see learning modules)
- The community infrastructure that promotes open and collaborative practices (see community resources)
Feature 1: Interactive Tool for Building and Reviewing Assurance Cases
The main component of the TEA platform is the interactive tool that allows members of a project team to iteratively develop an assurance case using a graphical interface. Figure 1 shows an example assurance case.
Figure 1. A simple assurance case showing a top-level goal claim, a set of three property claims, and corresponding evidence.
At the top of the assurance case is a clear and accessible claim about the technology or system in question, which serves as the goal of the argument (i.e. the goal claim). Underneath this goal claim is a set of additional claims about specific properties of the project or system (i.e. property claims), which help specify the goal and demonstrate what actions or decisions have been taken to achieve it. Finally, at the base of the assurance case is the evidence that justifies the validity of the claims above it.
In short, an assurance case presents an argument, in a logical and graphical format, about how an ethical goal has been achieved. The key to an assurance case is the structure of the argument.
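To make this structure concrete, the sketch below models an assurance case as a simple tree: a goal claim at the root, property claims beneath it, and evidence at the leaves. This is a minimal illustration only; the class names, fields, and example claims are hypothetical and do not correspond to the TEA platform's actual data model or API.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are hypothetical and are not
# taken from the TEA platform's codebase or schema.

@dataclass
class Evidence:
    description: str  # e.g. a reference to an audit report or dataset documentation

@dataclass
class PropertyClaim:
    statement: str  # a specific, verifiable property of the project or system
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class GoalClaim:
    statement: str  # the top-level ethical goal of the argument
    property_claims: list[PropertyClaim] = field(default_factory=list)

# A minimal case mirroring the shape of Figure 1: one goal claim,
# three property claims, each supported by a piece of evidence.
case = GoalClaim(
    statement="The system's outputs are fair across user groups.",
    property_claims=[
        PropertyClaim(
            "Training data are representative of the user population.",
            [Evidence("Dataset composition report")],
        ),
        PropertyClaim(
            "Model performance is comparable across protected groups.",
            [Evidence("Disaggregated evaluation results")],
        ),
        PropertyClaim(
            "Affected stakeholders were consulted during design.",
            [Evidence("Stakeholder workshop minutes")],
        ),
    ],
)
```

The tree shape reflects the logic of the argument: each property claim makes the goal more specific, and each piece of evidence grounds a property claim, so the validity of the goal can be traced down to concrete artefacts.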
An Introduction to Trustworthy and Ethical Assurance
A more complete introduction to Trustworthy and Ethical Assurance (including the tool and methodology) can be found in our learning modules section.
Feature 2: User Training and Resources
Although the logical structure of an assurance case is simple, the process of building and sharing an assurance case can be more involved. As such, a significant element of the TEA platform is the set of learning resources and technical documentation designed to widen the scope of who can participate in the assurance ecosystem.
You can browse our learning modules, technical documentation, or community resources to find out more.
Feature 3: Community Infrastructure
A key part of the TEA platform is meaningful engagement with the wider community. This is also true for trustworthy and ethical assurance more generally.
For instance, a project team may believe that they have carried out the set of actions and decisions that are sufficient to justify a claim made about the fairness of an AI system. However, the complexity of an ethical principle such as fairness means that it is easy to (unintentionally) overlook a core property that disproportionately affects a group of users (e.g. representativeness of data, equitable impact of a system).
Furthermore, our understanding of trustworthy and ethical assurance evolves as the capabilities of sociotechnical systems, such as AI systems or digital twins, also evolve. Therefore, it is vital that the process of developing and communicating assurance cases is, where possible, done in an open and collaborative manner.
The benefits of this include:
- Community support for identifying and defining key ethical principles
- Sharing case studies and exemplary assurance cases that help promote consensus and best practices
- A collaborative approach to evaluating the strength and justifiability of assurance cases (e.g. identifying gaps or insufficient evidence)
- Open and collaborative design of new ideas and features to improve the TEA platform
If you want to learn more about how the TEA platform scaffolds community engagement, please read our community guide, where you can also find information about past and upcoming events for the TEA community.
Funding Statements
- From March 2024 until September 2024, the project was funded by the BRAID (UKRI AHRC) programme as part of a scoping research grant for the Trustworthy and Ethical Assurance of Digital Twins project, which was awarded to Dr Christopher Burr.
- Between April 2023 and December 2023, this project received funding from the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York, which was awarded to Dr Christopher Burr.
- Between July 2021 and June 2022, this project received funding from the UKRI's Trustworthy Autonomous Systems (TAS) Hub, which was awarded to Dr Christopher Burr (Grant number: TAS_PP_00040).