Executive Summary

Animation of a person peeling back a smartphone screen to uncover the algorithms running behind

There is a culture of distrust surrounding the development and use of digital mental health technologies (DMHTs).

As many organisations continue to grapple with the long-term impacts of the COVID-19 pandemic on mental health and well-being, a growing number are turning to digital technologies to increase their capacity and meet the growing need for mental health services. Prior to the pandemic, we had already called for greater attention to the ethical challenges of using digital technologies in the domain of psychiatry and mental healthcare.3 Since then, the urgency of meeting this call has only grown.

In this report, we argue that clearer assurance for how ethical principles have been considered and implemented in the design, development, and deployment of DMHTs is necessary to help build a more trustworthy and responsible ecosystem. To address this need, we set out a positive proposal for a framework and methodology we call 'Trustworthy Assurance'.

To support the development and evaluation of Trustworthy Assurance, we conducted a series of participatory stakeholder engagement events with students, university administrators, regulators and policy-makers, developers, researchers, and users of DMHTs. Our objectives were:

  • to identify and explore how stakeholders understood and interpreted relevant ethical objectives for DMHTs,
  • to evaluate and co-design the trustworthy assurance framework and methodology, and
  • to solicit feedback on the possible reasons for distrust in digital mental healthcare.

Based on these objectives, the following 'key findings' and 'recommendations' are presented.

Key Findings

  1. The current landscape of digital mental healthcare is characterised by significant uncertainty, a lack of transparency or accountability, and a rising demand that outpaces trusted services and resources. This contributes to a culture of distrust, which may prevent vulnerable users from accessing support.
  2. Concerns raised by stakeholders suggest that there are a wide range of challenges to be addressed, which may broadly be grouped into concerns surrounding a dearth of trustworthy innovation and concerns surrounding a lack of transparent communication between groups:

    1. For developers, key concerns focused on
      1. the lack of clear guidance and structure through which to present evidence of trustworthy innovation,
      2. the lack of integration of ethics within existing workflows, and
      3. the challenges posed by what is viewed as burdensome regulation.
    2. For policy-makers, key concerns focused on
      1. the lack of clarity surrounding standards for medical devices versus services for "well-being" that are widely available on digital platforms (e.g. app stores), and
      2. the lack of integration or harmonisation between existing examples of legislation and standards in this space.
    3. For those with lived experiences of using these tools, key concerns focused on
      1. the lack of clear and meaningful consent procedures and the insufficiency of data privacy policies,
      2. the perceived erosion of in-person care by digital technologies and services,
      3. a perceived lack of diversity and representation in development teams, and
      4. the varying quality and accessibility of services across society (e.g. the digital divide).
  3. Trustworthy Assurance is a framework and methodology that can support the design, development, and deployment of data-driven technologies and also create a more responsible and trustworthy ecosystem of digital mental healthcare.

Recommendations

  1. Organisations involved in the design, development, and deployment of DMHTs should adopt the trustworthy assurance methodology to demonstrate and justify how they have embedded core ethical principles into their systems. The methodology can also help provide assurance that key legislative and regulatory duties and obligations have been met.
  2. Standards can be co-developed within and among organisations by sharing best practices related to trustworthy assurance. This can help ease the burden associated with relevant responsibilities (e.g. compliance, deliberation).
  3. Common capacities should be developed across the digital mental healthcare landscape, such as initiatives aimed at improving data and digital literacy, in order to support and foster trustworthy and responsible innovation through shared best practices and standards.
  4. Research should be undertaken to identify how organisations and product managers could ease the time burden on developers by embedding and integrating the trustworthy assurance methodology into key stages of the project lifecycle, rather than treating it as a post hoc compliance exercise.
  5. Organisations involved in the design, development, or deployment of DMHTs should identify opportunities and processes to support the transformative and inclusive engagement and participation of affected users within the project lifecycle.

In each subsequent chapter, these key findings of our project are further contextualised and refined with reference to the specific topics under discussion.

Report Overview

Our report is structured as follows:

  • Chapter 1 (Introduction) establishes the background context and conceptual foundations for the report, while also outlining the many challenges that exist for researchers, developers, and policy-makers/regulators working in the domain of digital mental healthcare.
  • Chapter 2 (Presenting Trustworthy Assurance) introduces the framework and methodology of 'Trustworthy Assurance'. The framework includes a model of a typical project lifecycle involving a data-driven technology (e.g. health and well-being app), and a discussion of several ethical principles, known as the SAFE-D principles. The framework serves as a guide to our methodology for developing an assurance case that promotes trustworthy goals associated with DMHTs. Finally, this chapter also includes an important discussion about 'argument patterns', which supports the material presented in Chapter 5.
  • Chapter 3 (Applying Trustworthy Assurance) presents findings from a research sub-project conducted with students and administrators from UK universities. These engagement events explored the application of trustworthy assurance to the procurement of DMHTs for use in the higher education (HE) sector, as well as general attitudes and perceptions towards the use of data-driven technologies in higher education. A series of recommendations accompanies our thematic analysis.
  • Chapter 4 (Co-Designing Trustworthy Assurance) broadens the scope from the previous chapter to present research findings from a series of stakeholder engagement events carried out with regulators and policy-makers, developers, researchers, and users with lived experience of DMHTs. As with the previous chapter, a set of recommendations accompanies our thematic analysis.
  • Chapter 5 (Developing Trustworthy Assurance) introduces, motivates, and explains two argument patterns that are intended to help project teams meet objectives for fair and explainable DMHTs. This chapter also connects the argument patterns to existing and relevant legislation and regulation (e.g. Equality Act 2010).

Examples

If you are new to the topics covered in this report, you can also find a set of illustrative examples of DMHTs available on this page.

About the Report

The following summarises what this report is and what it is not:

✅ An introduction to 'Trustworthy Assurance'—a framework and methodology for enabling a more trustworthy ecosystem of digital mental healthcare through the responsible and ethical design, development, and deployment of digital technologies.

❌ A comprehensive user guide for 'Trustworthy Assurance' or argument-based assurance—though links and further resources are provided.1

✅ A summary of findings from research conducted on the application of trustworthy assurance to the procurement of DMHTs for use in the higher education (HE) sector.

❌ Findings from a sociological study or series of generalisable results from scientific experiments.

✅ A summary of findings from a series of more general stakeholder engagements, exploring the ethics of digital mental healthcare and attitudes towards trustworthy and untrustworthy technologies.

❌ A report with a strong international or multi-national focus. While we make reference to non-UK developments in this domain, our primary focus is on the UK. However, the methodology we present and many of the findings we discuss have value beyond the UK.

✅ An explanation and discussion of two argument patterns exploring the goals of fairness and explainability in the design, development, and deployment of DMHTs.

❌ A critical examination of argument-based assurance.2

✅ A series of recommendations, targeted at different stakeholders, for how to enable a more responsible and trustworthy ecosystem of digital mental healthcare.

❌ A review of the current legislative or regulatory publications that are relevant to digital mental healthcare.

Who is this report for?

This report is primarily targeted at the following groups:


Policy-makers and Regulators

Policy-makers and regulators will find the recommendations and guidance we set out to be of particular interest, and will find value in the methodology presented in Chapter 2, which is framed in procedural terms and linked to process-based forms of governance. We also connect our two argument patterns, presented in Chapter 5, to specific legislative and regulatory developments in healthcare.


Senior Decision-Makers

Senior decision-makers, like policy-makers and regulators, will likely benefit from our Trustworthy Assurance methodology, specifically from exploring its procedural underpinnings, which are discussed in the section on a typical ML or AI lifecycle (see Chapter 2).


Developers and Product Managers

Although our framework and methodology are not set out using formal syntax and schemas for argument-based assurance, Trustworthy Assurance is nevertheless primarily aimed at developers and product managers. For instance, one of its key values is as a reflective and deliberative aid for demonstrating how ethical principles and decisions have been established and upheld throughout a project's lifecycle, with a straightforward means of justifying the relevant claims by linking them to evidence. Developers and product managers are, therefore, a key stakeholder group targeted by this report and research project.


Researchers

Researchers may find less practical value in our framework and methodology than the above groups. However, our report highlights and emphasises significant knowledge gaps and research opportunities for improving our collective understanding of the individual and social impacts of DMHTs. Our report can therefore also be seen as a call for further research into specific areas, including the evaluation and validation of the Trustworthy Assurance framework and methodology.


Users of DMHTs may also find value in the report, but it has not been produced with members of the public as its primary audience. In general, responsibility for using the methodology and implementing and acting upon the recommendations lies with the groups above, not with the user.


  1. Work is underway to develop a user guide, which will be added to the online version of our report when ready. The guide will also include instructions on how to use our tool for producing assurance cases.

  2. The following documents provide a more critical examination for those interested: Sujan and Habli (2021); Burr and Leslie (2022).

  3. Christopher Burr, J. Morley, M. Taddeo, and L. Floridi. Digital Psychiatry: Risks and Opportunities for Public Health and Wellbeing. IEEE Transactions on Technology and Society, 1(1):21–33, March 2020. doi:10.1109/TTS.2020.2977059