Appendix: Project Methodology

This appendix provides additional information about the structure and methodology of our workshops. We begin with an overview of the four case studies used across the workshops, and then detail the general methodology, split across the following four sections:

  1. University administrators
  2. University students
  3. Regulators, developers, and researchers
  4. Users of digital mental health technology

Case Study Information

No one ever said ethics was easy. Moral deliberation requires deep and wide-ranging consideration of issues and challenges such as conflicting values, resource limitations, and the needs and interests of diverse people and groups. One way to facilitate ethical deliberation, therefore, is through the use of illustrative case studies, which hold certain details fixed to support and guide exploratory reflection.

We prepared four case studies for this project, each describing a hypothetical project (based on real-world examples) involving the development or use of a digital mental health technology.

Case Studies

Our case studies are free to download from our online repository. They are released under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) licence, and can be freely shared and adapted subject to the terms of that licence. We would also encourage anyone who uses the case studies to provide feedback on where they were helpful or could be improved.

Risk assessment and peer-to-peer support

A platform for users to discuss their mental health with others in an anonymous environment, where a machine-learning algorithm identifies markers of risk and alerts trained professionals when positive instances are detected.
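
To make the detect-and-alert flow concrete, here is a minimal sketch of the behaviour described above. It is purely illustrative: the keyword-based scorer, threshold, and all names are our own assumptions, not details taken from the case study itself.

```python
# Purely illustrative sketch of the detect-and-alert flow in this case study.
# The keyword scorer, threshold, and names are assumptions for illustration.

from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # assumed operating point; a real system would tune this
RISK_MARKERS = {"hopeless": 0.5, "self-harm": 0.9}  # stand-in for a trained model

@dataclass
class Post:
    author_id: str  # pseudonymous, since the platform is anonymous
    text: str

def score_risk(post: Post) -> float:
    """Stand-in for the machine-learning classifier: returns the strongest
    keyword signal found in the post."""
    text = post.text.lower()
    return max((w for marker, w in RISK_MARKERS.items() if marker in text), default=0.0)

def alert_professional(post: Post, score: float) -> None:
    """Stand-in for the escalation channel to trained professionals."""
    print(f"ALERT ({score:.2f}): review post by {post.author_id}")

def moderate(post: Post) -> None:
    score = score_risk(post)
    if score >= RISK_THRESHOLD:  # a "positive instance" has been detected
        alert_professional(post, score)

moderate(Post("user-123", "Lately everything feels hopeless and I think about self-harm"))
```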

App limits

An automated service on a smartphone that learns to detect signs of problematic usage of specific apps (e.g. gambling, social media) and prevents users from accessing these apps for a specified period.
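
As a rough illustration of this behaviour, the sketch below tracks daily usage per app and blocks access once a threshold is crossed. The static daily limit, block period, and function names are assumptions for illustration; the case study's service would instead learn signs of problematic usage rather than apply a fixed rule.

```python
# Rough, assumption-laden sketch of the app-limit behaviour described above.

from datetime import datetime, timedelta

DAILY_LIMIT = timedelta(hours=2)    # assumed proxy for problematic usage
BLOCK_PERIOD = timedelta(hours=24)  # assumed length of the enforced break

usage: dict[str, timedelta] = {}         # app -> total time used today
blocked_until: dict[str, datetime] = {}  # app -> when the block expires

def record_session(app: str, duration: timedelta) -> None:
    """Accumulate usage and impose a block once the limit is crossed."""
    usage[app] = usage.get(app, timedelta()) + duration
    if usage[app] >= DAILY_LIMIT:
        blocked_until[app] = datetime.now() + BLOCK_PERIOD

def can_open(app: str) -> bool:
    """The launcher would consult this before opening a monitored app."""
    return datetime.now() >= blocked_until.get(app, datetime.min)

record_session("gambling-app", timedelta(hours=3))
print(can_open("gambling-app"))  # False: access is blocked for 24 hours
```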

Clinical decision support system

An AI system that supports a psychiatrist with the assessment and diagnosis of a patient by analysing the patient's speech and making recommendations for follow-up questions or possible follow-up actions.

Virtual reality for therapeutic support

A virtual reality (VR) system that immerses patients in a virtual environment that is designed to expose them to challenging situations in a way that is monitored and controlled (e.g. social encounters for individuals suffering from social anxiety).

For each case study, the following information was presented:

  • Overview: a summary of the case study with relevant information about the system and use context
  • Key consideration: a question that was designed to elicit reflection on ethically salient features of the case study
  • Deliberative prompts: additional questions to help structure group discussion
  • Datasheet: a table of information about the data available to the project team (e.g. input data, training data) and the techniques used in the project (e.g. natural language processing, artificial neural network)
  • Affected users, groups, and stakeholders: a list of those who are likely to be impacted by the system or who may impact the system's use and adoption

For all the case studies, this information was presented as a starting point, but participants were given the flexibility to build upon the case studies where necessary. For instance, if participants thought that a property of the use context was relevant to their deliberation but it was not explicitly stated in the document, they were encouraged to include it in their discussion.

With this preliminary information specified, we now turn to the structure and methodology of the workshops.

Engagements: Structure and Methodology

Note

The following applies to all of our engagements:

  • An internal and independent ethics review process was followed to evaluate the design of our engagements.
  • Participants were sent an informed consent statement and briefing document prior to their participation. These documents provided information about the purpose and nature of the project, how their data and involvement would support the project, and also specified that all data would be a) securely stored in an anonymised format, and b) deleted following completion of the project.
  • The same four case studies (see above) were presented to the participants for use in the workshop activities.
  • No information about the SAFE-D principles was presented to any of the groups prior to activities where it was important to gather feedback that was not primed. However, for groups where knowledge of ethics or practical decision-making could not be assumed (e.g. users), a general introduction was provided to support their participation.

The following sections provide details of these workshop groups:

  1. University administrators
  2. University students
  3. Regulators and Policy-Makers, Developers, and Researchers
  4. Users with lived experience of digital mental health technologies (DMHTs)

University administrators

Recruitment and participant details

Interviews with administrators across 10 UK universities were conducted between January and March 2022, each lasting one hour and facilitated by Dr Christopher Burr and Rosamund Powell. In all instances, interviews were conducted remotely via Zoom due to the geographic diversity of participants. Participants were selected from the top 20 UK universities according to all metrics, based upon the Times Higher Education Survey 2021.[1]

Relevant representatives were identified and invited to interview. All worked within Student Services departments, with the majority serving as Director of Student Welfare or Wellbeing; others served as Head of Disability, Deputy Director of Wellbeing, or Head of Student Services.

Interview structure

In advance of the interviews, all participants were sent a consent form, a list of questions, and a briefing document on the digital mental healthcare landscape. At the start of each interview, one of the facilitators introduced the project and the project members on the call. Following this, verbal consent was requested, and participants were given the opportunity to ask questions about the informed consent form that had been pre-circulated. The informed consent statement explained that their answers would be fully anonymised, to allow participants to feel comfortable expressing candid opinions and beliefs about potentially sensitive topics.

Once consent was obtained, the interview recording began. A semi-structured format was selected for several reasons:

  • It was suitable for an exploratory-stage interview
  • It encouraged the exploration of tangential issues that participants felt were relevant
  • It was more likely to promote a relaxed interview and honest feedback

The interviews were split into three main sections. Section 1 focused on attitudes to the advantages and disadvantages of digital mental health technologies and current procurement practices at UK universities. Section 2 focused on how current procurement processes align with duty of care. Section 3 focused on collecting feedback on the ethical assurance methodology.

At the end of each interview, the recording was stopped and the participant was asked whether they had any further questions for the interviewers. Participants were thanked for their time and told they would receive a copy of this report.

Analysis

To facilitate qualitative analysis, automated transcriptions were verified and corrected by members of the Turing project team before the original recordings were deleted. Two project team members then analysed these transcriptions independently to identify salient themes, grouped into two sections:

  1. Contextual challenges to the ethical deployment of digital mental healthcare
  2. Administrator feedback on trustworthy assurance

Once themes were identified, they were discussed, key conclusions were summarised, and quotes from the interviews were extracted. This preliminary analysis was then set aside until the student workshops had been completed.

University students

Recruitment and participant details

Workshops with students from UK universities were conducted between February and March 2022. Two six-hour workshops were held, facilitated by the project team (see above). In all instances, workshops were conducted remotely via Zoom to support geographic diversity. Participants were selected from across all UK universities.

An open call for applications was published by the Turing, inviting any student currently enrolled on an undergraduate or postgraduate course. To apply, students completed an application form with a series of optional EDI questions alongside the following three project questions:

  1. Why are you interested in participating in this workshop?
  2. How do you understand the aims of this research in your own words?
  3. Do you have any prior understanding of the ethics of digital mental healthcare? If so, please provide details.

Participants were selected from a total of 45 applications, and 25 students joined the final workshop sessions.

Workshop design

In advance of the workshops, all participants were sent a consent form and a briefing document on the digital mental healthcare landscape. At the start of each workshop, one of the facilitators introduced the project and the project members on the call. Following this, verbal consent was requested, and participants were given the opportunity to ask questions about the informed consent form that had been pre-circulated. The informed consent statement explained that their answers would be fully anonymised, to allow participants to feel comfortable expressing their opinions and beliefs about potentially sensitive topics.

Once consent was obtained, the recording began.

The workshops were split into two sections. Section 1 focused on attitudes towards the advantages and disadvantages of digital mental health technologies and on identifying which values and principles mattered to students. In Section 2, students evaluated two of the illustrative case studies (see above), with the aim of identifying possible ethical issues that might arise if the technologies were deployed in a university context.

During the workshop sessions, participants were also asked to complete an online survey seeking individual feedback on the ethics of digital mental healthcare in a university context. At the end of each workshop, the recording was stopped and participants were asked whether they had any further questions for the facilitators. Participants were thanked for their time and told they would receive a copy of this report.

Analysis

To facilitate qualitative analysis, automated transcriptions were verified and corrected by members of the Turing project team before the original recordings were deleted. These transcriptions were accompanied by notes taken by members of the Turing team during the workshops to help identify key themes. In addition, the completed surveys were analysed. Finally, because participants had been encouraged to take structured notes on the case studies during the sessions, these online notes were also analysed.

Each of these elements was taken into account in identifying key themes. Once themes were identified, key conclusions were summarised and quotes from the workshops were extracted. This preliminary analysis was then set aside, to be combined with the pre-existing analysis from the administrator interviews.

Final Analysis

For students and administrators, key themes falling under the category of “contextual challenges to the deployment of digital mental health technologies” were compared to identify cross-cutting themes. During analysis, researchers were careful to maintain the distinction between administrator and student perspectives so that agreements and disagreements could be identified. Student and administrator feedback was relevant to all six of the themes identified (see Chapter 3), making every theme cross-cutting, although in many cases students and administrators contributed to the development of these themes in contrasting ways.

In addition to contextual challenges, the final report identifies a series of methodological challenges. These are largely drawn from administrator interviews where more time was dedicated to the evaluation of trustworthy assurance. Nevertheless, where relevant, student perspectives are used to supplement this analysis.

Regulators and Policy-makers, Developers, and Researchers

Recruitment and participant details

Unlike the previous workshops, participants from these three groups were recruited via specific invitations based on their roles and responsibilities; 39 invitations were sent and 30 participants attended. While this introduces a source of selection bias, it allowed the project team to utilise existing relationships to ensure that senior decision-makers and researchers from across the UK government and civil service, the third sector, the development community, and academia were included.

A decision was made early in the project to bring these different stakeholder groups together, rather than holding separate workshops for each group. This gave participants an opportunity to openly discuss and identify shared values and differences in perspectives and approaches. It limited the amount of comparative analysis we could perform, so a separate survey was undertaken to explore these differences specifically. However, given the limited sample size, we accepted that there would be a limit to how much we could generalise from our findings.

Workshop design

Two workshops were held with the participants in May 2022. The first workshop focused on broader ethical, social, and legal issues, and the second focused on the Trustworthy Assurance methodology. Because of the longer time dedicated to these groups, several presentations were given by the project team, including two on the Trustworthy Assurance methodology, which was a key focus for the groups. The first workshop was split into three sections. Section 1 focused on exploring the values and principles that were important to the participants. Section 2 identified and explored salient issues and concepts in the four case studies. Section 3 introduced participants to the Trustworthy Assurance methodology.

The second workshop provided a refresher on the Trustworthy Assurance methodology and then involved the design and production of a hypothetical assurance case for one of the four case studies, which the participants had voted for in the previous workshop. The participants used our online platform to put the assurance cases together, focusing on a key ethical goal selected from a set of options they had put together.
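
For readers unfamiliar with the structure of an assurance case, the following minimal sketch shows the kind of goal, property claim, and evidence hierarchy the participants assembled. The class and field names, and the example claims, are illustrative assumptions, not the schema of the online platform.

```python
# Minimal sketch of the goal -> property claims -> evidence structure that an
# assurance case follows. Names and examples are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g. a document or study grounding a claim

@dataclass
class PropertyClaim:
    statement: str  # a property of the system or project that supports the goal
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class GoalClaim:
    statement: str  # the top-level ethical goal being assured
    claims: list[PropertyClaim] = field(default_factory=list)

# A fragment of a hypothetical case for the clinical decision support study:
case = GoalClaim(
    "The system's recommendations are fair across patient groups.",
    claims=[
        PropertyClaim(
            "Model performance has been evaluated across demographic subgroups.",
            evidence=[Evidence("Subgroup performance evaluation report")],
        )
    ],
)
```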

Analysis

To facilitate qualitative and thematic analysis, automated transcriptions were verified and corrected by members of the project team before the original recordings were deleted. These transcriptions were accompanied by notes taken by members of the project team during the workshops to help identify key themes. In addition, the completed surveys were analysed, and the unedited results are provided in Chapter 4.

Each of these elements was taken into account in identifying key themes, with a specific emphasis on their relation to the Trustworthy Assurance methodology. Therefore, unlike the research conducted with UK universities, for this work we were looking to identify themes that could support the operationalisation of ethical principles as key components in an assurance case.

Users of digital mental health technology

Recruitment and participant details

Based on feedback from our independent and internal ethics review process, we decided to carry out these workshops with the support of the McPin Foundation, a mental health research charity that provides advice and support on research strategies involving the participation and expertise of individuals with lived experience of mental health issues.

Representatives of McPin handled recruitment, so that all contact with individuals before and after the workshops was managed by a single organisation. Participants were required to be over 18 years old and to have either a) direct lived experience of using DMHTs, or b) direct experience of caring for someone who made use of DMHTs. No other constraints were set, and no information about the specific reason for use was requested (e.g. a CBT app for anxiety).

Two workshops were offered to participants, both held in July 2022: one in person at the Alan Turing Institute and one online via Zoom. For the in-person workshop, 10 participants were selected from around Greater London due to the need to travel to the Institute, and travel expenses were reimbursed. For the online workshop, 10 participants were selected from across the UK. All participants were reimbursed for their time.

Workshop design

All workshops were facilitated by two members of staff from the McPin Foundation. The project team supported the McPin facilitators and also gave two presentations (our slides can be downloaded here). However, in these workshops the project team took a less hands-on role than in the other engagements.

The workshops with users focused on the moral attitudes and perspectives of the participants and only indirectly explored trustworthy assurance. While participants were informed about the Trustworthy Assurance methodology, so they could understand why they were carrying out the respective activities, it was not emphasised in the workshops. This choice was influenced by a) the feedback from students gathered during that sub-project (see Chapter 2), and b) preliminary discussions with our workshop facilitators at McPin about the accessibility of the material within the time constraints.

Two activities were designed for these workshops (using breakout sessions to allow wider contribution from the participants):

  • The first activity was a guided discussion of the following three questions:
      ◦ What is digital mental healthcare?
      ◦ What are some positive use cases for digital mental health technologies?
      ◦ What are some negative use cases for digital mental health technologies?
  • The second activity involved a structured task where the participants evaluated several statements, which were designed as possible assurance claims from one of the hypothetical developers in our case studies (download list of claims and questions).

Analysis

To facilitate qualitative and thematic analysis, minutes and notes were provided by the McPin Foundation facilitators and supplemented by notes taken by the Turing project team during the meetings. No recordings were taken or transcribed, to help create a more comfortable environment for the participants. The Turing team used the minutes, notes, and the original documents from the workshop activities to identify general themes (see Chapter 4) and to extract the core attributes used to operationalise the ethical goals associated with the argument patterns presented in Chapter 5.


  1. Desk research was used to determine different methods for filtering universities, for example, based upon the best and worst student satisfaction. In the end, the top 20 universities according to the Times Higher Education Survey were selected because of the broad range of digital offerings available at these institutions. Admittedly, this decision introduced some selection bias into our engagements, which is why we present our findings as a limited exploratory analysis and recommend further themes for ongoing research and analysis.