
Turing Commons Roadmap

This blog post sets out a general roadmap and explores a series of objectives for the Turing Commons as we head into 2023.

Abstract illustration of two people making ethical decisions as they walk along a digital road towards a more responsible digital society. Illustration by Jonny Lighthands.

The Turing Commons started out, as all good projects do, with an idea and a name, presented to the Turing's Data Ethics Group in the Winter of 2019 (🎩 to David Leslie and Christina Hitrova).

With the important decision out of the way, our plan was to build a community platform (the “commons”) to host and support a set of resources that were freely open and accessible to all people with an interest in the ways that data-driven technologies are changing society.

Those who are familiar with Garrett Hardin’s influential analysis of the ‘Tragedy of the Commons’2 will appreciate why the “commons” cannot be left unmanaged if it is to serve a sustainable and collective benefit. However, such governance or curation should not occur in the dark, lest it end up serving the vested interests of a small minority.

So, in the spirit of transparency and openness, this post sets out our current roadmap for how and where we plan to develop the Turing Commons, and how we will work with others to ensure that the resources are co-designed to serve genuine needs and challenges.

Goals and Objectives

From the start of this project, a key goal has been to develop high-quality resources for academic researchers whose work involves the design, development, or evaluation of data-driven technologies, including machine learning and artificial intelligence. To that end, we have produced and delivered three courses on the following topics:

  1. Responsible Research and Innovation
  2. Public Engagement for Data Science and AI
  3. AI Ethics and Governance

For each course we started by hosting a series of workshops with 10-15 researchers to identify the topics, questions, and issues which were most important to them. The feedback we received from these workshops was used to develop our original courses.

We have learned a lot from planning and delivering these courses. Following the delivery of our first course on Responsible Research and Innovation, we sought feedback from the participants, including a specific request for recommendations for improvements.

Several participants suggested allowing more time to explore the new ideas being presented. In response, we redesigned our second course to strike a better balance between time spent delivering new material and time spent exploring it through activities or group discussion. Others recommended closer integration of our activities with the core material. To address this, we ensured that our next course had a clear thread running through each of the days, progressively building up to a capstone activity.1

The benefit of these small but important changes was clear in the feedback from our second course:

Participant Feedback

The activities and group work helped understand the course content really well so I would say that the activities are the highlights.

Being responsive to the changing needs of our course participants is a key objective for us, and is central to the above goal of creating high-quality (and needs-driven) resources. However, designing content and resources that are valuable for all of our participants has been challenging given the multi-disciplinary setting of our courses. For example, if we provided an illustrative example to help explain a key concept, those participants who happened to have a background in the respective area (e.g. healthcare) were able to grasp the idea more readily. Addressing this challenge leads us to the first of our new objectives.

People walking up a mountain, which is represented as a graph. Illustration by Jonny Lighthands.

Modular and Tailored Resources

At present, we are revisiting the content of our three courses and focusing on the following set of objectives:

First Objectives

  1. Revise the content of each course based on what worked well, what did not, and what needs updating.
  2. Modularise the courses to allow more flexibility for participants who are unable or do not wish to take a five-day course.
  3. Design a more flexible set of materials and resources that can be tailored to different disciplines.

Let’s take a closer look at (2) and (3).

Asking researchers to block five days from their schedules to attend a course is demanding, and can also be exclusionary for some (e.g. those with parenting responsibilities, or part-time jobs). Designing our courses in this way allowed us to test our material as a whole, but our next step is to modularise the courses to enable a more flexible approach that supports different modes of learning.

To this end, we have started splitting our original courses into four main components:

  1. Core modules: the primary topic areas covered in our courses
  2. Optional modules: additional content that can be brought across from other courses, or that takes specific concepts in different directions (e.g. data privacy and protection)
  3. Activities: the set of activities (both self-directed and group-based) that help users understand the module’s topics
  4. Case Studies: illustrative case studies that anchor the topics in concrete cases and practical examples

Separating these components allows us to design and develop a modular approach where users can tailor our content and resources to their own skills and training needs. For example, a researcher in robotics could select three of our recommended (core) modules on public engagement, combine them with introductory modules on AI ethics (from a separate course), and supplement them with the recommended activities and domain-specific case studies to support their learning. This results in a modular approach to building ‘skills tracks’, as depicted in Figure 1.

Figure 1: a schematic depicting the modular approach to our skills tracks, comprising core (and optional) modules, activities, and case studies
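To make this more concrete, the sketch below shows one way a tailored skills track could be represented as data. It is purely illustrative: the types, module titles, and identifiers are hypothetical and not our actual schema.

```typescript
// Illustrative types for a modular skills track (all names are hypothetical).
interface Module {
  id: string;
  title: string;
  kind: "core" | "optional";
  sourceCourse: string; // e.g. "Public Engagement for Data Science and AI"
}

interface Activity {
  id: string;
  title: string;
  mode: "self-directed" | "group";
}

interface CaseStudy {
  id: string;
  title: string;
  domain: string; // e.g. "robotics", "healthcare"
}

interface SkillsTrack {
  learner: string;
  modules: Module[];
  activities: Activity[];
  caseStudies: CaseStudy[];
}

// Example: a robotics researcher combining public engagement core modules with an
// introductory AI ethics module, plus a recommended activity and a domain-specific
// case study (module and case study titles are made up for illustration).
const roboticsTrack: SkillsTrack = {
  learner: "robotics-researcher",
  modules: [
    { id: "pe-01", title: "Communicating Uncertainty", kind: "core", sourceCourse: "Public Engagement for Data Science and AI" },
    { id: "pe-02", title: "Facilitating Dialogue", kind: "core", sourceCourse: "Public Engagement for Data Science and AI" },
    { id: "pe-03", title: "Evaluating Engagement", kind: "core", sourceCourse: "Public Engagement for Data Science and AI" },
    { id: "ae-01", title: "Introduction to AI Ethics", kind: "optional", sourceCourse: "AI Ethics and Governance" },
  ],
  activities: [{ id: "act-01", title: "Stakeholder Mapping", mode: "group" }],
  caseStudies: [{ id: "cs-01", title: "Assistive Robotics in Care Settings", domain: "robotics" }],
};
```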

In addition to being more flexible and tailored, this approach also enables us to focus on novel forms of self-directed learning, which have so far been absent from our platform (e.g. creating an online learning environment for individuals who cannot attend hosted courses).

While this sets out our general approach, there is an important component missing... who is involved in the redesign and redevelopment?

Participatory Design and User Engagement

Developing domain-specific case studies that can be used to tailor core modules to the needs of researchers in different disciplines requires domain-specific expertise. Therefore, the redesign and redevelopment of our content and resources has to be conducted in a participatory manner.

Recently, we have started collaborating with the UKRI CDT in Biomedical AI (University of Edinburgh) and the UKRI CDT in AI for the study of Environmental Risks (University of Cambridge) to develop and evaluate two of our newly designed core modules for Responsible Research and Innovation. This participatory design work will include the co-creation of tailored case studies for their respective domains (e.g. predictive diagnostics for healthcare, earth monitoring and surveillance technologies). It will also allow us to work with domain experts to identify the most pressing needs and challenges related to skills and training gaps in responsible research and innovation.

Logos of our two pilot partners

A key output from our planned workshops will be to evaluate two core modules on ‘AI Fairness’ and ‘Explainable AI’, which will be adapted to the context of the two Centres for Doctoral Training (CDTs). In addition, we will evaluate activities for self-directed learning and group activities. These activities will also make use of the case studies (to be co-developed) that will help ground the content in practical and domain-specific examples.

Although limited to two domains during our pilot phase, we intend to explore further areas following our initial evaluations. However, scaling in this way can be time-consuming and slow, so we are also researching and developing a new technical infrastructure to support the process.

Building an Open Infrastructure

Our current website is hosted on GitHub Pages, and uses the fantastic Material for MkDocs theme for the MkDocs static-site generator. This set-up has allowed us to focus on creating written content using Markdown, rather than worrying about web development. However, there are limitations to static sites, and some of these limitations also serve as barriers to our next objective:

Second Objective

Create an open platform that supports interactive and self-directed learning approaches that are customisable to different users and groups.

As mentioned in the previous section, a key milestone on our project’s roadmap is the development of domain-specific case studies that enable users to tailor our core modules to their domain (e.g. robotics or journalism). Another milestone for our future roadmap is to develop a case study repository and API to build additional functionality into our website. Doing so will allow users to better tailor skills tracks to their needs, using the available case studies to ground the content of the modules.

And, where there are gaps in our resources, an open API will allow partners to more easily integrate their own contributions into the platform, in turn supporting and caring for the commons.

This goal requires us to do two things:

  1. Develop a database (or data store) to serve case study data to our front-end
  2. Enable more interactivity in our website, which uses the database to present specific content to users based on pre-specified information and preferences (a rough sketch of this idea follows the list below), such as
    • Disciplinary background
    • Research priorities
    • Prior knowledge of topics
    • Time available for learning
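As a minimal sketch of how these preferences could drive content selection, the following is illustrative only: the fields, types, and selection logic are assumptions rather than a description of our actual implementation.

```typescript
// Hypothetical preferences a learner might provide (illustrative only).
interface LearnerPreferences {
  discipline: string;     // disciplinary background, e.g. "journalism"
  priorities: string[];   // research priorities
  knownTopics: string[];  // topics the learner already knows
  hoursAvailable: number; // time available for learning
}

// Hypothetical catalogue entry for a module served from the data store.
interface CatalogueModule {
  id: string;
  topic: string;
  domains: string[];      // disciplines the module is tailored to
  estimatedHours: number;
}

// Select modules the learner has not already covered, that match their
// discipline, and that fit within the time they have available.
function selectModules(catalogue: CatalogueModule[], prefs: LearnerPreferences): CatalogueModule[] {
  const relevant = catalogue.filter(
    (m) => !prefs.knownTopics.includes(m.topic) && m.domains.includes(prefs.discipline)
  );
  const selected: CatalogueModule[] = [];
  let remaining = prefs.hoursAvailable;
  for (const m of relevant) {
    if (m.estimatedHours <= remaining) {
      selected.push(m);
      remaining -= m.estimatedHours;
    }
  }
  return selected;
}
```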

Figure 2 shows how this could work, using something like a JSON file to dynamically adjust elements of a template for our case studies.

Figure 2: an example of how a JSON file could populate an HTML document for a case study on decision support systems
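The sketch below illustrates the same idea in code: a small record, as it might be stored in a JSON file, is used to fill an HTML template. The field names, template, and example content are hypothetical and not the schema shown in Figure 2.

```typescript
// Hypothetical case-study record, as it might be stored in a JSON file.
interface CaseStudyRecord {
  title: string;
  domain: string;
  summary: string;
  ethicalIssues: string[];
}

// Populate a simple HTML template from the record (illustrative only).
function renderCaseStudy(data: CaseStudyRecord): string {
  const issues = data.ethicalIssues.map((issue) => `<li>${issue}</li>`).join("\n      ");
  return `
    <article class="case-study">
      <h2>${data.title}</h2>
      <p class="domain">${data.domain}</p>
      <p>${data.summary}</p>
      <ul>
      ${issues}
      </ul>
    </article>`;
}

// Example: a made-up case study on decision support systems.
const example: CaseStudyRecord = {
  title: "Clinical Decision Support Systems",
  domain: "healthcare",
  summary: "How predictive models can support, rather than replace, professional judgement.",
  ethicalIssues: ["Accountability for errors", "Explainability of recommendations"],
};

console.log(renderCaseStudy(example));
```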

Moreover, the modular design of these case studies will also allow for partial updating of other parts of our website (e.g. information boxes in our online guidebooks). These two objectives can be achieved using our current infrastructure as a proof of concept. However, our longer-term plans will require a more thorough redesign to add features such as the following:

  • Public API access to allow users to submit new case studies to our repository (i.e. feeding into the “commons”); a rough sketch follows this list
  • User authentication to allow individuals to keep track of their progress, store notes, and build new skills tracks that are customisable to their needs
  • Improved interactivity for activities and learning, relying on modern web frameworks with state management functionality (e.g. React)
  • Communication and collaboration between users through our platform
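As a rough sketch of the first of these features, a public submission endpoint might be called from a partner’s own tooling along the following lines. The endpoint URL, payload, and function are entirely hypothetical.

```typescript
// Hypothetical payload for submitting a new case study to the commons.
interface CaseStudySubmission {
  title: string;
  domain: string;
  summary: string;
  author: string;
}

// Submit a case study to a (hypothetical) public API endpoint.
async function submitCaseStudy(submission: CaseStudySubmission): Promise<void> {
  const response = await fetch("https://example.org/api/case-studies", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(submission),
  });
  if (!response.ok) {
    throw new Error(`Submission failed with status ${response.status}`);
  }
}
```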

Implementing these features will take time and resources, and we hope to ensure our roadmap remains open to external collaborators who can steer the project in new and exciting directions. This brings us to our final objective.

Expanding the Commons

An illustration of a participatory design workshop. Illustration by Jonny Lighthands.

Third Objective

To further build on our current resources to meet the needs of additional groups and communities.

Many of the topics and issues we have explored in our courses so far have importance beyond the academic community. We know this from some of our own public engagement work, as well as from discussions we have had with partners from the public, private and third sectors. Although we have started with an academic audience, this is not our end point.

The following list contains examples of groups who we also wish to build resources with and for:

  1. Regulators and Policy-Makers
  2. Journalists and Science Communicators
  3. Members of the Public
  4. Industry Professionals (e.g. developers and data analysts)

While there are topics in our current resources that will be of interest to some of these groups (e.g. public engagement for journalists; AI fairness for regulators), we cannot assume that resources designed for the academic community will meet the needs of these other groups. Therefore, in the near future we have plans to work with these groups to co-design materials and resources that make the ethical, social, and legal issues of data-driven technologies clear and accessible to all.

We have also recently taken part in a workshop with the UK Government’s Science and Innovation Network, organised by the Turing’s International team. The purpose of this workshop was to identify ways that we can work with international partners to support the global community and in turn co-create bidirectional forms of value.

By expanding the scope of the Turing Commons in the above ways, we hope that our first objective also feeds into a wider-reaching goal.

Overarching Goal

To equip diverse groups and communities with the knowledge and understanding of data-driven technologies so they are able to participate fully in democratic forms of deliberation about how these technologies should be used to benefit individuals and society.

We’re excited about these plans and developments, and we look forward especially to working with new partners who are passionate about creating a responsible and trustworthy ecosystem for data-driven technologies.

How can you get involved?

At the moment, we are still working on creating structured forms of support to allow interested parties to get involved. For instance, we will be releasing updates on this blog and on social media over the coming weeks with details of how to contribute directly.

For the time being, the best way to get involved is simply to reach out to us and start a conversation. Do you have an idea for a case study? Do you want to take part in one of our evaluations? Do you have time to help us maintain the current GitHub repository?


  1. See this blog post for more information. 

  2. Hardin, G. (1968). The Tragedy of the Commons. Science, 162(3859), 1243–1248. http://www.jstor.org/stable/1724745