
Participatory Design Workshops

A collaboration between the AI4ER CDT at the University of Cambridge and the Turing Commons project

Illustration of stakeholder engagement.

It’s not every day that you get to sit down on a wintery morning, cup of tea in hand, to help direct the future of an innovative new course to be delivered by the Turing Commons.

My name’s Orlando, and I’m currently a Master of Research (MRes) student with the Artificial Intelligence for Environmental Risk (AI4ER) CDT at the University of Cambridge. Our course aims to harness the increasing availability of large datasets and powerful compute to better understand physical systems in order to inform national and international policy, and make a tangible, positive impact on the environments in which we live.

The beverage-backed online meeting followed a general call by the Turing Commons team – part of the Alan Turing Institute – for all MRes, PhD, and early career researchers in AI4ER to have their voice heard.

The call responds to a broad consensus that the development of AI methods should be widely accessible, transparent for stakeholders, and, above all, guided by an understanding of potential ethical ramifications from inception to deployment, and beyond.

Given the multi-faceted and wide-reaching research of the AI4ER CDT’s MRes and PhD students, such knowledge is especially vital for us, and we’re also well-placed to help guide the establishment of such a course. Within this collaboration, a variety of voices – including the very earliest of early career researchers, such as myself and fellow MRes student Andrew McDonald – are vital to understanding each aspect of the field, including project lifecycles, open science tools, and public communication of AI technologies.

Our efforts in that first meeting focused on designing the structure and delivery of a Responsible Research and Innovation (RRI) skills track. The COVID-inspired discussions of in-person vs asynchronous online learning resurfaced, and concluded with a resolution to harness the best of both. This will likely involve making a variety of participation options available to everyone, in accordance with their academic (or non-academic) background and availability. This theme of inclusivity also shaped two further priorities: including activities and technology spotlights with universal interdisciplinary appeal, and providing domain-specific case studies that allow different disciplines to use the resources most effectively. The ability to provide continued feedback on the course’s effectiveness – invaluable to the evaluation of its success – will also be embedded in its delivery.

Putting something together on this scale necessarily takes a great deal of time. Production of the skills track is under way, supported by ongoing meetings and the hard work of the Turing Commons team. Our initial meeting was followed by a case study design workshop, where we collectively developed case studies on issues around AI and environmental sciences. The final participatory design workshop will take place on Monday 6 March 2023, where we will have an in-person opportunity to trial some of the course materials on explainability, our newly developed case studies, and various other new activities.

The Alan Turing Institute is always on the lookout for stakeholders willing to offer their experience, advice, and a little of their time to help direct the responsible future development of artificial intelligence and machine learning methods. In particular, there is a need for case studies covering diverse research areas, as the team is building a repository of such studies to enable people from all backgrounds to receive tailored course content.

If you’d like to get involved, please get in touch with Clau Fischer, part of the Turing Commons team.

In the meantime, check out the Turing Commons’ existing skills tracks and the original version of their courses, and watch this space for 2023’s RRI course!