
Consider context


No AI system exists in a vacuum. Every system is embedded in a wider socio-technical environment that affects how its deployment functions. Considering the wider context in which the system operates is therefore imperative for responsible research and innovation in AI.

In practice, this means thinking diligently about the conditions and circumstances surrounding the system, its operation, and its outputs, including the norms, values, and interests that inform the people developing the project and that shape and motivate the reasonable expectations of the project's stakeholders.

Some of the questions to bear in mind when considering context are:

  • How are these norms, values, and interests influencing or steering the project and its outputs?

  • How could they influence the users’ meaningful consent and expectations of privacy, confidentiality, and anonymity?

  • How could they shape the project’s reception and impacts across affected communities?

Considering these questions will prompt reflection within the project team and will help to anticipate the potential negative impacts that the use of an AI system may have.

Considering context also involves taking into account the specific domain(s), geographical location(s), and jurisdiction(s) in which the project is situated and reflecting on the expectations of affected stakeholders in these specific contexts. Some relevant questions are:

  • How do the existing institutional norms and rules in a given domain or jurisdiction shape expectations regarding project goals, practices, and outputs?

  • How do the unique social, cultural, legal, economic, and political environments in which different projects are embedded influence the conditions of data generation, the intentions and behaviours of the research subjects that are captured by extracted data, and the space of possible inferences that data analytics, modelling, and simulation can yield?

In summary


All in all, contextual considerations should, at minimum, track three vectors:

  1. The first involves considering the contextual determinants of the conditions of the project’s production (e.g., thinking about the positionality of the team, the expectations of the relevant community of practice, and the external influences on the aims and means of research exerted by funders, collaborators, and providers of data and research infrastructure).

  2. The second involves considering the context of the users of the system (e.g., thinking about subjects’ reasonable expectations of gainful obscurity and ‘privacy in public’ and considering the changing contexts of their communications such as with whom they are interacting, where, how, and what kinds of data are being shared).

  3. The third involves considering the contexts of the social, cultural, legal, economic, and political environments in which different projects are embedded as well as the historical, geographic, sectoral, and jurisdictional specificities that configure such environments (e.g., thinking about the ways different social groups—both within and between cultures—understand and define key values, research variables, and studied concepts differently as well as the ways that these divergent understandings place limitations on what computational approaches to prediction, classification, modelling, and simulation can achieve).