Stage 3: System Deployment

In this section we look at the final four tasks of the project lifecycle model, which are concerned with the deployment of the system. For each task, a description is given along with an explanation of why its typical activities matter, including issues that have an ethical significance.

An illustration of the third over-arching stage of the project lifecycle model: system deployment

Important

Some projects, most notably those concerned primarily with research, may not reach the system deployment stage. However, for those that do, it is important to emphasise that system deployment marks the beginning of a new phase of the project lifecycle, which is concerned with the ongoing maintenance and monitoring of the system.

System Implementation

Task Description

System implementation is the process of putting a model into production and integrating the resulting system into an operational environment. The system enables and structures human interaction with the model within the respective environment (e.g. a recommender system that converts a user's existing movie ratings into recommendations for future watches). As such, this task often requires a significant level of involvement from systems and software engineers and designers.
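
To make the recommender example concrete, here is a minimal sketch of item-based collaborative filtering, in which a user's existing ratings are converted into a ranked suggestion. The ratings matrix, similarity measure, and scoring rule are all illustrative assumptions, not a prescription for how such a system must be built.

```python
# A minimal, illustrative sketch of the recommender example: item-based
# collaborative filtering over a small, hypothetical ratings matrix.
import numpy as np

# Rows = users, columns = movies; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two column vectors of ratings."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, top_n=1):
    """Score each unrated movie by similarity to the movies the user rated."""
    scores = {}
    for movie in range(ratings.shape[1]):
        if ratings[user, movie] > 0:
            continue  # skip movies the user has already rated
        scores[movie] = sum(
            cosine_similarity(ratings[:, movie], ratings[:, rated]) * ratings[user, rated]
            for rated in range(ratings.shape[1])
            if ratings[user, rated] > 0
        )
    # Highest-scoring unrated movies first
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user=0))  # e.g. [2]: the unrated movie most aligned with user 0's tastes
```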

Importance of Task

Regardless of how well the preceding stages have gone, unless the encompassing system is implemented effectively, the model's performance will be impacted. Here, we can note the importance of two forms of implementation:

  • Technical implementation: designing and building the hardware and software infrastructure (e.g. server, interfaces) that will host the model. Among other things, it is important to ensure the technical system is secure, performant, and accessible.
  • Social or organisational implementation: how the technical system is situated within broader social and organisational practices is also important when considering the project's goals and objectives (e.g. appropriately informed users, complementarity with organisational practices).

Both types give rise to a large number of considerations, such as the need to ensure that the system is certified for sale (e.g. according to necessary safety legislation); that it complies with relevant regulatory requirements (e.g. processing of personal data in accordance with the General Data Protection Regulation); that it is performant and secure (e.g. against cyberattacks); and that it is accessible (e.g. for users with disabilities).
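
As a concrete illustration of the technical implementation side, the sketch below exposes a model behind a small HTTP interface with basic input validation. Flask is just one possible choice, and DummyModel is a hypothetical stand-in for a trained model; a production deployment would also need authentication, encryption, logging, and accessibility testing of any user-facing interface.

```python
# A minimal sketch of hosting a model behind an HTTP interface.
# Assumes Flask is installed; the model and endpoint are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

class DummyModel:
    """Placeholder for a trained model loaded from storage or a registry."""
    def predict(self, features):
        return sum(features)  # hypothetical scoring logic

model = DummyModel()

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    # Basic input validation: reject non-numeric feature vectors.
    if not all(isinstance(x, (int, float)) for x in features):
        return jsonify({"error": "features must be numeric"}), 400
    return jsonify({"prediction": model.predict(features)})

if __name__ == "__main__":
    app.run(port=8080)
```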

Illustrative Example

Illustrative examples coming soon!

User Training

Task Description

'User training' includes any form of support, upskilling, or capacity building that is offered to or carried out with the individuals or groups who are required to operate the system in question (e.g. mandatory training in a safety-critical context), or who are likely to use the system (e.g. consumers).

Importance of Task

User training is rarely carried out by the same team members who designed and developed the system. While developers may produce documentation for the model (see above), this is often insufficient as a form of user training; additional formal training, such as workshops or courses, may be required depending on the complexity of the system.

Insufficient or inadequate training can create conditions in which cognitive biases such as algorithmic aversion thrive (e.g. users do not trust the performance or behaviours of a trustworthy algorithmic system or, conversely, trust the outputs of an untrustworthy system).

Human Factors

The field known as 'human factors' is concerned with the study of human–machine interaction, and is a useful resource for understanding the importance of user training. See Durso (2014) [1] for an overview of the field of human factors, and Burton (2019) [2] for a review and discussion of six forms of algorithmic aversion and proposed solutions to them, including the importance of user training.

Illustrative Example

Illustrative examples coming soon!

System Use and Monitoring

Task Description

Depending on how a system has been designed, its deployment and use in an environment (physical or virtual) can create conditions for ongoing feedback and learning (e.g. robotic systems that employ reinforcement learning, digital twins linked to a monitored counterpart). Regardless, metrics and evaluation methods are commonly used to monitor the performance of a system and to ensure that it retains (or ideally improves on) the level of performance it had when first validated.
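
As a minimal sketch of what such monitoring might look like in practice, the snippet below compares a rolling accuracy window against the level recorded at validation time and raises an alert when performance degrades. The baseline, tolerance, and window size are illustrative assumptions.

```python
# A minimal sketch of automated performance monitoring: track a rolling
# window of prediction outcomes and flag drops below the validated baseline.
from collections import deque

VALIDATED_ACCURACY = 0.92   # accuracy recorded when the model was validated
TOLERANCE = 0.05            # acceptable drop before raising an alert
WINDOW = 500                # number of recent predictions to evaluate

recent_outcomes = deque(maxlen=WINDOW)  # 1 = correct prediction, 0 = incorrect

def record_outcome(correct: bool) -> None:
    recent_outcomes.append(1 if correct else 0)

def check_performance() -> bool:
    """Return True if rolling accuracy is still within tolerance of the baseline."""
    if len(recent_outcomes) < WINDOW:
        return True  # not enough data yet to judge
    rolling_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    if rolling_accuracy < VALIDATED_ACCURACY - TOLERANCE:
        print(f"ALERT: accuracy {rolling_accuracy:.3f} is below the validated baseline")
        return False
    return True
```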

Importance of Task

The potentially dynamic (and sometimes unpredictable) behaviour of machine learning models and AI systems means that ongoing monitoring of the system, whether automated or carried out through manual probing, is important to ensure that issues such as model drift have not degraded performance or resulted in harms to individuals or groups.

What is model drift?

Model drift is a phenomenon that occurs when the underlying data distribution used to train a model changes over time, resulting in a change in the model's performance. This can happen in one of two ways.

On the one hand, drift can occur when the statistical properties of an input variable change (i.e. there is a shift in the underlying data distribution). For example, if house prices rise steadily after a model has been trained, the model will become increasingly inaccurate at predicting them.

On the other hand, there could be a more nuanced reason related to the conceptual or social meaning of the input variables. An example of this could be a machine learning algorithm used in finance to predict whether someone is likely to default on a loan, using variables with social meaning, such as occupation. If the model detects a relationship between certain occupations and a borrower's ability to pay back a loan in a timely manner, the system may recommend more loans to people in those occupations. However, if something happens that has a global impact on those jobs (e.g. increased investment in the sector creating a rise in average wages), this association will change. The result is that people who could otherwise afford a loan may still be denied one because the model's learned associations are now inaccurate.
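
For the first kind of drift, a common and simple check is a two-sample statistical test comparing the distribution a feature had at training time against recently observed values. The sketch below uses a Kolmogorov–Smirnov test with synthetic house-price data; the significance threshold of 0.05 is a conventional, illustrative choice.

```python
# A minimal sketch of data-drift detection: a two-sample Kolmogorov-Smirnov
# test comparing training-time feature values against recent observations.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
# Synthetic stand-ins for the house-price example above.
training_prices = rng.normal(loc=300_000, scale=50_000, size=5_000)
recent_prices = rng.normal(loc=360_000, scale=60_000, size=1_000)  # prices have risen

statistic, p_value = ks_2samp(training_prices, recent_prices)
if p_value < 0.05:
    print(f"Drift detected (KS statistic {statistic:.3f}): consider retraining")
else:
    print("No significant shift in the input distribution")
```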

Illustrative Example

Illustrative examples coming soon!

Model Updating or Deprovisioning

Task Description

If the use and monitoring of a model or system identifies vulnerabilities or inadequate levels of performance, it may be necessary to either update the model through retraining (i.e. looping back through some of the model development tasks) or deprovision the system if it is no longer fit for purpose.

Where the latter option occurs, the project team or organisation may need to start a new project to address any gaps in their business or organisation that arise because of the deprovisioning of the present system. This would then restart the project lifecycle from the beginning, bringing us full circle.

Importance of Task

An algorithmic model that adapts its behaviour over time or context may require updating or deprovisioning (i.e. removing from the production environment). While this can include elements such as improvements to the system's architecture (e.g. for speed or security), the more important component here is the model itself (e.g. the model parameters, the features used).

The need to update or deprovision a model or system can arise for a number of reasons, including:

  • The model is no longer fit for purpose (e.g. it is no longer accurate or reliable).
  • The system contains too many vulnerabilities as it is based on an outdated architecture.
  • The purpose of the system has changed, and it is no longer commercially viable or scientifically useful.
  • The system is no longer compliant with relevant legislation, regulation, or with the organisation's policies and procedures.
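
How these reasons feed into a decision will differ between projects, but as a hypothetical sketch, the checks above might be encoded as a simple triage function run against the latest monitoring results. The field names and accuracy threshold are assumptions for illustration only.

```python
# A minimal sketch of an update-or-deprovision check based on the reasons
# listed above. All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SystemStatus:
    accuracy: float                 # latest monitored accuracy
    has_known_vulnerabilities: bool # e.g. outdated architecture
    still_fit_for_purpose: bool     # commercially viable or scientifically useful
    compliant: bool                 # legislation, regulation, internal policies

def next_action(status: SystemStatus, min_accuracy: float = 0.85) -> str:
    if not (status.still_fit_for_purpose and status.compliant):
        return "deprovision"   # purpose or compliance failures end the lifecycle
    if status.has_known_vulnerabilities or status.accuracy < min_accuracy:
        return "retrain"       # loop back through the model development tasks
    return "keep in production"

print(next_action(SystemStatus(0.78, False, True, True)))  # -> "retrain"
```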

Illustrative Example

Illustrative examples coming soon!

Next Steps

At this point in the skills track you can choose different routes through the remaining modules. First, you will have a choice of five ethical principles, known as the SAFE-D principles, which help guide the responsible design, development, and deployment of data-driven technologies:

  • Sustainability Coming soon.
  • Accountability Coming soon.
  • Fairness
  • Explainability
  • Data Quality, Integrity, Privacy and Protection Coming soon.

The modules associated with these principles can be carried out in any order, and you are not required to complete all of them. Rather, you can choose to focus on the ones that are most relevant to your project or area of interest. However, at least one of these modules will need to be completed before proceeding to the final module of the skills track.

Further Resources

The following resources provide additional and relevant information on the project lifecycle model:

  • Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies. AI and Ethics. https://doi.org/10.1007/s43681-022-00178-0

  1. Durso, F. T., Margulieux, L. E., & Blickensderfer, E. L. (2014). Human factors. Oxford University Press. https://doi.org/10.1093/obo/9780199828340-0159

  2. Burton, J. W., Stein, M.-K., & Jensen, T. B. (2019). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making. https://doi.org/10.1002/bdm.2155