Communication and Deployment Biases


Fig. 5 A simplified schematic of a project workflow.

Deployment Bias

Deliberative Prompts

  • Have you consulted relevant stakeholder groups to identify and understand the potential impact of human factors in the context or environment where your model or system will be used?

Related biases: Training-Serving Skew, Automation Bias

Automation Bias

Deliberative Prompts

  • Have you considered requirements such as transparency or interpretability when designing your model?

  • Does the intended use domain demand a greater need for interpretability, and how might this affect the model’s accuracy (e.g. by reducing model complexity)?
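The interpretability/accuracy trade-off raised in the prompts above can be made concrete by comparing a low-complexity, explainable model against a more complex one on the same task. The following is an illustrative sketch (not part of the original guidance), using scikit-learn and one of its bundled datasets purely as an example:

```python
# Illustrative sketch: a shallow decision tree can be drawn and walked
# through with stakeholders, while a large ensemble is typically more
# accurate but far less transparent. Dataset and models are examples only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: a depth-3 tree has only a handful of decision rules.
simple = DecisionTreeClassifier(max_depth=3, random_state=0)
# Opaque: a 200-tree forest averages many deep trees.
complex_model = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("shallow tree", simple), ("random forest", complex_model)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

Whether the accuracy gap between the two models justifies the loss of transparency is exactly the kind of question these deliberative prompts are meant to surface.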

Related biases: Dismissal Bias

Dismissal Bias

Deliberative Prompts

  • What steps have you taken to evaluate and measure the performance of your model across (protected) sub-groups of the population? What measure of fairness have you adopted for this purpose?

  • Is your model likely to be implemented within a system that alerts or notifies users (e.g. early warning system)? If so, have you considered the necessary human factors (e.g. usability, interpretability, explainability)?
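The first prompt above, on measuring performance across protected sub-groups, can be sketched with two simple disaggregated metrics. This is a minimal illustration, not a prescribed method: the function names are hypothetical, and real evaluations should use an agreed fairness definition and a dedicated library where appropriate.

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy of predictions computed separately within each sub-group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    y_pred, groups = map(np.asarray, (y_pred, groups))
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    return float(max(rates) - min(rates))
```

For example, if a model predicts positives for one group twice as often as for another, `demographic_parity_gap` reports that disparity directly, and `subgroup_accuracy` reveals whether overall accuracy masks poor performance on a particular sub-group.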

Related biases: Automation Bias

Biases of Rhetoric or Spin

Deliberative Prompts

  • Are there methods of internal peer review (or “red teams”) that you can use to proactively identify cases where you are going too far beyond what is justifiably implied by the data?

Related biases: Positive Results Bias

Positive Results Bias

Deliberative Prompts

  • Have you sufficiently reported all relevant results of the study, even if they speak against your favoured hypothesis?

  • If your study is not accepted in your favoured journal, have you made provisions to ensure that the results can be accessed through other repositories or services that promote open science principles?

Related biases: Biases of Rhetoric or Spin, Confirmation Bias