In their contribution, Ugar and Malele1 shed light on an often overlooked but crucial aspect of the ethical development of machine learning (ML) systems to support the diagnosis of mental health disorders. The authors focus on the danger of misdiagnosing mental health pathologies that do not qualify as such within sub-Saharan African communities and argue for the need to include population-specific values in the design of these technologies. However, they do not offer an analysis of the nature of the harm caused to these populations when their values go unrecognised.
Building on Ugar and Malele’s considerations, we add a further perspective to their analysis by showing why designing with the intended populations’ values in mind is necessary to avoid the occurrence of epistemic injustices.2 First, we argue that failing to acknowledge the hermeneutical offerings of the populations interacting with these systems can qualify as contributory injustice.3 Second, we show that this form of injustice paves the way for patterns of epistemic oppression that demand scrutiny, particularly given the epistemic authority these systems increasingly acquire.
Contributory injustice in ML for mental health support
Dotson’s concept of contributory injustice3 points out that, in the case of blind spots in collectively shared epistemic resources, people in marginalised social positions often develop …