A great deal has already been written on this subject, so I will focus primarily on sharing existing resources, along with an archive of additional sources. To begin, consider some examples and implications of biased AI across a range of crucial applications. Many people are already familiar with facial recognition software, in part because it has become a standard feature of many smart devices. In 2018, Joy Buolamwini and Timnit Gebru published a landmark study which “found that commercial facial recognition tools sold by companies such as IBM and Microsoft were 99 percent accurate at identifying White males, but only 35 percent effective with Black women”. A tool that fails at its purported task two out of three times demands immense reform, especially when used in high-stakes situations.
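The disparity above only becomes visible when accuracy is measured separately for each demographic group rather than in aggregate. A minimal sketch of that kind of disaggregated audit is below; the function name, group labels, and toy data are my own illustrative assumptions, not taken from the study itself.

```python
# Hypothetical sketch: auditing a classifier's accuracy per demographic group.
# All names and data are illustrative, not drawn from the Gender Shades study.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: a decent-looking overall accuracy (4/5 = 80%) can hide
# a large gap between groups.
records = [
    ("group_a", "match", "match"),      # correct
    ("group_a", "match", "match"),      # correct
    ("group_a", "match", "match"),      # correct
    ("group_b", "no_match", "match"),   # error
    ("group_b", "match", "match"),      # correct
]
print(accuracy_by_group(records))  # group_a: 1.0, group_b: 0.5
```

The point of the design is simply that a single aggregate metric averages away exactly the failures that matter; reporting per-group numbers is the first step toward accountability.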
Another area of concern is health care, where the misuse or failure of AI would be catastrophic. As Noseworthy explains, “[d]eep learning algorithms derived in homogeneous populations may be poorly generalizable and have the potential to reflect, perpetuate, and even exacerbate racial/ethnic disparities in health and health care”. If the research and data are skewed, the resulting diagnoses and treatments will likely miss the mark as well, preventing patients from receiving proper, life-changing care.
A third area where data bias carries significant ramifications is hiring, where computers are increasingly used to screen applicants. Associate attorney McKenzie Raub explains that, “[w]ithout accountability and responsibility, the use of algorithms and artificial intelligence leads to discrimination and unequal access to employment opportunities”. Raub goes on to describe how the choices that go into gathering and labeling data can produce biased and discriminatory results. There must be checks in place to prevent existing social problems from being expanded and amplified by “algorithmic flaws”.
With these consequences in mind, let us turn to how bias is transferred into these machine learning systems and how it might be accounted for. Quite simply, the people who build and deploy these systems lack diversity. In fact, “only five percent of the workforce in the technology industry are from one of the underrepresented groups”. Gebru, a renowned computer scientist and advocate for ethics and diversity in AI, described how at one AI conference, out of hundreds of participants, she “counted six black people in the entire audience, and realized she was the only black woman in attendance”.
Not only does this reflect a systemic imbalance in employment opportunities, but it also means that technology is not “going to address problems that are faced by the majority of people in the world”. How AI should be used, what should be included in its training, and what side effects its existence may have are all discussions in which a multitude of voices should be considered.
We must treat this issue with care and transparency, listening to thinkers and researchers like Gebru who honestly question the ways we think about AI. She has worked to push companies to provide their users and employees with more information. This would allow people to understand the origins of their tools, examine the biases that may be embedded in them, and make informed decisions about whether to use these technologies. All said and done, this is an issue that must be addressed by a multitude of people, not just by those already in the field of machine learning. This is precisely where the voices of artists become paramount.