Biases of AI

A Social Challenge that Impacts Everyone

Human biases are nothing new. The topic is well researched and most people are aware of the issue. But what about machines? They are supposed to decide rationally, purely based on data, right? Yet a system cannot access all the data in the world; it depends on the data that humans select and expose the algorithm to. What happens if these data are flawed or insufficient? Self-learning systems unconsciously become a replica of society.

The Netflix documentary “Coded Bias” once again raises the discussion about discrimination by systems that use artificial intelligence (AI). An AI algorithm can only “learn” what it is “taught”, meaning that its predictions are based on the data it receives from human beings. In the film, director Shalini Kantayya presents the experiences of Massachusetts Institute of Technology (MIT) computer scientist Joy Buolamwini with an AI system that was not able to recognize her darker-coloured skin as a face. The same software worked perfectly when she put on a white mask. Buolamwini found out that the software had been designed and developed using data consisting mainly of Caucasian (light-skinned) faces. Such biases in AI systems lead to wrong results, or in her case, to no results at all. She explains that this is not a technical problem; no programmer did a bad job. It is a social problem. The system itself is not racist, it simply decides mathematically, but it can only process and analyse the real-world data it is given.
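
To make this mechanism tangible, here is a minimal sketch in Python. It uses synthetic data and a generic classifier, not the actual face-recognition software discussed above: a group that is underrepresented in the training data ends up with markedly worse accuracy.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Synthetic two-class data; the class boundary lies at a
        # different position for each group (controlled by "shift").
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
        return X, y

    # Group A dominates the training set, group B is underrepresented.
    X_a, y_a = make_group(1000, shift=0.0)
    X_b, y_b = make_group(50, shift=2.0)

    model = LogisticRegression()
    model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    # Evaluate on fresh, equally sized samples of both groups.
    for name, shift in [("A", 0.0), ("B", 2.0)]:
        X_test, y_test = make_group(1000, shift)
        print(f"group {name}: accuracy = {model.score(X_test, y_test):.2f}")

The model fits the dominant group and effectively ignores the smaller one; its errors concentrate exactly where the training data were thinnest, which mirrors the point that a system can only process the data it is given.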

AI researcher Meredith Broussard states that only 14% of the people who develop AI systems are female. Furthermore, the companies currently leading the AI market do not consider ethical aspects in their development (and Germany, by the way, is not one of the pioneers) [1]. Adrian Daub sees the problem of gender bias in the tech industry as well, especially in Silicon Valley. Silicon Valley serves as a role model for many companies, which imitate its workplace design, its language and its standards, and it shapes the way business is done and success is achieved, specifically in the tech industry. Daub draws attention to one standard that shapes our business world and society in a problematic way: the connection between labour and value. In the tech industry, men are associated with the hard, technical “core” work, while women, even with the same qualifications, are pushed into managerial positions, which are said not to require specific skills [3].

According to VentureBeat, a Columbia University study found that “the more homogenous the (engineering) team is, the more likely it is that a given prediction error will appear” [5]. It is no wonder that most of the people reporting on this topic belong to marginalised groups, especially non-white women, i.e. the ones affected by such biases. The documentary “Coded Bias” concludes that biases change society gradually, almost unnoticed.

Digitalisation is evolving and most people gratefully accept new technology: it makes things easier and faster. But its effects on society often go unnoticed and are generally recognized too late. On the one hand, training AI systems with distorted data cements the status quo of a society and thereby prevents the kind of change that social and political initiatives are aiming for. On the other hand, it can even reverse previous progress and scale up existing biases [1].

Sometimes respecting people means making sure your systems are inclusive such as in the case of using AI for precision medicine, at times it means respecting people’s privacy by not collecting any data, and it always means respecting the dignity of an individual. 
– Joy Buolamwini

Nicolas Kayser-Bril from AlgorithmWatch recently conducted another noteworthy experiment. He found that Google Translate swallows the gender in many translations. The reason is that the algorithm optimizes translations for English, and English is most often used as a bridge between other languages. When a gender-neutral English word is translated into a gender-inflected language, the algorithm therefore often falls back on the stereotypical gender.

In Nicolas’ opinion, the Google feature that alerts users that some words could be gender-specific when translating from English does not work well enough. And because Google Translate is integrated into the widely used Chrome browser by default, the incorrect translations presumably affect many people, possibly without their noticing [2].
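
As an illustration of that bridge effect, here is a toy sketch in Python. The hard-coded dictionaries are stand-ins for a real translation service and merely mimic the behaviour AlgorithmWatch describes; the sketch does not call Google Translate.

    # German has distinct female/male forms; English collapses both
    # into one gender-neutral pivot phrase, from which the target
    # language is generated with a single (stereotypical) default.
    TO_ENGLISH = {
        ("de", "die Ärztin"): "the doctor",  # female form
        ("de", "der Arzt"): "the doctor",    # male form, same pivot
    }
    FROM_ENGLISH = {
        ("fr", "the doctor"): "le médecin",  # masculine default
    }

    def pivot_translate(text, src, dst):
        # Translate src -> English -> dst, as bridge-based systems do.
        english = TO_ENGLISH[(src, text)]
        return FROM_ENGLISH[(dst, english)]

    print(pivot_translate("die Ärztin", "de", "fr"))  # le médecin
    print(pivot_translate("der Arzt", "de", "fr"))    # le médecin

The information that the German source referred to a woman never reaches the French output, because it was already lost at the English pivot.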

Debiasing humans is harder than debiasing AI systems. 
– Olga Russakovsky, Princeton [7]

There is no quick fix for biases in AI. First of all, representative training data are needed: if a system is trained with as many dark-skinned as light-skinned examples, and with as many male as female examples, the algorithm treats them equally and its results improve. Second, there are technical ways of ensuring fairness in the programming, e.g. requiring models to have the same predictive value across different groups. Counterfactual fairness is one promising approach; it demands that an AI decision remain the same even if a specific attribute, such as gender, were changed. The biggest challenge, however, is to determine what fairness actually means [4], and that is a question of digital ethics. Companies have a responsibility to cooperate with scientific research and governmental organisations to minimise AI biases, because the fundamental change of digitalisation happens on a social level [6]. If this thinking evolves and is implemented, AI has the potential to make fair, inclusive and all-encompassing decisions, better than humans could. “AI can help humans with bias – but only if humans are working together to tackle bias in AI.” [4]
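
One of the technical checks mentioned above, equal predictive value across groups, can be stated very concretely. The following sketch (plain Python with made-up example data) compares a model’s positive predictive value for two groups; a large gap indicates that the criterion is violated.

    import numpy as np

    # Made-up predictions and outcomes for eight individuals in two groups.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # actual outcomes
    y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0])  # model predictions
    group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    def positive_predictive_value(y_true, y_pred):
        # Share of positive predictions that were actually correct.
        predicted_positive = y_pred == 1
        if predicted_positive.sum() == 0:
            return float("nan")
        return float((y_true[predicted_positive] == 1).mean())

    for g in np.unique(group):
        mask = group == g
        ppv = positive_predictive_value(y_true[mask], y_pred[mask])
        print(f"group {g}: PPV = {ppv:.2f}")

In this toy data the two groups receive noticeably different predictive values (0.67 versus 0.50), so the fairness criterion from the text would not be met.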


Sources
[1] sueddeutsche.de/kultur/coded-bias-netflix-doku-1.5268189
[2] algorithmwatch.org/en/google-translate-gender-bias
[3] thenation.com/article/society/gender-silicon-valley
[4] hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
[5] venturebeat.com/2020/12/09/columbia-researchers-find-white-men-are-the-worst-at-reducing-ai-bias
[6] forbes.com/sites/forbestechcouncil/2021/02/04/the-role-of-bias-in-artificial-intelligence/?sh=3adaa260579d
[7] wired.com/story/ai-biased-how-scientists-trying-fix
