Algorithms Are as Good as What They Are Fed

MACHINE LEARNING ALGORITHMS LEARN BY EXPERIENCE. THUS, THEY MAY REPLICATE SOCIETAL BIAS CONTAINED IN THE DATA THEY ARE FED, ARGUE DIRK HOVY AND DEBORA NOZZA IN THE THIRD EPISODE OF THE CLARITY IN A MESSY WORLD PODCAST

Algorithms have come to rule very sensitive aspects of our lives, but they do not always treat us all fairly. They might fail to understand us or even work against us, according to the guests of the third episode of the Clarity in a Messy World podcast series: Dirk Hovy, Associate Professor of Computer Science at the Bocconi Department of Marketing, and Debora Nozza, a Postdoctoral Researcher at Bocconi's Data and Marketing Insights research unit.
 
If we are what we eat, algorithms are only as good as what they are fed. To train a machine learning algorithm, you feed it large amounts of data and let it learn by experience. If you feed it data that reflect societal stereotypes, the algorithm will replicate them.
 

 
“In the US,” Professor Hovy gives an example in the podcast, “they tried to automate bail decisions for defendants and trained the system on historical data, real judgments. When they ran it, the system turned out to pay attention only to the color of a defendant's skin and made that the single decisive feature, because historical decisions had actually been biased against defendants of color. The algorithm itself did not have an ideology, but it reflected the bias that it was trained on: it reflected exactly the kind of stereotypes that had gone into the decisions that the judges had made before.”
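
The mechanism Hovy describes can be shown in miniature. Below is a minimal sketch in Python with scikit-learn on synthetic data invented for illustration; the "group" attribute, the "risk" score, and the data-generating coefficients are all assumptions of the sketch, not the actual US bail system or its features.

```python
# Minimal illustration on SYNTHETIC data (invented for this sketch,
# not the real US bail system): when biased historical labels drive
# training, the classifier makes the protected attribute decisive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)      # hypothetical protected attribute
risk = rng.normal(0.0, 1.0, n)     # hypothetical legitimate predictor

# Simulated "historical judgments": group membership, not risk,
# drives the outcome -- the bias we bake into the training labels.
denied_bail = (0.2 * risk + 2.0 * group + rng.normal(0.0, 1.0, n)) > 1.0

X = np.column_stack([group, risk])
model = LogisticRegression().fit(X, denied_bail)

# The weight on "group" dwarfs the weight on "risk": the model has
# no ideology of its own, it simply reproduces the bias in its labels.
print("coefficients [group, risk]:", model.coef_[0])
```

In this toy setup the learned weight on the synthetic group attribute should come out several times larger than the weight on the genuine predictor, which is exactly the failure mode Hovy describes.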
 
Doctor Nozza, who works on hate speech detection algorithms, stumbled upon something similar. “There's some societal bias in the world,” she says, “and a model can learn dangerous patterns, such as associating negative connotations with words that refer to what should be protected minorities. An example is an algorithm aimed at detecting misogyny in social media texts which classified as misogynous all the texts containing the word ‘women’, irrespective of their meaning. This is something called unintended bias, and we don't want it to happen.”
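
Nozza's example can also be reproduced in a toy sketch, again in Python with scikit-learn. The six-sentence corpus below is made up for illustration: because the identity term "women" co-occurs only with the misogynous class in training, a simple bag-of-words classifier flags a perfectly benign sentence that contains it.

```python
# Toy sketch of "unintended bias" (made-up corpus and labels): an
# identity term seen only in misogynous training texts becomes, by
# itself, a trigger for the positive class.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "women are terrible drivers",     # misogynous
    "women should stay silent",       # misogynous
    "I hate all of them",             # misogynous
    "what a lovely day outside",      # neutral
    "the meeting starts at noon",     # neutral
    "great match last night",         # neutral
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

# A benign sentence is misclassified simply because it contains the
# identity term "women" -- the unintended bias Nozza describes.
print(clf.predict(["women won the election"]))   # -> [1], falsely flagged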
 
Clarity in a Messy World, hosted by David W. Callahan, is the Bocconi podcast that looks at the causes behind the most confounding issues of our time. You can follow the podcast on Apple Podcasts, Google Podcasts, Spotify, Spreaker, and YouTube.

by Fabio Todesco