Algorithms Are as Good as What They Are Fed

MACHINE LEARNING ALGORITHMS LEARN BY EXPERIENCE. THUS, THEY MAY REPLICATE SOCIETAL BIAS CONTAINED IN THE DATA THEY ARE FED, ARGUE DIRK HOVY AND DEBORA NOZZA IN THE THIRD EPISODE OF THE CLARITY IN A MESSY WORLD PODCAST

Algorithms have come to rule very sensitive aspects of our lives, but they do not always treat us all fairly. They might fail to understand us or even work against us, according to the guests of the third episode of the Clarity in a Messy World podcast series: Dirk Hovy, Associate Professor of Computer Science at Bocconi's Department of Marketing, and Debora Nozza, a Postdoctoral Researcher at Bocconi's Data and Marketing Insights research unit.
 
If we are what we eat, algorithms are as good as what they are fed. To train a machine learning algorithm, you feed it large amounts of data and let it learn by experience. If you feed it data that reflect societal stereotypes, the algorithm will replicate them.
 
“In the US,” Professor Hovy recounts in the podcast, “they tried to automate bail decisions for defendants and trained the system on historical data, real judgments. When they ran it, the system turned out to pay attention only to the color of a defendant's skin and made that the single decisive feature, because historical decisions had actually been biased against defendants of color. The algorithm itself did not have an ideology, but it reflected the bias that it was trained on; it reflected exactly the kind of stereotypes that had gone into the decisions the judges had made before.”
 
Doctor Nozza, who is working on hate speech detection algorithms, stumbled upon something similar. “There's some societal bias in the world,” she says, “and a model can learn dangerous patterns, such as associating negative connotations with words that refer to protected minorities. An example is an algorithm aimed at detecting misogyny in social media texts, which classified as misogynous every text containing the word 'women', irrespective of its meaning. This is something called unintended bias, and we don't want it to happen.”
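The mechanism Nozza describes can be sketched with a toy word-count classifier trained on a skewed dataset. All sentences, names, and code below are hypothetical illustrations, not the actual model or data her team studied: the point is only that when the word "women" appears mostly in hateful training examples, a simple model flags even neutral sentences containing it.

```python
# Toy illustration of "unintended bias": a naive word-count classifier
# trained on a skewed corpus. All data here are invented examples.
from collections import Counter
import math

# In this hypothetical training set, the word "women" happens to appear
# only in the hateful class (label 1), mirroring a skewed corpus.
train = [
    ("women are terrible drivers", 1),
    ("women should stay silent", 1),
    ("i hate these people", 1),
    ("what a lovely day today", 0),
    ("the match was great fun", 0),
    ("my dog loves long walks", 0),
]

def fit(data):
    """Count word frequencies per class."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Sum of per-word log-probability ratios (add-one smoothing).
    A score above 0 means the text leans toward the hateful class."""
    vocab = set(counts[0]) | set(counts[1])
    total = {c: sum(counts[c].values()) + len(vocab) for c in (0, 1)}
    s = 0.0
    for w in text.split():
        p1 = (counts[1][w] + 1) / total[1]
        p0 = (counts[0][w] + 1) / total[0]
        s += math.log(p1 / p0)
    return s

counts = fit(train)
# A perfectly neutral sentence gets flagged simply because it contains
# the word "women": this is the unintended bias.
print(score(counts, "women won the election"))  # positive score
print(score(counts, "the match was great fun"))  # negative score
```

The fix is not in the algorithm's equations but in the data and evaluation: balancing how identity terms are distributed across classes, or testing the model on neutral sentences that mention protected groups.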
 
Clarity in a Messy World, hosted by David W. Callahan, is the Bocconi podcast that looks at the causes behind the most confounding issues of our time. You can follow the podcast on Apple Podcasts, Google Podcasts, Spotify, Spreaker, and YouTube.

by Fabio Todesco