COMPUTING SCIENCES

The Opinionated Machine

ALGORITHMIC BIAS MAY LEAD TO MACHINE LEARNING SYSTEMS DISCRIMINATING AGAINST MINORITIES. THE ISSUE MUST BE TACKLED NOW, SAYS LUCA TREVISAN IN THE SIXTH EPISODE OF THE THINK DIVERSE PODCAST

We are wary of Artificial Intelligences for all the wrong reasons. The chance of one of them taking control of the world, as in a Hollywood movie, is slim, but they can still hurt large parts of humankind (e.g. women) or minorities (defined by ethnicity, sexual orientation and so on) through so-called algorithmic bias.
 
In The Opinionated Machine, the sixth episode of the THINK DIVERSE podcast series, Luca Trevisan, Full Professor of Computer Science at Bocconi, explains how Machine Learning (a type of Artificial Intelligence) can perpetuate societal bias, or prove so ineffective in dealing with minorities as to discriminate against them in practice.
 
“The use of Machine Learning systems may seem limited for now,” Prof. Trevisan warns, “but their adoption is increasing at an exponential rate, and we must tackle the issue as soon as possible. To this end, we need a multidisciplinary effort including computer scientists, mathematicians, political scientists, lawyers, and other scientists.”
 
When we want a Machine Learning system to make decisions, we feed it a set of data (decisions made in the past) and let it calibrate itself (i.e. learn which variables are relevant) until it reproduces approximately the same decisions.
 
If past decisions are biased, Professor Trevisan tells host Catherine De Vries, such bias is perpetuated. This may be the case with judges discriminating against persons of color in bail decisions, or employers discriminating against women when screening resumes.
 
Minorities can also be hurt in subtler ways. If they are underrepresented in the training dataset, for example, a facial recognition system may be ineffective for them. But even when they are adequately represented, the optimal calibration for a system may be to be very accurate on the majority while being much less accurate, or completely wrong, on a small minority. “This kind of bias is harder to detect and to fix,” Prof. Trevisan says.
 
Listen to the episode and follow the series on:

Spotify
Apple Podcasts
Spreaker
Google Podcasts

 

by Fabio Todesco