
The Opinionated Machine

ALGORITHMIC BIAS MAY LEAD MACHINE LEARNING SYSTEMS TO DISCRIMINATE AGAINST MINORITIES. THE ISSUE MUST BE TACKLED NOW, SAYS LUCA TREVISAN IN THE SIXTH EPISODE OF THE THINK DIVERSE PODCAST

We are wary of Artificial Intelligence for all the wrong reasons. The chance of an AI taking control of the world, as in a Hollywood movie, is slim, but such systems can still hurt large parts of humankind (e.g. women) or minorities (defined by ethnicity, sexual orientation and so on) through so-called algorithmic bias.
 
In The Opinionated Machine, the sixth episode of the THINK DIVERSE podcast series, Luca Trevisan, Full Professor of Computer Science at Bocconi, explains how Machine Learning (a type of Artificial Intelligence) can perpetuate societal bias, or prove so ineffective in dealing with minorities as to discriminate against them in practice.
 

 
“The use of Machine Learning systems may seem limited for now,” Prof. Trevisan warns, “but their rate of adoption is increasing at an exponential rate, and we must tackle the issue as soon as possible. To this end, we need a multidisciplinary effort including computer scientists, mathematicians, political scientists, lawyers, and other scientists.”
 
When we want a Machine Learning system to make decisions, we feed it a set of data (decisions made in the past) and let it calibrate itself (i.e. work out which variables are relevant) until it makes approximately the same decisions.
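This calibration process can be sketched in a few lines of Python. The scenario below (resume screening with an invented scoring rule and thresholds) is purely hypothetical and not from the episode; it only shows the mechanism: if past decisions encoded a bias, a model fitted to match those decisions learns the bias too.

```python
# Hypothetical sketch: a toy "resume screening" model calibrated
# to past decisions. The historical rule was biased: equally
# qualified women needed a higher score to be approved.
import random

random.seed(0)

# Each past case: (qualification score 0-10, is_woman, past_decision)
past_cases = []
for _ in range(1000):
    score = random.randint(0, 10)
    is_woman = random.random() < 0.5
    threshold = 7 if is_woman else 5   # the biased historical rule
    past_cases.append((score, is_woman, score >= threshold))

def best_threshold(cases):
    # "Calibration": pick the threshold that best reproduces
    # the past decisions (a crude stand-in for model training).
    return min(range(11), key=lambda t: sum((s >= t) != d for s, _, d in cases))

t_women = best_threshold([c for c in past_cases if c[1]])
t_men = best_threshold([c for c in past_cases if not c[1]])

# The calibrated model recovers the biased thresholds (7 vs 5):
print(t_women, t_men)
```

Nothing in the fitting step "knows" about fairness: it simply minimizes disagreement with past decisions, so the historical double standard is faithfully reproduced.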
 
If past decisions are biased, Professor Trevisan tells host Catherine De Vries, that bias is perpetuated. This can happen, for instance, when judges have discriminated against persons of color in bail decisions, or employers against women when screening resumes.
 
Minorities can also be hurt in subtler ways. If they are underrepresented in the training dataset, for example, a facial recognition system can be ineffective with them. But even when they are adequately represented, the calibration that is optimal overall may be one that is very accurate for the majority while being much less accurate, or completely wrong, for a tiny minority. "This kind of bias is harder to detect and to fix," Prof. Trevisan says.
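A small, hypothetical Python sketch (the groups, sizes and rules below are invented for illustration) makes this second effect concrete: when a small group follows a different pattern than the majority, the single model that minimizes overall error can be perfect on the majority and almost always wrong on the minority.

```python
# Hypothetical sketch: one global model fit to data where a small
# minority group follows the opposite pattern to the majority.
import random

random.seed(1)

data = []
for _ in range(950):                 # majority group
    x = random.randint(0, 10)
    data.append((x, x >= 3))         # true label: x >= 3
for _ in range(50):                  # minority group
    x = random.randint(0, 10)
    data.append((x, x <= 3))         # opposite pattern: x <= 3

def error(t, subset):
    # Fraction of cases where the rule "predict True iff x >= t"
    # disagrees with the true label.
    return sum((x >= t) != y for x, y in subset) / len(subset)

# Pick the single threshold that minimizes error over everyone.
best_t = min(range(12), key=lambda t: error(t, data))

majority, minority = data[:950], data[950:]
print(best_t, error(best_t, majority), error(best_t, minority))
```

The globally optimal threshold matches the majority's rule exactly (zero error on 95% of the data), so the total error looks excellent, while the minority is misclassified most of the time. That is exactly why, as Trevisan notes, this bias is hard to detect: aggregate accuracy metrics hide it unless performance is measured per group.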
 
Listen to the episode and follow the series on:

Spotify
Apple Podcasts
Spreaker
Google Podcasts

 

by Fabio Todesco