How to Protect User Rights Against an Algorithm

THE RIGHT TO KNOW THE REASONING BEHIND A DECISION MADE BY ARTIFICIAL INTELLIGENCE IS DIFFICULT TO ENFORCE. EUROPE THEREFORE AIMS TO STRENGTHEN THE POSITION OF USERS THROUGH PROCEDURAL OBLIGATIONS ON PLATFORMS, EXPLAINS ORESTE POLLICINO

Those affected by a decision made by an algorithm should have the right to know the reasons behind it (the so-called right to explanation, or right to explicability), and the European Union's General Data Protection Regulation (GDPR) in fact provides for this.
 
“The actual applicability of the clause is controversial, however, for at least two reasons,” warns Oreste Pollicino, Professor of Constitutional Law at Bocconi, anticipating some of the content of his talk at the 27 June Fairness in AI workshop. “The first is language: algorithms are written in a technical language that makes them opaque to the vast majority of citizens. The second concerns trade secrecy: when it comes to proprietary algorithms, there may be no disclosure requirement.”
 
European legislation, with the soon-to-be-introduced Digital Services Act, thus aims to strengthen the user's position by introducing procedural obligations for platforms, especially for so-called "very large platforms" which, by collecting more data, also have greater opportunities for profiling.
 

The additional obligations to which these operators will be subject include: a preliminary assessment of the risk of harming users' rights; an effective adversarial process, in the form of appeal mechanisms and mandatory exchanges between platform and user; and affirmative action (not merely the removal of disputed content) to remedy any damage done by algorithmic bias.
 
In some jurisdictions, algorithms are also beginning to be used to make legal decisions, for example on parole or bail. Professor Pollicino observes different approaches in the United States, where there is considerable trust in digital tools, and in Europe. “In particular,” he concludes, “for Italy I would speak of digital humanism: the jurisprudence of the Council of State makes it clear that an algorithm cannot be the exclusive element of evaluation in any judgment, and always refers the final decision to the ‘prudent assessment’ of a judge.”
 

by Fabio Todesco