How to Protect User Rights Against an Algorithm

The right to know the reasons behind a decision made by artificial intelligence is difficult to enforce. Europe therefore aims to strengthen the position of users through procedural obligations on platforms, explains Oreste Pollicino

Those affected by a decision made by an algorithm should have the right to know the reasons for it (the so-called right to explanation, or right to explicability), and the European Union's General Data Protection Regulation (GDPR) does in fact provide for this.
 
“The actual applicability of the clause is controversial, however, for at least two reasons,” warns Oreste Pollicino, Professor of Constitutional Law at Bocconi, anticipating some of the content of his talk at the Fairness in AI workshop on 27 June. “The first reason is language: algorithms are written in a technical language that makes them opaque to the vast majority of citizens. The second concerns trade secrecy: when it comes to proprietary algorithms, there may be no disclosure requirement.”
 
European legislation, with the soon-to-be-introduced Digital Services Act, thus aims to strengthen the user's position by introducing procedural obligations for platforms, especially the so-called "very large platforms" that, by collecting more data, also have greater opportunities for profiling.
 

Among the additional obligations to which these operators will be subject are: a preliminary assessment of the risk of harming users' rights; an effective adversarial process, in the form of appeal mechanisms and mandatory interlocutions between platform and user; and affirmative action (not merely the removal of disputed content) to remedy any damage done by algorithmic bias.
 
In some jurisdictions, algorithms are also beginning to be used to make legal decisions, for example on parole or bail. Professor Pollicino observes different approaches in the United States, where there is considerable trust in digital tools, and in Europe. “In particular,” he concludes, “for Italy, I would speak of digital humanism: the jurisprudence of the Council of State makes it clear that an algorithm cannot be the exclusive element of evaluation in any judgment, and always refers the final decision to the 'prudent assessment' of a judge.”
 

by Fabio Todesco