
  • This event has already taken place.

Knut Selmer Annual Memorial Lecture

23 November 2020 @ 17:15 - 18:45


In this lecture, Professor Karen Yeung, University of Birmingham, will share insights from an ongoing four-year collaborative interdisciplinary project on algorithmic systems. Specifically, she will discuss “The constitutional dangers of algorithmic decision-making systems in the criminal justice domain”.


The memorial lecture will be held as a webinar and does not require registration. Here is the link to the webinar: https://uio.zoom.us/j/62425232664

UPDATE!

Due to the rise in Covid-19 cases in the Oslo area, we unfortunately have to CANCEL the gathering and planned screening of the Knut Selmer memorial lecture at Kjerka, UiO.

The lecture will be held via webinar as planned at 17:15 (CET) on 23 November.

Karen Yeung’s research lies in the regulation and governance of, and through, new and emerging technologies, with a particular focus on the legal, ethical, social and democratic implications of a suite of technologies associated with automation and the ‘computational turn’, including big data analytics, artificial intelligence (including various forms of machine learning), distributed ledger technologies (including blockchain) and robotics. She is actively involved in several technology policy and related initiatives at the national, European and international levels, including as a former member of the EU High Level Expert Group on AI and the Council of Europe’s Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). Karen occupies a number of strategic and advisory roles for various non-profit organisations and research programmes concerned with the responsible governance of technology. Her recent academic publications include Algorithmic Regulation (co-edited with Martin Lodge, Oxford University Press, 2019) and The Oxford Handbook of Law, Regulation and Technology (co-edited with Roger Brownsword and Eloise Scotford, 2017). She is on the editorial boards of the Modern Law Review, Big Data & Society, Public Law, and Technology and Regulation.



The turn towards algorithmic decision-making (ADM) tools and systems taking place across the public sector has attracted considerable attention in recent years, accelerated by the Covid-19 pandemic. Although algorithmic decision-making is being taken up in many domains, it has gained increasing popularity within the criminal justice context, particularly in the USA and increasingly across Europe and other industrialised democracies. One of its alleged attractions lies in a belief that the turn to machine decision-making can overcome the well-documented subjective biases and fallibility of human decision-making. But, as highly publicised studies have demonstrated in recent years, machine decision-making is no panacea: algorithmic tools often have racial bias ‘baked in’ to their configuration and design, in ways that may not merely replicate but reinforce and exacerbate practices of racial injustice.

Yet aside from concerns about unfair bias and discrimination, the larger constitutional threats posed by these tools and systems have been overlooked in contemporary debate and academic discourse. This is particularly worrying, given that it is within the criminal justice domain that the state’s monopoly on the exercise of legitimate coercion is at its most vivid and powerful, and through which individuals may be convicted and thereafter subjected to state-sanctioned punishment that may result in serious restrictions on individual liberty and/or the deprivation of property, accompanied by significant moral and social stigma. In other words, the criminal justice system is unique and distinctive, representing the apex of the state’s legitimate coercive power during peacetime. Accordingly, the exercise of decision-making authority in the criminal justice domain must, at least within liberal democratic states, demonstrate respect for foundational constitutional principles, including insistence on the rule of law and the protection of human rights. These principles are rooted in a commitment to constitutionalism, based on recognition of the need for institutional safeguards against the abuse of governmental power as an indispensable bulwark against despotism.

In this lecture, I will highlight the constitutional threats posed by several ADM tools and systems that purport to evaluate the ‘risks’ posed by individuals who come into contact with the criminal justice system. In so doing, I will draw on an ongoing four-year collaborative interdisciplinary project funded by VW Stiftung. Our investigation of three case studies of algorithmic systems currently (or until recently) in operation reveals that the design, implementation and evaluation of these systems have hitherto failed to take constitutional principles seriously. We argue that, if we wish to nurture and sustain our communities as liberal democratic polities committed to respect for individual freedom, democracy and the rule of law, then unless and until algorithmic systems intended for use in the criminal justice context pass constitutional muster, they cannot be justified and should not be used at all.


Details

Date:
23 November 2020
Time:
17:15 - 18:45
Event Category:

Organiser

Norsk forening for Jus og EDB
View Organiser Website

Venue

Kjerka
Karl Johans gate 47
Oslo, Norway