Outline
The summer school starts on Sunday with a welcome reception between 19:00 and 20:30. On Tuesday evening there is a social event in downtown Nijmegen, and on Wednesday there is a walk in the woods followed by a rump session. The summer school ends on Friday after lunch. The ISP schedule comprises:
- 9 lectures
- 5 case study sessions
- presentations
- rump session
- BBQ
- movie
- privacy karaoke?
This Year's Programme
Abstracts
Robin Pierce:
AI in Health: Challenges for Machine Learning and the GDPR
The use and uptake of Artificial Intelligence (AI) have rapidly increased across sectors. In the health domain, AI, particularly in the form of Machine Learning (ML), has emerged as a potentially valuable tool in diagnosis, treatment, and care. Whether in the form of algorithm-driven diagnostics, predictive evaluations, or precision medicine, the ability to perform iterative optimization strategies based on ever-expanding data sets, e.g. pixel evaluation of image data or the processing of sensory data, carries the potential to enhance healthcare in significant ways. However, the use of ML for diagnostic, therapeutic, and public health optimization presents challenges to data protection and privacy. This session examines the challenges that ML modalities present for the GDPR and explores possible ways to address these challenges.
Paul Dourish:
Dissolving Privacy and Finding New Ground
My goal in this lecture is to attempt to “dissolve” the notion of privacy, not in order to dismiss it or to minimize its significance, but to be able to recover our sense of the other elements of social life that often lie obscured behind it — mutuality, accountability, collectivity, identity, and more. I will draw on both my own empirical studies and the broader literature to explore these considerations as aspects of social conduct and cultural practice.
With that in hand, I’ll then turn back to questions of AI and algorithmic action in order, first, to similarly understand how cultural practices delimit the scope of “algorithms” as objects of critical attention and, second, to ask how we might reformulate our thinking about how we engage with and through technologies around data and information.
Again, I’ll draw where I can on ethnographic projects that focus on data work in practical organizational settings.
Mireille Hildebrandt:
From Agnostic to Agonistic Machine Learning
Machine learning (ML) is agnostic insofar as machines do not know anything in our sense of that term. As machines increasingly make decisions that make a difference, often based on ML, it becomes crucial that those who suffer the consequences of such decisions do not remain agnostic as to the potential and the limits of ML. In this lecture I will dive into the assumptions, the design, and the implications of machine learning, notably for privacy as the right to the protection of the incomputable self. Based on a practical understanding of what ML stands for, I will develop a small vocabulary of terms used in the science of ML, some of which have been hyped way beyond their limited meaning in computer science. This vocabulary, in turn, should enable an agonistic debate on the legitimacy of relevant design decisions and their trade-offs, while reinstating the incomputable self as what requires protection in environments that feed on ML decision making.
Malte Ziewitz:
Shadow Cultures: Studying Algorithmic Systems from the Margins
Understanding algorithmic systems has become a key concern for policy-makers, engineers, and academics. But how do ordinary people make sense of and engage with systems that are said to be inscrutable? What is the role of novel industries and intermediaries in mediating this relationship? And how do platform operators manage and respond to disobedience in the shadow of the engine? In this session, I will explore these questions through materials from an ethnography of search engine optimization (SEO) professionals and use these insights to rethink common tropes like bias, gaming, and accountability.
Krishna Gummadi:
Discrimination in Algorithmic Decision Making
Algorithmic (data-driven, learning-based) decision making is increasingly being used to assist or replace human decision making in a variety of domains, ranging from banking (rating user credit) and recruiting (ranking applicants) to the judiciary (profiling criminals) and journalism (recommending news stories). Recently, concerns have been raised about the potential for discrimination and unfairness in such algorithmic decisions. Against this background, in this lecture I will present recent attempts to tackle the following foundational questions about algorithmic unfairness:
- How do algorithms learn to make discriminatory decisions?
- How can we quantify (measure) discrimination in algorithmic decision making? (A minimal illustrative sketch follows this list.)
- How can we control (mitigate) algorithmic discrimination? That is, how can we redesign learning mechanisms to avoid discriminatory decision making?
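To make the second question concrete, here is a minimal, hypothetical sketch of one common way of quantifying discrimination: comparing positive-decision rates across two groups (the "demographic parity" gap). The function name, data, and numbers below are invented for illustration and are not taken from the lecture.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups.

    decisions: 0/1 algorithmic outcomes (e.g. loan granted or not)
    group:     0/1 group membership (e.g. a protected attribute)
    """
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_0 = decisions[group == 0].mean()   # positive rate in group 0
    rate_1 = decisions[group == 1].mean()   # positive rate in group 1
    return abs(rate_0 - rate_1)             # 0.0 would mean parity

# Toy data: 70% positive decisions for group 0, 40% for group 1.
decisions = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0,   # group 0
             1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # group 1
group = [0] * 10 + [1] * 10
print(demographic_parity_gap(decisions, group))   # ~0.3
```

A gap of roughly 0.3 here says that group 0 receives positive decisions about 30 percentage points more often than group 1; other formalisations, such as equalised odds or calibration, capture different notions of fairness and can conflict with this one.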
Lina Dencik:
Beyond Data-Centrism: Studying Algorithms in Context
The collection and analysis of massive amounts of data have become a significant feature of contemporary social life; what has been described as the ‘datafication’ of society (Mayer-Schönberger and Cukier 2013). These processes are part of a significant shift in governance, in which big data analysis is used to predict, preempt, and respond in real time to a range of social issues. Yet we still struggle to account for the ways in which different actors make use of data, and how data analysis is changing the ways in which actors research, prioritise, and act in relation to social and political issues. Overwhelmingly, the focus has been on data and algorithms as technical artefacts, abstracted from social context and analysed in relation to their functionalist design. This has meant that discussions on big data have often neglected the social dimension of datafication, confining it to a question of technology, and, with that, have not fully engaged with the politics of data, instead presenting it as a neutral representation of social life. As Ruppert et al. (2015) contend, emphasis needs to be placed on the social significance of big data, both in terms of its social composition (a subject’s data is a product of collective relations with other subjects and technologies) and in terms of its social effects. From this, we can begin to explore data politics as the performative power of or in data, including a concern with how data is generative of new forms of power relations and politics at different and interconnected scales (Ruppert, Isin and Bigo 2017). In this presentation, I will draw on case studies from different institutional and social contexts across law enforcement, welfare services, and border control to highlight the value of situating algorithmic decision-making in relation to social practices, as a way to overcome data-centric understandings of the challenges pertaining to datafication.
Julia Powles:
Artificial Intelligence and Power
It has become almost automatic. While public conversation about algorithms and artificial intelligence dwells in problems of the long future (the rise of the machines), the ingrained past (systemic inequality, now perpetuated and reinforced in data-driven systems), and the messy present (regulatory will, compromise, arbitrage), a small cadre of tech companies amasses unprecedented power on a planetary scale.
This lecture interrogates the debates we have, and those we need, about AI, algorithms, privacy, and the future. It examines what we talk about, why we talk about it, what we should ask and solve instead, and what is required to spur a richer, more imaginative, more innovative conversation about the world we wish to create. The lecture will be grounded in case studies concerning the major tech companies. Uncritical adherents may be required to eat their hats.
Frederik Zuiderveen Borgesius:
Algorithmic Pricing, Data Protection Law, and Discrimination Law
Online shops could offer each website customer a different price, a practice called algorithmic pricing. Such algorithmic pricing can lead to advanced forms of price discrimination based on individual characteristics of consumers, which may be provided, obtained, or assumed. An online shop can recognise customers, for instance through cookies, and categorise them as price-sensitive or price-insensitive. Subsequently, the shop can charge (presumed) price-insensitive people higher prices. Such practices could lead to poor people paying either lower or higher prices.
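As a purely illustrative sketch (the shop logic, labels, and numbers below are invented, not drawn from the lecture), the mechanism just described reduces to a few lines: recognise a returning customer by a cookie, look up an inferred price-sensitivity label, and adjust the quoted price accordingly.

```python
# Hypothetical sketch of the pricing mechanism described above.
# All identifiers, labels, and numbers are invented for illustration.

BASE_PRICE = 100.0

# Inferred sensitivity per (cookie-identified) customer, e.g. derived
# from browsing history; "insensitive" customers get a higher quote.
inferred_sensitivity = {
    "cookie-abc": "price-insensitive",
    "cookie-xyz": "price-sensitive",
}

def quoted_price(cookie_id: str) -> float:
    label = inferred_sensitivity.get(cookie_id, "unknown")
    if label == "price-insensitive":
        return BASE_PRICE * 1.20   # presumed willing to pay more
    if label == "price-sensitive":
        return BASE_PRICE * 0.90   # discount to keep the sale
    return BASE_PRICE              # no profile: default price

print(quoted_price("cookie-abc"))  # 120.0
print(quoted_price("cookie-xyz"))  # 90.0
```

Whether this kind of categorisation and differential quoting is fair and lawful is precisely what the questions below address.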
In this lecture, we will discuss:
- Is algorithmic pricing fair, and if not: why?
- Do data protection law and discrimination law allow algorithmic pricing?
- Should policymakers adopt additional rules for algorithmic pricing?
Recommended reading:
F.J. Zuiderveen Borgesius and J. Poort, ‘Online price discrimination and EU data privacy law’, Journal of Consumer Policy, 2017, pp. 1-20. https://ssrn.com/abstract=3009188
Martijn van Otterlo:
Human Values and the Value of Artificial Intelligence
Ethics of algorithms is an emerging topic in various disciplines such as social science, law, and philosophy, but especially artificial intelligence (AI). The so-called value alignment problem expresses the challenge of (machine) learning values that are, in some way, aligned with human requirements or values. In this lecture I will specifically explore the reinforcement learning (RL) paradigm as a framework for ethical decision making in machines, and how it provides both a conceptual tool for understanding and a technical tool for constructing intelligent machines that can optimize their behavior according to current norms and values. RL is one of the most prominent directions in AI: it is used in various human-machine interactive settings (for example, to teach robots how to perform tasks), and it also forms the core mechanism driving the powerful algorithm “AlphaZero” (Google/DeepMind), which recently taught itself how to beat anyone (human or machine) at chess. I further explore how humans have formalized and communicated values in professional codes of ethics, and how that can lead to injecting pre-existing human norms and values declaratively into RL algorithms. This renders machine ethical reasoning and decision-making, as well as learning, more transparent and explainable, and hopefully more accountable. To illustrate matters I will frequently employ results from several research projects in the gatekeeping domains of public libraries and archival practices.
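As a toy illustration of injecting a declaratively stated norm into RL (a minimal sketch under invented assumptions; the states, rewards, and penalty are mine, not the lecture’s), a tabular Q-learner can be given a rule-based penalty on top of its task reward:

```python
import random

# Minimal tabular Q-learning on a toy graph with two routes to the goal,
# plus a declaratively specified norm: state 1 is "forbidden" and incurs
# a penalty on top of the task reward. All numbers are illustrative.

GRAPH = {0: [1, 2], 1: [4], 2: [3], 3: [4]}   # 0 -> 1 -> 4 is the shortcut
GOAL, FORBIDDEN = 4, {1}                       # the norm, stated as data
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s, nbrs in GRAPH.items() for a in nbrs}

def reward(s_next):
    r = 1.0 if s_next == GOAL else -0.1           # task reward / step cost
    return r - 2.0 if s_next in FORBIDDEN else r  # declarative norm penalty

for _ in range(3000):
    s = 0
    while s != GOAL:
        nbrs = GRAPH[s]
        # epsilon-greedy choice of the next state to move to
        a = random.choice(nbrs) if random.random() < EPS else \
            max(nbrs, key=lambda n: Q[(s, n)])
        future = max((Q[(a, n)] for n in GRAPH.get(a, [])), default=0.0)
        Q[(s, a)] += ALPHA * (reward(a) + GAMMA * future - Q[(s, a)])
        s = a

print(max(GRAPH[0], key=lambda n: Q[(0, n)]))  # 2: the norm-compliant route
```

Because the forbidden shortcut is penalised, the learned greedy policy takes the longer, norm-compliant route, and the norm itself remains inspectable as data rather than being buried in learned weights.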