The programme will consist of nine scheduled lectures (five morning lectures and four afternoon lectures) of two and a half hours each. The late afternoon and early evening are used for hands-on working-group sessions to study practical cases provided by industry and government.
The summer school starts on Sunday with a welcome reception from 19:00 to 20:30. On Tuesday evening there will be a social event in downtown Nijmegen. The summer school ends on Friday after lunch.
On Tuesday, July 12, at 19:30 we will have dinner at city brewery De Hemel (Heaven) in downtown Nijmegen.
We leave from the hotel at 18:45 and take the bus (line 8) for a 30-minute ride, exiting at the stop "Waalkade" in downtown Nijmegen. From there it is a four-minute walk to the brewery. We return to the hotel from the same stop.
In case you get lost, the address of the brewery is:
6511 VS Nijmegen
Phone: 024-360 61 67
European data protection in the spotlight: basic concepts and the recently adopted GDPR
On the 27th of April 2016, after more than four years of heated debates and negotiations, the General Data Protection Regulation (GDPR) was adopted. The GDPR will be applicable as of the 25th of May 2018 and aims to create a homogeneous framework for the protection of personal data across Europe. This lecture will present some of the most crucial provisions of the GDPR that will have an impact on the development and deployment of new applications and on the way businesses operate in the European market.
Consent and the Internet of Things / regulation of smart cities in European privacy law
Privacy-friendly smart metering, insurance, and other services.
Contextual Integrity in Practice
Privacy Research Paradigms, Privacy Engineering and SaaS
In this lecture, I will first give an overview of the nascent field of privacy engineering. I will then share results of an exploratory empirical study on the impact of the shift from shrink-wrap software to services and apps on software engineering practice. Instead of organizing around stable versions of client-specific binaries released at longer time intervals and installed on user-owned devices, software provided as a service or in the form of apps tends toward continuous, networked and centrally controlled functionality. What kind of challenges does this shift to services and apps pose to computer science research on privacy? And have computer scientists understood and responded to these challenges in the privacy solutions they develop?
Platforms, privacy and dis/empowerment by design
The lecture, based on Media and Communication Studies (MCS) and Science and Technology Studies (STS), takes an interdisciplinary look at how data, media technologies and the online platforms of social media and the sharing economy interact with today's society, and at the social challenges this poses for users and society at large. We start from the notion of user (dis)empowerment, identifying how people, citizens and consumers are exposed to and cope with new kinds of vulnerabilities regarding privacy and data protection. These insights then serve as input for identifying (social) requirements and designing technologies that incorporate public values such as privacy and empowerment (by design). We conclude that a multifaceted perspective on media and communication between people changes and broadens the framing of online privacy and the datafication of media users. It also helps to delineate a realistic picture of users and their awareness, attitudes, capabilities and practices.
Fairness in machine learning
This lecture will survey emerging research into what some now call fairness-by-design: the attempt to integrate substantive principles of fairness, such as non-discrimination, into automated decision-making. While machine learning might seem like a way to overcome the prejudices, implicit biases, and faulty heuristics that plague human decision-making, this session will show that it is remarkably vulnerable to a number of problems that can render its models discriminatory. These models can inherit the prejudices of prior decision makers, reflect the widespread biases that persist in society, or discover useful regularities in a dataset that are really just preexisting patterns of exclusion and inequality. I will show that the resulting discrimination in each of these cases is unintentional, an artifact of the way machine learning works rather than conscious choices by programmers. I will then explain why attempts to address discrimination stemming from each of these problems will be difficult, costly, or controversial. Discrimination law is unlikely to reach most of these cases, and efforts to correct the underlying problem will only be able to go so far. More aggressive solutions that attempt to engineer concerns with fairness into the machine learning process will have to blur the line between procedural fairness and distributive justice, opening these solutions to legal or political attack. Taking this into account, I will suggest practical paths forward for research, regulation, and policy.
Privacy from the point of view of (organized) collective action
George Danezis/Seda Gürses:
Anonymous communications beyond Tor