BNR radio on facial recognition by Dutch police

2017/02/02


Already some time ago, but by request: below is the link to the broadcast of the Dutch BNR news radio station on the announcement by the Dutch police that they are going to use facial recognition to identify suspects. I'm a guest in the second half of the broadcast ("Ask me anything", presented by Jörgen Raymann, on 16 December 2016). It is, of course, in Dutch. We discussed issues such as effectiveness and privacy.

https://www.bnr.nl/radio/askmeanything/10315458/16-december-biometrische-opsporing.


Guest at BNR Digital radio on biometrics

2016/08/17

On August 17th I was a guest at the Dutch BNR news radio station, as part of the weekly BNR Digital broadcast. I was there as an expert on biometric authentication. I responded to the questions on the opportunities for biometric authentication in a mostly positive manner, arguing that biometric authentication can be a user-friendly second authentication factor. But I also voiced some concerns: not all implementations are done well, liveness detection (presentation attack detection) is and will remain a (if not the) key challenge, and privacy can be a serious issue.

Read the rest of this entry »


Digi2: a PoC app for DigiD

2016/02/07


DigiD is the Dutch national digital identity solution for citizens to use e-government services (and online health and pension-related services). It is quite popular actually: in 2015 there were 12 million citizens with a DigiD, on a population of a bit over 17 million. The number of logins has also increased significantly over the years, with over 200 million logins in 2015. InnoValor did a project in 2015 to build a proof-of-concept app for DigiD that can 1) serve as a replacement of SMS as second factor, 2) be used with government mobile apps and 3) be more secure than the current DigiD because it can use the contactless chips in e-passports etc. as a second factor. We did this project for and with DUO (the government organisation responsible for student enrolment, student finance etc.), in collaboration with RDW (the government organisation responsible for driving licenses, vehicle registration etc.) and Logius (the government organisation responsible for DigiD).
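
To illustrate why a chip-based second factor is stronger than an SMS code, below is a minimal sketch of the kind of challenge-response check a server could do: the chip signs a fresh challenge with a key that never leaves the document, and the server verifies the signature against a public key bound to the account at enrolment. This is not the actual Digi2 or DigiD protocol; the chip-side part is assumed to happen elsewhere, and a plain RSA PKCS#1 v1.5 signature is used for simplicity (the real e-passport signature schemes differ). Only the generic `cryptography` Python library is used for verification.

```python
# Hypothetical sketch: verifying an e-passport chip as second factor.
# The protocol and key handling are simplified for illustration only.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def issue_challenge() -> bytes:
    """Server generates a fresh random challenge per login attempt."""
    return os.urandom(32)


def verify_second_factor(chip_public_key, challenge: bytes, signature: bytes) -> bool:
    """Check that the signature over the challenge was made with the chip's key.

    chip_public_key is assumed to be an RSA public key object obtained
    earlier (e.g. read from the chip and bound to the user's account during
    enrolment). The app would ask the chip to sign the challenge and send
    the signature back to the server.
    """
    try:
        chip_public_key.verify(
            signature,
            challenge,
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False
```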

The blog post below was written jointly with Jan Kouijzer from DUO and gives the details. It is in Dutch and includes links to videos with a demo. It appeared earlier (7 December 2015) on https://innovalor.nl/digi2-een-proof-of-concept-app-voor-digid/.

Read the rest of this entry »


Re-usable identities instead of different passwords everywhere

2014/10/14


Below is a blog post in Dutch on re-usable identities instead of different passwords for all websites. The trigger for the blog post is that Hold Security released the Dutch (or actually, .nl) part of the login data/email addresses that they discovered to have been hacked. The NCSC (the Dutch National Cyber Security Centre) IMHO focuses too much on educating users to prevent this, rather than on finding/promoting solutions such as re-usable identities, including the Dutch eID Stelsel NL (similar to NSTIC in the US).

Read the rest of this entry »


What if I want to share my data from my bank with others?

2014/03/18

This is a cross-post of a blog post in Dutch on the InnoValor site (translated below), in which I give my view on the announcement of, and the responses to, the plan of the Dutch bank ING to provide the personal data they hold to third parties (after opt-in).

And what if I actually do want to share my bank customer data?

Since the interview in the FD of 10 March, there has been a lot of attention for ING's plan to run a trial with sharing customer data with third parties. ING apparently underestimated the reactions this plan would provoke, which was perhaps a bit naïve, certainly after what happened with Equens last year. For now they have put the plan on ice and will first work on support among regulators, consumer organisations and privacy organisations. There was little else they could do, it seems to me.

But are all those negative reactions justified? In my view, whether this is a good or a bad idea depends on the way in which a bank would share customer data. And the most important question is whether the person the data is about, the consumer, wants it. Read the rest of this entry »


Which level of assurance is needed for LSP and other patient portals?

2013/09/09


More and more health providers offer patient portals. These portals can contribute to more efficient and effective health care. In addition, since they provide easy access to personal health records and personalized health information, they can contribute to more patient empowerment. But there is also a risk: the wrong person (e.g., an identity thief) may get access to this very personal information.

Novay participated in a working group that developed a guide for health providers to help them determine how secure the authentication solution for patient portals should be, i.e., which level of assurance is needed. My colleague Mettina Veenstra and I tried out this new guide on the Dutch national infrastructure for the exchange of personal health records. This infrastructure is called Landelijk Schakelpunt (LSP) in Dutch, which I have no idea how to translate into English (it resembles what the EU epSOS project calls a National Contact Point). The LSP recently added the possibility for patients to see which health professionals used the LSP to access their health records. It does not give patients access to the actual health records. Nevertheless, if an identity thief can see that, for example, an oncologist accessed your medication record as stored by your local pharmacy, that already reveals something you may not want to share. The blog post discusses this, including the relationship to the national identity solution in the Netherlands (DigiD, which is STORK level 2, and the lack of a STORK level 3 solution in the Netherlands).

The full blog post is only available in Dutch; see here, and it is copied below for convenience. For non-Dutch speakers, this is what Google Translate makes of it.

Read the rest of this entry »


CIO perspective on (the future of) privacy

2013/05/15

As part of the CIO Days 2012 we did scenario planning sessions with a group of CIOs from the Netherlands. Scenario planning is a methodology for considering what might happen in the future, and what the impact will be. Instead of trying to predict the future, we determined two dominant uncertainties about the future and combined these into four possible futures. My Novay colleague Timber Haaker is our scenario planning guru, and he also authored this blog post and this article in CIO Magazine nr. 2013-1 with more background on scenario planning and the scenario planning sessions we did at the CIO Days. This is a pdf with only the relevant pages. All in Dutch. I facilitated the scenario planning session on privacy, the results of which I share below:

Read the rest of this entry »


7′ speech: students in control over their own data

2012/10/04


SURFnet, the Dutch National Research and Education Networking organisation, had their two-year networking event for their customers and partners (3-4 October 2012). A new item was a series of 7′ TEDx-like speeches, one of which was given by me. I talked about putting the student central in discussions about privacy in higher education, e.g., when introducing promising innovations like learning analytics. Although preparing for 7′ takes way more time per minute than preparing for 45′ or 90′ presentations (the lengths of the presentations I gave the day and the week before), it was fun to do. I basically argued that user acceptance of privacy-sensitive innovations in higher education is more important than whether lawyers think these innovations are allowed. This means that you should 1) explain the benefits of the innovation for the student and why the data is needed, 2) be transparent about exactly what data is collected, and 3) whenever possible let the student control the collection/sharing/retention of this data.

For more information (all in Dutch ..): here is a blog post from SURFnet on my presentation. Here are the slides, but since they have a lot of pictures and little text, you are probably better off watching the video. It is only 7′ 🙂 My presentation starts at 1:11′. You can also watch the other presentations, including cool visualisations of open data by the VPRO (first talk) and interesting thoughts on next-generation trust infrastructures by Roland van Rijswijk (SURFnet, second talk).


Digital identity in the Netherlands: DigiD for consumer-2-business?

2011/10/05

On Tuesday 4 October we organised a Novay networking event called Tuesday Update, with digital identities as the subject. The main topic of discussion was the need for re-usable identities, and especially who should be the identity provider: government or private parties. This is a hot subject in the Netherlands, also because of the recent security incidents (DigiNotar). Hein Aanstoot, director at SIVI, argued very well that the insurance sector increasingly needs a consumer-2-business identity solution, and that if insurers were allowed to use the national citizen-2-government solution DigiD, this would help insurance companies a lot. This is however not allowed in the Netherlands, and Kees Keuzenkamp from the Ministry of the Interior explained the policy developments in this area (NL and EU), including the planned Dutch eID smartcard (called eNIK, elektronische Nederlandse Identiteits Kaart). The bottom line (in my wording) is that the decision on eNIK will be taken at the end of this year (after which it goes to parliament) and that it is very unlikely that DigiD/eNIK can be used as a generic consumer-2-business identity solution. Hein Aanstoot also gave some insight into a new initiative with several large insurance companies to create a breakthrough in a re-usable identity for the insurance sector; I think it is good for these insurance companies not to make themselves (too) dependent on the government or others (banks).

I also presented, giving my perspectives on consumer-2-business identities, why this is so difficult (privacy, trust etc.), the outcomes of our cidSafe project, my views on DigiD (and eHerkenning), and what the role of government should be (especially: solve it or be very clear you are not going to do so). I also presented three innovations we are working on that we believe will become increasingly important: user control over their data, mobile-centric identity and context-enhanced authentication/authorization. My presentation is on SlideShare (in Dutch!).

 


Consent from the EU legal perspective

2011/07/27

The Article 29 Data Protection Working Party wrote an opinion on the definition of consent. Not everything this Working Party produces is of interest to me, or even understandable (‘too’ legal for mere mortals). I did, however, find this opinion interesting, since it describes when consent is needed from a legal perspective (based on the Data Protection and e-Privacy Directives), and it has examples that make it relatively easy to interpret. In my work in this area I usually take the user's perspective on consent (e.g., on consent for the SURFfederatie) and the architectural/technical perspective of how to enforce it, but a legal perspective is of course also needed.

The statement in the summary that especially got my attention was that if consent is used incorrectly, the data subject's control becomes illusory. I couldn't agree more: of course consent cannot be used as an excuse, in some cases a different legal ground is needed, and consent should be informed, freely given, etc. I do, however, want to make the point that even in cases where privacy law requires a different legal ground for data exchange than consent, it does not forbid additionally asking for consent. I therefore argue that the decision whether and how to offer consent should be based primarily on whether users want it.

Below I quote and interpret parts of the opinion that I found most interesting, and further motivate my position on doing consent-even-when-not-legally-needed.

… obtaining consent does not negate the controller’s obligations under Article 6 with regard to fairness, necessity and proportionality, as well as data quality. For instance, even if the processing of personal data is based on the consent of the user, this would not legitimise the collection of data which is excessive in relation to a particular purpose.

Consent is related to the concept of informational self-determination. The autonomy of the data subject is both a pre-condition and a consequence of consent: it gives the data subject influence over the processing of data. However, as explored in the next chapter, this principle has limits, and there are cases where the data subject is not in a position to take a real decision. The data controller may want to use the data subject’s consent as a means of transferring his liability to the individual. For instance, by consenting to the publication of personal data on the Internet, or to a transfer to a dubious entity in a third country, he may suffer damage and the controller may argue that this is only what the data subject has agreed to. It is therefore important to recall that a fully valid consent does not relieve the data controller of his obligations, and it does not legitimise processing that would otherwise be unfair according to Article 6 of the Directive.

Or in my wording: if a data processor has obtained consent, this does not mean the data processor can do whatever he wants with the data. The usage of the privacy-sensitive data has to be reasonable, the data processor still has liability, and, last but not least, the person has to be in a position to really make a decision.

Transparency is a condition of being in control and for rendering the consent.

Or in my wording: without insight there is no actual control.

There is in principle no limits as to the form consent can take. However, for consent to be valid, in accordance with the Directive, it should be an indication.

The form of the indication (i.e. the way in which the wish is signified) is not defined in the Directive. For flexibility reasons, “written” consent has been kept out of the final text. It should be stressed that the Directive includes “any” indication of a wish. This opens the possibility of a wide understanding of the scope of such an indication. The minimum expression of an indication could be any kind of signal, sufficiently clear to be capable of indicating a data subject’s wishes, and to be understandable by the data controller. The words “indication” and “signifying” point in the direction of an action indeed being needed (as opposed to a situation where consent could be inferred from a lack of action).

Or in my wording: consent can be implicit in an action, but not implicit in doing nothing.

Consent can only be valid if the data subject is able to exercise a real choice, and there is no risk of deception, intimidation, coercion or significant negative consequences if he/she does not consent.

In several opinions, the Working Party has explored the limits of consent in situations where it cannot be freely given. This was notably the case in its opinions on electronic health records (WP131), on the processing of data in the employment context (WP48), and on processing of data by the World Anti-Doping Agency (WP162).

Or in my wording: consent given in a situation where the person did not really have a choice is basically no consent, and another basis for processing the data is needed. I guess the consent could be considered a form of confirmation that the person was at least informed, but the opinion does not state that explicitly.

To be valid, consent must be specific. In other words, blanket consent without specifying the exact purpose of the processing is not acceptable.

To be specific, consent must be intelligible: it should refer clearly and precisely to the scope and the consequences of the data processing. It cannot apply to an open-ended set of processing activities. This means in other words that the context in which consent applies is limited.

Consent must be given in relation to the different aspects of the processing, clearly identified. It includes notably which data are processed and for which purposes. This understanding should be based on the reasonable expectations of the parties. “Specific consent” is therefore intrinsically linked to the fact that consent must be informed. There is a requirement of granularity of the consent with regard to the different elements that constitute the data processing: it can not be held to cover “all the legitimate purposes” followed by the data controller. Consent should refer to the processing that is reasonable and necessary in relation to the purpose.

The need for granularity in the obtaining of consent should be assessed on a case-by-case basis, depending on the purpose(s) or the recipients of data.

Actually, this one does not help me much. Completely open-ended consent is of course not valid, but there are many gray zones here … I guess doing a user survey on what users would reasonably expect the consent to include could be an approach, but I don't know whether that would hold up in court.

“consent by the data subject (must be) based upon an appreciation and understanding of the facts and implications of an action. The individual concerned must be given, in a clear and understandable manner, accurate and full information of all relevant issues, in particular those specified in Articles 10 and 11 of the Directive, such as the nature of the data processed, purposes of the processing, the recipients of possible transfers, and the rights of the data subject. This includes also an awareness of the consequences of not consenting to the processing in question”

Two sorts of requirements can be identified in order to ensure appropriate information:

• Quality of the information – The way the information is given (in plain text, without use of jargon, understandable, conspicuous) is crucial in assessing whether the consent is “informed”. The way in which this information should be given depends on the context: a regular/average user should be able to understand it.

• Accessibility and visibility of information – information must be given directly to individuals. It is not enough for information to be “available” somewhere.

I do not understand how this differs from transparency, but it certainly makes sense that consent needs to be informed. In my opinion this is also very difficult in reality, since users will often not be willing to spend the time and attention needed to be informed. There are trade-offs here. I think that in current practice the quality-of-information requirement is violated by long legal texts that no one wants to read or is able to understand.

As time goes by, doubts may arise as to whether consent that was originally based on valid, sufficient information remains valid. For a variety of reasons, people often change their views, because their initial choices were poorly made, or because of a change in circumstances, such as a child becoming more mature. This is why, as a matter of good practice, data controllers should endeavor to review, after a certain time, an individual’s choices, for example, by informing them of their current choice and offering the possibility to either confirm or withdraw. The relevant period would of course depend on the context and the circumstances of the case.

This is what we call “timed consent“. I didn't realize this was a good practice from a legal perspective 🙂 Our primary motivation for introducing timed consent was different, though: we did it because people will forget what they consented to, not because they might change their mind or their circumstances might change.
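
To make the idea of timed consent concrete, here is a minimal sketch (in Python, with hypothetical storage and prompt functions) of how a service could re-ask for consent after a configurable period, rather than treating a once-given consent as valid forever. The names `ConsentRecord`, `check_or_renew_consent` and `ask_user_to_confirm` are illustrative, not part of any real system.

```python
# Minimal sketch of "timed consent": a stored consent expires after a
# configurable period and the user is asked to confirm or withdraw it.
# All names here are illustrative; they do not refer to a real API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

CONSENT_VALIDITY = timedelta(days=365)  # re-ask after a year, for example


@dataclass
class ConsentRecord:
    user_id: str
    purpose: str           # granular: one record per processing purpose
    given_at: datetime
    withdrawn: bool = False


def consent_is_current(record: Optional[ConsentRecord], now: datetime) -> bool:
    """A consent counts only if it exists, was not withdrawn and is recent."""
    if record is None or record.withdrawn:
        return False
    return now - record.given_at <= CONSENT_VALIDITY


def check_or_renew_consent(record: Optional[ConsentRecord], user_id: str,
                           purpose: str, now: datetime) -> Optional[ConsentRecord]:
    """Return a valid consent record, re-asking the user when it has expired."""
    if consent_is_current(record, now):
        return record
    # Hypothetical UI call: show the current choice and let the user
    # confirm or withdraw, as the opinion suggests as good practice.
    if ask_user_to_confirm(user_id, purpose):
        return ConsentRecord(user_id=user_id, purpose=purpose, given_at=now)
    return None


def ask_user_to_confirm(user_id: str, purpose: str) -> bool:
    """Placeholder for the actual consent dialog shown to the user."""
    raise NotImplementedError
```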

What becomes clear in the opinion is that simply asking for consent is often not enough. There has to be an actual choice, and the data processor has to provide a different legal ground if that choice is not there. This is also argued in this blog post by Andrew Cormack (JANET). Although I of course agree with this, I do not think this means that consent functionality has no benefit in cases where a different legal ground is needed.

To make this more specific, take the consent-from-a-user-perspective pilot we did as an example. In this case, in the SURFfederatie, personal information is exchanged between universities and service providers. Some of the provided services a student simply has to use to be able to complete a course. For those services there is little choice, and a different legal ground for the data exchange is needed (and I think there is one). However, I believe there is added value in still offering a consent question during the login user experience (see the sketch after the list below), because:

  1. The users are informed that this exchange takes place, which in my opinion is a goal in itself.
  2. There are also services for which the user does have a choice, so consent is needed as a legal ground to exchange the data, and we want a consistent user experience across all services.
  3. Last but not least: users appreciate the consent question, as our research showed (85% in our pilot).
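
As a rough illustration of what such a consent step could look like in a federated login flow, here is a sketch of an identity provider deciding whether to show the consent question before releasing attributes to a service provider. This is not the actual SURFfederatie implementation; all function and attribute names are made up for the example.

```python
# Illustrative sketch of a consent step before attribute release in a
# federated login flow. All names are made up for this example and do
# not describe the actual SURFfederatie implementation.
from typing import Dict


def attributes_for(user_id: str, service_id: str) -> Dict[str, str]:
    """Placeholder: look up which attributes this service provider receives."""
    raise NotImplementedError


def has_stored_consent(user_id: str, service_id: str) -> bool:
    """Placeholder: has the user already answered the consent question?"""
    raise NotImplementedError


def show_consent_screen(user_id: str, service_id: str,
                        attributes: Dict[str, str]) -> bool:
    """Placeholder: show which attributes will be released; return the choice."""
    raise NotImplementedError


def release_attributes(user_id: str, service_id: str) -> Dict[str, str]:
    """Ask the consent question (if not answered yet) before releasing attributes.

    Even when consent is not the legal ground for the exchange, the screen
    still informs the user about what is shared (reason 1 in the list above).
    """
    attributes = attributes_for(user_id, service_id)
    if not has_stored_consent(user_id, service_id):
        if not show_consent_screen(user_id, service_id, attributes):
            # The user declined; how to handle this depends on the service.
            return {}
    return attributes
```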

Or to make it as simple as I can (repeating my earlier statement): even in cases where privacy law requires a different legal ground for data exchange than consent, it does not forbid additionally asking for consent. I therefore argue that the decision whether and how to offer consent should be based primarily on whether users want it.