An article-by-article analysis of the GDPR from an ethics perspective need not go any further than the first line of its Article 5: “Personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’)”.
What “lawful processing” is one can well understand: whatever is allowed by the law. One can also more or less imagine what a “transparent manner of processing” looks like, even if its content shifts over time. But what is “fair processing” of personal data? How could one possibly describe what is “fair” in our lives? Why is it mentioned here? How does it affect the application of the GDPR?
Ever since Antigone decided that what was fair was to bury her dead brother despite a state law that forbade it, and paid for that decision with her life, the tension between what people consider fair and what the law actually says has persisted to this day. A complete re-run of that debate would exceed not only the limits of these notes but also their purpose, this being ultimately a debate pertaining to law rather than ethics.
Acknowledging this gap, however, lawyers have devised a number of ways to work around it or even turn it to written law’s benefit: after all, a law that is considered unethical offers many people a justification not to observe it, and therefore does not serve its purposes well. It is in fact one of these clever legal devices that we are faced with here, when discussing the basic GDPR principle of “fair and lawful processing”.
“Fair and lawful” have been an inseparable pair since data protection’s first appearance. They were present in Convention 108 of the Council of Europe back in 1981 and have appeared in every data protection legal instrument since, at both national and international level. They thus became inseparable, a commonplace term by now. Indeed, what would be odd today is for anybody to refer to them separately, that is, to call only for lawful or only for fair personal data processing.
However, are they indeed one and the same? Intrinsically connected in this manner?
As regards the nature of their connection, one must distinguish between the legal and the ethical analysis. As far as the legal interpretation of the GDPR is concerned, things are quite straightforward: for a particular processing operation to be lawful it needs to be both lawful and fair. A lawful but unfair operation is ultimately unlawful; the other way around is unlawful anyway (something to which Antigone would bitterly object).
Ethical problems arise when we try to say what “fair personal data processing” is, even though the “fairness” criterion, as seen, is subordinate to the “lawful” one in the GDPR.
When is a personal data processing operation fair, and when is it not?
Little help is provided by official sources. The UK ICO uses a triple criterion of not processing “the data in a way that is unduly detrimental, unexpected or misleading to the individuals concerned”. While at first sight this is a brave effort to define what “fair processing” is, I am afraid that in practice it confuses rather than elucidates: when is an effect “unduly detrimental”? What should be considered “unexpected” to a given person? What “misleading”? Overall, then, three new questions in place of one.
The Article 29 Working Party made an even more valiant effort, in its Guidelines on profiling of 2018, providing a concrete example in this regard: “A data broker sells consumer profiles to financial companies without consumer permission or knowledge of the underlying data. The profiles define consumers into categories (carrying titles such as “Rural and Barely Making It,” “Ethnic Second-City Strugglers,” “Tough Start: Young Single Parents,”) or “score” them, focusing on consumers’ financial vulnerability. The financial companies offer these consumers payday loans and other “non-traditional” financial services (high-cost loans and other financially risky products)“.
I think that this example, like the one above, creates even more confusion. First, the processing seems to be unlawful anyway (tellingly, it is a US example). Second, categorizing or scoring consumers is not outright unlawful under the GDPR; after all, these are guidelines on how to conduct lawful profiling.
What we are therefore left with is the discriminatory, if not blatantly offensive, characterizations of people in this example.
While I would say that this is too little to assist lawyers, I think it is good enough to help with the ethical analysis. In fact, this example can be used to demonstrate exactly the problem with the “fairness” principle in the GDPR.
In essence, what is and what is not fair is in the eye of the beholder.
Is a characterization leading to categorization as “rural and barely making it” unfair? Or “tough start: young single parents”? To some these may sound offensive, while to others they may sound like an accurate description of reality. And, at any event, if this data agency is any good at its job, then presumably poor single parents who have a tough start with newborns would indeed fall under a certain category. If it were named “Category A”, would that have solved the problem with this processing?
Or is the problem not the names used but the financial deals offered to each category of people? The example says that people falling under underprivileged categories received loans on worse terms or were marketed non-traditional financial products. Well, I would imagine that traditional products would be denied to them anyway (why not also consider such denial by the banks unfair, in the same context?). Could these people not benefit from financial possibilities open to them of which they are probably unaware? If we consider non-traditional financial products “unfair”, then I suppose that some other state agency, the one responsible for overseeing financial markets, is not doing its job well.
My purpose in the above two paragraphs is not to tear down the Article 29 WP example; my purpose is to demonstrate how difficult, if not impossible, it is to find a good one, simply because one could debate what is fair and what is not for ages. There is simply too much arbitrariness in it.
So why have it in the GDPR anyway? As said, it is a clever legal device to regulate processing that is lawful but makes us feel uncomfortable. If we only had the lawfulness criterion, we would have to allow such processing. Under the fairness criterion, however, we are offered some flexible options. This is not unusual either: in business law, business ethics are central interpretational tools for contracts and business practices.
“Fair personal data processing” is therefore a dynamic filter. What was not fair in the 1990s may well be fair today. The opposite is presumably also possible, but less likely (if we perceive ethics as expanding liberty over time rather than reducing it). One cannot go to extremes at either end of the political spectrum while interpreting what “fair” is; moderation and time-specificity are central here. In essence, fairness under the GDPR is the timestamp of our societies.