Published in European Law Blog, 25.11.2020

When is new law needed and when are patches of existing legal tools preferable?

In October 2020 the European Parliament issued three Resolutions on the ethical and legal aspects of Artificial Intelligence software systems (“AI”): Resolution 2020/2012(INL) on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and related Technologies (the “AI Ethical Aspects Resolution”), Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence (the “Civil Liability Resolution”), and Resolution 2020/2015(INI) on Intellectual Property Rights for the development of Artificial Intelligence Technologies (the “IPR for AI Resolution”).

All three Resolutions acknowledge that AI will bring significant benefits for a number of fields (business, the labour market, public transport, the health sector). However, as identified in the AI Ethical Aspects Resolution, “there are concerns that the current Union legal framework, including the consumer law and employment and social acquis, data protection legislation, product safety and market surveillance legislation, as well as antidiscrimination legislation may no longer be fit for purpose to effectively tackle the risks created by artificial intelligence, robotics and related technologies” (K). Therefore, “in addition to adjustments to existing legislation, legal and ethical questions relating to AI technologies should be addressed through an effective, comprehensive and future-proof regulatory framework of Union law reflecting the Union’s principles and values as enshrined in the Treaties and the Charter of Fundamental Rights that should refrain from over-regulation, by only closing existing legal loopholes, and increase legal certainty for businesses and citizens alike, namely by including mandatory measures to prevent practices that would undoubtedly undermine fundamental rights” (L). It is in this context that the Parliament makes concrete legislative proposals in each Resolution within its respective subject-matter.

However, all three Resolutions are also adamant that AI software systems should not be granted legal personality. To our mind, all three make a mistake, failing to see that their otherwise excellent assessment of the problems at hand would best be served by embracing change rather than shying away from it.

Back in 1980, Frits Hondius, one of the godfathers of personal data protection law in Europe, remarked that “the legal system has developed three rather different processes for dealing with computer-related problems. In certain matters, straightforward application of existing law is possible and sufficient. For other questions, the existing law is not entirely adequate, but it can be adapted or reformed in order to make it more responsive to the computer field. Finally, certain problems demand creation of entirely new law” (p. 88). He then went on to identify contract liability as a case of straightforward application of existing law, intellectual property law as a case of law that back then needed some reform in order to accommodate software, and data protection law as a case where an entirely new body of law was needed (pp. 88-89).

Hondius subsequently identified three criteria that signify when a new law is needed: First, the legal problem falls either outside the scope of any existing branch of law or simultaneously under several branches, none of which resolves all aspects of the problem. Second, the new law purports to regulate an important problem which affects broad sections of society and is likely to remain for a considerable time. Third, the new law results in basic principles sanctioned by the constitutional and legal system of the country concerned (p. 89).

All three of the above criteria are met by AI: As noted in the AI Ethical Aspects Resolution, the legal problems posed by AI fall under several branches of law simultaneously (“consumer law and employment and social acquis, data protection legislation, product safety and market surveillance legislation, as well as antidiscrimination legislation” (K)), none of which is able to resolve all aspects of the problem alone. Similarly, as also identified in the same text, AI is without doubt an important problem which does affect broad sections of society (business, the labour market, public transport, the health sector) and is likely to remain. Finally, any new law needs to address these challenges through an “effective, comprehensive and future-proof regulatory framework of Union law reflecting the Union’s principles and values as enshrined in the Treaties and the Charter of Fundamental Rights” (L).

If the Parliament has made all the necessary findings, why is it so adamantly opposed to new legislation granting legal personality to AI?

  1. The Parliament’s approach

The Parliament’s Resolutions are particularly important because they not only identify the sectors most likely to be affected by AI and highlight potential problems, but also make concrete suggestions for legislative intervention to address them. The Parliament calls for the development of a new “Regulatory Framework for AI” that will also take ethical principles into account. While it is impossible to summarise its detailed proposals here, most notably the Parliament proposes to distinguish between “developers, deployers and users” of AI so as to introduce legal obligations and ethical principles for the development, deployment and use of AI in full respect of the Charter; to apply the principles of necessity and proportionality to AI regulation; to expand the EU’s privacy and data protection legal framework and make it specific to AI circumstances; to address the question of liability in cases of harm or damage caused by AI by way of specific and coordinated adjustments to the liability regimes of Member States; and to assess the impact of AI under the current systems of patent law, trademark and design protection, copyright and related rights and the protection of trade secrets, and to amend them accordingly.

Nevertheless, the Parliament clearly states that “any required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity” (Civil Liability Resolution, Annex, (6)). The Parliament therefore opts for a new regulatory framework that is in effect an amendment, through ad hoc patches, of the framework currently in force; its “new” regulatory framework is new in name only. Although it acknowledges the new conditions and challenges posed by AI, the Parliament wishes to address them using our existing legal tools: for example, by amending the personal data protection framework, the civil liability framework or patent law so as to address specific AI-relevant issues. Old law is to be improved to cover the new AI circumstances; new law, in the form of a regulatory framework granting AI legal personality, is expressly rejected by the Parliament.

  2. Three misunderstandings in the Parliament’s line of reasoning

(a) Liability is enhanced, not reduced, through granting legal personality to AI

The Parliament notes that “all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are nearly always the result of someone building, deploying or interfering with the systems; notes in this respect that it is not necessary to give legal personality to AI-systems; is of the opinion that the opacity, connectivity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of AI-systems to specific human input or to decisions in the design” (Civil Liability Resolution, 7).

The issue of liability, as outlined in the extract above, is one of the Parliament’s arguments against giving legal personality to AI. However, we believe that it is misguided, if seen from the perspective of AI’s “end-users” (individuals/data subjects/consumers). Exactly because AI will infiltrate all of human activity, indistinguishable from other technology and embedded in all of our daily decision-making systems, it will be impossible to “trace back specific harmful actions of AI” to a particular “someone”. Any AI setup will most likely involve a number of (cross-border) complex agreements between many developers, deployers and users before it reaches an end-user. Identifying the “someone” liable within this international scheme will be extremely difficult for such an end-user without the (expensive) assistance of legal and technical experts. On the contrary, end-users would be better served by a one-on-one relationship, whereby the AI action that affects them is visibly caused by a specific entity; only granting legal personality to AI can warrant that this will be an identifiable entity, rather than a string of opaque multinational organisations hiding behind complex licensing and development agreements.

(b) Human conscience is not a prerequisite for legal personality

As seen above, the Parliament is adamant that “any required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience” (Civil Liability Resolution, Annex, (6)). If the purpose of this wording is to connect human conscience with legal personality, then it is a flawed one: Legal persons (corporations, organisations, associations etc.) visibly lack “human conscience”. This has not impeded us from granting them legal personality anyway. Although there may come a day when AI will perhaps acquire human conscience, act autonomously, or develop any other human trait imaginable, this is a misleading discussion, because it keeps us away from the solution at hand: granting legal personality to AI in terms similar to those of legal persons. As is the case today with humans running corporations, it will again be humans running AI. Human conscience or autonomy need not enter this (legal) picture.

(c) “Incentives for human creators” should not stand in the way of awarding IP ownership rights to AI

The Parliament “notes that the autonomisation of the creative process of generating content of an artistic nature can raise issues relating to the ownership of IPRs covering that content; considers, in this connection, that it would not be appropriate to seek to impart legal personality to AI technologies and points out the negative impact of such a possibility on incentives for human creators” (IPR for AI Resolution, p. 14).

The Parliament in its IPR for AI Resolution, when faced with the important issue of AI intellectual property creation and whether ownership of intellectual property rights can be granted to AI, recommends that AI should not be awarded legal personality because this would reduce the “incentives for human creators”. Notwithstanding a formalistic legal discussion (e.g. patent law requires “family names” and “full addresses” of the “inventor”, which is why the European Patent Office has ruled that under current law only natural persons, and not AI, qualify for inventorship), we believe that the Parliament reached the above conclusion for the wrong reason: Although it could have discussed whether the “autonomisation of the creative process” has enabled AI creativity, or whether such creativity should at all times be attributed to the humans who developed the AI (that generated the “content of an artistic nature”), the Parliament opted for not granting legal personality to AI simply in order to protect the professional rights of human creators. While everyone can have their own views on this line of reasoning (state protectionism against new sources of creativity), the fact is that, if this is indeed the Parliament’s objective, the policy is doomed to fail: State protectionism has historically failed to protect its intended recipients effectively in the long run, as was, for example, the relatively recent case under EU law with telecommunications and incumbent operators.

  3. The Parliament’s GDPR-in-disguise legal model simply will not work for AI

The Parliament in its AI Ethical Aspects Resolution puts forward a proposal for a Regulation “on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies”. Careful reading reveals that its recommendation closely follows the GDPR pattern. A new set of actors is introduced (the “developer” resembling the GDPR’s controller and the “deployer” the GDPR’s processor). Its principles very much follow those of the GDPR (“safety, transparency and accountability”). And, most pertinently, the Parliament recommends the establishment of “national supervisory authorities”: “Each Member State shall designate an independent public authority to be responsible for monitoring the application of this Regulation (‘supervisory authority’)” (AI Ethical Aspects Resolution, Article 18), the link with Data Protection Authorities being unmissable.

In attempting to replicate the largely successful EU personal data protection model for AI purposes, the Parliament misses the essence of both. Personal data protection is a human right in need of specialised legislation in order for it to become effective and develop its protective scope for individuals. Its principles are derived from the Charter and are particularised by the provisions of the GDPR and other personal data protection legislation.

In contrast, AI is a methodology or a procedure for doing things that will soon conquer society in all of its aspects. It will become intrinsically embedded in everything from our daily private routines to factory or government operations. In other words, it is not a concrete human right to be protected or a specific technology to be regulated, but a procedure and a way of thinking and doing business that will affect all of our legal and moral relationships. Its reach is impossible to imagine today, much less to map accurately so as to regulate it in a single piece of legislation, similar to the GDPR.

  4. Why granting legal personality to AI would be the preferred way forward

The first and most important advantage of granting legal personality to AI, by way of trisecting the traditional natural/legal person dichotomy to allow for a “digital person” very much resembling the legal person, is flexibility. AI, be it “embedded in hardware devices” (robots, drones etc.) or “software-based” (processing methods and procedures; AI Ethical Aspects Resolution, proposal for a Regulation, Art. 4(a)), “has the potential to generate opportunities for businesses and benefits for citizens and that can directly impact all aspects of our societies, including fundamental rights and social and economic principles and values, as well as have a lasting influence on all areas of activity” (AI Ethical Aspects Resolution, B). While exploring this “potential” and therefore navigating uncharted territory, flexibility is crucial. Legal personality will mean that each field of law (civil law, tax law, employment law, penal law, competition law) will have the freedom to assess the legal issues posed by AI within its own boundaries and under its own rules and principles. As with legal persons, legal solutions will come on a case-specific basis in each field of law. This will allow for nuanced, detailed and thoughtful specialised regulation each time. None of this will be possible if a “supervisory authority” with an opaque legal mandate is installed in order to “monitor” any and all AI.

The second advantage is proximity. Legal personality for AI will mean that each individual affected by it will face a specific legal entity locally, in the same manner as is the case with legal persons today. In other words, it will be immediately clear to individuals with whom they are interacting, with whom they enter a transaction, and who is causing them discomfort, within a one-to-one relationship. The benefit of proximity becomes clear if compared with the regulatory model suggested by the Parliament, which distinguishes between a “developer”, a “deployer” and a “user” of AI: effectively, therefore, at least three legal entities, potentially located across the globe, interacting with a single end-user. Rather than formally creating a scheme of legal obscurity, the solution would therefore be to bring AI down to the ground, by awarding it legal personality and thereby mandating local, concrete transactions and interactions.

Admittedly, granting legal personality to AI would not be a legal panacea. First and foremost, it would create a period of legal uncertainty, taking into account that this would be a legal novelty for all stakeholders. While some parallels may be drawn with the case of legal persons, the fact remains that any new rules granting legal personality to AI would have to be drafted within a legal vacuum. Also, granting legal personality to AI would be neither a fix-it-all nor a one-size-fits-all legal solution. As regards the former, there may indeed be cases, to be identified under the risk assessments also requested by the Parliament, where legal personality could be used to deflect liability or to be released from it (as perhaps in the case of self-driving cars or military operations); in these cases the option of legal personality should perhaps be excluded. Similarly, adaptations of any relevant legal regime would most likely be needed: AI embedded in hardware would most likely need to be treated differently from purely software-based AI. Overall, therefore, this would amount to substantial speculative legal work, which would perhaps best take place, in the first instance, within the context of an EU “regulatory sandbox”.

Conclusion: not all is lost

How well did humanity in general and the law in particular fare under the Hondius criteria of 1980? As we all know today, data protection (or data privacy) became a world legal sensation, with more than 130 countries having introduced such entirely new laws into their systems, and the EU through its GDPR arguably setting the global tone.

Did intellectual property law fare as well? One has good reasons to doubt it, as evidenced by the global discussion on software patents: Would we not have been better off if a new right within intellectual property law had been developed back in the 1980s in order to protect computer programs, as was the case with databases in the EU in 1996?

What we are now left with is Hondius’ third example of law affected by computers, that of liability. Back in 1980, before the IBM PC even existed, contract law appeared well geared to deal with new issues. Will this continue to be the case under AI circumstances?

The fact that two out of three of the Parliament’s Resolutions addressed the issue of granting legal personality to AI, even in the negative, means that it is still on the table. Because the Parliament has reached this recommendation based on a sound analysis of the problems at hand, a change of perspective would perhaps not be unthinkable: The Parliament could mandate that the issue be examined further, perhaps within the confines of an EU “regulatory sandbox”. This may be a historic opportunity for lawyers, and for Europe, to once again make a change. Just as in the case of data protection law, where Europe took the international driver’s seat and created new rules and new principles that are humanity’s best hope in today’s technological Armageddon, the same can now be the case with AI: The Parliament and the Commission should make a leap of faith, follow their excellent assessment of the legal and moral problems at hand, and, rather than opt for patches of our existing legal framework, decide to grant legal personality to AI.