AI, Law, and Agency in the Age of Machine Learning


Abstracts:

 

Setting the Scene: Legal Personhood, Natural Agency, and Self-Determination

Hanoch Dagan

Should AI technology be treated as a legal person, namely: a carrier of rights (regarded, for example, as an author, and thus a copyright holder) and of duties (which can therefore be held responsible for wrongs and crimes)? In my short introductory remarks, I will bracket the specific, admittedly urgent, context of AI and offer a few reflections on legal personhood.

Law, I will claim, should confer the status of legal person either to vindicate a subject’s agency or to promote (directly or indirectly) agents’ foundational right to self-determination. The vindication prong is most clearly manifested when we reflect upon the emancipation of slaves or upon the Married Women’s Property Acts; it also captures the essence of our deliberations regarding the standing of mentally disabled people, as well as some of the claims made with respect to animal rights. The promotion prong, in turn, applies to the familiar artificial legal persons: law is justified in conferring legal personhood on a diverse set of artifacts – such as property owners’ associations, publicly-held corporations, or states – because (and to the extent that) this status is conducive to people’s right to self-determination.

Many complex questions admittedly follow these inquiries (e.g., should an artificial legal entity’s veil be pierced?). But the fundamental matter of vindicating agency or promoting self-determination must come first. It is prior not only conceptually, but also normatively: the justification for the legal personhood status should guide our answers to all the more practical questions that follow.

 

 

 

How Artificial is Our Intelligence Anyway?

Mickey Zar

The question of AI's agency is haunting legal scholars. That is no surprise, since competent agency is the ultimate condition for establishing legal responsibility. Accordingly, much effort is directed at articulating the differences between human, "natural" intelligence and machine, "artificial" intelligence. I challenge the natural/artificial dichotomy not by claiming that machine intelligence gradually becomes human-like, but the other way around: by claiming that human intelligence is not "natural" in any intelligible sense. We can no longer separate human from machine intelligence. It is humans who teach machines how to adapt to their human surroundings; and just like other machines, humans are coded by other humans. We are always socially programmed, normalized from birth by rules and codes, the first of which is language. The ability to manipulate and modulate our behavior according to a machine-generated profile, which in turn learns our behavior and adjusts itself accordingly, is dramatically enhanced by data mining. The origin is lost; we are caught in a loop that blurs the already dubious distinction between natural and artificial.

I believe all this entails that if human intelligence – as a rational ability to act on one's autonomous will – is the basis for the establishment of legal agency, then AI has legal agency as well. In other words, agency has to be attached to man-made machines: be they AI or other men.

 

Extended Abstract

 

 

Robotics and Artificial Intelligence: A technological basis of our future society

Sami Haddadin

Humans are far superior to machines in almost every respect: only they can think and draw conclusions in abstract ways, possess sensorimotor abilities that have so far been impossible to reproduce, and can effortlessly connect these two worlds. At the same time, robotics and artificial intelligence are becoming more and more important and will change our world like few technologies before them. Do we now have to fear that humans will soon be replaced by machines? Or do intelligent robots, as the "hammer of tomorrow", instead represent an opportunity to ease our everyday lives and our working world as intelligent tools?

These questions concern not only the everyday life of humans, but increasingly also the legal dimension of the interaction between humans, robotics and artificial intelligence. More than ever before, AI researchers and legal experts must work together to create the basis for a sustainable, technically feasible and applicable legal framework that meets society's requirements for dealing with artificial intelligence and robotics. Above all, this includes transparent rules, a human-centered orientation, and the further development of proven laws and standards.

 

Extended Abstract

 

 

 

The Death of the AI Author

Carys Craig and Ian Kerr

Much of the recent literature on AI and authorship asks whether an increasing sophistication and independence of generative code should cause us to rethink embedded assumptions about the meaning of authorship, arguing that recognizing the authored nature of AI-generated works may require a less profound doctrinal leap than has historically been suggested. In this essay, we argue that the threshold for authorship does not depend on the evolution or state of the art in AI or robotics. Rather, the very notion of AI-authorship rests on a category mistake: it is an error about the ontology of authorship.

Building on the established critique of the romantic author, we contend that the death of the romantic author also and equally entails the death of the AI author. Claims of AI authorship simply do not make sense in terms of the realities of the world in which the problem exists. Those realities should push us past bare doctrinal or utilitarian considerations about what an author must do. Instead, they demand an ontological consideration of what an author must be. Drawing on insights from feminist literary and political theory, we offer an account of authorship that is relational: authorship is a dialogic and communicative act that is inherently social, with the cultivation of selfhood and social relations being the entire point of the practice. This inquiry transcends copyright law, of course, going to the normative core of how law should—and should not—think about robots and AI, and their role in human relations.

Extended Abstract

 

 

Authorship, Ownership and Originality in the Area of AI and AI-Generated Works

Assaf Jacob

The rise of AI and AI-generated works introduces many new and intriguing dilemmas in the area of IP law. For example, can a machine or software be considered the author or the owner of a new work? What should the parameters be in deciding whether, to begin with, such a work complies with the minimum requirements for IP protection? To illustrate with two examples: one of the basic requirements of copyright protection is that the work be original – how can the concept of originality survive the new methods of IP creation and production? In a similar vein, patent law requires that an invention, to be eligible for protection, satisfy (among other things) the inventive-step requirement. Can an AI-generated work comply with this requirement? What should the determining factor be? Moreover, questions arise as to whether AI itself falls within the realm of patentability, as it is not necessarily patentable subject matter.

This raises the question of whether we should have different legal regimes for similar works or inventions based on the identity of the author, or whether we should design the legal protection to consider solely objective standards regarding the end product. Another set of questions deals with the issue of ownership. Do we insist on natural agency as a precondition for any entitlement, or are we willing to accept AI entities as owners of their work? In this respect, one can also struggle with two conflicting paradigms – AI as a source of creation, as opposed to AI as a sophisticated tool of production. Each of these paradigms may yield a different outcome and lead the legislator in a different direction.

Dealing with these complicated issues requires reflection on the basic justifications of IP protection, as well as reevaluation of the minimal standards for protection, so as to decide whether, and to what extent, new kinds of works will be granted protection. Thus, for example, addressing the issue of authorship from an economic/utilitarian perspective produces a different outcome than tackling it from the perspective of natural agency, or from a narrower perspective of protecting one's personhood. Here one should also consider whether to draw analogies to property law or real property law. Can these analogies broaden our perspectives and shed new light on the AI subject matter?

Another set of questions deals with the wide variety of IP interests. Should we treat all IP rights, with respect to AI, in the same manner across the board, or should we treat them differently, based on their respective characteristics within a given field? Thus, for example, within copyright law, should we treat economic rights and moral rights in the same manner? And if so, should the same criteria or solutions be applied to patent law or trade secrets?

These questions are not limited to authorship or inventorship and can also affect the issue of ownership. Firstly, if the work is not entitled to any protection, no one has ownership over it in the strong sense. Moreover, the combination of man and machine can produce various outcomes in terms of IP protection, and deciding how to allocate rights accordingly may be a rather difficult task. The “easy” case is when machines have no rights: here one must determine who the “man behind the machine” is, and due to the large number of people often involved in the process of creation, even this becomes quite a burdensome task. The more difficult case arises if we decide that machines are entitled to some rights as a form of artificial legal creature. Here the allocation of rights becomes more complex and compels us to rethink concepts of joint works (between man and machine) and joint ownership. In this context the real challenge is to craft legal arrangements that make sense across various legal regimes. Should we have the same arrangements across the board, or should we be more careful regarding the different subject matters? Thus, for example, in juxtaposing copyright law and torts – does the fact that AI does or does not carry liability in tort necessarily affect the outcome of the IP legal regime, and vice versa? What, if anything, is the difference between the corporation as an artificial legal creature and AI as such, and do the reasons that led us to develop the theory of the firm apply with the same force to AI? And in this respect, do we want to distinguish between “weak” AI and “strong” AI?

In this presentation I will highlight some of the most basic questions regarding the authorship, ownership and originality of AI-created works and place them in the context of IP law in particular and the law in general. Although I do not have many simple and clear answers to the aforementioned questions, thinking about these issues may propel us in the right direction toward creating more sophisticated and apt legal structures.

 

Extended Abstract

 

 

Peripheral Boilerplate

Lauren Henry Scholz

Scholars often compare boilerplate contracts to non-boilerplate contracts. Some critics see a mismatch in applying the rules for the latter to the former, due to categorical differences between the contexts in which they are used. This article makes the novel theoretical contribution that some transactions are closer to, and others further from, the ideal, or core, case of boilerplate contracts. In general, when interpreting peripheral boilerplate contracts, courts should go beyond the text to enforce the terms to which the context suggests the parties actually agreed. The core case for boilerplate is a well-established type of transaction in which the drafting party can foresee and parse many of the risks it could face in the future, selecting and modifying readily available boilerplate terms to frame the transaction to its benefit. These are established types of transactions that have been tested both in practice and in courts of law. By contrast, peripheral boilerplate is the use of voluminous form terms to govern a relatively novel type of transaction. What makes peripheral boilerplate different is that the standard terms are not, and in at least some cases cannot be, tailored to the type of transaction before the drafting party. This article’s central example of peripheral boilerplate is software as a service (SaaS) contracts. SaaS is a software distribution model in which a third-party provider hosts software applications and makes them available to customers over the Internet.

 

Extended Abstract

 

 

Moral crumple zones and the limits of technological certification: A socio-technical perspective 

Madeleine Clare Elish

As debates about the policy and ethical implications of AI systems grow, it will be increasingly important to accurately locate who is responsible when agency is distributed in a system and control over an action is mediated through time and space. Analyzing several high-profile accidents involving automated socio-technical systems, I introduce the concept of a moral crumple zone to describe how responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system at the expense of the nearest human operator. The concept is both a challenge to and an opportunity for the design and regulation of human-robot systems. At stake in articulating moral crumple zones is not only the misattribution of responsibility but also the ways in which new forms of consumer and worker harm may develop in complex, automated, or purportedly autonomous technologies.

 

Extended Abstract

 

 

 

Walking the Talk - Implementing Ethical Guidelines in an Organization

Christoph Peylo

Artificial Intelligence (AI) is a powerful and promising technology, capable of processing and analyzing an unprecedented amount of data, of distilling information, of learning, and of making decisions. The ability to perceive the world, to gain expertise from experience and to take action results in a kind of system autonomy. Societies have always established norms, rules, and behavioral patterns to bound and regulate the autonomy of agents and to make their interactions predictable. But can these mechanisms be applied to AI as well? Can reliability and trustworthiness be achieved, to allow trusted interactions between humans and AI-enabled products? Is there a way to ensure that machines adhere to ethical principles and standards?

AI poses challenges not only for societies, but for organizations as well. Using AI to optimize internal processes and gain efficiency raises challenges with respect to the accountability of decisions and to liability. If AI is used in products and for product development, AI has to be part of both a company’s I(o)T and data strategy and its compliance policies. Consequently, this talk argues that a comprehensive AI strategy has to be anchored in a company’s core values. It should provide the scaffolding for policies that regulate the use of AI systems within the company, and define the principles and processes for building and designing AI products that can be trusted and that adhere to ethical principles.

                                

 

 

Reasoning, justification and causality: How to overcome the black box problem in the automation of administrative decision-making?

David Roth-Isigkeit

The so-called “black box” problem in legal informatics describes the fact that, when advanced machine learning techniques are used to automate legal decisions, we can observe the input and output data of the machine, yet face considerable difficulty in explaining the decision-making that results from the processing of the data between input and output. This becomes particularly relevant when automating the public domain. Since administrative decisions that constrain the rights of individual citizens require some standard of reasoning and justification in order to comply with the principle of the rule of law, advanced automation processes might de lege lata be incompatible with the constitutional tradition. The present contribution discusses requirements of reasoning and justification in administrative procedures in several jurisdictions. It argues that whether the “black box” problem turns out to be an obstacle to the automation of administrative procedures depends, inter alia, on whether the law requires a causal link between the actual processing of data in the legal machine and the ex post justification by legal officials.

 

Extended Abstract

 

 

Contesting Algorithms

Niva Elkin-Koren

Platforms such as Google, Facebook and Twitter are responsible for mediating much of the public discourse and for governing access to speech and speakers around the world. They deploy Artificial Intelligence (AI) to filter, block and remove content at the request of rightholders, governmental agents or state actors.

This is potentially game-changing for democracy. It facilitates the rise of unchecked power, which often escapes judicial oversight and constitutional restraints. In a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content, as well as mechanisms for securing due process and ensuring freedom of speech. Existing rights and legal procedures that seek to ensure civil liberties are ill-equipped to address the robust, non-transparent and dynamic nature of AI filtering of unwarranted content.

I propose to address this crisis in AI speech regulation by introducing Contesting Algorithms. The idea is fairly simple: introducing a systematic adversary into a hegemonic system may enhance transparency and freedom of speech. In the context of algorithmic copyright enforcement, for instance, speech regulation by AI is designed to remove infringing materials as defined by rightholders, while neglecting other public values, such as fair use or free speech.

Contesting Algorithms introduce an adversarial design that may insert a social perspective into the machine learning feedback loop and create a space for conflicting judgments. Thus, Contesting Algorithms may offer a systematic check on dominant removal systems, as the sketch below illustrates.
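To make the adversarial-design idea concrete, here is a minimal, purely illustrative sketch, not taken from the paper: all names, scores and thresholds are assumptions. A removal model tuned to rightholders' definitions is paired with a "contesting" model trained on public-interest examples (e.g., fair-use adjudications), and disagreement between the two is routed to human review rather than automatic removal.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    infringement_score: float  # hypothetical removal model: estimated P(infringing)
    fair_use_score: float      # hypothetical contesting model: estimated P(lawful use)

def moderate(upload: Upload, remove_at: float = 0.8, contest_at: float = 0.5) -> str:
    """Remove only when the removal model is confident AND the contesting
    model does not object; disagreement between the two goes to humans."""
    if upload.infringement_score < remove_at:
        return "keep"
    if upload.fair_use_score >= contest_at:
        return "human review"  # the contested case preserves space for conflicting judgments
    return "remove"
```

On this design, the contesting model cannot force content to stay up; it can only force a human judgment, which is where the check on the dominant removal system lies.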

The presentation will introduce Contesting Algorithms as a policy intervention strategy and demonstrate how regulatory measures could promote the development and implementation of this strategy in online content moderation.

 

Extended Abstract

 

 

Looking Beyond Bias: From Algorithmic Fairness to Algorithmic Justice

Dr. Tomer Shadmy

The current period of intensive assimilation of Machine Learning (ML) systems into various decision-making processes is formative in terms of determining the normative vocabulary by which these systems will be judged and evaluated. The industry and the Computer Science literature tend to frame concerns regarding these systems’ social effects using the term Algorithmic Fairness. Fairness, according to this mindset, is about ensuring that algorithmic decisions do not create discriminatory impacts, mainly when comparing across different demographic groups.
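For readers unfamiliar with this group-comparison framing, one common way the Computer Science literature operationalizes it is demographic parity. The following minimal sketch (the choice of metric is an illustration, not one the paper commits to) computes the gap in positive decision rates across groups:

```python
from collections import defaultdict

def positive_rates(decisions, groups):
    """Share of positive (e.g., 'approve') decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups) -> float:
    """Largest difference in positive rates between any two groups; 0 means parity."""
    rates = positive_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Example: group A is approved 100% of the time, group B 50% -> gap of 0.5.
print(demographic_parity_gap([1, 1, 1, 0], ["A", "A", "B", "B"]))
```

A system can score well on such a metric while still raising the concerns the paper turns to next, which is precisely its point.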

From a legal theory perspective, fairness is only one dimension of justice. Other dimensions of justice – such as political freedom, individual autonomy and environmental justice – must be part of the normative framework that regulates the development and use of ML. Adding principles like political freedom to this normative framework is a complex task: ML relies on distinct decision-making rationalizations, which are radically different from the rationalizations that accompany principles such as political freedom.

The paper argues that tackling this challenge requires creating three channels. The first holds that in making certain decisions, the use of ML should be limited, no matter how bias-free and transparent a concrete ML system is. The second holds that in making other decisions, the use of traditional legal principles for ML regulation should be limited. The third, and most challenging, requires re-imagining contemporary legal principles in order to fit them to the ML environment. Following that, I present some directions for rethinking political freedom in the ML environment.

 

Extended Abstract

 

 

They Don't Need Our Data

Katrina Ligett

It’s easy to be resigned about privacy’s diminishment in the digital era. If companies need to track every click and keystroke in order to function, then it’s natural to conclude that giving up privacy is the price that must be paid. This is wrong—but not for the reasons you might think.

Recent technical advances in machine learning, cryptography, and data analysis make it possible for companies to learn from your data without ever gathering it. However, despite the power of these new techniques, large tech companies don’t have sufficient incentives to adopt them, and some are using these techniques to cover up continued abusive behavior.

In this talk, I'll give a brief introduction to the new technical tools that support learning without data gathering, and illustrate what can be done with them.
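As a taste of the kind of tool the talk refers to, here is a minimal sketch of randomized response, a classic local-privacy technique (the talk may well cover different or more modern methods; the truth probability below is an arbitrary choice for illustration). Each user randomizes their own answer before it leaves their device, so the company never gathers the true data, yet the population-level rate can still be estimated:

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report a
    fair coin flip. The noise gives each individual plausible deniability."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the known randomization to recover the population rate:
    E[reported rate] = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 users, 30% of whom would truthfully answer "yes".
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(estimate_rate(reports))  # close to 0.3, though no true answer was collected
```

Randomized response satisfies local differential privacy; modern variants of this idea, combined with techniques such as federated learning and secure aggregation, allow models to be trained without raw data ever leaving users' devices.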

 

Extended Abstract

 

 

Transparency and Explicability under the GDPR

Paul Vogel

After being applicable for 18 months, it has become apparent that the development and application of new technologies is not always compatible with the sometimes rather strict requirements of the General Data Protection Regulation (GDPR), which comprehensively regulates the processing of personal data within and even beyond the borders of the European Union. This is especially the case with the use of intelligent algorithms that are capable of adapting their behavior to internal or external influences. A much-discussed problem is the transparency of algorithmic decision-making: significant reservations about the increasing use of intelligent algorithms are directed at the opacity of their decision-making, often referred to as the “black box character” of deep-learning algorithms. A related challenge is dealing with discrimination by algorithms, and various solutions are being discussed in order to (1) uncover such discrimination and (2) prevent it. One of the central questions in tackling this problem is whether the GDPR confers a subjective “right to explanation”.

The presentation will deal with the question whether, and to what extent, Art. 15 par. 1 lit. h GDPR grants the data subject such a “right to explanation”. It will conclude that the material scope of this provision is narrow and does not cover every AI system. Rather, the depth of a right to be informed about the logic of an algorithmic decision should depend on the risk the processing poses for the data subject. The GDPR-inherent “risk-based approach” is capable of yielding a flexible, yet still human-centric, answer to the “black box” dilemma. Additionally, accompanying measures such as Codes of Conduct for programmers or external inspection procedures could build a consistent regulatory framework.

 

Extended Abstract

 

 

AI made in the EU: To improve data sharing and to implement the explainability requirement

Alain Strowel

The presentation will first review the EU’s general approach towards AI and compare it with other national and regional AI strategies. The facilitation of data sharing despite strong data protection, together with ethical concerns, appears central to the EU approach. The presentation will review the challenges posed by those objectives and some ways forward, including incentives for B2G and G2B data sharing, the sandbox approach, and the role of (legal) design in implementing the ethical recommendation on explainability.

 

Extended Abstract

 

 

Designing a National Framework for Ethics and Regulation of AI: Challenges and Opportunities

Karin Nahon

This talk analyzes the recent experience of designing a national framework for the ethics and regulation of AI in Israel by a multi-stakeholder committee, one of 15 committees that focused on various perspectives of AI. The uniqueness of AI technology, the diversity of people on the committee, and the diversity of the target audiences (national and organizational decision-makers, AI developers and architects, regulators, etc.) brought forward some major challenges. To name a few: How can ethical considerations be ensured in AI development, alongside innovation and national leadership needs? What are the circumstances for ex-ante and ex-post interventions? What should a living ethical tool for decision-makers look like? What distinguishes our national plan from others we surveyed?

The committee identified the following principles as the basis for the ethics and regulation of AI: fairness; accountability (transparency, explainability, responsibility); respecting and protecting human rights (the integrity of the body, privacy, autonomy, and other political and civil rights); cyber-protection and information security; safety; and the existence of a competitive market. While some of these values are part of a longer discussion in the literature, others need further clarification. These values were used as the basis for developing an ethical tool for decision-makers, to identify possible ethical failures and challenges.

 

 

 

 

 

 

 
