Raso on Interoperable AI Regulation

Jennifer Raso (McGill U Law) has posted “Interoperable AI Regulation” (Forthcoming in the Canadian Journal of Law and Technology) on SSRN. Here is the abstract:

This article explores “interoperability” as a new goal in AI regulation in Canada and beyond. Drawing on sociotechnical, computer science, and digital government literatures, it traces interoperability’s conceptual genealogy to reveal an underlying politics that prioritizes harmony over discord and consistency over plurality. This politics, the article argues, is in tension with the distinct role of statutory law (as opposed to regulation) in a democratic society. Legislation is not simply a technology through which one achieves the smooth operation of governance. Rather, legislation is better understood as a “boundary object”: an information system through which members of different communities make sense of, and communicate about, complex phenomena. This sense-making includes and even requires disagreement, the management and resolution of which is a vital function of law and indeed of any information system.

Lee & Souther on Beyond Bias: AI as a Proxy Advisor

Choonsik Lee (U Rhode Island) and Matthew E. Souther (U South Carolina Darla Moore Business) have posted “Beyond Bias: AI as a Proxy Advisor” on SSRN. Here is the abstract:

After documenting a trend towards increasingly subjective proxy advisor voting guidelines, we evaluate the use of artificial intelligence as an unbiased proxy advisor for shareholder proposals. Using ISS guidelines, our AI model produces voting recommendations that match ISS in 79% of proposals and better predicts shareholder support than ISS recommendations alone. Disagreements between AI and ISS are more likely when firms disclose hiring a third-party governance consultant, suggesting these consultants (often the proxy advisor itself) may influence recommendations. These findings offer insight into proxy advisor conflicts of interest and demonstrate AI’s potential to improve transparency and objectivity in voting decisions.

Lahmann et al. on The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence

Henning Lahmann (Leiden U Centre Law and Digital Technologies) et al. have posted “The Fundamental Rights Risks of Countering Cognitive Warfare with Artificial Intelligence” (Final version accepted and forthcoming in Ethics & Information Technology) on SSRN. Here is the abstract:

The article analyses proposed AI-supported systems to detect, monitor, and counter ‘cognitive warfare’ and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of ‘cognitive warfare’ as used in contemporary public security discourse, it describes the emergence of AI as a novel tool expected to exacerbate the problem of adversarial activities against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed to utilise AI to devise countermeasures, ranging from AI-based early warning systems to state-run, internet-wide content moderation tools. These interventions, however, interfere, to different degrees, with fundamental rights and values such as privacy, freedom of expression, freedom of information, and self-determination. The proposed AI systems insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty in establishing causal links between ‘cognitive warfare’ campaigns and undesired outcomes. As a result, using AI to counter ‘cognitive warfare’ risks harming the very rights and values such measures purportedly seek to protect. Policymakers should focus less on seemingly quick technological fixes. Instead, they should invest in long-term strategies against information disorder in digital communication ecosystems that are solidly grounded in the preservation of fundamental rights.

Kemper & Kolain on K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction

Carolin Kemper (German Research Institute Public Administration) and Michael Kolain (German Research Institute Public Administration (FÖV Speyer)) have posted “K9 Police Robots: An Analysis Of Current Canine Robot Models Through The Lens Of Legitimate Citizen-Robot-State-Interaction” (UCLA Journal of Law and Technology Vol. 30 (2025), 1-95, https://uclajolt.com/k9-police-robots-vol-30-no-1/) on SSRN. Here is the abstract:

The advent of a robotized police force has come: Boston Dynamics’ “Spot” patrols cities like Honolulu, investigates drug labs in the Netherlands, explores a burned building in danger of collapsing in Germany, and has already assisted the police in responding to a home invasion in New York City. Quadruped robots might soon be on sentry duty at US borders. The Department of Homeland Security has procured Ghost Robotics’ Vision 60, a model that can be equipped with different payloads, including a weapons system. Canine police robots may patrol public spaces, explore dangerous environments, and might even use force if equipped with guns or pepper spray. This new gadget is not unlike previous tools deployed by the police, especially surveillance equipment or mechanized help by other machines. Even though they slightly resemble the old-fashioned police dog, their functionalities and affordances are structurally different from K9 units: Canine robots capture data on their environment wherever they roam, and they communicate with citizens, e.g., by replaying orders or by establishing a two-way audio link. They can be controlled fully by remote control over a long distance, or they can automate their patrol by following a preconfigured route. The law currently does not suitably address or contain the risks associated with potentially armed canine police robots.

As a starting point, we analyze the use of canine robots by the police for surveillance, with special regard to existing data protection regulation for law enforcement in the European Union (EU). Additionally, we identify overarching regulatory challenges posed by their deployment. In what we call “citizen-robot-state interaction,” we combine the findings of human-robot interaction with the legal and ethical requirements for a legitimate use of robots by state authorities, especially the police. We argue that the requirements of legitimate exercise of state authority hinge on how police use robots to mediate their interaction with citizens. Law enforcement agencies should not simply procure existing robot models used as military or industrial equipment. Before canine police robots rightfully roam our public and private spaces, police departments and lawmakers should carefully and comprehensively assess their purpose, which citizens’ rights they impinge on, and whether full accountability and liability are guaranteed. In our analysis, we use the existing canine robot models “Spot” and “Vision 60” as a starting point to identify potential deployment scenarios and analyze those as “citizen-robot-state interactions.” Our paper ultimately aims to lay a normative groundwork for future debates on the legitimate use of robots as a tool of modern policing. We conclude that, currently, canine robots are only suitable for particularly dangerous missions to keep police officers out of harm’s way.

Haim & Yogev on What Do People Want from Algorithms? Public Perceptions of Algorithms in Government

Amit Haim (Tel Aviv U Buchmann Law) and Dvir Yogev (UC Berkeley Law) have posted “What Do People Want from Algorithms? Public Perceptions of Algorithms in Government” on SSRN. Here is the abstract:

Objectives: This study examines how specific attributes of Algorithmic Decision-Making Tools (ADTs), related to algorithm design and institutional governance, affect the public’s perceptions of implementing ADTs in government programs.

Hypotheses: We hypothesized that acceptability varies systematically by policy domain. Regarding algorithm design, we predicted that higher accuracy, transparency, and government in-house development would enhance acceptability. Institutional features were also expected to shape perceptions: explanations, stakeholder engagement, oversight mechanisms, and human involvement were anticipated to increase acceptability.

Method: This study employed a conjoint experimental design with 1,213 U.S. adults. Participants evaluated five policy proposals, each featuring a proposal to implement an ADT. Each proposal included randomly generated attributes across nine dimensions. Participants decided on the ADT’s acceptability, fairness, and efficiency for each proposal. The analysis focused on the average marginal conditional effects (AMCE) of ADT attributes.

Results: A combination of attributes related to process individualization significantly enhanced the perceived acceptability of using algorithms by government. Participants preferred ADTs that elevate the agency of the stakeholder (decision explanations, hearing options, notice, and human involvement in the decision-making process). The policy domain mattered most for fairness and acceptability, while accuracy mattered most for efficiency perceptions.

Conclusions: Explaining decisions made using an algorithm, giving appropriate notice, providing a hearing option, and maintaining the supervision of a human agent are key components of public support when algorithmic systems are implemented.

Fitas et al. on Leveraging AI in Education: Benefits, Responsibilities, and Trends

Ricardo Fitas (Technical U Darmstadt) et al. have posted “Leveraging AI in Education: Benefits, Responsibilities, and Trends” on SSRN. Here is the abstract:

This chapter presents a review of the role of Artificial Intelligence (AI) in enhancing education outcomes for both students and teachers. This review includes the most recent papers discussing the impact of AI tools, including ChatGPT and other technologies, in the educational landscape. It explores the benefits of AI integration, such as personalized learning and increased efficiency, highlighting how these technologies tailor learning experiences to individual student needs and streamline administrative processes to enhance educational delivery. Adaptive learning systems and intelligent tutoring systems are also reviewed. Nevertheless, such integration must account for the important responsibilities and ethical considerations intrinsic to the deployment of AI technologies. Therefore, a critical analysis of AI’s ethical considerations and potential misuse in education is also carried out in the present chapter. By presenting real-world case studies of successful AI integration, the chapter offers evidence of AI’s potential to positively transform educational outcomes while cautioning against adoption without addressing these ethical considerations. Furthermore, this chapter’s novelty relates to exploring emerging trends and predictions in the fields of AI and education. This study shows that, based on the success cases, it is possible to benefit from the positive impacts of AI while implementing protections against detrimental outcomes for users. The chapter is highly relevant, as it provides stakeholders, users, and policymakers with a deeper understanding of the role of AI in contemporary education as a technology that aligns with educational values and the needs of society.

Coleman on Human Confrontation

Ronald J. Coleman (Georgetown U Law Center) has posted “Human Confrontation” (Wake Forest Law Review, Vol. 61, Forthcoming) on SSRN. Here is the abstract:

The U.S. Constitution’s Confrontation Clause ensures the criminally accused a right “to be confronted with the witnesses against” them. Justice Sotomayor recently referred to this clause as “[o]ne of the bedrock constitutional protections afforded to criminal defendants[.]” However, this right faces a new and existential threat. Rapid developments in law enforcement technology are reshaping the evidence available for use against criminal defendants. When an AI or algorithmic system places an alleged perpetrator at the scene of the crime or an automated forensic process produces a DNA report used to convict an alleged perpetrator, should this type of automated evidence invoke a right to confront? If so, how should confrontation be operationalized and on what theoretical basis?

Determining the Confrontation Clause’s application to automated statements is both critically important and highly under-theorized. Existing work treating this issue has largely discussed the scope of the threat to confrontation, called for more scholarship in this area, suggested that technology might not make the types of statements that would implicate a confrontation right, or found that direct confrontation of the technology itself could be sufficient.

This Article takes a different approach and posits that human confrontation is required. The prosecution must produce a human on behalf of relevant machine statements or such statements are inadmissible. Drawing upon the dignity, technology, policing, and confrontation literatures, it offers several contributions. First, it uses automated forensics to show that certain technology-generated statements should implicate confrontation. Second, it claims that for dignitary reasons only cross-examination of live human witnesses can meet the Confrontation Clause. Third, it reframes automation’s challenge to confrontation as a “humans in the loop” problem. Finally, it proposes a “proximate witness approach” that permits a human to testify on behalf of a machine, identifies an open set of principles to guide courts as to who can be a sufficient proximate witness, notes possible supplemental approaches, and discusses certain broader implications of requiring human confrontation. Human confrontation could check the power of the prosecution, aid system legitimacy, and ultimately act as a form of technology regulation.

Tang on Creative Labor and Platform Capitalism

Xiyin Tang (UCLA Law) has posted “Creative Labor and Platform Capitalism” (Forthcoming, UCLA Law Review, Volume 73 (2026)) on SSRN. Here is the abstract:

The conventional account of creativity and cultural production is one of passion, free expression, and self-fulfillment, a process whereby individuals can assert their autonomy and individuality in the world. This conventional account of creativity underlies prominent theories of First Amendment and intellectual property law, including the influential “semiotic democracy” literature, which posits that new digital technologies, by providing everyday individuals the tools to create and disseminate content, result in a better and more representative democracy. In this view, digital content creation is largely (1) done by amateurs; (2) done for free; and (3) conducive to greater freedom.

This Article argues that the conventional story of creativity, honed in the early days of the Internet, fails to account for significant shifts in how creative work is extracted, monetized, and exploited in the new platform economy. Increasingly, digital creation is done neither by amateurs, nor is it done for free. Instead, and as this Article discusses, fundamental shifts in the business models of the largest Internet platforms, led by YouTube, paved a path for the class of largely professionalized creators who increasingly rely on digital platforms to make a living today. In the new digital economy, monetization—in which users of digital platforms sell their content, and themselves, for a portion of the platform’s advertising revenues—not free sharing, reigns. And far from promoting freedom, such increased reliance on large platforms brings creators closer to gig workers—the Uber drivers, DoorDash delivery workers, and millions of other part-time laborers who increasingly find themselves at the mercy of the opaque algorithms of the new platform capitalism.

This reframing—of creation not as self-realization but as work that is both precarious and exploited, most notably as surplus data value—means that any framework for regulating informational capitalism’s exploitation of labor is incomplete without considering how creative work is extracted and datafied in the digital platform economy.

Nobel et al. on Unbundling AI Openness

Parth Nobel (Stanford U) et al. have posted “Unbundling AI Openness” (2026 Wisconsin Law Review (forthcoming)) on SSRN. Here is the abstract:

The debate over AI openness—whether to make components of an artificial intelligence system available for public inspection and modification—forces policymakers to balance innovation, democratized access, safety, and national security. By inviting startups and researchers into the fold, it enables independent oversight and inclusive collaboration. But technology giants can also use it to entrench their own power, while adversaries can use it to shortcut years and billions of dollars in building systems, like China’s Deepseek-R1, that rival our own. How we govern AI openness today will shape the future of AI and America’s role in it.

Policymakers and scholars grasp the stakes of AI openness, but the debate is trapped in a flawed premise: that AI is either “open” or “closed.” This dangerous oversimplification—inherited from the world of open source software—belies the complex calculus at the heart of AI openness. Unlike traditional software, AI is a composite technology built on a stack of discrete components—from compute to labor—controlled by different stakeholders with competing interests. Each component’s openness is neither a binary choice nor inherently desirable. Effective governance demands a nuanced understanding of how the relative openness of each component serves some goals while undermining others. Only then can we determine the trade-offs we are willing to make and how we hope to achieve them.

This Article aims to equip policymakers with the analytical toolkit to do just that. First, it introduces a novel taxonomy of “differential openness,” unbundling AI into its constituent components and illustrating how each one has its own spectrum of openness. Second, it uses this taxonomy to systematically analyze how each component’s relative openness necessitates intricate trade-offs both within and between policy goals. Third, it operationalizes these insights, providing policymakers with a playbook for how law can be precisely calibrated to achieve optimal configurations of component openness.

AI openness is neither all-or-nothing nor inherently good or evil—it is a tool that must be wielded with precision if it has any hope of serving the public interest.

Duhl on Embedding AI in the Law School Classroom

Gregory M. Duhl (Mitchell Hamline School of Law) has posted “All In: Embedding AI in the Law School Classroom” on SSRN. Here is the abstract:

What is the irreducibly human element in legal education when AI can pass the bar exam, generate effective lectures, and provide personalized learning and academic support? This Article confronts that question head-on by documenting the planning and design of a comprehensive transformation of a required doctrinal law school course—first-year Contracts—with AI fully embedded throughout the course design. Instead of adding AI exercises to conventional pedagogy or creating a stand-alone AI course, this approach reimagines legal education for the AI era by integrating AI as a learning enhancer rather than a threat to be managed. The transformation serves Mitchell Hamline School of Law’s access-driven mission: AI helps create equity for diverse learners, prepares practice-ready professionals for legal practice transformed by AI, and shifts the institutional narrative from policing technology use to leveraging it pedagogically.

This Article details the roadmap I have followed for AI integration in a course that I am teaching in Spring 2026. It documents the beginning of my experience with throwing out the traditional legal education playbook and rethinking how I approach teaching using AI pedagogy within a profession in flux. Part I establishes the pedagogical rationale grounded in learning science and institutional mission. Part II describes the implementation strategy, including partnerships with instructional designers, faculty innovators, and legal technology companies. Part III details a course-wide series of specific exercises that develop AI literacy alongside doctrinal and skill mastery. Part IV addresses legitimate objections about bar preparation, analytical skills, academic integrity, and scalability beyond transactional courses. The Article concludes with a commitment to transparent empirical research through a pilot study launching in Spring 2026, acknowledging both the promise and the uncertainty of this pedagogical innovation. For legal educators grappling with AI’s rapid transformation of both education and practice, this Article offers a mission-driven, evidence-informed, yet still preliminary template for intentional change—and an invitation to experiment, adapt, and share results.