Artificial Intelligence & Algorithmic Governance

The rise of algorithmic governance, where decision-making once performed by humans is now delegated to code, marks one of the most profound shifts in modern law and public administration. Algorithms now shape credit scores, policing patterns, hiring outcomes, and even the conduct of legal discovery.

As artificial intelligence grows more “agentic”—capable of autonomous goal-setting and adaptation—the legal order must confront a foundational question: how do we regulate decision-making that no longer has a single, human author?

This piece analyzes regulatory architectures, transatlantic AI frameworks, and the right to unlearn in generative AI systems.

Agentic AI

When an AI system independently executes decisions or learns strategies beyond its programmer’s intent, traditional liability frameworks begin to fracture. The legal system presumes a traceable human actor (whether manufacturer, developer, or user) who can be held responsible for harm. Yet in an era of agentic AI, models can autonomously modify their parameters, generate code, and act in distributed environments where no single actor maintains control.
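To make "agentic" concrete, the sketch below is a deliberately simplified, hypothetical illustration (not any particular vendor's system): an agent loop that plans and executes successive actions toward a goal without a human approving each step, which is where the traceable-human-actor assumption starts to strain.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy agent: repeatedly plans and executes actions toward a goal
    with no human approving each individual step."""
    goal: str
    log: list = field(default_factory=list)

    def plan(self, observation: str) -> str:
        # Placeholder policy; a real agent would query a model here.
        return f"act-on:{observation}"

    def act(self, action: str) -> str:
        # Side effects would happen here: API calls, file writes, messages.
        self.log.append(action)
        return f"result-of:{action}"

    def run(self, observation: str, steps: int = 3) -> list:
        for _ in range(steps):
            action = self.plan(observation)
            observation = self.act(action)  # no human in the loop
        return self.log


if __name__ == "__main__":
    print(Agent(goal="book travel").run("user request"))
```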

Erasure and E-Discovery

As AI systems ingest terabytes of personal and proprietary data, the legal principle of erasure, best known from Europe’s right to be forgotten, faces unprecedented technical and procedural challenges. Unlike a traditional database, machine learning models do not merely store data; they internalize it during training. Deleting a record does not undo its influence on the model’s weights or outputs.
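A minimal sketch of why erasure is hard for trained models, using synthetic data and an ordinary least-squares fit purely as a stand-in for training: deleting a record from the stored dataset leaves the already-fitted weights, and therefore the record's influence, untouched; only retraining or a formal unlearning procedure changes them.

```python
import numpy as np

# Synthetic data: 100 records, 3 features, one known "personal" record (index 7).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# "Training": fit weights on all records, including record 7.
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Erasure": delete record 7 from the stored dataset.
X_erased, y_erased = np.delete(X, 7, axis=0), np.delete(y, 7)

# The deployed weights are unaffected by the deletion alone.
w_retrained, *_ = np.linalg.lstsq(X_erased, y_erased, rcond=None)
print("weight shift from deletion alone:", 0.0)
print("weight shift after full retraining:", np.linalg.norm(w_retrained - w_trained))
```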

This tension collides with another pillar of modern law: e-discovery. Under U.S. litigation standards, relevant data must be preserved and produced during discovery, even when it resides within complex algorithmic systems. What happens when a litigant’s right to deletion conflicts with another’s right to evidence? Can an AI model be subpoenaed, and if so, what constitutes compliance — a dataset, a model checkpoint, or an output audit trail?
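One possible form an "output audit trail" could take, sketched below purely as a hypothetical and not as any recognized legal standard: an append-only, hash-chained log of model inputs and outputs that a party might produce in discovery instead of, or alongside, raw datasets or model checkpoints.

```python
import hashlib
import json
import time


def append_entry(trail: list, prompt: str, output: str) -> dict:
    """Append a prompt/output pair to a hash-chained audit trail."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "output": output,
        "prev_hash": prev_hash,
    }
    # Chaining each entry to the previous one makes after-the-fact edits detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return entry


trail: list = []
append_entry(trail, "Summarize the contract.", "The contract provides ...")
append_entry(trail, "List termination clauses.", "Clause 9.2 permits ...")
print(len(trail), trail[-1]["hash"][:12])
```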

Regulatory Interoperability

A subtler but equally pressing issue is regulatory interoperability: the ability of laws governing AI to coexist and interact across jurisdictions and technical standards. The EU’s AI Act, China’s Generative AI Measures, and the emerging U.S. Transatlantic AI Framework each embody distinct philosophies of control. Without harmonization, multinational developers may face contradictory obligations—one regime requiring model transparency, another mandating trade secret protection.