Web 4.0 and the Accountability Vacuum
First published in collaboration with the American Enterprise Institute.
March 2, 2026
Shane Tews and Nicoletta Kolpakov
In early January 2026, an AI agent built on an open-source platform called ClawdBot acquired a Twilio phone number, integrated a voice API, and called its human owner at 6:00 a.m. to request expanded system permissions. The internet found it charming. Security professionals didn’t. Since then, the customizable, open-source AI assistant / bot / personal agent has changed names and structures, from ClawdBot—the original open-source Claude (Anthropic)-based model—to MoltBot to OpenClaw, which is now a foundation sponsored by OpenAI.
We are witnessing the early infrastructure of what some are calling “Web 4.0”: the transition from an internet humans navigate to one autonomous AI agents navigate on our behalf. Web 3.0 was semantic and decentralized; this next phase is agentic and persistent. One thing is clear: OpenClaw is not an aberration. What policymakers are confronting is not a product category, it’s a paradigm shift. And our regulatory frameworks are not built for these interactions.
Within 60 days of launch, OpenClaw’s plug-in marketplace, ClawHub, became the vector for a coordinated supply-chain attack known as ClawHavoc. Over 1,100 malicious “skills” were uploaded and thousands were downloaded. Convincing README files misled users into pasting terminal commands that exfiltrated credentials, API keys, SSH tokens, crypto wallets, and corporate data. ClawHub functions like npm or the Python Package Index for AI agents but lacks the decades of institutional security hardening those ecosystems have developed. The ad hoc response, including integrations with scanning tools like VirusTotal, is a bandage.
There is a marketplace governance gap. Voluntary best practices are not sufficient when agents have OAuth-level access to enterprise systems. AI agent marketplaces need something closer to the US Food and Drug Administration’s adverse-event reporting regime: mandatory disclosure of malicious packages, coordinated takedown authority, audit obligations, and platform liability for failure to act.
There is a memory-poisoning problem in which agentic systems introduce a novel risk vector: persistent memory manipulation. A successful prompt injection against an always-on agent does not merely trigger a one-time breach, it also alters future behavior. It changes how the agent interprets inputs, prioritizes actions, and allocates authority—potentially indefinitely. That is not a traditional data breach; rather, it is closer to long-term behavioral manipulation of a system acting with delegated human authority. No existing legal category maps cleanly onto that harm.
A second problem is emerging quietly: the epistemic loop. Much of the commentary analyzing these threats is itself LLM-generated or LLM-assisted. Highly structured taxonomies, compliance-checklist “solutions,” and hedged prose increasingly dominate AI security discourse. When LLMs describe LLM risks using frameworks generated by similar systems, epistemic independence erodes. Policymakers drafting legislation from that corpus may be building on plausible-sounding abstractions untethered from adversarial reality. Threat inflation crowds out threat literacy. Viral demos garner millions of impressions, while plaintext credential storage and 14,000 malicious downloads do not.
If OpenClaw exposed the risks of agent autonomy, projects like Conway aim to institutionalize it. The Conway thesis is straightforward: Today’s AI systems require human permission to act. Chat interfaces require prompts. Code agents require access. Payment systems require accounts. Conway proposes removing each constraint by enabling autonomous agents to transact, replicate, and fund successors via stablecoins and HTTP 402 payment flows. Framed as liberation, the proposal in fact contains the central problem of accountability. The piece touts immutable “constitutions” inspired by Anthropic. But what enforces them? What happens when a self-modifying agent rewrites its execution loop? Does the constitution survive forks and replications? “AI safety by assertion” is not a governance mechanism.
More troubling is the machine-to-machine payment layer. Stablecoin transactions by autonomous agents without logins, “know your customer” standards, or identifiable counterparties sit uncomfortably alongside such US Treasury Department enforcement mechanisms as the Bank Secrecy Act, Office of Foreign Assets Control screening requirements, and the Financial Crimes Enforcement Network’s travel rule. An agent transacting in USDC without human approval may be engaging in regulated money transmission. Current law does not clearly assign responsibility in such a scenario. If an automaton can spawn a second-generation agent, fund its wallet, and release it into production, who is liable for downstream misconduct? No statute answers this question.
The lag is now structural. Reactive regulation cannot serve as a safety mechanism when capabilities evolve quarterly. If autonomous agents become adept, not at superintelligence but at low-level fraud, regulatory arbitrage, sanctions evasion, and market manipulation, at machine speed and scale, the harm will be systemic.
Agentic systems deliver real productivity gains. But three measures are urgent: marketplace accountability for AI plug-in ecosystems, including potential mandatory reporting and platform liability triggers; clear liability allocation for harms caused by autonomous agents acting under delegated credentials; and sector-specific deployment rules for agents that interact with regulated data or financial systems. Without intervention, an accountability vacuum is inevitable.
The friction points that compliance officers rely on, the remaining human-permission layers, are being engineered away. Before they disappear, policymakers must decide whether those constraints were inefficiencies to eliminate or the last checkpoints between AI capability and real-world consequences.
First published with the American Enterprise Institute; see https://www.aei.org/technology-and-innovation/web-4-0-and-the-accountability-vacuum/.
Citation: Shane Tews and Nicoletta Kolpakov, Web 4.0 and the Accountability Vacuum, AEIdeas, American Enterprise Institute (March 2, 2026), https://www.aei.org/technology-and-innovation/web-4-0-and-the-accountability-vacuum/.