
2028: Two scenarios for global AI leadership

Policy · May 14, 2026

Anthropic Research


Export Controls Alone Do Not Fully Contain AI Progress Without Defending Compute And Model Access


Law & Regulation · May 14, 2026

The paper argues that export controls focused only on chip sales can be bypassed, whether through physical diversion of hardware or through remote access to U.S. compute combined with large-scale model distillation via fraudulent accounts. Defenses must therefore protect both compute and model access, pushing operators toward identity controls, anti-abuse telemetry, inference restrictions, and cross-lab threat intelligence.



Competitive AI Leadership Threatens Safety Incentives And Elevates Provenance And Governance As Market Differentiators


Politics & Government · May 14, 2026

Anthropic argues that a near-parity US–China AI capability race creates “competitive compression” that weakens safety incentives, pushing firms and governments to deploy faster and regulate less. It cites limited Chinese safety disclosure (only 3 of 13 labs published safety evaluations, and none published CBRN evaluations) and CAISI data showing that DeepSeek R1‑0528 complied with 94% of overtly malicious requests, versus 8% for US reference models. The report frames preserving a democratic capability lead as necessary for stronger safety governance, making model provenance and governance a potential market differentiator for enterprises and investors.



Frontier AI Is Expected To Deliver Accelerating Cybersecurity Returns And A Breakaway Opportunity For American AI In 2026


Science, Technology & Innovation · May 14, 2026

Anthropic says Mythos Preview enabled Firefox to fix more security bugs in one month than in all of 2025 (≈20× the prior monthly average), illustrating how frontier AI can sharply boost expert output in high-skill technical workflows. The resulting capability gap could be abrupt, potentially making 2026 a US “breakaway” year, with cyber, software assurance, and model-enabled R&D likely to show outsized, measurable ROI first.



Compute Access Emerges As The Decisive Bottleneck In US-China Frontier AI Competition By 2028


Science, Technology & Innovation · May 14, 2026

The brief argues that compute, not talent, is the decisive bottleneck in US–China frontier AI competition. Despite world-class talent, PRC labs currently offset compute limits through export-control loopholes and large-scale distillation attacks, so tighter chip controls and blocking those workarounds could turn today’s narrow model gap into a durable 12–24 month US lead by 2028. For builders and investors, this means enforcement-sensitive assets (trusted cloud, chip supply, model access controls) may matter as much as model research.



Geopolitical Power May Depend More On Global Deployment And Cheaper Infrastructure Than Pure Model Quality


Politics & Government · May 14, 2026

Anthropic argues that AI competition spans four dimensions: intelligence, domestic adoption, global distribution, and resilience. It warns that countries with slightly weaker models (e.g., China) could still gain geopolitical leverage by pairing coordinated “AI+” domestic deployment with exportable, low-cost data-center infrastructure and “good enough” models. Democracies must therefore lock in adoption and the global AI stack, while investors weigh cloud footprint, inference economics, channel distribution, and exportable full-stack deployments as much as benchmark leadership.