Releasing the SL5 Standard v0.1
Almost one year ago, the SL5 Task Force set out to address a gap we saw in the AI security ecosystem: protecting frontier IP against nation-state attacks. This undertaking is critical on most paths to a positive AI future. However, there aren’t existing roadmaps and standards to guide us there. AI datacenters are different, because they need to defend against novel emerging threats and meet the strong productivity constraints of the nascent AI industry. Our work builds on existing decades of security knowledge and applicable standards such as IL6, ICD705 and NIST SP800-53 and extends this with about 20% of novel AI-datacenter-specific controls.
Strategic considerations such as AGI timelines, geostrategic implications and the AI risk landscape will become rapidly clearer in the crucial years of 2026-2028 and frontier lab and government leadership might wish to have actionable contingency plans to fall back on. Given the scale of effort and advance planning required to build nation-state secure datacenters and practical AI R&D workflows, it is essential to frontload the design and research as much as possible to enable rapid action as the strategic picture becomes clearer. Thus, we developed the SL5 Standard as a first step towards achieving SL5 optionality.
Today, we are releasing the first public draft of our work: the Security Level 5 (SL5) standard, developed with input from over 100 technical contributors spanning AI labs, security researchers, and government.
The SL5 Standard is designed to protect frontier AI model weights against top-priority operations by the world’s most cyber-capable adversaries: actors with extensive resources, state-level infrastructure, and expertise years ahead of the public state of the art—and whose motivation to steal or disrupt frontier AI capabilities will only intensify as these models approach the ability to automate AI research itself. For more on the threat model, see Section 1.1 of the standard.
Many of the controls needed to meet this threat have lead times measured in months or years. Custom accelerator features have multi-year development cycles. Vetting large numbers of highly trusted personnel takes many months. If labs wait to begin these long-lead-time efforts until SL5 is urgently needed, they won’t be ready in time. This first version of the standard focuses on interventions that need to start now or that require significant lead time, while remaining sensitive to the practical constraints of frontier AI development. By acting on these controls early, labs can preserve the option to achieve full SL5 compliance without bearing the entire cost upfront.
Highlights
Network architecture
We propose using physical bandwidth limiters to separate the highest-security zones (where model weights are stored and computed on) from the rest of the SL5 network, where rapid development and other less sensitive operations occur. This offers a way to navigate a core tension: full air gapping and allow-by-exception policies, typical for this level of security, could be incompatible with the high-bandwidth distributed operations and rapid automated software development expected inside an SL5 data center. The large size of model weights makes physical bandwidth limiting a particularly effective control.
Hardware trusted compute base (TCB)
We recommend a minimal set of new features in accelerator chips that would enable excluding the host system and CPU from the trusted compute base entirely. Complex hardware systems have fragile security postures, and minimizing the TCB is one of the most impactful controls available.
Personnel security
We introduce the Sensitivity Levels (SenL) framework to help AI labs establish a rigorous personnel security process while navigating legal and operational constraints. Insiders are widely considered one of the most critical attack vectors, and even a single compromised individual can currently pose a major risk. Government collaboration will likely be necessary to achieve adequate vetting at the highest levels.
AI model security
As AI models take increasingly active roles in system development and security-critical operations, adversaries may target the models themselves, whether through training data poisoning or prompt injection. We recommend significantly greater investment in adversarial robustness research to address use cases from training data filtering to runtime prompt injection defense.
Physical security
Since NIST SP 800-53 provides limited guidance on physical security, we draw on ICD-705 for interventions including TEMPEST countermeasures and a Red/Black zone architecture separating sensitive from non-sensitive areas. For more on the overall security architecture, see Section 1.2.
Status and Open Questions
This is version 0.1, a first draft. It reflects months of work by our 100-person technical track, sourced from across the industry: consulting experts, researching existing frameworks, enumerating attack vectors, hypothesizing and red-teaming controls, and iterating. The control specifications are structured as an overlay on NIST SP 800-53, expressing the “diff” from existing high-security baselines rather than restating established requirements. We believe the long lead time interventions outlined here are necessary for SL5, but significant refinement is expected as we engage more stakeholders and incorporate implementation experience.
Several open questions carry significant architectural or policy implications, in areas of genuine uncertainty where we lack consensus or information. These include: the feasibility of private-sector personnel vetting at this level, whether adversarial content detection can be made robust against sophisticated adversaries, and whether long-distance network connections can be adequately secured against nation-state actors. We discuss these openly in Section 1.3.
Get Involved
We invite frontier AI labs, government agencies, datacenter operators, and security researchers to engage with this work. Read the full introduction for concrete details on the security architecture and open questions and reach out through sl5.org. You can also leave feedback directly on any page using the feedback tool at the bottom of each control page.



