Fort Meade, Md. — The National Security Agency’s Artificial Intelligence Security Center has issued a joint cybersecurity information sheet aimed at helping organizations secure the data used to train and operate artificial intelligence systems, underscoring the growing focus on safeguarding the AI lifecycle as adoption accelerates across government and industry.
The guidance, “AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems,” outlines measures to protect data from development through deployment. Recommendations include using digital signatures to verify trusted revisions, tracking data provenance, and relying on trusted infrastructure. It also stresses the need for sustained protections across the full AI system lifecycle, not just during model training.
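The verify-before-use pattern behind the digital-signature recommendation can be sketched briefly. The snippet below is illustrative only and is not taken from the guidance: it uses a symmetric HMAC tag from the Python standard library to stand in for a true digital signature (a production system would use an asymmetric scheme such as Ed25519 so that data consumers hold only a public key), and the key and dataset contents are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key for illustration only; real deployments would use
# asymmetric digital signatures so consumers never hold signing material.
SIGNING_KEY = b"example-key-not-for-production"

def sign_revision(data: bytes) -> str:
    """Produce an HMAC-SHA256 tag over a dataset revision."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_revision(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time before the data is used."""
    expected = sign_revision(data)
    return hmac.compare_digest(expected, tag)

dataset = b"label,text\n1,benign sample\n"
tag = sign_revision(dataset)

assert verify_revision(dataset, tag)              # untampered revision passes
assert not verify_revision(dataset + b"x", tag)   # modified revision is rejected
```

The same check applies at every stage of the lifecycle the guidance covers: each trusted revision is signed when it is produced and verified again before training or inference consumes it.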
In addition to prescriptive steps, the document details risks confronting AI programs and proposes mitigations. It calls out threats to the data supply chain, the danger of maliciously modified or “poisoned” data that can skew model behavior, and data drift, the gradual change in inputs over time that can erode performance and reliability.
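The data-drift risk can be made concrete with a minimal monitoring sketch. The statistic, threshold, and sample values below are illustrative assumptions, not from the information sheet: it flags drift when the mean of recent inputs moves away from the training-time mean, measured in units of the training-time standard deviation.

```python
import statistics

def drift_score(reference: list[float], current: list[float]) -> float:
    """Absolute shift of the current mean from the reference mean,
    expressed in units of the reference standard deviation."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.pstdev(reference) or 1e-12  # guard against zero spread
    return abs(statistics.fmean(current) - ref_mean) / ref_std

reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # feature values seen at training time
stable    = [1.0, 0.98, 1.02, 1.01]            # similar distribution: low score
drifted   = [1.8, 1.9, 2.1, 2.0]               # shifted inputs: high score

assert drift_score(reference, stable) < 1.0
assert drift_score(reference, drifted) > 3.0
```

Real deployments would monitor many features with richer distributional tests, but the principle is the same: compare live inputs against a trusted baseline and alert before degraded data erodes model reliability.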
The release targets organizations already running AI tools and those preparing to integrate them, with a particular emphasis on system owners and administrators within the Department of Defense, National Security Systems, and the Defense Industrial Base. The agencies urge adopters to fold the recommended practices into mission environments to better protect sensitive and critical information.
The publication is co-sealed by the NSA, the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, the Australian Signals Directorate’s Australian Cyber Security Centre, New Zealand’s National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre, reflecting a broad four-nation partnership on AI security.
The information sheet is available on the NSA website: https://media.defense.gov/2025/May/22/2003720601/-1/-1/0/CSI_AI_DATA_SECURITY.PDF