HomeCyberSecurity NewsNew AI Security Guidelines for Critical Infrastructure Released by U.S. Government

New AI Security Guidelines for Critical Infrastructure Released by U.S. Government

The U.S. government has revealed new security guidelines to enhance critical infrastructure protection against threats related to artificial intelligence (AI).

“These guidelines are based on a comprehensive government effort to evaluate AI risks in all sixteen critical infrastructure sectors and address threats involving AI systems,” stated the Department of Homeland Security (DHS) on Monday.

The agency is also working to promote the safe, responsible, and trustworthy use of AI technology without compromising individuals’ privacy, civil rights, and civil liberties.

The new guidelines focus on the use of AI in increasing and magnifying attacks on critical infrastructure, manipulating AI systems adversarially, and dealing with flaws in such tools that could lead to unintended consequences, requiring transparency and secure practices to assess and mitigate AI risks.

Specifically, this guidance covers four key functions throughout the AI lifecycle: govern, map, measure, and manage –

  • Establish a culture of AI risk management within organizations
  • Understand the individual context and risk profile of AI use
  • Develop systems to evaluate, analyze, and monitor AI risks
  • Prioritize and address safety and security risks related to AI

“Critical infrastructure owners and operators should consider their sector-specific and context-specific AI use when evaluating AI risks and selecting appropriate mitigations,” the agency emphasized.

“These owners and operators should be aware of their dependencies on AI vendors and collaborate to allocate and identify mitigation responsibilities accordingly.”

These guidelines come after the Five Eyes (FVEY) intelligence alliance (Australia, Canada, New Zealand, the U.K., and the U.S.) released a cybersecurity information sheet emphasizing the careful setup and configuration necessary for deploying AI systems.

The recommended best practices include securing the deployment environment, reviewing AI model sources, ensuring a solid deployment architecture, validating AI system integrity, protecting model weights, enforcing strict access controls, conducting external audits, and implementing robust logging.

Recent research has identified vulnerabilities in AI systems, such as prompt injection attacks and jailbreaking techniques, which can be exploited by cybercriminals and nation-state actors.

Studies have also shown that AI agents can autonomously exploit vulnerabilities in real-world systems, highlighting the importance of ensuring the security of AI technologies.

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest News