Inside the DHS’s AI security guidelines for critical infrastructure
Last year, Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) stated that “Artificial intelligence (AI) holds extraordinary potential for both promise and peril.” In response to this reality, the United States Department of Homeland Security (DHS) recently released guidelines to help critical infrastructure owners and operators improve AI safety and security.
The DHS guidelines stem from insights gained from CISA’s cross-sector analysis of AI risk assessments completed by Sector Risk Management Agencies (SRMAs) and relevant independent regulatory agencies. DHS drew upon this analysis, as well as input from existing U.S. government policy, to develop specific safety and security guidelines to mitigate AI risks to critical infrastructure.
“Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” said CISA Director Jen Easterly in a statement.
Cross-sector AI security threats
The guidelines in the DHS document highlight three categories of system-level AI risk, which CISA developed in its cross-sector AI risk analysis. The categories include:
- Attacks using AI: Refers to the use of AI to automate, enhance, plan or scale physical or cyberattacks against critical infrastructure. Common attack vectors include AI-enabled cyber compromises, automated physical attacks and AI-enabled social engineering.
- Attacks targeting AI systems: Focuses on attacks that target AI systems supporting critical infrastructure. Common attack vectors include adversarial manipulation of AI algorithms, evasion attacks and interruption of service attacks.
- Failures in AI design and implementation: Refers to problems in the planning, structure, implementation, execution or maintenance of an AI tool or system. This can lead to malfunctions or other unintended consequences that affect critical infrastructure operations. Common failure modes include autonomy, brittleness and inscrutability.
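The DHS guidelines don't prescribe any tooling, but organizations triaging incidents against this taxonomy often encode it in their own tracking systems. As a purely illustrative sketch (the category names come from the guidelines; the scenario and code structure are hypothetical), the three categories could be represented like this:

```python
from enum import Enum

class AIRiskCategory(Enum):
    """CISA's three system-level AI risk categories for critical infrastructure."""
    ATTACKS_USING_AI = "attacks using AI"           # e.g., AI-enabled social engineering
    ATTACKS_TARGETING_AI = "attacks targeting AI"   # e.g., evasion attacks on a model
    AI_DESIGN_FAILURES = "failures in AI design and implementation"  # e.g., brittleness

# Tag an observed threat scenario with its category (hypothetical example)
scenario = {
    "description": "Deepfake voice call impersonating a grid operator",
    "category": AIRiskCategory.ATTACKS_USING_AI,
}
print(scenario["category"].value)  # attacks using AI
```

Tagging scenarios this way lets an operator roll incidents up by category when reporting risk to a Sector Risk Management Agency.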
The DHS guidelines’ four core functions
The new DHS guidelines also incorporate the NIST AI Risk Management Framework (AI RMF), including four key functions that help organizations address the risks of AI systems:
- Govern: This function supports setting up policies, processes and procedures to anticipate, identify and manage the benefits and risks of AI during the entire AI lifecycle. It follows a “secure by design” philosophy, prioritizing safety and security when building organizational structures.
- Map: This establishes a foundational context to evaluate and mitigate AI risks, including an inventory of all current or proposed AI use cases. Mapping begins with documenting context-specific and sector-specific AI risks, covering attacks using AI, attacks on AI and AI design and implementation failures.
- Measure: Refers to repeatable methods and metrics for measuring and monitoring AI risks and impacts. Critical infrastructure operators can develop their own context-specific testing, evaluation, verification and validation (TEVV) processes to inform usage and AI risk management decisions. Measuring should include continuous testing of AI systems for errors or vulnerabilities, covering both cybersecurity and compliance.
- Manage: Defines risk management controls and best practices to increase the benefits of AI systems while decreasing the likelihood of harm. This means regularly allocating resources and applying mitigations, as outlined by governing processes, to mapped and measured AI risks.
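To make the four functions concrete: the Map function's use-case inventory is a natural anchor for the other three. The sketch below is one hypothetical way an operator might structure such an inventory; every name, field and value here is illustrative, not something the DHS guidelines or NIST AI RMF specify.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (the Map function's starting point)."""
    name: str
    sector_context: str
    mapped_risks: list = field(default_factory=list)   # Map: documented AI risks
    test_results: dict = field(default_factory=dict)   # Measure: TEVV outcomes
    mitigations: list = field(default_factory=list)    # Manage: applied controls

# Govern: organizational policy requires every deployed model to appear here
inventory = [
    AIUseCase(
        name="anomaly-detection model for SCADA telemetry",
        sector_context="energy",
        mapped_risks=["evasion attacks", "brittleness under sensor drift"],
        test_results={"adversarial_eval": "passed", "drift_monitoring": "pending"},
        mitigations=["input validation", "human review of shutdown actions"],
    ),
]

# Measure/Manage loop: flag entries whose TEVV work is unfinished
pending = [uc.name for uc in inventory
           if any(status == "pending" for status in uc.test_results.values())]
print(pending)
```

The point of the structure is the feedback loop: governance sets the policy that populates the inventory, mapping fills in the risks, measurement records TEVV results, and management works down the list of pending items.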
Strengthening AI cybersecurity
The new DHS AI guidelines arrive amid a flurry of activity to establish national AI cybersecurity safeguards, coinciding with CISA being named the National Coordinator for Critical Infrastructure Security and Resilience.
Furthermore, the DHS has recently named a new Artificial Intelligence Safety and Security Board. The Board will develop AI security recommendations for critical infrastructure organizations such as transportation, pipeline and power grid operators and internet service providers. Meanwhile, the NIST GenAI program aims to create generative AI benchmarks to address the sticky issue of whether content is human- or AI-generated.
All these efforts are crucial as the nation fortifies its cyber defenses in the age of AI.