DHS Takes Steps to Mitigate AI Risks in Critical Infrastructure – ClearanceJobs


Last month, the Department of Homeland Security (DHS) announced the establishment of the Artificial Intelligence Safety and Security Board, which will advise the DHS secretary, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure.

The board – which includes 22 representatives from a range of sectors – will further develop recommendations to help critical infrastructure stakeholders, such as transportation service providers, pipeline and power grid operators, and internet service providers, more responsibly leverage AI technologies. It will also develop recommendations to prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety.

DHS, in coordination with the Cybersecurity and Infrastructure Security Agency (CISA) and the Countering Weapons of Mass Destruction Office (CWMDO), also released new guidelines to protect against AI risks – including to weapons of mass destruction (WMDs).

The guidelines organize their analysis around three overarching categories of system-level risk: the use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure; targeted attacks on AI systems supporting critical infrastructure; and deficiencies in the design, implementation, or execution of an AI tool or system that lead to malfunctions or other unintended consequences affecting critical infrastructure operations.

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

Road to Safe Implementation of AI

The DHS AI roadmap is the most detailed AI plan put forward by a federal agency to date. It calls for testing uses of the technologies that deliver meaningful benefits to the American public and advance homeland security, while ensuring that individuals’ privacy, civil rights, and civil liberties are protected.

“The introduction of these new guidelines is an excellent step in the move to provide advice to organizations with very sensitive systems to make best use of the growing tendency to introduce AI capabilities into corporate networks and systems. It is essential that while organizations make use of the enormous benefits that AI can offer in terms of efficiency, they remain cognizant of the vulnerabilities the AI might be introducing,” said Tim Rawlins, director and senior advisor at cyber threat assessment firm NCC Group.

“We are seeing an increasing number of organizations seek to establish the cyber security and data privacy impact of AI in their environments,” Rawlins told ClearanceJobs via email. “While not many organizations are working on CBRN issues, taking the new guidelines into account will be a sensible move.”

Protecting Critical Infrastructure

DHS has outlined a four-part mitigation strategy, building upon the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), which critical infrastructure owners and users can consider when approaching contextual and unique AI risk situations. This includes establishing an organizational culture of AI risk management, prioritizing safety and security outcomes, embracing radical transparency, and building organizational structures that make security a top business priority.

The guidelines also call for developing a better understanding of the foundational context in which AI risks can be evaluated and mitigated; building systems to assess, analyze, and track AI risks, along with repeatable methods and metrics for measuring and monitoring those risks and their impacts; and prioritizing and acting on AI risks to safety and security by implementing and maintaining the identified risk management controls, maximizing the benefits of AI systems while reducing the likelihood of harmful safety and security impacts.
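To make the mitigation steps concrete, the sketch below models a minimal AI risk register loosely organized around the activities described above (mapping risks to the three DHS risk categories, measuring them with repeatable metrics, and prioritizing them for action). It is illustrative only, assuming a simple likelihood-times-impact score; none of the class or field names come from the DHS guidelines or the NIST AI RMF.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical example: a tiny AI risk register for a critical
# infrastructure operator. Names and scales are assumptions.

class RiskCategory(Enum):
    ATTACKS_USING_AI = "attacks using AI"            # AI-enhanced attacks on infrastructure
    ATTACKS_ON_AI = "attacks targeting AI systems"   # targeted attacks on supporting AI
    DESIGN_FAILURES = "design or implementation failures"

@dataclass
class AIRisk:
    description: str
    category: RiskCategory
    likelihood: int                 # example scale: 1 (rare) .. 5 (frequent)
    impact: int                     # example scale: 1 (minor) .. 5 (severe)
    controls: list[str] = field(default_factory=list)

    def score(self) -> int:
        """A repeatable metric: simple likelihood x impact."""
        return self.likelihood * self.impact

def prioritize(register: list[AIRisk]) -> list[AIRisk]:
    """Rank tracked risks so the highest-scoring ones are acted on first."""
    return sorted(register, key=lambda r: r.score(), reverse=True)

if __name__ == "__main__":
    register = [
        AIRisk("Poisoned training data in grid load-forecasting model",
               RiskCategory.ATTACKS_ON_AI, likelihood=2, impact=5,
               controls=["validate training data provenance"]),
        AIRisk("LLM-assisted phishing against control room operators",
               RiskCategory.ATTACKS_USING_AI, likelihood=4, impact=3),
    ]
    for risk in prioritize(register):
        print(risk.score(), risk.description)
```

In practice an operator would replace the toy scoring with whatever measurement methodology its risk program already uses; the point is only that mapping, measuring, and prioritizing become explicit, auditable steps.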

“The DHS establishing guidelines focused on AI safety and security for critical infrastructure is important. Having a proactive framework to identify and mitigate AI risks is going to be huge. However, the guidelines will need to strike the right balance between risk management and enabling innovation,” suggested Joseph Thacker, principal AI Engineer at cybersecurity solution provider AppOmni.

Thacker told ClearanceJobs that more should be done and that the board should prioritize releasing more specific, actionable implementation recommendations alongside the high-level guidelines.

“The strategic recommendations are useful,” he added. “But given how fast AI is moving, critical infrastructure operators will need concrete technical guidance they can put into practice. The board and organization should provide hands-on tools like reference architectures, configuration checklists, and code samples that translate the principles into real-world safeguards.”

DHS could also offer workshops and a knowledge base to help organizations rapidly level up their AI security know-how.

“We need to make rigorous AI security as easy as possible,” Thacker continued. “The board has the resources to define common tactics and recommend them.”

This post was originally published on the third-party site mentioned in the title of this article.
