Title: Artificial Intelligence Could Improve Critical Infrastructure Services, But It Comes with Risks

Description: Artificial intelligence could be used to improve services and utilities we use every day--such as electricity, water, transportation, banking, and more. But AI could also make these systems more vulnerable to cyber threats. In a new report, we looked at what’s being done at the federal level to monitor AI use and protect these systems from risks. GAO’s Dave Hinchman tells us more.

Related work: GAO-25-107435, Artificial Intelligence: DHS Needs to Improve Risk Assessment Guidance for Critical Infrastructure Sectors

Released: December 2024

{Music}

[Dave Hinchman:] Considering the risk of artificial intelligence, we need to make sure that we’re examining that risk from every angle in a way that’s consistent with foundational practices.

[Holly Hobbs:] Hi, and welcome to GAO’s Watchdog Report, your source for fact-based, nonpartisan news and information from the U.S. Government Accountability Office. I’m your host, Holly Hobbs.

Artificial intelligence could be used to improve services and utilities we use every day--such as electricity, water, transportation, banking, and more. But AI could also make these systems more vulnerable to cyber threats. And if attacked, the loss or disruption of one of these critical infrastructure systems could have dramatic effects on the communities that rely on them or the nation at large.

In a new report, we looked at what’s being done at the federal level to monitor AI use in critical infrastructure and to protect these systems from risks. We’ll find out more from GAO’s Dave Hinchman, who led the work on this report. Thanks for joining us.

[Dave Hinchman:] Hi, Holly, thanks for having me here today.

[Holly Hobbs:] So, Dave, can we start with how artificial intelligence is currently being used in critical infrastructure? What are the pros?

[Dave Hinchman:] AI is such a transformative technology. It has applications across our nation’s entire infrastructure. These are areas like medicine, agriculture, and manufacturing. And the technology is used for things like optimizing supply chain performance, which you hear about in the news; automating routine tasks like data entry; but also detecting events or changes in a system. For instance, medicine uses AI to detect a patient’s irregular heartbeat before anyone even realizes something is wrong. But despite the good, we’ve also found that AI poses a lot of unique challenges, many of which may be unknown or unforeseen at this time.

[Holly Hobbs:] So since you mentioned it, what are the risks here? What are we worried about?

[Dave Hinchman:] There are three basic risks that folks in the government have identified. The first is attacks using artificial intelligence. This is when a bad actor uses the technology to do something like create deepfake videos or send massive amounts of phishing spam. But there are also attacks targeting AI systems. If you’re using artificial intelligence to do data processing, perhaps a bad actor thinks that if they can get in and attack that AI system, they’ll stop that process or bring some other process to a halt. And finally, there can be failures in AI design and implementation--so you have an AI system that’s in place, but it isn’t being used the right way or it wasn’t set up the right way, and it creates problems that way.

[Holly Hobbs:] At the federal level, who all is responsible for protecting these systems, and what are they doing about it?
[Dave Hinchman:] So the government has divided our national infrastructure into 16 critical sectors--things like manufacturing, health, and the defense industrial base--and then appointed nine federal agencies to serve as the lead, or sector risk management, agencies. Some of those agencies have multiple responsibilities, or they share responsibilities for a certain sector. These agencies serve as the day-to-day federal interface for ensuring the physical security and cybersecurity of their sectors. They work with the owners and operators of each sector to implement guidance and make sure that those sectors are as safe as possible.

Those nine sector risk management agencies were tasked by a presidential Executive Order in October 2023 to develop and annually submit assessments of the risks to each of those 16 sectors. The agencies were required to submit these risk assessments to DHS, and DHS provides the guidelines for completing those assessments. The first risk assessments, which are the ones that we reviewed in our report, were due in January 2024. And then, as I mentioned, they’ll be due annually thereafter.

[Holly Hobbs:] So you said we looked at these assessments. What did we find?

[Dave Hinchman:] We found that all the agencies submitted the risk assessments as required, which is great. But unfortunately, none of the assessments fully addressed the six characteristics that GAO has found provide a sound foundation for effective risk assessment and mitigation. For instance, most assessments didn’t fully identify the potential risks associated with AI uses or the likelihood of a risk occurring. We also found that none of the assessments fully evaluated the level of each identified risk. And that’s important because when you measure the level of risk, you need to include both the magnitude of the harm posed by the risk and the probability that the harmful event might occur. For instance, we know this could happen. If it does happen, how bad is it going to be, and what’s the likelihood of that?

[Holly Hobbs:] So this is obviously a newer effort, but did the agencies tell us about any challenges they faced when trying to make progress on these efforts?

[Dave Hinchman:] They did. There were two things that we heard consistently from the agencies when we talked. The first is that they had a very short time frame of only 90 days from when the Executive Order was issued until the first risk assessment was due. That’s not a lot of time for an agency to get something up and running, especially with a technology where a lot of folks are still trying to figure out how it’s being used and what they’re doing with it. And so that was a reason that a lot of the agencies cited for having incomplete assessments.

The other thing that agencies pointed out to us was that they had trouble identifying specific instances of an AI technology being used in a specific system. They had trouble identifying these use cases because this is a new technology, and it evolves quickly. Agencies also don’t have a lot of historical data about the risks that AI poses to critical infrastructure. They’re really just beginning this journey and just starting to keep those records.

{MUSIC}

[Holly Hobbs:] So agencies are taking steps, as directed, to protect our nation’s critical infrastructure from the risks that artificial intelligence poses. But there are some gaps in these efforts. Dave, what more do we think should be done to better protect these systems from potential AI threats?
[Dave Hinchman:] Well, one of our key findings was that DHS’s initial guidance on how to prepare these risk assessments didn’t require those six risk assessment characteristics that I talked about, and that we would expect to see. And so, we recommended that DHS act quickly to update its guidance for AI risk assessment, so that the future assessments that are due every year include those six fundamental characteristics.

[Holly Hobbs:] And last question, what’s the bottom line of this report?

[Dave Hinchman:] So it’s great that the government is starting to look at and think about how AI can impact our nation’s critical infrastructure. But in doing that, and in considering the risk of artificial intelligence, we need to make sure that we’re examining that risk from every angle in a way that’s consistent with foundational practices.

[Holly Hobbs:] That was GAO’s Dave Hinchman talking about our new report on AI and critical infrastructure. Thanks for your time, Dave.

[Dave Hinchman:] Thanks, Holly.

[Holly Hobbs:] And thank you for listening to the Watchdog Report. To hear more podcasts, subscribe to us on Apple Podcasts, Spotify, or wherever you listen. And make sure to leave a rating and review to let others know about the work we’re doing. For more from the congressional watchdog, the U.S. Government Accountability Office, visit us at GAO.gov.