Hackers backed by nation-states, along with cybercriminals, are using generative AI in their cyberattacks. But U.S. intelligence is also using artificial intelligence technologies to find malicious activity, according to a senior U.S. National Security Agency official.
“We already see criminal and nation state elements utilizing AI. They’re all subscribed to the big name companies that you would expect — all the generative AI models out there,” said NSA director of cybersecurity Rob Joyce, speaking at a conference at Fordham University. “We’re seeing intelligence operators [and] criminals on those platforms,” Joyce added.
Joyce, who heads the NSA’s cybersecurity directorate charged with protecting U.S. critical infrastructure and defense systems from cyber threats, did not detail specific cyberattacks involving the use of AI or attribute particular activity to a state or government.
But Joyce said that recent efforts by China-backed hackers to target U.S. critical infrastructure, believed to be preparation for a potential Chinese invasion of Taiwan, show how AI technologies are surfacing malicious activity, giving U.S. intelligence the upper hand.
“They’re in places like electric, transportation pipelines and courts, trying to hack in so that they can cause societal disruption and panic at the time and place of their choosing,” said Joyce.
Joyce said that China-backed hackers are not using conventional malware that could be easily detected, but are instead exploiting vulnerabilities and implementation flaws that allow them to gain a foothold on a network and appear as though they are authorized to be there.
“Machine learning, AI and big data helps us surface those activities [and] brings them to the fore because those accounts don’t behave like the normal business operators on their critical infrastructure, so that gives us an advantage,” Joyce said.
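Joyce did not describe the NSA’s tooling, but the general approach he alludes to — flagging accounts whose activity deviates from that of normal operators — is a standard anomaly detection problem. What follows is a minimal, hypothetical sketch using scikit-learn’s IsolationForest; the features (login hour, session length, command volume) and all of the data are invented for illustration and do not reflect any real system or agency method.

```python
# Hypothetical sketch: flag accounts whose behavior deviates from the norm.
# All features and data are invented for illustration purposes only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" operator sessions: [login hour, session minutes, commands run]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around business hours
    rng.normal(45, 10, 500),  # typical session length
    rng.normal(30, 8, 500),   # typical command volume
])

# A few suspicious sessions: off-hours logins, long sessions, heavy activity
suspicious_sessions = np.array([
    [3.0, 240.0, 400.0],
    [2.5, 180.0, 350.0],
])

# Fit an unsupervised model on normal behavior only
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious_sessions))  # expected: [-1 -1]
print(model.predict(normal_sessions[:3]))  # expected: [1 1 1]
```

The appeal of this kind of unsupervised approach in the scenario Joyce describes is that intrusions masquerading as legitimate logins carry no malware signature to match, so defenders instead model what normal account behavior looks like and flag deviations from it.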
Joyce’s remarks come at a time when generative AI tools capable of producing convincing computer-generated text and imagery are increasingly being used in cyberattacks and espionage campaigns.
In October, the Biden administration issued an executive order setting new standards for AI safety and security, emphasizing the need for stronger safeguards against misuse and mistakes. The Federal Trade Commission has also warned that AI technologies, such as ChatGPT, can be “used to turbocharge fraud and scams.”