AI is helping hackers carry out state-backed cyberattacks

Hackers backed by nation-states, along with cybercriminals, are using generative AI in their cyberattacks. At the same time, a senior official from the U.S. National Security Agency says U.S. intelligence is employing the same artificial intelligence technologies to detect malicious activity.

“We already see criminal and nation state elements utilizing AI. They’re all subscribed to the big name companies that you would expect — all the generative AI models out there,” said NSA director of cybersecurity Rob Joyce, speaking at a conference at Fordham University. “We’re seeing intelligence operators [and] criminals on those platforms.”

Joyce, who heads the NSA’s cybersecurity directorate responsible for safeguarding U.S. critical infrastructure and defense systems from threats, didn’t provide details on specific cyberattacks involving the use of AI or attribute particular activities to a specific state or government.

However, Joyce said that recent attempts by hackers backed by China to target U.S. critical infrastructure, believed to be part of preparations for a potential Chinese invasion of Taiwan, illustrate how AI technologies are helping to reveal malicious activity, giving U.S. intelligence an advantage.

“They’re in places like electric, transportation pipelines and courts, trying to hack in so that they can cause societal disruption and panic at the time and place of their choosing,” said Joyce.

Joyce noted that hackers backed by the Chinese government aren’t relying on conventional malware that can be easily detected. Instead, they exploit vulnerabilities and implementation flaws to establish a presence inside a network while appearing to be authorized users.

“Machine learning, AI and big data helps us surface those activities [and] brings them to the fore because those accounts don’t behave like the normal business operators on their critical infrastructure, so that gives us an advantage,” Joyce said.
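The approach Joyce describes amounts to anomaly detection on account behavior: accounts that don’t act like normal business operators stand out against a learned baseline. A minimal sketch of that idea using a simple z-score test (the feature, threshold, and data here are purely illustrative assumptions, not anything the NSA has described):

```python
# Hypothetical sketch: flag accounts whose activity deviates sharply
# from a historical baseline. Feature (logins per hour) and the 3-sigma
# threshold are illustrative choices, not real NSA tooling.
from statistics import mean, stdev

def zscore_outliers(baseline, observed, threshold=3.0):
    """Return observations more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if sigma and abs(x - mu) / sigma > threshold]

# A normal operator account logs in a handful of times per hour;
# an intruder probing the network at unusual volume stands out.
normal_logins = [4, 5, 6, 5, 4, 6, 5, 5]   # historical logins/hour
todays_logins = [5, 6, 31, 4]              # 31 is anomalous
print(zscore_outliers(normal_logins, todays_logins))  # → [31]
```

Real systems would model many more behavioral features (login times, source hosts, commands issued) and use far more robust statistics, but the principle is the same: accounts that don’t behave like legitimate operators surface for review.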

Joyce’s remarks come at a time when generative AI tools can produce highly convincing computer-generated text and imagery, and are increasingly turning up in cyberattacks and espionage campaigns.

In October, the Biden administration rolled out an executive order setting fresh standards for AI safety and security, emphasizing the need for stronger safeguards against misuse and mistakes. The Federal Trade Commission has also cautioned that AI technologies such as ChatGPT could be “used to turbocharge fraud and scams.”

Rohan Sharma
