Mikko Hyppönen, 54, has spent decades battling malware. He has defeated some of the world's most destructive computer worms, tracked down the creators of the first PC virus, and has been selling his own software since his teenage years in Helsinki. Along the way, he's scored a Vanity Fair profile, made Foreign Policy's Top 100 Global Thinkers list, and become Chief Research Officer at WithSecure, the biggest cybersecurity company in the Nordics.
The ponytailed Finn also curates the online Malware Museum. Yet the historical artifacts in his collection may soon be overshadowed by a new technological era: the age of artificial intelligence. An optimist, the hacker hunter believes the AI revolution will bring positive change, but he also worries about the cyber threats it could unleash.
As 2024 gets underway, Hyppönen shared his top five concerns for the year ahead. They're in no particular order, though there is one that keeps him up at night more than the rest.
AI deepfakes on the rise
For years, experts have warned that deepfakes could become the scariest AI-powered crime, but those predictions hadn't fully materialized. Until recently, at least. In the past few months, the concerns have started to become reality: deepfake fraud attempts surged by 3,000% in 2023, according to research from Onfido, the London-based unicorn specializing in ID verification.
In information warfare, fake videos are growing more convincing. The crude deepfakes of Ukrainian President Volodymyr Zelenskyy that circulated in the early stages of Russia's full-scale invasion have given way to far more polished manipulations. Deepfakes are also creeping into everyday scams. A prime example surfaced in October, when a video appeared on TikTok purporting to show MrBeast offering brand-new iPhones for just $2.
Still, financial scams built on convincing deepfakes remain rare. Hyppönen has come across only three so far, but he expects that number to climb fast as deepfake technology grows more sophisticated, accessible, and affordable.
To mitigate the risk, he proposes an old-fashioned defense: safe words. Picture a video call with coworkers or family. If someone asks for something sensitive, such as a money transfer or a confidential document, you ask for the pre-agreed safe word before complying.
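The safe word itself never needs to be written down or transmitted. As a minimal sketch of how a team might formalize the practice in software (an illustration, not something Hyppönen prescribes; all names here are hypothetical), the word can be enrolled once as a salted hash and then checked during a call:

```python
import hashlib
import hmac
import os

def enroll(safe_word: str) -> tuple[bytes, bytes]:
    """Agree on the word out of band; store only a salted hash of it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", safe_word.encode(), salt, 100_000)
    return salt, digest

def verify(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Before honoring a sensitive request, check the word given on the call."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

# Enrolled once, in person; verified later, on the call.
salt, digest = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("some other phrase", salt, digest))             # False
```

The design mirrors password storage: even if the stored record leaks, the slow salted hash makes recovering the word itself expensive.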
Deep scams targeting millions
Despite sharing a name with deepfakes, deep scams don't necessarily involve manipulated media. Here, the "deep" refers to the massive scale of the scam, achieved through automation, which expands the pool of targets from a handful to practically limitless.
These techniques can supercharge all sorts of scams—investment scams, phishing scams, property scams, ticket scams, romance scams… basically, anywhere there’s some manual work, there’s space for automation.
Remember the Tinder Swindler? That scammer made off with an estimated $10 million from women he met online. Now imagine he'd had large language models (LLMs) to spread his deceptions, image generators to supply seemingly authentic photographic proof, and machine translation to localize his messages. The pool of potential victims would be enormous.
Airbnb scammers stand to benefit too. Today they typically lift stolen photos from legitimate listings to lure vacationers into booking. It's a laborious tactic, and one that can be thwarted with a reverse image search. With GenAI, that obstacle disappears: freshly generated photos match nothing on the web.
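To see why a reverse image search catches stolen photos, and why GenAI sidesteps it, here is a minimal sketch using perceptual hashing with the Pillow and imagehash Python packages (the file names are hypothetical):

```python
from PIL import Image  # pip install Pillow imagehash
import imagehash

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """pHash survives resizing and recompression, unlike a file checksum."""
    return imagehash.phash(Image.open(path))

# Index of photos from known legitimate listings (hypothetical files).
known = {p: perceptual_hash(p) for p in ["listing_a.jpg", "listing_b.jpg"]}

suspect = perceptual_hash("new_listing.jpg")
for path, h in known.items():
    # Subtraction gives the Hamming distance between the two hashes.
    if suspect - h <= 8:
        print(f"Likely a copied photo: close match with {path}")
```

A stolen photo lands within a few bits of the original even after edits, so it gets flagged; a freshly generated image matches nothing in any index and passes clean.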
Malware made using LLMs
AI is already writing malware. Hyppönen's team has uncovered three worms that use large language models (LLMs) to rewrite their code each time the malware replicates. None has been detected in real networks so far, but they have been uploaded to GitHub, and they work.
By tapping into an OpenAI API, the worms use GPT to generate unique code for every target they infect, which makes them hard to spot. OpenAI, however, retains the ability to blacklist the malware's behavior.
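The detection problem this creates is easy to demonstrate without writing anything harmful. Traditional signatures fingerprint known file contents, but two snippets with identical behavior and different wording yield unrelated fingerprints, so a signature for one variant never matches the next. A benign sketch:

```python
import hashlib

# Two functionally identical snippets, as an LLM might rewrite code per target.
variant_a = "def total(xs):\n    return sum(xs)\n"
variant_b = "def total(xs):\n    acc = 0\n    for x in xs:\n        acc += x\n    return acc\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Same behavior, unrelated signatures: hash-based detection misses the rewrite.
print(sig_a == sig_b)  # False
```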
That ability is one advantage of closed-source AI systems, and it extends to image generators as well. Provide open access to the code, and watch your restrictions on violence, porn, and deception crumble. Given that, it's no surprise that OpenAI is more closed than its name suggests. Well, that and the income they'd lose to copycat developers, of course.
Automated malware
WithSecure has integrated automation into its defenses for years, giving the company an advantage over attackers who mostly stick to manual operations. For criminals, there’s a straightforward way to level the playing field: fully automated malware campaigns.
That showdown is about to begin, and when it does, the results could be alarming. Indeed, Hyppönen ranks fully automated malware as the top security threat for 2024. Yet an even bigger threat lurks further down the road.
Zero-day exploits
Another growing worry is zero-day exploits, which take advantage of vulnerabilities before developers have discovered them, let alone shipped a fix. AI can help defenders find these flaws, but it can also help attackers create them.
A student working at WithSecure has already shown how real the threat is. For a thesis project, they were given standard user rights with command-line access to a Windows 11 machine, then fully automated the process of scanning for vulnerabilities to escalate to local admin. WithSecure classified the thesis.
The looming threat of artificial general intelligence
Hyppönen has a theory about IoT security that has become known as Hyppönen's law: whenever an appliance is described as "smart," it's vulnerable. If that law holds for superintelligent machines, we could find ourselves in serious trouble.
“I think we will become the second most intelligent being on the planet during my lifetime,” Hyppönen says. “I don’t think it’s going to happen in 2024. But I think it’s going to happen during my lifetime.”
That prospect amplifies concerns about artificial general intelligence. To retain control over AGI, Hyppönen argues, we must ensure it is strongly aligned with human goals and needs.