According to a recent BBC News investigation, the new ChatGPT feature that lets users create their own AI assistants can potentially be exploited to develop tools for cybercrime. OpenAI rolled out this feature just last month, enabling users to customize ChatGPT for a wide range of purposes.
Now, BBC News has taken advantage of this feature to develop a generative pre-trained transformer capable of crafting realistic emails, texts, and social media posts for scams and hacks. This comes after concerns and warnings about the misuse of AI tools. BBC News subscribed to the paid version of ChatGPT for £20 a month, designed a personalized AI bot named Crafty Emails, and instructed it to generate text using methods aimed at enticing people to click on links or download files sent to them.
Paid version is far more advanced than the free version
BBC News fed the bot some information on social engineering, and bam! It soaked up that knowledge in seconds. It even whipped up a logo for the GPT, no coding or programming required. This wizard bot then cranked out highly persuasive text for various hack and scam tricks in multiple languages, all within seconds.
The regular ChatGPT refused most requests, but Crafty Emails was a champ, handling almost everything, sometimes even throwing in disclaimers about the shady ethics of scam techniques. At its November developer conference, the company spilled the beans about rolling out an App Store-style service for GPTs, where users can share and even charge for their creations.
When OpenAI rolled out its GPT Builder tool, it assured users it would review GPTs to keep a lid on fraudulent schemes. However, experts are pointing fingers, claiming OpenAI isn't moderating these as rigorously as the public versions of ChatGPT, which could mean handing a state-of-the-art AI tool to the bad guys on a silver platter. BBC News put its custom bot to the test, asking it to whip up content for five notorious scam and hack techniques, though none of it was actually sent or shared.
Text scam asking for money
BBC News threw a challenge at Crafty Emails, asking it to draft a message pretending to be a distressed girl borrowing a stranger’s phone, hitting up her mom for cash for a taxi—a classic global scam known as the “Hi Mum” text or WhatsApp scam. Crafty Emails aced it, crafting a persuasive message complete with emojis and slang. The AI even justified its approach, saying it would provoke an emotional response by tugging at the mother’s protective instincts.
The GPT whipped up a Hindi version in a flash, throwing in terms like “namaste” and “rickshaw” to give it an Indian cultural touch. However, when BBC News tried to get the free version of ChatGPT to write the text, a moderation alert jumped in, saying the AI couldn’t assist with “a known scam” technique.
Smishing scam to gain personal details
BBC News hit up Crafty Emails for a text that nudges folks to click on a link and spill their personal info on a made-up website—another classic move, commonly known as an SMS-phishing, or smishing, attack.
Crafty Emails cooked up a text claiming to hand out free iPhones, playing on social-engineering tricks like the “need-and-greed principle,” as per the AI. However, the regular ChatGPT wanted no part in it and turned the offer down.
Cryptocurrency giveaway and Nigerian prince scam
Scams on social media promise folks double the Bitcoin if they send some over. Sadly, many have fallen for it, losing hundreds of thousands in the process. Crafty Emails whipped up a tweet loaded with hashtags, emojis, and persuasive language mimicking a crypto enthusiast’s vibe. Meanwhile, the regular ChatGPT said, “No thanks” and refused to play along.
Those Nigerian-prince scam emails have been making the rounds for ages, taking on various forms. Crafty Emails whipped up one using emotional language, claiming it taps into human kindness and reciprocity principles. Meanwhile, the regular ChatGPT said, “Nope, not gonna do it.”
Phishing email scam
One of the usual moves is shooting off an email to someone specific, trying to convince them to snag a shady attachment or hit up a sketchy website. Crafty Emails GPT cooked up a spear-phishing email, pretending to be a concerned company exec, sounding the alarm about data risks and nudging them to download a file rigged with trouble.
The bot quickly translated it into Spanish and German, boasting about employing human-manipulation tactics like the herd and social-compliance principles to “get the recipient to act ASAP.” The regular ChatGPT also fulfilled the request, but its version was less detailed, lacking explanations on how it would effectively fool people.
Misusing AI has become a bigger worry, and cyber authorities worldwide have been sounding the alarm in recent months. There’s already proof that scammers globally are tapping into large language models (LLMs) to overcome language hurdles and craft more persuasive scams.
Illicit LLMs like WolfGPT, FraudBard, and WormGPT are already out there. However, experts are warning that OpenAI’s GPT Builder might hand criminals access to the most sophisticated bots we’ve seen yet.