There's no doubt that, in the grand tapestry of technological progress, artificial intelligence (AI) represents a revolution.
As a force for good, AI has the potential to enhance our lives in ways previously unimaginable, from health and education to entertainment and communication. However, as with all tools, its use can cut both ways.
The cyber realm is already beginning to feel the chill of AI's darker side.
Recently, a cybersecurity experiment involving a variant of the GPT model, known as WormGPT, revealed some scary results. The objective of the experiment was to test the model's ability to craft phishing emails.
Phishing, a cybercrime where targets are contacted by email, telephone, or text message by someone posing as a legitimate institution to lure individuals into providing sensitive data, has been a longstanding threat in the digital world.
The experiment found that WormGPT was able to generate an email that was not only alarmingly persuasive but also strategically cunning. The quality of the output pointed towards the model’s potential to be exploited in sophisticated phishing and Business Email Compromise (BEC) attacks.
But first...
What is WormGPT?
WormGPT operates much like its cousin, ChatGPT, but with a chilling difference.
Unlike ChatGPT, it operates without ethical boundaries or limitations, giving it an uncanny ability to mimic legitimate communications with frightening accuracy. The arrival of this technology underscores a new reality where phishing emails can be replicated with ease, reinforcing the need for individuals to be extra vigilant when interacting with digital communications.
What is most alarming is that the use of generative AI democratises the execution of sophisticated BEC attacks.
Because AI produces emails with near-flawless grammar, they appear legitimate and are less likely to be flagged as suspicious. This development gives even cybercriminals with limited skills an accessible tool, making advanced cyber threats a more prevalent danger across a broader spectrum of society.
The most common use case is cybercriminals deploying WormGPT in BEC attacks intended to trick companies into making fraudulent money transfers or divulging critical data.
Let's take a look at the top 5 ways you can protect yourself from cybercriminals who are using AI to mount phishing attacks.
1. Verify before you trust

It's crucial to be suspicious and take the time to verify the legitimacy of an email, particularly one that asks for sensitive information.
One good practice is to never click on a link in a suspicious email, but instead, manually type the URL into the web browser. This can help avoid falling into traps set by cybercriminals who use very convincing-looking phishing websites.
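One of the traps mentioned above, a link whose visible text shows a trusted address while its underlying destination points somewhere else entirely, can even be spotted mechanically. As a minimal illustrative sketch (the domains below are invented for the example), Python's standard-library HTML parser can pull out each link's visible text alongside its real destination:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects (visible text, actual href) pairs from an email's HTML body."""

    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> tag we are currently inside, if any
        self._text = []     # visible text collected inside that tag
        self.links = []     # list of (visible_text, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# A link whose visible text shows one domain but whose href points to
# another is a classic phishing tell (both addresses here are made up).
auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">https://www.mybank.com</a>')
for text, href in auditor.links:
    print(text, "->", href)
```

Mismatches like the one printed above are exactly why typing the address yourself is safer than clicking.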
Moreover, remember that legitimate businesses or institutions will never ask you to provide sensitive personal information via email.
If you receive an email that seems to be from your bank or another service provider asking for personal details, it's best to contact the institution directly through verified contact methods to confirm the request.
2. Stay informed and educated

With advancements in AI, phishing attacks have become more sophisticated and harder to detect. Therefore, the first line of defence is being informed and aware. Users should educate themselves on how phishing attacks work and the latest techniques being employed, including AI-driven methods.

Knowing the common signs of a phishing email, such as generic greetings, misspellings, and unofficial email addresses, can help individuals spot potential phishing attempts.
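Those red flags lend themselves to simple automated checks as well as human ones. The sketch below is purely illustrative (the brand name "MyBank", the addresses, and the phrase lists are all invented for the example, and a real filter would be far more thorough); it just shows how the signs listed above can be turned into code:

```python
import re

# Hypothetical phrase lists - real filters use much larger, curated sets.
GENERIC_GREETINGS = ("dear customer", "dear user", "dear account holder")
URGENT_PHRASES = ("verify your account", "act now", "suspended", "confirm your password")

def phishing_signals(sender: str, body: str) -> list[str]:
    """Return the red flags this email trips (illustrative, not a real filter)."""
    signals = []
    lowered = body.lower()
    if any(g in lowered[:80] for g in GENERIC_GREETINGS):
        signals.append("generic greeting")
    if any(p in lowered for p in URGENT_PHRASES):
        signals.append("urgency / credential request")
    # "Unofficial" sender: the display name claims a brand ("MyBank") that the
    # actual sending domain does not match.
    match = re.search(r"@([\w.-]+)", sender)
    if match and "mybank" in sender.lower() and match.group(1).lower() != "mybank.com":
        signals.append("sender domain mismatch")
    return signals

print(phishing_signals("MyBank Support <support@mybank-secure.net>",
                       "Dear customer, please verify your account immediately."))
# -> ['generic greeting', 'urgency / credential request', 'sender domain mismatch']
```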
3. Use advanced, AI-powered security tools

AI-driven phishing attacks can mimic human behaviour and bypass traditional security measures. Hence, employing advanced security tools that utilise machine learning and AI themselves for detecting and blocking such advanced threats is essential.
These tools can analyse email patterns and spot anomalies or suspicious behaviour, helping to filter out phishing emails.
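To make that idea concrete, here is a toy version of one such technique, a naive Bayes text classifier, written with only Python's standard library. It is a teaching sketch (the miniature training set is invented, and production tools use far richer features and models), but it shows the core idea: learning which words are statistically associated with phishing.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and doc totals."""
    counts = {"phish": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability under a bag-of-words model."""
    vocab = set(counts["phish"]) | set(counts["ham"])
    best_label, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(totals[label] / sum(totals.values()))  # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the probability.
            lp += math.log((counts[label][word] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Invented miniature training set, for illustration only.
counts, totals = train([
    ("verify your password urgently", "phish"),
    ("your account is suspended click to verify", "phish"),
    ("urgent password reset required", "phish"),
    ("lunch meeting moved to noon", "ham"),
    ("please review the quarterly report", "ham"),
    ("team agenda for monday", "ham"),
])
print(classify("urgent verify your password", counts, totals))  # -> phish
```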
4. Enable Multi-Factor Authentication (MFA)

Implementing MFA adds an extra layer of security to your accounts and can protect you even if a phishing attack succeeds in stealing your credentials. By requiring an additional verification step, such as a fingerprint scan, facial recognition, or a code sent to your phone, MFA ensures that the person attempting to access the account is indeed the account owner.
Even if AI-based phishing scams succeed in fooling users into revealing their passwords, the attackers would still need the additional authentication factor to gain access.
5. Be suspicious. Be suspicious.

That's not a typo - we're just trying to make the point.
Your instinctive suspicion when interacting with unfamiliar or unexpected digital content, such as emails, is your first and arguably most vital line of defence against cyber threats. Go with your gut!
In the end, the rise of AI and its exploitation is a stark reminder of the importance of vigilance and education in our digital lives. By staying informed and cautious, we can better protect ourselves from these emerging threats and enjoy the many benefits of our increasingly connected world.
And don't forget the important role we each play in wider cybersecurity. Reporting suspicious emails not only helps protect us but also helps companies identify and address vulnerabilities that could impact others.
As we strive for a more secure digital environment, remember that every individual's action counts.
You've already taken the first steps by joining Cyber Heroes for your cyber security training.