Generative AI is a gift for cyber criminals

While there’s lots of talk about how jobs might disappear in the face of a rapidly advancing generative AI sector, it may also provide hackers and scammers tools to scale up their operations.

Andrew Wooden

April 27, 2023



Generative AI and its widely referenced standard bearer ChatGPT might well end up doing for your job and driving the world collectively insane with a blizzard of unlimited deepfake content, but at least the cyber criminals might have something to raise a glass to.

One early upshot of Large Language Model tools – which oscillate between amusing curiosity when they get things weirdly wrong and all-out existential threat – is that they could provide hackers and scammers with the ability to significantly up their game when it comes to robbing people online. The good news just keeps coming.

Satnam Narang, Senior Staff Research Engineer at Tenable – a security/exposure vulnerability management firm – says it’s his job to have his ear to the streets and keep up with what’s happening with such things, so we asked him about the security implications of generative AI and what it might mean for the future.


The pros and cons of generative AI

“It’s a balancing act, because on the one hand we see a lot of potential for these applications, generative AI can do a lot and it can provide quite a bit of value. But also, the rapid pace of development and this rush to try to be the best or be the first to market in this space… I think that creates challenges when it comes to the type of guard rails that are in place.

“I know some folks have managed to create prompts that will allow them to leverage these generative AI tools to ask questions that normally aren’t responded to, like creating bombs and things of this nature. So there’s definitely concern about where things could go in the future. On the one hand, it’s just trying to figure out how to balance that – how do we balance the development of these really valuable technologies with the security and privacy concerns that come from them as well?

“That’s one of the other things we’ve seen since ChatGPT burst onto the scene. There have been some bugs discovered in there, some folks have managed to make queries to ChatGPT and see responses or private data associated with other users. So there’s a lot of issues that we have to work through before we get to a place where my mum or dad could use ChatGPT.”

Plugging a generative AI into, say, a marketing department might mean you can churn out some mostly legible social posts or passable corporate blog drafts at pace – harmless enough, unless you ask the poor gits who used to be employed to do it. For those with something more dastardly in mind than boosting Twitter engagement, however, there could be some other applications.

“When we look at it from the cybersecurity lens I think we’ve already started to see some of the value that generative AI provides,” adds Narang. “A lot of phishing emails and profiles on dating apps and other platforms, they often are littered with poor English or grammatical issues. Leveraging tools like ChatGPT, cybercriminals can [make it] hard to differentiate between a genuine email from, say, your banking institution, versus one that we would have seen in the past where you can spot some of the glaring issues – because now they can provide you with a perfect template.

“You can just ask it the question: ‘generate me an email from this banking institution saying this’, and it’ll do it. It’ll give you the response, and then all you have to do is just incorporate that into your email. Similarly, on the dating applications, we’ve seen profiles where they have really poor English, or when you’re engaging with a user you can kind of tell that English is not their first language… so they can essentially have two screens open, working through the dating app or on their phone, and then using ChatGPT to help them respond.”

“So it’s definitely up-levelling these scammers’ and cyber criminals’ ability to dupe users. I think the concern that we have most is when will we see the first truly, purely ChatGPT-created malicious software that actually works effectively? We’ve seen some talk of that, people have tried to get it to develop some malware, but it’s not the most robust, it’s not the most well defined – but it’s only a matter of time before we get there. And then the obvious question is how is that going to affect a lot of these cyber criminals… that’s going to be hard to tell, we just don’t know yet.”

‘ChatGPT rules everything around us’

OpenAI may have the current champion of generative AI on its books, but tech giants like Google and Microsoft are making big plays in the area, and have the cash to move mountains if they want to. In this Wacky Races-style sprint to get some sort of product under their corporate umbrellas, we asked Narang if it is feasible that any of these tech firms could be persuaded to slow down a bit and have a think about where all this is going, or if the genie is well and truly out of the bottle.

“Whatever analogy you want to use – Pandora’s box, the toothpaste is out of the tube… at this point, you can’t put it back. It’s too late now,” he says. “I’m not sure if you’re a fan of the Wu-Tang Clan – but ChatGPT rules everything around us, essentially. Generative AI is here to stay and I don’t see it going away anytime soon. It’s become like a relay race essentially – who’s going to get there first?

“Right now Microsoft has an advantage with their partnership with OpenAI, but Google also has very smart individuals working for them as well. We’ve even seen Adobe launch something called Firefly, which is generative AI in Adobe platforms. I think I’ve seen some tools where you can basically feed it video footage and it’ll know where to cut the video to make the transition seamless. It’s crazy how fast this is developing.”

‘The future looks bright, but it also looks dim for some’ 

In another nascent tech field, quantum computing, you get a sense that the potential pitfalls of it getting into the wrong hands (it could gobble through any current encryption, basically) are treated seriously, while the upsides might include things like a cure for cancer. Aside from the fact there doesn’t seem to be the same emphasis on mitigating threats from those creating this stuff, it’s sometimes hard to see the upsides as very good either.

Used as intended, generative AI would appear to have the capacity to dissolve most jobs it comes into contact with – so aside from making some big tech firms richer and saving some other firms the annoyance of having humans on the payroll, the question presents itself: is there anything to see as positive here on an economic or societal level?

“I agree, it’s probably been one of my biggest concerns over the last several years, thinking about machine learning, artificial intelligence, thinking about how that’ll affect the workforce and how many will be displaced as a result,” says Narang. “We could probably go down the rabbit hole of things like Universal Basic Income, which I know some countries and some cities and states in the United States have been looking into – because the writing’s on the wall, ultimately. If I’m not mistaken, there was a report from the US National Intelligence Agency kind of talking about the next 20 years, and I believe they talk about how AI will definitely displace quite a few jobs.

“What kind of effect that will have on individuals in the economy and is it sustainable… the future looks bright, but it also looks dim for some. While we are excited about a lot of the possibilities that these technologies provide, we also have to take into consideration the societal impact that will have.”

What can be done about this, if anything? Some sort of global regulation or even mild collaboration on how we navigate this new technology doesn’t seem very likely given the current geopolitical tensions. But Narang says there are levers that can be pulled.

“In some instances, certain platforms could potentially be banned by governments. If memory serves me, I think Italy has banned ChatGPT recently. Certain applications could be regulated, governments are the ones that can make these decisions. But the problem is that there’s a lot of tools out there that are open source, they’re offshoots… we are so fixated and focused on ChatGPT because it’s the de facto mainstream version that everybody knows. There are other alternatives out there, you know – large language models do exist in other forms.

“So I think if there’s a localised version that users can download on their computers, there’s not a whole lot that can be done to stop the usage, granted there’s costs involved with that too. The cat is out of the bag, it’s too late now, but I think we have to try to grapple with the fact that it’s here and see the potential value that it provides, but also look out for some of the challenges that can emerge as these technologies grow, whether it’s the societal impact, the economic impact, and the security impact that comes from that.”

Potential good and evil 

As to what consumers and businesses can look forward to in a future with an AI-tooled-up cyber criminal community, Narang says:

“When we talk about how ChatGPT, generative AI, will be able to supercharge cyber criminal activity, we think okay, it’s going to develop the malware, it’s going to create these templates. At the end of the day, all the things that we do today to protect users and organisations, the basic things that we do from a cyber hygiene perspective, that’s not going to change. That’s going to still be required. It’s sort of like, if you’re facing a small group of attackers, and you say okay, these are the groups of attackers I have to focus my energy on – what these tools provide is to just enable more attackers. You’re facing more of a challenge as you have more… it’s going to be more quantity over quality.

“Basically you’re going to be inundated with more attacks. The attacks will grow in volume as a result of these technologies, because it will enable cyber criminals to have access where they may not have had it in the past… It’s very economical. If you think about it, if you go out and you want to purchase malware kits and things like that, you have to spend maybe a few hundred or thousands of dollars, but if you want to use ChatGPT or GPT-4 it’s $20 a month. And these cybercriminals are making hundreds of dollars a day or more.”

Which begs the question, can and should firms be held accountable for what generative AI tools end up getting used for?

“We have this issue today too,” says Narang. “If you think about it, you and I could go on to GitHub today and procure some type of exploit code for vulnerabilities that persist across organisational environments. You can go and download this exploit code… it doesn’t mean that it’s GitHub’s fault that this code exists, someone created it. Usually it’s created for good purposes, to help audit or determine if an organisation is affected. But that good tool, that good piece of code is being leveraged in a nefarious way. So we already kind of face these issues today with other things – this is just another challenge we have to face as a result of a tool that’s been created for potential good, being leveraged for potential evil.”



About the Author(s)

Andrew Wooden

Andrew joins on the back of an extensive career in tech journalism and content strategy.

