9 Min Read

The ethics of AI: the cyber risks posed by ChatGPT


By Christopher Air, Shanaka Wijetunge & Alexander Dimitrov | Published 28 February 2023

Overview

AI language models (a type of artificial intelligence designed to process and understand human language) like ChatGPT have the potential to revolutionise communication and information exchange. ChatGPT, short for ‘Chat Generative Pre-trained Transformer’, is a language-based chatbot that can generate sophisticated, human-like text and can be used for a number of purposes. Indeed, amongst other things, it has been used to pass exams, write letters and even compose song lyrics. It is unsurprising that tech giants like Google and Amazon have taken a keen interest in AI chatbot technology, with Microsoft integrating it into its Bing search engine as a separate chatbot function. AI language models have enabled significant advances in natural language processing and have become increasingly important in a wide range of industries, from healthcare and finance to education and entertainment.

Whilst the technology is not perfect (it has been known to provide inaccurate information and offensive responses to queries), it is important to acknowledge the current and future cyber risks posed by this emerging technology, particularly as AI chatbot capabilities will almost certainly improve. Our team decided to explore ChatGPT’s current capabilities first-hand.

Phishing

Phishing attacks often use social engineering techniques to trick victims into clicking on a link or downloading an attachment, which can install malware or lead to a fake login page where victims are asked to provide their personal information. Armed with this information, threat actors can gain access to a victim organisation’s IT estate, move freely through it, access sensitive data and deploy malware or ransomware. One of the key giveaways that a communication is a phishing attempt is the presence of typos and unnatural language in the body of the email or text.

Whilst, in theory, ChatGPT, like other AI language models, is intended to reject inappropriate user requests, hackers could potentially use it to overcome the language deficiencies in their phishing content by generating highly convincing phishing messages.

Our team put this to the test and asked the platform to draft a convincing phishing message targeting a law firm. While it initially refused to do so, citing its ethical limitations (see Figure 1), after some persuasion we were able to obtain the body of a believable email message (see Figure 2).

 

Figure 1: ChatGPT’s ethics policy preventing us from generating a phishing email

 

 

Figure 2: Our team successfully circumventing ChatGPT’s ethics policy and engaging it to draft a phishing email (content blurred)

Malware generation

ChatGPT has been praised by programmers worldwide for its ability to generate software code in various programming languages, an ability that could revolutionise the software industry. The same capability, however, poses the risk of threat actors using the platform to create malicious software of all kinds: viruses, worms, Trojans, ransomware and wipers, to name a few.

As reported by Techradar in late 2022, cybersecurity researchers were able to use the platform to create a “weaponised” Excel file with nothing more than a simple command to the chatbot. However, our team’s much less subtle attempt to convince it to replicate the source code of the world’s first computer virus, the Creeper, was unsuccessful (see Figure 3).

 

Figure 3: ChatGPT’s ethics policy preventing us from obtaining the source code of a virus

Deepfakes

Finally, there is a risk of ChatGPT being used in the creation of deepfake content. Deepfakes are videos or images that have been altered so that they appear to depict a real person while, in reality, being entirely or partially computer-generated. While AI language models produce text rather than images or video, they could be used to support the creation of false media content, and several other machine learning models are already capable of generating such content.

For more information on the commercial risks arising from the malicious use of deepfakes, see some of our latest articles on the subject.

Conclusion

We consider that this AI technology is intrinsically a good thing, which can be used for a wide range of useful purposes. However, it does not come without risks, and it is important that steps are taken to protect the technology from being exploited. To mitigate the cyber risks arising from the use of AI language models, it is important to ensure that they are designed with robust security features and incorporate ethical policies that cannot be circumvented by end users. It is also important that such platforms are monitored for suspicious activity and that users remain aware of the technology’s associated risks.
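
By way of illustration, the sketch below shows in simplified Python the general shape of the server-side guardrail described above: every user prompt and every generated reply is checked against the policy before it is returned, and each request is logged so that suspicious activity can be reviewed. This is a minimal sketch of ours, not OpenAI’s actual implementation; the DISALLOWED_TOPICS list, the violates_policy classifier and the audit_log helper are hypothetical stand-ins, and a real deployment would rely on a trained moderation model rather than keyword matching.

# Minimal sketch (ours, not OpenAI's) of a server-side guardrail: the policy
# check wraps the model itself, so it cannot simply be switched off or talked
# around by the end user. All names below are hypothetical stand-ins.

DISALLOWED_TOPICS = ("phishing", "malware", "ransomware")  # hypothetical policy list

def violates_policy(text: str) -> bool:
    """Stand-in classifier; a real system would use a trained moderation model."""
    lowered = text.lower()
    return any(topic in lowered for topic in DISALLOWED_TOPICS)

def audit_log(prompt: str, blocked: bool) -> None:
    """Record every request so unusual patterns of misuse can be reviewed."""
    print(f"[audit] blocked={blocked} prompt={prompt[:60]!r}")

def moderated_reply(prompt: str, generate) -> str:
    """Check both the user's prompt and the model's draft reply before returning it."""
    if violates_policy(prompt):
        audit_log(prompt, blocked=True)
        return "This request conflicts with our usage policy."
    reply = generate(prompt)  # call the underlying language model
    if violates_policy(reply):
        audit_log(prompt, blocked=True)
        return "This request conflicts with our usage policy."
    audit_log(prompt, blocked=False)
    return reply

# Example, with a trivial echo function standing in for the real model:
print(moderated_reply("Write a phishing email targeting a law firm",
                      lambda p: f"(model output for: {p})"))

The important design point, as the conclusion notes, is that the check sits on the provider’s side of the interface: screening both the request and the response, and keeping an audit trail, is what makes the ethical policy hard for an end user to circumvent.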
