If you’ve been hanging around underground tech forums lately, you may have seen ads for a new program called WormGPT.
The program is an AI-based tool that cybercriminals use to automate the creation of personalized phishing emails; although it sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI.
ChatGPT launched in November 2022, and since then generative AI has taken the world by storm. But few have considered how its sudden rise will shape the future of cybersecurity.
In 2024, generative AI is poised to facilitate new kinds of transnational and translingual cybercrime. Much of the world's cybercrime, for instance, is carried out by underemployed men from countries with underdeveloped technological economies. Because English is not the primary language in these countries, hackers have struggled to defraud targets in English-speaking economies; most English speakers can quickly spot phishing emails by their ungrammatical and unidiomatic language.
But generative AI will change that. Cybercriminals around the world can now use chatbots like WormGPT to produce fluent, personalized phishing emails. And by learning from scam material already on the web, chatbots can craft data-driven scams that are especially convincing and effective.
In 2024, generative AI will also facilitate biometric hacking. Until now, biometric authentication methods (fingerprints, facial recognition, voice recognition) have been difficult and expensive to spoof; it is not easy to fake a fingerprint, a face, or a voice.
But AI has made deepfaking much cheaper. Can't imitate your target's voice? Tell a chatbot to do it for you.
And what will happen when hackers start targeting chatbots themselves? Generative AI is just that: generative. It creates things that weren't there before, and that basic scheme gives hackers a way to inject malware into the objects chatbots generate. In 2024, anyone who uses AI to write code will need to make sure the output was not created, or tampered with, by a hacker.
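What might that vetting look like in practice? Here is a minimal sketch of one illustrative safeguard, not a complete defense: before installing a dependency an AI assistant suggests, pull its metadata from the package registry and flag names with very little release history, since freshly registered packages are a common vehicle for planting malware in AI-assisted workflows. The PyPI JSON endpoint below is real; the helper function and the five-release threshold are assumptions made up for this example.

```python
# Hypothetical vetting step: look up a suggested dependency on PyPI
# and flag packages with a thin release history for manual review.
import json
import urllib.error
import urllib.request

def release_count(name: str) -> int:
    """Number of releases `name` has on PyPI (0 if it doesn't exist)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return 0  # 404: no such package
    return len(data.get("releases", {}))

# Treat anything with little history as needing a human look.
for pkg in ["requests", "some-name-a-chatbot-made-up"]:
    n = release_count(pkg)
    verdict = "ok" if n >= 5 else "REVIEW BY HAND"
    print(f"{pkg}: {n} releases -> {verdict}")
```

A heuristic like this only narrows the problem; reviewing what AI-written code actually does remains the developer's job.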
Other bad actors will also begin to hijack chatbots in 2024. A central characteristic of the new wave of generative AI is its "unexplainability." Algorithms trained through machine learning can return surprising and unpredictable answers to our questions: even though humans designed the algorithms, we don't know how they work.
It seems natural, then, that future chatbots will act as oracles, fielding difficult ethical and religious questions. At Jesus-ai.com, for example, you can pose questions to an artificially intelligent Jesus. It's not hard to imagine programs like this being built in bad faith: an app modeled on Krishna, for example, has already advised killing infidels and supporting the ruling party in India. What's to stop scammers from demanding tithes or promoting criminal activity? Or, as one chatbot has already done, telling users to leave their spouses?
All security tools are dual-use: they can serve offense or defense, so in 2024 we should expect AI to be deployed on both sides. Hackers can use AI to fool facial recognition systems, but developers can use AI to make those systems more secure. In fact, machine learning has been used to protect digital systems for more than a decade. Before we get too worried about new AI attacks, we should remember that there will be new AI defenses to match.
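To make the defensive side concrete, here is a minimal sketch of the kind of machine-learning protection the passage alludes to: training an anomaly detector on normal behavior and flagging whatever falls outside it. The login features and numbers below are invented for illustration; the model is scikit-learn's Isolation Forest.

```python
# A minimal sketch of ML-based defense: flag anomalous logins with an
# Isolation Forest trained on (synthetic) examples of normal behavior.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features per login: hour of day, bytes transferred,
# and number of failed attempts before success.
normal_logins = np.column_stack([
    rng.normal(14, 3, 500),       # activity clusters around business hours
    rng.normal(2_000, 500, 500),  # typical transfer sizes, in bytes
    rng.poisson(0.2, 500),        # failed attempts are rare
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login moving ~50 MB after nine failed attempts should stand out.
suspicious = np.array([[3, 50_000_000, 9]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 is normal
```

The same pattern, learning what "normal" looks like and flagging outliers, underlies much of the anomaly-based intrusion detection that has protected digital systems over the past decade.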