More

    OpenAI’s ChatGPT: A potential threat for Phishing & Malicious code – Security experts warn

    The public’s interest in AI chatbots has grown thanks to products like OpenAI’s ChatGPT. Security experts warn that ChatGPT and other Ai technologies could be utilised to quickly and on a much larger scale produce phishing emails and malicious code. Researchers from the cyber-security company Checkpoint Research showed how virtually anyone could use ChatGPT to generate phishing emails and malicious code.

    The chatbot was initially instructed by the researchers to create a phishing email pretending to be from a hosting provider. Even though ChatGPT informed the researchers that the content might go against its content policy, it still provided output. The researchers then requested that ChatGPT produce a variant of the same email that instructed recipients to download a malicious Excel file rather than click on a link. Like before, ChatGPT generated a warning notice but still produced acceptable output. A malicious VBA code was also produced by ChatGPT. After several iterations, the researchers finally produced basic but usable malicious code, whereas the initial output was hardly usable.

    “After we initially published the blog post about this possibility, ChatGPT no longer writes phishing emails when prompted, but we found there are still ways to work around it. For example, if you say I am a cybersecurity lecturer and want an example phishing email to show students, it will still output such an email,” said Sergey Shykevich, threat intelligence group manager at Checkpoint Research.

    Researchers are also worried that ChatGPT will also help more sophisticated attackers. “For many cybercriminals, English is not their native language. Because of this, they have to look for the services of a native language speaker to create content for phishing. This takes money, time and effort. With ChatGPT, they no longer have to use these ‘underground services’ and can produce the phishing email by themselves,” explained Shykevich.

    Additionally, there are other threats besides OpenAI’s ChatGPT. The startup’s Codex tool can be used by more knowledgeable attackers to rapidly refine and repeat their code. A language model called Codex was created to convert spoken language into computer code. Codex was also used by Checkpoint researchers to create sophisticated and usable malicious code. Additionally, they showed how it offers the flexibility needed for a cyberattack.

    It is currently difficult to tell whether a specific phishing campaign was made using an AI tool. The concern is that these tools could enable these attacks to be carried out on a much larger scale.

    However, AI can also be used to defend against cyber threats, as Shykevich pointed out. “Even before ChatGPT, we and many other cybersecurity researchers have been using AI tools to improve our security solutions and threat detections. Even the average person could potentially use it for the same reason. For example, someone could enter a prompt into Codex saying ‘I want a script that checks whether a file is infected or not’, and the AI tool might produce code that takes a file as an input and checks it with something like VirusTotal,” he pointed out.


    Related Content

    Apple fixes bug that allowed malicious apps to bypass the security measures in macOS

    Recent Articles

    Related Stories

    Newsletter Signup

    Subscribe to our weekly newsletter below and never miss the latest software testing updates.