ChatGPT Threat

February 7, 2023

ChatGPT’s human-like abilities have taken the internet by storm, but they have also put a number of industries on edge: New York City schools banned ChatGPT over concerns that it could be used to cheat, copywriters are already being replaced, and reports claim Google is so concerned about ChatGPT’s capabilities that it has issued a “code red” over the survival of its search business.

The cybersecurity industry, long wary of the implications of modern AI, appears to be taking notice as well, with concerns that ChatGPT could be abused by hackers with limited resources and little technical knowledge.

Is ChatGPT a cybersecurity threat?

ChatGPT, a freely available language-generating AI model, cuts both ways for security: while AI has the potential to improve the efficiency of IT and security teams, it also lowers the bar for threat actors looking to develop malware.

Threat actors are already experimenting with ChatGPT to create malware. While ChatGPT’s code-writing ability has so far yielded mixed results, generative AI that specializes in code development can accelerate malware development. Eventually it will help attackers exploit vulnerabilities faster: within hours of disclosure rather than days.
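To make the mechanism concrete, here is a minimal sketch of what a code-generation request looks like, assuming the openai Python client (v1.x) and an illustrative model name; the prompt is deliberately benign, since the point is how little expertise such a request takes, not what a malicious prompt would say:

```python
# Minimal sketch: asking a code-generation model for a benign utility.
# Assumes the openai Python client (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Write a Python function that parses an auth log "
                    "and counts failed SSH logins per source IP."},
    ],
)

print(response.choices[0].message.content)  # generated code, ready to adapt
```

A single natural-language sentence in, working code out: that is the entire skill requirement, which is exactly why the barrier to entry drops so sharply.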

ChatGPT will significantly lower the skill barrier to entry for threat actors. Today, the sophistication of a threat roughly tracks the sophistication of the threat actor, but ChatGPT has opened the malware space to a new cohort of rookie threat actors who will be able to punch far above their weight.

This is concerning because it not only increases the volume of potential threats and the number of potential threat actors, but also makes it more likely that people with little to no idea what they are doing will join the fray. Even by the standards of the malware space, that level of recklessness is unprecedented.

On the other hand, AI has the potential to improve the efficiency and effectiveness of IT and security teams by enabling automated or semi-automated vulnerability detection and remediation, as well as risk-based prioritization. That makes AI that can analyze data very promising for IT and security teams with limited resources; however, such tooling does not yet exist in mature form, and when it does, it may be difficult to deploy because of the training required for it to learn what “normal” looks like in a specific environment.
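As a sketch of what risk-based prioritization could look like in practice (the field names, weights, and CVE identifiers below are illustrative assumptions, not any existing product’s API):

```python
# Hypothetical sketch of risk-based vulnerability prioritization.
# Field names and weights are illustrative, not a real tool's API.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # base severity, 0.0-10.0
    internet_facing: bool   # is the affected asset exposed?
    asset_criticality: int  # 1 (lab box) .. 5 (crown jewels)
    exploit_public: bool    # known public exploit code?

def risk_score(f: Finding) -> float:
    """Combine raw severity with business context instead of CVSS alone."""
    score = f.cvss * (f.asset_criticality / 5)
    if f.internet_facing:
        score *= 1.5
    if f.exploit_public:
        score *= 2.0  # exploit availability dominates, per the threat model above
    return score

findings = [
    Finding("CVE-2023-0001", 9.8, False, 2, False),
    Finding("CVE-2023-0002", 7.5, True, 5, True),
]

# The lower-CVSS, internet-facing finding on a critical asset ranks first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.cve_id}: {risk_score(f):.1f}")
```

The design point is that exploit availability and asset context outweigh raw CVSS, which matters more as AI shrinks the window between disclosure and exploitation.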

The industry must now focus on developing AI that helps defenders analyze and interpret massive amounts of data. Until AI tools advance significantly in their ability to understand context, attackers will keep the advantage, because today’s tools already meet their needs.
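To illustrate the “learning normal” cost mentioned above, here is a hedged sketch of baseline anomaly detection on invented login telemetry, using scikit-learn’s IsolationForest; the features and data are fabricated for demonstration, and any real deployment would need per-environment training:

```python
# Sketch of baseline anomaly detection on login telemetry.
# Data and features are invented; real deployments train per environment.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" history: [hour_of_day, megabytes_transferred] per login session.
baseline = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around 10:00
    rng.normal(50, 15, 500),  # typical transfer sizes
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)  # this fitting step is the "learning normal" cost

# New events: one routine session, one 3 a.m. bulk transfer.
events = np.array([[11.0, 55.0], [3.0, 900.0]])
print(model.predict(events))  # 1 = consistent with baseline, -1 = flagged
```

The fit step is the crux: the same model shipped to two environments flags different things, which is why this class of tooling is hard to deliver off the shelf.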
