๐€๐ซ๐ญ๐ข๐Ÿ๐ข๐œ๐ข๐š๐ฅ ๐ˆ๐ง๐ญ๐ž๐ฅ๐ฅ๐ข๐ ๐ž๐ง๐œ๐ž ๐ข๐ฌ ๐ฉ๐ฅ๐š๐ฒ๐ข๐ง๐  ๐š ๐›๐ข๐ ๐ ๐ž๐ซ ๐ซ๐จ๐ฅ๐ž ๐ข๐ง ๐œ๐ฒ๐›๐ž๐ซ๐ฌ๐ž๐œ๐ฎ๐ซ๐ข๐ญ๐ฒ

November 25, 2022

A growing number of attacks, including distributed denial-of-service (DDoS) attacks and data breaches, many of them very costly for the affected enterprises, are driving demand for more sophisticated security solutions.

To detect and stop attacks more effectively, many businesses have been forced to sharpen their focus on cybersecurity and expand their use of AI-enabled technologies.

According to the Acumen analysis, market growth is expected to be fueled by trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices. The expanding use of cloud-based security services may also open up new applications of AI for cybersecurity.

AI’s security boost

Antivirus and anti-malware software, data loss prevention, fraud detection, anti-fraud software, identity and access management, intrusion detection/prevention systems, and risk and compliance management are some of the product categories that use AI.

Up until now, AI has been rather underutilized in cybersecurity. According to Brian Finch, co-leader of the cybersecurity, data protection, and privacy practice at law firm Pillsbury Law, “companies aren’t going out and giving over their cybersecurity programs to AI right now.” That does not mean AI is not being deployed. Companies are starting to use it, but sparingly, and generally in products such as email filters and malware detection systems that use AI in some capacity.
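
To make that “some capacity” concrete, here is a minimal, hypothetical sketch of how an AI-assisted email filter of the kind Finch mentions can work: a classifier learns from labeled messages and scores new mail. It is not any vendor’s actual product; the training examples, labels, and library choice (scikit-learn, assumed available) are purely illustrative.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy, hand-labeled training data; a real deployment would learn from large corpora.
emails = [
    "Your invoice is attached, please review before Friday",
    "Meeting moved to 3pm, see the updated agenda",
    "URGENT: verify your account now or it will be suspended",
    "You have won a prize, click this link to claim it",
]
labels = ["legitimate", "legitimate", "phishing", "phishing"]

# Bag-of-words features feeding a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Score an incoming message; anything classified as "phishing" would be quarantined.
incoming = "Please verify your account immediately by clicking this link"
print(model.predict([incoming])[0])            # expected: "phishing"
print(model.predict_proba([incoming]).max())   # the model's confidence in that label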

The most intriguing development, according to Finch, is the growing use of AI in behavioral analytics tools. “By that, I mean software that analyzes data to identify hacker behavior and look for patterns in their timing, attack strategy, and movement inside networks. For defenders, gathering such information might be quite beneficial.”
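
The behavioral analytics Finch describes can be illustrated with a small, hedged sketch: an anomaly detector is fit on what “normal” account activity looks like and flags sessions that deviate, such as an off-hours login that touches many hosts and moves an unusual amount of data. The feature names, numbers, and thresholds below are hypothetical, not taken from any specific tool.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row is one session: [login_hour, outbound_data_mb, distinct_hosts_touched]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins clustered around business hours
    rng.normal(5, 2, 500),    # modest outbound data volume
    rng.poisson(3, 500),      # a handful of internal hosts per session
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A 3 a.m. session moving lots of data across many hosts resembles lateral movement.
suspicious = np.array([[3, 250, 40]])
print(detector.predict(suspicious))            # -1 means the session is flagged as anomalous
print(detector.decision_function(suspicious))  # lower score means more anomalous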

The research firm Gartner recently surveyed close to 50 security companies and identified a few trends around AI usage, according to Mark Driver, a research vice president at the firm.

Security analysts struggle to separate the signal from the noise in very large data sets, so respondents overwhelmingly said the initial purpose of AI was to “eliminate false positives,” Driver said. AI, being far more precise, can cut that noise down to a manageable size. As a result, analysts are able to respond to cyberattacks more quickly and intelligently.
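
As an illustration of the false-positive use case Driver describes, the sketch below trains a simple model on hypothetical analyst-labeled alerts and suppresses new alerts that score below a threshold, so that only a manageable queue reaches humans. All feature names, labels, and thresholds are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical alert features: [severity 1-10, events per minute, asset criticality 1-5].
X = rng.uniform(low=[1, 0, 1], high=[10, 100, 5], size=(1000, 3))
# Invented analyst verdicts: high-severity alerts on critical assets tend to be real.
y = ((X[:, 0] > 6) & (X[:, 2] > 3)).astype(int)

triage = LogisticRegression(max_iter=1000).fit(X, y)

# Score a batch of new alerts and suppress the low-probability ones.
new_alerts = rng.uniform(low=[1, 0, 1], high=[10, 100, 5], size=(5, 3))
for alert, score in zip(new_alerts, triage.predict_proba(new_alerts)[:, 1]):
    verdict = "escalate to analyst" if score >= 0.5 else "suppress as likely false positive"
    print(f"severity={alert[0]:.1f} criticality={alert[2]:.1f} score={score:.2f} -> {verdict}")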

AI is generally used first to improve attack detection, then to prioritize actions based on actual risk. It also enables automated or partially automated reactions to attacks and, lastly, more precise modeling to anticipate future attacks. When dealing with cyber threats, Driver said, “All of this doesn’t necessarily cut the analysts out of the picture, but it does make the analysts’ job more agile and accurate.”
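
A rough sketch of the prioritization and partially automated response Driver outlines might look like the following: each alert gets a risk value from a detector’s score weighted by the affected asset’s business value, the queue is sorted by that risk, and only the clearest high-risk cases trigger an automated containment step while everything else goes to an analyst. The class, scores, and threshold are illustrative assumptions, not any specific product’s logic.

from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    model_score: float   # probability the alert is a real attack, from a detector
    asset_value: int     # business criticality of the affected asset, 1-5

def risk(alert: Alert) -> float:
    return alert.model_score * alert.asset_value

def handle(alerts: list[Alert], auto_contain_threshold: float = 4.0) -> None:
    # Work through alerts in descending risk order.
    for alert in sorted(alerts, key=risk, reverse=True):
        if risk(alert) >= auto_contain_threshold:
            print(f"AUTO-ISOLATE {alert.host} (risk={risk(alert):.1f})")  # automated reaction
        else:
            print(f"queue for analyst review: {alert.host} (risk={risk(alert):.1f})")

handle([
    Alert("db-prod-01", model_score=0.95, asset_value=5),
    Alert("kiosk-lobby", model_score=0.90, asset_value=1),
    Alert("hr-laptop-17", model_score=0.40, asset_value=3),
])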

๐€๐๐๐ข๐ง๐  ๐ญ๐จ ๐œ๐ฒ๐›๐ž๐ซ ๐ญ๐ก๐ซ๐ž๐š๐ญ๐ฌ

On the other hand, malicious individuals can potentially benefit from AI in a number of ways. For example, AI can be used to find patterns in computer systems that indicate flaws in software or security protocols, allowing hackers to exploit those newly discovered gaps.

Combined with stolen personal information or open-source data such as social media posts, AI can also help cybercriminals generate large volumes of phishing emails designed to spread malware or harvest valuable data.

Security professionals have observed that AI-generated phishing emails actually achieve higher open rates than manually crafted ones, misleading potential victims into clicking on them and setting attacks in motion. AI can also be used to create malware that constantly evolves so that it can evade detection by automated defensive tools.

Attackers may be able to slip past static defenses such as firewalls and perimeter detection systems by using malware signatures that are constantly changing. Similarly, AI-powered malware can lurk inside a system, gathering information and watching user behavior until it is ready to launch the next phase of an attack or exfiltrate the data it has collected, with minimal risk of being discovered. This is partly why businesses are moving toward a “zero trust” model, in which defenses are set up to constantly challenge and inspect network traffic and applications to verify that they are not harmful.
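
The zero-trust idea can be sketched in a few lines: no request is trusted because of where it comes from; every request is re-evaluated against identity, device posture, and a behavioral anomaly score before it is allowed. The fields and the threshold below are hypothetical simplifications of what real zero-trust policy engines evaluate.

from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool   # e.g., patched, disk-encrypted, endpoint agent running
    anomaly_score: float     # 0.0 (normal) to 1.0 (highly unusual), from behavioral analytics

def allow(req: Request, max_anomaly: float = 0.7) -> bool:
    # Every condition must hold on every request; there is no trusted "inside".
    return (
        req.user_authenticated
        and req.mfa_passed
        and req.device_compliant
        and req.anomaly_score <= max_anomaly
    )

print(allow(Request(True, True, True, 0.1)))   # True: healthy, ordinary request
print(allow(Request(True, True, False, 0.1)))  # False: non-compliant device is challenged
print(allow(Request(True, True, True, 0.9)))   # False: unusual behavior triggers a block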

“Given the economics of cyberattacks (it is typically simpler and less expensive to launch attacks than it is to build effective defenses), I’d argue AI will be, overall, more harmful than helpful,” Finch said. “However, it should be noted that creating truly effective AI is challenging and requires a large number of people with specialized training. The best AI minds on the planet are not going to be available to common criminals.”

A cybersecurity program may have access to “huge resources from Silicon Valley and the like [to] construct some pretty excellent defenses against low-grade AI cyberattacks,” according to Finch. “When we get into AI built by hacking nation states [such as Russia and China], their AI hack systems are going to be pretty advanced, and so the defenders will typically be playing catch up against AI-powered attacks.”
