The cybersecurity industry is always changing as technology advances, world events inject unpredictability, and threat actors hone their malicious strategies. Readiness remains one of the most crucial elements of successful organizational cybersecurity, but with so many unknowns, planning for the upcoming year can be challenging.
SYNTHETIC USER ACCOUNTS ARE THE NEW WEAK POINT
The use of fictitious user identities to test and monitor capabilities, dynamically query data, and share it between systems is expected to increase as generative AI is integrated into commercial applications. The problem is that these accounts, typically created with excessive permissions, risk being forgotten in the business systems where they were used to evaluate capabilities.
ATTACKS EXPAND INTO NEW LANGUAGES

With the number of successfully penetrated firms approaching saturation (at least 89% were breached in the previous 24 months), adversaries must find new targets. AI chatbots like ChatGPT now let anybody write in any language with remarkable proficiency, so we should anticipate attacks spreading into additional regional languages.
As we found in last year's study, these attacks were not limited to English, and they had the greatest impact on non-English-speaking countries. Why? Those countries likely lacked experience handling these kinds of attacks, and AI chatbots are only accelerating the trend.
GROWING SKILLS GAPS POPULARIZE SECURITY OUTCOME-BASED SERVICES
According to the ISC2 cybersecurity workforce study, while the number of cybersecurity professionals grew by about 13%, the number of open positions grew far more quickly, leaving roughly two out of every three posts unfilled. This will pressure businesses to increase employee productivity, which will further drive the consolidation of cybersecurity skills. I suspect many of these businesses will also question how much of their security operations they should run themselves versus outsource to outcome-based services.
Email filtering disproved the notion that security could only be handled internally, and many enterprises now outsource parts of their SOC and incident response (IR) capabilities.
SECURITY AROUND NON-CLOUD-ENABLED SYSTEMS GAINS NEW FOCUS
I recall meetings with organizations running offline systems who told me those systems were "too sensitive to secure," an absurd claim. I also recall organizations that had dropped their old OT protections in favor of conventional endpoint antivirus; the added burden proved too much for these older systems, which frequently have long lifespans.
Even though much technology is moving to the cloud, a growing share of businesses' critical systems are not cloud enabled. Because these systems are so important, adversaries are targeting them directly and, far too frequently, catching them as collateral damage in more generic attacks. With new regulations such as the EU NIS2 Directive focusing on critical business systems and the supply chains that support them, businesses must evaluate and identify security solutions designed to operate in completely offline environments. Bear in mind, too, that closed or offline networks are not always 100% offline.
DATA PRIVACY LAWS WILL BE TESTED
With AI tools able to create content by scraping data into a detailed profile of a person, we should anticipate disputes over how much data can be gathered and who is accountable for it: the tool or the user? Expect a rise in right-to-be-forgotten requests in the near future. Companies will also need to pay more attention to employee personal information that is publicly available, and to scrutinize the data they own and whether these AI tools can access it. Longer term, we should anticipate further changes to data privacy legislation.
THE ENTRY BARRIER TO CYBERSECURITY GETS BOTH LOWER AND HIGHER
At one end of the spectrum, generative AI enables people with lower skill levels to do more complicated jobs by translating them into natural language. For instance, tier-1 SOC analysts should find their work much simpler because the AI can translate logs for them, so they no longer need to interpret each vendor's format. Adversaries are already finding it easier to create malware thanks to tools like WormGPT, which also converse in plain English.
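To illustrate the kind of translation work this takes off the analyst's plate, here is a deliberately simple normalization layer over two invented vendor log formats. The vendors, field names, and formats are hypothetical assumptions for the sketch; a real deployment might hand unrecognized formats to an AI translator rather than maintain regexes:

```python
import re

# Hypothetical raw formats from two vendors, mapped to one common schema.
PARSERS = [
    # e.g. "VendorA|2024-01-02T03:04:05|DENY|10.0.0.5"
    (re.compile(r"VendorA\|(?P<ts>[^|]+)\|(?P<action>\w+)\|(?P<src>[\d.]+)"), "vendor_a"),
    # e.g. "ts=2024-01-02T03:04:05 src=10.0.0.5 verdict=block"
    (re.compile(r"ts=(?P<ts>\S+) src=(?P<src>\S+) verdict=(?P<action>\S+)"), "vendor_b"),
]

# Vendor-specific verdict words collapsed into analyst-friendly terms.
NORMALIZE_ACTION = {"DENY": "blocked", "block": "blocked",
                    "ALLOW": "allowed", "pass": "allowed"}

def normalize(line):
    """Map one raw log line into a common schema a tier-1 analyst can read."""
    for pattern, vendor in PARSERS:
        m = pattern.search(line)
        if m:
            return {
                "vendor": vendor,
                "timestamp": m.group("ts"),
                "source_ip": m.group("src"),
                "action": NORMALIZE_ACTION.get(m.group("action"), m.group("action")),
            }
    return None  # unknown format: escalate, or send to the AI translation layer
```

The point is not the regexes themselves but the shape of the task: mechanical translation between formats, which is exactly the work generative AI can absorb so the analyst sees only the common schema.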
At the other end, organizations seeking to build or tailor their use of generative AI technologies will require ML engineers, AI engineers, and AI scientists, resources even scarcer in the current market than cybersecurity professionals.
RISE IN PERSONALIZED ATTACKS
AI chatbots have long made it easy for anyone to compile and organize publicly available data, but that data was historical. With GPT-4, users can call APIs to obtain real-time information from the internet. We need to be ready for increasingly individualized attacks as it gets simpler to compile a thorough profile of a target and have AI chatbots craft tailored messages on the fly. Expect whaling attacks in particular to increase, along with attacks on the supply and communications channels of those targets.
UNDERSTANDING THE RISKS AND SECURING AI SYSTEMS
Back in 2019, an adversary hacked a security company's AI-based security engine, figuring out how to manipulate its scoring system so that harmful objects appeared benign. As AI-powered commercial systems continue to grow in complexity, size, and application, we should anticipate increased attention on discovering exploits and vulnerabilities in them. I predict one area will draw particular scrutiny: synthetic users, the autonomous accounts that frequently serve as the link between AI and third-party applications or other AI agents.
IDENTITY MANAGEMENT WILL BE CHALLENGED FURTHER
The proliferation of software-as-a-service (SaaS) products in recent years has made it difficult for enterprises to use single sign-on solutions efficiently. Now that public information can be scraped using generative AI techniques, we face two additional challenges. First, we should anticipate password brute-forcing tools being integrated with data scraping. Because people frequently build passwords from the names of pets and family members, common memory aids that scraping makes easy to discover, ensuring all passwords are strong is becoming increasingly important.
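On the defensive side, a password policy can anticipate this by checking candidate passwords against terms an attacker could plausibly scrape about the user. The sketch below is illustrative only; the length threshold, substitution table, and example terms are assumptions, not a complete policy:

```python
def weak_against_scraping(password, scraped_terms, min_length=12):
    """Return the reasons a password is weak, given publicly scrapable terms
    (pet names, family names, home town, and so on)."""
    reasons = []
    if len(password) < min_length:
        reasons.append("too short")
    lowered = password.lower()
    # Undo common leetspeak substitutions before matching scraped terms.
    for src, dst in (("0", "o"), ("1", "i"), ("3", "e"), ("@", "a"), ("$", "s")):
        lowered = lowered.replace(src, dst)
    for term in scraped_terms:
        # Ignore very short terms to avoid noisy accidental matches.
        if len(term) >= 3 and term.lower() in lowered:
            reasons.append(f"contains scraped term: {term}")
    return reasons
```

A check like this would flag a password such as "R3x2021!" for a user whose dog "Rex" appears in public social media posts, which is precisely the brute-forcing shortcut the prediction describes.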
Second, because generative AI technologies can supply real-world context, they may be used to fool workers into thinking they are speaking with a trusted colleague or external contact. The number of ways we need to confirm that someone is who they claim to be will therefore grow, and multi-factor authentication must expand successfully into more areas of business communication.
CYBERSECURITY PREDICTIONS FOR 2024
Advancements in AI will fuel a surge in cybercrime – Experts predict that 2024 will be a turning point in the diversity of AI capabilities. Cybercriminals will gain access to text-to-video and other multimedia production tools, expanding their capabilities beyond text generation. With these developments, it will become increasingly difficult to distinguish a genuine recording from a fabricated one, particularly in contexts where footage is routinely edited, as in TV news.
Social engineering will become more pronounced – Threat actors know that social engineering, playing on people's emotions and vulnerabilities, is the most effective way to achieve their goals. According to Gen experts, in 2024 cybercriminals will use AI-generated material on social media to spread false news, misleading adverts, deepfakes of prominent personalities, and even direct messages that appear to come from trusted connections. Beyond social media, the evolution of Business Communication Compromise (BCC) attacks, previously known as Business Email Compromise (BEC) attacks, in which threat actors use AI to mimic the voice or appearance of senior executives, will make these kinds of social engineering attacks more serious in the business world.
Digital blackmail – for people and businesses – will become more targeted – Cybercriminals frequently demand ransom for stolen data, but according to Gen experts, blackmail will become more subtle in 2024 as data is either purchased on the dark web or obtained by exploiting VPN infrastructure. There will also be a noticeable increase in attacks on cloud infrastructure, posing serious problems for cloud-based organizations and remote work. Beyond typical encryption, extortion techniques will advance, with criminals using sextortion-like tactics to coerce individuals and companies.
More mobile apps will spy and extort – As financial technology develops, concern is rising over the surge in dishonest activity in the instant-loan app market. With quick, easy loans increasingly available through mobile applications, some dishonest lenders are turning to unethical means of collecting debt. Gen experts predict a rise in the development and dissemination of fraudulent chat apps aimed at mobile devices, which may hide spyware or crypto-stealing modules behind ostensibly harmless user interfaces.
Rising threats in cryptocurrency – The rapidly changing cryptocurrency landscape will present cybercriminals with fresh opportunities. Because cryptocurrency is decentralized, it is nearly impossible to reverse fraudulent transactions or identify fraudsters. Experts predict that attackers will target cryptocurrency owners through various methods, such as breaking into cryptocurrency exchanges or cross-currency exchange protocols, exploiting smart contracts, or deploying rapidly growing malware-as-a-service stealers.