This year alone, there have been over 950 major cyber incidents globally, with almost 5.5 billion data records breached. The global average cost of each data breach is $4.45 million – a figure that has grown by 15% in the last three years. Businesses certainly still need to contend with existing threats, but as we look towards 2024, a growing set of new cyber challenges is emerging.

There are many known threats: malware, network overloading (such as DDoS attacks), the exploitation of technical vulnerabilities and, the most common of all, phishing; over 3.4 billion phishing emails are sent daily, with 80% of companies noting a growing rate of incidents since last year. While these threats have had operational, financial, and often reputational impacts for several years, heading into a new year, organisations must also pay heightened attention to newer forms of malicious activity – several of which have been enabled by AI. The benefits and opportunities AI presents are countless, but the same technology has been weaponised by criminal actors. AI can be used to crack individuals’ passwords, extort victims with advanced ransomware and cause widespread network disruption with botnets, to name a few key threats. It can also be used to compromise other AI systems, manipulating models’ outputs for criminal gain. But perhaps the most recognised AI-driven threat comes in the form of deepfakes and disinformation, whereby genuine-looking content is produced to gain access to sensitive information or to spread falsehoods.

Organisations must develop a greater awareness and understanding of AI, both so that its positive potential can be realised through innovation and collaboration, and so that individuals and businesses are better prepared to combat the threat and impact of its harmful use.

Read on for insights from our cyber and AI experts, Partners Rory Farquharson and Steve Steinberg, on how to defend against both known and growing threats in the cyber space this coming year.

How are organisations using AI to solve for these challenges today?

Today, multi-factor authentication is commonly used to mitigate data breaches, anti-virus and anti-malware software is installed on most office computers, and rate limiting is used to defend against network overloading. Targeted education and simulations have also helped employees recognise and avoid phishing attempts.

These same practices and tools can be used to defend against emerging threats, including those created or enhanced by AI – although there is a limit to how much protection they can provide. AI, however, is both part of the problem and part of the solution, as AI-enhanced tools can and will be used to overcome AI-driven threats. In fact, there are a growing number of AI-backed anti-malware and anti-ransomware solutions (e.g., Halcyon), as well as automated vulnerability patching tools (e.g., NinjaOne Patch Management). We’re also seeing organisations frequently incorporate AI-supported Extended Detection and Response (XDR) tools into their digital architecture; these scan companies’ digital infrastructure to identify, assess and resolve cyber threats.

AI-based monitoring identifies divergences from established patterns of behaviour; depending on the extent of the suspicious activity, tools like Microsoft Defender can adjust a system’s accessibility. Not only can AI be used to protect systems; it can also judge the level of protection a system requires in a particular scenario. With AI-powered security in place, the time taken to identify, react to, and resolve issues is reduced, as is the need for human intervention, not only offering more sophisticated layers of protection but also boosting the efficiency of security teams.
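To make the idea of "identifying divergences from patterns of behaviour" concrete, here is a minimal, purely illustrative sketch of the statistical principle underneath such tools: flag an observation that sits far outside an account's historical baseline. Real XDR products use far richer models; the function name, threshold and login-count data below are all hypothetical.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from `history` by more than
    `threshold` standard deviations (a simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Hypothetical baseline: daily login counts for one account
baseline = [41, 38, 44, 40, 39, 42, 37]
print(is_anomalous(baseline, 43))   # within the normal range: False
print(is_anomalous(baseline, 250))  # sudden spike: True
```

In practice, the response to a flagged divergence would be graduated – for example, requiring re-authentication for mild anomalies and restricting access for severe ones – which is the "adjusting accessibility" behaviour described above.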

What can organisations do to bolster their defences against AI-enabled threats?

Improved education around AI will help build stronger trust in the technology, control over its use, and vigilance towards its challenges. The phishing awareness training that nearly all employees are now familiar with must evolve to cover this next level of threats. Beyond education, there are several AI-related opportunities that businesses can exploit to improve the robustness of their cyber security in 2024.

Identifying the next threat: It might sound obvious, but businesses need to think ahead about what their next cyber security threat may be. Generative AI’s analytical and predictive capabilities can be leveraged to anticipate the new (and perhaps unique) challenges an organisation may face. Recognising these future risks will enable businesses to update and prepare their defensive measures accordingly.

Reinforcing AI capabilities: With the increased use of AI models comes the risk that those models will be compromised by malicious actors. While models are typically trained and deployed in isolation, multiple models can instead be paired together to sense-check each other’s outputs. This would enable irregularities to be flagged and resolved promptly, potentially without the need for human intervention.
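The pairing idea above can be sketched very simply: run the same input through several independently trained models and flag the result whenever their outputs disagree beyond a tolerance. The scorer functions and tolerance value below are hypothetical stand-ins, not a production design.

```python
def cross_check(models, sample, tolerance=0.1):
    """Run `sample` through several models and flag the result if
    their scores disagree beyond `tolerance` - a possible sign that
    one model has been tampered with or poisoned."""
    scores = [m(sample) for m in models]
    spread = max(scores) - min(scores)
    return {"scores": scores, "flagged": spread > tolerance}

# Hypothetical scorers standing in for independently trained models
model_a = lambda x: 0.92
model_b = lambda x: 0.89
model_c = lambda x: 0.15  # outlier: suggests compromise or drift

print(cross_check([model_a, model_b], "input")["flagged"])           # False
print(cross_check([model_a, model_b, model_c], "input")["flagged"])  # True
```

The design choice here is redundancy: an attacker who poisons one model is unlikely to poison all of them identically, so disagreement itself becomes the alarm signal.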

Labelling genuine content: While better education may help employees and individuals distinguish between genuine and artificial (e.g., deepfake) content, the risk that criminals use fake material to evade security systems will remain. Organisations should take measures to label genuine content as far as possible, marking authentic copy with verification marks or including unique tags in its metadata. These short-term solutions can help keep companies and their people safe. Technology designed to protect against deepfakes, such as Intel’s FakeCatcher, is also emerging, but there remains room for further advancement in this space (which we will almost certainly see emerging from Silicon Valley in the coming years).
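One way to implement "unique tags" for genuine content is a keyed hash: the organisation tags authentic material with an HMAC that only it can produce, and anyone holding the key can later verify the tag. This is a minimal sketch using Python's standard library; the key handling shown is a placeholder (a real deployment would keep the key in a secrets manager).

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: stored in a vault

def tag_content(content: bytes) -> str:
    """Produce a verification tag for genuine content using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check a tag in constant time; a mismatch suggests tampering."""
    return hmac.compare_digest(tag_content(content), tag)

original = b"Official press release: Q4 results"
tag = tag_content(original)
print(verify_content(original, tag))                   # True
print(verify_content(b"Doctored press release", tag))  # False
```

A tag like this only proves the content came from the key holder; it does not, on its own, tell an outside viewer anything, which is why public-facing schemes pair such tags with published verification services or signed metadata standards.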

Introducing a security framework: Businesses should also introduce consistent operational and technical processes to support their cyber defences. These should include a dedicated AI-security framework, such as the one recently published by ENISA, which outlines critical considerations including the tools and roles that need to be introduced, and the procedures and governance that need to be mobilised. Designing a clear, bespoke framework will enable organisations to determine their readiness to defend against AI-driven (and other emerging) cyber threats.

As we look to 2024, it’s critical to have awareness of both the existing and future threats your business faces when it comes to cyber, and to put solid foundations in place to guard against these. 

If you want to protect your business against new and evolving cyber threats, connect with Rory Farquharson, Steve Steinberg or our team, today.

Meet our experts