Building a Resilient World:
The ISAGCA Blog

Welcome to the official blog of the ISA Global Cybersecurity Alliance (ISAGCA).

This blog covers topics on automation cybersecurity such as risk assessment, compliance, educational resources, and how to leverage the ISA/IEC 62443 series of standards.

The material and information contained on this website is for general information purposes only. ISAGCA blog posts may be authored by ISA staff and guest authors from the cybersecurity community. Views and opinions expressed by a guest author are solely their own, and do not necessarily represent those of ISA. Posts made by guest authors have been subject to peer review.

Most Cybersecurity Teams Are Unprepared for AI Cyberattacks

Cybersecurity teams aren’t the only ones using artificial intelligence to their advantage — cybercriminals are using this technology to launch never-before-seen cyberattacks, catching organizations off guard. How can professionals prepare for this unknown? Here is the latest on cybersecurity threats and various ways teams can better prepare for cybercrimes. 

AI Cyber Threats Will Catch Cybersecurity Teams Off Guard

Recent research commissioned by Darktrace indicates an overwhelming majority of cybersecurity teams are unprepared to defend against artificial intelligence (AI)-powered cyber threats. The report echoes other studies documenting the trend of increasing cyberattack frequency, sophistication and severity.

In surveying nearly 2,000 information technology (IT) security professionals — ranging from those in junior positions to Chief Information Security Officers (CISOs) — Darktrace research showed 74% of IT security leaders believe their organizations are currently experiencing the effects of AI-powered cyber threats.   

Despite growing concerns that cybercriminals will leverage AI technology for malicious purposes, many organizations admit they are unprepared. While 89% of IT security teams agree AI-assisted cyber threats will substantially impact their organization by 2026, 60% report their current defenses are inadequate.

Since AI technology accelerates cyber threat development, reconnaissance and deployment, and lowers the barrier to entry for cybercrime, preventative action is increasingly pressing. If cybersecurity teams do not act quickly, the rapid evolution of the threat landscape will catch them off guard.

The Exponential Emergence of AI-Powered Cyber Threats 

As AI advances, organizations become more susceptible to cyberattacks. According to one survey, the share of organizations maintaining a minimum level of cyber resilience dropped by about 30% in 2023, leaving many more vulnerable to emerging cyber threats.

One emerging threat involves the use of AI-powered deepfakes for phishing. Threat actors only need a single audio snippet or a handful of images to recreate a person’s voice and likeness. Concerningly, they may be able to use this technology to bypass biometric access controls.  

The frequency of these AI-driven phishing attempts has grown exponentially in recent years. In fact, the share of organizations experiencing deepfake-related security incidents increased to 66% in 2022, up from 13% in 2021. These modern social engineering attempts are often used in spear-phishing and whaling campaigns.

AI-assisted ransomware attacks are another emerging threat, as cybercriminals use generative models to develop malicious code. The rapid evolution of malware poses a severe issue for understaffed cybersecurity teams.   

Another emerging trend is the deployment of distributed denial-of-service (DDoS) attacks led by AI-driven botnets. This cyber threat is particularly dangerous because it is capable of autonomous execution and can adapt to evade countermeasures.

Once cybercriminals infiltrate a network, they can leverage AI to launch trigger-based attacks at the most opportune time, enabling them to prioritize data exfiltration. The damage they can do with this approach is comparable to setting explosive charges while robbing a building: the automation capabilities of algorithms elevate their attacks.

Cybercriminals may be able to automate cybercrime-as-a-service with the help of AI within the next few years. If this prediction becomes a reality, IT security teams will be inundated with untraceable, highly sophisticated cyberattacks.

AI Cyber Threats Are Already Impacting Organizations 

Various industry professionals have claimed AI-powered cyber threats are a far-off possibility, implying the widespread and growing concern is just hype. While the worst impacts may be yet to come, that sentiment could not be further from reality: cybercriminals are already using algorithms to launch cyberattacks.

In 2020, a branch manager received a call from the director of his company's parent business requesting authorization for a $35 million transfer for an upcoming acquisition. After emails from a lawyer hired to coordinate the process appeared in his inbox, he went ahead with the transfer. He eventually discovered that fraudsters had used AI-powered deepfake technology to mimic the director's voice.

Advanced AI malware has already reached the proof-of-concept stage. One research team developed a computer worm that attacks AI-powered email assistants using an adversarial self-replicating prompt. The worm forces models to output personally identifiable information regardless of guardrails, and it can embed malicious prompts in outgoing emails to trigger a cascading infection that spreads to additional clients.

The same is true of AI-powered cyberattacks in the wild. Back in 2018, TaskRabbit, a marketplace for freelance labor, was hit by a DDoS attack controlled by an AI-driven botnet. The cybercriminals exfiltrated the Social Security and bank account numbers of 3.8 million users before the platform temporarily shut down to recover.

What Cybersecurity Teams Can Do to Strengthen Defenses 

Some industry experts suggest many AI-powered cyber threats are of no concern because they are largely conceptual, but they are mistaken. It is not a matter of if these attacks will happen, but when, and all indicators suggest it will be an issue sooner rather than later. In the meantime, security teams must strengthen their defenses.

1. Deploy a Defensive AI 

Cybersecurity teams should deploy their own AI to make their defenses more dynamic. Research shows 96% of security decision-makers believe AI-driven countermeasures are critical for defending against malicious models, making this a sound strategy.
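As a simple illustration of what "defensive AI" can mean in practice, the sketch below trains an unsupervised anomaly detector on baseline network flow features and flags traffic that deviates from that baseline. The feature set, thresholds and synthetic data are assumptions for demonstration only, not a production configuration.

```python
# Minimal sketch: unsupervised anomaly detection over network flow features.
# Feature columns and values are hypothetical, generated here for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline of typical HTTPS flows: [bytes_sent, bytes_received, duration_s]
baseline_flows = rng.normal(loc=[1200, 3000, 0.8], scale=[200, 500, 0.2], size=(500, 3))

# Fit the detector on known-good traffic; contamination sets the expected outlier rate.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline_flows)

# Score new traffic; a prediction of -1 marks a flow worth triaging.
new_flows = np.array([
    [1150, 2900, 0.7],        # resembles baseline traffic
    [9_000_000, 150, 60.0],   # unusually large, long-lived transfer
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{status}: {flow.tolist()}")
```

In a real deployment, the model would be trained on curated telemetry and retrained as traffic patterns change, with flagged flows routed to analysts rather than blocked automatically.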

2. Audit AI Technology

Organizations should consider auditing data sources and model behavior periodically, whether they develop their own algorithm or rely on a third-party tool. This way, they can verify that no adversarial training, prompt injection or data set poisoning has taken place.
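Two lightweight checks can anchor such an audit: fingerprinting training data so unexpected changes stand out, and re-running a fixed evaluation set to catch behavioral drift. The sketch below assumes hypothetical local files (training_data/, approved_hashes.json, baseline_metrics.json) and a model object with a predict() method; it is illustrative, not a complete audit program.

```python
# Illustrative audit helpers; file paths and the model interface are assumptions.
import hashlib
import json
from pathlib import Path


def fingerprint_dataset(data_dir: str) -> dict[str, str]:
    """Hash every training file so unexpected changes (possible poisoning)
    can be detected between audits."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(data_dir).rglob("*"))
        if path.is_file()
    }


def behavior_drifted(model, eval_cases: list[dict], tolerance: float = 0.05) -> bool:
    """Re-run a fixed evaluation set and flag the model if accuracy falls
    more than `tolerance` below the previously recorded baseline."""
    correct = sum(model.predict(case["input"]) == case["expected"] for case in eval_cases)
    accuracy = correct / len(eval_cases)
    baseline = json.loads(Path("baseline_metrics.json").read_text())["accuracy"]
    return accuracy < baseline - tolerance


# Periodic audit: compare current hashes against the last approved snapshot.
current = fingerprint_dataset("training_data/")
approved = json.loads(Path("approved_hashes.json").read_text())
if current != approved:
    print("Training data changed since last review -- investigate before retraining.")
```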

3. Leverage Automation 

According to research from Darktrace and the MIT Technology Review, 60% of C-suite professionals agree that human-driven security solutions are inadequate for defending against AI cyber threats. Cybersecurity teams should instead rely on the power of automation.   

IT security professionals can use AI's computational power to audit security logs, identify emerging cyber threats and optimize security parameters in real time, freeing them to focus on high-priority matters.
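Even simple automation pays off before any machine learning is involved. The sketch below scans an SSH authentication log and escalates source IPs with repeated failed logins; the log format, path and threshold are assumptions and would need tuning for a given environment.

```python
# Minimal sketch: automated triage of failed SSH logins from an auth log.
import re
from collections import Counter

# Matches lines like: "Failed password for invalid user admin from 203.0.113.7 ..."
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10  # failures per scan window before a source is escalated


def flag_suspicious_sources(log_path: str) -> list[tuple[str, int]]:
    """Count failed logins per source IP and return the ones over threshold."""
    failures = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    return [(ip, count) for ip, count in failures.most_common() if count >= THRESHOLD]


for ip, count in flag_suspicious_sources("/var/log/auth.log"):
    print(f"Escalate {ip}: {count} failed logins this window")
```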

4. Raise Awareness 

With AI-driven deepfake phishing on the rise, cybersecurity teams would be wise to urge the human resources department or the board to require organization-wide training. They can prioritize external threats when they have fewer employee mistakes to fix.

5. Utilize Access Controls 

Cybersecurity professionals should leverage authentication measures and access controls regardless of their other strategies. Since AI deepfakes can bypass biometrics, their toolset should layer multiple verification methods rather than relying on any single one.
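One way to layer factors is to require a time-based one-time password (TOTP) in addition to a biometric or password check, so a spoofed face or cloned voice alone is not enough. The sketch below uses the third-party pyotp library; the is_biometric_verified() helper is hypothetical and stands in for whatever primary check already exists.

```python
# Illustrative sketch: TOTP as a second factor alongside an existing check.
import pyotp


def is_biometric_verified(user_id: str) -> bool:
    """Hypothetical placeholder for an existing biometric or password check."""
    return True


# In practice, the secret is generated once per user at enrollment and stored securely.
enrollment_secret = pyotp.random_base32()
totp = pyotp.TOTP(enrollment_secret)


def grant_access(user_id: str, submitted_code: str) -> bool:
    """Require both factors before granting access."""
    return is_biometric_verified(user_id) and totp.verify(submitted_code, valid_window=1)


# A code generated from the enrolled secret passes; an arbitrary code does not.
print(grant_access("alice", totp.now()))
print(grant_access("alice", "000000"))
```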

Is the Average IT Security Team Ready to Defend Itself? 

Today, few security teams can withstand a sudden onslaught of AI cyberattacks — but that does not mean their situation is hopeless. They can defend against the modern threat landscape with additional technology investments and upskilling.  

Zac Amos
Zac Amos is the features editor at ReHack, where he covers trending tech news in cybersecurity and artificial intelligence. For more of his work, follow him on Twitter or LinkedIn.
