
Majority of cybersecurity professionals believe offensive AI will outpace defensive AI

5th October 2023
Harry Fowle

Seventy-six percent (76%) of cybersecurity professionals believe the world is very close to encountering malicious artificial intelligence (AI) that can bypass most known security measures.

More than a quarter (26%) see this happening within the next year, and 50% within the next five years. Phishing, social engineering tactics, and malware attacks are seen as the most likely to become more dangerous with the use of AI. These are some of the sobering findings published in a new report by Enea and Cybersecurity Insiders. The report, “Artificial Intelligence in Cybersecurity”, will be published on October 5th, and the results of the survey on which it is based will be discussed by AI specialists from Enea, Arista Networks, and Zscaler in a webinar on the same day.

The report provides an in-depth, holistic view of how cybersecurity professionals see AI and its impact on the industry, including their expectations, concerns, and strategies for integrating AI into their network defences. The results are complemented by insights and recommendations, developed in collaboration with Enea analysts, on how to build the capabilities, confidence, and resilience required to counter the emerging use of AI to execute cyberattacks.

The report breaks down key survey findings into fears, hopes, and plans around AI/ML in cybersecurity:

  • Fears: In addition to the concern about offensive AI outpacing defensive AI, a significant 77% of professionals express serious worries about rogue AI, where AI behaviour veers away from its intended purpose or objectives and becomes unpredictable and dangerous. Phishing, social engineering, and malware attacks are seen as the top threats that will be strengthened by AI, but identity fraud, data privacy breaches, and distributed denial-of-service (DDoS) attacks were also cited as likely to become more effective.
  • Hopes: Respondents are nonetheless optimistic about AI's positive impact on cybersecurity. AI is anticipated to bolster threat detection and vulnerability assessments, with intrusion detection and prevention identified as the domain most likely to benefit from AI. Deep learning for detecting malware in encrypted traffic holds the most promise, with 48% of cybersecurity professionals anticipating a positive impact from AI. Cost savings emerged as the top KPI for measuring the success of AI-enhanced defences, while 72% of respondents believe AI automation will play a key role in alleviating cybersecurity talent shortages.
  • Plans: While a majority (61%) of organisations are yet to deploy AI in any meaningful way as part of their cybersecurity strategy, 41% consider AI a high or top priority for their organisation. Meanwhile, a hopeful 68% of respondents expect a budget increase for AI initiatives over the next two years.

Workforce impact and training needs

Half (50%) of cybersecurity leaders report that their organisation has “extensive knowledge” of AI/ML in cybersecurity, another 19% report “moderate knowledge”, and the remaining roughly one-third report no-to-minimal knowledge. When asked what steps organisations should take to prepare for sophisticated or overwhelming AI attacks, 68% cited increased cybersecurity training and awareness for employees.

Developing AI-specific incident response plans followed close behind (65%), and 61% cited regular security assessments and audits. Over half of all respondents said that strengthening traditional security controls such as zero-trust protocols, multi-factor authentication, next-gen firewalls, and threat intelligence was key to preparing for sophisticated AI attacks.

Moving from understanding to action

"Understanding the profound impact of AI on cybersecurity is crucial for navigating the evolving threat landscape,” said Laura Wilber, Sr. Industry Analyst at Enea. “That begins by listening closely to the concerns and hopes of cybersecurity leaders and their teams on the front lines.”

“This report confirms growing concerns around the malicious use of AI, but it also highlights some remarkable innovations in the use of AI to streamline and automate defences. Significant gains have already been made, such as a reduction in the average time it takes to detect and contain threats. However, AI is not a one-size-fits-all solution – it’s essential that businesses take a clear and methodical approach to implementing AI strategies in order to achieve maximum readiness and resilience. As we say at Enea – don’t be surprised, be ready.”
