
AI’s Influence on Cybersecurity – Friend or Foe?

Published Jul 11, 2023
By Gaini Umarov, Joseph Nguyen, Ayobami Adebiyi

Current Landscape of AI and Cybersecurity 

Although the concept of artificial intelligence (AI) has been around for decades, the corporate world is constantly reminded of both its promise and its challenges. The wave of information shared, both positive and negative, makes it difficult for companies to decipher and adopt appropriate strategies for effectively leveraging AI to manage ongoing cyber risk and safeguard their corporate information and assets.

Put simply, AI is the replication of human intelligence processes by machines, especially computer systems. This article examines how it has been used to evolve the cybersecurity landscape.

As cyberattacks and related threats grow in volume and complexity, AI and machine learning (ML) have been integral in helping security operations analysts stay ahead of them. One common application is sorting through threat intelligence from millions of research papers, blogs and news stories. AI technologies like machine learning and natural language processing have been leveraged to provide rapid insights that cut through the noise of daily alerts, drastically reducing response times. Furthermore, today's security personnel are inundated with multiple tasks and data overload, so AI has been used to increase the efficiency of these highly skilled security resources by automating routine tasks and allowing them to take on more of a supervisory role.
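
As an illustration of this triage idea, the minimal sketch below scores incoming intelligence text with a simple NLP pipeline (TF-IDF features plus logistic regression via scikit-learn). The sample snippets, labels, and relevance interpretation are hypothetical placeholders, not part of any real intelligence feed or specific product.

```python
# Minimal sketch: scoring threat-intelligence text with a simple NLP pipeline
# (TF-IDF features + logistic regression). The tiny training set and labels
# below are illustrative placeholders, not real intelligence data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets: 1 = relevant to our environment, 0 = noise.
documents = [
    "New ransomware strain targets unpatched VPN appliances",
    "Phishing campaign spoofs payroll portal login pages",
    "Vendor announces quarterly earnings and leadership changes",
    "Conference recap: keynote highlights and networking tips",
]
labels = [1, 1, 0, 0]

# Fit a text classifier that can score incoming articles and alerts.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(documents, labels)

# Score a new item; a higher probability suggests it deserves analyst attention.
new_item = ["Exploit released for VPN appliance flaw used by ransomware group"]
relevance = model.predict_proba(new_item)[0][1]
print(f"Relevance score: {relevance:.2f}")
```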

Key Roles of AI in Cybersecurity 

AI’s key purposes in the domain of cybersecurity are:

  • Prediction: Approximately 35% of companies currently in operation rely heavily on AI to predict cyberattacks by sifting through various kinds of data.1 These companies can leverage AI to automatically analyze their assets and networks to pinpoint weaknesses, inherently strengthening their network defenses against potential attacks.
  • Detection: Using AI to identify cyber threats has become common practice, with 50% of organizations2 across industries employing AI-based security solutions. The key capability leveraged here is behavioral analysis, which uses machine learning or deep learning to identify unusual traffic (see the sketch after this list).
  • Response: AI's role in actively defending against threats is still evolving, yet 18% of businesses already use it for that purpose. "Replacing traditional techniques with AI can increase the detection rates up to 95%," according to Gerard Mondaca in an article published on eftsure. Processes here include building new defense mechanisms in real time or automating the development of virtual patches for threat identification. This allows threats to be detected and stopped in real time by blocking anomalous behavior or bots.
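
To make the behavioral-analysis idea in the Detection bullet concrete, here is a minimal sketch that trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on synthetic "normal" connection features and flags an outlier. The feature choices, contamination setting, and data are assumptions for illustration; real deployments use far richer features and tuning.

```python
# Minimal sketch of behavioral anomaly detection on network traffic, assuming
# simple hypothetical per-connection features (bytes transferred, session
# duration, failed logins).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: modest byte counts, short sessions, few failures.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # bytes transferred
    rng.normal(30, 10, 500),         # session duration (seconds)
    rng.poisson(0.1, 500),           # failed login attempts
])

# Train an unsupervised model of "normal" behavior.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# A suspicious connection: large transfer, long session, many failed logins.
suspicious = np.array([[250_000, 600, 12]])
print(detector.predict(suspicious))   # -1 flags an anomaly, 1 means normal
```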

Together, these roles make AI useful for the following cybersecurity practices:

  • Malware Detection and Analysis: Machine learning algorithms to identify patterns
  • Network Intrusion Detection: AI-based systems to monitor network traffic and identify unusual patterns
  • Vulnerability Management: AI to scan systems and networks for vulnerabilities
  • Threat Intelligence: Natural language processing and machine learning to analyze data, ultimately to uncover threats and understand the Tactics, Techniques and Procedures (TTPs) of different threat actors
  • Security Automation: Automating incident response, security monitoring, and threat hunting to improve efficiency and reduce workload
  • Identity and Access Management: Identifying and blocking suspicious login attempts (see the sketch following this list)
  • Cloud Security: AI-based systems to monitor and protect cloud-based environments
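
As a simplified illustration of the identity and access management row, the sketch below flags a suspicious login attempt using rule-based checks over a hypothetical per-user login history. The rules stand in for what an ML model would learn; a real identity platform would combine many more signals (device fingerprint, IP reputation, geolocation, velocity).

```python
# Minimal sketch of flagging suspicious login attempts, assuming a hypothetical
# login history of (hour_of_day, country) pairs for a single user.
from collections import Counter

# Hypothetical historical logins for one user.
history = [(9, "US"), (10, "US"), (14, "US"), (9, "US"), (16, "US"), (11, "US")]

def is_suspicious(hour: int, country: str, history) -> bool:
    """Flag a login if the country is unseen or the hour is far outside the norm."""
    seen_countries = Counter(c for _, c in history)
    usual_hours = [h for h, _ in history]
    new_country = country not in seen_countries
    odd_hour = not (min(usual_hours) - 3 <= hour <= max(usual_hours) + 3)
    return new_country or odd_hour

print(is_suspicious(10, "US", history))  # False: matches normal behavior
print(is_suspicious(3, "RU", history))   # True: new country and unusual hour
```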

Challenges with AI

AI has the potential to revolutionize security, but it also poses significant risks. These include lack of transparency and interpretability; overreliance on AI; bias and discrimination; vulnerability to attacks; and lack of human oversight. These risks can lead to flawed decisions and a false sense of security, which can negatively impact individuals or entire organizations.

  • Lack of transparency and interpretability: The lack of transparency and interpretability in AI algorithms can make it difficult to understand how they reach decisions, making it challenging to identify and address potential issues. While automation is useful in many areas, especially for routine tasks, it is critical that humans remain involved in high-stakes decisions.
  • Bias in AI algorithms: AI algorithms can be biased by the data used to train them, as AI systems are only as effective as that data. Using biased or incomplete data can lead to unintended consequences and discrimination, so organizations developing AI systems should prioritize ethical considerations.
  • Security risks associated with AI systems: AI systems are vulnerable to attacks and, if compromised, can become a liability for organizations.
  • Cybercriminals and hackers can also use AI: One of the biggest challenges is the potential for hackers to use AI to develop more sophisticated cyber threats. For example, generative AI can be used to create realistic phishing emails, deploy malware or create convincing deepfake videos. Research shows just how easy it is to automate the creation of credible yet malicious code at incredible speed. As AI becomes more advanced, it is likely that hackers will find new and creative ways to use it to their advantage. Chief information security officers (CISOs) therefore need to prepare for the next wave of AI-driven attacks (eccu.edu).3

It’s important for organizations to understand these risks and take mitigating steps as they adopt AI-based security systems. By implementing secure design principles, continuously monitoring and auditing AI systems, and having a framework in place to address bias, organizations will be better positioned to use AI in a way that serves the greater good and protects the rights of all individuals.

1 AI in Cyber Security Testing: Unlock the Future Potential (readwrite.com)
2 AI in Cyber Security Testing: Unlock the Future Potential (readwrite.com)
3 Why Artificial Intelligence is the Future of Cybersecurity (eccu.edu)

Gaini Umarov

Gaini Umarov is a Senior Manager in the firm. With over 10 years of experience in IT and business advisory services, Gaini leads the North East IT Risk, Data Privacy and Security Practice that is part of the overall Risk and Compliance Services (RCS) practice group.

