AI in Cybersecurity
Game-Changer or Risky Gamble?
Veronika Stefaniak
10/28/2024
The once-distant idea of building a future with Artificial Intelligence has become an undeniable reality. We are witnessing, and actively designing, the writing of our future's history. The question is: in what direction is it really heading right now?
In November 2023, a mysterious OpenAI project called Q* (Q-star) came to light, with a model reportedly capable of solving elementary school-level maths problems. This might not sound particularly impressive until we realise that this innovation marked the beginning of AI's ability to reason logically. Moreover, it is not about performing calculations pre-programmed by engineers, but about finding solutions to problems the model had never encountered before. This suggests that Q* could be a key step towards artificial general intelligence (AGI), allowing AI to learn across all fields given sufficient computational power. If this becomes attainable, the final step would be a conscious and creative AI, capable of setting its own goals without any human intervention. Has the bar been raised today?
According to Gartner, global spending on cybersecurity is expected to increase by 15.1% in 2025, reaching $212 billion, up from a projected $183.9 billion in 2024.
The statistics are clear: the digital world is rapidly becoming a less friendly place.
Enterprises are investing in AI tools that support application security, data protection, privacy, and infrastructure security. AI enables the automation of threat detection, allowing potential attacks to be identified faster and reducing the risk of data breaches. The development of GenAI is also driving demand for security software, spending on which could increase by 15% in 2025.
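At its simplest, the automated threat detection described above amounts to learning a baseline of normal behaviour and flagging deviations from it. The sketch below is purely illustrative: hypothetical requests-per-minute numbers and plain standard-deviation scoring stand in for the far richer statistical models that commercial AI monitoring tools build.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` standard
    deviations above the mean of the series, a toy stand-in for
    baseline-and-deviation threat detection."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hypothetical traffic log: a steady baseline with one sudden burst.
traffic = [120, 118, 125, 122, 119, 121, 900, 117, 123]
print(flag_anomalies(traffic))  # → [6], the burst at index 6
```

Real systems replace the z-score with learned models (clustering, isolation forests, sequence models) so that "normal" can be multidimensional and drift over time, but the detect-by-deviation principle is the same.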
AI-driven coding is taking the tech world by storm, with developers in 83% of organisations now using AI to write code. But security leaders are sounding the alarm, warning that it could spark a major security disaster, according to a recent Venafi survey.
The report published on September 17th highlights a stark reality: while 72% of security leaders feel they must let developers use AI to stay competitive, an overwhelming 92% are deeply concerned about the risks. In fact, nearly two-thirds (63%) of these leaders have even contemplated banning AI in coding altogether due to mounting security fears.
According to separate research by CybSafe and the NCA, over half (52%) of the survey's respondents believe AI will make it harder to detect scams, and 55% said the technology will make it more difficult to stay secure online. Shouldn't that be a wake-up call for the whole cybersecurity market?
On the other hand, as reported by Forbes, 76% of businesses have already made AI and machine learning a top priority in their IT budgets, highlighting broad acknowledgment of AI's importance. These two points of view are so divided that, at this point, it is unclear whether to rely on AI for cybersecurity or steer clear of it entirely. Why is that?
Rapid and relentless improvement is integral to artificial intelligence, and defenders hold no monopoly on it: every advance that strengthens protection is equally available to attackers, which makes it impossible to prevent every data breach and attack. AI thus becomes a tool that both sides use in completely different ways: to prevent cybercrime or to accelerate it. AI-based tools such as LLMs are increasingly used by cybercriminals, making their attacks more scalable and complex.
Data is the "fuel" that allows artificial intelligence to be used in valuable ways, though what counts as "valuable" is, of course, extremely debatable. AI's capacity for constant learning is its main superpower, but it can also be its biggest flaw. Optimism and suspicion alike highlight AI's potential in cybersecurity as well as its threats. Striking the right balance between the advantages and the challenges will be a key objective for the coming years. We must stay vigilant in maintaining that balance; otherwise it may turn out that it is already too late to pull the plug.