AI: Predicting Cyberattacks, Not Foretelling the Future

The emergence of generative AI marks a new era of experimentation for businesses. They are investing in innovations that are not yet fully understood or trusted.

For cybersecurity professionals, however, harnessing the potential of AI has been a dream for years. The ability to predict cyberattacks is now within reach.

The Dream and the Doubt: Can Cyberattacks Be Predicted?

The ability to predict attacks has long been the “holy grail” of cybersecurity, according to Fortune, and it is met with significant skepticism: claims of “predictive capabilities” have often turned out to be marketing hype or immature aspirations.

However, AI is now at a tipping point where access to more data, better models, and years of experience have paved a clearer path to achieving large-scale prediction.

Beyond Chatbots: Foundation Models and Reasoning Capabilities

So, are chatbots the answer to predicting cyberattacks? Definitely not. Generative AI has not reached its peak with this generation of chatbots.

Chatbots are just the beginning. They pave the way for foundation models, whose reasoning capabilities can assess with a high degree of confidence not only whether a cyberattack is likely, but also how and when it might occur.

Classical AI Models: Strengths and Limitations

To understand the benefits that foundation models can offer security teams in the near future, we must first understand the current state of AI.

Classical AI models are trained on specific datasets for specific use cases to produce specific results, and the speed and accuracy of those results remain the main advantage of AI applications in cybersecurity.

To date, this kind of innovation, coupled with automation, has continued to play a central role in managing threats and protecting user identities and data privacy.

With classical AI, a model trained on Clop ransomware (a strain of malware that locks access to files on a target system) learns to identify its characteristic signs and patterns.

When those indicators suggest that ransomware is present in your environment, the model immediately reports it to the security team, with a speed and accuracy that far surpass manual analysis.
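As a rough illustration of this supervised approach, the sketch below trains a small classifier on labeled telemetry. The feature names and training rows are hypothetical stand-ins for real indicators such as mass file renames, entropy spikes from bulk encryption, and contact with known command-and-control infrastructure.

```python
# Minimal sketch of the classical, supervised approach described above.
# Features and training data are hypothetical; a real pipeline would
# extract indicators from endpoint and network telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [files_renamed_per_min, avg_write_entropy, known_c2_contact]
X_train = np.array([
    [2,   0.42, 0],   # benign baseline activity
    [5,   0.51, 0],   # benign
    [480, 0.97, 1],   # labeled Clop-like ransomware burst
    [350, 0.93, 1],   # labeled ransomware
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = ransomware

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score new telemetry; anything the model flags is reported immediately.
new_events = np.array([[510, 0.95, 1], [3, 0.44, 0]])
for event, label in zip(new_events, clf.predict(new_events)):
    if label == 1:
        print(f"ALERT: ransomware-like pattern detected: {event}")
```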

Self-supervised AI Models: Chatbots as Assistants

The recent rise of generative AI has pushed Large Language Models (LLMs) to the forefront of the cybersecurity sector.

LLMs have the ability to quickly retrieve and summarize various forms of information for security analysts using natural language.

These models give security teams a human-like interface, making planning and the analysis of complex technical information far more accessible.

LLMs are starting to empower teams to make faster and more accurate decisions. In some cases, actions that previously took weeks can now be completed in a matter of days or even hours.

Speed and accuracy are thus the defining characteristics of this latest innovation, as seen in the breakthroughs introduced by AI chatbots such as IBM Watson Assistant and Microsoft Copilot.
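The sketch below illustrates the general pattern only. The article names IBM Watson Assistant and Microsoft Copilot; here the widely used OpenAI Python client stands in, and the alert log is invented, to show how an LLM can summarize raw events for an analyst in natural language.

```python
# Illustrative sketch: an LLM summarizing raw security alerts for an
# analyst. The OpenAI client is a stand-in for the assistants named in
# the article; the alert log below is fabricated for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alerts = """
2024-05-01 03:12 UTC host-17 powershell.exe spawned by winword.exe
2024-05-01 03:13 UTC host-17 outbound TLS to rare domain, 40 MB upload
2024-05-01 03:15 UTC host-17 shadow copies deleted via vssadmin
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize these alerts "
                    "for an analyst and suggest a likely attack stage."},
        {"role": "user", "content": raw_alerts},
    ],
)
print(response.choices[0].message.content)
```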

Foundation Models: The Future of Attack Prediction

The advancement of LLMs allows us to harness the full potential of foundation models. Foundation models can be trained on multimodal data, not just text but also images, audio, video, network data, behavior, and more.

They go beyond the purely language-based capabilities of LLMs and significantly expand the limits of what AI can take in and reason about.

What does this mean? Returning to the ransomware example: a foundation model does not need to have seen Clop ransomware to recognize anomalous and suspicious behavior.

It can therefore detect hard-to-characterize threats that have never been seen before. This ability will raise the productivity of security analysts and speed up their investigations and response.
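To make the “never seen before” idea concrete, the sketch below uses a simple unsupervised detector trained only on normal activity. Real foundation models learn from multimodal enterprise data and do far more; the IsolationForest and the telemetry features here are illustrative assumptions only.

```python
# Sketch of flagging behavior the model was never explicitly trained to
# recognize. Trained ONLY on normal activity; the features are invented:
# [files_touched_per_min, avg_write_entropy].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

normal = np.column_stack([
    rng.normal(5, 2, 500),       # modest file activity
    rng.normal(0.45, 0.05, 500)  # ordinary write entropy
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A Clop-like burst the detector has never seen: mass file writes with
# near-maximal entropy (a hallmark of bulk encryption).
unseen = np.array([[480.0, 0.97]])
print(detector.predict(unseen))  # -1 => anomalous, flag for investigation
```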

The Future: Successful Trials & Attack Prediction

About a year ago, IBM started a pilot project to:

  • Pioneer security foundation models to detect and predict previously unseen threats.
  • Empower intuitive communication and reasoning across enterprise security without sacrificing data privacy.

In one client trial, the model predicted 55 attacks several days before they actually happened. Of those 55 predictions, analysis confirmed that 23 of the attempts occurred as expected, while many of the others were blocked.
