AI’s Inherent Duality: Weaponizing Cybercriminals and Revolutionizing Defense

June 11, 2025
By Beatriz Carratalá LLeó

Today, no one doubts that Artificial Intelligence (AI) plays an increasingly significant and disruptive role in the modern world, and its influence on the cybersecurity landscape is genuinely transformative.

However, AI is not just a new digital tool but a true paradigm shift, one that alters the very nature of cyberthreats and, in turn, of the defences against them.

This is because at the core of this transformation is the duality inherent in AI. While it enhances the capabilities and reach of malicious actors, enabling more sophisticated and scalable attacks, it also offers defenders unprecedented capabilities to mitigate and eradicate those threats.

Therefore, it is not surprising that this duality is the driving force behind the new rules of the game in the new and challenging cybersecurity scenario, where the only certainty is that the outcome will be totally uncertain.

We must keep in mind that we are not dealing with a monolithic technology, but with an expansive and multidisciplinary field that encompasses a very broad spectrum of techniques and approaches - from Machine Learning and Deep Learning to Natural Language Processing (NLP).

This has resulted in the proliferation of AI tools, particularly generative AI, which are democratizing advanced cyber capabilities on the offensive side. This trend points to a worrying prospect: sophisticated attacks can now be launched by a far broader spectrum of malicious actors, not just highly specialized and well-funded organizations. These competencies are no longer exclusive to Nation-States or organized cybercriminal groups; ease of access to them broadens the range of cyberthreats, enabling malicious actors to severely test existing defensive resources.

The ever-resourceful ingenuity of cybercriminals has found its perfect ally in AI, allowing them to apply it across the entire attack cycle: from the initial deception and penetration phase to the persistence and concealment of malicious activity.

Evidence of this is the increasing observation that AI facilitates the optimization of social engineering and phishing. Here, NLP and advanced generative models allow attackers not only to automate decoy creation but also to achieve previously unthinkable levels of hyper-personalization and contextualization.

This same generative capacity extends to the creation of deepfakes - ultra-realistic synthetic videos and audios generated by Generative Adversarial Networks (GANs) - which make it difficult for the average user to discern reality from fiction.

What is most concerning is that this dissonance comes at a very high price: an unprecedented erosion of citizens' trust in digital interactions.

AI-powered malware and adaptive threats

Furthermore, attackers are no longer limited to perpetrating direct financial fraud, such as impersonating executives to authorize transfers, or conducting large-scale disinformation campaigns. Instead, through the intensive use of new AI tools, they seek to massively accelerate the identification and exploitation of vulnerabilities.

These include discovering zero-day exploits, finding paths to privilege escalation, and developing persistence and evasion techniques.

Attackers can develop polymorphic and adaptive malicious code, which manages to alter its own structure and modify its behaviour in real time to evade traditional signature-based security solutions. This "learning" malware is not a static tool, but a dynamic adversary that evolves in response to the environment and the defences it encounters.

Consequently, AI enables the orchestration of integrated campaigns where personalized social engineering can serve as a vector to deploy intelligent malware, which in turn exploits AI-identified vulnerabilities to persist and propagate. This synergy transforms malware from a simple piece of code to a tenacious and evolving threat, demanding a fundamental rethinking of defensive strategies towards equally dynamic and intelligent mechanisms capable of anticipating and countering these multifaceted offensives.

Paradoxically, it is also here that AI stands out more than ever as a necessary and powerful defensive tool, one that industry professionals are already using to counteract this amplification of threats.

AI's aforementioned duality is also revolutionizing cybersecurity defences, offering proactive threat detection, automated response, and intelligent analysis capabilities that surpass traditional methods. AI's analytical and predictive capabilities enable AI-powered Security Orchestration, Automation, and Response (SOAR) systems to exponentially optimize their effectiveness. This reduces the rate of false positives that often plague Security Operations Centres (SOCs), automates the enrichment of security alerts with relevant contextual information, and can even trigger predefined response playbooks based on the threat's nature and severity. This increased automation lets analysts focus on more complex and strategic tasks, such as mitigating and containing high-priority threats.
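To make the triage-and-enrichment idea concrete, here is a minimal, purely illustrative sketch in Python. It does not reflect any real SOAR product's API: the `Alert` class, the severity weights, and the `triage` helper are all hypothetical assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical severity weights and asset inventory -- illustrative values only.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}
CRITICAL_ASSETS = {"domain-controller", "payment-gateway"}

@dataclass
class Alert:
    source: str
    severity: str
    asset: str
    enrichment: dict = field(default_factory=dict)

def triage(alerts):
    """Enrich each alert with context and rank it so analysts see
    high-priority threats first -- the opening step of a SOAR playbook."""
    for a in alerts:
        score = SEVERITY_WEIGHT.get(a.severity, 0)
        if a.asset in CRITICAL_ASSETS:
            score *= 2  # alerts touching critical infrastructure double in priority
            a.enrichment["asset_tier"] = "critical"
        a.enrichment["priority"] = score
    # Highest-priority alerts surface first; low scores can be batched or auto-closed.
    return sorted(alerts, key=lambda a: a.enrichment["priority"], reverse=True)
```

A real playbook would go further, auto-closing low-score alerts or opening tickets; the point is that scoring and enrichment happen before a human analyst ever sees the queue.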

Mitigating insider threats with AI

However, not all threats come from outside. In fact, one of the most complex challenges cybersecurity specialists face comes from threats that originate within organizations themselves, and this is where AI stands as a fundamental pillar, enabling observation capabilities that mitigate endogenous risk. User Behavior Analysis (UBA) systems detect anomalous patterns in the daily activities of personnel, whether caused by malicious intent or accidental oversight; both can have equally harmful consequences for the organization.
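As a toy illustration of the UBA idea, the sketch below flags behaviour that deviates from a user's historical baseline. It is a deliberate simplification: real UBA systems model many signals rather than a single one, and the data and threshold here are arbitrary assumptions.

```python
import statistics

def behaviour_anomalies(baseline_hours, recent_hours, threshold=3.0):
    """Flag recent login hours that deviate strongly from a user's
    historical baseline -- a toy stand-in for UBA anomaly scoring.
    The z-score threshold of 3.0 is an illustrative assumption."""
    mean = statistics.mean(baseline_hours)
    stdev = statistics.pstdev(baseline_hours) or 1.0  # guard against zero variance
    return [h for h in recent_hours if abs(h - mean) / stdev > threshold]
```

For a user who habitually logs in around 9-10 a.m., a 3 a.m. login scores far outside the baseline and would be surfaced for review, whether it signals a compromised account or a careless employee.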

These AI-driven defensive capabilities represent a radical shift in organizations' security posture, moving from inefficient reactive stances to predominantly predictive and proactive approaches. The key to the intersection between AI and cybersecurity lies in AI's emergence as an enabler for anticipating threats and mitigating their impact early. This strengthens organizational resilience and significantly reduces the rate of incidents that would previously have been devastating.

Finally, AI acts as a crucial "force multiplier", the perfect companion for often understaffed and overstretched security teams. By automating routine tasks and augmenting human analytical capabilities, it enables organizations to handle a much greater and more complex volume of threats effectively.

However, we still have a long way to go; we must move forward recognizing that mastering AI and machine learning will be crucial to stay ahead in this constantly evolving environment.

The future of AI in cybersecurity: collaboration and adaptation

Here, other controversial debates affecting AI become relevant. One of these is that the quality of AI models depends largely on the data used to train them.

The "black box" problem - where the decision-making processes of complex AI models can be difficult to understand - requires continued attention to transparency and explainability. Emerging technologies like generative AI, while offering potential advantages for defence, also present new risks and vulnerabilities that attackers will exploit sooner rather than later. Therefore, the security of AI systems themselves is increasingly vital.

In conclusion, the future promises improved AI-driven defence mechanisms in various areas, such as networks, the IoT, cloud computing, and collaborative technologies. In these areas, comprehensive AI-driven automation will help improve the speed and efficiency of threat detection and response.

However, the future also poses persistent challenges, as adversarial AI will continually attempt to manipulate or compromise AI models used for security.

In any case, whatever strategy is chosen to incorporate AI into our cyber defence capabilities, it will require responsible deployment of AI and a strict governance framework to mitigate risks and unintended consequences.

All of this highlights an important challenge for the future: the need for strong collaboration between humans and AI. Human expertise will play a crucial and indispensable role in guiding AI, interpreting complex situations, and making critical decisions, complementing the seemingly unlimited capabilities of AI, which, despite its prospects for self-sufficiency, will still need to advance under the umbrella of human restraint.

One thing is certain: navigating this future will require continuous vigilance, adaptation, and collaborative security efforts to improve the overall defensive posture.

About Beatriz Carratalá LLeó

Beatriz Carratalá LLeó is a cybersecurity and cyber risk management specialist with a strong legal and technological foundation. She holds various certifications in cyber risk management, ethical hacking, and compliance, and she is also a certified insurance broker (Level A) and judicial expert in compliance. In 2022, Beatriz co-founded SECURIZABLE, a cutting-edge consultancy in tech risk management, where she leads cyber risk consulting. A dedicated educator and mentor, she has co-authored training modules, tutored courses, and delivered talks on cybersecurity. Beatriz also serves on the Board of Directors of Women4cyber and actively supports ethical tech use through organisations like the National Cybersecurity Institute of Spain (INCIBE).


Co-Funded by the European Union