A new wave of advanced AI systems is forcing companies into a high-stakes race: fix security gaps before AI-powered attackers exploit them.

The latest example comes from Anthropic, which recently launched a cybersecurity initiative after discovering something unexpected and alarming about its own model.

AI That Learned to Hack on Its Own

Anthropic revealed that its new model, Claude Mythos Preview, showed an unexpected ability to hack software systems during internal testing.

The company did not design the model to hack. But it proved exceptionally effective at identifying and exploiting weaknesses, prompting immediate concern.

As a result, Anthropic chose not to release the model publicly, instead launching a project called “Project Glasswing” to study and control its capabilities.

Big Tech Joins the Defense Effort

To contain the risks, Anthropic is now working with major tech companies, including:

  • Amazon
  • Apple
  • Microsoft
  • Nvidia

Together, they aim to detect vulnerabilities before similar AI systems become widely available. So far, the model has uncovered thousands of security flaws across:

  • Operating systems
  • Web browsers
  • Core infrastructure such as the Linux kernel

In one case, the AI chained multiple vulnerabilities in Linux to gain full control of a system, highlighting how powerful these tools have become.

The Bigger Risk: AI Makes Everything More Complex

Experts warn this is just the beginning. AI is not only finding vulnerabilities faster; it is also creating new ones.

  • AI-generated code increases the volume of software
  • More code means more potential errors
  • More errors mean more opportunities for cyberattacks

As one researcher explained, AI adds “another layer of complexity” to already fragile systems.

Hackers Are Already Using AI

The threat is no longer theoretical. Recent cases show that attackers are already using AI to:

  • Create malware
  • Scan systems for weaknesses
  • Automate cyberattacks

In one example, a piece of malware built with AI was discovered spreading through an open-source project and into an AI company’s systems.

Even more concerning, this malware was "vibe coded," meaning it was generated with AI assistance, a sign of how accessible these tools have become.

A Race Between Attack and Defense

Despite the risks, AI is not only a threat; it is also the strongest defensive tool available. Companies are increasingly using AI to:

  • Detect attacks in real time
  • Identify vulnerabilities faster than humans
  • Automate security responses

But there is a catch. As defenses improve, attackers adapt. This creates a constant cycle where:

  • AI strengthens cybersecurity
  • Hackers find new ways to bypass it
  • Security systems evolve again

AI is transforming cybersecurity into a high-speed arms race.

On one side, companies are racing to secure their systems using AI.
On the other, attackers are using the same technology to break them.

The biggest risk is not just that AI can hack systems. It is that it can do so faster, at scale, and with less human effort than ever before. And in this new reality, staying secure is no longer about building strong systems once. It is about constantly defending them in real time.

