Ex-CISA Chief Predicts AI Could End Cybersecurity Industry by Fixing Code Quality Crisis

by Tashina
Cyber Security Matters. Spread the Word.

TLDR: Former CISA Director Jen Easterly claims AI cybersecurity transformation could eliminate the security industry entirely. She argues that most breaches stem from poor software quality, not sophisticated attacks. AI tools might finally fix these fundamental flaws faster than criminals can exploit them. However, experts warn that AI-generated code has actually worsened some vulnerabilities.

AI Could Make Security Teams Obsolete

The AI cybersecurity transformation might spell the end for security professionals everywhere. Jen Easterly, former CISA Director, made this bold prediction at AuditBoard’s conference in San Diego. She believes artificial intelligence will finally solve the industry’s core problem: rubbish software.

Easterly argues we don’t actually face a cybersecurity crisis. Instead, the industry suffers from a chronic software quality problem. Vendors prioritise speed and cost over safety, she claims. This creates the attack surface that criminals exploit daily.

The Real Threat Isn’t Fancy Hackers

Easterly wants to demystify cyber attackers completely. Groups with intimidating names like Fancy Bear or Scattered Spider don’t deserve the hype. She suggests more fitting titles: “scrawny nuisance” or “weak weasel.”

The so-called “advanced persistent threats” aren’t using exotic weapons either. Chinese hackers exploit the same boring flaws that have plagued systems for decades. SQL injection, cross-site scripting, and memory-unsafe code remain the golden oldies. These software quality vulnerabilities appeared in MITRE’s research nearly 20 years ago.
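To show just how old-fashioned these flaws are, here is a minimal Python sketch of classic SQL injection alongside the long-established fix, a parameterised query. The users table and column names are illustrative only, not taken from any real system.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Classic SQL injection: user input is concatenated straight into the query.
    # An input like "' OR '1'='1" changes the meaning of the whole statement.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the input as data, never as SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

# Quick demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

payload = "' OR '1'='1"
print(find_user_vulnerable(conn, payload))  # leaks every row in the table
print(find_user_safe(conn, payload))        # returns [] as expected
```

The fix has been standard practice for roughly as long as the flaw has existed, which is precisely Easterly’s point.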

Even China’s People’s Liberation Army relies on basic router vulnerabilities. Their strategy for a potential Taiwan conflict depends on simple network device flaws. Nothing jaw-dropping or innovative about their approach.

Software Vendors Created This Mess

Software companies convinced everyone that customers should bear all risk. Governments and regulators accepted this absurd arrangement without question. The result? A rickety mess of overly patched, flawed infrastructure everywhere.

Easterly points out that attackers have grown more capable recently. AI helps criminals create stealthier malware and hyper-personalised phishing campaigns. The technology also helps them spot vulnerabilities more quickly than before.

But artificial intelligence offers defenders hope too. CISA developed its own AI action plan to tip the balance. The goal is making security breaches an anomaly, not business as usual.

How AI Cybersecurity Transformation Works

AI excels at tracking and identifying code flaws automatically. The technology can tackle mountains of technical debt that humans can’t. Detection, countermeasures, and learning from attacks all improve with AI assistance.
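For a rough feel of what automated flaw-spotting means in miniature, the sketch below is a deliberately naive scanner that flags execute() calls which build SQL from f-strings or string concatenation. Real AI-assisted tooling reasons far more deeply about code; the regex, function name, and file handling here are invented purely for illustration.

```python
import re
from pathlib import Path

# Naive pattern: an execute() call whose query is an f-string or is built with '+'.
SUSPECT = re.compile(r"""execute\(\s*(f["']|["'].*["']\s*\+)""")

def scan_for_sql_concatenation(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for each suspicious call found under root."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPECT.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for file, lineno, line in scan_for_sql_concatenation("."):
        print(f"{file}:{lineno}: possible SQL built from string concatenation: {line}")
```

A pattern-matcher like this misses anything built indirectly; the promise of AI assistance is doing this kind of triage across an entire codebase with far more context than a single regex.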

The White House AI Action Plan specifically emphasises secure by design principles. Systems must be created, developed, and tested with security as the top priority. This approach could finally force vendors to build better software.

Companies using network penetration testing services might see dramatic changes ahead. AI could identify vulnerabilities before human testers even start their assessments.

Expert Warning: AI Makes Things Worse

Not everyone shares Easterly’s optimism about the AI cybersecurity transformation. William Fieldhouse, Director of Aardwolf Security Ltd, has seen the opposite so far. “Vibe coding has actually exacerbated older vulnerabilities like SQL injections,” he warns.

Fieldhouse observes that AI-generated code often reintroduces problems that developers eliminated years ago. The technology lacks an understanding of security context and best practices. Developers trust AI suggestions without proper security review.

This creates a dangerous situation where software quality vulnerabilities multiply rather than decrease. Teams need proper security training before implementing AI coding assistants.
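As a hypothetical illustration of the kind of reintroduced flaw Fieldhouse describes, the snippet below contrasts a handler that drops user input straight into HTML, the textbook cross-site scripting mistake an assistant can cheerfully suggest, with a version that escapes it first. The function names and markup are invented for the example.

```python
from html import escape

def comment_widget_unsafe(user_comment: str) -> str:
    # Pattern an assistant might happily produce: raw input dropped into markup.
    # A comment like "<script>alert(1)</script>" executes in the visitor's browser.
    return "<div class='comment'>" + user_comment + "</div>"

def comment_widget_safe(user_comment: str) -> str:
    # html.escape() turns <, >, & and quotes into entities, so the payload
    # renders as harmless text instead of running as script.
    return "<div class='comment'>" + escape(user_comment, quote=True) + "</div>"

payload = "<script>alert(1)</script>"
print(comment_widget_unsafe(payload))  # markup containing a live script tag
print(comment_widget_safe(payload))    # the payload rendered as inert text
```

A security-aware reviewer catches this in seconds, which is why training has to come before the coding assistant, not after.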

The Road to Secure by Design

Easterly believes reducing software risk requires demanding more from vendors, because that is where risk enters the system in the first place. Organisations can use their procurement choices to drive down that risk materially.

AuditBoard’s CISO Richard Marcus noted that his company applies secure by design principles to its suppliers. But it also turns the mirror on internal teams: its own products must uphold the same principles.

The current administration continues to champion secure by design for software more broadly. This regulatory pressure might finally force vendors to change their habits. Get a penetration test quote to assess your current security posture.

Conclusion: Will AI End Cybersecurity?

The AI cybersecurity transformation promises revolutionary changes for the industry. Easterly envisions a future where security teams become entirely unnecessary. AI would fix software quality vulnerabilities before attackers could exploit them.

However, early evidence suggests caution is warranted here. AI currently creates as many problems as solutions. Security professionals shouldn’t worry about redundancy just yet.

The industry needs time to develop AI tools properly. Secure by design principles must guide artificial intelligence development itself. Otherwise, we risk creating an even bigger mess than before.

