When AI Breaks Bad

Why Governance Might Be the Only Cure

November 13, 2025

Thank you, Silicon Valley! Yet again, you’ve f*cked up the world.
— Vince Gilligan, creator of Breaking Bad and Better Call Saul

When the man who gave us Walter White calls out AI for "breaking bad", you pay attention.


Vince Gilligan recently described AI as the world’s most expensive and energy-intensive plagiarism machine.

It’s a sharp observation and a fair one in the creative world. Artists, songwriters, and painters are right to feel that their originality is being chewed up and spat out by an algorithm trained on their life’s work. I partly agree, as I will explain later on.

But here’s where the plot thickens.

Act I: The Crime Scene - Where AI Gets a Bad Rap

Gilligan’s concern comes from a truth we can’t deny: Generative AI often repurposes what it consumes.

Much like Walter White’s moral erosion began with “good intentions,” the creative side of AI began as a fascination with possibility until it spiraled into a replication race, resulting in what we now call AI slop. A lot of it, in fact, in dizzying numbers.

In this corner of the AI universe, plagiarism is a legitimate concern. Think of artistic theft, blurred authorship and the devaluation of originality. I agree in full that the creative industries deserve protection.

But as in any good drama, that’s only half the story.

Act II: The Counterpoint - Where AI Actually Works

As someone who builds systems for a living, I look through a different kind of camera lens, one pointed outside the world of entertainment and art.

In my world, AI is not a plagiarist but a productivity partner.

It’s the silent engine behind workflow automation, process optimization, customer service and accessibility improvements. Think of:

  • voice assistants that sound more natural
  • logistics systems that self-correct
  • analytics tools that see what humans miss.

These aren’t acts of theft. They’re acts of engineering.

In fact, even the non-creative side of the film industry, from production logistics and voice dubbing to post-editing and scheduling, stands to benefit from this technology.

And here’s the Hollywood twist:

The same AI accused of plagiarism is now the most powerful plagiarism detector in history.

AI tools can flag copied content, detect deepfakes and protect intellectual property at a scale no human could ever match.

Hmm. The villain and the hero share the same face. The difference lies entirely in who’s directing the story.

Act III: The Real Threat - When Governance Breaks First

Gilligan worries about the day AI becomes sentient, the moment machines “break bad” and humanity faces an ethical crisis of ownership and freedom.

I’m not as worried about that day. I’m worried about the days before it.

The days when:

  • AI systems are rushed into critical workflows without oversight
  • Regulatory frameworks lag behind corporate ambition
  • And “innovation” breaks the guardrails meant to protect us

This isn’t science fiction. It’s today’s reality.

We’ve already seen LLMs hallucinating facts, creators losing credit, and regulators chasing ghosts.

The danger isn’t AI breaking bad... it’s governance breaking first.

Act IV: The Cure - An AI-Centric IT Governance Model

That’s precisely why AI9GM exists: a governance meta-framework designed for the Age of AI.

Where others see AI as a disruptor, I treat AI9GM as a new layer in the enterprise stack, one that coexists with battle-tested frameworks like ITIL, COBIT and TOGAF, complementing rather than replacing them.

It’s the architectural backbone for AI maturity that aims to bring transparency and ethical control to the chaos of modern AI adoption.

Think of AI9GM as the “moral compass” before the meth lab, to borrow one more analogy from Gilligan. It’s the system that ensures our ambitions don’t outpace our principles.

Vince Gilligan’s nightmare is a world where AI breaks bad. My mission with AI9GM is to make sure governance doesn’t break first.

Epilogue: The Human Element

Walter White’s downfall wasn’t chemistry. It was hubris.

In AI, our chemistry is data and our hubris is speed.

If we want to avoid our own Albuquerque ending, we need frameworks, not fears. Governance, not guesswork.

AI doesn't have to break bad. But without governance, it just might.

About the Author

Florante Pascual is the creator of AI9GM, the AI-Centric IT Governance Model. He helps organizations modernize responsibly by aligning AI adoption with enterprise governance, strategy and ethics.
