I’ve been critical of how corporate America uses AI. Not because I oppose technology, but because I reject the idea that cutting jobs with generative and agentic AI is natural, necessary, or noble.

It’s not. It’s a choice.

Right now, AI is used to squeeze more productivity from fewer people. The same tools that generate slide decks and write boilerplate contracts also drive layoffs across marketing, HR, operations, and customer support. People are being cut loose without a plan, support, or even a thank you for helping to build the systems that are replacing them.

We’re treating workers like roadkill on the way to some imagined future. And I think that’s immoral.

If you’re a CEO, a board member, or a tech founder, you have a moral obligation to care for the people your tools displace. Period. There’s no glory in scaling if you have to shred the social contract to get there.

And if morality doesn’t move you, maybe this will: AI isn’t just disrupting jobs. It’s weakening our digital infrastructure. It’s opening doors to hackers, bad actors, and criminals.

Consider the case of Matthew Van Andel, a Disney employee who downloaded what he thought was a harmless AI image generator from GitHub. It was malware. That misstep gave a hacker access to everything, including his personal data, his family’s online accounts, and Disney’s internal Slack system. The hacker lived inside his digital life for months before launching a full-blown attack, leaking passwords, hijacking his kids’ Roblox accounts, sending threatening messages, and filling his inbox with harassment.

And what did Disney do?

They fired him.

AI at the Intersection of Progress and Profit

This is precisely the kind of breakdown JPMorgan Chase is warning about. In a recent open letter, the bank’s Chief Information Security Officer called the current state of enterprise software “a substantial vulnerability weakening the global economic system.” The message is clear: companies are racing to plug in new capabilities without upgrading their security architecture, creating massive risk at scale.

We’re building convenience on top of complexity. AI tools are being plugged directly into calendars, inboxes, Slack, and proprietary systems with barely any guardrails. Default settings are insecure. Authentication is a joke. Hackers know it. And they’re taking full advantage.
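To make that concrete, here is a minimal sketch of the kind of guardrail that is too often missing: a deny-by-default permission check an AI integration would have to pass before touching a calendar, inbox, or Slack workspace. Every name in it is hypothetical, not any real vendor’s API.

```python
# Minimal sketch (all names hypothetical): a deny-by-default gate in front of
# an AI agent's tool calls, instead of shipping with everything enabled.
from typing import Optional

ALLOWED_SCOPES = {"calendar:read"}  # grant only what the tool actually needs


def authorize_tool_call(scope: str, user_token: Optional[str]) -> bool:
    """Allow a tool call only if the caller is authenticated and the scope is allowlisted."""
    if not user_token:                # no anonymous access, even for convenience
        return False
    if scope not in ALLOWED_SCOPES:   # e.g. "slack:write" is refused unless explicitly granted
        return False
    return True


# With these defaults, an agent asking to post to Slack is simply refused:
print(authorize_tool_call("slack:write", user_token="abc123"))    # False
print(authorize_tool_call("calendar:read", user_token="abc123"))  # True
print(authorize_tool_call("calendar:read", user_token=None))      # False
```

Nothing exotic here, just an allowlist and an authentication check, which is exactly what insecure defaults skip.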

This is the Wild West.

But we act like this is progress.

We launch products before they’re tested, prioritize feature velocity over safety, and then act shocked when a hacker takes down a company through one compromised vendor or rogue API integration.

This isn’t just a technology issue. It’s also a leadership failure. A values failure. It’s all connected.

So What’s the Path Forward?

Let’s be clear: this isn’t just the government’s problem to solve. The companies building and deploying AI must take responsibility.

You want to play in the AI sandbox? Pay the entry fee. That means:

Universal Basic Income

Universal is the keyword. There is no means testing, no hoops to jump through, just a guaranteed, unconditional cash payment to every adult. The goal is not to reward people for doing nothing but to recognize that people create value outside formal employment—raising families, caring for elders, and building communities. UBI is the floor, not the ceiling. It’s the baseline for dignity in a disrupted economy.

Universal Healthcare

Comprehensive and high-quality. Preventive, dental, vision, pharma, and long-term care are included because people shouldn’t lose access to doctors, medicine, or mental health support just because their role was automated.

Free Retraining and Education

If you’re going to bulldoze industries, you must fund the rebuild. That means real learning opportunities without debt traps. The future of work must come with pathways, not pink slips.

Additional Financial Incentives for Caregiving and Community Service

Our economy runs on unpaid labor. The parents, volunteers, caregivers, and neighbors doing the invisible work that keeps everything moving deserve recognition and support.

Identity Protection

If a tool you build, or one distributed through your platform, ends up ruining lives, you’re responsible for helping the victims rebuild.

You want to lead the AI revolution?

Fine. Then start by putting people first.

Because if your version of innovation discards workers and weakens our collective safety, you’re not building the future. You’re burning the foundation.

There is no legitimate AI advancement without rebuilding our social contract. Progress without care is just cruelty at scale. And this time, we don’t get to say we didn’t see it coming.
