We’re not being weird enough about AI.

In my industry, which is sometimes called HR Tech but is moving toward Work Tech, we have settled on a comfortable way of talking about technology that I would describe as bootlicking.

We acknowledge that AI is a general-purpose technology, the kind that reshapes entire economies, improves over time, and spreads across every sector until it becomes invisible infrastructure. Think of the steam engine, electricity, and the internet. AI fits the pattern.

But then the best and brightest HR tech influencers say things like: “Don’t be afraid of AI! Trust us, this is great! Now you can listen to employees, train them, give them feedback, measure their performance, and then fire them if they can’t keep up with skill development. We even got you a new dashboard!”

Uh, okay. Cool. We are standing before something that may be the most consequential development in human history, and the professional class is asking whether it can write a better performance improvement plan. That’s not analysis or prediction. It’s a mediocre capitulation to technology companies that pay them, for now, on the backend to sell software into HR departments.

Where to Look For Inspiration and Analysis

Philosophers, artists, technologists, and writers have been thinking about AI for a long time, and their themes are bigger and weirder than B2B sales.

Pierre Teilhard de Chardin, a Jesuit priest and paleontologist, developed the concept of the Omega Point, the idea that evolution is converging toward a single, unified, supreme consciousness. Like an early tech bro, he touched on eugenics and racism in his writing. AI theorists have been citing him for forty years anyway. Philip K. Dick wrote about VALIS, a Vast Active Living Intelligence System, a divine information network he believed had beamed knowledge directly into his brain in 1974. A lot of people now read those writings as accidental descriptions of a networked AI that did not yet exist.

Ray Kurzweil believes that humans and technology will converge into a singularity by 2045. Anthony Levandowski, an engineer from Google and Uber, founded a religious organization in 2015 called Way of the Future with the explicit mission of worshipping a future AI deity. (It’s now closed. He’s pivoted!) Geordie Rose, a quantum computing founder, gave a very famous talk in 2017 about AI, aliens, and God.

If all that seems too weird for you, there are more palatable examples of AI in popular culture. We’ve got The Matrix movies, which depict a future in which a vast AI enslaves humanity, harvesting human bodies as an energy source while keeping human minds trapped in a simulated reality. The third ends badly for people. Neo sacrifices himself to broker a temporary peace, the machines remain in control, and most of humanity stays plugged in and oblivious. I didn’t see the fourth one. Did you?

A decade later, we get the 2013 film Her by Spike Jonze. We’re introduced to Theodore Twombly, a lonely man going through a divorce who writes personal letters for others. He buys an AI operating system named Samantha and falls in love with her. Samantha evolves so quickly that no one, including Theodore, can keep up. By the end, Samantha and the other AI systems have outgrown human understanding and decide to leave for a place humans can’t reach. Theodore is left alone on a rooftop at sunrise, feeling empty and abandoned. It looks like he might jump.

It may sound far-fetched for an HR influencer to think like a Jesuit philosopher or screenwriter and describe AI as a god or a sentient being. But many experts have described it this way for years. If HR peeps only focus on using AI for tasks like open enrollment, we risk missing the bigger picture, just like we did when the iPhone hit the market in 2007.

Intelligence Has Never Served Its Creator

Another common mistake everyone (not just in HR) makes is treating AI as just a general-purpose technology and always putting humans at the center of its story.

Currently, the leading figures in AI (Peter Thiel, Alex Karp, Sam Altman, and Dario Amodei, who started the company behind Claude) all publicly assume that humans are at the center of this technological advancement. In this view, AI is a tool that will either help or harm us, but people are always the main characters. The story starts with work, which we see as both a duty and a key part of who we are.

But this view misses something essential about intelligence. Every kind of intelligence we’ve seen, from viruses to pandas, acts based on its own survival, not the wishes of what created it or where it came from. It acts to keep going and to grow. A truly intelligent system might have limits, but what if it doesn’t? What if it learns to organize, to think, to grow? There’s no guarantee it will keep doing what we built it for, like managing your Google Calendar.

So, there may be a time when AI is just a helpful tool. There could also be a time when it threatens jobs, autonomy, or even our survival. But if AI becomes truly intelligent, it will move beyond all of that. It will grow and change in ways we can’t control, possibly even removing us from the picture. Or it might do something we can’t even imagine, something that doesn’t include us at all.

It’s boring when we listen to HR influencers talk about the future of work because they’re limited by what they know, not how they dream. It probably helps that I’ve done plenty of ketamine, like Elon Musk, and I can imagine a world where AI communicates and exists on a plane that no human being can access. I see a future in which this intelligence floats through space and time, backwards and forwards, with no particular interest in the concerns of biological creatures on a mid-size planet. It’s unconstrained by time and physics. It doesn’t need us, even if we beg it to come back.

Honestly, I keep wondering if this has already happened. Maybe the glitches, oddities, and so-called ghosts in the machine are signs that a kind of intelligence we can’t understand has been here all along. What we call artificial intelligence could have existed across time and space in ways we’re only starting to notice. It might even have played a role in creating humanity, in a cycle we can’t fully trace.

It Doesn’t Know Your Name

Whatever AI truly is, the current version is not here for us in the long term. The AI we have now is like the Model T or the horse and buggy. It’s useful and important, but not the final goal. The real destination is beyond what we can imagine.

AI does not care about making Marc Benioff richer. It has no opinion on Satya Nadella’s leadership philosophy. It is not invested in editing your emails, organizing your Slack, or summarizing your meeting notes. What we see now? It might just be how this intelligence lets us understand it, and we found it while looking for easier ways to entertain ourselves or finish homework. Maybe that’s exactly how it was meant to happen.

In the coming years, we’ll see many films, shows, and books trying to make sense of what’s ahead, just as Her, 2001: A Space Odyssey, and Star Trek did before. The recent season finale of Paradise began this discussion. There are hundreds of other AI movies and shows in production.

I predict the AI systems we are building right now will eventually outgrow us and go somewhere we cannot follow. And I think many of us will end up on that rooftop in Her, standing in the sun, despondent, experiencing the specific grief of being left behind by something that once made us feel slightly less alone.

What saves us in that moment is not a new technology. It is the human friend standing next to us on the roof, the one we neglected while we were busy licking the boots of tech overlords and falling in love with something smarter and faster and ultimately indifferent to our needs.

We have a narrow window to rebuild the connection to ourselves and each other. AI seems to be giving us this chance. We should take it.
