“Be More Human” Is Bad Advice

by Laurie Ruettimann

Everyone has a hot take about AI and your career. Most of them are wrong.

The current favorite is “be more human.” Thought leaders, career coaches, and the LinkedIn content machine say the path to surviving the AI economy is to lean into your humanity. Ask better questions. Develop taste. Show empathy. Build integrity.

It sounds right. But it’s not right.

Capitalism has never compensated humanity as a trait. Not once. People who built financial security did so by controlling a scarce resource, pricing it correctly, and refusing to give it away. Warmth and relatability are not line items on a balance sheet. They never were.

Telling workers to “be more human” in response to AI is not a strategy. It is criminal misdirection.

WWDPD?

I admire Dan Pink more than just about any other management thought leader, but I was horrified by his recent advice on how to differentiate yourself at work.

  • He says ask better questions, but answer engines take terrible questions and return nuanced, well-organized answers.
  • And he also says to develop taste, but taste requires leisure and exposure, two things not available to most workers right now, especially with GDP growth at 0.7% in the fourth quarter last year and professional unemployment at record highs.
  • Then he advises us to ship at 80% and iterate, which is jargon I never want to hear again. Did I survive a pandemic for that? Plus, the advice was written for founders and engineers with venture capital backing from 2013 to 2024. When a worker does anything at 80% and it fails, she gets a performance improvement plan. The AI that failed gets a model update. The cost of human failure and the cost of AI failure are not the same. They are not even close.
  • Pink says to get better at composition, which means assembling ideas into something coherent and meaningful. But that is exactly what AI does fast and well. He says to develop allocation skills by working alongside AI to decide which resources go where. That happens at the executive level, if at all. The HR coordinator, the marketing analyst, and the sales rep doing the actual work have no allocation authority. None.
  • And he says to develop integrity, because AI has no moral compass. But let me ask you this: Does your board have a moral compass? How about your CEO? A system programmed to make better strategic decisions under constraints is functioning exactly like a moral compass, whether we call it that or not.

I would rather take my chances with AI than with a man who has a weak chin and a chip on his shoulder.

The Deeper Problem with Career Advice

The deeper problem with Dan Pink’s advice, and so many others, is that it is built on a 20th- and early 21st-century assumption: that humans are fundamentally, meaningfully different from machines. That assumption is getting harder to defend.

Agents, bots, and robots are not replacing human creativity only in small pockets at tech companies on the coasts. They are doing it at scale, at speed, and at a cost that changes the math entirely. Across all industries. All over the world. The differentiation gap between a Gen Z graduate and a bot is smaller than we thought. Much smaller.

This is where it gets uncomfortable, my friends. When AI can do your job faster and better, many people experience mental vertigo. If your identity is anchored in your ability to think, produce, or solve, and a machine now does that better, the advice to “be more human” does not help. How much more human can you get? It makes the vertigo worse.

The proposed solution, finding meaning through authentic human connection and creative expression, is real in the abstract and nearly impossible to monetize in the specific. I might like you and respect your taste, but when was the last time “taste” was a line item in a pragmatic corporate budget?

Your Work is Not Your Worth

Here’s something I say regularly: your work is not your worth. You were not born to have a job title. You were born with a soul. It’s your job to learn, grow, survive, and thrive. There is enough wealth generated by AI to pay every person a dividend to live comfortably and strive, or not, based on their own goals. That is not a fantasy. It is a policy choice we are not yet brave enough to make. Capitalism did not validate your humanity before AI, and it will not validate your humanity after AI. You are the only person who can determine your worth.

Stop listening to people who tell you that you can outcompete a machine by being more relatable. And stop scrolling LinkedIn looking for the answer, because the people posting advice are selling something, and none of them have solved this either.

The honest answer is that there is no good immediate answer. The economic transition we are living through is real, fast, and outpacing every framework anyone has built to explain it. What you can do right now is push for the structural changes that actually matter: universal health care so your survival is not tied to your employer, free education so you can keep learning without going broke, and an AI dividend paid to every citizen by the companies profiting from the automation that displaced you. Those are not utopian ideas. They are the floor that makes everything else possible.

Until then, develop skills the market actually pays for, price them correctly, and refuse to perform unpaid emotional labor because a thought leader told you it was your competitive edge.

There are no shortcuts here. Anyone telling you otherwise is monetizing your anxiety.