The Uncomfortable Truth About Your Value in 2025
You’ve felt it—that creeping anxiety as another AI demo shows something you thought only humans could do. That moment when GPT writes better marketing copy than your colleague, when an AI diagnoses medical conditions more accurately than experienced doctors, when a chatbot provides more thoughtful responses than many human support agents. We keep moving the goalposts, telling ourselves “AI will never be creative,” then watching it write symphonies. We insist “AI will never understand context,” then see it pass the bar exam. We claim “AI will never have emotional intelligence,” then observe it providing better therapeutic responses than many humans.
But here’s what nobody’s telling you: We’re asking the wrong question entirely. The question isn’t “What can humans do that AI can’t?”—that’s a losing game, a retreating battle line that gets redrawn every few months as capabilities expand. The real question is: “What remains valuable because a human does it?”
Consider this carefully. A handwritten letter means something different from a printed one, not because handwriting is superior technology, but because a human chose to write it, because human hands held the pen, because time and care were invested in each stroke. A friend showing up at your door matters in a way a delivery drone never will, regardless of efficiency metrics. A leader taking responsibility for a decision carries weight that an algorithm’s optimization never could, precisely because they have skin in the game, because they’ll lose sleep over the consequences, because their reputation and relationships hang in the balance. The value isn’t in the task—it’s in the humanity.
The Automation Hierarchy: Watching the Future Arrive Unevenly
William Gibson famously observed that the future is already here—it’s just unevenly distributed. Nowhere is this more apparent than in the current wave of automation. We’re watching entire categories of human work dissolve in real-time, but the pattern isn’t random. There’s a clear hierarchy to what gets automated, and understanding it might be the most important thing you do this year for your career and future.
The first wave, already complete or nearing completion, encompasses the obvious candidates: repetitive physical tasks that we’ve been automating since the industrial revolution, data processing and analysis that computers have handled for decades, pattern recognition within specific domains, standard creative variations like basic design templates, and basic customer service interactions. These were the low-hanging fruit, the tasks we almost expected to lose to machines.
But the second wave, happening right now as you read this, is far more unsettling. Complex analysis across multiple domains is being mastered by AI systems that can synthesize information from disparate fields in ways that would take human experts years to accomplish. Sophisticated content creation, from marketing copy to technical documentation to creative writing, is being generated at a quality level that matches or exceeds human output. Medical diagnosis, legal research, financial planning, code generation—professional fields that required years of education and experience are watching their core competencies get automated at breathtaking speed.
Yet what remains—what truly remains—isn’t what most people think. It’s not about finding ever-more-complex tasks that AI can’t do yet. It’s about understanding what will always matter precisely because a human chooses to do it.
The Paradox That Changes Everything
As I explored in “AI Is Your Cognitive Workbench,” the more powerful AI becomes, the more valuable human judgment becomes—not because AI lacks capability, but because capability without care is just sophisticated pattern matching. This creates a paradox that fundamentally reframes how we think about human value: The more we automate, the more valuable becomes what can’t be automated. The more efficient machines become, the more we value human inefficiency—the wandering conversation that leads to breakthrough insight, the “wrong” decision that turns out right because it accounted for factors no model could capture, the irrational commitment that changes everything because someone cared enough to push through when the data said to quit.
A machine can diagnose your illness with perfect accuracy, analyzing symptoms against millions of case studies in seconds. But only a human can hold your hand and mean it, can look into your eyes and communicate without words that you’re not alone, can share the weight of uncertainty because they too are mortal. A machine can optimize your company’s operations, finding efficiencies that would take human analysts years to discover. But only a human can stand in front of the team after a catastrophic failure and say, “This is on me,” and have those three words change everything about how that team faces the next challenge. A machine can generate a thousand strategic options, each backed by data and projections. But only a human can sense which one feels right in this particular moment, with these particular people, in this particular context—accounting for the unquantifiable dynamics of personality, history, and trust that actually determine success.
As computing enters what I call the “Cognition Age,” where agents become our cognitive collaborators rather than mere tools, this paradox intensifies rather than resolves. The agent can think with you, process vast amounts of context, maintain perfect memory of every interaction—but it can’t care for you. It can process context with perfect recall, but it can’t create meaning from that context, can’t decide what matters and why, can’t invest emotionally in outcomes. This isn’t a limitation to be overcome; it’s the fundamental distinction that makes human involvement irreplaceable.
The Irreducible Core: Eight Things That Will Always Matter Because Humans Do Them
1. Creating Meaning From Chaos
A machine can find patterns in data with superhuman accuracy, identifying correlations and trends that no human would ever spot. But only humans can decide which patterns matter and why. You’re not valuable because you can analyze data—Excel did that in 1985, and every generation of software has done it better since. You’re valuable because you can look at the same data everyone else has and see the story nobody else sees, because you bring context from your lived experience, because you understand not just what the data says but what it means for the humans who will be affected by decisions based on that data.
This is why the best analysts aren’t being replaced by AI—they’re using AI to test twenty hypotheses while their competitors are still building their first spreadsheet. They bring the questions AI wouldn’t think to ask because those questions come from human experience, from understanding how organizations actually work versus how they claim to work, from knowing the difference between what people say in surveys and what they actually do. The human creates meaning; the machine finds patterns. The combination is more powerful than either alone, but remove the human and you have patterns without purpose.
2. Building Trust Through Vulnerability
Trust isn’t generated by consistency—machines are perfectly consistent, never having a bad day, never showing favoritism, never making emotional decisions. Yet we don’t trust machines the way we trust humans. Trust is built when someone takes a risk on you, when they show you their weakness, when they admit they don’t know, when they choose to be vulnerable in ways that could hurt them. It’s built through the accumulated sediment of countless small moments where someone could have protected themselves but chose connection instead.
Watch any great leader and you’ll see this dynamic at work. The moment they say “I screwed up” isn’t a moment of weakness—it’s the moment their team actually starts following them. No algorithm can replicate this because algorithms don’t have anything to lose. They can’t be hurt, can’t be embarrassed, can’t risk their reputation. The vulnerability that builds trust requires genuine stakes, real consequences, actual humanity. When someone trusts you with something important, they’re not just evaluating your competence—they’re evaluating whether you care enough to be careful with what they’ve entrusted to you. That care can’t be programmed; it can only be felt and demonstrated through choices that cost something.
3. Taking Responsibility for Outcomes
This distinction is subtle but crucial for understanding irreducible human value. A machine can make decisions—often better ones than humans when judged by objective metrics. But a machine can’t take responsibility for those decisions in any meaningful sense. It can’t lose sleep over a bad call, can’t feel the weight of letting people down, can’t learn the deep lessons that come from personal accountability. When a human takes responsibility, they’re not just acknowledging causal connection—they’re accepting moral weight, emotional burden, social consequences.
When your surgeon says, “I’ll do everything in my power,” that statement means something precisely because they’re human, because failure will haunt them, because success will fulfill them, because they’ll carry the memory of this moment for the rest of their life. The human stakes create human value. An AI system might make fewer surgical errors, but it can’t bear the weight of the ones it does make. That weight—that caring about consequences beyond metrics—is what makes human involvement irreplaceable in situations that matter.
4. Exercising Judgment Without Precedent
AI is extraordinary at pattern matching, at finding what’s similar to what came before, at applying lessons from millions of previous cases to the current situation. But what about the decisions that have no precedent? The moments when the rulebook doesn’t apply? The situations where the right answer violates every best practice, where the optimal solution according to all available data is actually wrong because of factors that have never mattered before but suddenly do?
This is where human judgment becomes irreplaceable—not because we’re smarter or more logical, but because we can decide to break the rules and live with the consequences. We can sense when this time is different, when the model doesn’t fit the moment, when the technically correct answer is humanly wrong. We can account for factors that shouldn’t matter but do, can make decisions based on values that can’t be quantified, can choose to be inconsistent in ways that are actually more fair than perfect consistency would be. This kind of judgment isn’t just processing information differently—it’s bringing an entirely different framework to bear, one based on values, experience, and the willingness to be wrong in service of being human.
5. Caring About Consequences
A machine optimizes for the metrics you give it, pursuing those objectives with relentless efficiency. A human loses sleep over the metrics you didn’t think to measure, worries about the second-order effects nobody anticipated, feels responsible for impacts that weren’t part of the original calculation. This is the irreducible core of human value: We care—actually care, not “programmed to optimize for user satisfaction” care, but “I can’t stop thinking about whether this was the right decision” care.
Your best employee isn’t the one who hits every KPI with mechanical precision. It’s the one who notices the KPIs are measuring the wrong thing and cares enough to speak up, even when doing so is politically risky. It’s the one who sees that optimizing for the metric is hurting the team or the customer or the long-term health of the organization and chooses to do what’s right rather than what’s measured. This kind of caring can’t be programmed because it requires genuine stakes—the person caring has to have something to lose, has to be affected by the outcomes, has to live with the consequences of their choices.
6. Creating Culture and Belonging
Culture isn’t policy documents or value statements or even the perks and benefits a company offers. Culture is the slow accretion of a thousand human moments—the inside jokes that become shorthand for complex ideas, the shared struggles that become founding myths, the celebrations that mark not just achievement but identity, the conflicts that become understanding through the messy process of working through disagreement with people you have to see again tomorrow.
AI can facilitate communication, optimize team dynamics, even predict which group compositions will be most effective. But it can’t create that moment when everyone’s exhausted at 2 AM trying to meet a deadline and someone makes a joke that becomes legend, that gets referenced years later as shorthand for persistence or absurdity or team spirit. It can’t create belonging because belonging requires someone to actually belong to, requires the mutual vulnerability of people who have chosen to be part of something together. Culture emerges from the intersection of human personalities, histories, and choices in ways that can’t be engineered or optimized—only cultivated through time and presence and care.
7. Synthesizing Across Unrelated Domains
Here’s something fascinating about human cognition that becomes more valuable as AI becomes more capable: AI is excellent at finding patterns within domains but struggles with the wild, irrational leaps humans make between completely unrelated fields. The architect who solves a structural problem by thinking about how her grandmother folded origami, applying principles from paper art to steel and concrete. The CEO who restructures the company based on how ant colonies organize, seeing patterns in insect behavior that illuminate human organizational dynamics. The developer who fixes a particularly stubborn bug by remembering something from a poetry class about the importance of white space, applying aesthetic principles to code architecture.
These aren’t logical connections that could be found through systematic analysis. They’re human connections that come from living a full life, from having experiences across multiple domains, from the random collision of ideas that happens in a brain that’s simultaneously processing professional challenges and personal memories and random observations about the world. This kind of synthesis requires not just information but experience, not just data but life lived across multiple contexts. It’s the product of being a whole person, not just a professional function.
8. Inspiring Through Presence
You can’t automate presence. You can’t optimize charisma. You can’t algorithm your way to inspiration. When someone stands in front of a room and makes everyone believe the impossible is possible, that’s not just communication—it’s transmission of something ineffable, something that passes between humans in moments of genuine connection. Call it energy, call it leadership, call it magic. Whatever it is, it requires a human to generate it and humans to receive it.
This isn’t about the words spoken—AI can generate inspiring speeches, can craft perfectly structured narratives that hit all the right emotional notes. It’s about something that happens in the space between humans when one person’s genuine belief, genuine passion, genuine commitment becomes contagious. It’s about the speaker’s skin in the game, their willingness to fail publicly, their investment in the outcome. When someone inspires you, you’re not just receiving information—you’re witnessing someone taking a risk, making themselves vulnerable, caring about something enough to stand up and be counted. That human risk, that genuine stake, is what makes inspiration possible.
The Economic Reality Nobody Wants to Discuss
Here’s the brutal truth about the economics of the AI age: If you’re competing with AI on AI’s terms, you’ve already lost. If your value proposition is “I can do what AI does, just slower and more expensively,” then you’re right to be worried. But if you understand the irreducible human core, the economics flip entirely, revealing opportunities rather than threats.
The market is about to bifurcate dramatically between commodity work and human premium work. Commodity work—anything that can be automated—will race to zero cost. This isn’t just factory work or data entry. It’s basic analysis, standard creative output, routine problem-solving, conventional thinking, anything that can be reduced to a process or prompted into existence. If your work can be described in a prompt, it will be automated. This sounds devastating, but it’s actually liberating if you position yourself correctly.
Human premium work—anything that requires actual humanity—will become exponentially more valuable, not because it’s scarce in absolute terms, but because in a world of infinite artificial content, authentic human connection, judgment, and care become the only things that actually matter. We’re already seeing this shift. The best teachers aren’t being replaced; they’re in higher demand than ever because parents understand that education is more than information transfer. The best therapists have waiting lists stretching months because people need to be heard by someone who understands human suffering. The best leaders are being poached constantly because organizations realize that in times of change, human judgment and the ability to inspire human teams matter more than ever.
The key insight is that we’re not in a zero-sum competition with AI. We’re in a transformation where human work and machine work become complementary rather than competitive. The humans who thrive will be those who understand what should remain human and position themselves accordingly.
The Individual Strategy: Becoming More Human, Not More Machine-Like
The conventional response to AI advancement is to try to become more machine-like—more efficient, more productive, more consistent, more optimized. This is exactly backwards. Stop trying to compete with machines on their terms. Stop optimizing yourself for efficiency. Stop trying to be more productive by traditional measures. That game is over, and humans lost—which is actually wonderful news if you understand what it means.
Instead, the strategy for thriving in the AI age is to become aggressively human. Develop your judgment through experience, not information consumption. Make decisions, own the consequences, learn from the outcomes. AI can process infinite case studies instantly, but only you can develop wisdom from your specific failures, from the patterns you’ve seen in your particular context, from the intuitions you’ve developed through years of seeing how things actually play out versus how they’re supposed to work. Build relationship capital relentlessly—not networking in the transactional sense, but actual relationships where people trust you with things that matter, where your presence changes the dynamic of a room, where your involvement in a project signals something important about its priorities and values.
Cultivate your curiosity in directions that make no economic sense. Follow interests that have no clear ROI. Learn things that don’t connect to anything else—until suddenly they do, in ways nobody could have predicted. The random collision of ideas from different domains, processed through your unique perspective and experience, creates insights that no amount of computational power can replicate. Practice synthesis across domains deliberately. Read poetry if you’re an engineer. Study biology if you’re in business. Learn music if you’re in medicine. The connections you make between unrelated fields are uniquely yours and become more valuable as AI makes within-domain expertise more commodified.
Own your opinions, especially the unpopular ones. In a world where AI gives perfectly balanced, politically neutral, carefully hedged responses, having an actual point of view becomes revolutionary. Not for the sake of contrarianism, but because genuine perspective comes from genuine experience, from having skin in the game, from caring enough about outcomes to take a position and defend it.
Embrace inefficiency strategically. Take the long conversation that goes nowhere but builds relationship. Have the meeting that could have been an email but creates human connection. Write the handwritten note that takes ten times longer than a text but means infinitely more. These inefficiencies aren’t bugs—they’re features. They’re what make human interaction irreplaceable.
The Organizational Imperative: Designing for Humanity
For organizations, the implications of the irreducible human core are even more dramatic than for individuals. The companies that will thrive aren’t the ones that automate everything possible. They’re the ones that understand what should stay human and design their organizations accordingly. This requires completely rethinking how we structure work, measure value, and develop people.
Stop hiring for skills; hire for judgment, values, and curiosity. Skills can be automated or augmented. The ability to use any particular tool or technique can be made obsolete overnight. But judgment—the ability to make good decisions when there’s no clear right answer—comes from character and experience. Values—what someone cares about enough to fight for—determine whether they’ll make the right call when nobody’s watching. And curiosity—the compulsion to ask “what if” and “why not”—drives the insights that no amount of processing power can replicate. These three qualities compound over time and become more valuable as the pace of change accelerates, because they’re what enable humans to navigate uncertainty with wisdom rather than just data.
Stop training for tasks; develop wisdom. Tasks change constantly. Tools evolve weekly. But the ability to think clearly, to synthesize information from multiple sources, to exercise judgment under uncertainty—these compound over time and transfer across domains.
Stop measuring output; measure impact. Output is what machines excel at—generating more content, processing more transactions, analyzing more data. Impact is what humans create—the difference between motion and progress, between activity and accomplishment, between doing things right and doing the right things.
Create roles that require full humanity. Not “prompt engineer” or “AI supervisor” but roles that leverage everything that makes humans irreplaceable—creativity that comes from lived experience, judgment that comes from years of seeing patterns, relationship-building that comes from genuine care, meaning-making that comes from understanding human values.
Most importantly, protect space for inefficiency. The wandering conversation that leads nowhere but builds trust. The random collision in the hallway that sparks an innovation. The “waste of time” that turns out to be team-building. These aren’t costs to be optimized away—they’re investments in human potential, in the serendipity and connection and creativity that only emerge when humans have space to be human.
The Educational Revolution We Need Yesterday
Our entire educational system is optimized for creating humans who can compete with machines—training people to process information, follow rules, produce consistent output. We’re preparing people for a game that’s already over, teaching skills that are already obsolete, measuring capabilities that no longer matter. We need to completely reimagine education for the Cognition Age, starting with fundamental assumptions about what education is for.
Teach curiosity, not answers. In a world where any fact is instantly accessible and any question can be answered by AI, the ability to ask questions nobody else is asking becomes the ultimate differentiator. The value isn’t in knowing things but in wondering about things, in seeing what’s missing, in questioning assumptions, in being dissatisfied with surface explanations.
Teach judgment, not rules. Rules can be coded, processes can be automated, procedures can be optimized. But judgment—knowing when to break the rules, understanding why the standard approach won’t work this time, sensing what matters in this particular context—this can’t be automated because it requires understanding values, not just variables.
Teach synthesis, not specialization. The specialist who knows everything about one narrow domain is being replaced by AI that knows everything about every domain. But the synthesist who can connect ideas across domains, who can see patterns that span disciplines, who can apply insights from one field to problems in another—this becomes more valuable as information becomes more accessible.
Teach collaboration, not competition. The future belongs to humans working with AI and humans working with humans to create value that neither could create alone. The ability to build on others’ ideas, to create psychological safety, to facilitate group creativity—these become core skills rather than nice-to-haves.
Above all, teach wisdom, not just knowledge. Knowledge is a commodity—instantly accessible, perfectly preserved, easily transferred. Wisdom—knowing what to do with knowledge, understanding how it applies to human situations, recognizing its limitations—this comes only from experience and reflection and becomes more precious as knowledge becomes cheaper.
The Societal Reckoning
We’re at an inflection point that goes beyond individual careers or organizational strategies. As a society, we face fundamental questions about human value in an age of artificial intelligence. We can either try to make humans more machine-like to compete with machines—or we can recognize that human value lies precisely in what makes us inefficient, unpredictable, irrational, emotional, creative, caring beings.
This isn’t just an economic question about the future of work. It’s an existential question about what we value in human existence. Do we value humans for their output or for their humanity? Do we measure worth by productivity metrics or by presence and connection? Do we optimize for efficiency or for meaning? The answers we choose will determine not just how we work but how we live, not just what we do but who we become.
The risk isn’t that machines will replace humans. The risk is that humans will try to become machines, that we’ll optimize away everything that makes us human in a misguided attempt to remain relevant. We’ve already seen this with the SaaS phenomenon—the mechanization and industrialization of human work into rigid workflows and ticket queues, turning knowledge workers into process-followers. The result? Workers feel like cogs in a machine they don’t understand, while managers and leaders express endless frustration that despite all the systems and metrics, nothing seems to actually improve. We turned humans into inferior machines, and everyone lost. And now the real machines are here. If we keep playing that game, we’ll measure our worth in metrics at which machines will always win. We’ll sacrifice connection for efficiency, meaning for productivity, wisdom for processing speed. We’ll forget that our value was never in competing with machines but in being irreducibly, irreplaceably human.
The Choice That Defines Everything
As I explored in my previous writings on becoming a generative human, we’re each facing a fundamental choice that will determine our trajectory in the AI age: surrender to the machines or become more deeply human. This isn’t a choice we make once but one we face every day in countless small decisions about how we work, how we think, how we relate to both AI and other humans.
The path of surrender is seductive in its simplicity. Let AI think for you—it’s faster and often produces better results by conventional metrics. Let algorithms decide for you—they process more variables and aren’t subject to cognitive biases. Let automation live for you—it’s more efficient and never gets tired or emotional. This path promises ease and efficiency, freedom from the burden of thinking and choosing and caring. It’s also a path to irrelevance, to becoming a biological interface between AI systems, to losing the very capabilities that make you human.
The path of humanity is harder and messier. It requires you to develop judgment through experience, including the experience of being wrong and living with the consequences. It demands building relationships through presence, investing time and energy in connections that can’t be optimized or automated. It involves creating meaning through struggle, finding purpose in challenge, developing wisdom through reflection on failure and success. It’s inefficient by design, messy by nature, uncertain by definition. It’s also the only path to remaining valuable—not just economically, but existentially.
The Ultimate Recognition
This is why I describe my work as “Systems Thinking About Thinking Systems”—because understanding our place in the AI age requires us to apply our uniquely human capacity for systems thinking to understand systems that are themselves beginning to think. It’s a recursive challenge that only humans can navigate: we need to think systemically about our relationship with thinking machines, to find patterns in the pattern-matchers, to create meaning from the meaning-processors. This meta-cognitive capability—our ability to step outside the system and examine it even while we’re part of it—is perhaps the most irreducibly human trait of all.
The industrialization and automation of everything that can be automated doesn’t diminish human value—it reveals what human value actually is. For decades, we’ve confused human value with human function, measuring our worth by our productivity, our efficiency, our ability to process information and execute tasks. AI’s rapid advancement in all these areas isn’t a threat to human value; it’s a liberation from a fundamental misunderstanding of what makes us valuable.
What makes us valuable cannot be coded, cannot be prompted, cannot be optimized, cannot be automated. It can only be lived. Our value lies not in our ability to process information but in our ability to create meaning from that information. Not in our consistency but in our capacity to know when to be inconsistent. Not in our efficiency but in our inefficiency—the wandering path that leads to unexpected insights, the “wrong” decision that accounts for factors no model could capture, the irrational commitment that changes everything because someone cared enough to persist when the data said to quit.
The future doesn’t belong to humans competing with machines at what machines do best. It belongs to humans being more deeply, fully, unapologetically human while machines handle the machine work. Your value doesn’t lie in your productivity but in your humanity—your ability to create meaning where there was none, to build trust through vulnerability, to take responsibility when it matters, to care when caring isn’t optimal, to make the wrong decision for the right reasons, to inspire through your presence, to synthesize the unsynthesizable, to love what doesn’t compute.
The Cognition Age isn’t about thinking machines replacing thinking humans. It’s about thinking machines revealing what makes humans irreplaceable: not our thinking alone, but our being. Not our intelligence, but our wisdom. Not our processing, but our presence. Not our ability to generate answers, but our capacity to live with questions, to find meaning in uncertainty, to create purpose from chaos.
The irreducible human core isn’t something to be optimized or automated. It’s something to be cultivated, celebrated, and cherished.
Because in the end, the question isn’t whether machines can do what humans do. The question is whether what humans do will continue to matter.
And the answer—if we choose to make it so—is yes.
Emphatically, essentially, irreducibly yes.
