
A few weeks ago, I wrote about AI and how worried I am that we’re losing faith in human agency at exactly the same rate that machines are getting more powerful. At the time, it felt like a philosophical concern — something to watch. But as I see how companies and people are actually responding to AI, this is becoming urgent.
Let me be clear: I’m not anti-technology. Quite the opposite. The upside of AI — productivity, medical breakthroughs, creative tools, scientific advances — is one of the biggest opportunities we’ve ever had. And plenty of organizations are using it thoughtfully.
But there’s another story happening beneath the hype. And if we don’t talk about it and deal with it, we risk losing something that matters way more than quarterly earnings — something commercial, cultural, and fundamentally human.
AI has blown up business models that stood for centuries. Fast. Entire industries — consulting, advertising, content, marketing — aren’t just being disrupted. They’re being completely reinvented. But that’s not the problem. Business has always had cycles of creative destruction.
The problem is how companies are reacting.
In the panic to stay competitive, a lot of companies are handing over core pieces of their identity to the exact tools that were supposed to make them stand out. When your competitors are all using the same AI systems — same language models, same tools, same algorithms — you’re all accidentally becoming the same thing. Omnicom’s loss of DDB and its merger with IPG to form an “AI-led super network” come to mind.
I call this strategic convergence syndrome. Differentiation collapses through technological sameness.
When competing companies become indistinguishable — same tone, same language, same visual style — the thing that used to give you market position just evaporates. Customers stop choosing based on brand and start choosing based on convenience or price. And when that happens, loyalty becomes impossible.
This isn’t AI’s fault. This is what happens when the tool starts defining who you are instead of extending who you are. The means of production takes over the purpose.
But there’s something deeper happening that’s going to accelerate this sameness exponentially.
AI systems are increasingly training on content that AI already made. As synthetic content floods the internet — articles, images, videos, marketing copy, research — the training data for the next generation of models gets contaminated with outputs from the last generation. AI is basically learning from itself. And with each cycle, it moves further away from authentic human expression.
The implications are huge:
Diminishing returns. When AI just recycles variations of patterns it already knows, each new output is worth less. We’re not creating knowledge. We’re photocopying photocopies. Each generation loses fidelity.
Creative variance collapses. Human culture thrives on outliers. The unexpected connections. The weird perspectives. The idiosyncratic stuff that comes from actually living a life. As AI content takes over, these outliers get smoothed away into median outputs that sound like nobody.
Ideas narrow. If AI keeps reinforcing the same patterns instead of learning from genuinely new human thinking, we’re building an intellectual monoculture. The space for new ideas, new expression, and new innovations just shrinks.
This isn’t theoretical anymore. Research is already showing model collapse in AI systems trained mostly on synthetic data. The technology we built to expand what we can do might actually be limiting the diversity of thought and expression that drives innovation and culture.
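The “photocopying photocopies” dynamic can be made concrete with a toy simulation (this is an illustrative sketch, not a model of any real training pipeline): fit a simple Gaussian model to one generation’s outputs, sample new “synthetic” data from it, retrain on that, and repeat. The spread of the outputs steadily collapses, which is the statistical analogue of the fidelity loss described above.

```python
import random
import statistics

def train_on_own_output(samples, n):
    # "Train" a toy model on the previous generation's output by fitting
    # a Gaussian (mean + standard deviation), then sampling n new
    # synthetic points from that fitted model.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # maximum-likelihood estimate
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(7)
n = 50
generation = [random.gauss(0.0, 1.0) for _ in range(n)]  # gen 0: "human" data
initial_spread = statistics.pstdev(generation)

# Each model learns only from the previous model's synthetic output.
for _ in range(300):
    generation = train_on_own_output(generation, n)

final_spread = statistics.pstdev(generation)
print(f"spread of outputs: {initial_spread:.3f} -> {final_spread:.3f}")
```

Run it and the final spread comes out far smaller than the initial one: the outliers and variance that made generation zero interesting are gone, even though no single step looked destructive. That is the mechanism the model-collapse research points at.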
When businesses outsource their strategy, their creativity, their brand voice to systems that are themselves becoming less original with each cycle, they’re not just risking commoditization. They’re contributing to a system-wide loss of originality.
This same thing is happening to individuals at scale.
AI has democratized personal branding — synthetic images, voice cloning, content generation, endless optimization. But what should be amplification tools have become homogenization machines. The distinctive stuff that used to create real connection — your voice, your edge, your lived experience, your friction — is getting algorithmically smoothed into whatever performs best on platforms.
The result? We’re turning ourselves into idealized replicas. Curated toward perfection but stripped of the real textures and vulnerabilities that actually build trust and influence.
If companies are commoditizing themselves through tech conformity, individuals are doing the exact same thing trying to stay relevant. The tools meant to amplify what makes us unique might actually be erasing it.
This isn’t science fiction. Look at the global policy discussions. These concerns have already moved from theoretical to operational.
Governments and institutions are dealing with real risks right now:
Loss of cognitive autonomy. When algorithms decide what information you see, what you believe, what you trust — what’s left of independent judgment?
Critical thinking atrophies. In a world of instant answers, the ability to think deeply, analyze nuance, reason through complexity — it just weakens.
Synthetic relationships replace real ones. AI companions, AI influencers, AI emotional support — these aren’t augmenting human connection. They’re replacing it. With serious implications for mental health and social bonds.
Job loss anxiety. This isn’t hypothetical anymore. It’s reshaping politics, policy, social stability across developed countries.
Truth becomes unreliable. Deepfakes threaten to make seeing and hearing no longer believing. Trust — already fragile — collapses.
Shared reality fragments. When truth becomes infinitely flexible, democracies lose the common ground they need to function.
Even the people building AI are calling for governance. When the creators warn against unconstrained development, we should ask ourselves: What’s our responsibility as the people using it?
If this sounds familiar, it should.
Social media launched with incredible promise. Democratized connection. Amplified voices. Frictionless ideas. Creative expression at scale. My first book, We First, came out of that optimism about what we could build together.
But slowly, then suddenly, the incentives shifted. Algorithms optimized for engagement over wellbeing. We got an attention economy built on outrage, performance, and polarization instead of connection. And we’re already seeing this same pattern with AI platforms like ChatGPT introducing ads even for Pro users.
We went in believing these platforms would improve public discourse and human flourishing. We came out more divided, distracted, and weirdly isolated despite being more “connected” than ever.
The lesson isn’t that social media was evil. The lesson is that technology follows whatever incentives we build into it — not the ideals we hope for. When profit motives misalign with human welfare, the technology amplifies that misalignment at massive scale.
The same dynamic is happening with AI now. Except unlike social media’s gradual rollout, AI is being deployed faster and more pervasively by orders of magnitude.
If we keep going like this without intervention, we’re facing costs that efficiency gains can’t offset.
For companies: commoditization through strategic convergence, the collapse of brand differentiation, and loyalty that evaporates into price and convenience.
For individuals: voices algorithmically smoothed into sameness, atrophied judgment and critical thinking, and influence built on idealized replicas instead of trust.
For society: an intellectual monoculture, a fragmented shared reality, and the erosion of the independent thinking democracies need to function.
This is not AI’s inevitable path. This is what happens when we prioritize speed, scale, and optimization over meaning, identity, and human flourishing. It’s a design problem, not a technology problem.
AI won’t decide our values. It will amplify whatever values we build into how we use it.
We’re at a choice point. Not between progress and resistance, but between conscious leadership and just drifting. And that requires us to fundamentally reorient:
Use AI to elevate what makes us human, not replace it.
Companies can use AI without eroding their brand identity. Creators can use AI without surrendering their authentic voice. Leaders can use data without abandoning judgment, wisdom, and purpose.
The technology is neutral. How we deploy it is not.
In our rush for relevance, revenue, and reach, there’s something else we need to focus on: preserving what makes us distinctly valuable.
As organizations, creators, communities, human beings — the question isn’t whether AI will reshape everything. It’s already doing that and accelerating.
The real question is whether we let the speed of technological change outpace the depth of our philosophical and strategic thinking about it.
If we give up our distinctiveness individually, that’s a significant loss. If we give it up collectively, the consequence goes way beyond business impact. It becomes civilizational.
The future needs AI’s capabilities. But it equally needs us — fully human, fully distinct, fully present, and strategically intentional.
We can’t erase ourselves chasing efficiency. The competitive advantage of the next era belongs to those who figure out how to use AI to amplify human distinctiveness instead of replacing it, and who recognize that rich human insight, not the volume of synthetic output, is what actually drives meaningful innovation.
That’s not just philosophy. It’s strategy. And it’s the leadership challenge of our time.
Let’s build that future together. Consciously. Deliberately. Without losing what makes us irreplaceable.