What AI Must Learn From Sustainability’s Clumsy Storytelling

February 12, 2026
(© We First, Inc.)
[We’re all human beings sitting around a campfire telling stories. Credit: Kevin Erdvig]

When I talk with AI leaders today, I hear echoes of conversations I had with sustainability executives fifteen years ago. The same urgency. The same conviction that what they’re building matters. And often, the same stumbling over how to tell the story in a way that invites people in rather than pushes them away.

Sustainability struggled for nearly two decades — not because the work wasn’t important, but because the story was poorly framed. The result was slower adoption, deeper polarization, and years of missed potential. AI now stands at a remarkably similar inflection point.

The good news? We don’t have to repeat those mistakes.

What Sustainability Got Wrong (And What It Cost)

The early sustainability movement was built on a foundation of fear. Climate crisis. Resource depletion. Moral imperative. The narrative was clear: change or face catastrophe. It felt urgent because it was urgent. But urgency without invitation creates resistance, not movement.

Research from Deloitte, Bain, BCG, and others has documented what went wrong. The language became effectively impenetrable — CSR, ESG, SDGs, Scope 1, 2, and 3 emissions, materiality assessments. The business case remained vague or unconvincing to mainstream leaders who needed to justify investment. Sustainability was positioned as separate from “real” business, relegated to compliance teams and annual reports that few people read.

Perhaps most damaging, sustainability became easy to politicize. By framing environmental and social responsibility as a moral position rather than a practical capability, it invited ideological opposition. What should have been a business evolution became a cultural battleground.

I’ve watched this play out in boardrooms for fifteen years. CEOs who genuinely cared about sustainability but couldn’t get budget approval because the business case felt abstract. CMOs who knew their customers valued purpose but couldn’t translate that into quarterly metrics their CFO would accept. The delay wasn’t about bad intentions — it was about a story that didn’t land where it needed to. The frustrating part? Much of this was avoidable.

Where AI Is Heading Down the Same Path

I’m watching AI companies make remarkably similar choices.

The dominant narratives right now are either techno-utopian (“AI will solve everything”) or apocalyptic (“AI will destroy everything”). Both are fear-based, just from different directions. Both create distance rather than connection. Both make it harder, not easier, for organizations to thoughtfully adopt AI in ways that genuinely serve their people and their purpose.

The language is already becoming impenetrable. LLMs, RAG, fine-tuning, prompt engineering, agentic systems. I’m technical enough to follow most of this, but when I’m working with CEOs and CMOs, I watch their eyes glaze over. They’re not uninterested — they’re busy, and they need to understand what this means for their business in plain language. When we hide practical value behind technical complexity, we slow adoption.

And AI is being siloed the same way sustainability was. “Our AI team is working on that.” “We have an AI strategy over here.” As if intelligence and capability can be cordoned off from the rest of the business. This guarantees AI remains peripheral rather than transformative.

Perhaps most concerning, AI is already becoming politicized — not because the technology is inherently political, but because the story is being framed in ways that invite ideological opposition. When we position AI as replacement rather than augmentation, as threat rather than tool, as inevitability rather than choice, we’re setting up the same culture war dynamics that have haunted sustainability for decades.

What History Teaches Us

This pattern isn’t unique to sustainability. We’ve seen it play out across multiple technology transitions.

Early robotics suffered from the same fear-based narrative — robots taking jobs, robots replacing humans, robots as threat. It took years to reframe automation as augmentation, to help people see robots as tools that handle repetitive or dangerous work so humans can focus on judgment, creativity, and connection. That reframing didn’t just reduce resistance; it accelerated adoption and created better outcomes for both workers and companies.

IBM’s journey with Deep Blue and Watson offers another lesson. When Deep Blue beat Garry Kasparov at chess in 1997, IBM could have positioned it as “machine dominance over human intelligence.” Instead, they framed it as partnership — the beginning of a new era where human insight and machine processing power would work together. Watson continued that narrative, focusing on practical applications in healthcare and business that helped humans make better decisions. The framing mattered at least as much as the technology.

DeepMind has been deliberate about this from the beginning. Their public communication emphasizes solving real human and scientific problems — protein folding, energy efficiency, healthcare diagnostics. They position AI as a tool for human flourishing, not as an autonomous force. It’s not perfect, but it’s instructive.

Seven Lessons AI Can Learn From Sustainability’s Journey

1. From Fear to Usefulness

Sustainability’s early framing around catastrophe created guilt and anxiety, not momentum. People don’t sustain behavior change through fear — they sustain it through value and meaning.

AI’s equivalent mistake is the replacement narrative. “AI will take your job.” “AI makes human workers obsolete.” Even when it’s framed as inevitable disruption, it’s still fear-based.

The reframe: talk about what AI makes possible, not what it makes obsolete. Focus on the creative work, strategic thinking, and human connection that becomes available when AI handles the repetitive, the data-heavy, the pattern-matching. Frame it as capability enhancement, not capability replacement.

2. From Abstraction to Lived Experience

Sustainability talked in global terms — planetary boundaries, parts per million, future generations. All true, all important, and all too distant from daily decision-making to drive change in the moment.

AI is making the same mistake when it focuses on AGI timelines, existential risk, or abstract superintelligence. Most people don’t need to understand the singularity; they need to understand how AI helps them do their job better on Tuesday.

The reframe: ground every AI conversation in specific, tangible benefit. “This tool will reduce the time you spend on expense reports from two hours to ten minutes.” “This system will flag the critical patient cases your team needs to see first.” Lived experience, not theoretical possibility.

3. From Moral Pressure to Shared Progress

Sustainability positioned itself as moral imperative, which made it easy to resist as moral hectoring. “You should care about this.” “You’re part of the problem if you don’t.” That framing invited defensiveness.

AI risks the same trap when conversations become about “responsible AI” or “ethical AI” as if ethics were separate from effectiveness. It positions AI adoption as a values test rather than a practical choice.

The reframe: talk about AI as shared progress toward goals everyone already holds. Better healthcare outcomes. More effective education. More satisfying work. More time for what matters. Make it about collective advancement, not individual virtue.

4. From Silo to System

Sustainability’s biggest structural mistake was allowing itself to be separated from core business. Once it’s in a separate department with a separate budget, it becomes optional, symbolic, easily cut when times get tough.

AI is heading down the same path when companies create “AI teams” that operate independently from product, operations, and strategy. This guarantees AI remains a specialized capability rather than a fundamental way of working.

The reframe: integrate AI into existing functions from the beginning. Your sales team uses AI to understand customer needs. Your product team uses AI to analyze usage patterns. Your operations team uses AI to optimize workflows. It’s not a separate initiative; it’s how work evolves.

5. From Replacement to Augmentation

Sustainability often positioned itself in opposition to existing business — as if growth and responsibility were inherently at odds. This created unnecessary conflict and slowed adoption among leaders who believed in both.

AI makes this mistake when it frames itself as replacing human judgment, creativity, or relationship. Even when companies say “we’re augmenting, not replacing,” the product design and messaging often tell a different story.

The reframe: be relentlessly clear that AI amplifies human capability; it doesn’t substitute for it. Design systems that keep humans in the loop on decisions that matter. Talk about AI as a thinking partner, not a thinking replacement. Make augmentation tangible in how the product actually works.

6. From Hype to Trust

Sustainability suffered from credibility problems because companies overpromised and underdelivered. “Carbon neutral” claims with questionable offsets. “Sustainable” products with marginal improvements. The gap between rhetoric and reality created widespread skepticism.

AI is in danger of the same credibility crisis. Overselling what models can do. Underplaying limitations and errors. Framing every incremental improvement as a revolutionary breakthrough. This erodes trust faster than it builds excitement.

The reframe: be honest about what your AI can and cannot do. Talk openly about limitations, failure modes, and areas where human judgment remains essential. Build trust through transparency and consistent delivery, not through hype and aspiration.

7. From Complexity to Clarity

Sustainability’s acronym addiction — CSR, ESG, SDGs, TCFD, SASB, GRI, CSDDD — made it inaccessible to anyone outside the specialist community. Even people who cared about the work couldn’t follow the conversation.

AI is already deep into the same pattern. Every week brings new jargon, new frameworks, new terminology that requires translation. This isn’t just annoying; it’s exclusionary.

The reframe: default to plain language. If you have to use a technical term, explain it simply. Don’t assume knowledge; invite understanding. Make it easy for people to participate in the conversation without a specialized vocabulary.

What This Means for AI Leaders

If you’re building AI products, leading AI adoption, or shaping how your organization thinks about AI, here’s what these lessons suggest:

Position your work as additive, not disruptive. Yes, AI changes how work gets done. But frame that change around what becomes possible — the creative thinking, strategic insight, and human connection that your product enables — not what becomes obsolete.

Talk about AI’s role in daily life, not its theoretical potential. Ground every conversation in specific, tangible benefit. Show, don’t speculate. Make it concrete enough that someone can picture using it tomorrow.

Communicate with employees and customers as partners, not subjects. People aren’t obstacles to AI adoption; they’re the whole point of it. Bring them into the story as active participants whose judgment and creativity matter more, not less, because of AI.

Frame AI as enhancement to human dignity, not threat to it. The most effective AI systems will be the ones that make people feel more capable, more creative, more human — not more replaceable. Design and message accordingly.

Build narratives that support adoption, not anxiety. Your story should make it easier, not harder, for organizations to say yes. Remove barriers, reduce fear, increase clarity. Make adoption feel like progress, not risk.

This isn’t about softening the truth or avoiding hard conversations. It’s about learning from two decades of sustainability’s struggles so we don’t spend the next two decades repeating them.

A Better Story Is Possible

AI doesn’t need a perfect story. It needs a human one.

A story that invites participation rather than demanding compliance. A story that grounds itself in practical benefit rather than abstract possibility. A story that positions technology as a tool for human flourishing, not as an autonomous force to be feared or worshipped.

Sustainability is finally getting its story right — two decades later than it could have. Companies are now talking about sustainability as capability, not compliance. As competitive advantage, not cost center. As systemic transformation, not siloed initiative. The work is finally accelerating because the story finally makes sense.

AI has the opportunity to get this right from the beginning. We know what the pitfalls are. We know what slow adoption costs. We know how narrative shapes not just perception but actual outcomes.

The question isn’t whether AI will transform how we work and live — it already is. The question is whether we’ll tell a story that accelerates that transformation in ways that serve human potential, or whether we’ll repeat sustainability’s mistakes and spend decades undoing narrative damage that was avoidable from the start.

I’m optimistic. Not because I think AI companies will get everything right, but because the lesson is already visible. We don’t have to learn it the hard way this time.

We just have to choose to learn it.