
As we enter the holiday season — a time when we naturally think about what we give to others — here’s a question worth sitting with: what are we giving through the technologies we’re building? What legacy are we creating with tools that will shape lives we’ll never see?
For me, the concern is that AI is advancing faster than our ability to think through what it means. And nowhere does that gap matter more than in the work of social impact.
Sure, the tech world is moving at breakneck speed. Billion-dollar valuations. Packed conferences. Breathless promises about what’s possible. But the people actually doing the work of change — the ones on the ground, in communities — they’re asking something different: Will this actually help?
We’ve been here before. Remember early social media? All those promises of connection. I was giddy with optimism. Wrote my book, We First, about it. What we got was polarization. Algorithms that fractured our ability to talk to each other. “Move fast and break things” that broke our democracy.
We’ve seen how this movie ends. And honestly? It wasn’t great.
But here’s what I’ve learned from working with leaders who are getting this right: AI is not the mission. AI is the force multiplier for the mission.
That distinction matters more than you might think. Because in the rush to innovate, we keep forgetting to ask why we’re innovating in the first place. Technology without purpose is just noise. Speed without direction is chaos. And scale without wisdom? That’s just amplified harm.
The Part No Algorithm Can Handle
Here’s what I know to be true after fifteen years of work at We First: healing communities, restoring ecosystems, expanding opportunity — it all begins and ends with people.
Purpose is human. Trust is human. Dignity is human.
Technology just amplifies what we’re already committed to doing. Or what we’re failing to do.
This is where most conversations about AI go sideways. People start with the technology and work backward. They ask “What can this tool do?” instead of “What do these people need?” They optimize for efficiency when the moment calls for empathy. They scale before they understand.
And then they wonder why it didn’t work.
When AI is done well, it’s remarkable. Rural students getting access to tutors who never tire. Farmers predicting weather patterns that used to take generations to learn. Low-income families navigating bureaucracies that were designed to confuse them. Health workers catching diseases before they become crises.
This isn’t theoretical. It’s happening right now, in communities that every other wave of innovation forgot about.
But here’s the truth that keeps me up at night: AI can scale harm faster than humans can fix it.
Biased data misdirects aid. Algorithms discriminate in hiring, turning historical injustice into automated destiny. Surveillance masquerades as support. These aren’t edge cases. They’re predictable outcomes when you build tools without including the people they’re meant to serve.
Every dataset contains choices. Every algorithm embeds values. Every system reflects who built it. And when those builders look nothing like the people they’re building for? When they’ve never lived the problems they’re trying to solve? That’s when things go wrong.
Design for People, Not the Other Way Around
So, here’s the responsibility: design the system around people, not people around the system.
This means co-designing with the people who actually understand the problems. Piloting before scaling. Learning before claiming. Listening before launching. It means treating affected communities not as test subjects, but as architects of their own futures.
It means asking who gets counted. Whose experiences are valid. Whose voices shape the data. Because these questions determine whether AI liberates or oppresses. Whether it heals or harms. Whether it serves or surveils.
The organizations getting this right aren’t just writing ethics statements or hiring compliance officers. They’re fundamentally changing how decisions get made. They’re giving communities veto power over technologies that affect them. They’re making transparency mandatory and accountability real.
Leadership Looks Different Now
Here’s something boards and CEOs need to understand: you can’t outsource this judgment. You can’t point to terms of service or blame the algorithm anymore. That era is over.
In a world where AI capabilities evolve faster than regulations, where unintended consequences emerge at scale, purpose becomes your compass. Lose it, and speed becomes recklessness. Innovation becomes destruction.
Today’s leaders need new skills. The ability to hold complexity. To navigate uncertainty. To make ethical calls in real-time. To understand that technology is never neutral — it always carries the fingerprints of its makers.
And here’s the most important thing: the most dangerous moment isn’t when technology fails. It’s when it succeeds at the wrong thing.
Keep Humans in the Work
When AI is used wisely, it removes the invisible burden — the paperwork, the analysis, the administrative grind that exhausts everyone — so humans can do what only humans can do: listen, empathize, advocate.
This is about augmentation, not automation. Reinforcement, not replacement.
AI shouldn’t replace teachers. It should help them understand how each student learns, so they have more time to actually connect. It shouldn’t replace case managers. It should help them prioritize who needs support first, so they can show up with real presence. It shouldn’t replace community organizers. It should help them see patterns, so they can mobilize more strategically.
Because social change is fundamentally human work. It requires contextual understanding no algorithm can replicate. It demands emotional intelligence — reading a room, sensing when someone’s holding back, knowing when to bend the rules. It depends on trust that only humans can build, through showing up consistently, especially when it’s hard.
Technology can enhance this. But it cannot replace it. Any approach that tries to automate away the human dimension isn’t innovation. It’s erasure.
The Real Question
The future isn’t human versus machine. It’s human plus machine, each doing what they do best.
Machines excel at processing massive amounts of information, identifying patterns, running simulations, operating consistently. They bring scale and speed.
Humans excel at understanding meaning, making judgment calls, adapting to new situations, building relationships, inspiring action, carrying the moral weight of our choices. We bring wisdom, empathy, creativity, accountability.
The magic happens in between — when technology expands what’s possible and humans ensure it stays aligned with what’s right.
So forward-thinking organizations aren’t asking, “How do we use AI?”
They’re asking, “Who will we become with AI?”
That question cuts through all the hype. Because technology doesn’t just change what we do. It changes who we are. Our tools shape our thinking. Our systems shape our values. Our choices today shape tomorrow’s world.
Will we become more efficient but less humane? More powerful but less accountable? Or will we become more capable of delivering on our deepest commitments? More effective at reaching people we’ve struggled to serve?
The answer depends on the choices we’re making right now.
Purpose as Infrastructure
Purpose-driven AI isn’t a slogan. It’s infrastructure. It translates values into workflows, ethics into incentives, accountability into structure. It makes doing the right thing the default, not the exception.
This means embedding purpose into every stage. Starting with the problem as experienced by those who live it, not the solution imagined by those who don’t. Including diverse voices as architects, not afterthoughts. Piloting thoughtfully. Gathering feedback continuously. Measuring impact through the eyes of those affected.
It means creating feedback loops that actually work. Building systems that learn from people, not just data. Making space for people to say “this isn’t working for us” without being dismissed as resistant to change.
When aligned with a movement mindset — with the understanding that lasting change requires collective action and sustained commitment — AI becomes one of the greatest accelerants of human progress we’ve ever known.
But only when aligned. Only when grounded. Only when governed by wisdom that understands progress isn’t just about speed — it’s about direction.
This Moment Matters
The next decade will reward leaders who treat AI as a sacred responsibility to do more good, more quickly, for more people than ever before.
Sacred because the stakes are that high. We’re making decisions that will compound over time, shaping opportunities for generations we’ll never meet.
We’re at an inflection point. A moment when the tools available fundamentally shift what’s possible. The last time we faced something like this, we got both incredible progress and devastating harm. The New Deal and atomic weapons. Universal healthcare and environmental destruction. Expanded rights and accelerated inequality.
Technology doesn’t have moral direction. We do. History doesn’t have an arc. We bend it. The future isn’t inevitable. We build it.
Right now, we get to choose what we build it with and who we build it for.
The leaders who understand this aren’t waiting for perfect solutions. They’re moving forward with humility and urgency. Experimenting boldly while governing carefully. Embracing potential while respecting risks. Treating communities as partners, not problems to solve.
They’re asking better questions. Not just “Can we?” but “Should we?” Not just “What’s possible?” but “What’s responsible?” Not just “How fast?” but “How equitably?” Not just “What will this let us do?” but “What will this make us become?”
Leading Together
That’s how we Lead With We in the age of AI.
Not by abandoning human wisdom for machine intelligence. Not by letting technology dictate our values. Not by optimizing for what’s easy to measure while neglecting what actually matters.
But by recognizing that our most powerful tools are only as good as our commitment to use them well. That technology can scale healing if we design it with intention and deploy it with care. That the future is the result of choices we’re making right now, one decision at a time.
This is the work. Not to resist AI or worship it, but to shape it. Not to let it define our humanity, but to let our humanity define how we use it. Not to ask what AI can do for us, but what we can become by using it in service of what we believe.
So as this year ends and we gather with the friends and family we care about, carry this question: What are we giving to the future? What gift are we leaving for communities we’ll never see, for kids not yet born, for a world we’re building but won’t inhabit?
The greatest gift we can give is the wisdom to use our most powerful tools not for what’s expedient, but for what’s right. Not for what scales fastest, but for what serves best. Not for what enriches the few, but for what empowers the many.
That’s the spirit of this season. And that’s the spirit we need to bring to the age of intelligence.