Some People Are Being Replaced by AI. Others Are Being Replicated.

For the past couple of years, most of the conversation around AI and work has centered on one question:

Will AI replace jobs?

It’s a fair question. And depending on who you ask, the answers range from cautious optimism to full-blown anxiety.

But something else is happening—quietly, and mostly at the top of organizations—that hasn’t gotten as much attention. Instead of being replaced by AI, some people are starting to replicate themselves with it.

Not metaphorically. Literally.

I started noticing this through a series of examples that, at first, felt disconnected.

At Peterborough City Council in the UK, a long-tenured employee named Geraldine—who had spent decades building deep institutional knowledge—was turned into an AI assistant. Colleagues can now “talk” to a system trained on her experience, her answers, even her way of helping people.

At UBS, analysts have been turned into AI-generated video presenters, delivering insights to clients without needing to be physically present.

And then there’s the executive layer. Otter.ai’s CEO has experimented with building an AI version of himself to attend meetings. Reid Hoffman created a digital twin that can answer questions in his voice. Other leaders are doing the same—building chatbots, avatars, and systems trained on their thinking.

Individually, these examples seem like experiments. Taken together, they start to look like a pattern.

At a basic level, the motivation makes sense. There’s too much work, too much information, and not enough time.

And in most organizations, knowledge isn’t evenly distributed. It tends to concentrate in a few people—the ones who’ve been there the longest, seen the most, or simply know how things actually get done. The problem is, those people don’t scale.

They become bottlenecks. Not because they want to—but because they’re human.

So naturally, the question emerges: What if you could take what they know—and make it available everywhere? That’s the promise of these digital doubles.

 

If you look at it from a knowledge perspective, it’s actually pretty compelling.

For decades, companies have struggled with knowledge transfer. Someone leaves, and suddenly a whole layer of understanding disappears with them. Or someone stays, but becomes overwhelmed because everyone depends on them.

Digital replicas offer a different model. Instead of knowledge being trapped in individuals, it becomes something that can be:

  • Accessed on demand

  • Shared across teams

  • Reused over time

In that sense, it’s less about cloning a person—and more about scaling what they know.

But here’s where things get more complicated. Because knowledge isn’t the same as judgment.

And most of what matters in work—especially at higher levels—isn’t just what you know. It’s how you interpret it. When you push back. When you say “this doesn’t feel right.” AI can approximate patterns. It can mimic responses. But context? Nuance? Trade-offs? Those are harder. Even in controlled examples, there are moments where the AI version says something the real person wouldn’t—or misses something subtle but important.

Which raises a question that doesn’t have a clean answer:

If a digital version of you makes a decision—or gives advice—who’s responsible?

There’s also something else going on here that feels slightly uncomfortable. While employees are being told to adapt to AI—or risk being left behind—executives are using AI to multiply themselves. It creates an odd dynamic. On one side, AI is positioned as a force of efficiency.

On the other, it starts to look like a tool of amplification—used to extend the reach of certain individuals more than others. You can see how that might land differently depending on where you sit in the organization.

 

And then there’s the question of presence. If you send your digital version to a meeting, did you attend? If someone asks a question and gets an answer from your AI, have they actually heard from you? At some point, scaling access starts to blur into outsourcing interaction.

And in a lot of cases, what people are actually looking for isn’t just information. It’s judgment. Context. Connection.

There’s a deeper layer to this too, which we’re only starting to grapple with: trust.

As these systems become more realistic, it becomes harder to know:

  • Who you’re talking to

  • Whether the response is accurate

  • Whether it reflects real intent

We’re already seeing early versions of this in hiring, where companies aren’t always sure if they’re interacting with a real candidate or some kind of AI-assisted proxy. Now imagine that same ambiguity inside an organization.

If a system speaks on your behalf, it’s not just a tool—it’s a representation. And that comes with consequences.

None of this is to say the technology isn’t useful. It clearly is.

There are real benefits:

  • Less repetitive work

  • Better access to expertise

  • Stronger knowledge retention

  • Faster onboarding and learning

In some cases, it might even help organizations navigate a looming problem—what happens when experienced workers retire and take decades of knowledge with them.

In that context, digital doubles aren’t just interesting—they’re practical. But like most powerful tools, the value depends on how they’re used.

There’s a version of this future where:

  • AI handles routine interactions

  • Humans focus on judgment and decision-making

  • Knowledge flows more freely across the organization

And there’s another version where:

  • People become less present

  • Accountability becomes unclear

  • Trust erodes in subtle ways

We’re somewhere in between right now.

If there’s one principle that feels worth holding onto, it’s this:

As the technology gets smarter, the humans can’t get smaller.

Because at the end of the day, work isn’t just about output. It’s about how decisions get made. How people show up. How trust is built. And those things are harder to replicate than we might think.

I don’t think digital doubles are going away. If anything, they’ll become more common.

The real question is: Are we using them to scale capability—or to replace presence?

And that’s not a technical decision.  It’s a human one.

 
