
The Doorman Fallacy

The instinct to replace people with machines is a pattern as old as the automatic door - and it keeps producing the same hollow outcome.
Petko D. Petkov, on a break from CISO duties, building cbk.ai

The most obvious path to market for an AI company is to build chatbots that replace customer support staff. Plenty of companies took that path. They are doing extremely well. The market is real.

We chose not to!

Part of it was moral. If you spend your days building tools that think and talk, you have to ask yourself what you are building them for. Replacing people with bots that pattern-match keywords is not progress. It is cost cutting dressed up as innovation.

But it was also bad product thinking. And to explain why, I need to talk about doors.

When automatic doors arrived, buildings fired their doormen. The door opens itself now. What they lost didn't fit on a spreadsheet. The doorman knew tenants by name. He noticed when someone looked unwell. He was the last point of human contact in an otherwise anonymous building. This is sometimes called the doorman fallacy. Measure the function, ignore the relationship, cut the person.

There is a sibling version known as the tea lady fallacy. Offices had someone who came around with a tea trolley. Coffee machines killed the role. What also disappeared was the informal social glue: the small conversations, the moments of pause, the human thread running through a floor of cubicles. The coffee machine makes better espresso, for sure. It does not ask how your morning is going.

The dominant narrative around AI is cost reduction. Fire the designers, the developers, the support teams! Let the machines handle it. Every pitch deck, YouTube "influencer", LinkedIn "thought leader" and consultancy slides toward the same conclusion: AI means fewer people on payroll.

This is the doorman fallacy at industrial scale.

They forget that nobody wants to talk to a chatbot when the stakes are high. When your order is lost or your account is locked, when you are frustrated, scared and confused, you want to talk to a human. Someone who can bend the rules, say "let me fix this for you" and mean it. A bot that says "I understand your frustration" does not understand anything. It is performing empathy without the substance. It is resentment with a smiley face.

When I hit a problem, my first instinct is to search the docs, scan the error, read the source code. If I can solve it myself, I will. This is where AI agents are genuinely useful. Not as a replacement for support, but as an amplifier of self-service. An AI that knows the documentation inside out, that can walk me through a complex configuration and surface the right answer from a mountain of content, is not replacing a human. It is making me more capable. The support team is still there for when I need them, and I do need them often. But before I get there, I want to try myself first.
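The amplifier-of-self-service idea can be sketched in a few lines. This is an illustration only, not how any real product works: the documentation snippets are made up, and the word-overlap scoring is a deliberately crude stand-in for the embeddings or language models a real retrieval pipeline would use.

```python
# Sketch of augmentation-style self-service: surface the most relevant
# documentation snippet for a user's question so they can try to help
# themselves before reaching the support team. Hypothetical data and
# scoring; a real assistant would use embeddings or an LLM, not word overlap.
import string

def words(text: str) -> set[str]:
    """Lowercase, strip punctuation, split into a set of words."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def best_snippet(question: str, docs: list[str]) -> str:
    """Return the snippet sharing the most words with the question."""
    return max(docs, key=lambda d: len(words(question) & words(d)))

DOCS = [
    "To reset your password, open Settings and choose Reset Password.",
    "Orders can be tracked from the Orders page in your account.",
    "Contact support if your account is locked after three attempts.",
]

print(best_snippet("how do I reset my password", DOCS))
# -> To reset your password, open Settings and choose Reset Password.
```

The design point is that the human stays in the loop: when nothing scores well, the right move is to hand off to a person, not to fake confidence.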

Replacement removes a human from a process. Augmentation makes the human more capable. One shrinks the organisation. The other expands what it can do.

The doorman fallacy persists because cost reduction is easy to sell. "Cut headcount by 20%" lands better in a board meeting than "make your existing team twice as effective." But one builds something durable. The other builds a house on sand and calls it efficiency.

The automatic door still opens. Nobody remembers the building it belongs to.


A practical note: this is why ChatBotKit is an augmentation platform, not a replacement engine. We build tools that make humans better at what they do and give self-sufficient users the means to help themselves. The goal was never a better AI agent. It was a better human.