When Someone Else Controls the Lights
Your electricity provider cannot decide tomorrow that your house no longer qualifies for power. The grid is a regulated utility. Access to it is treated, in most societies, as something close to a human right, and you plan around it accordingly.
Your AI model provider can. They can deprecate the model your product is built on, change the pricing overnight, restrict access to your use case for legal or policy reasons, silently alter the model's behaviour in a way that breaks your assumptions, or simply shut down. None of that requires them to act in bad faith. It is just the nature of what they are selling.
Most companies building on AI today have not fully absorbed this distinction. They are treating a vendor relationship like infrastructure. The enterprise sales pitch from Anthropic or OpenAI reinforces this. Stability is implied. The contract language does not deliver it.
The deeper problem is not the API call. It is everything that accumulates around it: prompts tuned to a specific model's behaviour, workflows built around a particular output format, latency and cost assumptions baked into the architecture. When you eventually need to move, and at some point you will, it is not a swap but a renegotiation of every assumption you made.
Rigid systems pay for this twice. First when they discover the migration cost, and again when they miss the window to respond because the work is too large to absorb quickly.
The answer is the same one that applies to any external dependency you cannot fully control: design so that it is replaceable. Not just at the model level, swapping one provider's model for another, but at the structural level. As the AI landscape evolves toward specialised models, smaller locally-run inference, and multi-model pipelines, the meaningful question is not which provider you chose today but whether your architecture can accommodate different behaviour next year.
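One way to sketch this is a provider-agnostic boundary in code. The interface and class names below (`CompletionProvider`, `AlphaProvider`, and so on) are hypothetical stand-ins, not any vendor's actual SDK; in a real system each implementation would wrap a specific provider's client, and the rest of the application would depend only on the shared interface.

```typescript
// A minimal provider abstraction: the application programs against
// CompletionProvider, never against a specific vendor's API shape.

interface CompletionRequest {
  prompt: string;
  maxTokens?: number;
}

interface CompletionResult {
  text: string;
  provider: string;
}

interface CompletionProvider {
  name: string;
  complete(req: CompletionRequest): Promise<CompletionResult>;
}

// Two illustrative implementations. In practice, each would translate
// the shared request into a vendor-specific call and normalise the
// response back into CompletionResult.
class AlphaProvider implements CompletionProvider {
  name = "alpha";
  async complete(req: CompletionRequest): Promise<CompletionResult> {
    return { text: `[alpha] ${req.prompt}`, provider: this.name };
  }
}

class BetaProvider implements CompletionProvider {
  name = "beta";
  async complete(req: CompletionRequest): Promise<CompletionResult> {
    return { text: `[beta] ${req.prompt}`, provider: this.name };
  }
}

// Application code takes the interface, so swapping providers is a
// configuration change rather than a rewrite.
async function answer(
  provider: CompletionProvider,
  prompt: string
): Promise<CompletionResult> {
  return provider.complete({ prompt });
}
```

The point of the boundary is that provider-specific assumptions (output format, error shapes, pricing-driven token limits) are renegotiated inside one adapter, not across the whole codebase.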
Electricity started as a commercial product that could be denied. Over time, access became regulated and eventually recognised as a right. It is not hard to imagine AI inference following a similar path as it becomes more deeply embedded in economic and social participation. The companies and countries that treat AI access as contingent today may be setting norms they will later regret.
We are not there yet. Until we are, the practical response is architecture.
A practical note: at ChatBotKit we built the platform to swap not just individual models but entire provider families, and to accommodate structural shifts in how AI is deployed as they happen. We did this because our customers need a stable environment, and we cannot offer that if our own stability depends on any single provider's decisions.