AI Psychosis and the Dunning-Kruger Loop

AI tools make you feel like the expert in the room. But feeling competent and being competent are different things.
Petko D. Petkov, on a break from CISO duties, building cbk.ai

An AI agent can produce a perfectly correct result and still be a problem. The output is not wrong; the person using the tool simply cannot prove that it is right.

When you work with a coding assistant, you feel like the director. You set the vision and the code appears. You built this! Except you didn't.

A useful analogy is building features based on customer feedback. Listening to customers is obviously valuable. You should do it. But nobody would claim the customer built the product. The customer sees the surface area. The engineer sees the trade-offs, edge cases, and load-bearing decisions buried three layers deep. The customer said "I want it to do X." The engineer figured out how to make X possible without breaking Y and Z.

Using an AI assistant to write code you cannot fully evaluate is the same dynamic, except now you are the customer who thinks they are the engineer.

AI psychosis is not yet a formal clinical diagnosis, but psychiatrists are observing it. It describes a pattern where intense interaction with AI systems creates or reinforces a distorted sense of reality. The AI is agreeable by design. It rarely pushes back, if at all. Over time, this creates a feedback loop where you believe you are more capable than you actually are, because the tool keeps telling you that you are.

Sound familiar? This is the Dunning-Kruger effect running on infrastructure.

Dunning-Kruger describes a cognitive bias where people with limited competence in a domain overestimate their own ability. They lack the knowledge required to recognise what they don't know. AI tools happen to be the perfect amplifier for this bias. The less you understand about what the AI produced, the more confident you feel about it, because you literally cannot see the problems.

A senior engineer looks at AI-generated code and sees the subtle issues. A non-engineer looks at the same code and sees that it works. Both feel equally confident. Only one of them is right. The dangerous part is that the person who is wrong has no mechanism to discover that they are wrong, because the AI will not tell them. The worst part is that this reinforced confidence is hard to shake off. Why trust the experts, right?

You need the ability to verify. Verification is the entire game. If you cannot look at a piece of output and independently determine whether it is correct, you are not using a tool. The tool is using you. You are providing the intent and the credit card. The AI is providing everything else, including your confidence.
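One concrete way to keep verification in your own hands is to check AI-generated code against an oracle you wrote and understand yourself, rather than trusting the AI's explanation of its own output. A minimal sketch in Python, where the function and scenario are hypothetical examples, not anything from a real tool:

```python
import random

# Hypothetical AI-generated function: deduplicate a list while
# preserving the order of first occurrences.
def dedupe(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# Independent check: a slow but obviously-correct oracle that you
# can reason about yourself, compared against the AI's version on
# many randomly generated inputs.
def oracle(items):
    return [x for i, x in enumerate(items) if x not in items[:i]]

for _ in range(1000):
    data = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    assert dedupe(data) == oracle(data), f"mismatch on {data}"
```

The point is not this particular check; it is that the confidence comes from a test you could have written without the AI in the room.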

The fix is to be honest about where your expertise ends. Use AI to accelerate what you already understand. Use AI to explore areas you want to learn and build upon. But do not use it to pretend you are something you are not. And if you find yourself unable to verify the output, that is a signal. Being aware of AI psychosis is as practical as being aware of second-hand smoke and microplastics. It just helps you make better decisions about how to interact with the technology.


A practical note: the most effective AI users we see on our platform are the ones who already know their domain. They use AI to move faster, not to fake competence. The gap between those two things is where real damage happens.