Why You Won’t Switch AI Models
Endowment effects, default bias, and the behavioural economics of AI
The core idea of the endowment effect is that people value what they possess more than identical alternatives. A person who owns something, say a house, places a higher premium on it than someone who doesn't. This is why housing markets often show a gap between buying and selling prices: the buyer's willingness to pay (WTP) is frequently lower than the owner's willingness to accept (WTA). The classic study illustrating this effect is Kahneman, Knetsch and Thaler's 1990 paper, Experimental Tests of the Endowment Effect and the Coase Theorem. Participants were divided into two groups: one group was given mugs, the other nothing. On average, WTA was about twice as high as WTP. The result has been replicated many times.
Take this concept to the product market and add the default effect: people disproportionately stick with the pre-selected or existing option, even when equally good or better alternatives are available. If you own an Apple phone, with your data looped into its cloud services and years of familiarity with its polished, distinctive UI, you are likely to stick with Apple even if an Android device might serve you better. Switching is cognitively costly, and humans reliably default to the status quo when friction is present - we aim for effort minimisation.
The same endowment and default effects that Android makers ran into when competing with Apple are now playing out with ChatGPT and other large language model services. People use LLMs heavily: for workflows, writing, programming, even cooking recipes. The list goes on. These models have memory. Not perfect memory, but memory nonetheless. They learn your preferences, history, tone, projects and cognitive patterns. That makes leaving costly. Not just financially costly, but cognitively and informationally costly. Maybe even existentially costly, who knows. AI creates an accumulated informational endowment: switch to a new model and you must re-explain yourself, rebuild context - be it project context or workflow - and slowly re-prompt the LLM until it produces the desired outputs. In the AI market, endowment, default bias and switching costs converge into a single reinforcing mechanism.
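That reinforcing mechanism can be sketched as a toy decision model. The numbers, function name and parameters below are hypothetical illustrations, not drawn from any real data: the point is simply that a challenger's quality edge must outweigh the value of context accumulated with the incumbent before switching makes sense.

```python
# Toy model of AI switching costs (illustrative numbers only).
# A user switches providers only when the challenger's quality edge
# exceeds the value of the context built up with the incumbent model.

def will_switch(quality_gain: float, months_of_use: int,
                context_value_per_month: float = 0.5) -> bool:
    """Return True if switching is worthwhile in this toy model."""
    switching_cost = months_of_use * context_value_per_month
    return quality_gain > switching_cost

# A modest quality edge loses to two years of accumulated context...
print(will_switch(quality_gain=5.0, months_of_use=24))  # False: 5.0 < 12.0
# ...while a new user with no history switches easily.
print(will_switch(quality_gain=5.0, months_of_use=0))   # True
```

Under these assumptions the switching cost grows linearly with usage, so the longer someone prompts one model, the larger the quality gap a rival must open up to win them over.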
OpenAI and its ChatGPT models enjoyed a period with few true competitors. In classical economics, monopoly power slowly erodes as competitors see the profits on offer, produce products of similar quality and eat away at market share. AI markets differ slightly. Rivals cannot simply compete on price: users have already spent hours prompting ChatGPT, the model has learned what it can about them, and that investment makes customers reluctant to leave. Something deeper and more radical must pull customers away from the personal data advantage OpenAI possesses.
When you use a model like ChatGPT extensively, you're using a product that has partially internalised you, becoming a representation of some of your preferences and behaviours. This differs slightly from the classic Apple vs Android scenario. Apple's endowment and default power came from ecosystem compatibility, hardware investment, social norms and a familiar interface. For ChatGPT versus, say, Google's Gemini, the lock-in is more deeply behavioural. Because the model remembers your long-term goals and interests, there is a compounding value effect: the more you prompt, the more entrenched your goals and aspirations become within the model you already use.
So, bringing it all back: even if Gemini or Claude becomes technically superior to ChatGPT, heavy endowment and default effects make users hesitate to switch. It isn't really about which model is smarter, but which model knows the person better. Right now, OpenAI is winning on that front.




