Why AI vendor lock-in is more dangerous than classic lock-in.

Lock-in doesn't always appear where you look for it. It is invisible while it forms. It becomes visible only when the vendor changes terms.
Your teams have optimised workflows around a specific model (GPT-4, Claude, Gemini). Prompts, evaluations and QA processes are all calibrated to that model. Switching models doesn't just mean switching APIs. It means recalibrating months of optimisation work.
Data transmitted to vendors for fine-tuning, embedding generation or RAG systems creates an asymmetric dependency. That data is no longer recoverable in its processed form. It sits with the vendor.
Major AI vendors' usage terms evolve. Conditions that were attractive at signing can be modified unilaterally at renewal. An organisation that depends on a single vendor has no position from which to renegotiate.
Are you already in a lock-in situation?
If your primary AI vendor doubled its pricing tomorrow, how long would it take to migrate to an alternative? If the answer exceeds six months, or if you don't know, you are in a significant lock-in situation.
If your AI vendor went offline for 48 hours, which critical processes would stop? That list is your real dependency perimeter.
Three principles to avoid lock-in without abandoning AI.
Principle 1: Concentrate dependencies at the infrastructure level, not the model level. Infrastructure components are more standardised, and therefore more substitutable, than model-specific integrations.
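One way to keep the dependency at the infrastructure level is a thin provider-agnostic interface that the rest of the codebase calls, with one adapter per vendor behind it. The sketch below is illustrative, not a definitive implementation: the adapter and provider names are hypothetical, and the vendor calls are stubbed where a real SDK call would go.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface the rest of the codebase depends on."""

    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter:
    # Hypothetical adapter: in practice this would wrap the vendor's SDK.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"  # stubbed response for illustration


class AnthropicAdapter:
    # Hypothetical adapter for a second vendor; same interface.
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"  # stubbed response for illustration


# Swapping vendors becomes a configuration change, not a code rewrite.
PROVIDERS: dict[str, ChatModel] = {
    "openai": OpenAIAdapter(),
    "anthropic": AnthropicAdapter(),
}


def complete(provider: str, prompt: str) -> str:
    """Route a completion request to the configured provider."""
    return PROVIDERS[provider].complete(prompt)
```

The point of the pattern is that application code only ever sees `complete`; the vendor-specific surface area is confined to the adapters, which is exactly the part you rewrite during a migration.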
Principle 2: Document exit conditions before you enter. Every significant AI integration decision must include an exit-condition analysis. Not after. Before.
Principle 3: Maintain an internal evaluation capability. Organisations that can benchmark models internally retain real negotiating leverage. Those that fully delegate this evaluation progressively lose their independence of judgment.
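An internal evaluation capability can start very small: a fixed set of task cases drawn from your own workflows, run against each candidate model, with a comparable score per vendor. The harness below is a minimal sketch under that assumption; the evaluation cases and the stub models standing in for vendor APIs are hypothetical.

```python
# Minimal internal benchmark harness. Model calls are stubbed here;
# in practice each model_fn would wrap a real vendor API call.

def run_benchmark(model_fn, cases):
    """Score a model on (prompt, expected_substring) pairs; returns the pass rate."""
    passed = sum(1 for prompt, expected in cases if expected in model_fn(prompt))
    return passed / len(cases)


# Hypothetical evaluation set; a real one should reflect your own workflows.
CASES = [
    ("Capital of France?", "Paris"),
    ("2 + 2 =", "4"),
]


def stub_model_a(prompt):  # stand-in for vendor A
    return {"Capital of France?": "Paris", "2 + 2 =": "4"}.get(prompt, "")


def stub_model_b(prompt):  # stand-in for vendor B
    return {"Capital of France?": "Paris"}.get(prompt, "unsure")


score_a = run_benchmark(stub_model_a, CASES)  # -> 1.0
score_b = run_benchmark(stub_model_b, CASES)  # -> 0.5
```

Even a crude harness like this gives you a number of your own to bring to a renewal negotiation, which is the leverage the principle is about.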