
Not long ago, I worked with a global manufacturer that viewed itself as cautious, almost hesitant, about adopting AI. Its priorities were pragmatic: complete a major ERP migration to the cloud, modernize a handful of customer-facing applications, and strengthen security controls. The CIO was explicit about generative AI: it was on the roadmap, just not for now. “We’re not ready yet” was the standing position.
Officially, the organization wasn’t pursuing AI initiatives at all. Unofficially, AI was already seeping into its environment. The company’s cloud provider had begun embedding AI-native capabilities directly into core services. A search platform used for a new customer portal shipped with semantic and vector search enabled by default. The observability stack quietly added AI-assisted analysis that reshaped how logs and telemetry were processed. Even the managed database service introduced an “AI integration” toggle that developers started enabling because it looked helpful and low risk.
Over time, these small decisions compounded. Developers built features that relied on the provider’s vector engine. Automation workflows began depending on proprietary AI agents. Data models were tuned specifically for the cloud vendor’s AI services. None of this was driven by a formal AI strategy—it happened organically, feature by feature, service by service.
Six months later, the consequences were clear. Cloud costs had climbed, and architectural flexibility had eroded: migrating away from the provider would now mean untangling deeply embedded AI dependencies. The organization had never set out to become AI-first, yet it had arrived there anyway, more tightly locked into its cloud vendor than ever and with far fewer options than it had realized.

