Companies think their only choices are repatriation or complaining about the high costs of the public cloud. There's a third option, but it takes work.

Based on the hype, you would think that enterprises are scurrying off cloud platforms like rats from a sinking ship. The reality is much more nuanced. According to Andover Intel research, only about 9% of enterprises have moved applications out of the cloud. The same research found that less than 3% of enterprises see any reason for cloud repatriation other than cost, although more than half expressed some disappointment with higher-than-anticipated cloud costs. Cost remains the primary reason enterprises move applications off the cloud, but they rarely move all of their applications and data sets, and they usually act only after watching those workloads hemorrhage cash.

A self-inflicted wound

Ten years ago, everything was "cloud-first," the economics were ignored, and finops was nonexistent. I always tell my clients that this was (mostly) not the fault of public cloud providers. In the early days of cloud computing, the big providers promoted the migration of applications and data to the cloud without modification or modernization. The advice was to fix it when it got there, not before. Guess what? Those workloads were never fixed or modernized. These lift-and-shift applications and data consumed about three times the resources enterprises thought they would, which led to disenchantment with public cloud providers, even though enterprises also bore some of the responsibility.

As we move into 2025, it's no surprise that enterprises face real challenges trying to manage cloud costs. There are no perfect options. You can repatriate the applications and data back to on-premises systems and hope the cheaper hardware saves you some dollars. Or you can leave them in place, do nothing, and hope the bosses overlook the steady cash drain. There is another option, even though it is rarely considered: Optimize the existing applications and data sets, which can provide financial relief.

Splitting the baby

Enterprises can optimize cloud usage and avoid cloud repatriation with careful planning and by exploring issues beyond cost. Warning: This path does not always work and can get you into deeper trouble. Still, it's often the best approach for many workloads burning cash on public cloud providers.

Most businesses need a better strategy than cloud repatriation for problematic applications. These applications hid their inefficiencies while running on premises because we never saw a bill for resource utilization: storage, network, compute, and so on. Often, these applications never underwent an architecture review when they were built. "It works, doesn't it?" was the metric that determined success. I would call something that works but costs five times more to run in the cloud than on-premises a failure, but most enterprises did not.

The compromise approach is to optimize in place. This means doing the bare minimum to get the applications and data sets into a state that minimizes resource use and maximizes efficiency when running on a public cloud provider.

Rethinking costs

High cloud costs usually stem from the wrong cloud services or tools, flawed application load estimates, and developers who designed applications without understanding where the cloud saves money. You can see this in the purposeful use of microservices as a base architecture. Microservices are a good choice for some applications but can burn about 70% more cloud resources. Changing to a simpler architecture (such as a monolith) can be more cost-effective.
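As a rough illustration of why that architectural choice shows up on the bill, the sketch below compares the monthly compute cost of an application split into many always-on microservices against the same workload consolidated into one service. Every instance count and hourly price is a placeholder I chose for illustration; they are not provider list prices, and they happen to land near the gap cited above only because of the numbers picked. The real difference depends entirely on the workload.

```python
# Back-of-envelope comparison of architectural overhead. Every price and
# instance count below is a placeholder assumption for illustration only,
# not real provider pricing.

HOURS_PER_MONTH = 730

def monthly_cost(instances: int, hourly_price: float) -> float:
    """Monthly compute cost for a fleet that runs around the clock."""
    return instances * hourly_price * HOURS_PER_MONTH

# Monolith: one service scaled across three mid-sized instances.
monolith = monthly_cost(instances=3, hourly_price=0.24)

# Microservices: twelve services, each kept on at least two small instances
# for availability, even when most of them sit largely idle.
microservices = monthly_cost(instances=12 * 2, hourly_price=0.05)

print(f"Monolith:      ${monolith:,.2f}/month")
print(f"Microservices: ${microservices:,.2f}/month")
print(f"Premium:       {microservices / monolith - 1:.0%} more than the monolith")
```

The point is not that monoliths always win; it is that fixed per-service overhead multiplies quickly, and the bill only makes that visible after deployment.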
Tools often contribute to cost problems as well. In many cases, those charged with redeploying applications on public cloud providers don't put much thought into the tools they use, and the usage cost of one tool can be three to five times that of another. Simply swapping out development, testing, and operations tools for services that are more cost-effective can reduce cash burn by 50% to 70%.

The key to winning this war is planning. You'll need good architecture and engineering talent to find the right path, which is probably the biggest reason we haven't gone down this road as often as we should. Enterprises can't find the people needed to make these calls; it's hard to find that level of skill.

Cloud providers can also be a source of help. Many have begun to use the "O word" (optimization) and understand that to keep their customers happy, they need to provide some optimization guidance. Although I would not call this a massive movement, I see it emerging as a focused approach to delivering better cost efficiency.

Steps you can take

To effectively manage application costs on public cloud providers, enterprises can follow these guidelines:

- Select the proper cloud services and tools. Carefully choose the cloud services and tools that match your application's needs, and avoid advanced or costly features that may be unnecessary.
- Use accurate load estimates. Precise load estimates keep you from paying for scalability you don't need. Dig through historical data and growth projections to ensure you're not overprovisioning or underutilizing resources.
- Be cost-aware when designing applications. Develop applications with a clear understanding of where and how the cloud delivers cost benefits, and align the application architecture with cloud cost dynamics.
- Understand utilization patterns. Determine the usage patterns of your applications. For example, if server utilization is stable at around 70%, consider whether maintaining on-premises resources would be more economical (a rough breakeven sketch follows at the end of this article).

Will this be easy? Nope. It's going to take hard work, and a lot of it. However, there's (usually) gold at the end of the cloud optimization rainbow. I suggest you at least take a look.
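To make that last guideline concrete, here is a minimal breakeven sketch for the steady-utilization case. The per-vCPU price, server cost, amortization period, and operations overhead are all assumptions I invented for illustration, and the comparison ignores reserved-capacity discounts, licensing, and staffing differences that would matter in a real analysis. Swap in figures from your own billing exports and hardware quotes.

```python
# Rough breakeven sketch for the "steady ~70% utilization" question above.
# Every figure here is a placeholder assumption, not vendor pricing; plug in
# your own numbers from billing exports and hardware quotes.

HOURS_PER_MONTH = 730

def cloud_monthly(vcpus_needed: int, price_per_vcpu_hour: float) -> float:
    """On-demand cloud cost for capacity that runs around the clock."""
    return vcpus_needed * price_per_vcpu_hour * HOURS_PER_MONTH

def on_prem_monthly(server_cost: float, amortization_months: int,
                    ops_overhead_per_month: float) -> float:
    """Amortized hardware cost plus power, space, and admin overhead."""
    return server_cost / amortization_months + ops_overhead_per_month

# A workload that keeps roughly 70% of a 32-vCPU server busy at all times.
steady_vcpus = int(32 * 0.70)

cloud = cloud_monthly(vcpus_needed=steady_vcpus, price_per_vcpu_hour=0.045)
on_prem = on_prem_monthly(server_cost=12_000, amortization_months=48,
                          ops_overhead_per_month=400)

print(f"Cloud (on demand): ${cloud:,.2f}/month")
print(f"On premises:       ${on_prem:,.2f}/month")
# Steady, predictable load tends to favor owned capacity; spiky or bursty
# load usually tips the math back toward the cloud.
```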