2023 might be the year of repatriation, but there are more challenging architectural decisions to make than what saves a few cloud dollars.

37signals, led by CTO David Heinemeier Hansson, implemented a cloud repatriation plan that has already saved the company $1 million. Previously, the company spent $3.2 million annually on cloud services, which it viewed as too much. The repatriation project invested $600,000 in eight servers hosted by Deft. Hansson now projects that the plan can save $10 million over five years. That's money they can put back into the business directly, investing in innovation and digital transformation projects. As a result, their cloud spending has decreased by 60%, going from around $180,000 to less than $80,000 per month, and Hansson expects another significant drop in expenditures. Despite now managing its own hardware, the company's ops team has stayed the same size.

It's not that easy

Of course, those who drive repatriation projects based on this anecdotal data may not find the same level of cost benefits. Money may indeed be saved by moving applications and data to cheaper, owned hardware platforms, but the benefits of cloud computing are more challenging to measure. Many enterprises may happily report cost reductions of 60% or more but miss the bigger picture in terms of the agility and speed to innovation that cloud computing can provide over owned hardware systems.

The danger is that enterprises will rush toward managed service providers and colocation services, or even rent their own data center space, and end up with long-term fixed costs and capital expenses that are not cost-justifiable once all hard and soft benefits are considered. This is not a push-back on repatriation, only a reminder that the value calculations are much more complex than many people understand. I fear enterprises may rush to on-premises systems to save a few bucks, much like they rushed to cloud platforms just a few years ago. The same mistakes can occur when companies don't understand the true value that's being delivered.

A balanced view of technology

Much of this comes down to carefully defining what value means to the business. For some businesses, cost savings translate directly into value if they are in an industry that does not reward innovation and speed, where the cheapest and best product wins the day. Take a company that just makes staples, has made staples for the last 100 years, and will continue to make staples to meet a steady demand. For these more traditional companies, the cloud really does not have value, and perhaps they should never have made the trek to the public cloud in the first place. For them, repatriation is really "right-sizing": moving to platforms that are more cost-efficient for the type of computing they need and the type of business they run.

For others, it's not that easy. Most businesses succeed through innovation, whether it's a product, a service, or a process that improves the customer experience, such as a supply chain so automated and optimized that products reach the customer faster and through a superior experience. Even traditional companies such as banks can benefit from this type of innovation, which is much easier to achieve when public clouds are the primary platforms, even if it's cheaper to operate on owned hardware. The value is in the innovation and speed to market, not in any savings gained by taking cheaper paths to computing that limit agility and speed of growth.

Now what?
So, is Linthicum against repatriation or for repatriation? Neither. This has never been about one direction or another; it's about matching the technology configuration and resources to the needs of the business. Of course, many people don't want to hear this; they want a simple answer to "Which one is better?" That's why we're here in the first place. We seem to be missing the strategic planning needed to understand the business and match it with a technology configuration that maximizes business value. Instead, we run headlong toward whatever the cool kids are doing these days. That's never been the right approach, and we'll end up fixing things on the back end and accumulating too much technical debt.