A recent outage caused many to speculate that multicloud could have avoided the impact, but we need to consider a few technology realities (and price tags).

I can tell when there is a public cloud outage because my phone blows up. It's usually reporters who want quotes about the impact on businesses, ways to avoid that impact, and the likelihood that this is a sign of things to come.

What can I say? Everything technical fails from time to time. Public clouds are no different. The objective with technology is to get the number of failures as close to zero as possible. With that said, it makes sense to look for solutions that lower the risk of outages taking down our business for any amount of time. Lately, that means looking at multicloud to mitigate risk. Let's examine what multicloud means for outages and how it would work.

Multicloud means we leverage two or more brands of public clouds, say AWS and Azure, or Azure and Google, or perhaps all three. By not putting all our eggs into one public cloud basket, we lower the exposure that our systems could be taken out by a single public cloud outage.

How would this work as an approach to business continuity? To protect ourselves from the impact of a single public cloud outage, as many are proposing, we need to take an active/active recovery approach. This means we keep the same application and connected data on two different cloud brands. When an outage occurs, you simply fail over your cloud-based application from the primary cloud provider to a secondary cloud provider, then return to the primary after the outage ends.

A solution some suggest is to move the application and data to another cloud provider at the time of the outage. Funny thing: you can't move your working copy of the data and the current version of the application if they are trapped inside an inoperable public cloud. So, let's skip that one.

The problem with keeping two versions of the same application and connected data on two or more cloud brands is that each cloud brand is different. They have different features and functions for storage, databases, compute, security, governance, and more. If you want to use cloud-native features on both clouds, which is typically preferred, the differences become even more problematic.

The end state of this multicloud option is that you'll pay the same standard operating costs twice, and you'll also pay about twice as much to customize the application and data for a different cloud, especially when you consider the need for specialized development, database, and administration skills for each. That cost profile throws the value of multicloud redundancy for outage protection out the window. It also overcomplicates the deployment of single applications that have a finite business value.

Of course, there are open source options and the architectural portability that containers provide. While containers do offer better portability, if you plan to maintain identical containers on two different public clouds, think again. They require specialized development and administration for each set of containers on each public cloud. No, the platforms are not as different as with cloud-native services, but they're still different. The ROI quickly goes south.

Instead of using two or more cloud brands, a better active/active approach is to leverage a single cloud brand and host the same application and data in a different geographic region.
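To make the mechanics of that concrete, here is a minimal sketch of the single-cloud, two-region pattern, assuming DNS-based failover with AWS Route 53 purely for illustration. The hosted zone ID, domain names, and health check path below are hypothetical placeholders, and a real deployment would also need the application actually running in both regions.

```python
# A minimal sketch (not a production setup) of single-cloud, two-region
# active/active failover using Route 53 DNS health checks.
# Assumptions: a hosted zone already exists, and the same application is
# already deployed behind "app-use1.example.com" (primary region) and
# "app-usw2.example.com" (secondary region). All IDs and names here are
# hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"  # hypothetical hosted zone ID

# 1. Create a health check that probes the primary region's endpoint.
health = route53.create_health_check(
    CallerReference="primary-region-check-1",  # any unique string
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app-use1.example.com",
        "ResourcePath": "/healthz",   # assumed app health endpoint
        "RequestInterval": 30,        # seconds between probes
        "FailureThreshold": 3,        # failed probes before failover
    },
)
health_check_id = health["HealthCheck"]["Id"]

# 2. Create a failover record pair: DNS answers with PRIMARY while its
#    health check passes, and automatically switches to SECONDARY
#    (the other region) when it fails.
def upsert_failover_record(role, target, health_id=None):
    record = {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": f"app-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,         # low TTL so the switch propagates quickly
        "ResourceRecords": [{"Value": target}],
    }
    if health_id:
        record["HealthCheckId"] = health_id
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]
        },
    )

upsert_failover_record("PRIMARY", "app-use1.example.com", health_check_id)
upsert_failover_record("SECONDARY", "app-usw2.example.com")
```

Keep in mind that DNS failover only redirects traffic. Keeping the data layer consistent across the two regions (for example, through cross-region database replication) is the harder and more expensive half of the problem, which is why even this single-brand approach costs more to build and operate than a single-region deployment.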
Most of the outages we see are localized to a single region, with other regions remaining largely unaffected. A regional approach to business continuity means more money to build and operate in two locations, but much less than it would cost to keep different versions of the same applications and data running on different cloud brands.

Outages are just a part of any technology. Public cloud providers have enjoyed a remarkable uptime track record during the past 10 years, yet the temptation remains to build in a bunch of redundancy around the fear of outages. For all practical purposes, multicloud redundancy is expensive overkill, with only a few exceptions.