New best practices support defining cloud solutions from the outside in to take advantage of the strengths of cloud computing.

It's the last three weeks of a 22-month cloud architecture project. You designed a configuration that spans many cloud computing resources: databases, artificial intelligence engines, application development platforms, devops toolchains, cloudops tools, and security and governance. Today you discovered that a few of the databases won't store information the way the applications require, the AI engine does not work with the security solution you selected, and the cost of the cloudops tools is 10 times the budgeted amount. Why did these things happen? Is it your fault?

Sometimes we catch these mistakes during the design phase of the cloud solution, whether it's a net-new system or a migration from traditional platforms. Unfortunately, these and similar problems arise all the time, even though good cloud architecture should minimize these types of errors. What bothers me is that many of these mistakes go unnoticed until implementation or even later. The solution might work, but the underlying problems still hurt the business because the solutions are grossly underoptimized: more operational costs and fewer benefits to the business.

For example, say you choose the wrong AI engine to support a fraud detection system. It might catch only one-third of the fraud that an optimized AI engine would. Nobody notices because the system is catching something, but it's bleeding the company dry behind the scenes in lost revenue.

As we progress farther down the road with cloud computing solutions, we are seeing more cloud architects make mistakes with a huge negative impact on the business. No one is perfect, but some architects do most things right and minimize the number of errors, both small and big, in their cloud solutions.
What are those architects doing right? Keep in mind there are no foolproof ways to avoid every mistake when configuring your cloud solution or picking the most optimized approaches. However, when I work with new architects, I'm quick to point out that you can do cloud architecture from the inside out or from the outside in. Each method has different advantages.

Inside out

The inside-out approach starts with the most basic concepts and technology components, such as storage, compute, databases, networking, and operations. Then you work outward to define the more detailed requirements: database models, performance management, specific platform requirements, and enabling technology such as containers and container orchestration (e.g., Kubernetes). In other words, you begin with basics, such as infrastructure, and then work outward to the specific solution requirements, asking how the holistic technology decisions and configurations (such as storage and compute designs or specific technologies) meet the specific business requirements. You build specific solutions to support the business.

Outside in

Outside in moves in the opposite direction. You begin with the specific business requirements, such as the business use cases for a specific solution or, more likely, many solutions or applications. Then you move inward to the infrastructure and other technologies specifically chosen to support those solutions or applications: databases, storage, compute, and other enabling technologies.

Most cloud architects move from the inside out. They pick their infrastructure before truly understanding the solution's specific purpose. They partner with a cloud provider or database vendor and pick other infrastructure-related solutions that they assume will meet their specific business requirements. In other words, they pick a solution in the wide before they pick a solution in the narrow.
This is how enterprises get solutions that function but are grossly underoptimized or, more often, have many surprise issues such as the ones discussed earlier. Discovering these issues requires a great deal of work and typically forces the team to remove and replace technology on the fly. They might have to add a database that supports the data model they need, even though they're paying license fees tied to a major enterprise database deal. Or they might replace the security system so it works with the AI, even though they spent half a million dollars to test and deploy the existing system a few years back. I know from experience that many of you are living this now.

I often hear the argument that the enterprise first needs to select the foundational technologies, based on existing assumptions, and then look at what its existing application portfolio requires. That was more cost-effective in the days when enterprises bought their own hardware and software, but now that we leverage cloud-based resources, it's no longer the case. Today you can move from specific application and solution requirements to any number of infrastructure options that support those applications and solutions, fully optimized. You could even have a unique infrastructure (databases, security, governance, and operations) that is a one-off for each application or small group of applications.

The benefit is supporting technology infrastructure that you can select and configure to optimally solve specific business problems. You no longer need to force-fit applications to technology decisions you already made. This makes outside in the preferred way to do cloud architecture because it truly leverages the power of the cloud.
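To make the outside-in flow concrete, here is a minimal sketch in Python. It captures an application's requirements first, then filters infrastructure candidates by whether their capabilities actually cover those requirements. All names and capability tags (such as "document-model" or "works-with-ai-engine") are hypothetical labels for illustration, not real products or a real selection framework.

```python
# Hypothetical outside-in selection sketch: start from what the
# application needs, then pick infrastructure that fits, rather than
# fixing the infrastructure first and force-fitting the application.

def fits(requirements, candidate):
    """True if the candidate's capabilities cover every requirement."""
    return requirements <= candidate["capabilities"]

def select_stack(requirements, candidates):
    """Return names of candidates that satisfy all app requirements."""
    return [c["name"] for c in candidates if fits(requirements, c)]

# Step 1 (outside): the application's specific requirements.
app_requirements = {"document-model", "works-with-ai-engine"}

# Step 2 (inward): candidate databases described by what they support.
candidates = [
    {"name": "relational-db", "capabilities": {"relational-model"}},
    {"name": "document-db",
     "capabilities": {"document-model", "works-with-ai-engine"}},
]

print(select_stack(app_requirements, candidates))  # ['document-db']
```

An inside-out process, by contrast, fixes `candidates` to a single pre-negotiated option before `app_requirements` is understood, which is exactly how the mismatches described earlier (a database that can't store what the application needs, an AI engine incompatible with the security tooling) go unnoticed until implementation.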