Yesterday we built architectures around static requirements that changed slowly. Today's cloud-based systems need to adapt quickly to growth and change.

Let's say there's a five-year-old midsize biotech company. We'll call it MidCo, and it specializes in the automated testing of blood and tissue samples. The bottom line boomed during MidCo's first five years in business. However, new startup competitors produce more advanced products that leverage technologies such as artificial intelligence, and they offer their solutions at much lower prices. In other words, MidCo is being disrupted. MidCo's IT department can't keep up with the changes R&D and marketing need to build more advanced, better-optimized testing technologies at a better price. Also, several of MidCo's patented products can't move into production because their existing cloud-based systems have too many limitations: they integrate with only a few suppliers, they can't leverage a third-party parts producer, and they can't access an optimized supply chain to automatically work around parts shortages or change suppliers.

What's killing this business is not bad luck or an increasingly crowded market. It's the fact that those who created the initial cloud-based solution to automate the company's major systems did not build in the ability to accommodate rapid change. This limitation became a bigger issue as the company grew within a market that changed faster than anyone initially anticipated.

These days, companies like MidCo are more the rule than the exception. You expect to see this problem in legacy companies that have too many unintegrated silos and technology stacks that go back 60 years. But these companies are typically less than 10 years old, and many have never used anything but cloud-based systems. So, how did they get disrupted?

Many people think cloud computing automatically provides the best agility as well as cost optimization. The reality is that nothing is automatic. You must build your systems to accommodate rapid change, cloud or not. In many cases, retrofitting rapid-change features into existing cloud systems is as hard as upgrading more traditional legacy systems. Without these features, the business will die the death of a thousand cuts as the disruptors take over. You only need to look at ride sharing, home sharing, and entertainment to see how it happens.

Today we must plan ahead for change. Any architecture, cloud or no cloud, needs to accommodate change. Remember, a system optimized for a single moment in time is not necessarily a good architecture ongoing. It's no longer enough to create a system that works today. It must also accommodate rapid changes tomorrow and the next day.

The ability to accommodate change, to be more agile, is not new. Both SOA (service-oriented architecture) and cloud computing encouraged architects to design for agility. It's no surprise that an architecture won't provide rapid-change capabilities unless you design for them. When you layer more systems on top of a design that does not easily accommodate rapid change, you get more layers of problems.

How do we plan for what we don't know will happen? We can design the ability to change into a system and make it an architectural priority. This requires extra steps that cost more time and money up front, but they pay off in the long run.

Some of these steps include:

- Place volatility into domains. Whether it's data, business rules, or application behavior, provide the ability to make quick configuration changes without redeveloping and retesting the entire system. This is even more powerful within public clouds, where widely distributed systems can leverage centralized configurations.
- Make pervasive use of services. From microservices to standard services, break out discrete, reusable services throughout the architecture. Give these services the ability to change without redeployment and retesting, at least most of the time.
- Use abstraction. Abstraction is a key tool for dealing with necessary changes to physical data and physical processing. Simply put, it lets us place volatility into domains, as defined above: we alter structures and sequences in software to change physical processes and physical data storage without forcing changes to back-end systems.

A minimal sketch of each pattern follows.
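First, volatility placed into domains. Here is a minimal Python sketch of business rules pulled from a centralized configuration store at runtime; the endpoint, rule name, and threshold are hypothetical stand-ins, and in practice the store might be AWS AppConfig, Azure App Configuration, or a service you host. The point is that tightening a rule becomes a configuration change, not a redeploy-and-retest cycle.

```python
import json
import urllib.request

# Hypothetical centralized configuration endpoint. Widely distributed
# systems can all read from the same source of truth.
CONFIG_URL = "https://config.example.com/sample-testing/rules"

def load_rules() -> dict:
    """Fetch the current business rules at runtime instead of baking
    them into the deployment artifact."""
    with urllib.request.urlopen(CONFIG_URL) as resp:
        return json.load(resp)

def sample_is_acceptable(sample_quality: float) -> bool:
    # The acceptance threshold lives in configuration, so changing it
    # does not require redeveloping or retesting the whole system.
    rules = load_rules()
    return sample_quality >= rules.get("min_quality", 0.9)
```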
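Second, a rough sketch of one discrete, reusable service, using Flask for brevity (any HTTP framework works). The endpoint and payload are hypothetical. Callers depend on the versioned contract, not the implementation, so the logic behind it can change without redeploying or retesting the callers.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# One discrete capability behind a stable, versioned contract.
# R&D, marketing, and supply-chain systems can all reuse it.
@app.route("/v1/supplier-quote", methods=["POST"])
def supplier_quote():
    part = request.get_json()
    # Hypothetical quoting logic; swap in a new supplier or pricing
    # model here without changing the contract callers rely on.
    return jsonify({
        "part_id": part["part_id"],
        "unit_price": 12.50,
        "lead_time_days": 14,
    })

if __name__ == "__main__":
    app.run(port=8080)
```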
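Third, abstraction over physical data storage, sketched with only the standard library. The class and method names are illustrative. Application code depends on the interface, so moving the data or swapping databases changes only the adapter behind it, not the callers.

```python
from abc import ABC, abstractmethod

class SampleStore(ABC):
    """Abstraction over physical storage. Callers never touch the
    back-end system directly, so the back end can change freely."""

    @abstractmethod
    def save(self, sample_id: str, result: dict) -> None: ...

    @abstractmethod
    def load(self, sample_id: str) -> dict: ...

class InMemoryStore(SampleStore):
    """Stand-in adapter. A Postgres or DynamoDB adapter would implement
    the same two methods, and callers would never know the difference."""

    def __init__(self) -> None:
        self._rows: dict[str, dict] = {}

    def save(self, sample_id: str, result: dict) -> None:
        self._rows[sample_id] = result

    def load(self, sample_id: str) -> dict:
        return self._rows[sample_id]

# Usage: code written against SampleStore keeps working when the
# physical store changes.
store: SampleStore = InMemoryStore()
store.save("S-1001", {"assay": "tissue", "quality": 0.95})
print(store.load("S-1001"))
```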
Yes, there are many more tricks to consider. However, if you don't understand these solution patterns, you probably won't realize they're needed until it's too late.

Most cloud architects still don't prioritize designs that can accommodate rapid change. Most don't understand how to do it, and those who do often cite the additional cost and time as reasons to skip the extra steps. Unless we start planning for change, we'll see billion-dollar companies disappear as they're disrupted. Most of these major failures will trace right back to a company's inability to keep up with the changes needed to stay competitive.

Being agile is no longer a nice-to-have; it's a requirement. It doesn't matter if you're building a new company or fixing an existing architecture. Figure out how to place volatility into domains. Break out and reuse services whenever and wherever possible. Define how a system can best use abstraction. Simple, right? It's time to get to work.