It’s the week of AWS re:Invent and time for end-of-the-year cloud planning. Hopefully, we’ll see a shift in thinking toward making more effective use of cloud computing. I’m trained to look for patterns in technology. I developed this survival technique in my many roles as a CTO, where you’re tasked with placing bets on what technology will be important, especially the timing of when to make investments and what those investments should be. It does not matter if you’re a true technology company (as were most of my employers), a services company, a traditional enterprise, or a new one. Everyone is attempting to figure all this out. Understanding what concepts are emerging, which ones will be relevant, and the timing of that relevance is a career skill.

This week’s conference (aka “Cloud Computing Woodstock”) will bring a wave of announcements, including many that you’ll need to consider as you look for patterns. Dave Vellante of SiliconAngle does a great job of looking at the preshow reveals by talking to AWS chief executive Adam Selipsky. However, we’ll see many more announcements from AWS and from other cloud technology providers in the days to come. Out of all that will come some key data points that need to be considered in terms of emerging patterns, or more specifically, macropatterns.

Beyond the noise from the shows and the press, I think we can determine that some new macropatterns are emerging. These patterns set a theme, and then micropatterns emerge. For example, we’re seeing an acceleration in the focus on cloud operations (cloudops). This is a macropattern. We’re also seeing an acceleration of several micropatterns, such as AIops and observability, that support cloudops. Of course, there may be other sub-micropatterns under the micropatterns, and so on.

What are the new macropatterns we’ll see in 2023? As I alluded to last week, 2023 will likely focus on more pragmatic concepts. In short, planning and strategy will be the methods to get more value out of cloud technology—or any technology, for that matter. If I were going to name this macropattern, it would be “optimization.” We’ve beaten the concept of architectural optimization to death here, with the understanding that we’re looking for cloud configurations that do more than just “work.” We want to return the most value to the business for the smallest amount of spending. Of course, we want to do the same with cloud cost optimization using finops processes and tools. It looks like corporate data optimization will become an emerging theme in 2023 as well. These themes may be an outcome of the patterns we’ll see this week at re:Invent.

Most of this talk of “optimization” is driven by the fact that cloud computing ROI has been less than stellar for many companies, and it does not seem to track with spending. Indeed, we see similar-size companies spending about the same amount on cloud migration, digital transformation, and other modernization efforts but having widely different results. Some companies find good business value. Others find negative value and have nothing to show for their cloud computing journeys. Boards, executives, and investors are starting to ask questions.
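To make that spending-versus-value gap concrete, here is a minimal sketch in Python of the kind of unit-economics check a finops practice might run. The workload names and dollar figures are entirely hypothetical; the point is the calculation of cloud spend per unit of business output, not the numbers.

```python
# A minimal, illustrative sketch of a finops-style unit-economics metric:
# cloud spend per unit of business value delivered. All figures below are
# hypothetical and exist only to show the calculation.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    monthly_cloud_spend: float      # dollars billed for this workload
    business_units_served: float    # e.g., orders processed, reports generated


def cost_per_unit(w: Workload) -> float:
    """Cloud spend divided by the business output it supports."""
    if w.business_units_served == 0:
        return float("inf")         # spend with no measurable business output
    return w.monthly_cloud_spend / w.business_units_served


# Hypothetical workloads -- the comparison matters, not the numbers.
workloads = [
    Workload("order-processing", monthly_cloud_spend=42_000, business_units_served=1_200_000),
    Workload("reporting-cluster", monthly_cloud_spend=18_000, business_units_served=9_500),
    Workload("legacy-lift-and-shift", monthly_cloud_spend=30_000, business_units_served=0),
]

# Rank workloads by how efficiently they convert cloud spend into business output.
for w in sorted(workloads, key=cost_per_unit):
    print(f"{w.name:<22} ${cost_per_unit(w):,.2f} per unit of business value")
```

In practice, the value metric would come from the business side and the spend figures from the provider’s billing data; comparing workloads against each other over time is what shows where optimization effort should go.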
So, it’s an easy call to say that many of the overall macropatterns for 2023 will focus more on optimization: optimization of cloud computing architectures, cloud spending, data, security, AI systems, etc.—anywhere we’re attempting to make things more valuable for the business, rather than just tossing money at technologies that may or may not work in an optimized way. In my opinion, this is a return to a better way of thinking about the use of cloud computing resources. However, it’s going to come with challenges. I’ll mention two.

First, most of those working on cloud-based systems don’t understand how to optimize things, certainly not technology. There is no fundamental understanding of how to find the sweet spots with any technology in terms of maximizing business value. Many understand how to make a business case, which means selling a plan internally, but there is unlikely to be any ongoing measure of what value is being returned to the business or any plan for what to do if ROI is low.

Second, optimization requires self-assessment and self-reflection, and some of those self-assessments will uncover bad decisions that leaders made. If you’re the one who made those bad decisions, solid and honest assessments will be scary. I suspect that many will be manipulated or ignored in defense of careers. I don’t have any easy answers, but I see this firsthand and hear from others that it is often an issue.

Finally, avoid the temptation to toss tools at this. If you look at most ops tools today, including finops and AIops, they all brag about providing optimization analytics. The idea is to automate the ability to optimize cloud costs, cloudops, etc., by using a tool. Although tools are a core part of optimization, they should not drive strategy, processes, and metrics; those should be agreed on by leadership.

I’m kind of happy about this macropattern of optimization in terms of how we deal with cloud technology and how we better align it to the business. Not being naive, I understand that this will be another challenge for IT, but this one has a great deal of value.