The hyperscalers now offer multicloud ops tools. Cloud-native tools sound good in theory, but here are a few other things to keep in mind.

The rise of cloudops tools (such as AIops) is in full swing. There are three basic choices: on demand as a non-native tool, hosted on premises, or a hosted cloud-native tool offered by a public cloud provider. Which door should you choose?

The on-demand, non-native category includes the majority of AIops tools that run on a hosting service, sometimes on a public cloud. The breadth of the tools' capabilities drives this choice more than any preferred deployment model. If more on-premises systems need to be monitored and controlled, that's better accomplished with on-premises hosting because the data does not need to flow back to a centralized hosting service over the open internet. At times it may make sense for the ops tool to run in both places, and some tools can do that in coordinated ways. If it's a solid tool, it should not matter how you deploy it.

Cloud-native tools are owned by a specific cloud provider. They were created to monitor and control that provider's own native cloud services, but they can also monitor and control services on other clouds. This support for multicloud deployments is logical when you consider the growing number of multicloud configurations. However, you need to weigh the capabilities of the tool now as well as its ability to address future needs as your deployments become more complex and heterogeneous over time.

At this moment, I could make the argument that using a native tool is a good idea. Most enterprises follow an 80/20 rule when deploying to multiple cloud brands: 80% of the applications and data reside in a single cloud brand while the other 20% reside in other brands, for example, 80% Microsoft, 15% AWS, 5% Google. Thus, it may make sense to leverage a cloud-native ops tool that does a better job of supporting its own cloud's services and can also be deployed as a multicloud ops tool that supports other public clouds. The mix makes sense given your ops approach, at least for now.

The trouble with multicloud is that it's always changing. Although the mix in the example above is the state today, tomorrow's market may include two more public clouds, say IBM and Oracle, as well as a normalized percentage of applications and data running across different cloud brands. We could even see a common deployment pattern where a single public cloud holds less than 30% of the workloads and data on average, with the remaining applications and data distributed across four or more public clouds as part of the multicloud.

Here's the question that comes up: If you use a single cloud-native tool running on a single public cloud provider and it can monitor and control other cloud brands as well, should you select that ops tool? The answer is probably no, and it has nothing to do with the tool being native to a specific public cloud provider. It's the architectural reality that ops tools need to be centralized and decoupled from the platforms they control. They need to support the monitoring and management of all public clouds in your multicloud, as well as most traditional on-premises systems.

A hosted cloud-native tool (option 3) could solve your problems in the short run. However, in the long run, your cloudops tool needs to run on a neutral platform to ensure the most effective solution now and into the future.
Therefore, the best cloudops tool choices lie in options 1 (hosted, on demand) or 2 (on premises), or both.
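To make the "centralized and decoupled" argument concrete, here is a minimal sketch of what a neutral, provider-agnostic ops layer can look like in code. It is an illustration under stated assumptions, not any vendor's implementation: the CloudAdapter interface, the AwsAdapter and AzureAdapter stubs, and the metric shape are all hypothetical, and no real cloud SDKs are called.

```python
# Hypothetical sketch of a decoupled multicloud ops layer.
# The adapters are illustrative stubs; a real deployment would call
# each provider's monitoring API behind the same interface.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Metric:
    provider: str   # which cloud brand reported the metric
    resource: str   # the monitored resource, e.g., a VM or database
    name: str       # metric name, e.g., "cpu_utilization"
    value: float


class CloudAdapter(ABC):
    """Provider-specific collection lives behind this interface,
    so the central ops tool never depends on any one cloud."""

    @abstractmethod
    def collect_metrics(self) -> list[Metric]:
        ...


class AwsAdapter(CloudAdapter):
    def collect_metrics(self) -> list[Metric]:
        return [Metric("aws", "vm-1", "cpu_utilization", 72.0)]


class AzureAdapter(CloudAdapter):
    def collect_metrics(self) -> list[Metric]:
        return [Metric("azure", "db-7", "cpu_utilization", 41.0)]


class CentralOps:
    """The centralized ops tool: it aggregates metrics across adapters
    and applies one policy, regardless of where workloads run."""

    def __init__(self, adapters: list[CloudAdapter], cpu_alert_threshold: float = 70.0):
        self.adapters = adapters
        self.cpu_alert_threshold = cpu_alert_threshold

    def run_once(self) -> list[str]:
        alerts = []
        for adapter in self.adapters:
            for metric in adapter.collect_metrics():
                if metric.name == "cpu_utilization" and metric.value > self.cpu_alert_threshold:
                    alerts.append(f"[{metric.provider}] {metric.resource} CPU at {metric.value}%")
        return alerts


if __name__ == "__main__":
    ops = CentralOps([AwsAdapter(), AzureAdapter()])
    for alert in ops.run_once():
        print(alert)
```

The point of the pattern is that supporting another cloud brand, or an on-premises estate, means adding another adapter; the central tool itself never has to be re-platformed.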