We’ve learned to take services for granted, but they are actually more important than most cloud architects think.

Services are really old school if you think about it. We’ve progressed from early efforts around API-enabling applications, to object-oriented programming, to CORBA-based services, to SOA, to containers, to serverless functions, to today’s use of microservices. What’s common across that journey is the underlying belief that we can write something once and use it many times in many different applications or utilities, not to mention the ability to combine services into a new service in its own right. All of this starts with service decomposition: breaking applications down into discrete, reusable parts.

The word “service” is overused today; in the cloud computing world it describes almost anything exposed by a public cloud provider, such as storage, compute, and databases. Services, at least as I understand them, expose both behavior and the data bound to that behavior in ways that allow developers to be more productive.

For example, a service might be built to do predictive analytics on any type of data set passed to it. It could then be invoked from an inventory management application or a sales order entry system. If the service is changed or improved, both of those applications benefit. By changing a single service, you change the way you do predictive analytics without having to cull through the code of a hundred or so applications to fix or improve that feature. You isolate volatility within a single domain, which is fundamental to good architecture.

Now the bad news. Lift and shift is the enemy of service orientation, as well as of cloud-native features. Moving applications to the cloud as fast as you can, without regard for service enablement, is a bad idea. Unfortunately, it is also by far the most popular way to migrate to the cloud.

You lose out on three things by taking this route. First, you lose the ability to reuse services and the productivity that services can bring. Second, you can’t mix application services with cloud-native services, such as advanced security and performance management. Third, you miss the opportunity to take advantage of an advanced architecture that should increase productivity, typically by a factor of two. If you’re spending $100 million a year on maintenance development, you’ll likely get that down to $50 million a year by leveraging services properly.

So why aren’t enterprises all-in on service orientation if the downsides are so unfavorable? Budget, of course. Refactoring applications to take advantage of cloud-native and application services pretty much triples migration costs. I can see why many enterprises have bypassed the wide use of services in favor of something faster and cheaper.

Understanding the trade-offs, perhaps it’s time to pursue at least a hybrid strategy: refactor some applications to use services, but not all. Where I do see refactoring for service enablement, organizations are typically taking this approach. I’m not preaching a service-oriented sermon, just offering an idea worth considering.
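To make the predictive-analytics example above concrete, here is a minimal sketch in Python. Everything in it is hypothetical and illustrative, not any particular platform’s API: one shared forecasting service, with a deliberately naive model, reused by both an inventory application and a sales order application. Improve the forecast logic in that one place, and every caller benefits.

```python
# Hypothetical shared analytics service; names and logic are illustrative only.
from statistics import mean


def forecast_next(values: list[float]) -> float:
    """Predict the next value from a history of observations.

    A deliberately naive moving average stands in for a real model;
    swapping in a better algorithm here upgrades every caller at once.
    """
    if not values:
        return 0.0
    return mean(values[-3:])  # average of the last three observations


# Two different applications reuse the same service.
def inventory_reorder_point(daily_demand: list[float]) -> float:
    # Inventory management app: pad the forecast with a safety factor.
    return forecast_next(daily_demand) * 1.5


def annual_sales_projection(monthly_sales: list[float]) -> float:
    # Sales order entry app: annualize the forecast for planning.
    return forecast_next(monthly_sales) * 12


if __name__ == "__main__":
    print(inventory_reorder_point([120.0, 135.0, 128.0, 140.0]))
    print(annual_sales_projection([80_000.0, 95_000.0, 91_000.0]))
```

In a real deployment that forecasting function would sit behind an API rather than a local import, but the architectural point is the same: the volatility lives in one domain, and improving it there improves every application that uses it.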