The need to have on-premises systems talk to public cloud systems is becoming critical. But so few enterprises are prepared.

You are trying to get an end-of-quarter report out and you’re having some trouble. It seems that while sales are recorded on a public cloud system, inventory is recorded on an on-premises system. You need to combine both data stores for the report, and you have no way of doing so. How was this allowed to happen?

The fact of the matter is that not all legacy systems and data can migrate to the public cloud, so those on-premises systems need to integrate with the data on the public cloud systems to function. While this was a known problem in 2011 when we started on the cloud journey, in 2018 many organizations still have not gotten around to solving it.

Enterprises typically don’t think about data, process, and service integration until there is a tactical need. Even then, they usually work around the issue with a quick-and-dirty solution, which often involves FTP, a file drop, or even Federal Express.

The result of all this is that a lot of integration between the cloud and on-premises systems remains undone, be it data integration, process integration, or service integration. This will become a crisis in 2019 for many enterprises, because they can spend the entire year, or more, just pulling together integration solutions for their public cloud systems, which they now depend on for some mission-critical processes.

To avoid that crisis, here’s what you need to do.

First, catalog all data, services, and processes, using some sort of repository to track them all. You need to do this for all on-premises systems and all public cloud systems, and you need to do so with the intent of understanding the properties of each asset so you can make sure the right things are talking to the right things. (A sketch of what a simple catalog entry might look like follows at the end of this article.)

Second, figure out logically how things need to be integrated. This means understanding at a high level what data needs to flow where, and why. You will then take this and break it down to a more primitive level, where you’ll identify the data elements and server properties as well. (A sketch of a logical flow map also follows below.)

Third, pick the tools and technology you’ll need to carry out the integration. Enterprises too often go directly to this step, but that will only ensure that you pick the wrong tools, because you don’t yet know enough.

Of course, there are more complexities to deal with, such as security, governance, and networking, which will have to be figured out as well. But start with the basics I covered here, because you can’t deal with the complexities until you’ve dealt with the basics.
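To make the first step concrete, here is a minimal sketch in Python of what a catalog entry might look like. The schema, field names, and sample systems here are all illustrative assumptions, not a prescribed standard; a real metadata repository or commercial data catalog will track far richer properties.

# A minimal, illustrative sketch of a data/service catalog.
# All field names and sample entries are hypothetical; a real
# repository would track many more properties (owners, SLAs,
# PII flags, schemas, and so on).
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str            # logical name of the data store, service, or process
    kind: str            # "data", "service", or "process"
    location: str        # "on-premises" or "public-cloud"
    system: str          # hosting system, e.g. an ERP database or a SaaS app
    properties: dict = field(default_factory=dict)

catalog = [
    CatalogEntry("sales-transactions", "data", "public-cloud", "crm-saas",
                 {"format": "JSON", "refresh": "real-time"}),
    CatalogEntry("inventory-levels", "data", "on-premises", "erp-db",
                 {"format": "relational", "refresh": "nightly"}),
]

# Once every asset is cataloged, you can ask basic questions,
# such as which on-premises data stores exist at all.
on_prem_data = [e for e in catalog if e.location == "on-premises" and e.kind == "data"]
for entry in on_prem_data:
    print(f"{entry.name} ({entry.system}): {entry.properties}")

The point of the sketch is that cataloging is mostly disciplined bookkeeping: until assets on both sides are recorded with their properties, there is no way to know which things should be talking to which.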
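The second step, the logical integration map, can start out just as simple: a list of flows that records what moves where, which data elements are involved, and why, before any tools or transports are chosen. Again, the structure below is a hypothetical sketch, not a reference design, and the flow names reuse the sample catalog entries above.

# Hypothetical sketch of a logical integration map for step two.
# Each flow names a source, a target, the data elements involved,
# and the business reason; no technology is selected yet.
flows = [
    {
        "source": "sales-transactions",    # public cloud, per the catalog above
        "target": "quarterly-report",      # consuming process
        "elements": ["order_id", "amount", "close_date"],
        "why": "sales figures feed the end-of-quarter report",
    },
    {
        "source": "inventory-levels",      # on-premises, per the catalog above
        "target": "quarterly-report",
        "elements": ["sku", "on_hand_qty"],
        "why": "inventory counts feed the same report",
    },
]

# Even a plain listing like this makes the required cloud-to-
# on-premises integrations visible before any tooling is picked.
for flow in flows:
    print(f"{flow['source']} -> {flow['target']}: {flow['why']}")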