Just because you can, doesn’t mean you should. Complexity, latency, and network outages may give you pause.

The notion of the intelligent edge has been around for a few years. It refers to placing processing on edge devices to avoid sending data all the way back to a centralized server, which typically resides on a public cloud.

While not always needed, the intelligent edge can leverage machine learning technology at the edge, moving knowledge building away from centralized processing and storage. Applications vary, from factory robotics to automobiles to on-premises edge systems residing in traditional data centers. It’s useful in any situation where it makes sense to do the processing as close to the data source as you can get.

We’ve wrestled with this type of architectural problem for many years. With any distributed system, including cloud computing, you have to consider the trade-offs of process and storage placement across different physical or virtual devices. The intelligent edge is no different. It’s easy to place processing and storage at the edge, but in many cases it becomes a management and operations nightmare.

Keep in mind that the relationship of edge devices to centralized systems is always many-to-one. Managing a centralized system is fairly simple, considering that it sits in one virtual location. When you have to manage hundreds or thousands of intelligent edge devices, including configuration management, security, and governance, operations become a nightmare. I’m finding that companies that pushed processing and data storage out to the edge often pull them back to centralized servers just due to management complexity.

Latency and network outages can bite you in the butt. We depend on networks to keep us connected with edge computers, which in many cases are mobile and thus connected via cellular networks.
You’ll probably have to deal with disconnected situations more often than you’d like, and you must figure out a way to ensure that these outages and performance issues don’t kill your overall system, both edge and centralized. If you don’t, you’ll find that data doesn’t sync and processing isn’t managed properly. You may reach a point where the systems become unreliable and untrusted.

Try explaining to a commercial pilot that the in-flight engine diagnostics on the intelligent edge failed due to a network problem. The resulting flameout won’t go over well on the flight deck.

Of course, not all edge limitations are that profound. Typically you’re making architectural mistakes that won’t be discovered until the system begins to scale. By that time, too much has been committed to the intelligent edge architecture, and fixes require systemic change. Try telling your boss that. It won’t go over well there, either.

Make sure you consider the trade-offs.
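One common way to keep an edge node useful through an outage is a store-and-forward buffer: readings are captured locally no matter what, and drained to the central server only once the network returns. Here is a minimal Python sketch of that pattern; the `EdgeBuffer` class and the `send` callable are illustrative names, not any particular product’s API.

```python
import json
import time
from collections import deque

class EdgeBuffer:
    """Store-and-forward queue for an edge device: readings are
    buffered locally and flushed to the central server whenever
    the network is available."""

    def __init__(self, max_size=10_000):
        # Bound the buffer so a long outage can't exhaust device
        # memory; the oldest readings are dropped once the cap is hit.
        self.queue = deque(maxlen=max_size)

    def record(self, reading: dict) -> None:
        """Always succeeds locally, even while disconnected."""
        self.queue.append({"ts": time.time(), "data": reading})

    def flush(self, send) -> int:
        """Try to drain the queue; stop at the first network failure.

        `send` is any callable that raises OSError on failure (for
        example, a wrapped HTTP POST). Returns the number of
        readings delivered."""
        delivered = 0
        while self.queue:
            item = self.queue[0]
            try:
                send(json.dumps(item))
            except OSError:
                break  # still offline; retry on the next flush
            self.queue.popleft()  # remove only after confirmed delivery
            delivered += 1
        return delivered
```

In practice you would call `flush` on a timer with backoff, and the central side must tolerate late, batched arrivals. The design choice that matters is removing an item only after confirmed delivery, which trades occasional duplicates for never silently losing data.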