Edge computing offers lower latency and bandwidth savings, but the lack of standards and problems with interoperability and security still need to be resolved.

Edge computing emerged as a revolutionary tool to address the rising demand for real-time data processing. By enabling data processing at the edge of the network, closer to where it's generated, edge computing significantly reduces latency and bandwidth use. That's the story we've been told for years, but how will it evolve with the new demands of generative AI and the bandwidth explosion?

Edge computing today

Currently, edge computing is a major force in many sectors. It ensures lower latency and optimized data delivery, or at least it has the potential for both benefits. The internet of things, autonomous vehicles, and Industry 4.0 all incorporate edge computing widely.

However, edge computing has entered its awkward teenage years. The number of applications has not been what many expected. In many instances, edge computing first looked like the target architecture, but it turned out to make more sense to centralize more of the processing and data storage. This is mainly due to the expanding availability of bandwidth, such as 5G, and the difficulty of managing many devices and systems at the edge. I believe the management problem is the most significant hindrance, and I'll explain why.

Edge computing challenges

Despite the many benefits, edge computing is full of challenges. For instance, decentralizing data processing brings security and privacy concerns. A friend who deployed edge systems on oil rigs had 10% of the edge computing devices stolen, along with the data stored on them. The data was encrypted, but what a wake-up call when systems can grow legs and walk away. That has never been a problem with the cloud.
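Encryption at rest is the minimum defense when hardware can walk away. As a minimal sketch of the idea, not a prescription for any particular edge platform, here is how an edge process might encrypt readings before writing them to local storage using Python's cryptography library. The key path, data path, and key-provisioning approach are assumptions for illustration:

```python
# Minimal sketch: encrypting edge data at rest with the
# "cryptography" library's Fernet recipe (AES plus HMAC).
# Assumption: the key is provisioned out of band (ideally from a
# TPM or key-management service) and never stored beside the data.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_PATH = Path("/secure/element/edge.key")  # hypothetical key location
DATA_PATH = Path("/var/edge/readings.enc")   # hypothetical data file

def load_key() -> bytes:
    # In production this would come from hardware-backed storage;
    # reading a file keeps the sketch self-contained.
    return KEY_PATH.read_bytes()

def store_reading(reading: str) -> None:
    """Encrypt one reading and append the ciphertext to local storage."""
    fernet = Fernet(load_key())
    token = fernet.encrypt(reading.encode("utf-8"))
    with DATA_PATH.open("ab") as out:
        out.write(token + b"\n")

def read_all() -> list[str]:
    """Decrypt everything previously stored on this device."""
    fernet = Fernet(load_key())
    tokens = DATA_PATH.read_bytes().splitlines()
    return [fernet.decrypt(t).decode("utf-8") for t in tokens]
```

With this pattern, a stolen device yields only ciphertext; as long as the key lives somewhere other than the disk next to the data, the loss is hardware, not information.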
Standardizing edge computing devices and ensuring their interoperability are other significant hurdles. There is no way to leverage common digital radio communications or management standards to operate these systems; edge computing vendors need to get on the same page. Despite the rise of some common standards, edge computing largely lacks interoperability with systems in enterprise data centers. With each edge computing vendor supporting its own "standard," it gets expensive to keep the various skills around to support edge-based systems.

Edge computing vendors are quick to excuse the lack of standards because each edge-based system's mission is vastly different from the others. One may focus on high-speed data gathering and processing to support airplane engine operations; another may support point-of-sale terminals. Both are edge computing, but they have very different missions.

Edge computing continues to find a path of promising innovation. However, we may be at innovation saturation and need to focus on expansion and operations.

The future at the edge

Developments such as 5G networking and generative AI will further elevate edge computing's potential. Knowledge engines running at the edge are a massive area of growth right now. The advent of 5G will dramatically speed up data relay and computational tasks, while AI will enable much more sophisticated data processing at the edge.

The core issues with edge computing remain the lack of standards and the vast heterogeneity that leads to complexity. The resulting operational problems may be more difficult to overcome than most understand.

There are a few ways to look at this. First, seeing edge computing accepted as a valid architecture pattern is an apparent success. We've understood that moving data and processing closer to the point of generation is a better approach for many use cases, and now we have the technology and bandwidth to pull it off.

Second, given the diverse set of problems that edge computing solves, it's unlikely that we'll have common standards anytime soon. You can't expect the data storage standards for an oil rig and an autonomous vehicle to be the same. They are attempting to solve very different problems, and you don't want "standards" that limit what each needs to do.

Edge computing will likely evolve into distinct usage patterns over the next few years, most of them defined by technology developed for those applications. Standards will follow the usage patterns, and we'll likely see many of them. Edge computing will grow alongside cloud computing, AI, cloud-native development, and the rest, but we must understand that it will vary by application. It's a concept that can leverage many different technology types, and that's why it's useful.
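As a closing illustration of the pattern behind the bandwidth savings discussed at the outset, here is a minimal sketch of the filter-and-aggregate approach many edge workloads share: raw readings stay on the device, and only a compact summary travels over the network. The sensor source, window size, and uplink are all hypothetical stand-ins:

```python
# Minimal sketch of the edge filter-and-aggregate pattern:
# raw readings stay local; only a compact summary goes upstream.
import random      # stands in for a real sensor driver
import statistics

WINDOW = 1_000         # raw readings per summary
ANOMALY_SIGMAS = 3.0   # how far from the mean counts as an anomaly

def read_sensor() -> float:
    # Placeholder for a real sensor read (vibration, temperature, etc.).
    return random.gauss(20.0, 1.5)

def summarize(window: list[float]) -> dict:
    """Reduce a window of raw readings to a few bytes of summary."""
    mean = statistics.fmean(window)
    stdev = statistics.stdev(window)
    anomalies = [x for x in window if abs(x - mean) > ANOMALY_SIGMAS * stdev]
    return {"count": len(window), "mean": mean, "stdev": stdev,
            "anomalies": anomalies}

def send_upstream(summary: dict) -> None:
    # Placeholder for an HTTPS or MQTT publish to the cloud.
    print(f"uplink: {summary}")

if __name__ == "__main__":
    window = [read_sensor() for _ in range(WINDOW)]
    # 1,000 raw readings never leave the device; one small record does.
    send_upstream(summarize(window))
```

Whatever form the standards eventually take, this is the shape of the win: the decision about what is worth sending gets made at the point of generation.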