With a newly expanded distributed workforce, many enterprises are considering a move to the edge. Make sure you've thought about security and data volume.

When computing first began, computers were far too expensive for most companies, so they were shared via timeshare services. Processing was centralized on multiuser systems. Then minicomputers, PCs, and LANs came along, and we moved processing out to PC workstations and smaller compute platforms. We saw the decentralization of computing. Now, years later, we're centralizing processing again on public cloud hyperscalers, but this time using a multitenant approach. Getting dizzy? These days we're also considering decentralization again, with the rise of edge computing.

We've talked about edge here before, and my conclusion remains that there are good reasons to leverage edge computing, chiefly to reduce latency and to store data locally. The pandemic has pushed employees and processing to a highly distributed model, and not by choice. Edge computing is front and center as something that should be leveraged alongside cloud computing, and in some cases instead of it.

Let's clear a few things up. A few edge computing models are emerging. The first is processing data directly on an IoT device, say a thermostat or an autonomous vehicle. Let's call this "device oriented." The second is using compute platforms or services that are geographically distributed and used by multiple clients, typically workstations. Let's call this "edge server oriented."

The second model is the most interesting to enterprises that are rethinking compute distribution post-pandemic. It's also the newest use of the edge computing model, and it comes in two flavors: proprietary edge devices sold by the public cloud providers, and private servers that sit in small, geographically dispersed data centers, in office buildings, and even in homes.
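To make the device-oriented model concrete, here's a minimal sketch, assuming a hypothetical thermostat-style device: readings are processed locally, and only the ones that need attention are forwarded upstream, which is where the latency and bandwidth savings come from. The threshold, function names, and callback are all illustrative, not any vendor's API.

```python
# Hypothetical sketch of a device-oriented edge pattern: the device
# filters sensor readings locally and forwards only out-of-range
# values upstream, trading cloud round-trips for on-device work.

THRESHOLD = 30.0  # illustrative alert threshold, degrees Celsius

def process_locally(readings):
    """Return only the readings that need cloud attention."""
    return [r for r in readings if r > THRESHOLD]

def handle(readings, send_to_cloud):
    anomalies = process_locally(readings)
    if anomalies:
        send_to_cloud(anomalies)  # one upstream call instead of many
    return len(readings) - len(anomalies)  # readings resolved at the edge

# Usage: five readings arrive; only one crosses the threshold,
# so only one value travels to the cloud.
sent = []
resolved = handle([21.5, 22.0, 35.2, 23.1, 20.9], sent.extend)
print(resolved, sent)  # → 4 [35.2]
```

The design choice worth noticing is that the decision logic lives on the device itself; the cloud only sees the exceptions.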
In moving to these new edge models, most enterprises are skipping a few considerations, including:

Security. Edge architectures add complexity, considering that the data must be secured on the client workstation as well as in the cloud, with some architectures adding an intermediary server that also requires security. Rather than focusing on securing the data in a single public cloud, we've gone to securing the information on multiple systems that store data.

Data volume. When you add lower-powered, distributed compute platforms, the volume of data may overwhelm them. A public cloud storage system with built-in automated database scaling can handle pretty much any volume of data that's tossed at it. The same can't be said for edge servers or client workstations.

This is not to say that edge computing can't be a focus of your post-pandemic move to cloud computing. I'm saying that you need to understand the issues you'll face.
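The data volume point can be sketched in a few lines. This is a toy model, not a real storage system: the edge tier has a fixed capacity it cannot autoscale past, so once its local buffer fills, overflow has to be shipped to elastic cloud storage. The class and its `offload` callback are hypothetical names for illustration.

```python
from collections import deque

# Hypothetical sketch: an edge server has fixed local capacity.
# Once the buffer fills, every additional record must be offloaded
# to the cloud, which (unlike the edge tier) can scale to absorb it.

class EdgeBuffer:
    def __init__(self, capacity, offload):
        self.capacity = capacity   # fixed; edge hardware can't autoscale
        self.local = deque()
        self.offload = offload     # callable that forwards to cloud storage

    def ingest(self, record):
        if len(self.local) < self.capacity:
            self.local.append(record)  # stored at the edge
        else:
            self.offload(record)       # edge is full; cloud absorbs it

# Usage: five records arrive at an edge node that holds only three.
cloud = []
buf = EdgeBuffer(capacity=3, offload=cloud.append)
for record in range(5):
    buf.ingest(record)
print(len(buf.local), cloud)  # → 3 [3, 4]
```

The point of the sketch is the asymmetry: the `capacity` value is a hard limit you provision up front, while the cloud side is just a callback that never says no.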