Microsoft brings your network to Azure in an effort to reduce latency and support distributed applications that work across infrastructure on-premises, in edge data centers, and in Azure.

The scale of the public cloud and services such as Azure is astounding. Massive data centers full of compute and storage are available on demand, and the network pipes in and out of those sites give you tremendous bandwidth. But putting all your compute eggs in one cloud basket has its downsides, with network latency a significant issue. It’s not surprising to see Azure doing more with the edge.

I’ve recently looked at how Microsoft is moving compute closer to end users, but compute is only part of the story. If we’re to get Microsoft’s promised consistent experience wherever we access Azure services, we need to be able to treat our edge resources and our Azure-hosted compute and storage as part of a single virtual network, with policy-driven security and routing.

Bringing the edge to Azure

The edge of the network is hard to define. To some, it’s the devices on our desks, in our homes, in our data centers, and built into industrial equipment. To others, it’s the equipment that sits on the provider side of the last mile. Microsoft is understandably agnostic; it has customers across all those markets. However, by thinking of its edge network integration as a part of Azure, a networking equivalent of the server, VM, and container management capabilities of Azure Arc, it’s clear that much of the attention is on the data center and the provider.

It’s a focus that makes sense. Azure Stack’s various incarnations scale from devices that sit at provider sites close to the end user, to multirack stamps that extend Azure into your data center. As much as Azure is key to the company’s future, Microsoft is well aware that hybrid infrastructures that mix cloud and on-premises aren’t going away and are likely to be a key element of most businesses’ strategic architectural decisions.

Azure already has a powerful virtual network platform, based on the open source containerized SONiC (Software for Open Networking in the Cloud) switch operating system and the SAI (Switch Abstraction Interface). It’s flexible and easy to configure, providing the backbone for the various networking tools built into the Azure portal. But as we move out to the edge of the network, we no longer rely on Azure’s own networks. We have to work with third-party network architectures and the hardware they’re built on. If it can’t control the hardware, how can Microsoft extend its network tools down into the edge?

Introducing Azure Edge Zones

Microsoft recently announced Azure Edge Zones, a set of technologies that extends its existing hybrid network platforms with a focus on distributed applications that work across infrastructure on-premises, in edge data centers, and in Azure. The intent is for all these network elements to be managed using the same APIs as Azure, allowing them to use the same security tools and the same portal.
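To make “the same APIs” concrete, here’s a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-network). The subscription ID, resource group, network names, and the losangeles Edge Zone are hypothetical placeholders, and the extended_location field is an assumption about how preview Edge Zone resources are targeted, not a detail confirmed in Microsoft’s announcement.

```python
# Hedged sketch: place a virtual network in an Edge Zone using the same
# management client and call you would use for any Azure region. All names are
# hypothetical; extended_location assumes a recent azure-mgmt-network package.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = network_client.virtual_networks.begin_create_or_update(
    "edge-demo-rg",   # hypothetical resource group, assumed to exist
    "edge-vnet",      # hypothetical virtual network name
    {
        "location": "westus",                  # parent Azure region
        "extended_location": {                 # assumption: Edge Zone placement
            "name": "losangeles",
            "type": "EdgeZone",
        },
        "address_space": {"address_prefixes": ["10.20.0.0/16"]},
    },
)
vnet = poller.result()
print(f"Created {vnet.name} in {vnet.location}")
```

The point of the sketch is that the call itself is no different from creating a virtual network in a standard region; only the placement metadata changes, which is what lets the same portal, policies, and security tooling apply at the edge.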
The economics of Azure mean that its data centers are often long distances from where users want their compute and storage. If you’re in the Pacific Northwest in the United States, your Azure instances will be close to the Columbia River, taking advantage of its cheap hydroelectric power. In Europe, much of Azure runs in countries with favorable tax regimes. Microsoft’s cloud economists decide where to put servers, balancing the best financial returns against regulatory environments that fit customer needs. Unless you’re close to one of those data centers, you’re going to get latency, making it hard to use Azure for an increasingly important class of real-time applications and services. For example, running Azure Remote Rendering in an Edge Zone would allow HoloLens to display complex 3D models in real time, rather than limiting user interactions to avoid the glitches and delays that come from high-latency connections.

The preview set of Edge Zones will be in New York, Los Angeles, and Miami. They’re all dense metropolitan regions with a lot of demand, the kind of areas where in the past you’d have used a CDN to manage content from the cloud. With Edge Zones, you can push some of your compute to those cities along with your content, giving users a better, low-latency experience. Microsoft will operate this infrastructure, tying Edge Zones into its connectivity. This should ensure a consistent, high-bandwidth connection between your edge instances and code running on Azure. There should be very little difference between the capabilities of Edge Zones and the rest of Azure.

Going Private: Azure Edge Zones in your network

There are two other Edge Zone implementations: carrier Edge Zones and Private Edge Zones. Both are designed to work with next-generation wireless networks, with Private Edge Zones building on Microsoft’s Azure Stack Edge hardware. Private Edge Zones can be used for on-premises applications, with Azure management of existing SD-WAN solutions, supporting third parties alongside its own tools.

With a Private Edge Zone and a programmable, orchestrated SD-WAN like NetFoundry’s, your Azure applications can now work across not only the public cloud and an on-premises data center, but also your branch office network. There’s no need to connect each branch to Azure with expensive ExpressRoute connections or separate VPN links. Instead your existing network is managed through the Azure portal, with a single VPN connection to Azure, including any virtual networking appliances such as firewalls. You merge your network functions with Azure’s, deploying from Azure into your network.

With a hybrid application in an Azure Private Edge Zone you can use Azure Arc to deploy VMs and containers, with the Azure portal managing both application and network. Using containers to host your Edge Zones services simplifies deployment to the edge and allows you to configure in-cloud infrastructures using the same components for users who aren’t in a supported area. There’s no need to have separate applications for each location when Azure Edge Zones blend Azure regions and Edge Zones into the same virtual network.

Architecting cloud applications for the new hybrid edge

Technologies like this will require rearchitecting applications. Edge compute instances won’t have the capabilities of the public cloud; you won’t get access to the same range of compute, and you certainly won’t have much in the way of storage. If you have a lot of data to process, don’t stop using Azure. Your Edge Zones instances should instead concentrate on preprocessing data, handling user interactions, managing events, and passing less urgent requests up to Azure applications.
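As a rough illustration of that split, the sketch below keeps latency-sensitive user interactions at the edge and batches everything else up for an Azure-hosted back end. Every name in it (Event, handle_locally, forward_to_azure) is a hypothetical placeholder, not part of any Azure SDK.

```python
# Hedged sketch of an edge-first processing pattern: respond to user input at
# the Edge Zone, defer bulk work to the cloud. Placeholder names throughout.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "user_input" or "telemetry"
    payload: dict

def handle_locally(event: Event) -> None:
    # Low-latency path: respond to the user at the edge, close to the device.
    print(f"edge: handled {event.kind} immediately")

def forward_to_azure(batch: list) -> None:
    # Bulk path: ship aggregated, non-urgent data to an Azure-hosted service
    # (a queue, an event hub, or a REST endpoint) for heavier processing.
    print(f"edge: forwarded {len(batch)} events to the Azure back end")

def process(events: list) -> None:
    deferred = []
    for event in events:
        if event.kind == "user_input":
            handle_locally(event)   # latency-sensitive: stays at the edge
        else:
            deferred.append(event)  # everything else waits for the batch
    if deferred:
        forward_to_azure(deferred)

process([Event("user_input", {}), Event("telemetry", {"cpu": 0.4})])
```

The design choice is simply that the edge only does work that benefits from proximity; anything that needs heavy compute or long-term storage is handed off to the cloud.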
It’s clear that Azure Edge Zones are part of a larger trend, as Microsoft moves to bring all aspects of hybrid clouds into the Azure portal. With an Azure Portal desktop application currently in preview, cloud developers and admins will be able to manage their Azure cloud estates from one screen, on their desktops and in the browser, and work with data in the public cloud and their data center. Putting it all together like this, it’s not surprising that Microsoft is retiring its Windows Server certifications in favor of their Azure equivalents. In Microsoft’s hybrid cloud, everything is going to be Azure.