Microsoft’s new hybrid cloud management tools bring Azure features to your servers, wherever they are.

One of the more interesting announcements at Microsoft’s 2019 Ignite conference was Azure Arc, a new management tool for hybrid cloud application infrastructures. Building on Azure concepts, Arc is designed to let you manage on-premises resources from the Azure Portal, deploying policies and services to virtual machines and Kubernetes clusters. It also includes containerized versions of Azure’s SQL Database and PostgreSQL Hyperscale to give your Kubernetes-based hybrid applications Azure-consistent data options.

Azure Arc extends the Azure Resource Manager model down to servers and Kubernetes clusters. It’s designed to manage resources in a cloudlike manner wherever they are, treating Azure’s resource tooling as your control plane. That puts it at a much higher level than most management tools. For example, if you’re using it with virtual machines running on a Windows Server network, you’d manage the VMs with Hyper-V management tools, and the server configuration and the applications running on them with Azure Arc.

Using Azure Arc with servers

“Wherever they are” is a key principle behind Azure Arc. With its application management focus, it is infrastructure agnostic. The VMs it manages can be running in your data center, in a hosting facility, or as virtual servers in a managed, shared environment.

Server management with Azure Arc is now in public preview, with a connected machine agent for Windows and for Linux that handles the connection to the Azure Arc service. Once a server is connected to the cloud, you can start managing it as if it were an Azure resource, as part of a resource group. This allows you to deploy PowerShell-based policies to connected servers, taking advantage of the work that’s been done to deliver just-in-time management and desired state configuration. Managed servers will need connectivity to Azure Arc over SSL.

Servers managed by Azure Arc don’t need to be physical servers; they can be virtual machines, which allows you to preload the agent into VM base images before they’re deployed. As part of setting up the service, Azure Arc generates a custom script that runs on unconnected servers, downloading and installing the agent before connecting to Azure and adding the server as a resource.

Managing Kubernetes applications in Azure Arc

Microsoft hasn’t made Azure Arc’s Kubernetes support available in the public preview yet; it’s still limited to the service’s private access preview. However, Gabe Monroy, director of product for the Azure Application Platform, gave a short demonstration of it at Ignite.

Using the Azure Portal, Monroy first showed a running Kubernetes cluster that was managed using Azure Resource Manager-based policies. The initial policy he used controlled the network ports used by the cluster, locking down unneeded ports to reduce the cluster’s attack surface. The same policy could be used to manage all the clusters across a company’s global infrastructure. Writing policies once and using them many times like this keeps the risk of errors to a minimum; by testing all your policies in advance you can be sure they will work when deployed globally.

The other advantage of a policy-based approach is that you can lock down clusters that aren’t compliant. Until a cluster reports that it’s compliant with all your policies, your application development team can’t deploy code.
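Behind the portal, policies like these are applied as Azure Policy assignments scoped to the resources Arc manages. As a rough illustration, here is a minimal sketch that uses the Azure SDK for Python (the azure-identity and azure-mgmt-resource packages) to assign an existing policy definition to a resource group; the subscription ID, resource group, assignment name, and policy definition ID are all hypothetical placeholders rather than anything shown at Ignite.

```python
# Minimal sketch: assign an existing Azure Policy definition to the resource
# group that holds your Arc-connected servers and clusters.
# Assumes azure-identity and azure-mgmt-resource are installed, and that
# POLICY_DEFINITION_ID points at a definition you have already created or
# looked up; all IDs and names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "arc-connected-resources"                 # hypothetical name
POLICY_DEFINITION_ID = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "<your-definition-id>"                                  # placeholder
)

credential = DefaultAzureCredential()
policy_client = PolicyClient(credential, SUBSCRIPTION_ID)

# Scope the assignment to one resource group; a subscription-level scope
# would cover every connected resource instead.
scope = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"

assignment = policy_client.policy_assignments.create(
    scope,
    "lock-down-cluster-ports",  # hypothetical assignment name
    {
        "policy_definition_id": POLICY_DEFINITION_ID,
        "display_name": "Restrict open ports on Arc-managed clusters",
    },
)
print(f"Created assignment {assignment.name} at scope {scope}")
```

Written once, the same assignment could be made at subscription scope instead, covering every connected server and cluster as it comes online.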
With the Azure Arc agent deployed to all the Kubernetes clusters in your network, you have a single pane of glass to manage all your policies and all your deployments. It’s important to note that you don’t get a way to manage the physical servers or the Kubernetes installation directly; all the Azure Portal gives you are the policies and the code running on the cluster. You can use policies to define how a cluster will behave, but you can’t deploy new nodes unless the Kubernetes runtime and the Azure Arc agent are already installed. As soon as a new cluster is deployed and connected to Azure Arc, policies are automatically applied, ensuring that your security policies are in place without your having to configure everything manually.

One interesting aspect of the demonstration was a policy that connected Azure Arc to GitHub, targeting either Kubernetes namespaces or clusters and handling deployments from a specific repository. Using this policy, any pull request merged into the repository would trigger a deployment of the updated application. The same policy could be used to load your code onto a new cluster as soon as it was configured, and any future update to the code would deploy automatically, keeping all your sites running the latest version (a sketch of setting up this kind of repository-driven deployment appears below).

It’s easy to imagine a new set of servers, preloaded with Kubernetes and the Azure Arc agent, being delivered to a new edge site. Once connected to a WAN and powered on, they’d automatically load the latest policies, and once in compliance they’d download their applications and start operating with minimal human interaction.

Introducing a new cloud-centric, app-first management model

It’s perhaps best to think of Azure Arc as the first of a new generation of policy-driven application management tools, especially when it’s used to automate application deployments across a global network. Integrating it into your gitops flow makes sense, using Arc to trigger application deployments when a pull request is merged and ensuring that the appropriate security policies are applied to the host Kubernetes cluster or virtual machines.

Microsoft’s view of the cloud is that on-premises systems aren’t going away, and with the growing importance of edge deployments, the definition of on-premises is only going to grow. That doesn’t mean that those on-premises systems shouldn’t benefit from cloud technologies and from cloud-informed ways of working. Azure Arc is focused on automating the infrastructure for an application, using policy to ensure security compliance. When you think about it, this is a logical extension of devops, and part of the movement to a third tier of management in a cloud environment.

With a focus on application virtual infrastructures, whether VM-based or container-based, Azure Arc separates application operations from infrastructure operations. In a hybrid-cloud environment there’s no need for the applications team to know anything about the underlying physical infrastructure. Instead, a separate team has responsibility for physical servers, host operating systems, hypervisors, and Kubernetes installations, with tools like Azure Arc used by the application team to manage their applications at the edge, in hyperconverged systems, in traditional data centers, and in the cloud, all from the same cloud-hosted console. We’ve changed the way we run infrastructure with containers and virtualization, and the way we build and manage applications with devops. Why not provide tools to bring the two approaches together?
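To make the gitops integration described above more concrete, here is a hedged sketch of connecting an Arc-connected cluster to a GitHub repository from Python by shelling out to the Azure CLI. The az k8s-configuration create command and its flags come from the Arc-enabled Kubernetes preview tooling and may differ in your installed extension, and every resource name and URL below is a hypothetical placeholder rather than anything demonstrated at Ignite.

```python
# Hedged sketch: create a source-control configuration on an Arc-connected
# Kubernetes cluster so that it deploys whatever lands in a Git repository.
# Requires the Azure CLI with the k8s-configuration extension; the command
# and flag names reflect the Arc preview tooling and should be checked
# against your installed version. All names and URLs are placeholders.
import subprocess


def connect_cluster_to_repo(cluster: str, resource_group: str, repo_url: str) -> None:
    """Attach a Git repository to an Arc-connected cluster for deployments."""
    subprocess.run(
        [
            "az", "k8s-configuration", "create",
            "--name", "app-deployment",             # hypothetical configuration name
            "--cluster-name", cluster,
            "--resource-group", resource_group,
            "--cluster-type", "connectedClusters",  # Arc-connected cluster
            "--repository-url", repo_url,
            "--scope", "cluster",                   # or "namespace" for a narrower target
            "--operator-namespace", "cluster-config",
        ],
        check=True,
    )


if __name__ == "__main__":
    connect_cluster_to_repo(
        cluster="edge-site-01",                     # hypothetical Arc-connected cluster
        resource_group="arc-connected-resources",
        repo_url="https://github.com/example/edge-apps",  # hypothetical repository
    )
```

With a configuration like this in place, a merged pull request to the watched repository rolls out to every cluster that carries it, which is what makes the hands-off edge scenario described earlier plausible.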
With Azure Arc the ops side of the devops equation gets a platform to separate applications from infrastructure, with the ability to manage and control those applications from a new, cloud-hosted control plane. It’s an attractive vision, and it’ll be interesting to watch how Microsoft delivers it.