Microsoft’s ACS is shutting down in January 2020. Here are your options for Kubernetes, Docker, and DC/OS

Signaling another victory for Kubernetes in the container orchestration marketplace, Microsoft recently announced that it will retire its Azure Container Service (ACS) in January 2020. ACS users are being encouraged to move their distributed infrastructures to Microsoft’s newer Azure Kubernetes Service.

It’s a logical move for Microsoft. While ACS’s alternative orchestrators, Docker Swarm and Mesosphere’s DC/OS, will continue to be supported on Azure, Microsoft will no longer run them for you. Instead, Microsoft is concentrating its investment on improving Kubernetes support across Azure and on its own Kubernetes tools.

What happens to your existing ACS apps

While ACS is being retired, applications built on it won’t suddenly stop running. However, the APIs used to manage them will be blocked, so you won’t be able to use Azure tools to add new clusters or to update and scale your existing services. There will also be no support: although your code will run, you won’t be able to manage it without your own tools, and even then those capabilities will be limited. You’ll also be locked into an old version of whatever orchestration framework you’re using, with no automated security updates.

It’s clear that the added risk of staying on ACS after January 2020 will be too high for most applications, so take the year Microsoft is providing to plan and manage any migrations. For ACS applications using Kubernetes, that won’t be too difficult. Because the underlying orchestration concepts and tools are different for Swarm and DC/OS, code built on them will be harder to move. Even there, though, because all three services rely on the same container model, you shouldn’t need to make significant code changes.
Microsoft makes it clear that it expects users to migrate to alternative solutions, with its own Azure Kubernetes Service (AKS) the preferred option. Certainly AKS is where much of Azure’s open source Kubernetes investment is going, with its acs-engine tools transitioning to a new GitHub repo, aks-engine. If you’re using acs-engine on your own Azure virtual infrastructure, you should be able to switch directly to aks-engine to manage your Kubernetes instances, because it’s backward-compatible.

How to migrate from ACS

So how should you manage your migration? Although there’s quite a bit to consider, things aren’t as complex as they might seem.

First, consider the economics. Moving from ACS to AKS shouldn’t have a significant impact on billing, nor should moving to the open source version of Mesosphere DC/OS. With a migration to Docker Enterprise for Swarm, or to the Enterprise release of Mesosphere DC/OS, you’ll need licenses from either company in addition to any Azure infrastructure costs. That means giving up one of the advantages of Azure, splitting your billing relationship across multiple companies rather than having a single monthly payment.

Second, once you’ve made your economic decision, a move from a Swarm or DC/OS ACS instance to the full product shouldn’t require significant engineering effort. It may, however, create new operational requirements, because you’ll need to work with product-specific management tools outside the Azure Portal. That could be an issue for some organizations, but in practice the tools provided by these platforms offer the same features as ACS. You’ll also get direct vendor support, alongside Azure support. If you prefer, there’s also the option of rewriting code to run on AKS.

Moving Kubernetes from ACS to AKS

AKS has been Microsoft’s focus for Kubernetes on Azure for well over a year, and it has added a lot of management features as it has matured.
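As a starting point for such a move, standing up an AKS cluster is a short Azure CLI exercise. The sketch below uses hypothetical resource names (myResourceGroup, myAKSCluster) and needs an active Azure subscription, so treat it as illustrative rather than a recipe:

```shell
# Create a resource group to hold the cluster (names are hypothetical)
az group create --name myResourceGroup --location eastus

# Create a three-node AKS cluster; AKS provisions and manages the control plane
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

From here, kubectl works against AKS exactly as it did against an ACS Kubernetes cluster.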
So it’s not surprising that AKS is Microsoft’s preferred way to deploy Kubernetes-managed applications on Azure. With support for features like virtual kubelets, via Azure Container Instances, there’s also the option of using AKS to take advantage of ACI’s higher-level container management, reducing the workload of running Kubernetes even further.

Moving from ACS to AKS does have some issues, and you may need to change your application architecture to handle the differences between the two services. Some of those issues stem from AKS being a managed Kubernetes service, and once handled they should simplify running your application. First, you need to change your StorageClass objects from unmanaged to managed, and do the same for any PersistentVolumes. AKS uses Azure Managed Disks, letting it control your storage directly and add capacity as needed. You also need to avoid Windows Server-based Kubernetes nodes for now, because the planned support is currently in a private beta. You may also need to upgrade the version of Kubernetes your code uses. That’s best handled in a development environment, so you can then deploy onto AKS directly. Once deployed, your application will be kept up to date automatically, because it will be managed by AKS’s control plane, with support for dynamic scaling. There are API differences as well, so if you’re planning to use external Kubernetes tools for monitoring and debugging, check that they support the AKS APIs.

Dealing with Kubernetes storage and deploying to AKS

If you’re migrating a stateless application from ACS to AKS, the process can be as easy as deploying your application YAML onto a new cluster and letting AKS deploy and start your containers. Once they’re running, you can test your code before switching public IP addresses from your ACS instance to AKS. Things are more complicated with stateful applications, and you may need to warn users about the possibility of extended downtime.
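The unmanaged-to-managed StorageClass change mentioned above comes down to the kind parameter on the azure-disk provisioner. A minimal sketch, with hypothetical object names, switching a class to Managed and pointing a claim at it:

```shell
# Apply a managed-disk StorageClass; kind: Managed is the key change
# from the unmanaged disks an ACS cluster uses
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
EOF

# Claims then reference the managed class by name
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-standard
  resources:
    requests:
      storage: 10Gi
EOF
```

Pods that mount the app-data claim will then get an Azure Managed Disk provisioned and attached by AKS.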
One option is to set up a failover replica of your application on AKS and let it replicate your data. Once it’s in sync with your existing ACS application, you can fail over from ACS to AKS, then redirect traffic to AKS.

Migrating storage to managed disks adds extra complexity, and it will often require significant downtime if you have a lot of data. You need to stop writes from your ACS code, shutting it down and snapshotting its disks. The snapshots can then be used to create new managed disks, which can serve as AKS PersistentVolumes. Once deployed, your application needs to be tested before you start sending traffic to it. Users won’t be able to access it from the point you shut down your ACS release to the point your AKS deployment goes live.

The method for deploying code to AKS is similar to that for any Kubernetes cluster. You start by creating a cluster in AKS, using the same node definitions you used in ACS. You then edit your YAML to support AKS definitions and locations, and deploy to the new service host using your existing CI/CD pipelines. Once the code is there, you can test it with your existing test tools and techniques before migrating your users to the new service.

Once all that is done, you can take advantage of a fully managed, autoscaling environment with support for virtual kubelets, and you’ll get automatic updates, so you’re protected from security issues like the recently announced (and fixed) Kubernetes privilege-escalation bug.
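The snapshot-based disk migration described above can be sketched with the Azure CLI. All resource names, group names, and the subscription placeholder here are hypothetical, and the commands need an Azure subscription to run, so this is illustrative only:

```shell
# Snapshot the source disk once ACS writes have stopped
az snapshot create \
  --resource-group myACSGroup \
  --name app-data-snap \
  --source app-data

# Create a managed disk from the snapshot in the AKS node resource group
az disk create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name app-data-managed \
  --source app-data-snap

# Expose the new disk to Kubernetes as a PersistentVolume
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  azureDisk:
    kind: Managed
    diskName: app-data-managed
    diskURI: /subscriptions/<sub-id>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/app-data-managed
EOF
```

Once the PersistentVolume is bound, test the application thoroughly before cutting traffic over from ACS.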