Microsoft’s Windows Server strategy is about to change, with faster updates, a lot of Linux love, and a devops focus.

Microsoft’s new continuous-delivery model, with Windows releases every six months or so, makes sense for the desktop. But does it work for Windows Server? Certainly, Microsoft didn’t think so at first. Windows Server 2016 comes from the Long-Term Servicing Branch (LTSB) of Windows, with only the microservices- and container-focused Nano Server scheduled for regular releases.

More Windows Server, more often

That initial plan appears to be in flux, with Windows Server joining desktop Windows in the Insider program later this summer. Microsoft is also talking about a series of feature releases that will support the GUI-less Windows Server Core and quite possibly the awkwardly named Windows Server with Desktop Experience.

It’s a shift that makes sense. With the RS1 and RS2 releases of Windows, the desktop OS has added developer-centric features at a fast pace. Tools like the Windows Subsystem for Linux (WSL) have made it an attractive platform for cloud development, with features that work well with Azure.

Although cloud services have begun to eclipse on-premises servers, on-premises Windows Server is still an important part of any business infrastructure, and a key on-ramp for applications and services into the cloud. That’s why you can’t leave those servers as is for another two or three years, waiting for the next LTSB release sometime in 2019. Developers are already using the Windows Subsystem for Linux on their desktops, and they want access to the same tool chain on development servers, both locally and in cloud virtual infrastructures. It’s this demand for Linux tools that’s driving the expected changes in the Windows Server life cycle.

If your code development moves as fast as your desktop updates, and if the cloud platforms you use are out of sync with on-premises servers, you’ll be overwhelmed with a tidal wave of change when Windows Server finally updates. Putting Windows Server on the same six-month schedule as Windows makes a lot of sense. The two OSes share a lot of code, and they are part of the same development organization. Keeping developer features in sync between Windows Server and desktop Windows reduces friction and keeps development aligned with the fast-moving Azure public cloud. It should also let Azure adopt new features quickly, rolling out new classes of virtual machines and adding new platform features.

Running Linux on Windows Server

Bringing WSL to Windows Server makes a lot of sense. Running Linux consoles in Server Core adds them to an increasingly heterogeneous tool chain, simplifying integration with devops tools. But can Windows Server go further than the desktop and bring large-scale Linux applications to enterprise Windows deployments? The answer, surprisingly, is yes.

Microsoft is bringing WSL to Windows Server, keeping the same focus on development tools and management as it does on the desktop. Don’t expect to see it as an application host: WSL isn’t designed to run applications at scale, and although it supports an increasing amount of the Linux userland, certain key APIs and services are missing. However, WSL won’t be the only Linux-focused part of Windows Server. Linux containers will run directly on your Windows servers, managed using WSL to run local and remote scripts and tools.
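Microsoft hasn’t yet said exactly how WSL will be packaged on Windows Server, but if it follows the desktop, turning it on should be a single PowerShell operation. A minimal sketch, assuming Server Core exposes the same optional feature that Windows 10 does today (a reboot is required before the Linux environment is usable):

    # Enable the Windows Subsystem for Linux optional feature.
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
    # After a reboot, launch the default Linux environment from any console session.
    bash

From there, the same apt-driven tool chain developers already use on their laptops would be available on development servers.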
To understand how Microsoft will do this, it helps to recall how Windows Server supports containers. Windows’ container support is based on Docker, itself the basis for the proposed containerd Linux service. With Windows Server 2016, you use Docker commands to build and deploy containers, either running them directly on top of Windows or in a new, more isolated mode based on Hyper-V. It’s this last option that’s the basis for Windows’ new Linux containers.

Docker meets Hyper-V meets Linux

Hyper-V containers are not Hyper-V virtual machines. You can’t manage them with System Center Virtual Machine Manager, and they’re invisible to Hyper-V’s own management tools. A virtualized Windows OS hosts the container, running on a container-optimized version of the Hyper-V hypervisor. Management is through familiar Docker commands in a Windows command line or, with a little work, from WSL. Getting Linux containers to run on Windows is just a matter of changing the Hyper-V guest container host.

Hyper-V Linux containers aren’t limited to one Linux distribution, because Microsoft will open-source the code needed to integrate a container host OS. It’s already working with several major Linux distributions to offer support. When Linux container support launches, Ubuntu, Red Hat’s Atomic Host, SUSE, and Intel’s Clear Linux will all be available, with minimal-OS images to support quick container deployment and launch.

Support for Linux containers gives you access to Docker’s existing gallery of apps, as well as plugging you into tool chains that already use Linux APIs to manage and deploy containers. Windows administrators can carry on managing servers the way they always have, while devops teams can use one set of tools to work with a heterogeneous infrastructure in and out of the cloud.
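Microsoft hasn’t published the final command line for Linux containers on Windows, but the model it describes suggests the existing Docker workflow with Hyper-V isolation simply gains a Linux option. A rough sketch, assuming the Docker engine on the server has the preview Linux-container support enabled; the image names and exact flags are illustrations, not confirmed details:

    # Today: run a Windows container in its own lightweight Hyper-V partition.
    docker run --isolation=hyperv microsoft/nanoserver cmd /c echo hello
    # The same isolation model hosting a Linux image, with the guest container
    # host OS (Ubuntu, Atomic Host, SUSE, Clear Linux) supplied underneath.
    docker run --isolation=hyperv --platform linux ubuntu /bin/echo hello

Either way the workflow is identical: one Docker engine, one set of commands, with Hyper-V providing the isolation boundary underneath.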
Moving applications to containers

Moving existing applications to containers isn’t easy. They’re often hefty, monolithic pieces of code that don’t map well to the microservices model containers normally require. It’s not just the code: The skills needed won’t be found in most traditional development groups. So, Microsoft will work with both Docker and the joint Microsoft-Accenture consultancy Avanade to offer a low-cost, relatively quick route to converting an existing application. It’s aiming to do this in just five days, as part of a month-long engagement that includes training and support.

Once containerized, applications need management. Microsoft’s Service Fabric PaaS is adding support for containerized microservices, handling scaling for you on Azure. For more complex applications, you can take advantage of Azure’s support for the open source Kubernetes container orchestrator and the tools Microsoft recently acquired from Engine Yard along with the Deis team.

Deis’s recently launched Draft should simplify new microservices development, taking code you’ve written and creating the appropriate container and Kubernetes descriptions, then uploading the code to a test cluster. Once it’s there, you can continue development, with changes automatically synced to the test system. There’s no need to leave your laptop: Draft handles working with the cloud. Your tool chain remains the same, with two new commands to link your code to Kubernetes. Draft’s deployment configurations become part of your source, so you can use them to drive deployment pipelines and continuous-integration tools.

Adding Linux container support strengthens Windows’ role in the hybrid cloud. Windows Server is transitioning from an OS where you just run applications and VMs to one that is a host for all your applications and services. On-premises Windows Server is a stepping stone to the cloud, a place to develop and test containers before deploying them to Azure Stack or to the public Azure cloud. And with tools like Draft, that step is easier to take, bridging the gap from development laptop to hybrid cloud.
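To make that concrete, the laptop-to-cluster loop Draft promises comes down to two commands run in your application’s source directory. A sketch based on Draft’s early releases; the generated chart and the target test cluster reflect assumptions about your setup, not details from Microsoft:

    # Detect the project's language and scaffold a Dockerfile plus a Helm chart.
    draft create
    # Build the container image and deploy the chart to the configured Kubernetes
    # test cluster; later runs pick up local code changes and sync them up.
    draft up

Because the generated Dockerfile and chart live alongside your code, they can feed the same continuous-integration pipelines you already use.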