Build, manage, and deploy Kubernetes applications using infrastructure-as-code techniques, with separation of concerns and dependency graphs.

The complexity of cloud-native applications appears bottomless. In addition to the familiar Kubernetes, cloud-native apps build on a growing ecosystem of services baked into the public cloud platforms. Developing and managing these applications requires much more than coding, going beyond devops into platform and infrastructure engineering.

If you want a stable application, you need all of these teams working together, with the aim of delivering a reproducible set of code and configurations that can be deployed as and when needed. That requires a way of bringing together all the working parts of a modern cloud-native application, building on the tools we’re already using. After all, we don’t want to reinvent the wheel. Those tools work; they simply don’t work in unison.

We’ve made various strides along the way. Infrastructure-as-code (IaC) tools such as Terraform and Azure Resource Manager let you automate the management of infrastructure services and platforms, defining and then building the networks, servers, and services your code needs. These tools are increasingly mature and able to work directly against cloud service management APIs, offering familiar syntax with both declarative and programmatic approaches to infrastructure definitions.

On the code side, we have frameworks that simplify building applications, managing APIs, and defining the microservices that make up a typical cloud-native application. Using a modern application framework, we can go from a few CLI commands to a basic application skeleton we can flesh out to deliver what our users need.

So how do we bring these two distinctly different ways of working together, and use them to build and manage our cloud-native applications?
Microsoft recently unveiled a new platform engineering tool that’s intended to do exactly that.

Introducing Radius

Developed by the Azure Incubations Team, Radius brings together existing distributed application frameworks and familiar infrastructure-as-code tools, as well as automated connections to cloud services. The idea is to provide one place to manage those different models, while allowing teams to continue to use their current tools. Radius doesn’t throw away hard-earned skills; instead it automatically captures the information needed to manage application resources.

I had an email conversation with Azure CTO Mark Russinovich about Radius, how he envisions it developing, and what its role in cloud-native development could be. He told me, “We want developers to be able to follow cost, operations, and security best practices, but we learned from customers that trying to teach developers the nuances of how Kubernetes works, or the configuration options for Redis, wasn’t working. We needed a better way for developers to ‘fall into the pit of success.’”

Russinovich noted another driver, namely the growth of new disciplines: “We’ve watched the emergence of platform engineering as a discipline. We think Radius can help by providing a kind of self-service platform where developers can follow corporate best practices by using recipes, and recipes are just a wrapper around the Terraform modules that enterprises already have. If we’ve got this right, we think this helps IT and development teams to implement platform engineering best practices, while helping developers focus on what they love, which is coding.”

Radius is perhaps best thought of as one of the first of a new generation of platform operations tools. We already have tools like Dapr to manage apps and Bicep to manage infrastructure. What Radius does is bring applications and infrastructure together, working in the context of cloud-native application development.
It’s intended to be the place where you manage key platform information: connection strings, roles, permissions, and all the other things we need to link our code to the underlying platform in the shape of Kubernetes and cloud services.

Getting started with Radius

You’ll need a Kubernetes cluster to run Radius, which runs as a Kubernetes application. However, most Radius operations are driven from a command-line tool that installs under most shells, with support for Windows Subsystem for Linux and PowerShell as well as macOS. Once installed, you can check the installation by running rad version.

You’re now ready to start building your first Radius-managed application. Use the rad init command (documented at https://docs.radapp.io/tutorials/new-app/) to start Radius in the current context of your development cluster, add its namespace, and set up an environment to start work. At the same time, rad init sets up a default Radius application, creating a Bicep app that will load a demo container from the Azure Radius repository.

To run the demo container, use the rad run command to launch the Bicep infrastructure application. This configures the Kubernetes server and starts the demo container, which runs a basic web server hosting a simple web application.

You’re not locked into using the command line, as Radius also works with a set of Visual Studio Code extensions. The most obvious first step is adding the Radius Bicep extension, with support for Azure and AWS resources. Note that this isn’t the same as the full Bicep extension and is not compatible with it. Microsoft intends to merge Radius support into the official Bicep extension, but this will take some time. You can use the official HashiCorp Terraform extension to create and edit recipes.

Under the hood is a Helm chart that manages the deployment to your Kubernetes servers, which Radius builds from your application definition.
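The default application that rad init scaffolds is a short Bicep file describing a single container. A minimal sketch, assuming the resource types and API version used by recent Radius releases (the application name and container image here are illustrative; check the generated app.bicep for what your version produces):

```bicep
// app.bicep -- a minimal Radius application definition (illustrative;
// resource types and API versions may differ between Radius releases)
extension radius

@description('The Radius environment; supplied automatically by rad run')
param environment string

// The application itself, bound to a Radius environment
resource app 'Applications.Core/applications@2023-10-01-preview' = {
  name: 'demo'
  properties: {
    environment: environment
  }
}

// A single container resource, pulled from the Radius samples registry
resource demo 'Applications.Core/containers@2023-10-01-preview' = {
  name: 'demo'
  properties: {
    application: app.id
    container: {
      image: 'ghcr.io/radius-project/samples/demo:latest'
      ports: {
        web: {
          containerPort: 3000
        }
      }
    }
  }
}
```

When you run rad run against a definition like this, Radius deploys the container and wires it into the Helm-managed Kubernetes deployment it maintains for the application.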
This approach allows you to deploy applications to Kubernetes using existing Helm processes, even though you’re using Radius to manage application development. You can build applications and infrastructures using Radius, store the output in an OCI-compliant registry, and use existing deployment tools to deliver the code across your global infrastructure. Radius will generate the Helm YAML for you, based on its Bicep definitions.

That’s all pretty much run-of-the-mill for a basic cloud-native application, where you can use your choice of tools to build containers and their contents. However, where things get interesting with Radius is when you start to add what Microsoft calls “recipes” to the Bicep code. Recipes define how you connect your containers to common platform services or external resources, such as databases.

Managing platform services with Radius recipes

What’s perhaps most useful about recipes is that they’re designed to automatically add appropriate environment variables to a container, such as database connection strings, so your code can consume resources without any configuration beyond what is in your Bicep. This allows platform teams to ensure that guardrails are in place, for example, to keep connections secure.

You can author a recipe in either Bicep or Terraform, with Terraform the more obvious choice for cross-cloud development. If you’re already using infrastructure-as-code techniques, you should find this approach familiar, treating a recipe as an infrastructure template with the same Bicep parameters or Terraform variables you use elsewhere. Recipes define the parameters used to work with the target resource, managing the connections to the services your code uses. These connections are then defined in your application definition. In this way Radius recipes mark the separation of responsibilities between platform engineering and application development.
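A minimal sketch of how that division of labor looks in an application definition, assuming the portable Redis resource type and the CONNECTION_* environment-variable convention described in the Radius documentation (all names, images, and API versions here are illustrative):

```bicep
// Illustrative sketch: a developer consumes a platform-team recipe.
// Resource types and API versions are assumptions; check the Radius
// documentation for the versions your installation supports.
extension radius

param environment string

resource app 'Applications.Core/applications@2023-10-01-preview' = {
  name: 'storefront'
  properties: {
    environment: environment
  }
}

// A portable resource with no infrastructure details. The recipe the
// platform team registered in the environment (written in Bicep or
// Terraform) decides how the Redis cache is actually provisioned.
resource cache 'Applications.Datastores/redisCaches@2023-10-01-preview' = {
  name: 'cache'
  properties: {
    environment: environment
    application: app.id
  }
}

resource frontend 'Applications.Core/containers@2023-10-01-preview' = {
  name: 'frontend'
  properties: {
    application: app.id
    container: {
      image: 'example.azurecr.io/storefront:latest' // hypothetical image
    }
    connections: {
      // The connection is what injects environment variables such as
      // CONNECTION_REDIS_HOST and CONNECTION_REDIS_PORT into the container,
      // so application code needs no extra configuration.
      redis: {
        source: cache.id
      }
    }
  }
}
```

The developer only declares the connection; the platform team’s recipe determines whether that cache is a local container, a managed Azure Cache for Redis instance, or something else entirely.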
If I want a Redis cache in my application, I add the appropriate recipe to my Radius application. That recipe is built and managed by the platform team, which determines how that functionality is deployed and what information I must provide to use it in my application.

Out of the box, Radius provides a local set of basic recipes for common services. These can be used as templates for building your own recipes, if you want to connect an application to Azure OpenAI, for example, or define an object store, or link to a payment service.

One interesting option is using Radius to build the scaffolding for a Dapr application. Here you define your application as a Dapr resource, using a Radius recipe to attach a state store backed by your preferred database. You’ll find a number of sample Dapr containers in the Radius repository to help you get started. All you need to do is add your connections to the state store recipe and add an extension for the Dapr sidecar. In practice, you’ll build your own containers using Dapr and your usual microservice development tools, before adding them to a local repository and then managing the resulting application in Radius.

Taming the cloud-native wild west

Perhaps the biggest challenge Radius is designed to solve is the lack of visibility into the myriad resources and dependencies that make up sprawling cloud-native applications. Here Radius gives us a structure that ensures we have a map of our applications and a place where we can deliver architectural governance, with the aim of building and delivering stable, secure enterprise applications.

A big advantage of a tool like Radius is the ability to quickly visualize application architectures, mapping the connections between containers and services as a graph. For now, the Radius application graph is a text-only display, but there’s scope for adding more user-friendly visualizations that could go a lot further to help us understand and debug large-scale applications.
As Russinovich noted, “We make it easy to query Radius and retrieve the full application graph. A third party could integrate our application graph with another source of data, like telemetry data or networking data. Seeing those graphs in relation to each other could be really powerful.”

In addition to giving us an understanding of what is composed together to create our application, the application graph will play a role in helping teams go from development to production, Russinovich said. “For example, we could look at how the application is defined by a developer versus how the application is deployed in production. […] Having an application graph enables these teams to work together on how the application is defined as well as how it’s deployed. Cost is one part, infrastructure is another, but we can also imagine other overlays like performance, monitoring, and trace analysis.”

Cloud-native development needs to move from a world of hand-crafted code, as nice as that is, to one where we can apply trusted and reliable engineering principles as part of our everyday work. That’s why the arrival of platforms like Radius is important. Not only is it coming from Microsoft, but it’s also being developed and used by Comcast, BlackRock, and Portuguese bank Millennium BCP, shipping as an open-source project on GitHub.

At the end of our email conversation, Russinovich indicated how the Radius platform might evolve, along with community involvement through the Cloud Native Computing Foundation (CNCF): “Radius has multiple extension points. We’d love to see partners like Confluent or MongoDB contributing Radius recipes that integrate Radius with their services. We also think that cloud providers like AWS or GCP could extend the way Radius works with their clouds, improving the multi-cloud support that’s inherent to Radius. Finally, we envision extensions to support serverless computing like AWS Fargate and Azure Container Instances.”