Simon Bisson
Contributor

KAN: A Kubernetes edge environment for computer vision

analysis
May 22, 2023 | 8 mins
Artificial Intelligence | Cloud Computing | Machine Learning

Microsoft’s open-source KubeAI Application Nucleus is a low-touch, Kubernetes-based system for building and running machine learning applications for edge devices.


Computer vision is an increasingly important industrial technology, not only for managing product lines or stock control, but also for safety. It’s a powerful technology, able to quickly classify objects and identify anomalies. But there’s a problem that comes with using it at the edge of your network: latency. When people’s lives are on the line, you don’t want to rely on a mix of wired and wireless networks, or on cloud resources that may need to spin up before they can be used.

That’s one of the key reasons why Microsoft CEO Satya Nadella talks about “the intelligent edge,” i.e., bringing cloud-native tools and services to devices running on our networks. We need to be able to access, if not everything, then certainly a subset of cloud services at the edge. And that most definitely means computer vision.

Microsoft has already provided tools to containerize elements of Azure Cognitive Services, including its custom vision tooling, and deliver them to its own Azure IoT Edge platform. But what if you’re rolling your own edge solution?

Machine learning containers at the edge

It turns out that containers are an ideal way to deploy software to the edge of the network. Kubernetes and service meshes offer an agnostic platform for your code, using tools like Helm to pull containers and other assets from repositories hosted in private and public clouds. You can build and test code away from the edge, using tools like Azure Kubernetes Service to host your development networks, and packaging and delivering x86 and Arm containers to your edge repositories.

Using Kubernetes at the edge gives you a choice of hardware and software. Edge-optimized distributions from Microsoft and other vendors build on standard Kubernetes, as well as on Kubernetes distros targeted at smaller devices, like MicroK8s and K3s. What’s important is that they all have the same APIs, and while they may limit the number of operational nodes, there’s no need to have separate builds beyond those necessary for different processor architectures.

Introducing KAN: the KubeAI Application Nucleus

What’s needed is a consistent way of building and managing edge machine learning applications on Kubernetes, one that can speed up development and delivery. That’s the role of KAN, the KubeAI Application Nucleus. As the introductory blog post notes, the name comes from a Mandarin verb that translates as “to watch” or “to see.” KAN is an open-source project, hosted on GitHub.

The goal is to provide an environment designed to deliver machine learning solutions at scale. That’s key in working with the industrial internet of things, where sites may have hundreds or thousands of devices that can benefit from an infusion of AI services, for example turning all the security cameras into a safety solution that monitors risky situations around machines or in warehouses.

KAN is designed to run your code on edge hardware, aggregating information from local connected devices and using pre-trained machine learning models to gain insights from them. At the same time KAN provides a monitoring and management portal and a low-code development environment that run on on-prem or in-cloud Kubernetes systems.

It’s important to note that the KAN management portal does not serve as the endpoint for your data; that will be your own applications or services, running where you need them. Note too that while you don’t need to host KAN in Azure, doing so will enable deeper integration with Azure Edge and AI services such as Azure IoT Hub and Azure Cognitive Services.

Getting started with KAN

To get started all you need is a Kubernetes cluster with Helm support. If you’re using Azure, KAN works with managed AKS, which makes it simpler to set up without requiring additional Kubernetes platform engineering resources. Installation needs a bash shell, as it uses wget to download and run an installation script from the KAN GitHub repository. You’ll be prompted to indicate whether or not you’re using Azure.

The installation script walks you through the steps needed to set up KAN, from choosing a Kubernetes configuration, to adding storage support, to, if using Azure, connecting to a Cognitive Services subscription (either working with an existing one or creating a new one). Working outside of Azure skips a lot of steps, but both paths end up at the same place: a URL that points to the KAN portal. As the installer uses Helm, you can also use Helm to uninstall both the portal and KAN itself.

Once KAN is installed you can start working with the KAN portal to build your first application. You’ll need to start by attaching a compute device. There’s a range of options, from NVIDIA edge hardware to Microsoft’s own Azure Stack Edge. Devices can be running on Kubernetes clusters, or they can be Azure IoT Edge devices. Usefully, KAN supports Microsoft’s recently released AKS Edge Essentials minimal Kubernetes environment, which should host single-container models relatively easily.

You can use Azure VMs as test devices, allowing you to build your own digital twins that can be used to ensure your edge systems are running as expected. Cameras need to support RTSP, but that includes most off-the-shelf industrial IP cameras. KAN supports a many-to-many model for processing, so feeds from one camera can be processed by more than one application, or one application can work with feeds from several cameras, with the portal able to sample views from your feeds so you can debug applications visually.
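To make that processing model concrete, here is a minimal Python sketch of the kind of RTSP ingestion and fan-out loop that KAN manages for you through the portal. It assumes OpenCV is installed; the camera URL and the two consumer functions are hypothetical placeholders, not part of KAN itself:

```python
# A minimal sketch of RTSP ingestion with one feed fanned out to several
# consumers, mirroring KAN's many-to-many camera/application model.
# The camera URL is a hypothetical placeholder; any RTSP camera will do.
import cv2  # pip install opencv-python

CAMERA_URL = "rtsp://192.168.1.20:554/stream1"  # hypothetical address

def detect_people(frame):
    pass  # stand-in: run a person-detection model here

def check_safety_zone(frame):
    pass  # stand-in: run a second model on the same frame

consumers = [detect_people, check_safety_zone]

cap = cv2.VideoCapture(CAMERA_URL)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # reconnection logic omitted for brevity
        for consumer in consumers:
            consumer(frame)
finally:
    cap.release()
```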

Building a machine learning application with KAN

Using Azure IoT Hub can save you a lot of time, as you can manage your devices through it. Start by selecting the device architecture, as well as any available acceleration technologies. Microsoft recommends using KAN with accelerated devices, usually a GPU, though there is support for NVIDIA and Intel NPUs. Acceleration helps with larger, more accurate models, allowing them to run on limited hardware, which makes them key for safety-critical edge applications.
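KAN derives the acceleration setting from the device profile you select, but as an illustration of what that choice drives, here is a hedged sketch using ONNX Runtime, one common edge inference runtime (not necessarily what KAN uses internally), to prefer whatever accelerator a device actually has. The model filename is a hypothetical placeholder:

```python
# Illustrative only: picking the best available accelerator at model-load time.
# KAN makes this choice from the device's declared architecture/acceleration;
# the model file name here is a hypothetical placeholder.
import onnxruntime as ort

preferred = [
    "TensorrtExecutionProvider",   # NVIDIA TensorRT, if present
    "CUDAExecutionProvider",       # NVIDIA GPU
    "OpenVINOExecutionProvider",   # Intel accelerators via OpenVINO
    "CPUExecutionProvider",        # always-available fallback
]

# Keep only the providers compiled into this onnxruntime build.
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("person-detector.onnx", providers=providers)
print("Running on:", session.get_providers()[0])
```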

While KAN was designed for building custom models, trained on your own data, there are prebuilt options for some common cases, and planned support for OpenVINO’s Model Zoo. KAN takes your models and gives you a node-based graphical design tool to build what it calls “AI skills.” Here you can attach inputs from IP cameras to models, then add nodes that transform and filter outputs before exporting raw data and transformed results to other applications and services.

This approach lets you, for example, use a vision model to detect whether someone is getting too close to a machine, filtering its output so that only specific labelled data (identifying an object as a person) is exported. You can even chain different recognition models together, allowing one to refine the results of another. For example, a camera on a production line could use one model to classify products as flawed, and another model to determine if those flaws can be reworked. By passing only classified data to the second, more complex model, you can ensure that you’re using only as much compute as necessary.
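As a sketch of that filter-and-cascade pattern, the same logic KAN’s skill graph expresses with nodes rather than code, consider the following, where both model functions are hypothetical stand-ins:

```python
# A sketch of the filter-then-cascade pattern from the production-line example.
# detect_flaws() and assess_rework() are hypothetical stand-ins for real models.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "flawed", "ok"
    confidence: float
    crop: object      # image region passed to downstream models

def detect_flaws(frame):
    # Stand-in: a cheap classifier that flags possibly flawed products.
    return []

def assess_rework(detection):
    # Stand-in: a larger model that decides whether a flaw is reworkable.
    return False

def skill(frame):
    # Filter node: only pass detections labelled "flawed" with usable confidence.
    flawed = [d for d in detect_flaws(frame)
              if d.label == "flawed" and d.confidence > 0.6]
    # Cascade: the expensive model only sees what the cheap one flagged,
    # so compute scales with the flaw rate, not the frame rate.
    return [(d, assess_rework(d)) for d in flawed]
```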

Once your application is built and tested, you can package and deploy it to target devices from the KAN portal. Deployments link devices and cameras, though currently you’re only able to deploy to one device at a time. If KAN is to work at scale, it needs to support deployments to many devices, so you can use it to manage entire estates. Still, KAN certainly simplifies delivering machine learning applications to Kubernetes systems or to Microsoft’s Azure IoT Edge runtime container host, and gives you a single place to view all of your deployments. If that requires deploying to edge devices individually, it’s still a lot easier than managing manual deployments.

Learning from Azure Percept

There’s a lot here that’s reminiscent of the now-cancelled, pre-packaged Azure Percept hardware and software solution. Both were intended to simplify deploying edge AI solutions, with a focus on practical implementations, and both have relatively low-code tooling for building and deploying machine learning applications. While Percept didn’t get off the ground, KAN appears to be taking lessons from the Percept developer experience. I found the Percept low-code approach to building computer vision applications well thought out, mixing concepts from IoT tooling, like the visual programming Node-RED environment, with familiar Power Platform-like features. So it’s good to see those ideas returning via KAN.

It will be interesting to see how KAN evolves. Managing edge software is still too complicated, and tools like this go a long way to providing necessary simplification, ensuring that we can build code quickly and test and deploy it at the same velocity. There are many problems out there that machine learning at the edge can solve, and KAN could be the tool we need to both experiment and work at scale.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of “career” as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
