Let’s look at reasons to push core AI processes out of the centralized public cloud to the edge.

It’s a beautiful Sunday, and you’re out for a ride on your motorcycle. About 10 minutes into your ride, you hear a chatbot in your Bluetooth-enabled helmet telling you that your blood pressure is 40% above normal and your front brakes are significantly above normal temperature. Both conditions increase the risk of an accident by 45%, says the disembodied voice in your ear. You’re also told that, via permissions you set, your doctor will be notified of the issues with your blood pressure and sent a history of readings. An appointment has been made with your motorcycle dealer to check the overheating front brakes.

Perhaps the artificial intelligence (AI) doesn’t save your life directly, but it significantly reduces the risk of your being injured or killed during your ride.

This motorcycle scenario features an AI-enabled edge device. Although some of the AI and data processing takes place on the bike, some occurs within centralized public cloud systems as well. In our example, the diagnoses of high blood pressure and hot brakes would occur on the bike, but processing what those readings mean in the context of the history of the data gathered would occur within more powerful AI and data processing resources in the cloud.

This tiering approach (edge and centralized tiers) lets the designers of the system better deal with connectivity issues and provide more responsiveness, since the processing and AI happen close to where the data is gathered. At the same time, powerful back-end systems can do much more than the small, cheap device strapped under your motorcycle seat.

Taking a more pragmatic approach to the intelligent edge, and to most new technology trends for that matter, means weighing the upsides and the downsides of leveraging the edge. Whether you’re implementing motorcycle safety systems, aircraft management, or remote factory optimization, the same concepts apply.

First, the bad news: Intelligent edge systems, while valuable as a business solution for many applications, are difficult and costly to operate. Centralized storage and compute services are easy to operate because they live in a logically and physically centralized location, such as a public cloud. You’re typically operating one knowledge engine, one database, and one analytics engine. The intelligent edge means we’re dealing with tens of thousands of devices, perhaps even a million. These remote devices need fixes and patches applied and operating systems updated, and they must be monitored remotely to ensure reliability and uptime. The best example of this is the updates sent to our phones. Although billion-dollar smartphone companies can make this investment, most businesses will find the cost of operating intelligent edge systems prohibitive. Specialized operational systems must be in place to deal with remote intelligent devices that could be anywhere and connected in a thousand different ways.

Now the good news: The intelligent edge solves problems, including the one depicted above, that take the use of cloud computing to the next level. We’ve long understood that connectivity and data latency are the Achilles’ heel of the public cloud.
In leveraging the edge, intelligent or not, we now understand that a tiering architecture can focus AI and data analytics technology on the data, doing most of the important processing near where the data is gathered, where it can do the most good.

In the motorcycle scenario, what if the crash avoidance system had to transmit data through your mobile phone to a set of storage and compute services on a public cloud before crash avoidance measures could be taken? You would have received the response 10 seconds after you hit the tree.

There is a reason intelligent edge architecture is a good thing.
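To make the tiering pattern concrete, here is a minimal sketch in Python. Everything in it is hypothetical rather than drawn from any real product: the SensorReading fields, the alert thresholds, and the CLOUD_INGEST_URL endpoint are assumptions for illustration. The point is the split itself: the time-critical check runs entirely on the device, and the accumulated history is batched up to the centralized tier whenever connectivity allows.

```python
# Minimal sketch of the two-tier edge/cloud pattern described above.
# All names (SensorReading, the thresholds, CLOUD_INGEST_URL) are
# hypothetical assumptions for illustration, not from a real product.

import json
import time
from dataclasses import dataclass, asdict
from urllib import request

# Hypothetical centralized-tier endpoint that accepts batched sensor
# history for the heavier, context-aware analysis.
CLOUD_INGEST_URL = "https://example.com/api/v1/telemetry"

# Edge-tier limits: simple, fast checks that must run locally, because
# a round trip to the cloud is too slow for a safety alert.
BRAKE_TEMP_LIMIT_C = 350.0
BP_SYSTOLIC_LIMIT = 160

@dataclass
class SensorReading:
    timestamp: float
    brake_temp_c: float
    bp_systolic: int

def edge_check(reading: SensorReading) -> list[str]:
    """Runs on the device. Returns alerts with no network dependency."""
    alerts = []
    if reading.brake_temp_c > BRAKE_TEMP_LIMIT_C:
        alerts.append("Front brake temperature significantly above normal")
    if reading.bp_systolic > BP_SYSTOLIC_LIMIT:
        alerts.append("Blood pressure above normal")
    return alerts

def sync_to_cloud(history: list[SensorReading]) -> None:
    """Ships accumulated history to the centralized tier in one batch.
    Failures are tolerated; the edge tier keeps working offline."""
    payload = json.dumps([asdict(r) for r in history]).encode()
    req = request.Request(CLOUD_INGEST_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        request.urlopen(req, timeout=5)
    except OSError:
        pass  # connectivity at the edge is unreliable; retry next cycle

if __name__ == "__main__":
    history: list[SensorReading] = []
    reading = SensorReading(time.time(), brake_temp_c=410.0, bp_systolic=175)
    for alert in edge_check(reading):  # immediate, local decision
        print(f"ALERT: {alert}")
    history.append(reading)
    if len(history) >= 1:  # batch threshold; 1 just for this demo
        sync_to_cloud(history)
        history.clear()
```

Note that edge_check has no network dependency at all. That is the design choice that keeps the crash-avoidance response from arriving 10 seconds after you hit the tree, while the heavier, history-aware analysis still happens on the more powerful centralized tier.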