New evidence suggests that mobile device users prefer the processing and data to not reside on their devices. Is this a new best practice?

I’m a big fan of IEEE’s coverage of the emerging cloud computing space. The technical depth of the articles won’t tempt the average IT reader to subscribe, but I like their focus on new innovations, followed by the detailed solutions that prove those innovations (sometimes in too much detail).

I recently came across an article titled “Energy-Efficient Decision Making for Mobile Cloud Offloading.” It reminded me that mobile computing devices have coexisted with clouds for more than 10 years, yet we have no stake in the ground, no best practice, for tiering processing and data storage between mobile devices and the cloud. Perhaps it’s time.

The article states that mobile cloud computing combines the advantages of public clouds with the advantages of mobile “terminals.” The word terminal once referred to a dumb device that presented and consumed information but did no processing. The analogy still fits, considering that we plan to push as much processing and data storage as possible into public clouds.

The miniaturization of technology and its ready availability at lower prices make it practical to keep some processing and storage capabilities on mobile devices. Mobile devices thus become “smart” terminals, although the best practice is still to push as much as you can to a remote, cloud-based system.

What the article calls offloading is something mobile applications have dealt with for years; it’s common to question where processing and storage should live. There’s an argument for keeping both on the mobile device, where interactions with the user can approach zero latency. Of course, there’s a trade-off: when you place most of the processing and storage on the device, you degrade the device’s performance.
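The offloading trade-off can be made concrete with a back-of-the-envelope energy comparison: run the work locally when transmitting the input data would cost more energy than the computation itself, and offload when the reverse is true. Here is a minimal sketch in Python; the linear energy model and every parameter name and default value are my own illustrative assumptions, not figures from the IEEE article.

```python
# Hypothetical offload-or-not decision, illustrating the energy trade-off
# described above. The linear model and all defaults are assumptions for
# illustration only, not values from the cited paper.

def should_offload(cycles, data_bytes, *, cpu_hz=1.5e9, local_power_w=2.0,
                   bandwidth_bps=5e6, tx_power_w=1.0):
    """Return True if offloading is estimated to save energy.

    Local cost:   time to compute on the device times device CPU power.
    Offload cost: time to transmit the input data times radio power.
    """
    local_energy = (cycles / cpu_hz) * local_power_w          # joules
    offload_energy = (data_bytes * 8 / bandwidth_bps) * tx_power_w  # joules
    return offload_energy < local_energy
```

Under this toy model, a heavy computation on a small input (say, 3 billion CPU cycles over 100 KB of data) favors the cloud, while a light computation on a large input (1 million cycles over 10 MB) is cheaper to keep on the device, since shipping the data would dominate the cost.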
The additional processing also depletes the battery faster. As the article points out, cloud-based storage and processing can bring many benefits, including energy savings, improved performance, and increased reliability. It’s also more convenient for mobile application developers to access programs and data centrally.

A truly “dumb” mobile terminal would include no local processing or data. For cloud-based mobile applications and data, the dependency on the back end means that if the cloud side is unavailable, your mobile application is dead in the water. Today’s mobile devices are a hybrid of dumb and smart terminals. Those who fly a great deal without Wi-Fi, or who travel in parts of the country with weak or no cellular signal, have already discovered which apps continue to function without live updates and which simply won’t work.

Offloading data and processing to the cloud has been an emerging best practice for years as network connections become faster and more reliable (witness the move to 5G). Public clouds are now the preferred platform for centralizing data and processing. This architecture has so many advantages that it’s the uncontested future of computing.