The next stage of cloud computing brings computing power closer to users, paving the way for better user experiences and more intelligent applications.

Businesses are always trying to improve the reliability and performance of their software while reducing their own costs. Edge computing is one strategy that accomplishes both goals at once. According to Gartner, only 10% of data today is created and processed outside of traditional data centers. By 2025, that share is projected to grow to 75%, driven by the rapid expansion of the internet of things (IoT) and the increasing processing power available on embedded and mobile devices. McKinsey has identified more than 100 edge computing use cases and projects that edge computing will create around $200 billion in hardware value over the next five to seven years.

What is edge computing?

When developers hear the term "edge computing," many think it applies only to IoT-type applications, but the edge is relevant to all software engineers. The simplest way to think of edge computing is as computing that happens closest to the origin of the data being processed. Additionally, because an "edge" must be the edge of something, the edge is usually defined with respect to a central hub, i.e., a cloud. By this definition, any software deployed across multiple data centers could be considered a form of edge computing, as long as there is a central component. Content delivery networks (CDNs) are an early form of edge software, with companies originally using them to serve static content from locations closer to their users. The rise of CDNs has made it easier to roll out your entire application as close to your users as possible.
The next stage of cloud computing brings computing power even closer, pushing workloads that previously ran in data centers directly onto user devices and making deployment of software to remote edge locations as seamless as deploying to the cloud. Two examples of this in action:

Machine learning. Apple's Core ML and Google's TensorFlow Lite allow machine learning models to be created and run on mobile devices rather than requiring a round trip to a data center for AI-powered features. This not only improves the experience for the user but also reduces bandwidth and hardware costs for companies.

Serverless edge computing. Cloudflare Workers and AWS Lambda@Edge allow developers to push functionality to 250-plus points of presence (PoPs) with ease. This type of edge computing opens up many new architecture options for developers while removing much of the complexity traditionally associated with edge deployments.

Benefits of edge computing

The primary benefit of edge computing is a better experience for users: greater reliability, lower latency, and potentially better privacy, because more of their data stays on the device or on the local network. For businesses, there are several benefits to adopting edge computing. First is the potential for cost savings, both by offloading processing to smaller edge devices and by using less bandwidth when moving data to the cloud. Serverless edge computing platforms also give you more fine-grained control over resource consumption. Edge computing can also make it easier to comply with security regulations by keeping data on location while still providing all of the features expected of modern cloud-based software. Even for consumer products, moving more features directly onto the user's device can benefit a business by attracting privacy-minded customers who want to own their data.
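To make the serverless edge pattern concrete, here is a minimal sketch of the kind of function you might deploy with AWS Lambda@Edge using its Python runtime. The event shape follows CloudFront's viewer-request format; the country list and the `eu.example.com` hostname are hypothetical, purely for illustration.

```python
# Sketch of a Lambda@Edge-style viewer-request handler (Python runtime).
# CloudFront invokes this at the PoP nearest the user, so the redirect
# decision happens at the edge, without a round trip to the origin.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront can inject the viewer's country when the distribution
    # forwards the CloudFront-Viewer-Country header; default to "US"
    # here if it is absent (illustrative fallback).
    country = headers.get("cloudfront-viewer-country", [{"value": "US"}])[0]["value"]

    # Route some EU visitors to a region-local variant of the site
    # (hypothetical country list and hostname).
    if country in {"DE", "FR", "NL"}:
        return {
            "status": "302",
            "statusDescription": "Found",
            "headers": {
                "location": [
                    {"key": "Location",
                     "value": "https://eu.example.com" + request["uri"]}
                ]
            },
        }

    # Everyone else: pass the request through to the origin unchanged.
    return request
```

A Cloudflare Worker would express the same idea in JavaScript or TypeScript, but the design is identical: a small, stateless function deployed to every PoP, deciding per request whether to answer locally or forward to the central origin.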
Data at the edge

One challenge with edge computing is striking the right balance between keeping high-granularity data, which gives you full insight into your application, and the cost of transferring and storing that data in the cloud. Edge computing can give developers the best of both worlds. At the edge, you can store the granular data needed to monitor software or hardware for potential operational issues. That data can then be downsampled into a less dense data set and moved from the edge to the cloud, where the company at large can use it for higher-level analysis.

Many companies have built custom solutions to manage the lifecycle of their data and move it from the edge of the network to their cloud data store. One way to simplify this process is to use a solution such as InfluxDB's Edge Data Replication, which makes it easy to collect and monitor your time series data at the edge while keeping it available in the cloud for long-term analysis. InfluxDB takes care of many of the challenges associated with edge computing, including lost network connectivity, systems integration, and the numerous other edge cases involved. By abstracting these problems away, developers can focus on the features that are crucial to their product rather than on implementation details.

How companies use InfluxDB at the edge

Many companies actively use InfluxDB at the edge as a core part of their infrastructure. Prescient Devices provides an edge computing development platform built on Node-RED that makes it easy for companies to start taking advantage of edge computing. Prescient Devices uses InfluxDB as a local data store for devices at the edge and as part of its platform in the cloud. Graphite Energy is another company that uses InfluxDB both at the edge and in the cloud.
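InfluxDB's Edge Data Replication handles this movement for you, but the underlying downsampling idea is simple to illustrate. Below is a pure-Python sketch (not InfluxDB's implementation) that collapses raw `(timestamp, value)` readings into per-window means, the kind of less dense data set you would ship from the edge to the cloud; the function name and 60-second window are illustrative.

```python
# Sketch of edge-side downsampling: keep raw, high-granularity readings
# locally, and ship only windowed aggregates to the cloud.
# Timestamps are epoch seconds; `window` is the aggregation interval.

def downsample(points, window=60):
    """Collapse (timestamp, value) points into per-window means."""
    buckets = {}
    for ts, value in points:
        bucket = ts - (ts % window)  # start of the window this point falls in
        total, count = buckets.get(bucket, (0.0, 0))
        buckets[bucket] = (total + value, count + 1)
    # One (window_start, mean) pair per window, in time order.
    return [(b, total / count) for b, (total, count) in sorted(buckets.items())]
```

For example, with a 60-second window, readings at t=0 and t=30 collapse into a single averaged point: `downsample([(0, 10.0), (30, 20.0), (60, 30.0)])` yields `[(0, 15.0), (60, 30.0)]`. The raw points stay at the edge for operational monitoring; only the aggregates cross the network.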
Graphite Energy addresses the problem of variable-rate renewable energy by converting solar and wind energy into steam, which can then be used to generate electricity reliably at the levels needed for manufacturing. This is a critical problem to solve as we move away from fossil fuels and toward renewable energy. Using InfluxDB, Graphite Energy monitors its infrastructure at the edge and can take action quickly when needed. The company then sends lower-granularity data to the cloud and examines the aggregated data for trends that can drive long-term business decisions.

There are a huge number of ways that the edge and the cloud can be combined to build modern applications. The key is to be aware of how the ecosystem is developing and to understand the strengths of the edge and cloud options available. This will allow you to design your application in a way that takes best advantage of both and best meets the needs of your customers and your business.

Sam Dillard is senior product manager for edge computing at InfluxData. He is passionate about building software that solves real problems and the research that uncovers these problems. Sam has a BS in Economics from Santa Clara University.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.