In the coming year, organizations will seek to simplify, optimize, and consolidate observability through a mix of new tools and practices.

The importance of observability has been well established, with organizations relying on metrics, logs, and traces to help detect, diagnose, and isolate problems in their environments. But, like most things in IT, observability continues to evolve rapidly, both in terms of how people define it and how they work to improve observability in practice.

I’ve argued in the past that observability is, at its core, a data analytics problem. The formal definition of observability tends to center on the external outputs of IT systems. I use a slightly broader definition: “The capability to allow a human to ask and answer questions about the system.” I like this definition because it suggests that observability should be incorporated as part of the system design (rather than being bolted on as an afterthought) and because it underscores the need for engineers and system administrators to bring an analytics mindset to the challenge of enabling observability.

In the coming year, look for organizations to deepen and diversify their telemetry data usage, while consolidating their tooling, in an attempt to level up their observability. Technologies such as eBPF and OpenTelemetry will lower the barrier to entry for instrumentation, and matured data analytics practices will enable IT and DevOps teams to identify and respond to issues more quickly and effectively.

Broader adoption of distributed tracing

Many IT and business leaders still don’t realize just how much potential distributed tracing holds, and this represents a huge missed opportunity in the quest to optimize observability. However, the next year is likely to see a significant uptick in adoption. As more organizations migrate their workloads to cloud-native and microservices architectures, distributed tracing will become more prevalent as a means to pinpoint where failures occur and what causes poor performance. Our recent DevOps Pulse survey shows a 38 percent year-over-year increase in organizations’ use of tracing, and 64 percent of respondents who are not yet using tracing said they planned to implement it within the next two years.

Distributed tracing can open up a whole new world of observability into numerous processes beyond IT monitoring, in areas as diverse as developer experience, business, and FinOps. Distributed tracing relies on instrumenting applications with the mechanics of propagating context when executing requests. That same context propagation mechanism can easily serve many other purposes, such as tracking resource attribution or capacity planning information per product line or per customer account.

Data privacy compliance is another extremely useful application of distributed tracing. In light of compliance regulations such as GDPR and CCPA, data privacy is a huge priority, and this challenge is exacerbated by the fact that low-level storage is often unaware of user context. By propagating user IDs down to the data storage tiers, distributed tracing can help organizations better enforce their data privacy policies.
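To make that idea concrete, here is a minimal sketch using the OpenTelemetry Python API: a customer ID is carried as baggage alongside the trace context and stamped onto each span, which is the kind of plumbing that per-customer resource attribution or data privacy enforcement can build on. The “customer.id” key, the service logic, and the console exporter are illustrative assumptions, not a prescribed setup.

```python
# Illustrative sketch: propagate a customer ID via OpenTelemetry baggage
# and copy it onto spans. Assumes the opentelemetry-api and
# opentelemetry-sdk packages; the "customer.id" key is hypothetical.
from opentelemetry import baggage, context, trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("tracing-demo")

def handle_request(customer_id: str) -> None:
    # Store the customer ID in baggage; OpenTelemetry propagators carry
    # baggage across process boundaries together with the trace context.
    token = context.attach(baggage.set_baggage("customer.id", customer_id))
    try:
        with tracer.start_as_current_span("handle_request") as span:
            # Stamp the baggage entry onto the span so a backend can
            # aggregate usage, or audit data access, per customer.
            span.set_attribute("customer.id", str(baggage.get_baggage("customer.id")))
            # ... business logic, downstream calls, storage access ...
    finally:
        context.detach(token)

handle_request("acme-42")
```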
Movement beyond the ‘three pillars’ of observability

Discussions about observability often begin and end with what have come to be called the “three pillars of observability”: metrics, logs, and traces. Metrics help to detect problems and let DevOps or site reliability engineers understand what has happened. Logs, then, help to diagnose issues, providing the “why” behind the “what.” Finally, traces help engineers to pinpoint and isolate issues by indicating where they happened within distributed requests and elaborate microservice graphs.

These three pillars continue to be critically important. But it’s important not to be confined by the “three pillars” paradigm, and to choose the right telemetry data for your needs. In the coming year, I expect we’ll see more organizations embrace additional types of observability signals, including events and continuous profiling.

It is also important to remember that the “three pillars,” or any other telemetry signals for that matter, are just the raw data. As I wrote above, I firmly believe that observability is a data analytics problem, and as such it is about proactively extracting insights out of that raw data, much as BI analysts do with business data.

In December, I interviewed Frederic Branczyk, the founder of Polar Signals and a passionate advocate for observability. He shared the gap he sees in companies today: “We pretend in our observability bubble that everybody has well-instrumented applications with distributed tracing and structured logging. But the reality is, when I look at a typical startup, they may not even be monitoring at all. They’re waiting for their customers to tell them something is wrong before they start investigating.”

More momentum behind eBPF

Extended Berkeley Packet Filter, or eBPF, is a technology that allows programs to run in the operating system’s kernel space without changing the kernel source code or adding kernel modules. Currently, observability practice is largely based on manual instrumentation, which requires adding code at relevant points to generate telemetry data. This often presents a significant barrier, and can even prevent some organizations from implementing observability at all. While auto-instrumentation agents do exist, they tend to be tailored to specific programming languages and frameworks. eBPF, by contrast, allows organizations to embrace no-code instrumentation across their entire software stack, right from the OS kernel level, providing easier observability into their Kubernetes environments and offering benefits around networking and security.

Because eBPF works across different types of traffic, it helps organizations meet their goal of unified observability. For instance, DevOps engineers might use eBPF to collect full-body request traces, database queries, HTTP requests, or gRPC streams. They can also use eBPF to collect resource utilization metrics, such as CPU usage or bytes sent, allowing the organization to calculate relevant statistics and profile their data to understand the resource consumption of various functions. Additionally, eBPF can handle encrypted traffic. Netflix recently published a blog post about how the company is using eBPF to capture network insights. According to the company, the use of eBPF has been highly efficient, consuming less than one percent of CPU and memory on any instance.
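As a taste of what no-code instrumentation looks like in practice, here is a small sketch using the BCC Python bindings for eBPF. It attaches a kprobe to the read() syscall and counts calls per process entirely in kernel space, with no changes to the applications being observed. The ten-second window and the choice of syscall are arbitrary assumptions, and running it requires root privileges plus the BCC toolchain and kernel headers.

```python
# Illustrative sketch of no-code instrumentation with eBPF, using the
# BCC Python bindings (https://github.com/iovisor/bcc). Counts read()
# syscalls per PID in kernel space; no application changes needed.
import time
from bcc import BPF

prog = r"""
BPF_HASH(counts, u32, u64);

int on_read(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;   // upper 32 bits = PID
    u64 zero = 0, *val = counts.lookup_or_try_init(&pid, &zero);
    if (val) {
        (*val)++;                                 // aggregate in the kernel
    }
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("read"), fn_name="on_read")

time.sleep(10)  # let the in-kernel counters accumulate

# Read the shared map from user space and print per-PID call counts.
for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value):
    print(f"pid={pid.value:<8} read() calls={count.value}")
```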
Unification of siloed tools

As observability matures, organizations will increasingly look to holistic observability platforms, favoring these integrated solutions over the more siloed tools they have used in the past. Compared to stand-alone observability tools, these more holistic platforms can better position developers, DevOps, and SREs to address querying, visualization, and correlation across all of their different telemetry signal types and sources.

We saw this unification trend in the past year, with major vendors such as Grafana Labs, Datadog, AppDynamics, and my company Logz.io coming out of their respective specialty domains (log analytics, infrastructure monitoring, APM, and others) and expanding into more comprehensive observability offerings. We’ll see this trend accelerate in 2022, adapting to changing observability needs and reshaping the competitive landscape.

Continued adoption of open source tools and standards

The open source community created Kubernetes (and, essentially, the entire concept of “cloud native”). This same community is now delivering open source tools and standards to monitor these environments. New open standards like OpenMetrics and OpenTelemetry will mature, becoming de facto industry standards in the process. In fact, OpenMetrics may be adopted this coming year as a formal standard by the IETF, the premier internet standards organization.

The rise of open source tools not only provides companies with additional options for enabling observability, but also prevents the vendor lock-in that has historically plagued some corners of the IT industry. At the moment, the open source landscape for observability is quite dynamic, with a number of important projects emerging in just the past couple of years. It can sometimes be difficult for DevOps engineers and system administrators to keep these solutions straight (especially because many have adopted the naming convention of “OpenSomething”), but they are beginning to converge. Each day in 2022, we will move closer to something resembling open source standardization – and closer to the ideal of unified observability.
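To ground the open standards point, here is a minimal sketch using the official Prometheus Python client, whose metrics endpoint can also serve the OpenMetrics exposition format through content negotiation. The metric names, labels, and simulated workload are illustrative assumptions.

```python
# Illustrative sketch with the prometheus-client package: expose a
# counter and a histogram over HTTP, in a format any Prometheus- or
# OpenMetrics-compatible scraper can consume. Names are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter(
    "app_requests_total", "Total requests handled", ["route", "status"]
)
LATENCY = Histogram(
    "app_request_duration_seconds", "Request latency in seconds"
)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        with LATENCY.time():                 # observe how long the "work" takes
            time.sleep(random.uniform(0.01, 0.1))
        REQUESTS.labels(route="/checkout", status="200").inc()
```

Pointing a Prometheus server (or any OpenMetrics-compatible scraper) at port 8000 picks up these series without any vendor-specific agent, which is exactly the lock-in-avoidance argument made above.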
Dotan Horovits is principal technology evangelist at Logz.io. With more than 20 years in the high-tech industry as a software developer, a solutions architect, and a product manager, he brings a wealth of knowledge in cloud computing, big data solutions, DevOps practices, and more. Dotan is an avid advocate of open source software, open standards, and open communities. He organizes the local chapter of the Cloud Native Computing Foundation (CNCF) in Tel-Aviv and runs the OpenObservability Talks podcast, among other activities.

—

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.