Granular visibility can help enterprises keep cloud costs in check. Follow these best practices when using monitoring methods to control Kubernetes-related spending.

It can be all too easy to let Kubernetes-related cloud costs slip out of control, and for many enterprises that's exactly what's happening. Programmatic resource provisioning and access to high-cost resources like GPUs are just two of the factors that balloon budgets without a conscious effort to temper expenses. And as enterprises continue to scale their use of Kubernetes, every small bug and cost inefficiency scales in lockstep.

The answer lies in visibility and ownership. Enterprises need to see where and how they are spending with enough granularity to enact change when needed, and they need to cultivate a culture of cost responsibility and accountability that touches engineering and finance teams alike. In many cases, simply making engineering teams aware of their Kubernetes spending is enough to drive meaningfully more efficient spending. More mindful Kubernetes utilization also leads to more streamlined, productive, and secure environments, in addition to cost savings.

Enterprises should understand that they have four methods for Kubernetes cost monitoring at their disposal, each best suited to particular use cases:

Limited cost monitoring. Under this method, a centralized team or teams (often finance or devops) receive the monthly Kubernetes bill and then address unnecessary costs and any contributing issues. Organizations with small application engineering teams and less advanced environments are the best fit for this method. Those with larger, multi-tenant environments need a more robust approach.

Showbacks. The showback method introduces detailed cost breakdowns of Kubernetes and cloud spending for each team across the organization. Each team receives this accurate cost data so it can better understand and more proactively manage its spending responsibilities (a minimal sketch of such a per-namespace breakdown follows these four methods). Showbacks are ideal for organizations with three or more application engineering teams and 20-plus engineers.

Chargebacks. Chargebacks are showbacks with teeth: teams must pay from their own budgets to cover the Kubernetes and cloud costs they create. This method is best suited to the same larger organizations as showbacks. For a chargeback approach to succeed, though, enterprises must commit to the culture of chargebacks and agree that controlling these costs is a crucial shared goal they are capable of achieving.

Limit-set cost monitoring. This approach requires teams to pay from their budgets when their resource costs go beyond set spending limits or, in some cases, to pay from their budgets for selected resources only. As with chargebacks, the company culture must be on board for this method to thrive.
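Showback tooling normally draws on reconciled billing data from the cloud provider or a dedicated cost product, but the basic idea is straightforward to sketch. The Python snippet below is a minimal illustration, not a production allocator: it uses the official Kubernetes Python client to total CPU and memory requests per namespace and prices them with placeholder rates (the CPU_CORE_HOUR and MEM_GIB_HOUR values and the request-based pricing model are assumptions for this sketch). It ignores idle capacity, shared services, GPUs, storage, and network, all of which a real showback must allocate.

```python
# Minimal per-namespace showback sketch: sums CPU and memory requests of
# running pods and prices them with assumed unit rates. Real showbacks should
# be driven by actual billing data or a dedicated cost tool.
from collections import defaultdict
from kubernetes import client, config

# Hypothetical unit prices; replace with your cluster's real blended rates.
CPU_CORE_HOUR = 0.031   # USD per vCPU-hour (assumption)
MEM_GIB_HOUR = 0.004    # USD per GiB-hour (assumption)
HOURS_PER_MONTH = 730

def parse_cpu(value: str) -> float:
    """Convert Kubernetes CPU quantities ('500m', '2') to cores."""
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

def parse_mem_gib(value: str) -> float:
    """Convert memory quantities ('512Mi', '2Gi') to GiB (common suffixes only)."""
    units = {"Ki": 1 / (1024 ** 2), "Mi": 1 / 1024, "Gi": 1.0, "Ti": 1024.0}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value) / (1024 ** 3)  # plain bytes

def showback_by_namespace() -> dict:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    monthly_cost = defaultdict(float)
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase != "Running":
            continue
        for container in pod.spec.containers:
            requests = container.resources.requests or {}
            cpu = parse_cpu(requests.get("cpu", "0"))
            mem = parse_mem_gib(requests.get("memory", "0"))
            monthly_cost[pod.metadata.namespace] += (
                cpu * CPU_CORE_HOUR + mem * MEM_GIB_HOUR
            ) * HOURS_PER_MONTH
    return dict(monthly_cost)

if __name__ == "__main__":
    for namespace, cost in sorted(showback_by_namespace().items(), key=lambda x: -x[1]):
        print(f"{namespace:<30} ${cost:>10.2f} / month (requested capacity)")
```

Even a rough report like this is often enough to start the conversations that showbacks are meant to trigger; chargebacks, by contrast, need billing-grade accuracy behind them.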
Whatever method an organization uses, Kubernetes cost controls will fail if their implementation is too abrupt, perceived as unfair, or poorly managed. To gain the trust, cooperation, and organization-wide buy-in you'll need for your Kubernetes cost controls to succeed, follow these five best practices.

Build up to a chargeback strategy, rather than trying to impose one overnight. Teams often get sticker shock at their first spending reviews and need time to get a handle on why certain costs are occurring and how to change practices to reduce them. Putting them on the hook for the bill immediately, before they have time to deliberate and draw up careful spending-reduction plans, will only lead to panic, poor decisions, and mounting resentment from team leaders. Starting with limited cost monitoring or showbacks lets teams ease into cost responsibility and provides fair warning of the bills that are coming.

Make cost allocations fair and transparent. Teams need total trust in the cost metrics they're held responsible for. Without careful curation, however, costs in Kubernetes' distributed system aren't so cut and dried. To build buy-in, use transparent cost allocation models that ensure those metrics are reproducible, audited, and verified. Also be sure to provide teams with actionable data and make clear the role they play in getting overspending under control.

Take care with idle resources, whose costs usually fall to the team making cluster-level provisioning decisions. System-wide and shared resources also require watchful allocation. Assigning costs by namespace is a particularly powerful way to delineate spending responsibilities. Ideally, assign costs based on the maximum of a team's resource requests and actual usage, but only if the team controls those settings (which keeps the allocation fair). Similarly, find fair approaches for handling high-cost one-off jobs, such as research projects.

Make ownership of each resource crystal clear. An admission controller combined with an "escalation approach" can clarify each resource's owner: define an owner label at the deployment, namespace, and cluster levels, establishing an escalation path in case of issues. To enforce those labels, use Open Policy Agent (OPA) or an admission controller webhook.
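As a concrete illustration of the webhook option, here is a minimal validating admission webhook, written in Python with Flask, that rejects any Deployment missing an owner label. The label key, port, and endpoint path are assumptions for the sketch; the same rule is commonly written declaratively as an OPA Gatekeeper constraint or a Kyverno policy, and the TLS setup and ValidatingWebhookConfiguration that wire the webhook into the API server are omitted.

```python
# Bare-bones validating admission webhook: denies Deployments that lack an
# "owner" label. This is a sketch of the webhook approach described above,
# not a hardened implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)
REQUIRED_LABEL = "owner"  # assumed label key; use your organization's convention

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    req = review["request"]
    labels = req["object"]["metadata"].get("labels") or {}
    allowed = REQUIRED_LABEL in labels

    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "message": f"every Deployment must carry an '{REQUIRED_LABEL}' label "
                       "identifying the responsible team",
        }
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    })

if __name__ == "__main__":
    # The API server only calls webhooks over HTTPS; supply a cert/key pair.
    app.run(host="0.0.0.0", port=8443, ssl_context=("tls.crt", "tls.key"))
```

Registering the webhook for Deployment CREATE and UPDATE operations, and applying an equivalent check at the namespace level, completes the escalation path described above.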
Review spending data weekly. Planned weekly data reviews let teams flag overspending and eliminate future waste while avoiding sticker shock when the monthly bill comes due. Automated alerts should also sound the alarm when resource usage becomes excessive or abnormal, so teams can step in before costs overrun.

Focus on the culture shift. For enterprises trying to lower Kubernetes costs as they scale, achieving a culture that values savings and respects the cost management approaches in place is the true hurdle. The technical methods behind these cost controls aren't difficult to implement and follow, provided all teams are motivated to do so. Make sure costs are clear, fair, transparent, and actionable, then give teams the tools they need to succeed, and the culture will come.

In most cases, enterprises that build a culture in which teams actively regulate their own Kubernetes spending can expect cost savings of 30% or more, along with further boosts to productivity and security. Distributing responsibility for the costs of Kubernetes' distributed system is a worthwhile pursuit, and one that is easier to instill earlier rather than later.

Rob Faraj is a co-founder of Kubecost.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.