Recent reports supply old and new information about finops: financial priorities are changing, and more employee training is needed.

Hey, remember finops? Cost optimization? According to most surveys, it was a big deal in 2023, but you never would have known it, considering the amount of AI noise out there.

The State of FinOps is an annual survey conducted by the FinOps Foundation to collect information about critical priorities, industry trends, and the direction of finops practices. The survey informs a range of Foundation activities and tells the broader market how finops is practiced in various organizations. Respondents are encouraged to be thorough and honest so the data will reveal valuable insights to the community. However, I bet that none of them ever admit to any waste on their end.

I think these reports are good. Not that we're getting unbiased information; it never is. Still, it's good to see how the FinOps Foundation functions as a standards body for finops and communicates what it learns.

Top priorities are shifting

Reducing waste and managing commitment-based discounts became the top priorities for finops teams in 2023, driven by economic pressure. Companies are more aware of ways to reduce cloud computing costs, such as purchasing resources ahead of need. Finops teams are also investing in forecasting capabilities and expect the cost of running artificial intelligence and machine learning to significantly impact finops practices in 2024.

You can't have a conversation anymore without AI coming up. The concern is that we're so focused on AI-enabling everything that we're missing the more significant issue: optimizing our resources so that we can pay for our AI usage. Expense will be the most significant limitation to using AI. Judging by Nvidia's stock price, demand will be strong, and thus prices are likely to stay high.

I expect these 2023 financial priorities will shift again in 2024 and 2025.
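Why do commitment-based discounts demand management attention? Because a commitment bills you whether you use it or not, so it only pays off above a break-even utilization level. Here is a minimal sketch of that math; all prices and the 8,760-hour year are illustrative placeholders, not any provider's actual rates:

```python
# Illustrative break-even math for a one-year compute commitment.
# All hourly prices below are hypothetical, not real provider pricing.

HOURS_PER_YEAR = 8760

def breakeven_utilization(on_demand_hourly: float, committed_hourly: float) -> float:
    """Fraction of hours a resource must actually run for the
    commitment to beat paying on-demand for the same usage."""
    return committed_hourly / on_demand_hourly

def annual_cost_on_demand(on_demand_hourly: float, utilization: float) -> float:
    # On-demand: you pay only for the hours you actually run.
    return on_demand_hourly * utilization * HOURS_PER_YEAR

def annual_cost_committed(committed_hourly: float) -> float:
    # Commitment: you pay for every hour, used or idle.
    return committed_hourly * HOURS_PER_YEAR

# Example: $0.40/hr on-demand vs. $0.25/hr committed.
# Break-even is 0.25 / 0.40 = 62.5% utilization; below that,
# the "discount" actually costs you money.
be = breakeven_utilization(0.40, 0.25)  # 0.625
```

The point of the sketch: a fleet running 50% of the time loses money on this commitment, while one running 80% of the time saves, which is exactly the kind of forecasting-dependent decision the report says teams are investing in.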
We'll go through significant and speedy transformations in cloud computing consumption, and that will affect all aspects of cloud finops, given that we will need to establish some kind of cost governance before we make huge mistakes.

Optimization is key

Compute spending is the most heavily optimized area, but there is room for improvement in storage, databases, and newer technologies such as AI. Although finops systems can account for usage, using cloud resources cost-effectively remains the most significant challenge for IT organizations. The challenge in 2024 and 2025 is that optimization may reach a saturation point, where the money saved by each round of optimization diminishes as the amount of wasted resources shrinks.

As reported in the survey, the finops community created a library of optimization opportunities for AWS, Google Cloud, and Microsoft Azure in 2023, with optimization processes specific to each of those public cloud providers.

The more significant issue, however, is often overlooked: optimizing across a span of platforms, such as cloud, traditional, edge, and mobile, requires heterogeneous optimization processes. For instance, companies can move processing off a public cloud and back on premises if that reduces the cost of processing and storage.

One of my larger concerns is that although we have tools that address this, we need more training and approaches to help finops staffers. Right now, I'm seeing a laser focus on public cloud cost savings and not a good view across all systems, which is a more significant problem to solve.

Finops needs to catch up

According to the study, considerable improvements are still needed in finops forecasting capabilities. I'm also hearing that finops teams want better features to get a handle on future costs so they can adjust spending, including using more reserved resources that are purchased ahead of need.
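At its simplest, the forecasting the report calls for means projecting a spend trend forward so commitments can be sized ahead of need. A minimal sketch with made-up monthly figures (real finops forecasting would also account for seasonality, commitments, and workload changes):

```python
# Minimal linear-trend forecast over monthly cloud spend.
# The spend figures are invented for illustration only.

def linear_forecast(history: list[float], periods_ahead: int = 1) -> float:
    """Fit y = a + b*x by least squares, then project forward."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

spend = [100.0, 110.0, 120.0, 130.0]  # steadily rising monthly spend ($K)
next_month = linear_forecast(spend)   # projects the $10K/month trend: 140.0
```

Even a toy projection like this is enough to decide whether next quarter's baseline justifies buying more reserved capacity, which is the adjustment the survey says teams want to make.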
Engineers get the most value from self-service finops reports that enable real-time decision-making. This is also called an "automatic hand slap": the finops tools spot areas where money can be saved and address them before the code that accesses a specific cloud resource is even written. This, along with looking at sustainability metrics as software is developed, will let us solve problems before they exist. Right now, some developers make mistakes, such as overprovisioning resources from an infrastructure-as-code-enabled application, and the problem is found only later. Nipping it in the bud is much better.

The more a team is trained on finops, the more value it gains from self-service reporting, with engineering benefiting the most. Accordingly, the report also found that investment in finops training has increased for all personas during the past three years, particularly in engineering.

Cost of generative AI

Only 31% of survey respondents reported that the costs of generative AI are impacting their finops practice. This means there's a significant opportunity for finops practices and tools to ensure value from AI spending. The report also found that among large cloud spenders ($100 million or more annually), AI is impacting finops practices at a greater rate: 45% versus 31% overall. Organizations with a higher overall cloud spend tend to see AI/ML as a rapidly increasing source of variable costs that needs to be managed. Count on this changing a great deal the next time we see this report.
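The "automatic hand slap" described above amounts to a pre-deployment policy check over infrastructure-as-code output. The sketch below flags overprovisioned resources before they are deployed; the plan structure and the vCPU budget are both hypothetical stand-ins for whatever a real tool would parse from, say, a Terraform plan:

```python
# Sketch of an "automatic hand slap": flag oversized resource requests
# from an infrastructure-as-code plan before anything is deployed.
# The plan shape and the vCPU budget are hypothetical.

MAX_VCPUS = 8  # hypothetical policy: anything larger needs review

# Toy stand-in for resources parsed from an IaC plan.
PLAN = [
    {"name": "web_server", "vcpus": 2},
    {"name": "batch_node", "vcpus": 32},  # overprovisioned
]

def flag_overprovisioned(plan: list[dict], max_vcpus: int = MAX_VCPUS) -> list[str]:
    """Return the names of resources exceeding the vCPU budget."""
    return [r["name"] for r in plan if r["vcpus"] > max_vcpus]

flagged = flag_overprovisioned(PLAN)  # ["batch_node"]
```

Wired into a CI pipeline, a check like this fails the build on `flagged` being non-empty, which is the "before the code is even written to access a specific cloud resource" behavior the report describes, applied at the earliest practical gate.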
By David Linthicum