Enterprises want generative AI, but CIOs need a way to pay for it. Diverting spending from traditional cloud computing may not be the best strategy.

The rush to generative AI is driving unexpected spending. Generative AI development and deployment plans are no longer considered optional; they are a priority for boards and executive leadership. Thus, the question of how to pay for it comes up quickly, cloud or no cloud. The numbers are scary to anyone who has created these budgets in the past: IT executives now expect 2023 generative AI budgets to be 3.4 times greater than anticipated, yet only 15% of tech execs expect to fund this uptick with net-new spending.

Robbing Peter to pay Paul

Where is the money coming from? Few companies are sitting on unallocated piles of cash, so 33% of tech execs plan to plunder other parts of the IT portfolio to pay for generative AI. That includes the 37% of tech execs who expect to pull generative AI spending from their broader AI investment portfolio.

The cost of generative AI is more than the cloud fees to run these systems; it's also the staffing costs. The impact of generative AI on labor and cloud spending is likely to be far-reaching, with high costs to find, train, and keep the right people to deploy your generative AI systems. These people will cost far more than the employees who run the more traditional systems, you know, the ones you're removing funding from. CEOs need a clear understanding of how high-impact projects will actually tap resources so they can budget for the associated costs.

I suspect this will spiral into a few disaster stories. Some enterprises will cut too much on one end of the budget and end up alienating the people who drive the business today. I've seen this happen with other technology shifts in the past, where the damage done to different parts of the company outweighed any benefits of the new technology.
This is why I've never taken any of the CIO positions offered to me.

People and generative AI in the cloud

Staffing costs could torpedo your AI strategy; at the very least, they should be your highest concern. There are at least 20 open positions per qualified candidate. That ratio is likely to improve as the market matures and people take advantage of training or self-learning, but the fact remains that companies need internal expertise to gain a competitive advantage with generative AI in the cloud, and they may not be able to find it in time.

For those of you asking what these scarce skills are: data science, engineering, and design thinking. Understanding the specific generative AI systems on a specific cloud is also important, but what matters most are skills that work across these systems. Picking a candidate whose expertise is limited to a single cloud provider will only get you so far.

The crisis of finding money and people

As we build up AI in the cloud during the next few years, projects are not going to fail because the technology doesn't live up to expectations; they are going to fail because of underfunding and the inability to find talent, pretty much the same reasons more traditional cloud projects fail. However, this could be five times worse, considering what generative AI is and where we are now.

I have a few suggestions, of course. First, ask yourself whether generative AI systems are needed in the first place. We are already seeing the misapplication of generative AI: the technology adds little value to simple business systems but is of particular use for systems that need access to large language models (LLMs) and can return at least 100 times the investment to the business in cost savings and strategic value. Also, although most generative AI deployments will exist in the cloud, we must consider all platforms, including those in data centers, to find the most cost-effective way to operate these systems.
Again, we need good, objective architecture to make decisions that may look strange given the current hype but are the best choices for the business. We've gone through this process with many trendy technologies: client/server, the internet, service-oriented architecture, cloud, and now generative AI. Given what you can do with generative AI, this technology will be a huge differentiator for the enterprises that can weaponize it. As long as that benefit is still out there, these kinds of problems will keep popping up.