Think your cloud apps are fully optimized for performance and cost? Guess again. These three tricks can cut thousands of dollars off cloud service bills and reduce latency.

The elastic capabilities of public cloud computing are both good and bad. We can provision all the resources (and hire all the cloud architects) we want to solve performance issues. But tossing money at the situation willy-nilly doesn't fix the root causes of the problems, which are typically traceable to bad design or bad coding.

However, some basic tricks of the cloud application development and design trade should allow you to at least reduce the number of performance issues (and, by extension, trim monthly cloud bills as well). Here are three of the top ways to improve performance and cost efficiency without breaking the bank:

Leverage serverless computing.

Although many report larger cloud bills when running serverless, that has not been my experience. Indeed, if you're unwilling, or not budgeted, to go into the application code and database schema to fine-tune performance, this approach leaves resource provisioning to the cloud providers (using automation). They know how to optimize their resources better than you do.

The idea is that the serverless system won't over- or underprovision, because it allocates resources based on the application's real-time requirements. Optimization of cloud resources becomes the responsibility of the serverless platform itself; many will find that unoptimized applications and databases perform better when refactored for serverless. (A minimal sketch of what that can look like appears at the end of this article.)

If you're thinking that this is just technological duct tape, you're right. This approach assumes that you can't or won't redesign, recode, and redeploy the applications, but you can port them to a serverless platform. The cost savings should pay for the move to serverless in the first month, but gather metrics before and after to make sure.

Co-locate your data with your application.

It's a fundamental architectural principle to store data as close as possible to the application that uses it. I'm still seeing databases stored in different regions than their applications, databases stored on different clouds, and even on-premises databases paired with applications running in public clouds. The result is more latency, poorer performance, more outages, and higher compute and network traffic bills.

Many of these design flaws were driven by concerns about security and compliance. Even if your data needs to stay in-country, or even on-premises, make sure that the application instances that use that data live close to it. (A quick way to measure the latency penalty is sketched at the end of this article.) You'll be happier with the cloud bill, and your users will be much happier with the performance and resiliency.

Audit the applications.

This is the most laborious and thus the most expensive of the three tricks, although the level of effort required has fallen in the past several years. Code scanners, application profilers, and application testing platforms can reveal a great deal about how applications can be refactored to increase performance by as much as threefold. (A basic profiling example appears at the end of this article.) Of course, this means recoding the applications in places, plus testing and redeployment, which makes this approach the most expensive. But it's also the most effective.

See how that works? Performance and optimization issues are like back pain: we're all going to encounter it at some point in our lives. When you do, any of these tricks will be a good place to find relief.
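To make the serverless point concrete, here is a minimal sketch, assuming an AWS Lambda function invoked through an API Gateway proxy integration. The function name and the order_id field are illustrative, not taken from any real application; the point is that the handler contains only business logic, while the platform decides how much compute to provision for each invocation.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler (illustrative sketch).
    The platform provisions compute per invocation, so there is
    no capacity plan to get wrong."""
    # With an API Gateway proxy integration, the HTTP body arrives as a string.
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id")  # hypothetical field for this sketch

    # Business logic would go here; billing covers only actual execution time.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": order_id}),
    }
```

Porting an existing request handler into this shape is usually far cheaper than re-architecting the whole application, which is why the savings tend to show up in the first billing cycle.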
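To see what cross-region data placement actually costs, one rough approach is to time TCP connections from the application host to each database endpoint. This is only a sketch: the hostnames below are placeholders for your own same-region and cross-region endpoints, and port 5432 assumes PostgreSQL.

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int, attempts: int = 5) -> float:
    """Average TCP connect time to an endpoint, a rough proxy for the
    network latency the application pays on every database call."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Placeholder endpoints: one database in the application's region, one elsewhere.
    for label, host in [
        ("same-region db", "db.us-east-1.example.internal"),
        ("cross-region db", "db.eu-west-1.example.internal"),
    ]:
        print(label, f"{tcp_round_trip_ms(host, 5432):.1f} ms")
```

Multiply the difference by the number of queries per user request and the cross-region penalty becomes obvious, before you even look at the data-transfer charges.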
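Finally, for the audit step, a profiler run is often the cheapest way to find out where an application really spends its time. The sketch below uses Python's built-in cProfile; slow_report is a stand-in for whichever request handler or batch job you suspect is the hot spot.

```python
import cProfile
import io
import pstats

def slow_report():
    # Stand-in for a suspected hot spot; profile your real handler instead.
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

The output ranks functions by cumulative time, which tells you where refactoring effort will actually move the needle before you commit to recoding, retesting, and redeploying.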