Five guiding principles for building, managing, monitoring, and maintaining state-of-the-art web applications

Modern web applications have a lot riding on them. Our customers depend on them, and our business depends on them. Without modern web applications, many businesses would not survive. Modern web applications must scale to meet our biggest needs without suffering outages or other availability issues.

In order to meet and exceed this high bar, we must build, manage, and monitor our applications using modern principles, processes, and procedures. There are five guiding principles that a modern application requires in order to meet these expectations.

Principle #1. Use service-based architectures

Modern applications are large and complex, too complex to be handled as a single entity by a single team. Instead, multiple teams are required to develop, test, support, and operate these applications. This is not easy when the application is a single, large monolith.

By splitting the application into multiple independent services, you can assign different pieces of the application to different teams, allowing parallel development and operation. Additionally, problems can be correlated and isolated to individual services, making it easier to decide who should be called on to resolve a particular issue. Because the services are isolated from one another, they can be built, tested, and operated independently.

Scaling is not just about the amount of traffic an application receives, but also about the size and complexity of the application itself. Using service-based architectures makes scaling the application easier and allows larger and more complex applications to be dealt with reasonably.

Principle #2. Organize teams around services

Architecting your application as a service-based application is only part of the answer. Once you have your application set up as services, you must structure your teams around those services. It is critical that a single team owns each service, front to back, top to bottom. This includes development, testing, operation, and support. All aspects of the development and operation of each service should be handled by one and only one team.

There is a model for application management that stresses these ownership values. STOSA, which stands for Single Team Owned Service Architecture, provides guiding principles on team-level ownership, establishing clear boundaries between services and promoting clear understanding and expectations between teams. You can read more about STOSA at stosa.org.

Principle #3. Use devops processes and procedures

Now that we have team organization and ownership taken care of, the next principle focuses on the policies and practices those teams use to build and operate their services. For modern applications, teams must make use of modern devops processes and procedures to build and maintain their services.

Many organizations claim that they "use devops," but ultimately they do not operate in a true devops manner. True devops requires rethinking all processes and systems across the entire organization so that devops principles are incorporated throughout. You cannot accomplish this transformation by simply "creating a devops team," though many organizations try.

Principle #4. Use dynamic infrastructures

Customer traffic loads on our applications vary considerably, and the maximum amount of traffic your application will need to support is never accurately known in advance.
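The word "dynamic" here means that capacity decisions are made continuously, at run time, rather than once, up front. Before looking at why the traditional approach falls short, here is a rough sketch of where this principle is headed. The Go code below is illustrative only; the types, numbers, and polling loop are hypothetical stand-ins for whatever scaling mechanism your cloud provider or orchestrator actually offers.

```go
// Illustrative only: a simplified autoscaling control loop. The types and
// values here are hypothetical, not any particular cloud API.
package main

import (
	"fmt"
	"time"
)

// metrics is the kind of signal a real system would pull from its monitoring pipeline.
type metrics struct {
	requestsPerSecond float64
}

// pool represents the set of instances currently serving traffic.
type pool struct {
	instances   int
	perInstance float64 // requests per second one instance can absorb
}

// desiredInstances sizes the pool for the current load plus roughly 25% headroom.
func desiredInstances(m metrics, p pool) int {
	target := int(m.requestsPerSecond/p.perInstance) + 1
	withHeadroom := target + target/4
	if withHeadroom < 2 {
		return 2 // keep a minimal footprint for redundancy
	}
	return withHeadroom
}

func main() {
	p := pool{instances: 2, perInstance: 500}

	// A real loop would read live metrics and call the provider's scaling API;
	// here we simply simulate a traffic spike and its decay.
	for _, rps := range []float64{300, 900, 4000, 1200, 250} {
		want := desiredInstances(metrics{requestsPerSecond: rps}, p)
		switch {
		case want > p.instances:
			fmt.Printf("load %.0f rps: scaling up %d -> %d instances\n", rps, p.instances, want)
		case want < p.instances:
			fmt.Printf("load %.0f rps: releasing %d -> %d instances\n", rps, p.instances, want)
		default:
			fmt.Printf("load %.0f rps: holding at %d instances\n", rps, p.instances)
		}
		p.instances = want
		time.Sleep(10 * time.Millisecond) // stand-in for the loop's polling interval
	}
}
```

In practice you would not write this loop yourself; managed autoscaling groups and container orchestrators run the equivalent for you. The point is that the loop runs constantly, which is what lets the infrastructure follow the traffic instead of guessing at it.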
In the past, as application traffic grew, we simply threw additional hardware at the problem. This is no longer reasonable. Adding hardware may be fine for handling expected peaks, but what do you do when an unexpected spike in usage arrives? What if a message about your product goes viral on social media? How can you handle the unexpected surge from such an event?

It is no longer possible to throw large quantities of hardware at an application, because you never know how much hardware is enough to handle your possible maximum need. Additionally, when your application is not operating at high volume, what do you do with the extra hardware? It sits idle, a waste of money and resources. Especially if your traffic is highly variable or spiky, simply adding hardware to handle your peaks is not an effective use of resources.

Instead, you must add resources to your applications dynamically, as they are needed and when they are needed. These resources can be applied when your traffic needs are high, and they can be released when they are no longer needed. This is what dynamic infrastructures are all about.

The only way to implement a dynamic infrastructure for an application with a highly variable load is to use the dynamic capabilities of the cloud. In the cloud, resources can be added on demand and released when they are no longer needed, becoming available for other uses.

Principle #5. Maintain high visibility and deep monitoring

It is impossible to keep a modern application working if you do not have visibility into what your application is doing and how it is performing. Without proper visibility into the operation of your application, you have no idea whether the application is meeting the needs of your customers, or whether a ticking time bomb is ready to go off and make your application suddenly unavailable. With proper visibility, you can understand what your application is doing, how it is doing it, and whether there are any lurking issues that might impact the application's ability to perform those operations in the foreseeable future.

Further, while tools such as synthetic testing and infrastructure performance monitoring are important, this level of monitoring is not sufficient. You cannot completely understand how your application is performing without an internal view of its operations. This is only possible using application performance monitoring, log monitoring, and trace analysis.
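As a small, concrete illustration of the difference between external and internal visibility, the sketch below (Go, standard library only; the /orders route and the log format are hypothetical) wraps an HTTP handler so that every request's method, path, status, and latency are recorded. Shipped to an APM, logging, or tracing backend rather than to stdout, this per-request signal is the raw material that application performance monitoring, log monitoring, and trace analysis are built from.

```go
// Illustrative only: per-request instrumentation that gives an internal view
// of an application. A real deployment would send these measurements to a
// monitoring backend instead of logging them locally.
package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// instrument wraps a handler and records latency and outcome for every request.
func instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, req)
		log.Printf("method=%s path=%s status=%d duration=%s",
			req.Method, req.URL.Path, rec.status, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/orders", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("ok")) // hypothetical endpoint standing in for real business logic
	})
	log.Fatal(http.ListenAndServe(":8080", instrument(mux)))
}
```

Synthetic tests and host metrics tell you the box is up; this kind of instrumentation tells you what the application actually did with each request.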
Our applications are getting more complex and becoming more intertwined with the fundamental operation of our business. As such, the expectations of our customers are growing, and the demands for reliability, scalability, and functionality from management are keeping pace. Only by modernizing our applications, using the principles described above, can we make our applications meet the needs of our customers and our business.

Lee Atchison is the senior director of cloud architecture at New Relic. For the last seven years he has helped design and build a solid service-based product architecture that scaled from startup to high-traffic public enterprise. Lee has 32 years of industry experience, including seven years as a senior manager at Amazon.com. At Amazon, he led the creation of the company's first software download store, created AWS Elastic Beanstalk, and managed the migration of Amazon's retail platform to a new service-based architecture. He is the author of the book "Architecting for Scale," published in 2016 by O'Reilly Media.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.