Take advantage of these best practices to avoid memory pressure and improve performance when working in .NET or .NET Core applications.

Garbage collection occurs when the generation 0 heap fills up, when the system is low on available physical memory, or when the GC.Collect() method is called explicitly in your application code. Objects that are no longer in use or are no longer reachable from a root are candidates for garbage collection. While the .NET garbage collector, or GC, is adept at reclaiming memory occupied by managed objects, there may be times when it comes under pressure, i.e., when it must devote more time to collecting such objects. When the GC is under pressure to clean up objects, your application spends far more time garbage collecting than executing instructions. Naturally, this GC pressure is detrimental to the application’s performance.

The good news is that you can avoid GC pressure in your .NET and .NET Core applications by following certain best practices. This article discusses those best practices, using code examples where applicable. Note that we will be taking advantage of BenchmarkDotNet to track the performance of the methods. If you’re not familiar with BenchmarkDotNet, I suggest reading this article first.

To work with the code examples provided in this article, you should have Visual Studio 2019 installed in your system. If you don’t already have a copy, you can download Visual Studio 2019 here.

Create a console application project in Visual Studio

First off, let’s create a .NET Core console application project in Visual Studio. Assuming Visual Studio 2019 is installed in your system, follow the steps outlined below to create a new .NET Core console application project.

1. Launch the Visual Studio IDE.
2. Click on “Create new project.”
3. In the “Create new project” window, select “Console App (.NET Core)” from the list of templates displayed.
4. Click Next.
5. In the “Configure your new project” window, specify the name and location for the new project.
6. Click Create.

We’ll use this project to illustrate best practices for avoiding GC pressure in the subsequent sections of this article.

Avoid large object allocations

There are two different types of heap in .NET and .NET Core, namely the small object heap (SOH) and the large object heap (LOH). Unlike the small object heap, the large object heap is not compacted during garbage collection. The reason is that the cost of compacting large objects, meaning objects of 85 KB or more in size, is very high, and moving them around in memory would be very time consuming. Therefore the GC never moves large objects; it simply removes them when they are no longer needed. As a consequence, memory holes form in the large object heap, causing fragmentation. Although you can instruct the runtime to compact the LOH, it is best to avoid large object heap allocations as much as possible. Not only is garbage collection from this heap costly, but the LOH is more prone to fragmentation, which can result in unbounded memory growth over time.

Avoid memory leaks

Not surprisingly, memory leaks are also detrimental to application performance: they cause performance issues as well as GC pressure. When a memory leak occurs, objects remain referenced even though they are no longer being used. Because these objects are live and still referenced, the GC promotes them to higher generations instead of reclaiming their memory. Such promotions are not only expensive but also keep the GC unnecessarily busy.
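A common source of such lingering references is an event handler that is never unsubscribed. The following minimal sketch illustrates the pattern; the Publisher and Subscriber types are hypothetical and are not part of the article’s sample project.

using System;

class Publisher
{
    // The publisher keeps a reference to every subscribed handler.
    public event EventHandler SomethingHappened;

    public void Raise() => SomethingHappened?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    private readonly byte[] _state = new byte[10000]; // some per-instance state

    public Subscriber(Publisher publisher)
    {
        // Subscribing stores a delegate, and therefore a reference to this
        // Subscriber, inside the long-lived Publisher.
        publisher.SomethingHappened += OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }
}

class Program
{
    static void Main()
    {
        var publisher = new Publisher(); // lives for the whole program

        for (int i = 0; i < 1000; i++)
        {
            // Each Subscriber goes out of scope here, but the Publisher still
            // references it through the event, so the GC cannot reclaim it.
            _ = new Subscriber(publisher);
        }

        // The fix is to unsubscribe (publisher.SomethingHappened -= handler)
        // when a subscriber is no longer needed.
    }
}

Unsubscribing when a subscriber is disposed, or keeping the publisher’s lifetime as short as its subscribers’, breaks this kind of reference chain.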
When memory leaks occur, more and more memory is used until available memory threatens to run out. This causes the GC to perform more frequent collections to free memory.

Avoid using the GC.Collect method

When you call the GC.Collect() method, the runtime performs a stack walk to determine which objects are reachable and which are not. This triggers a blocking garbage collection across all generations. Thus a call to the GC.Collect() method is a time-consuming and resource-intensive operation that should be avoided.

Pre-size data structures

When you populate a collection with data, the data structure is resized multiple times. Each resize operation allocates a new internal array, and the contents of the previous array must be copied into it. You can avoid this overhead by passing the capacity parameter to the collection’s constructor when creating an instance of the collection. Refer to the following code snippet, which illustrates two collections, one created with a fixed initial capacity and the other resized dynamically.

const int NumberOfItems = 10000;

[Benchmark]
public void ArrayListDynamicSize()
{
    ArrayList arrayList = new ArrayList();
    for (int i = 0; i < NumberOfItems; i++)
    {
        arrayList.Add(i);
    }
}

[Benchmark]
public void ArrayListFixedSize()
{
    ArrayList arrayList = new ArrayList(NumberOfItems);
    for (int i = 0; i < NumberOfItems; i++)
    {
        arrayList.Add(i);
    }
}

Figure 1 shows the benchmark results for the two methods.

Use ArrayPools to minimize allocations

The ArrayPool and MemoryPool classes help you minimize memory allocations and garbage collection overhead and thereby increase efficiency and performance. The ArrayPool class in the System.Buffers namespace is a high-performance pool of reusable managed arrays. It can be used in situations where you want to minimize allocations by avoiding the frequent creation and destruction of regular arrays. Consider the following piece of code, which shows two methods, one that uses a regular array and the other that uses the shared array pool.

const int NumberOfItems = 10000;

[Benchmark]
public void RegularArrayFixedSize()
{
    int[] array = new int[NumberOfItems];
}

[Benchmark]
public void SharedArrayPool()
{
    var pool = ArrayPool<int>.Shared;
    int[] array = pool.Rent(NumberOfItems);
    pool.Return(array);
}

Figure 2 illustrates the performance difference between these two methods.

Use structs instead of classes

Structs are value types, so there is no garbage collection overhead as long as they are not part of a class. When structs are fields of a class, they are stored on the heap as part of that class instance. An additional benefit is that structs need less memory than a class because they have no object header or method table. You should consider using a struct when the size of the struct will be small (say around 16 bytes), the struct will be short-lived, or the struct will be immutable. Consider the code snippet below, which illustrates two types, a class named MyClass and a struct named MyStruct.

class MyClass
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Z { get; set; }
}

struct MyStruct
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Z { get; set; }
}

The following code snippet shows how you can benchmark two scenarios, using objects of the MyClass class in one case and objects of the MyStruct struct in the other.
const int NumberOfItems = 100000;

[Benchmark]
public void UsingClass()
{
    MyClass[] myClasses = new MyClass[NumberOfItems];
    for (int i = 0; i < NumberOfItems; i++)
    {
        myClasses[i] = new MyClass();
        myClasses[i].X = 1;
        myClasses[i].Y = 2;
        myClasses[i].Z = 3;
    }
}

[Benchmark]
public void UsingStruct()
{
    MyStruct[] myStructs = new MyStruct[NumberOfItems];
    for (int i = 0; i < NumberOfItems; i++)
    {
        myStructs[i] = new MyStruct();
        myStructs[i].X = 1;
        myStructs[i].Y = 2;
        myStructs[i].Z = 3;
    }
}

Figure 3 shows the performance benchmarks of these two methods. As you can see, allocating structs is much faster than allocating classes.

Avoid using finalizers

Whenever you have a destructor in your class, the runtime treats it as a Finalize() method. Because finalization is costly, you should avoid using destructors, and hence finalizers, in your classes. When a class has a finalizer, the runtime adds a reference to each new instance of that class to the finalization queue. When such an instance later becomes unreachable, the GC moves its entry to the “freachable” queue so that its finalizer can run on the finalizer thread before the memory is reclaimed; unreachable objects without finalizers are simply collected. Moreover, an instance of a class that contains a finalizer is automatically promoted to a higher generation, since it cannot be collected in generation 0.

Consider the two classes given below.

class WithFinalizer
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Z { get; set; }

    ~WithFinalizer() { }
}

class WithoutFinalizer
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Z { get; set; }
}

The following code snippet benchmarks the performance of two methods, one that uses instances of the class with a finalizer and one that uses instances of the class without a finalizer.

[Benchmark]
public void AllocateMemoryForClassesWithFinalizer()
{
    for (int i = 0; i < NumberOfItems; i++)
    {
        WithFinalizer obj = new WithFinalizer();
        obj.X = 1;
        obj.Y = 2;
        obj.Z = 3;
    }
}

[Benchmark]
public void AllocateMemoryForClassesWithoutFinalizer()
{
    for (int i = 0; i < NumberOfItems; i++)
    {
        WithoutFinalizer obj = new WithoutFinalizer();
        obj.X = 1;
        obj.Y = 2;
        obj.Z = 3;
    }
}

Figure 4 shows the output of the benchmarks when the value of NumberOfItems equals 1000. Note that the AllocateMemoryForClassesWithoutFinalizer method completes the task in a fraction of the time that the AllocateMemoryForClassesWithFinalizer method takes.

Use StringBuilder to reduce allocations

Strings are immutable, so whenever you concatenate two string objects, a new string object is created that holds the content of both. You can avoid these intermediate allocations by taking advantage of StringBuilder. StringBuilder improves performance in cases where you make repeated modifications to a string or concatenate many strings together. Keep in mind, however, that regular concatenation is faster than StringBuilder for a small number of concatenations. When using StringBuilder, note that you can improve performance by reusing a StringBuilder instance. Another good practice is to set the initial capacity of the StringBuilder instance when you create it.
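Here is a minimal sketch of those two tips, presetting the capacity and reusing a single instance by clearing it between uses. It is not part of the article’s benchmark code, and the BuildLine helper is made up for illustration.

using System.Text;

static class StringBuilderReuse
{
    // One reusable builder with a preset capacity. Note that a shared static
    // instance like this is not thread-safe.
    private static readonly StringBuilder Builder = new StringBuilder(capacity: 1024);

    public static string BuildLine(string[] parts)
    {
        Builder.Clear(); // reuse the existing internal buffer
        foreach (string part in parts)
        {
            Builder.Append(part).Append(' ');
        }
        return Builder.ToString();
    }
}

Presetting the capacity means the builder does not have to grow (and reallocate its internal buffer) as the text accumulates, and clearing rather than recreating the builder avoids allocating a new one for every call.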
Consider the following two methods used for benchmarking the performance of string concatenation.

[Benchmark]
public void ConcatStringsUsingStringBuilder()
{
    string str = "Hello World!";
    var sb = new StringBuilder();
    for (int i = 0; i < NumberOfItems; i++)
    {
        sb.Append(str);
    }
}

[Benchmark]
public void ConcatStringsUsingStringConcat()
{
    string str = "Hello World!";
    string result = null;
    for (int i = 0; i < NumberOfItems; i++)
    {
        result += str;
    }
}

Figure 5 displays the benchmarking report for 1,000 concatenations. As you can see, the benchmarks indicate that the ConcatStringsUsingStringBuilder method is much faster than the ConcatStringsUsingStringConcat method.

General rules

There are many ways to avoid GC pressure in your .NET and .NET Core applications. Release object references when they are no longer needed. Avoid keeping multiple references to the same object, since every live reference prevents it from being collected. And reduce generation 2 garbage collections by avoiding the use of large objects (those 85 KB or more in size).

You can reduce the frequency and duration of garbage collections by adjusting the heap sizes and by reducing the rate of object allocations and promotions to higher generations. Note that there is a trade-off between heap size and GC frequency and duration: a larger heap reduces GC frequency but increases GC duration, while a smaller heap increases GC frequency but decreases GC duration. To minimize both GC duration and frequency, favor short-lived objects as much as possible in your application, so that they can be reclaimed cheaply in generation 0.
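Finally, recall from the discussion of the large object heap that the runtime can be asked to compact the LOH. The sketch below shows the relevant setting; it is for illustration only, since it deliberately calls GC.Collect(), which, as noted above, should generally be avoided in production code.

using System;
using System.Runtime;

class LohCompactionExample
{
    static void Main()
    {
        // Request that the large object heap be compacted during the next
        // blocking generation 2 collection (LOH compaction is off by default).
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // Force a full, blocking collection so the compaction happens now.
        // This is purely for demonstration.
        GC.Collect();

        // After the compacting collection, the setting reverts to Default.
        Console.WriteLine(GCSettings.LargeObjectHeapCompactionMode);
    }
}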