How to reduce memory allocations when creating HttpContent

Almost every .NET application needs to make outgoing HTTP calls to external resources. For some of these calls, especially POST and PUT requests, a payload is attached to the HttpRequestMessage, most often in the form of a serialized JSON string.

The long-standing approach to creating such content is to use the ever-popular Newtonsoft.Json library. A typical example looks something like this:

var jsonData = JsonConvert.SerializeObject(myContent);
using var content = new StringContent(jsonData, Encoding.UTF8, "application/json");
using var request = new HttpRequestMessage();
request.RequestUri = new Uri("https://httpbin.org/post");
request.Method = HttpMethod.Post;
request.Content = content;
using var response = await Client.SendAsync(request);

In more recent versions of .NET, System.Text.Json has seen increasing adoption for serialization thanks to the improved performance of the library. With larger payloads, we can see up to 3x performance gains by using System.Text.Json as the serializer.

var jsonData = System.Text.Json.JsonSerializer.Serialize(myContent);
using var content = new StringContent(jsonData, Encoding.UTF8, "application/json");
using var request = new HttpRequestMessage();
request.RequestUri = new Uri("https://httpbin.org/post");
request.Method = HttpMethod.Post;
request.Content = content;
using var response = await Client.SendAsync(request);

ArrayPoolBufferWriter and ReadOnlyMemoryContent

As you can see from the examples above, the code is mostly identical apart from the serializer used. In the benchmarks shown below, you will see that when dealing with larger payloads, both approaches end up allocating to the Gen2 heap, a common problem for high-performance applications.

Since the introduction of the System.Buffers namespace, more toolkits have been built on top of pooled buffers and pipelines to enhance existing functionality. A common example is the CommunityToolkit.HighPerformance package.

using var buffer = new CommunityToolkit.HighPerformance.Buffers.ArrayPoolBufferWriter<byte>();
using var writer = new Utf8JsonWriter(buffer);
var options = new JsonSerializerOptions(); // Example only: cache and reuse options rather than declaring them inline, which prevents caching
System.Text.Json.JsonSerializer.Serialize(writer, myContent, options);
writer.Flush();
using var content = new ReadOnlyMemoryContent(buffer.WrittenMemory);
content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

using var request = new HttpRequestMessage();
request.RequestUri = new Uri("https://httpbin.org/post");
request.Method = HttpMethod.Post;
request.Content = content;
using var response = await Client.SendAsync(request);

Using the ReadOnlyMemoryContent provided by System.Net.Http together with the various IBufferWriter implementations provided by the HighPerformance toolkit, we can leverage the ReadOnlyMemory object to create an HttpContent that avoids heap allocations, since the underlying buffer is rented from the shared array pool and returned when disposed.
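The pattern can be wrapped in a small helper. Below is a minimal sketch (the class, method name, and cached options field are my own, not from the toolkit); note that the pooled buffer must stay alive until the request has been fully sent, because ReadOnlyMemoryContent does not own the rented memory:

```csharp
using System.Net.Http.Headers;
using System.Text.Json;
using CommunityToolkit.HighPerformance.Buffers;

public static class PooledJson
{
    // Cache options once; constructing JsonSerializerOptions per call
    // defeats System.Text.Json's internal metadata caching.
    private static readonly JsonSerializerOptions s_options = new();

    public static async Task<HttpResponseMessage> PostAsync<T>(
        HttpClient client, Uri uri, T value)
    {
        // Buffer is rented from the shared ArrayPool and returned on dispose.
        using var buffer = new ArrayPoolBufferWriter<byte>();
        using var writer = new Utf8JsonWriter(buffer);
        JsonSerializer.Serialize(writer, value, s_options);
        writer.Flush();

        // WrittenMemory points into the rented array, so the buffer must not
        // be disposed before the send completes.
        using var content = new ReadOnlyMemoryContent(buffer.WrittenMemory);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        return await client.PostAsync(uri, content);
    }
}
```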

In the benchmark below, we can see that this approach triggers no Gen0, Gen1, or Gen2 collections and allocates only around 30 KB, while maintaining the speed provided by the native System.Text.Json approach.

|                Method |      Mean |     Error |    StdDev |     Gen0 |     Gen1 |     Gen2 |  Allocated |
|---------------------- |----------:|----------:|----------:|---------:|---------:|---------:|-----------:|
|              Baseline | 21.284 ms | 0.3535 ms | 0.2952 ms | 968.7500 | 968.7500 | 968.7500 | 7183.69 KB |
|        SystemTextJson |  8.468 ms | 0.1693 ms | 0.4308 ms | 375.0000 | 375.0000 | 375.0000 | 3193.51 KB |
| ReadOnlyMemoryContent |  8.124 ms | 0.1486 ms | 0.2679 ms |        - |        - |        - |   30.15 KB |

The great thing about this approach is that it also works with other common serializers such as MessagePack and Protobuf.

// protobuf
using CommunityToolkit.HighPerformance.Buffers.ArrayPoolBufferWriter<byte> buffer = new();
ProtoBuf.Serializer.Serialize(buffer, myContent);
using var content = new ReadOnlyMemoryContent(buffer.WrittenMemory);
content.Headers.Add("Content-Type", "application/protobuf");

// messagepack
using CommunityToolkit.HighPerformance.Buffers.ArrayPoolBufferWriter<byte> buffer = new();
MessagePack.MessagePackSerializer.Serialize(buffer, myContent);
using var content = new ReadOnlyMemoryContent(buffer.WrittenMemory);
content.Headers.Add("Content-Type", "application/messagepack");

I feel this is one of the many great features of .NET that has not been marketed properly, as I was unable to find any references to its usage online. I hope this article helps drive adoption.

UPDATE: After posting this on Reddit, I received feedback and some great suggestions on other ways to achieve the same result, so I added those to my benchmark and reran the scripts. I also broke the test down into two runs: one with a standard JSON payload (70 KB) and another with my original large payload (2 MB).

RecyclableMemoryStream with StreamContent

I've always been a fan of the RecyclableMemoryStream library, and using its pooled streams with a StreamContent proved to be great for both small and large payloads.

private static readonly RecyclableMemoryStreamManager _recycle = new();

public async Task RecyclableStreamAndStreamContent()
{
    using var stream = _recycle.GetStream();
    System.Text.Json.JsonSerializer.Serialize(stream, Item);
    stream.Seek(0, SeekOrigin.Begin);
    using HttpContent content = new StreamContent(stream);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    using var request = new HttpRequestMessage();
    request.RequestUri = _uri;
    request.Method = HttpMethod.Post;
    request.Content = content;
    using var response = await Client.SendAsync(request);
    response.EnsureSuccessStatusCode();
}

PostAsJsonAsync

A similar suggestion was simply to use the built-in PostAsJsonAsync extension provided by .NET, which optimises HttpContent creation by serializing directly to the transport stream during the network transfer.

public async Task PostAsJsonAsync()
{
    using var response = await Client.PostAsJsonAsync(_uri, Item);
    response.EnsureSuccessStatusCode();
}
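Under the hood, PostAsJsonAsync wraps the value in a JsonContent (which is why that row is labelled JsonContent in the large-payload table). A sketch of the equivalent explicit form, assuming the same Client, _uri, and Item members as above (the method name is my own):

```csharp
using System.Net.Http.Json; // provides JsonContent and PostAsJsonAsync

public async Task ExplicitJsonContent()
{
    // JsonContent defers serialization until the body is written to the
    // transport stream, avoiding an intermediate string or byte[] copy
    // of the whole payload.
    using var content = JsonContent.Create(Item);
    using var response = await Client.PostAsync(_uri, content);
    response.EnsureSuccessStatusCode();
}
```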

Results

|                                    Method |     Mean |    Error |   StdDev |   Gen0 | Allocated |
|------------------------------------------ |---------:|---------:|---------:|-------:|----------:|
|          RecyclableStreamAndStreamContent | 733.1 us | 19.63 us | 57.58 us | 0.9766 |   7.09 KB |
|                           PostAsJsonAsync | 970.6 us | 21.43 us | 62.51 us |      - |   7.08 KB |
|      SystemTextJson_ReadOnlyMemoryContent | 962.0 us | 28.02 us | 80.40 us |      - |   6.86 KB |
|            Protobuf_ReadOnlyMemoryContent | 788.1 us | 15.31 us | 17.63 us | 0.9766 |   5.35 KB |
| Protobuf_RecyclableStreamAndStreamContent | 767.8 us | 15.03 us | 34.54 us | 0.9766 |   5.75 KB |

For the first run (small payload), all three JSON options produce broadly similar results. I also included Protobuf serialization to see whether StreamContent offers the same benefits there, and the outcomes are again broadly similar.

On the second run, using the much larger payload, StreamContent is the clear winner.

|                                    Method |      Mean |     Error |    StdDev | Allocated |
|------------------------------------------ |----------:|----------:|----------:|----------:|
|          RecyclableStreamAndStreamContent | 28.654 ms | 1.1542 ms | 3.3670 ms |  25.21 KB |
|                               JsonContent | 36.039 ms | 1.0074 ms | 2.9226 ms |  21.55 KB |
|      SystemTextJson_ReadOnlyMemoryContent | 34.717 ms | 0.6902 ms | 1.7443 ms |  21.32 KB |
|            Protobuf_ReadOnlyMemoryContent |  9.723 ms | 0.1953 ms | 0.5312 ms |   2.85 KB |
| Protobuf_RecyclableStreamAndStreamContent |  5.273 ms | 0.1054 ms | 0.2443 ms |    2.8 KB |

The benefit of combining RecyclableMemoryStream with StreamContent is that it also works with other serializers such as Protobuf, giving faster performance than JSON when dealing with large objects, assuming you don't optimise fully by moving to gRPC.
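The Protobuf variant benchmarked in the tables above was not shown earlier. Here is a sketch of it, assuming protobuf-net and the same _recycle, Client, _uri, and Item members as the JSON version:

```csharp
public async Task Protobuf_RecyclableStreamAndStreamContent()
{
    using var stream = _recycle.GetStream();     // pooled stream, no large buffer allocations
    ProtoBuf.Serializer.Serialize(stream, Item); // protobuf-net writes straight to the stream
    stream.Seek(0, SeekOrigin.Begin);            // rewind before handing the stream to StreamContent
    using HttpContent content = new StreamContent(stream);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/protobuf");
    using var response = await Client.PostAsync(_uri, content);
    response.EnsureSuccessStatusCode();
}
```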

I am, however, interested in digging deeper into why the System.Buffers approach, which Microsoft suggests should give much better performance than streams, is not showing the expected gains here. If you have any insight into this, I'd love to hear it.

Please join the discussion on Reddit if you have insights you wish to share. Thanks for reading.
