Here's a summary of what's new in .NET Libraries in this preview release:
- Add Out-of-Proc Trace Support for Activity Events and Links
- Rate Limiting Trace Sampling Support
- New async Zip APIs
- Performance improvement in GZipStream for concatenated streams
.NET Libraries updates in .NET 10:
- What's new in .NET 10 documentation
The .NET Activity class enables distributed tracing by tracking the flow of operations across services or components. .NET supports serializing this tracing data out-of-process via the Microsoft-Diagnostics-DiagnosticSource event source provider.
An Activity can include additional metadata such as ActivityLink and ActivityEvent. We’ve added support for serializing these as well, so out-of-proc trace data can now include information representing links and events, like the following:
Events->"[(TestEvent1,2025-03-27T23:34:10.6225721+00:00,[E11:EV1,E12:EV2]),(TestEvent2,2025-03-27T23:34:11.6276895+00:00,[E21:EV21,E22:EV22])]"
Links->"[(19b6e8ea216cb2ba36dd5d957e126d9f,98f7abcb3418f217,Recorded,null,false,[alk1:alv1,alk2:alv2]),(2d409549aadfdbdf5d1892584a5f2ab2,4f3526086a350f50,None,null,false)]"
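The events and links above come from activities carrying that metadata. As a minimal sketch (the source name, event names, and tag keys are illustrative, mirroring the sample output above), an Activity can be given links at start time and events afterward:

```csharp
using System.Diagnostics;

// Hypothetical source name for illustration
var source = new ActivitySource("Demo.Source");

// A listener is required for StartActivity to return a non-null Activity
var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "Demo.Source",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllDataAndRecorded
};
ActivitySource.AddActivityListener(listener);

// A link to a span in another trace, e.g. a message producer
var linkedContext = new ActivityContext(
    ActivityTraceId.CreateRandom(), ActivitySpanId.CreateRandom(), ActivityTraceFlags.Recorded);
var links = new[]
{
    new ActivityLink(linkedContext, new ActivityTagsCollection { ["alk1"] = "alv1" })
};

using Activity? activity = source.StartActivity(
    "ProcessOrder", ActivityKind.Internal, parentContext: default, links: links);

// Attach an event with tags; this metadata is now serialized out-of-proc too
activity?.AddEvent(new ActivityEvent(
    "TestEvent1", tags: new ActivityTagsCollection { ["E11"] = "EV1" }));
```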
When distributed tracing data is serialized out-of-process via the Microsoft-Diagnostics-DiagnosticSource event source provider, all recorded activities can be emitted, or sampling can be applied based on a trace ratio.
We're introducing a new sampling option called Rate Limiting Sampling, which restricts the number of root activities serialized per second. This helps control data volume more precisely.
Out-of-proc trace data aggregators can enable and configure this sampling by specifying the option in FilterAndPayloadSpecs. For example:
[AS]*/-ParentRateLimitingSampler(100)
This setting limits serialization to 100 root activities per second across all ActivitySource instances.
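For in-process experimentation, the same spec string can be passed when enabling the provider from an EventListener; the listener class name below is illustrative, but the provider name and the FilterAndPayloadSpecs argument key are the ones the event source recognizes:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Tracing;

// Registering the listener enables the provider as event sources appear.
using var listener = new DiagnosticSourceRateLimitListener();

sealed class DiagnosticSourceRateLimitListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "Microsoft-Diagnostics-DiagnosticSource")
        {
            EnableEvents(eventSource, EventLevel.Informational, EventKeywords.All,
                new Dictionary<string, string?>
                {
                    // Cap serialization at 100 root activities/second across all sources
                    ["FilterAndPayloadSpecs"] = "[AS]*/-ParentRateLimitingSampler(100)"
                });
        }
    }
}
```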
.NET 10 Preview 4 introduces new asynchronous APIs for working with ZIP archives, making it easier to perform non-blocking operations when reading from or writing to ZIP files. This feature was highly requested by the community.
The new APIs, added to the System.IO.Compression and System.IO.Compression.ZipFile assemblies, provide async methods for extracting, creating, and updating ZIP archives. These methods enable developers to efficiently handle large files and improve application responsiveness, especially in scenarios involving I/O-bound operations.
The approved API surface has been implemented in the System.IO.Compression libraries.
Usage examples:
// Extract a Zip archive
await ZipFile.ExtractToDirectoryAsync("archive.zip", "destinationFolder", overwriteFiles: true);
// Create a Zip archive
await ZipFile.CreateFromDirectoryAsync("sourceFolder", "archive.zip", CompressionLevel.SmallestSize, includeBaseDirectory: true, entryNameEncoding: Encoding.UTF8);
// Open an archive
await using ZipArchive archive = ZipFile.OpenReadAsync("archive.zip");
// Fine-grained manipulation
using FileStream archiveStream = File.OpenRead("archive.zip");
await using (ZipArchive archive = await ZipArchive.CreateAsync(archiveStream, ZipArchiveMode.Update, leaveOpen: false, entryNameEncoding: Encoding.UTF8))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Extract an entry to the filesystem
        await entry.ExtractToFileAsync(destinationFileName: "file.txt", overwrite: true);

        // Open an entry's stream
        await using Stream entryStream = await entry.OpenAsync();

        // Create an entry from a filesystem object
        ZipArchiveEntry createdEntry = await archive.CreateEntryFromFileAsync(sourceFileName: "path/to/file.txt", entryName: "file.txt");
    }
}

These new async methods complement the existing synchronous APIs, providing more flexibility for modern .NET applications.
A community contribution by @edwardneal improved the performance and memory usage of GZipStream when processing concatenated GZip data streams. Previously, each new stream segment would dispose and reallocate the internal ZLibStreamHandle, resulting in additional memory allocations and initialization overhead. With this change, the handle is now reset and reused using inflateReset2, reducing both managed and unmanaged memory allocations and improving execution time.
Highlights:
- Eliminates repeated allocation of ~64-80 bytes of managed memory per concatenated stream, with additional unmanaged memory savings.
- Reduces execution time by approximately 400ns per concatenated stream.
- Largest impact (~35% faster) is seen when processing a large number of small data streams.
- No significant change for single-stream scenarios.
For more details, see pull request #113587.
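The scenario this optimization targets can be sketched as follows: several independently compressed GZip members concatenated into a single byte stream, which GZipStream decompresses across member boundaries (the helper function name is illustrative):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

// Compress a string into a standalone GZip member
static byte[] Gzip(string text)
{
    var buffer = new MemoryStream();
    using (var gz = new GZipStream(buffer, CompressionMode.Compress))
        gz.Write(Encoding.UTF8.GetBytes(text));
    return buffer.ToArray();
}

// Two GZip members back to back in one stream
byte[] concatenated = Gzip("hello, ").Concat(Gzip("world")).ToArray();

// GZipStream reads through the member boundary; in .NET 10 the internal
// zlib handle is reset and reused at each boundary instead of reallocated.
using var input = new MemoryStream(concatenated);
using var gunzip = new GZipStream(input, CompressionMode.Decompress);
using var reader = new StreamReader(gunzip, Encoding.UTF8);
string result = reader.ReadToEnd();
Console.WriteLine(result); // prints "hello, world"
```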