@@ -169,7 +169,7 @@ Using the `LD_PRELOAD` environment variable to load it on a case-by-case basis
 will not work when `AT_SECURE` is set such as with setuid binaries. It's also
 generally not a recommended approach for production usage. The recommendation
 is to enable it globally and make exceptions for performance critical cases by
-running the application in a container / namespace without it enabled.
+running the application in a container/namespace without it enabled.

 Make sure to raise `vm.max_map_count` substantially too to accommodate the very
 large number of guard pages created by hardened\_malloc. As an example, in
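As a rough illustration of why the paragraph above calls for raising `vm.max_map_count`: each guard slab and guard page is a separate kernel mapping, and a process's total can be inspected by counting lines in `/proc/self/maps`. The following standalone Linux snippet (not part of hardened\_malloc) does exactly that; comparing the count with and without the allocator preloaded shows the difference:

```c
#include <stdio.h>

int main(void) {
    // Each line in /proc/self/maps is one mapping; guard pages around
    // slabs and large allocations each add entries, which is why
    // vm.max_map_count needs to be raised well above its default.
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps) {
        perror("fopen");
        return 1;
    }
    size_t count = 0;
    for (int c; (c = fgetc(maps)) != EOF;) {
        if (c == '\n') {
            count++;
        }
    }
    fclose(maps);
    printf("memory mappings in use: %zu\n", count);
    return 0;
}
```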
@@ -255,7 +255,7 @@ The following boolean configuration options are available:
 * `CONFIG_WRITE_AFTER_FREE_CHECK`: `true` (default) or `false` to control
   sanity checking that new small allocations contain zeroed memory. This can
   detect writes caused by a write-after-free vulnerability and mixes well with
-  the features for making memory reuse randomized / delayed. This has a
+  the features for making memory reuse randomized/delayed. This has a
   performance cost scaling to the size of the allocation, which is usually
   acceptable. This is not relevant to large allocations because they're always
   a fresh memory mapping from the kernel.
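A minimal sketch of the idea behind `CONFIG_WRITE_AFTER_FREE_CHECK` — illustrative only, with hypothetical helper names and a fixed slot size rather than the allocator's real code: free zero-fills the slot, and allocation verifies the zeroing is still intact before reusing it:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical slot size for one small size class.
#define SLOT_SIZE 64

// On free, the slot is zero-filled (zero-based sanitization).
static void slot_free(uint8_t *slot) {
    memset(slot, 0, SLOT_SIZE);
}

// On allocation, verify the zeroing is intact; any non-zero byte means
// something wrote to the slot while it was free (a write-after-free).
static uint8_t *slot_alloc(uint8_t *slot) {
    for (size_t i = 0; i < SLOT_SIZE; i++) {
        if (slot[i] != 0) {
            abort(); // treated as a fatal error by a hardened allocator
        }
    }
    return slot;
}

int main(void) {
    static uint8_t slab[SLOT_SIZE]; // zero-initialized like a fresh slab
    uint8_t *p = slot_alloc(slab);
    p[0] = 42;
    slot_free(p);
    p[0] = 42;        // simulated write-after-free
    slot_alloc(slab); // aborts: the free slot is no longer all-zero
    return 0;
}
```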
@@ -341,7 +341,7 @@ larger caches can substantially improve performance).

 ## Core design

-The core design of the allocator is very simple / minimalist. The allocator is
+The core design of the allocator is very simple/minimalist. The allocator is
 exclusive to 64-bit platforms in order to take full advantage of the abundant
 address space without being constrained by needing to keep the design
 compatible with 32-bit.
@@ -373,13 +373,13 @@ whether it's free, along with a separate bitmap for tracking allocations in the
 quarantine. The slab metadata entries in the array have intrusive lists
 threaded through them to track partial slabs (partially filled, and these are
 the first choice for allocation), empty slabs (limited amount of cached free
-memory) and free slabs (purged / memory protected).
+memory) and free slabs (purged/memory protected).

 Large allocations are tracked via a global hash table mapping their address to
 their size and random guard size. They're simply memory mappings and get mapped
 on allocation and then unmapped on free. Large allocations are the only dynamic
 memory mappings made by the allocator, since the address space for allocator
-state (including both small / large allocation metadata) and slab allocations
+state (including both small/large allocation metadata) and slab allocations
 is statically reserved.

 This allocator is aimed at production usage, not aiding with finding and fixing
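To make the shape of this metadata concrete, here is an illustrative sketch of the structures described above; the field names and sizes are assumptions for exposition, not hardened\_malloc's actual definitions:

```c
#include <stddef.h>
#include <stdint.h>

// Illustrative out-of-line slab metadata, stored in a dedicated array far
// away from the slabs themselves. One entry per slab in a size class region.
struct slab_metadata {
    uint64_t bitmap[4];            // which slots are allocated vs. free
    uint64_t quarantine_bitmap[4]; // slots currently held in quarantine
    struct slab_metadata *next;    // intrusive links threading this entry
    struct slab_metadata *prev;    // onto the partial/empty/free lists
    uint64_t canary_value;         // per-slab random canary (if enabled)
};

// Illustrative hash table entry for a large allocation: maps the mapping's
// address to its size and the random guard region size around it.
struct large_entry {
    void *address;
    size_t size;
    size_t guard_size;
    struct large_entry *next; // separate chaining within the table
};
```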
@@ -390,7 +390,7 @@ messages. The design choices are based around minimizing overhead and
 maximizing security which often leads to different decisions than a tool
 attempting to find bugs. For example, it uses zero-based sanitization on free
 and doesn't minimize slack space from size class rounding between the end of an
-allocation and the canary / guard region. Zero-based filling has the least
+allocation and the canary/guard region. Zero-based filling has the least
 chance of uncovering latent bugs, but also the best chance of mitigating
 vulnerabilities. The canary feature is primarily meant to act as padding
 absorbing small overflows to render them harmless, so slack space is helpful
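A toy example of the slot layout this describes, assuming a 33-byte allocation rounded up to a hypothetical 48-byte size class with an 8-byte terminal canary (all names and constants are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Illustrative slot layout for a 33-byte allocation in a 48-byte size
// class with an 8-byte canary at the end of the slot:
//
//   [ 33 bytes usable | 7 bytes slack | 8-byte canary ]
//
// Small overflows land in the slack or hit the canary rather than the
// next allocation's data.
#define SLOT_SIZE   48
#define CANARY_SIZE 8

static void place_canary(uint8_t *slot, uint64_t canary) {
    memcpy(slot + SLOT_SIZE - CANARY_SIZE, &canary, CANARY_SIZE);
}

static void check_canary(const uint8_t *slot, uint64_t canary) {
    uint64_t found;
    memcpy(&found, slot + SLOT_SIZE - CANARY_SIZE, CANARY_SIZE);
    if (found != canary) {
        abort(); // a hardened allocator aborts on detected corruption
    }
}

int main(void) {
    static uint8_t slot[SLOT_SIZE];
    const uint64_t canary = 0x87a5309bd5c8e3f1; // would be random per slab
    place_canary(slot, canary);
    memset(slot, 'A', 36); // a 3-byte overflow past the 33 requested bytes,
                           // harmlessly absorbed by the slack space
    check_canary(slot, canary); // canary still intact
    return 0;
}
```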
@@ -424,11 +424,11 @@ was a bit less important and if a core goal was finding latent bugs.
     * Top-level isolated regions for each arena
     * Divided up into isolated inner regions for each size class
         * High entropy random base for each size class region
-        * No deterministic / low entropy offsets between allocations with
+        * No deterministic/low entropy offsets between allocations with
           different size classes
     * Metadata is completely outside the slab allocation region
         * No references to metadata within the slab allocation region
-        * No deterministic / low entropy offsets to metadata
+        * No deterministic/low entropy offsets to metadata
     * Entire slab region starts out non-readable and non-writable
     * Slabs beyond the cache limit are purged and become non-readable and
       non-writable memory again
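A minimal sketch of the last few properties in the list above, using deliberately small illustrative sizes (far smaller than the real regions): the whole region is reserved as `PROT_NONE` up front, and an individual slab is only made accessible on demand:

```c
#include <stdio.h>
#include <sys/mman.h>

// Reserve an entire size class region as PROT_NONE: the address space
// exists, but every page faults until a slab is explicitly made usable.
#define REGION_SIZE (1UL << 30) // 1 GiB reservation (illustrative)
#define SLAB_SIZE   (1UL << 16) // 64 KiB slab (illustrative)

int main(void) {
    unsigned char *region = mmap(NULL, REGION_SIZE, PROT_NONE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    // Unprotect one slab on demand when it's needed for allocations; the
    // index would come from the out-of-line slab metadata.
    unsigned char *slab = region + 5 * SLAB_SIZE;
    if (mprotect(slab, SLAB_SIZE, PROT_READ | PROT_WRITE)) {
        perror("mprotect");
        return 1;
    }
    slab[0] = 1; // now usable; every other page still faults on access

    return 0;
}
```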
@@ -649,7 +649,7 @@ other. Static assignment can also reduce memory usage since threads may have
 varying usage of size classes.

 When there's substantial allocation or deallocation pressure, the allocator
-does end up calling into the kernel to purge / protect unused slabs by
+does end up calling into the kernel to purge/protect unused slabs by
 replacing them with fresh `PROT_NONE` regions along with unprotecting slabs
 when partially filled and cached empty slabs are depleted. There will be
 configuration over the amount of cached empty slabs, but it's not entirely a
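One plausible way to implement the purge/protect step described above — a sketch, not necessarily the allocator's exact sequence: overlaying the slab with a fresh `PROT_NONE` anonymous mapping via `MAP_FIXED` both returns the memory to the kernel and makes the range inaccessible in a single call:

```c
#include <stdio.h>
#include <sys/mman.h>

#define SLAB_SIZE (1UL << 16) // illustrative slab size

// Purge and protect an unused slab in one step: MAP_FIXED atomically
// replaces the old pages (freeing their memory) and leaves the whole
// range inaccessible until it's unprotected again.
static int purge_slab(void *slab) {
    void *p = mmap(slab, SLAB_SIZE, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    return 0;
}

int main(void) {
    // Stand-in for a cached empty slab about to be purged.
    void *slab = mmap(NULL, SLAB_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (slab == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    return purge_slab(slab) ? 1 : 0;
}
```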
@@ -696,7 +696,7 @@ The secondary benefit of thread caches is being able to avoid the underlying
 allocator implementation entirely for some allocations and deallocations when
 they're mixed together rather than many allocations being done together or many
 frees being done together. The value of this depends a lot on the application
-and it's entirely unsuitable / incompatible with a hardened allocator since it
+and it's entirely unsuitable/incompatible with a hardened allocator since it
 bypasses all of the underlying security and would destroy much of the security
 value.

@@ -960,7 +960,7 @@ doesn't handle large allocations within the arenas, so it presents those in the
 For example, with 4 arenas enabled, there will be a 5th arena in the statistics
 for the large allocations.

-The `nmalloc` / `ndalloc` fields are 64-bit integers tracking allocation and
+The `nmalloc`/`ndalloc` fields are 64-bit integers tracking allocation and
 deallocation count. These are defined as wrapping on overflow, per the jemalloc
 implementation.

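The per-arena statistics described here are surfaced through the standard glibc `malloc_info` interface. Assuming a Linux/glibc target with the allocator preloaded and stats support compiled in, a minimal caller looks like this:

```c
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Create some allocation traffic so the counters have something to show.
    for (int i = 0; i < 1000; i++) {
        free(malloc(64));
    }
    // Dump allocator statistics as XML to stdout; with stats enabled this
    // includes the per-arena entries described above, plus the extra
    // arena presenting the large allocations.
    if (malloc_info(0, stdout)) {
        perror("malloc_info");
        return 1;
    }
    return 0;
}
```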