
Commit 8e72d67

jamill authored and gitster committed
block alloc: allocate cache entries from mem_pool
When reading large indexes from disk, a portion of the time is
dominated by malloc() calls. This can be mitigated by allocating a
large block of memory up front and managing it ourselves via memory
pools.

This change moves the cache entry allocation to be on top of memory
pools.

Design:

The index_state struct will gain a notion of an associated memory_pool
from which cache_entries will be allocated. When reading in the index
from disk, we have information on the number of entries and their
size, which can guide us in deciding how large our initial memory
allocation should be. When an index is discarded, the associated
memory_pool will be discarded as well - so the lifetime of a
cache_entry is tied to the lifetime of the index_state that it was
allocated for.

In the case of a Split Index, the following rules are followed. First,
some terminology is defined:

Terminology:
  - 'the_index': represents the logical view of the index

  - 'split_index': represents the "base" cache entries. Read from the
    split index file.

'the_index' can reference a single split_index, as well as
cache_entries from the split_index. 'the_index' will be discarded
before the 'split_index' is. This means that when we are allocating
cache_entries in the presence of a split index, we need to allocate
the entries from the split_index's memory pool. This allows us to
follow the pattern that 'the_index' can reference cache_entries from
the 'split_index', and that the cache_entries will not be freed while
they are still being referenced.

Managing transient cache_entry structs:

Cache entries are usually allocated for an index, but this is not
always the case. Cache entries are sometimes allocated because this
is the type that the existing checkout_entry function works with.
Because of this, the existing code needs to handle cache entries
associated with an index / memory pool, and those that only exist
transiently. Several strategies were contemplated around how to
handle this.

Chosen approach:

An extra field was added to the cache_entry type to track whether the
cache_entry was allocated from a memory pool or not. This is currently
an int field, as there are no more available bits in the existing
ce_flags bit field. If / when more bits are needed, this new field can
be turned into a proper bit field.

Alternatives:

1) Do not include any information about how the cache_entry was
   allocated. Calling code would be responsible for tracking whether
   the cache_entry needed to be freed or not.
     Pro: No extra memory overhead to track this state
     Con: Extra complexity in callers to handle this correctly.
   The extra complexity and the burden of not regressing this behavior
   in the future was more than we wanted.

2) cache_entry would gain knowledge about which mem_pool allocated it.
     Pro: Could (potentially) do extra logic to know when a mem_pool
          no longer had references to any cache_entry
     Con: cache_entry would grow heavier by a pointer, instead of an int
   We didn't see a tangible benefit to this approach.

3) Do not add any extra information to a cache_entry, but when freeing
   a cache entry, check if the memory exists in a region managed by
   existing mem_pools.
     Pro: No extra memory overhead to track state
     Con: Extra computation is performed when freeing cache entries
   We decided tracking and iterating over known memory pool regions was
   less desirable than adding an extra field to track this state.

Signed-off-by: Jameson Miller <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>
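A minimal sketch of the lifecycle the design section describes, using
the mem-pool API this series builds on (illustrative only; the values
of 'estimated_size' and 'namelen' are placeholders, not code from this
commit):

	struct mem_pool *pool = NULL;
	size_t estimated_size = 1024 * 1024;  /* guessed from on-disk data */
	size_t namelen = 20;

	/* The pool is created when the index is read in... */
	mem_pool_init(&pool, estimated_size);

	/* ...entries are carved out of it, with no per-entry free()... */
	struct cache_entry *ce = mem_pool_alloc(pool, cache_entry_size(namelen));
	ce->mem_pool_allocated = 1;

	/* ...and the whole pool is discarded along with the index_state. */
	mem_pool_discard(pool);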
1 parent 0e58301 commit 8e72d67

5 files changed: +167 −39 lines changed

cache.h

Lines changed: 21 additions & 0 deletions

@@ -15,6 +15,7 @@
 #include "path.h"
 #include "sha1-array.h"
 #include "repository.h"
+#include "mem-pool.h"

 #include <zlib.h>
 typedef struct git_zstream {
@@ -156,6 +157,7 @@ struct cache_entry {
 	struct stat_data ce_stat_data;
 	unsigned int ce_mode;
 	unsigned int ce_flags;
+	unsigned int mem_pool_allocated;
 	unsigned int ce_namelen;
 	unsigned int index;	/* for link extension */
 	struct object_id oid;
@@ -227,6 +229,7 @@ static inline void copy_cache_entry(struct cache_entry *dst,
 				    const struct cache_entry *src)
 {
 	unsigned int state = dst->ce_flags & CE_HASHED;
+	int mem_pool_allocated = dst->mem_pool_allocated;

 	/* Don't copy hash chain and name */
 	memcpy(&dst->ce_stat_data, &src->ce_stat_data,
@@ -235,6 +238,9 @@ static inline void copy_cache_entry(struct cache_entry *dst,

 	/* Restore the hash state */
 	dst->ce_flags = (dst->ce_flags & ~CE_HASHED) | state;
+
+	/* Restore the mem_pool_allocated flag */
+	dst->mem_pool_allocated = mem_pool_allocated;
 }

 static inline unsigned create_ce_flags(unsigned stage)
@@ -328,6 +334,7 @@ struct index_state {
 	struct untracked_cache *untracked;
 	uint64_t fsmonitor_last_update;
 	struct ewah_bitmap *fsmonitor_dirty;
+	struct mem_pool *ce_mem_pool;
 };

 extern struct index_state the_index;
@@ -373,6 +380,20 @@ struct cache_entry *make_empty_transient_cache_entry(size_t name_len);
  */
 void discard_cache_entry(struct cache_entry *ce);

+/*
+ * Duplicate a cache_entry. Allocate memory for the new entry from a
+ * memory_pool. Takes into account cache_entry fields that are meant
+ * for managing the underlying memory allocation of the cache_entry.
+ */
+struct cache_entry *dup_cache_entry(const struct cache_entry *ce, struct index_state *istate);
+
+/*
+ * Validate the cache entries in the index. This is an internal
+ * consistency check that the cache_entry structs are allocated from
+ * the expected memory pool.
+ */
+void validate_cache_entries(const struct index_state *istate);
+
 #ifndef NO_THE_INDEX_COMPATIBILITY_MACROS
 #define active_cache (the_index.cache)
 #define active_nr (the_index.cache_nr)
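How the new mem_pool_allocated field is meant to be used by callers, as
a hedged sketch ('len' is a placeholder; 'the_index' is the global
declared above):

	size_t len = 20;

	/* Entry tied to an index: allocated from that index's pool. */
	struct cache_entry *pooled =
		make_empty_cache_entry(&the_index, len);  /* mem_pool_allocated == 1 */

	/* Entry with no index, e.g. for checkout_entry(): heap-allocated. */
	struct cache_entry *transient =
		make_empty_transient_cache_entry(len);    /* mem_pool_allocated == 0 */

	discard_cache_entry(pooled);    /* no-op; memory dies with the pool */
	discard_cache_entry(transient); /* actually free()d */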

mem-pool.c

Lines changed: 2 additions & 1 deletion

@@ -54,7 +54,8 @@ void mem_pool_discard(struct mem_pool *mem_pool)
 {
 	struct mp_block *block, *block_to_free;

-	while ((block = mem_pool->mp_block))
+	block = mem_pool->mp_block;
+	while (block)
 	{
 		block_to_free = block;
 		block = block->next_block;
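The two-line change above matters because the old loop condition
re-read mem_pool->mp_block, which nothing inside the loop updates, so
after freeing the first block the walk would revisit freed memory. The
fixed shape is the usual list-free idiom, sketched generically here
(illustrative names, not git code):

	#include <stdlib.h>

	struct node { struct node *next; };

	static void free_list(struct node *head)
	{
		while (head) {
			struct node *to_free = head;
			head = head->next;  /* advance before freeing */
			free(to_free);
		}
	}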

read-cache.c

Lines changed: 100 additions & 19 deletions

@@ -46,6 +46,48 @@
 	CE_ENTRY_ADDED | CE_ENTRY_REMOVED | CE_ENTRY_CHANGED | \
 	SPLIT_INDEX_ORDERED | UNTRACKED_CHANGED | FSMONITOR_CHANGED)

+
+/*
+ * This is an estimate of the pathname length in the index. We use
+ * this for V4 index files to guess the un-deltafied size of the index
+ * in memory because of pathname deltafication. This is not required
+ * for V2/V3 index formats because their pathnames are not compressed.
+ * If the initial amount of memory set aside is not sufficient, the
+ * mem pool will allocate extra memory.
+ */
+#define CACHE_ENTRY_PATH_LENGTH 80
+
+static inline struct cache_entry *mem_pool__ce_alloc(struct mem_pool *mem_pool, size_t len)
+{
+	struct cache_entry *ce;
+	ce = mem_pool_alloc(mem_pool, cache_entry_size(len));
+	ce->mem_pool_allocated = 1;
+	return ce;
+}
+
+static inline struct cache_entry *mem_pool__ce_calloc(struct mem_pool *mem_pool, size_t len)
+{
+	struct cache_entry *ce;
+	ce = mem_pool_calloc(mem_pool, 1, cache_entry_size(len));
+	ce->mem_pool_allocated = 1;
+	return ce;
+}
+
+static struct mem_pool *find_mem_pool(struct index_state *istate)
+{
+	struct mem_pool **pool_ptr;
+
+	if (istate->split_index && istate->split_index->base)
+		pool_ptr = &istate->split_index->base->ce_mem_pool;
+	else
+		pool_ptr = &istate->ce_mem_pool;
+
+	if (!*pool_ptr)
+		mem_pool_init(pool_ptr, 0);
+
+	return *pool_ptr;
+}
+
 struct index_state the_index;
 static const char *alternate_index_output;

@@ -746,7 +788,7 @@ int add_file_to_index(struct index_state *istate, const char *path, int flags)

 struct cache_entry *make_empty_cache_entry(struct index_state *istate, size_t len)
 {
-	return xcalloc(1, cache_entry_size(len));
+	return mem_pool__ce_calloc(find_mem_pool(istate), len);
 }

 struct cache_entry *make_empty_transient_cache_entry(size_t len)

@@ -1668,13 +1710,13 @@ int read_index(struct index_state *istate)
 	return read_index_from(istate, get_index_file(), get_git_dir());
 }

-static struct cache_entry *cache_entry_from_ondisk(struct index_state *istate,
+static struct cache_entry *cache_entry_from_ondisk(struct mem_pool *mem_pool,
 						   struct ondisk_cache_entry *ondisk,
 						   unsigned int flags,
 						   const char *name,
 						   size_t len)
 {
-	struct cache_entry *ce = make_empty_cache_entry(istate, len);
+	struct cache_entry *ce = mem_pool__ce_alloc(mem_pool, len);

 	ce->ce_stat_data.sd_ctime.sec = get_be32(&ondisk->ctime.sec);
 	ce->ce_stat_data.sd_mtime.sec = get_be32(&ondisk->mtime.sec);

@@ -1716,7 +1758,7 @@ static unsigned long expand_name_field(struct strbuf *name, const char *cp_)
 	return (const char *)ep + 1 - cp_;
 }

-static struct cache_entry *create_from_disk(struct index_state *istate,
+static struct cache_entry *create_from_disk(struct mem_pool *mem_pool,
 					    struct ondisk_cache_entry *ondisk,
 					    unsigned long *ent_size,
 					    struct strbuf *previous_name)

@@ -1748,13 +1790,13 @@ static struct cache_entry *create_from_disk(struct index_state *istate,
 		/* v3 and earlier */
 		if (len == CE_NAMEMASK)
 			len = strlen(name);
-		ce = cache_entry_from_ondisk(istate, ondisk, flags, name, len);
+		ce = cache_entry_from_ondisk(mem_pool, ondisk, flags, name, len);

 		*ent_size = ondisk_ce_size(ce);
 	} else {
 		unsigned long consumed;
 		consumed = expand_name_field(previous_name, name);
-		ce = cache_entry_from_ondisk(istate, ondisk, flags,
+		ce = cache_entry_from_ondisk(mem_pool, ondisk, flags,
 					     previous_name->buf,
 					     previous_name->len);

@@ -1828,6 +1870,22 @@ static void post_read_index_from(struct index_state *istate)
 		tweak_fsmonitor(istate);
 }

+static size_t estimate_cache_size_from_compressed(unsigned int entries)
+{
+	return entries * (sizeof(struct cache_entry) + CACHE_ENTRY_PATH_LENGTH);
+}
+
+static size_t estimate_cache_size(size_t ondisk_size, unsigned int entries)
+{
+	long per_entry = sizeof(struct cache_entry) - sizeof(struct ondisk_cache_entry);
+
+	/*
+	 * Account for potential alignment differences.
+	 */
+	per_entry += align_padding_size(sizeof(struct cache_entry), -sizeof(struct ondisk_cache_entry));
+	return ondisk_size + entries * per_entry;
+}
+
 /* remember to discard_cache() before reading a different cache! */
 int do_read_index(struct index_state *istate, const char *path, int must_exist)
 {

@@ -1874,10 +1932,15 @@ int do_read_index(struct index_state *istate, const char *path, int must_exist)
 	istate->cache = xcalloc(istate->cache_alloc, sizeof(*istate->cache));
 	istate->initialized = 1;

-	if (istate->version == 4)
+	if (istate->version == 4) {
 		previous_name = &previous_name_buf;
-	else
+		mem_pool_init(&istate->ce_mem_pool,
+			      estimate_cache_size_from_compressed(istate->cache_nr));
+	} else {
 		previous_name = NULL;
+		mem_pool_init(&istate->ce_mem_pool,
+			      estimate_cache_size(mmap_size, istate->cache_nr));
+	}

 	src_offset = sizeof(*hdr);
 	for (i = 0; i < istate->cache_nr; i++) {

@@ -1886,7 +1949,7 @@ int do_read_index(struct index_state *istate, const char *path, int must_exist)
 		unsigned long consumed;

 		disk_ce = (struct ondisk_cache_entry *)((char *)mmap + src_offset);
-		ce = create_from_disk(istate, disk_ce, &consumed, previous_name);
+		ce = create_from_disk(istate->ce_mem_pool, disk_ce, &consumed, previous_name);
 		set_index_entry(istate, i, ce);

 		src_offset += consumed;

@@ -1983,17 +2046,13 @@ int is_index_unborn(struct index_state *istate)

 int discard_index(struct index_state *istate)
 {
-	int i;
+	/*
+	 * Cache entries in istate->cache[] should have been allocated
+	 * from the memory pool associated with this index, or from an
+	 * associated split_index. There is no need to free individual
+	 * cache entries.
+	 */

-	for (i = 0; i < istate->cache_nr; i++) {
-		if (istate->cache[i]->index &&
-		    istate->split_index &&
-		    istate->split_index->base &&
-		    istate->cache[i]->index <= istate->split_index->base->cache_nr &&
-		    istate->cache[i] == istate->split_index->base->cache[istate->cache[i]->index - 1])
-			continue;
-		discard_cache_entry(istate->cache[i]);
-	}
 	resolve_undo_clear_index(istate);
 	istate->cache_nr = 0;
 	istate->cache_changed = 0;

@@ -2007,6 +2066,12 @@ int discard_index(struct index_state *istate)
 	discard_split_index(istate);
 	free_untracked_cache(istate->untracked);
 	istate->untracked = NULL;
+
+	if (istate->ce_mem_pool) {
+		mem_pool_discard(istate->ce_mem_pool);
+		istate->ce_mem_pool = NULL;
+	}
+
 	return 0;
 }

@@ -2798,7 +2863,23 @@ void move_index_extensions(struct index_state *dst, struct index_state *src)
 	src->untracked = NULL;
 }

+struct cache_entry *dup_cache_entry(const struct cache_entry *ce,
+				    struct index_state *istate)
+{
+	unsigned int size = ce_size(ce);
+	int mem_pool_allocated;
+	struct cache_entry *new_entry = make_empty_cache_entry(istate, ce_namelen(ce));
+	mem_pool_allocated = new_entry->mem_pool_allocated;
+
+	memcpy(new_entry, ce, size);
+	new_entry->mem_pool_allocated = mem_pool_allocated;
+	return new_entry;
+}
+
 void discard_cache_entry(struct cache_entry *ce)
 {
+	if (ce && ce->mem_pool_allocated)
+		return;
+
 	free(ce);
 }
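A worked example of the two estimates above. The sizeof values here are
illustrative assumptions (not measured from this commit), chosen only
to show the arithmetic:

	#include <stdio.h>

	#define CACHE_ENTRY_PATH_LENGTH 80

	int main(void)
	{
		unsigned int entries = 100000;
		size_t ce_size = 88;                    /* assumed sizeof(struct cache_entry) */
		size_t ondisk_ce = 64;                  /* assumed sizeof(struct ondisk_cache_entry) */
		size_t ondisk_total = 12 * 1024 * 1024; /* assumed index file size */

		/* V4: struct plus a guessed 80-byte un-deltafied path per entry */
		size_t v4 = entries * (ce_size + CACHE_ENTRY_PATH_LENGTH);

		/* V2/V3: on-disk bytes plus the per-entry in-memory overhead */
		size_t v23 = ondisk_total + entries * (ce_size - ondisk_ce);

		printf("v4 estimate:   %zu bytes\n", v4);  /* 16800000, ~16.8 MB */
		printf("v2/3 estimate: %zu bytes\n", v23); /* 14982912, ~15.0 MB */
		return 0;
	}

Either way the number is only a starting size; as the comment on
CACHE_ENTRY_PATH_LENGTH notes, the pool allocates extra blocks if the
estimate turns out low.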

split-index.c

Lines changed: 42 additions & 8 deletions

@@ -73,16 +73,31 @@ void move_cache_to_base_index(struct index_state *istate)
 	int i;

 	/*
-	 * do not delete old si->base, its index entries may be shared
-	 * with istate->cache[]. Accept a bit of leaking here because
-	 * this code is only used by short-lived update-index.
+	 * If there was a previous base index, then transfer ownership of allocated
+	 * entries to the parent index.
 	 */
+	if (si->base &&
+		si->base->ce_mem_pool) {
+
+		if (!istate->ce_mem_pool)
+			mem_pool_init(&istate->ce_mem_pool, 0);
+
+		mem_pool_combine(istate->ce_mem_pool, istate->split_index->base->ce_mem_pool);
+	}
+
 	si->base = xcalloc(1, sizeof(*si->base));
 	si->base->version = istate->version;
 	/* zero timestamp disables racy test in ce_write_index() */
 	si->base->timestamp = istate->timestamp;
 	ALLOC_GROW(si->base->cache, istate->cache_nr, si->base->cache_alloc);
 	si->base->cache_nr = istate->cache_nr;
+
+	/*
+	 * The mem_pool needs to move with the allocated entries.
+	 */
+	si->base->ce_mem_pool = istate->ce_mem_pool;
+	istate->ce_mem_pool = NULL;
+
 	COPY_ARRAY(si->base->cache, istate->cache, istate->cache_nr);
 	mark_base_index_entries(si->base);
 	for (i = 0; i < si->base->cache_nr; i++)

@@ -331,12 +346,31 @@ void remove_split_index(struct index_state *istate)
 {
 	if (istate->split_index) {
 		/*
-		 * can't discard_split_index(&the_index); because that
-		 * will destroy split_index->base->cache[], which may
-		 * be shared with the_index.cache[]. So yeah we're
-		 * leaking a bit here.
+		 * When removing the split index, we need to move
+		 * ownership of the mem_pool associated with the
+		 * base index to the main index. There may be cache entries
+		 * allocated from the base's memory pool that are shared with
+		 * the_index.cache[].
 		 */
-		istate->split_index = NULL;
+		mem_pool_combine(istate->ce_mem_pool, istate->split_index->base->ce_mem_pool);
+
+		/*
+		 * The split index no longer owns the mem_pool backing
+		 * its cache array. As we are discarding this index,
+		 * mark the index as having no cache entries, so it
+		 * will not attempt to clean up the cache entries or
+		 * validate them.
+		 */
+		if (istate->split_index->base)
+			istate->split_index->base->cache_nr = 0;
+
+		/*
+		 * We can discard the split index because its
+		 * memory pool has been incorporated into the
+		 * memory pool associated with the_index.
+		 */
+		discard_split_index(istate);
+
 		istate->cache_changed |= SOMETHING_CHANGED;
 	}
 }
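Both hunks above lean on mem_pool_combine() to hand one pool's blocks
to another, so that entries shared across the two index_state structs
stay valid. A hedged sketch of that hand-off (assuming, as the calls
above imply, that combining moves every block from the source pool
into the destination and leaves the source empty):

	struct mem_pool *base_pool = NULL, *main_pool = NULL;

	mem_pool_init(&base_pool, 0);
	mem_pool_init(&main_pool, 0);

	/* ... cache entries shared with the_index.cache[] live in base_pool ... */

	mem_pool_combine(main_pool, base_pool); /* main_pool now owns the blocks */
	mem_pool_discard(base_pool);            /* now empty; nothing left to free */
	mem_pool_discard(main_pool);            /* single point of deallocation */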

unpack-trees.c

Lines changed: 2 additions & 11 deletions

@@ -203,20 +203,11 @@ static int do_add_entry(struct unpack_trees_options *o, struct cache_entry *ce,
 			ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
 }

-static struct cache_entry *dup_entry(const struct cache_entry *ce, struct index_state *istate)
-{
-	unsigned int size = ce_size(ce);
-	struct cache_entry *new_entry = make_empty_cache_entry(istate, ce_namelen(ce));
-
-	memcpy(new_entry, ce, size);
-	return new_entry;
-}
-
 static void add_entry(struct unpack_trees_options *o,
 		      const struct cache_entry *ce,
 		      unsigned int set, unsigned int clear)
 {
-	do_add_entry(o, dup_entry(ce, &o->result), set, clear);
+	do_add_entry(o, dup_cache_entry(ce, &o->result), set, clear);
 }

 /*

@@ -1802,7 +1793,7 @@ static int merged_entry(const struct cache_entry *ce,
 			struct unpack_trees_options *o)
 {
 	int update = CE_UPDATE;
-	struct cache_entry *merge = dup_entry(ce, &o->result);
+	struct cache_entry *merge = dup_cache_entry(ce, &o->result);

 	if (!old) {
 		/*
