
Commit c87e47e

murphyjacob4 authored and madolson committed
Fix invalid memory address caused by hashtable shrinking during safe iteration (#2753)
Safe iterators pause rehashing, but don't pause auto shrinking. This allows stale bucket references, which then cause a use-after-free (in this case, via compactBucketChain on a deleted bucket). This problem is easily reproducible via atomic slot migration, where we call delKeysInSlot, which relies on calling delete within a safe iterator. After the fix, it no longer causes a crash.

Since all cases where rehashing is paused expect auto shrinking to also be paused, I am making this happen automatically as part of pausing rehashing.

---------

Signed-off-by: Jacob Murphy <[email protected]>
(cherry picked from commit 1cf0df9)
Signed-off-by: cherukum-amazon <[email protected]>
1 parent 3f243ff commit c87e47e

File tree

1 file changed: 9 additions, 4 deletions


src/hashtable.c

Lines changed: 9 additions & 4 deletions

@@ -1231,14 +1231,18 @@ void hashtableResumeAutoShrink(hashtable *ht) {
 
 /* Pauses incremental rehashing. When rehashing is paused, bucket chains are not
  * automatically compacted when entries are deleted. Doing so may leave empty
- * spaces, "holes", in the bucket chains, which wastes memory. */
+ * spaces, "holes", in the bucket chains, which wastes memory. Additionally, we
+ * pause auto shrink when rehashing is paused, meaning the hashtable will not
+ * shrink the bucket count. */
 static void hashtablePauseRehashing(hashtable *ht) {
     ht->pause_rehash++;
+    hashtablePauseAutoShrink(ht);
 }
 
 /* Resumes incremental rehashing, after pausing it. */
 static void hashtableResumeRehashing(hashtable *ht) {
     ht->pause_rehash--;
+    hashtableResumeAutoShrink(ht);
 }
 
 /* Returns true if incremental rehashing is paused, false if it isn't. */
@@ -1630,6 +1634,9 @@ void hashtableTwoPhasePopDelete(hashtable *ht, hashtablePosition *pos) {
     assert(isPositionFilled(b, pos_in_bucket));
     b->presence &= ~(1 << pos_in_bucket);
     ht->used[table_index]--;
+    /* When we resume rehashing, it may cause the bucket to be deleted due to
+     * auto shrink. */
+    hashtablePauseAutoShrink(ht);
     hashtableResumeRehashing(ht);
     if (b->chained && !hashtableIsRehashingPaused(ht)) {
         /* Rehashing paused also means bucket chain compaction paused. It is
@@ -1638,7 +1645,7 @@ void hashtableTwoPhasePopDelete(hashtable *ht, hashtablePosition *pos) {
          * we do the compaction in the scan and iterator code instead. */
         fillBucketHole(ht, b, pos_in_bucket, table_index);
     }
-    hashtableShrinkIfNeeded(ht);
+    hashtableResumeAutoShrink(ht);
 }
 
 /* Initializes the state for an incremental find operation.
@@ -1816,7 +1823,6 @@ size_t hashtableScanDefrag(hashtable *ht, size_t cursor, hashtableScanFunction f
     /* Prevent entries from being moved around during the scan call, as a
      * side-effect of the scan callback. */
     hashtablePauseRehashing(ht);
-    hashtablePauseAutoShrink(ht);
 
     /* Flags. */
     int emit_ref = (flags & HASHTABLE_SCAN_EMIT_REF);
@@ -1928,7 +1934,6 @@ size_t hashtableScanDefrag(hashtable *ht, size_t cursor, hashtableScanFunction f
         } while (cursor & (mask_small ^ mask_large));
     }
     hashtableResumeRehashing(ht);
-    hashtableResumeAutoShrink(ht);
     return cursor;
 }
