Conversation

@charlievieth
Contributor

Changes:

  • Only resize the hash in Hashtable_put to reduce thrashing; otherwise
    a large deletion results in multiple incremental resizes.
  • Reduce the load factor to 2/3 from 0.7, since testing shows the
    average and median probe lengths increase quickly after 2/3 of
    capacity is used.
  • Eliminate modulo math when incrementing the index in put, get, and
    remove.
  • Increase the number of available prime sizes.
  • Define the hash key type as uint32_t instead of "unsigned int", which
    is stable across platforms and large enough to hold any PID.
  • Change Hashtable_get to take a "const" Hashtable.

This was originally meant to fix an issue with the Hashtable shrink
factor that was fixed with 230dc9c.

@BenBE BenBE added the enhancement and code quality ♻️ labels May 6, 2022
@BenBE BenBE added this to the 3.2.1 milestone May 6, 2022
@BenBE
Member

BenBE commented May 6, 2022

Actually trying to run the code produces an infinite loop in Hashtable_put because the resizing doesn't trigger. Can you have another look?

Member

@BenBE BenBE left a comment

LGTM. Mostly cosmetic changes left.

* | 0.75 | 287.00 | 85.23 | 26.15 |
* | 0.80 | 287.00 | 94.71 | 55.93 |
*/
#define USABLE_FRACTION(n) (((n) << 1)/3)
Member

Prefer normal multiplication.

Suggested change
#define USABLE_FRACTION(n) (((n) << 1)/3)
#define USABLE_FRACTION(n) (((n) * 2)/3)

Member

Another point that should be integrated into this patch series …

* Currently set to 1/4 of the USABLE_FRACTION, which is ~13% of the total
* hash map size.
*/
#define SHRINK_THRESHOLD(n) (USABLE_FRACTION((n)) / 4)
Member

Unnecessary parens

Suggested change
#define SHRINK_THRESHOLD(n) (USABLE_FRACTION((n)) / 4)
#define SHRINK_THRESHOLD(n) (USABLE_FRACTION(n) / 4)

Member

Further code style stuff that is not resolved …

Comment on lines +93 to 122
#define MIN_TABLE_SIZE 11

/* Primes borrowed from gnulib/lib/gl_anyhash_primes.h.

Array of primes, approximately in steps of factor 1.2.
This table was computed by executing the Common Lisp expression
(dotimes (i 244) (format t "nextprime(~D)~%" (ceiling (expt 1.2d0 i))))
and feeding the result to PARI/gp. */
static const size_t primes[] = {
MIN_TABLE_SIZE, 13, 17, 19, 23, 29, 37, 41, 47, 59, 67, 83, 97, 127, 139,
167, 199, 239, 293, 347, 419, 499, 593, 709, 853, 1021, 1229, 1471, 1777,
2129, 2543, 3049, 3659, 4391, 5273, 6323, 7589, 9103, 10937, 13109, 15727,
18899, 22651, 27179, 32609, 39133, 46957, 56359, 67619, 81157, 97369,
116849, 140221, 168253, 201907, 242309, 290761, 348889, 418667, 502409,
602887, 723467, 868151, 1041779, 1250141, 1500181, 1800191, 2160233,
2592277, 3110741, 3732887, 4479463, 5375371, 6450413, 7740517, 9288589,
11146307, 13375573, 16050689, 19260817, 23112977, 27735583, 33282701,
39939233, 47927081, 57512503, 69014987, 82818011, 99381577, 119257891,
143109469, 171731387, 206077643, 247293161, 296751781, 356102141, 427322587,
512787097, 615344489, 738413383, 886096061, 1063315271, 1275978331,
1531174013, 1837408799, 2204890543UL, 2645868653UL, 3175042391UL,
3810050851UL,
/* on 32-bit make sure we do not return primes not fitting in size_t */
#if SIZE_MAX > 4294967295ULL
4572061027ULL, 5486473229ULL, 6583767889ULL, 7900521449ULL, 9480625733ULL,
/* Largest possible size should be 13652101063ULL == GROWTH_RATE((UINT32_MAX/3)*4)
we include some larger values in case the above math is wrong */
11376750877ULL, 13652101063ULL, 16382521261ULL, 19659025513ULL, 23590830631ULL,
#endif
};
Member

@BenBE BenBE May 7, 2022

I think this list should stay much sparser, as it was before (maybe not quite as sparse, but not this dense either).

The reason the OEIS series was chosen in #277 was that primes just below a power of two seemed to empirically give the fewest collisions. Also, resizing a hash table is quite expensive and thus should occur as rarely as possible.

Thus I'd much prefer to keep the scaling factor in that list of primes a bit larger, at roughly 1.5 or more. With this new list at scaling factor 1.2 we resize the hash table on common systems about 30 times before we get into a steady state (for some desktop system at ~3000 threads). With a scaling factor around 1.5 this number would be roughly cut in half.

Contributor

Simulating the logic of this PR suggests that growth usually happens 4 steps at a time (3 or 5 at some specific initial sizes), while shrinking usually happens 3 or more steps at a time due to deferred shrinking (4 or more steps at some initial sizes).
Using the logic in this PR with the OEIS list would result in some sizes increasing by 2 steps at once, and in some cases in shouldResize firing while the shrunk size is still the same, which would be suboptimal but would still work.
In my opinion, 1.2 is too low for the hash size ratio, but I don't really understand the motivation for this part of the PR. Why do we want more size steps when the load ratio and the expand/shrink ratio stay the same?

Member

This is still open and should be resolved.

@charlievieth
Contributor Author

@BenBE Let's hold off on these changes for a moment. I was just reviewing the implications of only shrinking on Hashtable_put() and one of the side-effects is that a Hashtable created with an explicit size greater than MIN_TABLE_SIZE will shrink on the first call to Hashtable_put, which we obviously don't want.

The downside of moving the shrink logic back to Hashtable_remove is that when turnover is high we'd end up shrinking the table on removes only to grow it again on puts (basically doubling the work we have to do).

Have we considered changing the hash table to not automatically shrink? This would allow adding something like Hashtable_reserve(N) (to reserve space for N items before inserts), and we could use the same compaction logic that we use for Vectors (which we never shrink).

@BenBE
Member

BenBE commented May 8, 2022

Couldn't you achieve this effect by setting the shrink threshold very low? The performance of the hash table won't be hurt if it is "too empty", thus down-sizing it late should have no negative impact.

As for an actual Hashtable_reserve call: I'd appreciate such a function, but it is likely best left as a separate PR, as the involved code changes would be quite substantial.

@tanriol
Contributor

tanriol commented May 8, 2022

The downside of moving the shrink logic back to Hashtable_remove is that when turnover is high we'd end up shrinking the table on removes only to grow it again on puts (basically doubling the work we have to do).

What are the cases for high turnover in htop - fighting off a fork bomb manually?

Furthermore, this likely won't be a problem for the current use in ProcessList, because ProcessList_scan first adds all new processes and only after that removes the expired ones, so in a stable system state it would go n -> 2n -> n in the worst case. Sure, we can still have resizes with a highly variable process count, but that would trigger them in the Hashtable_put variant too.

#include <stdint.h>


typedef unsigned int ht_key_t;
Contributor

Are you sure you can do that? Note that Hashtable is used in quite a few places in the codebase, and at least one uses it to store an inode number, which can be 64-bit at least on 64-bit systems (on 32-bit ones that seems to depend on _FILE_OFFSET_BITS=64).

Member

Good catch. unsigned int had been dodgy and uint32_t is just plain wrong.

If we want to replace this, then uint64_t is the way to go, independent of the system architecture.

Also, the overall key handling for the hash table is a bit shaky, to say the least: in C++ the STL uses a two-stage approach, with the hash function for indexing and a separate equality check on the full key object as a second layer in case the internal hash collided. Our implementation lacks this second comparison, thus we have to guarantee, both in the key space and in any hashing used for reduction (which we currently don't do anywhere AFAIK), that no collisions occur. This can be reasonably assured when we don't do hashing for key space reduction* and only use unique keys within the (reduced) key space.

*Casting uint64_t to uint32_t is a hash function just masking to keep the lower bits.

Contributor

Hashtables are used with the following keys in the current htop code: PIDs; user IDs; BSD jail IDs; inode numbers (in LinuxProcessList_readMaps); PCP column and meter IDs.

On my system (Linux amd64) pid_t and uid_t are defined as (signed) int, and PCP IDs are counted sequentially, so these should never have any problems with this. I don't know about BSD jail IDs.

Inode numbers, on the other hand, on Linux boil down to either unsigned 32-bit (by default on 32-bit) or unsigned 64-bit (on 64-bit or with _FILE_OFFSET_BITS=64), so this is the only case in the current codebase that can cause confusion. Fortunately, it's only used in LinuxProcessList_readMaps and thus the only possible problem (on Linux) seems to be usesDeletedLib and/or libraries size being wrong on 32-bit systems.

P.S. And if one really wants to reduce hashtable allocations, I'd suggest starting with making the library size calculation use a persistent hashtable.

Member

Thus if I get your list correctly, we mostly have unsigned values anyway (except for pid_t and uid_t).

Maybe the best (and cleanest) way forward is to make the hash functions for each type of key we use explicit:

static inline hkey_t Hashtable_keyFromUID(uid_t value) {
   return (hkey_t)value;
}
static inline hkey_t Hashtable_keyFromInode(inode_t value) {
   return (hkey_t)value;
}

Yes, this adds quite a bit of boilerplate in Hashtable.h, but ensures we don't just blindly assume a 1:1 mapping without being explicit about that assumption.

Contributor

If we want to clean it up, shouldn't we go all the way and make the key type a union { pid_t pid; uid_t uid; /* other options */ }?

Member

I'd prefer to not do this as the key type should be numeric. Storing into a union in one field and reading from another is undefined behavior in the C standard and may even leave parts of the memory undefined (when writing uint32_t and reading back uint64_t).

With the suggested approach you could even partially remap entries to improve cache hit patterns by adjusting individual hash functions (e.g. most UIDs are sequential in a certain range, while memory (page) addresses jump in steps of 4 KiB). Currently we handle both of these sets of key inputs identically.

Member

It would be nice if this were either left in its original state or updated to make key space conversions/hashing explicit, as elaborated in my comment from last year.

@fasterit fasterit requested a review from BenBE June 1, 2022 06:10
@fasterit fasterit modified the milestones: 3.2.1, 3.2.2 Jun 1, 2022
@BenBE BenBE modified the milestones: 3.2.2, 3.3.0 Jan 31, 2023
Member

@BenBE BenBE left a comment

There are still several things unresolved that have been noted in previous reviews. Please fix them.

If you could update the existing commits by squashing them/rebasing fixes, this would be much appreciated.

@BenBE BenBE modified the milestones: 3.3.0, 3.4.0 Dec 26, 2023
@fasterit fasterit modified the milestones: 3.4.0, 3.5.0 Feb 20, 2025