---
Title: 'max_load_factor()'
Description: 'Gets or sets the maximum average number of elements per bucket before rehashing occurs in an unordered map.'
Subjects:
  - 'Computer Science'
  - 'Data Science'
Tags:
  - 'Map'
  - 'STL'
CatalogContent:
  - 'learn-c-plus-plus'
  - 'paths/computer-science'
---

The **`unordered_map::max_load_factor`** function in C++ gets or sets the maximum load factor of an `unordered_map`. The load factor is the average number of elements per bucket (i.e., total elements divided by total buckets). When an insertion would push the load factor above this maximum, the container automatically increases its bucket count and rehashes. By default, the maximum load factor is `1.0`.

## Syntax

```cpp
unordered_mapName.max_load_factor();      // Getter
unordered_mapName.max_load_factor(value); // Setter
```

**Parameters:**

- `value` (optional): A `float` specifying the new maximum load factor. Must be greater than `0`.

**Return value:**

The getter overload returns the current maximum load factor as a `float`; the setter overload returns nothing (`void`).

## Example 1: Getting the default load factor

This example prints the default max load factor of an unordered map:

```cpp
#include <iostream>
#include <unordered_map>

using namespace std;

int main() {
  unordered_map<int, int> myMap;
  cout << "Default max load factor: " << myMap.max_load_factor() << endl;
  return 0;
}
```

The output of this code is:

```shell
Default max load factor: 1
```

This shows that the default maximum load factor is `1.0`, meaning the map allows an average of one element per bucket before rehashing.
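
For additional context, the current load factor can be inspected alongside this maximum at any time. The following sketch (an illustration, not part of the original example) uses `size()`, `bucket_count()`, and `load_factor()` to show that the load factor is simply the element count divided by the bucket count; the exact bucket count printed is implementation-defined:

```cpp
#include <iostream>
#include <unordered_map>

using namespace std;

int main() {
  unordered_map<int, int> myMap;
  for (int i = 0; i < 5; ++i) {
    myMap[i] = i * i;
  }

  // load_factor() equals size() divided by bucket_count();
  // the exact bucket count depends on the implementation.
  cout << "Elements: " << myMap.size() << endl;
  cout << "Buckets: " << myMap.bucket_count() << endl;
  cout << "Current load factor: " << myMap.load_factor() << endl;
  cout << "Maximum load factor: " << myMap.max_load_factor() << endl;
  return 0;
}
```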

## Example 2: Setting a custom load factor to reduce rehashing

This example increases the max load factor to reduce the frequency of rehashing during insertions:

```cpp
#include <iostream>
#include <unordered_map>

using namespace std;

int main() {
  unordered_map<int, int> data;
  data.max_load_factor(2.5);

  cout << "New max load factor: " << data.max_load_factor() << endl;
  return 0;
}
```

The output of this code is:

```shell
New max load factor: 2.5
```

Raising the maximum load factor allows more elements per bucket, delaying rehashing and potentially improving insertion performance.
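
One rough way to observe this effect is to count how often the bucket count changes during insertion. The sketch below is an illustration, not part of the original entry: the helper name `countRehashes` and the values `5.0` and `1000` are arbitrary, and the exact counts are implementation-defined, but the map with the higher maximum load factor typically rehashes fewer times:

```cpp
#include <iostream>
#include <unordered_map>

using namespace std;

// Inserts n elements and returns how many times the bucket count changed,
// which corresponds to the number of rehashes triggered during insertion.
int countRehashes(unordered_map<int, int>& m, int n) {
  int rehashes = 0;
  auto buckets = m.bucket_count();
  for (int i = 0; i < n; ++i) {
    m[i] = i;
    if (m.bucket_count() != buckets) {
      buckets = m.bucket_count();
      ++rehashes;
    }
  }
  return rehashes;
}

int main() {
  unordered_map<int, int> defaultMap;  // max load factor 1.0
  unordered_map<int, int> relaxedMap;
  relaxedMap.max_load_factor(5.0);     // allow more elements per bucket

  cout << "Rehashes with default factor: " << countRehashes(defaultMap, 1000) << endl;
  cout << "Rehashes with factor 5.0: " << countRehashes(relaxedMap, 1000) << endl;
  return 0;
}
```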

## Codebyte Example: Lowering load factor to improve search speed

This example sets a lower max load factor to prioritize faster lookups in a time-critical application:

```codebyte/cpp
#include <iostream>
#include <string>
#include <unordered_map>

using namespace std;

int main() {
  unordered_map<string, int> wordCount;
  wordCount.max_load_factor(0.5);

  wordCount["optimize"] = 1;
  wordCount["speed"] = 2;

  cout << "Load factor set for quick access: " << wordCount.max_load_factor() << endl;
  return 0;
}
```

## Frequently asked questions

### 1. What is the default `max_load_factor` for an `unordered_map`?

It's `1.0` by default, meaning one element per bucket on average before rehashing is triggered.

### 2. Does increasing the load factor make the map faster?

It can speed up insertions by reducing rehashes, but may slow down lookups due to more collisions.

### 3. What happens if I set the load factor too low?

The map will rehash more often, using more memory and slowing down insertions, but lookups may become faster.
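
The memory side of this tradeoff can be seen by comparing bucket counts for the same data. In the sketch below (an illustration with arbitrary values; exact numbers are implementation-defined), the map with the lower maximum load factor ends up with more buckets, which uses more memory but keeps bucket chains shorter:

```cpp
#include <iostream>
#include <unordered_map>

using namespace std;

int main() {
  unordered_map<int, int> defaultMap;  // max load factor 1.0
  unordered_map<int, int> sparseMap;
  sparseMap.max_load_factor(0.25);     // at most 0.25 elements per bucket on average

  for (int i = 0; i < 1000; ++i) {
    defaultMap[i] = i;
    sparseMap[i] = i;
  }

  // The lower maximum forces more buckets for the same 1000 elements,
  // trading memory for shorter chains and potentially faster lookups.
  cout << "Buckets with factor 1.0: " << defaultMap.bucket_count() << endl;
  cout << "Buckets with factor 0.25: " << sparseMap.bucket_count() << endl;
  return 0;
}
```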