@@ -16,7 +16,7 @@ namespace DB
 ///
 /// Note: The cache may store more than the minimal number of matching marks.
 /// For example, assume a very selective predicate that matches just a single row in a single mark.
-/// One would expect that the cache records just the single mark as potentially matching:
+/// One would expect that the cache records just a single mark as potentially matching:
 /// 000000010000000000000000000
 /// But it is equally correct for the cache to store this: (it is just less efficient for pruning)
 /// 000001111111110000000000000
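
The invariant spelled out in this hunk is that the cached bitmask only needs to be a superset of the truly matching marks: a scan may skip a mark only when its bit is clear, so extra set bits cost pruning efficiency but never correctness. A minimal sketch of that rule, using the hypothetical names MarkBitmask and canSkipMark rather than the actual ClickHouse types:

```cpp
// Illustrative sketch, not the real implementation: one bit per mark,
// true = "potentially matching" (must be scanned), false = "provably non-matching".
#include <cstddef>
#include <vector>

using MarkBitmask = std::vector<bool>;

// A mark may be skipped only when the cache proves it cannot match,
// so a bitmask with extra true bits is still correct, it just prunes less.
bool canSkipMark(const MarkBitmask & matching_marks, size_t mark_index)
{
    return !matching_marks[mark_index];
}
```
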
@@ -51,14 +51,13 @@ class QueryConditionCache
 
     /// (*) You might wonder why Entry has its own mutex considering that CacheBase locks internally already.
     /// The reason is that ClickHouse scans ranges within the same part in parallel. The first scan creates
-    /// and inserts a new Key + Entry into the cache, the 2nd ... Nth scan find the existing Key and update
+    /// and inserts a new Key + Entry into the cache, the 2nd ... Nth scans find the existing Key and update
     /// its Entry for the new ranges. This can only be done safely in a synchronized fashion.
 
     /// (**) About error handling: There could be an exception after the i-th scan and cache entries could
     /// (theoretically) be left in a corrupt state. If we are not careful, future scans could then
     /// skip too many ranges. To prevent this, it is important to initialize all marks of each entry as
     /// potentially matching. In case of an exception, future scans will then not skip them.
-
     };
 
     struct KeyHasher
@@ -71,6 +70,7 @@ class QueryConditionCache
         size_t operator()(const Entry & entry) const;
     };
 
+
 public:
     using Cache = CacheBase<Key, Entry, KeyHasher, QueryConditionCacheEntryWeight>;
 
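
Taken together, the (*) and (**) comments describe a per-entry locking and conservative-initialization pattern. The sketch below illustrates it under assumed names (EntrySketch, markRangeAsNonMatching); it is not the actual Entry/CacheBase code:

```cpp
// Illustrative sketch of the pattern described in (*) and (**); names are hypothetical.
#include <cstddef>
#include <mutex>
#include <vector>

struct EntrySketch
{
    std::vector<bool> matching_marks; // true = potentially matching, i.e. must be scanned
    std::mutex mutex;                 // guards concurrent updates from parallel range scans (*)

    explicit EntrySketch(size_t mark_count)
        : matching_marks(mark_count, true) // (**) start conservative: nothing may be skipped yet
    {
    }

    // A scan that has proven marks [begin, end) cannot match clears their bits.
    // If an exception aborts a scan before this runs, the bits stay true and
    // later queries simply re-scan those marks instead of wrongly skipping them.
    void markRangeAsNonMatching(size_t begin, size_t end)
    {
        std::lock_guard lock(mutex);
        for (size_t i = begin; i < end; ++i)
            matching_marks[i] = false;
    }
};
```

Because parallel scans of the same part all update the one shared entry for their own ranges, the lock has to live in the entry itself rather than relying on the cache-wide locking that CacheBase already does.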