fix: dedupe aliases before final request #104900
Conversation
```diff
  # Request 1 more than limit so we can tell if there is another page
- data = self.data_fn(offset=offset, limit=limit + 1)
+ data = self.data_fn(offset=offset, limit=limit)
  has_more = data[1] >= limit + 1
```
Bug: Pagination never detects next page after limit change
The has_more check is broken. Previously, data_fn was called with limit + 1 to detect whether more results exist, but it is now called with just limit. The has_more condition, however, still checks data[1] >= limit + 1. Since data_fn returns len(request_attrs_list), which is at most limit (due to the slice [offset : offset + limit]), this condition can never be true, so pagination will never show a "next page" link even when more results exist.
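For context, here is a minimal sketch of the over-fetch pattern the original code relied on. The names mirror the snippet above, but the wrapper function and the (items, count) shape of the data_fn return value are assumptions for illustration, not the actual paginator:

```python
# Sketch only: over-fetch by one row so "next page" can be detected
# without a separate count query. The (items, count) return shape of
# data_fn is assumed here.
def fetch_page(data_fn, offset: int, limit: int):
    # Request limit + 1 items; the extra item only signals another page.
    items, count = data_fn(offset=offset, limit=limit + 1)
    has_more = count >= limit + 1
    # Trim back to the requested page size before returning.
    return items[:limit], has_more
```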
❌ 2 Tests Failed:
To view more test analytics, go to the Test Analytics Dashboard
This pull request has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you add the label "A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
```diff
- sanitized_keys.append(key)
+ sanitized_keys_set.add(internal_name)
+ sanitized_keys.append(internal_name)
```
Deduplication tracks internal names instead of public aliases
Medium Severity
The deduplication logic adds internal_name to sanitized_keys_set and checks against it, but internal_alias_attr_keys comes from dictionary keys, which are already unique. The PR's goal is to deduplicate when different internal names map to the same public_alias, yet the set tracks internal_name instead of public_alias. As a result, multiple internal names that map to the same public alias are all included, defeating the purpose of the deduplication.
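A minimal sketch of what this suggests, assuming each internal name can be resolved to its public alias; dedupe_by_public_alias and resolve_public_alias are hypothetical names, not the actual helpers in this module:

```python
# Sketch only: track the *public alias* in the seen-set so that two
# internal names resolving to the same alias are emitted only once.
def dedupe_by_public_alias(internal_names, resolve_public_alias):
    sanitized_keys = []
    seen_aliases = set()
    for internal_name in internal_names:
        public_alias = resolve_public_alias(internal_name)
        if public_alias in seen_aliases:
            # Another internal spelling of this key was already kept.
            continue
        seen_aliases.add(public_alias)
        sanitized_keys.append(internal_name)
    return sanitized_keys
```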
The preflight was returning duplicated tag key names (we store the same attribute key in different formats in EAP). Dedupe them before making the second request.
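As a rough, self-contained illustration of the symptom and the fix (the key strings and mapping below are invented, not the real EAP formats):

```python
# Invented example: two internal spellings of the same attribute key.
internal_keys = ["browser_name_str", "browser_name_legacy"]
public_alias = {k: "browser.name" for k in internal_keys}

# Order-preserving dedupe on the public alias before the second request.
deduped = list(dict.fromkeys(public_alias[k] for k in internal_keys))
assert deduped == ["browser.name"]
```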