HTTP wrapper: Add helper for link header pagination #355
Conversation
Walkthrough: A new pagination utility has been introduced in pkg/uhttp.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant PaginationUtil
    Client->>Server: Send HTTP request
    Server-->>Client: Respond with Link header (includes next page URL)
    Client->>PaginationUtil: Call WithNextLinkPagination with response
    PaginationUtil->>PaginationUtil: Parse Link header
    PaginationUtil->>Client: Store next page token in pagination bag
```
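The parsing step in the diagram can be sketched with a small standalone program. This is an illustrative sketch only, not the SDK's WithNextLinkPagination implementation; the function name `parseLink` and the example URLs are invented here:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLink splits an RFC 8288 Link header into a rel -> URL map,
// mirroring the "Parse Link header" step in the diagram above.
func parseLink(header string) map[string]string {
	links := make(map[string]string)
	for _, part := range strings.Split(header, ",") {
		segs := strings.Split(part, ";")
		if len(segs) < 2 {
			continue
		}
		// The URL is wrapped in angle brackets: <https://...>
		u := strings.Trim(strings.TrimSpace(segs[0]), "<>")
		for _, p := range segs[1:] {
			p = strings.TrimSpace(p)
			if strings.HasPrefix(p, "rel=") {
				rel := strings.Trim(strings.TrimPrefix(p, "rel="), `"`)
				links[rel] = u
			}
		}
	}
	return links
}

func main() {
	h := `<https://example.com/items?page=2>; rel="next", <https://example.com/items?page=1>; rel="prev"`
	links := parseLink(h)
	// The client would store the "next" URL as its next page token.
	fmt.Println(links["next"]) // → https://example.com/items?page=2
}
```

Note that a naive comma split like this breaks on URLs that themselves contain commas; the real implementation under review has the same property, which the comments below discuss.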
Actionable comments posted: 1
🧹 Nitpick comments (1)
pkg/uhttp/pagination.go (1)
28-62: Well-implemented Link header parsing; consider enhancing validation.

The function effectively parses Link headers with good handling of basic cases. However, consider these improvements:
- Add URL validation to ensure extracted URLs are well-formed
- Document the behavior for duplicate rel values (current implementation will silently use the last occurrence)
- Consider adding more specific error types for different parsing failures
```diff
 func parseLinkHeader(header string) (map[string]string, error) {
 	if header == "" {
 		// Empty header is fine, it just means there are no more pages.
 		return nil, nil
 	}
 	links := make(map[string]string)
 	headerLinks := strings.Split(header, ",")
 	for _, headerLink := range headerLinks {
 		linkParts := strings.Split(headerLink, ";")
 		if len(linkParts) < 2 {
 			continue
 		}
 		linkUrl := strings.TrimSpace(linkParts[0])
 		linkUrl = strings.Trim(linkUrl, "<>")
+		// Validate URL format
+		if _, err := url.Parse(linkUrl); err != nil {
+			return nil, fmt.Errorf("invalid URL in link header: %w", err)
+		}
 		var relValue string
 		for _, rel := range linkParts[1:] {
 			rel = strings.TrimSpace(rel)
 			relParts := strings.Split(rel, "=")
 			if len(relParts) < 2 {
 				continue
 			}
 			if relParts[0] == "rel" {
 				relValue = strings.Trim(relParts[1], "\"")
 				break
 			}
 		}
 		if relValue == "" {
 			continue
 		}
+		// Consider adding a check for duplicates
+		if _, exists := links[relValue]; exists {
+			// Either log, return error, or document that we're overwriting
+		}
 		links[relValue] = linkUrl
 	}
 	return links, nil
 }
```

Don't forget to add the necessary imports:

```go
import (
	"fmt"
	"net/url"
	// existing imports...
)
```
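One caveat about the validation suggested above: net/url's Parse is lenient and rarely returns an error, so the check would only reject egregious input such as a bare leading colon or embedded control characters. A quick standalone demonstration (the example strings are made up):

```go
package main

import (
	"fmt"
	"net/url"
)

// parseable reports whether net/url accepts the string at all.
func parseable(s string) bool {
	_, err := url.Parse(s)
	return err == nil
}

func main() {
	for _, s := range []string{
		"https://api.example.com/items?page=2", // well-formed: accepted
		"not a url",                            // also accepted (parsed as a relative path)
		":missing-scheme",                      // rejected: missing protocol scheme
		"http://example.com/\npath",            // rejected: control character in URL
	} {
		fmt.Printf("%q parseable=%v\n", s, parseable(s))
	}
}
```

So the proposed `url.Parse` guard is cheap insurance, but it should not be relied on to catch most malformed links.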
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pkg/uhttp/pagination.go (1 hunks)
- pkg/uhttp/pagination_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
pkg/uhttp/pagination.go (2)
pkg/pagination/pagination.go (2)
- Bag (31-34)
- PageState (23-27)

pkg/uhttp/wrapper.go (2)
- DoOption (98-98)
- WrapperResponse (37-42)
🔇 Additional comments (1)
pkg/uhttp/pagination.go (1)
64-99: Good implementation with sensible defaults.

The function handles NextLink pagination well with appropriate defaults and proper error handling. Consider these minor improvements:
- Add more context to the error returned from parseLinkHeader
- Consider validating or sanitizing the next link URL before pushing it to the pagination bag
```go
// Parses the link header and returns a map of rel values to URLs.
func TestParseLinkHeader(t *testing.T) {
	//nolint:revive // This is fine
	// Example link header value: <https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"

	//nolint:revive // This is fine
	header := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"`

	links, err := parseLinkHeader(header)
	require.Nil(t, err)
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=2", links["prev"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=4", links["next"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=515", links["last"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=1", links["first"])
}
```
🛠️ Refactor suggestion
Enhance test coverage with edge cases.
The test effectively verifies the happy path, but would benefit from additional test cases:
- Empty header handling
- Malformed header formats
- Headers with missing rel attributes
- Headers with duplicate rel attributes
- Headers with invalid URL formats
Also, consider using require.NoError(t, err) instead of require.Nil(t, err) for more idiomatic Go error assertion.
```diff
 func TestParseLinkHeader(t *testing.T) {
 	//nolint:revive // This is fine
 	// Example link header value: <https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"
 	//nolint:revive // This is fine
 	header := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"`

 	links, err := parseLinkHeader(header)
-	require.Nil(t, err)
+	require.NoError(t, err)
 	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=2", links["prev"])
 	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=4", links["next"])
 	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=515", links["last"])
 	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=1", links["first"])
+
+	// Test empty header
+	links, err = parseLinkHeader("")
+	require.NoError(t, err)
+	require.Nil(t, links)
+
+	// Test malformed header
+	malformedHeader := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="prev" <https://api.github.com/repositories/1300192/issues?page=4>`
+	links, err = parseLinkHeader(malformedHeader)
+	require.NoError(t, err)
+	require.Len(t, links, 1)
+
+	// Test duplicate rel values
+	duplicateRelHeader := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="next", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next"`
+	links, err = parseLinkHeader(duplicateRelHeader)
+	require.NoError(t, err)
+	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=4", links["next"], "Should use the last occurrence for duplicate rel values")
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Parses the link header and returns a map of rel values to URLs.
func TestParseLinkHeader(t *testing.T) {
	//nolint:revive // This is fine
	// Example link header value: <https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"
	//nolint:revive // This is fine
	header := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="prev", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next", <https://api.github.com/repositories/1300192/issues?page=515>; rel="last", <https://api.github.com/repositories/1300192/issues?page=1>; rel="first"`

	links, err := parseLinkHeader(header)
	require.NoError(t, err)
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=2", links["prev"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=4", links["next"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=515", links["last"])
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=1", links["first"])

	// Test empty header
	links, err = parseLinkHeader("")
	require.NoError(t, err)
	require.Nil(t, links)

	// Test malformed header
	malformedHeader := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="prev" <https://api.github.com/repositories/1300192/issues?page=4>`
	links, err = parseLinkHeader(malformedHeader)
	require.NoError(t, err)
	require.Len(t, links, 1)

	// Test duplicate rel values
	duplicateRelHeader := `<https://api.github.com/repositories/1300192/issues?page=2>; rel="next", <https://api.github.com/repositories/1300192/issues?page=4>; rel="next"`
	links, err = parseLinkHeader(duplicateRelHeader)
	require.NoError(t, err)
	require.Equal(t, "https://api.github.com/repositories/1300192/issues?page=4", links["next"], "Should use the last occurrence for duplicate rel values")
}
```
```go
/*
 * Uhttp pagination handling.
 * There are three common types of pagination:
```
I'm not sure how common it is (maybe it's not very), but for the Rootly API, which follows the JSON:API spec, links are included as part of the body, separate from data: https://github.com/ConductorOne/baton-rootly/blob/main/pkg/connector/client/models.go#L36

Thought I'd mention for awareness.
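For reference, body-level pagination links in that style can be decoded with a plain struct. This is a hedged sketch following the JSON:API shape only (top-level "links" beside "data"); the type name, helper, and payload are invented here and are unrelated to baton-rootly's actual models:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonAPIPage models a JSON:API-style response where pagination
// links live in the body, beside "data".
type jsonAPIPage struct {
	Data  json.RawMessage `json:"data"`
	Links struct {
		Next string `json:"next"`
	} `json:"links"`
}

// nextLink decodes the body and returns the "next" link, or "" when absent.
func nextLink(body []byte) (string, error) {
	var page jsonAPIPage
	if err := json.Unmarshal(body, &page); err != nil {
		return "", err
	}
	return page.Links.Next, nil
}

func main() {
	body := []byte(`{
		"data": [{"id": "1", "type": "incidents"}],
		"links": {"next": "https://api.example.com/v1/incidents?page[number]=2"}
	}`)
	next, err := nextLink(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(next) // → https://api.example.com/v1/incidents?page[number]=2
}
```

A helper for this style would read the body rather than the Link header, so it would sit alongside WithNextLinkPagination rather than reuse it.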
gontzess left a comment:

LGTM! Happy to take another look after you add the other easy cases, if you'd like.
Inspired by baton-http.
This was the trickiest of the pagination options. I'll update this PR with the simpler ones.