
Commit 16ce7e8

limit for aggregations (#201)
### TL;DR

Added pagination support to the ClickHouse aggregation queries.

### What changed?

Added a `LIMIT` and `OFFSET` clause to the `GetAggregations` method in the `ClickHouseConnector`. The implementation:

- Adds `LIMIT` and `OFFSET` when both page and limit are specified
- Adds only `LIMIT` when just the limit is specified
- Calculates the correct offset based on page number and limit

### How to test?

1. Make a query to the API that uses the `GetAggregations` method with pagination parameters
2. Verify that the query returns the correct number of results based on the limit
3. Test with different page numbers to ensure proper pagination
4. Verify that queries without pagination parameters still work correctly

### Why make this change?

This change enables proper pagination for aggregation queries, which is essential for handling large result sets efficiently. Without pagination, large query results could cause performance issues or timeout errors. This implementation allows clients to request specific pages of results with a defined size limit.
2 parents 1c10502 + 80b4abe commit 16ce7e8

File tree

1 file changed (+8, −0 lines)


internal/storage/clickhouse.go

Lines changed: 8 additions & 0 deletions
```diff
@@ -504,6 +504,14 @@ func (c *ClickHouseConnector) GetAggregations(table string, qf QueryFilter) (Que
 		query += fmt.Sprintf(" ORDER BY %s %s", qf.SortBy, qf.SortOrder)
 	}
 
+	// Add limit clause
+	if qf.Page > 0 && qf.Limit > 0 {
+		offset := (qf.Page - 1) * qf.Limit
+		query += fmt.Sprintf(" LIMIT %d OFFSET %d", qf.Limit, offset)
+	} else if qf.Limit > 0 {
+		query += fmt.Sprintf(" LIMIT %d", qf.Limit)
+	}
+
 	if err := common.ValidateQuery(query); err != nil {
 		return QueryResult[interface{}]{}, err
 	}
```
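The pagination arithmetic in the diff can be exercised in isolation. Below is a minimal, self-contained sketch of the same clause-building logic; `buildLimitClause` is a hypothetical helper introduced here for illustration and does not exist in the repository:

```go
package main

import "fmt"

// buildLimitClause mirrors the logic added to GetAggregations:
// with both page and limit set it appends LIMIT/OFFSET, with only a
// limit it appends LIMIT alone, and otherwise it leaves the query as-is.
// Pages are 1-based, so page N skips (N-1)*limit rows.
func buildLimitClause(query string, page, limit int) string {
	if page > 0 && limit > 0 {
		offset := (page - 1) * limit
		return query + fmt.Sprintf(" LIMIT %d OFFSET %d", limit, offset)
	} else if limit > 0 {
		return query + fmt.Sprintf(" LIMIT %d", limit)
	}
	return query
}

func main() {
	base := "SELECT chain_id, count() FROM logs GROUP BY chain_id"
	fmt.Println(buildLimitClause(base, 3, 25)) // page 3, 25 rows per page
	fmt.Println(buildLimitClause(base, 0, 10)) // limit only, no paging
	fmt.Println(buildLimitClause(base, 0, 0))  // no pagination parameters
}
```

For example, requesting page 3 with a limit of 25 appends `LIMIT 25 OFFSET 50`, skipping the first two pages of results.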

0 commit comments

Comments
 (0)