Merged
* Add SQLite persistence as an alternative to in-memory storage
- Rename InMemorySink to just EventSink
- Create EventStorage abstraction with NoStorage, InMemoryStorage and SqliteStorage implementations
- SqliteStorage stores events as Analytics SDK JSON (both valid and incomplete events), ignoring bad rows
- Add --storage CLI flag for SQLite database file path
- Change --max-events to only work with SQLite
- SQLite storage is used when either --storage or --max-events (>0) are specified
- Split routing: InMemoryRoutes supports /micro/{all,good,bad,events}, SqliteRoutes only /micro/events
- Add SqliteStorageSpec tests covering all functionality
Co-Authored-By: Claude <noreply@anthropic.com>
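The storage abstraction above could be sketched as follows. This is a hypothetical illustration of the shape described in the changelog; the method names and signatures are assumptions, not the project's actual API.

```scala
// Illustrative sketch of the EventStorage abstraction with its three
// implementations (NoStorage, InMemoryStorage; SqliteStorage omitted).
// All names and signatures here are assumptions.
trait EventStorage {
  def addEvent(json: String): Unit
  def events(limit: Int): List[String]
}

// NoStorage discards every event.
object NoStorage extends EventStorage {
  def addEvent(json: String): Unit = ()
  def events(limit: Int): List[String] = Nil
}

// InMemoryStorage keeps events in an in-process buffer.
final class InMemoryStorage extends EventStorage {
  private val buf = scala.collection.mutable.ListBuffer.empty[String]
  def addEvent(json: String): Unit = buf += json
  def events(limit: Int): List[String] = buf.take(limit).toList
}
```

With this shape, routing code can depend only on `EventStorage` and stay agnostic of where events actually live.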
* Added new endpoints: /micro/{columns,columnStats,timeline} that handle general information about the data (all used in the UI)
* Added a new POST /micro/events endpoint that allows filtering, sorting and pagination
* Everything implemented for both InMemory and SQLite storages, with the same test suite running for both
* Updated the frontend code to remove all the logic that’s now handled on the backend
Co-Authored-By: Claude <noreply@anthropic.com>
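The filtering, sorting, and pagination that moved from the frontend to the new POST /micro/events endpoint can be sketched as pure logic. The request fields below are assumptions for illustration; the real request body may differ.

```scala
// Hypothetical request shape for POST /micro/events; field names are assumed.
final case class EventsRequest(
  filterType: Option[String], // e.g. Some("good") to keep only good events
  sortDescending: Boolean,    // sort by timestamp
  page: Int,
  pageSize: Int
)

// Apply filter, then sort, then paginate — the order the backend now
// performs instead of the UI. Events are (type, timestamp) pairs here.
def queryEvents(
    events: List[(String, Long)],
    req: EventsRequest
): List[(String, Long)] = {
  val filtered = req.filterType.fold(events)(t => events.filter(_._1 == t))
  val sorted =
    if (req.sortDescending) filtered.sortBy(-_._2) else filtered.sortBy(_._2)
  sorted.slice(req.page * req.pageSize, (req.page + 1) * req.pageSize)
}
```

Doing this server-side means the UI only ever receives one page of already-ordered rows.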
- Add cloud SDKs for AWS, Azure and GCP
- Create BlobUtils with URI-to-client mapping for different URL schemes
- Support s3://, gs://, abfss://, and https://account.blob.core.windows.net/ URLs
- Add Azure SAS token authentication via MICRO_AZURE_BLOB_ACCOUNT and MICRO_AZURE_BLOB_SAS_TOKEN
- Integrate with Run.downloadAssets to efficiently group and download enrichment files
- Replace simple URL download with cloud storage client implementations

🤖 Generated with [Claude Code](https://claude.ai/code)
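The URI-to-client mapping in BlobUtils could look roughly like this. The `Provider` names and the exact matching rules are assumptions derived from the schemes listed above, not the actual BlobUtils code.

```scala
// Sketch of mapping a blob URI to a cloud provider, based on the schemes
// the commit message lists. Matching rules here are illustrative assumptions.
sealed trait Provider
case object S3 extends Provider
case object Gcs extends Provider
case object AzureBlob extends Provider

def providerFor(uri: java.net.URI): Option[Provider] =
  (Option(uri.getScheme), Option(uri.getHost)) match {
    case (Some("s3"), _)    => Some(S3)        // s3://bucket/key
    case (Some("gs"), _)    => Some(Gcs)       // gs://bucket/key
    case (Some("abfss"), _) => Some(AzureBlob) // abfss://container@account...
    case (Some("https"), Some(host)) if host.endsWith(".blob.core.windows.net") =>
      Some(AzureBlob)                          // https://account.blob.core.windows.net/...
    case _ => None
  }
```

Grouping asset URIs by provider before downloading lets one client instance serve many files, which is presumably what "efficiently group and download" refers to.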
- Add AuthConfig with domain, apiDomain, audience, organizationId, clientId
- Implement auth middleware that validates tokens via Console authorization API
- Add http4s-client dependency for external API calls
- Create frontend auth service with Auth0 SPA SDK integration
- Add automatic login redirect
- Authorization calls: /api/msc/internal/authz/query/v1/{organizationId}/minis/list
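The token-validation flow of the auth middleware can be sketched as two pure steps: extract the bearer token, then ask the authorization service about it. Here `authorize` stands in for the HTTP call to the Console authz endpoint; every name in this sketch is hypothetical.

```scala
// Extract a bearer token from an Authorization header value, if present.
def bearerToken(authHeader: Option[String]): Option[String] =
  authHeader.collect {
    case h if h.startsWith("Bearer ") => h.stripPrefix("Bearer ")
  }

// Validate the request: no token -> 401, rejected token -> 403.
// `authorize` abstracts the call to the Console authorization API.
def authenticate(
    authHeader: Option[String],
    authorize: String => Boolean
): Either[String, Unit] =
  bearerToken(authHeader) match {
    case None                     => Left("401 Unauthorized: missing bearer token")
    case Some(t) if !authorize(t) => Left("403 Forbidden")
    case Some(_)                  => Right(())
  }
```

In the real middleware, `authorize` would be an effectful http4s-client call rather than a plain function, but the branching logic is the same.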
* Improve performance of SQLite memory store
  - Remove LIKE %% clauses to ensure indexes are used
  - Remove direct use of boolean in queries to ensure indexes are used
  - Introduced approximate counts + pages for non-indexed columns
  - Increased parallel queries with wider thread-pool
  - ColumnStats limits scan depth to last 1000 rows to speed up queries for non-indexed JSON columns
* Ignore stream-collector to make following instructions easier
* Update columnStats to use parTraverse; requires using a production database in the tests to properly test the multiple connections we now leverage
* Add conditional gzip compression: 10x reduction in bytes returned to the UI
* Update ui/src/components/DataTable.tsx (Co-authored-by: Nick <nick.stanch@snowplowanalytics.com>)
* Update ui/src/components/DataTable.tsx (Co-authored-by: Nick <nick.stanch@snowplowanalytics.com>)
* Use the headers
* Update Routing.scala
* Update SqliteStorage.scala
* isApproximate -> approximateCount
* Add time based expiry to sqlite backed storage
* Match threadpool to connection pool
* Feedback
* Only allow order and sort on indexed columns
* Remove isApproximate
* Update sorting and filtering in the UI
* Fix tests
* Segment read and write pools for SQLite
* Remove --max-events in favour of --no-storage
* Cannot do case insensitive matching to maintain query speed
* Use only composite indexes
* Increase debounce to make it less aggressive
* Actually commit batches, not individual write per event
* Reverse the order
* Move cleanup scheduling code to SQLiteStorage
* Move filterable/sortable logic fully to the backend
* Remove unnecessary string interpolation
* Revert extra stats field in columnStats response
* Fully determine sortable/filterable columns on the backend
* Clean up configuration options
* Fix background task scheduling
* Amendments to PR #180 (#182)

---------

Co-authored-by: Nick <nick.stanch@snowplowanalytics.com>
Co-authored-by: Ian Streeter <ian@snowplowanalytics.com>
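The "approximate counts" idea above can be illustrated with a small sketch: instead of paying for a full COUNT(*) over a non-indexed column, scan at most a capped number of rows and report whether the result was truncated. This is an assumed reading of the commit message, not the actual SqliteStorage implementation.

```scala
// Count rows up to `cap`; if more exist, return the cap and flag the
// result as approximate. Illustrative only — the real store does this
// in SQL, not over an in-memory iterator.
def approximateCount(rows: Iterator[Any], cap: Int): (Int, Boolean) = {
  val n = rows.take(cap + 1).size // read at most cap + 1 rows
  if (n > cap) (cap, true) else (n, false)
}
```

A UI can then render "1000+" instead of an exact total, keeping unfiltered page loads fast.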
* Add 7-day timeline view alongside existing 30-minute timeline
  - Implement side-by-side timeline charts showing 7 days and 30 minutes
  - Replace minute-based system with flexible bucket-based approach
  - Update backend to handle TimelineRequest with custom time ranges
  - Use efficient SQL with VALUES clause for bulk bucket queries
  - Support click-to-filter functionality across both timelines
* Split into subqueries to use a covering index

---------

Co-authored-by: Claude <noreply@anthropic.com>
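The bucket-based approach can be sketched as: split a time range into N equal buckets, then emit a VALUES clause so one query covers all buckets. The exact SQL shape and names here are assumptions, not the project's actual query.

```scala
// One timeline bucket: [start, end) in epoch millis.
final case class Bucket(start: Long, end: Long)

// Split [from, to) into n equal buckets (assumes n evenly divides the range
// for simplicity; the real code would round).
def buckets(from: Long, to: Long, n: Int): List[Bucket] = {
  val step = (to - from) / n
  (0 until n).toList.map(i => Bucket(from + i * step, from + (i + 1) * step))
}

// Render the buckets as a SQL VALUES clause for a single bulk query,
// e.g. to join against the events table and count per bucket.
def valuesClause(bs: List[Bucket]): String =
  bs.map(b => s"(${b.start}, ${b.end})").mkString("VALUES ", ", ", "")
```

Because both the 7-day and 30-minute charts are just different `(from, to, n)` choices, one backend code path serves both.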
Remove SQLite storage implementation and replace with PostgreSQL-only solution:
- Add PostgreSQL storage with JSONB support via doobie-postgres-circe
- Remove all SQLite dependencies and code
- Add storage configuration file approach with --storage CLI parameter
- Update CI/CD with PostgreSQL service container

Co-authored-by: Claude <noreply@anthropic.com>
Enes Aldemir (spenes) approved these changes on Jan 30, 2026.

The author commented:
#180 refers to the SQLite store that no longer exists. Some of those improvements remain, but many are out (like separate transactors for reads and writes). #184 is a GitHub Actions fix... not sure if relevant for the changelog? #185 fixes a problem that was introduced in #178. So it’s basically a no-op compared to the previous release :)