Proposal for getEvents v2 endpoint #1872
12 comments · 47 replies
How do you express a topic filter that is "all TRANSFER events to 0xme"?
I had AI do a deep dive into all of the GitHub issues and even our own Discord to find all of the complaints and issues that we've heard over the last few years and matched it against this proposal. The results are here in this Gist: https://gist.github.com/kalepail/f0b14c806f01b69817d128504d0ec523 The TLDR is that it seems like what's being proposed is a good solution. And personally, as I reviewed it, I'm pretty excited about it. Great work.
Quick clarification: for order=desc, is the ordering applied to the full event position tuple (ledger, tx_index, op_index, event_index), i.e. fully reversed, or is the intent to reverse only ledger order while keeping tx/event order ascending within each ledger? Asking because this may impact cursor and index implementation.
Are there two ways to do cursor-like pagination? I see the boundary type supports an event ID, and separately there's a cursor. What are the planned use cases for the former?
It looks like the proposal introduces a new lever to control whether queries should be inclusive or exclusive of inputs. I don't think I've ever seen this in an API before; it seems novel. What problem is it solving? It's not clear to me which problem at the start of the proposal it aligns with. When the min/max is a ledger, I think inclusive makes sense as a default, and if someone wants exclusive it is very easy to plus or minus one. For the event ID, I'm not clear on what the use case is, and started a separate thread about that: #1872 (comment). If the goal is pagination, those inputs should probably always be exclusive. Unless the goal is to get a single event, in which case it might be clearer to offer a dedicated single-event lookup.
The events system needs to be more robust. I agree with several points raised so far, but the most critical issue is the 7-day retention window. It needs to be removed. Being able to reliably fetch historical events is mandatory for any real application, and a 7-day limit is a major blocker for app developers. Apps are not temporary experiments. They are meant to live for a long time (forever?), and require durable event access for indexing, recovery, analytics, and user state.

In our case, we are building a privacy pool. By design, we do not want to rely on a backend, because users should not have to trust any backend service. The RPC must be the source of truth. Short-lived events or reliance on third-party infrastructure directly breaks this model.

I understand and support the goal of decentralization and not forcing the Stellar Foundation to operate heavy indexers indefinitely. However, the ecosystem is (unfortunately) still small! I asked on Discord and was pointed to the Providers page. Needing to deploy a backend just to fetch historical events significantly slows development and increases complexity. For early-stage builders, this friction is often enough to push them to another chain.
IIRC topics are
I think it would be valuable to elaborate on the feature subset we plan to support for full history: this v2 variant seems to have both more features than the original and fewer, depending on the area. If we could compare and contrast it with what's feasible for full history, it may clue us in better as to what can be pared down from this endpoint.
This is a timely proposal from my perspective and I look forward to implementing it for use in my sparse history Pakana Node.
Couple of random thoughts related to the above:
Starting a dedicated thread for nesting, following up on @leighmcculloch's comment here. Arrays of arrays tend to be confusing and there's a lot of potential nesting in the current structure. The following proposal replaces the current structure.

Definitions

```ts
type Filter = {
  // AND between txHash, contract id, and all topic values
  txHash?: string;
  contractId?: string;
  type?: "contract" | "system";
  topics?: ScVal[]; // no further nesting
};

type FiltersQuery = {
  filters: Filter[]; // UNION between filters
};
```

Example: XLM transfers to and from a specific address

```ts
{
  filters: [
    {
      contractId: "CAS3J7GYLGXMF6TDJBBYYSE3HQ6BBSMLNUQ34T6TZMYMW2EVH34XOWMA",
      type: "contract",
      topics: [{ symbol: "transfer" }, { address: "GABC..." }]
    },
    {
      contractId: "CAS3J7GYLGXMF6TDJBBYYSE3HQ6BBSMLNUQ34T6TZMYMW2EVH34XOWMA",
      type: "contract",
      topics: [{ symbol: "transfer" }, "*", { address: "GABC..." }]
    }
  ]
}
```

Example with topicN instead of topics array

I find the topic array and the wildcards somewhat confusing, so I prefer a topicN approach as follows. Though not religious about it.

```ts
{
  filters: [
    {
      contractId: "CAS3J7GYLGXMF6TDJBBYYSE3HQ6BBSMLNUQ34T6TZMYMW2EVH34XOWMA",
      type: "contract",
      topic0: { symbol: "transfer" },
      topic1: { address: "GABC..." }
    },
    {
      contractId: "CAS3J7GYLGXMF6TDJBBYYSE3HQ6BBSMLNUQ34T6TZMYMW2EVH34XOWMA",
      type: "contract",
      topic0: { symbol: "transfer" },
      topic2: { address: "GABC..." }
    }
  ]
}
```
Overview
This proposal specifies an improved `getEvents` Stellar RPC API for querying events on the Stellar blockchain.

Design Goals
The API addresses specific problems reported in GitHub issues stellar/stellar-rpc#426 and stellar/stellar-rpc#575:

- `endLedger` ignored during pagination
- `oldestLedger` for client recovery
- `hasMore: true` + `scannedLedger` shows scan progress
- `txHash` query mode
- `order: "desc"` iterates newest-first

DoS Prevention
The API includes constraints which can be configured by an RPC provider to prevent any single query from overwhelming the backend:

- `contractIds`: capped per query (at most 3; see Validation Errors)
- `limit`: capped per page (at most 1000; see Validation Errors)
- `scannedLedger`: surfaces how far a capped scan has progressed

How it works: When a query matches sparse data, the backend scans up to its configured limit, returns whatever matches were found (possibly zero), sets `hasMore: true`, and provides `scannedLedger` so clients can track progress. This ensures predictable response times regardless of data distribution. The scan limit is configurable per RPC implementation.

Client impact: Queries exceeding limits return `invalid_params` errors. Large filter sets must be split into parallel queries and merged client-side.

Request
The API accepts three mutually exclusive request modes:

- Range mode: neither `cursor` nor `txHash` present
- Transaction mode: `txHash` present
- Cursor mode: `cursor` present

Type Definitions
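As a sketch, the request and response shapes implied by the field and response definitions below could be written as follows (the type names and the `unknown` event payload are illustrative, not normative):

```ts
// Illustrative TypeScript shapes for the getEvents v2 request/response,
// assembled from the Field Definitions and Response sections of this proposal.
type ScValJson = string | object; // base64-encoded XDR string, or JSON form

// One entry per topic position; null is a wildcard, an array is an OR.
type TopicFilter = (ScValJson | ScValJson[] | null)[];

interface GetEventsRequest {
  min?: number;              // inclusive lower ledger bound (required for asc)
  max?: number;              // inclusive upper ledger bound
  order?: "asc" | "desc";    // default "asc" (oldest first)
  contractIds?: string[];    // AND-ed with topics
  topics?: TopicFilter;      // positional topic filter
  limit?: number;            // page size
  txHash?: string;           // transaction mode
  cursor?: string;           // cursor mode
}

interface GetEventsResponse {
  events: unknown[];         // matched events
  cursor: string;            // opaque continuation token, always present
  hasMore: boolean;          // more events available now?
  scannedLedger: number;     // scan progress
  availableLedgers: { oldest: number; latest: number };
}
```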
Field Definitions
| Field | Type | Notes |
| --- | --- | --- |
| `min` | `number` | inclusive lower ledger bound |
| `max` | `number` | inclusive upper ledger bound |
| `order` | `"asc"` \| `"desc"` | defaults to `"asc"` (oldest first) |
| `contractIds` | `string[]` | contract filter |
| `topics` | `TopicFilter` | positional topic filter |
| `limit` | `number` | page size |
| `txHash` | `string` | transaction mode |
| `cursor` | `string` | cursor mode |

When both `contractIds` and `topics` are specified, events must match both filters (AND).

Empty arrays (`contractIds: []` or `topics: []`) are invalid. Omit the field entirely to match all contracts or all topics.

Ledger bounds are inclusive: `min: 1000, max: 5000` includes events from both ledgers 1000 and 5000.

Topic Filter
The `TopicFilter` uses positional semantics, matching the Ethereum `eth_getLogs` convention:

- Position 0 filters `topic[0]`, position 1 filters `topic[1]`, etc.
- `null` matches any value at that position
Matching rules:
null)Examples:
Typical SEP-41 transfer event:
Query patterns:
[{ symbol: "transfer" }][{ symbol: "transfer" }, { address: "G..." }][{ symbol: "transfer" }, null, { address: "G..." }]RangeQuery Rules
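A sketch of the typical SEP-41 transfer event referenced above (the topic layout follows the SEP-41 token interface; the addresses are placeholders):

```ts
// Topics of a SEP-41 `transfer` event, in positional order.
const transferTopics = [
  { symbol: "transfer" },      // topic[0]: event name
  { address: "GSENDER..." },   // topic[1]: from
  { address: "GRECEIVER..." }, // topic[2]: to
];
// The transferred amount travels in the event data, not in the topics.
```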
Range Query Rules

| Order | Starts at | Ends at |
| --- | --- | --- |
| `asc` (default) | `min` (required) | `max` (defaults to latest ledger) |
| `desc` | `max` (defaults to latest ledger) | `min` (defaults to oldest ledger) |

Rationale:

- Requiring `min` for ascending ensures portable queries across RPCs with different retention policies (archive vs pruned)

`hasMore` semantics by direction: `hasMore: false` means

- `asc`: reached `max`; the query is complete
- `desc`: reached `min`; the query is complete

Cursor
The cursor is an opaque string encoding the complete query state: position, bounds, order, filters, and mode. This enables simple continuation:
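A minimal continuation sketch, assuming a `getEvents(params)` helper that wraps the RPC call (the helper is illustrative, not part of the spec):

```ts
// First page: full query parameters.
let page = await getEvents({ min: 1000, max: 5000, limit: 100 });

// Continuation: the cursor alone carries position, bounds, order, filters, and mode.
page = await getEvents({ cursor: page.cursor });
```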
Only `limit` may be changed on continuation. All other parameters are encoded in the cursor.

Response
- `events`: the matching events
- `cursor`: opaque continuation token (always present)
- `hasMore`: `true` if more events available now, `false` if query complete or caught up
- `scannedLedger`: how far the scan has progressed
- `availableLedgers.oldest`: oldest ledger retained by this RPC
- `availableLedgers.latest`: latest ledger known to this RPC

Pagination States

`hasMore` distinguishes the continuing state from the terminal ones: it is `true` while more events (or unscanned ledgers) remain, and `false` once the query has completed or caught up.

Sparse Data Handling
When querying for rare events, the API may return empty pages due to internal scan limits:
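An illustrative response shape for such a query; the numbers are chosen to match the 20% figure below (a 2M-ledger range of which 400k ledgers have been scanned):

```ts
// Hypothetical query: { min: 1_000_000, max: 3_000_000, topics: [...] }
const response = {
  events: [],               // no matches found yet
  hasMore: true,            // the scan is not finished; continue with the cursor
  scannedLedger: 1_400_000, // 400k of the 2M-ledger range scanned so far
  // cursor and availableLedgers omitted for brevity
};
```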
This indicates: no matches yet, but 20% through the scan (400k of 2M ledgers). Continue with the cursor until `hasMore: false`.

Progress calculation:

- Ascending: `(scannedLedger - min) / (max - min)`
- Descending: `(max - scannedLedger) / (max - min)`

Errors
| Error | Details include |
| --- | --- |
| `invalid_params` | see validation list below |
| `ledger_pruned` | `ledger`, `oldestLedger` |
| `ledger_future` | `ledger`, `latestLedger` |
| `cursor_malformed` | |
| `cursor_expired` | `oldestLedger` |
| `cursor_future` | `cursorLedger`, `latestLedger` |
| `transaction_not_found` | `txHash` |

Validation Errors (`invalid_params`)

- `cursor` combined with other query params
- `txHash` combined with range params
- `min > max`
- `order: "asc"` without `min`
- `limit < 1`
- `limit > 1000`
- `contractIds.length === 0`
- `contractIds.length > 3`
- `topics.length === 0`
- `topics.length > 4`
- `null` inside a topic position array

Supported Use Cases
- Historical range: `{ min: 1000, max: 5000 }`
- Latest N events: `{ order: "desc", limit: 50 }`
- Live tracking: `{ min: 1000 }` + poll with cursor
- Resume from a saved position: `{ cursor: savedCursor }`
- Events of one transaction: `{ txHash: "abc" }`
- Transfers from an address: `{ min: 1000, topics: ["transfer", "0xMe"] }`
- Transfers to an address: `{ min: 1000, topics: ["transfer", null, "0xMe"] }`
- Multiple event types at one position: `{ min: 1000, topics: [["transfer", "approve"]] }`
- Multiple contracts: `{ min: 1000, contractIds: ["pool1", "pool2"] }`

Use Case Gaps
The following patterns require client-side handling:

- OR across topic positions (e.g. wallet activity: sent + received transfers): issue one query per pattern and merge the results client-side
- Filter sets exceeding the `contractIds` or `topics` limits: split into parallel queries and merge client-side
Design Rationale
Why `min`/`max` instead of `start`/`end`?

`start`/`end` implies iteration direction. With `order: "desc"`, "start at 1000" is confusing when iteration begins at 5000. `min`/`max` defines bounds independent of direction.

Why opaque event IDs?
Decouples clients from internal position representation, enabling backend flexibility.
Why does the cursor encode filters?

Continuation requires only `{ cursor }`. No parameter spreading, no mismatch risk.

Why `hasMore` instead of a nullable cursor?
Why does ascending require
minbut descending doesn't?Descending from latest gives consistent results regardless of RPC retention policy — you always get recent events. Ascending without
minwould start from the oldest available ledger, which varies by RPC (genesis for archive nodes, 7 days ago for pruned nodes). Requiringminensures portable queries across different RPC configurations.Appendix: Code Examples
Latest N events:
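A sketch, assuming a `getEvents(params)` helper that performs the JSON-RPC call (the helper and its return shape are illustrative):

```ts
// Newest 50 events first; descending order requires no `min`.
const page = await getEvents({ order: "desc", limit: 50 });
for (const ev of page.events) console.log(ev);
```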
Historical range with pagination:
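Using the same assumed helper, the cursor alone drives continuation:

```ts
// Walk ledgers 1000..5000 oldest-first, accumulating every page.
let page = await getEvents({ min: 1000, max: 5000, limit: 200 });
const all = [...page.events];
while (page.hasMore) {
  page = await getEvents({ cursor: page.cursor });
  all.push(...page.events);
}
```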
Live tracking:
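A polling-loop sketch: `hasMore: false` means the client has caught up, so it waits before re-polling with the cursor (`sleep` is a local helper):

```ts
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

let page = await getEvents({ min: 1000 }); // start of the tracked range
for (;;) {
  for (const ev of page.events) console.log(ev);   // process new events
  if (!page.hasMore) await sleep(5_000);           // caught up; wait for new ledgers
  page = await getEvents({ cursor: page.cursor }); // cursor is always present
}
```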
Resume from database:
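A sketch assuming a hypothetical persistence layer (`loadCursor`, `saveCursor`, and `storeEvent` are placeholders for whatever storage the application uses):

```ts
const saved = await loadCursor(); // hypothetical: last cursor persisted to the DB
let page = saved
  ? await getEvents({ cursor: saved }) // resume exactly where the last run stopped
  : await getEvents({ min: 1000 });    // first run: start of the tracked range
for (const ev of page.events) await storeEvent(ev); // hypothetical persistence
await saveCursor(page.cursor); // persist progress for the next run
```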
Tracking multiple event types:
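Per the supported use cases, an array at a single topic position acts as an OR, so one filter can cover several event names:

```ts
// Match either `transfer` or `approve` at topic position 0.
const page = await getEvents({
  min: 1000,
  topics: [[{ symbol: "transfer" }, { symbol: "approve" }]],
});
```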
Wallet activity (sent + received):
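The positional filter cannot express an OR across positions, so sent and received are two queries merged client-side, as the client-impact note above suggests (the address and merge step are illustrative):

```ts
const ME = { address: "GABC..." }; // the wallet being tracked
const [sent, received] = await Promise.all([
  getEvents({ min: 1000, topics: [{ symbol: "transfer" }, ME] }),       // from ME
  getEvents({ min: 1000, topics: [{ symbol: "transfer" }, null, ME] }), // to ME
]);
// Merge the two result sets client-side (de-duplication/ordering left out).
const activity = [...sent.events, ...received.events];
```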