read write functions #10774
base: master

No description provided.

Conversation
g1nt0ki left a comment
.
hmm, don't like the idea of using r2 strings as a db. Can you replace it with a dedicated index (a table in ES) in our coins ES db? Are we storing the token mcaps in the coins-current index now?
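For context, a dedicated ES index could look roughly like the sketch below. This is a hypothetical sketch, not code from the PR: the index name, document shape, and client wiring are all assumptions.

```ts
import { Client } from "@elastic/elasticsearch";

// All names below are assumptions for illustration, not the actual implementation.
const client = new Client({ node: process.env.ES_NODE! }); // env var assumed
const DISTRESSED_INDEX = "distressed-assets"; // index name assumed

// Upsert one distressed asset keyed by address, so repeat writes stay idempotent.
async function markDistressed(address: string, reason: string): Promise<void> {
  await client.index({
    index: DISTRESSED_INDEX,
    id: address.toLowerCase(),
    document: { address: address.toLowerCase(), reason, ts: Math.floor(Date.now() / 1000) },
  });
}

// Membership check the coins endpoints could call.
async function isDistressed(address: string): Promise<boolean> {
  return client.exists({ index: DISTRESSED_INDEX, id: address.toLowerCase() });
}
```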
```ts
export default async function getTokenPrices(timestamp: number) {
  const writes: Write[] = [];
  const assets = await getR2JSONString(r2Key);
```
so, either this gets manually triggered, or we store the same distressed list to DDB every hour?
currently we store every hour, but we could change this by:
- writing new distressed assets to SK=0 so we can fetch metadata
- adding a case to the coins endpoints where latency checks are skipped when isDistressed() returns true (see the sketch below)
- adding cases to deficoins, bridged, and coingecko where writes are removed when isDistressed() returns true
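A rough sketch of what the isDistressed() gate in the coins endpoints could look like; every name here is hypothetical, not taken from the PR, and the loader is only assumed:

```ts
// Assumed loader (ES/DDB-backed); not a real function in the repo.
declare function getDistressedList(): Promise<string[]>;

let distressedSet: Set<string> | undefined;

async function isDistressed(address: string): Promise<boolean> {
  // Load the list once per invocation and keep it in memory.
  if (!distressedSet) {
    const assets = await getDistressedList();
    distressedSet = new Set(assets.map((a) => a.toLowerCase()));
  }
  return distressedSet.has(address.toLowerCase());
}

// In the coins endpoints: skip the latency/staleness check for distressed assets.
async function passesLatencyCheck(address: string, lastUpdateTS: number, maxAgeSec: number): Promise<boolean> {
  if (await isDistressed(address)) return true; // latency check skipped
  return Date.now() / 1000 - lastUpdateTS < maxAgeSec;
}
```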
```ts
function checkMcaps(address: string, mcapData: any, usdAmount: number, promises: Promise<void>[]) {
  const mcap = mcapData[address];
  if (mcap && usdAmount > mcap) {
    promises.push(addToDistressedList(address));
```
ah, is this the only place where we add a token to the potential distressed list? so, the r2 object is more of a placeholder?
We would also add tokens to the list through the UI tool
```ts
});

// ...

appendToStaleCoins(usdTvl, staleCoinsInclusive, staleCoins);
await Promise.all(distressedCoinsPromises);
```
I would design it slightly differently. One solution: we send logs to ES (not the coins cluster, but the one we use for logging, linked to our SDK), one per token & project; then, hourly, we can pull these logs and show a notification. You can check the implementation here:
logging: https://github.com/DefiLlama/defillama-server/blob/master/defi/src/adaptors/handlers/storeAdaptorData/index.ts#L353
usage: https://github.com/DefiLlama/defillama-server/blob/master/defi/src/notifyOutdated.ts#L106
The other option is gathering all the distressed coins and writing only once to r2.
Either way, I would avoid Promise.all when we don't know how many tokens will be added to the list. Another issue with the current approach is that it is ripe for losing tokens/overwriting because of a race condition.
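The "gather, then write once" option could look roughly like this; the wiring and key name below are assumptions, not the PR's code:

```ts
// storeR2JSONString exists in the PR; its declaration here is a stand-in.
declare function storeR2JSONString(key: string, value: string): Promise<void>;
const r2Key = "distressed-assets"; // key name assumed

// Candidates are collected locally during the run instead of firing one write per token,
// so there is no unbounded Promise.all and no read-modify-write race between writers.
const distressedCandidates = new Set<string>();

function checkMcaps(address: string, mcapData: Record<string, number>, usdAmount: number): void {
  const mcap = mcapData[address];
  if (mcap && usdAmount > mcap) distressedCandidates.add(address); // no network call here
}

// Called once at the end of the run: the only writer, one R2 write total.
async function flushDistressed(): Promise<void> {
  if (!distressedCandidates.size) return;
  await storeR2JSONString(r2Key, JSON.stringify([...distressedCandidates]));
}
```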
```ts
});

// ...

return false;
if (isLocalClient) await client?.close();
```
can leave the connection open and close it once when the process exits. noticed this pattern in other places as well; not sure it is a good idea to open & close a connection per request. most of our scripts run under an hour, so I don't see why we need to close it per write.
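A minimal sketch of that lifecycle, assuming an ES client (the helper name and env var are hypothetical): one shared client for the whole script, closed exactly once when the event loop drains.

```ts
import { Client } from "@elastic/elasticsearch";

let client: Client | undefined;

export function getClient(): Client {
  if (!client) client = new Client({ node: process.env.ES_NODE! }); // env var assumed
  return client;
}

process.on("beforeExit", () => {
  // Clear the reference first so a re-entered "beforeExit" doesn't close twice.
  const c = client;
  client = undefined;
  void c?.close();
});
```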
```ts
export async function readDistressedLogs() {
  const esClient = elastic.getClient();
  const hourAgo = Math.floor(Date.now() / 1000) - 3600;
  let { lastCheckTS } = (await cache.readExpiringJsonCache("distressed-assets-last-check")) || { lastCheckTS: 0 };
```
can use this: elastic.getAllLogs() and skip the cache?
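A rough guess at the simplified read path. elastic.getAllLogs() is referenced above, but its real signature is not verified here; the import path and arguments below are assumptions.

```ts
import * as elastic from "../utils/shared/elastic"; // import path assumed

export async function readDistressedLogs() {
  const hourAgo = Math.floor(Date.now() / 1000) - 3600;
  // Assumed shape: pull every matching log document, no expiring-cache bookkeeping needed.
  return elastic.getAllLogs("distressed-assets-logs", { fromTS: hourAgo });
}
```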
```ts
);

// ...

await storeR2JSONString(r2Key, JSON.stringify(data));
await esClient?.close();
```
no need to close this, or to use the expiring cache; can use the ES index as the single source of truth