Make GUILD_MEMBER_CHUNK asynchronous #5088
VelvetToroyashi asked this question in API Feature Requests & Ideas
As usual, if this has been duped, my search-fu has failed me again.
What
As it stands right now, when requesting guild members, you submit a command to the gateway and receive the members via GUILD_MEMBER_CHUNK events in chunks of 1,000. This is great: you know all the members on that server, and you can go about your day. The issue is that this doesn't scale well; or rather, it doesn't scale well for bots.
When you request members, chunking isn't handled like normal events. Instead, the gateway essentially drops what it was doing, and prioritizes these chunking events.
Once chunking is complete, gateway events resume as normal. This is fine for servers under roughly 6,000 members, since the chunks can be sent and processed fairly quickly, but beyond that there's noticeable chugging, even if the bot doesn't actually do anything with the events.
Not only that, but almost no other events are sent during this time, and considering that chunking can last for minutes on some larger servers, this quickly becomes an issue.
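To make the current flow concrete, here's a minimal TypeScript sketch of the payloads involved. `sendToGateway` and `requestAllMembers` are hypothetical names, not any library's API; the field shapes follow the documented Request Guild Members (opcode 8) command and the member-chunk dispatch.

```ts
// Sketch of the current flow. `sendToGateway` is a hypothetical stand-in for
// your library's raw gateway send on an already-identified connection.

// Opcode 8: Request Guild Members. An empty query with limit 0 asks for
// every member in the guild.
interface RequestGuildMembers {
  op: 8;
  d: {
    guild_id: string;
    query: string;
    limit: number;
    nonce?: string;
  };
}

// Shape of each GUILD_MEMBER_CHUNK dispatch that comes back: up to 1,000
// members per chunk, plus an index/count pair telling you where you are.
interface GuildMembersChunk {
  guild_id: string;
  members: unknown[]; // guild member objects, omitted for brevity
  chunk_index: number;
  chunk_count: number;
  nonce?: string;
}

declare function sendToGateway(payload: RequestGuildMembers): void;

function requestAllMembers(guildId: string): void {
  sendToGateway({
    op: 8,
    d: { guild_id: guildId, query: "", limit: 0, nonce: guildId },
  });
  // The gateway then streams chunk dispatches until
  // chunk_index === chunk_count - 1; as described above, very little else
  // gets through while that's happening.
}
```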
How
Instead of basically killing the gateway session, chunks can (and should) be sent asynchronously. That is to say, they shouldn't block 99% of other events; instead, chunks should be queued and dispatched like normal events, so bots can handle whatever else they need to while chunking is in progress.
The docs specify there's a ChunkIndex property and a ChunkCount, so keeping track of which guild is being chunked, and when chunking has finished, won't even be a problem.
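As a rough sketch of that bookkeeping (the names and shapes here are illustrative, not any particular library's API), interleaved delivery only requires remembering which indices have arrived for each guild:

```ts
// Remember which chunk indices have arrived per guild, and report completion
// once all of them are in. Nothing here requires chunks to arrive back-to-back.

interface GuildMembersChunk {
  guild_id: string;
  members: unknown[];
  chunk_index: number;
  chunk_count: number;
}

class ChunkTracker {
  private received = new Map<string, Set<number>>();

  /** Record a chunk; returns true once every chunk for that guild has arrived. */
  onChunk(chunk: GuildMembersChunk): boolean {
    const seen = this.received.get(chunk.guild_id) ?? new Set<number>();
    seen.add(chunk.chunk_index);
    this.received.set(chunk.guild_id, seen);

    if (seen.size === chunk.chunk_count) {
      this.received.delete(chunk.guild_id); // this guild is fully chunked
      return true;
    }
    return false;
  }
}
```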
Alternative
REST. All hail the List Guild Members endpoint.
This does what we want, kinda. It's a paginated endpoint, which is...fine. The issue with this, outside of the whole having to make n/1000 API calls, is that, well, you have to make n/1000 API calls.
Under normal circumstances, this is OK; any decent HTTP library will re-use the underlying socket for subsequent requests, so it'll be pretty snappy. The issue is rate limits (and also getting API banned, but hey).
As it stands right now, the limit on this endpoint is 10 requests per 10 seconds, and with a cap of 1,000 members per request...yikes. For a 500,000-member guild, that's 500 requests, or well over eight minutes just to enumerate everyone.
This would be the simpler of the two, even if it poses some scalability issues.
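For completeness, here's roughly what the REST route looks like: paging through List Guild Members with `limit` and `after`, crudely paced to stay under the bucket mentioned above. This is a sketch under assumptions (Node 18+ for global fetch, a hard-coded delay instead of honoring the X-RateLimit-* response headers); a real implementation should read those headers.

```ts
// Rough sketch of the REST alternative: page through List Guild Members,
// sleeping to stay under the 10-requests-per-10-seconds bucket.

const API = "https://discord.com/api/v10";

interface GuildMember {
  user?: { id: string };
  // ...other member fields omitted
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function listAllMembers(guildId: string, botToken: string): Promise<GuildMember[]> {
  const members: GuildMember[] = [];
  let after = "0";

  for (;;) {
    const res = await fetch(
      `${API}/guilds/${guildId}/members?limit=1000&after=${after}`,
      { headers: { Authorization: `Bot ${botToken}` } },
    );
    const page = (await res.json()) as GuildMember[];
    members.push(...page);

    if (page.length < 1000) break;          // last page
    after = page[page.length - 1].user!.id; // paginate by highest user id
    await sleep(1000);                      // crude pacing: ~10 req / 10 s
  }
  return members;
}
```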
Closing
This is undoubtedly a huge infrastructural ask, and something that's 100% been asked about before, but given Discord's newfound willingness to take on large changes to its infrastructure and how things work, the hope is that this is something that can be changed...at some point.
When I initially inquired about this...quirky behavior, the large-bot developers I spoke to didn't have a concrete answer for why it works this way other than "It's Discord:tm:", and I didn't see an engineer speak up when I mentioned it again, so hopefully there isn't some giant "we can't do this because x, y, z" that I missed while trying to learn more about this.