MSC4293: Redact on kick/ban #4293
Conversation
yes please!
Implementation requirements:
- Server - Add support for MSC4293 - Redact on Kick/Ban element-hq/synapse#18540
- Sending client - Set MSC4293 flag when autoredacting users mjolnir#612
- Receiving/applying client:
Other non-qualifying (as of writing) implementations:
MSCs proposed for Final Comment Period (FCP) should meet the requirements outlined in the checklist prior to being accepted into the spec. This checklist is a bit long, but aims to reduce the number of follow-on MSCs after a feature lands. SCT members: please check off things you check for, and raise a concern against FCP if the checklist is incomplete. If an item doesn't apply, prefer to check it rather than remove it. Unchecking items is encouraged where applicable. Checklist:
With my SCT hat, I've verified the implementations (though welcome second/third opinions).
@mscbot concern General clarity/understanding of goals and alternatives
Co-authored-by: Andrew Morgan <[email protected]>
I believe this should be ready for re-review from folks, with an aim towards FCP. There have been no major changes to the core proposal, though the fallback behaviour has shifted in favour of "continue trying your best to send redactions alongside the ban, if you want". @mscbot resolve General clarity/understanding of goals and alternatives
> Note that because `m.room.redaction` supports a `reason` field in the same place as `m.room.member`, clients which look up that reason by going `event["unsigned"]["content"]["reason"]` will still get …
Do you mean `event["unsigned"]["redacted_because"]["content"]["reason"]`?
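For clarity, the corrected lookup path suggested above can be sketched like this. The event shape follows the Matrix client-server spec's convention of putting the redaction event under `unsigned.redacted_because`; the example event is hypothetical:

```python
def redaction_reason(event: dict) -> "str | None":
    """Return the reason attached to an event's redaction, if any.

    A redacted event carries the redaction event itself under
    unsigned.redacted_because; the optional reason lives in that
    redaction event's content.
    """
    because = event.get("unsigned", {}).get("redacted_because", {})
    return because.get("content", {}).get("reason")

# A redacted event as a receiving client might see it (illustrative):
redacted = {
    "type": "m.room.message",
    "unsigned": {
        "redacted_because": {
            "type": "m.room.redaction",
            "content": {"reason": "spam"},
        }
    },
}
```

A client reading `event["unsigned"]["content"]["reason"]` directly would miss the `redacted_because` level and find nothing.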
> 8. A spammer may attempt to work around this MSC's effects by joining and leaving the room during their spam. This has relatively high cost (the impact of spam is lesser when they aren't joined to the room, and re-joining will hit a more restrictive rate limit on most servers).
>
> Moderation bots and similar community safety tools are encouraged to add restrictions to the number of join+leave cycles a user may perform in a short window. This will further reduce the effectiveness of such an attack. Issue 7 above also applies here.
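The join-cycle restriction the proposal encourages could look something like the following sliding-window counter. This is a hypothetical helper for a moderation bot; the class name, window, and threshold are illustrative, not from the MSC:

```python
import time
from collections import defaultdict, deque


class JoinCycleLimiter:
    """Track recent joins per user and flag users cycling too fast.

    Hypothetical moderation-bot helper: the MSC only encourages
    limiting join+leave cycles, it does not prescribe values.
    """

    def __init__(self, max_joins: int = 3, window_secs: float = 3600.0):
        self.max_joins = max_joins
        self.window_secs = window_secs
        self._joins = defaultdict(deque)  # user_id -> join timestamps

    def record_join(self, user_id: str, now: "float | None" = None) -> bool:
        """Record a join; return True if the user exceeds the limit."""
        now = time.time() if now is None else now
        q = self._joins[user_id]
        q.append(now)
        # Drop joins that have fallen out of the window.
        while q and now - q[0] > self.window_secs:
            q.popleft()
        return len(q) > self.max_joins
```

A bot would call `record_join` on each membership transition to `join` and escalate (e.g. ban with `redact_events`) when it returns `True`.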
How does one effectively limit the number of joins a given user can perform? If the spammer is on their own homeserver, then there are no CS API rate limits.
What's to stop a user joining a room, waiting 24hrs to clear any join rate-limits, then spamming and sending another join? Relying on banning a user before they finish spamming and sending a state event seems very hopeful, especially when spam is typically reported via bystanders.
Why not just redact all of a user's event history since the creation of the room?
@mscbot concern The proposal doesn't prevent users from spamming and sending an m.room.member event to protect the spam event.
Originally the MSC shied away from doing so for performance concerns, but it feels increasingly valuable to say "anything before the ban", especially to simplify the implementations.
Thoughts from more folks would be helpful for steering the MSC in any particular direction.
My client implementation is at least just "if current state says redact_events, hide all events sent by user". I'm assuming that if a spammer is unbanned, the events have either been redacted explicitly or weren't actually spam and can be shown again.
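A minimal sketch of that client-side approach, assuming the flag is named `redact_events` as in the proposal text (shipping implementations would use the MSC's unstable-prefixed identifier until it lands in the spec):

```python
def should_hide(event: dict, current_member_state: dict) -> bool:
    """Hide an event if the sender's *current* membership state kicks
    or bans them with redact_events set.

    A simple client-side reading of MSC4293: no per-event redactions
    are consulted, only the sender's latest m.room.member state.
    """
    content = current_member_state.get("content", {})
    return (
        content.get("membership") in ("leave", "ban")
        and content.get("redact_events") is True
        and event.get("sender") == current_member_state.get("state_key")
    )
```

If the user is later unbanned, their current membership no longer matches, so previously hidden events become visible again, matching the behaviour described above.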
Couldn't a malicious user also get around the redactions by constantly sending displayname changes or no-op `m.room.member` events? I'd be in favour of changing it to "anything before the ban". It seems to me like the most common case is that if someone gets banned and their messages redacted, we'd prefer to err on the side of redacting too much, rather than redacting too little.
> as best effort (currently, moderation bots in particular work incredibly hard to ensure they get *every* event redacted, but can run into delivery, reliability, and completeness issues with 1:1 redactions).
>
> It's also important to note that this proposal intends to compliment mass redactions and coexist in …
Nitpick:

```suggestion
It's also important to note that this proposal intends to complement mass redactions and coexist in
```
> ## Fallback behaviour
>
> Servers which don't support this feature may be served redacted events over federation when attempting to fill gaps or backfill. This is considered expected behaviour.
>
> Clients which don't support this feature may see events remain unredacted until they clear their local cache. Upon clearing or invalidating their cache, they will either receive redacted events if their server supports the feature, or unredacted events otherwise. This is also considered expected behaviour.
>
> Though this proposal makes it clear that `m.room.redaction` events aren't actually sent, senders of kicks/bans MAY still send actual redactions in addition to `redact_events: true` to ensure that older clients and servers have the best possible chance of redacting the event. These redactions SHOULD be considered "best effort" by the sender as they may encounter delivery issues, especially when using 1:1 redactions instead of mass redactions. "Best effort" might mean using endpoints like [MSC4194](https://github.com/matrix-org/matrix-spec-proposals/pull/4194)'s batch redaction, or using the less reliable `/messages` endpoint to locate target event IDs to redact. Senders SHOULD note that this fallback behaviour will only target events they can see (or be made to see via MSC4194) and might not be the same events that a receiving client or server sees.
>
> Senders are encouraged to evaluate when they can cease sending fallback redactions like those described above to minimize the event traffic involved in a ban. For moderation bots this may mean waiting until sufficiently many *client* implementations exist in their communities.
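To make the sender side concrete, here is a sketch of building the ban content described above, with the flag name `redact_events` taken from the proposal text (a real sender would use the unstable-prefixed field until the MSC is merged, and would send fallback `m.room.redaction` events separately as best effort):

```python
def ban_content(reason: "str | None" = None, redact: bool = True) -> dict:
    """Build m.room.member content for a ban that asks supporting
    servers and clients to redact the target's events (MSC4293).

    The reason, if given, sits alongside the flag in content, which is
    why it shares a location with m.room.redaction's reason field.
    """
    content = {"membership": "ban", "redact_events": redact}
    if reason:
        content["reason"] = reason
    return content
```

The sender would PUT this as a state event with the target's user ID as the `state_key`, then optionally follow up with traditional redactions for the benefit of non-supporting clients.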
How does a moderation bot provide fallback if the server it is using implements this MSC?
The server's support is a lot less important than the client support I think. The general fallback behaviour is for the bot to do what it normally does alongside setting the flag, though individual projects can (and should) make their own decisions.
Disclosure: I am Director of Standards Development at The Matrix.org Foundation C.I.C., Matrix Spec Core Team (SCT) member, employed by Element, and operate the t2bot.io service. This proposal is written and published as a Trust & Safety team member allocated in full to the Foundation.
MSC checklist
FCP tickyboxes