I started digging into this after seeing multiple large Discord servers get wiped with no explanation. What I found is genuinely alarming, and I'm publishing this so server owners can protect themselves before Discord does anything about it — which, as of today, they haven't.
- What's happening
- Timeline
- How the attack works
- Attack vectors
- Known victims
- Who's behind it
- Why Discord hasn't stopped it
- Related vulnerabilities
- How to protect your server
- Disclosure timeline
- Sources
Someone figured out how to make Discord permanently delete any server — instantly, automatically, and without any human review. The server owner's account gets terminated at the same time. No warning. No appeal. Everything gone.
This isn't a mass-reporting attack where you need hundreds of accounts. This is a single request that triggers Discord's own automated safety systems against you. As of April 10, 2026, it is still unpatched and being sold as a paid service on Telegram.
| Status | Unpatched |
|---|---|
| First seen | ~March 28, 2026 |
| Servers hit | 100+ confirmed |
| What you need to attack | An active invite link to the target server |
| Cost to buy the attack | €8,000 for the full method; per-server pricing available |
| Date | What happened |
|---|---|
| ~March 25–28, 2026 | First cases observed — only targeting game cheat servers at this point |
| March 30, 2026 | A Discord-partnered server with 230,000 members gets wiped |
| April 1–4, 2026 | Attacks start hitting completely legitimate servers |
| April 5, 2026 | Public disclosure video published on YouTube |
| April 10, 2026 | Still no patch, no statement from Discord |
The attacker needs one thing: an active invite link to your server. Standard invite (discord.gg/XXXXXXXX) or a vanity URL (discord.gg/yourcommunity) — both work. A server ID alone is not enough.
This is important because almost every public server has an invite link indexed somewhere — Google, Reddit, old forum posts, Telegram groups. Finding one takes seconds.
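If you want to audit where your own invite links may have leaked, a simple pattern match over exported posts, logs, or site dumps gets you most of the way. A minimal sketch; the regex covers the `discord.gg` and `discord.com/invite` forms described above, and the 2–32 character bound on the code is an assumption for illustration, not Discord's documented limit:

```python
import re

# Matches standard invites (discord.gg/code) and the longer
# discord.com/invite/code form, including vanity codes.
# The length bound 2-32 is an assumption, not an official limit.
INVITE_RE = re.compile(
    r"(?:https?://)?(?:www\.)?"
    r"(?:discord\.gg|discord(?:app)?\.com/invite)/"
    r"([A-Za-z0-9-]{2,32})",
    re.IGNORECASE,
)

def find_invites(text: str) -> list[str]:
    """Return every invite code mentioned in a blob of text."""
    return INVITE_RE.findall(text)
```

Run this over anything you or your moderators have ever published, then delete or regenerate the codes it finds.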
Here's where it gets interesting. The attacker doesn't just report your server normally. They manipulate the metadata of the report request to make Discord's systems believe the invite link is associated with CSAM (child sexual abuse material).
They're not changing what's actually in your server. They're tricking Discord into thinking the link leads to illegal content — by spoofing what the report says about it before Discord's systems verify anything.
Platforms like Discord have legal obligations to respond to CSAM immediately — no delay, no manual review queue. The system is designed to act within seconds to avoid liability.
So when the spoofed report hits that pipeline, Discord's own safety automation terminates the server instantly. No human checks it. No one at Discord reviews it before pulling the trigger. The server is gone, and in most cases, the owner's account is terminated simultaneously — which means there's no account left to even file an appeal with.
Every invite link your server has ever had is a potential attack vector. Links shared years ago in Reddit threads, Telegram groups, Discord directories — they all count. Vanity URLs are especially dangerous since they're permanent, memorable, and widely shared.
The exploit most likely works by manipulating the HTTP request to Discord's report submission endpoint. Possible mechanisms include:
- Metadata injection in the report API request body
- Open Graph / embed spoofing — Discord fetches link previews when processing reports; poisoning that preview before filing the report could make Discord's backend "see" something different from what's actually there
- Link preview cache poisoning — getting a malicious embed cached against the invite URL before the report is submitted
- SSRF — tricking Discord's servers into fetching attacker-controlled content when resolving the reported URL
I'm intentionally not going deeper than this. The goal here is defense and awareness, not giving anyone a recipe.
The real issue is that a legal compliance tool has become the attack itself. The attacker isn't breaking into Discord. They're manipulating Discord into doing the damage for them — using a system that was built to protect children, against innocent communities.
Current (broken) flow:

1. Attacker submits report with spoofed metadata
2. Discord's report API receives it
3. Automated system reads the metadata
4. CSAM signal detected
5. Instant server and account termination, with no verification step
What it should look like:

1. Attacker submits report with spoofed metadata
2. Discord's report API receives it
3. Automated system reads the metadata
4. CSAM signal detected
5. Discord independently fetches and analyzes the actual URL content
6. Secondary verification (automated hash check or human review)
7. Decision made, owner notified, appeal window opened
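The verification step in the second flow above is easy to express in code. A hedged sketch of that step only: `fetch` is an injected stand-in for a platform-side crawler, SHA-256 stands in for a perceptual hash such as PhotoDNA, and every name here is hypothetical rather than a Discord internal:

```python
import hashlib

def verify_report(reported_url: str, fetch, known_bad_hashes: set[str]) -> str:
    """Decide what to do with a severe-content report.

    The decision is based on content the platform fetches itself,
    never on metadata supplied by the reporter.
    """
    content = fetch(reported_url)  # independent fetch of the actual URL
    if content is None:
        # Nothing retrievable: the report cannot be substantiated.
        return "dismiss"
    # SHA-256 is a stand-in; real systems use perceptual hashes.
    digest = hashlib.sha256(content).hexdigest()
    if digest not in known_bad_hashes:
        # The reporter's claim does not match reality: no automated action.
        return "dismiss"
    # Even a real match goes to a second pass, not straight to deletion.
    return "escalate-for-human-review"
```

The point of the sketch: spoofed metadata never reaches the decision, because the only inputs are the fetched bytes and the hash list.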
| Server | Members | When | Why targeted |
|---|---|---|---|
| discord.com/servers (partnered) | 230,000 | March 30, 2026 | Unknown |
| LL Tweaks (PC optimization) | Unknown | April 2026 | Vanity URL theft suspected |
| Cyber Design (graphic design) | Unknown | April 2026 | Vanity URL theft suspected |
| Sash Mood | Unknown | April 2026 | Vanity URL theft suspected |
| Hood (roleplay) | Unknown | April 2026 | Vanity URL theft suspected |
| Unnamed (3 servers, one owner) | ~40,000+ total | April 5, 2026 | Targeted harassment |
Live tracker of terminated servers: undetected.cc/takedowns
The exploit came out of the game hacking scene — specifically people who wanted to take down rival cheat servers. Once they confirmed it worked on any server (not just ToS-violating ones), they started monetizing it.
It's now being sold on Telegram in two ways:
- Full method: €8,000 one-time
- As a service: Price depends on server size — you just pay and they do it
The people selling this have been careful not to target servers with direct connections to Discord staff. They know that hitting the wrong target would get the method patched overnight and possibly draw law enforcement. They're operating deliberately.
Motives seen so far:
- Stealing a server's vanity URL to claim it for themselves
- Eliminating competition (rival gaming communities, rival cheat providers)
- Pure harassment
- Selling the attack to clients
The core problem is architectural. Discord's automated termination system for severe content reports has no verification step before taking permanent, irreversible action. It trusts the reporter's metadata without independently confirming that the reported content actually exists.
For Discord to fix this properly they'd need to:
- Independently fetch and verify the content of any reported URL before acting on metadata claims
- Run a proper CSAM hash check (PhotoDNA or equivalent) on the actual fetched content — not just what the reporter says is there
- Require human review, or at minimum a confirmed second automated pass, before permanent server deletion
- Give the server owner a notification and a grace period to respond before termination
- Separate the "terminate server" and "terminate owner account" decisions — these shouldn't be one automatic action
- Flag accounts that have previously triggered terminations
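The last two items can be combined into a small gating layer. A sketch with made-up names (none of this reflects Discord's actual systems) showing how the two termination decisions could be decoupled and how repeat-trigger reporters could be flagged:

```python
from dataclasses import dataclass, field

@dataclass
class ReportGate:
    """Gate irreversible actions and track which reporters trigger them."""
    triggers: dict[str, int] = field(default_factory=dict)

    def decide(self, reporter_id: str, verified: bool) -> dict:
        # Server and owner-account termination are separate decisions,
        # and neither happens automatically: verified reports go to review.
        decision = {
            "terminate_server": False,
            "terminate_owner_account": False,
            "queue_for_human_review": verified,
            "reporter_flagged": False,
        }
        if verified:
            n = self.triggers.get(reporter_id, 0) + 1
            self.triggers[reporter_id] = n
            # A reporter whose reports repeatedly lead toward terminations
            # earns scrutiny, not extra trust.
            decision["reporter_flagged"] = n >= 2
        return decision
```

The threshold of 2 is arbitrary; the structural point is that no code path sets the two `terminate_*` flags directly from a report.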
Attackers were re-registering expired Discord invite codes after communities let them lapse. Discord's UI shows "Never Expires" incorrectly for some links. After a link expired, attackers claimed the code and used it to redirect users into malware-distributing servers dropping AsyncRAT and Skuld Stealer (a crypto wallet credential stealer). Still present as of April 2026.
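You can audit this for your own community by checking what each old invite code currently resolves to. A sketch against the public invite lookup route (`GET /invites/{code}` is a real, unauthenticated Discord API endpoint; the `opener` parameter exists only so the function can be exercised without network access):

```python
import json
import urllib.error
import urllib.request

API = "https://discord.com/api/v10"

def invite_status(code: str, opener=urllib.request.urlopen):
    """Return the guild name an invite currently resolves to, or None if dead."""
    try:
        with opener(urllib.request.Request(f"{API}/invites/{code}")) as resp:
            data = json.loads(resp.read())
        return data.get("guild", {}).get("name")
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None  # code is dead, and could be re-registered by anyone
        raise
```

If an old code of yours now resolves to a guild you don't recognize, someone has re-registered it.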
Discord's WebSocket API was leaking whether users had set themselves to "Invisible." The API included Invisible users in the presences array as "status": "offline" instead of omitting them entirely — making it possible to enumerate who was actively hiding.
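The fix is conceptually simple: treat Invisible as "not present" when building the payload, so hidden users are indistinguishable from offline ones. A toy sketch; the dict shape is assumed for illustration and is not Discord's actual gateway schema:

```python
def presences_payload(members: list[dict]) -> list[dict]:
    """Build the presences array for a gateway payload.

    Members who are offline OR invisible are omitted entirely. The leaky
    behavior listed invisible users with status "offline", which let a
    client diff the member list against the presences array and spot
    exactly who was hiding.
    """
    return [
        {"user_id": m["user_id"], "status": m["status"]}
        for m in members
        if m["status"] not in ("offline", "invisible")
    ]
```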
An attacker compromised a Discord customer service vendor and accessed data from users who had previously contacted Discord's support or Trust & Safety teams. Discord revoked the vendor's access and brought in a forensics firm.
A separate vendor breach exposed roughly 1.5 TB of Discord user data including government IDs and facial scans submitted for account recovery. The data ended up circulating on dark web markets.
Discord's age verification rollout relied on a vendor called Persona. Researchers found Persona had left an internal frontend exposed on a government-authorized server, potentially exposing biometric data from users who had submitted facial scans for age verification. Discord delayed the global rollout.
Do this right now — don't wait.
1. Delete every invite link you have. Server Settings → Invites → delete all of them. Old ones, new ones, all of them. This removes the attack surface entirely.
2. Turn off invite creation for members. Server Settings → Roles → @everyone → disable "Create Invite". Do the same for any other default roles.
3. If you have a vanity URL, delete it. Yes, it hurts. But a vanity URL is findable, permanent, and makes you a much easier target. Server Settings → Overview → delete the vanity URL.
4. Clean up bot permissions. Go through your bots and remove "Manage Server" and "Create Invite" from anything that doesn't actually need those permissions.
5. Back up your server structure. Use Xenon or a similar backup bot to snapshot your channels, roles, and settings. If something does happen, you'll at least be able to rebuild.
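Step 1 can be automated if you run a bot. A sketch against Discord's documented invite endpoints (`GET /guilds/{id}/invites` to list, `DELETE /invites/{code}` to revoke, both requiring a bot with Manage Server); the `opener` parameter exists only so the logic can be tested offline, and there is no rate-limit handling here:

```python
import json
import urllib.request

API = "https://discord.com/api/v10"

def _call(method, path, token, opener=urllib.request.urlopen):
    req = urllib.request.Request(
        f"{API}{path}",
        method=method,
        headers={"Authorization": f"Bot {token}"},
    )
    with opener(req) as resp:
        return json.loads(resp.read() or "null")

def delete_all_invites(guild_id, token, opener=urllib.request.urlopen):
    """List every active invite for the guild, then revoke each one."""
    invites = _call("GET", f"/guilds/{guild_id}/invites", token, opener)
    for inv in invites:
        _call("DELETE", f"/invites/{inv['code']}", token, opener)
    return [inv["code"] for inv in invites]
```

For a real run you would add rate-limit backoff (Discord returns 429 with a `Retry-After` header) and log what was revoked.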
If you need people to still join your server: Some owners are switching to Discord OAuth2 bot flows — users authorize a bot which adds them directly, no invite link required. This works, but only do it if you actually know what you're doing on the backend. A badly built OAuth system can leak user IPs, emails, and tokens.
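For reference, the bot-adds-member flow hinges on one call: after the user authorizes with the `guilds.join` OAuth2 scope and you exchange the code for an access token, the bot PUTs them into the guild. A sketch of that final step only (`PUT /guilds/{guild}/members/{user}` is Discord's documented Add Guild Member route; the bot must be in the guild with the Create Invite permission, and the `opener` parameter is only for offline testing):

```python
import json
import urllib.request

API = "https://discord.com/api/v10"

def add_member(guild_id, user_id, bot_token, user_access_token,
               opener=urllib.request.urlopen):
    """Add a user who completed OAuth2 with the guilds.join scope.

    Returns the HTTP status: 201 means added, 204 means already a member.
    """
    req = urllib.request.Request(
        f"{API}/guilds/{guild_id}/members/{user_id}",
        data=json.dumps({"access_token": user_access_token}).encode(),
        method="PUT",
        headers={
            "Authorization": f"Bot {bot_token}",
            "Content-Type": "application/json",
        },
    )
    with opener(req) as resp:
        return resp.status
```

Note the security surface this sketch ignores: the OAuth2 code exchange and token storage are where a badly built system leaks emails, IPs, and tokens, so handle those server-side and never log the tokens.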
| Date | Event |
|---|---|
| April 5, 2026 | Public disclosure video published on YouTube |
| April 10, 2026 | This research published on GitHub by Zoubaire |
| TBD | Waiting for Discord to respond or patch |
- Original disclosure video — https://www.youtube.com/watch?v=NZNBwzID-CI (April 5, 2026)
- Terminated servers tracker — https://undetected.cc/takedowns
- Check Point Research — expired invite hijacking — https://research.checkpoint.com/2025/from-trust-to-threat-hijacked-discord-invites-used-for-multi-stage-malware-delivery/ (June 2025)
- SentinelOne — CVE-2026-24332 — https://www.sentinelone.com/vulnerability-database/cve-2026-24332/ (January 2026)
- Discord — vendor breach statement — https://discord.com/press-releases/update-on-security-incident-involving-third-party-customer-service (October 2025)
- Hoplonn InfoSec — 1.5TB data leak — https://hoploninfosec.com/discord-data-breach (November 2025)
- Malwarebytes — Persona exposure — https://www.malwarebytes.com/blog/news/2026/02/age-verification-vendor-persona-left-frontend-exposed (February 2026)
- GitHub — CVE-2026-Discord — https://github.com/NiceTop1027/CVE-2026-Discord
- EFF — Discord age verification — https://www.eff.org/deeplinks/2026/02/discord-voluntarily-pushes-mandatory-age-verification-despite-recent-data-breach (February 2026)
Everything in this document comes from public security disclosures. I'm publishing this to warn people and put pressure on Discord to fix it — not to help anyone run the attack. No working exploit code or specific API payloads are included anywhere in this document.
By Zoubaire — April 10, 2026 — Status: waiting on Discord
If this helped you protect your server, share it. The more people know about this, the harder it gets for the people selling this attack to keep operating quietly.