We’re thrilled to announce the initial release of Coop! This v0 release includes core review capabilities alongside specialized child safety workflows:
- Context-rich review console for content flagged by ML models, user reports, or detection rules
- Flexible signal abstraction for integrating classifiers
- Queue routing and orchestration based on signals, priority, and policy
- Built-in reviewer wellness capabilities, configurable per organization or individual
- Automated and manual enforcement workflows with complete audit trails
- Appeals handling with dedicated review workflows
- Works out of the box with open-source storage backends (Postgres, ScyllaDB 5.2, ClickHouse); other storage can be plugged in
- Insightful dashboards for tracking performance and feedback to improve rules and policies
- HMA integration for hash matching (CSAM, TVEC, NCII, internal hash banks, etc.)
- Comprehensive child safety tools for known CSAM, novel CSAM detection, and reporting
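To give a flavor of the signal-based routing described above, here is a minimal, purely illustrative sketch. It is not Coop's actual API; every name (`Signal`, `Item`, `route`, the signal names, and the queue names) is hypothetical, and the thresholds are made up for the example:

```python
from dataclasses import dataclass, field

# Hypothetical types sketching signal-based queue routing.
# These are NOT Coop's real API; names and thresholds are illustrative only.
@dataclass
class Signal:
    name: str    # e.g. "csam_classifier", "user_report", "hash_match"
    score: float # normalized confidence in [0, 1]

@dataclass
class Item:
    content_id: str
    signals: list[Signal] = field(default_factory=list)

def route(item: Item) -> str:
    """Pick a review queue from the item's signals.

    Illustrative rules: hash matches and high-confidence child-safety
    classifier hits go to dedicated queues; user reports get their own
    queue; everything else falls through to general review.
    """
    by_name = {s.name: s.score for s in item.signals}
    if by_name.get("hash_match", 0.0) >= 1.0:
        return "child_safety_priority"
    if by_name.get("csam_classifier", 0.0) >= 0.9:
        return "child_safety_review"
    if by_name.get("user_report", 0.0) > 0.0:
        return "user_reports"
    return "general_review"

item = Item("c-123", [Signal("csam_classifier", 0.95)])
print(route(item))  # -> child_safety_review
```

In a real deployment this kind of logic would live in configurable rules rather than code, so trust and safety teams can adjust thresholds and queue assignments without a release.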
This early v0 release focuses on getting core review capabilities and child safety workflows into a usable state, but there's still active development ahead. Expect features and documentation to evolve based on community feedback.
For more information about Coop, check out our announcement blog post.
Get Involved
We're developing Coop in the open and want to hear from you. Whether you're testing it out, running into issues, or have ideas for improvements, please open an issue or join our Discord. Your feedback directly shapes our roadmap.
Thank You
This release was possible because of the efforts of contributors who worked through a complex redesign, partners who believed in the vision of open source safety tools, and the broader trust and safety community who provided feedback and guidance. Thank you especially to @juanmrad, @pawiecz, @kbicevski, Sjoerd Simons, @emanueleaina, @dom-notion, @cassidyjames, @vinaysrao1, @wayjaywang, and @julietshen.