Collaborative hub for sharing and shaping AI best practices in Medicaid through the Safe AI in Medicaid Alliance (SAMA).
Welcome to the Safe AI in Medicaid Alliance. SAMA is a public-private initiative working to define safe, practical, and program-aligned uses of AI across the Medicaid enterprise.
We are focused on:
- Developing responsible AI use guidelines grounded in Medicaid program values
- Sharing real-world policies and use cases from states and partners
- Aligning with national frameworks such as the NIST AI Risk Management Framework
- Building toward shared deliverables at MESC 2025
SAMA has adopted the following definition of Safe AI:
Safe AI in Medicaid refers to the ethical, accountable, and transparent use of AI tools that improve program operations while maintaining fairness, human oversight, and compliance with privacy and legal standards.
State Medicaid AI Use Cases, Risk Evaluation Framework, and Implementation Guide: https://docs.google.com/document/d/1_kjHoM9Q-kVX2sW34wgPOdRTarcXtG0a/edit
SAMA is open to:
- State Medicaid programs
- Federal partners
- Medicaid-focused vendors and collaborators
If you are part of this group and would like to contribute, please read below.
All submissions, including AI policies, use cases, and draft guidance, are stored in a private repository to protect participant contributions.
- Create a free GitHub account: github.com/signup
- Email your GitHub username to: daniel.hallenbeck@acentra.com
📌 If GitHub is blocked on your network, email us and we’ll provide an alternate upload method.
By contributing, you’ll help shape:
- A national repository of AI use cases in Medicaid
- A library of state and vendor AI policies
- Draft guidance for a Medicaid-specific overlay to the NIST AI RMF
Thanks for being part of this effort. Let's build something practical, meaningful, and Medicaid-specific — together.