Welcome! This repo contains demos showcasing TrustyAI's guardrailing and model evaluation features within Red Hat OpenShift AI.
- Evaluation Quickstart: This demo will quickly get you started running an evaluation against a deployed model.
- Guardrails Quickstart: This demo will quickly get you started with three detectors that flag hate speech, gibberish, and jailbreaking, respectively.
- Lemonade Stand: A demo of manually configuring guardrails, as featured in the "Guardrails for AI models" video on the Red Hat YouTube channel.