

Ragas as an External Provider for Llama Stack


About

This repository implements Ragas as an out-of-tree Llama Stack evaluation provider.

Features

The goal is to provide all of Ragas' evaluation functionality over Llama Stack's eval API, while leveraging Llama Stack's built-in APIs for inference (LLMs and embeddings), datasets, and benchmarks.

There are two versions of the provider:

  • inline: runs the Ragas evaluation in the same process as the Llama Stack server. This is always available with the base installation.
  • remote: runs the Ragas evaluation in a remote process, using Kubeflow Pipelines. Only available when remote dependencies are installed with pip install llama-stack-provider-ragas[remote].
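For reference, the two installation variants from PyPI look like this (the package name is taken from the pip command above; quoting the extra avoids shell globbing):

# inline provider only (base installation)
pip install llama-stack-provider-ragas

# inline + remote (Kubeflow Pipelines) provider
pip install "llama-stack-provider-ragas[remote]"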

Prerequisites

The steps below assume uv is installed and that a local Ollama instance is available to serve inference and embeddings for the sample distributions.

Setup

  • Clone this repository

    git clone <repository-url>
    cd llama-stack-provider-ragas
  • Create and activate a virtual environment

    uv venv
    source .venv/bin/activate
  • Install (optionally as an editable package). There are distro, remote, and dev optional dependency groups for running the sample LS distribution and the KFP-enabled remote provider; installing the dev group also pulls in the distro and remote dependencies (see the examples after this list).

    uv pip install -e ".[dev]"
  • The sample LS distributions (one for the inline provider and one for the remote provider) are simple LS distributions that use Ollama for inference and embeddings. See the provider-specific sections below for setup and run commands.
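
As a sketch of the editable installs for the individual optional groups named above:

# optional dependency groups declared by the package
uv pip install -e ".[distro]"   # sample Llama Stack distribution
uv pip install -e ".[remote]"   # Kubeflow Pipelines remote provider
uv pip install -e ".[dev]"      # dev tooling, plus distro and remote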

Inline provider (default with base installation)

Create a .env file with the required environment variable:

EMBEDDING_MODEL=ollama/all-minilm:l6-v2
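The sample distribution expects this embedding model to be available in your local Ollama instance; a quick check (assuming Ollama is installed and the tag matches your .env):

# pull the embedding model referenced by EMBEDDING_MODEL
ollama pull all-minilm:l6-v2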

Run the server:

dotenv run uv run llama stack run distribution/run.yaml
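To sanity-check that the server is up, you can hit the health endpoint (this assumes Llama Stack's default port, 8321; adjust if your distribution configures a different one):

curl http://localhost:8321/v1/health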

Remote provider (requires optional dependencies)

First install the remote dependencies:

uv pip install -e ".[remote]"

Create a .env file with the following:

# Required for both inline and remote
EMBEDDING_MODEL=ollama/all-minilm:l6-v2

# Required for remote provider
KUBEFLOW_LLAMA_STACK_URL=<your-llama-stack-url>
KUBEFLOW_PIPELINES_ENDPOINT=<your-kfp-endpoint>
KUBEFLOW_NAMESPACE=<your-namespace>
KUBEFLOW_BASE_IMAGE=quay.io/diegosquayorg/my-ragas-provider-image:latest
KUBEFLOW_PIPELINES_TOKEN=<your-pipelines-token>
KUBEFLOW_RESULTS_S3_PREFIX=s3://my-bucket/ragas-results
KUBEFLOW_S3_CREDENTIALS_SECRET_NAME=<secret-name>

Where:

  • KUBEFLOW_LLAMA_STACK_URL: The URL of the Llama Stack server that the remote provider will use to run the evaluation (LLM generations, embeddings, etc.). If you are running Llama Stack locally, you can use ngrok to expose it to the remote provider (see the example after this list).
  • KUBEFLOW_PIPELINES_ENDPOINT: The URL of the Kubeflow Pipelines API server. You can find it via kubectl get routes -A | grep -i pipeline on your cluster.
  • KUBEFLOW_NAMESPACE: The name of the data science project where the Kubeflow Pipelines server is running.
  • KUBEFLOW_PIPELINES_TOKEN: Kubeflow Pipelines token with access to submit pipelines. If not provided, the token will be read from the local kubeconfig file.
  • KUBEFLOW_BASE_IMAGE: The image used to run the Ragas evaluation in the remote provider. See Containerfile for details. There is a public version of this image at quay.io/diegosquayorg/my-ragas-provider-image:latest.
  • KUBEFLOW_RESULTS_S3_PREFIX: S3 location (bucket and prefix folder) where evaluation results will be stored, e.g., s3://my-bucket/ragas-results.
  • KUBEFLOW_S3_CREDENTIALS_SECRET_NAME: Name of the Kubernetes secret containing AWS credentials with write access to the S3 bucket. Create with:
    oc create secret generic <secret-name> \
      --from-literal=AWS_ACCESS_KEY_ID=your-access-key \
      --from-literal=AWS_SECRET_ACCESS_KEY=your-secret-key \
      --from-literal=AWS_DEFAULT_REGION=us-east-1
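
If your Llama Stack server runs locally, one way to make it reachable from the cluster is a tunnel; a hypothetical ngrok invocation (8321 is Llama Stack's default port):

# expose the local Llama Stack server and note the public forwarding URL
ngrok http 8321
# set KUBEFLOW_LLAMA_STACK_URL to the URL that ngrok prints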

Run the server:

dotenv run uv run llama stack run distribution/run.yaml
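Once a remote evaluation finishes, results land under the configured S3 prefix; a hypothetical check with the AWS CLI, using the example prefix from the .env above:

aws s3 ls s3://my-bucket/ragas-results/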

Usage

See the demos in the demos directory.
