This sample builds on the basic CrewAI example and demonstrates a multi-agent workflow. A lightweight classifier determines whether the user is requesting a summary, a deeper research answer, or a translation. Depending on the decision, different agents, each powered by an LLM of a different size, are executed, and progress is streamed back to the browser using Django Channels.
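The routing idea can be sketched in plain Python. This is a simplified, keyword-based stand-in for the sample's LLM-powered classifier, and the handler names and the mapping of request types to LLM sizes are assumptions for illustration only:

```python
def classify_request(message: str) -> str:
    """Rough keyword-based stand-in for the sample's LLM classifier."""
    text = message.lower()
    if "translate" in text:
        return "translation"
    if "summar" in text:  # matches "summarize", "summarise", "summary"
        return "summary"
    return "research"  # default to the deeper research path

# Hypothetical handlers standing in for the CrewAI agents; which LLM size
# serves which request type is an assumption, not taken from the sample.
HANDLERS = {
    "summary": lambda msg: f"[small LLM] summary of: {msg}",
    "translation": lambda msg: f"[medium LLM] translation of: {msg}",
    "research": lambda msg: f"[large LLM] research answer for: {msg}",
}

def route(message: str) -> str:
    """Dispatch a user message to the handler chosen by the classifier."""
    return HANDLERS[classify_request(message)](message)
```

In the real sample each branch runs one or more CrewAI agents, and intermediate progress is pushed to the browser over a Channels WebSocket rather than returned as a single string.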
- Download Defang CLI
- (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
- (Optional for local development) Docker CLI
To run the application locally, you can use the following command:
docker compose -f ./compose.local.yaml up --build
For this sample, you will need to provide the following configuration:
Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the action will automatically deploy them for you.
The password for the Postgres database.
defang config set POSTGRES_PASSWORD
The SSL mode for the Postgres database.
defang config set SSL_MODE
The secret key for the Django application.
defang config set DJANGO_SECRET_KEY
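Values set with `defang config set` reach the container as environment variables, which the Django settings module can read. The following is a minimal sketch of that pattern; the defaults and the extra `POSTGRES_*` variable names are illustrative assumptions, not the sample's actual settings:

```python
import os

# Assumption: config values arrive as environment variables of the same name.
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD", "")
SSL_MODE = os.getenv("SSL_MODE", "require")  # default is an assumption
SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.getenv("POSTGRES_DB", "postgres"),
        "USER": os.getenv("POSTGRES_USER", "postgres"),
        "PASSWORD": POSTGRES_PASSWORD,
        "HOST": os.getenv("POSTGRES_HOST", "db"),
        "PORT": os.getenv("POSTGRES_PORT", "5432"),
        # Psycopg passes sslmode through to the Postgres connection.
        "OPTIONS": {"sslmode": SSL_MODE},
    }
}
```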
Three different LLM endpoints are used to demonstrate branching. Configure them via:
defang config set SMALL_LLM_URL
defang config set SMALL_LLM_MODEL
defang config set MEDIUM_LLM_URL
defang config set MEDIUM_LLM_MODEL
defang config set LARGE_LLM_URL
defang config set LARGE_LLM_MODEL
In addition, the embedding model is configured via:
defang config set EMBEDDING_URL
defang config set EMBEDDING_MODEL
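A sketch of how the application might look up one of the SMALL/MEDIUM/LARGE endpoint pairs from the environment. The helper and the placeholder URL and model values are hypothetical; the sample's actual client code may differ:

```python
import os
from dataclasses import dataclass

@dataclass
class LLMEndpoint:
    url: str
    model: str

def endpoint(size: str) -> LLMEndpoint:
    """Read the <SIZE>_LLM_URL / <SIZE>_LLM_MODEL pair for a given size."""
    prefix = size.upper()
    return LLMEndpoint(
        url=os.getenv(f"{prefix}_LLM_URL", ""),
        model=os.getenv(f"{prefix}_LLM_MODEL", ""),
    )

# Placeholder values for local experimentation only.
os.environ.setdefault("SMALL_LLM_URL", "http://localhost:11434")
os.environ.setdefault("SMALL_LLM_MODEL", "example-small-model")
small = endpoint("small")
```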
Note: Download the Defang CLI before deploying.
Deploy your application to the Defang Playground by opening up your terminal and typing:
defang compose up
If you want to deploy to your own cloud account, you can use Defang BYOC.
Title: Crew.ai Advanced Django Sample
Short Description: Demonstrates branching CrewAI workflows with multiple LLM sizes to handle summarisation, research, and translation requests.
Tags: Django, Celery, Redis, Postgres, AI, ML, CrewAI
Languages: Python