blog/2025-06-16-crew-ai-sample.md
author: Defang Team
## Why Build a Starter Kit for RAG + Agents?
Let’s be honest: every developer who’s played with LLMs gets that rush of “wow” from the first working demo. But the real headaches show up when you need to stitch LLMs into something production-grade: an app that can pull in real data, coordinate multi-step logic, and more. Suddenly, you’re not just writing single prompts. You’re coordinating between multiple prompts, managing queues, adding vector databases, orchestrating workers, and trying to get things back to the user in real-time. We've found that [CrewAI](https://www.crewai.com/) (coordinating prompts, agents, tools) + [Django](https://www.djangoproject.com/) (building an API, managing data), with a bit of [Celery](https://docs.celeryproject.org/en/stable/) (orchestrating workers/async tasks), is a really nice set of tools for this. We're also going to use [Django Channels](https://channels.readthedocs.io/en/stable/) (real-time updates) to push updates back to the user. And of course, we'll use [Defang](https://www.defang.io/) to deploy all that to the cloud.
If this sounds familiar (or if you're dreading the prospect of dealing with it), you’re the target audience for this sample. Instead of slogging through weeks of configuration and permissions hell, you get a ready-made template that runs on your laptop, then scales—unchanged—to Defang’s Playground, and finally to your own AWS or GCP account. All the gnarly infra is abstracted, so you can focus on getting as much value as possible out of that magical combo of CrewAI and Django.
:::info[Just want the sample?]
You can [find it here](https://github.com/DefangSamples/sample-crew-django-redis-postgres-template).
:::
## Architecture at a Glance
Behind the scenes, the workflow is clean and powerful. The browser connects via [WebSockets to our app using Django Channels](https://channels.readthedocs.io/en/latest/deploying.html#http-and-websocket). Heavy work is pushed to a [Celery worker](https://docs.celeryq.dev/en/stable/). That worker generates an [embedding](https://en.wikipedia.org/wiki/Embedding_(machine_learning)), checks [Postgres](https://www.postgresql.org/) with [pgvector](https://github.com/pgvector/pgvector) for a match, and either returns the summary or, if there’s no hit, fires up a [CrewAI agent](https://www.crewai.com/) to generate one. Every update streams back through [Redis](https://redis.io/) and Django Channels so users get progress in real time.
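In the actual sample, the nearest-neighbour lookup happens inside Postgres via pgvector's distance operators; as a plain-Python sketch of the worker's cache-or-generate decision (toy 2-D vectors standing in for real embeddings, and the threshold value is an assumption):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def get_summary(text_embedding, cache, threshold=0.9):
    """Return a cached summary if a stored embedding is close enough;
    otherwise signal that the CrewAI agent should generate one."""
    best, best_score = None, -1.0
    for stored_embedding, summary in cache:
        score = cosine(text_embedding, stored_embedding)
        if score > best_score:
            best_score, best = score, summary
    if best is not None and best_score >= threshold:
        return ("cache_hit", best)
    return ("generate", None)  # the worker would kick off the CrewAI agent here

# toy "database" of (embedding, summary) rows
cache = [([1.0, 0.0], "summary A"), ([0.0, 1.0], "summary B")]
print(get_summary([0.99, 0.05], cache))  # close to "summary A" -> cache hit
print(get_summary([0.7, 0.7], cache))    # no confident match -> generate
```

With pgvector, the loop collapses to a single `ORDER BY embedding <=> :query LIMIT 1` query, but the branching logic is the same.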
Durable state lives in Postgres and Redis. Model services ([LLMs](https://en.wikipedia.org/wiki/LLM) and embeddings) are fully swappable, so you can upgrade to different models in the cloud or run them locally with the [Docker Model Runner](https://docs.docker.com/compose/how-tos/model-runner/) without rewriting the full stack.
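One reason the swap stays cheap: the app only needs a base URL and model name from its environment, so pointing at a different model service is a config change, not a code change. A minimal sketch, with hypothetical variable names and defaults (the sample's actual names may differ):

```python
import os

def model_config():
    """Resolve model endpoints from the environment.
    Variable names and defaults here are illustrative assumptions,
    not the sample's actual configuration."""
    return {
        "llm_url": os.getenv("LLM_URL", "http://llm/v1"),
        "embedding_url": os.getenv("EMBEDDING_URL", "http://embedding/v1"),
        "llm_model": os.getenv("LLM_MODEL", "ai/gemma3"),
        "embedding_model": os.getenv("EMBEDDING_MODEL", "ai/mxbai-embed-large"),
    }

print(model_config())
```

Locally the URLs point at model containers; in the cloud, Defang injects endpoints for managed model services instead.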
## Under the Hood: The Services
### Django + Channels
The Django app is the front door, routing HTTP and WebSocket traffic, serving up the admin, and delivering static content. It’s built on [Daphne](https://github.com/django/daphne) and Django Channels, with Redis as the channel layer for real-time group events. Django’s admin is your friend here: to start you can check what summaries exist, but if you start building out your own app, it'll make it a breeze to debug and manage your system.
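Wiring Redis in as the channel layer is a few lines of Django settings; a minimal sketch using the `channels_redis` backend (the host name `redis` is an assumption and should match your compose service name):

```python
# settings.py (sketch): Redis as the Channels channel layer,
# so group events fan out across all app containers.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [("redis", 6379)]},
    },
}
```

This is what lets a Celery worker publish progress to a group and have every connected WebSocket client receive it, regardless of which app container holds the connection.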
### PostgreSQL + pgvector
## How the Compose Files Work
In local dev, your `compose.local.yaml` spins up [Gemma](https://hub.docker.com/r/ai/gemma3) and [Mixedbread](https://hub.docker.com/r/ai/mxbai-embed-large) models, running fully locally and with no cloud credentials or API keys required. URLs for service-to-service communication are injected at runtime. When you’re ready to deploy, swap in the main `compose.yaml`, which adds Defang’s `x-defang-llm`, `x-defang-redis`, and `x-defang-postgres` flags. Now, Defang maps your Compose intent to real infrastructure—managed model endpoints, Redis, and Postgres—on cloud providers like AWS or GCP. It handles all networking, secrets, and service discovery for you. There’s no YAML rewriting or “dev vs prod” drift.
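A rough sketch of what those flags look like in a compose file (service names and images here are illustrative, not the sample's actual definitions—see the sample repo for those):

```yaml
services:
  llm:
    # placeholder image; the real sample defines its own model service
    image: your-llm-gateway
    x-defang-llm: true        # Defang provisions a managed model endpoint
  redis:
    image: redis:7
    x-defang-redis: true      # managed Redis (e.g. ElastiCache on AWS)
  db:
    image: pgvector/pgvector:pg16
    x-defang-postgres: true   # managed Postgres (e.g. RDS on AWS)
```

The `x-` extension fields are ignored by plain `docker compose`, which is why the same file shape works locally and in the cloud.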
## The Three-Step Deployment Journey
You can run everything on your laptop with a single `docker compose -f ./compose.local.yaml up` command—no cloud dependencies, fast iteration, and no risk of cloud charges. When you’re ready for the next step, use `defang compose up` to push to the Defang Playground. This free hosted sandbox is perfect for trying Defang, demos, or prototyping. It automatically adds TLS to your endpoints and sleeps after a week. For production, use your own AWS or GCP account. `DEFANG_PROVIDER=aws defang compose up` maps each service to a managed equivalent (ECS, RDS, ElastiCache, Bedrock models), wires up secrets, networking, etc. Your infra. Your data.
## Some Best Practices and Design Choices
This sample uses vector similarity to try to fetch summaries that are semantically similar to the input. For more robust results, you might want to embed the original input. You can also think about chunking up longer content for finer-grained matches that you can integrate into your CrewAI workflows. Real-time progress via Django Channels beats HTTP polling, especially for LLM tasks that can take a while. The app service is stateless, which means you can scale it horizontally just by adding more containers, which is easy to specify in your compose file.
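The chunking idea can be sketched in a few lines—a naive character-based splitter with overlap, which is not part of the sample itself (real pipelines usually split on sentences or tokens instead):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks so each chunk gets
    its own embedding for finer-grained similarity matches."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

doc = "x" * 500
print(len(chunk_text(doc)))  # 3 overlapping chunks covering the whole text
```

Each chunk would then be embedded and stored as its own pgvector row, so a query can match the relevant passage rather than the whole document.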