---
title: "Building Intelligent Web Applications with Django, Celery, and CrewAI"
description: "A sample project that shows how to build intelligent web applications using Django, Celery, and CrewAI."
tags:
[
Cloud,
NoDevOps,
BYOC,
AI,
LLMs,
GCP,
AWS,
]
author: Defang Team
---

# Building Intelligent Web Applications with Django, Celery, and CrewAI

Integrating AI capabilities into web applications is in high demand! But building robust, scalable AI-powered applications can be challenging. Enter our Django-Redis-Postgres-CrewAI sample: a powerful foundation that makes it easier than ever to build and deploy AI-driven web apps.

## The Stack

This project combines several battle-tested technologies into a coherent architecture:

**Django**: A high-level Python web framework that encourages rapid development and clean, pragmatic design.

**Celery**: An asynchronous task queue/job queue based on distributed message passing.

**Redis**: A lightning-fast in-memory data store used as a message broker and result backend.

**Postgres with pgvector**: A robust relational database with vector similarity search extensions.

**CrewAI**: A framework for building AI systems with autonomous agents that can collaborate to achieve complex goals.

Let's dive into why this architecture provides an excellent foundation for your next AI-powered web application.

## How It Works

The system can be broken down into the following components:

1. **App Service**: The main Django web application, handling HTTP requests and WebSocket connections.

2. **Worker Service**: A Celery worker that processes background tasks, including CrewAI operations.

3. **Postgres Service**: Postgres database with pgvector extension for storing and querying vector embeddings.

4. **Redis Service**: Message broker for Celery and backend for Django Channels.

5. **LLM Service**: Large Language Model service for generating text. Uses the [Docker Model Runner](https://docs.docker.com/model-runner/) locally, and Defang's [OpenAI Access Gateway](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#docker-model-provider-services) to run a managed version in your cloud environment.

6. **Embedding Service**: Text embedding model for converting text into vector representations. Like the LLM service, it uses the [Docker Model Runner](https://docs.docker.com/model-runner/) locally and Defang's [OpenAI Access Gateway](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#docker-model-provider-services) to run a managed version in your cloud environment.

This architecture allows different components to scale horizontally based on their load patterns and lets Defang provision managed services where appropriate.
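
To make the shape of the stack concrete, here is a rough sketch of how these services might be wired together in a compose file. Service names, images, model names, and environment variables are illustrative assumptions rather than the sample's actual configuration; see the linked Docker Model Runner and OpenAI Access Gateway docs for the exact model-provider syntax.

```yaml
# A rough sketch of the compose file; names, images, models, and environment
# variables are illustrative, not the sample's exact configuration.
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_HOST=postgres
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - postgres
      - redis

  worker:
    build: .
    command: celery -A config worker --loglevel=info
    environment:
      - POSTGRES_HOST=postgres
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - postgres
      - redis

  postgres:
    image: pgvector/pgvector:pg16
    environment:
      - POSTGRES_PASSWORD=postgres

  redis:
    image: redis:7

  # Model provider services: Docker Model Runner locally, mapped by Defang to
  # the OpenAI Access Gateway in the cloud (model names are placeholders).
  llm:
    provider:
      type: model
      options:
        model: ai/llama3.2

  embedding:
    provider:
      type: model
      options:
        model: ai/mxbai-embed-large
```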

## Real-Time AI Processing with Django Channels and Celery

One of the neat features of this sample project is its implementation of real-time communication. We've built a very simple demo to show how this works. In our example, when a user submits text for summarization:

1. The request is received by Django and forwarded to a Celery worker so that it can handle longer-running tasks.
2. The Celery worker processes the text using CrewAI.
3. As the AI processes the text, updates are streamed back to the user in real-time through Django Channels.
4. The final result is stored in the Postgres database with a vector embedding for future similarity searches.

This approach keeps the web interface responsive even during computationally intensive AI operations.
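
Here's a minimal sketch of that flow: a Celery task that pushes progress updates to the browser through a Django Channels group while CrewAI does the work. The group name, message format, and the `build_summary_crew` helper are assumptions for illustration, not the sample's exact code.

```python
# A simplified sketch of the summarization flow: a Celery task that streams
# progress to the browser through Django Channels while CrewAI does the work.
from asgiref.sync import async_to_sync
from celery import shared_task
from channels.layers import get_channel_layer

from myapp.crew import build_summary_crew  # hypothetical helper that assembles the crew


def notify(session_id: str, message: str) -> None:
    """Push a progress update to the WebSocket group for this session."""
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        f"summaries_{session_id}",
        {"type": "summary.update", "message": message},
    )


@shared_task
def summarize_text(session_id: str, text: str) -> str:
    notify(session_id, "Starting summarization...")

    crew = build_summary_crew(text)   # agents and tasks defined elsewhere
    result = crew.kickoff()           # CrewAI's entry point for running a crew

    notify(session_id, "Summary ready.")
    # Persisting the result and its embedding is covered in the next section.
    return str(result)
```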

## Vector Embeddings for Semantic Search

The sample project leverages Postgres with pgvector to store AI-generated summaries along with their vector representations. This enables:

- Deduplication of semantically similar content
- Fast retrieval of related content
- Semantic search capabilities

The system automatically checks for similar existing summaries before generating new ones, improving efficiency and consistency.
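
Here's a sketch of what that check might look like with the `pgvector` Django integration. The model fields, embedding dimension, and distance threshold are assumptions for illustration, not the sample's exact code.

```python
# A sketch of storing summaries with embeddings and checking for near-duplicates
# before generating a new one (field names and threshold are illustrative).
from django.db import models
from pgvector.django import CosineDistance, VectorField


class Summary(models.Model):
    source_text = models.TextField()
    summary_text = models.TextField()
    embedding = VectorField(dimensions=1024)  # must match the embedding model's output size
    created_at = models.DateTimeField(auto_now_add=True)


def find_similar_summary(embedding, threshold: float = 0.1):
    """Return an existing summary within `threshold` cosine distance, or None."""
    return (
        Summary.objects
        .annotate(distance=CosineDistance("embedding", embedding))
        .filter(distance__lt=threshold)
        .order_by("distance")
        .first()
    )
```

If `find_similar_summary` returns a match, the worker can reuse it instead of calling the LLM again.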

## Why This Combination Works

### Scalability and Performance

By using Celery with Redis as a broker, the system can easily distribute workloads across multiple worker processes or even separate machines. This keeps AI processing from blocking the main web application, so the user experience stays smooth even under heavy load. You could imagine building complex CrewAI flows that take a long time to run, and this architecture makes it easy to scale the AI processing service independently.
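
A minimal `celery.py` sketch for that setup, with Redis as both broker and result backend (module and environment-variable names are assumptions):

```python
# Minimal Celery application wired to Redis as broker and result backend.
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

redis_url = os.environ.get("REDIS_URL", "redis://redis:6379/0")

app = Celery("config", broker=redis_url, backend=redis_url)
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()  # pick up tasks.py modules from installed apps
```

Scaling out then amounts to running more worker processes (for example, `celery -A config worker --concurrency=4`) or more worker containers, without touching the web service.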

### Flexibility and Extensibility

Django's rich ecosystem of packages and plugins makes it easy to extend the application with additional functionality. Whether you need authentication, administration interfaces, or API endpoints, Django has you covered. The pgvector integration with Django's ORM is also a nice touch: it makes it really easy to start querying your embeddings.
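
Getting the extension in place is a one-line migration with the `pgvector` package (a minimal sketch, with migration dependencies omitted), and from there similarity lookups like the `CosineDistance` query shown earlier are plain ORM calls.

```python
# Enable the pgvector extension from a Django migration
# (a minimal sketch; dependencies and app labels omitted).
from django.db import migrations
from pgvector.django import VectorExtension


class Migration(migrations.Migration):
    operations = [VectorExtension()]  # runs CREATE EXTENSION IF NOT EXISTS vector
```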

### Reliability

The combination of Django, Celery, and Redis provides multiple layers of reliability:

- Failed tasks can be automatically retried (see the sketch after this list)
- Task results can be persisted and queried later
- The system can recover gracefully from worker failures
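
Retries, for example, are mostly a matter of task options. A sketch, with illustrative exception types and limits:

```python
# A sketch of retry configuration on a Celery task; the exception types,
# backoff, and retry limits here are illustrative assumptions.
from celery import shared_task


@shared_task(
    autoretry_for=(ConnectionError, TimeoutError),  # retry on transient failures
    retry_backoff=True,                             # exponential backoff between attempts
    retry_kwargs={"max_retries": 3},
    acks_late=True,                                 # re-queue the task if the worker dies mid-run
)
def summarize_with_retries(session_id: str, text: str) -> str:
    ...  # same summarization logic as before
```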

### Developer Experience

Django's "batteries included" philosophy means you can focus on building your application's features rather than reinventing the wheel. The project structure follows many Django best practices, making it easy to understand and extend.

## Expanding the Sample Project

This sample project can be extended in numerous ways to suit your specific needs:

### Adding More AI Capabilities

We built a really minimalist project, but you can do so much more with CrewAI. Check out the [CrewAI documentation](https://docs.defang.io/docs/crewai/overview) for more information.

### Enhancing the User Interface

The sample project provides a basic web interface, but you could add a separate UI service if you want to build with your favorite frontend framework.

### Implementing User Authentication

For applications requiring user-specific content, Django's authentication system can be easily integrated (see the sketch after this list) to:

- Store user-specific AI results
- Implement access controls
- Track usage and quotas
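
A small sketch of that kind of integration, with illustrative model and view names, assuming the `Summary` model sketched earlier:

```python
# Tie AI results to users with Django's built-in auth
# (model, field, and template names are illustrative).
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.db import models
from django.shortcuts import render


class UserSummary(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    summary = models.ForeignKey("Summary", on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)


@login_required
def my_summaries(request):
    summaries = UserSummary.objects.filter(user=request.user).select_related("summary")
    return render(request, "summaries/list.html", {"summaries": summaries})
```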

### Adding API Endpoints

Django REST framework can be added to expose API endpoints (see the sketch after this list) for:

- Programmatic access to AI capabilities
- Integration with mobile applications
- Third-party service integration
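
A short sketch of what that could look like, again assuming the `Summary` model from earlier (serializer, viewset, and route names are illustrative):

```python
# Expose read-only summary endpoints with Django REST framework.
from rest_framework import routers, serializers, viewsets

from myapp.models import Summary  # hypothetical module path


class SummarySerializer(serializers.ModelSerializer):
    class Meta:
        model = Summary
        fields = ["id", "source_text", "summary_text", "created_at"]


class SummaryViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Summary.objects.all()
    serializer_class = SummarySerializer


router = routers.DefaultRouter()
router.register("summaries", SummaryViewSet)
# Then include router.urls in the project's URLconf.
```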

## Deployment Options

The sample project is designed to be deployable through various methods (example commands follow the list):

1. **Local Development**: Using Docker Compose for a consistent development environment.

2. [**Defang Playground**](https://docs.defang.io/docs/providers/playground): Quick deployment for testing and demonstration.

3. [**Defang BYOC**](https://docs.defang.io/docs/concepts/defang-byoc): Deployment to your own cloud infrastructure with Defang's tools.
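
All three paths start from the same compose file. A rough sketch of the commands, assuming the Docker and Defang CLIs are installed and configured (see the linked docs for Playground vs. BYOC specifics):

```bash
# Illustrative commands; provider setup and authentication follow the linked docs.
docker compose up      # run the full stack locally for development
defang compose up      # deploy the same compose project with Defang
```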

## Conclusion

The Django-Redis-Postgres-CrewAI sample project should give you a solid foundation for building intelligent web applications. By combining all these technologies, it addresses many common challenges in AI application development:

- Real-time user interfaces
- Scalable background processing
- Persistent storage of AI results
- Vector-based similarity search with embeddings for retrieval augmented generation (RAG)

Whether you're building a content generation tool, an AI-powered research assistant, or a recommendation system, this sample project gives you a head start with a robust architecture.

Happy coding!