This repository contains the complete implementation of the Final Degree Project (TFG):
“Development and Comparison between a Traditional Chatbot (RASA) and a Large Language Model (LLM) Conversational System”
The objective of this project is to design, deploy, and evaluate a unified conversational platform that integrates and compares:
- A traditional rule-based chatbot implemented with RASA
- A Large Language Model (LLM) executed locally using Ollama
Both systems are accessible through the same web interface and are evaluated under identical conditions.
The platform allows users to interact with two assistants:
- Alex — Traditional chatbot (RASA)
- Taylor — LLM-based chatbot (Ollama)
The user is not informed about the underlying technology of each assistant, ensuring an unbiased comparison focused on:
- Conversational quality
- Latency
- Context handling
- User perception
The entire system is fully containerized using Docker Compose.
The system is composed of the following main components:
- **Web Interface (HTML / JavaScript)**
  Provides the user interface to select an assistant and interact via text or voice.
- **Backend (FastAPI)**
  Acts as an orchestration layer:
  - Routes user messages to RASA or Ollama
  - Manages conversational context
  - Measures latency
  - Stores interaction metrics
- **RASA Server**
  Hosts the traditional chatbot, including intents, rules, stories, and dialogue policies.
- **RASA Actions Server**
  Executes custom Python actions required by RASA (context updates, extended responses, double-intent handling, etc.).
- **Ollama Server**
  Runs the local LLM used for the generative conversational system.
All services communicate through an internal Docker network.
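As a rough illustration of how the backend orchestrates the two systems, here is a minimal sketch of the routing and latency measurement. The RASA REST webhook and Ollama generate endpoints are the standard ones, but the route path, request fields, model name, and metrics schema below are assumptions for illustration, not the project's actual code (which lives in backend/app.py and its adapters):

```python
# Minimal sketch of the orchestration layer (hypothetical route and schema).
import json
import time

import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Internal Docker-network hostnames; model name is an assumed placeholder.
RASA_URL = "http://rasa:5005/webhooks/rest/webhook"
OLLAMA_URL = "http://ollama:11434/api/generate"

class ChatRequest(BaseModel):
    assistant: str   # "alex" -> RASA, "taylor" -> LLM
    sender_id: str
    message: str

@app.post("/api/chat")
async def chat(req: ChatRequest):
    start = time.perf_counter()
    async with httpx.AsyncClient(timeout=60.0) as client:
        if req.assistant == "alex":
            # RASA REST channel: {"sender", "message"} -> list of bot messages
            r = await client.post(
                RASA_URL, json={"sender": req.sender_id, "message": req.message}
            )
            answer = "\n".join(m.get("text", "") for m in r.json())
        else:
            # Ollama generate API: non-streaming single response
            r = await client.post(
                OLLAMA_URL,
                json={"model": "llama3", "prompt": req.message, "stream": False},
            )
            answer = r.json().get("response", "")
    latency_s = time.perf_counter() - start

    # Log one JSON line per interaction (JSONL)
    with open("evaluation/exports/metrics.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps({"assistant": req.assistant,
                            "latency_s": round(latency_s, 3)}) + "\n")

    return {"answer": answer, "latency_s": latency_s}
```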
The repository is organized into two main directories: TFG_RASA and TFG-chatbots.
```
.
├── TFG_RASA/                    # Traditional chatbot (RASA)
│   ├── actions/
│   │   ├── __init__.py
│   │   └── actions.py           # Custom RASA actions
│   │
│   ├── data/
│   │   ├── nlu.yml              # Intents and training examples
│   │   ├── rules.yml            # Dialogue rules
│   │   └── stories.yml          # Conversation stories
│   │
│   ├── models/                  # Trained RASA models
│   ├── tests/                   # RASA test files
│   │
│   ├── config.yml               # RASA pipeline and policies
│   ├── domain.yml               # Intents, entities, slots, responses, actions
│   ├── endpoints.yml            # Action server configuration
│   ├── credentials.yml          # RASA channel credentials
│   ├── Dockerfile.actions       # Dockerfile for RASA Actions server
│   ├── requirements-actions.txt
│   └── .gitignore
│
├── TFG-chatbots/                # Main platform (dockerized)
│   ├── backend/
│   │   ├── app.py               # FastAPI backend
│   │   ├── adapters/            # RASA / LLM adapters
│   │   ├── context/             # Conversation context handling
│   │   ├── evaluation/          # Metrics and logging logic
│   │   ├── utils/               # Utility functions
│   │   ├── requirements.txt
│   │   └── Dockerfile
│   │
│   ├── evaluation/
│   │   ├── exports/
│   │   │   └── metrics.jsonl    # Logged interaction metrics
│   │   └── questionnaires/      # User evaluation questionnaires
│   │
│   ├── web/
│   │   ├── index.html           # Web interface
│   │   └── main.js              # Frontend logic
│   │
│   ├── docker-compose.yml       # System orchestration
│   ├── nginx.conf               # Nginx configuration
│   ├── .env                     # Environment variables
│   └── README.md
```
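For reference, the custom actions in TFG_RASA/actions/actions.py follow the standard rasa_sdk pattern. Below is a minimal sketch of that pattern; the action name and the "last_intent" slot are hypothetical examples, not the project's actual actions:

```python
# Sketch of a custom RASA action in the style of actions/actions.py.
# Action name and slot ("last_intent") are hypothetical.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

class ActionUpdateContext(Action):
    def name(self) -> Text:
        return "action_update_context"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Read the intent RASA detected for the latest user message
        last_intent = tracker.latest_message.get("intent", {}).get("name")
        dispatcher.utter_message(text=f"Noted (intent: {last_intent}).")
        # Persist it in a slot so later turns can use the context
        return [SlotSet("last_intent", last_intent)]
```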
Prerequisites:

- Docker
- Docker Compose
- **Open Docker Desktop**
  Make sure Docker Desktop is running without errors.

- **Start all services**
  From PowerShell in the project folder:

  ```
  cd C:\Users\fjpm2\Desktop\robot-rasa-vs-llm\TFG-chatbots
  docker compose up -d --build
  ```

  This launches:
  - Ollama (LLM model)
  - RASA and rasa-actions
  - Backend (FastAPI)
  - Web (Nginx interface)

  Once all services are healthy, open:
  - http://127.0.0.1:8080/
  - http://127.0.0.1:8080/api/docs

- **Stop everything**

  ```
  docker compose down
  ```

- **Update code and rebuild images**
  (useful after modifying app.py, rasa_client.py, etc.)

  ```
  docker compose down
  docker compose up -d --build --force-recreate
  ```

- **Check status and logs**

  ```
  docker compose ps
  docker compose logs backend --tail=50
  docker compose logs rasa-actions --tail=50
  ```

- **If something fails**
  Quickly check service health:

  ```
  curl.exe http://127.0.0.1:5055/health   # action server
  curl.exe http://127.0.0.1:5005/status   # RASA
  ```
✅ Notes

- Metrics data is saved to TFG-chatbots/backend/evaluation/exports/metrics.jsonl.
- There is no need to activate virtual environments or run Python commands manually.
- To shut everything down and free memory: `docker compose down -v`
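For example, the logged metrics can be summarized offline with a few lines of Python. This is only a sketch: the field names ("assistant", "latency_s") are assumptions about the JSONL schema, not a confirmed format:

```python
# Sketch: mean latency per assistant from the metrics log.
# Field names are assumed, not taken from the project's actual schema.
import json
from collections import defaultdict

totals = defaultdict(float)
counts = defaultdict(int)

with open("TFG-chatbots/backend/evaluation/exports/metrics.jsonl",
          encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        totals[rec["assistant"]] += rec["latency_s"]
        counts[rec["assistant"]] += 1

for name, total in totals.items():
    print(f"{name}: {total / counts[name]:.3f} s mean latency")
```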
Quick command reference:

- Shut down: `docker compose down`
- Start: `docker compose up -d`
- Start with a rebuild: `docker compose up -d --build`
- Check status: `docker compose ps`
- View logs:

  ```
  docker compose logs rasa-actions --tail=50
  docker compose logs rasa --tail=50
  docker compose logs backend --tail=50
  ```
📄 Academic Context

This project was developed as the Final Degree Project (TFG) in Telecommunications Engineering at the Universidad Politécnica de Madrid (ETSIT), focusing on conversational AI systems and comparative evaluation.
👤 Author

Francisco Javier Payá Martínez
Universidad Politécnica de Madrid (ETSIT)