Repository for a research project comparing a classical chatbot based on RASA vs an advanced LLM chatbot in social robots.


robot-rasa-vs-llm

ChatTFG — Development and Comparison of Conversational Systems

This repository contains the complete implementation of the Final Degree Project (TFG):

“Development and Comparison between a Traditional Chatbot (RASA) and a Large Language Model (LLM) Conversational System”

The objective of this project is to design, deploy, and evaluate a unified conversational platform that integrates and compares:

  • A traditional rule-based chatbot implemented with RASA
  • A Large Language Model (LLM) executed locally using Ollama

Both systems are accessible through the same web interface and are evaluated under identical conditions.


📌 General Overview

The platform allows users to interact with two assistants:

  • Alex — Traditional chatbot (RASA)
  • Taylor — LLM-based chatbot (Ollama)

The user is not informed about the underlying technology of each assistant, ensuring an unbiased comparison focused on:

  • Conversational quality
  • Latency
  • Context handling
  • User perception

The entire system is fully containerized using Docker Compose.
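Latency is one of the metrics the platform records for every exchange. As a minimal sketch of how per-message latency could be captured (the `measure_latency` helper and its signature are illustrative, not taken from the repository):

```python
import time
from typing import Callable, Tuple

def measure_latency(handler: Callable[[str], str], message: str) -> Tuple[str, float]:
    """Run a chatbot handler on one message and return (reply, latency in seconds)."""
    start = time.perf_counter()
    reply = handler(message)
    latency = time.perf_counter() - start
    return reply, latency

# Example with a dummy handler standing in for the RASA or Ollama adapter:
reply, latency = measure_latency(lambda msg: f"echo: {msg}", "hello")
```

Because both assistants are called through the same wrapper, their latencies are measured under identical conditions, matching the evaluation setup described above.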


🧱 System Architecture

The system is composed of the following main components:

  • Web Interface (HTML / JavaScript)
    Provides the user interface to select an assistant and interact via text or voice.

  • Backend (FastAPI)
    Acts as an orchestration layer:

    • Routes user messages to RASA or Ollama
    • Manages conversational context
    • Measures latency
    • Stores interaction metrics
  • RASA Server
    Hosts the traditional chatbot, including intents, rules, stories, and dialogue policies.

  • RASA Actions Server
    Executes custom Python actions required by RASA (context updates, extended responses, double-intent handling, etc.).

  • Ollama Server
    Runs the local LLM used for the generative conversational system.

All services communicate through an internal Docker network.
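As a sketch of the orchestration step, the backend can map each assistant to its service endpoint. The hostnames below assume the Docker Compose service names on the internal network, and the model name is an assumption; RASA's REST channel (`/webhooks/rest/webhook`) and Ollama's `/api/chat` are the standard HTTP APIs of those projects, though the actual adapter code in this repository may differ:

```python
from typing import Dict, Any

# Service URLs assume Docker Compose service names on the internal network.
RASA_URL = "http://rasa:5005/webhooks/rest/webhook"   # RASA REST channel
OLLAMA_URL = "http://ollama:11434/api/chat"           # Ollama chat API

def build_request(assistant: str, sender_id: str, message: str) -> Dict[str, Any]:
    """Return the target URL and JSON payload for the chosen assistant."""
    if assistant == "Alex":        # traditional chatbot (RASA)
        return {"url": RASA_URL,
                "json": {"sender": sender_id, "message": message}}
    if assistant == "Taylor":      # LLM chatbot (Ollama)
        return {"url": OLLAMA_URL,
                "json": {"model": "llama3",  # model name is an assumption
                         "messages": [{"role": "user", "content": message}],
                         "stream": False}}
    raise ValueError(f"unknown assistant: {assistant}")
```

The FastAPI backend would then POST the returned payload to the returned URL and normalize both responses into a common shape before sending them to the web interface.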


📂 Repository Structure

The repository is organized into two main directories: TFG_RASA and TFG-chatbots.

.
├── TFG_RASA/                     # Traditional chatbot (RASA)
│   ├── actions/
│   │   ├── __init__.py
│   │   └── actions.py             # Custom RASA actions
│   │
│   ├── data/
│   │   ├── nlu.yml                # Intents and training examples
│   │   ├── rules.yml              # Dialogue rules
│   │   └── stories.yml            # Conversation stories
│   │
│   ├── models/                    # Trained RASA models
│   ├── tests/                     # RASA test files
│   │
│   ├── config.yml                 # RASA pipeline and policies
│   ├── domain.yml                 # Intents, entities, slots, responses, actions
│   ├── endpoints.yml              # Action server configuration
│   ├── credentials.yml            # RASA channel credentials
│   ├── Dockerfile.actions         # Dockerfile for RASA Actions server
│   ├── requirements-actions.txt
│   └── .gitignore
│
├── TFG-chatbots/                  # Main platform (dockerized)
│   ├── backend/
│   │   ├── app.py                 # FastAPI backend
│   │   ├── adapters/              # RASA / LLM adapters
│   │   ├── context/               # Conversation context handling
│   │   ├── evaluation/            # Metrics and logging logic
│   │   ├── utils/                 # Utility functions
│   │   ├── requirements.txt
│   │   └── Dockerfile
│   │
│   ├── evaluation/
│   │   ├── exports/
│   │   │   └── metrics.jsonl       # Logged interaction metrics
│   │   └── questionnaires/         # User evaluation questionnaires
│   │
│   ├── web/
│   │   ├── index.html              # Web interface
│   │   └── main.js                 # Frontend logic
│   │
│   ├── docker-compose.yml          # System orchestration
│   ├── nginx.conf                  # Nginx configuration
│   ├── .env                        # Environment variables
│   └── README.md
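Interaction metrics are appended to `evaluation/exports/metrics.jsonl`, one JSON object per line. A sketch of how one record per exchange could be written and read back, assuming illustrative field names (the repository's actual schema may differ):

```python
import json
import time
from pathlib import Path

def log_metric(path: Path, assistant: str, latency_s: float,
               user_msg: str, bot_msg: str) -> None:
    """Append one interaction record as a single JSON line."""
    record = {"ts": time.time(), "assistant": assistant,
              "latency_s": latency_s, "user": user_msg, "bot": bot_msg}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def load_metrics(path: Path) -> list:
    """Read all records back from a JSONL metrics file."""
    return [json.loads(line)
            for line in path.read_text(encoding="utf-8").splitlines() if line]
```

The JSONL format lets the backend append records without rewriting the file, and the evaluation scripts can load the full log in one pass.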

🚀 Deployment Instructions

Prerequisites

  • Docker
  • Docker Compose

  1. Open Docker Desktop
     Make sure Docker Desktop is running and error-free.

  2. Start all services

     From PowerShell, in the project folder:

     cd C:\Users\fjpm2\Desktop\robot-rasa-vs-llm\TFG-chatbots
     docker compose up -d --build

     This launches:

       • Ollama (LLM model)
       • RASA and rasa-actions
       • Backend (FastAPI)
       • Web (Nginx interface)

     Once all services are healthy, open: http://127.0.0.1:8080/ and http://127.0.0.1:8080/api/docs

  3. Stop everything

     docker compose down

  4. Update code and rebuild images (useful after modifying app.py, rasa_client.py, etc.)

     docker compose down
     docker compose up -d --build --force-recreate

  5. Check status and logs

     docker compose ps
     docker compose logs backend --tail=50
     docker compose logs rasa-actions --tail=50

  6. If something fails

     Quickly check service health:

     curl.exe http://127.0.0.1:5055/health   # action server
     curl.exe http://127.0.0.1:5005/status   # RASA

✅ Notes

Metrics data is stored in TFG-chatbots/backend/evaluation/exports/metrics.jsonl. There is no need to activate virtual environments or run Python commands manually. To shut everything down and free memory: docker compose down -v


Quick reference:

Stop: docker compose down

Start: docker compose up -d

Start and rebuild: docker compose up -d --build

Check status: docker compose ps

View logs:
  docker compose logs rasa-actions --tail=50
  docker compose logs rasa --tail=50
  docker compose logs backend --tail=50

📄 Academic Context

This project was developed as the Final Degree Project (TFG) in Telecommunications Engineering at the Universidad Politécnica de Madrid (ETSIT), focusing on conversational AI systems and their comparative evaluation.

👤 Author

Francisco Javier Payá Martínez
Universidad Politécnica de Madrid — ETSIT
