This repository is part of the "DevOps: Engineering for Deployment and Operations" course at the Technical University of Munich, taught by Prof. Dr. Stephan Krusche and Prof. Dr. Ingo Weber.
This project leverages GenAI to create smart summaries of lecture materials. By automating the summarization process, it aims to provide students with a structured and up-to-date overview of their course content, letting them focus on understanding and applying concepts rather than on summarizing.
- DevOps 2025 - Team ReleaseRangers
- Section Overview
- Our Team
- Student Responsibilities
- Subsystem Ownership
- Key Features
- Project Overview
- Tech Stack
- Quick Local Setup (Recommended)
- Individual Setup Instructions
- Additional Documentation
- Database Schema
- API Documentation
- CI/CD Instructions
- Monitoring & Alerting
- Monitoring Instructions
- Testing Instructions
- Code Quality: SpotBugs & Checkstyle & pnpm Audit
- Deployment Instructions
This project is maintained by:
- Florian Charrot (FC)
- Jonathan Müller (JM)
- Luis Leutbecher (LL)
- Florian Charrot (FC): GenAI microservice, LLM Integration, Kubernetes Deployment
- Jonathan Müller (JM): Frontend Development, Authentication Service, Database Design, Client Testing, Terraform and Ansible Setup, AWS Deployment, GenAI Service Testing
- Luis Leutbecher (LL): Spring Boot Backend, GitHub Actions, CI/CD Pipeline, Spring Boot Testing, Docker Setup, Monitoring and Observation
- Client: Jonathan Müller (JM)
- Authentication Service: Jonathan Müller (JM)
- GenAI Service: Florian Charrot (FC)
- Course Management Service: Luis Leutbecher (LL)
- Upload Service: Luis Leutbecher (LL)
- Upload all lecture material in one place
- Get smart summaries of your lecture material categorized into learning chapters
- Integrate new content throughout the semester
Our application helps students study efficiently by leveraging LLM-generated smart summaries of their lecture material. Our vision is to create a single place where students can get a summarized overview of the lecture material needed for exam preparation. We want to enable students to easily add new content throughout the semester, which is continuously re-summarized so that they always have an up-to-date overview of the current course content.
- Problem Statement: Learn about the motivation, main functionality, and user scenarios for ReleaseRangers.
- System Architecture: See the technical structure, technologies, and initial backlog for the project.
- Frontend: Next.js (React)
- Backend: Spring Boot
- GenAI Integration: LangChain
- Database: PostgreSQL
- Node.js (v22 or later)
- Java JDK 21+
- Python 3.x
- Maven
- Docker and Docker Compose
- Git
- For deployment:
- AWS
- AWS CLI
- Terraform
- Ansible
- Kubernetes
- kubectl
- Helm
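Before starting the setup, a quick way to sanity-check that the tools above are installed is a small POSIX shell helper. This is a minimal sketch: it only checks that each tool is on `PATH`, not that it meets the version requirements listed above.

```shell
#!/bin/sh
# check_tools: echo any of the given tools that are not found on PATH.
# Presence check only -- it does not verify versions (e.g. Node v22+, JDK 21+).
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  echo "$missing"
}

# Example: check the core local-development prerequisites.
check_tools node java python3 mvn docker git
```

If the last line prints any tool names, install those before continuing.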
To get started, clone the repository:
```shell
git clone https://github.com/AET-DevOps25/team-releaserangers.git
```

And navigate into the project directory:

```shell
cd team-releaserangers
```
⚠️ Note: Make sure to configure the `JWT_SECRET` and `LLM_API_KEY` environment variables before running the project locally. These are required for authentication and GenAI features to work. You can use the setup script to create or update the necessary `.env` files.

Attention: The JWT secret must be identical in the `.env` file and in `authentication-service/src/main/resources/application.properties`. Currently the authentication service reads `JWT_SECRET` from the `.env` file, so this only matters if you make manual changes to the `application.properties` file.
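For reference, the common Spring Boot pattern for this is a property that resolves its value from the environment. The exact property key below is an assumption for illustration; check the actual `application.properties` of the authentication service.

```properties
# Hypothetical fragment of authentication-service/src/main/resources/application.properties.
# The key name is an assumption; the point is the ${JWT_SECRET} placeholder,
# which Spring resolves from the environment (populated from .env via Docker Compose).
jwt.secret=${JWT_SECRET}
```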
The easiest way to configure your environment for local development is to use the provided setup script:
This script considers the .env.example file which contains all necessary
environment variables and creates the respective .env files for the app.
Hence, you can always adapt the .env.example file to your needs and run the script again
to update your .env files.
To give you a head start, Google Gemini is preconfigured as the GenAI provider, so for this script-based setup you need a Google Gemini API key. You can get a free-tier Gemini API key from Google AI Studio.
First, make the script executable:

```shell
chmod +x setup-env.sh
```

Then run the script and follow the prompts to create and configure the necessary `.env` files:

```shell
./setup-env.sh
```

You can manually adapt the `.env` file or the `genai/.env` file to change the GenAI provider or other settings.
To use any GenAI provider other than Google Gemini, remove the `FILE_PARSING` variable from the `genai/.env` file.
For example, to use OpenWebUI instead of Gemini, change the variables in the `genai/.env` file to:

```shell
LLM_API_URL=https://gpu.aet.cit.tum.de/api/chat/completions
LLM_API_KEY=<your-openwebui-api-key>
LLM_MODEL=llama3
LLM_BACKEND=openwebui
```

Note: Requests will then take longer, because the pipeline first extracts the text from the PDF and then queries the LLM; depending on the size of the PDF, this can add a few seconds.

To start the entire application stack (client, server, database, etc.) locally, simply run:

```shell
docker compose up --build -d
```

This will build and start all services as defined in the docker-compose.yml file.
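After the stack comes up, containers may need a few seconds before they accept connections. A small retry helper can be used to wait for a service before running tests or opening the app. This is a sketch; the URL and port in the example are assumptions, so check docker-compose.yml for the real ones.

```shell
#!/bin/sh
# retry: run a command up to N times, pausing one second between attempts,
# and succeed as soon as the command does.
retry() {
  attempts=$1; shift
  while [ "$attempts" -gt 0 ]; do
    if "$@"; then return 0; fi
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}

# Example (assumed URL/port): wait up to 30s for the client to respond.
# retry 30 curl -fsS http://localhost:3000
```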
For step-by-step instructions on setting up and running each service (client, server, GenAI/LLM service, and database) individually, see the Start Individual Services Guide. This guide covers environment variable setup, dependency installation, and how to start each service separately for development or testing.
All diagrams giving an insight into our architecture are available as PDFs in the docs/models folder and can be checked out for more details.
- ArchitectureOverview.drawio.pdf: Offers a comprehensive view of the overall system architecture, showing how the main components interact and communicate.
- SubsystemDecomposition.drawio.pdf: Breaks down the system into its core subsystems, detailing the responsibilities and boundaries of each module/microservice.
- AnalysisObjectModel.drawio.pdf: Presents the object model used for analysis, including key entities and their relationships.
- UseCaseDiagram.drawio.pdf: Visualizes the main user interactions and use cases supported by the application.
You can find and view these diagrams in the docs/models folder. They provide valuable insights into the design, data flow, and user experience of the ReleaseRangers platform.
For a detailed list of user stories and requirements, please refer to the User Stories document. This outlines the key functionalities and user interactions that the application supports. It also represents the basis for our initial product backlog for development and testing efforts, ensuring that we meet the needs of our users effectively.
Our application uses PostgreSQL as the primary database with tables distributed across three microservices:
- DBML File: database_schema.dbml - Import this into dbdiagram.io for interactive editing
- users (Authentication Service): User accounts and authentication
- courses (Course Management): Course information and metadata
- chapters (Course Management): Individual learning chapters within courses
- uploaded_files (Upload Service): File uploads associated with courses
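The table list above can be sketched in DBML roughly as follows. The column names here are illustrative assumptions; the authoritative schema is in database_schema.dbml.

```dbml
// Illustrative sketch only -- see database_schema.dbml for the real schema.
Table users {
  id varchar [pk]
  email varchar
}

Table courses {
  id varchar [pk]
  user_id varchar [ref: > users.id] // owning user
}

Table chapters {
  id varchar [pk]
  course_id varchar [ref: > courses.id]
}

Table uploaded_files {
  id varchar [pk]
  course_id varchar [ref: > courses.id]
}
```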
The entire API is defined using OpenAPI (see docs/api/openapi.yml).
You can view the Swagger UI via GitHub Pages:
- Open your browser and navigate to https://aet-devops25.github.io/team-releaserangers/api/index.html
This provides a complete, interactive overview of all endpoints, request/response formats, and authentication details.
For detailed documentation on the CI/CD workflows, please refer to the CI/CD workflow documentation.
We use Prometheus and Grafana for monitoring metrics and alerting, with Loki and Promtail for log aggregation. Alerts are configured in Grafana and can send notifications via email when certain conditions are met (e.g., upload errors). For full details, see Monitoring_Alerting.md.
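As an illustration, an alert like the upload-error example could look roughly like the following Prometheus-style rule. The metric and label names below are assumptions (Spring Boot's Micrometer request counter, filtered to 5xx responses); the actual alerts are configured in Grafana, as described in Monitoring_Alerting.md.

```yaml
# Illustrative only -- the real alerts live in the Grafana configuration.
groups:
  - name: upload-alerts
    rules:
      - alert: UploadErrors
        # Fires if any request returned a 5xx status in the last 5 minutes.
        expr: increase(http_server_requests_seconds_count{status=~"5.."}[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Upload requests are failing
```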
For more information on how to set up and use monitoring and alerting in this project, please refer to the Monitoring Instructions.
Check the client setup from the Start Individual Services Guide and install the necessary dependencies for the client if not already done:
```shell
cd client
pnpm install
```

To run the Playwright end-to-end (E2E) tests for the client, you have to start the whole stack using Docker Compose. It is advised to use a fresh database to avoid conflicts with existing data.

```shell
docker compose up --build -d
```

Then, still in the client directory, you can run the tests:

```shell
pnpm test
```

For a nice UI you can run the tests in headed mode:

```shell
pnpm test:ui
```

To run tests for the server and each microservice, use Maven. Each microservice has its own set of tests, which you can run individually or for the entire server.
For the entire server, navigate to the server directory and run:
```shell
cd server
mvn clean package
```

For individual microservices, navigate to the specific service directory and run:

```shell
cd server/authentication-service
mvn clean package
cd ../coursemgmt-service
mvn clean package
cd ../upload-service
mvn clean package
```

For the GenAI service tests, please refer to the GenAI Service Testing Guide. This guide provides detailed instructions on how to set up and run tests for this service.
SpotBugs and Checkstyle are integrated into the Maven build lifecycle for the server and each microservice (authentication-service, coursemgmt-service, upload-service).
You can run these tools manually or as part of the Maven build:

- To run both SpotBugs and Checkstyle for all modules:

  ```shell
  cd server
  mvn verify
  ```

  This will execute both plugins as part of the `verify` phase.

- To run only SpotBugs:

  ```shell
  mvn spotbugs:check
  ```

- To run only Checkstyle:

  ```shell
  mvn checkstyle:check
  ```

- To run them for a specific microservice:

  ```shell
  cd server/<microservice-folder>
  mvn verify
  ```

  Replace `<microservice-folder>` with `authentication-service`, `coursemgmt-service`, or `upload-service`.

- Checkstyle runs during the `validate` and `verify` phases.
- SpotBugs runs during the `verify` phase.

If you run `mvn verify`, both tools will be executed and any violations will fail the build.
To ensure the client dependencies are secure and up to date, you can use `pnpm audit` to check for vulnerabilities:

- Navigate to the `client` directory:

  ```shell
  cd client
  ```

- Run the audit command:

  ```shell
  pnpm audit
  ```

This will analyze the installed dependencies and report any known vulnerabilities.
To ensure code quality and consistency, we have set up a pre-commit hook. For more information on that, check out the Pre-commit Hook Documentation.
For deployment instructions for AWS, please refer to the AWS Deployment Guide. This guide provides step-by-step instructions on how to set up Terraform and Ansible for deploying the application on AWS. For Kubernetes deployment, you can refer to the Kubernetes Deployment Guide.

