- Docker and Docker Compose installed on your machine
- Git to clone the repository
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd team-rest-in-peace
  ```
- Create a `.env` file in the root directory:

  ```bash
  cp .env.template .env
  ```

  Then edit the `.env` file and configure the following environment variables:

  - `POSTGRES_PASSWORD`
  - `DF_BUNDESTAG_API_KEY` (-> https://dip.bundestag.de/%C3%BCber-dip/hilfe/api)
  - `DF_DB_PASSWORD`
  - `NLP_DB_PASSWORD`
  - `NLP_GENAI_API_KEY` (-> Gemini API key)
  - `BS_DB_PASSWORD`
  - `NS_MAIL_PASSWORD` (-> Google App password)
  - `NS_DB_PASSWORD`
  - `GRAFANA_ADMIN_PASSWORD`
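A filled-in `.env` might look like the sketch below. Every value shown is a placeholder, not a working credential or key:

```env
POSTGRES_PASSWORD=change-me
DF_BUNDESTAG_API_KEY=your-dip-api-key
DF_DB_PASSWORD=change-me
NLP_DB_PASSWORD=change-me
NLP_GENAI_API_KEY=your-gemini-api-key
BS_DB_PASSWORD=change-me
NS_MAIL_PASSWORD=your-google-app-password
NS_DB_PASSWORD=change-me
GRAFANA_ADMIN_PASSWORD=change-me
```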
- Build and start the services:

  ```bash
  docker compose up --build
  ```
- Access the application:

  - The client application will be available at http://localhost:80
  - The Grafana dashboard will be available at http://localhost:9091
  - The OpenAPI documentation (Swagger UI) will be available at http://localhost:80/docs
- kubectl installed and configured to access your Kubernetes cluster
- Helm 3.x installed
- Access to a Kubernetes cluster (local minikube, cloud provider, etc.)
- Ensure kubectl is configured:

  ```bash
  kubectl version
  ```
- Create a custom Helm values file: Helm uses a file called `values.yaml` to configure how your application is deployed. To customize the deployment for your environment (for example, to set your own domain, database passwords, or API keys), create a new file named `helm/policy-watch/values-custom.yaml` that contains only the values you want to override from the default configuration.

  - Create `helm/policy-watch/values-custom.yaml` and specify only the settings you want to change. For example:

    ```yaml
    url: your.custom.domain.com
    db:
      password: your-db-password
    genaiService:
      apiKey: "your_genai_api_key"
    ```

  - Leave out any values you do not wish to override; Helm will use the defaults from `values.yaml`.

  When you deploy with Helm, use this custom file to provide your configuration.
- Deploy the application using Helm:

  ```bash
  helm upgrade policy-watch helm/policy-watch \
    --install \
    --create-namespace \
    --namespace <your-namespace> \
    -f helm/policy-watch/values-custom.yaml
  ```
- Verify the deployment:

  ```bash
  kubectl get pods -n <your-namespace>
  kubectl get services -n <your-namespace>
  kubectl get ingress -n <your-namespace>
  ```
- Access the application: The application will be available at your configured domain.
- AWS CLI installed and configured
- Terraform installed
- Ansible installed
- SSH key for accessing EC2 instances
- Configure AWS credentials:

  ```bash
  aws configure
  ```

  Alternatively, ensure your AWS credentials are set up in `~/.aws/credentials`:

  ```ini
  [default]
  aws_access_key_id=YOUR_ACCESS_KEY
  aws_secret_access_key=YOUR_SECRET_KEY
  ```
- Create a `.env` file in the root directory as described in the Local Setup section.
- Deploy the infrastructure using Terraform:

  ```bash
  cd terraform
  terraform init
  terraform apply -auto-approve
  cd ..
  ```
- Create a new `.env.aws` file:

  ```bash
  NEW_URL=$(sed -n '2p' ansible/inventory.ini | sed 's/^/http:\/\//; s/$/\//')
  sed "s|^CLIENT_BASE_URL=.*|CLIENT_BASE_URL=${NEW_URL}|" .env > .env.aws
  ```
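The one-liner above reads line 2 of the Ansible inventory (the EC2 host's IP) and wraps it as a URL. Here is the same transformation demonstrated on a throwaway file with a hypothetical documentation IP (203.0.113.10 is not a real host):

```shell
# Fake two-line inventory: line 2 holds the host IP.
printf '[web]\n203.0.113.10\n' > /tmp/demo-inventory.ini

# Same seds as above: prepend "http://", append a trailing "/".
NEW_URL=$(sed -n '2p' /tmp/demo-inventory.ini | sed 's/^/http:\/\//; s/$/\//')
echo "$NEW_URL"   # http://203.0.113.10/
```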
- Deploy the application using Ansible:

  - Replace `[SSH_PKEY_PATH]` with the path to your SSH private key for AWS (e.g., `~/.ssh/labuser.pem`).

  ```bash
  cd ansible
  ansible-playbook --private-key=[SSH_PKEY_PATH] playbook.yml
  cd ..
  ```
- Access the application:

  - The application will be available at the IP address listed in `ansible/inventory.ini`.
  - You can view the IP with:

    ```bash
    source <(grep '^CLIENT_BASE_URL=' .env.aws)
    echo "${CLIENT_BASE_URL}"
    ```
To monitor the system locally:
- Grafana is available at http://localhost:9091
- Prometheus is available at http://localhost:9090
To access the Grafana dashboard of the Kubernetes cluster, run:

```bash
kubectl port-forward service/grafana-service 9091:9091 -n team-rest-in-peace-monitoring
```

and then access it via http://localhost:9091.
- SpringBoot RIP Dashboard – Monitors the three Spring Boot applications
- FastAPI RIP Dashboard – Monitors the GenAI FastAPI service
Tip: When testing locally, set the Grafana time window to the last 5 minutes for the most relevant metrics.
- High CPU Usage (> 80%) – Triggered when any Spring Boot service exceeds 80% CPU usage.
- HTTP 5xx Responses – Triggered when an instance returns a 5xx (server error) HTTP status code.
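As a rough sketch, the CPU alert described above could be expressed as a Prometheus alerting rule like the following. The metric name `process_cpu_usage` and the `for` duration are assumptions (based on the default Micrometer gauge Spring Boot exposes), not the project's actual rule definition:

```yaml
groups:
  - name: spring-boot-alerts
    rules:
      - alert: HighCpuUsage
        # process_cpu_usage is a 0.0-1.0 gauge; fire when it stays above 80%
        expr: process_cpu_usage > 0.8
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Spring Boot service CPU above 80%"
```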
Due to time constraints, we created two separate dashboards instead of consolidating metrics from both systems into a unified view.
- **Build Docker Images** - Automatically builds and pushes Docker images for all services to the GitHub Container Registry when code is pushed to the main branch.
- **CI for Browsing Service** - Runs linting and testing for the Java-based browsing service using Gradle whenever changes are made to the browsing-service directory.
- **CI for Client** - Validates the frontend client by running linting and tests using Node.js and npm when client code changes.
- **CI for Data Fetching Service** - Performs quality checks and testing for the Java-based data fetching service using Gradle on pull requests.
- **CI for GenAI Service** - Runs Python code formatting checks with Black and executes tests using pytest for the AI/GenAI service.
- **CI for Notification Service** - Validates the Java-based notification service through Gradle linting and testing on code changes.
- **Deploy to AWS [DEPRECATED]** - Automatically deploys the application to EC2 using Docker Compose, copying configuration files and starting containers on the target instance.
- **Deploy to K8s** - Deploys the application to Kubernetes using Helm charts after successful Docker image builds, managing configuration and secrets injection.
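For orientation, a per-service CI workflow like the ones listed above typically has the following minimal shape. This is a hypothetical sketch, not the repository's actual workflow file; the workflow name, trigger paths, and script names are assumptions:

```yaml
# .github/workflows/ci-client.yml (hypothetical)
name: CI for Client
on:
  pull_request:
    paths:
      - "client/**"
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: client
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run lint
      - run: npm test
```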
- Component Diagram
- Use Case Diagram
- Database Diagram
- Analysis Object Model

| Student | Responsibilities |
|---|---|
| Ramona | Developed the client frontend, implemented Kubernetes and Helm charts, managed the browsing service, contributed to the CD pipeline, implemented tests for her components, and co-created the presentation slides. |
| Marc | Built the data fetching and notification services, set up Docker Compose, led the AWS deployment using Terraform and Ansible, maintained the CI pipeline, and implemented tests for his components. |
| Bene | Implemented the GenAI service, configured system monitoring with Grafana and Prometheus including the Helm monitoring setup, created the OpenAPI documentation, implemented tests for his components, and co-created the presentation slides. |
| All | Collaboratively worked on the problem statement, overall project management, diagrams, service interfaces, and testing strategy. While responsibilities were divided, all team members supported each other across components to ensure smooth integration and delivery. |