A recommendation system prototype for Computer Science education activities that leverages automated content processing and category-based scoring algorithms to support teachers in activity selection and lesson planning.
This system was developed as part of a Master's thesis at the Technical University of Munich, Chair of Applied Education Technologies. The architecture prioritises transparency and explainability, enabling teachers to understand how recommendations are generated rather than relying on opaque black-box algorithms.
LEARN-Hub addresses the challenge of finding appropriate educational activities for computer science courses by implementing an intelligent recommendation engine. The system processes educational activity documents, analyses their pedagogical characteristics, and generates personalised recommendations based on teacher requirements such as target age group, available resources, and learning objectives aligned with Bloom's Taxonomy.
The recommendation engine implements content-based filtering with category-based scoring, offering an explainable alternative to collaborative filtering approaches. Teachers receive detailed scoring breakdowns across age appropriateness, topic relevance, duration fit, Bloom alignment, and series cohesion, fostering agency and trust in the recommendation process.
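The shape of such a category-based score can be sketched as a weighted sum that returns its per-category breakdown alongside the total. This is a minimal illustration only; the class, category names, and weights below are assumptions, not the project's actual API (see docs/core/recommendation-engine.md for the real design):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of category-based scoring: each category contributes a
// weighted sub-score in [0, 1], and the breakdown is returned alongside the
// total so the client can explain why an activity was recommended.
public class CategoryScorer {

    // Illustrative weights; a real implementation would load these from config.
    private static final Map<String, Double> WEIGHTS = Map.of(
            "ageFit", 0.30,
            "topicRelevance", 0.30,
            "durationFit", 0.15,
            "bloomAlignment", 0.15,
            "seriesCohesion", 0.10);

    // Combines per-category sub-scores into a weighted total plus a breakdown.
    public static Map<String, Double> score(Map<String, Double> subScores) {
        Map<String, Double> breakdown = new LinkedHashMap<>();
        double total = 0.0;
        for (Map.Entry<String, Double> e : WEIGHTS.entrySet()) {
            double weighted = e.getValue() * subScores.getOrDefault(e.getKey(), 0.0);
            breakdown.put(e.getKey(), weighted);
            total += weighted;
        }
        breakdown.put("total", total);
        return breakdown;
    }

    public static void main(String[] args) {
        System.out.println(score(Map.of(
                "ageFit", 1.0, "topicRelevance", 0.8,
                "durationFit", 0.5, "bloomAlignment", 1.0,
                "seriesCohesion", 0.0)));
    }
}
```

Returning the breakdown rather than a bare score is what makes the result explainable: the UI can show each category's contribution instead of a single opaque number.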
The system implements a three-tier containerised web application architecture, as described in the System Design Document:
Client Subsystem: A React single-page application provides an interactive user interface for teachers and administrators. The client implements role-based access control, supports both light and dark themes, and maintains session-based authentication using sessionStorage for enhanced security on shared school computers.
Server Subsystem: A Spring Boot REST API server orchestrates the core application logic through specialised internal systems. The Recommendation System encapsulates the algorithmic intelligence. The User System manages identity through user, history, and favourites services. The Document System oversees content ingestion via PDF processing and LLM-assisted metadata extraction using Spring AI.
Data Layer: PostgreSQL serves as the primary data store, managing activities, user accounts, search history, and favourites. The database schema supports complex relationships between activities, topics, and user preferences whilst maintaining referential integrity.
Containerisation: Docker Compose orchestrates three containerised services on a single host, connected via an internal bridge network. The deployment includes health checks and dependency chains to ensure proper sequencing during startup.
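The health checks and dependency chain might look like the following Compose fragment. This is an illustrative sketch only; the service names and health-check command are assumptions, not the contents of the actual docker/compose.yml:

```yaml
# Hypothetical fragment: the server waits for a healthy database,
# and the client waits for the server, enforcing startup order.
services:
  postgres:
    image: postgres:17
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  server:
    depends_on:
      postgres:
        condition: service_healthy
  client:
    depends_on:
      - server
```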
The architecture addresses several key quality attributes:
- Transparency (QA3): Category-based scoring with detailed breakdowns enables teachers to understand recommendations
- Maintainability (QA7): Clear, explicit code with dependency injection favours clarity over convenience
- Performance (QA5, QA6): Two-stage scoring pipeline and hard filtering ensure sub-three-second response times
- Extensibility (QA1, QA8): Comprehensive OpenAPI documentation enables integration with external learning platforms
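The performance attribute rests on the two-stage pipeline: cheap hard filters discard activities that can never match before the scoring stage runs. A minimal sketch, assuming hypothetical names and a deliberately simplified scoring function (the real engine scores across all categories):

```java
import java.util.Comparator;
import java.util.List;

// Illustrative two-stage pipeline: stage 1 applies cheap hard filters so
// expensive scoring only runs on plausible candidates; stage 2 scores and
// ranks the survivors. Names and fields are hypothetical.
public class RecommendationPipeline {

    record Activity(String title, int minAge, int maxAge, int minutes) {}

    // Stage 1: hard filter — drop activities that can never match.
    static boolean passesHardFilters(Activity a, int pupilAge, int availableMinutes) {
        return pupilAge >= a.minAge() && pupilAge <= a.maxAge()
                && a.minutes() <= availableMinutes;
    }

    // Stage 2: score the remaining candidates (simplified to duration fit).
    static double score(Activity a, int availableMinutes) {
        return (double) a.minutes() / availableMinutes;
    }

    static List<Activity> recommend(List<Activity> all, int pupilAge, int availableMinutes) {
        return all.stream()
                .filter(a -> passesHardFilters(a, pupilAge, availableMinutes))
                .sorted(Comparator.comparingDouble(
                        (Activity a) -> score(a, availableMinutes)).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Activity> pool = List.of(
                new Activity("Sorting unplugged", 8, 12, 45),
                new Activity("Graph colouring", 14, 18, 30),
                new Activity("Binary bracelets", 8, 12, 30));
        // For a 10-year-old with 45 minutes, "Graph colouring" is filtered
        // out by age before any scoring happens.
        recommend(pool, 10, 45).forEach(a -> System.out.println(a.title()));
    }
}
```

Filtering before scoring keeps the expensive stage proportional to the number of plausible candidates rather than the size of the whole activity catalogue.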
The docs/figures/ directory contains UML diagrams documenting the system architecture:
- Subsystem Decomposition (docs/figures/final-lucid-subsystem.svg): Internal components of the server and client subsystems
- Deployment Diagram (docs/figures/final-lucid-deployment.svg): Container topology, volumes, and external service dependencies
- Analysis Object Model (docs/figures/final-lucid-aom.svg): Domain entities and their relationships
See docs/dev-setup.md for the full development setup guide.
Quick start:
# Start PostgreSQL
docker compose -f docker/compose.yml up postgres -d
# Run migrations
make db-migrate
# Start server and client
make dev

Once running, access the system at:
- Client: http://localhost:3001
- Server API: http://localhost:5001
- API Documentation: http://localhost:5001/api/openapi/swagger
The application requires environment variables for API integrations, security keys, and database configuration:
cp example.env .env

Key configuration variables:
- LLM_API_KEY - API key for automated content processing
- JWT_SECRET_KEY - Secret for JWT token signing
- POSTGRES_DB_URI - PostgreSQL JDBC connection string
- EMAIL_* - Email service configuration for teacher verification
See example.env for a complete list of configurable variables.
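A filled-in .env might look like the following. Every value is a placeholder, and EMAIL_HOST is a hypothetical member of the EMAIL_* family; example.env remains the authoritative reference:

```
# Placeholder values only — copy example.env and supply real credentials.
LLM_API_KEY=your-llm-api-key
JWT_SECRET_KEY=change-me-to-a-long-random-secret
POSTGRES_DB_URI=jdbc:postgresql://localhost:5432/your-database
EMAIL_HOST=smtp.example.org
```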
User Documentation: https://ls1intum.github.io/LEARN-Hub/
Developer Documentation is organised in the docs/ directory by architectural layer:
- docs/dev-setup.md - Quick local development setup guide
- docs/core/recommendation-engine.md - Recommendation algorithm design, scoring methodology, and architectural decisions
- docs/server/server-architecture.md - Server architecture and design patterns
- docs/server/api.md - REST API endpoints and data models
- docs/server/server-cicd.md - Development workflow and deployment procedures
- docs/client/client-architecture.md - Client architecture and component design
- docs/client/api-integration.md - Client-server integration patterns
- docs/client/client-cicd.md - Build system and deployment
Server:
- Java 21 with Spring Boot 3.4.1
- Spring Data JPA with Hibernate ORM
- Flyway for database migrations
- PostgreSQL 17+ for relational data persistence
- Spring AI with Ollama for LLM integration
- Spring Security with JWT authentication
- Maven for dependency management
Client:
- React 19 with TypeScript for type safety
- Vite for rapid build tooling with hot module replacement
- Tailwind CSS for utility-first styling
- shadcn/ui for accessible interface elements
- Nginx for production serving and API proxying
Infrastructure:
- Docker for containerisation with multi-stage builds
- Docker Compose for container orchestration
- GitHub Container Registry for image distribution
Development Tools:
- Maven for Java dependency management
- JUnit for server testing
- Vitest for client testing
cd server/
make dev # Start development server
make test # Run tests
make build # Build the application
make db-migrate # Run Flyway migrations

cd client/
npm run dev # Start development server
npm run test:run # Run tests
npm run build # Build for production
npm run lint # Check code quality