Hackathon-Preply/hackathon-2026-03-20

Relay — Pre-Lesson AI Warm-Up for Preply

Live Demo: Start the warm-up →

Relay sits between Preply classes. Before the next live lesson, the learner gets a short AI-powered warm-up that recaps what stuck, captures speaking confidence via biomarkers, and briefs the tutor on exactly where to start.

AI Recap Cards    Avatar Narration    Speaking Exercise    Ready Screen

Phase 1: Recap cards → Avatar narration  |  Phase 2: Speaking exercise → Ready screen

Teacher Dashboard

Teacher dashboard — KPIs generated by the outcome LLM using recap data + Thymia biomarker signals


Try the Demo

Open the warm-up flow and walk through these steps:

1. Home screen

You land on a Preply-style lesson dashboard showing your upcoming class with tutor Marta Petrova — "Travel fluency & irregular past tense". Tap Go to lesson.

2. Lesson detail

See lesson goals, tutor notes, and the class schedule. Tap Join class — this triggers the warm-up instead of going straight to the call.

3. Warm-up intro

A brief intro explains the 2-minute warm-up. Tap Start warm-up.

4. Phase 1 — AI Recap (Step 1: Recap)

  • Gemini (nanobanana) generates a visual infographic recap of the last lesson — it picks a visual metaphor based on lesson content (grammar contrast, travel cheat-sheet, exam cues) and renders a mobile-optimized sketch image
  • Three recap cards auto-reveal: Last lesson recap, Today's focus, Your strengths
  • An Anam AI avatar appears and narrates the recap aloud via TTS relay — the avatar speaks only what our system tells it to (no autonomous conversation)
  • When the avatar finishes, tap Continue to warm-up
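The recap generation in Phase 1 is content-aware: Gemini picks a visual metaphor from the lesson material before rendering the sketch. A minimal sketch of that selection step, assuming hypothetical names (`Lesson`, `pickMetaphor`, `buildRecapPrompt` are illustrative, not the repo's actual code):

```typescript
// Hypothetical sketch of the metaphor selection + prompt construction
// described above. Field names and heuristics are assumptions.
type Lesson = { topic: string; vocabulary: string[]; grammarPoints: string[] };

function pickMetaphor(lesson: Lesson): string {
  // Mirror the three metaphors named above: grammar contrast,
  // travel cheat-sheet, exam cues.
  if (lesson.grammarPoints.length > 0) return "grammar contrast chart";
  if (/travel/i.test(lesson.topic)) return "travel cheat-sheet";
  return "exam cue cards";
}

function buildRecapPrompt(lesson: Lesson): string {
  const metaphor = pickMetaphor(lesson);
  return [
    `Draw a mobile-optimized sketch-style infographic as a ${metaphor}.`,
    `Lesson topic: ${lesson.topic}.`,
    `Highlight vocabulary: ${lesson.vocabulary.join(", ")}.`,
    `Keep text large and legible on a phone screen.`,
  ].join(" ");
}
```

The resulting prompt string would be passed to the Gemini image endpoint; the actual call and SVG fallback live in the repo.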

5. Phase 2 — Speaking Exercise (Step 2: Warm up)

  • A contextual passage appears (generated by the LLM using vocabulary from the last lesson)
  • Tap the mic button and read the passage aloud at a natural pace (~25 seconds)
  • While you speak, Thymia captures biomarkers (stress, confidence, fluency) through an embedded activity running in the background
  • Tap Continue when done — wait ~10-15 seconds for biomarker capture to complete
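The 10-15 second wait after tapping Continue can be pictured as a polling loop against the biomarker results. A sketch under assumptions — the `/api/thymia/results/...` route and response shape are hypothetical, not the repo's actual API:

```typescript
// Illustrative polling loop for the post-exercise wait described above.
// Endpoint path and response shape are assumptions for illustration.
function pollDelaysMs(totalMs = 15_000, stepMs = 2_500): number[] {
  const delays: number[] = [];
  for (let elapsed = 0; elapsed < totalMs; elapsed += stepMs) delays.push(stepMs);
  return delays; // six 2.5 s waits, covering the ~10-15 s window in the demo
}

async function waitForBiomarkers(sessionId: string): Promise<boolean> {
  for (const delay of pollDelaysMs()) {
    const res = await fetch(`/api/thymia/results/${sessionId}`); // hypothetical route
    if (res.ok) {
      const body = (await res.json()) as { status: string };
      if (body.status === "complete") return true;
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return false; // caller can fall back to recap-only KPIs if capture never completes
}
```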

6. Ready screen (Step 3: Ready)

  • Summary: lesson recap ✓, live warm-up ✓, and your readiness assessment
  • A teacher handoff note is generated and "sent to your tutor"
  • From here you can tap Join class now, or check the dashboards:

7. Dashboards

  • Teacher Dashboard — The outcome LLM combines recap data with Thymia biomarker signals to generate a readiness score (68% in the demo), a tutor brief (strong points, needs focus, recommended opening move), session prep (opening drills, revisit items, session goal), a progress timeline, and grammar focus patterns
  • Student Dashboard — Student-facing progress tracking and session history
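In the repo the readiness score comes from the outcome LLM, but as a rough intuition for how recap and biomarker inputs could fold into one number, here is a purely illustrative weighted combination (field names and weights are assumptions, not the actual formula):

```typescript
// Illustrative only: a fixed-weight stand-in for the LLM-generated
// readiness score. Inputs are normalized to [0, 1].
type Signals = {
  recapAccuracy: number; // from the recap phase
  confidence: number;    // Thymia biomarker
  fluency: number;       // Thymia biomarker
  stress: number;        // Thymia biomarker; lowers readiness
};

function readinessScore(s: Signals): number {
  const raw =
    0.4 * s.recapAccuracy + 0.3 * s.confidence + 0.3 * s.fluency - 0.2 * s.stress;
  // Clamp to [0, 1] and express as a percentage.
  return Math.round(Math.min(1, Math.max(0, raw)) * 100);
}
```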

Architecture

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  Gemini API  │────▸│  Recap Gen   │────▸│ Anam Avatar  │
│ (nanobanana) │     │  + Sketch    │     │  TTS Relay   │
└──────────────┘     └──────────────┘     └──────────────┘
                            │
                     ┌──────▼──────┐     ┌──────────────┐
                     │   Outcome   │◂────│    Thymia    │
                     │     LLM     │     │  Biomarkers  │
                     └──────┬──────┘     └──────────────┘
                            │
               ┌────────────▼────────────┐
               │   Teacher + Student     │
               │       Dashboards        │
               └─────────────────────────┘

How data flows: Gemini generates the recap content and sketch image. The Anam avatar narrates it via TTS. During the speaking exercise, Thymia captures biomarker signals. After the warm-up, the outcome LLM combines recap data + biomarker signals to generate the teacher's KPIs (readiness score, tutor brief, session prep). These populate the teacher dashboard automatically.
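The payloads along those arrows can be sketched as TypeScript shapes. These interfaces are assumptions for illustration — the repo's actual types may differ:

```typescript
// Hypothetical payload shapes for the data flow described above.
interface RecapPayload {
  summary: string;        // Gemini-generated recap text
  sketchImageUrl: string; // rendered infographic (or SVG fallback)
  vocabulary: string[];   // carried into the speaking passage
}

interface BiomarkerPayload {
  stress: number;
  confidence: number;
  fluency: number;
}

// The outcome LLM consumes both; this stub just shows the shape of the join.
function outcomeInput(recap: RecapPayload, biomarkers: BiomarkerPayload) {
  return { recap, biomarkers, generatedAt: new Date().toISOString() };
}
```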

Stack

  • Frontend: Next.js 16.2.1 (App Router, Turbopack) + Tailwind CSS v4
  • Monorepo: Turborepo + pnpm
  • Avatar: Anam AI SDK (@anam-ai/js-sdk) — TTS relay mode, no autonomous conversation
  • Recap images: Gemini API (nanobanana) — content-aware visual infographics with SVG fallback
  • Biomarkers: Thymia embedded activity — stress, confidence, fluency capture during speaking
  • Voice infra: Agora (real-time classroom)
  • LLM pipeline: OpenAI (outcome generation, teacher handoff)
  • Deploy: Vercel (auto-deploy on push to main)

Quick Start

pnpm install
cp .env.example .env.local   # fill in API keys
pnpm dev                     # starts on http://localhost:3000

Required Environment Variables

# Anam AI (avatar)
AVATAR_ID=
ANAM_API_KEY=
ANAM_VOICE_ID=
ANAM_LLM_ID=

# Gemini (recap sketch)
GEMINI_API_KEY=

# Thymia (biomarkers)
THYMIA_API_KEY=

# OpenAI (outcome LLM)
OPENAI_API_KEY=

# Agora (classroom)
AGORA_APP_ID=
AGORA_CUSTOMER_ID=
AGORA_CUSTOMER_SECRET=
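A fail-fast startup check for the variables above can save a confusing half-working demo. This helper is not part of the repo — just a minimal sketch:

```typescript
// Illustrative startup validation for the env vars listed above.
const REQUIRED_ENV = [
  "AVATAR_ID", "ANAM_API_KEY", "ANAM_VOICE_ID", "ANAM_LLM_ID",
  "GEMINI_API_KEY", "THYMIA_API_KEY", "OPENAI_API_KEY",
  "AGORA_APP_ID", "AGORA_CUSTOMER_ID", "AGORA_CUSTOMER_SECRET",
] as const;

function missingEnv(env: Record<string, string | undefined>): string[] {
  // Return every required key that is unset or empty.
  return REQUIRED_ENV.filter((key) => !env[key]);
}

// Example: call missingEnv(process.env) at boot and throw if non-empty.
```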

Kanban & Project Tracking

  • Project Board: Relay Kanban
  • 12 completed tickets (infra, UI, AI integrations, dashboards)
  • 7 future roadmap tickets (programmatic Thymia, multi-language, longitudinal analytics)

How We Built This

See HOW_WE_BUILT.md for our AI-assisted development process, prompting strategies, and lessons learned.

Team

Four people who met at the hackathon, found a shared idea, and shipped it in a weekend. Built during the Preply x Agora Hackathon, March 2026.
