
GaitKeeper

AI-powered running gait analysis application built with Next.js, TensorFlow.js, and Claude AI.

⚠️ EXPERIMENTAL SOFTWARE DISCLAIMER
This is experimental software under active development and is provided "as-is" without warranty of any kind. It is not production-ready and should be used for research, educational, or experimental purposes only. Do not rely on this software for medical decisions or professional gait analysis. Always consult qualified healthcare professionals for medical advice.

Demo video: GaitKeeper_Demo_Video.mp4

Features

  • Video Upload: Upload running videos (treadmill or outdoor) for analysis
  • Advanced Pose Estimation: MediaPipe BlazePose with 33 keypoints including detailed foot tracking (heel/toe)
  • Comprehensive Gait Metrics: Cadence, stride length, foot strike pattern, foot strike position, joint angles, and estimated loads
  • Kalman Filter Tracking: Smooth ankle tracking with velocity-based foot-strike detection
  • Real-time Foot-Strike Visualization: Animated arrows showing exact foot contact points during landing
  • AI Suggestions: Get personalized form improvement recommendations from Claude AI
  • Interactive Chat: Ask questions about your gait analysis and get expert advice
  • Smooth 60 FPS Playback: Interpolated pose rendering for fluid visualization

Tech Stack

  • Framework: Next.js 14 (App Router)
  • UI: React, TailwindCSS, shadcn/ui
  • Pose Detection: MediaPipe BlazePose (TensorFlow.js runtime with automatic MoveNet fallback)
  • Motion Tracking: Custom Kalman Filter implementation for ankle trajectory smoothing
  • Database: Prisma ORM with SQLite (local) / PostgreSQL (production)
  • AI: Anthropic Claude API
  • Storage: Local filesystem (dev) / Vercel Blob (production)

Getting Started

Prerequisites

  • Node.js 18+ and npm
  • An Anthropic API key (for the AI suggestions and chat features)

Installation

  1. Clone the repository (if not already done):

    git clone <your-repo-url>
    cd GaitKeeper
  2. Install dependencies:

    npm install
  3. Set up environment variables: Create a .env file in the root directory:

    cp .env.example .env

    Edit .env and add your Anthropic API key:

    DATABASE_URL="file:./dev.db"
    ANTHROPIC_API_KEY="your_anthropic_api_key_here"
    
  4. Initialize the database:

    npx prisma generate
    npx prisma db push
  5. Run the development server:

    npm run dev
  6. Open your browser: Navigate to http://localhost:3000

Adding a Sample Video (Optional)

To get started quickly with a sample running video:

  1. Download a sample video (e.g., a short running clip from Pexels)

  2. Place the video:

    mkdir -p public/uploads/videos
    mv ~/Downloads/your-video.mp4 public/uploads/videos/sample-running.mp4
  3. Add to database:

    npm run add-sample
  4. Alternatively:

    • Upload your captured or downloaded video manually using the "Upload Video" button on the home page.

See SAMPLE_VIDEO_GUIDE.md for detailed instructions and recommended videos.

Usage

Upload a Video

  1. Click the "Upload Video" button on the home page
  2. Enter a name for your video
  3. Select a video file (MP4, MOV, etc.); a clip of about 10 seconds is recommended
  4. Click "Upload"

Analyze Your Gait

  1. Click on a video thumbnail to open the analysis page
  2. Click "Analyze Gait" to start pose estimation
  3. Wait for the analysis to complete (may take 1-2 minutes)
  4. View metrics as you scrub through the video

Get Improvement Suggestions

  1. After analysis is complete, go to the "Suggestions" tab
  2. Click "Generate Suggestions" to get AI-powered recommendations
  3. Review suggestions categorized by priority (high, medium, low)

Chat About Your Gait

  1. Go to the "Chat" tab
  2. Ask questions like:
    • "Why is my cadence important?"
    • "How can I reduce knee impact?"
    • "What does my foot strike pattern mean?"

System Architecture

Pose Detection & Tracking

MediaPipe BlazePose Integration:

  • 33 Keypoints: Full body tracking including detailed foot landmarks (heel, ankle, toe)
  • MediaPipe Runtime: Uses Google's production-grade WebAssembly implementation for stability
  • Automatic Fallback: Gracefully falls back to MoveNet (17 keypoints) if BlazePose fails
  • Model-Agnostic Design: Dynamic keypoint mapping adapts to whichever model is loaded

Kalman Filter Foot-Strike Detection:

  • State Tracking: Tracks ankle position (x, y) and velocity (vx, vy) in real-time
  • Velocity-Based Detection: Distinguishes landing (downward deceleration) from toe-off (upward acceleration)
  • Noise Filtering: Smooths out pose detection jitter for stable tracking
  • Ground Level Estimation: Automatically calibrates ground plane from ankle positions
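The tracking above can be sketched with a 1D constant-velocity Kalman filter, one instance per axis (x and y). This is an illustrative sketch of the technique, not the actual code in lib/kalman-filter.ts:

```typescript
// Constant-velocity Kalman filter for one axis; the app would run one
// instance for ankle x and one for ankle y.
class KalmanFilter1D {
  private p = 0;                    // position estimate
  private v = 0;                    // velocity estimate
  private P = [[100, 0], [0, 100]]; // estimate covariance (large = uncertain)

  constructor(private q = 0.01, private r = 1) {}

  /** Propagate the state forward by dt seconds (F = [[1, dt], [0, 1]]). */
  predict(dt: number): void {
    this.p += this.v * dt;
    const [[P00, P01], [P10, P11]] = this.P;
    this.P = [
      [P00 + dt * (P10 + P01) + dt * dt * P11 + this.q, P01 + dt * P11],
      [P10 + dt * P11, P11 + this.q],
    ];
  }

  /** Fuse a noisy position measurement z (H = [1, 0]). */
  update(z: number): void {
    const [[P00, P01], [P10, P11]] = this.P;
    const S = P00 + this.r; // innovation covariance
    const K0 = P00 / S;     // Kalman gain (position)
    const K1 = P10 / S;     // Kalman gain (velocity)
    const residual = z - this.p;
    this.p += K0 * residual;
    this.v += K1 * residual;
    this.P = [
      [(1 - K0) * P00, (1 - K0) * P01],
      [P10 - K1 * P00, P11 - K1 * P01],
    ];
  }

  get position(): number { return this.p; }
  get velocity(): number { return this.v; }
}
```

A foot strike would then be flagged when the filtered vertical velocity decelerates to near zero while the ankle sits close to the estimated ground line.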

Visual Feedback:

  • Animated Arrows: Fade-in/fade-out arrows at foot-strike locations (300ms duration)
  • Precise Positioning: Uses actual heel and toe keypoints to show midfoot contact point
  • Color Coding: Amber for left foot, red for right foot
  • 60 FPS Rendering: Smooth interpolation between detected poses for fluid playback
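The 60 FPS rendering idea boils down to linearly interpolating each keypoint between the two detected poses that bracket the display time. A minimal sketch (the `Keypoint` shape is assumed, not the project's actual type):

```typescript
interface Keypoint { x: number; y: number; score: number; name?: string }

/** Linearly interpolate each keypoint between two detected poses.
 *  t = 0 returns poseA, t = 1 returns poseB. */
function interpolatePose(a: Keypoint[], b: Keypoint[], t: number): Keypoint[] {
  return a.map((kp, i) => ({
    ...kp,
    x: kp.x + (b[i].x - kp.x) * t,
    y: kp.y + (b[i].y - kp.y) * t,
    // carry the lower confidence so low-quality detections stay marked as such
    score: Math.min(kp.score, b[i].score),
  }));
}
```

At render time, pick the two analysis frames bracketing `video.currentTime` and set `t` to the fractional offset between them.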

Gait Metrics Calculation

Comprehensive Analysis:

  • Cadence: Steps per minute calculated from ankle vertical motion patterns
  • Stride Length: Distance covered per stride using body height calibration
  • Foot Strike Pattern: Heel/midfoot/forefoot classification using heel-toe angle analysis
  • Foot Strike Position: Landing position relative to center of gravity
  • Estimated Loads: Impact forces on major joints
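As a rough illustration of the cadence computation, steps per minute can be derived by counting peaks in one ankle's vertical trajectory and doubling to account for both feet. The real pipeline also filters the signal and weights by keypoint confidence; this sketch uses assumed names:

```typescript
/** Count local maxima above the series mean (one peak ≈ one stride of that foot). */
function countPeaks(ankleY: number[]): number {
  const mean = ankleY.reduce((sum, v) => sum + v, 0) / ankleY.length;
  let peaks = 0;
  for (let i = 1; i < ankleY.length - 1; i++) {
    if (ankleY[i] > mean && ankleY[i] > ankleY[i - 1] && ankleY[i] > ankleY[i + 1]) {
      peaks++;
    }
  }
  return peaks;
}

/** Cadence in steps per minute: strides of one foot, doubled for both feet. */
function cadenceSpm(ankleY: number[], fps: number): number {
  const durationSec = ankleY.length / fps;
  return (countPeaks(ankleY) * 2 * 60) / durationSec;
}
```

Stride length then follows from the horizontal distance covered per peak-to-peak interval, scaled by the body-height calibration mentioned above.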

Adaptive Processing:

  • All metrics automatically adapt to available keypoints (BlazePose vs MoveNet)
  • Graceful degradation when heel/toe keypoints unavailable
  • Confidence scoring for each metric based on keypoint visibility

Project Structure

GaitKeeper/
├── app/                      # Next.js app directory
│   ├── api/                  # API routes
│   │   ├── chat/            # Chat endpoint
│   │   └── videos/          # Video upload & analysis
│   ├── analysis/[id]/       # Analysis page
│   ├── layout.tsx           # Root layout
│   ├── page.tsx             # Home page
│   └── globals.css          # Global styles
├── components/              # React components
│   ├── ui/                  # shadcn/ui components
│   ├── analysis-view.tsx    # Main analysis view
│   ├── chat-interface.tsx   # Chat component
│   ├── metrics-panel.tsx    # Metrics display
│   ├── upload-dialog.tsx    # Upload modal
│   ├── video-list.tsx       # Video grid
│   └── video-player.tsx     # Video player with pose overlay
├── lib/                     # Utilities and libraries
│   ├── ai.ts               # Claude AI integration
│   ├── db.ts               # Prisma client
│   ├── kalman-filter.ts    # 2D Kalman filter for ankle tracking
│   ├── metrics.ts          # Gait metrics calculation
│   ├── pose-estimation.ts  # MediaPipe BlazePose integration
│   ├── storage.ts          # Storage abstraction layer
│   └── utils.ts            # Helper functions
├── prisma/                 # Database schema
│   └── schema.prisma       # Prisma schema definition
└── public/                 # Static files
    └── uploads/            # Uploaded videos (gitignored)

Database Schema

  • Video: Stores video metadata and file paths
  • Analysis: Stores pose data and calculated metrics
  • Suggestion: AI-generated improvement suggestions
  • ChatMessage: Chat conversation history
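The four models map to Prisma models roughly like the sketch below. Field names here are illustrative; the authoritative definitions live in prisma/schema.prisma. Note that JSON payloads are stored as serialized strings, since Prisma's native `Json` type is not available on SQLite:

```prisma
model Video {
  id        String     @id @default(cuid())
  name      String
  filePath  String
  createdAt DateTime   @default(now())
  analyses  Analysis[]
}

model Analysis {
  id       String @id @default(cuid())
  videoId  String
  video    Video  @relation(fields: [videoId], references: [id])
  poseData String // JSON-serialized per-frame keypoints
  metrics  String // JSON-serialized cadence, stride length, etc.
}

model Suggestion {
  id       String @id @default(cuid())
  videoId  String
  priority String // "high" | "medium" | "low"
  content  String
}

model ChatMessage {
  id        String   @id @default(cuid())
  videoId   String
  role      String   // "user" | "assistant"
  content   String
  createdAt DateTime @default(now())
}
```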

Deployment

Vercel (Recommended)

  1. Push your code to GitHub

  2. Create a new project on Vercel:

    • Import your GitHub repository
    • Vercel will auto-detect Next.js
  3. Add environment variables:

    • ANTHROPIC_API_KEY: Your Claude API key
    • DATABASE_URL: PostgreSQL connection string (use Vercel Postgres)
    • STORAGE_TYPE: Set to vercel-blob
    • BLOB_READ_WRITE_TOKEN: Your Vercel Blob token
  4. Update storage adapter (if using Vercel Blob):

    • Implement VercelBlobStorageAdapter in lib/storage.ts
    • Install @vercel/blob package
  5. Deploy:

    vercel --prod
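The storage abstraction mentioned in step 4 might follow an interface like the one below. This is a hypothetical sketch (see lib/storage.ts for the actual contract), with a filesystem adapter for local development; the `VercelBlobStorageAdapter` would implement the same interface using `put()` from `@vercel/blob`:

```typescript
import { promises as fs } from 'fs';
import * as path from 'path';

// Minimal storage contract both environments implement (names are illustrative).
interface StorageAdapter {
  save(filename: string, data: Buffer): Promise<string>; // returns a public URL/path
}

class LocalStorageAdapter implements StorageAdapter {
  constructor(private baseDir: string) {}

  async save(filename: string, data: Buffer): Promise<string> {
    await fs.mkdir(this.baseDir, { recursive: true });
    await fs.writeFile(path.join(this.baseDir, filename), data);
    return `/uploads/videos/${filename}`; // served from public/ in dev
  }
}

// Production sketch (requires `npm install @vercel/blob`):
// class VercelBlobStorageAdapter implements StorageAdapter {
//   async save(filename: string, data: Buffer): Promise<string> {
//     const { put } = await import('@vercel/blob');
//     const blob = await put(filename, data, { access: 'public' });
//     return blob.url;
//   }
// }
```

Keeping both adapters behind one interface is what lets the upload route stay identical between local dev and Vercel.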

Database Migration for Production

# Switch to PostgreSQL in production
# Update DATABASE_URL in .env
npx prisma db push

Future Enhancements

  • User authentication (NextAuth.js)
  • Video comparison (compare multiple runs)
  • Progress tracking over time
  • Export analysis reports (PDF)
  • Mobile app support
  • Real-time video capture
  • Coach/athlete collaboration tools

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

See LICENSE file for details.

Acknowledgments

  • Google MediaPipe team for BlazePose model
  • TensorFlow.js team for the runtime and MoveNet fallback model
  • Anthropic for Claude AI API
  • shadcn/ui for beautiful UI components