
🔓 Unlok

See what you've been hiding.

Unlok uses Gemini 3's multimodal AI to analyze your facial expressions, voice patterns, and body language in real-time — then delivers the confrontational insight you need to hear.

Gemini 3 Hackathon Next.js TypeScript

🌐 Live at unlok.cam


🎯 The Problem

We all have blind spots. Things we avoid thinking about. Truths we don't want to face.

Traditional journaling and self-reflection tools ask you to tell them what's wrong. But what if the real insight comes from what you're NOT saying?

💡 The Solution

Unlok turns your camera into a mirror that sees beyond the surface:

  1. Record yourself talking about a challenge, goal, or question
  2. Gemini 3 analyzes your micro-expressions, vocal hesitations, eye movements, and word choices
  3. Receive the insight — the uncomfortable truth you've been avoiding

It's like having a brutally honest friend who can read you like a book.


✨ Features

🎭 Multiple Practice Modes

| Mode | Use Case |
| --- | --- |
| Emotional Mirror | Deep self-discovery and emotional unlocking |
| Job Interview | Practice professional presence and confidence |
| Business Pitch | Rehearse investor presentations |
| First Date | Build authentic conversation skills |
| Difficult Conversation | Prepare for tough talks |
| Public Speaking | Master stage presence |

🌍 Multilingual Support

  • English, Portuguese (BR), Spanish, French, German, Japanese
  • On-the-fly translation for 100+ languages via Gemini Flash

🔐 Privacy-First

  • BYOK (Bring Your Own Key) support
  • No video storage — analysis happens in real-time
  • All processing via Gemini API

🛠️ Tech Stack

| Technology | Purpose |
| --- | --- |
| Gemini 3 Pro | Multimodal video analysis (expressions, voice, body language) |
| Gemini 3 Flash | Fast translations & text processing |
| Next.js 15 | React framework with App Router |
| TypeScript | Type safety |
| Zustand | State management |
| Tailwind CSS | Styling |
| MediaRecorder API | Browser-native video capture |
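The MediaRecorder half of the stack can be sketched as follows. The helper below is hypothetical (not from the Unlok codebase); the codec check is injected as a function so the selection logic works outside a browser, where you would pass `MediaRecorder.isTypeSupported`:

```typescript
// Hypothetical helper: pick the first recording MIME type the browser supports.
// `isSupported` is injected so the logic stays testable outside a browser;
// in the app you would pass MediaRecorder.isTypeSupported.
function pickSupportedMimeType(
  candidates: string[],
  isSupported: (type: string) => boolean,
): string | undefined {
  return candidates.find(isSupported);
}

// Browser usage (sketch):
// const mimeType = pickSupportedMimeType(
//   ["video/webm;codecs=vp9,opus", "video/webm", "video/mp4"],
//   (t) => MediaRecorder.isTypeSupported(t),
// );
// const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
```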

🚀 Getting Started

Prerequisites

  • Node.js 18+ and npm
  • A Gemini API key (from Google AI Studio)

Installation

# Clone the repository
git clone https://github.com/murilo/unlok.git
cd unlok

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env.local
# Add your GOOGLE_AI_API_KEY to .env.local

# Run development server
npm run dev

Open http://localhost:3000 to see the app.

Environment Variables

GOOGLE_AI_API_KEY=your_gemini_api_key_here
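A server-side guard like the one below (a hypothetical sketch, not Unlok's actual code) fails fast when the key is missing. Note that in Next.js only variables prefixed `NEXT_PUBLIC_` reach the browser, so `GOOGLE_AI_API_KEY` stays server-only:

```typescript
// Hypothetical guard for required environment variables, meant to run
// server-side (e.g. at the top of a Next.js route handler).
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const apiKey = requireEnv("GOOGLE_AI_API_KEY");
```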

📸 Screenshots


Category Selection

Choose your practice mode based on what you want to work on.

Recording Session

See yourself while recording. The app captures video for real-time analysis.

AI Insights

Receive personalized feedback with scores, key insights, and "The Question" — a powerful prompt for deeper reflection.


🧠 How It Works

┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Record Video   │────▶│  Gemini 3        │────▶│  Get Insights   │
│  (2-5 minutes)  │     │  Analyzes:       │     │                 │
│                 │     │  • Expressions   │     │  • Key Insight  │
│  Talk about     │     │  • Voice tone    │     │  • Scores       │
│  your challenge │     │  • Body language │     │  • The Question │
│                 │     │  • Word choices  │     │                 │
└─────────────────┘     └──────────────────┘     └─────────────────┘

Gemini 3 Integration

Unlok leverages Gemini 3's unique capabilities:

  • 2M Token Context Window: Process full video sessions without chunking
  • Native Video Understanding: Direct analysis of visual + audio streams
  • Multimodal Reasoning: Correlate facial expressions with vocal patterns
  • Multilingual Output: Deliver insights in the user's preferred language
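The analysis step above amounts to one multimodal request: a base64-encoded video part plus the analysis prompt. The builder below is an illustrative sketch; the object shape mirrors the Gemini REST API's `contents`/`parts` structure, though the exact SDK call is an assumption and may differ from Unlok's implementation:

```typescript
// Hypothetical request builder: pairs one inline base64 video part with a
// text prompt in the Gemini API's contents/parts shape.
interface Part {
  inlineData?: { mimeType: string; data: string };
  text?: string;
}

function buildAnalysisRequest(
  videoBase64: string,
  mimeType: string,
  prompt: string,
): { contents: { role: string; parts: Part[] }[] } {
  return {
    contents: [
      {
        role: "user",
        parts: [
          { inlineData: { mimeType, data: videoBase64 } },
          { text: prompt },
        ],
      },
    ],
  };
}
```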

🎨 Design Philosophy

Inspired by Alex Hormozi's Value Equation:

Value = (Dream Outcome × Perceived Likelihood) / (Time × Effort)

Unlok maximizes value by:

  • Dream Outcome: Genuine self-awareness and emotional breakthroughs
  • Likelihood: AI analysis is consistent and unbiased
  • Minimal Time: 2-5 minute sessions
  • Zero Effort: Just talk naturally — AI does the analysis

🗺️ Roadmap

  • Core video analysis with Gemini 3
  • Multiple practice scenarios
  • Internationalization (i18n)
  • BYOK (Bring Your Own Key)
  • Audio feedback with AI coaches
  • Mobile app (React Native)
  • Progress tracking over time
  • Talking avatar responses (D-ID integration)

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments

  • Google DeepMind for the Gemini 3 API
  • Devpost for hosting the hackathon
  • Pablo Marçal's confrontational coaching style for inspiration

👤 Author

Murilo


Built for the Gemini 3 Hackathon 2026
"See what you've been hiding."