
You Will Like This

An experimental AI project in which a bot gives you music recommendations based on your preferences and mood.

Purpose

This project was built for learning and portfolio purposes. It explores:

  • Building REST APIs with FastAPI
  • Integrating local LLMs using Ollama
  • Prompt engineering for personalized responses

How it works

  1. Input your musical preferences (name, genres, artists, mood)
  2. Get personalized music recommendations powered by a local LLM
  3. Clean, fast API responses with detailed explanations
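The flow above can be sketched as a prompt-building step. This is a minimal illustration, not the project's actual code; `build_prompt` is a hypothetical helper and the real prompt wording will differ:

```python
# Hypothetical sketch: turn user preferences into an LLM prompt.
# The actual project's prompt engineering is not shown in this README.

def build_prompt(name: str, genres: list[str], artists: list[str], mood: str) -> str:
    """Compose a recommendation prompt from the user's preferences."""
    return (
        f"You are a music recommendation assistant. "
        f"{name} likes the genres {', '.join(genres)} "
        f"and the artists {', '.join(artists)}. "
        f"Their current mood is {mood}. "
        f"Recommend five songs and briefly explain each choice."
    )

prompt = build_prompt("Ada", ["rock", "indie"], ["Radiohead"], "melancholic")
print(prompt)
```

A prompt like this would then be sent to the local gemma2:9b model via Ollama, and the model's reply returned as the API response.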

Requirements

  • Python 3.10+
  • Ollama installed with gemma2:9b model

Setup

  1. Clone the repository

  2. Create and activate a virtual environment:

python3 -m venv .venv
source .venv/bin/activate

  3. Install dependencies:

pip install -r requirements.txt

  4. Make sure Ollama is running with the model pulled:

ollama pull gemma2:9b
ollama serve

  5. Start the API:

uvicorn app.main:app --reload

  6. Access the API docs at http://127.0.0.1:8000/docs

Usage

Option 1: Two-step flow

Step 1: Register your preferences

curl -X POST http://127.0.0.1:8000/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Your Name", "genres": ["rock", "indie"], "artists": ["Radiohead"], "mood": "melancholic"}'

Step 2: Get recommendations

curl http://127.0.0.1:8000/recommendations
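The same two-step flow can be driven from Python. This is a hedged sketch using only the standard library; the endpoint paths and payload shape are taken from the curl examples above, and it assumes the API is running locally:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8000"

def preferences_payload(name, genres, artists, mood):
    """Build the JSON body expected by POST /users (shape from the curl example)."""
    return {"name": name, "genres": genres, "artists": artists, "mood": mood}

def register_and_recommend(payload):
    """Step 1: POST /users, then Step 2: GET /recommendations. Requires a running API."""
    req = urllib.request.Request(
        f"{BASE}/users",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
    with urllib.request.urlopen(f"{BASE}/recommendations") as resp:
        return json.loads(resp.read())

payload = preferences_payload("Your Name", ["rock", "indie"], ["Radiohead"], "melancholic")
# register_and_recommend(payload)  # uncomment with the API running
```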

Option 2: Direct flow

Send everything in one request:

curl -X POST http://127.0.0.1:8000/recommendations/direct \
  -H "Content-Type: application/json" \
  -d '{"name": "Your Name", "genres": ["rock", "indie"], "artists": ["Radiohead"], "mood": "melancholic"}'

API Endpoints

Endpoint                  Method  Description
/users                    POST    Register user preferences
/recommendations          GET     Get recommendations (uses saved data)
/recommendations/direct   POST    Get recommendations (pass data directly)
/docs                     GET     Swagger UI documentation
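Both POST endpoints accept the same preferences payload shown in the curl examples. The project lists Pydantic for validation; as a stand-alone illustration of that shape (field names match the JSON keys above, but this dataclass is not the project's actual model):

```python
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    """Mirror of the JSON body from the curl examples (illustrative only)."""
    name: str
    genres: list[str] = field(default_factory=list)
    artists: list[str] = field(default_factory=list)
    mood: str = ""

    def __post_init__(self):
        # Minimal validation; a Pydantic model would do this declaratively.
        if not self.name:
            raise ValueError("name is required")

prefs = UserPreferences(
    name="Your Name",
    genres=["rock", "indie"],
    artists=["Radiohead"],
    mood="melancholic",
)
print(prefs.mood)
```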

Next Steps

  • Integrate LangChain for better LLM orchestration
  • Add observability with Langfuse or LangSmith
  • Track metrics: latency, tokens used, request history
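The latency metric mentioned above could start as a simple decorator before a tool like Langfuse or LangSmith takes over. A sketch under that assumption, not project code:

```python
import time
from functools import wraps

METRICS: list[dict] = []  # in-memory request history (illustrative only)

def track_latency(func):
    """Record the wall-clock latency of each call into METRICS."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        METRICS.append({
            "endpoint": func.__name__,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@track_latency
def recommendations():
    # Stand-in for the real handler that calls the LLM.
    return ["song-a", "song-b"]

recommendations()
print(METRICS[-1]["endpoint"])
```

Token counts could be appended to the same records once the Ollama response metadata is surfaced.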

Tech Stack

  • FastAPI - Web framework
  • Ollama - Local LLM runner
  • gemma2:9b - LLM model
  • Pydantic - Data validation
