Melody Maker is a machine-learning-driven music generation toolkit developed by Waltworks. It uses recurrent neural network (RNN) models to learn patterns from MIDI datasets and generate new melodies and chord progressions.
- Data preprocessing: Convert MIDI files to note-state matrices.
- Model training: Train melody and chord RNN models on your MIDI dataset.
- Music generation: Produce new MIDI files from trained models.
- Configurable: Easily adjust sequence length, model hyperparameters, and data directories.
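The note-state matrix encoding mentioned above can be sketched as follows. This is a simplified illustration, not the toolkit's actual `midi_tools.py` implementation; the pitch bounds and the `notes_to_state_matrix` helper are assumptions for the example.

```python
import numpy as np

LOWER_BOUND, UPPER_BOUND = 24, 102  # assumed pitch range (matches the config defaults)

def notes_to_state_matrix(notes, n_steps):
    """Encode (pitch, start_step, duration_steps) events as a binary
    timesteps x pitches matrix: 1 wherever a note is sounding."""
    n_pitches = UPPER_BOUND - LOWER_BOUND
    matrix = np.zeros((n_steps, n_pitches), dtype=np.int8)
    for pitch, start, duration in notes:
        if LOWER_BOUND <= pitch < UPPER_BOUND:
            matrix[start:start + duration, pitch - LOWER_BOUND] = 1
    return matrix

# A two-note example: middle C (MIDI 60) for 4 steps, then E (MIDI 64)
m = notes_to_state_matrix([(60, 0, 4), (64, 4, 4)], n_steps=8)
```

Each row of the matrix is one timestep, which is what lets an RNN treat the piece as a sequence.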
- Python 3.8+
- `make` (GNU Make)
- A virtual environment tool (e.g., `venv`, `conda`)
- numpy
- mido
- tensorflow>=2.6
- scikit-learn
- tqdm
- Clone the repository:

  ```bash
  git clone https://github.com/your-org/melody-maker.git
  cd melody-maker
  ```

- Create and activate a virtual environment:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
All configurable parameters live in `src/config.py`. Key settings include:
| Variable | Description | Default |
|---|---|---|
| `MIDI_DIR` | Path to your raw MIDI training files | `Rock_Music_Midi` |
| `LOWER_BOUND` | Lowest MIDI pitch to include | `24` |
| `UPPER_BOUND` | Highest MIDI pitch to include | `102` |
| `SEQ_LEN` | Sequence length (timesteps) | `60` |
| `BATCH_SIZE` | Training batch size | `64` |
| `EPOCHS` | Number of training epochs | `20` |
| `MELODY_MODEL_PATH` | Output path for melody model checkpoint | `melody_model.h5` |
| `CHORD_MODEL_PATH` | Output path for chord model checkpoint | `chord_model.h5` |
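Putting these defaults together, `src/config.py` would look roughly like the sketch below. The values are the documented defaults; the exact layout of the real file may differ.

```python
# src/config.py -- central configuration (values match the table above)
MIDI_DIR = "Rock_Music_Midi"           # raw MIDI training files
LOWER_BOUND = 24                       # lowest MIDI pitch to include
UPPER_BOUND = 102                      # highest MIDI pitch to include
SEQ_LEN = 60                           # timesteps per training sequence
BATCH_SIZE = 64                        # training batch size
EPOCHS = 20                            # number of training epochs
MELODY_MODEL_PATH = "melody_model.h5"  # melody model checkpoint
CHORD_MODEL_PATH = "chord_model.h5"    # chord model checkpoint
```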
Note: You only need to run training once per dataset.

```bash
make train
```

This will:

- Preprocess all MIDI files in `MIDI_DIR`.
- Train the melody RNN model (saved to `MELODY_MODEL_PATH`).
- Train the chord RNN model (saved to `CHORD_MODEL_PATH`).
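Before training, the preprocessing step slices each note-state matrix into fixed-length windows of `SEQ_LEN` timesteps, each paired with the timestep that follows it. A minimal sketch (the `make_training_windows` helper is illustrative, and `seq_len` is shortened from the default 60 for clarity):

```python
import numpy as np

def make_training_windows(state_matrix, seq_len):
    """Slide a window of seq_len timesteps over the piece: each input
    window X[i] is paired with the timestep that follows it, y[i]."""
    X, y = [], []
    for i in range(len(state_matrix) - seq_len):
        X.append(state_matrix[i:i + seq_len])
        y.append(state_matrix[i + seq_len])
    return np.array(X), np.array(y)

# Toy "piece": 10 timesteps x 5 pitches
toy = np.arange(50).reshape(10, 5)
X, y = make_training_windows(toy, seq_len=4)
```

With 10 timesteps and a window of 4, this yields 6 training pairs of shape `(4, 5)` each, which is the `(batch, timesteps, features)` layout RNN layers expect.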
You can regenerate music at any time once the models are trained:

```bash
make generate
```

This will:

- Load the trained models.
- Generate a new melody and chord sequence.
- Export a MIDI file named `output_generated.mid`.
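Generation of this kind is typically autoregressive: the model repeatedly predicts the next timestep from the most recent window and appends it to the sequence. The sketch below uses a stand-in `predict_next` callable in place of the real models in `rnn.py`:

```python
import numpy as np

def generate_sequence(seed, predict_next, n_steps):
    """Autoregressively extend `seed` by n_steps timesteps.
    predict_next maps a (seq_len, n_pitches) window to the next row."""
    sequence = list(seed)
    seq_len = len(seed)
    for _ in range(n_steps):
        window = np.array(sequence[-seq_len:])
        sequence.append(predict_next(window))
    return np.array(sequence)

# Stand-in "model": repeat the last timestep (a trained RNN goes here)
seed = np.eye(4)[:3]  # 3 seed timesteps, 4 pitches
out = generate_sequence(seed, predict_next=lambda w: w[-1], n_steps=5)
```

The resulting state matrix would then be converted back to MIDI by the toolkit's `midi_tools.py` conversion step.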
```
.
├── Makefile           # Defines `train` and `generate` targets
├── requirements.txt   # Python dependencies
├── README.md          # This document
└── src
    ├── config.py      # Configuration parameters
    ├── data_utils.py  # Encoders and scalers
    ├── midi_tools.py  # MIDI ↔ state-matrix conversion
    ├── rnn.py         # Model definitions & generation logic
    └── generate.py    # Entry point for training & generation
```
We welcome contributions! Please:

- Fork the repository
- Create a feature branch (`git checkout -b feature/YourFeature`)
- Commit your changes (`git commit -m "Add your feature"`)
- Push to the branch (`git push origin feature/YourFeature`)
- Open a Pull Request
Please adhere to the existing code style and include tests for new functionality.
This project is licensed under the MIT License. See LICENSE for details.