freq1062/music-transformer
Description

A pretty small decoder-only transformer model that I wrote using PyTorch for an Extended Essay research project. I later came back to it because the original version was not working very well.

Based on Google Magenta's Music Transformer.

This is the fourth version of the model; here is what I have tried:

  1. An encoder-decoder model from this tutorial.
  2. Decoder-only, except with regular absolute attention.
  3. Added the special skewing procedure found in this paper.
  4. Current revisit: I corrected some issues with dropout, added learning rate scheduling, and updated the hyperparameters now that I have access to lab machines. The sequence length went from 200 to 1024, exactly as in the aforementioned paper.

Architecture

Sequence length (seq_len): 1024
Embedding dimensionality (d_model): 512
Depth: 6
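To make the overall shape concrete, here is a minimal PyTorch sketch using the hyperparameters above. The vocabulary size and head count are placeholder assumptions, and the stock `TransformerEncoderLayer` blocks (absolute attention plus a causal mask) stand in for the real decoder blocks, which use relative self-attention with skewing:

```python
import math
import torch
import torch.nn as nn

SEQ_LEN, D_MODEL, DEPTH = 1024, 512, 6
VOCAB_SIZE = 388   # hypothetical; depends on the MIDI token vocabulary
N_HEADS = 8        # hypothetical; not stated above

class MusicDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        # Placeholder blocks: stock layers with a causal mask stand in for
        # the real decoder blocks, which use relative self-attention.
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
             for _ in range(DEPTH)]
        )
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        # Embed and scale by sqrt(d_model).
        x = self.embed(tokens) * math.sqrt(D_MODEL)
        # Causal mask so each position attends only to earlier positions.
        L = tokens.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        for block in self.blocks:
            x = block(x, src_mask=mask)
        return self.out(x)  # logits over the token vocabulary
```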

  1. Input: MIDI file converted to tokens, padded to length 1024 if necessary and truncated if too long. This is the "seed" song that the model will continue.
  2. Convert to embeddings of shape (seq_len, d_model) and scale by sqrt(d_model).
  3. Decoder block, repeated depth times:
     1. Relative self-attention using the efficient skewing procedure
     2. Dropout
     3. Normalize
     4. Fully connected layer
     5. Normalize
  4. Apply a final fully connected layer to output probabilities for each token.
  5. Output: choice of top-k sampling, top-p sampling, or top-p with a section of the seed appended to the decoded output. See showcase.ipynb to try each of them!
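The pad-or-truncate step for the seed can be sketched as follows (the padding token id is a hypothetical assumption, as is the function name):

```python
import torch

SEQ_LEN = 1024
PAD_TOKEN = 0  # hypothetical id for the padding token

def prepare_seed(tokens):
    """Pad or truncate a token list to exactly SEQ_LEN (step 1 above)."""
    tokens = tokens[:SEQ_LEN]                                # truncate if too long
    tokens = tokens + [PAD_TOKEN] * (SEQ_LEN - len(tokens))  # pad if too short
    return torch.tensor(tokens)
```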
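The relative self-attention step relies on the Music Transformer skewing trick, which rearranges the Q·Er^T product into relative-position logits without materializing an (L, L, d) tensor of relative embeddings. A minimal sketch (this is my reconstruction of the procedure from the paper, not necessarily this repo's exact code):

```python
import torch
import torch.nn.functional as F

def skew(qe):
    """Music Transformer skewing procedure.

    qe: (batch, heads, L, L) tensor holding Q @ Er^T, where column L-1
    corresponds to relative distance 0. Returns S_rel of the same shape,
    with entry (i, j) holding the logit for relative distance j - i.
    """
    b, h, l, _ = qe.shape
    padded = F.pad(qe, (1, 0))                 # prepend a zero column: (b, h, L, L+1)
    reshaped = padded.reshape(b, h, l + 1, l)  # row-major reshape shifts each row
    return reshaped[:, :, 1:, :]               # drop the first row: (b, h, L, L)
```

The skewed logits are added to the usual Q·K^T scores before the softmax; entries above the diagonal are garbage after the reshape, but the causal mask removes them anyway.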
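The top-k and top-p decoding options can be sketched as logit filters (function names are my own; the notebook's implementation may differ):

```python
import torch

def top_k_filter(logits, k):
    # Keep only the k largest logits; set the rest to -inf.
    kth = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth, float("-inf"))

def top_p_filter(logits, p):
    # Nucleus sampling: keep the smallest set of tokens whose
    # cumulative probability exceeds p; mask everything else.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cum_probs > p
    remove[..., 1:] = remove[..., :-1].clone()  # keep the token that crosses p
    remove[..., 0] = False                      # always keep the most likely token
    mask = torch.zeros_like(remove).scatter(-1, sorted_idx, remove)
    return logits.masked_fill(mask, float("-inf"))

# Sample the next token from the filtered distribution:
# probs = torch.softmax(top_p_filter(logits, 0.95), dim=-1)
# next_token = torch.multinomial(probs, num_samples=1)
```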

How to run

showcase.ipynb currently contains everything required to train the model on one song, Reverie by Claude Debussy, mostly as a proof of concept. I am re-training on the full MAESTRO dataset and will commit the model once it's finished.

About

Decoder-only transformer model for MIDI files
