
Commit 02936ec

Finish README.md
1 parent 1b2dc35 commit 02936ec

File tree

2 files changed: +69 -2 lines changed


README.md

Lines changed: 68 additions & 2 deletions
@@ -1,6 +1,72 @@
# SOME

SOME: Singing-Oriented MIDI Extractor.

> WARNING
>
> This project is currently in beta. No backward compatibility is guaranteed.
## Overview

SOME is a MIDI extractor that converts singing voice into MIDI sequences, with the following advantages:

1. Speed: 9x faster than real-time on an Intel i5-12400 CPU, and 300x faster on a 3080 Ti GPU.
2. Low resource dependency: SOME can be trained on a custom dataset and can achieve good results with only 3 hours of training data.
3. Functionality: SOME can produce non-integer MIDI values, which makes it especially suitable for DiffSinger variance labeling.
## Getting Started

### Installation

SOME requires Python 3.8 or later. We strongly recommend you create a virtual environment via Conda or venv before installing dependencies.
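For example, a minimal setup might look like the sketch below (the environment name `SOME` is illustrative; either Conda or the built-in venv module works):

```bash
# Option A: Conda (the environment name is just an example)
conda create -n SOME python=3.8
conda activate SOME

# Option B: venv, bundled with Python 3.8+
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```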
1. Install PyTorch 2.1 or later following the [official instructions](https://pytorch.org/get-started/locally/) according to your OS and hardware.

2. Install the other dependencies via the following command:

   ```bash
   pip install -r requirements.txt
   ```

3. (Optional) For better pitch extraction results, download the RMVPE pretrained model from [here](https://github.com/yxlllc/RMVPE/releases) and extract it into the `pretrained/` directory.
### Inference via pretrained model

Download a pretrained SOME model from the [releases](https://github.com/openvpi/SOME/releases) and extract it somewhere.
To infer with the CLI, run the following command:

```bash
python infer.py --model CKPT_PATH --wav WAV_PATH
```

This will load the model at CKPT_PATH, extract MIDI from the audio file at WAV_PATH, and save a MIDI file. For more useful options, run

```bash
python infer.py --help
```
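As a concrete illustration, an invocation might look like the following (the checkpoint and audio paths are made up; the confirmation message matches the print statement added to `infer.py` in this commit):

```bash
# Illustrative paths - substitute your own checkpoint and recording
python infer.py --model pretrained/some_model.ckpt --wav samples/vocal.wav

# When no explicit MIDI output path is given, the result is saved next to the
# input audio, and infer.py prints a confirmation such as:
#   MIDI file saved at: 'samples/vocal.mid'
```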
To infer with the Web UI, run the following command:

```bash
python webui.py --work_dir WORK_DIR
```

Then you can open the Gradio interface in your browser and use the models under WORK_DIR by following the instructions on the web page. For more useful options, run

```bash
python webui.py --help
```
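A typical session might look like the sketch below (the `pretrained/` work directory is illustrative, and the port shown is only Gradio's usual default; the actual URL is printed in the terminal when the server starts):

```bash
# WORK_DIR should point at the directory containing the extracted model(s)
python webui.py --work_dir pretrained/

# Gradio then serves a local web page, typically at an address like
#   http://127.0.0.1:7860
# Open the URL printed in the terminal in your browser.
```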
### Training from scratch

_Training scripts have been uploaded but may not be well-organized yet. For the best compatibility, we suggest training your own model after a future stable release._
## Disclaimer

Any organization or individual is prohibited from using any recordings obtained without the provider's consent as training data. Failure to comply may put you in violation of copyright laws or software EULAs.
## License

SOME is licensed under the [MIT License](LICENSE).

infer.py

Lines changed: 1 addition & 0 deletions
@@ -40,6 +40,7 @@ def infer(model, wav, midi, tempo):
 
     midi_path = pathlib.Path(midi) if midi is not None else wav_path.with_suffix('.mid')
     midi_file.save(midi_path)
+    print(f'MIDI file saved at: \'{midi_path}\'')
 
 
 if __name__ == '__main__':
