This repository was archived by the owner on Feb 12, 2022. It is now read-only.

Commit 8e30524: Add missing models (#7)

* update requirement to conda
* update env
* add argument to change model
* update models
* remove model details as we now have multiple models

1 parent: b18fd5f

File tree: 15 files changed (+291, -452 lines)

.gitignore (3 additions, 3 deletions)

@@ -1,7 +1,6 @@
 #### joe made this: http://goel.io/joe
 
-env3/
-env/
+env*/
 
 #####=== Python ===#####
 
@@ -70,7 +69,8 @@ target/
 .LSOverride
 
 # Icon must end with two \r
-Icon
+Icon
+
 
 # Thumbnails
 ._*

Pipfile (40 deletions): this file was deleted.

Pipfile.lock (366 deletions): this file was deleted.

README.md (40 additions, 17 deletions)

@@ -2,24 +2,29 @@
 
 <img width="400" align="right" alt="screen shot 2017-11-21 at 12 35 28" src="https://user-images.githubusercontent.com/72940/33071669-be6c35b2-cebc-11e7-8822-9b998ad1ea09.png">
 
-Estimating the number of concurrent speakers from single channel mixtures is a very challenging task that is a mandatory first step to address any realistic “cocktail-party” scenario. It has various audio-based applications such as blind source separation, speaker diarisation, and audio surveillance. Building upon powerful machine learning methodology and the possibility to generate large amounts of learning data, Deep Neural Network (DNN) architectures are well suited to directly estimate speaker counts.
+_CountNet_ is a deep learning model that estimates the number of concurrent speakers from single-channel mixtures. This is a very challenging task and a mandatory first step towards any realistic “cocktail-party” scenario. It has various audio-based applications such as blind source separation, speaker diarisation, and audio surveillance.
 
-## Publication
+This repo provides pre-trained models.
 
-#### Accepted for ICASSP 2018
+## Publications
 
-* __Title__: Classification vs. Regression in Supervised Learning for Single Channel
+### 2019: IEEE/ACM Transactions on Audio, Speech, and Language Processing
+
+* __Title__: CountNet: Estimating the Number of Concurrent Speakers Using Supervised Learning
   Speaker Count Estimation
-* __Authors__: Fabian-Robert Stöter, Soumitro Chakrabarty, Bernd Edler, Emanuël
+* __Authors__: [Fabian-Robert Stöter](https://faroit.com), Soumitro Chakrabarty, Bernd Edler, Emanuël
   A. P. Habets
-* __Preprint__: [arXiv 1712.04555](http://arxiv.org/abs/1712.04555)
-
-## Model
+* __Preprint__: [HAL](https://hal-lirmm.ccsd.cnrs.fr/lirmm-02010805)
+* __Proceedings__: [IEEE](https://ieeexplore.ieee.org/document/8506601) (paywall)
 
-<img width="360" align="right" alt="screen shot 2017-11-21 at 12 35 28" src="https://user-images.githubusercontent.com/72940/33072095-60d1929c-cebe-11e7-91de-1dff3fc50bde.png">
-
-In this work a recurrent neural network was trained to generate speaker count estimates for 0 to 10 speakers. The model uses three Bi-LSTM layers inspired by a model for singing voice separation by [Leglaive15](https://hal.archives-ouvertes.fr/hal-01110035).
+### 2018: ICASSP
 
+* __Title__: Classification vs. Regression in Supervised Learning for Single Channel
+  Speaker Count Estimation
+* __Authors__: [Fabian-Robert Stöter](https://faroit.com), Soumitro Chakrabarty, Bernd Edler, Emanuël
+  A. P. Habets
+* __Preprint__: [arXiv 1712.04555](http://arxiv.org/abs/1712.04555)
+* __Proceedings__: [IEEE](https://ieeexplore.ieee.org/document/8462159) (paywall)
 
 ## Demos
 
@@ -34,22 +39,22 @@ This repository provides the [keras](https://keras.io/) model to be used from Py
 [Docker](https://www.docker.com/) makes it easy to reproduce the results and install all requirements. If you have docker installed, run the following steps to predict a count from the provided test sample.
 
 * Build the docker image: `docker build -t countnet .`
-* Predict from example: `docker run -i countnet python predict_audio.py examples/5_speakers.wav`
+* Predict from example: `docker run -i countnet python predict.py --model CRNN examples/5_speakers.wav`
 
 ### Manual Installation
 
-Make sure you have Python 3.6, `libsndfile` and `libhdf5` installed on your system (e.g. through Anaconda). To install the requirements run
+To install the requirements using Anaconda Python, run
 
-`pip install -r requirements.txt`
+`conda env create -f env.yml`
 
-You can now run the command line script and process wav files
+You can now run the command line script and process wav files using the pre-trained model `CRNN` (best performance).
 
-`python predict_audio.py examples/5_speakers.wav`
+`python predict.py examples/5_speakers.wav --model CRNN`
 
 ## Reproduce Paper Results using the LibriCount Dataset
 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1216072.svg)](https://doi.org/10.5281/zenodo.1216072)
 
-The full test dataset is available for download on Zenodo.
+The full test dataset is available for download on [Zenodo](https://doi.org/10.5281/zenodo.1216072).
 
 ### LibriCount10 0dB Dataset
 
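The hunk above switches the CLI to `predict.py` with a `--model` flag. Since the script wraps a Keras model, counting can presumably also be driven from Python directly; the sketch below mirrors how `eval.py` (added later in this commit) loads the model and calls `predict.count`, and assumes `predict.py`, `models/CRNN.h5`, and `models/scaler.npz` from this repository are reachable from the working directory.

```python
# Hedged sketch: programmatic speaker counting, mirroring eval.py in this commit.
import os

import numpy as np
import soundfile as sf
import keras
import sklearn.preprocessing
from keras import backend as K

import predict  # this repository's predict.py

# load the pre-trained CRNN (the custom objects are required to deserialise it)
model = keras.models.load_model(
    os.path.join('models', 'CRNN.h5'),
    custom_objects={'class_mae': predict.class_mae, 'exp': K.exp}
)

# load the feature standardisation parameters shipped with the models
scaler = sklearn.preprocessing.StandardScaler()
with np.load(os.path.join('models', 'scaler.npz')) as data:
    scaler.mean_ = data['arr_0']
    scaler.scale_ = data['arr_1']

# read the example file and downmix to mono
audio, rate = sf.read('examples/5_speakers.wav', always_2d=True)
audio = np.mean(audio, axis=1)

print('Estimated speaker count:', predict.count(audio, model, scaler))
```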

@@ -85,6 +90,24 @@ In the following example a speaker count of 3 speakers is the ground truth.
 ]
 ```
 
+### Running evaluation
+
+`python eval.py ~/path/to/LibriCount10-0dB --model CRNN` outputs the _mean absolute error_ per class and averaged.
+
+### Pretrained models
+
+| Name     | Number of Parameters | MAE on test set |
+|----------|----------------------|-----------------|
+| `RNN`    | 0.31M                | 0.38            |
+| `F-CRNN` | 0.06M                | 0.36            |
+| `CRNN`   | 0.35M                | __0.27__        |
+
+
+## FAQ
+
+#### Is it possible to convert the model to run on a modern version of keras with tensorflow backend?
+
+Yes, it's possible, but I was unable to get identical results when converting the model. I tried this [guide](https://github.com/keras-team/keras/wiki/Converting-convolution-kernels-from-Theano-to-TensorFlow-and-vice-versa), but it still didn't reach the same performance as keras 1.2.2 with theano.
 
 ## License
 
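On the FAQ above: the linked guide is built around flipping convolution kernels between the Theano and TensorFlow weight layouts. A minimal sketch of that step, assuming the Keras 1.2.2 API (`layer.W`, `convert_kernel` from `keras.utils.np_utils`) and a hypothetical output path, could look as follows; as the FAQ notes, this alone did not reproduce the theano results here.

```python
# Hedged sketch: flip convolution kernels for a backend switch (Keras 1 API).
import os

import keras
from keras import backend as K
from keras.utils.np_utils import convert_kernel

import predict  # this repository's predict.py

model = keras.models.load_model(
    os.path.join('models', 'CRNN.h5'),
    custom_objects={'class_mae': predict.class_mae, 'exp': K.exp}
)

for layer in model.layers:
    # only convolution layers store backend-specific kernel layouts
    if layer.__class__.__name__ in ('Convolution1D', 'Convolution2D'):
        converted_w = convert_kernel(K.get_value(layer.W))
        K.set_value(layer.W, converted_w)

model.save_weights('models/CRNN_tf_weights.h5')  # hypothetical output path
```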

env.yml (49 additions, new file)

```yaml
name: countnet
channels:
  - defaults
dependencies:
  - ca-certificates=2020.1.1
  - certifi=2020.4.5.2
  - intel-openmp=2019.4
  - libcxx=10.0.0
  - libedit=3.1.20191231
  - libffi=3.3
  - mkl=2019.4
  - mkl-service=2.3.0
  - ncurses=6.2
  - openssl=1.1.1g
  - pip=20.1.1
  - python=3.6.10
  - readline=8.0
  - setuptools=47.3.0
  - six=1.15.0
  - sqlite=3.32.2
  - tk=8.6.10
  - wheel=0.34.2
  - xz=5.2.5
  - zlib=1.2.11
  - pip:
    - audioread==2.1.8
    - backports-weakref==1.0rc1
    - bleach==1.5.0
    - cffi==1.14.0
    - decorator==4.4.2
    - h5py==2.10.0
    - html5lib==0.9999999
    - joblib==0.15.1
    - keras==1.2.2
    - librosa==0.7.2
    - llvmlite==0.32.1
    - markdown==2.2.0
    - numba==0.43.0
    - numpy==1.18.5
    - protobuf==3.12.2
    - pycparser==2.20
    - pyyaml==5.3.1
    - resampy==0.2.2
    - scikit-learn==0.22
    - scipy==1.4.1
    - soundfile==0.10.3.post1
    - theano==0.9.0
    - werkzeug==1.0.1
    - tqdm
```
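Given this pinned environment, `conda env create -f env.yml` (the command the updated README now uses) builds the environment, which can then presumably be entered with `conda activate countnet`, the `name:` declared on the first line. Note that the `pip:` section pins `keras==1.2.2` and `theano==0.9.0`, the combination the FAQ in the README cites as the reference for the pre-trained models.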

eval.py (101 additions, new file)

```python
import numpy as np
import soundfile as sf
import argparse
import os
import keras
import sklearn.preprocessing
import glob
import predict
import json
from keras import backend as K

import tqdm

eps = np.finfo(np.float).eps


def mae(y, p):
    # mean absolute error over all items
    return np.mean([abs(a - b) for a, b in zip(p, y)])


def mae_by_count(y, p):
    # mean absolute error for each true speaker count 0..max(y)
    diffs = []
    for c in range(0, int(np.max(y)) + 1):
        ind = np.where(y == c)
        diff = mae(y[ind], np.round(p[ind]))
        diffs.append(diff)

    return diffs


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Load keras model and predict speaker count'
    )
    parser.add_argument(
        'root',
        help='root dir of the evaluation data set'
    )

    parser.add_argument(
        '--model', default='CRNN',
        help='model name'
    )

    args = parser.parse_args()

    # load model
    model = keras.models.load_model(
        os.path.join('models', args.model + '.h5'),
        custom_objects={
            'class_mae': predict.class_mae,
            'exp': K.exp
        }
    )

    # print model configuration
    model.summary()

    # load standardisation parameters
    scaler = sklearn.preprocessing.StandardScaler()
    with np.load(os.path.join("models", 'scaler.npz')) as data:
        scaler.mean_ = data['arr_0']
        scaler.scale_ = data['arr_1']

    input_files = glob.glob(os.path.join(
        args.root, 'test', '*.wav'
    ))

    y_trues = []
    y_preds = []

    for input_file in tqdm.tqdm(input_files):
        # ground-truth metadata lives next to each wav file
        metadata_file = os.path.splitext(
            os.path.basename(input_file)
        )[0] + ".json"
        metadata_path = os.path.join(args.root, 'test', metadata_file)

        with open(metadata_path) as data_file:
            data = json.load(data_file)
            # add ground truth (one JSON entry per speaker)
            y_trues.append(len(data))

        # read audio
        audio, rate = sf.read(input_file, always_2d=True)

        # downmix to mono
        audio = np.mean(audio, axis=1)

        count = predict.count(audio, model, scaler)
        # add prediction
        y_preds.append(count)

    y_preds = np.array(y_preds)
    y_trues = np.array(y_trues)

    mae_k = mae_by_count(y_trues, y_preds)
    print("MAE per Count: ", {k: v for k, v in enumerate(mae_k)})
    print("Mean MAE", mae(y_trues, y_preds))
```

models/CNN.h5 (33.1 MB, binary file not shown)

models/CRNN.h5 (4.08 MB, binary file not shown)

models/F-CRNN.h5 (832 KB, binary file not shown)

models/RNN.h5 (3.64 MB, binary file not shown)
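As `eval.py` resolves the checkpoint with `os.path.join('models', args.model + '.h5')`, the `--model` flag selects one of these files by basename, so `CNN`, `CRNN`, `F-CRNN`, and `RNN` are the available choices; only the latter three appear in the README's MAE table.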
