@@ -28,6 +28,7 @@ Table of Contents
 * [Searching Hyperparameters](#searching-hyperparameters)
 * [Examining Errors](#examining-errors)
 * [Ensemble Models](#ensemble-models)
+* [Transfer Learning](#transfer-learning)
 * [Testing Densely](#testing-densely)
 * [Discovering Novel Sounds](#discovering-novel-sounds)
 * [Overlapped Classes](#overlapped-classes)
@@ -1502,6 +1503,99 @@ the thresholds file of one of the constituent models, and (3) calculate the
 Manually copy this file into the newly created ensemble folder, and use
 it whenever classifying recordings with this ensemble model.
 
+## Transfer Learning ##
+
+Manual annotation can be a lot of work. Fortunately, the effort spent doing
+so can be reduced by leveraging someone else's work. Let's say your colleague
+Alice has trained a model to do a task similar to what you need. You can take
+her model, keep the first few layers intact with their learned weights, replace
+the last couple of layers with randomly initialized ones of your own, and then
+iteratively train and annotate as described above. The features in the early
+layers will already be quite rich, and so the new final layers will not need
+as much ground truth data to learn your task. Moreover, if your colleagues Bob
+and Carol also have models trained on similar tasks, you can combine all three
+in the same fashion.
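+
+As a minimal sketch of this idea, assume Alice's model is a saved Keras model
+(the checkpoint path, the layer index, and the number of classes below are all
+hypothetical):
+
+```python
+import tensorflow as tf
+
+# load your colleague's pretrained model
+pretrained = tf.keras.models.load_model("alice-model")  # hypothetical path
+
+# keep the first few layers, with their learned weights, as a feature extractor
+features = tf.keras.Model(inputs=pretrained.input,
+                          outputs=pretrained.layers[-3].output)
+features.trainable = False  # or True, to fine-tune the borrowed weights too
+
+# replace the last couple of layers with randomly initialized ones of your own
+model = tf.keras.Sequential([
+    features,
+    tf.keras.layers.Dense(64, activation="relu"),
+    tf.keras.layers.Dense(4),  # one unit per sound class in *your* task
+])
+model.compile(optimizer="adam",
+              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
+```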
+
+SongExplorer comes with an architecture plugin (see [Customizing with
+Plug-ins](#customizing-with-plug-ins) for details on how plugins work) called
+"ensemble-transfer" that makes all this easy. Modify the `architecture_plugin`
+variable in your "configuration.py" file to be "ensemble-transfer". Then in
+the SongExplorer GUI specify (1) the checkpoint(s) of the pretrained model(s)
+you want to use, (2) whether you want to update the weights of the pretrained
+model(s) when training on your data, or just those of the new layers, (3) how
+many of the layers of the pretrained model(s) you want to use, (4) how many new
+convolutional layers you want to add (for each layer: kernel time size x kernel
+frequency size x num. features; e.g. "5x5x32,10x10x64"), (5) how many new dense
+layers you want to add (for each layer: num. units; e.g. "128,32,8"), and (6)
+the dropout rate (e.g. 50). Then iteratively train and fix mistakes as before.
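+
+The "configuration.py" change is a single line:
+
+```python
+# configuration.py: select the transfer-learning architecture plugin
+architecture_plugin = "ensemble-transfer"
+```
+
+And as an illustration of the layer-spec syntax (a hypothetical sketch, not
+the plugin's actual code), convolutional layers of "5x5x32,10x10x64", dense
+layers of "128,32,8", and a dropout rate of 50 describe roughly this stack:
+
+```python
+import tensorflow as tf
+
+new_layers = [
+    tf.keras.layers.Conv2D(32, (5, 5)),    # 5x5x32: 5 time x 5 freq kernel, 32 features
+    tf.keras.layers.Conv2D(64, (10, 10)),  # 10x10x64
+    tf.keras.layers.Flatten(),
+    tf.keras.layers.Dropout(0.5),          # a dropout rate of 50 (percent)
+    tf.keras.layers.Dense(128),            # dense layers of
+    tf.keras.layers.Dense(32),             #   128, 32, and
+    tf.keras.layers.Dense(8),              #   8 units
+]
+```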
+
+Note that the pretrained models do *not* necessarily have to use the same
+sampling rate as your recordings; the "ensemble-transfer" plugin will
+automatically insert a resampling layer if necessary.
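+
+For intuition, such a layer is roughly equivalent to resampling the recordings
+themselves, as in this sketch (the rates here are hypothetical):
+
+```python
+import numpy as np
+from scipy.signal import resample_poly
+
+rate_in, rate_out = 10000, 5000    # your recordings vs. the pretrained model
+audio = np.random.randn(rate_in)   # one second of (fake) recording
+resampled = resample_poly(audio, up=rate_out, down=rate_in)
+assert len(resampled) == rate_out  # still one second, now at 5000 Hz
+```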
+
 ## Discovering Novel Sounds ##
 
 After amassing a sizeable amount of ground truth one might wonder whether one