Hello, I don’t know if this repo is still active, but in case it helps: I used this repo for my project and found a way to improve the loss/accuracy simply by L2-normalizing the output of the encoder.

With it I currently get 97% accuracy and 0.02 validation BCE loss after training for 25 epochs on a mix of the LibriSpeech and CommonVoice (fr) datasets: 360 speakers in the train set and 150 in the validation set, with 200 pairs per speaker (100 same / 100 different), a batch size of 32 (16 same / 16 different), and embedding dim 64.
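For reference, here is a minimal sketch of the normalization step I mean, written with NumPy for clarity (the function name and shapes are illustrative, not from this repo). Each embedding produced by the encoder is rescaled to unit L2 norm, so pairwise dot products become cosine similarities, which tends to stabilize metric-learning losses:

```python
import numpy as np

def l2_normalize(embeddings: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Rescale each row (one embedding) to unit L2 norm.

    eps guards against division by zero for all-zero vectors.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

# Hypothetical encoder output: a batch of 32 embeddings of dim 64,
# matching the batch size / embedding dim mentioned above.
emb = np.random.randn(32, 64)
emb_n = l2_normalize(emb)
# Every row of emb_n now has norm 1, so emb_n @ emb_n.T gives
# cosine similarities between pairs.
```

In PyTorch the same thing is a one-liner applied to the encoder output: `torch.nn.functional.normalize(x, p=2, dim=1)`.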