Training the boundary #39
SuchitReddi started this conversation in General
Replies: 1 comment 8 replies
We use both classes to train. The idea of one-class learning is to make the bona fide representation compact and to push the spoofing attacks far away from the boundary. This is consistent with the third case in the definition given by [1], where the negative class is not statistically representative in the training data.

[1] Khan, S., & Madden, M. (2014). One-class classification: Taxonomy of study and review of techniques. The Knowledge Engineering Review, 29(3), 345-374. doi:10.1017/S026988891300043X
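To make this concrete, here is a minimal pure-Python sketch of an OC-Softmax-style loss. It is an illustration of the idea, not the repository's implementation: a single target-direction weight vector `w` is assumed, and the margin and scale values (`m0`, `m1`, `alpha`) are illustrative defaults. Bona fide embeddings (label 0) are pulled above the larger margin `m0`, while spoof embeddings (label 1) are pushed below the smaller margin `m1` — so both classes contribute to training, even though only the bona fide class is made compact.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def oc_softmax_loss(embeddings, labels, w, alpha=20.0, m0=0.9, m1=0.2):
    """Sketch of a one-class softmax loss.

    label 0 = bona fide: penalized when its cosine score falls below m0
              (keeps the target class compact around direction w).
    label 1 = spoof: penalized when its cosine score rises above m1
              (pushes non-target data outside the boundary).
    """
    total = 0.0
    for x, y in zip(embeddings, labels):
        s = cosine(x, w)                      # score of embedding vs. target direction
        margin = (m0 - s) if y == 0 else (s - m1)
        total += math.log1p(math.exp(alpha * margin))
    return total / len(embeddings)
```

With this formulation, a spoof embedding that lies close to the bona fide direction incurs a large loss, while one pushed away from it incurs almost none, which is exactly the "both classes train the boundary" behavior described above.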
In the paper, it is mentioned that:

"The key idea of one-class classification methods is to capture the target class distribution and set a tight classification boundary around it, so that all non-target data would be placed outside the boundary."

What I understood was that we take only bona fide speech to train and create a tight boundary around it using the OC-Softmax loss function.
But when I looked at the ASVspoof dataset, the train file in the LA CM protocols contains both bona fide and spoofed audio files.
So what I can't conclude is: does one-class classification mean we use only the bona fide speech to form the classification boundary?
I am confused about how we train the boundary: using only bona fide speech, or using both bona fide and spoof data.
Can you clarify this for me @yzyouzhang ?