Conversation

@larissakl (Contributor)

Adds a config parameter to Search::TreeTimesyncBeamSearch for defining a word exit penalty, which is added to the overall hypothesis score at each word end along with the LM score.
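
For illustration, a minimal sketch of what such a config parameter could look like, assuming the usual RASR Core::ParameterFloat pattern (the parameter name, default value, and member name are assumptions, not necessarily what the PR uses):

    // Hypothetical sketch; the actual parameter name and default in the PR may differ.
    const Core::ParameterFloat TreeTimesyncBeamSearch::paramWordExitPenalty(
            "word-exit-penalty",
            "Penalty added to the hypothesis score whenever a (non-silence) word end is reached.",
            0.0);

    // Read once in the constructor and stored in a member, e.g.
    //   wordExitPenalty_(paramWordExitPenalty(config))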

@larissakl (Contributor, Author)

I also added a separate exit penalty for silence.
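
Presumably as a second config parameter next to the word exit penalty; a minimal sketch under the same assumptions (the name is invented here for illustration):

    // Hypothetical sketch; the actual parameter name in the PR may differ.
    const Core::ParameterFloat TreeTimesyncBeamSearch::paramSilenceExitPenalty(
            "silence-exit-penalty",
            "Penalty added to the hypothesis score whenever a silence word end is reached.",
            0.0);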

@curufinwe changed the title from "Add word-exit-penalty to Search::TreeTimesyncBeamSearch" to "Add word/silence-exit-penalty to Search::TreeTimesyncBeamSearch" on Nov 4, 2025
Lm::Score lmScore = languageModel_->score(wordEndExtension.lmHistory, st);
wordEndExtension.score += lmScore;
wordEndExtension.lmScore = lmScore;

Contributor (review comment on the quoted lines):

You could move this part out of the if (sts.size() != 0) block; then if (lemma == lexicon_->specialLemma("silence")) will be evaluated only once:

                if (sts.size() != 0) {
                    require(sts.size() == 1);
                    const Bliss::SyntacticToken* st = sts.front();
                    // Add the LM score
                    Lm::Score lmScore = languageModel_->score(wordEndExtension.lmHistory, st);
                    wordEndExtension.score += lmScore;
                    wordEndExtension.lmScore = lmScore;
                }

                // Add exit penalty for silence or for non-silence word
                if (lemma == lexicon_->specialLemma("silence")) {
                    wordEndExtension.score += silencePenalty_;
                }
                else {
                    wordEndExtension.score += wordExitPenalty_;
                }

@SimBe195 (Collaborator) commented on Nov 5, 2025

I would prefer to see this implemented as a new transition type instead of a config parameter so that the score is retrieved from the labelScorer_.
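
A rough sketch of that direction, assuming the label scorer exposes a transition-type enum that is passed along with each scoring request (the enum values and the call shape below are assumptions for illustration, not the existing labelScorer_ API):

    // Hypothetical sketch; the real transition-type enum and scoring interface may differ.
    enum TransitionType {
        LABEL_TO_LABEL,
        LABEL_LOOP,
        WORD_EXIT,     // proposed: exiting a non-silence word
        SILENCE_EXIT,  // proposed: exiting the silence lemma
    };

    // At a word end, the exit score would then come from the label scorer instead of a
    // search-side config parameter, roughly:
    //   Score exitScore = labelScorer_->score(context, label, isSilence ? SILENCE_EXIT : WORD_EXIT);
    //   wordEndExtension.score += exitScore;

This way the exit penalties would be handled on the same path as the other per-transition scores.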
