How can the backoff weights of a language model be positive?
A. Here is how positive numbers can appear in place of backoff weights in the LM: the numbers you see in an ARPA-format LM (the format used in CMU Sphinx) are not probabilities. They are base-10 logarithms, so the file stores log10(probability) and log10(backoff weight). Backoff weights are NOT probabilities; they are normalization factors, and nothing prevents them from exceeding 1, in which case their log10 is positive. Consider a 4-word vocabulary A B C D.
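The worked example appears to be truncated here, but the arithmetic is easy to reconstruct. Below is a minimal Python sketch using the standard Katz backoff formulation for the 4-word vocabulary above; the unigram and discounted bigram values are made up purely for illustration. The backoff weight alpha(h) is a ratio of two leftover probability masses, so it can come out greater than 1, and the log10 value stored in the ARPA file is then positive.

```python
import math

# Toy 4-word vocabulary, as in the answer above (values are illustrative).
unigram = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}

# Hypothetical discounted bigram probabilities for the history "A":
# assume only B and C were ever observed after A, and that discounting
# has already shaved some mass off their maximum-likelihood estimates.
seen_after_A = {"B": 0.25, "C": 0.15}

# Katz backoff weight for history "A": the ratio of the probability mass
# left over in the bigram distribution to the mass left over in the
# unigram distribution, both taken over the unseen continuations.
leftover_bigram = 1.0 - sum(seen_after_A.values())               # 0.60
leftover_unigram = 1.0 - sum(unigram[w] for w in seen_after_A)   # 0.50

alpha = leftover_bigram / leftover_unigram      # 1.2 -- greater than 1
print(f"alpha(A)       = {alpha:.3f}")                 # 1.200
print(f"log10 alpha(A) = {math.log10(alpha):.3f}")     # +0.079, positive
```

In this sketch alpha exceeds 1 because the words actually seen after "A" carry more unigram mass (0.50) than the discounted bigram mass assigned to them (0.40), so the backoff distribution has to be scaled up over the unseen words. Whenever that happens, the ARPA file records a positive log10 backoff weight.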