Questions tagged [baum-welch]

The Baum–Welch algorithm is used to find the unknown parameters of a hidden Markov model (HMM).

39 questions
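For orientation, a minimal sketch of what Baum-Welch training looks like in practice, using the hmmlearn library that comes up in several questions below. The data and model sizes are made up for illustration; the discrete-emission class is CategoricalHMM in recent hmmlearn releases (older ones used MultinomialHMM for this):

```python
import numpy as np
from hmmlearn import hmm

# Toy discrete observation sequence over a 3-symbol alphabet
# (made-up data, for illustration only).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 1))

# fit() runs Baum-Welch (EM) to estimate the unknown parameters.
model = hmm.CategoricalHMM(n_components=2, n_iter=100, random_state=0)
model.fit(X)

print(model.startprob_)     # estimated initial distribution pi
print(model.transmat_)      # estimated transition matrix A
print(model.emissionprob_)  # estimated emission matrix B
```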
21 votes · 2 answers
What are the differences between the Baum-Welch algorithm and Viterbi training?
I am currently using Viterbi training for an image segmentation problem. I wanted to know what the advantages/disadvantages are of using the Baum-Welch algorithm instead of Viterbi training.

Mykie
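The distinction the answers turn on, in code form: Baum-Welch accumulates expected ("soft") counts from the forward-backward posteriors, whereas Viterbi training accumulates hard counts along the single best state path. A minimal sketch of the two emission-count updates for a discrete HMM (function and variable names are mine, not from the question):

```python
import numpy as np

def soft_emission_counts(gamma, obs, n_symbols):
    # Baum-Welch E-step: gamma[t, i] = P(state_t = i | all observations),
    # so every time step contributes fractional counts to every state.
    n_states = gamma.shape[1]
    counts = np.zeros((n_states, n_symbols))
    for t, o in enumerate(obs):
        counts[:, o] += gamma[t]
    return counts

def hard_emission_counts(path, obs, n_states, n_symbols):
    # Viterbi training: every time step contributes one whole count
    # to the single state on the best (Viterbi) path.
    counts = np.zeros((n_states, n_symbols))
    for s, o in zip(path, obs):
        counts[s, o] += 1
    return counts
```

Viterbi training is cheaper per iteration but optimizes the likelihood of the single best path, while Baum-Welch ascends the full data likelihood (both only to a local optimum).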
5 votes · 1 answer
Viterbi and forward-backward algorithm in HMM
I have been learning about HMMs recently and got confused by the training problem (estimating the model parameters and hidden states given an outcome sequence).
As far as I know, both Viterbi learning and Baum-Welch (the forward-backward algorithm) are used to estimate…

unicorn
5 votes · 2 answers
Simple Explanation of Baum-Welch/Viterbi
I'm looking for as simple an explanation as possible of Baum-Welch and Viterbi for HMMs, with a straightforward, well-annotated example. Almost all of the explanations I find on the net invariably jump from the beginning into an almost completely…

A D
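For reference, the update equations such an explanation usually builds up to, in Rabiner's notation ($\alpha$/$\beta$ from the forward/backward passes, $a_{ij}$ transitions, $b_j(o)$ emissions):

$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_j \alpha_t(j)\,\beta_t(j)}, \qquad \xi_t(i,j) = \frac{\alpha_t(i)\,a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j)}{\sum_k \sum_l \alpha_t(k)\,a_{kl}\,b_l(o_{t+1})\,\beta_{t+1}(l)}$$

$$\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \hat{b}_j(k) = \frac{\sum_{t\,:\,o_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}$$

Viterbi instead keeps, for each state and time, only the probability of the single best path ending there, and backtracks at the end.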
5 votes · 2 answers
Scaling step in Baum-Welch algorithm
I am implementing the Baum-Welch algorithm for training a hidden Markov process, mainly to better understand the training process.
I have implemented the iterative procedures described in Rabiner's classic paper. My implementation is in Wolfram…

Vahagn Tumanyan
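The standard remedy, from Section V of Rabiner's paper: normalize $\alpha_t$ at every step and keep the scaling factors, whose logs sum to the log-likelihood. A minimal sketch of the scaled forward pass (names are illustrative):

```python
import numpy as np

def forward_scaled(pi, A, B, obs):
    # pi: (N,) initial distribution; A: (N, N) transitions;
    # B: (N, M) emission probabilities; obs: integer observation sequence.
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    c = np.zeros(T)                  # per-step scaling factors
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]             # rescale so each row sums to 1
    # log P(obs) = sum of log scaling factors; never underflows.
    return alpha, np.log(c).sum()
```

The same factors $c_t$ are reused to scale the backward variables, so the $\gamma$ and $\xi$ quantities come out correctly normalized.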
4 votes · 1 answer
What are some applications of unsupervised HMMs?
Supervised HMMs can be applied to many problems like POS tagging and OCR (optical character recognition).
I've learned that HMMs can be trained without labels using EM (the Baum-Welch algorithm); what are some example applications of this unsupervised…

dontloo
4 votes · 1 answer
Baum-Welch and hidden Markov models: Continuous observation densities in HMMs
I am currently trying to understand how parameters are re-estimated for hidden Markov models (HMMs) using expectation-maximization (EM).
What I seem to have trouble understanding is what the symbol emission probability actually models. …

Bob Burt
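With continuous observations, each state's emission distribution becomes a density (e.g. a Gaussian), and the discrete count update is replaced by $\gamma$-weighted means and covariances. A sketch of that M-step, assuming the posteriors $\gamma$ have already been computed by forward-backward (names are illustrative):

```python
import numpy as np

def reestimate_gaussians(gamma, X):
    # gamma: (T, N) state posteriors from forward-backward;
    # X: (T, D) continuous observations.
    # Each state's mean/covariance is a gamma-weighted average:
    # the continuous analogue of the discrete emission-count update.
    w = gamma / gamma.sum(axis=0)          # weights per state sum to 1
    means = w.T @ X                        # (N, D) weighted means
    covs = []
    for i in range(gamma.shape[1]):
        d = X - means[i]
        covs.append((w[:, i, None] * d).T @ d)
    return means, np.stack(covs)
```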
4 votes · 1 answer
Baum-Welch training of an HMM
I have 200k sequences, and each element of a sequence is a vector of length 200. I plan to learn an HMM from this data, using the Baum-Welch EM algorithm to infer transition and emission probabilities. I wanted to know if I can do the fitting in…

Subraveti Suraj
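If hmmlearn is an option, its fit() handles many sequences at once: concatenate them along the time axis and pass the per-sequence lengths. A sketch with toy sizes standing in for the asker's 200k sequences of 200-dimensional vectors:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Two toy sequences of 200-dimensional vectors (stand-ins for the real data).
seq1 = rng.standard_normal((50, 200))
seq2 = rng.standard_normal((80, 200))

X = np.vstack([seq1, seq2])           # concatenate along time
lengths = [len(seq1), len(seq2)]      # tells fit() where sequences end

model = hmm.GaussianHMM(n_components=4, covariance_type='diag', n_iter=20)
model.fit(X, lengths)                 # Baum-Welch across both sequences
```

Diagonal covariances keep the per-state parameter count linear in the 200 dimensions, which matters at this scale.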
4 votes · 1 answer
Re-estimation of emission probabilities in HMMs
I am (still) confused about the re-estimation procedure for emissions in HMMs with Baum-Welch. I have already posted two questions on this general topic and thought I had cleared up my confusion, but not so. At least not entirely. The…

lo tolmencre
3 votes · 0 answers
Initialisation strategies for learning Hidden Markov Models
I used the hmmlearn library to initialize an HMM (hidden Markov model), sampled observations from it, and then used the sampled data to re-estimate the parameters of the HMM.
For re-estimating the parameters, I randomly initialized them and…

abhishek
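In hmmlearn, the init_params argument controls which parameters fit() re-initializes before running Baum-Welch; passing an empty string makes it keep values you set yourself, which is the hook for comparing initialisation strategies. A sketch with made-up numbers:

```python
import numpy as np
from hmmlearn import hmm

# init_params='' tells fit() to keep the hand-set values below
# instead of re-initializing them itself.
model = hmm.GaussianHMM(n_components=2, covariance_type='diag',
                        init_params='', n_iter=50)
model.startprob_ = np.array([0.6, 0.4])
model.transmat_ = np.array([[0.9, 0.1],
                            [0.2, 0.8]])
model.means_ = np.array([[0.0], [5.0]])
model.covars_ = np.array([[1.0], [1.0]])

# Toy 1-D data drawn from two well-separated modes.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])[:, None]
model.fit(X)   # Baum-Welch starting from the parameters above
```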
3 votes · 1 answer
Expectation Maximisation vs Expectation Propagation in the context of Bayesian Networks
I am confused about the Expectation Maximisation and Expectation Propagation algorithms in the context of Bayesian networks, especially about whether one comprises the other.
What is the difference between expectation maximisation and expectation propagation?…

Pumpkin
3 votes · 1 answer
Forward-backward algorithm for HMM
I am currently studying this paper, in which I am having some problems understanding the purpose of the forward-backward algorithm.
First of all, why even have both forward and backward passes?
It seems to me that after one has computed the forward…

Bob Burt
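The usual resolution: the forward pass alone only gives $P(o_{1:t}, s_t)$, i.e. each state is conditioned on the observations up to that point; the backward pass supplies the future, and only the product conditions each state on the whole sequence. A self-contained sketch (unscaled for clarity, so suitable only for short sequences):

```python
import numpy as np

def posteriors(pi, A, B, obs):
    # Returns gamma[t, i] = P(state_t = i | o_1, ..., o_T).
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                # forward: past evidence
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):       # backward: future evidence
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                 # both directions combined
    return gamma / gamma.sum(axis=1, keepdims=True)
```

Filtering (forward only) answers "where is the chain now?"; smoothing (forward and backward) answers "where was the chain at time t, given everything?", and it is the smoothed posteriors that Baum-Welch needs.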
2 votes · 1 answer
Optimizing HMM log-likelihood with time-dependent prior
I have an HMM (hidden Markov model) which emits an observation Z.
The parameters of the HMM are $\boldsymbol\theta$: $$\boldsymbol\theta = \{\boldsymbol{A},\boldsymbol{B},\pi\}$$ where $\boldsymbol{A}$ is the transition matrix, $\boldsymbol{B}$ is the…

robotlover
2 votes · 1 answer
MFCCs and MoG-HMMs for speech recognition
BACKGROUND
MFCCs are coefficients which represent the most important characteristics of speech, and about 12 of them are used to model one 512-point-long frame (of speech). Along with them you would use delta coefficients, which track the change of the MFCCs…

Desperado
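For concreteness, that feature pipeline is a few lines with librosa; the library choice and the bundled example clip are my own illustration, not part of the question:

```python
import numpy as np
import librosa

# Any speech/audio clip works; librosa ships small example files.
y, sr = librosa.load(librosa.ex('trumpet'))

# ~12 MFCCs per frame, with a 512-sample analysis window.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=512)
delta = librosa.feature.delta(mfcc)    # frame-to-frame change of the MFCCs

features = np.vstack([mfcc, delta])    # 24-dimensional observation vectors
print(features.shape)                  # (24, n_frames)
```

These 24-dimensional vectors are the kind of continuous observations a mixture-of-Gaussians HMM would model per state.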
2 votes · 0 answers
HMM (Baum-Welch) - convergence rate differences between the transition and output matrices
I am trying to learn more about the convergence properties of the Baum-Welch algorithm for estimating the HMM parameters.
I ran a test comparing the convergence of both the transition and output matrices as a function of the sequence length using…

Goek
2 votes · 1 answer
How to train an HMM with two different sequences using the Baum-Welch algorithm
I am using an HMM to visualize drinking gestures for different container types.
I began training the HMM with one sequence corresponding to one container type, but I now want to visualize it in Python with different container types.
How can I map…

Emna Jaoua