I'm using Hidden Markov Models to build a gesture-recognition application.
So far I have gathered samples and, using a library, trained one model per gesture (the samples were passed through a Baum–Welch function to fit each model).
At runtime I then score the incoming gesture data against each model, but I get the wrong result back. Previously I have only ever used an HMM trained on MFCC features.
Does the live input data also have to be passed through the Baum–Welch function before running the Viterbi decoder? With MFCC features I never had to do that, but I was led to believe that an HMM can be trained on other kinds of features too; in this case the raw data is just an array of doubles.
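For reference, here is a minimal numpy sketch of what I understand the decoding step to be (this is not my actual library, just an illustration): Viterbi only uses the already-trained parameters (initial probabilities, transition matrix, per-frame emission log-likelihoods) and the new observation sequence, with no further Baum–Welch pass.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state path through a trained HMM.

    log_pi : (N,)   log initial state probabilities
    log_A  : (N, N) log transition matrix, log_A[i, j] = log P(j | i)
    log_B  : (T, N) log emission likelihood of each frame under each state
              (for live data this comes from the trained emission model,
              e.g. Gaussians over the raw double-valued features).
    Returns the best state path and its log-probability.
    """
    T, N = log_B.shape
    delta = np.zeros((T, N))          # best path score ending in each state
    psi = np.zeros((T, N), dtype=int) # backpointers
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        # scores[i, j]: score of being in i at t-1 and moving to j
        scores = delta[t - 1][:, None] + log_A
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + log_B[t]
    # backtrack from the best final state
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1].max()

# Toy 2-state example: emissions clearly favour state 0, 0, then 1.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.9, 0.1],
                [0.1, 0.9]])
log_B = np.log([[0.9, 0.1],
                [0.9, 0.1],
                [0.05, 0.95]])
path, best_logprob = viterbi(log_pi, log_A, log_B)
print(path)  # → [0 0 1]
```

For classification I compute this (or the forward log-likelihood) for every gesture model and pick the model with the highest score; the only place Baum–Welch appears is in training.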