
I am using an HMM to model drinking gestures for different container types.

I began by training an HMM on one sequence corresponding to one container type, but now I want to visualize it with Python across different container types.

How can I map different sequences into one model?

For the training data, do I have to insert them as one sequence (covering the different container types)?

Ferdi
Emna Jaoua
  • "I began training HMM with one sequence" - This is clearly not a good idea. Your HMM will suffer from overfitting... You need more than one sequence to train a model. So your last guess is right! However, if you know the containers you are studying, the best way of doing things might be to train one HMM for each container type. HMMs are my field of research; I can write a more complete answer later today. – Eskapp Dec 19 '16 at 15:02
  • Thank you so much for your reply. Actually, after posting I tried to make one model (and of course add transitions) for drinking from two different containers. I tried a topology which I am not sure is suitable or not. Then I trained my model on two training sets (for drinking from two containers), calling model.train() twice, but after that I don't know how I can evaluate my top-level model. – Emna Jaoua Dec 19 '16 at 15:46

1 Answer


I am missing some information about your problem for a complete answer, but assuming that you have a finite, not-too-large number of known container types, here is what I would suggest you start with.

  • Gather multiple training sequences for each type of container

  • Train one HMM for each type of container. Depending on the nature of your sequence samples (discrete or continuous values?), use either discrete emission probabilities or Gaussian ones. If your sequences always start from the "same moment" in the drinking action, a left-to-right topology could be appropriate (for this, you often just need to initialize the transition matrix as an upper-triangular matrix). With Python, the hmmlearn library provides the functions needed to train an HMM in all the aforementioned cases (discrete, Gaussian, left-to-right). Here's the doc. (The documentation also explains how to handle multiple training samples.)

  • Once you have a trained HMM for each type of container, a new sequence can be classified by computing its likelihood with respect to each HMM. The higher this value, the more likely it is that the sequence belongs to that class. This likelihood can be computed using the forward algorithm, which is described very simply in this paper as the "Solution to the first Problem" (on page 5). (In case the link dies one day, the paper is: An introduction to hidden Markov models, by Rabiner and Juang, ASSP Magazine, 1986.)
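The classification step above can be sketched directly in NumPy: the function below is a scaled forward algorithm (Rabiner's "first Problem"), and the toy discrete model parameters and class names ("cup", "bottle") are invented for illustration. With hmmlearn you would instead call `model.score(seq)` on each trained model and take the argmax.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log P(obs | model) via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()        # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# Two toy models with made-up parameters (start probs, transitions, emissions)
models = {
    "cup":    (np.array([0.6, 0.4]),
               np.array([[0.7, 0.3], [0.4, 0.6]]),
               np.array([[0.5, 0.5], [0.1, 0.9]])),
    "bottle": (np.array([0.2, 0.8]),
               np.array([[0.9, 0.1], [0.2, 0.8]]),
               np.array([[0.8, 0.2], [0.3, 0.7]])),
}

obs = [0, 1, 1, 0]
# Classify: the model with the highest log-likelihood wins
best = max(models, key=lambda name: forward_loglik(*models[name], obs))
```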

From your comment I feel that you misunderstood how to use multiple sequences in HMM training. The idea is not to train or re-train the model once per sequence! The training procedure is done only once. Using multiple sequences for this training avoids overfitting and makes the model more robust to the natural variation among data from the same class. In my personal experience, all libraries implementing HMMs allow you to pass multiple sequences for training.

Eskapp
  • Thank you so much for your answer. I am sorry for not being clear. Actually, I have 4 different types of containers, and I began training my model with two sequences (just as a first try, to understand), and I trained them separately. I mean, for each sequence I made a submodel (from right to left) and I mapped my two submodels into one model. I trained this model on two different training sets, then I generated a threshold HMM model. My question is, after training my model on two different sequences – Emna Jaoua Dec 20 '16 at 19:04
  • after that, I am using a top-level model which maps the threshold model to the gesture model. I have to understand it better to know how to deal with it using more than one observation sequence. – Emna Jaoua Dec 20 '16 at 19:10