
I'm interested in training a neural network on 1×N inputs, but only feeding it 1×M inputs (1 < M < N) when I actually use the learned model.

Specifically, I'm interested in training a model for a poker bot. I was hoping to train the model on data that has ALL hands revealed. Obviously, when actually playing poker, only one's own hand is visible.

I'm having trouble researching this idea on my own, since it takes a while to describe. Is there a proper term for what I'm trying to do?

user3605508

1 Answer


The term you want is "missing data". This is a type of missing-data problem. Namely, you have variables that you'd like to use as predictors (the other players' hands), but you're missing their actual values. Note that a variable that is always missing at test time, as in this case, is unlikely to be helpful for training, because there will be no way to exploit any relationship between that variable and the dependent variable come test time.
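Here's a minimal sketch of why such a variable can't help at test time. The setup is hypothetical (feature names, sizes, and the least-squares model are my own choices, not from the question): a linear model is fit on all N features, but the last N − M features are imputed with a training-set constant at test time, so they add the same offset to every prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N = 4 features at training time (say, own hand
# strength, pot odds, and two opponent-hand features), but only M = 2
# are observable when the bot actually plays.
N, M = 4, 2
X_train = rng.normal(size=(500, N))
w_true = np.array([1.0, 0.5, 2.0, -1.5])  # opponent features matter here
y_train = (X_train @ w_true > 0).astype(float)

# Fit a simple least-squares linear model on the fully observed data.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# At test time the last N - M features are missing; a common fallback
# is to impute a constant (here, the training mean of each column).
X_test = rng.normal(size=(100, N))
X_test_masked = X_test.copy()
X_test_masked[:, M:] = X_train[:, M:].mean(axis=0)

preds = X_test_masked @ w_hat

# The masked features contribute only a constant shift, so predictions
# depend on the visible features alone.
shift = X_train[:, M:].mean(axis=0) @ w_hat[M:]
preds_from_visible_only = X_test[:, :M] @ w_hat[:M] + shift
```

The two prediction vectors are identical: the model's test-time behavior is a function of the M visible features plus a fixed offset, so the relationship learned for the always-missing features is never exploited.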

Kodiologist
  • The term "hidden variable" (a.k.a. latent variable) could also apply. In terms of being helpful, I could imagine it might be, but it would have to be "pre-training", as the final network would not have the hidden input. (I am wholly unqualified to answer the OP's question, but this response reminded me of ["Why ever use generative models when discriminative models are better?"](http://stats.stackexchange.com/q/12421/127790), to paraphrase.) – GeoMatt22 Sep 14 '16 at 04:24
  • Thank you! As an addendum, is there another term for when half your inputs take continuous values while the other half take discrete values? I'll mark your answer as correct :) – user3605508 Sep 15 '16 at 01:11
  • @user3605508 No, that's a pretty common situation, anyway. Since discrete variables can be dummy-coded, they can largely be treated the same as continuous variables on the independent-variable side, unlike the dependent-variable side. – Kodiologist Sep 15 '16 at 01:55
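To illustrate the dummy-coding point from the last comment, here is a small sketch (the feature names and values are invented for illustration): a discrete variable is expanded into one 0/1 column per category, after which it can sit alongside continuous columns as ordinary numeric input.

```python
import numpy as np

# Hypothetical mixed inputs: one continuous feature (pot size) and one
# discrete feature (suit of a visible card).
pot_size = np.array([12.5, 40.0, 7.25])
suit = np.array(["hearts", "spades", "hearts"])

# Dummy-code the discrete variable: one 0/1 column per category.
categories = sorted(set(suit))  # ['hearts', 'spades']
dummies = (suit[:, None] == np.array(categories)).astype(float)

# Concatenate: the discrete variable now looks like ordinary numeric
# columns, so the design matrix is uniformly continuous-valued.
X = np.column_stack([pot_size, dummies])
```

In practice one would usually drop one dummy column per variable to avoid collinearity in linear models, but for a neural network keeping all of them (i.e., one-hot encoding) is common.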