Neural network language model - predicting the word at the center of the context words, or to their right?
In Bengio's paper, the model predicts the probability of the next word given the n preceding words, for example predicting the probabilities of "book", "car", etc., given the preceding words "this", "is", "a", "good". However, in NLP tagging problems, such as those in Collobert's papers, a common setup is the window model: the tag of the center word is predicted from the surrounding words.
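To make the contrast concrete, here is a minimal sketch of the two input configurations with a toy feed-forward scorer. The vocabulary, random weights, and the `predict` helper are all hypothetical, just to show that the only difference is which position the model scores: the word after the context, or the word in the middle of it.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["this", "is", "a", "good", "car", "book", "the"]
idx = {w: i for i, w in enumerate(vocab)}
V, d = len(vocab), 4          # vocabulary size, embedding dimension
E = rng.normal(size=(V, d))   # shared word embeddings (random, untrained)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(context, W):
    """Concatenate the context-word embeddings, project to vocabulary
    logits, and return a probability distribution over all words."""
    x = np.concatenate([E[idx[w]] for w in context])
    return softmax(W @ x)

n_ctx = 4                                  # number of context words
W = rng.normal(size=(V, n_ctx * d))        # projection to vocab logits

# Bengio-style: score the word that FOLLOWS the context.
p_next = predict(["this", "is", "a", "good"], W)

# Window-style: score the word at the CENTER, given words on both sides.
p_center = predict(["this", "is", "good", "car"], W)

print(p_next.sum(), p_center.sum())  # each output is a valid distribution
```

The architecture is identical in both cases; only the training target (next word vs. center word) differs.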
Are there any studies of neural network language models that predict the probability of the word at the center from its surrounding words, for example predicting the probabilities of center words like "a" or "the" given the context words "this", "is" (on the left) and "good", "car" (on the right)?