The language model I am referring to is the one outlined in "Attention Is All You Need".
My understanding is (please correct me if I am wrong) that when the task is translation, the encoder's input could be "Hi, my name is John." and the decoder's input could be "Bonjour, je m'appelle", and the Transformer would then output "John" as the next word.
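To make my mental model concrete, here is a tiny sketch of how I picture the translation setup (the tokenization is made up and only meant to illustrate the question):

```python
# Hypothetical tokenization, just to illustrate my understanding
encoder_input = ["Hi", ",", "my", "name", "is", "John", "."]   # full source sentence
decoder_input = ["<bos>", "Bonjour", ",", "je", "m'appelle"]   # target prefix so far
# As I understand it, the model should then predict the next target token:
expected_next_token = "John"
```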
However, when it comes to language modeling, I don't see what the encoder's input would be (just as there is no encoder in an RNN-based model when the task is language modeling).
So if we're dealing with language modeling, is the left part of the Transformer (in "Attention Is All You Need"), the encoder, removed entirely? If it is still used, what is its input?
Thanks in advance!