The word2vec model saves its layer weights as the word embeddings. But do CBOW and skip-gram both store the input-layer weights?
I know that both models learn separate embeddings for a word in its role as a context word and in its role as a center word: CBOW has the context embeddings in the input layer, while skip-gram has the center-word embeddings there.
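To make clear what I mean, here is a minimal, self-contained sketch of how I understand the hidden layer is formed from the input weights in each mode. This is not the actual word2vec.c training code; the toy vocabulary, the vector values and the index variables are made up, and only the names syn0, neu1 and layer1_size are borrowed from the original source:

```c
#include <stdio.h>

/* Toy sketch, assuming a 4-word vocabulary and 3-dimensional vectors;
 * all values are invented for illustration. */
#define VOCAB_SIZE  4
#define LAYER1_SIZE 3

float syn0[VOCAB_SIZE * LAYER1_SIZE] = {
    0.1f, 0.2f, 0.3f,   /* word 0 */
    0.4f, 0.5f, 0.6f,   /* word 1 */
    0.7f, 0.8f, 0.9f,   /* word 2 */
    1.0f, 1.1f, 1.2f    /* word 3 */
};

int main(void) {
    float neu1[LAYER1_SIZE];
    int a, c;

    /* CBOW: the hidden layer is the average of the syn0 rows of the
     * context words (hypothetical context: words 0 and 2). */
    int context[2] = {0, 2};
    for (c = 0; c < LAYER1_SIZE; c++) neu1[c] = 0.0f;
    for (a = 0; a < 2; a++)
        for (c = 0; c < LAYER1_SIZE; c++)
            neu1[c] += syn0[context[a] * LAYER1_SIZE + c];
    for (c = 0; c < LAYER1_SIZE; c++) neu1[c] /= 2.0f;
    printf("CBOW hidden layer:      %f %f %f\n", neu1[0], neu1[1], neu1[2]);

    /* Skip-gram: the hidden layer is just the single syn0 row of the
     * center word (hypothetical center word: word 1). */
    int center = 1;
    for (c = 0; c < LAYER1_SIZE; c++)
        neu1[c] = syn0[center * LAYER1_SIZE + c];
    printf("Skip-gram hidden layer: %f %f %f\n", neu1[0], neu1[1], neu1[2]);

    return 0;
}
```

So in both modes the rows of syn0 feed the hidden layer; the difference is whose rows are used (context words in CBOW, the center word in skip-gram).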
I had a look at the C code, and it looks like both modes store the input-layer matrix syn0 as the word vectors.
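For reference, the save step I mean looks roughly like the following. This is a self-contained paraphrase from memory rather than an exact excerpt (the vocabulary and vector values are made up, the binary-mode branch is omitted, and fo points to stdout instead of the real output file), but the point is that the loop writes the rows of syn0 and never touches syn1/syn1neg:

```c
#include <stdio.h>

/* Paraphrased sketch of the text-mode save step in word2vec.c:
 * CBOW and skip-gram both go through this same code path. */

struct vocab_word { char word[16]; };

#define VOCAB_SIZE  2
#define LAYER1_SIZE 3

struct vocab_word vocab[VOCAB_SIZE] = { {"cat"}, {"dog"} };  /* made-up vocab */
float syn0[VOCAB_SIZE * LAYER1_SIZE] = {                     /* made-up vectors */
    0.1f, 0.2f, 0.3f,
    0.4f, 0.5f, 0.6f
};

int main(void) {
    FILE *fo = stdout;
    long long a, b;
    /* header line: vocabulary size and vector dimensionality */
    fprintf(fo, "%d %d\n", VOCAB_SIZE, LAYER1_SIZE);
    /* one line per word: the word followed by its syn0 row */
    for (a = 0; a < VOCAB_SIZE; a++) {
        fprintf(fo, "%s ", vocab[a].word);
        for (b = 0; b < LAYER1_SIZE; b++)
            fprintf(fo, "%lf ", syn0[a * LAYER1_SIZE + b]);
        fprintf(fo, "\n");
    }
    return 0;
}
```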
Am I right in my understanding?
Are the saved CBOW vectors therefore context vectors, and not center-word vectors as in skip-gram?
Why does CBOW not store the center word embeddings (output layer)?
Thank you