When building predictive models, it's common practice to split your data into three sets, which you have correctly identified as training, validation and test.
The purpose of these splits is simple:
You train your model using the training set. In the case of a supervised classification problem, you would feed in your data along with its labels for the learning algorithm to learn from.
The test set is used to evaluate the final performance of your model. Essentially, you do not supply your model with the labels but instead have it predict them, then compare its predictions against the true labels.
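To make the three-way split concrete, here is a minimal sketch in plain Python. The 60/20/20 ratios and the `train_val_test_split` helper are my own illustration, not a universal rule:

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle the data once, then carve off test and validation slices.

    The fractions and fixed seed here are assumptions for the sketch.
    """
    data = data[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)   # shuffle before splitting
    n = len(data)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))  # 60 20 20
```

In practice you would split (feature, label) pairs rather than bare integers, but the mechanics are the same.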
The validation set is usually sampled from the training set and is used alongside it during training to fine-tune some of your hyperparameters according to some metric.
E.g. when training a neural network, a hyperparameter you may wish to tune is the weight-decay term, chosen based on the sum of squared errors (SSE) on the validation set.
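The weight-decay example above can be sketched with something simpler than a neural network. Below, a 1-D ridge regression (linear model with a weight-decay penalty λ) is fit on the training set for several candidate λ values, and the one with the lowest validation SSE wins. The data, candidate values, and helper names are all made up for illustration:

```python
import random

def fit_ridge_1d(xs, ys, lam):
    """Closed-form ridge solution for y = w*x: w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def sse(xs, ys, w):
    """Sum of squared errors of the model y = w*x on the given data."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys))

random.seed(0)
# Synthetic data: true relationship y = 2x plus Gaussian noise
train = [(x, 2 * x + random.gauss(0, 1)) for x in range(20)]
val = [(x, 2 * x + random.gauss(0, 1)) for x in range(20, 30)]
tx, ty = zip(*train)
vx, vy = zip(*val)

# Fit on the training set, score each candidate weight decay on the validation set
best_lam = min([0.0, 0.1, 1.0, 10.0, 100.0],
               key=lambda lam: sse(vx, vy, fit_ridge_1d(tx, ty, lam)))
print(best_lam)
```

The important point is that the test set never enters this loop: λ is chosen using only training and validation data, so the test score remains an honest estimate.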
Essentially, the validation and test scores are calculated from the model's predictive probabilities (assuming a classification model). The reason we don't just use the test set for validation is that we don't want to fit to that particular sample of "foreign" data; if we tuned on it, the test score would no longer be an honest estimate of performance on unseen data. We instead want models that generalise well to all data.
This is by no means an exhaustive answer and you should research this further.
What is the difference between test set and validation set?
https://www.quora.com/Should-I-split-my-data-to-train-test-split-or-train-validation-test-subset