Firstly, there is no need to split the data set into separate training, CV and testing sets. It is usually sufficient to do a train/test split and use the training set to perform CV.
When doing 10-fold CV, your training set is split into 10 buckets. If your dataset has 530 observations, an 85% training split gives you roughly 450 observations. In each round of 10-fold CV, the validation set then holds 10% of those 450, i.e. 45 data points for calculating accuracy/error, while the remaining 405 data points are used for training your model.
Because you have relatively few data points, I assume your algorithm runs fast and time is not an issue. Instead of performing K-fold cross-validation, I would perform leave-one-out CV (LOOCV). In this method, you train your model on $n-1$ (449) data points and test against the remaining one, repeating for every observation. The errors are then averaged.
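A minimal sketch of the two schemes, assuming scikit-learn; the synthetic data and the ridge model are just stand-ins for your own training set and algorithm:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

# Stand-in data with ~450 observations; replace with your own training set.
X, y = make_regression(n_samples=450, n_features=5, noise=10.0, random_state=0)
model = Ridge(alpha=1.0)

# 10-fold CV: each validation fold holds ~45 observations.
kfold_scores = cross_val_score(
    model, X, y,
    cv=KFold(n_splits=10, shuffle=True, random_state=0),
    scoring="neg_mean_squared_error",
)
print("10-fold CV MSE:", -kfold_scores.mean())

# Leave-one-out CV: 450 fits, each trained on 449 points and tested on 1.
loo_scores = cross_val_score(
    model, X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error"
)
print("LOOCV MSE:", -loo_scores.mean())
```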
When performing CV, don't look only at the hyperparameters that reduce the error the most; pay particular attention to overfitting. A simple model that achieves most of the error reduction is preferable to a higher-degree model that reduces the error only marginally further.
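One way to apply this, sketched below with a hypothetical polynomial-degree search and an arbitrary 5% tolerance: inspect the whole CV curve and pick the simplest model whose error is close to the minimum, rather than the raw best score.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_regression(n_samples=450, n_features=1, noise=15.0, random_state=0)

# 10-fold CV error for each candidate polynomial degree.
cv_errors = {}
for degree in range(1, 9):
    pipe = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(pipe, X, y, cv=10, scoring="neg_mean_squared_error")
    cv_errors[degree] = -scores.mean()

best = min(cv_errors, key=cv_errors.get)
# Prefer the lowest degree whose error is within 5% of the best error
# (the 5% threshold is illustrative, not a fixed rule).
chosen = min(d for d, e in cv_errors.items() if e <= 1.05 * cv_errors[best])
print("best degree by raw CV error:", best, "- chosen simpler degree:", chosen)
```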
To answer your first question, yes. After performing any type of CV, you always train your final model on your FULL training set.
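A minimal sketch of that workflow, assuming scikit-learn and a hypothetical 85/15 split: tune with CV inside the training set, refit the chosen model on the full training set, then touch the test set once for the final error estimate.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=530, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0
)

# CV happens only inside the training set.
search = GridSearchCV(
    Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=10, scoring="neg_mean_squared_error"
)
search.fit(X_train, y_train)

# refit=True (the default) retrains the best model on ALL of X_train.
final_model = search.best_estimator_

# The test set is used a single time for the final estimate.
test_mse = mean_squared_error(y_test, final_model.predict(X_test))
print("one-shot test MSE:", test_mse)
```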
As for your second question, it is okay to calculate the testing error multiple times, but be careful what you do with that information. The reason we performed CV in the first place was to estimate the test error: we train the model and then test it against data it has never seen before. That is the whole point of CV, and it is why the CV error is a good representation of the testing error.
If we repeatedly calculate the testing error and tweak our model in response, we immediately start overfitting the test data. Do not do this. It is fine to evaluate how the two models perform on the test set after you have optimized your CV error, and simply report those results.
I like to plot the errors of the two models and report those. The average on its own does not convey the whole story; a box/violin plot shows the first and second quartiles and the outliers alongside the mean. It makes a good story.
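A minimal sketch of such a plot with matplotlib, using two hypothetical models (a ridge and a lasso regression here purely as placeholders) and their per-fold CV errors:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=450, n_features=5, noise=10.0, random_state=0)

# Per-fold CV errors for the two candidate models.
errors_a = -cross_val_score(Ridge(alpha=1.0), X, y, cv=10,
                            scoring="neg_mean_squared_error")
errors_b = -cross_val_score(Lasso(alpha=0.1), X, y, cv=10,
                            scoring="neg_mean_squared_error")

# Box plot: shows the spread, quartiles and outliers, not just the mean.
fig, ax = plt.subplots()
ax.boxplot([errors_a, errors_b])
ax.set_xticks([1, 2], ["Model A (Ridge)", "Model B (Lasso)"])
ax.set_ylabel("CV mean squared error per fold")
ax.set_title("Per-fold CV error distribution")
plt.show()
```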