It is clear that more training data helps lower the variance of a high-variance model: with more samples to fit, the learning algorithm has less room to overfit any particular draw of the data.
However, what impact does training data size have on a high-bias model? Generally, will more training data lower the bias, have no effect on it, or increase it further?
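To make the question concrete, here is a small sketch (a hypothetical setup of my own, not from any answer) of a deliberately high-bias model: a degree-1 polynomial fit to quadratic data. Averaging over many resamples, the test error settles near a fixed floor as the training set grows, which is the behaviour I am asking about.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_errors(n_train, n_trials=300):
    """Fit a degree-1 (underparameterised, hence high-bias) model to
    noisy quadratic data; return mean train/test MSE over resamples."""
    train_err, test_err = [], []
    x_test = np.linspace(-1.0, 1.0, 200)
    y_test_true = x_test ** 2          # noiseless target for test error
    for _ in range(n_trials):
        x = rng.uniform(-1.0, 1.0, n_train)
        y = x ** 2 + rng.normal(0.0, 0.1, n_train)
        coef = np.polyfit(x, y, deg=1)  # a line cannot capture the curvature
        train_err.append(np.mean((np.polyval(coef, x) - y) ** 2))
        test_err.append(np.mean((np.polyval(coef, x_test) - y_test_true) ** 2))
    return np.mean(train_err), np.mean(test_err)

for n in (10, 100, 1000):
    tr, te = avg_errors(n)
    print(f"n={n:4d}  train MSE={tr:.3f}  test MSE={te:.3f}")
```

In this toy case the gap between train and test error (the variance part) shrinks as n grows, but the test error plateaus near the model's bias floor instead of going to the noise level. Whether that is the general story is exactly what I am asking.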
This question is narrower than the similar question: What impact does increasing the training data have on the overall system accuracy?
One of the answers there actually says that "high bias models will not benefit from more training examples", but there does not seem to be any consensus.