I have never heard this definition of "incremental". Where did you get the claim that backpropagation is not incremental? Could you give a reference?
I am pretty sure that, with stochastic gradient descent (SGD), we can update the weights from a single data point.
Details can be found in the following post. The post is about SGD on a linear model rather than a neural network, but the idea is the same.
How could stochastic gradient descent save time compared to standard gradient descent?
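
As a quick illustration of the single-point update, here is a minimal sketch (my own toy example, assuming squared-error loss and NumPy, not code from the linked post) of one SGD step for linear regression. The weights change after seeing a single `(x, y)` pair:

```python
import numpy as np

# Model: y_hat = w @ x + b, loss L = 0.5 * (y_hat - y)**2

def sgd_step(w, b, x, y, lr=0.01):
    """One SGD update from a single data point, using the closed-form gradient."""
    y_hat = w @ x + b
    err = y_hat - y            # dL/dy_hat
    w_new = w - lr * err * x   # dL/dw = err * x
    b_new = b - lr * err       # dL/db = err
    return w_new, b_new

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
x, y = rng.normal(size=3), 1.0
w, b = sgd_step(w, b, x, y)    # weights updated from one example
```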
There is no fundamental difference between SGD on a neural network and SGD on a linear model. The only difference is that for a linear model the gradient has a closed-form expression, while for a neural network the gradient is computed with the backpropagation algorithm.
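
To make that concrete, here is a sketch of the same single-sample SGD step for a tiny one-hidden-layer network (the tanh activation, squared-error loss, and sizes are all my assumptions for illustration). Note the update rule is identical to the linear case; only the gradient computation changes, here done by a manual backward pass:

```python
import numpy as np

def sgd_step_nn(W1, W2, x, y, lr=0.01):
    """One SGD update from a single data point, gradient via backpropagation."""
    # Forward pass
    h = np.tanh(W1 @ x)                  # hidden activations
    y_hat = W2 @ h                       # scalar output
    err = y_hat - y                      # dL/dy_hat for L = 0.5*(y_hat - y)**2
    # Backward pass (backpropagation)
    dW2 = err * h                        # gradient w.r.t. output weights
    dh = err * W2                        # backprop into hidden layer
    dW1 = np.outer(dh * (1 - h**2), x)   # tanh'(z) = 1 - tanh(z)**2
    # Same SGD update as in the linear model
    return W1 - lr * dW1, W2 - lr * dW2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=4)
x, y = rng.normal(size=3), 1.0
W1, W2 = sgd_step_nn(W1, W2, x, y)       # again updated from a single example
```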