I'm not a statistician and this is my first post on stackexchange, so I apologize if I'm not following guidelines.
My situation is that I have a prognosticator who projects sales revenue for 30 different products in my company, and I have 3 years' worth of data. My goal is to determine what his predictions would look like over 1,000 years.
First, I normalized the difference between his predictions and the actual results as a percentage of the predicted value, since different products have projections of very different sizes.
So I'm assuming that these percentage errors follow a normal distribution.
Right now, I'm getting a mean of 2% and a standard deviation of 13%.
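To make this concrete, here's roughly how I'm computing those numbers (a minimal sketch; the column names `predicted` and `actual` and the example figures are made up, not my real data):

```python
import numpy as np
import pandas as pd

# Hypothetical data: one row per product per year, with the
# prognosticator's projection and the realized revenue.
df = pd.DataFrame({
    "predicted": [100_000, 250_000, 80_000, 120_000],
    "actual":    [ 98_000, 262_500, 75_000, 126_000],
})

# Normalize each error to a percentage of the predicted value,
# so products of different sizes are comparable.
# Positive values mean he over-predicted.
pct_error = (df["predicted"] - df["actual"]) / df["predicted"]

print(pct_error.mean())  # in my real data this comes out around 0.02 (2%)
print(pct_error.std())   # and this comes out around 0.13 (13%)
```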
My question: I want to see what the normal distribution looks like with a mean of 0. I know that right now he over-predicts by 2% on average, but I'm assuming that, given the small sample size, he could just as likely be under-predicting by 2%. So I'd like to take the same data, set the mean to 0, and figure out how the standard deviation is supposed to change.
For example, if I shift the mean to 0, does the standard deviation stay at 13%? My intuition is that it should go up: by moving the mean to 0 we're in essence saying his predictions are more accurate on average, so the standard deviation would need to increase to take that into account.
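In code terms (continuing the hypothetical `pct_error` series from the sketch above), I'm essentially asking which of these two quantities is the right measure of spread once I treat the mean as 0:

```python
# Spread measured around the sample mean (what I have now, ~13%).
sd_around_mean = pct_error.std()

# Spread measured around 0 instead of around the sample mean
# (a root-mean-square of the errors). Is this what I should be
# using, and how does it relate to the 13% above?
spread_around_zero = np.sqrt(np.mean(pct_error ** 2))

print(sd_around_mean, spread_around_zero)
```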
Does that make sense? Thanks in advance.