I have created an algorithm that signals when to buy a certain stock. When that signal is generated, I buy the stock and hold it for, let's say, 10 days. The log return for that period is ln(p(t+10)/p(t)). Suppose I collect about 3000 such log returns; I can plot them as a histogram and compute their mean and standard deviation. How can I test whether the mean of these returns is statistically significantly greater than 0? Can I use a one-sided t-test for this, or would that be misleading since we are not sure about the distribution of the 10-day returns?
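For concreteness, here is a minimal sketch of the test I have in mind, using SciPy's `ttest_1samp` with `alternative="greater"` for the one-sided version. The `returns` array is just simulated here so the snippet runs; in practice it would hold my actual 3000 ten-day log returns:

```python
import numpy as np
from scipy import stats

# Placeholder data: in reality, `returns` would be the array of
# ln(p(t+10)/p(t)) values produced by the signal.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.002, scale=0.05, size=3000)

# One-sided one-sample t-test: H0: mean <= 0 vs. H1: mean > 0
result = stats.ttest_1samp(returns, popmean=0.0, alternative="greater")
print(f"mean = {returns.mean():.5f}, "
      f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```

Is this a valid approach given that the underlying return distribution is unknown?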
I appreciate any advice. Thank you!