This is an interesting problem, and one that has broad relevance. The answer revolves around the notion that this is not at its core a statistical question.
Given a large enough sample, any trivial difference in the speed of your algorithms can be found to be statistically significant at any desired level. (Note that this is only the case when there is a real difference in the speeds. See my answer here: Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples?)
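To see this concretely, here is a small simulation (a sketch with made-up timing numbers, not real benchmarks): a real but trivial 0.1% difference in mean run time sails past any significance threshold once enough benchmark runs are collected.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mean_a, mean_b = 1.000, 1.001      # seconds; a 0.1% difference in mean speed
    sd = 0.05                          # run-to-run timing noise

    for n in (100, 10_000, 1_000_000):
        a = rng.normal(mean_a, sd, n)  # simulated timings for algorithm A
        b = rng.normal(mean_b, sd, n)  # simulated timings for algorithm B
        t, p = stats.ttest_ind(a, b)
        print(f"n = {n:9d}   p-value = {p:.4f}")

    # With n = 100 the difference is invisible; with n = 1,000,000 the p-value is
    # essentially zero, even though 1 ms per call may be of no practical consequence.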
However, as your question implies, at some point a small enough difference makes no practical difference. Thus the trick is to decide on a threshold for 'not large enough to care about', rather than fobbing off the responsibility for that decision onto a statistical routine.
I suggest that you decide on either a fractional difference or maybe an absolute time difference for your threshold. Then make sure that your sample is large enough that the upper end of a confidence interval for the difference between the mean speeds does not cross that threshold. (What level should your confidence interval be? That depends on what the consequences of the decision might be, so note that you do not have to use a 95% interval.)
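A minimal sketch of that check (the 2 ms threshold, the 90% level, and the benchmark data here are all illustrative): collect timings for both implementations, form a confidence interval for the difference in mean run time, and treat the difference as ignorable only if the interval's upper end stays below your threshold.

    import numpy as np
    from scipy import stats

    def ci_upper_for_mean_difference(old_times, new_times, confidence=0.90):
        """Upper end of a two-sided CI for mean(new) - mean(old), Welch-style."""
        old, new = np.asarray(old_times), np.asarray(new_times)
        diff = new.mean() - old.mean()
        se = np.sqrt(old.var(ddof=1) / old.size + new.var(ddof=1) / new.size)
        # Welch-Satterthwaite degrees of freedom
        df = se**4 / (
            (old.var(ddof=1) / old.size) ** 2 / (old.size - 1)
            + (new.var(ddof=1) / new.size) ** 2 / (new.size - 1)
        )
        return diff + stats.t.ppf((1 + confidence) / 2, df) * se

    threshold = 0.002   # 2 ms: the largest slowdown we are willing to ignore
    rng = np.random.default_rng(1)
    old_times = rng.normal(0.1000, 0.005, 2000)   # placeholder benchmark data
    new_times = rng.normal(0.1005, 0.005, 2000)

    if ci_upper_for_mean_difference(old_times, new_times) < threshold:
        print("Any slowdown is plausibly smaller than the threshold we care about.")
    else:
        print("Cannot rule out a slowdown larger than the threshold; gather more runs.")

If the upper end still crosses the threshold, that tells you either to collect more timing runs (to narrow the interval) or to accept that the difference may genuinely matter; the statistics only narrow the interval, while the threshold itself remains your call.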