Each time you run cross-validation you get an estimate of the true population test error. Different random seeds give you different estimates of this population quantity, but each one estimates the same underlying truth.
If you re-seed the random number generator in a quest to drive down the estimated test error, you will bias your estimate of the test error downwards (this is sometimes called seed-hacking, and it is bad practice). If you take your final test error estimate to be the minimum you have seen over many runs of the random number generator, you will definitely end up with a very optimistic estimate, and should not be surprised when your model performs much worse in production.
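Here is a minimal sketch of both effects, assuming scikit-learn and a synthetic classification task; the dataset, model, and range of seeds are all illustrative choices, not a prescription. It computes one cross-validation error estimate per seed, then contrasts the honest summary (the mean across seeds) with the seed-hacked number (the minimum across seeds):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Illustrative synthetic data and model; any estimator/dataset would do.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# One CV error estimate per seed; each estimates the same population quantity.
errors = []
for seed in range(50):
    cv = KFold(n_splits=5, shuffle=True, random_state=seed)
    accuracy = cross_val_score(model, X, y, cv=cv).mean()
    errors.append(1.0 - accuracy)
errors = np.array(errors)

# The spread shows seed-to-seed variation; the minimum is the
# seed-hacked estimate, biased downwards relative to the mean.
print(f"mean CV error over seeds: {errors.mean():.3f} +/- {errors.std():.3f}")
print(f"seed-hacked (minimum) CV error: {errors.min():.3f}")
```

The gap between the minimum and the mean is exactly the optimism you would carry into production if you reported the best seed you found.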
Instead, pick a seed and stick with it. My personal seed is 154.