As you report accuracy, I assume you're talking about classification. Also, the question in the 1st paragraph and the following description do not seem to be the same question to me.
Anyway, accuracy is the fraction of correctly classified cases among the tested cases. Such a proportion can be described by a binomial distribution, and the random uncertainty of the test result depends essentially on the absolute number of tested cases.
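To make that concrete, here is a sketch of the binomial model (notation mine): with $n$ tested cases and $k$ of them correctly classified,

$$k \sim \mathrm{Binomial}(n, p), \qquad \hat p = \frac{k}{n}, \qquad \mathrm{SE}(\hat p) \approx \sqrt{\frac{\hat p\,(1 - \hat p)}{n}}$$

so the standard error shrinks only with $\sqrt{n}$, i.e. with the absolute number of cases that actually end up being tested.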
Cross validation procedures test every case in turn, so the size of the splits makes little difference to this uncertainty.
If you have 90 independent (!) cases in each class, then the class-wise accuracies (sensitivities) are tested with 90 cases each after the cross validation is finished.
A binomial 95% confidence interval for an observed 89% out of 90 cases is about 81 - 94% (you see that there is no point in reporting further digits here!).
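If you want to reproduce such an interval yourself, here is a minimal sketch using the exact (Clopper-Pearson) interval via scipy; the helper name and the assumption that 89% of 90 corresponds to 80 correct cases are mine:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 89% accuracy on 90 test cases corresponds to roughly 80 correct classifications
lo, hi = clopper_pearson(80, 90)
print(f"95% CI: {lo:.2f} - {hi:.2f}")   # roughly 0.80 - 0.95
```

Other interval methods (Wilson, Agresti-Coull, ...) give slightly different but similarly wide intervals at this sample size.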
Things look different if you go for one-time splitting (the typical setup for train - test aka hold-out splits, as opposed to resampling techniques such as the various flavors of cross validation or out-of-bootstrap validation). With a one-time split, only a fraction of the available cases is ever tested. Splitting your 90 cases 80:20 would yield 18 test cases, and a binomial 95% confidence interval for an 89% point estimate out of 18 cases is maybe 66 - 98% (which is typically useless for practical purposes).
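The same calculation for the small hold-out set shows how much the interval widens; this sketch assumes statsmodels is available and that 89% of 18 corresponds to 16 correct cases:

```python
from statsmodels.stats.proportion import proportion_confint

# 18 test cases from an 80:20 split; 89% accuracy corresponds to roughly 16 correct
for method in ("beta", "wilson"):          # 'beta' = Clopper-Pearson exact interval
    lo, hi = proportion_confint(16, 18, alpha=0.05, method=method)
    print(f"{method:>7}: {lo:.2f} - {hi:.2f}")
# beta:   roughly 0.65 - 0.99
# wilson: roughly 0.67 - 0.97
```

Whichever interval method you pick, with only 18 tested cases the uncertainty spans some 30 percentage points.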