Be careful how you state differences in variance within and between groups, as these are ANOVA terms; variance explained by predictor variables is a regression concept.
What if your classes were not linearly separable, and the $X,Y$ scatter plot looked like the image below, where yellow objects are bad and blue are good, and the red and white regions are the ground truth for bad and good? How would variance be approached in that setting? A key issue is that yours is a 2-class classification problem, and as a classification problem, the solution space can take on a chameleon-like behavior.
For your continuous predictors, start by performing univariate regression with a new artificial $y$-variable set to $y_i=-1$ for bad and $y_i=+1$ for good objects. Regress $y$ on each $x$ one at a time. For categorical variables with $k$ levels, recode into $k-1$ dummy indicator variables ($x_i \in \{0,1\}$) and then regress the same $y$ on them univariately.
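As a rough sketch of that recipe in Python (numpy only; the data, the level names, and all variable names here are hypothetical, just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 23 objects with one continuous predictor,
# one 3-level categorical predictor, and good/bad labels.
n = 23
labels = rng.choice(["good", "bad"], size=n)
x_cont = rng.normal(size=n)
x_cat = rng.choice(["A", "B", "C"], size=n)

# Artificial response: +1 for good, -1 for bad.
y = np.where(labels == "good", 1.0, -1.0)

# Univariate least-squares regression of y on the continuous predictor
# (design matrix = intercept column + predictor column).
X = np.column_stack([np.ones(n), x_cont])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Categorical predictor: recode k = 3 levels into k - 1 = 2 dummy
# indicators (first level as reference), then regress the same y.
levels = np.unique(x_cat)
D = np.column_stack([(x_cat == lv).astype(float) for lv in levels[1:]])
Xc = np.column_stack([np.ones(n), D])
beta_c, *_ = np.linalg.lstsq(Xc, y, rcond=None)
y_hat_c = Xc @ beta_c
```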
During each univariate run, compute the predicted values $\hat{y}_i$ and assign $\hat{y}_i>0$ to good and $\hat{y}_i \leq 0$ to bad. Then count how many of the 23 objects are classified correctly, and you have the classification accuracy. (Actually, sensitivity/specificity is what you want: with $n=100$ objects, if 95 are normal and 5 are tumor, a classifier that does nothing and assigns normal to everything still gets 95% accuracy -- sensitivity/specificity would expose this.)
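A minimal sketch of the counting step (the $\hat{y}_i$ values are made up, and treating bad as the "positive" class for sensitivity/specificity is my assumption, not something fixed by the problem):

```python
import numpy as np

# Hypothetical fitted values and true labels (+1 = good, -1 = bad)
# from one of the univariate runs above.
y_hat = np.array([0.7, -0.4, 0.2, -0.1, -0.8, 0.5, -0.3, 0.9])
y     = np.array([1.0, -1.0, 1.0, -1.0, -1.0, -1.0, -1.0, 1.0])

# Assign y_hat > 0 to good (+1) and y_hat <= 0 to bad (-1).
pred = np.where(y_hat > 0, 1.0, -1.0)

accuracy = np.mean(pred == y)

# Sensitivity/specificity, taking bad (-1) as the positive class.
tp = np.sum((pred == -1) & (y == -1))  # bad correctly flagged
fn = np.sum((pred == 1) & (y == -1))   # bad missed
tn = np.sum((pred == 1) & (y == 1))    # good correctly passed
fp = np.sum((pred == -1) & (y == 1))   # good wrongly flagged
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f}")
```

On this toy data the do-nothing failure mode is visible in miniature: accuracy looks decent (0.88), but the one missed bad object shows up directly as a drop in sensitivity (0.80), which raw accuracy would hide as the classes get more imbalanced.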
What is described above will work if the data are linearly separable. There are numerous other options, but you first need to find out whether the two classes are linearly separable based on single predictors.
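One quick way to run that check: on a single predictor, the two classes are strictly linearly separable exactly when their value ranges do not overlap. A small sketch under that assumption (data made up for illustration):

```python
import numpy as np

def separable_on_one_predictor(x, y):
    """True if a single threshold on x perfectly splits the two classes,
    i.e., the class value ranges do not overlap."""
    xg, xb = x[y == 1], x[y == -1]
    return xg.max() < xb.min() or xb.max() < xg.min()

y = np.array([1, 1, 1, -1, -1, -1])
x1 = np.array([0.1, 0.3, 0.4, 1.2, 1.5, 1.9])  # ranges don't overlap
x2 = np.array([0.1, 1.4, 0.4, 1.2, 0.2, 1.9])  # ranges overlap

print(separable_on_one_predictor(x1, y))  # True
print(separable_on_one_predictor(x2, y))  # False
```

If no single predictor passes this check, the univariate regressions above cannot classify perfectly, and that is the signal to move on to the other options (multivariate or nonlinear classifiers).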
