AUC means Area Under the Curve. What curve are you referring to?
I will assume it is the ROC curve, which is the most commonly used one.
ROC AUC
ROC AUC is a ranking metric, just like Spearman correlation or Kendall's tau.
Example: the true classes are [0,1,1], and your logistic regression gives you probabilities [0.1,0.2,0.2]. You'll have a perfect 100% ROC AUC, even though the predicted probabilities $P(y=1 \mid x)$ are all close to zero.
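A minimal sketch of that example with scikit-learn (assuming it is available):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 1, 1]
y_prob = [0.1, 0.2, 0.2]  # all predicted probabilities are close to zero

# Both positives are ranked above the only negative, so the AUC is perfect.
print(roc_auc_score(y_true, y_prob))  # 1.0
```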
ROC AUC can, of course, only be used with "scores" produced by the model. This is usually a probability, but it need not be: if your model gives you values outside of [0,1], ROC AUC works just as well. (A lot of people do not know this.)
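For instance, a quick sketch with made-up margin-style scores (the numbers are illustrative, not from any real model):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
raw_scores = [-2.3, -0.4, 0.5, 3.1]  # raw scores well outside [0, 1]

print(roc_auc_score(y_true, raw_scores))  # 1.0

# Any strictly increasing transform yields the same AUC,
# because only the ranking matters:
print(roc_auc_score(y_true, [100 * s + 7 for s in raw_scores]))  # 1.0
```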
Again, the absolute values do not matter for ROC AUC; only the relative ranking does. Sometimes the ROC curve is also used to help choose a threshold for converting the probabilities/scores into classes.
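One common heuristic for that is Youden's J statistic, i.e. picking the threshold that maximizes TPR - FPR (the data below is illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Pick the threshold that maximizes Youden's J = TPR - FPR.
best = np.argmax(tpr - fpr)
print(thresholds[best])  # 0.65 for this toy data
```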
G-mean
The geometric mean (G-mean, typically of sensitivity and specificity), accuracy, F1-score, etc. work on absolute values: you predict a class and build a confusion matrix. These metrics do not care about probabilities; they only care about how many times you said it was positive when it was actually negative, and so on.
These are pretty straightforward and widely used metrics, so accuracy and F1 hardly need an example.
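The G-mean is the least standard of them, though, so here is a minimal sketch of its usual binary definition, the geometric mean of sensitivity and specificity (toy data for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]  # hard class predictions, no probabilities involved

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)  # recall on the positive class
specificity = tn / (tn + fp)  # recall on the negative class

g_mean = np.sqrt(sensitivity * specificity)
print(g_mean)  # ~0.61
```

The imbalanced-learn package also ships a ready-made `geometric_mean_score` in `imblearn.metrics`, if you would rather not compute it by hand.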
Class-imbalance
ROC AUC, Kendall's tau, G-mean, class-weighted averages of per-class metrics, etc. are all reasonable metrics under class imbalance. I would advise you to use at least one ranking metric and one metric based on absolute (hard-prediction) values.
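As an end-to-end sketch of that advice (the dataset, model, and the choice of balanced accuracy as the absolute metric are all illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Roughly 95% negatives vs. 5% positives.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))  # ranking metric
print(balanced_accuracy_score(y_test, model.predict(X_test)))    # absolute metric
```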