AUC, loosely speaking, evaluates how well your model discriminates between the classes across all possible cutoff thresholds, not just $0.5$. It happens that all of your models have the same accuracy when the threshold is set to the default of $0.5$. Test the accuracy of your models when you set the cutoff at $0.4$ or $0.7$ (or whatever). Whatever software you’re using will have documentation that explains how to do this.
There’s nothing special about $0.5$ when it comes to a threshold for making a decision.
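To make the idea concrete, here is a minimal sketch of varying the cutoff yourself. The probabilities and labels are made-up illustrative values, not anything from your models:

```python
# Hypothetical predicted probabilities and true labels, for illustration only.
probs  = [0.1, 0.3, 0.45, 0.55, 0.65, 0.9]
labels = [0,   0,   1,    0,    1,    1]

def accuracy_at(threshold, probs, labels):
    """Classify as 1 when the predicted probability exceeds the threshold."""
    preds = [1 if p > threshold else 0 for p in probs]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

for t in (0.4, 0.5, 0.7):
    print(f"threshold={t}: accuracy={accuracy_at(t, probs, labels):.3f}")
# With these toy numbers, the 0.4 cutoff scores 5/6 while 0.5 and 0.7 score 4/6,
# so the "best" threshold depends entirely on where you draw the line.
```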
Additionally, I encourage you to look around Cross Validated for discussions of proper scoring rules, particularly comments by our member Frank Harrell. Accuracy has flaws. Shamelessly, I will mention that I posted a question a few weeks ago giving an example where accuracy may not be a good performance metric: Proper scoring rule when there is a decision to make (e.g. spam vs ham email).
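As a quick illustration of why proper scoring rules matter, here is a toy sketch (invented numbers, not your data) where two models make identical classifications at the $0.5$ cutoff, yet the Brier score — a proper scoring rule — clearly prefers the better-calibrated one:

```python
def brier(probs, labels):
    """Brier score: mean squared error of predicted probabilities (lower is better)."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

labels  = [0, 0, 1, 1]
model_a = [0.45, 0.2, 0.55, 0.9]   # hesitant: probabilities hug the 0.5 boundary
model_b = [0.05, 0.1, 0.95, 0.9]   # confident and well calibrated

# Both models classify all four cases correctly at a 0.5 threshold,
# so accuracy cannot tell them apart — but the Brier score can.
print(f"Brier, model A: {brier(model_a, labels):.5f}")
print(f"Brier, model B: {brier(model_b, labels):.5f}")
```

Model B earns the lower (better) Brier score because its probabilities are closer to the truth, even though accuracy at $0.5$ rates the two models as identical.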