If you are running the experiment
If you're concerned that the randomization might be implemented incorrectly, it is generally more effective to take care at the front end to ensure the randomization is done right than to try to detect a failure at the back end. There are two reasons for this. First, detecting that the randomization failed is difficult, error-prone, and ambiguous. Second, even if you obtained evidence that the randomization failed, what would you do? It's unclear, and anything you did at that point would be on shaky ground.
It's probably better to put your energy into reviewing the randomization strategy, making sure it is appropriate, and verifying that it is implemented correctly.
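As one concrete example of front-end care, a simple randomization can be implemented with a dedicated, seeded random number generator, so the assignment is reproducible and can be audited later. This is a minimal sketch, not a prescribed procedure; the function name, seed, and 50/50 split are all illustrative assumptions.

```python
import random

def assign_groups(subject_ids, seed=12345):
    """Randomly assign each subject to 'treatment' or 'control'.

    A fixed seed makes the assignment reproducible, so the
    randomization itself can be re-run and audited.
    """
    rng = random.Random(seed)      # dedicated RNG, isolated from other code
    ids = list(subject_ids)
    rng.shuffle(ids)               # put subjects in a random order
    treatment = set(ids[: len(ids) // 2])  # first half -> treatment
    return {sid: ("treatment" if sid in treatment else "control")
            for sid in subject_ids}

if __name__ == "__main__":
    print(assign_groups(range(10)))
```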
If you are analyzing an experiment run by others
Whether you should test for a failure of randomization is a matter of opinion. It might depend on how likely you think such a failure is.
My sense is that post-hoc testing for a failure of randomization doesn't make a lot of sense in the absence of some specific reason to suspect one. If you don't trust that the experimenters did the randomization correctly, why do you trust that they did anything else correctly?
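If you nevertheless decide to run a post-hoc check, the usual approach is to compare baseline covariates across arms. Below is a minimal sketch of such a balance test, assuming a single continuous covariate (age); the data are simulated and the variable names are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative data: in a real analysis these would be the baseline
# measurements recorded for each arm of the experiment.
age_treatment = rng.normal(50, 10, size=500)
age_control = rng.normal(50, 10, size=500)

# Two-sample t-test on a baseline covariate. A tiny p-value would
# suggest an imbalance, but (per the discussion above) it cannot tell
# you *why* the imbalance exists or what to do about it.
t_stat, p_value = stats.ttest_ind(age_treatment, age_control)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```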
If you do detect a failure of randomization, there's no great answer. If randomization wasn't done properly, then you don't have a randomized controlled trial. You can try to control for the variables that show a difference between the two groups, but at that point you're essentially turning the experiment into an observational study, and there is no way to be sure you have controlled for all possible confounders: if a failure of randomization caused some systematic differences between the treatment and control groups that you're aware of, there may well be other systematic differences that you're not aware of. So it's a sticky situation.
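For what it's worth, "controlling for the variables that show a difference" typically means including them as covariates in a regression of the outcome on treatment. Here is a minimal sketch using statsmodels, with simulated data; the column names and effect sizes are assumptions for illustration, and, per the caveat above, this only adjusts for the confounders you happened to measure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),   # 0/1 treatment indicator
    "age": rng.normal(50, 10, size=n),
})
# Illustrative outcome that depends on both treatment and age.
df["outcome"] = 2.0 * df["treated"] + 0.1 * df["age"] + rng.normal(0, 1, size=n)

# Adjusting for age: the coefficient on 'treated' estimates the
# treatment effect *conditional on age* -- it removes confounding only
# from covariates you actually measured and included.
model = smf.ols("outcome ~ treated + age", data=df).fit()
print(model.params["treated"])
```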
One other possibility is that randomization was done correctly, but you got unlucky with the split into treatment and control groups, and an unlikely split caused an imbalance: e.g., maybe the treatment group got many more older people than the control group, just by bad luck. That can happen, and this is perhaps the one case where you might consider controlling for the imbalanced factor (age, in the example). However, if the sample size is large, the probability of a large imbalance arising by pure chance is very small, so it is questionable whether it is worth the energy to worry about such a low-probability event. There are many other very-low-probability events that you probably don't worry about, so it's not clear why we'd single this one out.
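To get a feel for how unlikely a large chance imbalance is at a reasonable sample size, you can simulate repeated random splits and count how often the two groups' mean ages differ by more than some threshold. A minimal sketch follows; the cohort size, age distribution, and threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000                       # illustrative cohort size
ages = rng.normal(50, 10, size=n)
threshold = 2.0                # "large" imbalance: >2-year gap in mean age

exceed = 0
trials = 10_000
for _ in range(trials):
    perm = rng.permutation(ages)           # one random 50/50 split
    gap = abs(perm[: n // 2].mean() - perm[n // 2 :].mean())
    if gap > threshold:
        exceed += 1

# With n = 1000 and sd = 10, the standard error of the difference in
# means is about 10 * sqrt(2/500) ~= 0.63, so a 2-year gap is a
# greater-than-3-sigma event.
print(f"P(mean age gap > {threshold}) ~= {exceed / trials:.4f}")
```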