An answer to this will generally depend on the number of matched subjects, as do most rules of thumb, which are meant for roughly 100-1000 observations. If you have considerably fewer subjects, testing is questionable with or without matching: your power is expected to be low and the chance of a type II error in such a test very high. With considerably more observations, the chance of being 'off' by a bit more than 0.1 increases, but the tests may now be overpowered, flagging 'differences' as significant even when they are not relevant.
Whether testing is sensible will also depend on the matching methodology: with exact matching on categorical variables, testing is less relevant, whereas with matching on a score derived e.g. by logistic regression, your concern applies.
For every test you perform, you should consider a correction for multiple testing and control of the family-wise error rate. I would therefore recommend 'exploring' the differences mainly through descriptive means, such as comparing boxplots or violin plots, to get a better impression of the distributions for not-too-small sample sizes (>50). Only test differences if you have a good reason to believe they exist, as each additional test affects all tests in your whole analysis through the need to control the family-wise error rate.
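To make the descriptive route concrete, a common alternative to hypothesis tests is the standardized mean difference (SMD), which summarizes balance on a covariate without producing a p-value that depends on sample size. This is a minimal sketch with simulated data; the variable names and the 0.1 threshold mentioned above are illustrative, not prescriptive:

```python
import numpy as np

def standardized_mean_difference(treated, control):
    """Difference in means divided by the pooled standard deviation
    (Cohen's d-style SMD), a sample-size-independent balance summary."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd

# Hypothetical matched samples for one covariate (e.g. age);
# in practice these would come from your matched data set.
rng = np.random.default_rng(0)
treated = rng.normal(50, 10, 200)
control = rng.normal(51, 10, 200)

smd = standardized_mean_difference(treated, control)
# An absolute SMD below roughly 0.1 is often taken as acceptable balance,
# but this cutoff is a convention, not a test.
print(f"SMD: {smd:.3f}")
```

Unlike a t-test, the SMD does not become 'significant' merely because the sample is large, which sidesteps the overpowering problem described above; it pairs naturally with the boxplot or violin-plot comparisons for a visual check of the full distribution.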