You already found that the additional variance explained is statistically significant. That is, of course, a different question from so-called "clinical significance".
Here is a possible strategy to see whether the additional variance explained is actually "meaningful". Run cross-validation on models with and without your variable of interest. Save the held-out (out-of-sample) prediction accuracy of both models. Compare. Is the difference in predictive power meaningful in your context?
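As a rough sketch of this comparison, here is a numpy-only version using k-fold cross-validation and plain OLS on simulated data (the data, the 0.1 coefficient on the variable of interest, and the `cv_rmse` helper are all illustrative assumptions, not part of the original answer):

```python
import numpy as np

def cv_rmse(X, y, k=5, seed=0):
    """K-fold cross-validated RMSE for an OLS fit of y on X (with intercept).
    Illustrative helper, not a standard library function."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    fold_mse = []
    for test_idx in folds:
        train_idx = np.setdiff1d(idx, test_idx)
        # Design matrices with an intercept column
        Xtr = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
        Xte = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train_idx], rcond=None)
        fold_mse.append(np.mean((Xte @ beta - y[test_idx]) ** 2))
    return float(np.sqrt(np.mean(fold_mse)))

# Simulated data: y depends strongly on x1 and only weakly on x2,
# where x2 plays the role of the "variable of interest"
rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + 0.1 * x2 + rng.normal(size=n)

rmse_without = cv_rmse(x1[:, None], y)             # model without x2
rmse_with = cv_rmse(np.column_stack([x1, x2]), y)  # model with x2
print(f"CV RMSE without x2: {rmse_without:.3f}")
print(f"CV RMSE with    x2: {rmse_with:.3f}")
```

The point of the exercise is the last two numbers: the gap between them is the quantity you then judge against your application, not against a p-value.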
A small increase in predictive power can be very meaningful indeed if you are predicting stock prices. Conversely, a large improvement can be meaningless. For instance, maybe you are predicting demand for a particular grade of steel and managed to halve prediction errors - but the production process still needs to fire up the blast furnace and make a full batch to stock, so your better prediction doesn't change the subsequent process in any way. (If so, best to look for a different job.)