I do not see any reason why you should not do this. From a pure statistics point of view it might be quite rare, but in data science the technique you describe is actually used, and it is very useful for improving machine learning models.
I do not know if you have ever heard the term before, but what you are doing is called feature engineering. Not only can you create interactions between terms (which is quite similar to polynomial regression), the transformations can also be more complicated functions. However, you usually want to remove some of the resulting features afterwards; otherwise you run into either the curse of dimensionality or overfitting.
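As a minimal sketch of what I mean (using scikit-learn's `PolynomialFeatures` and `SelectKBest`; the data here is a synthetic placeholder, not your dataset):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SelectKBest, f_regression

# Toy data standing in for your problem: 100 samples, 5 original features.
X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

# Engineer interaction terms (x_i * x_j) plus squares, as in polynomial regression.
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
print(X_poly.shape)  # (100, 20): 5 original + 10 interactions + 5 squares

# Removing weak features afterwards helps avoid the curse of dimensionality
# and overfitting; here via a simple univariate F-test (k=8 is arbitrary).
selector = SelectKBest(score_func=f_regression, k=8)
X_selected = selector.fit_transform(X_poly, y)
print(X_selected.shape)  # (100, 8)
```

The feature-selection step is the part people often skip: with degree-2 terms the feature count grows quadratically, so pruning (or regularization such as the lasso) matters as much as the engineering itself.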
This is not necessarily a problem for the interpretability of the model either. In real life the relationship between the dependent and independent variables is rarely clear or linear, so this technique is actually very helpful; and on top of that, if one of the engineered features turns out to be a good predictor, it adds another fun question: why does this happen?
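As a hypothetical continuation of the sketch above (nothing here comes from your setup), you can fit a linear model on the engineered features and look at which ones carry the most weight; a large coefficient on an interaction term like `x0 x3` is exactly the kind of thing that invites the "why does this happen?" question:

```python
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(X_selected, y)

# Map the selected columns back to readable names such as "x0 x3" or "x2^2".
names = poly.get_feature_names_out()[selector.get_support()]
for name, coef in sorted(zip(names, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:.2f}")
```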