There is some research on this topic, although, as you've likely found, it remains far less explored than its classification counterpart.
If you're interested in understanding it from a research perspective, I'd suggest the paper cited below (presented at http://proceedings.mlr.press/v74/). I particularly appreciated its introduction of Gaussian noise when generating the synthetic observations.
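To give a rough sense of that idea, here is a minimal sketch (my own illustration, not the paper's reference implementation): a synthetic observation is interpolated between a rare case and one of its nearest neighbors, then perturbed with Gaussian noise scaled to the feature-wise spread. The function name and noise scaling are assumptions for the sake of the example; the actual algorithm chooses between interpolation and noise based on neighbor distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(seed_row, neighbor_row, noise_scale=0.02):
    """Sketch of SMOGN-style oversampling for one synthetic point:
    interpolate between a rare case and a neighbor, then add
    Gaussian noise proportional to the per-feature spread.
    (Illustrative only, not the published algorithm verbatim.)"""
    t = rng.uniform()  # random interpolation weight in [0, 1)
    base = seed_row + t * (neighbor_row - seed_row)
    spread = np.abs(neighbor_row - seed_row)
    return base + rng.normal(0.0, noise_scale * (spread + 1e-8))

seed = np.array([1.0, 10.0])       # a rare ("minority") observation
neighbor = np.array([2.0, 12.0])   # one of its nearest neighbors
new_point = synthesize(seed, neighbor)
```

The same interpolation-plus-noise step would be repeated for each rare case until the target distribution is balanced to taste.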
If you're more interested in a practical solution, the first author has an R implementation available on her GitHub page: https://github.com/paobranco/SMOGN-LIDTA17
I am also currently developing a fully Pythonic implementation of the SMOGN algorithm, which will be available shortly: https://github.com/nickkunz/smogn
If you need a fast and intuitive fix for a highly skewed target, a common approach is simply to model the log of your variable, although that has its obvious limitations. I hope this helps.
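For concreteness, a quick sketch of the log-transform approach on a hypothetical skewed target (the data here is simulated; `np.log1p` is used so zeros are handled safely, with `np.expm1` available to map predictions back to the original scale):

```python
import numpy as np

def skewness(x):
    """Sample skewness: third standardized moment."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

# Hypothetical heavily right-skewed target (log-normal draw).
rng = np.random.default_rng(42)
y = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Train your regressor on y_log, then invert predictions with np.expm1.
y_log = np.log1p(y)

print(f"skew before: {skewness(y):.2f}, after: {skewness(y_log):.2f}")
```

The transformed target is far closer to symmetric, which is often enough for models whose loss assumes roughly homoscedastic errors; just remember that errors minimized on the log scale are multiplicative on the original scale.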
Branco, P., Torgo, L., & Ribeiro, R. P. (2017). "SMOGN: A Pre-Processing Approach for Imbalanced Regression." Proceedings of Machine Learning Research, 74:36-50. http://proceedings.mlr.press/v74/branco17a/branco17a.pdf