Historically, statistics grew up around assumptions of Gaussian normality and its ubiquity in the form of the bell-shaped curve, and a rich, wide-ranging set of methodologies unfolded from that assumption. The reasons for this development are well articulated in Efron and Hastie's recent book, Computer Age Statistical Inference. One consequence of assuming ubiquitous normality is that deviations from it (outliers) are viewed as a problem to be solved by normalizing, transforming, and/or deleting the extreme values: techniques such as trimming and winsorizing, or transformations such as the natural log, Lambert's W function, the inverse hyperbolic sine, and others, all in an effort to force the pdf to conform to normality.
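For concreteness, here is a quick Python sketch (my own illustration, not taken from any of the sources above) of a few of those "force it to be normal" treatments applied to a skewed sample; the 5% cutoffs and the lognormal example are arbitrary choices:

```python
import numpy as np
from scipy.stats import skew, trim_mean
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavily right-skewed sample

# Transformations aimed at pulling the tail in toward normality
log_x = np.log(x)        # natural log (requires strictly positive data)
asinh_x = np.arcsinh(x)  # inverse hyperbolic sine (also defined for zero/negative values)

# Trimming and winsorizing the extremes instead of transforming them
trimmed_mean = trim_mean(x, proportiontocut=0.05)  # drop the top/bottom 5% before averaging
winsorized_x = winsorize(x, limits=[0.05, 0.05])   # clamp the top/bottom 5% to the cutoffs

print(f"skewness raw / log / asinh: {skew(x):.2f} / {skew(log_x):.2f} / {skew(asinh_x):.2f}")
print(f"mean raw / trimmed / winsorized: "
      f"{x.mean():.2f} / {trimmed_mean:.2f} / {winsorized_x.mean():.2f}")
```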
Robust and nonparametric methods are another, less widely employed set of methodologies in the statistical toolkit for dealing with nonconforming data. These approaches, however, are poorly understood by unsophisticated practitioners, or, to put it more precisely, by practitioners whose understanding begins and ends with Gaussian assumptions; inevitably, that group includes members of the dissertation committees of many hapless graduate students. One consequence of this predominance of Gaussian assumptions is that, not surprisingly, robust solutions are significantly less rich and well developed than the historically earlier, more traditional parametric approaches.
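As a small, hedged illustration of what the robust alternatives buy you, the following sketch contrasts the classical mean and standard deviation with the median and median absolute deviation (MAD) on a sample contaminated by a few extreme values; the contamination is invented for the example:

```python
import numpy as np
from scipy.stats import median_abs_deviation

rng = np.random.default_rng(0)
clean = rng.normal(loc=10.0, scale=1.0, size=1_000)
contaminated = np.concatenate([clean, [200.0, 250.0, 300.0]])  # a handful of extreme values

# Classical estimates are dragged around by the three outliers...
print(f"mean: {clean.mean():.2f} -> {contaminated.mean():.2f}")
print(f"std:  {clean.std():.2f} -> {contaminated.std():.2f}")

# ...while the robust estimates barely move.
print(f"median: {np.median(clean):.2f} -> {np.median(contaminated):.2f}")
print(f"MAD:    {median_abs_deviation(clean, scale='normal'):.2f} -> "
      f"{median_abs_deviation(contaminated, scale='normal'):.2f}")
```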
Both of these "approaches" suffer, if you will, from treating Gaussian normality as the "correct" view of nature and behavior in spite of its irremediable flaws. Those flaws stem from the just-as-ubiquitous fact that extreme values and large deviations from normality are not outliers but empirical realities. Mandelbrot and Taleb, in their paper "Mild vs. Wild Randomness" (published in The Known, the Unknown, and the Unknowable in Financial Risk Management: Measurement and Theory, Princeton University Press, 2010), argue that one can shift one's viewpoint away from smooth, Gaussian bell shapes and instead take exceptional extreme values, jumps, and discontinuities, which conform more closely to reality than normality does, as the starting point for theoretical development. That view inevitably relegates normal, ordinary data (the mass of information in the pdf) to a significantly less consequential role.
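To make the "mild vs. wild" distinction concrete, here is a small simulation sketch (my own, not from their paper) comparing how much of a sample's total comes from the single largest observation under a thin-tailed Gaussian versus a heavy-tailed Pareto; the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# "Mild" randomness: a thin-tailed Gaussian shifted to stay positive
gaussian = rng.normal(loc=100.0, scale=15.0, size=n)

# "Wild" randomness: a heavy-tailed Pareto with tail index ~1.1 (infinite variance)
pareto = (rng.pareto(a=1.1, size=n) + 1.0) * 100.0

for name, sample in [("Gaussian", gaussian), ("Pareto", pareto)]:
    share = sample.max() / sample.sum()
    print(f"{name:8s}: largest single observation = {share:.2%} of the total")
```

Under the Gaussian, the largest observation is a negligible sliver of the total; under the Pareto, a single observation can account for a substantial fraction of it, which is exactly the regime in which the extremes, not the bulk, carry the story.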
Their paper is a good introduction to extreme value theory (EVT), one of the least well-known and understood subdisciplines in statistics. Most importantly for the OP, EVT offers a completely different approach to thinking about and dealing with nonnormal data.
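To give a flavor of how different that approach is in practice, here is a minimal, hedged sketch of the classical block-maxima workflow in EVT: simulate heavy-tailed daily data, keep only each block's maximum, fit a generalized extreme value (GEV) distribution with scipy, and read off a long-horizon return level. The choice of a Student-t process, 365-day blocks, and a 100-block horizon is purely illustrative:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# 50 "years" of 365 daily observations from a heavy-tailed process
daily = rng.standard_t(df=3, size=(50, 365))

# Block-maxima approach: keep only each year's largest value
annual_maxima = daily.max(axis=1)

# Fit a generalized extreme value (GEV) distribution to the maxima.
# Note that scipy's shape parameter c is the negative of the usual EVT xi.
c, loc, scale = genextreme.fit(annual_maxima)
print(f"GEV fit: shape c={c:.3f}, loc={loc:.3f}, scale={scale:.3f}")

# 100-year return level: the quantile exceeded once per 100 blocks on average
return_level_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"estimated 100-year return level: {return_level_100:.2f}")
```

The point of the exercise is that the model is fitted to the extremes themselves rather than to the bulk of the data, which is precisely the reversal of emphasis that Mandelbrot and Taleb advocate.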