That explanation is a lousy reason not to test the proportional hazards assumption. Just because a research project isn't a clinical trial doesn't mean it won't have a long-term effect on policy or evidence-based medicine. Part of a statistician's job description is striving to present accurate analyses no matter what stage, phase, impact, or audience is involved.
However, formally testing proportional hazards is not without issues: it tends to create more problems than it solves. See the related CV post here: Checking the proportional hazard assumption.
Question: Can the results of a Cox proportional hazards analysis be trusted if the proportionality of hazards is not tested?
Answer: It depends.
Question: Can the results of a Cox proportional hazards analysis be trusted if the proportionality of hazards is not checked using plots and sensitivity analyses?
Answer: Probably not.
Question: Can we trust that hazards are proportional if the test of proportional hazards is not statistically significant at the 0.05 level?
Answer: Not necessarily.
I think experienced statisticians recognize that there is a serious risk in basing inferential decisions on intermediate hypothesis tests, and that global tests, whether of distribution or, in this case, of proportionality, are difficult to calibrate. My general approach to this issue is to inspect plots of the smoothed baseline hazards after separating variables into discrete intervals, and to check for crossing Kaplan-Meier curves. Methods to obtain consistent inference include using robust (sandwich) standard errors, estimating restricted mean survival times (the partial area under the KM curve), or using $G^{\rho,\gamma}$ estimators.
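To make the restricted-mean idea concrete, here is a minimal pure-numpy sketch (my own illustration, not from any particular package): it computes the Kaplan-Meier estimate and then the RMST as the area under the step function up to a chosen truncation time $\tau$. Function names and the simple loop are my own choices for clarity.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate.

    time  : follow-up times
    event : event indicators (1 = event observed, 0 = censored)
    Returns (distinct event times, survival probability just after each).
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)                # still in follow-up at t
        d = np.sum((time == t) & (event == 1))     # events occurring at t
        s *= 1.0 - d / at_risk                     # KM product-limit update
        surv.append(s)
    return event_times, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve on [0, tau]."""
    t, s = kaplan_meier(time, event)
    # Step function S(u) starts at S(0) = 1 and drops at each event time < tau
    knots = np.concatenate(([0.0], t[t < tau], [tau]))
    steps = np.concatenate(([1.0], s[t < tau]))
    return float(np.sum(steps * np.diff(knots)))

# Sanity check: with no censoring and tau beyond the last event,
# the RMST equals the ordinary sample mean survival time.
print(rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=4.0))  # -> 2.5
```

Unlike a hazard ratio, the RMST difference between groups stays interpretable ("average extra survival time through $\tau$") whether or not hazards are proportional, which is why it is a useful fallback when the PH assumption is doubtful.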