Parametric MLE requires you to specify a parametric distribution for the data; non-parametric MLE does not.
The most popular of the non-parametric MLE approaches is called Empirical Likelihood https://en.wikipedia.org/wiki/Empirical_likelihood (not much of a write-up on that page). The classic book in the field is "Empirical Likelihood" by Art B. Owen https://www.amazon.com/Empirical-Likelihood-Art-B-Owen/dp/1584880716 . The freely accessible paper "Empirical Likelihood Ratio Confidence Regions", Art B. Owen, Annals of Statistics, 1990, Vol. 18, pp. 90-120 https://projecteuclid.org/download/pdf_1/euclid.aos/1176347494 will give you a pretty good idea of the field. Freely available slides by Owen are at http://statweb.stanford.edu/~owen/pubtalks/DASprott.pdf .
Basically, Empirical Likelihood places a probability weight on each observed data point and takes the product of those weights as the likelihood, i.e., it uses the empirical distribution of the data as the basis for the likelihood. This empirical likelihood can then be maximized subject to various constraints (for instance, that the weighted mean equals a hypothesized value), sometimes in closed form, but often requiring numerical constrained nonlinear optimization, as sketched below. It can be used as the basis for computing non-parametric likelihood ratio tests and confidence regions (not necessarily ellipsoidal or symmetric).
There are relationships between empirical likelihood and bootstrapping, and indeed, the two can be combined.
If you don't have a solid rationale for using a particular parametric distribution, you're generally better off using a non-parametric method, such as empirical likelihood. The downsides are that the computations can be more intensive, and that the resulting confidence regions don't look like what most people have come to expect based on, for instance, Normal distribution assumptions.