In some sense there is no such thing as statistics without "parameters" and "models". It is an arbitrary labelling to some extent, depending on what you recognise as a "model" or "parameter". Parameters and models are basically ways of translating assumptions and knowledge about the real world into a mathematical system. But this is true of any mathematical algorithm. You need to somehow convert your problem from the real world into whatever mathematical framework you intend to use to solve it.
Assigning a probability distribution according to some principle is one systematic and transparent way to do this conversion. The best principles I know of are the principle of maximum entropy (MaxEnt) and the principle of transformation groups (which I think could also be called the principle of "invariance" or "problem-indifference").
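To make the MaxEnt idea a bit more concrete, here is a small numerical sketch (my own illustration, not anything canonical): assign probabilities to the six faces of a die when the only thing you know is that the long-run average is 4.5 rather than the "fair" 3.5. Maximising entropy subject to that constraint picks out the least-committal distribution consistent with it. The specific support and target mean are just made-up inputs for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative MaxEnt assignment: probabilities for die faces 1..6,
# constrained to have mean 4.5 (both numbers are invented for the example).
faces = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    # Shannon entropy, negated so we can use a minimiser.
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                 # normalisation
    {"type": "eq", "fun": lambda p: np.dot(p, faces) - target_mean},  # known mean
]

p0 = np.full(6, 1.0 / 6.0)  # start from the uniform distribution
result = minimize(neg_entropy, p0, bounds=[(1e-9, 1.0)] * 6,
                  constraints=constraints, method="SLSQP")

print("MaxEnt probabilities:", np.round(result.x, 4))
print("implied mean:", np.dot(result.x, faces))
```

The answer comes out in the familiar exponential-family ("Gibbs") form, tilted towards the higher faces; with no mean constraint at all it would just return the uniform distribution.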
Once assigned, you can use Bayesian probability theory to coherently manipulate these "input" probabilities, which contain your information and assumptions, into "output" probabilities which tell you how much uncertainty is present in the analysis you're interested in.
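The "manipulation" step is just Bayes' theorem: posterior ∝ prior × likelihood. Below is a minimal grid-based sketch for a binomial success probability; the flat prior and the data (7 successes out of 10 trials) are invented purely for illustration.

```python
import numpy as np

# Grid approximation of a Bayesian update for a binomial success
# probability theta. Prior and data are invented for illustration.
theta = np.linspace(0.001, 0.999, 999)        # grid of candidate values
prior = np.ones_like(theta)                   # "input": flat prior over [0, 1]
likelihood = theta**7 * (1 - theta)**3        # binomial kernel: 7 successes, 3 failures

posterior = prior * likelihood                # Bayes' theorem (unnormalised)
posterior /= posterior.sum()                  # normalise over the grid

# "Output" probabilities: summarise the uncertainty that remains about theta.
post_mean = np.sum(theta * posterior)
cdf = np.cumsum(posterior)
lo = theta[np.searchsorted(cdf, 0.025)]
hi = theta[np.searchsorted(cdf, 0.975)]
print(f"posterior mean = {post_mean:.3f}, central 95% interval = ({lo:.3f}, {hi:.3f})")
```

The same recipe scales up: the prior encodes what you assumed going in, the likelihood encodes the data model, and everything you report afterwards is read off the posterior.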
A few introductions from the Bayes/MaxEnt perspective described above can be found here, here, and here. These are based on the interpretation of probability as an extension of deductive logic. They are more on the theoretical side of things.
As a minor end-note, I recommend these methods mainly because they seem most appealing to me: I can't think of a good theoretical reason for giving up the normative behaviours which lie behind the Bayes/MaxEnt rationale. Of course, you may not be as compelled as I am, and there are practical compromises to be made around feasibility and software limitations. "Real world" statistics can often be about which ideology you are approximating (approximate Bayes vs approximate Maximum Likelihood vs approximate design-based inference), or which ideology you understand and are able to explain to your clients.