This is an extremely philosophical (and logical) question, and it's unclear if it can be answered in the context of Cross Validated.
The first part of the question seems straightforward, but actually is not. How is simplicity or complexity defined? In the natural sciences that is rarely an issue, but there the relevant term would be "accuracy": there is some kind of ground truth, and one can measure (in the broadest sense) the deviation between theory and that truth.
There is the notion of measuring the complexity of a theory, similar to the "complexity" or "informational content" of an entity. This applies to real-world systems as well, and it might be the cross-link to entropy, or Shannon entropy. But that notion is usually not framed in terms of statistics and statistical theory.
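Just to pin down the term "informational content" in the Shannon sense (a minimal sketch for illustration, not something specific to the question), the entropy of a discrete distribution can be computed directly:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit of information per toss; a heavily biased coin carries far less.
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # ~0.08
```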
So the categorization in relation to statistics leads more towards "Kolmogorov complexity", but this opens up an entire new field with its own descriptive vocabulary. The pathway to follow is most likely reading Ray Solomonoff as a starting point; it is highly interesting, but not so easy to follow. The key idea of what "complexity" means there is that a theory can be written down as a (countable) description, and one theory counts as simpler than another if its shortest description is shorter. It might be interesting to note that this leads to one of the foundations of modern statistical learning methodology, so one way to describe "artificial intelligence" follows this path. A rough sketch of the "shorter description = simpler" idea follows below.
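Kolmogorov complexity itself is uncomputable, but a common (and admittedly crude) proxy is the length of a compressed description. The sketch below is only an illustration under that assumption, not anything taken from Solomonoff's work: two strings of equal length, one generated by a simple rule and one with no obvious rule, end up with very different description lengths.

```python
import random
import zlib

def description_length(s: str) -> int:
    """Crude upper bound on description length: size of the zlib-compressed bytes."""
    return len(zlib.compress(s.encode("utf-8")))

regular = "ab" * 500                                        # simple rule: repeat "ab"
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))   # no obvious rule

print(description_length(regular))  # short: the regularity is exploited
print(description_length(noisy))    # much longer: little structure to exploit
```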
But as a scientist I have to think of a dialogue in "Life of Galileo" by B. Brecht. IIRC there is an argument about why the motion of objects is supposed to be simple in the sense of intuitive, and not "crazy" or "random". The answer given there is one that modern science usually frowns upon, because it is essentially anthropocentric (compare the anthropic principle). So the mathematics we use, and its properties, are themselves a chosen point of view; a different kind of mathematics could be far simpler.
But ultimately this is not the end of the discussion; the following is: Gödel's incompleteness theorem. However simplicity (or complexity) is defined, it has to be defined within some kind of formal system, and ultimately the limitations of that system cannot be overcome. Any proof that a theory is the simplest must happen within the framework, but no such proof exists for the framework itself -- it is inherently incomplete. Interestingly, following this line of thinking one can also end up with Turing machines and their limitations, which is basically the same conclusion as Solomonoff's dichotomy of completeness versus (un)computability.
For a more practical and purposeful answer, it makes much more sense to look at theories in the context of an application, or in terms of their usefulness. In the natural sciences a theory can be more or less inaccurate, and if a certain level of accuracy is pre-set, the complexity of a theory can be assigned a metric (e.g. the number of parameters); a small sketch of this follows below. But it should be noted that the accuracy of a method is not the same thing as its simplicity/complexity. The question of simplicity is usually the wrong question for a given problem-solution pair.
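A hedged sketch of what such a metric could look like (the data, the accuracy target and the choice of polynomial families are made up purely for illustration): fit models of increasing parameter count and report the smallest one that already meets the preset accuracy.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)  # quadratic signal + noise

def simplest_degree(x, y, rmse_target=0.15, max_degree=10):
    """Return the lowest polynomial degree (fewest parameters) whose in-sample RMSE meets the target."""
    for degree in range(max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        rmse = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if rmse <= rmse_target:
            return degree, rmse
    return None

print(simplest_degree(x, y))  # expected: degree 2, the simplest model reaching the preset accuracy
```

Note that the accuracy target is fixed beforehand; the "complexity" metric (here, the degree) only becomes meaningful relative to that fixed target, which is exactly the distinction between accuracy and simplicity made above.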
So, to give a somewhat summarised answer: there is no free lunch. This is only partially meant as a joke, since Solomonoff's work also touches on that theorem. Finally, there are many more dimensions to a particular formulation of a theory than "economy of practicality" and "economy of expression". In the context of an algorithmic formulation there are lots of practical aspects too: ease of implementation on a computer system factors into it, as well as checkability or ease of verification.