
There is an SEP article on the proposed incommensurability of at least some conflicting pairs of scientific theories, which goes over Kuhnian and Feyerabendian proposals about it. Perhaps ironically, the article concludes by comparing two theories of incommensurability that might themselves be meta-incommensurable! (The article does, however, briefly emphasize the commensuration/comparison distinction in that same concluding section.)

Now, is it possible that one manifestation of scientific incommensurability is explanatory incommensurability with regard to degrees of theoretical simplicity/complexity? Must we always assume that elements of pairs of theories can be ratioed one to another along this line? I wonder, for example, whether an elaborate conglomerate of multiversal set theory, pluralistic modal logic, and a mechanism for a unified field would be neither more nor less nor equally complex (much less simple!) in comparison with a divine nature having exotic meta-properties such as divine simplicity (plus, somehow, tri-unity and the capacity for incarnation, say).

As I've noted before, in Cantor's set world there seemingly (as far as I can tell, anyway) would have been no nontrivial elementary embeddings of his divine counterpart to V, either into that counterpart itself or into models of V (or sets that model fragments of V), for such would conflict with his depiction of the divine transet (as an ens simplicissimum). But the presence or absence of order-indiscernibles/sharps (including class-many of them) is the kind of thing that might make commensurability hard to attain even on a purely mathematical level. So if we tried to move from that level to a physically explanatory one, could we commensurate a physical theory involving, e.g., Cantor's God with a physical theory coupled to a set theory with endless amounts of order-indiscernibles/sharps/whatever along those lines?
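For concreteness, the two standard set-theoretic facts I'm leaning on here (stated for V and its inner model L, which is already my own simplification of Cantor's picture) are:

Kunen's inconsistency theorem: granting choice (and with $j$ taken as a class), there is no nontrivial elementary embedding $j \colon V \to V$.

$0^{\sharp}$ exists if and only if there is a nontrivial elementary embedding $j \colon L \to L$; and if $0^{\sharp}$ exists, there is a closed unbounded proper class of Silver indiscernibles for $L$.

So a "no nontrivial self-embeddings" reading of absolute simplicity and a "class-many indiscernibles" picture already pull in different directions at the level of which set-theoretic principles the background theory endorses.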

1 Answer


You have a lot going on, so I'm going to tackle what I see as the pith of the issue, and where the leverage in analysis lies.

Now, is it possible that a manifestation of scientific incommensurability might be explanatory incommensurability with regards to degrees of theoretical simplicity/complexity?

Yes, because the very notion of theoretical simplicity and complexity is itself domain-specific language for the quantification and qualification of conceptual and linguistic structure, and so is itself open to incommensurability. From WP:

Commensurability is a concept in the philosophy of science whereby scientific theories are said to be "commensurable" if scientists can discuss the theories using a shared nomenclature that allows direct comparison of them to determine which one is more valid or useful. On the other hand, theories are incommensurable if they are embedded in starkly contrasting conceptual frameworks whose languages do not overlap sufficiently to permit scientists to directly compare the theories or to cite empirical evidence favoring one theory over the other.

So, let's go to Searle's analysis of the naturalistic fallacy fallacy in his work Speech Acts, where he makes a simple claim: every claim to an objective and descriptive notion of logical inference relies on normative (or, as he calls them, evaluative) logical standards. There is no divorcing the logic that undergirds the conditional nature of language, particularly natural language, from evaluative logical criteria. Thus, any theory of complexity necessarily relies on normative criteria that two distinct camps of theoreticians might disagree about.

So, what this really boils down to is that Occam's razor presupposes a consistent normative framework for parsimony and complexity when it is applied across distinct theories. That means one aspect of incommensurability is the dispute over the theory of parsimony that characterizes the theories themselves: parties can simply dispute each other's claims on such matters.

Exemplification: camp A asserts that its theory is simpler, and camp B attempts to refute that claim by asserting that its theory is simpler. Any presumption that a third party can settle the matter objectively presupposes that there is some non-normative framework from which the two theories might be assessed. But what might such a framework look like? How does one measure the complexity of a theory? The number of words? The use of phrasal syntactic grammars to analyze natural-language arguments? The topological characterization of natural-language ontologies? The number of calculations or mathematical proofs involved? If one follows Alan Gross, should an analysis of the complexity of the rhetoric be a factor? It's easy to see how any sophisticated thinker could quickly find themselves at odds with another over what is parsimonious in a theory.
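To make that concrete, here is a deliberately crude sketch (the toy strings and both metrics are illustrative choices of mine, not anything either camp would endorse): two "theories" rendered as mere strings, scored once by word count and once by compressed description length, with no guarantee that the two rankings agree.

```python
# A deliberately crude sketch: two toy "theories" rendered as strings and
# scored by two equally arbitrary parsimony metrics. Nothing forces the
# metrics to rank them the same way, which is the whole point.
import zlib

# Hypothetical toy statements (illustrative only): A is short but
# information-dense, B is long but highly repetitive, so the two metrics
# below are likely to disagree about which theory is "simpler".
theory_a = ("Quark confinement emerges from nonperturbative gauge dynamics, "
            "with chiral symmetry breaking generating hadron masses dynamically.")
theory_b = " and ".join(["every observable regularity repeats itself"] * 40) + "."

def word_count(theory: str) -> int:
    """Parsimony as 'number of words' -- one crude normative choice."""
    return len(theory.split())

def description_length(theory: str) -> int:
    """Parsimony as compressed size in bytes -- a different normative choice,
    loosely in the spirit of minimum-description-length ideas."""
    return len(zlib.compress(theory.encode("utf-8")))

for name, metric in [("word count", word_count),
                     ("compressed bytes", description_length)]:
    a, b = metric(theory_a), metric(theory_b)
    verdict = "A" if a < b else ("B" if b < a else "neither")
    print(f"{name:>16}: A = {a}, B = {b} -> 'simpler' theory: {verdict}")
```

Neither metric is privileged, and choosing between them is precisely the kind of evaluative commitment Searle points to; a third metric (counting axioms or proofs, say) could reverse the verdict again.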

Without such a consensus on what constitutes a "parsimonious scientific theory", the use of Occam's razor has no teeth, because fundamentally the notion of parsimony becomes just another aspect of the incommensurability of scientific theories.

J D