I know that maximum likelihood estimates are invariant to re-parametrization (https://stats.stackexchange.com/a/335368/267430). Is the MLE also invariant to rearrangements of the constants and parameters in the statistical model? Is this somehow just a case of re-parametrization that I am not seeing? For example, consider the model:
$\text{Model 1: }X \sim N(0, \sigma^2)$
I think this is equivalent to:
$\text{Model 2: }X/2 \sim N(0, \sigma^2/4)$
In other words, these are just two ways of writing down the exact same model. Yet, when you work out the log-likelihood of an observed data point $x$ under each model, you get different results:
$\mathcal{L}_1 = -\frac{1}{2} \log(2\pi)-\frac{1}{2}\log(\sigma^2)-\frac{x^2}{2\sigma^2}$
$\mathcal{L}_2 = -\frac{1}{2} \log(2\pi)-\frac{1}{2}\log(\sigma^2/4)-\frac{(x/2)^2}{2\sigma^2/4}$
which are not equal (although the maximum likelihood estimates of $\sigma$ are equal).
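A quick numeric sanity check (a minimal sketch assuming SciPy is available, with a single made-up observation $x = 1.7$ and an arbitrary grid of $\sigma$ values) seems to confirm this: the two log-likelihood curves are not equal, but they appear to differ only by a constant that does not involve $\sigma$, so they peak at the same $\hat{\sigma}$:

```python
import numpy as np
from scipy.stats import norm

x = 1.7                               # one made-up observation
sigmas = np.linspace(0.5, 5.0, 1000)  # arbitrary grid of candidate sigma values

# Model 1: X ~ N(0, sigma^2), log-likelihood evaluated at x
ll1 = norm.logpdf(x, loc=0, scale=sigmas)

# Model 2: X/2 ~ N(0, sigma^2/4), log-likelihood evaluated at x/2
ll2 = norm.logpdf(x / 2, loc=0, scale=sigmas / 2)

gap = ll2 - ll1
print(np.allclose(gap, gap[0]))   # True: the gap is constant in sigma
print(gap[0], np.log(2))          # here the gap equals log(2)
print(sigmas[np.argmax(ll1)], sigmas[np.argmax(ll2)])  # same maximizer, ~ |x| = 1.7
```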
In general (not only for this example, but for any similar situation where constants or parameters are moved between the left-hand side and the right-hand side of the distributional assumption), what is the relationship between $\mathcal{L}_1$ and $\mathcal{L}_2$? When doing maximum likelihood estimation of the parameters in a model, how can you prove that the results don't change under rearrangements of how the model is written down?
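Here is my rough attempt for the pure-scaling case (assuming $Y = X/c$ for a known constant $c > 0$ that is not a parameter). The change-of-variables formula gives

$f_{X/c}(y) = c \, f_X(cy)$

so evaluating at $y = x/c$ yields

$\mathcal{L}_2 = \log f_{X/c}(x/c) = \log c + \log f_X(x) = \mathcal{L}_1 + \log c$

which differs from $\mathcal{L}_1$ by a term that does not involve $\sigma$, so the maximizers coincide (in the example above, $c = 2$ and $\mathcal{L}_2 = \mathcal{L}_1 + \log 2$). But I don't see how to extend this argument to general rearrangements, especially ones that move parameters rather than constants across the distributional relation.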