Iteratively reweighted least squares (IRLS) is used when the errors are heteroscedastic. Suppose the errors come from a distribution with zero mean and a variance that is a function of the absolute value of the input. From what I have read, IRLS is applicable here and will give better results than OLS.
My question is: can I solve this using MLE instead? Say I model the output as coming from a normal distribution, so that the likelihood of the data is
\begin{equation} \prod_{i=1}^n p(y_i \mid x_i, w_0, w_1, m, n) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(y_i - (w_0 + w_1\cdot x_i))^2}{2\sigma_i^2}\right) \end{equation}
where $\sigma_i = m|x_i| + n$ and $|x_i|$ denotes the absolute value of $x_i$. For simplicity, $x, y \in \mathbb{R}$.
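Taking logs and dropping constants, maximizing this likelihood is the same as minimizing the negative log-likelihood with $\sigma_i = m|x_i| + n$ substituted in (note that $n$ is unfortunately doing double duty here as both the sample size and the variance intercept):
\begin{equation}
-\log L(w_0, w_1, m, n) = \sum_{i=1}^{n} \left[ \log\left(m|x_i| + n\right) + \frac{\left(y_i - (w_0 + w_1\cdot x_i)\right)^2}{2\left(m|x_i| + n\right)^2} \right]
\end{equation}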
We can then minimize this numerically to find the values of $w_0, w_1, m, n$.
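For concreteness, here is a minimal sketch of that numerical fit in Python, assuming simulated data and using `scipy.optimize.minimize`; the "true" parameter values, the starting point, and the Nelder-Mead method are just my choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated heteroscedastic data (assumed true values: w0=1, w1=2, m=0.5, n=0.3)
rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=200)
sigma_true = 0.5 * np.abs(x) + 0.3
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma_true)

def neg_log_likelihood(params, x, y):
    w0, w1, m, n = params
    sigma = m * np.abs(x) + n
    if np.any(sigma <= 0):                  # keep the scale positive
        return np.inf
    resid = y - (w0 + w1 * x)
    return np.sum(np.log(sigma) + resid**2 / (2.0 * sigma**2))

# Start from rough guesses; Nelder-Mead avoids needing gradients
start = np.array([0.0, 1.0, 0.1, 1.0])
fit = minimize(neg_log_likelihood, start, args=(x, y), method="Nelder-Mead")
w0_hat, w1_hat, m_hat, n_hat = fit.x
print(w0_hat, w1_hat, m_hat, n_hat)
```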
Is this better or worse than IRLS? I haven't seen much discussion of this approach, so is there a problem with it? The only disadvantage that comes to mind is that we are assuming a functional form for the variance, which, if it turns out to be wrong, can hurt the regression quality considerably. But then, IRLS also assumes a diagonal weight matrix whose entries are filled in as follows (from here):
If a residual plot against a predictor exhibits a megaphone shape, then regress the absolute values of the residuals against that predictor. The resulting fitted values of this regression are estimates of $\sigma_i$. (And remember $w_i=\frac{1}{\sigma^2_i}$.)
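For comparison, here is a minimal sketch of that IRLS recipe (reusing the simulated `x` and `y` from the MLE sketch above); the fixed number of reweighting passes and the clipping of the fitted scale are my own choices:

```python
import numpy as np

# Reuses x and y from the MLE sketch above
X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]      # initial OLS fit

for _ in range(5):                               # a few reweighting passes
    resid = y - X @ beta
    # Regress |residuals| on the predictor; fitted values estimate sigma_i
    gamma = np.linalg.lstsq(X, np.abs(resid), rcond=None)[0]
    sigma_hat = np.clip(X @ gamma, 1e-6, None)   # guard against non-positive fits
    w = 1.0 / sigma_hat**2                       # w_i = 1 / sigma_i^2
    sw = np.sqrt(w)
    # Weighted least squares via row scaling by sqrt(w_i)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print(beta)   # IRLS estimates of (w0, w1)
```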
Another possible disadvantage is that the MLE may be more sensitive than IRLS, for example to outliers or to the starting values of the optimizer.
This paper is the only thing I found comparing MLE and IRLS, but it was a little difficult for me to understand.
Any thoughts, or is anyone aware of relevant studies? Also, I am still learning about this, so please point out any mistakes in my analysis.