I have asked about this before and have really been struggling to identify what makes something a model parameter and what makes it a latent variable. Looking at various threads on this topic on this site, the main distinction seems to be:
Latent variables are not observed but, being random variables, have a probability distribution associated with them. Parameters are also not observed but have no distribution associated with them; I understand this to mean they are constants with a fixed but unknown value that we are trying to find. We can also put priors on parameters to represent our uncertainty about them, even though there is only one true value associated with each of them, or at least that is what we assume. I hope I am correct so far?
Now, I have been looking at an example of Bayesian weighted linear regression from a journal paper and have really been struggling to tell which quantities are parameters and which are variables:
$$ y_i = \beta^T x_i + \epsilon_{y_i} $$
Here $x$ and $y$ are observed, but only $y$ is treated as a variable, i.e., it has a distribution associated with it.
Now, the modelling assumptions are:
$$ y_i \sim N(\beta^T x_i, \sigma^2/w_i) $$
So the variance of each $y_i$ is scaled by its weight $w_i$.
There are also prior distributions on $\beta$ and $w$: a normal and a Gamma distribution, respectively.
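To check that I am reading the generative model correctly, here is a minimal sketch (in Python) of how I would simulate data from it; the hyperparameters for the normal and Gamma priors and the value of $\sigma^2$ are placeholders I chose, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 3      # number of observations, number of regressors
sigma2 = 0.5       # noise variance (placeholder value, treated as known here)

# Priors (hyperparameter values are placeholders, not from the paper)
beta = rng.normal(0.0, 1.0, size=d)           # beta ~ N(0, I)
w = rng.gamma(shape=2.0, scale=1.0, size=n)   # each w_i ~ Gamma(2, 1)

# Observed inputs and outputs
x = rng.normal(size=(n, d))
# y_i ~ N(beta^T x_i, sigma^2 / w_i): a small weight w_i inflates the variance,
# which is how I understand the model to downweight outliers
y = x @ beta + rng.normal(0.0, np.sqrt(sigma2 / w))
```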
So, the full log likelihood is given by:
$$ \log p(y, w, \beta \mid x) = \sum_i \log p(y_i \mid w, \beta, x_i) + \log p(\beta) + \sum_i \log p(w_i) $$
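To spell this out for myself, this is how I would compute that objective term by term (using scipy densities; the Gamma hyperparameters and the unit-normal prior on $\beta$ are again placeholders rather than the paper's choices):

```python
import numpy as np
from scipy import stats

def log_joint(y, x, beta, w, sigma2, a=2.0, b=1.0):
    """log p(y, w, beta | x) = sum_i log p(y_i | w, beta, x_i)
                               + log p(beta) + sum_i log p(w_i)
    The Gamma hyperparameters a, b and the standard-normal prior on beta
    are placeholders, not the values used in the paper."""
    mean = x @ beta
    # Likelihood: y_i ~ N(beta^T x_i, sigma^2 / w_i)
    ll = stats.norm.logpdf(y, loc=mean, scale=np.sqrt(sigma2 / w)).sum()
    # Prior on beta: standard normal on each coefficient
    lp_beta = stats.norm.logpdf(beta, loc=0.0, scale=1.0).sum()
    # Prior on each weight w_i: Gamma with shape a and scale b
    lp_w = stats.gamma.logpdf(w, a=a, scale=b).sum()
    return ll + lp_beta + lp_w
```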
Now, as I understand it, both $\beta$ and $w$ are model parameters: they are part of the probability distribution for the variable $y$. However, in the paper the authors keep referring to them as latent variables and treat them as latent random variables. Is that correct? If so, what would the model parameters be?
The paper is Automatic Outlier Detection: A Bayesian Approach by Ting et al.; it can be found here: http://www.jting.net/pubs/2007/ting-ICRA2007.pdf.