I have a problem understanding some basic theory related to regression analysis.
I will now be "quoting" the lectures of one of my professors. They assume a linear regression model and write it like this:
$Y = b_1X_1 + ... + b_nX_n + e$
Then they say that in terms of the sample this model corresponds to:
$y_i = x_i^T b + e_i$, $\quad i = 1, \dots, n$,
where $x_i$ and $b$ are vectors.
And then they state the assumptions for the model like this:
"the process/sequence
$(y_i, x_i^T)$, $i \in \mathbb{N}$,
is iid."
And this is where I get confused. As far as I understand, the random variables are the $Y$ and $X_1, \dots, X_n$ of the initial model, while $y_i$ and $x_i$ are a single ($i$-th) observation of $Y$ and a vector of ($i$-th) observations of the $X$'s, respectively. So they are fixed. But then the question arises: how can the observations be iid when they are fixed? They don't have their own distribution; they are just constants.
That's why I initially thought that such observations would be called "fixed but random": we consider them not yet as observations (we look at the distribution from which they arise), but not quite as random variables either, since we know they are supposed to be fixed numbers. It turned out, however, that no such term exists for this case.
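To make my confusion concrete, here is a small simulation sketch of how I currently picture it (the coefficients, sample size, and normal distributions are just made-up choices of mine, not from the lectures): before the draw, each pair $(y_i, x_i^T)$ is a random vector, and drawing the rows independently from the same joint distribution is, as I understand it, what the iid statement refers to; after the draw, the resulting numbers are fixed constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true coefficients b (my choice, purely for illustration)
b = np.array([2.0, -1.0])
n = 5  # number of observations

# Each row x_i is drawn independently from the same distribution,
# and so is each error e_i -- this is the sequence viewed *before*
# the sample is realized, where iid makes sense to me.
X = rng.normal(size=(n, 2))   # rows are the regressor vectors x_i
e = rng.normal(size=n)        # errors e_i
y = X @ b + e                 # y_i = x_i^T b + e_i

# Once the draw is made, these are just fixed numbers -- the
# "observations" of the random variables above.
print(np.column_stack([y, X]))
```

So my question is whether the iid assumption is about the rows as random vectors (before realization), while the numbers printed above are merely one realization of them.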
I would be very grateful for an explanation. I hope I have described the situation clearly.
Thank you in advance!