What is the geometrical shape of the function you obtain from a multivariate linear regression (a.k.a. the general linear model)? Roughly speaking: $Y = f(X) = AX$, where $f$ is the fitted function, $A$ is a matrix, and $X$, $Y$ are vectors.
At first I thought it was always a line in an $r$-dimensional space ($r$ = input dimension + output dimension). But if you work in 3D, with real inputs of size 2 ($X = (x_1, x_2)$) and a real output of size 1 ($y$), it seems that you obtain a plane ($f(X) = f(x_1, x_2) = a x_1 + b x_2 = y$).
With inputs of $n$ dimensions and outputs of $m$ dimensions, I find that you obtain the equations of $m$ hyperplanes in an $r = m + n$ dimensional space (i.e. $m$ linear equations). So you get a line only when $n = 1$ (leaving aside degenerate cases where the hyperplanes' normal vectors are linearly dependent). In general, the graph of the fitted function is a linear subspace of dimension $n$, since the intersection of $m$ independent hyperplanes has dimension $r - m = n + m - m = n$.
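Writing the fitted map out coordinate-wise (with $a_{ji}$ the entries of $A$), each output coordinate gives one of the hyperplane equations in the combined $(x_1, \dots, x_n, y_1, \dots, y_m)$ space:

$$y_j = a_{j1} x_1 + a_{j2} x_2 + \dots + a_{jn} x_n, \qquad j = 1, \dots, m.$$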
Am I right about that?
Out of curiosity: is it possible to train a multivariate regression model (i.e. one where both the independent and the dependent variables are vectors, not scalars) whose graph is always a line? To me it seems theoretically impossible.
I use sklearn.linear_model.LinearRegression for my multivariate linear regression, which seems to handle the multi-output case fine.
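For concreteness, here is a minimal sketch of my setup with synthetic data (the dimensions $n = 2$, $m = 3$ and all variable names are arbitrary, just for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n, m = 2, 3                      # input and output dimensions (arbitrary)
X = rng.normal(size=(100, n))    # 100 samples, each an n-dimensional input
A = rng.normal(size=(m, n))      # "true" coefficient matrix for the synthetic data
Y = X @ A.T                      # targets: Y = A X applied row-wise, noise-free

model = LinearRegression().fit(X, Y)

print(model.coef_.shape)         # (m, n): one row of coefficients per output
print(model.intercept_.shape)    # (m,): one intercept per output
```

Each of the $m$ rows of `model.coef_` gives one of the hyperplane equations above, which is what led me to the dimension-counting argument.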