I am reading An Introduction to Statistical Learning with Applications in R by G. James, D. Witten, T. Hastie and R. Tibshirani (2013), after having taken a basic statistics course a little while ago.
On page 21 it describes the parametric method for estimating $f$ in $Y = f(X)$:
First, we make an assumption about the functional form, or shape, of $f$. For example, one very simple assumption is that $f$ is linear in $X$:
$$ f(X)= \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_p X_p. $$
This is a linear model, which will be discussed extensively in Chapter 3. Once we have assumed that $f$ is linear, the problem of estimating $f$ is greatly simplified. Instead of having to estimate an entirely arbitrary $p$-dimensional function $f(X)$, one only needs to estimate the $p + 1$ coefficients $\beta_0, \beta_1,\dots,\beta_p$.
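To check that I am reading this correctly, here is how I picture it in a small case: with $p = 2$ and some made-up coefficient values (the numbers below are purely illustrative, not from the book),

$$ f(X) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 = 3 + 2 X_1 - 0.5 X_2, $$

so the whole function would be pinned down by just the $p + 1 = 3$ numbers $\beta_0 = 3$, $\beta_1 = 2$ and $\beta_2 = -0.5$, and estimating $f$ would mean estimating those three numbers from the data.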
My question is: how do the $\beta$ coefficients fit into the estimate of $f(X)$, and therefore of $Y$? I would normally associate $\beta$ with something that is not linear, so am I reading the symbol incorrectly? Or is it referring to the angle at which the line slopes, positively or negatively?
Sorry if this is a poorly worded question; I am a little nervous about posting here.