When you use the regression equation to make a prediction by plugging in a value of $x$, you are not predicting the value of $y$ for that value of $x$. You are predicting the mean of the $y$-values for that value of $x$. In detail:
The regression equation
$$y = \beta_0 + \beta_1 x + \epsilon$$
says that $y$ is equal to a linear function of $x$ plus some random scatter. If you set $x=3$, say, you have
$$y = \beta_0 + 3\beta_1 + \epsilon$$
and there is still some random scatter there. In other words, you are saying "my prediction is that $y$ is normally distributed with mean $\beta_0 + 3\beta_1$". To turn that into a single predicted number, you take the expectation, so what you are really saying is "the mean of all the $y$-values for which $x=3$ is $\beta_0 + 3\beta_1$".
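To make this concrete, here is a minimal simulation sketch (not part of the original answer; the true coefficients and noise level are made up for illustration). It fits a line by least squares and checks that the fitted value at $x=3$ tracks the mean of many $y$-values generated at $x=3$, while the individual $y$-values still scatter around that mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" model for the simulation: y = 1.0 + 2.0*x + noise
beta0, beta1, sigma = 1.0, 2.0, 1.5

# Simulate training data and fit by ordinary least squares
x = rng.uniform(0, 10, size=500)
y = beta0 + beta1 * x + rng.normal(0, sigma, size=500)
b1, b0 = np.polyfit(x, y, deg=1)   # polyfit returns [slope, intercept]

# The "prediction" at x = 3 is the fitted conditional mean, not a y-value
y_hat_at_3 = b0 + b1 * 3

# Compare with the average of many fresh y-values generated at x = 3
y_new_at_3 = beta0 + beta1 * 3 + rng.normal(0, sigma, size=100_000)
print(y_hat_at_3)          # close to 7.0 = beta0 + 3*beta1
print(y_new_at_3.mean())   # also close to 7.0
print(y_new_at_3.std())    # individual y-values still scatter, sd about 1.5
```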
If you make a prediction by inverting the regression equation, say by plugging in $y=4$, then you are saying "The $x$-value for which the mean of all the corresponding $y$-values is equal to $4$ is $(4-\beta_0)/\beta_1$", which isn't usually the kind of prediction that you want.
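A quick way to see why this matters (again a sketch with made-up numbers, not part of the original answer): inverting the fitted line of $y$ on $x$ at $y=4$ generally gives a different answer from regressing $x$ on $y$ directly and plugging in $y=4$, because the two fits minimise different residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=500)
y = 1.0 + 2.0 * x + rng.normal(0, 3.0, size=500)

# Regression of y on x, then inverted at y = 4
b1_yx, b0_yx = np.polyfit(x, y, deg=1)
x_inverted = (4 - b0_yx) / b1_yx

# Direct regression of x on y, evaluated at y = 4
b1_xy, b0_xy = np.polyfit(y, x, deg=1)
x_direct = b0_xy + b1_xy * 4

print(x_inverted, x_direct)   # these generally differ; the fit is not symmetric in x and y
```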
Statistics courses often don't help matters by talking about "the line of best fit", which makes it sound as though the situation is symmetrical in $x$ and $y$, which is not the case at all. There was a debate about this recently on the ANZSTAT mailing list, and someone posted a link to a good introductory course that explains it well:
https://www.stat.berkeley.edu/~stark/SticiGui/Text/regression.htm