Strictly speaking, neither is the errors. The errors are the random noise that is added to the mean as part of the data generating process. What you have access to in your model are the residuals. Of course, the residuals are taken as estimates of the errors, so the distinction is a little bit academic, but it's worth being clear about nonetheless.
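To make the distinction concrete, here is a minimal sketch (with made-up numbers) in which we simulate the data generating process, so that we can see both the true errors and the residuals the fitted model gives us. In real data, only the residuals are available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data generating process: the errors are part of the simulation,
# and would be unobservable with real data
n = 200
x = rng.uniform(0, 10, n)
errors = rng.normal(0, 1, n)        # the true errors
y = 2.0 + 3.0 * x + errors

# Fit a simple linear regression; the residuals are what the model gives us
b1, b0 = np.polyfit(x, y, 1)        # np.polyfit returns (slope, intercept)
residuals = y - (b0 + b1 * x)

# The residuals estimate the errors, but are not identical to them,
# because the fitted coefficients differ from the true ones
print(np.corrcoef(errors, residuals)[0, 1])
```

The printed correlation is very close to, but not exactly, 1: the residuals track the errors well, yet they are a different quantity.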
In a multilevel model, you have (at least) two sources of randomness. The predicted random intercepts are conceptualized as the mean score for each school, where the schools are understood as a sample drawn from a population of schools. The errors are the difference between each student's individual score and their school's mean score, where the students are understood as a sample drawn from the school's population. When you fit your model to these data, the difference between a student's observed score and the predicted mean score for their school is that student's residual (which is taken as an estimate of the student's error). Although there is an analogy between the residuals and the random intercepts, they aren't really the same thing.
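The two sources of randomness can be simulated directly. In this sketch (again with made-up numbers), the raw per-school sample means stand in for the predicted school means; a fitted mixed model's BLUPs would instead shrink them toward the grand mean, but the structure is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sources of randomness: schools drawn from a population of schools,
# and students drawn from each school's population
n_schools, n_students = 20, 30
grand_mean = 50.0
school_effects = rng.normal(0, 5, n_schools)           # school-level randomness
errors = rng.normal(0, 10, (n_schools, n_students))    # student-level errors
scores = grand_mean + school_effects[:, None] + errors

# Simple (unshrunken) estimate of each school's mean; a mixed model's
# predicted random intercepts would be pulled toward the grand mean
school_means = scores.mean(axis=1)

# A student's residual: observed score minus the predicted school mean
residuals = scores - school_means[:, None]

# The residuals estimate the student-level errors; the estimated school
# deviations estimate the random intercepts -- related, but distinct
print(np.corrcoef(errors.ravel(), residuals.ravel())[0, 1])
```

Note that the residuals within each school sum to zero by construction, whereas the true errors do not: another reminder that residuals are estimates of the errors, not the errors themselves.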
The following is vaguely related, and may be helpful to read (in particular, thinking through the code used in the simulations may help you understand the parts of the models): Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?