Your original data lives in $\mathbb{R}^{120}$. PCA essentially performs a change of basis. If you discard nothing, this is a mere rotation. If you discard one basis vector, you eliminate one dimension. The basis vectors corresponding to low eigenvalues are typically the directions with little dynamics (as measured by variance) and are usually removed. To summarize: PCA changes the basis to the directions of maximum variance, and removing the low-eigenvalue dimensions gets rid of the non-informative/noisy dimensions in the output. This is effectively dimensionality reduction, under the assumption that variance captures the dynamics. Jonathon Shlens's tutorial on PCA gives an excellent example with a spring moving in 3D space [link].
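A minimal sketch of this in NumPy (the data and its dimensions here are made up for illustration): keeping all eigenvectors is just a rotation of the data, while keeping only the top-$k$ is dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 500 samples in 120-dimensional space,
# with variance concentrated in the first few directions.
X = rng.normal(size=(500, 120)) * np.linspace(5.0, 0.1, 120)

Xc = X - X.mean(axis=0)               # center the data
C = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending order
order = np.argsort(eigvals)[::-1]     # sort descending by eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keeping all 120 eigenvectors is a mere change of basis (rotation);
# no information is lost and the transform is invertible:
Z_full = Xc @ eigvecs

# Dropping the low-eigenvalue directions is dimensionality reduction:
k = 10
Z = Xc @ eigvecs[:, :k]               # shape (500, 10)
```

Because the eigenvector matrix is orthogonal, `Z_full @ eigvecs.T` recovers the centered data exactly; `Z` keeps only the high-variance subspace.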
Answering the original question (thanks, amoeba): the answer is "no". Each eigenvector is a new basis vector, and it depends on all dimensions of the original input space. The corresponding eigenvalue just tells how much of the total variance that particular basis vector explains. The probabilistic PCA formulation can shed more light here, since its latent variables correspond to the hidden factors that explain the data in terms of variance...
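To make the "no" concrete, here is a small illustrative sketch (the data is synthetic): each eigenvector is a mixture of all original dimensions rather than a single input axis, and each eigenvalue's share of the eigenvalue sum is the fraction of total variance that direction explains.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 5-dimensional data with dims 0 and 1 correlated
X = rng.normal(size=(1000, 5))
X[:, 1] += 2 * X[:, 0]

Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending

# Each eigenvector (a column) generally has nonzero loadings on
# every original dimension -- it is not tied to one input axis:
print(eigvecs[:, 0])

# Each eigenvalue divided by the total is the fraction of variance
# that basis vector explains:
explained = eigvals / eigvals.sum()
print(explained)
```

The first eigenvector here loads heavily on both of the correlated dimensions at once, which is exactly why you cannot read off "the variance of one original variable" from a single eigenvalue.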