Given that you have found the principal modes of variation in your sample (the eigenvectors), you use these as your new axis system. When you compute the PC scores you simply project the data onto the axes defined by these eigenvectors.
Let's say your original data is an $N \times p$ matrix $X_0$ with a valid $p \times p$ covariance matrix $C$ (the columns of $X_0$ are assumed to have mean $0$). The covariance $C$ can then be eigen-decomposed as $C = U \Lambda U^T$, where $U$ is the $p \times p$ matrix of eigenvectors (each column is an eigenvector) and $\Lambda$ is the diagonal matrix holding the corresponding eigenvalues. This is the most important step: $U$ now allows a change of basis from $X_0$ to $Z$, where we define $Z = X_0 U$. The $N \times p$ matrix $Z$ holds the projected scores; these are the projections of the original data $X_0$ onto the axes defined by the columns of $U$. Note that here we used the full matrix $U$. If we wanted the optimal $k$-dimensional approximation of the data $X_0$, we would use only the first $k$ columns of $U$ (assuming the eigenvalues, and with them the columns of $U$, are sorted in decreasing order). As you see, the data $X_0$ are directly employed for their dimensionality reduction using the eigenvectors $U$. (The columns of $U$ are sometimes also called loadings.)
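If it helps to see the steps numerically, here is a minimal sketch of the covariance-based workflow above using NumPy; the variable names (`X0`, `C`, `U`, `Z`, `k`) simply mirror the symbols in the text, and the random data is just a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(size=(100, 5))      # N x p data matrix
X0 = X0 - X0.mean(axis=0)           # center the columns so they have mean 0

C = np.cov(X0, rowvar=False)        # p x p covariance matrix

# Eigen-decomposition C = U Lambda U^T (eigh, since C is symmetric).
eigvals, U = np.linalg.eigh(C)

# eigh returns eigenvalues in ascending order; sort them (and the
# corresponding eigenvectors) in decreasing order so the first columns
# of U are the principal directions.
order = np.argsort(eigvals)[::-1]
eigvals, U = eigvals[order], U[:, order]

Z = X0 @ U                          # N x p matrix of PC scores

k = 2
Z_k = X0 @ U[:, :k]                 # scores for the optimal k-dimensional approximation
```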
Please also see the thread here; it contains some great answers to assist your intuition further.
Note that I used the workflow of calculating PCA through the covariance matrix. Most implementations use the singular value decomposition of the original data directly by default because of its better numerical properties in some cases. The results from the two routines are perfectly equivalent. I used the covariance-based approach because I think it is a bit more intuitive. The thread here contains an excellent answer on how SVD relates to PCA.
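As a quick sanity check of that equivalence (reusing `X0` and `Z` from the sketch above), the SVD-based scores match the covariance-based ones up to sign flips of individual components:

```python
# SVD of the centered data: X0 = U_svd diag(s) Vt.
# The rows of Vt are the same principal directions as the columns of U.
U_svd, s, Vt = np.linalg.svd(X0, full_matrices=False)

Z_svd = X0 @ Vt.T                   # equivalently U_svd * s

# Each component is only defined up to sign, so compare absolute values.
assert np.allclose(np.abs(Z_svd), np.abs(Z))
```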