There is absolutely no difference.
What C&K suggested and called "asymptotic PCA" is exactly standard PCA; it is quite ridiculous to give it a separate name.
Here is a short explanation of PCA. If centered data with samples in rows are stored in a $N \times k$ data matrix $\mathbf X$, then PCA looks for eigenvectors of the covariance matrix $\frac{1}{N}\mathbf X^\top \mathbf X$ and projects the data onto these eigenvectors to obtain principal components. Equivalently, one can consider the Gram matrix $\frac{1}{N}\mathbf X \mathbf X^\top$: it is easy to see that it has exactly the same nonzero eigenvalues, and that its eigenvectors are the principal components up to scaling. (This is computationally convenient when the number of samples is less than the number of features.)
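To see the equivalence numerically, here is a minimal sketch in Python with NumPy (the sizes $N$ and $k$ are arbitrary choices of mine): both routes yield the same nonzero eigenvalues, and the Gram eigenvectors, rescaled by $\sqrt{N\lambda}$, reproduce the principal components up to sign.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 50, 10                       # arbitrary sample/feature sizes
X = rng.normal(size=(N, k))
X -= X.mean(axis=0)                 # center the data

# Route 1: eigendecomposition of the k x k covariance matrix
eigval_c, V = np.linalg.eigh(X.T @ X / N)
order = np.argsort(eigval_c)[::-1]  # eigh returns ascending order
eigval_c, V = eigval_c[order], V[:, order]
pcs_cov = X @ V                     # principal components

# Route 2: eigendecomposition of the N x N Gram matrix
eigval_g, U = np.linalg.eigh(X @ X.T / N)
order = np.argsort(eigval_g)[::-1]
eigval_g, U = eigval_g[order], U[:, order]
r = min(N, k)                       # rank bound; drop the zero eigenvalues
pcs_gram = U[:, :r] * np.sqrt(N * eigval_g[:r])  # rescale eigenvectors into PCs

print(np.allclose(eigval_c[:r], eigval_g[:r]))         # same nonzero eigenvalues
print(np.allclose(np.abs(pcs_cov), np.abs(pcs_gram)))  # same PCs, up to sign
```

Both checks print `True`: the two eigendecompositions are just two ways of doing the same PCA.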
It seems to me that what C&K suggested is to compute eigenvectors of the Gram matrix in order to obtain the principal components, exactly as sketched above. Well, wow. This is not "equivalent" to PCA; it is PCA.
To add to the confusion, the name "asymptotic PCA" seems to refer to its relation to factor analysis (FA), not to PCA! The original C&K papers are behind a paywall, so here is a quote from Tsay, *Analysis of Financial Time Series*, available on Google Books:
> Connor and Korajczyk (1988) showed that as $k$ [number of features] $\to \infty$ eigenvalue-eigenvector analysis of [the Gram matrix] is equivalent to the traditional statistical factor analysis.
What this really means is that as $k \to \infty$, PCA gives the same solution as FA. This is an easy-to-understand fact about PCA and FA (see the simulation sketch at the end of this answer), and it has nothing to do with whatever C&K suggested. I discussed it in the following threads:
So the bottom line is: C&K decided to coin the term "asymptotic PCA" for standard PCA (which could equally well have been called "asymptotic FA"). I would go as far as to recommend never using this term.
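To illustrate the $k \to \infty$ claim, here is a hedged simulation sketch in Python with NumPy and scikit-learn; the one-factor model, the sample size, and the noise scales are arbitrary choices of mine, not anything from C&K. As the number of features grows, the leading PCA direction and the FA loading vector align.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)

def pca_fa_alignment(k, N=500):
    """|cosine| between the first PCA direction and the FA loading vector."""
    f = rng.normal(size=(N, 1))                    # latent factor scores
    load = rng.normal(size=(1, k))                 # true factor loadings
    noise = rng.normal(size=(N, k)) * rng.uniform(0.5, 2.0, size=k)
    X = f @ load + noise                           # one-factor model, heteroscedastic noise
    v_pca = PCA(n_components=1).fit(X).components_[0]
    v_fa = FactorAnalysis(n_components=1).fit(X).components_[0]
    v_pca /= np.linalg.norm(v_pca)
    v_fa /= np.linalg.norm(v_fa)
    return abs(v_pca @ v_fa)

for k in (5, 50, 500):
    print(f"k={k}: alignment {pca_fa_alignment(k):.4f}")
```

The exact numbers will depend on the noise level, but the alignment should climb toward 1 as $k$ grows; that convergence is all the "asymptotic" in the name is really about.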