Your question, it seems, is not about "two types" of PCA. The second "type" is a continuation of the explanation of the "first": this time the concept of loadings, and how to obtain them, is introduced. Loadings are more important in factor analysis than in PCA because they are the basis for interpreting the latent variables (in PCA, you do not often interpret the components).
The formulas you cite from section 14.7.1 are about one of the ways to compute the loadings $A$ (which are the eigenvectors scaled by the square roots of the corresponding eigenvalues) in PCA, and then to restore the original data $X$ from them.
As I've said, computationally there exist several equivalent ways to do PCA. Some are based on eigen-decomposition, some on SVD. Your formulas demonstrate one of the SVD-based paths.
I'll show the equivalence of a more "classic" approach and "your book's" approach. Because variances and covariances are customarily computed with df=N-1 (rather than N), I will change $N$ in your formulas to $N-1$.
X (N=10, P=5). Columns (variables) are already centered
.8585 .0568 1.2111 1.1736 .3553
-1.2951 .9300 .0384 -.1677 -.9702
-1.0160 1.2390 .1478 1.0005 -1.1788
-.4922 -.1105 -1.0723 -.4102 -.5783
-1.1354 .9282 -.2975 -2.2278 -.1559
.8969 -1.0482 .4764 -.1490 1.9451
1.7236 -.7944 .1691 .3063 -.0586
-.6571 -.6302 .8333 -.3861 .5726
.5776 -.4670 -.6474 .0858 1.5036
.5391 -.1038 -.8589 .7746 -1.4349
Do PCA via eigen-decomposition of the covariance matrix
COV
1.0899 -.6261 .1155 .4505 .5092
-.6261 .6259 -.0708 -.0929 -.5922
.1155 -.0708 .5373 .1782 .2820
.4505 -.0929 .1782 .9345 -.1696
.5092 -.5922 .2820 -.1696 1.2500
Eigen-decompose it as COV = V * L * V'
Eigenvalues are the diagonal of L
2.2551
1.2374
.6092
.2132
.1227
Eigenvectors V
.6005 .3243 -.3643 .4278 .4675
-.4661 .0595 .2700 -.0756 .8370
.1768 .0323 .8109 .5447 -.1162
.1848 .7802 .2696 -.5262 -.0870
.5973 -.5306 .2535 -.4875 .2445
Loadings A = V * sqrt(L).
.9017 .3607 -.2843 -.1976 .1637
-.7000 .0662 .2108 .0349 .2932
.2655 .0360 .6329 -.2515 -.0407
.2775 .8679 .2104 .2430 -.0305
.8970 -.5902 .1978 .2251 .0856
Standardized (scaled) principal component values ZC = X * inv(A') [the raw PC values, not shown here, are C = X * V]
.7540 .9422 1.3979 -.5022 .8363
-1.2085 .0184 .5931 .0914 -.1545
-1.1192 1.0383 1.0192 .8653 .4843
-.5693 -.1925 -1.2520 .6248 -.8670
-1.1133 -1.7783 -.2782 -1.1484 1.2461
1.4954 -.8130 .2940 .3194 -.0711
.9700 .7077 -.8169 -1.6394 .2291
.2113 -.7449 1.0070 -.3127 -2.1638
.9083 -.5324 -.5858 1.8373 .8981
-.3287 1.3543 -1.3783 -.1356 -.4376
Data are restored as X = ZC * A'
.8585 .0568 1.2111 1.1736 .3553
-1.2951 .9300 .0384 -.1677 -.9702
-1.0160 1.2390 .1478 1.0005 -1.1788
-.4922 -.1105 -1.0723 -.4102 -.5783
-1.1354 .9282 -.2975 -2.2278 -.1559
.8969 -1.0482 .4764 -.1490 1.9451
1.7236 -.7944 .1691 .3063 -.0586
-.6571 -.6302 .8333 -.3861 .5726
.5776 -.4670 -.6474 .0858 1.5036
.5391 -.1038 -.8589 .7746 -1.4349
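To make this path concrete, here is a minimal numpy sketch of the steps above; the random data and variable names are mine, standing in for the 10x5 matrix shown:

```python
import numpy as np

# A sketch of the eigen-decomposition path of PCA, with random data
# standing in for the 10x5 matrix shown above.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))
X = X - X.mean(axis=0)                 # center the columns (variables)
N = X.shape[0]

cov = X.T @ X / (N - 1)                # covariance matrix, df = N-1

evals, V = np.linalg.eigh(cov)         # eigh returns eigenvalues in ascending order
order = np.argsort(evals)[::-1]        # reorder to descending, as is customary
evals, V = evals[order], V[:, order]

A = V * np.sqrt(evals)                 # loadings A = V * sqrt(L)

ZC = X @ np.linalg.inv(A.T)            # standardized principal component values
C = X @ V                              # raw principal component values

assert np.allclose(X, ZC @ A.T)        # data are restored as X = ZC * A'
```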
Do PCA via "your" path, based on SVD
Singular-value-decompose X as X = U * D * V'
U
-.2513 -.3141 .4660 .1674 -.2788 -.1197 -.4790 .4706 -.1616 -.1549
.4028 -.0061 .1977 -.0305 .0515 .4986 -.2315 .0909 .6514 -.2454
.3731 -.3461 .3397 -.2884 -.1614 .1885 .5965 .1991 -.2917 .0163
.1898 .0642 -.4173 -.2083 .2890 .0567 -.1659 .3104 -.3984 -.6107
.3711 .5928 -.0927 .3828 -.4154 .0371 .0318 .3563 -.1429 .1830
-.4985 .2710 .0980 -.1065 .0237 .7576 -.0050 -.0022 -.2693 .1004
-.3233 -.2359 -.2723 .5465 -.0764 .1195 .4867 .1960 .2385 -.3387
-.0704 .2483 .3357 .1042 .7213 -.1719 .1959 .4296 .0961 .1671
-.3028 .1775 -.1953 -.6124 -.2994 -.1808 .1295 .4201 .3829 .0370
.1096 -.4514 -.4594 .0452 .1459 .2155 -.2015 .3284 .0000 .5958
D
4.5051 .0000 .0000 .0000 .0000
.0000 3.3371 .0000 .0000 .0000
.0000 .0000 2.3416 .0000 .0000
.0000 .0000 .0000 1.3853 .0000
.0000 .0000 .0000 .0000 1.0507
.0000 .0000 .0000 .0000 .0000
.0000 .0000 .0000 .0000 .0000
.0000 .0000 .0000 .0000 .0000
.0000 .0000 .0000 .0000 .0000
.0000 .0000 .0000 .0000 .0000
V (these right singular vectors are, up to arbitrary sign flips of some columns, the eigenvectors we had in the eigen-decomposition)
-.6005 -.3243 -.3643 .4278 -.4675
.4661 -.0595 .2700 -.0756 -.8370
-.1768 -.0323 .8109 .5447 .1162
-.1848 -.7802 .2696 -.5262 .0870
-.5973 .5306 .2535 -.4875 -.2445
Now do what your "section 14.7.1" suggests.
S = U * sqrt(N-1)
-.7540 -.9422 1.3979 .5022 -.8363 -.3592 -1.4369 1.4118 -.4849 -.4647
1.2085 -.0184 .5931 -.0914 .1545 1.4958 -.6945 .2727 1.9543 -.7361
1.1192 -1.0383 1.0192 -.8653 -.4843 .5655 1.7896 .5973 -.8751 .0489
.5693 .1925 -1.2520 -.6248 .8670 .1701 -.4977 .9312 -1.1951 -1.8322
1.1133 1.7783 -.2782 1.1484 -1.2461 .1114 .0953 1.0690 -.4288 .5490
-1.4954 .8130 .2940 -.3194 .0711 2.2728 -.0149 -.0067 -.8079 .3011
-.9700 -.7077 -.8169 1.6394 -.2291 .3584 1.4601 .5881 .7156 -1.0162
-.2113 .7449 1.0070 .3127 2.1638 -.5157 .5877 1.2888 .2882 .5013
-.9083 .5324 -.5858 -1.8373 -.8981 -.5425 .3886 1.2604 1.1488 .1109
.3287 -1.3543 -1.3783 .1356 .4376 .6465 -.6044 .9851 .0001 1.7873
A' = D * V' / sqrt(N-1)
A (note that the non-zero part is the loadings matrix, again up to sign flips of some columns)
-.9017 -.3607 -.2843 .1976 -.1637 .0000 .0000 .0000 .0000 .0000
.7000 -.0662 .2108 -.0349 -.2932 .0000 .0000 .0000 .0000 .0000
-.2655 -.0360 .6329 .2515 .0407 .0000 .0000 .0000 .0000 .0000
-.2775 -.8679 .2104 -.2430 .0305 .0000 .0000 .0000 .0000 .0000
-.8970 .5902 .1978 -.2251 -.0856 .0000 .0000 .0000 .0000 .0000
Data are restored as X = S * A'
.8585 .0568 1.2111 1.1736 .3553
-1.2951 .9300 .0384 -.1677 -.9702
-1.0160 1.2390 .1478 1.0005 -1.1788
-.4922 -.1105 -1.0723 -.4102 -.5783
-1.1354 .9282 -.2975 -2.2278 -.1559
.8969 -1.0482 .4764 -.1490 1.9451
1.7236 -.7944 .1691 .3063 -.0586
-.6571 -.6302 .8333 -.3861 .5726
.5776 -.4670 -.6474 .0858 1.5036
.5391 -.1038 -.8589 .7746 -1.4349
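And a matching numpy sketch of this SVD path, under the same assumptions (random data in place of the matrix above):

```python
import numpy as np

# A sketch of the SVD path from "section 14.7.1", with random data
# standing in for the 10x5 matrix shown above.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))
X = X - X.mean(axis=0)                 # center the columns (variables)
N = X.shape[0]

U, d, Vt = np.linalg.svd(X, full_matrices=True)  # X = U * D * V'
D = np.zeros_like(X)                   # full N x P matrix of singular values
np.fill_diagonal(D, d)

S = U * np.sqrt(N - 1)                 # S = U * sqrt(N-1)
At = D @ Vt / np.sqrt(N - 1)           # A' = D * V' / sqrt(N-1)

assert np.allclose(X, S @ At)          # data are restored as X = S * A'
# The first P columns of S coincide with ZC from the eigen path (up to
# column sign flips), and the non-zero part of At' is the loadings A.
```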
So, once again, it is all about different ways to program/implement PCA. I don't know whether the approach you cite has any advantage (is it faster or numerically more stable?). One potential disadvantage, though, is that it deals with the large matrices $U$ and $S$ and produces void (zero) cells in $A$.
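That disadvantage can be sidestepped with a "thin" (economy-size) SVD, which most libraries offer; a minimal sketch, again assuming numpy and the same random stand-in data:

```python
import numpy as np

# Economy-size SVD avoids the large N x N matrix U entirely.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))
X = X - X.mean(axis=0)
N = X.shape[0]

U, d, Vt = np.linalg.svd(X, full_matrices=False)  # U is now 10x5, not 10x10
S = U * np.sqrt(N - 1)                            # exactly ZC: no padding columns
At = np.diag(d) @ Vt / np.sqrt(N - 1)             # A' is 5x5: no void cells
assert np.allclose(X, S @ At)                     # restoration still holds
```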