I don't have the book, but we can just look at the information for the function.
The help for the function states explicitly:

    Description:

         This function allows a set of univariate density estimates to be
         compared, both graphically and formally in a permutation test of
         equality.

So if the help is correct, it uses a permutation test (which is what I'd have done in this situation).
Examining the code of the function, it appears to be doing exactly what the help says -- a permutation test.
The test statistic is given by the line:

    ts <- sum((estimate[1, ] - estimate[2, ])^2)

That is, it's a sum of squared differences between the two density estimates (by the look of it, summed over the evaluation points of the density estimates; in effect, an integrated squared difference).
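In symbols (my notation, not necessarily the package's), with $\hat f_1$ and $\hat f_2$ the two density estimates evaluated on a common grid of points $x_1, \dots, x_m$, the statistic is roughly

$$T \;=\; \sum_{j=1}^{m} \left(\hat f_1(x_j) - \hat f_2(x_j)\right)^2 .$$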
When there are more than two densities, it looks like it uses a sum of squared deviations of the estimates from their pointwise average.
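With $k > 2$ groups, a statistic of the same form would look something like

$$T \;=\; \sum_{i=1}^{k} \sum_{j=1}^{m} \left(\hat f_i(x_j) - \bar f(x_j)\right)^2 , \qquad \bar f(x_j) \;=\; \frac{1}{k} \sum_{i=1}^{k} \hat f_i(x_j),$$

i.e. squared deviations of each estimate from the pointwise average of the estimates, summed over the evaluation grid (the actual code may weight the terms differently).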
The p-values in a true permutation test are found by generating all possible arrangements of the data into two samples (of the same sizes as the original samples), calculating the test statistic under each arrangement, and counting the proportion of them at least as extreme$^\dagger$ as the one observed in the actual sample (sketched in code after the footnote).
$\dagger$ (where more extreme = more consistent with the alternative)
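To make the idea concrete, here's a bare-bones sketch of such an exact (exhaustive) permutation test in R, using an integrated-squared-difference statistic like the one above. This is my own illustration rather than the function's actual code (in particular, I let `density()` choose a bandwidth separately for each re-arrangement, which the real implementation may well not do), and it's only feasible for tiny samples, since every arrangement is enumerated:

    ## Illustration only: an exact permutation test of equality of two densities,
    ## enumerating every arrangement of the pooled data into two samples.
    exact_perm_test <- function(x, y, n_eval = 100) {
      pooled <- c(x, y)
      n1     <- length(x)
      rng    <- range(pooled)

      ## integrated squared difference between the two kernel density
      ## estimates, evaluated over a common grid of n_eval points
      stat <- function(idx) {
        f1 <- density(pooled[idx],  from = rng[1], to = rng[2], n = n_eval)$y
        f2 <- density(pooled[-idx], from = rng[1], to = rng[2], n = n_eval)$y
        sum((f1 - f2)^2)
      }

      obs <- stat(seq_len(n1))                 # statistic for the observed split

      ## every way of assigning n1 of the pooled values to "sample 1"
      all_idx <- combn(length(pooled), n1)
      ts_all  <- apply(all_idx, 2, stat)

      ## proportion of arrangements at least as extreme as the observed one
      ## (the observed arrangement is itself one of the enumerated columns)
      mean(ts_all >= obs)
    }

    set.seed(1)
    exact_perm_test(rnorm(6), rnorm(5, mean = 1))   # choose(11, 6) = 462 arrangements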
However, in this case, all possible arrangements will generally be far too many to compute, so instead the arrangements are sampled (usually with replacement, which is convenient and usually of little consequence when the number of possible arrangements is much larger than the number sampled). Strictly speaking this is a randomization test, but people often call it a permutation test as well.

The p-value is then calculated in the same fashion, after adding the observed test statistic to the set of resampled statistics. (When this is done, it's also possible to calculate a standard error for what is now an estimate of the p-value.)
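And here's the corresponding sketch of the sampled (randomization) version, again just an illustration under the same assumptions rather than the package's implementation. The observed statistic is counted among the resampled ones, and a rough binomial standard error is attached to the estimated p-value:

    ## Illustration only: the randomization-test version, drawing B random
    ## re-arrangements of the pooled data instead of enumerating all of them.
    randomization_test <- function(x, y, B = 999, n_eval = 100) {
      pooled <- c(x, y)
      n1     <- length(x)
      rng    <- range(pooled)

      stat <- function(idx) {
        f1 <- density(pooled[idx],  from = rng[1], to = rng[2], n = n_eval)$y
        f2 <- density(pooled[-idx], from = rng[1], to = rng[2], n = n_eval)$y
        sum((f1 - f2)^2)
      }

      obs <- stat(seq_len(n1))

      ## B independently drawn re-arrangements (so the same arrangement can recur,
      ## i.e. the arrangements are effectively sampled with replacement)
      ts_perm <- replicate(B, stat(sample(length(pooled), n1)))

      ## p-value with the observed statistic added to the resampled set,
      ## plus a rough binomial standard error for the estimated p-value
      p_hat <- (1 + sum(ts_perm >= obs)) / (B + 1)
      se    <- sqrt(p_hat * (1 - p_hat) / (B + 1))

      c(p.value = p_hat, std.error = se)
    }

    set.seed(42)
    randomization_test(rnorm(50), rnorm(50, mean = 0.5))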