I'd say logistic regression isn't a test at all; however, a fitted logistic regression may then give rise to no tests, or to several tests.
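To make that concrete, here's a minimal sketch in Python (hypothetical simulated data): a single fitted logistic regression hands you several tests as by-products (a Wald $z$-test per coefficient, plus an overall likelihood-ratio test), any or none of which you might actually use.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))   # intercept + two predictors
eta = X @ np.array([-0.5, 1.0, 0.0])             # true log-odds (hypothetical)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))      # Bernoulli responses

fit = sm.Logit(y, X).fit(disp=0)
print(fit.summary())                 # one Wald z-test per coefficient
print(fit.llr, fit.llr_pvalue)       # overall likelihood-ratio test
```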
You're quite correct that labelling something nonparametric merely because it's not normal is insufficient. I'd call the exponential family explicitly parametric, so I'd usually regard logistic regression (and Poisson regression, and Gamma regression, and so on) as parametric, though there can be circumstances in which I might accept an argument that a particular logistic regression could be regarded as nonparametric (or at least, in a vaguely hand-wavy sense, as only quasi-"parametric").
Beware any confusion over the two senses in which a regression may be called nonparametric.
If I fit a Theil linear regression, it is nonparametric in the sense that I have left the error distribution unspecified (the fit corresponds to adjusting the regression slope until the Kendall correlation between the residuals and $x$ is 0) ... but it is parametric in the sense that I have a fully specified relationship between $y$ and $x$, parameterized by the slope and intercept coefficients.
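As a quick illustration of that first sense (a sketch on made-up heavy-tailed data, using scipy's median-of-pairwise-slopes version of the Theil estimator): the fitted slope leaves the Kendall correlation between the residuals and $x$ essentially at 0, and no error distribution is assumed anywhere.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.standard_t(df=2, size=x.size)  # heavy-tailed errors

slope, intercept, lo, hi = theilslopes(y, x)   # median of pairwise slopes
resid = y - (intercept + slope * x)
tau, _ = kendalltau(x, resid)                  # should be (approximately) zero
print(f"slope={slope:.3f}, intercept={intercept:.3f}, residual tau={tau:.4f}")
```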
If, on the other hand, I fit a kernel polynomial regression (say, a local linear regression) with normal errors, that is also called nonparametric; but in this case it's the specification of the relationship between $y$ and $x$ that's nonparametric (at least potentially infinite-dimensional), not the error distribution.
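Here's a corresponding sketch of that second sense (hand-rolled, all names mine): a local linear regression with a Gaussian kernel, where the errors are perfectly ordinary normal ones but the mean function is left unspecified.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0], Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design: intercept + slope
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)       # weighted least squares
    return beta[0]                                   # local intercept = fit at x0

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)   # normal errors, nonlinear mean
grid = np.linspace(0, 2 * np.pi, 50)
fit = np.array([local_linear(x0, x, y, h=0.5) for x0 in grid])
```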
Both senses are in use, but when it comes to regression, the second is the more common.
It's also possible to be nonparametric in both senses, but harder (with sufficient data, I could, for example, fit a Theil locally-weighted linear regression).
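One crude realization of that idea (a hypothetical construction for illustration, not a standard named estimator): fit a Theil slope within a neighbourhood of each evaluation point, leaving both the mean function and the error distribution unspecified.

```python
import numpy as np
from scipy.stats import theilslopes

def local_theil(x0, x, y, k=40):
    """Theil fit on the k nearest neighbours of x0: distribution-free in the
    errors (the first sense) and local in the mean function (the second)."""
    idx = np.argsort(np.abs(x - x0))[:k]
    slope, intercept, _, _ = theilslopes(y[idx], x[idx])
    return intercept + slope * x0

# evaluated on a grid, exactly as with local_linear above:
# fit = np.array([local_theil(x0, x, y) for x0 in grid])
```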
In the case of GLMs, the second form of nonparametric multiple regression includes GAMs; that second sense is the one in which Hastie is generally operating (and the one he's using in that quote).
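For instance, a minimal logistic-GAM sketch (made-up data; this assumes statsmodels' `GLMGam`, available from version 0.9, accepts a smoother-only specification): the log-odds are modelled as a penalized B-spline smooth of $x$ rather than a fixed linear form.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(-3, 3, size=n)
eta = np.sin(1.5 * x)                            # true nonlinear log-odds
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

bs = BSplines(x[:, None], df=[10], degree=[3])   # B-spline basis for the smooth
gam = GLMGam(y, smoother=bs, alpha=1.0,          # alpha: smoothness penalty weight
             family=sm.families.Binomial())
print(gam.fit().summary())
```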