I am confused about how the normal distribution's PDF is capable of producing a density for a single value. I understand that the probability of a continuous random variable $X$ taking any exact value is 0. Therefore, to calculate a probability for $X$, we define a range so that the probability is $P(a < X < b)$. It appears this range is usually referred to as an interval (please correct me if I am wrong).
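For instance, to get a probability I would compute something like this in R (the choice of $a = 0$ and $b = 1$ here is just an arbitrary example):

```r
# P(a < X < b) for a standard normal, with a = 0 and b = 1
pnorm(1, mean = 0, sd = 1) - pnorm(0, mean = 0, sd = 1)
#> [1] 0.3413447
```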
The PDF for the normal distribution is $\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$, so if we set $x=1$, $\mu=0$ and $\sigma=1$, the result is a density of 0.2419707 using dnorm in R. How is the PDF capable of coming to this conclusion when we do not specify an interval?
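For reference, this is the computation I ran, just evaluating the formula at a single point with no interval anywhere:

```r
# density at x = 1 for mu = 0, sigma = 1
dnorm(1, mean = 0, sd = 1)
#> [1] 0.2419707

# the same number from the formula written out directly
1 / (1 * sqrt(2 * pi)) * exp(-(1 - 0)^2 / (2 * 1^2))
#> [1] 0.2419707
```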