Having a correct distributional assumption doesn't automatically determine which estimator you use. Knowing that I have an exponential distribution with rate parameter $\lambda$ (for example) doesn't tell me whether I should estimate $\lambda$ by maximum likelihood, method of moments, method of quantiles, minimum Kolmogorov-Smirnov distance, or something else. Given a parametric assumption, some estimators may be biased and some may be unbiased.
Some commonly used estimators for parametric situations are biased.
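As a minimal simulation sketch of both points (the rate, sample size and replication count are arbitrary choices for illustration): under the same exponential assumption I can use, say, maximum likelihood ($1/\bar x$) or a quantile-based estimator ($\ln 2/\text{median}$), and the ML estimator of $\lambda$ is biased upward by the factor $n/(n-1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative settings (not from the text above)
lam, n, reps = 2.0, 10, 200_000

x = rng.exponential(scale=1 / lam, size=(reps, n))

# Two estimators of the rate, both built on the same exponential assumption:
mle = 1 / x.mean(axis=1)                   # maximum likelihood: 1 / sample mean
quant = np.log(2) / np.median(x, axis=1)   # method of quantiles: median of Exp(lambda) is ln(2)/lambda

print("true rate:                ", lam)
print("average ML estimate:      ", mle.mean())    # close to lam * n/(n-1), i.e. biased upward
print("average quantile estimate:", quant.mean())  # a different estimator, with its own bias behaviour
```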
It's even possible to be in a situation where no unbiased estimator exists -- see, e.g., here or here.
Even when whatever assumptions the nonparametric method requires in order to yield a suitable estimator of the parameter do hold, that estimator may be biased or it may be unbiased.
It's quite possible for a nonparametric estimator to be unbiased while a parametric one is biased. That doesn't mean that the nonparametric one is better, however.
As an example, consider estimating the population mean when sampling from a lognormal distribution. The sample mean is a nonparametric estimator (I don't need to make any specific distributional assumption to arrive at it, since I can simply rely on the weak law of large numbers), and it's unbiased, but the maximum likelihood estimator and the method of moments estimator - each based on the parametric assumption that it's actually lognormal - will both be biased.
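To make that concrete, here's a minimal simulation sketch (the parameter values, sample size and replication count are arbitrary choices for illustration), comparing the sample mean with the ML-based plug-in estimate $\exp(\hat\mu + \hat\sigma^2/2)$ of the lognormal mean $e^{\mu+\sigma^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative settings (not from the text above)
mu, sigma, n, reps = 0.0, 1.0, 10, 200_000
true_mean = np.exp(mu + sigma**2 / 2)    # population mean of the lognormal

x = rng.lognormal(mean=mu, sigma=sigma, size=(reps, n))

# Nonparametric estimator: the sample mean (unbiased for the population mean)
sample_mean = x.mean(axis=1)

# Parametric (ML-based) estimator: plug the MLEs of mu and sigma^2 (fitted on the
# log scale) into exp(mu + sigma^2 / 2)
logx = np.log(x)
mle_mean = np.exp(logx.mean(axis=1) + logx.var(axis=1) / 2)   # .var() uses 1/n, as the MLE does

print("true mean:                    ", true_mean)
print("average of sample means:      ", sample_mean.mean())  # essentially the true mean
print("average of ML-based estimates:", mle_mean.mean())     # noticeably above it at this small n
```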
Accuracy, considered more generally
Accuracy -- how close we are to the thing we're estimating -- is related to both bias and variability. An estimator that is biased may nevertheless be very accurate (in that the estimates tend typically to be very close to the thing being estimated).
It's this consideration that leads many people to look at measures like mean square error when comparing estimators.
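For a point estimator $\hat\theta$ of $\theta$, the standard decomposition makes the tradeoff explicit:

$$\operatorname{MSE}(\hat\theta) = E\big[(\hat\theta - \theta)^2\big] = \operatorname{Var}(\hat\theta) + \big(\operatorname{Bias}(\hat\theta)\big)^2\,,$$

so an estimator can carry some bias and still beat an unbiased competitor if its variance is sufficiently smaller.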
If the distributional assumption is correct, it adds information (sometimes a lot of information, depending on which estimator is then chosen) beyond the distributional assumptions underlying nonparametric methods -- which are usually very weak, such as assuming only continuity or symmetry.
This added information -- if fully utilized or nearly so -- will generally affect variability. It may not always lead to an estimator with less bias, but it can lead to a quantifiably better estimator (more accurate in a particular sense).
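Continuing the lognormal example in that spirit, here's a small simulation sketch (again with arbitrary illustrative settings) comparing mean square errors: at this $n$ the biased ML-based estimator of the mean comes out more accurate than the unbiased sample mean, though with very small samples the comparison can go the other way.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative settings (not from the text above)
mu, sigma, n, reps = 0.0, 1.0, 100, 100_000
true_mean = np.exp(mu + sigma**2 / 2)

x = rng.lognormal(mean=mu, sigma=sigma, size=(reps, n))

# Unbiased nonparametric estimator: the sample mean
sample_mean = x.mean(axis=1)

# Biased parametric (ML-based) estimator: exp(mu_hat + sigma2_hat / 2)
logx = np.log(x)
mle_mean = np.exp(logx.mean(axis=1) + logx.var(axis=1) / 2)

def mse(est):
    """Monte Carlo estimate of mean square error against the true mean."""
    return np.mean((est - true_mean) ** 2)

print("MSE, sample mean (unbiased):    ", mse(sample_mean))
print("MSE, ML-based estimate (biased):", mse(mle_mean))
```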