I have to calculate the KL divergence between a distribution $q$ and a prior distribution $p$, both of which are univariate Gaussians, i.e. $KL(q|p), q \sim \mathcal{N}(\mu, \sigma^2), p \sim \mathcal{N}(\mu', \sigma'^2)$. This term is part of a larger formula, which is justified in some way not relevant to this question.
To be honest, I don't want to put any assumptions on $p$ except that it is Gaussian. My intuition is that if I do so, I can just set $p = q$ and thus $KL(q|p) = 0$.
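For context, the KL divergence between two univariate Gaussians has a well-known closed form, $KL(q|p) = \log\frac{\sigma'}{\sigma} + \frac{\sigma^2 + (\mu - \mu')^2}{2\sigma'^2} - \frac{1}{2}$, which makes the $p = q$ intuition easy to sanity-check. A minimal sketch (function name and parameter names are my own):

```python
import math

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) for univariate Gaussians q = N(mu_q, sigma_q^2), p = N(mu_p, sigma_p^2)."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
            - 0.5)

# Sanity check: when p = q, every term cancels and the divergence is 0.
print(kl_gaussians(0.0, 1.0, 0.0, 1.0))  # 0.0
```

So the formula is consistent with the intuition above: setting $p = q$ makes the divergence vanish.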
I wonder if there is a way to phrase this intuition into proper math. I read about non-informative priors, but found nothing about calculating KL divergences in this scenario.
Update:
I'll try to make the question clearer.
I have two distributions: one is the prior, the other is driven by data. My prior is not tied to any parameter ranges (e.g. in the form of a conjugate prior), but is only specified by its functional form (i.e. its distribution family). In typical Bayesian frameworks, such a thing is called a non-informative prior. Does the same concept exist for KL-based objective functions?