It depends on what your problem is. $f(x|\theta)$ is the likelihood function. As in maximum likelihood estimation, it describes the distribution of the data, and choosing it to a great degree defines the statistical model you assume for the data. To choose it, ask yourself: what distribution would describe my data best? Answering this requires understanding your data and being familiar with common probability distributions. Additionally, there is a whole family of nonparametric methods (see nonparametric-bayes).
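As a minimal sketch of what "choosing a likelihood" means in code (the normal model and the data below are purely illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Illustrative data, assumed to be well described by a normal model
x = np.array([4.8, 5.1, 5.4, 4.9, 5.2])

# Likelihood f(x | theta): joint density of the data given the parameters.
# Here theta = (mu, sigma), and the observations are assumed independent.
def log_likelihood(mu, sigma, data=x):
    return stats.norm.logpdf(data, loc=mu, scale=sigma).sum()

print(log_likelihood(5.0, 0.3))  # log f(x | mu=5.0, sigma=0.3)
```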
$p(\theta)$ is the prior. It is the distribution assumed for the parameters of the model. If you assume an "uninformative", constant density, notice that the prior cancels out:
$$
p(\theta | x) \propto f(x|\theta) \, p(\theta) = f(x|\theta) \times C \propto f(x|\theta)
$$
In such a case, the mode of the posterior distribution is the same as the maximum likelihood estimate. There are, however, problems with flat priors: over an unbounded parameter space they are improper (they do not integrate to one), and they can sometimes be problematic when sampling. Flat priors are generally not recommended; if you have no information that could guide the choice of an informative prior, consider a weakly informative one instead.
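You can see the flat-prior equivalence concretely with a quick grid check (a sketch, with made-up binomial data): the mode of the unnormalized posterior under a constant prior coincides with the maximum likelihood estimate.

```python
import numpy as np
from scipy import stats

k, n = 7, 10                          # made-up data: 7 successes in 10 trials
theta = np.linspace(0.001, 0.999, 999)

like = stats.binom.pmf(k, n, theta)   # f(x | theta)
prior = np.ones_like(theta)           # flat prior p(theta) = C
post = like * prior                   # unnormalized posterior

print(theta[np.argmax(post)])         # posterior mode, approx. 0.7
print(k / n)                          # MLE of a binomial proportion = 0.7
```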
As for $f(x)$, by the law of total probability it is
$$
f(x) = \int f(x|\theta) \, p(\theta)\, d\theta
$$
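For a model with a single parameter, this integral can be sketched numerically. Continuing the binomial example above with a Beta(2, 2) prior (an illustrative choice):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

k, n = 7, 10

def integrand(theta):
    # f(x | theta) * p(theta), with an illustrative Beta(2, 2) prior
    return stats.binom.pmf(k, n, theta) * stats.beta.pdf(theta, 2, 2)

evidence, _ = quad(integrand, 0.0, 1.0)  # f(x) = integral of f(x|theta) p(theta)
print(evidence)
```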
Notice, however, that when using MCMC sampling, optimization to obtain the maximum a posteriori estimate, etc., you need to know the posterior only up to a normalizing constant, so $f(x)$ is not needed and can be ignored in most practical applications.
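As an illustration, a bare-bones Metropolis sampler (a sketch, not production code, reusing the binomial example and Beta(2, 2) prior from above) works directly with the unnormalized posterior; $f(x)$ cancels in the acceptance ratio.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
k, n = 7, 10

def log_post_unnorm(theta):
    # log[ f(x|theta) p(theta) ]; the normalizing constant f(x) is never needed
    if not 0 < theta < 1:
        return -np.inf
    return stats.binom.logpmf(k, n, theta) + stats.beta.logpdf(theta, 2, 2)

theta, samples = 0.5, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)   # random-walk proposal
    if np.log(rng.uniform()) < log_post_unnorm(prop) - log_post_unnorm(theta):
        theta = prop                    # accept: f(x) cancels in this ratio
    samples.append(theta)

print(np.mean(samples[1000:]))  # posterior mean, roughly (2+7)/(4+10) = 0.64
```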