I have some trouble with score functions in likelihood calculation. I'm not good at statistics or probability, so I'm still confused by the formalism and the mathematical-probabilistic language.
Some background: I'm working with a particle filter, i.e. inferring a pdf that is analytically intractable via Monte Carlo sampling: scatter some random particles, evaluate each one with a likelihood function, and the normalised, weighted set of particles serves as an approximation of the pdf.
So I have to give a weight to each particle, computed with a likelihood function (or score function).
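For context, here is a minimal sketch in Python of the loop I'm describing (a bootstrap filter with a 1-D state; the function names and the motion-noise model are my own simplification, not a fixed recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood_fn, motion_noise=0.1):
    """One predict / update / resample cycle of a basic bootstrap particle filter.
    `likelihood_fn(p)` scores a single particle against the current observation."""
    # Predict: diffuse each particle with random motion noise.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: re-weight each particle by its likelihood under the observation.
    weights = weights * np.array([likelihood_fn(p) for p in particles])
    weights = weights / weights.sum()  # normalise so weights form a discrete pdf
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

The normalised weights are what make the particle set an approximation of the posterior pdf.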
If the hidden state described by this difficult pdf that I have to infer is denoted $X$, and I have some observation $z$, the likelihood can be formalised as:
$$ p( z \mid X ) = \text{likelihood} $$
What I have now: a nice distance function that gives me the distance between a particle and the target, and it works. It is the Bhattacharyya distance (I'm working with images, and the colour histogram is a relevant feature). So:
$$ d = \text{Bhattacharyya(particle, target)} $$
This $d$ is a distance, and for a likelihood function I need a probability, so I'm plugging it into a Gaussian with mean $\mu = 0$ and variance $\sigma^2 = 0.4$:
$$ \text{likelihood} = \frac{1}{\sqrt{2 \pi \sigma^2}} \, e^{-\frac{(d-\mu)^2}{2 \sigma^2}} $$
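In code, the distance-to-score conversion I'm using looks roughly like this (the function name is mine; `var=0.4` is the variance from my experiments):

```python
import numpy as np

def gaussian_likelihood(d, mu=0.0, var=0.4):
    """Turn a Bhattacharyya distance d into a Gaussian score.
    A perfect match (d = 0) gives the maximum score."""
    return np.exp(-(d - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
```

Since the Bhattacharyya distance is non-negative and zero for identical histograms, the best-matching particle gets the highest score.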
What I want: Now I'd like to improve my experiments by adding a second score function of the same form (say, a shape distance or something else), so that the total likelihood is computed from two score functions $s_1$ and $s_2$.
I remember from school (though I don't remember exactly why) that to combine two probabilities you multiply them rather than sum them. So I know that:
$$ \text{likelihood} \propto s_1 s_2 $$
where $s_1$ is the Gaussian score from the Bhattacharyya distance on the colour histogram, and $s_2$ is the Gaussian score from the shape distance.
The two score functions are Gaussian, and they are not independent (both depend on the same hidden state).
Question 1: Is it true that I have to multiply them? Why? My final formula would then be:
$$ \text{likelihood} = \frac{1}{\sqrt{2 \pi \sigma^2}} \, e^{-\frac{(d_1-\mu)^2}{2 \sigma^2}} \cdot \frac{1}{\sqrt{2 \pi \sigma^2}} \, e^{-\frac{(d_2-\mu)^2}{2 \sigma^2}} $$
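The multiplicative combination I have in mind would look like this (a sketch, assuming both cues share the same $\mu = 0$, $\sigma^2 = 0.4$ as above; the function names are mine):

```python
import numpy as np

def score(d, mu=0.0, var=0.4):
    """Gaussian score for one distance (colour or shape)."""
    return np.exp(-(d - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def combined_likelihood(d_colour, d_shape):
    """Combine the two cues by multiplying their per-cue scores."""
    return score(d_colour) * score(d_shape)
```

With this combination a particle only gets a high total likelihood when it matches well on both cues.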
Question 2: What if I want to empirically give more importance to one of the two scores because I think it is more descriptive of my data? Something like:
$$ \text{likelihood} = \alpha \, s_1 \cdot (1-\alpha) \, s_2 $$