Early analysis of this question can be found in Lancaster (1951) and Leipnik (1961). In the latter paper the author analyses the conditions required for uncorrelatedness to imply independence in a bivariate continuous distribution. If you have access to scholarly journals (e.g., through a university) then I recommend starting with these papers. I will also give a bit of insight for a special case.
It is worth noting that independence of $X$ and $Y$ is equivalent to the following moment condition:
$$\mathbb{E}(h(X) g(Y)) = \mathbb{E}(h(X)) \mathbb{E}(g(Y)) \quad \text{for all bounded measurable } g \text{ and } h. $$
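(To see why this characterises independence, note that taking $h$ and $g$ to be indicator functions recovers the defining factorisation of the joint distribution:
$$\mathbb{E}(\mathbb{I}(X \in A) \, \mathbb{I}(Y \in B)) = \mathbb{P}(X \in A, Y \in B) = \mathbb{P}(X \in A) \, \mathbb{P}(Y \in B),$$
and conversely the factorisation for indicators extends to all bounded measurable functions by standard approximation arguments.)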
If both random variables have bounded support then the Stone-Weierstrass theorem allows us to uniformly approximate the functions $g$ and $h$ with polynomials, which gives the equivalent condition:
$$\mathbb{E}(X^n Y^m) = \mathbb{E}(X^n) \mathbb{E}(Y^m) \quad \text{for all } n \in \mathbb{N} \text{ and } m \in \mathbb{N}.$$
(The case $n=m=1$ gives the zero-correlation condition, and the remaining cases are conditions for higher-order moment separability.) Thus, for random variables with bounded support, zero correlation together with moment separability at all higher orders is equivalent to independence.
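To see that the higher-order conditions cannot be dropped, consider the standard counterexample $X \sim \text{U}(-1,1)$ and $Y = X^2$, which has bounded support. Here $\text{Cov}(X,Y) = \mathbb{E}(X^3) - \mathbb{E}(X)\mathbb{E}(X^2) = 0$, so $X$ and $Y$ are uncorrelated, but the moment condition fails at $(n,m) = (2,1)$:
$$\mathbb{E}(X^2 Y) = \mathbb{E}(X^4) = \frac{1}{5} \neq \frac{1}{9} = \mathbb{E}(X^2) \, \mathbb{E}(Y),$$
which confirms that $X$ and $Y$ are not independent.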
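If you want to check this numerically, here is a minimal Monte Carlo sketch in Python (the seed and sample size are arbitrary choices) that estimates both sides of the moment condition for this counterexample:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)  # X ~ Uniform(-1, 1)
y = x**2                                    # Y = X^2, clearly dependent on X

# (n, m) = (1, 1): the covariance estimate is near zero, so Corr(X, Y) ~ 0
print(np.mean(x * y) - np.mean(x) * np.mean(y))  # ~ 0

# (n, m) = (2, 1): the higher-order moment condition fails
print(np.mean(x**2 * y))              # ~ E(X^4) = 1/5
print(np.mean(x**2) * np.mean(y))     # ~ E(X^2) E(Y) = 1/9
```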