A is positively related to B.
C is the outcome of A and B, but the effect of A on C is negative and the effect of B on C is positive.
Can this happen?
The other answers are truly marvelous - they give real-life examples.
I want to explain why this can happen despite our intuition to the contrary.
Correlation is the cosine of the angle between the (centered) vectors. Essentially, you are asking whether it is possible that
$$\measuredangle AB < 90^\circ, \qquad \measuredangle AC > 90^\circ, \qquad \measuredangle BC < 90^\circ.$$
Yes, of course: take, for instance, three coplanar vectors pointing in the directions $0^\circ$ ($A$), $80^\circ$ ($B$) and $160^\circ$ ($C$).
In this example ($\rho$ denotes correlation):
$$\rho(A,B) = \cos 80^\circ \approx 0.17 > 0, \qquad \rho(A,C) = \cos 160^\circ \approx -0.94 < 0, \qquad \rho(B,C) = \cos 80^\circ \approx 0.17 > 0.$$
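You can verify this numerically with a minimal R sketch (the angles are just the illustrative choices from above): build two orthonormal centered vectors u and v, and place A, B, C in their plane; the correlation of any pair is then exactly the cosine of the angle between their directions.

> set.seed(1)
> x <- rnorm(1000); x <- x - mean(x); u <- x/sqrt(sum(x^2))
> y <- rnorm(1000); y <- y - mean(y); y <- y - sum(y*u)*u; v <- y/sqrt(sum(y^2))
> deg <- pi/180
> A <- u                                # direction 0 degrees
> B <- cos(80*deg)*u + sin(80*deg)*v    # direction 80 degrees
> C <- cos(160*deg)*u + sin(160*deg)*v  # direction 160 degrees
> round(c(cor(A,B), cor(A,C), cor(B,C)), 3)
[1]  0.174 -0.940  0.174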
However, your surprise is not misplaced.
The angle between vectors is a distance metric on the unit sphere, so it satisfies the triangle inequality:
$$\measuredangle AB \le \measuredangle AC + \measuredangle BC$$
thus, since $\cos \measuredangle AB = \rho(A,B)$,
$$\arccos\rho(A,B) \le \arccos\rho(A,C) + \arccos\rho(B,C) $$
therefore (since $\cos$ is decreasing on $[0,\pi]$)
$$\rho(A,B)\ge\rho(A,C)\times\rho(B,C) - \sqrt{(1-\rho^2(A,C))\times(1-\rho^2(B,C))} $$
So, while the sign pattern you describe is possible, the three correlations cannot be arbitrary. Permuting the roles of the variables in the same inequality gives
$$\rho(A,C)\ge\rho(A,B)\times\rho(B,C) - \sqrt{(1-\rho^2(A,B))\times(1-\rho^2(B,C))},$$
so if $\rho(A,B)$ and $\rho(B,C)$ are both close to $1$, then $\rho(A,C)$ is forced to be positive. Your intuition is right for strong correlations; it can fail only when the correlations are moderate.
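As a quick empirical check, you can draw many small random data sets in R and verify that the realized correlation triples always satisfy the bound (checking the first form here; the small tolerance guards against floating-point error):

> set.seed(1)
> ok <- replicate(10000, {
+   r <- cor(matrix(rnorm(30), 10, 3))   # sample correlations of three variables
+   r[1,2] >= r[1,3]*r[2,3] - sqrt((1 - r[1,3]^2)*(1 - r[2,3]^2)) - 1e-12
+ })
> all(ok)
[1] TRUE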
Yes, two co-occurring conditions can have opposite effects.
For example:
I've heard a car analogy that applies well to the question: a driver on a hilly road tries to hold a constant speed (C). Going uphill, she presses the accelerator harder; going downhill, she eases off. So the steepness of the grade (A) and the accelerator position (B) end up positively correlated, even though a steeper grade slows the car down (a negative effect on C) while more accelerator speeds it up (a positive effect on C).
The key here is the driver's intention to maintain a constant speed (C): the positive correlation between A and B follows naturally from that intention. You can construct endless examples of A, B, and C related in this way.
The analogy comes from an interpretation of Milton Friedman's Thermostat, an interesting analysis of monetary policy and econometrics, but that's irrelevant to the question.
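The analogy is easy to simulate in R (a rough sketch; the variable names and coefficients are illustrative choices, not part of the original story): let the grade vary randomly, let the driver press the accelerator in proportion to the grade (imperfectly), and let speed respond positively to the accelerator and negatively to the grade.

> set.seed(1)
> grade <- rnorm(1000)                          # A: steepness of the road
> accel <- grade + rnorm(1000, 0, 0.5)          # B: driver adds gas on steeper grades
> speed <- accel - grade + rnorm(1000, 0, 0.5)  # C: stays roughly constant
> cor(grade, accel)                             # positive (about 0.9)
> coef(lm(speed ~ grade + accel))               # grade near -1, accel near +1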
Yes, this is trivial to demonstrate with a simulation:
Simulate two variables, A and B, that are positively correlated:
> require(MASS)
> set.seed(1)
> Sigma <- matrix(c(10,3,3,2),2,2)
> dt <- data.frame(mvrnorm(n = 1000, rep(0, 2), Sigma))
> names(dt) <- c("A","B")
> cor(dt)
          A         B
A 1.0000000 0.6707593
B 0.6707593 1.0000000
Create variable C as A minus B plus noise, so the two positively correlated predictors have opposite effects on C:
> dt$C <- dt$A - dt$B + rnorm(1000,0,5)
Behold:
> (lm(C~A+B,data=dt))
Coefficients:
(Intercept)            A            B
    0.03248      0.98587     -1.05113
Edit: Alternatively (as suggested by Kodiologist), just simulate from a multivariate normal distribution such that $\operatorname{cor}(A,B) > 0$, $\operatorname{cor}(A,C) > 0$ and $\operatorname{cor}(B,C) < 0$:
> set.seed(1)
> Sigma <- matrix(c(1,0.5,0.5,0.5,1,-0.5,0.5,-0.5,1),3,3)
> dt <- data.frame(mvrnorm(n = 1000, rep(0,3), Sigma, empirical=TRUE))
> names(dt) <- c("A","B","C")
> cor(dt)
     A    B    C
A  1.0  0.5  0.5
B  0.5  1.0 -0.5
C  0.5 -0.5  1.0
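As a follow-up check on this second data set, regressing C on A and B also yields opposite-signed coefficients. Because empirical=TRUE makes the sample correlation matrix exactly Sigma, the slopes work out to exactly 1 and -1:

> coef(lm(C ~ A + B, data = dt))   # (Intercept) ~ 0, A = 1, B = -1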
Take centered variables with $\langle A,B\rangle > 0$ and construct the outcome as
$$ C = mB + n\,\bigl(A - \operatorname{proj}_B(A)\bigr), $$
where $\operatorname{proj}_B(A) = \frac{\langle A,B\rangle}{\langle B,B\rangle}B$. Then
$$ \langle C,A\rangle = m\langle B,A\rangle + n\langle A,A\rangle - n\,\frac{\langle A,B\rangle^2}{\langle B,B\rangle}. $$
Note that $\langle C,B\rangle = m\langle B,B\rangle$, because $A - \operatorname{proj}_B(A)$ is orthogonal to $B$; so for $m > 0$ the effect of B on C is positive. The covariance between C and A is negative when two conditions hold: $n < 0$, and $|n|$ is large enough that
$$ |n|\left(\langle A,A\rangle - \frac{\langle A,B\rangle^2}{\langle B,B\rangle}\right) > m\langle B,A\rangle. $$
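Here is a small R check of this construction (the particular data and the choices m = 1, n = -3 are just one instance satisfying the two conditions):

> set.seed(1)
> A <- rnorm(1000); B <- A + rnorm(1000)
> A <- A - mean(A); B <- B - mean(B)   # center, so inner products act like covariances
> projBA <- sum(A*B)/sum(B*B) * B      # projection of A onto B
> m <- 1; n <- -3                      # n < 0 and |n| large enough
> C <- m*B + n*(A - projBA)
> c(cov(A,B), cov(C,A), cov(C,B))      # signs: +, -, +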