I'm using the formula from Bayesian A/B testing to compute the results of an A/B test with Bayesian methodology.
$$ \Pr(p_B > p_A) = \sum_{i=0}^{\alpha_B-1} \frac{B(\alpha_A+i,\ \beta_A+\beta_B)}{(\beta_B+i)\,B(1+i,\ \beta_B)\,B(\alpha_A,\ \beta_A)} $$
where
- $\alpha_A$ is one plus the number of successes for A
- $\beta_A$ is one plus the number of failures for A
- $\alpha_B$ is one plus the number of successes for B
- $\beta_B$ is one plus the number of failures for B
- $B$ is the Beta function
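If I understand the derivation correctly, the "one plus" in these definitions just means that, under uniform $\mathrm{Beta}(1,1)$ priors, the posterior distributions of the two conversion rates are
$$ p_A \mid \text{data} \sim \mathrm{Beta}(\alpha_A, \beta_A), \qquad p_B \mid \text{data} \sim \mathrm{Beta}(\alpha_B, \beta_B) $$
and the sum above is the exact value of $\Pr(p_B > p_A)$ under those posteriors.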
Example data:
control: 1000 trials with 78 successes
test: 1000 trials with 100 successes
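With this data the posterior parameters are $\alpha_A = 79$, $\beta_A = 923$ for control and $\alpha_B = 101$, $\beta_B = 901$ for test. As a sanity check (this simulation is my own addition, not from the linked post; the variable names, seed, and number of draws are arbitrary), I can approximate $\Pr(p_{test} > p_{control})$ by Monte Carlo, drawing from the two Beta posteriors:
set.seed(42)  # arbitrary seed, for reproducibility
n_sims <- 1e6
# posterior draws: Beta(successes + 1, failures + 1)
p_control <- rbeta(n_sims, 78 + 1, 1000 - 78 + 1)
p_test <- rbeta(n_sims, 100 + 1, 1000 - 100 + 1)
mean(p_test > p_control)
This comes out well above 0.9, roughly in line with a one-sided version of the frequentist test below.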
A standard, non-Bayesian proportion test gives me a significant result at the 10% level (p < 0.1):
prop.test(n=c(1000,1000), x=c(100,78), correct=F)
# 2-sample test for equality of proportions without continuity correction
#
# data: c(100, 78) out of c(1000, 1000)
# X-squared = 2.9847, df = 1, p-value = 0.08405
# alternative hypothesis: two.sided
# 95 percent confidence interval:
# -0.0029398 0.0469398
# sample estimates:
# prop 1 prop 2
# 0.100 0.078
while my implementation of the Bayesian formula (following the explanation in the link) gives me a very weird result:
# successes for control, plus 1
a_control <- 78 + 1
# failures for control, plus 1
b_control <- 1000 - 78 + 1
# successes for test, plus 1
a_test <- 100 + 1
# failures for test, plus 1
b_test <- 1000 - 100 + 1
# accumulate the terms of the closed-form sum
prob_test_better <- 0
for (i in 0:(a_test - 1)) {
  prob_test_better <- prob_test_better + beta(a_control + i, b_control + b_test) /
    (b_test + i) * beta(1 + i, b_test) * beta(a_control, b_control)
}
round(prob_test_better, 4)
# [1] 0
That means $\Pr(p_{test} > p_{control}) = 0$, which doesn't make any sense given this data.
Could someone clarify?