We have a binomial process that yields samples of 60 trials. To save time, once 2 failures have been observed the process is reset.
So if a test series hits 2 failures early, the resulting sample is truncated.
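To make the stopping rule concrete, here is a quick simulation of one series (a sketch only; the failure probability p = 0.02 is a made-up value):

# One curtailed series: Bernoulli trials with failure probability p,
# stopped at 60 trials or at the 2nd failure, whichever comes first
sim_series <- function(p = 0.02, n_max = 60, max_fail = 2) {
  fail <- 0L; pass <- 0L
  while (fail + pass < n_max && fail < max_fail) {
    if (runif(1) < p) fail <- fail + 1L else pass <- pass + 1L
  }
  c(FAIL = fail, PASS = pass)
}
set.seed(1); sim_series()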
ex <- data.frame(FAIL = c(1, 1, 0, 2, 0, 2), PASS = c(59, 59, 60, 5, 60, 2))
# rows 4 and 6 were truncated early: FAIL + PASS < 60
Samples with 2 failures can therefore be anywhere from 2/58 to 2/2.
This causes a problem: I can't know whether the 2/2 sample might have ended up as, say, 5/55 had the series not been terminated early.
The population failure probability p also differs between samples because of some changing independent variables (IVs).
I'm having a hard time thinking about how to analyze these samples.
I know odds ratios would be valid, but I get lots of zero-count cells when stratifying into 2x2 tables.
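For example, even comparing two rows of ex as a 2x2 table hits a zero cell, so the sample odds ratio degenerates to 0:

# 2x2 table built from rows 3 and 4 of ex; the zero FAIL cell
# makes the odds ratio (0*5)/(60*2) = 0
tab <- matrix(c(ex$FAIL[3], ex$PASS[3],
                ex$FAIL[4], ex$PASS[4]),
              nrow = 2, byrow = TRUE,
              dimnames = list(sample = c("3", "4"),
                              outcome = c("FAIL", "PASS")))
tab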
Is this the proper way to weight for the changing N (and hence variance) in a GLM?
glm(FAIL/(FAIL+PASS) ~ IV1 + IV2 + IV3, family = quasibinomial, data = ex)
I'm confused about when to use weights, an offset, or both.
e.g.
glm(FAIL/(FAIL+PASS) ~ IV1 + IV2 + IV3 + offset(FAIL + PASS), family = quasibinomial, data = ex)
or
glm(FAIL/(FAIL+PASS) ~ IV1 + IV2 + IV3, weights = FAIL + PASS, family = quasibinomial, data = ex)
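For concreteness, both candidate calls do run once ex has a covariate column (IV1 below is a made-up placeholder, since ex as defined above has no IVs):

ex$IV1 <- 1:6  # hypothetical covariate, for illustration only
fit_weights <- glm(FAIL/(FAIL+PASS) ~ IV1, family = quasibinomial,
                   weights = FAIL + PASS, data = ex)
fit_offset  <- glm(FAIL/(FAIL+PASS) ~ IV1 + offset(FAIL + PASS),
                   family = quasibinomial, data = ex)
summary(fit_weights)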