I have a process which, after fixing the values of some parameters, generates samples from a Bernoulli distribution with unknown $p$.
The value of $p$ is typically small, and what I want to do is discover suitable values for my parameters so that $p$ is at least $0.1$. An added complication is that generating each sample (i.e. running my “experiment” once) takes a considerable amount of time.
One thing I can do is, say, fix the parameters, generate 100 samples, count the number $k$ of successes and, if $k/100 < 0.1$, try again with different parameters until I find ones that yield $k/100 \geq 0.1$.
However, since generating each sample takes time, intuitively I would like to stop generating samples for a fixed set of parameters as soon as it “doesn't look promising”. For example, if I've already seen 30 samples and not a single success then, according to this question, with 95% confidence $p < 0.1$; so it would be reasonable to stop there and not generate the remaining 70 samples. I would like to generalise this idea.
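As a quick sanity check of that 30-sample figure (this is the classic “rule of three”: after $n$ straight failures, the 95% upper confidence bound on $p$ is roughly $3/n$):

```python
# If p were exactly 0.1, the chance of 30 consecutive failures
# would already be below the 5% significance level:
print((1 - 0.1) ** 30)  # ≈ 0.0424
```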
I guess the question I really want to ask is the following:
Given that I've already seen $k$ successes in $n$ samples from a Bernoulli distribution with unknown parameter $p$, what is the probability that, if I keep sampling from the same distribution, I'll see at least $K$ successes (say $K = 10$) after $N$ total observations (say $N = 100$)?
Update: In case anybody is interested, here are some example values I computed with a small Python script using the answers provided below. Note that the two methods compute two slightly different probabilities; you might want to read the exact details below.
Rasmus' method | Zen's method
k=0 k=1 k=2 k=3 k=4 k=5 | k=0 k=1 k=2 k=3 k=4 k=5
n= 0 0.90 | 0.90
n= 1 0.81 0.99 | 0.81 0.99
n= 2 0.72 0.97 1.00 | 0.73 0.97 1.00
n= 3 0.66 0.95 1.00 1.00 | 0.66 0.95 1.00 1.00
n= 4 0.59 0.92 0.99 1.00 1.00 | 0.60 0.92 0.99 1.00 1.00
n= 5 0.53 0.89 0.98 1.00 1.00 1.00 | 0.54 0.88 0.98 1.00 1.00 1.00
n= 6 0.48 0.85 0.97 1.00 1.00 1.00 | 0.49 0.85 0.97 1.00 1.00 1.00
n= 7 0.43 0.81 0.96 0.99 1.00 1.00 | 0.45 0.81 0.96 0.99 1.00 1.00
n= 8 0.39 0.78 0.95 0.99 1.00 1.00 | 0.41 0.78 0.94 0.99 1.00 1.00
n= 9 0.35 0.74 0.93 0.99 1.00 1.00 | 0.37 0.74 0.92 0.98 1.00 1.00
n=10 0.32 0.70 0.91 0.98 1.00 1.00 | 0.34 0.70 0.90 0.98 1.00 1.00
n=11 0.28 0.66 0.89 0.97 1.00 1.00 | 0.31 0.67 0.88 0.97 0.99 1.00
n=12 0.25 0.62 0.87 0.97 0.99 1.00 | 0.28 0.63 0.86 0.96 0.99 1.00
n=13 0.23 0.58 0.84 0.96 0.99 1.00 | 0.25 0.60 0.83 0.95 0.99 1.00
n=14 0.21 0.55 0.82 0.94 0.99 1.00 | 0.23 0.56 0.81 0.93 0.98 1.00
n=15 0.19 0.51 0.79 0.93 0.98 1.00 | 0.21 0.53 0.78 0.92 0.98 0.99
n=16 0.17 0.48 0.76 0.92 0.98 1.00 | 0.19 0.50 0.76 0.90 0.97 0.99
n=17 0.15 0.45 0.73 0.90 0.97 0.99 | 0.18 0.47 0.73 0.89 0.96 0.99
n=18 0.14 0.42 0.70 0.88 0.96 0.99 | 0.16 0.45 0.71 0.87 0.95 0.98
n=19 0.12 0.39 0.68 0.87 0.96 0.99 | 0.15 0.42 0.68 0.85 0.94 0.98
n=20 0.11 0.37 0.65 0.85 0.95 0.99 | 0.14 0.40 0.65 0.83 0.93 0.98
n=21 0.10 0.34 0.62 0.83 0.94 0.98 | 0.13 0.37 0.63 0.82 0.92 0.97
n=22 0.09 0.32 0.59 0.81 0.93 0.98 | 0.12 0.35 0.60 0.80 0.91 0.96
n=23 0.08 0.29 0.56 0.79 0.91 0.97 | 0.11 0.33 0.58 0.78 0.90 0.96
n=24 0.07 0.27 0.54 0.76 0.90 0.97 | 0.10 0.31 0.56 0.76 0.88 0.95
n=25 0.07 0.25 0.51 0.74 0.89 0.96 | 0.09 0.29 0.53 0.73 0.87 0.94
n=26 0.06 0.23 0.48 0.72 0.87 0.95 | 0.08 0.27 0.51 0.71 0.85 0.93
n=27 0.05 0.22 0.46 0.69 0.86 0.95 | 0.08 0.26 0.49 0.69 0.84 0.92
n=28 0.05 0.20 0.43 0.67 0.84 0.94 | 0.07 0.24 0.47 0.67 0.82 0.91
n=29 0.04 0.18 0.41 0.65 0.82 0.93 | 0.06 0.23 0.45 0.65 0.81 0.90
n=30 0.04 0.17 0.39 0.62 0.81 0.92 | 0.06 0.21 0.43 0.63 0.79 0.89
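For reference, here is a minimal self-contained sketch of one way such a predictive probability can be computed, assuming a uniform Beta(1, 1) prior on $p$ (the function name and the prior choice are mine; under these assumptions the number of successes in the remaining trials follows a beta-binomial distribution):

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function B(a, b) = Γ(a)Γ(b) / Γ(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def prob_at_least(k, n, K, N):
    """Probability of at least K total successes after N total trials,
    given k successes observed in the first n trials, under a uniform
    Beta(1, 1) prior on p (beta-binomial posterior predictive)."""
    a, b = k + 1, n - k + 1   # posterior on p is Beta(k+1, n-k+1)
    m = N - n                 # trials still to run
    need = max(K - k, 0)      # successes still required
    if need > m:
        return 0.0
    lb = log_beta(a, b)
    # sum the beta-binomial pmf over all acceptable future outcomes
    return sum(comb(m, j) * exp(log_beta(j + a, m - j + b) - lb)
               for j in range(need, m + 1))
```

For example, `prob_at_least(0, 2, 10, 100)` gives about 0.729, which appears to agree with the $n=2$, $k=0$ entry of the right-hand column above.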