Hopefully this isn't too dumb a question. I've been thinking about it for a while and don't have a clear answer. I saw this question, where the answer mentions the Poisson distribution, but the Poisson doesn't seem to apply here because of its time component (please correct me if I'm wrong).
Suppose I have a set of sales data like this:
| Opportunity | Dollar Value |
| --- | --- |
| Sales Opportunity 1 | \$34,000 |
| Sales Opportunity 2 | \$78,000 |
| ... | ... |
| Sales Opportunity x | \$98,000 |
Each "opportunity" represents a currently-pending sales deal. The actual value of the deal is subject to change as salespeople add/remove line items from the deal.
Given the standard deviation of this dataset, how could I calculate the probability of a certain deal closing out above a certain dollar level, e.g. $100,000?
For instance, if sales opportunity 1 is at \$34K and opportunity 2 is at \$78K, what is the probability that opportunity 1 closes above \$100K, and what is the probability that opportunity 2 closes above \$100K?
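To make the question concrete, here is the kind of calculation I have in mind. It assumes each deal's final value is normally distributed around its current value, with a spread (`sigma`) estimated somehow from historical deal-value changes; both the normality assumption and the `sigma = 25_000` figure are made up for illustration, and whether this is a sensible model is really what I'm asking:

```python
from statistics import NormalDist

# Assumed spread of final-vs-current deal values (placeholder number;
# in practice this would come from historical data somehow).
sigma = 25_000

def prob_close_above(current_value, threshold, sigma=sigma):
    """P(final value > threshold) under a Normal(current_value, sigma) model."""
    return 1 - NormalDist(mu=current_value, sigma=sigma).cdf(threshold)

p1 = prob_close_above(34_000, 100_000)  # opportunity 1
p2 = prob_close_above(78_000, 100_000)  # opportunity 2
print(p1, p2)
```

Under this sketch, opportunity 2 would naturally get a higher probability since it starts closer to the \$100K threshold, but I don't know if a plain normal model is justified here.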