Suppose you have one variable, $x$, with 8 data points, a sample mean of 60%, and a sample standard deviation of 7%. Let's also assume you know the sample comes from a lognormal distribution (or from a distribution with a heavier tail than lognormal).
A sample with mean = 60% and SD = 7% could be produced by an underlying distribution with, say, a true mean of 55% and SD of 10%; or by a true mean of 53% and SD of 9%; or… any number of other combinations of mean and SD could produce the data we see.
Now, I looked at true means running from 30% to 100% and SDs running from 1% to 50% (all in 1% increments) – in other words, 3,550 combinations. For each combination, I simulated 10,000 groups of 8 data points from a lognormal distribution, calculated the mean and SD of each group of 8 points, and counted how many of the 10,000 groups were "close" to my sample mean of 60% and SD of 7%. I defined "close" as being within 0.5% either way – in other words, if the simulated mean was between 59.5% and 60.5%, and the SD was between 6.5% and 7.5%, I counted it as "close."
Let's suppose that with a true mean of 53% and true SD of 9%, 10 out of the 10,000 simulations were "close" to my sample. After doing that for each of the 3,550 combinations, and normalizing the counts so they sum to 1, I had a distribution over the combinations – the likelihood that each one is really the true underlying combination.
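To make this concrete, here is a minimal Python sketch of the procedure I described (the helper `lognormal_params` is just an illustrative name; it converts a target mean and SD into the $(\mu, \sigma)$ parameters of the underlying normal, so the simulated lognormal has exactly that mean and SD). With 3,550 combinations × 10,000 simulations, the full loop is slow, so you may want to shrink `n_sims` while experimenting:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                          # sample size
obs_mean, obs_sd = 0.60, 0.07  # observed sample statistics
tol = 0.005                    # "close" = within 0.5% either way
n_sims = 10_000                # simulated groups per combination

# Grid of candidate true means (30%..100%) and SDs (1%..50%), 1% steps
means = np.arange(0.30, 1.001, 0.01)   # 71 values
sds = np.arange(0.01, 0.501, 0.01)     # 50 values -> 3,550 combinations

def lognormal_params(m, s):
    """Convert a target mean m and SD s into the (mu, sigma) of the
    underlying normal distribution of a lognormal."""
    sigma2 = np.log(1 + (s / m) ** 2)
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)

counts = np.zeros((len(means), len(sds)))
for i, m in enumerate(means):
    for j, s in enumerate(sds):
        mu, sigma = lognormal_params(m, s)
        # n_sims groups of n points each from this candidate lognormal
        samples = rng.lognormal(mu, sigma, size=(n_sims, n))
        group_means = samples.mean(axis=1)
        group_sds = samples.std(axis=1, ddof=1)
        hits = (np.abs(group_means - obs_mean) <= tol) & \
               (np.abs(group_sds - obs_sd) <= tol)
        counts[i, j] = hits.sum()

# Normalize the hit counts into weights over the grid
weights = counts / counts.sum()
```

(As I understand it, this amounts to a grid-based version of approximate Bayesian computation with a uniform prior over the combinations.)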
Next, given the likelihood of each combination, we can ask what is the probability that $x$ is, say, 70% or less. Again, we run through each of the 3,550 combinations and get the answer. For example, if the true mean is 40% with a 15% SD, that probability is, say, 0.903; if the true mean is 65% with a 12% SD, the probability is, say, 0.539. If those were the only two combinations, and they were equally likely, then we'd say the probability of $x$ being 70% or less is (0.903 + 0.539) ÷ 2 = 0.721.
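Continuing the sketch above, this weighted average doesn't need further simulation: each per-combination probability is available exactly from the lognormal CDF, so the answer is $P(x \le t) = \sum_{i,j} w_{ij}\,\Phi\!\left(\frac{\ln t - \mu_{ij}}{\sigma_{ij}}\right)$, where $w_{ij}$ are the normalized weights from the grid:

```python
from scipy.stats import norm  # standard normal CDF

threshold = 0.70  # ask: what is P(x <= 70%)?

prob = 0.0
for i, m in enumerate(means):
    for j, s in enumerate(sds):
        mu, sigma = lognormal_params(m, s)
        # Exact P(X <= threshold) under this candidate lognormal
        p_ij = norm.cdf((np.log(threshold) - mu) / sigma)
        prob += weights[i, j] * p_ij

print(f"P(x <= {threshold:.0%}) is approximately {prob:.3f}")
```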
So my question is: do these calculations make sense? If not, please correct me; if they do, can you suggest a neater way of doing this? Thanks.