
I'm reading a scientific article that's investigating whether some prehistoric human populations had differently shaped long bones. The methods are as follows:

  1. Define 21 features of arm long bones. For example, the "robusticity index" is defined as the ratio of the bone's perimeter at the center to its total length: $\text{Robusticity} = \frac{\text{perimeter at the center}}{\text{total length}}$
  2. Collect data for a very small number of individuals in each population (n of pop A: 1 to 14, n of pop B: 1 to 6).
  3. Eyeball the results. Then, without using any statistical tests, propose transcontinental patterns in the geographical distribution of the 2 or 3 features (out of 21) that look patterned.

I t-tested everything and applied the Benjamini-Hochberg correction for multiple testing (all corrected p-values were above 0.3). But 17 of the 21 studied features, like the robusticity index, are ratios, and I'm uneasy about having used t-tests on them.
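For the record, here is roughly what I ran (a minimal sketch: `pop_a` and `pop_b` are hypothetical dicts mapping each feature name to its array of measurements, and I show Welch's t-test, though Student's would be a one-flag change):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def test_all_features(pop_a, pop_b):
    """Welch's t-test per feature, then Benjamini-Hochberg FDR correction.

    pop_a, pop_b: dicts mapping feature name -> 1-D array of measurements
    (hypothetical placeholders for the 21 features).
    """
    features = sorted(pop_a)
    raw_p = [stats.ttest_ind(pop_a[f], pop_b[f], equal_var=False).pvalue
             for f in features]
    # Correct across the 21 tests for the false discovery rate
    _, p_corrected, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
    return dict(zip(features, p_corrected))
```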

After spending some time hunting down data collected for these features on some 300 recent skeletons, I believe some of them are log-normally distributed.
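For what it's worth, this is how I checked (a sketch: `reference` is a placeholder for the array of measurements of one feature on the ~300 recent skeletons, and Shapiro-Wilk is just one reasonable choice of normality test):

```python
import numpy as np
from scipy import stats

def lognormality_check(reference):
    """Compare normality of the raw values vs. their logs.

    If log(X) passes a normality test while X clearly fails it,
    that is (weak) evidence that X is log-normally distributed.
    """
    _, p_raw = stats.shapiro(reference)
    _, p_log = stats.shapiro(np.log(reference))
    return {"p_raw": p_raw, "p_log": p_log}
```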

How can I run a (preferably parametric) test on these data given that the full data are not available? I only have the sample size, the mean, the standard deviation, and the min and max. I can recover all the data points when the sample size is 3 or less (from the min, max, and mean), and I could probably do it when the sample size is 4 (using the standard deviation as well). But what can I do when the sample size is more than 4? I can't log-transform without the actual data. Is there a special t-like test I could use?
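Concretely, for n = 4 this is the reconstruction I had in mind (a sketch, assuming the reported sd is the sample sd with n − 1 in the denominator):

```python
import numpy as np

def reconstruct_n4(minimum, maximum, mean, sd):
    """Recover 4 observations from their min, max, mean, and sample sd.

    The two unknown middle observations a, b satisfy
        a + b     = 4*mean - min - max                    (= S)
        a^2 + b^2 = (3*sd^2 + 4*mean^2) - min^2 - max^2   (= Q)
    so they are the roots of t^2 - S*t + P = 0 with P = (S^2 - Q) / 2.
    """
    S = 4 * mean - minimum - maximum
    Q = (3 * sd**2 + 4 * mean**2) - minimum**2 - maximum**2
    P = (S**2 - Q) / 2
    disc = S**2 - 4 * P
    if disc < -1e-9:
        raise ValueError("summary statistics are inconsistent")
    root = np.sqrt(max(disc, 0.0))
    return np.array([minimum, (S - root) / 2, (S + root) / 2, maximum])
```

For example, `reconstruct_n4(1, 4, 2.5, np.std([1, 2, 3, 4], ddof=1))` returns `[1, 2, 3, 4]`.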

Pertinax
  • There are a variety of tests I would consider suggesting if you had more than summary information. (For sample size four, you can work out the 4 observations given the min, max, mean, and sd in general, so you suppose correctly there: you get a quadratic with two equivalent solutions, where the second solution just swaps the two unknown observations.) – Glen_b Jul 18 '17 at 03:16
  • One method which is used with summary data is ABC (Approximate Bayesian Computation); see for instance https://stats.stackexchange.com/questions/521484/handling-a-big-number-of-summary-statistics-in-abc – kjetil b halvorsen Nov 12 '21 at 00:45
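To make the ABC suggestion concrete, here is a minimal rejection-ABC sketch under a log-normal model; the priors, the tolerance `eps`, and the placeholder summaries in `obs` are ad-hoc assumptions for illustration, not part of the linked answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def summaries(x):
    return np.array([x.mean(), x.std(ddof=1), x.min(), x.max()])

def abc_rejection(obs, n_sims=200_000, eps=0.02):
    """Rejection ABC for one population's log-normal parameters.

    Draw (mu, sigma) from broad ad-hoc priors, simulate a log-normal
    sample of the observed size, and keep the draws whose summary
    statistics land within eps of the observed summaries.
    """
    target = np.array([obs["mean"], obs["sd"], obs["min"], obs["max"]])
    accepted = []
    for _ in range(n_sims):
        mu = rng.normal(np.log(obs["mean"]), 1.0)
        sigma = rng.uniform(0.01, 1.0)
        sim = rng.lognormal(mu, sigma, size=obs["n"])
        if np.linalg.norm(summaries(sim) - target) < eps:
            accepted.append((mu, sigma))
    return np.array(accepted)

# Placeholder summaries for one feature in one population
posterior = abc_rejection({"n": 6, "mean": 0.21, "sd": 0.03,
                           "min": 0.17, "max": 0.26})
```

Running this separately for each population gives approximate posterior draws for (mu, sigma) in each, which can then be compared directly, e.g. via the posterior probability that the two means differ.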

0 Answers