I'm reading a scientific article that's investigating whether some prehistoric human populations had differently shaped long bones. The methods are as follows:
- Define 21 features of arm long bones. For example, the "robusticity index" is defined as the ratio of the bone's perimeter at the center to its total length: $Robusticity = \frac{Perimeter\;at\;the\;center}{Total\;length}$
- Collect data for a very small number of individuals in each population (n for population A: 1 to 14; n for population B: 1 to 6).
- Eyeball the results, then, without any statistical tests, propose transcontinental patterns in the geographical distribution of the 2 or 3 features (out of 21) that look patterned.
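For concreteness, the robusticity index is just a ratio of two measurements (the numbers below are invented):

```python
# Toy illustration of the robusticity index (hypothetical measurements, in mm).
perimeter_at_center = 62.0   # midshaft perimeter, made-up value
total_length = 310.0         # total bone length, made-up value

robusticity = perimeter_at_center / total_length
print(round(robusticity, 3))  # 0.2
```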
I t-tested everything and applied the Benjamini-Hochberg correction for multiple testing (all corrected p-values came out above 0.3). But 17 of the 21 features, including the robusticity index, are ratios, and I'm uneasy about having used t-tests on them.
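Concretely, what I did amounts to the sketch below. The summary statistics are invented (two features shown instead of 21), I used the Welch variant of the t-test computed from summary statistics, and `bh_adjust` is my own helper rather than a library function:

```python
import numpy as np
from scipy.stats import ttest_ind_from_stats

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downward
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# Hypothetical summaries per feature: (mean, sd, n) for each population.
features = [
    ((0.210, 0.030, 9), (0.230, 0.040, 5)),   # e.g. robusticity index
    ((1.450, 0.120, 14), (1.400, 0.150, 6)),  # some other ratio
]

pvals = []
for (m1, s1, n1), (m2, s2, n2) in features:
    # Welch's t-test computed from summary statistics alone
    t, p = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    pvals.append(p)

print(bh_adjust(pvals))
```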
After spending some time hunting for data on these features collected from some 300 recent skeletons, I believe some of them are log-normally distributed.
How can I run a (preferably parametric) test on these data given that the full data are not available? I only have the sample size, the mean, and the standard deviation, as well as the min and max. I can reconstruct all data points when the sample size is 3 or less (from the min, max, and mean), and I could probably do it for a sample of 4 (using the standard deviation as well). But what can I do when the sample size is greater than 4? I can't log-transform without the actual data. Is there a special t-like test I could use?
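To make the two ideas concrete, here is a sketch with invented numbers. The first part recovers the n = 3 case exactly. The second part is my current workaround idea for n > 4: treat the reported mean and SD as the moments of a lognormal and invert the standard lognormal moment identities to get log-scale parameters, then run a Welch test on the log scale from those. This is only an approximation (sample moments stand in for population moments), so I'd welcome correction:

```python
import math
from scipy.stats import ttest_ind_from_stats

# n = 3: min, max and mean pin down all three values exactly,
# since the middle value is 3*mean - min - max.
def recover_n3(minimum, maximum, mean):
    middle = 3.0 * mean - minimum - maximum
    return sorted([minimum, middle, maximum])

print(recover_n3(0.18, 0.24, 0.21))

# n > 4: convert arithmetic mean/SD of a lognormal variable into
# log-scale parameters via the moment identities
#   E[X]   = exp(mu + sigma^2/2)
#   Var[X] = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
def log_scale_moments(mean, sd):
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# Hypothetical published summaries for the two populations.
mu_a, s_a = log_scale_moments(0.210, 0.030)
mu_b, s_b = log_scale_moments(0.230, 0.040)
t, p = ttest_ind_from_stats(mu_a, s_a, 9, mu_b, s_b, 5, equal_var=False)
print(round(p, 3))
```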