In this thread, it is proposed that if you don't know what the variance-covariance matrix looks like for a dataset that you want to run a multi-level meta-analysis on, you can, among other things, "make a rough/educated guess how large the correlations are". I have a dataset with a large number of studies, with a nested structure where there are several samples per study and several effect sizes per sample. It would be virtually impossible for me to estimate reasonable intra-sample and intra-article correlations for each individual study and/or type of test, so I will basically have to settle on one intra-sample correlation and one intra-article correlation that is applied across the board.
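To make concrete what "one intra-sample and one intra-article correlation across the board" amounts to, here is a minimal sketch of how such an assumed variance-covariance matrix could be constructed. The function, the data, and the default ρ values (0.5 within samples, 0.25 within articles) are all illustrative assumptions, not field-specific estimates:

```python
import numpy as np

def build_vcov(se, sample_id, article_id, rho_sample=0.5, rho_article=0.25):
    """Assumed variance-covariance matrix for dependent effect sizes.

    Pairs of effect sizes from the same sample get correlation rho_sample;
    pairs from different samples within the same article get rho_article;
    pairs from different articles are treated as independent.  The rho
    defaults are placeholder guesses, not empirical values.
    """
    se = np.asarray(se, dtype=float)
    k = len(se)
    V = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                rho = 1.0                       # variance on the diagonal
            elif sample_id[i] == sample_id[j]:
                rho = rho_sample                # same sample, same article
            elif article_id[i] == article_id[j]:
                rho = rho_article               # same article, different sample
            else:
                rho = 0.0                       # different articles
            V[i, j] = rho * se[i] * se[j]       # cov = rho * se_i * se_j
    return V

# Hypothetical toy input: three effect sizes, two samples, one article.
se = [0.2, 0.3, 0.4]
sample = ["s1", "s1", "s2"]
article = ["a1", "a1", "a1"]
V = build_vcov(se, sample, article)
```

The resulting `V` is what would then be passed to the meta-analysis model (e.g., as the `V` argument of `rma.mv` in R's metafor, which also provides `vcalc` for exactly this kind of construction).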
Now, I'm trying to figure out how one could go about choosing suitable correlation values that could at least somewhat be argued for. Right now, I've settled on an intra-sample correlation that explains 50% of the variation and an intra-article correlation that explains 25% of the variation, but these values were selected mainly because they are nice round numbers and because they feel fairly conservative (the probability that I'm underestimating the correlations seems low). However, in reality, I have no idea what reasonable values would be for my field (psychology).
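One thing that can be done regardless of which value is picked is a sensitivity analysis: refit the model under a grid of assumed correlations and check how much the conclusions move. A minimal self-contained sketch, using a GLS pooled mean under a compound-symmetric working covariance and purely made-up effect sizes:

```python
import numpy as np

def pooled_under_rho(y, se, rho):
    """GLS pooled mean and its SE under a compound-symmetric working
    covariance: all pairs of effect sizes share the assumed correlation rho."""
    V = rho * np.outer(se, se)      # off-diagonal: rho * se_i * se_j
    np.fill_diagonal(V, se ** 2)    # diagonal: sampling variances
    Vi = np.linalg.inv(V)
    one = np.ones(len(y))
    var = 1.0 / (one @ Vi @ one)    # variance of the GLS pooled mean
    mu = var * (one @ Vi @ y)
    return mu, np.sqrt(var)

# Hypothetical effect sizes and standard errors, purely illustrative.
y = np.array([0.30, 0.45, 0.25, 0.50])
se = np.array([0.10, 0.12, 0.11, 0.15])
for rho in (0.0, 0.25, 0.5, 0.75):
    mu, s = pooled_under_rho(y, se, rho)
    print(f"rho={rho:.2f}  pooled={mu:.3f}  SE={s:.3f}")
```

If the pooled estimate and its standard error barely change across the grid, the exact choice of correlation matters little; if they change a lot, that itself is worth reporting.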
What are some good/smart/ingenious ways of going about finding these correlation values? Have others solved/tackled this problem before?