
What should be an acceptable $R^2$ value for a multiple regression analysis in the social and behavioral sciences (e.g., psychology)?

Xi'an
    Welcome to CV. Besides being primarily opinion based, an answer to the question would depend on the number of variables, as $\text{R}^2$ *always* increases as you keep adding variables, without necessarily resulting in a better model. – Frans Rodenburg Jul 15 '19 at 03:50
  • Check also https://stats.stackexchange.com/q/13314/35989 – Tim Jul 15 '19 at 05:29
  • Also relevant: https://stats.stackexchange.com/q/414349/121522 – mkt Jul 15 '19 at 07:32
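
As a quick illustration of the point in the first comment above: the plain $R^2$ can never decrease when predictors are added, even pure-noise ones, whereas the adjusted $\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$ penalises the extra parameters. The following is a minimal simulation sketch (Python with `statsmodels`; the data and effect size are invented for illustration, not taken from the thread):

```python
# Minimal sketch: plain R^2 never decreases as pure-noise predictors are added,
# while adjusted R^2 penalises the extra parameters. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)   # one real predictor with a modest effect

X = x.reshape(-1, 1)
for k in range(6):
    if k > 0:
        X = np.column_stack([X, rng.normal(size=n)])  # add a pure-noise column
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{k} noise predictors: R^2 = {fit.rsquared:.3f}, "
          f"adj. R^2 = {fit.rsquared_adj:.3f}")
```

Running this typically shows $R^2$ creeping upward with every noise column, while the adjusted $R^2$ stays roughly flat or drops.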

1 Answer


"Acceptable" to whom?

I do not have any experience publishing in psychology or behavioural science, but one would certainly hope that every scientific discipline would by now have shed the filtering mechanism by which a study is only considered "acceptable" if it meets some threshold of "statistical significance" for its findings. Regardless of whether this is a requirement for a "significant" p-value, a minimum coefficient of determination, or some minimal level of predictive power, acceptance on the basis of the statistical significance of results is known to cause publication bias, which skews the published scientific evidence. Analyses with null results are still valuable scientific information, so a regression leading to a low coefficient of determination should be just as "acceptable" as one leading to a high coefficient of determination. While the latter might be more exciting, or lead to more important follow-up questions, both results give the reader information on the associations found in a set of data.
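
To make that last point concrete, here is a small hypothetical sketch (simulated data, not part of the answer): even when the coefficient of determination is low, the fitted slope and its confidence interval still convey useful information about the association.

```python
# Illustrative sketch: a weak but real association gives a low R^2,
# yet the slope estimate and its confidence interval remain informative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)      # true but weak association

fit = sm.OLS(y, sm.add_constant(x)).fit()
lo, hi = fit.conf_int()[1]
print(f"R^2 = {fit.rsquared:.3f}")                      # typically around 0.04
print(f"slope = {fit.params[1]:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```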

The practice of regarding a statistical analysis as "acceptable" only if you get a "positive" answer is an anachronism that has been criticised by statisticians for decades. Fortunately, most reputable disciplines are now coming around to the view that papers should be just as "acceptable" and publishable if they have null results or low predictive power. Publication bias of this kind has been a huge problem in the social sciences, and it has taken a long time for the message of statisticians to start to get through. Protocols and mechanisms such as pre-registration of research, journals for null results, etc., are some responses that have been developed to try to slay this dragon. There are some statisticians who argue for a peer-review process where reviewers are blinded to the study results, so that no study is considered for publication based on the outcome of the data analysis.

Thus, if you are conducting research involving regression analysis, and you find that the response variable in your analysis is not closely related to your explanatory variables (e.g., a low coefficient of determination), I would strongly encourage you to try to publish your work anyway, just as you would if you had found a "significant" result. If you receive negative comments from journal referees or editors on the basis of a lack of statistical "significance" or predictive power, then you have a good basis to argue with them and try to have them see reason. If this doesn't work, you can target journals that are specifically devoted to reporting null results (see e.g., here, here or here).

Ben