
My knowledge of statistical methods is poor, so I don't know which statistical method is best for my case.

Ultimate goal: parameters A, B and C could affect parameter D. I want to know whether they actually do: what is the correlation between each of these parameters and parameter D, and how significant is that correlation?

Imagine I measured D seven times during a year. Each time, I also measured the quantities A, B and C for each sample (I have three different samples).

So now I have D (the dependent variable) vs. the parameters A, B and C (the independent variables).

What I want to know is the statistical significance of any observed correlation between the dependent variable and the independent variables.

My null hypothesis to be tested is 'there is no correlation between the dependent and independent variables.'

And I want to assess this significance at three confidence levels: 90%, 95% and 99%.

Example of expected output: the correlation between A and D is significant at the 90% and 95% confidence levels but not at the 99% level.

Could anyone help me how to perform this in Excel?
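The test being asked for can be sketched outside Excel first. This is a minimal Python sketch of the standard significance test for a Pearson correlation; the seven measurements of A and D below are made-up numbers for illustration, and the critical t values are the usual two-tailed table values for n − 2 = 5 degrees of freedom:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical data: seven measurements of A and the corresponding D.
A = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8, 7.0]
D = [2.3, 2.9, 3.9, 4.6, 5.8, 6.1, 7.4]

r = pearson_r(A, D)
t = t_statistic(r, len(A))

# Two-tailed critical t values for df = 7 - 2 = 5, from a standard t-table.
for level, t_crit in [(0.90, 2.015), (0.95, 2.571), (0.99, 4.032)]:
    verdict = "significant" if abs(t) > t_crit else "not significant"
    print(f"{level:.0%} confidence: {verdict} (|t| = {t:.2f}, critical = {t_crit})")
```

In Excel the same steps are `CORREL` for r, the formula above for t, and `T.DIST.2T(ABS(t), n-2)` for the two-tailed p-value; the correlation is significant at a given confidence level when that p-value is below 1 minus the level.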

    There are many things that seem imprecise or incorrect in your question, maybe it could be useful to step back a little and explain what you are trying to learn from all this. – Gala Jan 27 '14 at 14:40
  • I added an extra explanation to my question above, thanks Gala. – Mona Jan 27 '14 at 15:22
  • You should probably add the excel-tag, as it seems important to you. Have you tried the add-on in Excel called "Data Analysis"? You can enable it in "Options" and it will appear in the "Data" tab. Here you just have to specify your y's and x's, and not much more. – pkofod Jan 27 '14 at 15:50
  • Yes, I want to do it with Excel. And I played with the Data Analysis add-on. I know how to find the correlation of my data, but I don't know how to find the significance of my correlation (confidence level)! Should I perform ANOVA after correlation? – Mona Jan 27 '14 at 16:06
  • Don't do an ANOVA! You can see http://stats.stackexchange.com/questions/61026/can-p-values-for-pearsons-correlation-test-be-computed-just-from-correlation-co for a formula that should be reasonably easy to adapt for Excel, a reference to a paper on the topic by Nick Cox and a link to a web calculator that should address your immediate problem but I am not sure that testing correlations is necessarily the right approach. (I also remember that pure how-do-I coding questions are frowned upon on this site so simply posting Excel formulas is not encouraged). – Gala Jan 27 '14 at 17:00
  • [this site](http://vassarstats.net/textbook/ch4apx.html) has a different suggestion from the linked answer. – Hans Roggeman Jan 27 '14 at 21:22
  • @HansRoggerman In practice the difference is (numerically) very small, see the discussion in comments on the other question. IIRC, the paper by Nick Cox also provides some details. – Gala Jan 28 '14 at 02:36
  • One thing to watch out for when looking at one variable at a time: [Simpson's Paradox](http://en.wikipedia.org/wiki/Simpson%27s_paradox). So my advice would be "much better to do a regression/ANOVA than fall into Simpson's Paradox". – Glen_b Jan 28 '14 at 03:33
  • Just to clarify my earlier comment: I think a regression is probably the best idea but I thought that Mona proposed somehow feeding correlation coefficients in an ANOVA. – Gala Jan 28 '14 at 09:09
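Several commenters suggest fitting one regression of D on A, B and C together rather than testing three separate correlations, which can fall into Simpson's paradox. A minimal ordinary-least-squares sketch in Python, with made-up numbers standing in for the real measurements:

```python
def ols(X, y):
    """Ordinary least squares: solve the normal equations (X'X) b = X'y
    by Gaussian elimination with partial pivoting.
    X is a list of rows; each row starts with 1.0 for the intercept."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for q in range(col, k):
                A[r][q] -= f * A[col][q]
            c[r] -= f * c[col]
    b = [0.0] * k
    for r in reversed(range(k)):
        b[r] = (c[r] - sum(A[r][q] * b[q] for q in range(r + 1, k))) / A[r][r]
    return b

# Hypothetical data: seven observations of (A, B, C) and a response D that is,
# for this demo, an exact linear function of them, so OLS recovers the slopes.
rows = [
    (1.0, 0.5, 3.0), (2.0, 1.0, 2.5), (3.0, 0.8, 2.0),
    (4.0, 1.5, 1.8), (5.0, 1.2, 1.5), (6.0, 2.0, 1.0), (7.0, 1.8, 0.5),
]
D = [2.0 + 1.5 * a + 0.7 * b - 0.3 * c for a, b, c in rows]
X = [[1.0, a, b, c] for a, b, c in rows]
coefs = ols(X, D)
print("intercept and slopes for A, B, C:", [round(v, 3) for v in coefs])
```

In Excel, the same fit is available through the Data Analysis add-in's Regression tool (or `LINEST`), which also reports a p-value for each coefficient, directly answering the significance question for all three predictors at once.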

0 Answers