If you can assume that the two variables whose correlation you wish to calculate stem from a bivariate normal distribution, then you can use the Pearson product-moment correlation coefficient and perform a test of whether or not the correlation is statistically different from zero (i.e. no correlation).
To carry out this test, you must first calculate the Pearson correlation coefficient, $r_{12}$, between variables 1 and 2. It sounds as though you have this calculated already, but I include it here for completeness:
$r_{12}={{\sum_{i=1}^n(y_{i1}-\bar{y}_1)(y_{i2}-\bar{y}_2)}\over{\sqrt{{\sum_{i=1}^n(y_{i1}-\bar{y}_1)^2\sum_{i=1}^n(y_{i2}-\bar{y}_2)^2}}}}$
where $n$ is the number of observations in your analysis (this should be five in your case according to the description, but you've only included four observations in your table - I'm guessing you accidentally left out the fifth observation) and $\bar{y}_1$ and $\bar{y}_2$ are the means of variables 1 and 2 respectively.
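For illustration, here's a minimal Python sketch of that formula, assuming your two variables arrive as equal-length lists (the names `pearson_r`, `y1`, and `y2` are mine, not from your setup):

```python
import math

def pearson_r(y1, y2):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(y1)
    mean1, mean2 = sum(y1) / n, sum(y2) / n
    # Numerator: sum of cross-products of deviations from the two means
    num = sum((a - mean1) * (b - mean2) for a, b in zip(y1, y2))
    # Denominator: square root of the product of the sums of squared deviations
    den = math.sqrt(sum((a - mean1) ** 2 for a in y1)
                    * sum((b - mean2) ** 2 for b in y2))
    return num / den
```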
Then, you'll test:
$H_0: \rho_{12}=0$ versus $H_1: \rho_{12}\ne0$
where $\rho_{12}$ is the population correlation that you are trying to estimate.
The test statistic is given by:
$t^*={r_{12}\sqrt{n-2}\over{\sqrt{1-r_{12}^2}}}$
If the null hypothesis is true, then $t^*$ follows a Student's $t$ distribution with $n-2$ degrees of freedom. The value of $t^*$ can then be looked up in any standard $t$-distribution table, and if you always have five observations, you could simply hard-code the critical value for the $t$-distribution with $5-2=3$ degrees of freedom, which would let you determine whether or not to reject the null hypothesis from within your program.
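Here's a sketch of that decision rule in Python (3.182 is the standard two-sided 5% critical value for 3 degrees of freedom, which you can check in any $t$ table; the example $r$ is made up for illustration):

```python
import math

def t_statistic(r, n):
    """Test statistic t* for H0: rho = 0."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Made-up example: a correlation of 0.85 computed from n = 5 observations
r, n = 0.85, 5
t_star = t_statistic(r, n)

# Two-sided 5% critical value of the t distribution with 5 - 2 = 3 df
T_CRIT_3DF = 3.182
print(f"t* = {t_star:.3f}, reject H0: {abs(t_star) > T_CRIT_3DF}")
```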
If you wanted to calculate the $p$-value for your test, you'd simply need to pass the value of your test statistic, along with the degrees of freedom, into any program that is capable of calculating the Cumulative Distribution Function (CDF) of the Student's $t$ distribution and have it return the corresponding probability. Nearly every computing language has the CDF probabilities built in, so you should use the pre-built functions available to you in its statistics libraries. But if you are really interested in "rolling your own" (I can't encourage you enough NOT to do this though), then you can find all sorts of pseudocode on the internet for how to do this, but my favorite resource is the classic Numerical Recipes 3rd Edition: The Art of Scientific Computing (see pages 324 and 325). Many of the standard statistical libraries included in software today use the algorithms presented in this classic text.
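For example, with SciPy (`scipy.stats.t.sf` is one minus the $t$ CDF, and `scipy.stats.pearsonr` in fact returns this two-sided $p$-value directly, so in practice one call suffices; the data here are made up):

```python
import math
from scipy import stats

# Hypothetical data, five observations per variable
y1 = [1.0, 2.0, 3.0, 4.0, 5.0]
y2 = [1.2, 1.9, 3.3, 4.1, 4.8]
n = len(y1)

r, _ = stats.pearsonr(y1, y2)                 # SciPy's own Pearson r
t_star = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Two-sided p-value from the t CDF: sf(x, df) = 1 - CDF(x, df)
p_value = 2 * stats.t.sf(abs(t_star), n - 2)
print(f"t* = {t_star:.3f}, p = {p_value:.4f}")
```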
A word of caution (updated)
- This test is not very robust if your data are not bivariate normal. In theory, the test does not require bivariate normality (only finite variances and a finite covariance), but, that being said, it seems to perform poorly without bivariate normality in small samples. If bivariate normality is not an appropriate assumption, you should consider using a nonparametric statistic like the Spearman rank correlation coefficient and its corresponding test (a short sketch follows this list). See this work for more details.
- Recall that the correlation coefficient measures the strength of a linear association. So it's quite possible your data are very strongly related to one another, just not linearly. For example, data that could be modeled by a sinusoidal function over time ($x_1=\text{heart rate}$ and $x_2=\text{time}$, for example) may show a correlation of zero while still being strongly related: the positive linear association between heart rate and time during the day (increasing heart rate) may cancel out the effect of the negative linear association during the night or during sleep (decreasing heart rate).
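As promised above, a short SciPy sketch of the Spearman alternative (the data are made up and lightly reshuffled so the ranks aren't perfectly monotone; with $n=5$ the $p$-value rests on very little data, so interpret it cautiously):

```python
from scipy import stats

# Made-up data; y2 is reshuffled so the ranks are not perfectly monotone
y1 = [1.0, 2.0, 3.0, 4.0, 5.0]
y2 = [1.2, 1.9, 3.3, 4.8, 4.1]

# spearmanr returns the rank correlation and the p-value of its test
rho, p = stats.spearmanr(y1, y2)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```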