The condition number of a correlation matrix is not of great interest in its own right. It comes into its own when that matrix gives the coefficients of a set of linear equations, as happens for multiple linear regression using standardized regressors.
Belsley, Kuh, and Welsch--who were among the first to point out and systematically exploit the relevance of the condition number in this context--have a nice explanation, which I will quote at length. They begin by giving a definition of
the spectral norm, denoted $||A||$ and defined as $$||A|| \equiv \sup_{||z||=1}||Az||.$$
Geometrically, it's the maximum amount by which $A$ will rescale the unit sphere: its maximum "stretch," if you will. They point out the obvious relations that $||A||$ therefore is the largest singular value of $A$ and that $||A^{-1}||$ is the reciprocal of the smallest singular value of $A$ (when $A$ is invertible). (I like to think of this as the maximum "squeezing" of $A$.) They then assert that $||A||$ actually is a norm, and add the (easily proven) facts
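To make this concrete, here is a quick numerical check with NumPy (a random matrix; the code and names are my own illustration, not from the reference):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Singular values come back in descending order.
sigma = np.linalg.svd(A, compute_uv=False)

# ||A|| (the spectral norm) is the largest singular value ...
print(np.isclose(np.linalg.norm(A, 2), sigma[0]))
# ... and ||A^{-1}|| is the reciprocal of the smallest.
print(np.isclose(np.linalg.norm(np.linalg.inv(A), 2), 1 / sigma[-1]))
```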
$$||Az|| \le ||A|| \cdot ||z|| \tag{4}$$
$$||AB|| \le ||A|| \cdot ||B|| \tag{5}$$
for all commensurate $A$ and $B$.
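Both facts are easy to verify numerically; a minimal sketch (random matrices, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
z = rng.standard_normal(4)

spec = lambda M: np.linalg.norm(M, 2)     # spectral norm of a matrix

print(np.linalg.norm(A @ z) <= spec(A) * np.linalg.norm(z))  # property (4)
print(spec(A @ B) <= spec(A) * spec(B))                      # property (5)
```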
These remarks are then applied:
We shall now see that the spectral norm is directly relevant to an analysis of the conditioning of a linear system of equations $Az = c$, with $A$ $n\times n$ and nonsingular, having solution $z=A^{-1}c$. We can ask how much the solution vector $z$ would change $(\delta z)$ if there were small changes or perturbations in the elements of $c$ or $A$, denoted $\delta c$ and $\delta A$. In the event that $A$ is fixed but $c$ changes by $\delta c$, we have $\delta z = A^{-1}\delta c$, or $$||\delta z|| \le ||A^{-1}|| \cdot ||\delta c||.$$ Further, applying property $(4)$ above to the equation system, we have $$||c|| \le ||A|| \cdot ||z||;$$ and from multiplying these last two expressions we obtain $$\frac{||\delta z||}{||z||} \le ||A|| \cdot ||A^{-1}|| \cdot \frac{||\delta c||}{||c||}.$$
That is, the magnitude $||A||\cdot ||A^{-1}||$ provides a bound for the relative change in the length of the solution vector $z$ that can result from a given relative change in the length of $c$. A similar result holds for perturbations in the elements of the matrix $A$. Here it can be shown that $$\frac{||\delta z||}{||z + \delta z||} \le ||A|| \cdot ||A^{-1}|| \cdot \frac{||\delta A||}{||A||}.$$
(The key step in this demonstration, which is left as an exercise, is to observe $\delta z = -A^{-1}(\delta A)(z + \delta z)$ and apply norms to both sides.)
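(To spell out the exercise--this reconstruction is mine, not the book's: since $(A+\delta A)(z+\delta z) = c$ and $Az = c$, subtracting the second equation from the expanded first gives $A\,\delta z + (\delta A)(z+\delta z) = 0$, whence $\delta z = -A^{-1}(\delta A)(z+\delta z)$. Applying property $(4)$ twice then gives $$||\delta z|| \le ||A^{-1}|| \cdot ||\delta A|| \cdot ||z+\delta z||,$$ and dividing by $||z+\delta z||$ while multiplying and dividing the right-hand side by $||A||$ produces the stated bound.)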
Because of its usefulness in this context, the magnitude $||A||\cdot ||A^{-1}||$ is defined to be the condition number of the nonsingular matrix $A$ ... .
(Based on the earlier characterizations, we may conceive of the condition number as being a kind of "aspect ratio" of $A$: the most it can stretch any vector times the most it can squeeze any vector. It would be directly related to the maximum eccentricity attained by any great circle on the unit sphere after being operated on by $A$.)
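In two dimensions the analogy is exact: $A$ maps the unit circle to an ellipse whose semi-axes are the singular values, so the aspect ratio of that ellipse is the condition number and its eccentricity is $\sqrt{1 - 1/\kappa^2}$. A small sketch (random $A$; names mine):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))

# The image of the unit circle under A is an ellipse with
# semi-axes sigma[0] >= sigma[1] (the singular values).
sigma = np.linalg.svd(A, compute_uv=False)

kappa = sigma[0] / sigma[1]                    # aspect ratio = condition number
ecc = np.sqrt(1 - (sigma[1] / sigma[0])**2)    # eccentricity of the image ellipse

print(np.isclose(kappa, np.linalg.cond(A, 2)))
print(np.isclose(ecc, np.sqrt(1 - 1 / kappa**2)))
```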
The condition number bounds how much the solution $z$ of a system of equations $Az=c$ can change, on a relative basis, when its components $A$ and $c$ are changed.
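Both bounds are easy to exhibit numerically. The following sketch (random data; the perturbation sizes and names are my own choices) solves a system, perturbs $c$ and then $A$, and checks each inequality:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
c = rng.standard_normal(n)
z = np.linalg.solve(A, c)
kappa = np.linalg.cond(A, 2)                 # ||A|| * ||A^{-1}|| in the spectral norm

# Perturb c: the relative change in z is bounded by kappa
# times the relative change in c.
delta_c = 1e-6 * rng.standard_normal(n)
delta_z = np.linalg.solve(A, c + delta_c) - z
print(np.linalg.norm(delta_z) / np.linalg.norm(z)
      <= kappa * np.linalg.norm(delta_c) / np.linalg.norm(c))

# Perturb A: same bound, but relative to the perturbed solution z + delta_z.
delta_A = 1e-6 * rng.standard_normal((n, n))
z_pert = np.linalg.solve(A + delta_A, c)     # equals z + delta_z
print(np.linalg.norm(z_pert - z) / np.linalg.norm(z_pert)
      <= kappa * np.linalg.norm(delta_A, 2) / np.linalg.norm(A, 2))
```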
However, these inequalities are not tight: for any given $A$, how closely the bounds reflect the actual changes depends on $A$ and on the particular perturbations $\delta A$ and $\delta c$. Condition numbers are assertions about worst cases. Thus, a matrix with condition number $9$ can be considered $70/9$ times better than one with condition number $70$, but that does not necessarily mean it will be precisely that much better at not propagating errors in any particular problem.
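A small illustration of how loose the bound can be: for a diagonal matrix with $\kappa = 10$, the actual amplification of a relative error in $c$ ranges from $1/\kappa$ to $\kappa$, depending on how $c$ and $\delta c$ line up with the singular directions (the vectors below were chosen by hand to hit the extremes):

```python
import numpy as np

A = np.diag([10.0, 1.0])                     # condition number kappa = 10
kappa = np.linalg.cond(A, 2)

def amplification(c, delta_c):
    # (relative change in z) / (relative change in c) for the system A z = c.
    z = np.linalg.solve(A, c)
    delta_z = np.linalg.solve(A, c + delta_c) - z
    return (np.linalg.norm(delta_z) / np.linalg.norm(z)) / (
        np.linalg.norm(delta_c) / np.linalg.norm(c))

eps = 1e-8
print(amplification(np.array([1.0, 0.0]), eps * np.array([0.0, 1.0])))  # ~10:  worst case, = kappa
print(amplification(np.array([0.0, 1.0]), eps * np.array([1.0, 0.0])))  # ~0.1: best case, = 1/kappa
print(amplification(np.array([1.0, 1.0]), eps * np.array([1.0, 1.0])))  # ~1:   typical, far below the bound
```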
Reference
Belsley, D. A., Kuh, E., & Welsch, R. E. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. Wiley. Section 3.2.