I have never worked with factor analysis, but your question can be asked about PCA as well, and as @ttnphns commented above, the choice between covariance and correlation matrices is exactly the same there.
It is clear (and already mentioned here) that if the original variables are measured in different and incomparable units, the only reasonable choice is the correlation matrix. If the units are the same and the original variances are very similar, then it does not matter which matrix you use, because the two are nearly proportional. So the real question is what to do when all variables measure the same quantity in the same units but have very different variances.
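To make the "nearly proportional" point concrete, here is a small numpy sketch (all data here is made up): when the variances are comparable, the correlation matrix is essentially the covariance matrix divided by a common variance, so the leading principal directions nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three variables in the same units with similar variances
Z = rng.normal(size=(500, 3))
A = np.array([[1.0, 0.3, 0.1],
              [0.0, 1.1, 0.2],
              [0.0, 0.0, 0.9]])
X = Z @ A

C = np.cov(X, rowvar=False)        # covariance matrix
R = np.corrcoef(X, rowvar=False)   # correlation matrix

# With similar variances, R is roughly C divided by a common variance,
# so the leading eigenvectors (principal directions) almost coincide
pc1_cov  = np.linalg.eigh(C)[1][:, -1]
pc1_corr = np.linalg.eigh(R)[1][:, -1]
print(np.abs(pc1_cov @ pc1_corr))  # close to 1
```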
Let me give you an example (the one I am working with on a daily basis) where the covariance matrix makes more sense. Each variable is the activity of one neuron in the brain, measured at different points in time and perhaps in different experimental conditions. Many neurons are recorded simultaneously, so a dataset can encompass e.g. 1000 neurons. PCA can be used to perform dimensionality reduction.
Each variable is a firing rate (number of spikes per second). But some neurons fire more and some fire less; some change their firing rate a lot and some hardly at all. So individual variances can be very different (we are talking several orders of magnitude). These differences are clearly "important": a neuron that fires a lot (and changes its firing a lot) is presumably more involved in the task in question than a neuron that almost does not fire at all. The variables are arguably not "equal", and so it makes sense to use the covariance matrix directly.
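Here is a toy numpy illustration of how the two choices treat a strongly modulated neuron versus a nearly silent one (the numbers are made up, not real recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # time points

# Hypothetical firing rates: one strongly modulated neuron, one nearly silent one
shared_signal = rng.normal(size=n)
active_neuron = 20.0 * shared_signal + 2.0 * rng.normal(size=n)   # variance ~ 400
quiet_neuron  = 0.01 * shared_signal + 0.01 * rng.normal(size=n)  # variance ~ 0.0002
X = np.column_stack([active_neuron, quiet_neuron])

# Covariance-based PC1 is dominated by the active neuron,
# reflecting the view that its large variance is meaningful
C = np.cov(X, rowvar=False)
print("cov  PC1 loadings:", np.linalg.eigh(C)[1][:, -1])   # ~ [1, 0.0005] up to sign

# Correlation-based PC1 weights both neurons equally,
# regardless of how much they actually fire
R = np.corrcoef(X, rowvar=False)
print("corr PC1 loadings:", np.linalg.eigh(R)[1][:, -1])   # ~ [0.71, 0.71] up to sign
```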
A related issue with the correlation matrix is that if a neuron is almost silent and has a tiny variance, standardizing it means dividing by almost zero, which greatly amplifies what is probably just noise. Getting rid of such neurons (in order to work with the correlation matrix) would be another preprocessing step that would only lead to further complications.
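A quick sketch of that amplification (again with made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical near-silent neuron: tiny firing-rate fluctuations that are mostly noise
quiet = 0.001 * rng.normal(size=1000)

# Standardizing (which the correlation matrix implicitly does) divides by a
# near-zero standard deviation and blows the noise up to unit variance,
# putting it on the same footing as a strongly modulated neuron
standardized = (quiet - quiet.mean()) / quiet.std()
print(quiet.std(), standardized.std())   # ~0.001 vs 1.0
```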
Update. Related discussion: PCA on correlation or covariance?