I've used factor analysis for questionnaires and the like before, but I now have a new research question, and nothing similar seems to have been done. I've had participants rate 8 news articles on a variety of measures, but each measure is supplied by the participant themselves, based on the meaningful differences they see between the articles. This is a repertory grid: the elements (columns, variables) are the articles, and each participant provides a number of constructs (rows, cases) that they consider important (such as 'objectivity', 'persuasiveness', 'accessibility', and so on). Each article is rated on every construct that participant provides, so my data looks something like this:
Subject  Construct       Art1  Art2  Art3
1        Objectivity     5     3     8
1        Persuasiveness  3     8     7
1        Accessibility   7     6     3
2        Formality       3     4     6
2        Factuality      8     2     6
2        Use of Sources  7     6     9
(and so on, with 8 articles and around 8 constructs per subject)
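In case it helps to see the layout concretely, here is a minimal sketch of the data in long format (Python/pandas; the variable name `ratings` and the toy values are just illustrative, matching the excerpt above):

```python
import pandas as pd

# Long format: one row per (subject, construct), one column per article.
# Values are the illustrative ratings from the excerpt above.
ratings = pd.DataFrame(
    [
        (1, "Objectivity",    5, 3, 8),
        (1, "Persuasiveness", 3, 8, 7),
        (1, "Accessibility",  7, 6, 3),
        (2, "Formality",      3, 4, 6),
        (2, "Factuality",     8, 2, 6),
        (2, "Use of Sources", 7, 6, 9),
    ],
    columns=["Subject", "Construct", "Art1", "Art2", "Art3"],
)

# The full data set has 8 article columns (Art1..Art8) and roughly 160
# construct rows (about 8 constructs from each of ~20 subjects).
print(ratings.shape)  # (6, 5) for this toy excerpt
```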
My aim is to find the latent variables that best describe the variance across the constructs my participants provided. So far I've been under the impression that factor analysis is appropriate for this: with 8 articles and ~160 cases, I should get a couple of factors with loadings on each article, plus factor scores for each construct that I can use for interpretation. But is this crazy? I know this isn't exactly what the method was designed for, but I'm not aware of any theoretical reason to be wary here. My initial analyses are throwing up low communalities and failures to converge, so I'm starting to suspect I'm using the wrong method (or that I don't have enough data, although most of the commonly used criteria put me in the clear).
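To make the setup concrete, this is roughly what my analysis looks like (a sketch using scikit-learn's `FactorAnalysis` rather than my actual software, with random toy data standing in for the real ratings; constructs are treated as cases and articles as variables, as described above):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Toy stand-in for the real data: ~160 construct rows x 8 article columns,
# ratings on a 1-9 scale. The real analysis would use the actual grid data.
X = rng.integers(1, 10, size=(160, 8)).astype(float)

# Articles as variables, constructs as cases: extract 2 factors.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)   # factor scores: one per construct (row)
loadings = fa.components_.T    # loadings: one per article (column)

print(scores.shape)    # (160, 2): factor scores for each construct
print(loadings.shape)  # (8, 2): loadings of each article on each factor
```

This is the orientation I described: loadings come out per article, and scores per construct, which is what I then try to interpret.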
Should I be using something else, like cluster analysis or IRT models?