4

I've used factor analysis for questionnaires and such before, but I have a new research question now and it doesn't seem like anything similar has been done. I've had participants rate 8 news articles on a variety of measures. However, each measure is provided by the participant themselves based on meaningful differences that they see between the articles. This is a repertory grid, where the elements (columns, variables) are different articles, and each participant provides a number of constructs (rows, cases) which they see as important (such as 'objectivity', 'persuasiveness', 'accessibility' and so on). Each article is rated on each construct which that participant provides, so my data looks something like:

Subject    Construct         Art1    Art2    Art3
1          Objectivity       5       3       8
1          Persuasiveness    3       8       7
1          Accessibility     7       6       3
2          Formality         3       4       6
2          Factuality        8       2       6
2          Use of Sources    7       6       9

(and so on, with 8 articles and around 8 constructs per subject)

My aim is to find the latent variables that best describe the variance between the different constructs provided by my participants. So far, I've been under the impression that factor analysis is appropriate for this. With 8 articles and ~160 cases, I should get a couple of factors and their loadings on each article, as well as factor scores for each construct that I can use for interpretation. But is this crazy? I know this isn't exactly what the method was designed for, but I'm not aware of any theoretical reason to be wary here. My initial analyses are throwing up low communalities and failures to converge, so I'm starting to suspect I may be using the wrong method (or that I don't have enough data, though most of the commonly used sample-size criteria put me in the clear).
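For concreteness, the analysis I've been attempting looks roughly like this. This is only a sketch: the data are random stand-ins for my pooled construct × article matrix, and scikit-learn's `FactorAnalysis` is just one possible FA routine, not the only option.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Stand-in data: ~160 constructs (rows, pooled across participants)
# rated on 8 articles (columns), as in the table above.
X = rng.integers(1, 10, size=(160, 8)).astype(float)

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)    # factor scores, one row per construct
loadings = fa.components_.T     # 8 articles x 2 factors

print(loadings.shape)  # (8, 2)
print(scores.shape)    # (160, 2)
```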

Should I be using something else, like cluster analysis or IRT models?

kjetil b halvorsen
Adb

2 Answers


As @jeremymiles pointed out, using FA or its extension to three-mode FA is only sensible when both the elements and the constructs are supplied by the researcher, which sacrifices the idiographic nature of the repgrid.

I would recommend an alternative road that preserves the individual data while still allowing you to analyze aggregates and search for underlying latent factors: Procrustes analysis. The method is commonly applied in sensory research, where free choice profiling (FCP) yields a data structure with idiographic attributes across a fixed set of objects, just like a repgrid. The PCP community (with a few exceptions) has been quite reluctant to adopt this method.

There is a non-technical introduction to grids and Procrustes analysis (Grice & Assad, 2009), and the method for analyzing the data and identifying underlying factors is described for FCP data in many places (e.g. Arnold & Williams, 1986; Williams & Langron, 1984). For grids, Procrustes analysis is implemented in James Grice's Idiogrid software, which is freely available for Windows. An example of the output can be found here.

  • Arnold, G. M., & Williams, A. A. (1986). The use of generalised procrustes techniques in sensory analysis. In J. R. Piggott (Ed.), Statistical procedures in food research (pp. 233–254). London, New York: Elsevier.
  • Grice, J. W., & Assad, K. K. (2009). Generalized procrustes analysis: A tool for exploring aggregates and persons. Applied Multivariate Research, 13(1), 93–112.
  • Williams, A. A., & Langron, S. P. (1984). The use of free-choice profiling for the evaluation of commercial ports. Journal of the Science of Food and Agriculture, 35(5), 558–568. doi:10.1002/jsfa.2740350513
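To make the idea concrete, here is a minimal generalised-Procrustes sketch in Python. This is not what Idiogrid does internally: SciPy's `orthogonal_procrustes` only provides the pairwise rotation step, and the simulated grids, zero-padding, and iteration scheme are my own simplifications of the GPA described in the references above.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
# Stand-in grids: one (articles x constructs) matrix per participant;
# construct counts differ across people, as in free choice profiling.
grids = [rng.normal(size=(8, k)) for k in (7, 8, 9)]

# Centre each configuration and zero-pad to a common width so the
# configurations live in the same space.
width = max(g.shape[1] for g in grids)
configs = []
for g in grids:
    g = g - g.mean(axis=0)
    padded = np.zeros((8, width))
    padded[:, :g.shape[1]] = g
    configs.append(padded)

# Simple generalised Procrustes loop: repeatedly rotate each
# configuration toward the current consensus (mean) configuration.
for _ in range(10):
    consensus = np.mean(configs, axis=0)
    for i, c in enumerate(configs):
        R, _ = orthogonal_procrustes(c, consensus)
        configs[i] = c @ R

consensus = np.mean(configs, axis=0)
print(consensus.shape)  # (8, 9): the articles in a shared space
```

The consensus configuration can then be examined (e.g. with PCA) for the latent dimensions shared across participants, while each rotated individual grid is still available for idiographic analysis.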
Mark Heckmann

Principal components analysis (PCA; which is related to factor analysis, and sometimes found under the same menus in stats packages) is used to analyze repertory grids, but it tends to be applied to one person's grid at a time. If the participants generated the constructs, I don't think you can combine grids, because the variables don't mean the same thing from one person to the next.

Hierarchical cluster analysis (HCA) is also used, but again on an individual grid, not on multiple combined grids.

This article http://onlinelibrary.wiley.com/doi/10.1348/014466501163652/pdf (which I don't think is behind a paywall) describes the use of PCA and HCA in the analysis of a single person's repertory grid before and after therapy.
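A minimal sketch of that per-person workflow, assuming one participant's grid is stored as a constructs × articles NumPy array (the data here are simulated):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# One participant's grid: 8 constructs (rows) x 8 articles (columns).
grid = rng.integers(1, 10, size=(8, 8)).astype(float)

# PCA of the single grid: components summarise how this person's
# constructs pattern across the articles.
pca = PCA(n_components=2)
construct_scores = pca.fit_transform(grid)  # (8 constructs, 2)
print(pca.explained_variance_ratio_)

# Hierarchical cluster analysis of the constructs (Ward linkage),
# cut into at most 3 clusters for interpretation.
Z = linkage(grid, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # one cluster id per construct
```

You would repeat this per participant and then look across the per-grid solutions for recurring themes, rather than pooling the raw grids.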

With repertory grids you can give your respondents more or less 'starting' information. You can have them generate the elements, or you can supply them; here, you supplied the elements (the articles). Often you ask respondents to generate the elements themselves so that they are personally meaningful (e.g. a person you liked; a person who rejected you). Likewise, you can supply the constructs or ask each respondent to provide their own; in your case, you asked the respondents to provide the constructs. You can also vary the number of constructs generated: some respondents want to use more, some fewer (I don't think you restricted this).

As you add more restrictions, you increase the comparability of the grids, but potentially decrease their usefulness.

You can only really combine and compare grids where you provide both the elements and the constructs, but most people would argue that at that point you're not doing rep grids any more. Rep grids are usually considered a structured qualitative research tool: you do the first stage of analysis on each grid, using something like PCA or HCA, and then look at the results for emerging themes.

Jeremy Miles