First, a clarification: in a traditional SEM, indicators are specified as outcomes of a latent variable, not causes (i.e., the causal arrow points from the latent variable to its indicators, not the other way around).
I'm away from my office, and therefore my SEM texts, but I've written about SEM sample-size needs in a related (though not identical) post here. In a nutshell, the simulation research cited by Little (2013) suggests that these observations-to-variables ratio guidelines perform quite poorly: sometimes you can get away with as few as 50-100 observations, and other times you will need many hundreds.
One way to think of power in SEM is to strive for enough observations that you can be confident you have relatively precise estimates of the variances and covariances of your observed variables. After all, SEM is a means of representing these variances and covariances more parsimoniously.
All else being equal, model complexity is probably your main concern. You never articulate your model (maybe edit your post to include it?), but if you're just modelling a few predictors of one or two latent variables, your sample-size needs might be more modest. The more latent variables you estimate, however, and the more indicators each has, the larger your sample-size needs will be. If you have a complex model, you might want to consider parceling indicators to simplify the measurement model (Little, Cunningham, Shahar, & Widaman, 2002).
AdamO's suggestion is a good one: if you know the model you want to evaluate, you could run a few Monte Carlo power simulations, varying the sample size to see approximately how many observations you need. This is pretty straightforward to do in MPlus, and the simsem package for R is a nice alternative, especially if you plan on doing your analyses with the lavaan package.
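To make the general Monte Carlo logic concrete (this is my own illustration, not the simsem or MPlus API), here is a minimal Python sketch. It uses an ordinary regression of one observed variable on another as a stand-in for a full SEM, and the effect size, sample sizes, and replication count are all hypothetical placeholders; with simsem you would instead generate and fit your actual lavaan model at each replication:

```python
import numpy as np
from scipy import stats

def mc_power(n, beta=0.3, n_reps=500, alpha=0.05, seed=42):
    """Estimate power to detect a slope of size `beta` at sample
    size `n`: simulate data from the population model, fit, and
    count the proportion of replications with p < alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        x = rng.standard_normal(n)
        y = beta * x + rng.standard_normal(n)  # population model
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / n_reps

# Vary n to see roughly where power crosses the conventional .80 mark
for n in (30, 60, 120):
    print(n, mc_power(n))
```

The same loop structure applies to any parameter of interest; with a real SEM you would simply swap the regression for your model-fitting call and track the parameter's p-value (or a fit criterion) across replications.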
Two additional pieces of information would be helpful in tailoring this answer to your needs:
- What, specifically, are you concerned about having adequate power for? To test the significance of a particular model parameter (e.g., a latent correlation or regression path)? To test a particular model constraint (e.g., when examining group invariance)? To appropriately reject poor-fitting models? Your sample-size needs might vary a bit depending on which is your primary concern.
- What is/are the response scale(s) for your ordinal variables? Rhemtulla, Brosseau-Liard, & Savalei (2012) suggest that variables with four or fewer response options are best modelled with categorical estimators, whereas anything with more than four options is close enough to "continu-ish" to estimate using robust maximum-likelihood estimators (see here for a related post and answer). However, for what it's worth, I tend to strive for bigger sample sizes when using a categorical estimator (I don't know whether simulation research has been done on this topic, or supports my tendency).
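To see why coarse response scales matter, here is a small Python sketch (my own illustration, not from Rhemtulla et al.) showing how treating a four-category ordinal variable as continuous attenuates the observed correlation relative to the underlying continuous responses; the true correlation and cutpoints are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.5

# Bivariate normal "latent" responses with true correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Discretize each variable into 4 ordered categories (hypothetical cutpoints)
cuts = [-1.0, 0.0, 1.0]
x4 = np.digitize(x, cuts)
y4 = np.digitize(y, cuts)

r_latent = np.corrcoef(x, y)[0, 1]     # close to the true rho of 0.5
r_ordinal = np.corrcoef(x4, y4)[0, 1]  # noticeably smaller (attenuated)
print(r_latent, r_ordinal)
```

Categorical estimators work from the polychoric correlations of the latent responses rather than the attenuated Pearson correlations of the category codes, which is why they are preferred when response options are few.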
References
Little, T. D. (2013). Longitudinal structural equation modeling. New York, NY: Guilford Press.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151-173.
Rhemtulla, M., Brosseau-Liard, P. E., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17, 354-373.