Power analysis is typically undertaken before an experiment is conducted: given a pre-specified effect size, the researcher identifies a sample size that sufficiently controls Type II error. Two things are immediately different in observational designs: (1) you can't change the sample size, and (2) the effects of interest are likely to have non-zero correlations with one another (whereas in experiments they should be zero by design).
So even with your very large individual-level sample, I would not take for granted the power of tests for the direct effects of the higher levels, especially since ecological correlations between aggregate units tend to be very high. I also don't know the distribution of the outcome; if readmissions are rare, one needs (much) larger sample sizes than in typical experiments with continuous outcomes. Cross-level interactions may have little power as well.
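To give a sense of how quickly required sample sizes grow as a binary outcome becomes rare, here is the standard normal-approximation sample-size formula for comparing two proportions (the specific rates below are hypothetical, chosen only for illustration):

```python
import math

def n_per_arm(p1, p2):
    """Approximate sample size per group for a two-sided two-proportion
    z-test at alpha = 0.05 with 80% power (normal approximation).

    p1, p2: outcome probabilities in the two groups being compared.
    """
    z_a = 1.959964  # z quantile for two-sided alpha = 0.05
    z_b = 0.841621  # z quantile for power = 0.80
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)
```

Halving a common event rate from 20% to 10% needs under 200 subjects per group, but halving a rare rate from 2% to 1% needs over 2,300 per group, even though both are "halvings" of the risk.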
As far as motivation for conducting power analysis in observational settings (to be clear, power analysis before the study is undertaken): one may wish beforehand to check whether the study is sufficiently powered, or whether more data should be collected first. One may also wish to determine whether certain exploratory analyses (such as identifying interaction effects) are worth undertaking.
I admit this conflicts with the notion that one should have an identified model beforehand (e.g., if an interaction effect is needed for other effects to be properly identified, it must be included regardless of its power). But my response would be that such interaction effects are very rarely specified at the outset as necessary components of the model; they are typically added in a model-building strategy, such as the one advocated by Raudenbush and Bryk: starting with empty models and iteratively adding higher-level variance components and interactions (estimation problems frequently occur with complicated models, especially if a higher-level variance component is zero).
For estimating power, it is unlikely any standard equation will capture the complexity of the situation, so one frequently turns to simulation. You generate a fake set of data that approximates what the true data will look like, generate an outcome distributed according to pre-specified effect sizes, estimate the model, and record whether each effect is detected; the rejection rate over many replications is the estimated power. Even if your client can't put a number on a specific effect size, you can graph the expected power of a test over a range of effect sizes.
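The simulation loop above can be sketched as follows. This is a deliberately minimal stand-in: it tests a group-level effect on group means with a large-sample z-test, whereas in practice you would fit the full multilevel model at each replication (e.g., with lme4 in R or statsmodels' MixedLM); all the numbers (ICC, group counts, alpha) are placeholder assumptions.

```python
import random
import statistics

def simulate_power(effect_size, n_groups=50, n_per_group=50,
                   icc=0.15, n_sims=200, seed=0):
    """Monte Carlo power for a group-level binary treatment effect
    in a two-level model.

    Crude sketch: the level-2 effect is tested on group means with a
    two-sample z-test (1.96 used as a large-sample critical value for
    two-sided alpha = 0.05). Total outcome variance is fixed at 1 and
    split into between-group (icc) and within-group (1 - icc) parts.
    """
    rng = random.Random(seed)
    z_crit = 1.96
    tau = icc ** 0.5            # between-group SD
    sigma = (1 - icc) ** 0.5    # within-group SD
    rejections = 0
    for _ in range(n_sims):
        treated, control = [], []
        for j in range(n_groups):
            w = 1 if j < n_groups // 2 else 0   # group-level treatment
            u_j = rng.gauss(0, tau)             # group random effect
            y_bar = statistics.fmean(
                effect_size * w + u_j + rng.gauss(0, sigma)
                for _ in range(n_per_group)
            )
            (treated if w else control).append(y_bar)
        diff = statistics.fmean(treated) - statistics.fmean(control)
        se = (statistics.variance(treated) / len(treated)
              + statistics.variance(control) / len(control)) ** 0.5
        if abs(diff / se) > z_crit:
            rejections += 1
    return rejections / n_sims
```

Calling `simulate_power` over a grid of effect sizes gives you the points for the power curve mentioned above; at an effect of zero the rejection rate should hover near the nominal alpha.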
As Michael already stated, it will likely be a chore to specify the variability in the data (i.e., the variances and covariances of the independent variables). It will likely take custom coding to simulate the nested structure of the data while at least approximately constraining those variances and covariances. A quick review of the multilevel modelling books on my shelf suggests most power analyses have been considered in the context of multilevel experiments, so you may be foraying into new territory.
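One simple way to impose such a constraint is the standard Cholesky-style trick: build a predictor as a weighted sum of another variable and fresh noise so that the pair hits a target correlation. The sketch below induces a cross-level correlation between a group-level covariate and an individual-level one; the group sizes, standard-normal scales, and target rho are placeholders you would replace with quantities estimated from the real data.

```python
import math
import random

def nested_predictors(n_groups=200, n_per_group=30, rho=0.4, seed=1):
    """Simulate nested predictors with an approximate target correlation.

    Generates a group-level covariate z_j and an individual covariate
    x_ij built as x = rho * z + sqrt(1 - rho^2) * noise, so that x and
    z correlate at roughly rho across individuals while x varies both
    within and between groups. Returns (group_id, z_j, x_ij) rows.
    """
    rng = random.Random(seed)
    rows = []
    for j in range(n_groups):
        z_j = rng.gauss(0, 1)  # group-level predictor, standard normal
        for _ in range(n_per_group):
            x_ij = rho * z_j + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
            rows.append((j, z_j, x_ij))
    return rows

def corr(pairs):
    """Plain sample correlation, to check the target is (roughly) hit."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs))
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs))
    return sxy / (sx * sy)
```

With more than two predictors, the same idea generalizes to multiplying independent draws by the Cholesky factor of the target covariance matrix, done separately at each level of nesting.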
I will point to some other literature in the hope that others have provided more directly actionable advice.
- Andrew Gelman and Jennifer Hill, in their book *Data Analysis Using Regression and Multilevel/Hierarchical Models*, have a chapter on power analysis and give a brief example in R code. They also give a few references on the topic at the end of the chapter.
- Tom Snijders' webpage has a Windows program, PINT, that calculates power for fixed effects in two-level models. He also appears to have publications of potential interest. I found this via the chapter on power analysis in Joop Hox's multilevel book.
- The G*Power 3 software doesn't appear to have any modules for multi-level analysis, but its authors do provide a wealth of references to other literature on power analysis.
As a note on post hoc power analysis, I agree it is typically considered inappropriate (see this response by Freya Harrison on our site). I admit, though, that I recently conducted one in a publication at the request of a reviewer. To be fair, in retrospect I wish I had conducted an a priori power analysis, as the question of power is interesting in and of itself, even though it was an observational, quasi-experimental study.