Approach 1. Compare the whole series of 292 cluster partitions directly for internal quality (see pt 3 here).
One way to compare cluster solutions is to apply some internal validity clustering criterion (of which there are plenty). We typically do that to select the "best" number of clusters. However, those same criteria can be used to choose the "best" partitioning method or the best-partitioned dataset of objects.
With respect to the latter, let me cite my document describing some such clustering criteria (see collection "clustering criterions" on my web page):
Usage: comparing not identical sets of objects. This is possible. One should understand that for a clustering criterion the objects "i" in the set are just anonymous rows. Therefore it is correct to compare, by the criterion value, partitions P1 and P2 which partly or completely consist of not the same objects. In doing so, k may be the same or different in the partitions. However, if P1 and P2 consist of different numbers of objects, one may use a criterion only if it is insensitive to the number of objects N.
In your case you have 73 sets of objects - the 73 correlation matrices. (You may simply ignore that the rows and columns in all the matrices represent the same brain parts: since the individuals are different, these are 73 different sets of some objects.) It is important, and convenient, that the "calibre" of the 73 data sets is the same: they contain correlation values, which are directly comparable across the sets. The size of the sets (matrices) is also the same, so you can use any clustering criterion, not just one "insensitive to N". As the citation says, k (the number of clusters) may differ.
So, select one or several clustering criteria to use (it could be, say, C-index or Dunn's; but given that Pearson r is convertible to Euclidean distance, you could use virtually any criterion you find worthwhile), and compare the series of 73*4 (4 different k) = 292 partitions for their internal validity.
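Here is a minimal sketch of that comparison in Python, assuming scipy and scikit-learn are available and the data are held as a list of numpy correlation matrices. The names matrices and partition_and_score are placeholders, and the silhouette index is used here only as a stand-in for whatever internal criterion you prefer, since the r-to-distance conversion makes most of them applicable.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

def r_to_dist(R):
    # Standard conversion of Pearson r to a Euclidean-compatible distance: d = sqrt(2*(1 - r))
    return np.sqrt(np.maximum(0.0, 2.0 * (1.0 - R)))

def partition_and_score(R, ks=(2, 3, 4, 5)):
    """Cluster one subject's items for each k and return an internal-validity value per k."""
    D = r_to_dist(R)
    Z = linkage(squareform(D, checks=False), method='average')  # any linkage could be substituted
    scores = {}
    for k in ks:
        labels = fcluster(Z, t=k, criterion='maxclust')
        # Silhouette on the precomputed distances; swap in C-index, Dunn's, etc. as desired
        scores[k] = silhouette_score(D, labels, metric='precomputed')
    return scores

# matrices: hypothetical list of the 73 correlation matrices, one per subject
# all_scores = [partition_and_score(R) for R in matrices]  # 73 dicts of 4 values = 292 partitions
```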
Observe the results (plot the criterion values, produce summary statistics). Which partitions tend to be the "better" ones? Maybe most of the 73 three-cluster solutions prevail among the best? Then you can say k=3 should be chosen for all 73 subjects. Or maybe there are a few subjects whose cluster results suggest a different k, or no good k at all (subjects with no clear-cut cluster structure)?
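Continuing the sketch above, a few lines with pandas can tabulate which k tends to win. This assumes the all_scores list from the previous snippet and a criterion where higher is better; for a criterion such as C-index, where lower is better, use idxmin instead.

```python
import pandas as pd

# all_scores: list of {k: criterion value} dicts from the previous sketch (hypothetical name)
df = pd.DataFrame(all_scores)      # rows = subjects, columns = candidate k
best_k = df.idxmax(axis=1)         # k with the highest criterion value for each subject
print(best_k.value_counts())       # how often each k "wins" across the 73 subjects
print(df.describe())               # summary statistics of the criterion per k
df.plot(kind='box')                # distribution of criterion values per k (needs matplotlib)
```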
Approach 2. Cluster the 73 matrices by their similarity first.
Consider each subject (correlation matrix) as an object in a clustering task. Since all the matrices have the same size, the same measure (correlation) and the same items (the same brain parts), you may compute a similarity/distance measure between them as datasets and cluster them. Those matrices that prove quite similar, as if they belonged to "one subject", you might average into a single matrix. Consequently you end up with a smaller number of matrices, i.e. subject types (say, 20), within each of which you then cluster the brain parts. The best number of clusters k, of course, need not be the same for those 20 datasets, since they are knowingly quite different structures. This is a somewhat blunt, but still potentially helpful, approach.
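A possible sketch of this grouping step, again in Python with hypothetical names (matrices for the list of 73 correlation matrices, n_types for the assumed number of subject types): it vectorizes the upper triangle of each matrix, clusters the subjects hierarchically, and averages the matrices within each subject type.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def upper_triangle(R):
    # Flatten the off-diagonal upper triangle of a correlation matrix into a vector
    return R[np.triu_indices_from(R, k=1)]

def group_subjects(matrices, n_types=20):
    X = np.vstack([upper_triangle(R) for R in matrices])   # subjects x (pairs of brain parts)
    Z = linkage(pdist(X, metric='euclidean'), method='ward')
    types = fcluster(Z, t=n_types, criterion='maxclust')   # assign each subject to a type
    # Average the correlation matrices within each subject type
    averaged = {t: np.mean([R for R, g in zip(matrices, types) if g == t], axis=0)
                for t in np.unique(types)}
    return types, averaged

# types, averaged = group_subjects(matrices)  # then cluster brain parts within each averaged matrix
```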