I have found two measures of effect size in the literature for Cochran's $Q$ test of $b$ blocks (subjects) and $k$ treatments (groups):
Serlin, Carr and Marascuillo's (1982) maximum-corrected measure of effect size ($\eta^{2}_{Q}$), which is given by:
$$\eta^{2}_{Q} = \frac{Q}{b(k-1)},$$
where $0\le\eta^{2}_{Q}\le 1$.
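As a concrete illustration, here is a minimal Python sketch that computes $Q$ (using the standard formula from row and column totals) and then $\eta^{2}_{Q}$. The helper name `cochran_q` and the toy data matrix `X` are my own, not from either paper.

```python
def cochran_q(X):
    """Cochran's Q for a b x k matrix of 0/1 outcomes
    (rows = blocks/subjects, columns = treatments)."""
    b, k = len(X), len(X[0])
    col = [sum(row[i] for row in X) for i in range(k)]  # treatment totals
    row = [sum(r) for r in X]                           # block totals
    N = sum(row)
    num = (k - 1) * (k * sum(c * c for c in col) - N * N)
    den = k * N - sum(r * r for r in row)
    return num / den

# Hypothetical toy data: b = 4 blocks, k = 3 treatments
X = [[1, 1, 0],
     [1, 0, 0],
     [1, 1, 1],
     [1, 0, 0]]

b, k = len(X), len(X[0])
Q = cochran_q(X)
eta2_Q = Q / (b * (k - 1))  # maximum-corrected effect size
print(Q, eta2_Q)            # ~4.667 and ~0.583
```

Note that blocks with all successes or all failures (like the third row) contribute nothing to either the numerator or the denominator of $Q$.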
Berry, Johnston and Mielke (2007) offer a chance-corrected measure of effect size ($\mathcal{R}$), which is given by:
$$\mathcal{R} = 1 - \frac{\delta}{\mu_{\delta}},$$
where:
$$\delta = \left[k {b\choose 2}\right]^{-1}\sum_{i=1}^{k}\sum_{j=1}^{b-1}\sum_{l=j+1}^{b}{\left|x_{ji}-x_{li}\right|}$$
for observations $x$ in data matrix $\mathbf{X}$,
$$\mu_{\delta} = \frac{2}{b\left(b-1\right)}\left[\left(\sum_{i=1}^{b}{p_{i}}\right)\left(b-\sum_{i=1}^{b}{p_{i}}\right)-\sum_{i=1}^{b}{p_{i}\left(1-p_{i}\right)}\right],$$
and $p_{i}$ is the proportion of successes across all treatments in the $i^{\text{th}}$ block.
Update: From personal correspondence with Berry, the published Equation [7] contains a typographical error, and the $2/[k(k-1)]$ term in the equation for $\mu_{\delta}$ should be replaced with $2/[b(b-1)]$ as I have represented above.
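The same toy example can be extended to $\mathcal{R}$. The sketch below implements $\delta$, $\mu_{\delta}$ (with the corrected $2/[b(b-1)]$ term), and $\mathcal{R}$ directly from the definitions above; the helper name `berry_R` and the data matrix `X` are my own illustration.

```python
from itertools import combinations
from math import comb

def berry_R(X):
    """Chance-corrected effect size R for a b x k matrix of 0/1
    outcomes (rows = blocks/subjects, columns = treatments)."""
    b, k = len(X), len(X[0])
    # delta: mean absolute difference over all block pairs, within each treatment
    delta = sum(abs(X[j][i] - X[l][i])
                for i in range(k)
                for j, l in combinations(range(b), 2)) / (k * comb(b, 2))
    # p_i: proportion of successes across treatments in block i
    p = [sum(row) / k for row in X]
    s = sum(p)
    mu_delta = (2 / (b * (b - 1))) * (s * (b - s) - sum(pi * (1 - pi) for pi in p))
    return 1 - delta / mu_delta

# Same hypothetical toy data: b = 4 blocks, k = 3 treatments
X = [[1, 1, 0],
     [1, 0, 0],
     [1, 1, 1],
     [1, 0, 0]]
print(berry_R(X))  # ~0.276
```

For these data $\mathcal{R} \approx 0.28$, noticeably smaller than $\eta^{2}_{Q} \approx 0.58$ on the same matrix, which illustrates how differently the two measures scale the observed disagreement.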
Berry and colleagues also critique $\eta^{2}_{Q}$ relative to $\mathcal{R}$, writing (I substitute the symbols $b$ and $k$ for the symbols $n$ and $c$ appearing in their paper):
Chance-corrected measures of effect size, such as $\mathcal{R}$, possess distinct advantages in interpretation over maximum-corrected measures of effect size,
such as $\eta^{2}_{Q}$. The problem lies in the manner in which $\eta^{2}_{Q}$ is maximized. The denominator of $\eta^{2}_{Q}$, $Q_{\max}=b(k-1)$, standardizes the observed value of $Q$ for the sample size and the number of treatments. Unfortunately, $b(k-1)$ does not standardize $Q$ for the data on which $Q$ is based but rather standardizes $Q$ on another unobserved hypothetical set of data.
A little farther, they sell the merits of $\mathcal{R}$ over those of $\eta^{2}_{Q}$:
$\mathcal{R}$ is completely data dependent, whereas $\eta^{2}_{Q}$ relies on an unobserved, idealized data set for its maximum value. Thus, $\mathcal{R}$ can achieve an effect size of unity for the observed data, while this is usually impossible for
$\eta^{2}_{Q}$. Second, $\mathcal{R}$ is a chance-corrected measure of effect size. Furthermore, $\mathcal{R}$ is zero under chance conditions, unity when agreement among the $b$ subjects is perfect, and negative under conditions of disagreement. Therefore, $\mathcal{R}$ has a clear interpretation corresponding to Cohen's coefficient of agreement (1960) and other chance-corrected measures that is familiar to most researchers. On the other hand, $\eta^{2}_{Q}$ possesses no meaningful interpretation except for values of 0 and 1. Although [$\eta^{2}_{Q}$] takes the form of a correlation ratio, it cannot be interpreted as a correlation coefficient unless the marginal frequency totals are identical.
I have implemented both of these effect size measures in Stata in the cochranq package, which can be accessed within Stata by typing `net describe cochranq, from(https://alexisdinno.com/stata)`. The nonpar package for R on CRAN contains the cochrans.q program, which performs the classical asymptotic $Q$ test, but does not offer the more recent and precise non-asymptotic test statistics, effect size calculations, or adjustments for multiple comparisons.
References
Berry, K. J., Johnston, J. E., and Mielke, P. W., Jr. (2007). An alternative measure of effect size for Cochran's $Q$ test for related proportions. Perceptual and Motor Skills, 104:1236–1242.
Serlin, R. C., Carr, J., and Marascuillo, L. A. (1982). A measure of association for selected nonparametric procedures. Psychological Bulletin, 92:786–790.