Let's take the example of a two-group comparison conducted with the independent samples t-test. The test statistic is given by $$t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},$$ where $\bar{x}_1$ and $\bar{x}_2$ are the means of the two groups, $s_p$ is the pooled standard deviation, and $n_1$ and $n_2$ are the group sizes.
The standardized mean difference (Cohen's d) is given by $$d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}.$$ Therefore, one can compute $d$ from $t$ with $$d = t \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}.$$ One gets the exact same value whether one computes $d$ directly from the means and SDs or by converting $t$ to $d$, so no information or precision is lost when computing $d$ in this manner.
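To make this concrete, here is a minimal Python sketch that checks the equivalence numerically. The group summaries (means, SDs, sample sizes) are made-up illustrative values, not from any real study:

```python
import math

# Made-up group summaries for illustration
n1, n2 = 20, 25          # group sizes
m1, m2 = 10.0, 8.5       # group means
sd1, sd2 = 2.0, 2.4      # group standard deviations

# Pooled standard deviation
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# Cohen's d computed directly from the means and pooled SD
d_direct = (m1 - m2) / sp

# t statistic, then d recovered via d = t * sqrt(1/n1 + 1/n2)
t = (m1 - m2) / (sp * math.sqrt(1/n1 + 1/n2))
d_from_t = t * math.sqrt(1/n1 + 1/n2)

print(d_direct, d_from_t)  # identical (up to floating-point rounding)
```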
More generally, this works as long as you use the correct equation to convert the given test statistic into the corresponding effect size measure.
UPDATE: I'll add two more examples to illustrate this point:
Suppose you conduct a one-way ANOVA on a total of $n$ individuals (all groups combined) with $g$ groups. The usual $F$ statistic is then equal to $F = MS_B / MS_E$, where $MS_B$ and $MS_E$ are the between-group (or treatment) mean square and the error mean square, respectively, which in turn are equal to $MS_B = SS_B / (g-1)$ and $MS_E = SS_E / (n-g)$, where $SS_B$ and $SS_E$ are the between-group sum of squares and the error sum of squares.
The usual effect size measure in this context is "eta-squared" ($\eta^2$), which can be computed with $\eta^2 = SS_B / (SS_B + SS_E)$ (i.e., the between-group sum of squares divided by the total sum of squares). But one can just as easily compute this with $$\eta^2 = \frac{F \times (g-1)}{F \times (g-1) + (n-g)}.$$ The results will be exactly the same.
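Again, a small Python sketch can verify this; the sums of squares, number of groups, and total sample size below are arbitrary made-up values:

```python
# Made-up one-way ANOVA quantities: g groups, n observations in total
g, n = 4, 40
ss_b, ss_e = 30.0, 90.0   # between-group and error sums of squares

# Eta-squared directly from the sums of squares
eta2_direct = ss_b / (ss_b + ss_e)

# F statistic, then eta-squared recovered from F and the degrees of freedom
F = (ss_b / (g - 1)) / (ss_e / (n - g))
eta2_from_f = (F * (g - 1)) / (F * (g - 1) + (n - g))

print(eta2_direct, eta2_from_f)  # identical (up to floating-point rounding)
```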
Suppose you have a $2 \times 2$ table to examine the relationship between two dichotomous variables X and Y of the form:
      |   X   not_X
------+--------------
  Y   |   a     b
not_Y |   c     d
------+--------------
The usual test of independence yields a chi-square value ($\chi^2$) with one degree of freedom. One measure of the strength of the association between the two variables is the phi coefficient. It can be computed with $$\phi = \frac{ad - bc}{\sqrt{(a+b)(c+d)(a+c)(b+d)}}.$$ But one can also compute this with $\phi = \sqrt{\chi^2 / n}$, where $n = a+b+c+d$ (assuming that $\chi^2$ was not computed with a continuity correction; see this question). One caveat: the sign of $\phi$ is lost when it is computed this way (i.e., one does not know whether the coefficient is positive or negative). So, to be precise, we get $\sqrt{\chi^2 / n} = |\phi|$.
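Here is one more Python sketch, using made-up cell counts, that computes $\phi$ both ways (with $\chi^2$ computed without a continuity correction) and shows that only the sign is lost:

```python
import math

# Made-up cell counts for a 2x2 table
a, b, c, d = 30, 10, 15, 25
n = a + b + c + d

# Phi directly from the cell counts (sign preserved)
phi = (a*d - b*c) / math.sqrt((a+b) * (c+d) * (a+c) * (b+d))

# Chi-square WITHOUT a continuity correction, then |phi| = sqrt(chi2 / n)
chi2 = n * (a*d - b*c)**2 / ((a+b) * (c+d) * (a+c) * (b+d))
abs_phi = math.sqrt(chi2 / n)

print(phi, abs_phi)  # abs_phi equals |phi|; the sign is lost
```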