You don't explain your concern very clearly, but I assume you're worried about the relative weight the chi-square puts on cells where $(O_i-E_i)^2$ is large relative to the $E_i$ in the denominator: a single such term can dominate the statistic.
I also assume (at least to start with) that you're asking about the multinomial goodness of fit chi-square.
Note that your statistic is $\sum_{i=1}^k|\frac{O_i-E_i}{E_i}|= \sum_{i=1}^k|\frac{O_i}{E_i}-1|$.
If you want to reduce the effect of the larger differences between observed
and expected values for multinomial goodness of fit tests,
there's the power-divergence family[1]:
$$2nI^\lambda=\frac{2}{\lambda(\lambda+1)}\sum_{i=1}^k O_i\left\{\left(\frac{O_i}{E_i}\right)^\lambda-1\right\}\,;\,\lambda\in\mathbb{R}$$
Some authors refer to $2nI^\lambda$ as $\text{CR}(\lambda)$.
The choice $\lambda=1$ gives the ordinary chi-square,
$\lambda=0$ (taken as a limit) gives the G test, $\lambda=-\tfrac{1}{2}$
corresponds to the Freeman-Tukey statistic[2][3], and so on.
These all have asymptotic chi-square distributions.
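If it helps to see the family side by side, here's a minimal sketch using SciPy's `scipy.stats.power_divergence`, which computes $\text{CR}(\lambda)$ for an arbitrary $\lambda$; the counts and the uniform null below are made up purely for illustration.

```python
import numpy as np
from scipy.stats import power_divergence

obs = np.array([43, 52, 54, 40, 55, 56])   # hypothetical observed counts
exp = np.full(6, obs.sum() / 6)            # expected counts under a uniform null

# lambda_ = 1 is Pearson's chi-square, 0 the G (likelihood-ratio) statistic,
# -1/2 the Freeman-Tukey statistic; each is referred to a chi-square with k - 1 df.
for lam, name in [(1.0, "Pearson X^2"), (0.0, "G"),
                  (-0.5, "Freeman-Tukey"), (2/3, "Cressie-Read")]:
    stat, p = power_divergence(obs, f_exp=exp, lambda_=lam)
    print(f"{name:>14}: CR({lam:+.2f}) = {stat:6.3f},  p = {p:.3f}")
```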
Of the members of this family, two that come closest to what you seem to be
seeking (at least in the sense of having a power of $O_i$ near 1) would be
$\lambda=0$, the G-test (likelihood ratio test):
$$G = 2\sum_{i=1}^k O_i\cdot\ln\left(\frac{O_i}{E_i}\right)$$
and the (usual form of the) Freeman-Tukey:
$$T^2 = 4\sum_{i=1}^k \left(\sqrt{O_i}-\sqrt{E_i}\right)^2$$
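Both are easy to compute directly from these formulas; a small sketch with the same made-up counts as above, with p-values taken from the asymptotic chi-square with $k-1$ degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2

obs = np.array([43, 52, 54, 40, 55, 56], dtype=float)
exp = np.full(6, obs.sum() / 6)

G  = 2 * np.sum(obs * np.log(obs / exp))              # likelihood-ratio (G) statistic
T2 = 4 * np.sum((np.sqrt(obs) - np.sqrt(exp)) ** 2)   # Freeman-Tukey statistic

df = len(obs) - 1
print(f"G  = {G:.3f}, p = {chi2.sf(G,  df):.3f}")
print(f"T2 = {T2:.3f}, p = {chi2.sf(T2, df):.3f}")
```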
If you're looking for a test for a contingency table, the likelihood ratio
test is widely accepted and has good properties; the chi-square approximation to the distribution of its statistic also tends to hold up a little better at small sample sizes. I've seen at least one paper in which power-divergence statistics (other than the usual chi-square and likelihood ratio test) were adapted to the contingency table case, but I haven't pursued them.
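For instance, SciPy's `chi2_contingency` exposes the same power-divergence family through its `lambda_` argument, so the likelihood-ratio version of the contingency-table test is a short sketch; the 3x3 table below is invented purely for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[22, 31, 18],
                  [15, 27, 29],
                  [33, 20, 25]])

# lambda_="log-likelihood" selects the G statistic from the power-divergence
# family; the default (lambda_=None) gives Pearson's chi-square.
stat, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(f"G = {stat:.3f}, df = {dof}, p = {p:.3f}")
```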
--
More generally, you can use pretty much whatever statistic you like if you can sample from its null distribution, but (as whuber points out) you should consider the properties of your choice. Choosing statistics on a whim may produce poor power characteristics (power can be investigated for specific alternatives of interest).
You should justify your choice of test statistic carefully - why that statistic, rather than some other, similar statistic? e.g. why $\sum_i|\frac{O_i-E_i}{E_i}|= \sum_i|O_i/E_i-1|$ rather than something that might seem more natural, such as $\sum_i |\frac{O_i-E_i}{\sqrt{E_i}}|$ or $\sum_i |\frac{O_i-E_i}{\sqrt{E_i(1-\pi_i)}}|$?
Under multinomial sampling from $H_0$ it's easy enough to produce random tables of counts and so investigate the null distribution of some test statistic (and hence produce a test). If you condition on the margins, it's also possible to sample contingency tables of counts under the null of independence (e.g. R's `r2dtable` does this).
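As a concrete sketch of the multinomial case (counts, null and alternative probabilities all made up for illustration), here's how you might calibrate your $\sum_i|O_i/E_i-1|$ statistic by simulation and check its power against one particular alternative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p0 = 300, np.full(6, 1/6)                   # null: uniform multinomial, n observations
E = n * p0

def stat(O, E):
    """The proposed statistic: sum_i |O_i/E_i - 1|."""
    return np.abs(O / E - 1).sum()

obs = np.array([43, 52, 54, 40, 55, 56])        # hypothetical observed counts

# Null distribution of the statistic by simulation, and a Monte Carlo p-value.
null_draws = rng.multinomial(n, p0, size=20000)
null_stats = np.abs(null_draws / E - 1).sum(axis=1)
p_value = (1 + np.sum(null_stats >= stat(obs, E))) / (1 + len(null_stats))
crit = np.quantile(null_stats, 0.95)            # 5% critical value

# Power against one specific alternative of interest (a mild departure from uniform).
p1 = np.array([0.22, 0.18, 0.15, 0.15, 0.15, 0.15])
alt_stats = np.abs(rng.multinomial(n, p1, size=20000) / E - 1).sum(axis=1)
power = np.mean(alt_stats >= crit)

print(f"Monte Carlo p-value: {p_value:.3f}, 5% critical value: {crit:.3f}, "
      f"power at the chosen alternative: {power:.2f}")
```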
It's generally better to start with something whose good characteristics are established.
[1] Cressie, N. and Read, T. R. C. (1984), "Multinomial Goodness-of-Fit Tests", Journal of the Royal Statistical Society, Series B, 46(3): 440–464.
[2] Read, C. B. (1993), "Freeman–Tukey chi-squared goodness-of-fit statistics", Statistics & Probability Letters, 18(4): 271–278.
[3] Freeman, M. F. and Tukey, J. W. (1950), "Transformations related to the angular and the square root", The Annals of Mathematical Statistics, 21(4): 607–611, doi:10.1214/aoms/1177729756.