L-moments might be useful here. Two good starting points are the Wikipedia article on L-moments and the L-moments page maintained by Jonathan R.M. Hosking (IBM Research).
They provide quantities analogous to conventional moments such as skewness and kurtosis, called the L-skewness and L-kurtosis. These have the advantage of not requiring the calculation of higher-order moments: they are defined as linear combinations of expected values of order statistics and are computed from linear combinations of the ordered data. This also makes them less sensitive to outliers.
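To make the "linear combinations of the ordered data" point concrete, here is a minimal sketch of the standard sample L-moment estimators via probability-weighted moments (Hosking's construction); the function names are mine, not from any particular package:

```python
import numpy as np

def sample_lmoments(x):
    """First three sample L-moments, computed from the sorted data
    via unbiased probability-weighted moments b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)  # ranks 1..n
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1 = b0                      # L-location (= mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3

def l_skewness(x):
    """L-skewness: the ratio t3 = l3 / l2."""
    l1, l2, l3 = sample_lmoments(x)
    return l3 / l2
```

Note that no data value is ever squared or cubed; each estimate is a weighted sum of the order statistics, which is exactly why extreme observations have bounded influence.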
I believe you only need second-order terms to calculate their sample variances, which presumably you'd need for your test. Their asymptotic distributions also converge to normality much faster than those of conventional moments.
The expressions for their sample variances get quite complicated (Elamir and Seheult, 2004), but I know they have been programmed in downloadable packages for both R and Stata (available from their standard repositories), and possibly in other packages too. Since your samples are independent, once you have the estimates and standard errors you could plug them into a two-sample z-test, provided your sample sizes are "large enough" (Elamir and Seheult report some limited simulations that appear to show that 100 isn't large enough, but not what is). Alternatively, you could bootstrap the difference in L-skewness. The properties above suggest that may perform considerably better than bootstrapping based on the conventional skewness.
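A minimal sketch of the bootstrap route, using an inline L-skewness estimator (the function names and the simulated data are illustrative assumptions, not from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def l_skewness(x):
    """Sample L-skewness t3 = l3 / l2 via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l3 / l2

def bootstrap_diff_lskew(x, y, n_boot=2000):
    """Bootstrap distribution of the difference in L-skewness,
    resampling each independent sample separately."""
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xs = rng.choice(x, size=len(x), replace=True)
        ys = rng.choice(y, size=len(y), replace=True)
        diffs[b] = l_skewness(xs) - l_skewness(ys)
    return diffs

# Illustrative data: a right-skewed sample vs a symmetric one.
x = rng.exponential(size=200)
y = rng.normal(size=200)
diffs = bootstrap_diff_lskew(x, y)
lo, hi = np.percentile(diffs, [2.5, 97.5])  # percentile CI for the difference
```

If the percentile interval `(lo, hi)` excludes zero, that is evidence the two populations differ in L-skewness; fancier intervals (BCa, studentized) are available in standard bootstrap packages if coverage matters.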