I don't think squaring will necessarily do what you want even if it makes things look normal.
If you want to test whether a population mean equals some hypothesized value, then by testing a transformed variable instead you can become highly likely to reject even when the original population mean is exactly the value given in the null (that is, you will tend to reject true nulls).
Consider some random variable $X$ which has some distribution with $\mu=\mu_0$ and non-zero variance.
Let $Y=X^2$.
$E(Y)=E(X^2) = E(X)^2 +\text{Var}(X)=\mu_0^2+\sigma^2_X$
Consequently, a test of $H_0^*:\mu_Y=\mu_0^2$ should reject (and in large samples will become essentially certain to), even though the original hypothesis $H_0:\mu_X=\mu_0$ was true.
Beware of mixing hypothesis tests and transformations unless you actually understand how they behave!
Illustration
Here's a sample from a somewhat left-skew distribution with population mean 5:

[histogram of the sample]
By chance, the sample mean came out really close to the population mean:
> mean(y)
[1] 5.000247
Now we square it. How does the mean compare with 25?
> mean(y^2)
[1] 27.97773
Almost 28 (the population variance of $Y$ here was about 3, so from the identity above we'd expect $E(Y^2)\approx 25+3=28$).
So if we test whether the population mean of $Y^2$ is 25 we're likely to reject, even though the population mean of $Y$ really is 5. (In this particular sample the p-value would be about 0.08, but across repeated samples the test rejects this true null far more often than the nominal rate.)
Code was requested; unfortunately I didn't keep the code I used to generate the example. This is vaguely similar in that it's left skew with mean 5 and substantial variance (though not as large as in the original):
n <- 100  # 50/50 mixture: max of three U(0,1) scaled to (0,5) (left-skew piece), else U(5, 7.5)
x <- ifelse(runif(n) < .5, pmax(runif(n), runif(n), runif(n)) * 5, runif(n, 5, 7.5))
Here are the results from a sample of 1000 rather than 100 with that code:
> mean(x);var(x);mean(x^2)
[1] 4.985436
[1] 2.35402
[1] 27.20623
> mean(x)^2+var(x)*(1-1/length(x)) # adjust for Bessel's correction
[1] 27.20623
(The adjustment undoes Bessel's correction, so the sample moments satisfy exactly the identity used in the population algebra above.)
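With that generator wrapped in a function, we can also check the earlier rejection claim by simulation (a sketch of my own; gen is just the one-liner above, and I apply the ordinary one-sample t-test at the 5% level):

gen <- function(n) ifelse(runif(n) < .5,
                          pmax(runif(n), runif(n), runif(n)) * 5,
                          runif(n, 5, 7.5))
nsim <- 10000
rej_orig <- rej_sq <- logical(nsim)
for (i in seq_len(nsim)) {
  y <- gen(100)
  rej_orig[i] <- t.test(y,   mu = 5 )$p.value < .05  # true null, original scale
  rej_sq[i]   <- t.test(y^2, mu = 25)$p.value < .05  # "same" null after squaring
}
mean(rej_orig)  # should sit reasonably near the nominal 0.05
mean(rej_sq)    # well above 0.05 -- a true null rejected far too often

The first rejection rate stays near 0.05 despite the skewness, while the second is several times larger, and grows toward 1 as the sample size increases.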
[How relevant would this be to a two-sample case? If the two populations from which the samples were drawn have equal means but unequal variances, the means of their squares will differ, so a test on the squares will tend to reject a true null of equal means. This is quite different from the usual issue of unequal variances with the equal-variance t-test -- the test here is much more strongly affected.]
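For instance, here's a quick sketch (distributions and sample sizes chosen purely for illustration): two normal populations with identical means but different standard deviations, comparing the Welch t-test on the original scale with the same test applied to the squares.

nsim <- 10000
rej_orig <- rej_sq <- logical(nsim)
for (i in seq_len(nsim)) {
  a <- rnorm(100, mean = 5, sd = 1)  # same mean ...
  b <- rnorm(100, mean = 5, sd = 3)  # ... different variance
  rej_orig[i] <- t.test(a,   b  )$p.value < .05  # Welch test, originals
  rej_sq[i]   <- t.test(a^2, b^2)$p.value < .05  # same test on the squares
}
mean(rej_orig)  # near 0.05 -- the means really are equal
mean(rej_sq)    # far higher: E(A^2) = 25+1 = 26 while E(B^2) = 25+9 = 34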
So what to do? We have to start with the precise hypothesis of interest and figure out a reasonable way to test that hypothesis, at least to a good approximation.
It appears the null is definitely equality of means.
There are several options I see:
- Use the t-test as is; depending on how skewed and heavy-tailed the distributions are, the significance level and power may not be too badly impacted.
- Come up with a suitable parametric model for the variables in question.
- A permutation test is possible but may present difficulties: under the usual assumptions it would be necessary to assume symmetry under the null (this doesn't mean the sample should look symmetric, only that the population would be expected to be symmetric if the null were true); a sign-flip version is sketched after this list.
- A form of bootstrap test might be employed; it may be reasonable if the sample sizes for the two variables are fairly large (also sketched below).
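For the last two options, here are minimal sketches (my own illustrative code, not anything from the original analysis). First, a sign-flip permutation test of $H_0:\mu_X=\mu_0$, which relies on exactly the symmetry assumption just mentioned:

perm_test_mean <- function(y, mu0, nperm = 9999) {
  d     <- y - mu0                    # symmetric about 0 if the null is true
  obs   <- abs(mean(d))
  flips <- replicate(nperm,
             abs(mean(d * sample(c(-1, 1), length(d), replace = TRUE))))
  (1 + sum(flips >= obs)) / (1 + nperm)   # two-sided p-value
}

And a two-sample bootstrap test of equal means in the style of Efron and Tibshirani: shift both samples to a common mean so that the null holds in the resampling world, then compare the observed t-statistic with its bootstrap distribution:

boot_test_means <- function(a, b, nboot = 9999) {
  tstat <- function(u, v) (mean(u) - mean(v)) /
                          sqrt(var(u) / length(u) + var(v) / length(v))
  obs <- tstat(a, b)
  m   <- mean(c(a, b))               # impose the null: recentre both samples
  a0  <- a - mean(a) + m
  b0  <- b - mean(b) + m
  tb  <- replicate(nboot, tstat(sample(a0, replace = TRUE),
                                sample(b0, replace = TRUE)))
  (1 + sum(abs(tb) >= abs(obs))) / (1 + nboot)   # two-sided p-value
}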