The test you use should depend on the question you are trying to answer as much as on the test's assumptions.
The t-test addresses the question of whether the means of two variables are equal. It is often used because the equality of means is readily interpretable in terms of an empirical question: e.g., if the mean SAT score of a group of students who took a prep course is greater than the mean SAT score of a control group, that can be interpreted in terms of the efficacy of the prep course. However, another question you could ask is whether the two variables follow exactly the same distribution, which is not the same question (e.g., two variables could have the same mean but different variances).
If you are interested in whether the means of your two variables are equal, you should not log-transform one of them. If you do, your t-test asks whether the mean of Var 1 is equal to the mean of log(Var 2), and unless you are interested in, and able to interpret, what log(Var 2) represents, you're probably not interested in what that t-test will tell you.
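To see concretely how the log-transform changes the question, here is a minimal sketch in Python (the lognormal example and all variable names are hypothetical, and it assumes NumPy and SciPy are available). The key point is that the mean of log(Var 2) estimates a different parameter than the mean of Var 2, so the two t-tests are asking different things:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# two skewed samples drawn from the SAME lognormal distribution (illustrative)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)
y = rng.lognormal(mean=0.0, sigma=1.0, size=500)

# t-test on the raw scale: compares mean(x) with mean(y)
t_raw, p_raw = stats.ttest_ind(x, y, equal_var=False)

# t-test after log-transforming only y: compares mean(x) with mean(log(y)),
# a different quantity entirely -- by Jensen's inequality,
# mean(log(y)) < log(mean(y)), so the "same distribution" null no longer
# implies equal means for the two quantities being compared
t_mixed, p_mixed = stats.ttest_ind(x, np.log(y), equal_var=False)

print(np.mean(y), np.mean(np.log(y)))  # these estimate different parameters
```

With this setup, the raw-scale test compares two samples from the same distribution, while the mixed test will typically report a large, "significant" difference that reflects only the transformation, not any real effect.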
As has been noted, non-normality of your data is not necessarily a problem for the t-test. What matters is whether the distribution of the sample mean is non-normal, not whether the data itself is non-normal. If your data is normal, the sample mean will automatically be normal as well. However, even if your data is non-normal, the central limit theorem (CLT) implies that the sample mean will be (approximately) normal so long as the sample size is sufficiently large. It is because of this that the t-test is often described as robust.
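A quick simulation makes the CLT point concrete. This sketch (hypothetical exponential data, assuming NumPy and SciPy) draws strongly skewed raw data and shows that the distribution of the sample mean across many replications is nonetheless close to normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 100, 5000

# strongly skewed raw data: the exponential has theoretical skewness 2
raw = rng.exponential(scale=1.0, size=n * reps).reshape(reps, n)

# simulate the sampling distribution of the mean: one mean per replication
sample_means = raw.mean(axis=1)

print(stats.skew(raw.ravel()))   # ~2: the data are far from normal
print(stats.skew(sample_means))  # much closer to 0: the mean is nearly normal
```

For i.i.d. data, the skewness of the sample mean shrinks like 1/sqrt(n), which is why n = 100 already gets the exponential's skewness of 2 down to roughly 0.2.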
Thus the only question you have to ask in applying your t-test is whether the sample size is sufficiently large that the sample mean of your non-normally distributed variable is approximately normal. This depends greatly on how much "non-normality" your data exhibits. There are no absolute and objective rules giving a specific sample size that guarantees approximate normality of the sample mean. There are ways you could possibly examine the normality of the sample mean directly (e.g. parametric bootstrap, if you can guess the non-normal distribution of your second variable). But if your sample size is in the hundreds and it's only an issue of skew, you're probably fine (although not definitely).
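A parametric bootstrap along those lines might look like the following sketch, assuming you are willing to guess a lognormal model for your data; the model choice, the sample, and all names here are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# observed skewed sample (hypothetical data)
data = rng.lognormal(mean=0.0, sigma=0.8, size=150)

# guess a lognormal model and fit its parameters from the observed sample
mu_hat = np.log(data).mean()
sigma_hat = np.log(data).std(ddof=1)

# parametric bootstrap: simulate many samples of the same size from the
# fitted model and record the mean of each, approximating the sampling
# distribution of the sample mean under the guessed model
boot_means = rng.lognormal(mu_hat, sigma_hat, size=(5000, len(data))).mean(axis=1)

# examine how non-normal that distribution still is, e.g. via its skewness
print(stats.skew(boot_means))
```

If the bootstrapped distribution of the mean looks close to normal (low skewness, a reasonable histogram or Q-Q plot), that supports relying on the t-test at your sample size; if it is still badly skewed, it does not.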
If you want to be conservative (both statistically and practically), you can use a nonparametric test. Others have brought up some of them; another one you could use is a randomization test. However, keep in mind what exactly these tests answer: it is not always the same as what the t-test answers. A randomization test addresses the question of whether the variables come from the same distribution, which is a stronger question than whether the means are the same.
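A basic randomization test can be sketched as follows; the group sizes, the simulated effect, and the choice of the mean difference as the test statistic are all illustrative. Under the null that both groups come from the same distribution, the group labels are exchangeable, so we repeatedly shuffle them and see how extreme the observed statistic is:

```python
import numpy as np

rng = np.random.default_rng(7)
# two hypothetical groups; the second is shifted by 0.5 for illustration
x = rng.normal(0.0, 1.0, size=40)
y = rng.normal(0.5, 1.0, size=40)

observed = x.mean() - y.mean()
pooled = np.concatenate([x, y])

# randomization test: shuffle group labels and recompute the statistic
n_perm = 10_000
perm_stats = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(pooled)
    perm_stats[i] = shuffled[:40].mean() - shuffled[40:].mean()

# two-sided p-value: proportion of shuffles at least as extreme as observed
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
print(p_value)
```

Note that because the null is "same distribution," a small p-value here could in principle reflect a difference in variance or shape rather than in means, which is exactly the distinction drawn above.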