There are many different approaches you can use to determine which model performs better. You will not want to compare estimated regression coefficients, which is what I think you are attempting to do. That comparison is not useful here: because you log-transformed the data in your second model, the two models report regression coefficients that correspond to different underlying scales of your variables. I will highlight a very popular method and then point you to alternative procedures for model selection.
Mean Squared Prediction Error (MSPE) with a Hold-out Sample:
If you have a sufficient number of observations in your dataset (you didn't tell us how many you're working with), then one approach proceeds as follows (a short code sketch follows the list):
- Randomly select observations from your dataset (many statisticians use 70%-80% of the data) and build your two models using this "training dataset."
- After the models are built, obtain predicted values for your hold-out sample (the remaining 20%-30% of your observations) by applying each regression equation to this data.
- Next, compute the Mean Squared Prediction Error for your hold-out sample. This is computed as:
\begin{eqnarray*}
MSPE & \equiv & \frac{\sum_{i=1}^{n^*}(Y_{i}-\hat{Y}_{i})^{2}}{n^*}
\end{eqnarray*}
where
- $n^*$ is the number of observations in your hold-out sample
- $Y_i$ is the actual value of your dependent variable from the hold-out sample
- $\hat{Y}_i$ is the predicted value of your dependent variable from the hold-out sample
- You will compute the $MSPE$ separately for each model and compare the results. If your second model predicts $\log(Y)$, back-transform its predictions to the original scale of $Y$ (e.g., by exponentiating) before computing the $MSPE$, so that the two models are compared on the same scale.
- Select as your "best" model the one that corresponds to the smallest $MSPE$.
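Here is a minimal sketch of the procedure above in Python with scikit-learn. The simulated data, the variable names (`x`, `y`), and the 70/30 split are illustrative assumptions, not part of your problem; substitute your own data.

```python
# Hold-out MSPE comparison: raw-Y model vs. log-Y model (illustrative sketch)
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200).reshape(-1, 1)               # hypothetical predictor
y = np.exp(0.3 * x.ravel() + rng.normal(0, 0.2, 200))    # hypothetical response

# 70% training / 30% hold-out split
x_tr, x_ho, y_tr, y_ho = train_test_split(x, y, test_size=0.3, random_state=1)

# Model 1: regress Y on X directly
m1 = LinearRegression().fit(x_tr, y_tr)
pred1 = m1.predict(x_ho)

# Model 2: regress log(Y) on X, then back-transform predictions to the
# original scale so both MSPEs are comparable
m2 = LinearRegression().fit(x_tr, np.log(y_tr))
pred2 = np.exp(m2.predict(x_ho))

mspe1 = np.mean((y_ho - pred1) ** 2)
mspe2 = np.mean((y_ho - pred2) ** 2)
print(f"MSPE (raw Y):    {mspe1:.4f}")
print(f"MSPE (logged Y): {mspe2:.4f}")   # prefer the model with the smaller MSPE
```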
The idea behind this procedure is that you are seeing how well your model predicts future observations that were not used during the model-building process (so they cannot bias the results). If your logged model predicts better, then its predicted values $\hat{Y}_i$ will be closer to the actual values $Y_i$, and so its $MSPE$ will be smaller than that of a model that predicts less accurately. One thing to note is that if you don't have a large sample, there are other similar procedures you can use, such as leave-one-out cross-validation, that get at the same basic result. Another similar procedure is $K$-fold cross-validation.
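For completeness, here is a sketch of the $K$-fold version under the same hypothetical setup as above; $K = 5$ and the helper name `cv_mspe` are illustrative choices of mine, not standard library functions.

```python
# K-fold cross-validated MSPE for the raw-Y and log-Y specifications (sketch)
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cv_mspe(x, y, log_y=False, k=5, seed=1):
    """Average hold-out MSPE over K folds; optionally fit on log(Y) and
    back-transform predictions to the original scale of Y."""
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    fold_errors = []
    for train_idx, test_idx in kf.split(x):
        target = np.log(y[train_idx]) if log_y else y[train_idx]
        model = LinearRegression().fit(x[train_idx], target)
        pred = model.predict(x[test_idx])
        if log_y:
            pred = np.exp(pred)                    # back to the original scale
        fold_errors.append(np.mean((y[test_idx] - pred) ** 2))
    return np.mean(fold_errors)

# Hypothetical data, as in the previous sketch
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200).reshape(-1, 1)
y = np.exp(0.3 * x.ravel() + rng.normal(0, 0.2, 200))

print(f"5-fold MSPE (raw Y):    {cv_mspe(x, y, log_y=False):.4f}")
print(f"5-fold MSPE (logged Y): {cv_mspe(x, y, log_y=True):.4f}")
```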
Other Model Selection Criteria
Other methods you can use include the following (a brief sketch showing how to obtain the first two appears after the list):
- Akaike Information Criterion (AIC)
- Bayesian Information Criterion (BIC)
- Mallows's $C_p$ Criterion
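As a quick illustration, a fitted statsmodels OLS result exposes AIC and BIC directly. The data and the quadratic comparison model below are assumptions for the sake of the example; also note that information criteria are only directly comparable between models fit to the same response scale.

```python
# Obtaining AIC and BIC from statsmodels OLS fits (illustrative sketch)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 200)     # hypothetical data

X = sm.add_constant(x)                          # add an intercept column
fit_linear = sm.OLS(y, X).fit()
fit_quadratic = sm.OLS(y, np.column_stack([X, x ** 2])).fit()

# Smaller AIC/BIC indicates the preferred model among those compared
print(f"Linear    AIC={fit_linear.aic:.1f}  BIC={fit_linear.bic:.1f}")
print(f"Quadratic AIC={fit_quadratic.aic:.1f}  BIC={fit_quadratic.bic:.1f}")
```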
I've hyperlinked each of these model selection criteria to references where you can find more information about these procedures. If your ultimate goal, however, is to find the model that most accurately predicts new data, I'd strongly suggest using the $MSPE$ method I outlined above.