You can get an unbiased estimate of the classification error with the out-of-bag error estimate. See the explanation here: http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#ooberr
I suppose you could fit the model many times with different random seeds. If your classification error beats the error expected by pure chance in at least 95% of your trials, then your model is significant (at an alpha level of 0.05).
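A minimal sketch of that idea in Python, assuming scikit-learn; the synthetic dataset, number of seeds, and forest settings are all illustrative, and "chance" here means always predicting the majority class:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data; substitute your own X, y.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Error rate of always guessing the majority class ("pure chance" baseline).
chance_error = 1 - np.bincount(y).max() / len(y)

oob_errors = []
for seed in range(20):  # refit under different random seeds
    clf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                 random_state=seed).fit(X, y)
    oob_errors.append(1 - clf.oob_score_)  # OOB classification error

frac_better = np.mean([e < chance_error for e in oob_errors])
# Call the fit "significant" at alpha = 0.05 if >= 95% of trials beat chance.
significant = frac_better >= 0.95
```

Note that the trials share the same data, so they are not independent; this is a heuristic check rather than a formal hypothesis test.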
You may not even have to go to all that trouble: your out-of-bag error should converge to a stable value as more trees are added. I do not know how to estimate a confidence interval for it without the procedure above, but someone smarter than I am may know...
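You can watch that convergence directly. A sketch, again assuming scikit-learn (the `warm_start` flag grows the same forest incrementally rather than refitting from scratch; data and tree counts are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# warm_start=True keeps the trees fit so far and only adds new ones
# each time n_estimators is increased.
clf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)

errors = {}
for n in [25, 50, 100, 200, 400]:
    clf.set_params(n_estimators=n)
    clf.fit(X, y)
    errors[n] = 1 - clf.oob_score_  # OOB error with n trees

# Plot errors against n (or just inspect it) to see the curve flatten out.
```

Once the curve is flat, adding trees is no longer changing the OOB estimate, which is the convergence referred to above.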
Edit: This thread looks relevant to the question of estimating a confidence interval on OOB classification error - Bootstrapping estimates of out-of-sample error