First you'll have to define "confidence risk". The only definition I could find talks about a stock's sensitivity to unexpected changes.
You're confusing a number of different topics here:
The alpha error
The alpha error, or type I error, is the probability of rejecting the null hypothesis when it is actually true. That's a false positive. The amount of error you allow is your own choice. The standard choice is 0.05 (i.e. you accept a 5% probability of calling something significant when it isn't). You apparently want to use 0.01.
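As a sketch of what choosing alpha amounts to (the function name below is my own, purely for illustration), the decision is nothing more than a comparison against the threshold you picked:

```python
# Illustrative decision rule; the function name is hypothetical.
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

# The same p-value can be significant at alpha = 0.05 but not at alpha = 0.01:
print(is_significant(0.03, alpha=0.05))  # True
print(is_significant(0.03, alpha=0.01))  # False
```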
The p value
The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one you observe, given that the null hypothesis is true. In other words, it measures how surprising your data would be if there were no real effect and the pattern you see were produced by randomness alone.
This p-value depends entirely on the assumptions you make about your test statistic. In this case you use a $\chi^2$ test, meaning you calculate a test statistic that you assume to follow a $\chi^2$ distribution with a certain number of degrees of freedom. Since the p-value is constructed directly from that test statistic, you can only change it by changing the assumptions about the statistic's distribution.
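For the one-degree-of-freedom case, the link between the statistic and the p-value can be sketched with the standard library alone, using the identity $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$ (the function name here is my own):

```python
import math

def chi2_pvalue_1df(statistic: float) -> float:
    """Upper-tail p-value for a chi-squared statistic with 1 degree of freedom.

    Uses the identity P(X > x) = erfc(sqrt(x / 2)) for X ~ chi2(1),
    since a chi2(1) variable is the square of a standard normal.
    """
    return math.erfc(math.sqrt(statistic / 2.0))

# A statistic near 3.841 gives p close to 0.05; near 6.635, p close to 0.01.
print(round(chi2_pvalue_1df(3.841), 3))
print(round(chi2_pvalue_1df(6.635), 3))
```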
Statistic critical threshold
Just as you can consider a test to be significant if the reported p-value is less than your acceptable threshold for the alpha error, you can express the exact same criterion on the scale of the test statistic. For example, a criterion of $p<0.05$ corresponds to a criterion of $\chi^2>3.841$ for a $\chi^2$ distribution with one degree of freedom. However, just as R reports a p-value but does not itself compare it to your threshold value, it also just reports a value for the $\chi^2$ statistic without comparing it to any specific threshold. You are free to compare it to whatever threshold value you deem appropriate.
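You can check this correspondence numerically: for one degree of freedom, the $\chi^2$ critical value is the square of the two-sided standard-normal quantile. A minimal sketch using only the standard library (variable names are mine):

```python
from statistics import NormalDist

# For 1 degree of freedom, the chi-squared critical value equals
# (z_{1 - alpha/2})^2, because a chi2(1) variable is a squared
# standard normal.
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided normal quantile, about 1.96
chi2_critical = z ** 2                    # about 3.841

print(round(chi2_critical, 3))
```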
Confidence interval
A confidence interval is based on the standard error of the estimate for which you construct the interval. A 95% confidence interval means that if you repeated the experiment many times, 95% of the intervals constructed this way would contain the true value of the parameter. If you construct a 99% confidence interval, you get a wider interval, because you now want it to contain the true value 99% of the time. But this is yet another thing.
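A sketch of how the interval widens with the confidence level, assuming a simple normal-approximation interval (estimate ± z · SE); the estimate and standard error below are made up purely for illustration:

```python
from statistics import NormalDist

def normal_ci(estimate: float, se: float, level: float = 0.95) -> tuple:
    """Normal-approximation confidence interval: estimate +/- z * se."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return (estimate - z * se, estimate + z * se)

# Hypothetical estimate of 10.0 with standard error 1.5:
lo95, hi95 = normal_ci(10.0, 1.5, level=0.95)  # uses z of about 1.96
lo99, hi99 = normal_ci(10.0, 1.5, level=0.99)  # uses z of about 2.576

# The 99% interval is wider than the 95% interval:
print(hi99 - lo99 > hi95 - lo95)  # True
```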