Short answer: yes, it is. However, it will only test the null hypothesis that all parameters are equal. If you are interested in which of the $p_i$ differ, you will need to run a post hoc test.
EDIT: OK, I see now where the confusion is. $\chi^2$ tests the goodness of fit, but one of its most common applications is testing for independence. In a way this is exactly the same thing, but it is commonly presented as a contingency table.
We have two variables, $X$ (e.g. "round vs square" or "0 vs 1") and $Y$ (e.g. "red vs blue"). This gives us $X \times Y$ bins (here, four bins: round red, round blue, square red and square blue), which can be neatly represented as a contingency table in which each cell contains the number of observed events of that type.
             square   round
    red          10      40
    blue         20      40
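For concreteness, here is a minimal sketch of that table as a NumPy array (the variable name `obs` and the row/column order are my own choices, not from the question):

    import numpy as np

    # rows: red, blue; columns: square, round
    obs = np.array([[10, 40],
                    [20, 40]])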
How does this relate to your question? Under independence, we assume that the probability of getting a "red" outcome is the same for "squares" and "rounds": both "squares" and "rounds" follow a binomial distribution with the same parameter $p$. This is the null ($H_0$); rejecting it means that there is a dependence between $X$ and $Y$ -- that is, that $p$ differs between the $X_i$.
Once again -- "square" and "round" are the $X_1$ and $X_2$ from your question, each follows a binomial distribution (with outcomes "red" or "blue"), and we want to test, using $\chi^2$, whether the probability of getting "blue" is the same for "squares" as for "rounds".
To do that, we first propose a theoretical distribution of the values based on $p_{i,\bullet}$ and $p_{\bullet,j}$ (the row-wise and column-wise proportions). In our example,
             square          round            p_i,∙
    red          10             40            50/110 ≈ 0.45
    blue         20             40            60/110 ≈ 0.55
    p_∙,j    30/110 ≈ 0.27  80/110 ≈ 0.73
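These proportions are easy to reproduce in code; continuing the sketch above:

    N = obs.sum()                 # 110 observations in total
    p_row = obs.sum(axis=1) / N   # row-wise:    [0.45, 0.55]  (red, blue; rounded)
    p_col = obs.sum(axis=0) / N   # column-wise: [0.27, 0.73]  (square, round; rounded)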
0.45 is the observed probability of getting a "red"; this is the $p$ from your question, but under independence it is the same for "squares" and "rounds" -- the same for all $X_i$ (remember, this is our null!).
Note that the four types of events (four bins) are disjoint and their probabilities obviously add up to 1, just as you described for the goodness of fit. Each cell in the contingency table is your $A_{i,j}$, and $\bigcup\limits_{i,j} A_{i,j} = \Theta$, and $A_{i,j} \cap A_{m,n} = \emptyset$ if $i \neq m$ or $j \neq n$.
Assuming independence, the expected number of observations in each cell is $N \cdot p_{i,\bullet} \cdot p_{\bullet,j}$.
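In code this is just an outer product of the marginal proportions, scaled by $N$; continuing the sketch:

    expected = N * np.outer(p_row, p_col)
    # array([[13.64, 36.36],
    #        [16.36, 43.64]])   (values rounded)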
This gives us the expected distribution, and we test how well the data fit it. For this we use $\chi^2$. If we reject the null, we infer that the independence assumption does not hold, which means that some of the $p_i$ corresponding to the $X_i$ are different -- precisely what you wanted to test.
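To tie it all together, `scipy.stats.chi2_contingency` performs exactly this test on a contingency table. A sketch, continuing from the snippets above (`correction=False` disables the Yates continuity correction so the statistic matches the hand computation):

    from scipy.stats import chi2_contingency

    # by hand: sum of (observed - expected)^2 / expected over all cells
    chi2_manual = ((obs - expected) ** 2 / expected).sum()   # ≈ 2.44

    chi2, p_value, dof, exp = chi2_contingency(obs, correction=False)
    # chi2 ≈ 2.44, dof = 1, p ≈ 0.12 -- here we would fail to reject the null

With these particular numbers the test does not reject independence at the usual levels, but the mechanics are the same whatever the data.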