What is independence?
The statistical definition of independence between two variables (e.g. two measures of an outcome of interest) is that the value observed for one variable does not affect the distribution of the other. This means any test for independence must be able to assess whether the events in one variable affect the other.
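The standard way to operationalise this definition is a chi-squared test of independence on the full contingency table. A minimal sketch, with invented counts purely for illustration:

```python
# Sketch of a chi-squared test of independence on a hypothetical
# 2x2 table of two categorical variables (counts are made up).
from scipy.stats import chi2_contingency

table = [[30, 10],
         [20, 40]]
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4), dof)
```

Note that this test uses every cell of the table, including the agreements, when comparing observed counts with those expected under independence.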
I've not been able to find any useful sources that support, with reasoning, the claim that McNemar's test measures independence or dependence; in fact, some sources directly refute this or simply give different explanations:
Chi squared test of independence vs McNemars
CV answers here
Comparison with several contingency table statistics
Independence vs McNemar's
The statistical definition of independence is that one variable is not dependent or conditional on the status of another, i.e. $P(X \mid Y)=P(X)$,
but McNemar's evaluates probabilities conditioned on the two tests disagreeing, i.e. $P(X \mid Y \neq X)$.
This means McNemar's cannot be measuring dependence or independence, since the probability is conditioned on both $X$ and $Y$.
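The difference can be made concrete with a hypothetical paired table (all counts invented for illustration): independence compares $P(X \mid Y)$ with $P(X)$, while McNemar's only ever sees the probability conditioned on disagreement.

```python
# Hypothetical paired 2x2 table of two binary tests X and Y
# (counts are invented): rows = X, columns = Y.
#            Y=1   Y=0
#   X=1  [[ 40,   10 ],
#   X=0   [  5,   45 ]]
n11, n10, n01, n00 = 40, 10, 5, 45
N = n11 + n10 + n01 + n00

p_x = (n11 + n10) / N                   # P(X=1)
p_x_given_y = n11 / (n11 + n01)         # P(X=1 | Y=1)
# McNemar's conditions on disagreement (X != Y):
p_x_given_disagree = n10 / (n10 + n01)  # P(X=1 | X != Y)

print(p_x, p_x_given_y, p_x_given_disagree)
```

Independence would compare the first two quantities; McNemar's works only with the third.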
What does McNemar's test measure?
What McNemar's test measures is extremely sensitive to the study design, which must be considered carefully to understand what the test is telling you.
In the confusion matrix you present, the results are from paired tests carried out to predict the same binary outcome. There is no assessment of truth in this matrix, only agreement and disagreement. McNemar's test reduces this further, focusing only on the disagreements.
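To see that only the disagreements matter, here is a minimal sketch of the exact (binomial) form of McNemar's test; the function name and counts are hypothetical, and note that the diagonal (agreement) cells never enter the calculation:

```python
# Minimal sketch of an exact McNemar test (binomial form), showing that
# only the discordant (off-diagonal) cells b and c are used.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact p-value: under H0 each disagreement is equally
    likely to fall in either off-diagonal cell (p = 0.5)."""
    n = b + c
    k = min(b, c)
    # P(X <= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(p, 1.0)

# The diagonal cells (agreements) could be anything; they are never used:
print(round(mcnemar_exact(12, 3), 4))
```

Two tables with identical off-diagonal cells but wildly different diagonals give the same p-value, which is the crux of the argument above.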
The main diagonal for two accurate tests would be predominantly determined by the fact that both tests are good predictors of the outcome (i.e. they are co-dependent on the outcome of interest). An interesting question is whether the noise elements of the two tests have any dependence separate from the agreement induced by their shared regression to the outcome. McNemar's is designed to ignore what is expected to be consistent and focus on what is not expected to be consistent, so it has an appeal here.
However, if the two tests are weak predictors of the outcome of interest, then correct predictions come both from the signal in each model and from random fluctuations in the noise that happen to give a correct result. The main diagonal reflects where the underlying co-dependencies plus noise coincide. In this case, ignoring the main diagonal means the random agreements arising from any interaction in the noise are never evaluated when assessing dependence. For this reason I do not believe it can be claimed to assess the dependence of test noise in the expected way.
The off-diagonal cells measure disagreements, so in a binary test this means one test is right and one is wrong, but the table cannot tell us which is which. A wrong result arises from noise, while a right result from a strong-performing model will mostly be signal. This means we are not comparing the noise in one test with the noise in the other; we are comparing noise in one test against signal in the other, blindfolded as to which is which.
Things are much messier when comparing poor models, as a high proportion of 'correct' results will be flukes. This means we are comparing incorrect results against a mixture of signal and flukes.
What does it mean?
McNemar's assesses whether there is any skew in how the tests disagree. This is a useful statistic for understanding which test tends to skew more towards positive or negative results. My interpretation of a p-value below your alpha level is that the questionnaire predicts compliance with the RDA more often than the food diary does.
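As a sketch of that interpretation, with invented discordant counts, and assuming $b$ counts questionnaire-positive/diary-negative pairs and $c$ the reverse:

```python
# Hypothetical discordant counts: b = questionnaire predicts compliance
# where the diary does not; c = the reverse. The direction of the skew
# comes from which cell is larger (a significant McNemar's p-value says
# the imbalance is unlikely under equal disagreement rates).
b, c = 25, 9
if b > c:
    direction = "questionnaire skews towards predicting compliance relative to diary"
elif c > b:
    direction = "diary skews towards predicting compliance relative to questionnaire"
else:
    direction = "no skew in disagreements"
print(direction)
```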