I have 6 categorical variables that can take the values -1, 0 and +1. The two extremes are assigned semantic labels; during rating, the rater could select either one of the labels (-1, +1) or neither (0). For instance, one variable consists of the classes dark (-1), &lt;neither&gt; (0) and bright (+1). This scale assumes that the semantic labels are perfectly opposing, i.e. that the variable is one-dimensional. However, it could also be that the labels are not perfectly opposing, which would mean that two separate variables spanning two dimensions are hidden in a single dimension.
The question is: how can I demonstrate that this is the case? Or, conversely, how can I show that the variable is indeed one-dimensional?
One idea I had was to use the correlation between one variable $V$ and another variable $R$ as a reference. The approach would be as follows: I split my data set into two subsets $A$ and $B$, so that one subset contains the part $V_A$ of $V$ that only includes the values $\{-1, 0\}$, and the other contains the part $V_B$ of $V$ that only includes the values $\{0, +1\}$. I then use something like Spearman's rho to evaluate the correlation between $V_A$ and $R_A$, and between $V_B$ and $R_B$. If $V$ is one-dimensional, I would expect the correlation coefficients to be similar for both subsets; if not, the coefficients should differ. As a note: nearly all of my 6 variables are significantly correlated, some even quite strongly (rho = 0.4).
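To make the approach concrete, here is a minimal sketch of the split-and-compare idea using `scipy.stats.spearmanr`. The data are simulated (variable names `V`, `R` and the generating model are placeholders, not my real data); the point is only to show the mechanics of splitting on the rating values and comparing the two coefficients:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: V is the trichotomous rating, R a continuous reference
# variable that loosely tracks V (purely illustrative generating model).
V = rng.choice([-1, 0, 1], size=300)
R = 0.5 * V + rng.normal(size=300)

# Subset A keeps ratings in {-1, 0}; subset B keeps ratings in {0, +1}.
mask_a = V <= 0
mask_b = V >= 0

rho_a, p_a = spearmanr(V[mask_a], R[mask_a])
rho_b, p_b = spearmanr(V[mask_b], R[mask_b])

print(f"rho_A = {rho_a:.3f} (p = {p_a:.3g})")
print(f"rho_B = {rho_b:.3f} (p = {p_b:.3g})")
```

Note that simply eyeballing whether `rho_a` and `rho_b` look similar is informal; a proper comparison would need something like a Fisher z-test or a bootstrap of the difference, which is part of what I am asking about.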
Is this approach valid? If not, is there a common approach for this kind of issue?
UPDATE: Context
Here is a bit of context for the question: the variables I described are response variables in a machine learning problem. I have a number of other variables, mostly continuous, that I use as predictors. I want to evaluate whether or not the response variables are one-dimensional, to find hints about the reasons for my ML classifier's poor performance.