I have two features in a binary classification problem that are highly positively correlated (0.79). But when I fit a logistic regression classifier, their weights have opposite signs. This gives conflicting insights about the response variable depending on which of the two features I look at.
Why is that?
Please note there is no correlation among any of the other features. If I drop one of these two features and keep only the other in the model, the sign is still different for the feature that was kept.
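Here is a minimal sketch of how I'm comparing the coefficient signs. The synthetic data, the column names feat_a / feat_b / target, and the use of scikit-learn are just placeholders standing in for my actual setup:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Placeholder data: two highly correlated features plus a binary target.
rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=1000)   # corr(x1, x2) is roughly 0.8
y = (x1 - 0.5 * x2 + rng.normal(size=1000) > 0).astype(int)
df = pd.DataFrame({"feat_a": x1, "feat_b": x2, "target": y})

print(df[["feat_a", "feat_b"]].corr())

# Full model: both correlated features together.
full = LogisticRegression().fit(df[["feat_a", "feat_b"]], df["target"])
print("both features:", dict(zip(["feat_a", "feat_b"], full.coef_[0])))

# Reduced models: keep only one of the two features at a time.
for col in ["feat_a", "feat_b"]:
    reduced = LogisticRegression().fit(df[[col]], df["target"])
    print(f"only {col}:", reduced.coef_[0][0])
```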
Is this multicollinearity? I suppose it is to some extent, but if that were the cause, then dropping one of the features should make the sign of the retained feature come out the same, right?
How to rectify this?