
As far as I understand it, at least from a very rough conceptual point of view, LDA (Linear Discriminant Analysis), when used as a dimensionality reduction technique, does two things (I'll stick to the 2-class case; see the sketch after the list):

  1. It computes the direction that maximizes class separation.
  2. It projects the data onto that direction.
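For the 2-class case, step 1 has a well-known closed form: the direction is w ∝ S_W^(-1) (m1 − m0), where S_W is the pooled within-class scatter matrix and m0, m1 are the class means. Here is a minimal NumPy sketch of both steps (the toy data and function names are mine, purely for illustration):

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher LDA: the direction that maximizes class separation."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix S_W.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # Step 1: w is proportional to S_W^{-1} (m1 - m0); the scale is arbitrary.
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

# Toy 2-class data with 4 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(1.5, 1.0, (50, 4))])
y = np.repeat([0, 1], 50)

w = fisher_direction(X, y)
projected = X @ w  # step 2: the 1-D projection of the data
```

Once w is known, a simple threshold on the 1-D projection gives the classifier.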


The projected data can then be used for classification.

I know some Python and R packages offer some convenient ways to perform LDA.

However, while I found it very easy to get the projected data (for example with scikit-learn in Python or MASS in R), I could not find a way to get the "direction" itself.

So:

  1. Does it make sense to look for the "discriminative" direction?
  2. Is there any convenient way to get it using any of the Python or R packages?
  • The "direction" is expressed by the discriminant coefficients. – ttnphns Aug 21 '18 at 01:07
  • What's the difference between the coefficients and the scalings? I'm particularly confused by the sklearn LDA documentation, which has coef_ (n_features,): weight vector(s) and scalings_ (rank, n_classes - 1): scaling of the features in the space spanned by the class centroids. My confusion probably comes from the fact that I'm in a p > 2 feature setting. Does anything change conceptually? – ImAUser Aug 21 '18 at 08:56
  • Go to https://stats.stackexchange.com/a/83114/3277. Do LDA of the iris data and compare your output with the results published there. It will be a good starting point. – ttnphns Aug 21 '18 at 09:14
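To illustrate the comments above (a minimal sketch, not an answer from the thread): in scikit-learn the fitted model exposes the direction in its scalings_ attribute, and in MASS the analogous piece is the scaling component of the object returned by lda(). In the 2-class case the decision-function weights in coef_ are parallel to that direction, since both are proportional to S_W^(-1)(m1 − m0); only the scale (and possibly the sign) differs.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Two-class subset of the iris data (4 features, classes 0 and 1).
X, y = load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]

lda = LinearDiscriminantAnalysis().fit(X, y)  # default solver='svd'

# The discriminant direction is the (single) column of scalings_.
w = lda.scalings_[:, 0]

# With the 'svd' solver, transform() projects the mean-centered data
# onto scalings_, so w fully determines the projected data.
print(np.allclose((X - lda.xbar_) @ w, lda.transform(X).ravel()))  # True

# In the 2-class case, coef_[0] points along the same line as w
# (same direction up to scale and sign).
c = lda.coef_[0] / np.linalg.norm(lda.coef_[0])
u = w / np.linalg.norm(w)
print(np.allclose(c, u) or np.allclose(c, -u))  # True
```

Conceptually, nothing changes in the p > 2 setting: with two classes there is still a single discriminant direction, now a vector in R^p.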
