
It is a general rule that for a continuous random vector $\boldsymbol{X}$ and a nonsingular square matrix $\boldsymbol{A}$, the differential entropy satisfies

$$h(\boldsymbol{A} \boldsymbol{X}) = h(\boldsymbol{X}) + \ln |\det \boldsymbol{A}|$$ (see https://en.wikipedia.org/wiki/Differential_entropy#Properties_of_differential_entropy).

What is the corresponding rule for the entropy of $\boldsymbol{X}$ multiplied by a vector $\boldsymbol{b}$?

$$ h(\boldsymbol{X} \boldsymbol{b} ) = ?$$
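For reference, here is a minimal numerical sketch of the quoted rule, assuming $\boldsymbol{X}$ is multivariate Gaussian so that the differential entropy has a closed form, $h = \tfrac{1}{2}\ln\!\big((2\pi e)^k \det\boldsymbol{\Sigma}\big)$; the matrix $\boldsymbol{A}$ and covariance $\boldsymbol{\Sigma}$ below are illustrative choices, not from any particular dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
A = rng.standard_normal((k, k))            # a generic (almost surely nonsingular) k x k matrix
Sigma = np.eye(k) + 0.5 * np.ones((k, k))  # an illustrative covariance matrix

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a Gaussian with covariance cov."""
    kdim = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** kdim * np.linalg.det(cov))

h_X = gaussian_entropy(Sigma)
h_AX = gaussian_entropy(A @ Sigma @ A.T)   # AX ~ N(A mu, A Sigma A^T)

# Both sides of h(AX) = h(X) + ln|det A| agree up to floating-point error:
print(h_AX, h_X + np.log(abs(np.linalg.det(A))))
```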

develarist
  • Emulate the analysis given at https://stats.stackexchange.com/questions/415435. It's unclear what you mean by your "also" question, since all of this is explicitly about distributions. – whuber Sep 25 '20 at 22:34
  • Thanks, I edited the last part. The question here is what the (non-reduced) entropy is of a multivariate dataset multiplied by a vector of weights, each weight corresponding to one univariate component of the multivariate distribution. It is not about the effect of shift or scale; your link is about univariate entropy only. – develarist Sep 25 '20 at 22:55
  • The rule you linked assumes X is a random vector, not a random matrix. So, by Xb, do you mean the inner product of X and b? – PedroSebe Sep 26 '20 at 01:15
  • My link is about *how to work with entropy.* The method there extends, with no essential change, to multivariate problems. – whuber Sep 27 '20 at 12:35
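To illustrate how the univariate machinery carries over, here is a hedged sketch for the Gaussian special case, assuming $\boldsymbol{X}\boldsymbol{b}$ denotes the inner product $\boldsymbol{b}^\top \boldsymbol{X}$ (one of the readings raised in the comments): then $\boldsymbol{b}^\top \boldsymbol{X}$ is univariate Gaussian with variance $\boldsymbol{b}^\top \boldsymbol{\Sigma} \boldsymbol{b}$, and the ordinary univariate entropy formula applies. The covariance and weights below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
Sigma = np.eye(k) + 0.5 * np.ones((k, k))  # illustrative covariance of X
b = rng.standard_normal(k)                 # illustrative weight vector

# b^T X is univariate Gaussian with variance b^T Sigma b, so the
# univariate formula h = 0.5 * ln(2 * pi * e * sigma^2) applies directly.
var_proj = b @ Sigma @ b
h_proj = 0.5 * np.log(2 * np.pi * np.e * var_proj)
print(h_proj)                              # entropy in nats
```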

0 Answers