
I'm not a mathematician, so I have a feeling this question has an answer and that I'm just using the wrong words to search for it.

In a neural network, you have an input vector of $n$ features ($x_{1}, x_{2}, \ldots, x_{n}$), one or more hidden layers, and an output vector. There are several established ways to do input feature selection, but I was wondering whether there is an established way to measure the degree to which any two features "interact" to produce the output, taking into account the weights across all of the hidden layers. For example, a way to say that the combination of $x_{1}$ and $x_{2}$ carries more information about the output than the combination of $x_{1}$ and $x_{3}$, after normalizing for the linear dependence or mutual information between the two features themselves.
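To make concrete what I mean by "interact", here is a toy sketch (not an established method; the network, its weights, and the finite-difference scheme are all made up for illustration). One reading of "interaction" is the mixed second derivative $\partial^2 y / \partial x_i \partial x_j$ of the trained network's output with respect to a pair of features, averaged over the data: if the network were purely additive in $x_i$ and $x_j$, that term would be zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network y = w2 . tanh(W1 x + b1) + b2 (weights are
# random placeholders; in practice these would come from a trained model).
n_features, n_hidden = 4, 8
W1 = rng.normal(size=(n_hidden, n_features))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

def net(x):
    return w2 @ np.tanh(W1 @ x + b1) + b2

def interaction(x, i, j, h=1e-3):
    """Central finite-difference estimate of d^2 net / dx_i dx_j at x."""
    ei = np.zeros_like(x); ei[i] = h
    ej = np.zeros_like(x); ej[j] = h
    return (net(x + ei + ej) - net(x + ei - ej)
            - net(x - ei + ej) + net(x - ei - ej)) / (4 * h * h)

# Average the absolute cross-derivative over a (hypothetical) data sample.
X = rng.normal(size=(200, n_features))
score_12 = np.mean([abs(interaction(x, 0, 1)) for x in X])
score_13 = np.mean([abs(interaction(x, 0, 2)) for x in X])
print(f"interaction(x1, x2) ~ {score_12:.4f}")
print(f"interaction(x1, x3) ~ {score_13:.4f}")
```

This captures non-additive interaction but not the "normalizing for mutual information between the two features" part, which is really what I'm asking about.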

thefourtheye
Possible duplicate of [Deep learning : How do I know which variables are important?](https://stats.stackexchange.com/questions/261008/deep-learning-how-do-i-know-which-variables-are-important) – Sycorax Aug 19 '18 at 15:37

0 Answers