Generally speaking, Information Value (IV) measures how well a variable $X$ separates the two levels of a binary target $Y$ (e.g. "good" versus "bad"). The idea is that if $X$ has a low Information Value, it does not do enough to discriminate the target, and so it is removed as an explanatory variable.
To see how this works, let $X$ be grouped into $n$ bins. Each $x \in X$ corresponds to a $y \in Y$ that may take one of two values, say 0 or 1. Then for bins $X_i$, $1 \leq i \leq n$,
$$
IV = \sum_{i=1}^n (g_i - b_i)\,\ln\!\left(\frac{g_i}{b_i}\right)
$$
where
$b_i = (\#$ of $0$'s in $X_i)/(\#$ of $0$'s in $X)$, the share of all $0$'s that fall in bin $i$,
$g_i = (\#$ of $1$'s in $X_i)/(\#$ of $1$'s in $X)$, the share of all $1$'s that fall in bin $i$.
$\ln(g_i/b_i)$ is also known as the Weight of Evidence (WoE) for bin $X_i$. Cutoff values vary and the choice is subjective; I often remove variables with $IV < 0.3$ (as does [1] below).
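As a rough illustration of the formula, here is a minimal Python sketch (the function name, the toy data, and the decision to skip bins with an empty class are my own choices for this example, not part of any standard library):

```python
import numpy as np

def information_value(x_bins, y):
    """Compute IV for a binned variable against a binary target.

    x_bins : array of bin labels, one per observation
    y      : array of 0/1 target values, aligned with x_bins
    """
    x_bins = np.asarray(x_bins)
    y = np.asarray(y)
    n_good = (y == 1).sum()  # total number of 1's across all bins
    n_bad = (y == 0).sum()   # total number of 0's across all bins
    iv = 0.0
    for b in np.unique(x_bins):
        in_bin = x_bins == b
        g_i = (y[in_bin] == 1).sum() / n_good  # share of all 1's in this bin
        b_i = (y[in_bin] == 0).sum() / n_bad   # share of all 0's in this bin
        if g_i == 0 or b_i == 0:
            # WoE is undefined when a bin has no 0's or no 1's;
            # practitioners often add a small smoothing constant instead.
            continue
        woe = np.log(g_i / b_i)   # Weight of Evidence for this bin
        iv += (g_i - b_i) * woe
    return iv

# Toy example: 3 bins, 10 observations
x_bins = [1, 1, 2, 2, 2, 3, 3, 3, 3, 3]
y      = [0, 0, 0, 1, 1, 1, 1, 1, 1, 0]
print(information_value(x_bins, y))
```

In practice the bins come from discretizing a continuous $X$ (e.g. by quantiles), and the per-bin WoE values are often inspected alongside the total IV.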
In the context of credit scoring, these two resources should help:
[1] http://www.mwsug.org/proceedings/2013/AA/MWSUG-2013-AA14.pdf
[2] http://support.sas.com/resources/papers/proceedings12/141-2012.pdf