
I have a set of $N$ high-dimensional vectors. I use an approximation routine to speed up my computation, and now I would like to evaluate the error of the approximation. I typically use the RMSE to measure errors, but I'm stuck on how to apply it correctly here.

For each vector $\vec x_i \in \mathbb R^D$ I have an approximation $\vec y_i$.

I would compute $\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^N \|\vec x_i - \vec y_i\|_2^2}{N}} = \sqrt{\frac{\sum_{i=1}^N \sum_{j=1}^D (x_{ij} - y_{ij})^2}{N}}$.

Or is it more reasonable to scale like this: $\sqrt{\frac{\sum_{i=1}^N \sum_{j=1}^D (x_{ij} - y_{ij})^2}{N \cdot D}}$?

Or would it make more sense to just compute $\frac{\sum_{i=1}^N \|\vec x_i - \vec y_i\|_2}{N}$, and what would that be called? (A small sketch of all three candidates follows below.)
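
For concreteness, here is a minimal NumPy sketch of the three candidates (the `(N, D)` array layout and the function name `rmse_variants` are just my own illustration):

```python
import numpy as np

def rmse_variants(X, Y):
    # X, Y: arrays of shape (N, D) holding the original and approximated vectors
    diff = X - Y
    sq_norms = np.sum(diff ** 2, axis=1)           # squared Euclidean error per vector

    rmse_n = np.sqrt(sq_norms.sum() / X.shape[0])  # first formula: divide by N
    rmse_nd = np.sqrt(sq_norms.sum() / diff.size)  # second formula: divide by N * D
    mean_l2 = np.sqrt(sq_norms).mean()             # third candidate: mean Euclidean distance
    return rmse_n, rmse_nd, mean_l2
```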

How would I compute the maximum absolute deviation (MAD)?

$\mathrm{MAD} = \max_i \|\vec x_i - \vec y_i\|_2$ or $\mathrm{MAD} = \max_i \|\vec x_i - \vec y_i\|_\infty$?
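
A corresponding sketch for the two maximum-deviation candidates (again, the names are only illustrative):

```python
import numpy as np

def max_deviations(X, Y):
    # X, Y: arrays of shape (N, D)
    diff = X - Y
    max_l2 = np.linalg.norm(diff, axis=1).max()  # max_i ||x_i - y_i||_2
    max_inf = np.abs(diff).max()                 # max_i ||x_i - y_i||_inf (sup norm)
    return max_l2, max_inf
```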

I know that MAD typically stands for median absolute deviation; is there a better acronym?

ypnos
  • What you call the MAD ("maximum absolute deviation") is called the Chebyshev norm (or sup norm). – user603 Sep 06 '13 at 08:52
  • Yes, it is $L_\infty$. I wonder if you would just say "We measure the Chebyshev norm over all concatenated vectors". – ypnos Sep 06 '13 at 13:17
