
Recently, I came across newly published papers about denoising images using the Weighted Nuclear Norm Minimization (WNNM) approach, and I am wondering what the physical intuition behind it is.

The main idea is to subdivide the image into patches $Y_j$ and then estimate the noise-free patch as the solution of

$$\min\limits_{X_j}{\frac{1}{\sigma_n^2}\|X_j-Y_j\|_F^2 + \sum_i|w_i\sigma_i(X_j)|}$$

where $\sigma_n^2$ is the variance of the noise, $X_j$ is the $j^{th}$ denoised patch to estimate, $\sigma_i(X_j)$ is the $i^{th}$ singular value of the matrix $X_j$, and the weights $w_i$ are non-negative and chosen in non-ascending order so that the objective remains convex.
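For concreteness, here is a minimal sketch of the per-patch step this objective leads to, assuming the weights are ordered so that the problem decouples across singular values: each singular value of the noisy patch is soft-thresholded by $\sigma_n^2 w_i/2$. The function name, the toy data, and the exact constant convention are my own illustration, not code from the paper:

```python
import numpy as np

def wnnm_patch_denoise(Y, noise_var, weights):
    """Hypothetical sketch: solve
        min_X (1/noise_var) * ||X - Y||_F^2 + sum_i w_i * sigma_i(X)
    by weighted singular value soft-thresholding, assuming the weights
    are ordered so the problem decouples across singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Shrink each singular value by noise_var * w_i / 2, clipping at zero.
    s_hat = np.maximum(s - noise_var * np.asarray(weights) / 2.0, 0.0)
    return (U * s_hat) @ Vt

# Toy usage on a random 8x8 "patch".
Y = np.random.default_rng(1).standard_normal((8, 8))
X = wnnm_patch_denoise(Y, noise_var=0.5, weights=np.full(8, 1.0))
```

Small (noise-dominated) singular values are driven exactly to zero by the shrinkage, which is where the sparsity of the singular value spectrum comes from.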

More specifically, I would like to understand what it means for an image when the singular values of its patches (or regions) are constrained to be sparse (as the objective function encourages), and whether this optimization problem yields characteristic patterns that are the same across all images.

user2987

1 Answer


Most denoisers in image processing make a simple assumption: the underlying data has a small number of degrees of freedom, while the noise has a large number. Hence, if we represent the given noisy data with a small number of parameters, we will probably match the data while excluding most of the noise.
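A quick numerical sketch of that assumption (hypothetical toy data, not from the original answer): a low-rank "patch" concentrates its energy in a few singular values, while additive noise spreads a little energy over all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 2)) @ rng.standard_normal((2, 32))  # rank 2
noisy = clean + 0.5 * rng.standard_normal((32, 32))

print(np.linalg.svd(clean, compute_uv=False)[:5])  # only 2 significant values
print(np.linalg.svd(noisy, compute_uv=False)[:5])  # noise leaks into the rest
```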

In the case above, we limit the number of significant singular values of the basis that represents the patches of the image.
The assumption is that most patches can be represented by a dictionary with a small number of singular values, i.e., with too few degrees of freedom to create "crazy" (noise-like) patches.
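To make that concrete, here is a hypothetical hard-truncation analogue (not the WNNM algorithm itself): keeping only the dominant singular values of a noisy low-rank patch recovers the clean patch much better than the raw observation, since the discarded components carry mostly noise.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 2)) @ rng.standard_normal((2, 32))  # rank 2
noisy = clean + 0.5 * rng.standard_normal((32, 32))

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
s[2:] = 0.0                              # keep the 2 dominant singular values
denoised = (U * s) @ Vt

rel_err = lambda X: np.linalg.norm(X - clean) / np.linalg.norm(clean)
print(rel_err(noisy), rel_err(denoised))  # truncation removes most of the noise
```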

Royi