
I know there are different ways of computing the partial derivatives of an image, among others the Sobel, LoG, and Prewitt kernels.

But the simplest one is the central difference:

$$ \frac{d}{dx} f(x) \approx \frac{f(x+1) - f(x-1)}{2} \longrightarrow 0.5[1\ 0\ -1] $$

which corresponds to convolving the image with the kernel above.
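
As a quick sanity check (my own snippet, not part of the original question), the central difference can be applied with `np.convolve`. Note that convolution flips the kernel, so passing $[1\ 0\ -1]$ as written yields $(f(x+1) - f(x-1))/2$:

```python
import numpy as np

# Sample f(x) = x^2 on an integer grid; the central difference recovers
# its derivative 2x exactly at interior points.
x = np.arange(7)
f = x.astype(float) ** 2

# np.convolve flips the kernel, so [1, 0, -1] implements
# (f(x+1) - f(x-1)) / 2 in 'same' mode.
kernel = 0.5 * np.array([1.0, 0.0, -1.0])
df = np.convolve(f, kernel, mode='same')

# Interior points of df equal 2*x; the two boundary values are off
# because of the implicit zero padding.
```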

Assume the image looks like this:

$$ I(x,y) = \left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{matrix} \right) \in \mathbb R^{8\times8} $$

Convolving this image with the matrix $$G_x = 0.5\ [1\ 0\ -1] \in \mathbb R^{1 \times 3} $$ results in:

$$ \lvert I \ast G \rvert = \left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{matrix} \right) $$

The cells containing $0.5$ are the edges of the image in the $x$ direction.
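
This example can be reproduced with `scipy.signal.convolve2d` (again my own check, not part of the original post):

```python
import numpy as np
from scipy.signal import convolve2d

# The 8x8 test image: a 4x4 block of ones.
I = np.zeros((8, 8))
I[2:6, 2:6] = 1.0

# 1x3 central-difference kernel; convolve2d flips it, so this computes
# (I(x+1) - I(x-1)) / 2 along each row.
Gx = 0.5 * np.array([[1.0, 0.0, -1.0]])

edges = np.abs(convolve2d(I, Gx, mode='same'))
# Non-zero responses (all equal to 0.5) appear in columns 1, 2, 5 and 6
# of rows 2-5, exactly as in the matrix above.
```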

Now assume we extend our filter $G_x$:

$$ G_x = 0.5 \left( \begin{matrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \\ \end{matrix} \right) \in \mathbb R^{3 \times 3} $$

Now convolving this filter with our image results in:

$$ \lvert I \ast G \rvert = \left( \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1.5 & 1.5 & 0 & 0 & 1.5 & 1.5 & 0 \\ 0 & 1.5 & 1.5 & 0 & 0 & 1.5 & 1.5 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{matrix} \right) $$

Now, instead of one unique value ($0.5$) per edge pixel like we had before, each edge column carries a gradient ramp in the $x$ direction: $[0.5,\ 1,\ 1.5,\ 1.5,\ 1,\ 0.5]$.
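
The same check for the $3\times3$ kernel (my own snippet): keeping the $0.5$ factor, each edge column ramps up and down because one, two, or three rows of the white block fall under the kernel support:

```python
import numpy as np
from scipy.signal import convolve2d

I = np.zeros((8, 8))
I[2:6, 2:6] = 1.0

# 3x3 kernel: central difference in x, replicated (box-smoothed) in y.
Gx3 = 0.5 * np.array([[1.0, 0.0, -1.0]] * 3)

grad = np.abs(convolve2d(I, Gx3, mode='same'))
# Each edge column ramps 0.5, 1.0, 1.5, 1.5, 1.0, 0.5 down rows 1-6:
# the value counts how many rows of the block overlap the kernel.
```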

Now my questions:

1) Which approach is better: convolving the image with an $\mathbb R^{1 \times 3}$ filter or with an $\mathbb R^{3 \times 3}$ filter?

2) And why is one better than the other?

Thanks

arash javan

1 Answer


Non-zero pixels after differentiation provide you with two pieces of information: a potential location (where the pixel is) and a potential strength (in magnitude) of an edge pixel. Linear edge detectors are linear filters. A $3\times3$ kernel adds a smoothing effect in the orthogonal direction. It favors edges with a certain spatial extent over short, isolated pixels that do not qualify as real edges. In other words, an isolated pixel $[0,1,0]^T$ or a duplet $[1,1,0]^T$ will be assigned a smaller magnitude than three vertical ones $[1,1,1]^T$ after a $3\times3$ kernel.
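
A small numerical illustration of this point (my own construction, not from the answer): apply the $3\times3$ kernel to a single pixel, a vertical duplet, and a vertical triplet, and compare the peak magnitudes.

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 x-derivative kernel with smoothing in y (scale factor omitted here).
k = np.array([[1.0, 0.0, -1.0]] * 3)

def peak_response(pattern):
    """Peak |response| to a one-pixel-wide vertical structure."""
    I = np.zeros((9, 9))
    I[3:3 + len(pattern), 4] = pattern
    return np.abs(convolve2d(I, k, mode='same')).max()

single = peak_response([1.0])             # isolated pixel
duplet = peak_response([1.0, 1.0])        # two stacked pixels
triplet = peak_response([1.0, 1.0, 1.0])  # a short vertical edge
# single < duplet < triplet: the longer the vertical structure,
# the larger the magnitude the smoothed kernel assigns to it.
```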

In other words, the Prewitt $3\times 3$ separable gradient/smoothing kernel can be written as the outer product of a $3\times 1$ smoothing kernel and a $1\times 3$ derivative kernel:

$$ \left( \begin{matrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \\ \end{matrix} \right) = \left( \begin{matrix} 1 \\ 1 \\ 1 \\ \end{matrix} \right) \times \left( \begin{matrix} 1 & 0 & -1 \\ \end{matrix} \right) $$
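This separability can be verified numerically (my sketch, not from the original answer): filtering with the $3\times3$ kernel gives the same result as filtering with the column and row kernels in sequence, which is also cheaper for larger kernels.

```python
import numpy as np
from scipy.signal import convolve2d

col = np.array([[1.0], [1.0], [1.0]])  # 3x1 smoothing kernel (y direction)
row = np.array([[1.0, 0.0, -1.0]])     # 1x3 derivative kernel (x direction)

# Outer product of the two 1-D kernels gives the 3x3 Prewitt x-kernel.
prewitt_x = col @ row

# Separable filtering: one 2-D convolution equals two 1-D convolutions.
rng = np.random.default_rng(0)
I = rng.random((16, 16))
direct = convolve2d(I, prewitt_x, mode='same')
separated = convolve2d(convolve2d(I, col, mode='same'), row, mode='same')
# direct and separated agree up to floating-point rounding.
```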

This scheme can be generalized: any smoothing filter in one direction, combined with any derivative filter in the other (orthogonal) direction, forms a horizontal or vertical rectangular image edge detector. You can find a source in Smoothed Differentiation Filters for Images, Meer and Weiss, 1992:

For the two-dimensional case we restrict ourselves to square neighborhoods. In such neighborhoods it is always possible to define a separable two-dimensional orthonormal basis built by the Cartesian product of two identical one-dimensional bases.

and in Derivative-based Operations, or the Wikipedia page on Image derivatives.

If you have clear vertical edges with identical pixel amplitudes and little noise, the $1\times 3$ filter will do the job. Otherwise, a good $3\times 3$ filter is almost always preferable: an image is really a 2D object, and processing it with 1D tools often wastes information.
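
To make that last point concrete, here is a small comparison (my own construction, not part of the answer): a clean vertical edge and one isolated noise pixel, filtered with the $1\times3$ kernel and with a row-averaged $3\times3$ kernel.

```python
import numpy as np
from scipy.signal import convolve2d

I = np.zeros((9, 9))
I[:, 5:] = 1.0  # a clean vertical edge between columns 4 and 5
I[4, 1] = 1.0   # one isolated "noise" pixel

g1 = 0.5 * np.array([[1.0, 0.0, -1.0]])  # 1x3 central difference
g3 = np.repeat(g1, 3, axis=0) / 3.0      # 3x3: same derivative, averaged over 3 rows

r1 = np.abs(convolve2d(I, g1, mode='same'))
r3 = np.abs(convolve2d(I, g3, mode='same'))
# Both kernels give the true edge the same response (0.5), but the 3x3
# kernel cuts the isolated noise pixel's response by a factor of 3.
```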

Laurent Duval
  • You can have some hints on other operators at http://dsp.stackexchange.com/questions/19175/sobel-vs-gaussian-derivative – Laurent Duval Jan 12 '17 at 10:27
    @Laurant Duval could you give me some more references about this topic. I have already read some books about this topic but no one mentioned for example the smoothing effect in the orthogonal direction! – arash javan Jan 12 '17 at 11:04
  • I have added early references, tell me if you need more – Laurent Duval Jan 15 '17 at 13:17