
I'm confused about how to carry out the extended Euclidean algorithm. For example, here is the gcd computation for $\gcd(8000,7001)$:

$$\begin{align}8000 &= 7001\cdot1 + 999\\ 7001&=999\cdot 7+8\\ 999&=8\cdot 124+7\\ 8&=7\cdot 1+1\\ 7&=1\cdot 7+0\\ \gcd(8000,7001)&=1\end{align}$$
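For reference, this is how I reproduce those division steps in Python (just the plain remainder sequence, nothing extended yet):

```python
# Plain Euclidean algorithm: print each division step  a = b*q + r.
def gcd_steps(a, b):
    while b:
        q, r = divmod(a, b)
        print(f"{a} = {b}*{q} + {r}")
        a, b = b, r
    return a

print("gcd =", gcd_steps(8000, 7001))   # gcd = 1
```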

Now, the extended algorithm:

$$\begin{align}1 &= 8 - 1\cdot7\\ &= 8 - 1\cdot(999 - 8\cdot124)\\ &= -1\cdot999 + 8\cdot125\end{align}$$

How do you get this $8\cdot 125$ from the previous line?

Bill Dubuque
Seb

2 Answers


$\begin{eqnarray}\!\text{By the distributive law} && \,8\:\!\ \overbrace{ -\, 1\cdot(999\,-\,8\cdot 124)}^{\textstyle -1\,(a\!-\!b) = -a\! +\! b\ }\\ &=&\ 8\cdot\color{#c00}1\, -\, 999\, +\, 8\cdot\color{#c00}{124}\\ &=&\ 8\cdot\color{#0a0}{ 125} - 999\ \ \,{\rm by}\ \ \color{#c00}{124 + 1} = \color{#0a0}{125}\end{eqnarray}$

This common "$\rm\color{#90f}{back}$-substitution" form of the extended Euclidean gcd algorithm is notoriously error-prone. The following $\rm\color{#90f}{forward}$ method is simpler to compute and easier to remember: it keeps track of each remainder's expression as a linear combination of the gcd arguments. Here we get the table below, where each line $\,\ a\ \ b\ \ c\ \,$ means $\ a = 8000\, b + 7001\, c.\ $

$\qquad\quad\, \begin{array}{rrr} [\![1]\!]\ \ \ \ 8000 & 1 & 0\\ [\![2]\!]\ \ \ \ 7001 & 0 & 1\\ [\![1]\!]\:\!\ -\:\!\ 1\,[\![2]\!]\,\to\,[\![3]\!] \ \ \ \ \ \ 999 & 1 & -1\\ [\![2]\!]\ \:\!-\:\!\ 7\,[\![3]\!]\,\to\,[\![4]\!]\ \ \ \ \ \ \ \ \ \ 8 & -7& 8\\ [\![3]\!]\!-\!125\,[\![4]\!]\,\to\,[\![5]\!]\ \ \ \ \ \ \ {-}1 & \!\!\color{#c00}{876} & \!\!\!\color{#0a0}{-1001}\\ \end{array}$

Thus the final line implies that $\,-1 = \color{#c00}{876}(8000)\!\color{#0a0}{-\!1001}(7001)\,\ $ [= negated Bezout equation].
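If you want to check such tables mechanically, below is a minimal Python sketch of this forward (row-operation) method, added purely as an illustration. It uses ordinary floor-division quotients, so unlike the table above (whose last step uses a least-magnitude remainder) it terminates at the Bezout equation itself, $\,1 = -876(8000) + 1001(7001),\,$ rather than its negation.

```python
# Forward ("augmented-row") extended Euclidean algorithm: every row (r, b, c)
# maintains the invariant  r = b*x + c*y,  exactly as in the table above.
def ext_gcd(x, y):
    prev, curr = (x, 1, 0), (y, 0, 1)
    while curr[0] != 0:
        q = prev[0] // curr[0]                       # ordinary (floor) quotient
        prev, curr = curr, tuple(u - q * v for u, v in zip(prev, curr))
    return prev                                      # (g, b, c) with g = b*x + c*y

g, b, c = ext_gcd(8000, 7001)
print(f"{g} = {b}*8000 + {c}*7001")                  # 1 = -876*8000 + 1001*7001
```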


Another example: we compute $\gcd(141,19)\,$ as above, with the equations written explicitly:

$\qquad\qquad\rm\begin{eqnarray} [\![1]\!]\ \ \ \, \color{#C00}{141}\!\ &=&\,\ \ 1&\cdot& 141\, +\ 0&\cdot& 19 \\ [\![2]\!]\quad\ \color{#C00}{19}\ &=&\,\ \ 0&\cdot& 141\, +\ 1&\cdot& 19 \\ \color{#940}{[\![1]\!]-7\,[\![2]\!]}\, \rightarrow\, [\![3]\!]\quad\ \ \ \color{#C00}{ 8}\ &=&\,\ \ 1&\cdot& 141\, \color{darkorange}{-\ 7}&\cdot& 19 \\ \color{#940}{[\![2]\!]-2\,[\![3]\!]}\,\rightarrow\,[\![4]\!]\quad\ \ \ \color{#C00}{3}\ &=& {-}2&\cdot& 141\, + \color{#90f}{15}&\cdot& 19 \\ \color{#940}{[\![3]\!]-3\,[\![4]\!]}\,\rightarrow\,[\![5]\!]\quad \color{#C00}{{-}1}\ &=&\,\ \ 7&\cdot& 141\, -\color{#0A0}{ 52}&\cdot& \color{#0A0}{19} \\ {\rm negating}\ \Rightarrow\ \ \ \ \ \ {1}\ &=& {-}7&\cdot& 141\, +\color{#0A0}{ 52}&\cdot& \color{#0A0}{19}\ \ \ \rm [Bezout\ equation] \end{eqnarray}$

The prior Bezout equation $\Rightarrow 141^{-1}\equiv \color{darkorange}{-7}\pmod{\!19},\,$ & $\,\color{#0a0}{19^{-1}\!\equiv 52}\pmod{\!141}\,$ by reducing the Bezout equation $\bmod 19\,$ and $\bmod 141\,$ resp., as explained here. Thus we see that using the extended Euclidean algorithm to compute the gcd Bezout equation yields one method of computing modular inverses (and fractions). See here & here for more examples of this and related methods.
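As a quick sanity check (an aside using Python's built-in modular inverse, available since Python 3.8), the inverses read off from the Bezout equation can be verified directly:

```python
# Check the inverses obtained from  1 = -7*141 + 52*19.
print(pow(19, -1, 141))       # 52   i.e. 19^(-1) ≡ 52  (mod 141)
print(pow(141, -1, 19))       # 12   i.e. 141^(-1) ≡ -7 ≡ 12  (mod 19)
print(-7 * 141 + 52 * 19)     # 1    the Bezout identity itself
```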

Equivalently $\!\bmod 141\!:\ \dfrac{0}{\color{#c00}{141}}\overset{\large\frown}\equiv\dfrac{1}{\color{#c00}{19}}\overset{\large\frown}\equiv\dfrac{\color{darkorange}{-7}}{\color{#c00}8}\overset{\large\frown}\equiv\dfrac{\color{#90f}{15}}{\color{#c00}{3}}\overset{\large\frown}\equiv\dfrac{\color{#0a0}{-52}}{\color{#c00}{-1}}\Rightarrow \color{#0a0}{19^{-1}\equiv 52},\,$ where we used a succinct fractional form of the above extended Euclidean algorithm, which boils down to viewing the above equations $\!\bmod 141.$
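Here is a small Python sketch of that fractional viewpoint, added only as an illustration: each pair (num, den) encodes the congruence $\,\mathrm{den}\cdot t\equiv \mathrm{num}\pmod{\!m},\,$ every step preserves the solution $t$, and ordinary floor quotients are used instead of the least-magnitude quotients above (it assumes $\gcd(a,m)=1$).

```python
# Fractional form of the extended Euclidean algorithm mod m.
# Each pair (num, den) encodes the congruence  den * t ≡ num (mod m);
# every Euclidean step preserves the common solution t = a^(-1) mod m.
def inverse_by_fractions(a, m):                # assumes gcd(a, m) = 1
    top, bot = (0, m), (1, a)                  # the fractions 0/m and 1/a above
    while bot[1] not in (1, -1):
        q = top[1] // bot[1]
        top, bot = bot, (top[0] - q * bot[0], top[1] - q * bot[1])
    num, den = bot
    return (num * den) % m                     # den = ±1, so t ≡ num/den ≡ num*den

print(inverse_by_fractions(19, 141))           # 52
```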

Bill Dubuque

$$ \begin{align} 8000&=7001\cdot1+999\\ 7001&=999\cdot7+8\\ 999&=8\cdot124+7\\ 8&=7\cdot1+1\\[12pt] 1 &=8-7\cdot1\\ &=8-1(999-8\cdot124)\\ &=8\cdot125-1\cdot999\\ &=125(7001-7\cdot999)-1(8000-7001\cdot1)\\ &=126\cdot7001-1\cdot8000-875\cdot999\\ &=126\cdot7001-1\cdot8000-875\cdot(8000-7001)\\ &=1001\cdot7001-876\cdot8000 \end{align} $$

As Bill Dubuque mentions, the back-substitution method is hard to follow and therefore error-prone. There is also the Euclid-Wallis version of the Extended Euclidean Algorithm:

$$ \begin{array}{r} &&1&7&124&1&7\\\hline \color{#C00000}{1}&0&1&-7&869&\color{#C00000}{-876}&7001\\ 0&\color{#00A000}{1}&-1&8&-993&\color{#00A000}{1001}&-8000\\ \color{#C00000}{8000}&\color{#00A000}{7001}&999&8&7&\color{#0000FF}{1}&0\\ \end{array} $$

which gives $1001\cdot7001-876\cdot8000=1$.
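A short Python sketch (just an illustration, not part of the algorithm's definition) that generates the columns of the Euclid-Wallis table above; the quotients it computes are also the continued fraction coefficients of $8000/7001$:

```python
# Euclid-Wallis table, built column by column: each column (s, t, r)
# satisfies  r = s*a + t*b,  and each new column equals the column two
# back minus q times the previous column, q = quotient of the r entries.
def euclid_wallis(a, b):
    cols = [(1, 0, a), (0, 1, b)]
    while cols[-1][2] != 0:
        q = cols[-2][2] // cols[-1][2]
        cols.append(tuple(x - q * y for x, y in zip(cols[-2], cols[-1])))
    return cols

for s, t, r in euclid_wallis(8000, 7001):
    print(f"{r:6d} = {s:5d}*8000 + {t:5d}*7001")
# the column with r = 1 reads:  1 = -876*8000 + 1001*7001
```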

robjohn
  • Note that what you call the Euclid-Wallis version is just the (matrix) transpose of the version that I mention (except that I used least (signed) remainders to shorten the computation). The reason I prefer the non-transposed way is that it is faithful to the way we write the (implicit) equations, i.e. horizontally, e.g. see the final example in my post, where the implicit equations are written explicitly. – Bill Dubuque Dec 23 '13 at 22:22
  • @BillDubuque: pretty much, but there are some steps that your version skips, since you do not require positive remainders. EWA requires positive remainders so that it can give the continued fraction. – robjohn Dec 23 '13 at 22:28
  • Right, using least magnitude ("signed") remainders usually shortens the computation. The only advantage to using the longer way with strictly positive remainders is when something external requires such positivity, e.g. when computing continued fraction coefficients. Often the shorter way will be *much* shorter. – Bill Dubuque Dec 23 '13 at 22:33
  • @BillDubuque: Indeed, allowing negative (least-magnitude) remainders usually produces shorter scaffolds, and I might recommend it to more experienced users. However, I like using positive remainders, since it is closer to the standard Euclidean Algorithm, and I've found it easy to describe to new users. – robjohn Dec 23 '13 at 23:13
  • Agreed. That's usually not an issue for me because I teach least magnitude residue systems before the extended Euclidean algorithm. The augmented-matrix method works for any remainder sequence (or any sequence of linear transformations - just as does the analogous augmentation method in linear algebra, e.g. sequences of row-reductions). – Bill Dubuque Dec 23 '13 at 23:45
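To illustrate the trade-off discussed in this comment thread, here is a small Python comparison (an added illustration, not from either answer) that counts division steps under the two remainder conventions; for the pair in the question the saving is only one step, though, as noted above, for other inputs the shorter way can be much shorter.

```python
# Count Euclidean division steps with (a) ordinary positive remainders and
# (b) least-magnitude ("signed") remainders; only magnitudes are tracked.
def steps_positive(a, b):
    n = 0
    while b:
        a, b = b, a % b
        n += 1
    return n

def steps_signed(a, b):
    n = 0
    while b:
        r = a % b
        if 2 * r > b:          # a negative remainder would be smaller in magnitude,
            r = b - r          # so track its absolute value instead
        a, b = b, r
        n += 1
    return n

print(steps_positive(8000, 7001), steps_signed(8000, 7001))   # 5 4
```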