8

I'm looking for books or resources that cover the following in detail:

  • implementing mathematical functions (e.g., logarithm, exponential, sine, cosine, inverse) in fixed point arithmetic for DSP purposes.

  • techniques like using lookup tables, Taylor series, etc.

I am fairly familiar with C programming and am more interested in the algorithms for implementing various mathematical functions efficiently.

RuD
  • 83
  • 4
  • 1
    [This](http://dspguru.com/dsp/tricks/fixed-point-atan2-with-self-normalization) is just one trick, but it's very useful. It's about computing the atan2 function, i.e. computing the argument of a complex number from its real and imaginary parts. – Matt L. Feb 10 '15 at 11:31
  • 1
    i can give you some optimized finite-order power series that i developed more than a decade ago. other than the initial client, i haven't gotten any money for it since so i think i may as well make it public domain. originally it was developed for floating point in a context where the client couldn't include the stdlib in the build. and the evaluation is **not** bit perfect. i.e. there is error, but very small and optimized for the particular function being evaluated. lemme find that file, i'll pull the coefficients out and post the series's. – robert bristow-johnson Feb 11 '15 at 00:43
  • @robertbristow-johnson i look forward to making use of the series and seeing how it goes! thanks! – RuD Feb 11 '15 at 07:18
  • Ruchir, i posted the series's below. there are common-sense things you need to do to extend the range. like $$ 2^x = 2^k \times 2^{x-k} $$ if $ k \le x \le k+1 $. similar thing for $\log_2()$ and the periodic sinusoids. for exp and log, this will require arithmetic bit shifting of what comes out (for exp) or what goes in (for log). i think you can figger that out. and, of course, for exp and log of different bases (like $e$), you just scale with the appropriate constant what goes in to $2^x$ and what comes outa $\log_2(x)$. – robert bristow-johnson Feb 12 '15 at 16:26

4 Answers

9

the general polynomial form is:

$$\begin{align} f(u) &= \sum\limits_{n=0}^{N} \ a_n \ u^n \\ \\ &= a_{\small{0}} + \Bigg(a_{\small{1}} + \bigg(a_{\small{2}} + \Big(a_{\small{3}} + \,... \big(a_{\small{N-2}} + (a_{\small{N-1}} + a_{\small{N}} \,u \,)u \, \big)u \ ...\Big)u \, \bigg)u \, \Bigg)u\\ \end{align}$$

the latter form uses Horner's method, which is highly recommended, especially if you're doing this in single-precision floating point.
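
as a sketch, evaluating such a series with Horner's method in C might look like the following (`poly_eval` is just a placeholder name; it is reused in the sketches further down):

```c
/* evaluate a0 + a1*u + a2*u^2 + ... + aN*u^N with Horner's method.
   coeffs[] holds a0..aN, n is the order N. */
float poly_eval(const float *coeffs, int n, float u)
{
    float acc = coeffs[n];              /* start with the highest-order coefficient */
    for (int i = n - 1; i >= 0; i--)
        acc = acc * u + coeffs[i];      /* one multiply-accumulate per term */
    return acc;
}
```

on a fixed-point DSP the same structure maps straight onto the MAC unit; the coefficients just have to be quantized into the chosen fixed-point format.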

then for a few specific functions:

square root:

$$ \begin{align} f(x-1) & \approx \sqrt{x} \quad \quad 1 \le x \le 2 \quad \quad N=4\\ a_0 & = 1.0 \\ a_1 & = 0.49959804148061 \\ a_2 & = -0.12047308243453 \\ a_3 & = 0.04585425015501 \\ a_4 & = -0.01076564682800 \\ \end{align} $$

if $2 \le x \le 4$, use the above to evaluate $\sqrt{\tfrac{x}{2}}$ and multiply that result by $\sqrt{2}$ to get $\sqrt{x}$. as with $\log_2(x)$, apply power-of-$2$ scaling to bring the argument into the necessary range.
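
a floating-point sketch of that ranging, reusing `poly_eval` from the Horner sketch above (`frexpf`/`ldexpf` stand in for the bit shifts you would do in fixed point):

```c
#include <math.h>   /* frexpf, ldexpf: stand-ins for fixed-point bit shifts */

static const float sqrt_c[5] = {
    1.0f,               0.49959804148061f, -0.12047308243453f,
    0.04585425015501f, -0.01076564682800f
};

/* sqrt(x) for x > 0: write x = m * 2^e with m in [1,2),
   evaluate the series on m, and scale by 2^(e/2);
   an odd exponent folds in an extra factor of sqrt(2). */
float sqrt_approx(float x)
{
    int e;
    float m = 2.0f * frexpf(x, &e);              /* m in [1,2), x = m * 2^(e-1) */
    e -= 1;
    float s = poly_eval(sqrt_c, 4, m - 1.0f);    /* ~ sqrt(m) */
    if (e & 1) { s *= 1.41421356f; e -= 1; }     /* odd exponent: multiply by sqrt(2) */
    return ldexpf(s, e / 2);                     /* scale by 2^(e/2) */
}
```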

base 2 logarithm:

$$ \begin{align} (x-1)\cdot f(x-1) & \approx \log_2(x) \quad \quad 1 \le x \le 2 \quad \quad N=5\\ a_0 & = 1.44254494359510 \\ a_1 & = -0.7181452567504 \\ a_2 & = 0.45754919692582 \\ a_3 & = -0.27790534462866 \\ a_4 & = 0.121797910687826 \\ a_5 & = -0.02584144982967 \\ \end{align} $$
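
a floating-point sketch with the pre-shift of the argument (here via `frexpf`), again reusing `poly_eval`:

```c
#include <math.h>   /* frexpf: stand-in for the fixed-point pre-shift */

static const float log2_c[6] = {
    1.44254494359510f, -0.7181452567504f,   0.45754919692582f,
   -0.27790534462866f,  0.121797910687826f, -0.02584144982967f
};

/* log2(x) for x > 0: write x = m * 2^e with m in [1,2),
   so log2(x) = e + log2(m), and log2(m) ~ (m-1) * f(m-1). */
float log2_approx(float x)
{
    int e;
    float m = 2.0f * frexpf(x, &e);    /* m in [1,2), x = m * 2^(e-1) */
    float u = m - 1.0f;                /* u in [0,1) */
    return (float)(e - 1) + u * poly_eval(log2_c, 5, u);
}
```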

base 2 exponential:

$$ \begin{align} f(x) & \approx 2^x \quad \quad 0 \le x \le 1 \quad \quad N=4\\ a_0 & = 1.0 \\ a_1 & = 0.69303212081966 \\ a_2 & = 0.24137976293709 \\ a_3 & = 0.05203236900844 \\ a_4 & = 0.01355574723481 \\ \end{align} $$
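
and a matching sketch for $2^x$, splitting off the integer part of the exponent (the post-shift in fixed point):

```c
#include <math.h>   /* floorf, ldexpf: stand-ins for the fixed-point post-shift */

static const float exp2_c[5] = {
    1.0f,              0.69303212081966f, 0.24137976293709f,
    0.05203236900844f, 0.01355574723481f
};

/* 2^x: split x = k + r with integer k and r in [0,1),
   then 2^x = 2^k * 2^r, with 2^r ~ f(r). */
float exp2_approx(float x)
{
    float k = floorf(x);
    float r = x - k;                               /* r in [0,1) */
    return ldexpf(poly_eval(exp2_c, 4, r), (int)k);
}
```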

sine:

$$ \begin{align} x\cdot f(x^2) & \approx \sin\left(\tfrac{\pi}{2} x \right) \quad \quad -1 \le x \le 1 \quad \quad N=4 \\ a_0 & = 1.57079632679490 \\ a_1 & = -0.64596406188166 \\ a_2 & = 0.07969158490912 \\ a_3 & = -0.00467687997706 \\ a_4 & = 0.00015303015470 \\ \end{align} $$

cosine (use sine):

$$ \cos(\pi x) = 1 \, - \, 2 \, \sin^2 \left(\tfrac{\pi}{2} x \right) $$
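
a sketch of the sine (and, through the identity above, the cosine), with the argument wrapped back into $[-1,1]$; the function names are just placeholders and `poly_eval` is the Horner helper from above:

```c
#include <math.h>   /* floorf, only for wrapping the argument */

static const float sin_c[5] = {
    1.57079632679490f, -0.64596406188166f, 0.07969158490912f,
   -0.00467687997706f,  0.00015303015470f
};

/* sin(pi/2 * x) for x in [-1,1]: sin ~ x * f(x^2) */
float sin_halfpi(float x)
{
    return x * poly_eval(sin_c, 4, x * x);
}

/* sin(pi/2 * x) for any x: the function has period 4 in x,
   so wrap into [-1,3) and mirror the second half-period. */
float sin_approx(float x)
{
    x -= 4.0f * floorf((x + 1.0f) * 0.25f);   /* now -1 <= x < 3 */
    if (x > 1.0f)
        x = 2.0f - x;                          /* sin(pi/2*(2-x)) = sin(pi/2*x) */
    return sin_halfpi(x);
}

/* cos(pi * x) = 1 - 2*sin^2(pi/2 * x) */
float cos_pi(float x)
{
    float s = sin_approx(x);
    return 1.0f - 2.0f * s * s;
}
```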

tangent:

$$ \tan(x) = \frac{\sin(x)}{\cos(x)} $$

inverse tangent:

$$ \begin{align} \frac{x}{f(x^2)} & \approx \arctan(x) \quad \quad -1 \le x \le 1 \quad \quad N=4 \\ a_0 & = 1.0 \\ a_1 & = 0.33288950512027 \\ a_2 & = -0.08467922817644 \\ a_3 & = 0.03252232640125 \\ a_4 & = -0.00749305860992 \\ \end{align} $$

$$ \arctan(x) = \tfrac{\pi}{2} - \arctan\left( \tfrac{1}{x} \right) \quad \quad 1 \le x $$

$$ \arctan(x) = -\tfrac{\pi}{2} - \arctan\left( \tfrac{1}{x} \right) \quad \quad x \le -1 $$
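
a sketch of the arctangent with the two reflection identities folded in (note the divide in $x/f(x^2)$, which is usually the expensive step on a fixed-point DSP):

```c
static const float atan_c[5] = {
    1.0f,              0.33288950512027f, -0.08467922817644f,
    0.03252232640125f, -0.00749305860992f
};

/* arctan(x) for |x| <= 1: arctan ~ x / f(x^2) */
float atan_core(float x)
{
    return x / poly_eval(atan_c, 4, x * x);
}

/* arctan(x) for any x, via the reflection identities above */
float atan_approx(float x)
{
    if (x >  1.0f) return  1.57079632679490f - atan_core(1.0f / x);
    if (x < -1.0f) return -1.57079632679490f - atan_core(1.0f / x);
    return atan_core(x);
}
```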

inverse sine:

$$ \arcsin(x) = \arctan\left( \frac{x}{\sqrt{1-x^2}} \right)$$

inverse cosine:

$$\begin{align} \arccos(x) &= \frac{\pi}{2} - \arcsin(x) \\ &= \frac{\pi}{2} - \arctan\left( \frac{x}{\sqrt{1-x^2}} \right)\\ \end{align}$$

robert bristow-johnson
  • 16,504
  • 3
  • 29
  • 68
  • That seems pretty useful! Thank you a lot for sharing that. But still first reference entry in `man sox` is best ;) – jojek Feb 12 '15 at 15:52
  • dunno `sox`. what does the manual say about it? – robert bristow-johnson Feb 12 '15 at 15:56
  • 2
    Simply `[1] R. Bristow-Johnson, Cookbook formulae for audio EQ biquad filter coefficients, http://musicdsp.org/files/Audio-EQ-Cookbook.txt` :) – jojek Feb 12 '15 at 15:58
  • BTW, the series's minimizes the maximum **weighted** error. error is weighted in such a manner that makes sense for the function. uniform maximum error for $\log_2()$. proportional maximum error for $\sqrt{x}$ and $2^x$. something similar to proportional for $\sin()$ having to do with computing coefficients for resonant filters so that the max error of $\log(f_0)$ is minimized. i can't remember what criterion i used for $\arctan()$. and for some reason i can't find my file that tells me **what** the maximum errors are, given the range of $x$. someone with MATLAB can find out. – robert bristow-johnson Feb 12 '15 at 16:04
  • the other thing i tried to do was to have zero error at the end-points (so the remez exchange alg was modified a little), and i chose the order of the series (even or odd) so that when the functions are spliced with copies of themselves to extend the range, the slope of the error is nearly continuous at the splice. if you plot the difference of the approximation to the real thing, you'll see what i mean. – robert bristow-johnson Feb 12 '15 at 16:10
  • [**Results**](http://ideone.com/BwcT47) are pretty cool! – jojek Feb 12 '15 at 16:45
  • 1
you should plot the error with your python, if you can. also `np.max(np.abs(sqrt_1px(xp)-np.sqrt(1+xp)))` might be instead `np.max(np.abs((sqrt_1px(xp)-np.sqrt(1+xp))/np.sqrt(1+xp)))`, and same for `2**x`. the error weighting for the sin is different and i'll have to find how i did that. i have old MATLAB scripts that used to sorta work in Octave, but now i can't even get Octave to plot on my old G4 Mac laptop. – robert bristow-johnson Feb 13 '15 at 00:57
  • Anyone can feel free to play with the code. I had to remove matplotlib calls since ideone doesn't support it. – jojek Feb 13 '15 at 07:50
  • hay @jojek, how does the $\arctan(x)$ do? just for $-1 \le x \le 1$. unweighted error for $\cos()$ should be no different than $\frac{1}{2}\sin^2()$. – robert bristow-johnson Feb 13 '15 at 22:26
  • @robertbristow-johnson Thanks a lot! Gonna apply it and see what kind of performance its giving on my DSP compared to the present look-up table algorithm I'm using – RuD Feb 16 '15 at 14:24
it's gonna cost more than look-up in instruction count, but the error and memory consumption will be far less (you can trade those two costs off against each other with table lookup). for $e^x$ and $\log(x)$, you will need to pre- or post-scale your result from $2^x$ and $\log_2(x)$. and for the latter two, there is post- or pre-bit shifting you'll have to do. other ranging you need for $\sin\left( \frac{\pi}{2}x\right)$. looks like the $\arctan(x)$ is naturally scaled. – robert bristow-johnson Feb 16 '15 at 20:32
  • @robertbristow-johnson Do you happen to have any series for `f(x) = 1/x` - the reciprocal function? I am trying to implement that too using a look-up table and have been struggling with it for a while and would appreciate any help or pointers on the topic. – RuD Feb 17 '15 at 05:02
  • i don't. probably what i would recommend is $$ \frac{1}{x}=x^{-1}= 2^{-\log_2(x)} $$. two function calls and negating the intermediate result. shouldn't be too hard. and, if you deal with the ranging, the pre-shifting for $\log_2(x)$ and the post-shifting for $2^x$, you've dealt with the ranging you would have had done if you implemented $x^{-1}$ directly where it's defined for $1 \le x < 2$. – robert bristow-johnson Feb 17 '15 at 20:30
  • Would you mind adding maximum error statistics as well as the coefficients? – Jason S Jul 02 '19 at 22:58
  • i'm considering it @JasonS . jojek has some information in his **Results** above. but the way that error was weighted is not reflected in his results. all's i remember at this time was **uniform** error weighting for $\log_2(x)$, **proportional** error for $2^x$ and $\sqrt{x}$ and something similar to proportional for $\sin(\frac{\pi}{2}x)$. – robert bristow-johnson Jul 03 '19 at 01:04
2

Although not specific to fixed point, I would highly recommend the "Math Toolkit for Real-Time Programming" book by Jack Crenshaw. It comes with a CD containing the source code.

David
  • 2,596
  • 9
  • 12
2

TI has IQMath libraries for all of their fixed point microcontrollers. I have found them to be a goldmine of fixed-point math and DSP functions not necessarily limited to TI chips.

MSP430, C28x

Otto Hunt
  • 21
  • 2
  • I'm more interested in the algorithms rather than just the implementation of the functions – RuD Feb 16 '15 at 14:26
1

Chebyshev approximation can help compute polynomial coefficients that are close to optimal for approximating a function over a finite range. You run the approximation routine on a PC to obtain a particular set of polynomial coefficients, which you can then apply on whatever platform you like (e.g. embedded / DSP); a small offline fitting sketch follows the list below. The fine print is more or less as follows:

  • This only works for functions of one variable; if you have some function $z=f(x,y)$ then Chebyshev approximation isn't going to help you.
  • The function you're approximating should be "polynomial-like". Corners, sharp bends, and lots of wiggles will require higher-order polynomials to achieve a given level of accuracy.
  • Keeping the polynomial order low is important -- above 5th or so, you may start to see numerical errors.
  • Chebyshev approximation uses the Chebyshev polynomials evaluated over the domain [-1,1], so if the input range for your function is significantly different, you may need to scale/offset the input appropriately before determining the coefficients and before applying them. For example, if I care about a function over the input range $x \in [0,20]$ then I might define $u = (x\times 0.1) - 1$ so that $u$ ranges from -1 to +1 and I can evaluate a polynomial in $u$ to compute the required result. Without this scaling, you can run into numerical errors more easily -- or stated another way, for the same precision you may need higher bit lengths to compute intermediate values, and this is usually undesirable.
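
As a sketch of the offline step, the classic Chebyshev-fit recipe is only a few lines of C (run on a PC, not the target; `cheb_fit` is just an illustrative name):

```c
#include <math.h>

#define CHEB_PI 3.14159265358979323846

/* Fit n Chebyshev coefficients c[0..n-1] to f on [a,b], so that
   f(x) ~ sum_{j=0}^{n-1} c[j]*T_j(u) - c[0]/2, with u = (2x-a-b)/(b-a).
   Run offline; then either evaluate with Clenshaw's recurrence on the
   target, or convert to an ordinary polynomial first. */
void cheb_fit(double a, double b, double (*f)(double), double *c, int n)
{
    double fv[64];                                     /* assumes n <= 64 */
    for (int k = 0; k < n; k++) {
        double u = cos(CHEB_PI * (k + 0.5) / n);       /* Chebyshev node in [-1,1] */
        fv[k] = f(0.5 * (b - a) * u + 0.5 * (b + a));  /* map the node into [a,b] */
    }
    for (int j = 0; j < n; j++) {
        double s = 0.0;
        for (int k = 0; k < n; k++)
            s += fv[k] * cos(CHEB_PI * j * (k + 0.5) / n);
        c[j] = 2.0 * s / n;
    }
}
```

For example, `cheb_fit(1.0, 2.0, sqrt, c, 6)` gives a degree-5 approximation (in the Chebyshev basis) to $\sqrt{x}$ on $[1,2]$, of accuracy comparable to the series posted above.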
Jason S
  • 959
  • 9
  • 14
  • Jason, i did not use Tchebyshev polynomials, but i did use a weighted MinMax error (sometimes called the "$L_\infty$ norm" on error), which is, i thought, the same as Tchebyshev approximation. the method i used was Remez Exchange Algorithm. – robert bristow-johnson Jul 03 '19 at 01:11
Remez is superior to (albeit more complex than) Chebyshev. Chebyshev just approximates the minimax condition, but usually it's good enough. – Jason S Jul 04 '19 at 22:27