Lovely. In about 1974 I was paid to write a function, in IBM 360 assembly language, to compute square roots. I was asked to make it as efficient as possible. I was in my last year as an undergraduate student. I used a Chebyshev approximation for the initial guess (after scaling the input to lie between 0 and 1), and then used two (or was it three) unrolled iterations of Newton's method to get the solution. First money I ever received for writing code!
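For the curious, here's a minimal sketch of that scheme in Python. The degree-1 initial guess below is an illustrative stand-in (the original used a Chebyshev fit, and its coefficients are lost to history), and the scaling uses powers of 4 so that sqrt(m * 4^e) = sqrt(m) * 2^e:

    import math

    def my_sqrt(x):
        assert x > 0.0
        # Scale x = m * 4**e with m in [0.25, 1), so sqrt(x) = sqrt(m) * 2**e.
        e, m = 0, x
        while m >= 1.0:
            m, e = m / 4.0, e + 1
        while m < 0.25:
            m, e = m * 4.0, e - 1
        y = 0.35 + 0.67 * m        # cheap linear initial guess, good to ~0.03
        for _ in range(3):         # a few (unrolled, back then) Newton steps
            y = 0.5 * (y + m / y)
        return y * 2.0 ** e

    print(my_sqrt(2.0), math.sqrt(2.0))   # agree to full precision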
Love this -- I still recall how eye-opening my first course on numerical methods was, appreciating for the first time the sheer potential of compute.
Going back twenty years, "numerical analysis", "numerical methods", or sometimes "numerical optimization" are good keywords for finding current trending texts on Amazon etc. [1]
Alternatively, a subset of the numerical analysis Wikipedia page, or the algorithms covered in such a book, make a good seed filter. [2]
Ideally it covers some linear algebra (Gaussian elimination, the power method), Newton's root finding, and issues like approximating by discretization, recursion to reduce solver iterations, and convergence problems from numerical instability under IEEE 754 floating-point (e.g. fp16) limits, etc.
This is really nicely done! Great work. I fell in love with just how efficient these can be, and it explained a lot about why many of the trig and other mathematical functions implemented in 8-bit computers are the way they are.
Here's a rather wonderful original document from the BBC Research Department (I had no idea that was a thing) back in 1969 going over just what makes them so great (https://downloads.bbc.co.uk/rd/pubs/reports/1969-10.pdf).
If all you've come across are Taylor approximations, these things can seem a little like magic at first.
Sollya is probably the best modern tool for doing this. Under the hood, it does a Remez approximation followed by LLL to quantize to floating point. No use of Chebyshev directly.
Sollya can also do rational approximants, which are only faster in some circumstances, and Chebfun does not (as far as I know) account for floating point quantization, which is a big deal if you are trying to be accurate.
Possible misunderstanding: I mean a rational function in the sense of Padé approximation or CF [1], not just representing individual numbers as p/q.
I did not find anything related to this in Sollya [2].
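For what it's worth, here's the kind of thing I mean, sketched with scipy (a hedged example: this builds a Padé approximant from Taylor coefficients, which is not what Sollya does):

    import math
    from scipy.interpolate import pade

    # [2/2] Pade approximant of exp(x) from its first five Taylor coefficients.
    taylor = [1 / math.factorial(k) for k in range(5)]
    p, q = pade(taylor, 2)                   # numerator and denominator (poly1d)
    print(p(0.5) / q(0.5), math.exp(0.5))    # 1.648648... vs 1.648721...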
> Math.sin(x)/x (the sinc function) for 7 terms over [-3,3] gives coefficients c0...c6 that are all NaNs. Is this a bug?
That wouldn't exactly be a bug. The code is undoubtedly calculating the Chebyshev coefficients by evaluating the function on an n-point grid like x_j = xmin + (xmax - xmin)/2 * (1 + cos(pi*j/(n-1))), j = 0, ..., n-1. If one of those grid points happens to be exactly 0, it will try to evaluate Math.sin(0)/0, giving the NaN.
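Here's a hedged reconstruction in Python (not the app's actual source). Writing the grid with sin() keeps it exactly symmetric about the midpoint, so for odd n the middle node is exactly 0:

    import numpy as np

    n, xmin, xmax = 7, -3.0, 3.0
    k = np.arange(-(n - 1), n, 2)            # -6, -4, ..., 6
    x = (xmin + xmax) / 2 + (xmax - xmin) / 2 * np.sin(np.pi * k / (2 * (n - 1)))
    print(x)                                 # middle node is exactly 0.0
    print(np.sin(x) / x)                     # RuntimeWarning; middle entry is nan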
Another workaround is to have a slightly asymmetric range, such as [-3,+3.0000001]
The problem here is likely that your first expression was not well defined at x = 0, and the poor approximation code seemingly stumbled over it. Shame on it!
Yes, that is a bug, good catch. The app should show an error if the function is not defined at all Chebyshev nodes. As you have already discovered, it's easy to work around this issue for now.
Excellent work. I wanted to do this recently, but it was surprisingly hard to find code to calculate an approximation. I've bookmarked it for when I next need a quick approximation for a function.
Obligatory callout to Nick Trefethen et al.'s Chebfun [1], which extends this stuff in just about every direction you can think of. 'Chebfuns' are to functions as floats are to actual mathematical numbers. It's some seriously impressive software.
I'll second this. Their methods are very powerful and very fast. For those out of the loop, the Chebyshev (and ultra-spherical) machinery allows a very accurate (machine precision) approximation to most functions to be computed very quickly. Then, this representation can be manipulated more easily. This enables a variety of methods such as finding the solution to differential algebraic equations to machine precision or finding the global min/max of a 1-D function.
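You can get a small taste of that workflow with plain numpy (Chebfun does this adaptively and far more robustly; the degree and the example function here are arbitrary choices): approximate once, then differentiate and root-find on the representation to get a global minimum.

    import numpy as np

    f = lambda x: np.cos(10 * x) + 0.1 * x
    p = np.polynomial.Chebyshev.interpolate(f, 40, domain=[-1, 1])
    r = p.deriv().roots()                    # critical points of the fit
    r = r.real[(abs(r.imag) < 1e-8) & (abs(r.real) <= 1)]
    candidates = np.concatenate([r, [-1.0, 1.0]])
    print(candidates[np.argmin(f(candidates))])   # global minimizer on [-1, 1]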
I believe they use a different algorithm now, but the basic methodology that used to be used by Chebfun can be found in the book Spectral Methods in Matlab by Trefethen. Look at chapter 6. The newer methodology with ultraspherical functions can be found in a SIAM review paper titled, "A Fast and Well-Conditioned Spectral Method," by Olver and Townsend.
I've been wondering about something, and I don't know if this is the place to ask it, but here goes. I saw a video the other day about how the Nintendo 64 did not have the ability to calculate sine, so they used a lookup table from 0 to 2*PI (with some clever trick to reduce the size of the table). Would it have been possible to train a NN and store the weights, or fit a function and store the coefficients, to calculate sine and cosine?
Neural networks often use transcendental activation functions (tanh, sigmoid, etc.) internally, so evaluating one would be massively more computation than necessary.
If you have a few spare CPU cycles, a hybrid approximation could start with a sparse lookup table of values as the initial guess for a few rounds of a numerical approximation technique. Or you just store the first few coefficients of a polynomial approximation (as in the OP's work).
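A hedged sketch of that hybrid idea in Python (the table size and the two-term correction series are arbitrary choices):

    import math

    N = 256
    TABLE = [(math.sin(2 * math.pi * i / N), math.cos(2 * math.pi * i / N))
             for i in range(N)]

    def fast_sin(x):
        t = x % (2 * math.pi)
        i = int(t * N / (2 * math.pi)) % N
        d = t - 2 * math.pi * i / N          # residual angle, 0 <= d < 2*pi/N
        s, c = TABLE[i]
        # sin(x0 + d) = sin(x0)cos(d) + cos(x0)sin(d), short series for cos/sin of d
        return s * (1 - d * d / 2) + c * (d - d ** 3 / 6)

    print(fast_sin(1.0), math.sin(1.0))      # agree to ~1e-8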
Obviously you could train some kind of neural net to calculate any function, but this would never make sense for a well-known function like sine. Neural nets are a great solution when you need to evaluate something that isn't easy to analyze mathematically, but there are already many known techniques for calculating and approximating trigonometric functions.
Training a neural net to calculate sines is like the math equivalent of using an LLM to reverse a string. Sure, you *can*, but the idea only makes sense if you don't understand how fundamentally solvable the problem is with a more direct approach.
It's always worth checking whether mathematicians already have a solution to a problem before reaching for AI/ML techniques. Unfortunately, a lot of effort is probably being spent these days on AI/ML for problems that have a known, efficient, maybe even provably optimal solution that developers just don't know about.
Input: Please reverse the string "Dlrow, Olleh!"
Output (chatgpt): Sure! The reversed string is "!helleO ,worldD"
Output (liquid): The reversed string is "!ehT, Llord!"
Output (llama): The reversed string is "Hellol, Wlod."
Output (phi): The reversed string of "Dlrow, Olleh!" is "!HoleL ,owrdL" or "Hello, World!" backwards.
Output (qwen): The reversed string of "Dlrow, Olleh!" is "!hlelo ,wolrD".
Honestly some of them are doing better than I expected.
A neural network is essentially just a curve fitter, so yeah. You might find this[1] video illuminating.
The main strength of a neural network comes into play when there's a lot of different inputs, not just a handful. For the simpler cases like sin(x) we have other tools like the one posted here.
Very cool. The tinkerer in me wanted to see how quickly I could come up with a function that wouldn't be approximated well.
Math.cos(x * Math.exp(Math.cos(x * x))) is the best I've got so far: it's a deeply nested composition, which leads to rapid oscillations and steep gradients that can't easily be approximated by a Chebyshev series.
You could look into using the ChebyshevExpansion class directly. It takes as one of its arguments a callback that returns f(x) for a given x. In your case, f(x) would be your sensor values with some suitable interpolation. A more ambitious route is to add support for somehow importing tabular data into the app.
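For example (a hedged numpy sketch rather than the app's ChebyshevExpansion class; the sample data is made up): interpolate the table, then hand the interpolant to the fitter as the callback.

    import numpy as np

    xs = np.array([0.0, 0.5, 1.1, 2.0, 3.2])   # hypothetical sensor readings
    ys = np.array([1.0, 1.6, 2.1, 2.4, 2.5])
    f = lambda x: np.interp(x, xs, ys)          # linear interpolation callback
    p = np.polynomial.Chebyshev.interpolate(f, 8, domain=[xs[0], xs[-1]])
    print(p.coef)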
With either of those, you're still representing your polynomial as a combination of powers: 1, x, x^2, x^3, x^4, etc.
For many purposes it's much better to represent a polynomial as a combination of Chebyshev polynomials: 1, x, 2x^2-1, 4x^3-3x, etc.
(Supposing you are primarily interested in values of x between -1 and +1. For other finite intervals, use Chebyshev polynomials but rescale x. If x can get unboundedly large, consider whether polynomials are really the best representation for the functions you're approximating.)
Handwavy account of why: Those powers of x are uncomfortably similar to one another; if you look at, say, x^4 and x^6, they are both rather close to 0 for smallish x and shoot up towards 1 once x gets close to +-1. So if you have a function whose behaviour is substantially unlike these and represent it as a polynomial, you're going to be relying on having those powers largely "cancel one another out", which means e.g. that when you evaluate your function you'll often be representing a smallish number as a combination of much larger numbers, which means you lose a lot of precision.
For instance, the function cos(10x) has 7 extrema between x=-1 and x=+1, so you should expect it to be reasonably well approximated by a polynomial of degree not too much bigger than 8. In fact you get a kinda-tolerable approximation with degree 12, and the coefficients of the best-fitting polynomial when represented as a combination of Chebyshev polynomials are all between -1 and +1. So far, so good.
If we represent the same function as a combination of powers, the odd-numbered coefficients are zero (as are those when we use the Chebyshev basis; in both cases this is because our function is an even function -- i.e., f(-x) = f(x)), but the even-numbered ones are now approximately 0.975, -4.733, 370.605, -1085.399, 1494.822, -994.178, 259.653. So we're representing this function that takes values between -1 and +1 as a sum of terms that take values in the thousands!
(Note: this isn't actually exactly the best-fitting function; I took a cheaty shortcut to produce something similar to, but not quite equal to, the minimax fit. Also, I make a lot of mistakes and maybe there are some above. But the overall shape of the thing is definitely as I have described.)
Since our coefficients will be stored only to some finite precision, this means that when we compute the result we will be losing several digits of accuracy.
(In this particular case that's fairly meaningless, because when I said "kinda-tolerable" I meant it; the worst-case errors are on the order of 0.03, so losing a few places of accuracy in the calculation won't make much difference. But if we use higher-degree polynomials for better accuracy and work in single-precision floating point -- as e.g. we might do if we were doing our calculations on a GPU for speed -- then the difference may really bite us.)
It also means that if we want a lower-degree approximation we'll have to compute it from scratch, whereas if we take a high-degree Chebyshev-polynomial approximation and just truncate it by throwing out the highest-order terms it usually produces a result very similar to doing the lower-degree calculation from scratch.
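You can reproduce the gist with numpy (again a degree-12 interpolant rather than the exact minimax fit, but the coefficient blow-up is the same):

    import numpy as np

    p = np.polynomial.Chebyshev.interpolate(lambda x: np.cos(10 * x), 12,
                                            domain=[-1, 1])
    print(np.round(p.coef, 3))     # Chebyshev basis: everything within [-1, 1]
    mono = p.convert(kind=np.polynomial.Polynomial)
    print(np.round(mono.coef, 3))  # power basis: coefficients in the thousands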
The advantage of Chebyshev polynomials over ordinary power-basis polynomials will be much less in that situation. But sometimes you're approximating something that doesn't have convenient domain-reduction relations available.
    import numpy as np

    f, degree, xmin, xmax = np.exp, 8, 0.0, 1.0     # example inputs
    p = np.polynomial.Chebyshev.interpolate(f, degree, domain=(xmin, xmax))
    print(p.coef)   # insert your code to print out some C code
Also strongly recommend some basic familiarity with the theory. The error of the best degree-n polynomial fit to `Math.abs(x)` on any interval containing 0 decays only like 1/n (Bernstein's theorem), so each extra digit of uniform accuracy costs roughly ten times as many coefficients; six digits already requires hundreds of thousands of terms.
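A quick numpy check of that slow convergence (Chebyshev interpolation rather than the true minimax fit, but both errors decay roughly like 1/n for |x|):

    import numpy as np

    x = np.linspace(-1, 1, 20001)
    for n in (10, 100, 1000):
        p = np.polynomial.Chebyshev.interpolate(np.abs, n, domain=[-1, 1])
        print(n, np.max(np.abs(p(x) - np.abs(x))))   # ~10x smaller per 10x terms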
Faster Math Functions:
https://basesandframes.files.wordpress.com/2016/05/fast-math...
https://basesandframes.files.wordpress.com/2016/05/fast-math...
Even faster math functions GDC 2020:
https://gdcvault.com/play/1026734/Math-in-Game-Development-S...
1. https://en.wikipedia.org/wiki/Numerical_analysis
2. https://chatgpt.com/share/67002e8c-c5c4-8008-83f9-a6ebc603ad...
Note: best in terms of results. The software itself is a bit of a pain to use.
[1]: https://www.jstor.org/stable/2157229
[2]: https://www.sollya.org/releases/sollya-8.0/sollya-8.0.pdf
May not have been merged yet.
Padé approximants are also less useful than you might think: it's very hard to get truly correctly rounded functions once division is involved.
Fair point about the correct rounding. We'd be fine with several ULPs.
To work around it, I handled the x near zero case by just forcing to 1.0.
    Math.abs(x) > 1e-8 ? Math.sin(x)/x : 1.0
One's first go-to method should be Chebyshev; neural nets are a last resort.
[1]: https://www.chebfun.org
Neural nets can be useful when you have samples of a function but no idea how to approximate it, but that's not the case here.
[1]: https://www.youtube.com/watch?v=FBpPjjhJGhk ("But what is a neural network REALLY?")
It doesn't handle divide-by-zero very well though, e.g. f(x) = 1/x. That should probably be treated as undefined rather than as a bad expression.
I'd like to generate a Chebyshev approximation for a set of X, Y sensor values. Any hints on how to modify your code to do that?
Any chance you can add a rational function version?
Surely you shouldn't get those effects then?