From Wikipedia
Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms.
The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log N), compared to O(εN^(3/2)) for the naïve DFT formula (Gentleman and Sande, 1966), where ε is the machine floating-point relative precision.
In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log N)) for Cooley–Tukey and O(ε √N) for the naïve DFT (Schatzman, 1996).
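The gap between the two rms error growth rates can be observed empirically. The sketch below (a NumPy illustration; `naive_dft` and all variable names are mine, not from the article) runs both a direct O(N²) DFT summation and NumPy's FFT in single precision and measures each against a double-precision reference:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT sum, with no FFT structure.

    Twiddle factors are computed accurately in double precision and then
    rounded, so the measured error comes from the summation itself.
    """
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n).astype(x.dtype)
    return w @ x

rng = np.random.default_rng(42)
n = 1024
x64 = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # complex128
x32 = x64.astype(np.complex64)                              # single precision

ref = np.fft.fft(x64)  # double-precision reference transform

# rms error of each single-precision transform against the reference
err_fft = np.sqrt(np.mean(np.abs(np.fft.fft(x32) - ref) ** 2))
err_dft = np.sqrt(np.mean(np.abs(naive_dft(x32) - ref) ** 2))

print(err_fft, err_dft)  # the FFT's error is markedly smaller
```

The FFT's recursive structure sums terms pairwise, so rounding errors accumulate over only O(log N) additions per output, versus O(N) for the direct sum; in this run the naive DFT's rms error is roughly an order of magnitude larger.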
These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas.
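One such inaccurate recurrence is generating each twiddle factor from the previous one by repeated multiplication, w_k = w_{k-1}·w_1: the rounding error in w_1 is reapplied at every step and the drift grows roughly linearly in k. A small NumPy sketch (my own illustration, not a construction from the article) contrasts this with direct evaluation:

```python
import numpy as np

n = 2 ** 16
k = np.arange(n)

# Direct evaluation: each twiddle factor accurate to ~machine epsilon.
exact = np.exp(-2j * np.pi * k / n)

# Recurrence: w[k] = w[k-1] * w[1]. The rounding error in w1 is a fixed
# phase bias applied at every step, so the error compounds with k.
w1 = np.exp(-2j * np.pi / n)
recur = np.empty(n, dtype=np.complex128)
recur[0] = 1.0
for i in range(1, n):
    recur[i] = recur[i - 1] * w1

max_err = np.max(np.abs(recur - exact))
print(max_err)  # orders of magnitude above machine epsilon (~2.2e-16)
```

Careful implementations avoid this by computing twiddle factors directly, tabulating them once in higher precision, or using compensated recurrences.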
Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.
