When I use the C Model to simulate a pipelined streaming FFT with convergent rounding, the following error occurs:
" ERROR:c_model:to_hex: input val 1 is out of range -1 to <+1>"
Why does this error occur, and why do I see a mismatch between the FFT C Model and the HDL Core netlist?
This issue is fixed in FFT v7.0.
This is an infrequent problem in the C Model, and it affects only the streaming architecture.
When convergent rounding is used, the result of the final butterfly stage can be very close to +1.0. The rounding, combined with the reduction in output width, pushes this value to exactly +1.0, which lies outside the representable range and which the core itself cannot produce; this causes the C Model to issue the error message above.
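To illustrate the mechanism, here is a minimal C sketch. It assumes a hypothetical reduction from 30 fractional bits to a Q1.15 output (range [-1, +1), maximum integer value 32767); the actual widths in the core depend on your configuration. The function name and widths are illustrative, not the C Model's internals:

```c
#include <stdio.h>
#include <stdint.h>

/* Round a value with 30 fractional bits down to 15 fractional bits
 * using convergent (round-half-to-even) rounding. */
static int32_t round_convergent_q30_to_q15(int64_t x_q30)
{
    const int SHIFT = 15;                      /* fractional bits dropped   */
    int64_t half    = 1LL << (SHIFT - 1);      /* exactly one half LSB      */
    int64_t frac    = x_q30 & ((1LL << SHIFT) - 1);
    int64_t result  = x_q30 >> SHIFT;          /* truncated value           */

    if (frac > half || (frac == half && (result & 1)))
        result += 1;                           /* round up; ties go to even */
    return (int32_t)result;
}

int main(void)
{
    /* A value just below +1.0: (2^30 - 1) / 2^30 */
    int64_t almost_one = (1LL << 30) - 1;
    int32_t rounded    = round_convergent_q30_to_q15(almost_one);

    printf("rounded = %d (max representable Q1.15 = %d)\n",
           (int)rounded, (1 << 15) - 1);
    if (rounded > (1 << 15) - 1)
        printf("overflow: rounded value is exactly +1.0, "
               "outside the [-1, +1) range\n");
    return 0;
}
```

Here the input rounds up to 32768, i.e. exactly +1.0, one step beyond the largest representable Q1.15 value of 32767; this is the condition the C Model's `to_hex` check rejects.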
If this case occurs, you will see a mismatch against the core; the results of the HDL Core netlist are correct.
You can work around this problem by using truncation instead of convergent rounding, or by using a more conservative scaling schedule.
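For comparison, a sketch of the truncation path under the same hypothetical widths; the near-+1.0 input from the example above simply drops its low bits and stays representable:

```c
#include <stdint.h>

/* Truncation workaround (hypothetical Q2.30 -> Q1.15 reduction):
 * dropping the discarded bits without rounding cannot push a value
 * past +1.0, so the result stays inside the [-1, +1) range. */
static int32_t truncate_q30_to_q15(int64_t x_q30)
{
    return (int32_t)(x_q30 >> 15);   /* ((1LL << 30) - 1) >> 15 == 32767 */
}
```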
For a detailed list of LogiCORE Fast Fourier Transform (FFT) Release Notes and Known Issues, see (Xilinx Answer 29209).
Available versions of the FFT that have a C Model:
v6.0
v5.0
| AR# | 32391 |
|---|---|
| Date | 12/15/2012 |
| Status | Active |
| Type | General Article |