Abstract: |
Floating-point numbers are widely used in a variety of computing and signal-processing applications. Although floating-point representation is less error prone than fixed-point representation, error still arises when the fractional part of a result is compressed to fit back into its finite number of bits. In this paper, various rounding methods are presented and modelled with an expression. A new LGRS method (based on the Last bit, Guard bit, Round bit and Sticky bit) is proposed after a detailed comparative review of ten rounding methods; this proposal helps to determine the error introduced by any rounding scheme, so the error due to any rounding can be predicted in advance. The various rounding functions are statistically analyzed using the convergence method and illustrated graphically. The rounding algorithms are modelled in Verilog HDL and synthesized with the TSMC 180 nm semi-custom technology node for single-precision floating-point representations, and their power and area are compared. The overhead of the various rounding circuits on a single-precision floating-point adder/subtractor (SPFPAS) in terms of area, power and delay is also compared. Adding a rounding circuit to the SPFPAS increases the area by 18% for Round-to-Nearest, 14% for HUB with Tie case (HUBT) and only 0.3% for Half-Unit-Biased (HUB) rounding. The power increases by 12% for Round-to-Nearest, 9% for HUBT and 1.7% for HUB. The mean error and delay are lowest for HUBT.
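As a rough illustration of the bits the abstract names (this is a generic round-to-nearest-even sketch, not the paper's LGRS model), the Last (L), Guard (G), Round (R) and Sticky (S) bits decide whether a truncated mantissa is incremented:

```python
def round_nearest_even(mant, drop):
    """Round a non-negative integer mantissa to nearest-even while
    discarding `drop` low-order bits, using the Last (L), Guard (G),
    Round (R) and Sticky (S) bits. Illustrative sketch only; the
    function name and interface are this example's, not the paper's."""
    kept = mant >> drop
    if drop == 0:
        return kept
    L = kept & 1                                        # last kept bit
    G = (mant >> (drop - 1)) & 1                        # first discarded bit
    R = (mant >> (drop - 2)) & 1 if drop >= 2 else 0    # second discarded bit
    lower = mant & ((1 << max(drop - 2, 0)) - 1)        # bits below R
    S = 1 if lower else 0                               # sticky: OR of the rest
    if G and (R or S or L):                             # round up; ties go to even
        kept += 1
    return kept

# 0b10.11 (2.75) -> 3; 0b10.10 (2.5) ties to even 2; 0b11.10 (3.5) ties to even 4
print(round_nearest_even(0b1011, 2),
      round_nearest_even(0b1010, 2),
      round_nearest_even(0b1110, 2))
```

Hardware rounding units keep only these few flag bits instead of the full discarded fraction, which is why the rounding choice changes the adder's area and power as reported above.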