single precision constant changes value

Solved!

See the attached VI. If I put a double-precision numeric constant on my block diagram, change it to SINGLE precision, then enter the value 0.001 into it, extra digits appear at the far right of the decimal. Why is it doing this? I'm using LabVIEW 2011 SP1.

Message 1 of 4
Solution
Accepted by topic author hammer3

Posting by phone and just taking a guess.

 

The precision is defined in binary bits, and many decimal fractions don't have an exact binary representation.

For example, 0.001 cannot be represented exactly in SGL or DBL.

 

This is inherent to binary floating-point representation and not language specific. You simply get the closest representable value. Set it back to DBL and change the display format to show 20 decimal digits. Same difference.
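Since a LabVIEW diagram can't be pasted as text, here's a quick Python sketch of the same effect: round-tripping 0.001 through an IEEE-754 single (what LabVIEW calls SGL) exposes the extra digits the constant displays.

```python
import struct

# Round-trip 0.001 through a 4-byte IEEE-754 single (SGL).
# struct returns the nearest representable single-precision value.
sgl = struct.unpack('f', struct.pack('f', 0.001))[0]
dbl = 0.001  # Python floats are IEEE-754 doubles (DBL)

print(f"{sgl:.20f}")  # 0.00100000004749745131 -- the "extra" digits
print(f"{dbl:.20f}")  # 0.00100000000000000002 -- closer, but still not exact
```

Neither value is exactly 0.001; single precision just rounds off much sooner.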

Message 2 of 4

It is mostly a reminder that the actual floating-point representation is different from the value that you enter. You can change the constant from automatic formatting to a fixed precision to see the friendlier 0.001.

 

Automatic formatting seems to default to 13 significant digits. If you examine the value of machine epsilon you will notice that for DBL it is about 2.2e-16, so 13 digits are no problem. For SGL, however, it is about 1.2e-7, so you get only 6-7 significant decimal digits, and showing 13 is going to get interesting.
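To illustrate the epsilon figures above, a small Python sketch (SGL epsilon is 2^-23; the DBL value comes from `sys.float_info`):

```python
import math
import sys

eps_dbl = sys.float_info.epsilon  # DBL machine epsilon, about 2.2e-16
eps_sgl = 2.0 ** -23              # SGL machine epsilon, about 1.2e-7

print(eps_dbl)  # 2.220446049250313e-16
print(eps_sgl)  # 1.1920928955078125e-07

# Rough count of reliable decimal digits for each type
print(round(-math.log10(eps_dbl)))  # ~16 digits: showing 13 is safe for DBL
print(round(-math.log10(eps_sgl)))  # ~7 digits: showing 13 is too many for SGL
```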

 

Edit: BTW, I think this is a minor bug: automatic formatting should not choose more digits than the representation supports.

Message 3 of 4

Thank you both for your help. It makes sense.

Message 4 of 4