01-28-2013 01:37 PM
See the attached VI. If I put a double-precision numeric constant on my block diagram, change it to single precision, then enter the value 0.001 into it, LabVIEW adds digits to the far right of the decimal. Why is it doing this? I'm using LabVIEW 2011 SP1.
01-28-2013 01:42 PM - edited 01-28-2013 01:55 PM
Posting by phone and just taking a guess.
The precision is defined in bits, and many decimal fractions don't have an exact binary representation.
For example, 0.001 cannot be represented exactly in SGL or DBL.
This is inherent to floating-point representation and not language-specific. You simply get the closest representable value. Set it back to DBL and change the display format to show 20 decimal digits; you'll see the same effect.
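Since this isn't LabVIEW-specific, the same effect can be demonstrated in any language. Here is a small Python sketch that prints 0.001 to 20 decimal digits as a double, and then round-trips it through 32 bits to mimic what a SGL constant stores:

```python
import struct

x = 0.001  # parsed as the nearest double-precision (binary64) value

# Printing 20 decimal digits reveals the stored value is not exactly 0.001
print(f"{x:.20f}")    # 0.00100000000000000002

# Round-trip through single precision (binary32) to mimic a SGL constant
sgl = struct.unpack("f", struct.pack("f", x))[0]
print(f"{sgl:.20f}")  # 0.00100000004749745131
```

Note how the single-precision value drifts from 0.001 much sooner (7th significant digit) than the double-precision one (17th), which is exactly what the extra digits in the SGL constant are showing.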
01-28-2013 01:49 PM - edited 01-28-2013 01:50 PM
It is mostly a reminder to you that the actual floating-point representation is different from the value that you enter. You can change the constant from automatic formatting to a fixed precision to see the more friendly 0.001.
Automatic formatting seems to like a value of 13 digits by default. If you examine the value for machine epsilon, you will notice that for DBL it is about 2e-16, so there is no problem showing 13 digits. For SGL, however, it is about 1.2e-7, so you get only 6-7 significant decimal digits, and showing 13 is going to get interesting.
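To see where those epsilon figures come from, here is a short Python sketch (not LabVIEW code, just an illustration): it reads the built-in double-precision epsilon and computes the single-precision one by halving until 1 + eps/2 rounds back to 1 in 32 bits:

```python
import struct
import sys

# Machine epsilon for double precision (binary64): ~2.22e-16,
# i.e. roughly 15-16 reliable decimal digits
print(sys.float_info.epsilon)  # 2.220446049250313e-16

def to_sgl(x):
    """Round a Python float to the nearest single-precision (binary32) value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Find single-precision epsilon: the smallest eps with 1 + eps > 1 in SGL
eps = 1.0
while to_sgl(1.0 + eps / 2) > 1.0:
    eps /= 2
print(eps)  # 1.1920928955078125e-07, i.e. roughly 6-7 reliable decimal digits
```

The computed value, 2^-23 ≈ 1.19e-7, matches the SGL epsilon quoted above, so asking a SGL constant to display 13 digits means printing about twice as many digits as the format can actually hold.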
Edit: BTW, I think this is a minor bug; automatic formatting should not choose more digits than the representation allows.
01-29-2013 06:07 AM
Thank you both for your help. It makes sense.