08-21-2009 01:09 PM
I found a curious problem with subtracting and comparing DBL numbers in LabVIEW (8.6). Here's how to reproduce it:
1. Subtract two DBL numbers such that the result is non-integral (say, 3.2 - 3.1).
2. Compare the result with a constant/control set to the expected result of the subtraction (in the example, 0.1).
3. View the output of the comparison.
If the result of the subtraction is an integer, the Boolean output is TRUE, as expected. If the result is non-integral, the comparison fails.
(Now, if I simply compare two floating-point numbers, it works, so the issue is not with the comparison node itself but with the data that is fed into it.) Can someone figure out why this happens? If it's unexpected, it's a pretty serious bug.
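To make the effect concrete in text form, here is the same thing sketched in Python (used purely for illustration; LabVIEW DBLs and Python floats are both IEEE 754 doubles, so the rounding behaviour is identical):

result = 3.2 - 3.1         # the Subtract node
expected = 0.1             # the constant wired to the comparison
print(result == expected)  # False: the "failed" comparison
print(result)              # 0.10000000000000009 (not exactly 0.1)
print(result - expected)   # roughly 8.3e-17, the leftover representation error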
08-21-2009 01:20 PM
08-21-2009 01:21 PM
A quote of a typo from a couple of years ago:
"1 can not be represnted in binary." (Christian Altenbach, LabVIEW Champion, Knight of NI)*
What he should have written was that "0.1" can't be represented exactly in binary.
The easiest way to see this for yourself is to:
1) Take a numeric control and set it to display about 14 decimal places.
2) Use your cursor tool to select the least significant digit.
3) Use your keyboard up/down arrows to increment the value.
If you look closely, you'll see that the PC is only approximating the value.
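The same experiment can be done in a couple of lines of text-based code (Python here, purely as an illustration): printing 0.1 with about 20 digits shows the nearest double, just like widening the display format of a numeric control.

print(f"{0.1:.20f}")        # 0.10000000000000000555 <- the closest double to 0.1
print(f"{3.2 - 3.1:.20f}")  # 0.10000000000000008882 <- a different nearby double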
Ben
* From what I can remember, it was the only time he has ever screwed up.
08-21-2009 06:22 PM
Thanks, that makes sense.
Also, for anyone else who might chance upon this thread: the practical workaround is to compare floating-point results against a small tolerance instead of testing for exact equality.
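A minimal sketch of that idea in Python (the name almost_equal and the 1e-9 tolerance are illustrative choices, not anything from this thread); in a LabVIEW diagram the equivalent pattern would be Subtract, then Absolute Value, then Less Than a tolerance constant.

def almost_equal(a, b, tol=1e-9):
    # True when a and b differ by less than tol (pick tol to suit your data)
    return abs(a - b) < tol

print(almost_equal(3.2 - 3.1, 0.1))  # True: the difference is only ~8e-17
print((3.2 - 3.1) == 0.1)            # False: exact equality still fails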