LabVIEW

Tricky Rounding Issue

Solved!

There are, for all practical purposes, two ways computers represent floating-point numbers. The general form is a*b^e. The natural computer way is for the base b to be 2 (binary). This is what C, C++, etc. use for float and double, and apparently LabVIEW uses it as well. The difference between single (float) and double is the number of bits used to represent the number.
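To make the base-2 issue concrete, here is a minimal sketch in Python (an assumption on my part, since LabVIEW diagrams can't be pasted as text; C doubles behave the same way):

```python
# Base-2 floating point cannot represent 0.1 exactly, so small
# errors appear even in trivial decimal arithmetic.
a = 0.1 + 0.2
print(a)            # not exactly 0.3
print(a == 0.3)     # False

# The exact base-2 value actually stored for "0.1" can be inspected:
from fractions import Fraction
print(Fraction(0.1))  # a ratio with a power-of-two denominator, not 1/10
```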

 

Computers can also represent numbers in base 10 (don't ask me how). In the .NET world, the decimal type uses base 10, and I believe DBMSs also have that type, and it looks like calculators do as well.

 

If you are doing financial calculations you need to use the decimal (base-10) type. That way you don't run into the problem of extra digits appearing after arithmetic. The drawback is that the type is inefficient, so if you are doing a lot of calculations it can take extra time.
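A sketch of the difference using Python's decimal module (my choice of language here, not anything from the thread; .NET's System.Decimal behaves similarly):

```python
from decimal import Decimal

# Binary floats pick up stray digits in simple money arithmetic...
residue = 0.1 + 0.1 + 0.1 - 0.3
print(residue)   # tiny nonzero leftover

# ...while a base-10 decimal type stays exact for these values.
total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
print(total)     # exactly zero
```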

 

LV isn't better than a calculator, just different. If you are doing heavy duty calculations where precision really counts, you need to do the analysis to see if any tool you are using is adequate.

 

0 Kudos
Message 21 of 29
(1,088 Views)

 


chuckgantz wrote:
.....

Computers can also represent numbers in base 10 (don't ask me how). In the .NET world, the decimal type uses base 10, and I believe DBMSs also have that type, and it looks like calculators do as well.

......


 


 

:D :D :D Please tell me how.

Message 22 of 29

chuckgantz wrote:
...

 

If you are doing financial calculations you need to use the decimal (base-10) type. That way you don't run into the problem of extra digits appearing after arithmetic. The drawback is that the type is inefficient, so if you are doing a lot of calculations it can take extra time.

 

LV isn't better than a calculator, just different. If you are doing heavy duty calculations where precision really counts, you need to do the analysis to see if any tool you are using is adequate.

 


 
LV comes in 32- or 64-bit versions; there is no base-10 version, as far as I know.
Your post reminds me of the T-shirt motto: "There are 10 types of people: those who understand binary and those who don't."

 

Message 23 of 29

He's referring to binary-coded decimal (BCD).

BCD representations are slower, use more space, and tend to be less accurate (assuming they are limited to the same amount of space as the binary floating-point format you're comparing against). They can be more accurate if your values and their intermediate results can be represented exactly in base 10, which does happen, commonly in accounting problems for instance.
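For anyone who hasn't seen BCD: each decimal digit gets its own 4-bit nibble instead of encoding the whole number in binary. A minimal Python sketch of packed BCD (the function name is mine, purely illustrative):

```python
# Packed binary-coded decimal: two decimal digits per byte,
# one digit per 4-bit nibble.
def to_packed_bcd(n: int) -> bytes:
    digits = str(n)
    if len(digits) % 2:          # pad to an even number of digits
        digits = "0" + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

print(to_packed_bcd(1234).hex())  # "1234": the decimal digits show up in hex
```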

 

They suffer from similar problems since they can't exactly represent numbers like 1/3. Then there's rational arithmetic, but that can't represent irrationals like pi perfectly. The tricky part when dealing with floating point, in my opinion, is dealing with compounding rounding errors. For instance, something as simple as summation can have problems with some inputs.
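A quick Python illustration of rounding errors compounding in a plain summation (again, Python as a stand-in since there's no textual LabVIEW code):

```python
import math

# Summing many values naively lets rounding errors compound;
# math.fsum tracks the lost low-order bits and returns a
# correctly rounded sum.
values = [0.1] * 10

naive = sum(values)
exact = math.fsum(values)

print(naive)   # slightly off from 1.0
print(exact)   # 1.0
```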

Message 24 of 29

It's actually not BCD, which is yet another format; some calculators use that.

 

Also, since someone asked how PCs do the Decimal format (even though I asked them not to ask me), here is what the Microsoft web site says:

 

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.

 

C#
decimal dividend = Decimal.One;
decimal divisor = 3;
// The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend/divisor * divisor);
 

 

A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.

 

The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
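The quoted range follows directly from the 96-bit unscaled integer, which is easy to check:

```python
# MaxValue of .NET's Decimal is the largest 96-bit unsigned integer.
max_value = 2**96 - 1
print(max_value)  # 79228162514264337593543950335, matching the quote above
```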

 

The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
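Python's decimal module behaves the same way (a sketch, not .NET code): trailing zeroes are preserved in the representation but ignored in comparisons.

```python
from decimal import Decimal

a = Decimal("1.10")
b = Decimal("1.1")

print(a == b)   # True: the trailing zero doesn't affect the value
print(a)        # 1.10: but it is preserved in the representation
print(b)        # 1.1
```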

Message 25 of 29

Also, in reply to nitad54448: it's true LV comes in 32- or 64-bit versions, but that's because Windows now comes in 32 and 64 bits. For many complex engineering, database, and other programs, you need to get the version of the program that corresponds to the version of Windows you are using. A 32-bit program will often run under a 64-bit OS, but a 64-bit program won't run under a 32-bit OS.

 

That's a completely different issue than the rounding problem.

 

Chuck

Message 26 of 29

I have trouble understanding HOW a decimal representation of numbers would work in an intrinsically binary world.....

 

While I was writing a response to this, I came across THIS, which is (apparently) a paper by someone working at IBM describing exactly what we're ridiculing here.

 

Even then I still have trouble imagining this, so could somebody tell me whether it's an actual area of research, or whether this paper just happened to be published on the first day of the fourth month by chance?

 

Shane

Message 27 of 29

For the life of me (and despite my prejudice) I can't help believing that decimal number representation is a real area of research by the guys who REALLY understand floating-point numbers...

 

http://www.acsel-lab.com/arithmetic/ 

 

Shane.

Message 28 of 29

Actually, the subject is hugely important for anyone doing numerical analysis. If you look in Numerical Recipes or other books on numerical analysis, you will see a lot of discussion about number systems and accuracy. It isn't restricted to the people who really understand floating-point numbers, even though you do have to understand floating-point numbers.

 

And as an aside, in my younger (much younger) days as a student, I worked on an IBM 1620. That was the last DECIMAL computer made. All of the internal arithmetic was done in decimal using decimal hardware. It had a 20,000-digit (not binary) memory which was circular, so when you reached memory location 19,999 you would next go to memory location 0. And it had a math lookup table at around position 100 which was used for multiplication and division. If you had a big program and accidentally wrote over the lookup table, you couldn't do arithmetic anymore until you rebooted. Those were the good old days.

 

Chuck

Message 29 of 29