LabVIEW Idea Exchange

kam226

Automatic Precision Roundoff in Number to String functions

Status: Declined

Any idea that has received less than 2 kudos within 2 years after posting will be automatically declined. 

Hi all,

 

Tried looking for this suggestion but could not find it so I'll bring it up:

 

Automatic precision rounding for Write to Fractional String and/or Decimal String, according to the input's bit-precision: the function determines the format string, and/or converts to string, with the number of significant figures inherent to the input's numeric type, all done automatically.

Example: a single-precision numeric, after an arithmetic operation, is expected to hold 0.52, but due to limited precision is actually stored as 0.51999998903. When you wire this value into the Write to String function and select "automatic" on the precision terminal, it rounds to the maximum precision the type can theoretically carry (0.520000).
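(As a rough text-language sketch of the behavior being asked for; Python/NumPy is used here only for illustration, and the auto_format name is hypothetical, not an existing LabVIEW or NumPy function:)

import numpy as np

def auto_format(x):
    # Hypothetical illustration of the proposed "automatic" precision terminal:
    # use the number of decimal digits the input's type can reliably carry
    # (NumPy reports 6 for float32/SGL, 15 for float64/DBL).
    digits = np.finfo(type(x)).precision
    return f"{x:.{digits}f}"

x = np.float32(0.52)          # actually stored as ~0.5199999809
print(f"{float(x):.12f}")     # 0.519999980927  (the 12-digit display)
print(auto_format(x))         # 0.520000        (the requested automatic rounding)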

 

[Attachment: suggestion.png]

7 Comments
X.
Trusted Enthusiast

Maybe that's just me, but I don't understand what you are looking for. Both outputs above look correct to me (I don't know what precision you requested in the red cross example, but I suppose that was 12-digit precision). The 0.52 number (I suppose that's what you typed into the control) has a funky SGL internal representation that results in the red cross display when asking for 12-digit precision, and the green check mark display when asking for 6-digit precision. This is not due to some "maximal precision" rule or whatever; it is just the default number of digits for the function (there's got to be one)...

kam226
Member

Since the computer knows the precision of the numeric it is fed, it ought to know the maximum number of reliable digits when the numeric is converted to a string. Allowing the computer to determine and crop to the maximum number of reliable digits when it converts to string can be handy because the string may be used in reports. The automatic rounding also provides a subtle indicator of the bit-precision of the original numeric, so tracing precision drops may be easier in the long run.
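(For what it's worth, the "maximum number of reliable digits" per type is something a text language can already look up; a small Python/NumPy sketch, assuming the goal is to build a %.*g-style format string from the input's type:)

import numpy as np

# Reliable decimal digits per IEEE type, as reported by NumPy; an
# "automatic" precision terminal could crop the string to these counts.
for t, lv in ((np.float32, "SGL"), (np.float64, "DBL")):
    digits = np.finfo(t).precision
    print(f"{lv} ({t.__name__}): {digits} reliable digits -> format string '%.{digits}g'")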

 

The caveat to this method is that it may make some people appear more knowledgeable about rounding to significant figures than they really are.  But it's hard to say whether this caveat carries any weight in computing, since computing has fixed precisions whereas science/engineering has variable precision.

X.
Trusted Enthusiast

Except again that when the number is stored by the computer, it is actually converted and the original decimal representation is lost. Unless there was a different data type with a "string value" AND the corresponding DBL (or SGL, or EXT) representation stored together, what you want cannot happen.
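(A quick Python/NumPy illustration of that point, namely that two different decimal strings can land on exactly the same stored bits, so a later conversion back to text cannot know which one was typed:)

import numpy as np

a = np.float32(0.52)
b = np.float32(0.519999980926513671875)   # the exact decimal value of that SGL
print(a == b)                             # True: both inputs map to the same value
print(a.tobytes() == b.tobytes())         # True: identical 32-bit patterns in memory
# A string conversion only sees those bits, so it cannot recover whether
# "0.52" or the longer decimal was entered; it can only round to N digits.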

kam226
Member

AFAIK, one should only need the corresponding representation stored if one wanted to convert the string back to a numeric with the same precision.  But how often does a single programmer take a numeric, convert to string, and move back to numeric within a set of programs?  Usually I see some report with string text made by some other programmer and have to convert to numeric with no knowledge of the original precision. 

RavensFan
Knight of NI

I'm still not sure what you are looking for.  Now you are talking about the opposite direction of having a string and going back to a numeric.  If you have a string text, what does it matter what the "original precision" of the number was before that programmer converted it to a string?  You can only assume that the programmer has represented the number as a string the way he wanted to.

kam226
Member

 

Darren
Proven Zealot
Status changed to: Declined

Any idea that has received less than 2 kudos within 2 years after posting will be automatically declined.