Converting binary data to float using typecast


Hello,

I am trying to create a function that reads binary data from an instrument and converts it to float.

The format is the Definite Length Block format, and the read string should contain a 32-bit float.

Will this work with a simple typecast?

 

Thanks

Solution
Accepted by topic author OnlyOne

Hi One,

 


@OnlyOne wrote:

I am trying to create a function that reads binary data from an instrument and converts it to float.

The format is the Definite Length Block format, and the read string should contain a 32-bit float.

Will this work with a simple typecast?


It depends! (It works when the received string is in the correct format…)

 

What is a "Definite Length Block" format?

Have you searched the forum for this kind of conversion? This is asked very often - and there are a lot of threads with possible solutions!

 

One possible solution:

Using Unflatten From String (instead of Type Cast) allows you to change the byte order easily…
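
(LabVIEW code is graphical, so as a text illustration only: a minimal Python sketch of the difference, using the standard struct module. The example bytes are made up.)

import struct

data = b'\x42\x28\x00\x00'  # four example bytes (made up)

# Type Cast behaves like a big-endian interpretation:
struct.unpack('>f', data)[0]  # 42.0

# Unflatten From String lets you choose the byte order, e.g. little-endian:
struct.unpack('<f', data)[0]  # ~1.4e-41, a tiny denormal - byte order matters!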

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019

Looks like a Keysight device! 😀

 

As described in the manual, there is a header in front of the data. It starts with a pound sign (#), followed by a single digit that indicates how many digits follow; those digits give the number of bytes of binary data.

After that the binary data follows, but please note that Keysight uses little-endian byte order by default. Some newer devices with suitable firmware support a :SYSTem:BORDer command that lets you set the endianness of the returned data, but it is probably safer to go with the default.
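
For reference, here is a minimal sketch of that header parsing in Python (the function name parse_definite_length_block is my own; error handling is kept minimal):

def parse_definite_length_block(raw: bytes) -> bytes:
    """Strip the definite length block header, e.g. b'#216' + 16 payload bytes."""
    if raw[0:1] != b'#':
        raise ValueError('not a definite length block')
    ndigits = int(raw[1:2])           # single digit: how many length digits follow
    nbytes = int(raw[2:2 + ndigits])  # those digits give the payload byte count
    start = 2 + ndigits
    return raw[start:start + nbytes]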

 

Since Type Cast always assumes big-endian data, you should instead use the Unflatten From String node, which lets you select the desired endianness, just as Gerd already mentioned.
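
Continuing the sketch above, the Python equivalent of Unflatten From String set to little-endian would be (with made-up example data):

import struct

raw = b'#216' + struct.pack('<4f', 1.0, 2.0, 3.0, 4.0)  # example block, 16 payload bytes

payload = parse_definite_length_block(raw)           # function from the sketch above
count = len(payload) // 4                            # number of 32-bit floats
values = struct.unpack('<%df' % count, payload)      # '<' = little-endian (Keysight default)
print(values)                                        # (1.0, 2.0, 3.0, 4.0)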

Rolf Kalbermatter
My Blog