LabVIEW


Using auto-indexing tunnels to write data files?


Hi,

 

I am using ni-motion controller to control a servo motor and collect position and torque data. I want to write the collected data to TDMS files.

 

Recently I learned about the producer/consumer design pattern and I figured that would be a good approach to ensure that writing the files didn't slow down my data collection timed loop.

 

However, I also figured out that my program seems to run well if I wire the data I collect to auto-indexing tunnels. Then I use a structure that only executes after all the data collection is done to write the arrays I built to TDMS files.

 

Is there any reason that the latter method would be advised against? Can the auto-indexing tunnels slow down my loop enough to cause concern? I am only collecting about 5000 data points for each channel.

 

Cheers,

Kenny

 

 

Message 1 of 4

Well, auto-indexing tunnels don't write data files; they just accumulate data until the loop completes. If this is a FOR loop with a known number of iterations, the size of the output data can be allocated in one swoop, which is very efficient. If you are auto-indexing on a WHILE loop, the final array size is not known, so LabVIEW needs to make a guess and requires occasional new memory allocations whenever the last guess is exceeded. This is inefficient.
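LabVIEW's exact allocation strategy isn't spelled out here, but the behavior described above is the same as any dynamically grown buffer. A rough Python sketch (assuming a common capacity-doubling strategy, which is an assumption, not LabVIEW's documented behavior) shows that the number of reallocations grows only logarithmically with the element count:

```python
# Hypothetical sketch of the WHILE-loop auto-indexing growth described
# above. Assumes a capacity-doubling strategy; LabVIEW's real internal
# policy may differ.

def count_reallocations(n_elements, initial_capacity=8):
    """Count how many times a doubling buffer must be reallocated
    (old data copied to a bigger block) to append n_elements."""
    capacity = initial_capacity
    reallocations = 0
    for size in range(1, n_elements + 1):
        if size > capacity:
            capacity *= 2        # grow: allocate new block, copy data
            reallocations += 1
    return reallocations

print(count_reallocations(5000))  # → 10
```

So even for 5000 points, a doubling buffer only reallocates about ten times; the cost is real but amortized.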

One problem with these approaches arises if the program or computer crashes. In that case the data in the shift register is lost forever, whereas if you streamed it to disk, you would have most of the data acquired so far.

 

If you use a proper producer/consumer architecture, you should be able to write the data asynchronously and it will not slow down your acquisition. No need to wait for the completion of data gathering.
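In LabVIEW this would be a queue shared between two parallel loops; the same idea in a Python sketch (names and the fake "sample" values are purely illustrative) looks like this:

```python
# Minimal producer/consumer sketch: one thread acquires, the other
# "writes", connected by a FIFO queue. Stand-in for two parallel
# LabVIEW loops sharing a queue refnum.
import queue
import threading

data_queue = queue.Queue()
SENTINEL = None  # tells the consumer that acquisition is finished

written = []     # stand-in for the TDMS file on disk

def producer(n_points):
    """Acquisition loop: enqueue each sample as soon as it's read."""
    for i in range(n_points):
        sample = i * 0.1            # stand-in for a position/torque reading
        data_queue.put(sample)
    data_queue.put(SENTINEL)

def consumer():
    """Logging loop: dequeue and write samples while acquisition runs."""
    while True:
        sample = data_queue.get()
        if sample is SENTINEL:
            break
        written.append(sample)      # stand-in for a TDMS write

t_prod = threading.Thread(target=producer, args=(5000,))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(len(written))  # → 5000
```

The key point is that the producer never blocks on disk I/O: it drops each sample into the queue and immediately goes back to acquiring.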

 

 

Message 2 of 4

So I am using a WHILE loop. I follow your explanation of why it's inefficient, but I don't have a grasp on how inefficient it will be... as in whether or not it's an inefficiency I can live with. Do you think it would be noticeable for a while loop that usually runs a few thousand times at 50 ms each?

Message 3 of 4
Solution
Accepted by topic author kdmart12

To my mind, the main consideration is timing.  If you are using Producer/Consumer, you basically do the writing while the data collection process is "waiting" for the next point to be collected.  It should be the case that Consuming will be quicker than Producing, so at any time, 99% of the data are already written, so (perish forbid!) if the program crashes, you've already got most of the data to disk.

 

Alternatively, you could use tunnels and send an array of 5000 points (from the output tunnel) to the writing process.  This forces them to be serial rather than parallel -- no writing takes place until all 5000 points are generated, and the writing process, instead of being done almost as soon as it starts, takes 5000 times as long (more or less).
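A back-of-envelope comparison of the two schedules makes the point concrete. The 50 ms loop period comes from the question; the per-point write cost is an assumed illustrative number, not a measured one:

```python
# Illustrative timing comparison of the serial vs. producer/consumer
# schedules described above. write_ms_per_point is an assumption.
acquire_ms_per_point = 50.0   # loop period from the question
write_ms_per_point = 1.0      # assumed disk-write cost per point
n = 5000

# Serial: acquire all points, then write them all afterwards.
serial_total = n * acquire_ms_per_point + n * write_ms_per_point

# Producer/consumer: writes overlap acquisition, so only the final
# point's write lands after acquisition finishes.
overlapped_total = n * acquire_ms_per_point + write_ms_per_point

print(serial_total, overlapped_total)  # → 255000.0 250001.0
```

With numbers like these the difference is small (a few seconds out of four minutes), so the bigger practical argument for Producer/Consumer is crash resilience, not raw speed.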

 

The Serial way is "simpler", particularly for a beginner.  The Producer/Consumer, if you understand the Design Pattern, would be my preference.

 

BS

Message 4 of 4