
How to store data hourly and implement cyclic storage

I bought an NI DAQ device and installed it in a car to collect acceleration data over a long period. I need to store the data in hourly files. Also, if the memory fills up within a month, the new data should overwrite the oldest data to realize cyclic storage. For example, if the data collected on April 3, April 4, and April 5 fills the memory, then the data collected on April 6 overwrites the April 3 data, the data collected on April 7 overwrites the April 4 data, and so on, so that I don't have to board the vehicle to remove the memory card, wipe it, and plug it back into the device. Can anyone help me? Thank you.

Message 1 of 13

You don't say where/how you are saving the data, nor how you are naming the files.  This sounds like a "File management" problem.

 

There are functions on the File I/O palette that can help you (look in the Advanced File Functions folder).  One is "File/Directory Info" that can tell you the sizes of the files, so you can estimate how much room you will need.  Another is "Get Volume Info", to tell you how much Total and Free Space you have.

 

The algorithm you use to decide "when to delete" is up to you to design.  I recommend not filling your drive past 90%, as you'll surely encounter file fragmentation and slower I/O.  When you see your device "getting full", you can start removing ("deleting") files, oldest first.  If all the files are more-or-less the same size, delete the oldest first, then start writing the newest.  [There is a Delete function in File I/O].
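In a text language, one version of this policy might look like the sketch below (Python standing in for LabVIEW's graphical code; the data directory, the .tdms file pattern, and the 90% threshold are assumptions for illustration). Here shutil.disk_usage plays the role of "Get Volume Info", and sorting by modification time plays the role of "File/Directory Info":

```python
# Hypothetical sketch of "delete oldest files when the volume gets full".
import shutil
from pathlib import Path

DATA_DIR = Path("/media/sdcard/daq_data")   # assumed data directory
MAX_USAGE = 0.90                            # don't fill the drive past 90%

def free_oldest_files(data_dir: Path = DATA_DIR) -> None:
    """Delete the oldest data files until usage drops below the limit."""
    usage = shutil.disk_usage(data_dir)     # like "Get Volume Info"
    # Oldest first, by modification time (like "File/Directory Info").
    files = sorted(data_dir.glob("*.tdms"),
                   key=lambda f: f.stat().st_mtime)
    while files and usage.used / usage.total > MAX_USAGE:
        files.pop(0).unlink()               # like Delete on the File I/O palette
        usage = shutil.disk_usage(data_dir)
```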

 

Bob Schor

Message 2 of 13

The attached DAQ.vi shows how I save data and name the files. I can't resolve its errors, such as an "invalid TDMS file reference" problem. I set a total of 4 files to keep: once the program continues generating data and would save more than 4 files, it deletes the file with serial number 0, which lets the data loop. But at delete time an error reports that the file is already open. What should I do?

Message 3 of 13

The delete-time error occurs because the file has not been closed properly before you delete it.

Make sure you are closing the file (File Reference Close) before deleting it, and allow a short delay of about 2 s between the close and the next operation.

I would suggest the following logic (a rough sketch in text form appears after the list).

Check & Manage Files:

1. Get the files in order.

2. Check whether the file size or the number of files exceeds the limit; if so:

3. Close the file currently being written.

4. Create a new file for writing.

5. Delete the oldest file.

6. Else, do nothing.

7. Write the data...
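A rough Python sketch of that logic follows (the file names and binary format are placeholders; in LabVIEW these would be TDMS Open/Close and Delete nodes). The key point is that step 3's close happens before step 5's delete, which is exactly what prevents the "file is already open" error:

```python
# Hypothetical sketch of the check-and-manage logic above.
import time
from pathlib import Path

MAX_FILES = 4                                    # the poster's 4-file limit

def rotate_and_write(data_dir: Path, current, data: bytes):
    files = sorted(data_dir.glob("data_*.bin"))  # 1. get the files in order
    if len(files) >= MAX_FILES:                  # 2. limit exceeded?
        current.close()                          # 3. close the file under writing
        new_path = data_dir / time.strftime("data_%Y%m%d_%H%M%S.bin")
        current = new_path.open("wb")            # 4. create a new file for writing
        files[0].unlink()                        # 5. delete the oldest file
    # 6. else: nothing to manage this pass
    current.write(data)                          # 7. write the data
    return current
```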

Regards,
Yogesh Redemptor
Message 4 of 13

The problem is that the data is collected in real time, so I cannot simply stop to store data every hour. And if we allow a two-second delay between the close and the next operation, what happens to the data that is collected in real time during those two seconds?

Message 5 of 13

There are many things that are not clear (and I can't read the font (Korean?) that you use for your variables), but here are some questions:

  • What is the rate at which you are collecting data, i.e. the "sampling rate"?
  • How much data are collected in each "sample"?
  • What is the time for one "loop" of the While Loop to take?  [I expect it to be the number of samples per read divided by the sampling rate.]

Your original Post described "hourly storage", which made me think you saved once an hour, but a more recent post suggests this isn't true, as you worry about a two-second delay.

 

I think you need to take advantage of LabVIEW being a Data Flow language and start writing routines that handle tasks in parallel.  You want one loop, which I'll call the "Producer", generating "data-to-be-saved" at some rate (the rate of the While Loop in your code); as fast as the data are "produced", they are sent out of the loop to a parallel "Consumer" loop that decides whether to write them to a file immediately, or whether, instead, to do the following:

  1. Close the current Output File.
  2. Open a new Output File (with another name).
  3. Write the record to the new Output File.

Let's say that this process of closing the old file, opening a new one, and writing the current record to the new file takes 1 second (it probably takes a fraction of this time).  Let's also say that you are generating "requests to save the data" to this Consumer loop at 5 requests/second.  This means that while you were switching output files, five more requests piled up, with data waiting to be written.  But that's the beauty of parallel loops -- the Consumer will "see" that there is more data to write, and will zip around 5 times and write those "delayed" writes before the next one comes in, so you never lose data as they are "buffered" in a structure like a Queue (or, my preference, a Stream Channel Wire).
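The buffering Bob describes can be sketched with Python threads standing in for the two LabVIEW loops (everything here is a hypothetical stand-in: the record strings, the 5 Hz rate, and the simulated 1 s file switch):

```python
# Hypothetical sketch: queued records wait out a slow file switch.
import queue
import threading
import time

data_q: queue.Queue = queue.Queue()     # plays the role of the LabVIEW Queue

def producer() -> None:
    """Put a 'record' on the queue 5 times per second, like the DAQ loop."""
    for i in range(25):
        data_q.put(f"record {i}")
        time.sleep(0.2)                 # 5 requests/second

def consumer() -> None:
    """Read records; a ~1 s file switch only delays writes, never drops them."""
    for n in range(25):
        record = data_q.get()           # blocks until data is available
        if n % 10 == 0:
            time.sleep(1.0)             # simulate close-old-file / open-new-file
        # (stand-in for the TDMS write; the "delayed" records were buffered)

threading.Thread(target=producer).start()
consumer()                              # all 25 records arrive; none are lost
```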

 

Note that this does "fragment" your output Data structure, as you will have multiple "short files", but that was part of your Design process, to break your data up into smaller chunks so that, if necessary, you could delete "old data" if you were running out of disk space.

 

And how would you handle deleting old files without "interrupting" the process of saving new ones?  Simple: have the above "Consumer" loop that writes the files, generating new files when necessary, also "play Producer" and send the names of "old files to be deleted" to a "Delete the Old File" Consumer loop, as sketched below.  As long as write requests arrive more slowly than the File I/O can handle them, say 100 requests/second (I'm guesstimating here -- you'll probably have seconds, if not minutes, between "requests to write"), you should have no trouble at all.
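In the same hypothetical Python terms, the "Consumer that also plays Producer" chain might look like this: the file-writing loop pushes the names of retired files onto a second queue, and a separate delete loop services it, so deletion never blocks writing:

```python
# Hypothetical sketch of chaining a second Consumer for deletions.
import queue
import threading
from pathlib import Path

delete_q: queue.Queue = queue.Queue()

def retire_file(old_file: Path) -> None:
    """Called by the file-writing loop right after switching to a new file."""
    delete_q.put(old_file)              # the writer "plays Producer" here

def delete_loop() -> None:
    """Runs in parallel with the writer; deletes old files as names arrive."""
    while True:
        path = delete_q.get()
        if path is None:                # sentinel from the writer: shut down
            break
        path.unlink(missing_ok=True)    # like Delete on the File I/O palette

threading.Thread(target=delete_loop, daemon=True).start()
```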

 

Bob Schor

 

Message 6 of 13

Thank you very much for your reply and help; reading it was very inspiring. Currently I store data once when the iteration terminal i = 0 and then once per second. After I stopped the running VI, I compared the file generated by wiring the data directly into TDMS Write.vi against the files generated by storing every second (after passing through the queue, or when the iteration terminal i = 0), and found that the latter had about 10,000 fewer data points than the former. How can I solve this?

Message 7 of 13

I have converted the VI into English. The data acquisition "sample rate" is 1000, and 1000 samples are collected in each read. I bought an NI cDAQ-9132 and the corresponding acquisition card. The instrument is installed on a train, and its power comes from the train, so the instrument starts when the train starts, and the cDAQ-9132 shuts down when the train rests. The collected data is used to characterize whether the train components are working normally.

Message 8 of 13

Your Producer loop looks OK to me.  You start the DAQ, then collect 1000 samples from some unknown number of channels, save them as a 1D Array of Waveforms (one per channel), and place them on a Queue to the Consumer.

 

The Consumer, however, is flawed.  Before entering the Consumer, you need to get it ready to "Consume", which means you need to open a single TDMS file and prepare it to write all of the (unknown number of) 1D Arrays of Waveforms that the Producer is sending.

 

A "feature" of the Producer/Consumer pattern is how you (properly) terminate the process.  The code you posted doesn't stop either of the Loops.

 

What generally happens is that after "Producing" for a while, the User/Operator decides "I have enough", and stops the Producer.  That's simple, but how do you stop the Consumer?  The answer is "You send the Consumer a signal that the Producer has stopped".  The way that LabVIEW used to advocate (and maybe still does!) is for the Producer to Release the Queue, causing an Error condition in the Consumer that you "trap" and say "OK, must be time to quit".  Crude!

 

A better way is to send a "unique signal" to the Consumer.  One such signal would be to have the Producer exit (because you pushed the Stop button), put one last entry, an empty Array of Waveforms (in this case), on the Queue, and then simply stop running.  You modify the Consumer so that when it dequeues its "Array of Waveforms", it checks whether the Array is empty, wired to a Case Structure.  If False, the Array is not empty, so we write it to the TDMS file and keep running.  If True, then we're done, so we wire this to the Stop Control of the While Loop; when we exit the Loop, we know the Producer has stopped, so the Consumer releases the Queue and also closes (or otherwise finalizes) the TDMS file.

[I haven't used TDMS much, so my terminology might be incorrect].
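In the same hypothetical Python terms, the "unique signal" shutdown looks like this (an empty list plays the role of the empty Array of Waveforms, and tdms_file is a placeholder for the open TDMS reference, not a real API):

```python
# Hypothetical sketch of the sentinel-based shutdown Bob describes.
import queue

def consumer_loop(q: queue.Queue, tdms_file) -> None:
    while True:
        waveforms = q.get()             # dequeue the "Array of Waveforms"
        if len(waveforms) == 0:         # empty array == "Producer has stopped"
            break                       # the True case: wire to the Stop terminal
        tdms_file.write(waveforms)      # the False case: write and keep running
    tdms_file.close()                   # after the loop, close the TDMS file
    # Only now is it safe to release the Queue: every queued record was
    # written before the sentinel was reached, so nothing is lost.
```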

 

In LabVIEW 2016, Asynchronous Channel Wires were introduced.  The Stream Channel is what I use for Producer/Consumer loops.  The Stream Writer is similar to an Enqueue, except:

  • You don't need an Obtain Queue.  The Channel is created when you first create a Channel Writer.
  • The Channel Writer has a "Last Element?" input to solve the "shutdown" problem for you.  It also has a "Valid Entry?" so you can tell the Consumer "I'm done, and I've already given you all I've got, so you can just stop now".
  • The Channel Reader, of course, also has two Indicators for "Last Element?" and "Valid Entry?".
  • And, best of all, the Channels don't send their data "backwards" through a Queue Reference, but forward "down a pipe" to their destination.

See if these comments can "fix" your code.  

 

Bob Schor

Message 9 of 13

@Bob_Schor wrote:
<snip>

What generally happens is that after "Producing" for a while, the User/Operator decides "I have enough", and stops the Producer.  That's simple, but how do you stop the Consumer?  The answer is "You send the Consumer a signal that the Producer has stopped".  The way that LabVIEW used to advocate (and maybe still does!) is for the Producer to Release the Queue, causing an Error condition in the Consumer that you "trap" and say "OK, must be time to quit".  Crude!

<snip>
Bob Schor

Not only "crude" but "flawed" as well.  Imagine that you still have a GB of data in the queue to crunch when you release the queue.  It gets dumped into the bit bucket and you lose it.  By explicitly telling the consumer that you are done via the queue, the consumer continues to munch on the data until it comes to that message, then see's "all done", does whatever housekeeping it needs to do before exiting, then exits.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 10 of 13