LabVIEW


Data timing mismatch via read and write DAQmx functions in while loop

Solved!

Hi everyone,

 

I am trying to set up a relatively straightforward system which repeatedly outputs a pre-defined array of analogue voltage values via terminal ao0 while simultaneously reading corresponding (synchronised) input voltage levels from a voltage sensor attached to an analogue input terminal ai7.

 

It is mostly working, except that the measured input V data shows a kind of "compressed" version of the expected signal. The signal is acquired, but with a much smaller number of data points than defined, and seemingly over a much shorter time than defined (see attached screenshot example). The attached front panel screenshot shows some numerical conditions, including the number of data points requested and the number received.

 

I can force the number of samples read back up to the original higher amount by re-defining it at the DAQmx Read VI in the loop, but this just extends the acquired data array with a longer trail of noise after the signal is measured. I have set an example sample count and rate of 20k, so each loop iteration should take 1 s. The output terminal ao0 is clocked from the OnboardClock; the input terminal ai7 is clocked and triggered from the output sample clock. I have a feeling the issue is related to these settings and perhaps the way I am queuing the data.

 

I am using an NI PCI-6221 DAQ connected to the sensor and output scanner via a NI breakout board. LabVIEW version is 2014.

 

Any and all help much appreciated! Many thanks in advance.

Ross

Message 1 of 4

When I run your VI with a simulated PCI-6221 it works perfectly. My only recommendation would be to request exactly 1 s of data from your DAQmx Read VI:

[attached screenshot: grafik.png]
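The "number of samples per channel" that this corresponds to is just rate × time. A quick plain-Python check of that arithmetic, using the 20 kS/s figure from the original post (no DAQ hardware or NI code involved):

```python
def samples_for_duration(rate_hz: float, seconds: float) -> int:
    """Number of samples a read should request to cover `seconds` of data."""
    return int(rate_hz * seconds)

# Requesting exactly 1 s of data at the post's 20 kS/s rate means asking
# the read for 20,000 samples per channel -- matching the 20k being written.
print(samples_for_duration(20_000, 1.0))  # → 20000
```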

 

Your front panel screenshot also indicates that an error occurred, but the error-code indicator you placed in your VI is not visible in the screenshot. I recommend evaluating that error code number...

 

Regards, Jens

 

Message 2 of 4
Solution
Accepted by topic author Ross.vi

I see some other issues to consider:

 

1. You need to start the AI task *before* the AO task to be sure their start times are sync'ed.  Then the AI task will be started and "aware" when the AO task asserts its start trigger and first sample clock pulse.  (BTW, you really don't need to configure AI triggering.  The shared sample clock alone will be at least as effective, possibly better, once you sequence the DAQmx Start's.)
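The ordering rule can be shown with a tiny stand-in model (plain Python with a hypothetical `Task` class, not the real DAQmx API): the task that listens to the shared clock must be started first, so it is already armed when the clock-owning task fires.

```python
# Minimal model of the start-order rule: the AI task must be started (armed)
# before the AO task that drives the shared sample clock. `Task` here is a
# hypothetical stand-in used only to record start order.
start_log = []

class Task:
    def __init__(self, name):
        self.name = name
    def start(self):
        start_log.append(self.name)

ai_task = Task("AI (listens to ao/SampleClock)")
ao_task = Task("AO (asserts the sample clock)")

# Correct sequencing: arm the listener first, then start the clock source.
ai_task.start()
ao_task.start()

assert start_log[0].startswith("AI")  # AI armed before AO's first clock pulse
```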

 

2. It's unclear to me why you have an AO task configured to allow regeneration, but then you keep feeding it the same data repeatedly anyway via DAQmx Write.
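Conceptually, regeneration means the driver keeps replaying the output buffer on its own, which behaves like cycling over the same array forever; re-writing identical data each iteration adds nothing. A plain-Python analogy (not DAQmx code):

```python
from itertools import cycle, islice

waveform = [0.0, 1.0, 2.0, 1.0]   # stand-in for the predefined AO array
regenerated = cycle(waveform)      # the driver keeps replaying the buffer

# With regeneration enabled, the output repeats indefinitely without any
# further writes -- no need to feed the same data back in every loop.
first_ten = list(islice(regenerated, 10))
print(first_ten)  # [0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0, 0.0, 1.0]
```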

 

3. The way you write, read, and display are probably misleading you a bit.  

  • your AI Read doesn't specify a # samples.  That means it will simply return whatever quantity happens to be available.  That # could potentially vary wildly from one iteration to the next, depending on your loop timing.
  • you aren't really controlling your loop timing very carefully.  Eventually, the AO Write will become the pacing item but not for the first x # of iterations.   Actually, you ought to write the first chunk of data *before* starting the AO task.  After that, each request to AO Write could take a variable amount of time, depending on how much space happens to be available in the task buffer (unknown), how many samples you're writing (constant), and your sample rate (constant).  It won't return until enough space has cleared up to accept all the samples you're writing OR the timeout period expires.
  • I don't see where you are showing any accumulation of data over multiple iterations.  Part of the discrepancy you see is from the expected variation in # samples read from AI from one iteration to the next due to imprecise loop timing.
  • Another thing to consider for the future:  in any given iteration, the data written to the AO task is going to a buffer and won't appear on a physical terminal until some time in the future.   Meanwhile, the data read from the AI task is coming from a buffer because it already appeared on a physical terminal some time in the past.   Consequently, those two chunks of data should generally *NOT* be expected to correspond to one another.
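The first bullet above can be illustrated numerically (a plain-Python model with invented jitter values, not measured data): a read with no sample count returns whatever has accumulated since the last read, which tracks the imprecise software loop period rather than any fixed amount.

```python
# Model of "read whatever is available": at 20 kS/s, the number of samples
# waiting in the AI buffer equals rate * elapsed time since the last read.
# The loop periods below are made-up jitter values for illustration only.
rate_hz = 20_000
loop_periods_s = [0.95, 1.03, 0.88, 1.10]   # imprecise software-timed loop

samples_returned = [round(rate_hz * dt) for dt in loop_periods_s]
print(samples_returned)  # [19000, 20600, 17600, 22000]
```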

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 4

Many thanks guys, this has helped solve the issue and better equip me for projects in future.

 

Best wishes,

Ross

Message 4 of 4