PXI-6602 Counter Input "-200141 Data Overwritten Error"

I am using a counter input on the PXI-6602 card to read an external signal's frequency.  The frequency is 830 kHz.  Most of the time I am able to read back the frequency without any issues, but about 10% of the time a frequency read request returns the following error:

 

testdesign_1-1714158289515.png

 

The DAQmx channel already uses DMA as its transfer mechanism by default, so changing that didn't help.  Here is a code snippet...

 

testdesign_2-1714158676903.png

 

How can I resolve the intermittent "data overwritten" error?  Thanks!

  

Message 1 of 4

 

Short answer: quite possibly, you can't.

 

Longer answer: the source of this problem is a combination of the tiny 2-sample FIFO on the 6602 and the shared-access PXI bus.   You're running into a PC system limitation.  About all you can do is find ways to reduce other traffic on the PXI bus while this task runs, and even that may not be enough to prevent the error.

 

The best solution would be to move to a newer DAQ device with a DAQ-STC3 timing chip (such as the PXIe-6612, or any PXIe-63xx X Series multifunction device).  Those would give you a much bigger FIFO and dedicated access on a faster bus.

 

Alternatively, you can try the suggestion in the error text: divide the signal down to a lower frequency, then scale the measurement back up.  This limits your measurement resolution, but that's likely a better tradeoff than a fatal error that prevents you from getting any measurement at all.

 

An easy way to divide by 4 or more is to configure a continuous counter pulse train that uses the external signal as its "timebase source".   Define the pulses in units of "Ticks" and set Low, High, and Initial Delay (the latter just to get in a good habit) to at least 2 each.  The sum Low + High is the integer division factor.

 

Use a different counter to measure the frequency of that first counter's output, then scale by the sum of Low + High to calculate the original frequency.
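In numeric terms, the divide-and-rescale arithmetic looks like this (a minimal Python sketch of the math only -- the actual counter configuration stays in LabVIEW/DAQmx, and the function name is mine):

```python
def recover_original_freq(measured_divided_hz: float,
                          low_ticks: int, high_ticks: int) -> float:
    """Scale a divided-down frequency reading back to the original.

    The first counter emits one full output cycle per (low + high)
    edges of the external signal, so the second counter measures
    f_original / (low_ticks + high_ticks).
    """
    divisor = low_ticks + high_ticks    # integer division factor
    return measured_divided_hz * divisor

# Example: Low = 5, High = 5 divides 830 kHz down to 83 kHz;
# multiplying the 83 kHz reading by 10 recovers the original.
print(recover_original_freq(83_000.0, 5, 5))  # → 830000.0
```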

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 4

@Kevin_Price,

 

Thanks for the thorough explanation.  Since I am limited to the PXI-6602 card for now, I was thinking of creating a DAQmx channel that counts the counter input edges and then divides the count by the sampling interval to get the frequency.  Are there any other ways to get around the 6602 card's buffer deficiency?  Thanks!
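That counting idea reduces to edges divided by the gate time; a tiny sketch of the arithmetic (function name is mine):

```python
def freq_from_edge_count(edge_count: int, gate_time_s: float) -> float:
    """Estimate frequency by counting signal edges over a fixed interval.

    The count is only resolved to +/- 1 edge, so the relative
    quantization error is roughly 1 / edge_count.
    """
    return edge_count / gate_time_s

# 830 edges counted during a 1 ms gate implies an 830 kHz signal
print(freq_from_edge_count(830, 0.001))  # → 830000.0
```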

Message 3 of 4

No, the buffer deficiency is fixed in the hardware.  It *will* put an upper limit on your sustainable sample rate.  The two main methods to reduce the sample rate are either my earlier suggestion to divide down the original pulse train or your idea to simply count the pulses during fixed intervals.  As per usual, there are tradeoffs.

 

A. Original measurement: sample implicitly at 830 kHz, the rate of the original signal.  When you don't error out, you get 830 kiloSamples/sec, each of which can have ~1% quantization error due to the 80 MHz onboard clock.

 

B. Divide down by (let's say) a factor of 10.  Sample implicitly at 83 kHz, giving you 83 kiloSamples per sec, each with ~0.1% quantization error.  You're essentially getting an average measurement across 10 consecutive intervals, so you lose some ability to see high-speed dynamics in return for your improved % error.

 

C. Count pulses over fixed intervals.  Sample *explicitly* at (let's say) 1000 Hz, giving you 1 kiloSample per sec, each with ~0.12% quantization error (1 part in 830).   Again, you're essentially getting another average measurement, but now it's across a fixed amount of time rather than a fixed # of original pulses.
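To put rough numbers on options A, B, and C above, here's a back-of-envelope sketch (the 80 MHz timebase and signal rate come from this thread; the helper functions and names are mine):

```python
TIMEBASE_HZ = 80e6   # 6602 onboard timebase
SIGNAL_HZ = 830e3    # external signal

def period_measure_error(signal_hz: float, timebase_hz: float,
                         divide_by: int = 1) -> float:
    """Relative quantization error when timing each period of the
    (possibly divided-down) signal against the fast timebase:
    a +/- 1-tick ambiguity out of the ticks counted per period."""
    ticks_per_period = timebase_hz / (signal_hz / divide_by)
    return 1.0 / ticks_per_period

def count_measure_error(signal_hz: float, gate_time_s: float) -> float:
    """Relative quantization error when counting signal edges over a
    fixed gate: +/- 1 count out of the expected count."""
    return 1.0 / (signal_hz * gate_time_s)

# A: time every period directly      -> ~1% error at 830 kS/s
print(period_measure_error(SIGNAL_HZ, TIMEBASE_HZ))
# B: divide by 10 first              -> ~0.1% error at 83 kS/s
print(period_measure_error(SIGNAL_HZ, TIMEBASE_HZ, divide_by=10))
# C: count edges in 1 ms gates       -> ~0.12% error at 1 kS/s
print(count_measure_error(SIGNAL_HZ, 0.001))
```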

 

In general, different measurement methods let you trade off sampling rates (and thus data bandwidth) with quantization error, and they aren't entirely equivalent.  Compare B & C above where both yield approximately the same % error but with quite different sample rates.  Out of apps that demand that kind of % error, some may prefer the higher rate with more data and more of the higher speed dynamics, others may prefer the lower rate with less data and dynamics.

 

 

-Kevin P

Message 4 of 4