Simulated DAQmx Device Timing

Hi,
 
My C app is calling NI-DAQmx 8.6 every 5 ms like so:
DAQmxReadAnalogF64(taskHandle, DAQmx_Val_Auto, 0.0, DAQmx_Val_GroupByScanNumber, data, chans*rate, &read, NULL);
which is supposed to read all available samples. The task is set for continuous samples at 1 kHz.
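For reference, the task is set up roughly like this (the device name, channel range, and voltage limits are placeholders, not my actual values; the relevant parts are the 1 kHz sample clock and continuous mode):

#include <NIDAQmx.h>

/* Sketch of the task setup; error checking omitted for brevity. */
static TaskHandle setupTask(void)
{
    TaskHandle taskHandle = 0;
    DAQmxCreateTask("", &taskHandle);
    DAQmxCreateAIVoltageChan(taskHandle, "Dev1/ai0:3", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(taskHandle, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(taskHandle);
    return taskHandle;
}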
 
When I run on real hardware, I get 4, 5, or 6 samples each time, which is expected because of software timing jitter. However, when I try to run on a Simulated Device, I get 0 samples one time, then 10 or 11 the next, then 0, then 10 or 11, and so on. The documentation says that Simulated Devices should simulate the timing of actual hardware. But what I'm getting is about 10 ms response instead of 5 ms. What gives?
 
Claus Buchholz
SAKOR Technologies
 
Message 1 of 4
Hi Claus,

The DAQmx simulated device will give you timing in the sense that if you acquire 1,000 samples at 1 kHz, it will take about 1 second. NI-DAQmx versions 8.1 and earlier did not have this capability, and simulated data would appear nearly instantaneously. The point is that the simulated device cannot create an unnatural speed-up in your code in places where a real device would still be waiting for samples to arrive in the buffer.
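A quick way to confirm this is to time a finite read on a simulated device (a rough sketch; it assumes a single-channel task and uses clock() for approximate timing):

#include <stdio.h>
#include <time.h>
#include <NIDAQmx.h>

/* Time how long it takes to read 1,000 samples at 1 kHz. On NI-DAQmx
   versions after 8.1, this takes about 1 second on a simulated device,
   just as it would on real hardware. */
void timeFiniteRead(TaskHandle taskHandle)
{
    float64 data[1000];   /* assumes a single channel */
    int32   read = 0;
    clock_t t0 = clock();
    DAQmxReadAnalogF64(taskHandle, 1000, 10.0,
                       DAQmx_Val_GroupByScanNumber,
                       data, 1000, &read, NULL);
    printf("Read %ld samples in %.2f s\n", (long)read,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
}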

The simulation has a finite resolution at which it can model the filling of the buffer. On a real device, the DMA transfer used to move the samples is a hardware operation: it is very fast and requires no CPU power. The samples are acquired according to the sample clock, and the number available is what you expect, about 5 per 5 ms when sampling at 1 kHz. If the simulated device were to fill the buffer continuously, the CPU would need a process continuously filling it with simulated data, taking processing power away from the rest of the program and making it run slower than it would with a real device. The simulation "cheats" a bit by moving simulated data into the buffer in chunks rather than one sample at a time so it doesn't use unnecessary CPU power. The effect is what you see: the buffer doesn't fill in exactly the same way as on a real device, but in the bigger picture the timing is still simulated accurately enough to see timing effects in your code.
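One way to see this chunked filling directly is to poll how many samples per channel are waiting in the buffer (a diagnostic sketch; Sleep() assumes a Windows host):

#include <stdio.h>
#include <windows.h>   /* Sleep() */
#include <NIDAQmx.h>

/* Poll the backlog every 5 ms. On real hardware it should grow by
   about 5 samples per pass; on a simulated device it jumps in larger
   chunks, which matches the alternating 0 / 10 / 11 pattern you see. */
void pollBacklog(TaskHandle taskHandle)
{
    int i;
    for (i = 0; i < 20; i++) {
        uInt32 avail = 0;
        DAQmxGetReadAvailSampPerChan(taskHandle, &avail);
        printf("available per channel: %lu\n", (unsigned long)avail);
        Sleep(5);
    }
}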

Is there a reason you want to empty the buffer so frequently in your program? You will likely get the simulated and real devices to behave more similarly if you acquire a larger number of samples at a time from the buffer. Either way, the actual data you acquire will be correctly timed simulated data. The only difference is the amount available in the buffer at any given time.
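For example, something like this (a sketch assuming 4 channels) makes the two behave identically, because the read blocks until a full block of samples is available:

/* Request a fixed 50 samples per channel with a 1 s timeout. Both real
   and simulated devices then return exactly 50 samples per read. */
float64 data[4 * 50];   /* assumes 4 channels */
int32   read = 0;
DAQmxReadAnalogF64(taskHandle, 50, 1.0,
                   DAQmx_Val_GroupByScanNumber, data,
                   4 * 50, &read, NULL);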
Regards,
John Bongaarts
Message 2 of 4

Thanks, John, for the explanation.

Our app does both data collection and real-time control with the DAQ readings. For control, we need samples no more than 5 ms old, but for logging we would like at least 1 kHz. That is why we are reading the buffer so often.
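Our loop looks roughly like this (a sketch; CHANS, updateControl(), and appendLog() are stand-ins for our actual code):

#include <windows.h>   /* Sleep(); we run on Windows */
#include <NIDAQmx.h>

#define CHANS 4        /* stand-in channel count */

void updateControl(const float64 *scan);       /* placeholder */
void appendLog(const float64 *scans, int32 n); /* placeholder */

/* Every 5 ms: read whatever has arrived, feed the newest scan to the
   controller, and append all scans to the log. */
void controlAndLog(TaskHandle taskHandle)
{
    for (;;) {
        float64 data[CHANS * 64];
        int32   read = 0;
        DAQmxReadAnalogF64(taskHandle, DAQmx_Val_Auto, 0.0,
                           DAQmx_Val_GroupByScanNumber, data,
                           CHANS * 64, &read, NULL);
        if (read > 0) {
            updateControl(&data[(read - 1) * CHANS]); /* newest scan */
            appendLog(data, read);                    /* full-rate log */
        }
        Sleep(5);
    }
}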

I guess it's just bad luck that the Simulated Device seems to update its buffer at 100 Hz and we want 200. We'll deal with it.

 

Message 3 of 4
Hi Claus,

That is certainly a good reason to be reading from the buffer so quickly. If you find that your feedback loop is not responding fast enough with a multifunction DAQ device, you may also be interested in our R Series devices, which feature FPGA technology. That allows you to do on-board signal processing and decision making for very tight control loops.

Introduction to Intelligent DAQ
http://www.ni.com/swf/presentation/us/inteldaq/

R Series Intelligent DAQ Devices
http://sine.ni.com/nips/cds/view/p/lang/en/nid/11829
Regards,
John Bongaarts
Message 4 of 4