Linux Users

Best method for waiting for samples without taking CPU time.

Hi guys, I'm wondering what the best method is for waiting in a thread for new samples without using CPU time (something like poll() on Unix systems). At the moment I'm using a thread that calls ReadAnalogF64 with a long timeout, after setting the ReadAllAvailSamp property to true, and I read a large number of samples at once (250,000 samples per second, so it's enough to call this function once per second). This works fine on Windows XP: with the 6221 PCI board that I have, CPU usage is on average below 25-30% at a sampling period of 4000 nanoseconds (250,000 Hz, the fastest this card can run). But when I run the same code on Scientific Linux, CPU usage is always between 90-100%. This is with the latest NIDAQmx release on Scientific Linux 6.3. For more info, the card seems to be using DMA transfer mode, and Sleep Read Wait Mode with a value of 0.001000 (the defaults, as far as I know).

If I understand it correctly, the function should block until 250,000 samples are ready to be read from the driver's buffer; at a 4000-nanosecond period, using only one analog channel, that should take roughly one second, right? So why doesn't the function block on Linux? Is this a bug, or did I miss something?

Also, is it possible to set the ReadAllAvailSamp property in NIDAQmxBase? I can't see any function for setting properties in it, only DAQmxBaseGetReadAttribute() for reading read properties.

It would also be great to have a more "Unix-like" interface to the driver, something like getting a file descriptor from the task that you could read/write/poll and so on, but that's just a geek suggestion.

Thanks for your support.

Message 1 of 3

Just adding more details. I'm using Continuous mode, so from what I've seen, ReadAllAvailSamp shouldn't affect the behaviour of the read functions at all. I've also been profiling the program and timing the calls on both Windows XP and Scientific Linux. On Windows XP (using NI IO Trace, with a simulated 6221 PCI device), the read function (ReadAnalogF64) does seem to block until the requested samples can be read, but on Scientific Linux (with the NIDAQmx drivers, using gprof and oprofile) it doesn't, and that appears to be the cause of the high CPU usage. I'm wondering whether this is a bug in the driver's code, and if not, why it takes so much more CPU time on Linux (note that this only happens at very short periods like 4000 nanoseconds; at longer sampling periods CPU usage looks the same).

I'm going to write a simple example in plain C, without any of my own code (GUI, etc.), to see if I can reproduce this behaviour with only calls to NIDAQmx.

Message 2 of 3

OK, so I solved it using the following polling procedure for reading samples:

1. Sleep for a reasonable amount of time.
2. Check the AvailReadSamples property; if it's equal to or higher than your required value, read the desired number of samples.
3. Go back to step 1.

To ensure that the old samples in the NIDAQ buffer don't get overwritten by new ones, it helps to set the NIDAQ buffer size manually and make it hold a larger number of samples than you are going to read per iteration, because, you know, Sleep()/nanosleep() is not real-time, and the time the function actually sleeps can be somewhat longer than requested, depending on the operating system/hardware combination.

Now I can keep CPU usage at 18-25% while reading at the highest speed the card can achieve.

AFAIK this shouldn't be needed on Windows, because the driver seems to be smarter there (maybe because it's a more recent version than the Linux one); even so, with this method CPU usage seems a bit lower there too.

Hope it's helpful for someone.

Message 3 of 3