12-20-2011 03:56 PM
I'm having trouble with latency in a DSA & DAQ system. I'm using a PCI-4461 & PCI-6221. The ANSI-C code below is compiled as a release build, and the thread is set to Real Time priority (which doesn't actually matter). CPU usage while running is zero, so it appears that all of the latency is on the card side (the CPU isn't slowing things down).
I want to trim an amplifier. So I send out a 20ms burst of a 1kHz sine wave while measuring the result on the PCI-4461, then send out some digital trim bits on the PCI-6221, and then repeat. This happens thousands of times, so I want to remove the long latency in the system. There is about 30ms of latency between the end of one sine burst and the beginning of the next. Is there any way to make this faster? Code below (based on ANSI C example program: SynchAI-AO.c).
The attached figure shows the measurement latency captured on a scope. I want to transfer the measured analog sine waveform into memory and then very quickly (in less than 5ms) start outputting the sine wave again. I can't imagine that transferring 1000 data bytes requires 30ms. There must be some pause, for unknown reasons, slowing things down. Is there anything I can do to make this faster?
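One caveat on the timing instrumentation in the loop below (my observation, not part of the original post): on POSIX C runtimes `clock()` returns CPU time rather than wall-clock time, so with near-zero CPU usage it can under-report time spent blocked inside a driver call; the Microsoft CRT happens to track elapsed time. A hardware-independent sketch of a wall-clock timer using C11 `timespec_get`:

```c
#include <time.h>

/* Wall-clock timestamp in seconds, using C11 timespec_get.
   Unlike clock() on POSIX systems, this reflects elapsed real time
   even while the thread is blocked inside a driver call. */
double wall_seconds(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}
```

Each `(float64) clock() / CLOCKS_PER_SEC` sample in the loop could be replaced with a `wall_seconds()` call to rule out that source of measurement error.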
// **** I WANT TO SPEED UP THIS LOOP. I SEND OUT A 20MS SINE BURST, SEND SOME TRIM INFORMATION, AND THEN WANT TO IMMEDIATELY REPEAT THE PROCESS.
//*** THE CALLS MARKED WITH LATENCY COMMENTS BELOW (//31msec, //16ms, //15ms) APPEAR TO BE PARTICULARLY SLOW.
for (i = 0; i < 20; i++) {
    time_inc = 0;
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //0
    //READ OUT ANALOG WAVEFORM
    DAQmxErrChk (DAQmxReadAnalogF64(AItaskHandle, Number_Of_Samples, 10.0, DAQmx_Val_GroupByChannel, AIdata, Number_Of_Samples, &readAI, NULL)); //31msec
    //SIGNAL PROCESSING WOULD GO HERE
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //1
    //WRITE OUT TRIM VALUES
    DAQmxErrChk (DAQmxWriteDigitalU8(taskHandle, 61*2, 1, 10.0, DAQmx_Val_GroupByChannel, Digital_Data, NULL, NULL)); //0ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //2
    //WAIT FOR TRIM TO FINISH
    DAQmxErrChk (DAQmxWaitUntilTaskDone(taskHandle, 10.0)); //16ms (actual time the data is toggling is only 1.5ms)
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //3
    //STOP TRIM TASK
    DAQmxStopTask(taskHandle); //0ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //4
    //STOP ANALOG OUT TASK
    DAQmxStopTask(AOtaskHandle); //0ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //5
    //STOP ANALOG IN TASK
    DAQmxStopTask(AItaskHandle); //0ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //6
    //RESTART ANALOG OUT TASK
    DAQmxErrChk (DAQmxStartTask(AOtaskHandle)); //15ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //7
    //RESTART ANALOG IN TASK
    DAQmxErrChk (DAQmxStartTask(AItaskHandle)); //0ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //8
    //WAIT FOR TASKS TO FINISH
    DAQmxErrChk (DAQmxWaitUntilTaskDone(AItaskHandle, 10.0));
    DAQmxErrChk (DAQmxWaitUntilTaskDone(AOtaskHandle, 10.0)); //16ms
    time_current[time_inc] = (float64) clock() / CLOCKS_PER_SEC; time_inc = time_inc + 1; //9
    //PRINT OUT LATENCIES
    for (n = 1; n < time_inc; n++) {
        printf("Time: %d %0.6lf\n", n, time_current[n] - time_current[n-1]);
    }
}
12-20-2011 04:33 PM
Convert to continuous sample mode and you should only have to pay the start-up latency once. I recommend that you do not allow regeneration, though you then have to take care to supply a continuous signal to the AO task so that your application does not get a buffer underrun error.
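Roughly, using the task handles from your posted code, the configuration change might look like the sketch below. The 102400.0 sample rate is just a placeholder (use whatever rate your tasks already run at); this is untested and needs the NI-DAQmx driver installed:

```c
// Configure AI and AO for continuous sampling instead of finite bursts,
// so both tasks are started once and never stopped/restarted in the loop.
DAQmxErrChk (DAQmxCfgSampClkTiming(AItaskHandle, "", 102400.0, DAQmx_Val_Rising,
                                   DAQmx_Val_ContSamps, Number_Of_Samples));
DAQmxErrChk (DAQmxCfgSampClkTiming(AOtaskHandle, "", 102400.0, DAQmx_Val_Rising,
                                   DAQmx_Val_ContSamps, Number_Of_Samples));

// Disable regeneration so each burst must be written explicitly; the
// application is then responsible for keeping the AO buffer fed, or the
// task will report a buffer underrun error.
DAQmxErrChk (DAQmxSetWriteRegenMode(AOtaskHandle, DAQmx_Val_DoNotAllowRegen));

// Start once, outside the trim loop.
DAQmxErrChk (DAQmxStartTask(AOtaskHandle));
DAQmxErrChk (DAQmxStartTask(AItaskHandle));
```

Inside the loop you would then just write the next sine burst (or zeros between bursts) to the AO task and read the corresponding samples from the AI task, with no start/stop calls.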
12-20-2011 04:37 PM
OK, I'll try that. What about the latency of the digital output on the DAQ card? I have to send a different code every time, so that can't be a continuous output. The digital output latency is about 10ms:
DAQmxErrChk (DAQmxWriteDigitalU8(taskHandle,61*2,1,10.0,DAQmx_Val_GroupByChannel,Digital_Data,NULL,NULL));
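One technique that may reduce the start/stop overhead of that digital task (untested on this hardware): commit the task once before the loop with DAQmxTaskControl, so each iteration only transitions between the Committed and Running states rather than re-reserving and re-programming the hardware every time.

```c
// Once, before the trim loop: explicitly commit the digital task.
// After a commit, DAQmxStartTask/DAQmxStopTask (and auto-start writes)
// only move the task between the Committed and Running states, skipping
// the per-iteration resource reservation and hardware programming.
DAQmxErrChk (DAQmxTaskControl(taskHandle, DAQmx_Val_Task_Commit));
```

The write, wait, and stop calls inside the loop stay as they are; stopping a committed task returns it to Committed instead of unreserving it.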