Multifunction DAQ

Error due to short settling time in NI-6211

Solved!

Hello, I have the following question about the NI 6211.

 

I use it in multichannel mode, and the signals on different channels differ greatly, for example 10 mV on ch 1 and a few volts on ch 2. The sampling rate is high (100 kS/s), so there is crosstalk between channels because of the short settling time. The sources have a large output impedance of 10 kOhm. My question is: how can I calculate the amplitude of the crosstalk (in dB) for these measurement conditions? In the specification, the paragraph "Settling Time for Multichannel Measurements" gives the accuracy for different convert intervals, but I don't understand this data: why is 90 ppm of step size equal to ±6 LSB? I thought the step size was equal to one LSB. Also, for what source impedance is this data given? And the settling time performance graphs show much larger error values (10,000 ppm).

 

Thank you for your attention.

 

Specification for NI-6211: http://www.ni.com/pdf/manuals/371932f.pdf

Message 1 of 4
Solution
Accepted by topic author ivoc

If you check out the top graph on the right side of that page, we show the settling error for multiple source impedances.

 

In the paragraph you are referring to:

[Screenshot: "Settling Time for Multichannel Measurements" table from the NI 6211 specifications]

You'll notice that at the top we note that this is for a "full-scale step".  This means going from the maximum of the input range to the minimum of the input range between channels (e.g. on the ±10 V range, reading 10 V on ch. 0 and then -10 V on ch. 1).  That is what "step" refers to: not one LSB of the ADC, but the magnitude of the difference between consecutive readings.
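
As a rough sanity check of the "90 ppm of step ≈ ±6 LSB" entry (my own sketch, assuming the ±10 V range and the board's 16-bit converter):

full_scale_step = 20.0                 # V, a full-scale step on the ±10 V range
lsb = full_scale_step / 2**16          # ≈ 305 µV per code on a 16-bit ADC
error = 90e-6 * full_scale_step        # 90 ppm of the step ≈ 1.8 mV

print(error / lsb)                     # ≈ 5.9, i.e. roughly ±6 LSB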

 

The graph that you will need is the one on the right-hand side, labelled "Settling Error Versus Time for Different Source Impedances".

[Screenshot: "Settling Error Versus Time for Different Source Impedances" graph from the NI 6211 specifications]

 

If you are sampling at 100 kS/s aggregate on two channels, you have about a 10 µs convert interval.  With a 10 kOhm source impedance, the graph shows an error of about 4000 ppm of your step size.  If you step from 10 mV to 2 V, that equates to an error of about 8 mV on the reading.
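
Since you asked for the crosstalk in dB, here is a minimal sketch of that calculation (my addition, in Python; the 4000 ppm value is simply read off the graph at 10 µs with a 10 kOhm source):

import math

settling_error = 4000e-6         # fraction of the step that has not settled (from the graph)
step = 2.0 - 0.010               # V, channel-to-channel step from the ~2 V source to the 10 mV source

error_volts = settling_error * step
crosstalk_db = 20 * math.log10(settling_error)

print(error_volts * 1e3)         # ≈ 8 mV of error on the 10 mV reading
print(crosstalk_db)              # ≈ -48 dB relative to the adjacent channel's step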

 

Given that your first signal is so small, you would probably be best served by adding a unity-gain buffer between your source and your channels to reduce the source impedance, which will give you much better settling time performance.

Seth B.
Principal Test Engineer | National Instruments
Certified LabVIEW Architect
Certified TestStand Architect
Message 2 of 4

Hello, Seth.

 

Thank you very much for the detailed answer; that part is clear to me now. But I have another question, about the noise characteristics of the NI-6211 board. What is the minimum noise level (for example, under the following conditions: range ±200 mV, sampling rate 100 kS/s, 100 kSamples acquired, differential input) when both inputs are connected to ground (AI GND)? I carried out the experiment and got the following results: mean value = 98 µV, standard deviation = 14 µV. But that is much greater than one LSB (6 µV), and also greater than the accuracy value from the specification (88 µV), which is for a full-scale signal while my signal is zero. My question is: is it possible to remove this offset and further increase the accuracy?
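
(Quick check of the LSB figure quoted above, my addition, assuming the 16-bit converter on the ±200 mV range:)

lsb = 0.4 / 2**16    # the ±200 mV range spans 0.4 V; one 16-bit code is about 6.1 µV
print(lsb * 1e6)     # ≈ 6.1 µV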

 

Thank you very much,

 

Alex. 

Message 3 of 4

Your setup is correct for measuring the minimum noise level.  The main point I'd add is that the offset isn't part of the noise; it is the offset component of accuracy.  So I'd recommend subtracting the mean value from your data in order to characterize only the noise.  I would expect you to see about the 14 µV that you are already seeing.
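
A minimal sketch of that separation (my addition; the random data below just stands in for your 100 k grounded-input readings, in volts):

import numpy as np

# Stand-in data matching your reported numbers; replace with the actual acquisition.
samples = np.random.normal(98e-6, 14e-6, 100_000)

offset = samples.mean()                  # offset component of accuracy (~98 µV here)
noise_rms = (samples - offset).std()     # noise left after removing the offset (~14 µV rms)

print(offset * 1e6, noise_rms * 1e6)     # both in µV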

 

In the specifications (page 4) you'll find our accuracy specifications.  The "Absolute Accuracy" value is calculated from the formula given below the chart; note in particular that this number assumes you are averaging 100 points to reduce noise.  The "Random Noise" spec is what you'll want to refer to in order to determine how much noise you'd expect to see on your signal.  On the ±200 mV range, the spec is 12 µVrms.  Converting this to a peak amplitude (RMS = peak/√2) gives about 16.97 µV of noise, which looks consistent with what you are seeing.
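
That conversion, written out (my addition):

import math

random_noise_rms = 12e-6                 # "Random Noise" spec for the ±200 mV range, in Vrms
peak = random_noise_rms * math.sqrt(2)   # RMS = peak/sqrt(2), so peak = RMS * sqrt(2)
print(peak * 1e6)                        # ≈ 16.97 µV, in line with the ~14 µV std you measured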

 

As for the offset, to calculate what you should see on a grounded signal, we can use this table as well.  Because the signal is grounded, Gain Error doesn't play a part, so we just want to use the formula for Offset Error.

 

OffsetError = ResidualAIOffsetError + OffsetTempco · (TempChangeFromLastInternalCal) + INL_Error

 

For the ±200mV range:

ResidualAIOffsetError = 40 ppm of 200 mV = 8 µV

OffsetTempco = 116 ppm of 200 mV per °C = 23.2 µV/°C

INL_Error = 76 ppm of 200 mV = 15.2 µV

 

So you can calculate the expected offset as:

 

OffsetError = 8 µV + (23.2 µV/°C) · TempChangeFromLastInternalCal + 15.2 µV

 

Within 5 °C of the last internal calibration (the last time you self-calibrated the device), this works out to 139.2 µV.  If you self-calibrate and then run the test right away (say, within about 1 °C of drift), it would be about 46.4 µV.
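
That arithmetic, written out as a small sketch (my addition, using the per-range values above):

residual_offset = 40e-6 * 0.2       # 8 µV
offset_tempco   = 116e-6 * 0.2      # 23.2 µV per °C of drift since the last self-calibration
inl_error       = 76e-6 * 0.2       # 15.2 µV

def offset_error(delta_t_c):
    # OffsetError = ResidualAIOffsetError + OffsetTempco * TempChangeFromLastInternalCal + INL_Error
    return residual_offset + offset_tempco * delta_t_c + inl_error

print(offset_error(5) * 1e6)        # ≈ 139.2 µV, worst case within 5 °C of the last self-cal
print(offset_error(1) * 1e6)        # ≈ 46.4 µV, with about 1 °C of drift
print(offset_error(0) * 1e6)        # ≈ 23.2 µV, immediately after a self-cal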

 

I would recommend using the Self Calibration function, then immediately performing the same test.  To check offset, calculate the mean value.  To check noise, zero the data using the mean value.

 

Seth B.
Principal Test Engineer | National Instruments
Certified LabVIEW Architect
Certified TestStand Architect
Message 4 of 4