EDIT: I decided to enter this in the Project14 competition for DIY Test Instrumentation, and that post contains much of the information originally in this one, in more detail. This post has been changed to concentrate solely on the DAC and ADC characterisation and test results.
This is part of a set of posts describing the Instrument Control Board, more detail on which can be found here.
Characterisation
As part of testing, I wanted to see how accurately the DAC and ADC channels were working across their voltage ranges. To that end I used the LabVIEW NXG Community Edition: Post 2 - Hardware Abstraction Framework I built to create two test programs (one for the DAC, one for the ADC) to drive the board via the USB-UART connector. It was easy enough to define the Instrument Control Board as a serial device in my framework and set up the tests given the earlier work.
Philosophy of Testing
Whilst doing these tests I had something of a philosophical argument with myself over the accuracy of the results. Take the ADC for example: I have the ICB under test, driven from a PSU and cross-referenced against a DMM. That's three instruments involved, each with its own accuracy specifications. At one point I would have been naive enough to think that a good PSU would put out 1mV when asked and that a good DMM would read 1mV! Clearly any results I gain here have to be viewed in light of the accumulated +/- errors of each instrument: if the ADC is reading 1.498V and the DMM 1.499V when the PSU should be putting out 1.5V, where does the fault lie? (Answer: in a mix of the error characteristics of each.) On that basis it would be pointless to chase theoretical accuracy; instead I'll take a considered view of the results and decide whether or not they are 'good enough'. It should be possible to apply the accuracy specs of the DMM and PSU to see how well the ICB is performing (or not!) Typically, of course, any testing would be to ensure that the DUT was operating within required parameters - e.g. it may not matter that a PSU generates a ripple voltage of +/- 150mV - but I'm in the position of not knowing those parameters per se. I will be looking for anything that is clearly bad but will make my own mind up on 'good enough' results.
SPOILER: I'm pretty happy with the results.
DMM6500 Accuracy (from datasheet)
I've had it a year, so let's use the 1-year specifications where necessary.
DAC Characterisation
I'm testing each channel (A, B, C and D) independently, with the DMM connected to the Board IO connector for the channel. The Arduino is instructed to output a voltage from 0V to 5V, the DMM takes a reading, and the results are captured. The DMM is configured to take the average of 10 readings at 2 NPLC. All the raw data is in the attached files, but I'm only presenting graphs for simplicity. To iterate over all 65,536 codes would have taken many hours - a rough calculation says around 14 hours - so I concentrated on codes 0 - 1000, 32268 - 33268, and 64535 - 65535. In other words, the bottom, middle and top of the range, on the assumption, perhaps incorrect, that these will give me a good view of how the channel operates over the full range.
For each range, I give a full scale representation of the output and then zoom into a small subset to show more detail (at the full scale it's not possible to see any separation between the DAC and DMM values).
LabView NXG Program:
I select the two instruments I'm using, the specific DAC channel (DACA, DACB, DACC, DACD), the starting code, the increment and the end code. The graph seen here is created as the program runs and data is collected. Raw data is captured in "All Results" in the top right corner: for each code, I record the Timestamp and Voltage from the DMM, the Nominal voltage (i.e. what the DAC should be outputting for that code), the difference between them and the error in LSBs that represents. So on this image, you can see that for code 64535 the DMM is reading 4.9229V when it should be reading 4.923629...V (the LSB is approximately 76uV per code; more precisely, Vout = 2.5V x 2 x (code/2^16), which for code = 1 is 76.29394531uV, and thus for code 64535 Vout = 4.923629761V).
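As a sanity check on those numbers, here's a minimal sketch of that arithmetic (plain C++ rather than the LabVIEW/Arduino code used in the tests; VREF and the x2 output gain are simply taken from the formula above):

```cpp
// Minimal sketch of the nominal-output and LSB-error arithmetic quoted above.
#include <cstdint>
#include <cstdio>

constexpr double VREF = 2.5;                     // DAC reference voltage
constexpr double GAIN = 2.0;                     // output gain
constexpr double LSB  = VREF * GAIN / 65536.0;   // ~76.29394531 uV per code

double nominalVolts(uint16_t code) {
    return VREF * GAIN * (static_cast<double>(code) / 65536.0);
}

int main() {
    uint16_t code = 64535;
    double dmmReading = 4.9229;           // example DMM value from the post
    double nominal = nominalVolts(code);  // 4.923629761 V
    double errVolts = dmmReading - nominal;
    std::printf("code %u: nominal %.9f V, error %.6f V (%.3f LSBs)\n",
                (unsigned)code, nominal, errVolts, errVolts / LSB);
}
```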
A point worth noting here is that LabView NXG doesn't seem very good at handling floats and float arithmetic. When adding 0.001 to a progression from 0.000 to 5.000, it doesn't take long for the progression to drift: e.g. 1.410, 1.411, 1.411999, 1.412998... I have to round these values to 3 d.p. in order to get correct values to plot. I've noted on other posts that NXG is a bit buggy and crashy; NI aren't progressing it anymore so it's a bit moot, and if I can be bothered I'll have to switch over to LabView and re-write my codebase.
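The underlying issue is binary floating point: 0.001 has no exact binary representation, so repeatedly adding it drifts away from the exact multiples. A quick illustration (plain C++; the exact magnitude of drift will differ from what NXG shows) of the problem and two fixes - rounding to 3 d.p., or deriving each setpoint from an integer step index:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    double accumulated = 0.0;
    for (int step = 0; step < 1410; ++step)
        accumulated += 0.001;              // repeated addition: error accumulates

    double fromIndex = 1410 / 1000.0;      // fix 1: derive from the integer index
    double rounded   = std::round(accumulated * 1000.0) / 1000.0;  // fix 2: round to 3 d.p.

    std::printf("accumulated: %.17g\n", accumulated);  // not an exact multiple of 0.001
    std::printf("from index : %.17g\n", fromIndex);
    std::printf("rounded    : %.3f\n", rounded);
}
```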
DAC A
I'm going to take the liberty of posting the first two graphs individually for this channel and give some commentary; for the remaining and for the other channels, I'll show them in a gallery to save space and results can be interpreted in the same vein.
So, for the range 0 to 1000 I can see that it's a linear progression; zooming in on the first 10 codes it's possible to see the discrepancy between what should be output and what is being measured, e.g. for code 10 the difference is 445uV (5.829343 LSBs). Note that it's not been possible to output 0V, the lowest reading being 473uV (6.200613 LSBs). Looking at the datasheet for the DAC, typical error specs are: zero-code error 0.4mV (only relevant to code 0); offset error 0.1mV; gain error 0.02% of full scale range (in this case 1mV); and full-scale error 0.01% of full-scale range (only relevant to code 65535). Clearly the readings at the bottom of the range (445uV - 473uV from nominal) are pretty much bang-on to the datasheet's zero-code error, and I see no issue with the other values, which seem to lie within the gain error.
The relative accuracy should be between 1 and 2 LSBs, which I'm a little away from, although I don't really know what the datasheet means by relative accuracy in relation to the other specs. However, the DMM reading is itself subject to error: for code 10, a reading of 0.001208V (thus on the DMM's 100mV range) carries an uncertainty of 3.54uV, or 0.05 LSBs. That's pretty accurate and certainly not enough to impact the result at this voltage, so I'm not able to determine the relationship between 'relative accuracy' and the other accuracy specs.
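For reference, that DMM uncertainty figure comes from the usual "±(% of reading + % of range)" form of spec. A sketch of the calculation (the percentages here are my assumptions for the 1-year 100mV DC range and happen to reproduce the 3.54uV above; check them against the DMM6500 datasheet rather than taking them as authoritative):

```cpp
#include <cstdio>

int main() {
    double reading      = 0.001208;   // V, DMM reading for code 10
    double range        = 0.1;        // V, DMM 100 mV range
    double pctOfReading = 0.0030;     // % of reading (assumed 1-year spec)
    double pctOfRange   = 0.0035;     // % of range   (assumed 1-year spec)
    double lsb          = 5.0 / 65536.0;   // DAC LSB, ~76.294 uV

    double uncertainty = reading * pctOfReading / 100.0
                       + range   * pctOfRange   / 100.0;   // ~3.54 uV
    std::printf("DMM uncertainty: %.2f uV (%.3f DAC LSBs)\n",
                uncertainty * 1e6, uncertainty / lsb);
}
```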
Notwithstanding, a discrepancy of 445uV is pretty good in my book! It should be possible to make adjustments in firmware for the discrepancies but at these values there seems little point.
The other ranges for this channel are just as good:
{gallery} DAC A Characterisation |
---|
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
DAC B
{gallery} DAC B Characterisation |
---|
Bottom-range: Showing the full 1000 codes |
Bottom-range, zoomed: showing the lowest 10 codes |
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
DAC C
{gallery} DAC C Characterisation |
---|
Bottom-range: Showing the full 1000 codes |
Bottom-range, zoomed: showing the lowest 10 codes |
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
DAC D
{gallery} DAC D Characterisation |
---|
Bottom-range: Showing the full 1000 codes |
Bottom-range, zoomed: showing the lowest 10 codes. The only channel that shows this non-linear tail (on every repeated test so not a testing discrepancy.) |
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
DAC Channels Combined
To make comparison easier, here are the graphs for all four channels combined:
{gallery} Channels Combined |
---|
Bottom-range: Showing the full 1000 codes |
Bottom-range, zoomed: showing the lowest 10 codes. |
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
Switching supply vs Linear supply (to the Power Board)
Just for comparison, I used a linear supply rather than a wall wart to power the boards, ostensibly to check whether any supply noise was impacting the results. As you can see, it makes no difference.
{gallery} Switching Supply vs Linear Supply |
---|
Bottom-range: Showing the full 1000 codes |
Bottom-range, zoomed: showing the lowest 10 codes. |
Mid-range: Showing the full 1000 codes |
Mid-range, zoomed: showing 10 readings around the mid-point (nominally, 2.5V.) |
Top-range: showing the full 1000 codes |
Top-range, zoomed: showing the top 10 codes. Note that the FSR as mentioned earlier prevents a full 5V output |
DAC Characterisation Summary
All channels seem to be outputting within 500uV of the requested nominal value, with channels being more or less accurate over different parts of the range (look how close to nominal channel D is at the low range). The choice of noisy or quiet power supply for the board doesn't impact the results, so there's some good rejection going on. The progression across the FSR is linear, which bodes well: it would be simple to create a LUT to correct the differences if <1mV accuracy were required, albeit at the expense of a small reduction in the usable code range at the bottom and top end. A sketch of what that correction could look like follows.
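For illustration, here's one possible shape for that correction (a hypothetical firmware helper, not code from the ICB; the calibration points in the table are invented placeholders): a sparse table of measured errors in LSBs, interpolated and subtracted from the requested code before it's written to the DAC. The clamping at the extremes is where the small loss of usable range comes from.

```cpp
#include <stdint.h>

struct CalPoint { uint16_t code; int16_t errorLsbs; };   // measured error at this code

// Hypothetical table built from characterisation data (placeholder values).
static const CalPoint calTable[] = {
    {     0,  6 }, {  1000,  6 }, { 32768,  5 }, { 64535,  9 }, { 65535, 10 }
};
static const int calPoints = sizeof(calTable) / sizeof(calTable[0]);

uint16_t correctedCode(uint16_t requested) {
    // Find the surrounding calibration points and interpolate the error.
    int i = 0;
    while (i < calPoints - 1 && calTable[i + 1].code < requested) ++i;
    const CalPoint &a = calTable[i];
    const CalPoint &b = calTable[(i + 1 < calPoints) ? i + 1 : i];
    long span = (long)b.code - (long)a.code;
    long err  = (span == 0) ? a.errorLsbs
              : a.errorLsbs + ((long)(b.errorLsbs - a.errorLsbs) * (requested - a.code)) / span;
    long corrected = (long)requested - err;    // subtract the predicted error
    if (corrected < 0)     corrected = 0;      // clamping here is the source of the
    if (corrected > 65535) corrected = 65535;  // small loss of range at the ends
    return (uint16_t)corrected;
}
```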
ADC Characterisation
I'm testing each channel (1, 2, 3 and 4) independently, using a PSU connected to the Board IO to provide a voltage from 0V to 2.1V (actually 2.5V for channels 1 and 2, to make the upper read limit of 2.048V obvious). For channels 1 and 2 I connected the DMM at the Board IO; for channels 3 and 4 I connected it directly to the ADC input pin, as I wanted to see the impact of the downstream components on the signal before it reached the ADC. The DMM was configured to take an average of 10 readings at 2 NPLC. Each channel was configured for 16-bit, continuous reads, with the gain set at x2 for 0V - 1.024V and x1 for 1.025V - 2.100V. Additionally, for channels 1 and 2 I took the latest reading available, and for channels 3 and 4 I took the average of the samples taken over 1 second (so 15 samples at 16-bit).
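As an aside, the 'average over one second' read used for channels 3 and 4 amounts to something like the following (a hypothetical Arduino-side helper, not the actual ICB firmware; readAdcVolts() is a placeholder for the real driver call):

```cpp
#include <Arduino.h>

// Placeholder: substitute the real ADC driver call for the given channel.
static float readAdcVolts(uint8_t channel) {
    (void)channel;
    return 0.0f;
}

float averagedRead(uint8_t channel, uint32_t windowMs = 1000) {
    double sum = 0.0;
    uint32_t count = 0;
    uint32_t start = millis();
    while (millis() - start < windowMs) {
        sum += readAdcVolts(channel);   // latest continuous-mode conversion
        count++;
        delay(66);                      // ~15 SPS at 16-bit resolution
    }
    return count ? (float)(sum / count) : 0.0f;
}
```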
All the raw data is in the attached files, but I'm only presenting graphs for simplicity. It was quick enough to iterate over the full 2.048V range, so the graphs characterise the full range. For each channel, I give a full scale representation of the output and then zoom into a small subset to show more detail (at the full scale it's not possible to see any separation between the ADC and DMM values). I also present a graph of the difference between the DMM reading and the ADC converted value. The PSU has a 1mV resolution, and I'm mostly, if not exclusively, likely to use this at mV resolution, probably even 10mV resolution, so I also plot the difference between the nominal value that should be produced by the PSU and the actual values read by the DMM and ADC, rounded to mV. This shows how far from nominal the ADC is working across its range.
Getting the timing of the test correct was difficult, I'll admit. It was possible to introduce too much delay, such that the iteration eventually failed with a timeout; asking the DMM to do too many readings before averaging, e.g. 25, seemed to cause glitches where it didn't report any changed value over 3 consecutive output changes; and in the 'best' setup I still see what I consider the odd glitch in the DMM/ADC readings, showing up as spikes in the graphs. Only channels 3 and 4, where averaging of the ADC readings is occurring, show these spikes. As I ran and re-ran the tests, the spikes would still occur but in different places in the output. My suspicion lies with the LabView software/my test code/the Arduino timing, and not the instruments, but I couldn't reliably reproduce the issue on demand to track it down. Having said that, the overall trend across the channels is the same, and it's clear that, at the cost of a small amount of time, the accuracy of a reading can be improved by up to 1mV.
LabView NXG Program:
The test program allows me to select an ADC channel and a voltage range for the PSU to produce. The graph shows a run from 1.025V to 2.1V with the obvious corner at 2.048V; the x-axis scale is a count and not an ADC code. Here, you can see the floating point calculation issue as "Current Volts" is reading 2.10104 by the end of the run! Raw data is gathered in "All ADC Results", shown in the top right of the panel, and I am capturing the generated voltage, ADC read voltage, DMM read voltage, difference between the ADC and DMM values and that difference as LSBs.
ADC 4
I'm going to start with this channel as it's one of the channels I would expect to report more accurate results, given that the ADC reading is an average of 15 samples. Again, I'll present the graphs individually with some commentary; the other channels I'll show in a gallery. The "Requested Voltage" is the voltage that should be output, not necessarily the voltage the PSU is actually outputting; it's only necessary to compare what the ADC reads against what the DMM reads.
This is showing a full range of readings, with the upper limit shown at 2.048V. The progression is linear and at this scale it's impossible to distinguish between the PSU, DMM and ADC values.
Here's a zoomed view of the first 26 readings. Again, you can see a linear progression and it's still not really possible to distinguish between the values.
Here, then, is a clearer view of what is happening. This graph shows the difference between the ADC reported value and the DMM reported value. It's clear that the channel is more accurate at lower voltages (left side), gradually becoming less accurate across the range. You can clearly see the occasional 'glitch' in readings which I couldn't track down. Ignoring those outliers, the accuracy is down at the uV level up to around the middle of the range and deteriorates to around 2mV at the top end. It's somewhat interesting that the error progression is linear. I suspect that the clear shift at mid-range is due to the gain change (x2 before, x1 after). The datasheet reports a gain error of 0.1% for a gain of x1 at 16-bit (15 SPS). That would give a gain error of 1.025mV to 2.048mV across the 1.025V to 2.048V range, which seems to correlate with what the graph is showing. The datasheet also states a 0.1% PGA gain error match between any 2 PGA settings, which may explain the shift point.
To my mind this looks pretty accurate, taking into account the gain error, which could easily be removed in software, and it looks like the channel is conforming well to the datasheet.
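Removing it in software would be as simple as dividing out the measured gain factor. A sketch (plain C++; the 0.1% figure is the datasheet's typical spec used as a stand-in for a value that would really come from calibrating the board):

```cpp
#include <cstdio>

int main() {
    double measuredGainError = 0.001;   // +0.1 % of reading, assumed from calibration
    double adcReading = 2.046;          // V, example raw reading near full scale

    // If the channel reads high by a fixed fraction, dividing by (1 + error)
    // removes the linear component of the deviation seen in the graphs.
    double corrected = adcReading / (1.0 + measuredGainError);
    std::printf("raw %.6f V -> corrected %.6f V\n", adcReading, corrected);
}
```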
I looked at one other graph as well. It struck me that the 'moment' I captured a DMM reading was different from the 'moment' I captured an ADC reading, notwithstanding the averaging. That implies a possibility that noise/ripple on the signal could have a small impact on the difference between the two. I expect to use this at a mV level rather than uV, so I rounded the ADC and DMM readings to the nearest mV and compared both against the nominal voltage to get a clearer picture of error. With no error, both lines would run directly along the x-axis, so this shows where in the range errors were occurring: at 1.025V and above there's a general trend to be 1mV out, right up to the highest codes where it increases again. I'm not sure in my own mind that it's a valid representation, but it seems to give a clearer indication of errors in the readings.
ADC 1
{gallery} ADC 1 Characterisation |
---|
Full Range: 0V to 2.5V |
Zoomed range: 0V to 0.025V |
ADC and DMM difference: Full range, showing the difference between the ADC and DMM readings |
DMM and ADC readings against nominal: Readings rounded to mV and plotted against the nominal value. No errors would show no spikes. |
ADC 2
{gallery} ADC 2 Characterisation |
---|
Full Range: 0V to 2.5V |
Zoomed range: 0V to 0.025V |
ADC and DMM difference: Full range, showing the difference between the ADC and DMM readings |
DMM and ADC readings against nominal: Readings rounded to mV and plotted against the nominal value. No errors would show no spikes. |
ADC 3
{gallery} ADC 3 Characterisation |
---|
Full Range: 0V to 2.1V |
Zoomed range: 0V to 0.025V |
ADC and DMM difference: Full range, showing the difference between the ADC and DMM readings |
DMM and ADC readings against nominal: Readings rounded to mV and plotted against the nominal value. No errors would show no spikes. |
ADC 3, 12-bit
Given that I'm likely to use this for mV readings, I thought it useful to see the impact of taking readings at 12-bit resolution with the same approach to gain and continuous reads. As you can see, it makes no difference, even though the ADC is averaging over 240 samples at this resolution.
{gallery} ADC 3, 12-bit Characterisation |
---|
Full Range: 0V to 2.5V |
Zoomed range: 0V to 0.025V |
ADC and DMM difference: Full range, showing the difference between the ADC and DMM readings |
DMM and ADC readings against nominal: Readings rounded to mV and plotted against the nominal value. No errors would show no spikes. |
ADC Characterisation Summary
You can see from the graphs that, unsurprisingly, averaging a number of ADC readings is more accurate than taking spot readings, but apart from that there is little difference in the performance of the channels. Using a different resolution doesn't impact the results in any meaningful way. Taking the datasheet's gain error as 0.1% (at gain x1; unknown at gain x2) would explain the linear progression: the error in the value read from the ADC at each voltage seems to match that error rate. In other words, it feels like the ADC is working within the specifications of its datasheet, and the error could easily be factored out in software. I'm very happy with these results.
Summary
This post was mainly to show characterisation data for the DAC and ADC on the Instrument Control Board. My expectation was that I would be dealing with tens of mV of error in outputs or readings, but I couldn't be happier with the results, which really blow away my expectations. Both devices seem to be operating within the accuracy specs of their datasheets. The RTCC is maintaining accurate timing over hours of operation and battery backup. All other components are working correctly as well, and I now have a set of libraries and test scripts to drive the board.
My original intention was to use these boards and an actual test instrument within one enclosure. Now I think a better idea will be to put the ICB in its own case, with connectors for the controlled instruments, so that I can plug-and-play without having to build more of these boards. The next stage of this project is to create an Electronic Load as a test instrument to be driven by the ICB.
Attachments
I'm attaching an Excel spreadsheet containing data and graphs but the data is also available in CSVs.