This is part of my Instrument Control Board project in which I describe the choice of DAC, ADC and Op Amp.
Component Selection
No great science behind many of the components selected - mainly "do they do what I want?" and "how does the price compare to similar components?". So, for example, the choice of I2C isolator or digital isolator was really made around channel count and required signal direction. The chosen parts were double-checked during the design to ensure (in my limited experience) they would suffice - prototyping will give me a better idea.
The DAC, ADC and Op Amp choices I spent a lot more time on. I've not had a lot of experience with these components, and that's actually one of the reasons I'm doing this board. I've used an Arduino's on-board DAC/ADC before, plus sundry simple experiments with an LM741, but nothing really complicated. In all the tables that follow, I've used the MAX specifications and tried to normalise to a standard measurement level (e.g. 5V at 25C). I know that's not ideal, as it's highly likely that my requirements won't incur those penalties, but at this stage (for prototyping) it seems the most useful way to compare.
DAC
When I started out on this, I selected a 12-bit MCP4728 DAC because it was cheap. From there, through experiments on a breadboard, I was able to work out which specifications were the important ones to look out for. The MCP4728 isn't particularly accurate, but the benefit is that it gives much wider insight into the errors that occur during the data conversion process and possible approaches to solving them. Moving forward, I wanted something a bit more accurate to start off with, and looking across the main suppliers and datasheets there is actually very little choice available given these three requirements:
- I2C interface
- Quad channel
- 16-bit resolution
Texas Instruments, for example, has one model and Analog Devices/Linear Technology four. Notwithstanding, I drew up a table for comparison purposes and included the MCP4728 for reference. Where necessary, I've normalised the values against a Full Scale Range so that the specification is better contextualised (datasheets are linked to part names.)
Specification | MCP4728 (Microchip) | AD5696R-B (Analog Devices) | DAC8574 (Texas Instruments) |
---|---|---|---|
Resolution (bits) | 12 | 16 | 16 |
Full Scale Range (volts) to normalise specifications | 4.096 from external Vref | 5V from internal Vref at gain=2 | 5V from external Vref |
LSB (volts) | 1mV | 76.3uV | 76.3uV |
Relative Accuracy, INL | +-13 LSB | +- 1 LSB | +- 64 LSB (Yes, really - 4.88mV) |
DNL | +-0.75 LSB | +- 1 LSB | +- 1 LSB |
Zero Code Error | Not stated | 1.5mV | 20mV |
Offset Error | 20mV | +-1 LSB | Not stated |
Full Scale Error | Not stated | +- 0.1% FSR (5mV) | +- 1% FSR (50mV) |
Gain Error | +1.25% FSR (51.2mV) | +- 0.1% FSR (5mV) | +- 1% FSR (50mV) |
Total Unadjusted Error | Not stated | +- 0.1% FSR (5mV) at gain = 2 (Internal Vref outputs 5V) | Not stated |
Offset Error Drift | +- 1.8uV / C | +- 1uV / C | +- 7uV / C |
Gain Temperature Coefficient | -3 ppm FSR/C (12.3uV/C) | +-1 ppm FSR/C (5uV / C) | +- 3 ppm FSR/C (15uV / C) |
Linearity range (output codes) | 100 to 4000 | 256 to 65280 | 485 to 64714 |
Internal Vref accuracy | +- 41mV, 45ppm/C, 290uV p-p noise | +- 2.5mV, 5 ppm/C, 12uV p-p noise | External only |
Price (ex-vat) | £1.51 | £15.19 | £15.00 |
It's worth describing what some of these specifications mean, as that makes their relevance to component selection clearer:
- LSB: Least Significant Bit - one unit of output from 0 to the full scale range, converted to a voltage value: FSR / (output code range - 1). So, for a 16-bit DAC with a 5V FSR, that would be 5/65535 = 76.295uV.
- INL: Drawing an ideal straight line from a 0 LSB output to the full scale range (FSR) output (based on the transfer function), the INL is a measure of the deviation (+- LSB) of the actual output away from that ideal line. This error is not the same throughout the output range.
- DNL: Each step change of output code should equate to 1 LSB, and this is a measure of the actual change against the ideal 1 LSB change. Where the specification is within +-1 LSB, the DAC is guaranteed to be monotonic - i.e. the output voltage never decreases as the input code increases.
- Zero Code Error: When the DAC is sent output code 0x0000, this is a measure of the actual voltage out against the ideal 0V output. A positive ZCE is not possible to compensate for in software because it would require sending a -LSB adjustment to the DAC; all these DACs will only ever have a positive ZCE due to their design. What this means in practice is that the low range of output codes of a DAC will not cause a change in the voltage out - e.g. for the DAC8574, worst case, codes from 0x0000 up to around 0x0106 (20mV / 76.3uV is roughly 262 codes) could all output 20mV.
- Offset Error: This is a measure of the actual voltage out against the ideal voltage out in the linear range of the DAC. The offset error is the same across the linear range.
- Full Scale Error: When the DAC is sent output code 0xFFFF, this is a measure of the actual voltage out against the ideal FSR - 1LSB output. Essentially, it means the DAC reaches its full output value at a lower output code.
- Gain Error: This is a measure of the span error of the DAC across the range as a % of the FSR. It is a deviation from the straight line transfer function of the DAC and typically has a bigger impact at the higher output codes.
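The normalisation used in the table can be expressed as a short script. This is my own arithmetic in Python (the helper names are mine), not anything from the datasheets:

```python
# Convert datasheet error specs into volts at a common full-scale range
# so that parts quoted in different units can be compared.

def lsb(fsr_volts: float, bits: int) -> float:
    """One LSB in volts, using FSR / (output code range - 1) as above."""
    return fsr_volts / (2 ** bits - 1)

def inl_volts(inl_lsb: float, fsr_volts: float, bits: int) -> float:
    """An INL spec given in LSB, expressed in volts."""
    return inl_lsb * lsb(fsr_volts, bits)

def gain_error_volts(pct_fsr: float, fsr_volts: float) -> float:
    """A gain error spec given as % of FSR, expressed in volts."""
    return pct_fsr / 100 * fsr_volts

# Example: 16-bit DAC with a 5V full-scale range
print(round(lsb(5.0, 16) * 1e6, 3))               # ~76.295 uV per LSB
print(round(inl_volts(64, 5.0, 16) * 1e3, 2))     # 64 LSB INL in mV -> 4.88
print(round(gain_error_volts(0.1, 5.0) * 1e3, 1)) # 0.1% FSR in mV -> 5.0
```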
I have chosen the Analog Devices AD5696R-B to move forward with. Its starting accuracy is very good compared to the DAC8574, so after compensation of the errors the output should be very good as well. I don't feel I'm cheating here: the compensation approach is exactly the same regardless of starting position, but the final usable range is likely to be wider - at least that's my view right now. Offset error and gain error can be compensated for in software. INL and DNL are intrinsic to the DAC and cannot be compensated, but can be improved by layout and supporting component choice. The zero code error and full scale error would need additional components to adjust, or alternatively the 0x0000 and 0xFFFF output codes can be moved into the linear region in software - which obviously reduces the range of the DAC and almost certainly affects its monotonicity. These are all things that I will be looking at with the prototype.
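The offset and gain compensation mentioned above could be sketched roughly as below. This is a hedged illustration of the idea, not anything from the AD5696R datasheet: all the names and the two calibration figures are my own invention.

```python
# Measure the DAC at two calibration points, derive offset and gain,
# then pre-correct each requested voltage before converting to a code.

FSR = 5.0          # full scale range in volts
CODES = 2 ** 16    # 16-bit DAC

def volts_to_code(v: float) -> int:
    """Ideal (uncompensated) voltage-to-code conversion, clamped to range."""
    return max(0, min(CODES - 1, round(v / FSR * (CODES - 1))))

def make_compensator(cal_low: tuple, cal_high: tuple):
    """Build a correction from two (requested, measured) calibration points."""
    (req_lo, meas_lo), (req_hi, meas_hi) = cal_low, cal_high
    gain = (meas_hi - meas_lo) / (req_hi - req_lo)  # actual V per requested V
    offset = meas_lo - gain * req_lo                # extrapolated error at 0V
    def corrected_code(v_target: float) -> int:
        return volts_to_code((v_target - offset) / gain)
    return corrected_code

# Hypothetical calibration: DAC measures 5mV high at 0.5V, 13mV high at 4.5V
comp = make_compensator((0.5, 0.505), (4.5, 4.513))
print(volts_to_code(2.5), comp(2.5))  # corrected code lands slightly lower
```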
Worth mentioning here is the AD5696's Internal Vref against external Vref specifications to see if it's worth using an external reference:
Specification | AD5696R-B (Analog Devices) | REF5040 (Texas Instruments) | REF5050 (Texas Instruments) |
---|---|---|---|
Vref value | 2.5V / 5V at gain 2 | 4.096V | 5V |
Accuracy | +- 0.1% | +- 0.1% | +- 0.1% |
Temperature Co-efficient | 5ppm/C | 8ppm/C | 8ppm/C |
Output noise | 12uV p-p | 12uV p-p | 15uV p-p |
Line Regulation | 100uV/V (250uV/500uV) | 1ppm/V (4.096uV) | 1ppm/V (5uV) |
Load Regulation | 40uV/mA | 50ppm/mA (204.8uV) | 50ppm/mA (250uV) |
On these specifications there's not a lot in it (bearing in mind these are MAX specifications), and I've decided it isn't worth using an external voltage reference with the chosen DAC.
ADC
I started out with the MCP3428 16-bit ADC from Microchip when I was breadboarding, and it's not a bad component when compared to similar parts from Texas Instruments, e.g. the ADS1115. Again, I bought it because it was cheap. ADCs suffer from the same types of errors as DACs, so I want to start out from a better position if possible. And again, as with the DACs, the choice diminishes with the requirements I have:
- I2C interface
- Quad channel
- 16-bit resolution
The specifications given in the table are for a Full Scale Range of +-2.048V so they are comparable. The ADS1115 can operate over a range of sample rates, so 16 SPS has been used for the relevant specifications, which is comparable to the MCP3428's sample rate. Conversion time in the table includes switching, settling and sampling times.
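As a quick sanity check on that normalisation (my own arithmetic, in Python):

```python
# A 16-bit ADC over a +-2.048V input range spans 4.096V in 2^16 codes.

FSR = 4.096          # +-2.048V input range
LSB = FSR / 2 ** 16  # volts per code

print(round(LSB * 1e6, 1))   # 62.5 uV per LSB

# Specs quoted in ppm of FSR convert the same way, e.g. a 50 ppm/mA
# load regulation figure against a 4.096V reference:
def ppm_of_fsr(ppm: float) -> float:
    return ppm * 1e-6 * FSR

print(round(ppm_of_fsr(50) * 1e6, 1))  # 204.8 uV
```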
Specification | MCP3428 (Microchip) | ADS1115 (Texas Instruments) | LTC2487 (Analog Devices) |
---|---|---|---|
Effective Number of Bits | 16 (also capable of 12- and 14-bit) | 16 | 16 |
Data rate (samples per second) | typically 15 (66.7ms conversion time) | 8 - 860 (125ms - 1.2ms conversion time) | 6.1 (163.5ms conversion time) |
Output Noise | 2.5uVrms (at 15 SPS) | 62.5uVrms | 850nVrms |
INL | 40.96uV | 62.5uV | 100uV with Vref 5V; 5uV with Vref 2.5V |
Full Scale Error | Not Stated | Not Stated | 160uV with Vref 5V; 80uV with Vref 2.5V |
Gain Error | 4.09mV of FSR | 6.14mV of FSR | Not stated |
Gain Error Drift | 61.4uV/C of FSR | 163.8uV/C of FSR | Not stated |
Offset Error | 30uV | +- 187.5uV | 5uV |
Offset Drift | 50nV/C | 312nV/C | 10nV/C |
Total Unadjusted Error | Not stated | Not stated | 75uV with Vref 5V; 35.25uV with Vref 2.5V |
Internal Vref accuracy | +- 0.05%, 15ppm/C (also included in Gain and Gain Drift errors) | Not stated, included in the gain and gain drift errors | External only |
I will stick with the MCP3428 as, on balance, it has better specifications than the TI component - it also has automatic calibration for offset and gain on each conversion. In fairness to the ADS1115, there is a big discrepancy between the INL in the specifications table (max 1 LSB - 62.5uV) and the INL in the graphs (circa 5uV at 2V input, albeit at 8 SPS) - I'm a bit sceptical about the graph, as 1 LSB is almost certainly closer to the true error. The LTC2487, once the datasheet is properly analysed, is a lot poorer than the other two, except under very specific circumstances.

The MCP3428 can only use its internal voltage reference - 2.048V - which limits the input voltage range to 0V to +2.048V in single-ended conversions. That cuts out half the resolution of the ADC, which doesn't seem acceptable to me. Small signals, say measuring the voltage drop over a 0.01Ohm or 0.05Ohm sense resistor, would be fine as the signal would be in the mV range. But if I wanted to, say, accept a signal in the range 0V to 5V, so that the board was in control of any pre-conversion manipulation, then a 0V to +2.048V range would mean losing a fair bit of signal resolution. What I will do is scale and bias the incoming signal to -2.048V to +2.048V and use the differential inputs to measure and convert. This would map an incoming 0V to -2.048V, giving me a full scale range of 4.096V. HEADS UP: I describe this scaling and biasing in this post and shabaz pointed out that I've misunderstood the datasheet, so for my purposes I will stick to single-ended conversions.

For prototyping purposes, and as an interesting comparison, I think I'll actually compare all three under the same conditions and see if I can draw any conclusions based on actual specifications (not just those listed as Maximum in the datasheets) and functionality.
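For completeness, converting the MCP3428's result back to a voltage is straightforward. A hedged sketch of my reading of the data format (the register handling here is my own illustration):

```python
# The MCP3428 returns a 16-bit two's-complement result; in 16-bit mode one
# code is 62.5uV at PGA gain 1, and the measured input is code * LSB / gain.

LSB = 2 * 2.048 / 2 ** 16   # 62.5uV: +-2.048V spanned by 2^16 codes

def code_to_volts(raw: int, pga_gain: int = 1) -> float:
    """Convert a raw 16-bit two's-complement result to input volts."""
    if raw & 0x8000:         # sign bit set: negative differential input
        raw -= 0x10000
    return raw * LSB / pga_gain

print(code_to_volts(0x7FFF))           # full positive scale, just under +2.048V
print(code_to_volts(0x8000))           # most negative code, -2.048V
print(code_to_volts(328, pga_gain=8))  # small signal resolved with PGA gain
```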
OpAmp
Here I'm on rockier ground as I've not had much to do with these before so what I describe below is what I think are the important considerations. The MCP3428 ADC has an input impedance of 25MOhms (single-ended) and the datasheet has this to say about source impedance:
The conversion accuracy can be affected by the input signal source impedance when any external circuit is connected to the input pins. The source impedance adds to the internal impedance and directly affects the time required to charge the internal sampling capacitor. Therefore, a large input source impedance connected to the input pins can degrade the system performance, such as offset, gain, and Integral Non-Linearity (INL) errors. Ideally, the input source impedance should be zero. This can be achievable by using an operational amplifier with a closed-loop output impedance of tens of ohms.
It's a switched capacitor input stage ADC as per this image in the datasheet:
But it also positions the PGA amplifier between the MUX (input channel selector) and the ADC circuitry which sort of implies low impedance already to the circuitry:
However, the sampling circuitry in the first image is actually encapsulated in the black blocks ahead of the MUX with the channel labels. The MUX allows the sampling capacitor to discharge through the PGA into the Delta-Sigma converter. So I will be using an op amp to buffer the ADC inputs - unity gain, with the PGA providing the actual gain - and, separately, as part of a compensation circuit for the DAC outputs. I will select for the ADC buffer, since that is critical, and use the same part for the compensation circuit. I think this component choice is the one most likely to change depending on the downstream circuit being controlled (e.g. a DC load or sensor board); however, I will select on the following criteria:
- Minimise impact on ADC performance
- Settling time faster than ADC transient settling time
- Low noise
- Low distortion
- Low Power
- Differential supply rails to cover -2.5V to 2.5V.
Key specifications then:
- Input Offset voltage: Voltage required across the input terminals to bring the output to 0V due to imperfections in construction. Lower is better.
- Input Offset drift: How this changes over temperature. Lower is better.
- Settling time: How long it takes the output to respond and reach a final value when the input changes. Faster is better and I need the op amp output to settle faster than the switch-and-sample time of the ADC.
- Slew rate: The maximum rate at which the op amp output can change (V/uS) in response to a change of input. Faster is better, and I need the op amp to reach the new output level fast enough that it can settle in time for the switch-and-sample of the ADC.
- Input bias current: The average of the currents flowing into the Op Amp's two inputs during operation. Lower is better.
- Input offset current: The difference between the currents flowing into each input. Lower is better, but it is obviously related to the Input Bias Current.
- Input Voltage Noise: noise generated at the inputs of the Op Amp. Lower is better.
- Common Mode Voltage Range: A Voltage range, at the inputs, over which normal operation is guaranteed: input signal should be within this range.
- Output Voltage Swing: A voltage range, at the outputs, indicating how close to the rails the output can get - this limits the effective range. Values closer to 0 at each rail is better although what is really important is that the voltage swing range doesn't restrict the range of 0V - FSR for the ADC.
There are other considerations as well including the 'type' of Op Amp influencing how easy it may be to use in practice. Useful specs for the MCP3428 ADC:
- Output noise is 3 - 11.5 uVrms dependent upon the input voltage, 0V to full scale.
- Input voltage saturates the output code at +-2.048V but can range up to Vdd (5V) - basically, this means limiting the input voltage to +-2.048V and providing op amp output headroom between the rails for this range (as noted against CMVR and output voltage swing.)
- LSB in 16-bit mode is 62.5uV; half-LSB is 31.25uV and this should be a reference point for offset voltage and noise.
- Settling time for a 16-bit ADC (internal, without reference to external connected circuitry) is 11.784 Time Constants: LN(2^(16+1)).
The datasheet gives the value for the sampling capacitor, 3.2pF, but not the sampling resistor Rs; it also doesn't give a specific value for the settling time - the closest is a data rate of 15 samples per second (66.7ms conversion time per sample) - so this makes it hard to determine the actual switch-and-sample time. I intend to put a low pass filter between the op amp output and the ADC input, and this will have an impact on settling time, so I may be able to draw a decision from that. In any case, I think it's likely to be much slower than a reasonably performant op amp.
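Putting those last points together, a sketch of the settling budget (Python; the source resistance below is a placeholder of my own, since the datasheet doesn't give Rs):

```python
import math

# Settling to within half an LSB of an n-bit converter takes
# ln(2^(n+1)) RC time constants, as noted above.

def time_constants_for_bits(bits: int) -> float:
    """Number of RC time constants to settle within 1/2 LSB of n bits."""
    return math.log(2 ** (bits + 1))

print(round(time_constants_for_bits(16), 3))   # 11.784 for 16 bits

# With an assumed total source resistance, the settling time follows:
C_SAMPLE = 3.2e-12   # sampling capacitor, from the datasheet
R_SOURCE = 10e3      # placeholder: op amp output + filter resistance
tau = R_SOURCE * C_SAMPLE
print(time_constants_for_bits(16) * tau)       # seconds - well under 66.7ms
```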
Of course, there are a substantial number of Op Amps to choose from at uk.farnell (and, no doubt, all major suppliers), so I tried to narrow down: no delivery surcharge, suitable for new designs, 4 amplifiers per package gives 1,391 results - still way too many to check. There are going to be plenty of suitable options, I'm sure, but I'm going to limit myself to those I've seen others use in similar designs. Providing the power rails for single-supply op amps will require additional components, so I will also consider those to which I could provide at least +-5V.
Specification | OPA4333 | OPA4376 | OPA4192 | AD8630 | |
---|---|---|---|---|---|
Input Offset Voltage | +- 8uV | 25uV | +-25uV | 5uV | +-5uV |
Input Offset Drift | +- 0.05uV/C | 1uV/C | 0.8uV/C | 0.02uV/C | +-0.05uV/C |
Slew Rate | 5V/uS | 2V/uS | 20V/uS | 1V/uS | 45V/uS |
Input Bias Current | +-600pA | 10pA | +-20pA | 300pA | +-65pA |
Input Offset Current | +-1100pA | 10pA | +-20pA | 200pA | +-125pA |
Settling Time | 0.75uS | 1.6uS | 0.9uS | 1uS | Not specified |
Input Voltage Noise/Density | 0.14uVpp, 7nV/sqrt(Hz) 10Hz to 10KHz | 0.8uVpp, 7.5nV/sqrt(Hz) 1kHz | 1.3uVpp, 5.55nV/sqrt(HZ) 1kHz | 0.5uVpp, 22nV/sqrt(Hz) 1kHz | 1.5uVpp, not specified |
Input Common Mode Voltage Range | (V-) - 0.1V to (V+) + 0.1V | (V-) - 0.1V to (V+) + 0.1V | (V-) - 0.1V to (V+) + 0.1V | Not specifically stated, just not greater than Vdd | (V-) - 0V to (V+) - 1.5V |
Output Voltage Swing | +ve rail 50mV, -ve rail 60mV | +ve rail 50mV, -ve rail 50mV | +ve rail 500mV, -ve rail 500mV | +ve rail 10mV, -ve rail 20mV | +ve rail 500mV, -ve rail 500mV |
Supply Range, differential | 5.5V, yes but +-2.5V | 5.5V, yes but +-2.5V | 36V, yes | 5V, yes but +-2.5V | 16V, yes |
Notes | Fast settling, zero drift, zero-crossover. Auto-zero Amp | Auto-zero techniques to reduce switching noise | Zero drift |
Out of these, the OPA4333, OPA4376, OPA4192 and AD8630 look the most suitable: all have fast settling times, low noise and offset voltage below the half-LSB; the two chopper amps have quite large input bias and offset currents but low Input Offset Voltage. Of these, I think the OPA4376 and OPA4192 may give me fewer problems, as the other two are chopper amps and michaelkellett has already commented "...they bring their own special problems..." which I interpret as "good luck with that". I've tried reading around the potential issues with these types of op amps without gaining much insight; one problem might be the introduction of switching noise on the output.
The OPA4376 is limited to a supply voltage of 5.5V, which is fine for this application but requires the creation of a -ve power rail of up to -2.5V, whereas the OPA4192 can be powered from +-5V rails (or +12V, -5V), which are simpler to provide. The OPA4192 is faster but slightly noisier, and although it can only get within 500mV of the rails (worst case with a 2k load; 15mV at no load, 110mV with a 10k load), that limit still sits outside my input range of 0V-2.048V (and even 0V-4.096V.)
Looking then at the OPA4333 and OPA4192, I'm not sure what advantages the former may give me: it would require the provision of +-2.5V rails (or level shifting the signal away from the -ve rail) and would limit the input range possibilities for the ADS1115 if I went with that ADC after prototyping; it's not significantly faster; and the bias and offset currents are worse. I suspect the benefit is in the zero-drift, zero-crossover and auto-zero features. The OPA4192 at worst case consumes a lot of the half-LSB accuracy budget, but the typical Input Offset Voltage specification is 8uV over a 0C to 85C temperature range.
Given I'm prototyping, I'd be tempted to get the OPA4192 and the OPA4333 and see if I can do a comparison but neither are particularly cheap. I think I'll stick with the OPA4192.
Low Pass Filter
It seems to be recommended to front the analog input pins of an ADC with a low pass filter to attenuate noise. Sigma-Delta ADCs with oversampling do a good job of attenuating noise at the sampling frequency - this is the MCP3428:
At 15Hz the attenuation is quite pronounced; it's hard to tell, but it might have reached the -3dB level at 7.5Hz. The recommendation is for a simple passive RC filter, but the calculations for that don't seem to tie in with the ADC's need for a low source impedance.
Also, the Op Amp performs best at low load: Vos is typically 5mV at no load, rising to 430mV at a 2K load and sinking again to 95mV at 10K. Except when Vos is characterised by output current (this is a 10K load):
This would imply that a higher resistance would be better (for a smaller current), although it obviously isn't linear against load. How a higher resistance impacts the gain, offset and INL of the ADC isn't clear either - the datasheet is quiet on the impact of source impedance against error. In other words, this has me confused, so the best approach I think is to characterise it with the prototype and try different configurations, as long as I provide some capacitive reservoir for the ADC and some resistance for the Op Amp. I'll need to bear in mind settling time and time constants for any RC filter, plus the ADC's internal RC filter formed from the switch resistance and sampling capacitor.
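The RC trade-off can at least be sketched numerically: the cutoff frequency and the extra settling time both come from the same R and C. The values below are placeholders for prototyping (my own guesses, not a recommendation):

```python
import math

# A passive RC low-pass: cutoff fc = 1 / (2*pi*R*C), and the filter must
# still settle within half an LSB inside the ~66.7ms conversion window.

def cutoff_hz(r_ohms: float, c_farads: float) -> float:
    return 1 / (2 * math.pi * r_ohms * c_farads)

def settle_time_s(r_ohms: float, c_farads: float, bits: int = 16) -> float:
    """Time for the filter to settle within 1/2 LSB of an n-bit step."""
    return math.log(2 ** (bits + 1)) * r_ohms * c_farads

R, C = 1e3, 100e-9   # placeholder 1k / 100nF
print(round(cutoff_hz(R, C), 1))    # ~1591.5 Hz
print(settle_time_s(R, C) < 66e-3)  # settles inside one conversion window
```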
Ultimately, an LPF would need to be determined based on actual usage characteristics, so anything I do here is likely to be wrong anyway. It goes back to my earlier point about this part of the circuit being the most likely to change.
Summary
I will be prototyping these choices to characterise them and see what problems arise and what I can hope to achieve with different approaches to compensation. Irrespective, my choice of ADC and DAC is severely limited by choosing to use I2C, but that's what I'm used to. SPI is much more common at 16-bit and higher resolutions.
Further Posts
Creating an Instrument Control Board
Instrument Control Board - Component Selection (this post)