So I had a chance to try this out:
I put 3x TMP36 sensors side by side on a breadboard, hooked it up to use the 3.3 V reference for better accuracy, and set up the code to discard the first reading (to let the ADC settle) and then take the average of 8 analogRead()s with 20 ms delays in between (a simplified version of the code is pasted below). This is the result:
Temperatures = 25.04, 24.72, 24.40
Temperatures = 25.37, 24.40, 24.08
Temperatures = 25.04, 24.40, 24.08
The list is much steadier than that - I copied the part that shows the bit of fluctuation that happens every so often. It steps up/down in increments of (reference voltage / 1024) volts, which at the TMP36's 10 mV per degree works out to about 0.32 degrees for my 3.3 V reference. In other words, one point of difference in the analogRead result equals about 0.3 degrees. Using a 5 volt reference would therefore make that about a 0.5 degree increment.
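For reference, here is roughly what I'm running - a simplified sketch, assuming an Uno-style board with the 3.3 V pin wired to AREF; the pin numbers and names are just my own:

// Simplified sketch of my reading loop.
// AREF is wired to the 3.3 V pin and analogReference(EXTERNAL) is set,
// so the ADC full scale is about 3.3 V.
const int SENSOR_PINS[3] = {A0, A1, A2};
const float AREF_VOLTS = 3.3;   // the reference voltage I'm assuming

void setup() {
  analogReference(EXTERNAL);    // use the voltage on the AREF pin
  Serial.begin(9600);
}

// Discard one reading to let the ADC settle, then average 8 reads
// spaced 20 ms apart, and convert to degrees C
// (TMP36: 500 mV at 0 C plus 10 mV per degree).
float readTempC(int pin) {
  analogRead(pin);              // throwaway reading
  long sum = 0;
  for (int i = 0; i < 8; i++) {
    delay(20);
    sum += analogRead(pin);
  }
  float counts = sum / 8.0;
  float volts  = counts * AREF_VOLTS / 1024.0;
  return (volts - 0.5) * 100.0;
}

void loop() {
  Serial.print("Temperatures = ");
  for (int i = 0; i < 3; i++) {
    Serial.print(readTempC(SENSOR_PINS[i]), 2);
    if (i < 2) Serial.print(", ");
  }
  Serial.println();
  delay(1000);
}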
Now, my handheld laser-guided temperature reading doohickey says it's about 24 degrees C in here - I know! it's a warm day today!! (I live on the very temperate West Coast of Canada)
So obviously there is a difference between sensors, as the first one is nearly a degree higher than the last one.
How would you normally adjust for specific sensors? Or would you adjust?
For example, if I trust my handheld reader, I would say that (based on the values of the last line) sensor 3 is perfect, and for sensor 1 I would (in the code) always subtract a degree, and for sensor 2 I'd subtract 0.3 of a degree.
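In code that would just be a per-sensor offset applied after the conversion, something like this (the offsets are the rough values from my test above, treating my handheld reader as the truth, so they only apply to these three sensors):

// Per-sensor correction offsets in degrees C, from comparing each
// sensor against the handheld reader (sensor 3 treated as correct).
const float OFFSETS[3] = { -1.0, -0.3, 0.0 };

float calibratedTempC(int sensorIndex, float rawTempC) {
  return rawTempC + OFFSETS[sensorIndex];
}

// e.g. float t1 = calibratedTempC(0, readTempC(A0));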
But I'm a software guy and I'm betting there's a very clever hardware solution people would use for this.
This is for a one-off project, not sure if that would change the approach.
Also, the plan is to use 2 of these sensors to track temperatures inside and outside my greenhouse. In my area I would not expect temperatures to go outside the -20 to +50 C range. Doing the math, +50 C comes to exactly 1 volt. Is there a way (an easy way?) to set the reference voltage to 1 volt so I can get a resolution of about 0.1 degrees?
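Just to show my math there, assuming the standard TMP36 transfer function of 500 mV at 0 C plus 10 mV per degree (the constant names below are only for illustration):

// TMP36 output: Vout = 0.5 V + (0.010 V * degrees C)
//   at +50 C:  0.5 + (0.010 * 50)  = 1.0 V   (top of my expected range)
//   at -20 C:  0.5 + (0.010 * -20) = 0.3 V   (bottom of my expected range)
// With a 1.0 V reference, one ADC step would be:
//   1.0 V / 1024 = ~0.98 mV, i.e. ~0.098 degrees per step
const float HOPED_FOR_REF = 1.0;                              // volts (hypothetical)
const float DEG_PER_STEP  = (HOPED_FOR_REF / 1024.0) * 100.0; // ~0.098 C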
Thanks!
-Nico