- Any sufficiently precise instrument is a thermometer
That’s in addition to whatever it’s supposed to be measuring, of course, but it’s amazing how temperature effects creep into those last few digits without you noticing anything different.
The differences between precision, accuracy, and resolution remain relevant, if commonly misunderstood. In particular, precision is not the same as resolution. A good introduction is here.
I stand in awe of the analog IC design folks who can build temperature compensation into a chip by tweaking junction areas and currents. A tip o’ the cycling helmet to ’em!
I’m an engineer at a company that builds instruments for process control (oil refineries, food processing, etc.). These are very stable instruments, and most are accurate to a fraction of a percent over the industrial temperature range of −40 to +85 °C. The various instruments measure flow, density, level, pressure, temperature, chemical composition, and a bunch of other things. We just take it as a given that all sensors measure at least three things: the PV (Process Variable), the sensor temperature, and the electronics temperature. There are often other confounding effects as well. This includes temperature sensors. In addition, all electronic components have temperature effects.
Besides precision (I think what you mean there is what we call repeatability) and the other characteristics you mentioned, we also need to consider non-linearity, hysteresis, and a few other things. We design our own A/D chips that are optimized for our sensors. We try to design them to minimize the temperature effects, and we do a pretty good job. Still, it’s usually not good enough for the total accuracy we need to achieve (18+ bits), so we always measure the temperature of the PV sensor and usually measure the temperature of the electronics as well. We then do a temperature compensation of the entire instrument: we cycle the electronics and sensor through a large number of temperatures and PV values, crunch the results in a computer, and generate a set of correction coefficients specific to each instrument. These coefficients are stored in the instrument and used to compute the PV that the user sees.
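The per-instrument correction described above can be sketched in a few lines. Everything here is illustrative, not the company’s actual scheme: a made-up sensor whose gain and offset drift with temperature, and a stored coefficient table — `coeffs[(i, j)]` multiplies `raw**i * temp**j` — of the kind a factory characterization run might produce.

```python
def raw_reading(pv, temp):
    # Simulated uncompensated sensor: 0.1 %/degC gain drift plus an offset
    # drift (made-up numbers for illustration)
    return pv * (1 + 1e-3 * temp) + 5e-4 * temp

def corrected_pv(raw, temp, coeffs):
    # Apply stored per-instrument coefficients: coeffs[(i, j)] scales raw**i * temp**j
    return sum(c * raw**i * temp**j for (i, j), c in coeffs.items())

# Hypothetical coefficients for this sensor: a first-order inverse of the
# drift model above, as a characterization run might fit them
coeffs = {(1, 0): 1.0, (1, 1): -1e-3, (0, 1): -5e-4}
```

Taking a reading at 60 °C, the uncorrected value is off by several percent, while the first-order correction recovers the PV to well under one percent (a small second-order residual remains, which is why real characterization uses higher-order fits).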
As you can imagine, this is a very expensive process, so our instruments are pretty expensive: they start at $1k and go up from there. In the end, once you get beyond a certain total system accuracy, it’s not possible to build in that accuracy, so we don’t even try. What we go for is high repeatability, and then we use software to convert the actual measurement into the value the users see.
This is all pretty hard to do but it makes life interesting and keeps me employed.
once you get beyond a certain total system accuracy, it’s not possible to build in that accuracy so we don’t even try.
I admit that I started losing traction around the point where we started dithering ADC inputs to improve the results: I know why adding noise works, but it still seems like black magic.
The workbench now sports five thermocouple meters, all of which display different values for five holes in the same aluminum block. A man with one clock knows the time; a man with two clocks always has his doubts… and it’s the same with temperature. I plan to pick the Fluke meter, on the principle that they knew what they were doing, and curve-fit the others to make the answers consistent.
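That curve fit can be as simple as a straight-line least-squares fit of each cheap meter’s readings against the Fluke’s, closed-form, no libraries. The readings below are made-up numbers standing in for the five-holes-in-one-block measurements.

```python
def linear_fit(x, y):
    # Closed-form least-squares line y = slope * x + intercept
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

fluke = [25.0, 50.0, 75.0, 100.0]   # reference meter, degC (illustrative)
cheap = [26.1, 51.8, 77.5, 103.2]   # same block, one of the other meters

m, b = linear_fit(cheap, fluke)

def correct(reading):
    # Map the cheap meter's display onto the Fluke's scale
    return m * reading + b
```

A two-point fit would do for gain and offset alone; using all the holes averages out a little of the meter-to-meter scatter, and the residuals tell you whether a straight line is actually enough or the meter needs a higher-order curve.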
it makes life interesting and keeps me employed.
A fine situation to be in!
And sometimes adding thermoelectric cooling to the chip layout to actively control the temperature of the small region where a junction reference is located. Alas, *we* don’t do that (yet), but it’s pretty cool that some people do.