With the thermocouples nestled all snug in their wells, I turned on the heat and recorded the temperatures. I picked currents roughly corresponding to the wattages shown, only realizing after the fact that I’d been doing the calculation for the 5 Ω Thing-O-Matic resistors, not the 6 Ω resistor I was actually using. Doesn’t matter, as the numbers depend only on the temperatures, not the wattage.

This would be significantly easier if I had a thermocouple with a known-good calibration, but I don’t. Assuming that the real temperature lies somewhere near the average of the six measurements is the best I can do, so … onward!

Plotting the data against the average at each measurement produces a cheerful upward-and-to-the-right graph:

So the thermocouples seem reasonably consistent.

Plotting the difference between each measurement and the average of all the measurements at that data point produces this disconcertingly jaggy result:
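That difference-from-average calculation is a one-liner in NumPy; the readings below are illustrative placeholders (one row per heat setting, one column per thermocouple), not the real data:

```python
import numpy as np

# Illustrative readings only -- columns: TOM, MPJA, Craftsman A,
# Craftsman B, Fluke T1, Fluke T2; one row per heat setting.
readings = np.array([
    [22.1,  20.8,  20.5,  20.9,  21.0,  21.2],
    [75.0,  69.5,  69.0,  69.3,  69.3,  69.6],
    [150.2, 146.0, 145.5, 146.1, 146.4, 146.9],
])

row_avg = readings.mean(axis=1, keepdims=True)  # average of all six at each point
deviation = readings - row_avg                  # what the jaggy plot shows
```

By construction the six deviations at each point sum to zero, so any channel that runs consistently hot pushes the others consistently low.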

The TOM thermocouple seems, um, different, which is odd, because the MAX6675 converts directly from thermocouple voltage to digital output with no intervening software. It’s not clear what’s going on; I don’t know if the bead was slightly out of its well or if that’s an actual calibration difference. I’ll check it later, but for now I will simply run with the measurements.

Eliminating the TOM data from the average produces a better clustering of the remaining five readings, with the TOM being even further off. The regression lines show the least-squares fit to each set of points, which look pretty good:

Those regression lines give the offset and slope of the best-fit line that goes from the average reading to the actual reading, but I really need an equation from the actual reading for each thermocouple to the combined average. Rather than producing half a dozen graphs, I applied the spreadsheet’s SLOPE() and INTERCEPT() functions with the average temperature as Y and the measured temperature as X.
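The same fit, outside a spreadsheet, is an ordinary first-degree least-squares polynomial; the four points here are illustrative stand-ins for one thermocouple's column and the all-channel average:

```python
import numpy as np

# Equivalent of the spreadsheet's SLOPE()/INTERCEPT(): least-squares fit
# of average temperature (Y) against one thermocouple's reading (X).
# Illustrative values; the real columns come from the measurement spreadsheet.
measured = np.array([21.0, 50.1, 107.9, 155.8])   # one thermocouple (X)
average  = np.array([20.6, 49.9, 108.3, 156.6])   # all-channel average (Y)

m, b = np.polyfit(measured, average, 1)           # slope, intercept of best fit
```

Doing the fit with the average as Y means the resulting m and b plug straight into the correction equation, with no algebraic inversion of the per-channel fits.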

That produced this table:

|               | TOM     | MPJA     | Craftsman A | Craftsman B | Fluke T1 | Fluke T2 |
|---------------|---------|----------|-------------|-------------|----------|----------|
| M = slope     | 1.0534  | 0.5434   | 0.5551      | 0.5539      | 1.0112   | 1.0154   |
| B = intercept | -1.6073 | -15.3703 | -19.4186    | -16.9981    | -0.7421  | -0.3906  |

And then, given a reading from any of the thermocouples, converting that value to the average requires plugging the appropriate values from that table into good old

- y = mx + b
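As a sanity check, the whole correction fits in a few lines; the coefficients are copied from the Fluke T1 column of the table above, and the function name is mine:

```python
# Map a raw thermocouple reading to the common "standard" via y = m*x + b.
# Coefficients from the Fluke T1 column of the slope/intercept table.
M_FLUKE_T1 = 1.0112
B_FLUKE_T1 = -0.7421

def adjust(raw, m, b):
    """Convert a raw reading to the averaged standard."""
    return m * raw + b

adjusted = adjust(21.0, M_FLUKE_T1, B_FLUKE_T1)   # 20.49, rounding to 20.5
```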

For example, converting the Fluke 52 T1 readings produces this table of values. The **Adjusted** column shows the result of that equation and the **Delta Avg** column gives the difference from the average temperature (not shown here) for that reading.

| Fluke T1 | Adjusted | Delta Avg | Max Abs Err |
|----------|----------|-----------|-------------|
| 21.0     | 20.5     | -0.4      | 0.78        |
| 29.0     | 28.6     | -0.3      |             |
| 34.8     | 34.4     | -0.3      |             |
| 45.5     | 45.3     | -0.2      |             |
| 50.1     | 49.9     | 0.0       |             |
| 52.0     | 51.8     | 0.2       |             |
| 69.3     | 69.3     | 0.3       |             |
| 76.4     | 76.5     | 0.4       |             |
| 78.9     | 79.0     | 0.6       |             |
| 107.9    | 108.4    | 0.2       |             |
| 112.3    | 112.8    | 0.4       |             |
| 117.5    | 118.1    | 0.3       |             |
| 127.8    | 128.5    | -0.2      |             |
| 133.2    | 134.0    | 0.1       |             |
| 136.6    | 137.4    | 0.1       |             |
| 138.1    | 138.9    | 0.1       |             |
| 146.4    | 147.3    | -0.4      |             |
| 155.8    | 156.8    | -0.8      |             |

The **Max Abs Err** (the largest value of the absolute difference from the average temperature at each point) after correction is 0.78 °C for this set. The others are less than that, with the exception of the TOM thermocouple, which differs by 1.81 °C.
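That figure is just the largest absolute value in the Delta Avg column. Using the rounded one-decimal deltas from the Fluke T1 table gives 0.8 rather than the spreadsheet's 0.78, because the spreadsheet works from the unrounded differences:

```python
# Max Abs Err for one channel: largest |adjusted - average| over all points.
# Rounded Delta Avg values copied from the Fluke T1 table.
deltas = [-0.4, -0.3, -0.3, -0.2, 0.0, 0.2, 0.3, 0.4, 0.6,
          0.2, 0.4, 0.3, -0.2, 0.1, 0.1, 0.1, -0.4, -0.8]

max_abs_err = max(abs(d) for d in deltas)   # 0.8 from the rounded deltas
```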

So now I can make a whole bunch of temperature readings, adjust them to the same “standard”, and be off by (generally) less than 1 °C. That’s much better than the 10 °C of the unadjusted readings and seems entirely close enough for what I need…