**Selecting a thermistor (& series resistor) value**

Most of the material you find on thermistors assumes that you are trying to maximize sensitivity and interchangeability. But oversampling gives you access to enough resolution that sensitivity is less critical, and interchangeability only makes sense if you are putting them in a product with good voltage regulation. In that case, precision thermistors like the ones from US Sensor are a good option, but according to Campbell Scientific, that choice has other knock-on implications:

*“The resistors must be either bought or selected to 0.02% tolerance and must also have a low temperature coefficient, i.e. 10 ppm or **preferably 5 ppm/°C**.”*

Like many better quality components, these resistors are often only available in SMD format, with minimum order quantities in the thousands. If you use a typical 1% resistor with a T.C. of 50 ppm or more, you could introduce errors of ±0.1°C over a 50°C range, which defeats the point of buying good thermistors in the first place.

Still, if I was only building a few sensors, I’d spring for the good ones. But now that I have oversampling working on the Arduino, I’d like to add a thermistor to every logger in the field, and the mix of different boards already in service means I’ll have to calibrate each sensor/board combination. That time investment is the same whether I choose a 10¢ thermistor or a $10 one.

Power consumption is also important, which makes 100kΩ sensors attractive, although I couldn’t find a vendor selling interchangeable thermistors above 50k. A low temperature limit of 0°C (the units are underwater…), combined with putting 1.1v on aref to boost sensitivity, requires a 688k series resistor, which is far from the 1-3x nominal usually recommended:

Using the internal band-gap voltage as aref improves the ADC’s *hardware* resolution from 3.22mV/bit to 1.07mV/bit. This trick gives you an extra bit of precision when you use it at the default 10bit resolution, and I figured I could do it again to compensate for the sensitivity lost to that big series resistor.

In return, I get a combined resistance of at least 700k, which pulls only 4.7μA on a 3.3v system. Such low current means I could ignore voltage drops inside the processor and power the divider with one of Arduino’s digital pins. In practical terms, burning less than a milliamp-second per day means adding a thermistor won’t hurt the power budget even if I leave it connected to the rails *all the time*; which you can only do when self-heating isn’t a factor. Even 100 ohms of internal resistance would produce only a 0.5mV drop, so depending on your accuracy spec, you could use 16-channel muxes to read up to 48 thermistors without worrying about cable length. There aren’t many of us trying to connect that many temperature sensors to one Arduino, but using a 100k thermistor also makes me wonder if you could mux a bank of different series resistor values, pegging the divider output at its maximum sensitivity over a very large temperature range.
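As a sanity check on the arithmetic, the burst-read-and-decimate step can be sketched like this. The function-pointer indirection is my own addition so the math runs without hardware; on the logger, `readAdc` would wrap `analogRead()` with the divider’s power pin held HIGH for the duration of the burst:

```cpp
#include <stdint.h>

// Oversample-and-decimate: to gain n extra bits of resolution, sum 4^n
// raw ADC readings and right-shift the total by n. On the logger the
// divider power pin goes HIGH before the burst and LOW after, so the
// thermistor only sees current while sampling (no self-heating concern).
uint32_t oversampledRead(uint16_t (*readAdc)(), uint8_t extraBits) {
  uint32_t sum = 0;
  uint32_t samples = 1UL << (2 * extraBits);  // 4^n readings
  for (uint32_t i = 0; i < samples; i++) {
    sum += readAdc();
  }
  return sum >> extraBits;                    // decimate down to 10+n bits
}
```

So a 15bit reading (n = 5) costs 1024 conversions, and a 14bit reading (n = 4) costs 256, which is where the sample counts later in this post come from.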

**What is a reasonable accuracy target?**

Combining 5¢ thermistors & 1¢ metfilms means my pre-calibration accuracy will be worse than ±1°C. Cheap thermistor vendors only provide nominal & βeta numbers, instead of resistance tables or a proper set of Steinhart-Hart coefficients. So I might be limited to ±0.4°C based on that factor alone. And it took me a while to discover this, but βeta values are only valid for a specific temperature range, which most vendors don’t bother to provide either. Even with quality thermistors, testing over a different temperature range would give you different βeta values.

In that context, *I’d be happy to approach ±0.1°C* without using an expensive reference thermometer. Unfortunately, temperature sensors in the hobby market rarely make it to ±0.25°C. One notable exception is the Silicon Labs Si7051, which delivers 14-bit resolution of 0.01°C at ±0.1°C accuracy. So I bought five, put them through a series of tests, and was pleasantly surprised to see the group hold within ±0.05°C of each other:

Ideally you want your reference to be an order of magnitude better than your calibration target, but given the other issues baked into my parts, that’d be bringing a gun to a knife-fight.

So my calculations, with oversampling and the internal 1.1v as aref, become:

**1) MaxADCReading** (w scaling factor to compensate for the two voltages)

= ( [2^(OverSampledADCbitDepth)] * (rail voltage/internal aref) ) -1

**2) Thermistor Resistance** (w series resistor on high side & thermistor to GND)

= Series Resistor Value / [(MaxADCReading / OverSampledADCreading)-1]

**3) Temp(°C)** (ie: the βeta equation laid out in Excel)

= 1/( [ln(ThermResistance/Tnominal R)/βeta] + [1.0 / (NomTemp + 273.15)] ) - 273.15
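Those three steps translate into code along these lines. All of the names and the constants here are my own examples (the βeta value, in particular, is a placeholder, not a calibrated figure):

```cpp
#include <math.h>
#include <stdint.h>

// Example constants; the series resistor and βeta are illustrative values.
const float railOverAref   = 3.3 / 1.1;   // (rail voltage / internal aref)
const float seriesResistor = 688000.0;    // 688k on the high side
const float nominalR       = 100000.0;    // 100k thermistor
const float nominalTempC   = 25.0;
const float beta           = 4250.0;      // placeholder βeta value

// Steps 1 & 2: scale the full-scale count for the two different voltages,
// then solve the divider (series resistor on high side, thermistor to GND).
float thermistorResistance(float adcReading, uint8_t bitDepth) {
  float maxAdc = (pow(2.0, bitDepth) * railOverAref) - 1.0;
  return seriesResistor / ((maxAdc / adcReading) - 1.0);
}

// Step 3: the βeta equation.
float betaTempC(float thermR) {
  float invT = log(thermR / nominalR) / beta + 1.0 / (nominalTempC + 273.15);
  return 1.0 / invT - 273.15;
}
```

A quick way to check your own version: at the nominal resistance the βeta equation must return the nominal temperature, and a divider reading of exactly half the scaled maximum must return the series resistor value.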

**Seeing the error in my ways**

I knew that the dithering noise would have some effect on the readings, and all the other sources of ADC error still apply. Switching to 1.1v reduces the absolute size of most ADC errors, since they are proportional to the full-scale voltage. But the internal reference is spec’d at ±0.1v, changing the initial (rail voltage/aref voltage) scale factor by almost 10%. Since all I needed was the *ratio*, rather than the actual voltages, I thought I could address this chip-to-chip variability with the code from Retrolefty & Coding Badly at the Arduino.cc forum. This lets Arduinos read the internal reference voltage using the rail voltage as aref.
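The register-level ADC read in that forum code is hardware-specific, but the conversion arithmetic behind it is worth seeing on its own. This is my own restatement of the math, not their listing: with the rail as aref, you read the 1.1v bandgap channel and back out the rail voltage from the result:

```cpp
// "Internal voltmeter" arithmetic: bandgapReading is the 10-bit ADC result
// of measuring the 1.1 V reference against Vcc. Since reading/1023 = 1.1/Vcc,
// Vcc(mV) = 1.1 * 1023 * 1000 / reading ≈ 1125300 / reading.
long railMillivolts(long bandgapReading) {
  return 1125300L / bandgapReading;
}
```

So a bandgap reading of 341 counts implies a 3.3v rail; note that the 1125300 constant itself assumes the reference really is 1.1v, which is exactly the ±0.1v assumption that comes back to bite later in this post.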

I started testing units in the refrigerator to provide a decent range for the calibration:

and strange artifacts started appearing in the log. The voltage readings from both the main battery and the RTC backup battery were *rising* when the units went into the refrigerator, and this didn’t seem to make sense given the effect of temperature on battery chemistry:

I think what was actually happening was that the output from the regulator on the main board, which provided the ADC’s reference voltage for the battery readings, was falling with the temperature.

When I dug into what caused *that* problem, I discovered that temperature affects bandgap voltages in the *opposite* direction by as much as 2 mV/°C. So heating from 0°C to 40°C (and some loggers will see more than that…) reduces the 328P’s internal reference voltage by as much as a tenth of a volt. In fact, bandgap changes like this can be used to measure temperature without other hardware. This leaves me with a problem so fundamental that even if I calculate S&H constants from a properly constructed resistance table, I’d still be left with substantial accuracy errors over my expected range. Argh!

**Becoming Well Adjusted**

These wandering voltages meant I was going to have to use the internal voltmeter trick every time I wanted to read the thermistor. It was mildly annoying to think about the extra power that would burn, and majorly annoying to realize that I’d be putting ugly 10bit stair-steps all over my nice smooth 15bit data. This made me look at that final temperature calculation again:

Temp(°C) =

1/([ln(ThermResistance/Tnominal R)/βeta]+ [1.0 / (NomTemp + 273.15)]) -273.15

which I interpret as:

= fixed math( [(ADC outputs / Therm. nominalR) / Therm. βeta] + (a #) ) – (a #)

Perhaps tweaking the thermistor’s nominal value (which I only know to ±5% anyway) and changing the (fictional) βeta values would compensate for a multitude of sins, including those voltage reference errors? Then I could just pretend that (rail/aref) scaling factor had a fixed value, and be done with it.

So in my early tests, all I had to do was adjust those two constants until the thermistor readings fell right on top of the reference line. Easy-peasy!

Well …*almost.* Repeat runs at 15bit (1024 samples) and 14bit (256 samples) didn’t quite yield the same numbers. Applying the best-fit Nominal and βeta values obtained from a 15bit run to 14bit data moved the thermistor line down by 0.05°C across the entire range (and vice versa). So the pin-toggling method I used to generate the dither noise introduces a consistent offset in the raw ADC readings. While that doesn’t completely knock me out of my target accuracy, I should generate a new calibration for each oversampled bit depth I intend to use. It’s still good to know that the dithering offset error is consistent.

**Throwing a Big Hairy Fit**

I was pleased with myself for the simplicity of the Nominal/βeta approach for about two days; then I pushed the calibration range over 40° with a hot water bath:

This gave me targets at around 40, 20 and 5°C. But no combination of Nominal & βeta would bring all three into my accuracy range at the same time. Fitting to the 20 & 40 degree data pushed the error at 5°C beyond 0.2°:

…and fitting to 20 & 5 pushed the 40°C readings out of whack. After more tests I concluded that tweaking βeta equation factors won’t get you much more than 20° of tightly calibrated range.

My beautiful plan was going pear-shaped, and as I started grasping for straws I remembered a comment at the end of that Embedded Related article:

*“… in most cases the relationship between voltage divider ratio and temperature is not that nonlinear. Depending on the temperature range you care about, you may be able to get away with a 3rd-order polynomial or even a quadratic..”*

Perhaps it was time to throw βeta under the bus, and just black-box the whole system?

To find out, I needed to prune away the negative temperature regions where the voltage divider had flat-lined, and remove the rapid transitions, since the thermistor responds to changes more quickly than the Si7051:

Then it was time for the dreaded Excel trend line:

The trick for getting workable constants is to right-click the default equation that Excel gives you, re-format it to display scientific notation, and then increase the number of displayed digits to at least six.

Some people use the LINEST function to derive these polynomial constants but I’d advise against it because seeing the raw plot gives you a chance to spot problems *before* you fit the curve. When I generated the first Temp vs ADC graph, the horizontal spread of the data points showed me where the thermistor and the reference thermometer were out of sync, so I removed that data. If I had generated the constants with =LINEST(Known Y values, X values^{1,2,3,4}) I could have missed that important step.

For the following graphs, I adjusted the trend line to display nine digits:

It took a 4th order polynomial to bring the whole set within ±0.1° of the reference line and 5th order did not improve that by much. Now I *really* have no idea where the bodies are buried! And unlike the βeta equation, which just squeaks in under the calculation limits of an Arduino, it’s beyond my programming ability to implement these poly calcs on a 328 with high bit depth numbers. I certainly won’t be writing those lunkers on the bottom of each logger with a sharpie, like I could with a pair of nominal/βeta constants.
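For what it’s worth, the polynomial evaluation itself is compact if you use Horner’s method; the real obstacle on a 328P is that its `float` is single-precision, so whether this preserves the oversampled resolution is an open question. The coefficients below are placeholders, not my calibration values:

```cpp
// Horner's method: evaluates c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3 + c[4]*x^4
// with four multiplies and four adds, instead of computing each power of x.
// Coefficients are placeholders; single-precision float may be the limiting
// factor on a 328P when x is a high-bit-depth ADC count.
float poly4(float x, const float c[5]) {
  float y = c[4];
  for (int i = 3; i >= 0; i--) {
    y = y * x + c[i];
  }
  return y;
}
```

One practical mitigation is to center or rescale the ADC counts before fitting in Excel, so x stays small and the intermediate terms don’t swamp the float mantissa.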

This empirical fit approach would work for *any* type of sensor I read with ADC oversampling, and it’s so easy to do that I’ll use it as a fallback method whenever I’m calibrating new prototypes. In this case though, a little voice in my head keeps warning that wrapping polynomial duct tape around my problems, instead of simply using the rail voltage for both aref & the divider, crosses some kind of line in the sand. Tipping points can only be predicted when your math is based on fundamental principles, and black boxes like this tend to fail dramatically when they hit one. But darn it, I wanted those extra bits! Perhaps for something as simple as a thermistor, I’ll be able to convince the scientist in the family to look the other way.

**Addendum 2017-04-28**

Seeing that trend line produce such a good fit to the temperature data made me think some more about how I was trying to stuff all those system-side errors into the βeta equation, which just doesn’t have enough terms to cope. By comparison, the Steinhart-Hart equation is a polynomial already, so perhaps if I could derive some synthetic S&H constants (since my thermistors didn’t come with any), it could peg that floppy ADC output to the reference line just as well as Excel did?

Once again, I rolled all the offsets & voltage errors into the thermistor resistance calculation by setting the (rail voltage/internal aref) scaling factor to a fixed value of 3:

**1) MaxADCReading** (w scaling factor to compensate for the two voltages)

= (2^(OverSampledADCbitDepth) * 3) - 1

**2) Thermistor Resistance** (w series resistor on high side & thermistor to GND)

= Series Resistor Value / ((MaxADCReading / OverSampledADCreading)-1)

and I went back to that trimmed 40-20-5 calibration data to re-calculate the resistance values. Then to derive the constants, I put three Si7051 temp. & thermistor resistance pairs into the online calculator at SRS:

(Note that my nominal is not 25°C any more. There are plenty of premade spreadsheets on the web for this from the companies who make sensors, or you can calculate them in Excel.)

With those Steinhart-Hart model coefficients in hand, the final calculation becomes:

**3) Temp(°C) =** 1/( **A** + **B**·ln(ThermR) + **C**·(ln(ThermR))^3 ) - 273.15
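In code that last step is short; the A, B, C values below are placeholders standing in for whatever the SRS calculator gives you from your own temp/resistance pairs:

```cpp
#include <math.h>

// Steinhart-Hart: 1/T(kelvin) = A + B*ln(R) + C*(ln(R))^3
// A, B, C come from the online SRS calculator; the arguments here are
// whatever constants your own three calibration pairs produce.
float steinhartTempC(float thermR, float A, float B, float C) {
  float lnR  = log(thermR);
  float invT = A + B * lnR + C * lnR * lnR * lnR;  // 1/T in kelvin
  return 1.0 / invT - 273.15;
}
```

A useful sanity check: with B and C zeroed and A set to 1/298.15, any resistance input must come back as 25.0°C, since the model then ignores R entirely.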

and when I graphed the **S&H** **(in purple)** output against the **Si7051** **(blue)** and the 4th order **poly** **(yellow)**, I was looking at these beauties:

and that fits *better* than the generic poly; nearly falling within the noise on those reference readings. With the constants being created from so little data, it’s worth trying a few temp/resistance combinations for the best fit. And this calibration is only valid for that one specific board/sensor/oversampling combination; but since I’ll be soldering the thermistors permanently into place, that’s ok. I’m sure if I hunt around, I’ll find a code example that manages to do the S&H calculations safely with long integers on the Arduino.

So even with cheap parts, oversampling offsets & bandgap reference silliness, I still made it below ±0.2°C over the anticipated temperature range. Now, where did I put that marker…