A quick glance at the code confirms that temperatures are optimised when they are near room temperature. If the temperature can be represented as the sum of room temperature (21C, 294.15K) and an 8-bit signed integer (i.e. a value in the interval [-128; 127]), it is saved as that signed 8-bit offset. This covers temperatures in the interval [-107C; 148C]*. Otherwise, it's saved as a full 16-bit integer (in the interval [0; 10000]).
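To make that decision concrete, here is a minimal C++ sketch of the save-side check as I understand it. This is not the game's actual code: the constant name, the function name and the use of std::optional are my own, and as the footnote below explains, the real code apparently uses a slightly narrower range for the offset.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <optional>

// 21C expressed in Kelvin.
constexpr double ROOM_TEMP_K = 294.15;

// Returns the compact 8-bit offset if the temperature is exactly 21C plus a
// whole number of degrees in [-128; 127]; otherwise std::nullopt, meaning the
// full 16-bit Kelvin integer has to be written instead.
std::optional<int8_t> compactOffset(double kelvin) {
    double offset = kelvin - ROOM_TEMP_K;  // whole degrees Celsius away from 21C
    if (offset == std::floor(offset) && offset >= -128.0 && offset <= 127.0) {
        return static_cast<int8_t>(offset);
    }
    return std::nullopt;
}

int main() {
    // 21C + 5 = 26C (299.15 K) -> fits the compact form, stored as the byte 5.
    std::printf("%d\n", static_cast<int>(*compactOffset(ROOM_TEMP_K + 5.0)));
    // 300 K = 26.85C -> not a whole-degree offset from 21C, falls back to 16-bit.
    std::printf("%s\n", compactOffset(300.0) ? "compact" : "full 16-bit");
}
```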
In both cases the temperature is saved as an integer, but when the offset from 21C is saved, at load time it's computed as offset + 294.15, so it becomes a non-integer value on the Kelvin scale but an integer value on the Celsius scale. Conversely, when it's saved as a full 16-bit integer, it stays an integer value on the Kelvin scale but ends up as a non-integer value on the Celsius scale (ending in .85, or .15 when below 0).
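A small sketch of the two decode paths and the fractional parts they produce (again, the names are mine, not the game's; the real loader obviously reads the values from the save file instead of taking them as arguments):

```cpp
#include <cstdint>
#include <cstdio>

constexpr double ROOM_TEMP_K = 294.15; // 21C

// Compact form: whole degrees Celsius, non-integer Kelvin.
double fromOffset(int8_t offset) { return offset + ROOM_TEMP_K; }

// Full form: whole Kelvin, non-integer Celsius.
double fromKelvin(uint16_t kelvin) { return kelvin; }

int main() {
    double a = fromOffset(5);    // 299.15 K = exactly 26C
    double b = fromKelvin(300);  // 300 K    = 26.85C (ends in .85)
    double c = fromKelvin(270);  // 270 K    = -3.15C (ends in .15, below 0C)
    std::printf("%.2f K = %.2f C\n", a, a - 273.15);
    std::printf("%.2f K = %.2f C\n", b, b - 273.15);
    std::printf("%.2f K = %.2f C\n", c, c - 273.15);
}
```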
*: That is not quite true, as the lowest temperature that actually gets saved in the offset form is -105C. I guess the game tries to stay on the safe side with its interval checks.