I started working again on the multicell voltmeter. I soldered header pins onto the two 16-channel multiplexers and plugged them into a breadboard with the Arduino. Some basic testing proved that my optimism was completely unfounded, so, following best testing practice, I peeled off layers until I got to the bit that worked, then tested each addition in turn. First I connected a battery cell between the analog pin and ground and checked that it showed the correct voltage. The 16 cells are scanned twice in the program: once at setup to determine how many cells the battery has, and again while displaying the actual graph and voltages. The first part was just not working, so I kept adding debug statements until I was (mostly) successfully detecting and showing a voltage between the A0 pin and ground. Unfortunately, it was a different voltage to the one displayed on the screen, and both differed from the actual voltage of the cell being tested.
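The setup-time pass can be sketched roughly like this. This is a minimal sketch, not the actual program: `countCells`, the threshold, and the readings array are all hypothetical, and a real sketch would read each multiplexer channel in turn rather than take an array.

```cpp
// Hypothetical helper: given raw 10-bit readings from the 16 multiplexer
// channels, count how many consecutive channels carry a plausible cell
// voltage. A channel with no cell attached should read near zero.
int countCells(const int raw[], int channels, int threshold) {
    int count = 0;
    for (int i = 0; i < channels; ++i) {
        if (raw[i] > threshold) {
            ++count;   // this channel has a cell on it
        } else {
            break;     // first empty channel ends the pack
        }
    }
    return count;
}
```

The display pass would then loop over just the first `countCells` channels.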
Voltage read through an analog pin is represented as a 10-bit value; in theory the value is the actual voltage divided by the reference voltage (5v) and multiplied by 1024. After much hair tearing, it became apparent that the 5v reference voltage, which should be fixed, actually varies when the TFT LCD screen is attached, and also depends on how much is being displayed on the screen! Without the screen, the reference was almost exactly 5 volts (4.99 in fact); with the screen attached it dropped to 4.80, and when working hard, down to 4.74. This meant the 3.34 volts of the cell was being reported as 3.44 volts before initialisation of the screen, and 3.53 after.
The inaccurate 5v reference voltage is well documented, but more as a generic problem than specifically as the result of an attached device. The proposed solutions involve using either an external reference voltage or one of the two internal reference voltages (1.1 volts or 2.56 volts). An external reference voltage is just too hard at this stage, and references of 1.1 volts and 2.56 volts are not much use when trying to measure 3.0 to 4.2 volts. It also appears that neither of those values is particularly accurate or stable!
While reading up on all this I noticed the 3.3 volt pin was rarely mentioned, but on a whim I used my multimeter to measure the voltage at the 3.3v pin in all three circumstances, and it stayed stable at 3.31 volts. It seems that the voltage drop caused by a 5v device only affects the 5v supply, not the 3.3v one.
So, by connecting the known-accurate 3.3v pin to another analog pin and doing an analog read on it, we can measure it against the current value of the “5v reference voltage”. That reading gives us a starting point for working out the error in all measured voltages. With a true 5v reference, the 3.3v pin should read about 676 in the 10-bit value ((3.3 / 5) * 1024). Divide this by what it actually reads (e.g. 676/715 if 715 was the value returned) and you get an adjustment factor which can be used to correct any measured voltage to an accurate value. Simply calculate the factor once per pass through the main loop and use it to correct each voltage as it is read from its analog pin; then if the 5v reference voltage changes for any reason (e.g. switching the screen on or off), the measured voltages remain accurate.
Accuracy is down to under 1/100th of a volt, and I am happy with that!
Now that I have an accurate measurement of the voltage, I can get down to making the two 16-channel multiplexers work and measuring each cell in turn.
A good day's testing and development!