The Netduino forums have been replaced by new forums at community.wildernesslabs.co.
This site has been preserved for archival purposes only
and the ability to make new accounts or posts has been turned off.

I'm using an ADC port to measure the voltage of a solar-charged battery. I use the standard 3.3V Aref and the default 0-1023 range. With a voltage divider (5.6k + 30k) I'm able to measure just over 20V.

Logic tells me (as do some examples I found) that I should calculate the actual voltage by dividing by the number of steps (float actualVoltage = (float)adcValue / 1024 * 3.3). Except it doesn't match the voltage I'm measuring with my DVM. If I use 1023 instead (float actualVoltage = (float)adcValue / 1023 * 3.3), the voltages match perfectly. I first missed the mismatch when measuring the voltage directly at the port, since the difference is much smaller there. At the solar panel/battery the difference is more noticeable.
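As a sanity check on the divider arithmetic, here is a minimal Python sketch. It assumes the 5.6k resistor is the lower leg of the divider (across the ADC input), which the post doesn't state explicitly:

```python
AREF = 3.3          # ADC reference voltage, in volts
R_TOP = 30_000.0    # 30k resistor, assumed to be the upper leg of the divider
R_BOTTOM = 5_600.0  # 5.6k resistor, assumed to be the lower leg (across the ADC input)

# The divider scales the input down by R_BOTTOM / (R_TOP + R_BOTTOM),
# so the largest input that still reads full scale is:
v_in_max = AREF * (R_TOP + R_BOTTOM) / R_BOTTOM
print(f"{v_in_max:.2f} V")  # just over 20 V, matching the post
```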

Now, simplifying the problem, connecting the ADC port directly to 3.3V gives me 3.2968 when dividing by 1024 and exactly 3.3000 when dividing by 1023.
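The two candidate formulas can be compared directly. A Python sketch of the same arithmetic as the C# expressions above, using the top reading of 1023:

```python
AREF = 3.3
adc = 1023  # reading with the ADC pin tied directly to 3.3 V

v_div_1024 = adc / 1024 * AREF  # divide by the number of steps
v_div_1023 = adc / 1023 * AREF  # divide by the top code

print(f"{v_div_1024:.4f}")  # 3.2968
print(f"{v_div_1023:.4f}")  # 3.3000
```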

Can anyone tell me, what the logic is behind this? Or am I missing the logic here completely? Thanks in advance!

Hi Codeblack.
Consider the range 0..1024 as a segment built of 1024 units: the first spans from 0 to 1, the second from 1 to 2, and so on. The segment maps exactly onto the voltage range 0.0V to Vref, and the reading indicates which unit "contains" the actual voltage at the input.
So, even apart from conversion errors (I mean the hardware inside the microcontroller), you should consider any reading affected by an error of *AT LEAST* ±1/2 LSB (LSB = least significant bit).
However, there are always conversion errors too. If you take a look at the microcontroller specs, they state that the absolute accuracy of the ADC is ±4 LSB.
About the DVM: one LSB (i.e. the span of one unit) is about 3.22mV, which is less than 0.1% of full scale.
A normal handheld DVM has a precision of 1%, or maybe 0.5%, plus the last digit can be off by ±1 unit. So the comparison isn't really meaningful.
If you checked the voltages using a lab DVM (4+ digits), that would be much more reliable.
Another point to consider is Vref. It is probably *NOT* exactly 3.300V: the board works perfectly with 3.32V or 3.28V. And bear in mind that a deviation as small as 3.22mV produces a one-LSB error in the reading.
So, don't expect to obtain full 10-bit precision. I'd consider 9.5 bits at most.
If you take the voltage divider (the resistors) into account, the overall accuracy could be around 9 bits or less.
Cheers

Biggest fault of Netduino? It runs by electricity.

Hi Mario,
Thanks for the quick and helpful reply. This morning I came to roughly the same explanation myself, though not in such detail. It's been a while since I've worked with electronics, especially analog. I'm so used to software, with exact calculations and perfect results. If my software were off by as much as 0.1%, I'd probably be thrown off the project!
I think what threw me off was the fact that the results matched exactly when dividing by 1023 instead of 1024. But, as you mentioned, both the DVM and the ADC are probably a bit off anyway.
Thanks again.

Everything Mario says is correct, but I think he missed the question slightly.
The ADC returns a value from 0 to 1023, not 0 to 1024.
0 to 1023 is 1024 different values, because zero is a value too.
So an ADC value of zero corresponds to 0V and an ADC value of 1023 corresponds to 3.3V.
That is why you divide by 1023 and not 1024.

So an ADC value of zero corresponds to 0V and an ADC value of 1023 corresponds to 3.3V.

To be precise, the ADC has 1024 quantization levels: the value of zero corresponds to any analog input in the range [0, Aref/1024), the value of one represents any input in [Aref/1024, 2*Aref/1024), and so on; the last value represents [1023*Aref/1024, Aref). Thus, the trick here is the interpretation of the measured value.
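This interpretation, each code naming an interval rather than a single point, can be sketched in Python (the constants are from this thread; the function name is just for illustration):

```python
AREF = 3.3
LEVELS = 1024  # quantization levels of a 10-bit ADC

def code_to_interval(n: int) -> tuple[float, float]:
    """Half-open voltage interval [lo, hi) represented by ADC code n."""
    return n * AREF / LEVELS, (n + 1) * AREF / LEVELS

lo0, hi0 = code_to_interval(0)           # code 0 covers 0 V up to one LSB (~3.22 mV)
lo_top, hi_top = code_to_interval(1023)  # the top code reaches, but never equals, Aref
```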

The ADC returns a value from 0 to 1023, not 0 to 1024.
0 to 1023 is 1024 different values, because zero is a value too.
So an ADC value of zero corresponds to 0V and an ADC value of 1023 corresponds to 3.3V.
That is why you divide by 1023 and not 1024.

Not exactly.
Consider the nominal "width" of the ADC reading: it is 1024, and it maps exactly onto Vref (typically 3.3V).
Now we must slice that segment into smaller parts: 1024 "tiles" in total (numbered from 0 to 1023, if you like).
By the way, this number is the index of the tile, *NOT* the distance from the origin of the range.
Let's depict it:

0V                                          Vref
|--0--|--1--|--2--|--3--|  ...  |--1023--|

So, when you read "1" from the ADC, the actual value falls within the tile spanning from about 3.22mV to 6.45mV. Since there's no way to know the exact point within that tile, you must take into account that the ADC reading carries an inherent uncertainty of ±1/2 LSB (i.e. half a tile).

FYI, there are some tricks to improve the precision "artificially".
If you add some white noise to the input signal, your ADC readings will fall within a range centered on the real value. If you then take a good average over many samples, you may gain a bit or two beyond the declared accuracy of the ADC.
However, that is largely a mathematical trick: there are other errors involved in the A/D conversion, so all that work may not bring any real benefit.
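The averaging trick described here can be simulated. A hedged Python sketch with an idealized quantizer (real hardware adds the other error sources mentioned above, so the gain in practice is smaller):

```python
import random

AREF = 3.3
LEVELS = 1024

def ideal_adc(v: float) -> int:
    """Idealized 10-bit quantizer: the index of the tile containing v."""
    return min(LEVELS - 1, max(0, int(v / AREF * LEVELS)))

def noisy_read(v: float, noise_v: float = 0.005) -> int:
    """One reading with ~5 mV of simulated white noise added to the input."""
    return ideal_adc(v + random.gauss(0.0, noise_v))

random.seed(42)
v_true = 1.6005  # a voltage that sits between two quantization levels
mean_code = sum(noisy_read(v_true) for _ in range(4096)) / 4096
v_est = (mean_code + 0.5) * AREF / LEVELS  # average plus a half-LSB offset

# v_est lands well inside one LSB (~3.22 mV) of v_true
```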

Cheers


Thanks, CW2, for taking the time to explain that.
Mario didn't miss anything either.
So we should be dividing by 1024 and not 1023.
So...
for an ADC value of "n" (assuming Aref = 3.30V):

    n * 3.3/1024 <= voltage < (n+1) * 3.3/1024

So to calculate a voltage that represents the ADC value with the least quantising error, you need to add half an LSB:

    voltage(mV) = (n + 0.5) * 3300/1024    quantising error = ±0.5 * 3300/1024
                = n * 3.22265625 + 1.61    quantising error = ±1.61mV
I get what Mario was saying about there being other sources of error eg Aref and conversion error.
Do I have this right CW2 and Mario?
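The arithmetic above checks out; a small Python spot-check (constants from the thread, function name for illustration only):

```python
AREF_MV = 3300.0
LEVELS = 1024

LSB_MV = AREF_MV / LEVELS    # 3.22265625 mV per step
HALF_LSB_MV = LSB_MV / 2     # ~1.61 mV worst-case quantising error

def midpoint_mv(n: int) -> float:
    """Centre of the tile for ADC code n, in millivolts."""
    return (n + 0.5) * AREF_MV / LEVELS

# Expanding the formula: midpoint_mv(n) == n * 3.22265625 + 1.611328125
```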

So the number returned by the ADC should not be converted into a single analog value, but into a range of values:
float rangeStart = (float)adcValue / 1024 * 3.3;
float rangeEnd = (float)(adcValue + 1) / 1024 * 3.3;
And the actual analog signal is somewhere in between. This makes perfect sense and, in my case, makes the measurements from the Netduino match with the DVM measurements.
I think the trick is to not get carried away by the number of decimals in the resulting float values, which of course says nothing about the accuracy of the number. Especially with A/D conversions.

For Mike, I'd suggest having a look at Wikipedia: it explains all the main sources of inaccuracy very well.

For Codeblack, that's right about the decimals. If you had to measure your height with just a 1-meter-long stick, how could you state your height to centimeter precision?
Anyway, I wouldn't throw away the project. There are people expanding the Netduino with an external ADC chip. As usual, specialized chips work much better than a general-purpose part doing a bit of everything.
The Netduino's embedded ADCs are good for rough readings, such as 1%-accuracy position/voltage (e.g. a potmeter, a level, etc.).
Otherwise, why would brands like Analog Devices create high-precision ADC chips?
Cheers


I can't measure 0.0mV even when I tie the analog pin to Gnd; it still reads 3.3 - 6.4mV. I'm using an RTC on A4/A5. If I remove the RTC and remove its declaration from the code, I can reach 0.0mV, but it's impossible while the RTC is plugged in.

It's annoying... most of my sensors have sensitivity in the millivolt range. Does anyone have the same experience? Any way out?