So, I am somewhat curious about the backend implementation of the .NET Micro Framework.
I am programming a temperature monitoring system for my house and using the Netduino to do all the interpretation and logic to drive everything.
I have been using floats, since my temperature sensor is only accurate to ~0.1 degrees. My issue is that AnalogInput, most of the built-in Math library functions, and any decimal literal I type in the program seem to use doubles, which I have been casting to float for my math.
Is this actually saving me anything? Since we have a 32-bit uC, I presume this saves a memory location and a register per operation, because a double is 64 bits; but for all I know I am adding a cast operation without gaining anything, because all the calculations may still be done in double.
I have a code snippet below of what I am talking about:
private static float GetTemp()
{
    return (((float)0.442368) * (float)_adc.Read()) - ((float)4.92304);
}
0.442368, 4.92304, and _adc.Read() are being treated as doubles according to the IDE, and I am casting each of them to float.
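For what it's worth, here is the same method written with C#'s `f` literal suffix, which makes the constants float to begin with instead of double literals cast down. This is only a sketch of my setup: the pin choice is hypothetical and the calibration constants are the same ones from above, and I can't run it off the hardware.

```csharp
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

public class TempMonitor
{
    // Hypothetical pin choice; adjust for the actual wiring.
    private static AnalogInput _adc = new AnalogInput(Pins.GPIO_PIN_A0);

    // Same conversion as the snippet above, but using 'f'-suffixed
    // float literals instead of casting double literals. Read()
    // returns an int, which widens to float implicitly in the multiply.
    private static float GetTemp()
    {
        return (0.442368f * _adc.Read()) - 4.92304f;
    }
}
```

Either way the IDE stops flagging the expression as double; my question is whether that actually buys me anything at runtime.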