
NetDuino Quadrocopter
#21
Posted 13 January 2011 - 08:43 PM
#22
Posted 13 January 2011 - 08:52 PM
#23
Posted 13 January 2011 - 09:52 PM
#24
Posted 13 January 2011 - 10:55 PM
#25
Posted 14 January 2011 - 12:10 AM
#26
Posted 14 January 2011 - 12:35 AM
#27
Posted 14 January 2011 - 02:30 AM
I didn't mean to start any kind of war here either, but I must object to the radical inefficiencies you are both describing. There's a difference between 20 million operations and processing 20 million operational messages a second on one thread while updating a domain with 100 million objects in memory. Perhaps this is the disconnect with hardware programmers who are hardware first, software second. I'll admit I'm a noob on the hardware side, but I can't imagine it being this inefficient. No program I have ever made tolerates a 20-second lag because of GC; that's drag-and-drop college-kid stuff. These are tiny objects, and GC should have no impact if you are cognizant of it from day one (which I always am). I have worked in .NET since the beta. I have run billions of dollars through .NET at millisecond efficiency (I agree on the sleep statement, by the way; the .NET timer classes are awful), and I have beaten other programs to the punch ($) by being more efficient and agile, including ones written in C and C++ by programmers who are less aware of speed given the bias of their language.
I'm not starting a war here either, and neither is Chris; we're simply trying to inform you of the limitations of an interpreted language running on a 48MHz processor. We're not saying it can't be done - you just seem to think you're going to get the same C# performance out of a tiny little SoC device that you're used to at work, on the full .NET Framework.
The financial application you are running is, I'm guessing, on a dedicated server with multiple quad- or hex-core Xeons running in the 2.2-3.2GHz region, with tens of gigabytes of RAM. This is a single 48MHz chip with 60KB of RAM. Full .NET is JIT-compiled; NetMF is fully interpreted.
During the day I work on mine scheduling software. It uses complex genetic algorithms, and we run our service on a bunch of 24-core Xeon blades (4x hex-core) with 8GB of RAM per core. With the size of the data we are processing (1-2 million blocks with all the genetic crossovers, village and population data, etc.), optimisations that shave 10 microseconds off execution time can save minutes. I do a lot of optimisation work and have been working with the CLR since the beginning on the full framework, as well as on CE and ASP.NET.
NetMF is a whole different beast. I've also worked heavily with the NetMF CLR; I have a large library of methods in C# that are faster than their native CLR equivalents, sometimes 3-5x faster. I'm not a college tweenie or an amateur developer; I've been employed as a software engineer for the past 8 or 9 years, working only with C# in the office. I say this not to brag, but to give what I'm saying some weight. I know the CLR inside out, I have no issue writing MSIL, I've published articles on C#, and I know how to optimise C#. In my own time, I write C#, C and C++ on embedded systems.
A 48MHz ARM running IL natively would be fine, a 48MHz ARM running a JIT would be OK, but a 48MHz ARM running C++ code that interprets IL is slow. Each IL instruction takes dozens, hundreds, or even thousands of native instructions to execute (depending on what it is). You don't have the pipelining or co-processing that a Xeon or other Intel/AMD chips have.
I'm not a hardware person who's learnt software; I'm a software guy who, over the past 4 years, has learnt a lot about hardware through trial and error.
What exactly would cause this randomness and unpredictability?
We do have the source code to this, so can we point to the spot that is causing the problem?
C# events, threading and the garbage collector (among other things) can interrupt the execution flow of your code, causing this unpredictability. Sample code? Create an application, run it - there you go. I can guarantee that you will not see the same number of clock ticks pass for every single loop of code unless it is an ultra-simple app like blinking LEDs. The simple fact is, this is a managed language - you don't have total control over what happens when and how.
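For instance, a small sketch along these lines (just DateTime ticks and a made-up bit of work per pass; depending on the port the clock resolution may be coarse, so you might need a bigger inner loop) will show the variation - run it and compare min against max:

using System;
using Microsoft.SPOT;

namespace LoopJitter
{
    public class Program
    {
        public static void Main()
        {
            long min = long.MaxValue;
            long max = 0;

            for (int i = 0; i < 1000; i++)
            {
                long start = DateTime.Now.Ticks;

                // A fixed amount of work per pass; the measured tick counts
                // still won't be identical from one pass to the next.
                int sum = 0;
                for (int j = 0; j < 500; j++)
                {
                    sum += j;
                }

                long elapsed = DateTime.Now.Ticks - start;   // ticks are 100 ns units
                if (elapsed < min) min = elapsed;
                if (elapsed > max) max = elapsed;
            }

            Debug.Print("min: " + min.ToString() + " ticks, max: " + max.ToString() + " ticks");
        }
    }
}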
#28
Posted 14 January 2011 - 02:45 AM
The simple fact is, this is a managed language - you don't have total control over what happens when and how.
Shouldn't it be deterministic? Given that it is just our code/firmware running on a closed system, why wouldn't every loop iteration be deterministic?
(I am reminded that we landed on the Moon with less memory and processing power than is contained on a Netduino.)
#29
Posted 14 January 2011 - 02:56 AM
Shouldn't it be deterministic? Given that it is just our code/firmware running on a closed system, why wouldn't every loop iteration be deterministic?
(I am reminded that we landed on the Moon with less memory and processing power than is contained on a Netduino.)
If you're not using events or timers and are not creating objects which require garbage collection, .NET MF can be near-deterministic. Interrupts will still fire to queue up incoming data (on I2C/SPI/UART buses for instance) and the scheduler will still interrupt the current thread every 20ms to make sure nothing else is waiting, but generally things will run pretty smoothly.
Windows CE recently got "native real time" support. We could help do the same thing for .NET MF if there was enough demand for it, but it would probably need to happen in the core. In the meantime we can certainly create interrupt-driven features with "near-real time requirements" although that's not nearly as easy as writing managed code.
Chris
#30
Posted 14 January 2011 - 03:15 AM
Given that it is just our code/firmware running on a closed system, why wouldn't every loop iteration be deterministic?
I would interpret this as saying that every loop iteration is going to be variable but more-or-less deterministic until your quadcopter takes its first reading from the outside world, and at that point you'll lose the determinism.
(I am reminded that we landed on the Moon with less memory and processing power than is contained on a Netduino.)
And Columbus made it to the New World with even less!

Just as a micro-point on the topic of reducing GC, I kinda like the style of programming where all your objects are structs, and you pass around refs everywhere (being sure to pass "ref x" rather than "x" in order to avoid making a copy). This will hardly solve all of your problems, but might be a nice tip to keep in mind.
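A quick illustrative sketch of that style (Vector3, Imu.Integrate and ReadGyro are just names made up for the example):

// Everything lives in structs created once up front, so nothing is
// allocated inside the flight loop and the GC has nothing to do.
public struct Vector3
{
    public float X, Y, Z;
}

public static class Imu
{
    // "ref" avoids copying the whole struct on every call.
    public static void Integrate(ref Vector3 angle, ref Vector3 rate, float dt)
    {
        angle.X += rate.X * dt;
        angle.Y += rate.Y * dt;
        angle.Z += rate.Z * dt;
    }
}

// In the main loop you'd then do something like:
//   Vector3 angle = new Vector3();   // created once, before the loop
//   Vector3 rate  = new Vector3();
//   while (true)
//   {
//       ReadGyro(ref rate);                       // hypothetical sensor read
//       Imu.Integrate(ref angle, ref rate, 0.01f);
//   }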
#31
Posted 14 January 2011 - 04:02 AM
#32
Posted 14 January 2011 - 04:18 AM
#33
Posted 14 January 2011 - 04:23 AM
#34
Posted 14 January 2011 - 04:30 AM
#35
Posted 14 January 2011 - 04:34 AM
#36
Posted 14 January 2011 - 06:11 AM
I agree, some code out there is absolutely hideous - but everyone learns somewhere. I'd hate to look at my first attempts at coding (which I probably thought were brilliant).
As far as optimising goes, I'd suggest you start by making the code as easy to work with as possible, then look for the slowest sections. There is no profiler for NETMF, so it's pretty much a matter of knowing where it is going to be slow, building a test harness, executing that code 1000 times, making a change, executing it another 1000 times, and seeing whether there was a speed improvement - rinse and repeat.
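A rough sketch of that sort of harness (DoWork is just a stand-in for whatever you're measuring, and the timing is plain DateTime ticks):

using System;
using Microsoft.SPOT;

public static class Harness
{
    // Stand-in for the code you actually want to measure.
    private static void DoWork()
    {
        int x = 0;
        for (int i = 0; i < 100; i++)
        {
            x += i;
        }
    }

    public static void Run()
    {
        const int iterations = 1000;

        long start = DateTime.Now.Ticks;
        for (int i = 0; i < iterations; i++)
        {
            DoWork();
        }
        long elapsedTicks = DateTime.Now.Ticks - start;

        // 10,000 ticks per millisecond.
        Debug.Print(iterations.ToString() + " runs took " + (elapsedTicks / 10000).ToString() + " ms");
    }
}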
Rather than using multiple Netduinos to handle the task (that's basically the Mythical Man-Month), go with a real-time co-processor - something high-level such as a Propeller or low-level such as a dsPIC. That chip would handle the PID loop (or whatever control pattern you use) and take its input from the NetMF chip, while NetMF handles navigation, position, etc.
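For what it's worth, the PID maths itself is only a few lines whichever chip ends up running it - something like this sketch, with the gains obviously needing tuning for the airframe:

// Minimal PID controller; Kp, Ki and Kd would need tuning for the airframe.
public struct Pid
{
    public float Kp, Ki, Kd;

    private float _integral;
    private float _previousError;

    // setpoint = desired value, measured = sensor reading, dt = seconds since last update.
    public float Update(float setpoint, float measured, float dt)
    {
        float error = setpoint - measured;

        _integral += error * dt;
        float derivative = (error - _previousError) / dt;
        _previousError = error;

        return Kp * error + Ki * _integral + Kd * derivative;
    }
}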
#37
Posted 14 January 2011 - 08:51 AM
#38
Posted 14 January 2011 - 02:04 PM
I used this code:
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

namespace TightLoopTest1
{
    public class Program
    {
        public static void Main()
        {
            OutputPort d2 = new OutputPort(Pins.GPIO_PIN_D2, false);
            bool status = false;

            while (true)
            {
                status = !status;
                d2.Write(status);
            }
        }
    }
}
and got this @ 24MHz:

#39
Posted 14 January 2011 - 02:20 PM
Is this evidence of the "issue" - or is it just a precision issue with the Saleae Logic probe?
I would guess this is caused by the scheduler; it has to check what should be run next after the 20 ms time quantum elapses.
#40
Posted 14 January 2011 - 02:38 PM
