The Netduino forums have been replaced by new forums at community.wildernesslabs.co.
This site has been preserved for archival purposes only
and the ability to make new accounts or posts has been turned off.
Is data lost when UDP socket receive buffer is too small?
Answer - YES, bytes are lost when the buffer is too small.
I wrote a program that sends data to itself using a tx and an rx UDP socket on IP address 127.0.0.1, port 11000.
The main program sends three 10-byte arrays one after another, then sleeps 10 seconds.
(Each array has different data so it can be recognised when received.)
A second thread receives from the same port, and displays the data array received.
I added an extra one second delay after each receive to ensure that if the thread wakes up immediately when the first message is received, the socket has two messages waiting when the thread next wakes up and calls the receive method.
Results:
When the receive byte array is equal to the message size (10 vs 10), then each receive operation gets one whole message. No message is lost.
When the receive byte array is greater than the message size (15 vs 10), then the same behaviour occurs. One whole message per receive. (Parts of the subsequent messages are not added onto the first.)
When the receive byte array is twice the size of a message (20 vs 10), then the same behaviour occurs. One whole message per receive. (The entire subsequent message is not added to the first, even though it would fit in the buffer.)
When the receive byte array is less than the size of a message (7 vs 10), then the last bytes of every message are lost. The subsequent read(s) of the socket do NOT deliver the missing bytes. Subsequent reads deliver the first bytes of the next message.
So, it looks like the socket receive method returns one datagram at a time, and as long as the receive array is at least as long as the longest message, nothing will be lost.
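The experiment above can be sketched as a minimal loopback program. This is written for desktop .NET (System.Net.Sockets); on the Micro Framework the undersized receive reportedly just returns the truncated bytes, while desktop .NET throws a SocketException (MessageSize) instead. Either way, the tail of the oversized datagram is discarded.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpTruncationDemo
{
    static void Main()
    {
        IPEndPoint endpoint = new IPEndPoint(IPAddress.Loopback, 11000);

        Socket rx = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        rx.Bind(endpoint);

        Socket tx = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        tx.SendTo(Encoding.ASCII.GetBytes("AAAAAAAAAA"), endpoint); // first 10-byte datagram
        tx.SendTo(Encoding.ASCII.GetBytes("BBBBBBBBBB"), endpoint); // second 10-byte datagram

        byte[] small = new byte[7]; // deliberately smaller than a datagram
        try
        {
            // Micro Framework path (per the post): returns 7 bytes, tail lost
            int n = rx.Receive(small);
            Console.WriteLine("Received " + n + " bytes");
        }
        catch (SocketException ex)
        {
            // Desktop .NET path: the datagram is still consumed and truncated
            Console.WriteLine("Truncated: " + ex.SocketErrorCode);
        }

        // The missing 3 bytes of the first datagram are gone for good:
        // the next Receive delivers the SECOND datagram from its start.
        byte[] full = new byte[10];
        int n2 = rx.Receive(full);
        Console.WriteLine(Encoding.ASCII.GetString(full, 0, n2)); // second message, intact
    }
}
```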
I am fleshing out a class that will handle a socket to receive UDP datagrams.
Something has worried me when reading online documentation for the socket Receive() method.
Hence the question, to save me an afternoon of testing: does anybody know whether data is lost when attempting to receive from a socket into an array that is too small?
Obviously I can set up a big receive byte array to accept the largest message possible, but what happens if two messages are waiting to be read from the socket - do I lose part of the second message because it will not fit in the array, or do I just need to make a second read?
Or does the socket receive only return one datagram at a time?
Here is a cut down version of my code to create, bind and receive from a UDP socket:
// Length of receive byte array
IPAddress localIpAddress = IPAddress.Parse("10.0.0.1");
int port = 11000;
int maxByteLength = 100;
int bytesReceived;

// Create endpoint from the IP address and port number
IPEndPoint localEndPoint = new IPEndPoint(localIpAddress, port);

// Create an array to receive data
byte[] receiveBuffer = new byte[maxByteLength];

// Create UDP socket
Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

// Bind to port
listener.Bind(localEndPoint);

// Set socket timeout to 1 second,
// this sets how long the socket receive method will wait for a datagram
listener.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 1000);

// Attempt to receive a datagram
try
{
    bytesReceived = listener.Receive(receiveBuffer);
    // Data has been received.
    // What happens to oversize data if socket has received more bytes than fit in array?
}
catch (SocketException)
{
    // Timeout - no datagram received within 1 second
}
I would have expected the second read to get the remaining 3 bytes of the first packet.
Ditto,
I saw some other code recently (I think it was Python - am I allowed to say Python on the Forum?) where the coder read into a one-byte array initially; the first byte in his/her message scheme was the length. The code then entered a while loop that read further available bytes until a message of the correct length was completely received.
I thought the C# sockets would have the same behaviour and I would need to implement the same loop. As it turns out the behaviour is different, and with the way the UDP socket works the loop is not needed, as long as the longest message length is known.
There is one thing to note: I can't remember if the Python was using a TCP/IP socket or a UDP socket. Perhaps the C# TCP/IP sockets don't lose data. That's an experiment for another day.....
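The length-prefix scheme described above could be sketched in C# for a TCP stream like this. ReadMessage is an illustrative name, and the one-byte length header is an assumption taken from the description of the Python code.

```csharp
using System.IO;

static class Framing
{
    // Read one length-prefixed message from a connected stream.
    // A single Read on a TCP stream may return fewer bytes than
    // requested, hence the loop.
    public static byte[] ReadMessage(Stream stream)
    {
        int length = stream.ReadByte();          // first byte = payload length
        if (length < 0)
            throw new EndOfStreamException("Connection closed before header.");

        byte[] payload = new byte[length];
        int received = 0;
        while (received < length)                // loop until the whole message arrives
        {
            int n = stream.Read(payload, received, length - received);
            if (n == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            received += n;
        }
        return payload;
    }
}
```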
Yes, TCP sockets are different; they work the way we would expect.
Except that I've found that they don't always return the number of bytes that I request ... even if I sent more than that. I always do the reads in a loop to verify that I get all of the data I need for each read ...
That's what makes programming fun (and frustrating) - John
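The read-in-a-loop pattern John describes might look like the following over a raw TCP Socket. ReceiveExact is a hypothetical helper name, and knowing the expected byte count up front is an assumption; the point is only that Receive must be called repeatedly until everything has arrived.

```csharp
using System.Net.Sockets;

static class SocketHelpers
{
    // Keep calling Receive until 'expected' bytes have been read into
    // 'buffer'. TCP is a byte stream, so one Receive may return fewer
    // bytes than were sent, even when more data is on the way.
    public static void ReceiveExact(Socket socket, byte[] buffer, int expected)
    {
        int received = 0;
        while (received < expected)
        {
            int n = socket.Receive(buffer, received, expected - received, SocketFlags.None);
            if (n == 0) // peer closed the connection before we got everything
                throw new SocketException((int)SocketError.ConnectionReset);
            received += n;
        }
    }
}
```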
I am not used to VB, but it looks like you are able to sleep on the socket (with a 50ms timeout), and then ask it how many bytes are available, create a buffer of the correct size and then read the bytes out.
I can see the same Poll method is available in C#.
A couple of questions:
- Is this also a UDP socket?
- What happens if there are multiple messages - are all waiting bytes read out in one go, or are they kept as separate messages?
Paul,
- sktConnection.Poll(50000, SelectMode.SelectRead)
The 50000 is the maximum time to wait for a response, in microseconds.
- Dim bBytes As Byte() = New Byte(sktConnection.Available - 1) {}
Available is the number of bytes that can be read, so dimension the buffer array to that size.
- If sktConnection.Receive(bBytes) > 0 Then
Fill the buffer; Receive returns how many bytes are in it.
If yes (there are bytes), append the array to the string:
- _strRequest &= New String(Encoding.UTF8.GetChars(bBytes))
If not (there are NO bytes in the buffer), exit the loop:
- Exit Do
There were bytes in the buffer, so wait 200 milliseconds:
- Thread.Sleep(200)
and do it all over again (if there were bytes in the buffer)
- Loop
in C#:
while (sktConnection.Poll(50000, SelectMode.SelectRead))
{
    byte[] bBytes = new byte[sktConnection.Available];
    if (sktConnection.Receive(bBytes) > 0)
    {
        this._strRequest += new string(Encoding.UTF8.GetChars(bBytes));
    }
    else
    {
        break;
    }
    Thread.Sleep(200);
}
To convert VB to C# and vice versa:
http://www.developer...t/vb-to-csharp/