UDP Not Sending/Receiving Fragmented IP Packets Properly
843790 · Jun 1 2006 (edited Jun 2 2006)
Alright, I refuse to believe this is a Java problem, but I also don't understand what could be wrong with my code, since it's just a simple UDP sender/receiver (using *.send and *.receive).
It's a fairly complex problem, and it took me a few hours to debug using Ethereal.
I have a problem with packets not sending if the ARP cache has no entries for the hosts. If I send a UDP packet smaller than the sender's and receiver's MTU (the default Ethernet MTU on Windows XP is 1500 bytes, leaving 1480 bytes of payload per IP fragment after the 20-byte header), then everything sends fine. However, if I send a packet that must be fragmented at the IP layer, I have issues whenever the ARP cache doesn't contain entries for the hosts.
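For concreteness, the number of IP fragments a datagram splits into can be estimated from the MTU. This is a rough sketch (my own illustration, not from the original post) that assumes a 20-byte IP header with no options; fragment offsets are carried in 8-byte units, so the per-fragment payload is rounded down to a multiple of 8:

```java
public class FragmentCount {
    // Each fragment carries (MTU - 20-byte IP header) bytes of payload,
    // rounded down to a multiple of 8, because the IP fragment offset
    // field counts in 8-byte units.
    static int fragments(int ipPayloadBytes, int mtu) {
        int perFragment = ((mtu - 20) / 8) * 8;
        return (ipPayloadBytes + perFragment - 1) / perFragment;
    }

    public static void main(String[] args) {
        // A 4000-byte UDP payload plus the 8-byte UDP header on a
        // 1500-byte-MTU link splits into 3 IP fragments.
        System.out.println(fragments(4000 + 8, 1500));
    }
}
```

So for anything up to 1472 bytes of UDP payload (1500 - 20 - 8) a single packet goes out and ARP state doesn't matter; one byte more and fragmentation, and this bug, kick in.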
Here's what's happening (I'm capturing all of this in Ethereal). I flush the ARP cache on the machine so there are no entries. If I send a packet that must be fragmented (IP fragmentation is specified in RFC 791), I can see the ARP requests going out on the network. However, after those, only the LAST fragment of the IP packet is sent. Of course, when the receiver gets only one piece of a fragmented packet, it drops the whole datagram.
Now, if I resend the exact same packet, the ARP entries have been populated on the sending host, and the packet is fragmented and sent properly. It is also received properly on the receiver and everything is fine.
I don't know what could cause this. I thought it was my code, so I wrote a very simple application using the UDP examples that Sun provides, and it shows the same behavior. This makes no sense at all, but I can't figure it out.
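The send/receive pattern in question looks roughly like this. This is a minimal sketch of my own (not the exact code from the post): a 4000-byte datagram, larger than a 1500-byte Ethernet MTU, sent to a receiver. Over loopback no real fragmentation occurs (the loopback MTU is much larger), so to reproduce the bug you'd point the sender at another LAN host after flushing its ARP entry; the code pattern is the same either way:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpFragmentTest {
    public static void main(String[] args) throws Exception {
        // Receiver bound to an ephemeral port on this machine.
        DatagramSocket receiver = new DatagramSocket(0);
        int port = receiver.getLocalPort();

        // Payload deliberately larger than a 1500-byte Ethernet MTU,
        // so on a real link the IP layer must fragment it.
        byte[] payload = new byte[4000];

        DatagramSocket sender = new DatagramSocket();
        sender.send(new DatagramPacket(payload, payload.length,
                InetAddress.getByName("127.0.0.1"), port));

        // If any fragment is lost, reassembly fails on the receiver and
        // receive() just blocks -- exactly the symptom described above --
        // so a timeout makes the failure visible.
        receiver.setSoTimeout(5000);
        DatagramPacket in = new DatagramPacket(new byte[8192], 8192);
        receiver.receive(in);
        System.out.println("received " + in.getLength() + " bytes");

        sender.close();
        receiver.close();
    }
}
```

On the failing path the receiver never prints anything: the lone last fragment sits in the reassembly queue until it times out and is silently discarded, so the application-level receive() sees nothing at all.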
This happens consistently. I can keep sending over and over as long as there is an ARP entry, but as soon as the stack needs to resolve the host's hardware address and fragmentation occurs, only the last fragment is sent and received. This is really annoying.