A new one for the cabinet of horrors:
I'm currently optimizing my TDMA setup: my gateway listens for packets at 5 different bitrates so that each client can use its preferred bitrate. This is possible with bearable latencies because I'm overlaying the RX periods: I start at 1200 baud, and if there's no SyncAddressMatch IRQ after ~45 ms I switch to the next higher rate. That way - if the bitrates double each step - you can fit infinitely many bitrates into a 100 ms window.
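For illustration, here's roughly what that scan loop looks like as a sketch in C. The radio calls and the rate list are made-up placeholders, not a real driver API, so treat it as the shape of the idea rather than working code:

```c
/* Sketch of the overlaid RX scan - hypothetical HAL calls, for illustration only.
 * Listen at the slowest rate first; if no sync-address match shows up within
 * the window, halve the window and step to the next (doubled) bitrate. */
#include <stdint.h>
#include <stdbool.h>

extern void     radio_set_bitrate(uint32_t baud);   /* assumed driver call        */
extern bool     radio_sync_match_pending(void);     /* SyncAddressMatch IRQ flag  */
extern uint32_t millis(void);

static const uint32_t bitrates[] = { 1200, 2400, 4800, 9600, 19200 };

bool scan_for_packet(void)
{
    uint32_t window_ms = 45;                         /* ~45 ms at 1200 baud       */

    for (unsigned i = 0; i < 5; i++) {
        radio_set_bitrate(bitrates[i]);
        uint32_t start = millis();
        while (millis() - start < window_ms) {
            if (radio_sync_match_pending())
                return true;                         /* stay at this rate and receive */
        }
        window_ms /= 2;                              /* doubled bitrate -> halved window */
    }
    return false;                                    /* nothing heard this slot   */
}
```

Because the windows halve each step, the whole scan stays under ~90 ms no matter how many rates you stack.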
However, it's not that easy. The clocks on the GW and the mote aren't exactly synchronized, so the GW needs to start listening a couple of ms early and keep listening for a couple of ms after the agreed bitrate window closes. That adds 2x this silent period for each bitrate, quickly inflating the time required to service all bitrates.
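To put a rough number on it (the guard width here is an assumed figure, not something I measured): with 5 bitrates and a 3 ms guard on each side of every window, that's 5 x 2 x 3 ms = 30 ms of dead air on top of the ~90 ms of nominal listen windows.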
So I've been trying to narrow the discrepancies between the clocks. What I found is that I could do this well for one bitrate, but then things were off at another bitrate. Somehow my estimate of transmit time wasn't consistent across all bitrates.
I debugged this all day today - thinking of course this had to be a bug in my code. I found that the actual TX times were very close to my estimates - within 300 µs or so. But the packet arrival time estimates still didn't add up.
So finally, just now, I scoped DIO0 on the GW and the node simultaneously and actually saw the only possible explanation left: after a packet is completely sent, it takes a bitrate-dependent amount of time before payload-available triggers on the other side. That delay is independent of the number of bytes sent. It decreases with higher bitrates, but not strictly in proportion to the bit transmit time.
At 1200 baud it's a 2.7-bit delay. At 9600 it's around 4.3 bits. You get the picture. So if you want to time your packets precisely, you're going to have to apply a correction. I'm now using 3 bits, and that gets me to within a ms.
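In case it helps, here's the kind of correction I mean as a rough sketch - the frame-layout constants and the helper name are assumptions for illustration, not my actual code, so plug in your own preamble/sync/CRC configuration:

```c
/* Estimate when the receiver's payload-available IRQ fires, measured from TX start.
 * IRQ_DELAY_BITS is the empirical trailing delay described above; the frame-layout
 * byte counts are assumptions - use your own radio configuration. */
#include <stdint.h>

#define PREAMBLE_BYTES  4
#define SYNC_BYTES      2
#define LEN_BYTES       1   /* length field, if using variable-length packets */
#define CRC_BYTES       2

#define IRQ_DELAY_BITS  3   /* measured ~2.7 bits @ 1200 baud, ~4.3 @ 9600 */

/* Microseconds from TX start until the far side should see the packet. */
static uint32_t rx_irq_time_us(uint32_t bitrate, uint8_t payload_len)
{
    uint32_t total_bits =
        8u * (PREAMBLE_BYTES + SYNC_BYTES + LEN_BYTES + payload_len + CRC_BYTES)
        + IRQ_DELAY_BITS;

    return (uint32_t)(((uint64_t)total_bits * 1000000u) / bitrate);
}
```

At 1200 baud the 3-bit correction alone is 2.5 ms, so ignoring it is already enough to blow a 1 ms timing budget.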
Joe