Author Topic: The inexplicable latency of RX  (Read 1321 times)

joelucid

  • Hero Member
  • *****
  • Posts: 868
The inexplicable latency of RX
« on: January 25, 2017, 11:28:52 AM »
A new one for the cabinet of horrors:

I'm currently optimizing my TDMA setup: my gateway listens for packets at 5 different bitrates to enable each client to use its preferred bitrate. This is possible with bearable latencies since I'm overlaying the RX periods: I start with 1200 baud, and if there's no SyncAddressMatch IRQ after ~45 ms I switch to the next higher rate. That way - if bitrates double each step - you can fit infinitely many bitrates into a 100 ms window.
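
In rough pseudo-code the gateway sweep looks something like this - a sketch only, where setBitrate(), startReceive() and syncAddressSeen() are placeholders for whatever your driver exposes, not a real API:

Code:
// Overlaid RX windows: try the slowest bitrate first, halve the window
// each time we move to the next (twice as fast) rate.
const uint32_t bitrates[] = { 1200, 2400, 4800, 9600, 19200 };
const uint32_t firstWindowMs = 45;            // window for the slowest rate

bool listenForPacket() {
  uint32_t windowMs = firstWindowMs;
  for (uint8_t i = 0; i < sizeof(bitrates) / sizeof(bitrates[0]); i++) {
    setBitrate(bitrates[i]);                  // reconfigure the radio
    startReceive();
    uint32_t start = millis();
    while (millis() - start < windowMs) {
      if (syncAddressSeen())                  // SyncAddressMatch fired
        return true;                          // stay at this rate, read the packet
    }
    windowMs /= 2;                            // next rate is twice as fast,
  }                                           // so it only needs half the window
  return false;                               // nothing heard this hop
}

Since the windows halve every step, the whole sweep stays below 2 x 45 = 90 ms no matter how many rates you add on top - that's where "infinitely many bitrates in a 100 ms window" comes from.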

However, it's not that easy. The clocks on the gw and the mote aren't exactly synchronized, so the gw needs to start listening a couple of ms early and keep listening for a couple of ms after the agreed bitrate window closes. That adds 2x this silent guard period for each bitrate, quickly increasing the time required to service all bitrates.
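
To put rough numbers on it (illustrative only): with 5 bitrates and a guard of, say, +/- 3 ms per window, the sweep picks up 5 x 6 = 30 ms of dead air on top of the ~90 ms of nominal listen windows - which is exactly why tightening the clock estimates matters.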

So I've been trying to narrow the discrepancies between the clocks. What I found is that I could do this well for one bitrate, but then things were off at another bitrate. Somehow my estimate of transmit time wasn't consistent across all bitrates.

I debugged this all day today - thinking of course this had to be a bug in my code. I found that the actual TX times were very close to my estimates - within 300 µs or so. But the packet arrival time estimates still didn't add up.

So finally, just now, I scoped DIO0 on the GW and the node simultaneously and actually saw the only possible explanation left: after a packet has been completely sent, it takes a bitrate-dependent amount of time before PayloadReady triggers on the other side. That delay is independent of the number of bytes sent. It decreases at higher bitrates, but not strictly proportionally to the bit time.

At 1200 baud it's a 2.7-bit delay. At 9600 it's around 4.3 bits. You get the picture. So if you want to precisely time your packets you're going to have to apply a correction. I'm now using 3 bits and that gets me to within a ms.
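
Expressed as code the correction is just one extra term in the arrival-time estimate. This is only a sketch - the frame-layout constants are examples and timeToPayloadReadyUs() isn't a real library call:

Code:
// Estimated time from TX start until PayloadReady fires on the receiver.
uint32_t timeToPayloadReadyUs(uint8_t payloadLen, uint32_t bitrate) {
  const uint32_t PREAMBLE_BYTES  = 4;       // example frame layout
  const uint32_t SYNC_BYTES      = 2;
  const uint32_t LENGTH_BYTES    = 1;
  const uint32_t CRC_BYTES       = 2;
  const float    RX_LATENCY_BITS = 3.0f;    // the empirical ~3-bit correction

  uint32_t bits = 8UL * (PREAMBLE_BYTES + SYNC_BYTES + LENGTH_BYTES +
                         payloadLen + CRC_BYTES);
  float bitTimeUs = 1e6f / bitrate;
  return (uint32_t)((bits + RX_LATENCY_BITS) * bitTimeUs);
}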

Joe

perky

  • Hero Member
  • *****
  • Posts: 873
  • Country: gb
Re: The inexplicable latency of RX
« Reply #1 on: January 25, 2017, 01:24:14 PM »
Curious result.

There are two ways to trigger a transmission, either fill the FIFO first and go into TX mode, or go into TX mode and fill the FIFO with a threshold of 1. Does this delay exist with both methods?
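
For reference, the two variants in pseudo-code (setMode(), fifoWrite() and the MODE_* constants are placeholders, not a particular driver):

Code:
// Variant A: load the whole frame in standby, then switch to TX.
void sendFifoFirst(const uint8_t *buf, uint8_t len) {
  setMode(MODE_STANDBY);
  fifoWrite(buf, len);        // complete packet sits in the FIFO before TX
  setMode(MODE_TX);           // transmission starts once the PA has ramped up
}

// Variant B: switch to TX first, then stream bytes into the FIFO.
// The TX start condition has to be "FIFO not empty" (threshold of 1 byte)
// so the modulator starts as soon as the first byte is written.
void sendStreaming(const uint8_t *buf, uint8_t len) {
  setMode(MODE_TX);
  for (uint8_t i = 0; i < len; i++)
    fifoWrite(&buf[i], 1);    // must stay ahead of the bitrate to avoid underrun
}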

Also I wonder why you use PayloadReady to synchronize your timings? I actually use SyncAddressMatch, and yes I could get one of those with no PayloadReady but in that case I simply treat it as no packet received and calculate the next timeslot accordingly. In theory this means timing is independent of packet length.

Mark.

joelucid

  • Hero Member
  • *****
  • Posts: 868
Re: The inexplicable latency of RX
« Reply #2 on: January 25, 2017, 01:36:32 PM »
Quote
There are two ways to trigger a transmission, either fill the FIFO first and go into TX mode, or go into TX mode and fill the FIFO with a threshold of 1. Does this delay exist with both methods?

I use the first approach, which (I think) is required with AES.

Quote
Also I wonder why you use PayloadReady to synchronize your timings? I actually use SyncAddressMatch, and yes I could get one of those with no PayloadReady but in that case I simply treat it as no packet received and calculate the next timeslot accordingly. In theory this means timing is independent of packet length.

I calculate the time to PayloadReady for a given payload to determine whether I can still send the packet in the current hop or whether I need to wait for the next one. Do you always transmit at the beginning of the hop, or how do you deal with that?
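
Roughly, the check looks like this (a sketch reusing the timeToPayloadReadyUs() sketch from above; nowUs(), hopEndUs() and GUARD_US are placeholders for my own timekeeping):

Code:
const uint32_t GUARD_US = 3000;    // example safety margin before the hop ends

bool fitsInCurrentHop(uint8_t payloadLen, uint32_t bitrate) {
  // when would PayloadReady fire on the gateway if we started sending now?
  uint32_t doneUs = nowUs() + timeToPayloadReadyUs(payloadLen, bitrate);
  return doneUs + GUARD_US <= hopEndUs();   // otherwise queue for the next hop
}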

Joe

perky

  • Hero Member
  • *****
  • Posts: 873
  • Country: gb
Re: The inexplicable latency of RX
« Reply #3 on: January 25, 2017, 02:44:45 PM »
Quote
I use the first approach, which (I think) is required with AES.

Well I can see how the AES engine, which is dual-ported to the FIFO, could add some bitrate-independent delay, but I would expect it also to be dependent on packet size. Also, the FIFO has to cross clock domains, so I wonder if there's any synchronizing going on.

Quote
I calculate the time to PayloadReady for a given payload to determine whether I can still send the packet in the current hop or whether I need to wait for the next one. Do you always transmit at the beginning of the hop, or how do you deal with that?

My time between hops is long, 6 seconds or so. I also send status packets to all nodes; they use the SyncAddressMatch time to synchronize their clocks, so they remain locked. I think my system differs from yours somewhat ;)
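
On the node side that locking can be as simple as time-stamping the SyncAddressMatch interrupt and re-anchoring the local slot clock to it - a sketch, with the ISR wiring and slot bookkeeping left as placeholders:

Code:
volatile uint32_t lastSyncMatchUs = 0;
uint32_t slotEpochUs = 0;           // node's notion of "start of slot"

// DIO interrupt mapped to SyncAddressMatch on the node.
void onSyncAddressMatch() {
  lastSyncMatchUs = micros();       // timestamp the gateway's frame start
}

// After a valid status packet, pull the local slot clock back onto the
// gateway's timeline so crystal drift never accumulates beyond one period.
void resyncSlotClock(uint32_t expectedOffsetUs) {
  slotEpochUs = lastSyncMatchUs - expectedOffsetUs;
}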

Mark.

joelucid

  • Hero Member
  • *****
  • Posts: 868
Re: The inexplicable latency of RX
« Reply #4 on: January 25, 2017, 03:08:32 PM »
Quote
Well I can see how the AES engine, which is dual-ported to the FIFO, could add some bitrate-independent delay, but I would expect it also to be dependent on packet size.

It does, but it's a couple of µs and, as you say, independent of bitrate.

I wonder if the DAGC uses some lookahead? That could explain it.

Quote
My time between hops is long, 6 seconds or so. I also send status packets to all nodes; they use the SyncAddressMatch time to synchronize their clocks, so they remain locked. I think my system differs from yours somewhat ;)

Yeah, I want as short a hop dwell time as possible so that the multiple-bitrate support doesn't slow down response time too much. Definitely below 200 ms, so switching on a light still seems pretty instantaneous.

Finally, with this discovery I seem to be fine with a +/- 3 ms timing tolerance, which helps tremendously. I should be able to get a 150 ms hop time with full 1200 baud support! My next lake thermometer goes 1 km downstream ;)