Even if I make what seem like overly conservative assumptions, it just doesn't seem to add up to be all that bad.
Here are what I consider to be some overly conservative assumptions:
1. It takes 500 microseconds to switch channels.
2. 50 channels are hopped among.
3. The node wakes up and listens continuously on just one of the channels (doesn't matter which).
4. The gateway sends out 100-bit frames (a number I picked out of thin air just to be conservative) at 300kbps.
5. Included in the frame is information on what the next several channels are, in order, that it will jump to.
6. After jumping to each channel, the gateway first listens for a period of 100 bits for a response from the node. Here again, 100 bits is just a thin-air number picked to be conservative.
7. If the gateway doesn't detect anything, it switches to Tx mode, sends a 100-bit frame, and hops to the next channel in its schema.
So, on average, to cover all 50 channels, it would take the gateway roughly 50 x [500us + (airtime for 200 bits at 300kbps)], which my back-of-the-envelope math says is about 58 milliseconds. Thus, on average, you'd expect it to take half that, or 29ms, for the gateway to connect with the node, right? And that's using what seem like very conservative numbers as well as a conservative listen strategy for the node.
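The arithmetic above can be sketched out explicitly. All the constants below are the conservative figures from the assumption list, not measured values:

```python
# Back-of-the-envelope timing for the gateway's hop cycle,
# using the conservative assumptions from the list above.

SWITCH_TIME_S = 500e-6   # assumption 1: 500 us to switch channels
NUM_CHANNELS = 50        # assumption 2: 50 channels hopped among
FRAME_BITS = 100         # assumption 4: 100-bit frame
LISTEN_BITS = 100        # assumption 6: 100-bit listen window
BITRATE_BPS = 300e3      # assumption 4: 300 kbps

# Time spent per channel: switch + listen window + frame airtime
per_channel_s = SWITCH_TIME_S + (FRAME_BITS + LISTEN_BITS) / BITRATE_BPS

full_cycle_ms = NUM_CHANNELS * per_channel_s * 1e3
avg_sync_ms = full_cycle_ms / 2  # node listens on one random channel

print(f"per channel: {per_channel_s * 1e3:.2f} ms")  # ~1.17 ms
print(f"full cycle:  {full_cycle_ms:.1f} ms")        # ~58.3 ms
print(f"avg sync:    {avg_sync_ms:.1f} ms")          # ~29.2 ms
```

So each channel costs about 1.17 ms, a full sweep about 58 ms, and the expected wait is half a sweep.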
OK, so now that I've written this down, I see where the flaw is. It's going to take the node far longer than I've allowed to react once it finally does receive the gateway's frame on the channel it's listening on. In fact, on first blush, that reaction time may well dominate how long the whole process takes, unless the look-ahead in the channel-hopping schema is long enough to cover that delay, so the node can catch up by jumping to the channel where the gateway will be once the node's lengthy reaction delay has elapsed. Well, that does seem possible, so maybe it's not so bad after all.
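One way to sanity-check the look-ahead idea: divide the node's reaction delay by the gateway's per-channel dwell time to see how many hops ahead the frame's schedule needs to announce. The 4 ms reaction time here is an assumed conservative figure, same as elsewhere in this post:

```python
import math

# Per-channel dwell time under the same conservative assumptions:
# 500 us channel switch + 200 bits of airtime at 300 kbps.
dwell_s = 500e-6 + 200 / 300e3

# Assumed (conservative) node reaction delay after hearing a frame.
reaction_s = 4e-3

# Hops the gateway completes while the node is still reacting;
# the schedule in each frame must look at least this far ahead.
lookahead_hops = math.ceil(reaction_s / dwell_s)
print(lookahead_hops)  # -> 4
```

So a schedule announcing even the next half-dozen hops would comfortably let the node catch up after its reaction delay.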
[Edit: So, conservatively assume the node's reaction time is 4ms. Then, on average, with these conservative assumptions, the time for the node to sync up with the gateway might be 29+4=33ms. I'd expect the actual number, without the inflation caused by overly conservative assumptions, to be less than that.
Does that sound about right, or have I left something out? ]