I notice that the default network ID (address) used, for example, in the library's Gateway and Node examples, is 100.
I have previously been using 0, as it just felt tidier and nothing in the comments suggested that one ID was better than another (that is, if you only use one network ID in any one installation).
After reading around the subject of best radio practice (and reducing the reception failure rate), though, I found that you want over-the-air bits to alternate between 1s and 0s as much as possible (hence the use of data whitening, Manchester encoding, etc.).
So I was wondering: rather than 100, might 85 be a better choice? After the fixed initial 0x2D, the network ID byte would then alternate bits, giving a full sync word of 0x2D55 (two bytes: 00101101 01010101). Or might that be more likely to be mistaken for a section of preamble?
It certainly seems like my previous preference for using 0 might lead to more reception losses, anyway...
Also, based on the diagram given here:
https://lowpowerlab.com/2019/05/02/rfm69-10bit-node-addresses/, if not using data whitening (to make the bits alternate more in the "payload length" byte, the "to node ID" byte, the "from node ID" byte, the CTL byte and the following bytes), would it be good to choose node IDs that also cause as much alternation as possible between 1s and 0s? For instance, the gateway ID might be set to 682 (binary 1010101010) if using those new 10-bit node IDs, since (presuming a node always communicates with a gateway rather than with other nodes) that would give alternating 1s and 0s in either the "to node ID" or the "from node ID" field of every transmission.
Or would such small tweaks be nothing compared to the chance of long sequences of 1s or 0s in the message itself (causing reception issues)?
I ask partly to try to understand the underlying processes better and partly to try to reduce the rate of reception failures I'm seeing.