@Tom - thanks for the tip. I implemented a similar approach and it works well, as long as correct time is maintained. I'm now working on syncing time on each node with server time (delving into wireless programming - I keep being impressed by Felix's work).
I'm wondering, though: as you said, the minimum timing resolution with the RTC I use is 1 second. This method works well for 5-7 nodes, but my logging interval can be as short as 10 seconds, depending on the application (some nodes are used in research - gas decay, rapid environmental changes in a controlled chamber - and some are placed around an academic building to monitor building activity). I can see this method becoming a scalability problem as I add more nodes: if the number of nodes exceeds 10, the setup reaches its limit. If I increase the logging interval to 30 or 60 seconds I can add more nodes, but beyond that I'd be compromising the quality of the data collected. I currently have 8 nodes, so the method works fine for now, but I'll be increasing that number soon, so I'm just thinking ahead.
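To put rough numbers on that limit, here's a quick back-of-the-envelope sketch (Python, purely illustrative - the 1-second slot width comes from the RTC resolution, and the assumption is one unique transmit slot per node per interval):

```python
# Toy TDMA capacity estimate: with a 1 s RTC resolution, each node
# needs at least a 1 s transmit slot, so the number of nodes that fit
# is bounded by the logging interval divided by the slot width.

def max_nodes(logging_interval_s, slot_width_s=1):
    """Upper bound on nodes that can each get a unique slot per interval."""
    return logging_interval_s // slot_width_s

for interval in (10, 30, 60):
    print(f"{interval:2d} s interval -> up to {max_nodes(interval)} nodes")
```

Which lines up with what I'm seeing: a 10-second interval caps out around 10 nodes, and stretching the interval to 30 or 60 seconds buys headroom at the cost of data resolution.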
Another approach would be to use more than one gateway and assign each gateway (and the few nodes it serves) a different network ID. I haven't tried this yet since most of my nodes are currently in use, but because they would all operate on the same frequency and differ only in network ID, I'm wondering how that would affect congestion and data loss. If I have 30 nodes and 3 gateways inside a building (10 nodes per gateway), for example, how would they behave when working in tandem? I would assume a node would interact with all three gateways until it figures out which one has the correct network ID, and during this interaction, other nodes that are supposed to be sending data to their gateways might not be able to do so.
Is this a proper way to look at it? Maybe I'm confusing the principles of how the modules work.
Edit: Just reading up on network IDs and hardware filtering. Nevermind that.
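For anyone stumbling on this later, here's a toy model of the filtering as I now understand it (a plain Python simulation, not real radio code - the exact behavior is my reading of the docs): a radio on the wrong network ID simply drops the packet, with no probing or back-and-forth between a node and gateways on other network IDs.

```python
# Toy model of network-ID hardware filtering: every radio hears every
# packet on the shared frequency, but a radio silently drops any
# packet whose network ID doesn't match its own -- there is no
# handshake with gateways on other networks.

class Radio:
    def __init__(self, network_id):
        self.network_id = network_id
        self.received = []

    def on_air(self, packet):
        # Hardware filter: a non-matching network ID is dropped before
        # the microcontroller ever sees the packet.
        if packet["network_id"] == self.network_id:
            self.received.append(packet)

def broadcast(radios, packet):
    """Everything on the same frequency hears every transmission."""
    for r in radios:
        r.on_air(packet)

# Three gateways on three network IDs; one node sends to network 101.
gateways = [Radio(network_id=nid) for nid in (100, 101, 102)]
node_packet = {"network_id": 101, "payload": "temp=21.4"}
broadcast(gateways, node_packet)

print([g.network_id for g in gateways if g.received])
```

So the other gateways still share airtime on the same frequency (congestion is real), but they never reply to or negotiate with nodes outside their network ID.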