As I see it, there are two schools of thought here:
- Turn off ADR.
- Use ADR and let the NS optimize your nodes' settings automatically, within the predetermined parameters you selected.
In high-capacity microwave links we have similar assistance, called AGC: as the fade margin and link conditions change, the system adjusts the TX power. You can therefore run a relatively low TX power and still hold the optimal RX level; as the fade deepens, the TX power is raised to maintain that same RX level. This is safe to do, because the interference seen by adjacent systems stays roughly constant: the RF path to those systems experiences the same increase in fade.
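The closed loop described above can be sketched in a few lines. This is illustrative only, not a real radio driver; the target RX level and the power limits are assumptions.

```python
# Sketch of the AGC-style closed loop: raise TX power as the fade deepens
# so the RX level at the far end stays at its target.

TARGET_RX_DBM = -45.0   # desired receive level (assumed value)
TX_MIN_DBM = 5.0        # hardware limits (assumed values)
TX_MAX_DBM = 25.0

def adjust_tx_power(tx_dbm: float, measured_rx_dbm: float) -> float:
    """Return a new TX power that compensates for the current fade."""
    error = TARGET_RX_DBM - measured_rx_dbm   # positive when fade deepens
    return min(TX_MAX_DBM, max(TX_MIN_DBM, tx_dbm + error))

# Clear-sky conditions: RX is on target, TX power stays low.
print(adjust_tx_power(10.0, -45.0))  # → 10.0
# A 6 dB fade appears: TX power rises by 6 dB to restore the RX level.
print(adjust_tx_power(10.0, -51.0))  # → 16.0
```

Note the clamping: once the link hits `TX_MAX_DBM`, any further fade eats directly into the fade margin.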
This way you minimize the interference with adjacent systems (nodes).
Doesn't ADR achieve the same thing, where a lower TX power and DR are maintained (if correctly set up) until the RF conditions, i.e. the SNR, change?
In this manner you can fit a larger number of RF devices into the same geographical area.
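For comparison, the margin-based logic a network server typically runs for ADR can be sketched as below. This is modelled loosely on the widely published reference ADR approach (surplus SNR margin is spent first on a faster data rate, then on lower TX power); the constants are illustrative assumptions, not a spec.

```python
# Sketch of network-server ADR: from recent uplink SNRs, compute the
# surplus margin and convert it into data-rate and TX-power steps.

# Approximate LoRa demodulation floors per spreading factor (SF7..SF12).
SNR_FLOOR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}
INSTALLATION_MARGIN_DB = 10.0   # safety margin (assumed value)
STEP_DB = 3.0                   # one ADR step is roughly 3 dB
MIN_SF = 7
TX_MIN_DBM, TX_STEP_DBM = 2, 2  # assumed power limits

def adr_step(sf: int, tx_dbm: int, snr_history: list[float]) -> tuple[int, int]:
    """Return (new_sf, new_tx_dbm) given recent uplink SNR readings."""
    margin = max(snr_history) - SNR_FLOOR[sf] - INSTALLATION_MARGIN_DB
    steps = int(margin // STEP_DB)
    # Spend surplus margin first on a faster data rate (lower SF)...
    while steps > 0 and sf > MIN_SF:
        sf -= 1
        steps -= 1
    # ...then on a lower TX power, which is what cuts interference.
    while steps > 0 and tx_dbm - TX_STEP_DBM >= TX_MIN_DBM:
        tx_dbm -= TX_STEP_DBM
        steps -= 1
    return sf, tx_dbm

# A device stuck at SF12 with healthy SNR gets moved three steps, to SF9.
print(adr_step(12, 14, [-2.0, 0.5, -1.0]))  # → (9, 14)
# A device already at SF7 with surplus margin gets its TX power reduced.
print(adr_step(7, 14, [10.0, 8.5]))         # → (7, 10)
```

This also shows why stabilization takes a while: each correction only happens on the downlink after enough uplinks have been collected, so a change in the RF environment ripples through several ADR rounds.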
Note: I have seen, though, that with this type of automatic control via some clever calculations, things can take a while to stabilize when a new RF device is introduced to the mix. You sometimes need to manually tweak the settings on one or two devices, and then everything is happy again.