UPDATE: solved, see my post below this one.

Sorry for opening this old topic, but I'm trying to understand the suggested modifications to sendACK().
I found this thread because I called sendACK() directly after receiveDone() and got no data. I've read that you don't recommend this, but I think something is wrong here.
In my main loop, I measured the time spent in the canSend() wait inside sendACK():
while (1) {
    if (rfm.receiveDone()) {
        if (rfm.ACKRequested()) {
            //ackMinRSSI = rfm.NewSendACK();
            csma_time = rfm.sendACK();
        }
        // now rfm.DATA is zero
and the modified sendACK:
int RFM69::sendACK(const void* buffer, uint8_t bufferSize) {
    uint8_t sender = SENDERID;
    int16_t _RSSI = RSSI; // save payload received RSSI value
    writeReg(REG_PACKETCONFIG2, (readReg(REG_PACKETCONFIG2) & 0xFB) | RF_PACKET2_RXRESTART); // avoid RX deadlocks
    uint32_t now = t.read_ms();
    while (!canSend() && t.read_ms() - now < RF69_CSMA_LIMIT_MS)
        receiveDone();
    int csma_time = t.read_ms() - now;
    sendFrame(sender, buffer, bufferSize, false, true);
    RSSI = _RSSI; // restore payload RSSI
    return csma_time;
}
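For reference, the gate that this loop spins on is (as far as I remember from the stock library, so treat the exact condition as my assumption) roughly the following. I reduced it to a pure function over a stand-in state struct so the logic can be looked at in isolation:

```cpp
#include <cstdint>

// Stand-in for the radio state; the field names mirror the library's
// internals, but this struct exists only for this sketch.
struct RadioState {
    uint8_t mode;        // RF69_MODE_RX, RF69_MODE_STANDBY, ...
    uint8_t payloadLen;  // bytes of a received frame still buffered
    int16_t rssi;        // channel RSSI in dBm
};

const uint8_t RF69_MODE_STANDBY = 1;
const uint8_t RF69_MODE_RX      = 2;
const int16_t CSMA_LIMIT        = -90;

// Paraphrase of the stock canSend() condition (assumption: the real
// function also drops the radio to standby when it answers true,
// which is omitted here).
bool canSendSketch(const RadioState& s) {
    return s.mode == RF69_MODE_RX
        && s.payloadLen == 0
        && s.rssi < CSMA_LIMIT;
}
```

So right after a reception the gate is closed twice over (payload still buffered, RSSI still strong), which is why the loop has to go through receiveDone() first.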
The result is always a time <= 1 ms. But with the other code suggested here I run into timeouts because the background RSSI is too high. Neither case looks plausible to me...
int RFM69::NewSendACK(const void* buffer, uint8_t bufferSize) {
    setMode(RF69_MODE_TX);
    int16_t _RSSI = RSSI; // save payload received RSSI value
    bool canSendACK = false;
    int rssiMin = 1000;
    uint32_t now = t.read_ms();
    while ((canSendACK == false) && (t.read_ms() - now < 100 /*ACK_CSMA_LIMIT_MS*/)) {
        int rssi = readRSSI();
        canSendACK = (rssi < CSMA_LIMIT); // CSMA_LIMIT = -90
        rssiMin = (rssi < rssiMin) ? rssi : rssiMin;
        wait_us(50);
    }
    if (canSendACK) {
        writeReg(REG_PACKETCONFIG2, (readReg(REG_PACKETCONFIG2) & 0xFB) | RF_PACKET2_RXRESTART); // avoid RX deadlocks
        sendFrame(SENDERID, buffer, bufferSize, false, true);
    }
    RSSI = _RSSI; // restore payload RSSI
    return (canSendACK ? 0 : rssiMin);
}
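One suspicion about why these readings come out implausibly high: NewSendACK() leaves RX before it starts sampling, and on the RFM69 an RSSI measurement is (as far as I understand, this is my assumption, not something I verified against the datasheet) only refreshed while the receiver chain is running. If readRSSI() just hands back the last latched value outside RX, then the strong RSSI of the packet we just received would repeat forever and the CSMA gate would never open. A toy model of that latch behaviour, with entirely hypothetical names:

```cpp
#include <cstdint>

// Hypothetical model: outside RX, readRSSI() returns whatever the chip
// last latched; only in RX does it reflect the actual noise floor.
struct RssiModel {
    bool inRx = false;
    int16_t latched = -40;   // RSSI of the frame that was just received
    int16_t ambient = -100;  // actual channel noise floor
    int16_t readRSSI() { return inRx ? ambient : latched; }
};

// The NewSendACK wait loop reduced to its decision: does the CSMA gate
// (rssi < -90 dBm) open within maxSamples readings?
bool csmaOpens(RssiModel& m, int maxSamples) {
    for (int i = 0; i < maxSamples; ++i)
        if (m.readRSSI() < -90) return true;
    return false;
}
```

Under this model the loop times out whenever the radio is not in RX, which would match the behaviour I see; whether the chip really works like this is exactly what I'm unsure about.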
Trying to follow the flow in the original code:
- receive data, ACK is requested
- sendACK() is called, which calls canSend()
- canSend(): radio is in RX mode, payload > 0, RSSI above threshold -> returns false
- receiveDone() inside sendACK() is called
- RX mode, payload > 0 -> mode is set to standby!
- next call to canSend() in the sendACK() loop
- mode is standby, canSend() returns false
- next call to receiveDone()
- mode is standby, receiveBegin() is called and everything is reset
- next call to canSend() in the sendACK() loop
- mode is RX, payload is 0, and the RSSI now measures (always?) low
- the ACK is sent
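The step list above can be replayed as a tiny state machine; this is a simulation of my reading of the flow, not the library code, and the mode/RSSI transitions inside it are my assumptions from the steps:

```cpp
#include <cstdint>

// Toy reproduction of the sendACK() flow described above.
enum Mode { STANDBY, RX };

struct Radio {
    Mode mode = RX;
    uint8_t payloadLen = 10;  // payload still buffered after receiveDone()
    int16_t rssi = -40;       // strong: we just received a frame
    bool ackSent = false;
};

bool canSend(Radio& r) {
    if (r.mode == RX && r.payloadLen == 0 && r.rssi < -90) {
        r.mode = STANDBY;     // stock canSend() drops to standby on success
        return true;
    }
    return false;
}

void receiveDone(Radio& r) {
    if (r.mode == RX && r.payloadLen > 0) {
        r.mode = STANDBY;     // step: payload pending -> standby
    } else if (r.mode == STANDBY) {
        r.payloadLen = 0;     // receiveBegin(): everything reset...
        r.rssi = -100;        // ...and the RSSI now measures low
        r.mode = RX;
    }
}

// The CSMA loop from sendACK(); returns the number of receiveDone()
// calls needed before the gate opened, or -1 on timeout.
int sendAckLoop(Radio& r) {
    int iterations = 0;
    while (!canSend(r)) {
        if (++iterations > 10) return -1;  // CSMA timeout
        receiveDone(r);
    }
    r.ackSent = true;                      // sendFrame() would go here
    return iterations;
}
```

Running it, the gate opens on the third canSend() call after exactly two receiveDone() calls, i.e. almost immediately, which matches the <= 1 ms I measured.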
So this path looks reliable: the RSSI is always measured low and the ACK gets sent. In contrast, the NewSendACK version reports timeouts because the RSSI is too high. I think that RSSI measurement returns a wrong value, I just don't know why. Judging from the program flow, the new version should work without destroying the previously received payload.
Can someone follow me?