mLinux 3.3.13 corrupt downlink messages
This topic has 8 replies, 2 voices, and was last updated 7 years, 5 months ago by EG.
November 18, 2017 at 1:34 pm #21812
EG (Participant)
After flashing mLinux 3.3.13 on my Conduit, I'm seeing occasional corrupt downlink messages detected on my node: either an invalid MIC or a mismatch on the node address. My node handles these fine, but they waste bandwidth. These are Class A acks.
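For clarity on what "corrupt" means here: the node first checks that the DevAddr in the frame header is its own, and then verifies the 4-byte MIC computed with the NwkSKey; the errors above are failures of those two checks. A minimal sketch of that validation for a LoRaWAN 1.0.x downlink, in Python with the cryptography package (illustrative only, not the actual node firmware):

```python
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

def downlink_mic(nwk_skey: bytes, dev_addr_le: bytes, fcnt_down: int, msg: bytes) -> bytes:
    """LoRaWAN 1.0.x downlink MIC: AES-CMAC over B0 | (MHDR..FRMPayload), first 4 bytes."""
    b0 = (bytes([0x49, 0, 0, 0, 0, 0x01])      # 0x49, padding, Dir=1 for downlink
          + dev_addr_le                         # DevAddr, little-endian (as on the air)
          + fcnt_down.to_bytes(4, "little")     # 32-bit downlink frame counter
          + bytes([0x00, len(msg)]))
    c = CMAC(algorithms.AES(nwk_skey))
    c.update(b0 + msg)
    return c.finalize()[:4]

def accept_downlink(phy_payload: bytes, nwk_skey: bytes, dev_addr_le: bytes, fcnt_down: int) -> bool:
    """Reject on address mismatch or bad MIC - the two errors seen on the node."""
    msg, mic = phy_payload[:-4], phy_payload[-4:]
    if msg[1:5] != dev_addr_le:                 # DevAddr occupies FHDR bytes 1..4
        return False                            # "mismatch on node address"
    return downlink_mic(nwk_skey, dev_addr_le, fcnt_down, msg) == mic   # "invalid mic"
```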
I also updated my network server, packet forwarder, and libmts to the latest versions and still see the issue. I never saw a single error like this with 3.2.0, and I can run the same node against either of two Conduits, one on 3.3.13 and one on 3.2.0, and see the problem only on the newer version.
Has anyone seen anything like this?
I tried to downgrade to 3.2.0, but I wound up with missing filesystems, so I flashed back to 3.3.13.
I have reported this to Multitech support as well.
November 18, 2017 at 2:06 pm #21813
Jason Reiss (Keymaster)
Does your node enable CRC for downlink packets? It should not be enabled.
As of version 1.0.26, the network server sends downlinks with CRC disabled, per LoRaWAN.
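For reference, that flag travels in the txpk object of the Semtech UDP packet-forwarder protocol; a downlink sent without a physical-layer CRC has "ncrc" set. A sketch of such a downlink descriptor in Python (the field names come from the Semtech protocol; all values are invented for illustration):

```python
import json

# Downlink descriptor as carried in a PULL_RESP from the network server to the
# packet forwarder. "ncrc": True asks the concentrator to transmit without a
# physical-layer payload CRC.
txpk = {
    "imme": False,          # transmit at "tmst", not immediately
    "tmst": 3512348611,     # concentrator timestamp of the RX1/RX2 window
    "freq": 923.3,          # MHz, illustrative channel
    "rfch": 0,
    "powe": 20,
    "modu": "LORA",
    "datr": "SF8BW500",     # illustrative datarate string
    "codr": "4/5",
    "ipol": True,           # downlinks are sent with inverted polarity
    "ncrc": True,           # no payload CRC on the downlink (what 1.0.26+ does)
    "size": 17,
    "data": "YBQAAkgA...",  # base64 PHYPayload, truncated / made up
}

print(json.dumps({"txpk": txpk}, indent=2))
```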
November 18, 2017 at 2:33 pm #21814
EG (Participant)
Thanks for your response.
No, it does not. BTW, I get these errors on the order of once every 10 minutes while sending 1 msg/sec, so roughly 1 bad downlink in 600.
November 18, 2017 at 4:37 pm #21815
Jason Reiss (Keymaster)
Are you using the default network server version 1.0.8 in mLinux 3.2.0? Or did you update to the latest packages in both Conduits and only see issues with 3.3.13?
What is the distance between the gateway and the node?
What datarate is used?
November 18, 2017 at 4:53 pm #21816
EG (Participant)
I was about to reply that I hadn't updated the network server, but I went to check and found version 1.0.13 running on my mLinux 3.2.0 gateway. I don't remember upgrading it, but I guess I must have.
The gateway and node are 1-2 meters apart. This is with DR4.
On the 3.3.13 gateway: the reason I flashed it in the first place was that I messed up the network config and bricked it (it wouldn't complete boot), so I flashed 3.3.13 from u-boot. I can't tell you what network server version it was running before that. I then noticed the MIC/address problem (which actually hung my node software, because I had a bug in handling the bad acks, so I'm sure I had never received them before on either gateway), tried to downgrade (which didn't work), and went back to 3.3.13.
November 18, 2017 at 5:25 pm #21817
Jason Reiss (Keymaster)
1.0.13 will have CRC enabled on downlink packets.
I have only seen corrupt packets when the CRC settings are mismatched between gateway and node. Perhaps try enabling CRC on the node as an experiment.
Have you tried swapping cards to rule out a hardware issue?
November 18, 2017 at 5:27 pm #21818
Jason Reiss (Keymaster)
Also check the frequency accuracy of the node. Being 15-20 kHz off can cause bad packets.
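For a sense of scale, a 15-20 kHz error is only on the order of 20 ppm at a 900 MHz-band carrier, so an ordinary crystal can drift that far. A quick back-of-the-envelope check (915 MHz is an assumed example channel, not taken from this thread):

```python
# How large a frequency error, in ppm, corresponds to a 15-20 kHz offset.
carrier_hz = 915_000_000
for offset_hz in (15_000, 20_000):
    ppm = offset_hz / carrier_hz * 1e6
    print(f"{offset_hz / 1e3:.0f} kHz offset ~ {ppm:.1f} ppm at {carrier_hz / 1e6:.0f} MHz")
```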
November 18, 2017 at 5:55 pm #21819
EG (Participant)
Just to make sure I understand: the CRC setting changed as of network server version 1.0.26, which changes the packet format. I'll certainly double-check my code and versions, but wouldn't I expect more than sporadic errors if there were a header format mismatch?
I'll double-check the network server version on the 3.3.13 gateway. It's in my lab and I won't be there until Monday. I'll also play around with the CRC setting.
Would flashing the mlinux-factory-image update the network server? If I'm reading this (http://www.multitech.net/mlinux/images/mtcdt/3.3.13/analysis/ipklist.txt) correctly, the image I flashed would install lora-network-server 1.0.41-r1.0, so it does seem the two network servers in question have different CRC settings.
I haven't swapped cards because the problem started with a software update. I can try that, though.
How would I check the frequency accuracy?
November 18, 2017 at 7:24 pm #21820
EG (Participant)
It looks to me like explicit header mode is used, and thus only the downlink header's CRC bit is relevant.
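If the node radio is an SX127x-class part, that header bit is readable after reception: in explicit header mode the SX1276 copies the received header's CRC flag into the CrcOnPayload bit of RegHopChannel (0x1C, bit 6). A sketch, assuming a hypothetical read_register SPI helper rather than any particular driver:

```python
# Hypothetical helper: read_register() stands in for whatever SPI access the
# node's radio driver exposes; it is not a call from any particular library.
REG_HOP_CHANNEL = 0x1C       # SX1276 register address
CRC_ON_PAYLOAD_MASK = 0x40   # bit 6: CRC flag copied from the received explicit header

def downlink_had_payload_crc(read_register) -> bool:
    """After an RX in explicit header mode, report whether the transmitter set
    the header's CRC-present bit (SX1276 RegHopChannel, CrcOnPayload)."""
    return bool(read_register(REG_HOP_CHANNEL) & CRC_ON_PAYLOAD_MASK)
```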