Network Streams Memory Leak with NI XNET and cRIO

Hello Everyone.

I am using an NI XNET module attached to a cRIO-9045, and I am reading the data from NI XNET as waveforms.

A fixed number of waveforms is transmitted from the CAN module, and I am using Network Streams to transfer this data to the host computer. But when I do this, I get a memory leak: the memory usage on the cRIO keeps increasing until either the NI XNET queue overflows or the cRIO runs out of memory and the machine crashes.

 

I have tried transmitting the full waveforms and also just the Y component of the waveform as doubles, and both give the same issue.

 

It turns out this only happens when I am transferring CAN data.
Other AI modules, used with the same method, work perfectly fine.

I have attached the relevant VIs.

I would be grateful for any suggestions/solutions 🙂

Message 1 of 4
(1,935 Views)

I have experience with Network Streams in LabVIEW RT applications, but have no experience with CAN and I've not heard of (or used) XNET.  I'm guessing that the fact that Streams works for you when you are not using CAN and fails when you are might mean that the CAN stuff (or XNET) is interfering with the TCP/IP protocol.  However, it could also be that whatever the Stream is that supports CAN is configured incorrectly.

 

While I really appreciate your attaching all of the relevant code, there's too much here that I'm not familiar with to start poking around, unguided.  Can you make a table of the Streams that you create, identifying for each one its name, the parameters of the data being passed, the roles at the Host (PC?) and Target (cRIO?) ends (i.e. Host Reader, Target Writer), and which side makes the connection?  (I tend to assign this to the PC, with the RT Target starting out running in a "Wait for Connection" loop, which means that the PC needs to "know" the Target's IP.)  I trust that once you create a Stream, you leave its data "intact" until you destroy its endpoints.
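For example, such a table might look like this (the stream names and data types below are purely hypothetical, just to show the shape):

    Stream name        Data passed        Writer endpoint   Reader endpoint   Connects
    CAN_Data_Stream    1D array of DBL    cRIO (Target)     PC (Host)         PC
    Command_Stream     string             PC (Host)         cRIO (Target)     PC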

 

Do you have a colleague who knows a little LabVIEW (or at least a little Programming) with whom you can "walk through" your code, showing the Creation, Use, and Destruction of each of your Streams?  Armed with a Table such as I suggested in the previous paragraph, as well as your code, we could also try to "follow the logic", but it would be much easier with your guidance.  And you might find (as has happened with me numerous times) that as you get to Step 6, you'll say "... and here we ... oops, there's supposed to be a Wire here! ... just a second while I fix this ...".

 

Bob Schor

Message 2 of 4
(1,907 Views)

Yeah, Bob is right.  The code is missing lots of subVIs and dependent controls, and there are also a lot of things going on.  The code needs attention and refactoring: if your block diagram needs to scroll in the vertical direction, it likely needs to be cleaned up.  The heavy use of local variables and Shared Variables, and the ignored errors, could be hiding issues.  Other parts of the code also confuse me, which makes it hard to understand.  Please don't take this criticism too personally; I understand the situation of having to just make it work while crunching deadlines.  But it is more difficult to provide feedback when the source is in the state you've provided.

 

In cases like this with a potential memory leak, I often try to trim the code down to a simpler state and see if the issue remains.  Maybe try simulating parts of the code (like the XNet part) and see if the problem goes away or is still there.  If you do get a small, minimized test case that causes the issue, we can run it, or a support ticket can be opened with NI to see if this is a bug on NI's side that needs fixing.
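To illustrate that trimming approach outside of LabVIEW, here is a minimal, hedged Python sketch (the names are mine, not from the attached VIs) of the same producer/consumer shape: the XNET read is replaced by a simulated data source, and the Network Stream is replaced by a bounded in-process queue. If memory still grew in a trimmed-down LabVIEW equivalent of this, the transfer pattern itself would be suspect; if not, the XNET-plus-stream interaction would be.

```python
import queue
import threading

def simulated_xnet_read(n_signals=8):
    """Stand-in for an XNET waveform read: one block of doubles."""
    return [0.0] * n_signals

def run_trimmed_pipeline(n_blocks=1000, depth=16):
    """Producer/consumer with a *bounded* queue standing in for the
    Network Stream endpoint buffer."""
    q = queue.Queue(maxsize=depth)   # bounded, like a stream buffer
    received = []

    def consumer():
        while True:
            block = q.get()
            if block is None:        # sentinel: producer is finished
                return
            received.append(len(block))

    t = threading.Thread(target=consumer)
    t.start()
    for _ in range(n_blocks):
        q.put(simulated_xnet_read())  # blocks when the consumer lags
    q.put(None)
    t.join()
    return received

if __name__ == "__main__":
    blocks = run_trimmed_pipeline()
    print(len(blocks))  # → 1000: every block transferred, none lost
```

The important property here is the bounded buffer: the writer blocks when the reader falls behind, so memory cannot grow without limit.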

 

I use Network Streams and XNet very heavily on the Linux RT platform and don't have issues with memory or stability.  In my case it happens to be the 9136, which is a cDAQ system, but many of the underlying technologies are the same.  These tests run many parallel loops (talking to hardware, logging, reporting status, sequencing, and control), often non-stop for many months.  I'm not saying there can't be a bug on NI's side; I'm just trying to give you some confidence that a stable system can be made with this setup.

Message 3 of 4
(1,840 Views)

Hello,

 

Thank you so much for your feedback.

 

You are right, there is quite a lot happening in one VI, which can cause confusion.

I tried to go through the code with my colleagues but to no avail.

 

To give you a little perspective: it turns out that working with XNET by itself does not seem to be the issue. I disabled the Network Streams and executed the program, and there is no memory leak; the memory leak occurs only when the Network Stream is enabled.

 

I followed your advice on the Network Shared Variables and made it an RT FIFO-based system, hoping this could help the execution, but yet again, to no avail.
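For anyone following along, the key difference is that an RT FIFO has a fixed memory footprint, unlike an unbounded queue or a Shared Variable buffer that can grow. A rough Python model of the "overwrite oldest" configuration (LabVIEW RT FIFOs can also be configured to block or return an error instead of overwriting):

```python
from collections import deque

# Fixed-size, lossy buffer roughly modelling an RT FIFO that overwrites
# its oldest element when full.  Memory use is bounded by maxlen.
rt_fifo = deque(maxlen=4)
for sample in range(10):
    rt_fifo.append(sample)      # oldest samples are silently dropped

print(list(rt_fifo))            # → [6, 7, 8, 9]
```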

 

I have been using Network Streams with other DAQ modules like the 9231 or the 9220, running at up to 100 kHz, and there is no memory leak. It is only the XNET and Network Streams combination that creates the issue.

I hope this feedback helps. Could you suggest any other way to transfer data from the RT target to the host computer with latency limited to 100 ms and no data loss?
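One general pattern that meets "no data loss, latency under 100 ms" with any lossless transport (Network Streams included) is to batch samples and flush each batch on a deadline rather than waiting for a full block. A transport-agnostic Python sketch of that batching logic, where `send` is a hypothetical stand-in for the actual stream write:

```python
import time

def batch_and_flush(samples, send, max_batch=64, deadline_s=0.1):
    """Accumulate samples and hand each batch to `send` either when the
    batch is full or when deadline_s has elapsed, whichever comes first.
    Nothing is ever dropped, and no sample waits longer than deadline_s."""
    batch, t0 = [], time.monotonic()
    for s in samples:
        batch.append(s)
        if len(batch) >= max_batch or time.monotonic() - t0 >= deadline_s:
            send(batch)
            batch, t0 = [], time.monotonic()
    if batch:                       # flush the remainder
        send(batch)

sent = []
batch_and_flush(range(150), sent.append, max_batch=64)
print(sum(len(b) for b in sent))   # → 150: every sample delivered
```

The same idea applies directly to a Network Streams writer on the cRIO: write small batches and flush on a 100 ms timer, so the endpoint buffer stays shallow and latency stays bounded.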

 

Once again, thank you so much for your help 🙂

 

Further details:

Host: Windows 10 based PC with LabVIEW 2018

RT: Linux-based cRIO-9045

IPs have been assigned specifically to the cRIOs.

I destroy the endpoints only when the stream is clear.

Message 4 of 4
(1,833 Views)