
queue memory leak?

Hi,

 

I have a problem with an application that uses a queue. My acquisition process reads data from an FPGA and writes the data into a queue every 100 ms. The consumer process reads the data and sends it via TCP/IP to a client. Strangely, the memory usage keeps increasing even though the queue size is not growing. I have attached a snippet. Unfortunately there are many dependencies on other functions, but the structure of the data exchange should still be readable.
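Since I cannot paste the block diagram here as text, the data flow is roughly equivalent to the sketch below (Python used only as pseudo-code; the queue depth, block size and TCP endpoint are invented, and the consumer assumes something is listening on 127.0.0.1:5000):

```python
import queue
import socket
import struct
import threading
import time

FIFO_DEPTH = 100      # bounded on purpose: a slow consumer shows up as "queue full"
BLOCK_SIZE = 512      # stand-in for one 100 ms block of FPGA data

def producer(q, stop):
    # Stands in for the FPGA read loop: one block of data every 100 ms.
    while not stop.is_set():
        block = [0.0] * BLOCK_SIZE            # placeholder for the DMA FIFO read
        try:
            q.put(block, timeout=0.1)         # never block forever on a full queue
        except queue.Full:
            pass                              # drop (or count) instead of letting memory grow
        time.sleep(0.1)

def consumer(q, stop, host="127.0.0.1", port=5000):
    # Stands in for the TCP/IP sender; assumes a listener on host:port.
    with socket.create_connection((host, port)) as sock:
        while not stop.is_set():
            try:
                block = q.get(timeout=0.5)
            except queue.Empty:
                continue
            payload = struct.pack("<%dd" % len(block), *block)
            sock.sendall(struct.pack("<I", len(payload)) + payload)

if __name__ == "__main__":
    stop = threading.Event()
    q = queue.Queue(maxsize=FIFO_DEPTH)
    threading.Thread(target=producer, args=(q, stop), daemon=True).start()
    threading.Thread(target=consumer, args=(q, stop), daemon=True).start()
    time.sleep(5)
    stop.set()
```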

 

Best Regards,

 

Joachim

Message 1 of 6

Hi Joachim,

 

The two most common problems when a program consumes more and more memory are:

 

 

1.) Memory fragmentation (e.g. when you use the Build Array function together with shift registers inside a loop)

 

See: Memory Usage and RAM

http://forums.ni.com/ni/board/message?board.id=170&message.id=53804&requireLogin=False
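In text form the difference looks roughly like this (Python only as a stand-in for the block diagram; the array size is arbitrary):

```python
N = 10_000

# Anti-pattern: the text equivalent of Build Array feeding a shift register.
# Every iteration builds a new, slightly larger buffer, so the memory manager
# keeps reallocating and the address space fragments over time.
data = []
for i in range(N):
    data = data + [float(i)]        # a brand-new list on every pass

# Better: allocate once up front (Initialize Array) and overwrite in place
# (Replace Array Subset). The buffer size never changes inside the loop.
data = [0.0] * N
for i in range(N):
    data[i] = float(i)
```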

 

 

2.) Unclosed references (esp. in a loop)

 

Always close every kind of reference!
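The reference problem, again as a rough text analogue (Python as a stand-in; in LabVIEW this would be file, VISA, VI Server or queue references, and the file name here is just a scratch file for the example):

```python
LOG = "scratch.log"   # throwaway file used only for this illustration

# Leaky pattern: a new reference is opened on every iteration and never closed,
# so handles (and their buffers) pile up for as long as the loop runs.
handles = []
for _ in range(200):
    f = open(LOG, "ab")
    f.write(b"\x00")
    handles.append(f)               # opened, used, never closed

# Fixed: close the reference in the same iteration that opened it ...
for _ in range(200):
    with open(LOG, "ab") as f:
        f.write(b"\x00")

# ... or, better still, open once before the loop and close once after it.
with open(LOG, "ab") as f:
    for _ in range(200):
        f.write(b"\x00")

for f in handles:                   # clean up the deliberately leaked handles
    f.close()
```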

 

 

I took a short look at your VI and assume the problem arises inside the loop in the middle (the state machine).

Check how the references and the queue are handled there.

 

I hope this helps.

 

 

With best regards,

 

Ralf N.

Message 2 of 6

Hi,

 

I have seen similar issues. In my case there was a time-critical loop enqueuing data and a high-priority loop dequeuing the elements. The number of remaining queue elements indeed did not grow and all elements were dequeued, yet the memory usage kept rising until the real-time system ran out of memory. After placing a few Request Deallocation functions to clean up memory, the problem happened less often. After some experiments I found that the time-critical loop creates copies to transfer data to the non-time-critical loop, and these copies are not always cleaned up by the memory manager. Even when no memory is left, the memory manager still does not clean up memory allocated in the time-critical loop.

 

Placing both the enqueue and the dequeue in the time-critical loop worked for me.

 

Arnoud de Kuijper

T&M Solutions BV

Message 3 of 6

I have had the same thing happen, but on version 8.2, using TCP/IP from RT to a desktop computer. When I disconnected the error in/out from the queues, the leak went away. It's an easy thing to try, at least, assuming your error in/outs are wired.

 

Richard
Message 4 of 6

Joachim,

 

I am not aware of any leaking behavior of queues in any LV version I am familiar with.

But a few things to note about your post/application:

It is quite obvious that this is the RT application for a cRIO (surprise, it's even in your screenshot name 🙂 ). It is suggested not to use queues on RT targets since queues are not deterministic (see here). Please switch to RT FIFOs (see the sketch at the end of this post).

Second point: leaking behavior is often created by the way data is handled after retrieval from the queue/RT FIFO. So without seeing the subVIs that deal with the elements, we can only guess.

Last point: Your application seems to depend on a stable network connection to the host. I am not sure that is desirable. I would personally try to avoid such a dependency and approach it more like the reference architecture "In-Vehicle Datalogger with cRIO".
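Coming back to the queue-vs-RT-FIFO point, here is a toy illustration (Python, purely a sketch; on the target you would of course use the RT FIFO VIs themselves): an RT FIFO has a fixed depth chosen at creation time and simply rejects new elements when it is full, so it never has to allocate memory while your loops are running.

```python
from collections import deque

class RTFifoSketch:
    """Toy model of an RT FIFO: fixed depth, no allocation while running."""

    def __init__(self, depth):
        self._buf = deque()         # real RT FIFOs preallocate; this only models the bound
        self._depth = depth

    def write(self, element):
        if len(self._buf) >= self._depth:
            return False            # like RT FIFO Write timing out on a full FIFO
        self._buf.append(element)
        return True

    def read(self):
        return self._buf.popleft() if self._buf else None

# By contrast, an unbounded queue keeps allocating as long as the producer
# outruns the consumer - exactly the growth you do not want on an RT target.
fifo = RTFifoSketch(depth=100)
assert fifo.write([0.0] * 512)      # accepted while there is room
```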

 

hope this helps,

Norbert

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 5 of 6

Norbert B wrote:

Joachim,

 

I am not aware of any leaking behavior of queues in any LV version I am familiar with.

[...]


Norbert,

In 2007, I worked on an RT system that was plagued with memory leaks; well, three to be exact. ONE of them ended up being the error input to a Queue (or it could have been a Notifier, but definitely one of the two). The error cluster was carrying around a Warning, not an error, so the message was propagated through the wires for thousands of loop iterations. Not a problem in itself. However, it was determined that, for some reason, the Queue (or LabVIEW somewhere) was causing a leak due to that Warning being on the wire. If we cleared the Warning, or didn't wire the error into the Queue, the leak went away.

 

I also remember a blanket statement that was going around at the time about not using Queues or Notifiers on RT targets.

Richard
Message 6 of 6