LabVIEW


How can I implement a dynamic loop rate?

Apologies if this has been answered, but searching hasn't revealed a solution.

I have a data acquisition application on the RT target, clocked externally.  I can send every x-th sample to the host PC for display via a shared variable (data bandwidth is no issue).  I'd like to dynamically determine what "x" should be based on the processor load on the host (I can send x to the RT from the host).

Is there a simple way to determine how much processor time my application is consuming on the host PC, so I can adapt x to the environment?

Message 1 of 3
Hi Lee Jay!
 
Thank you for contacting National Instruments.  From my understanding of the information you have provided, I think I have a suggestion to achieve this behavior, although I am a bit unclear on exactly what you are trying to do.  LabVIEW does not have any built-in functionality to report processor timing and resource usage as you have described.  From within the program, however, you can measure the time that elapses between two points in your code by using the Tick Count (ms) function together with a Sequence Structure.
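Since LabVIEW code is graphical, here is a rough text-language analogue of that idea: a minimal Python sketch that brackets the display work between two reads of a millisecond tick counter, much as you would bracket it between Tick Count (ms) nodes in frames of a Flat Sequence Structure.  The function names and the placeholder display work are assumptions for illustration only.

import time

def tick_count_ms():
    # Rough analogue of LabVIEW's Tick Count (ms): a free-running
    # millisecond counter based on a monotonic clock.
    return int(time.monotonic() * 1000)

def process_and_display(data):
    # Stand-in for the real display code; just burn a little time here.
    time.sleep(0.01)

def timed_display_update(data):
    # Analogue of bracketing the display code between two Sequence
    # Structure frames, each containing a Tick Count (ms) node.
    start = tick_count_ms()
    process_and_display(data)
    elapsed_ms = tick_count_ms() - start
    return elapsed_ms

if __name__ == "__main__":
    print("display pass took", timed_display_update([0.0] * 1000), "ms")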
 
Another aspect to consider is using something similar to the Queue Basics example, which can be found in the Example Finder.  This should give you control over the rate at which items are read on the host, regardless of the rate at which data comes in from the target.
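For what that pattern looks like outside of LabVIEW, here is a minimal Python sketch of the same idea: a bounded queue with a producer standing in for the code that receives samples from the target and a consumer standing in for the display loop, which reads at its own pace.  All names and timing values below are illustrative assumptions.

import queue
import threading
import time

sample_queue = queue.Queue(maxsize=1000)   # bounded so a slow consumer cannot grow memory forever

def producer():
    # Stand-in for the loop that receives new samples from the RT target.
    for i in range(100):
        sample_queue.put(float(i))
        time.sleep(0.001)                  # pretend samples arrive every 1 ms

def consumer():
    # Display loop: reads at its own pace, independent of the arrival rate.
    processed = 0
    while processed < 100:
        sample = sample_queue.get()
        # ... update the display with `sample` here ...
        processed += 1
        time.sleep(0.01)                   # pretend the display update takes 10 ms

if __name__ == "__main__":
    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()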
 
I hope this helps.  Let me know if there is anything else I can help with or clarify.  Have a great day!
 
Jason W.
 
 
National Instruments
Applications Engineer
Message 2 of 3
Thanks for responding.

The host application is somewhat processor-intensive because it displays a lot of data on the screen.  I can run it at a given rate, but the maximum sustainable rate changes depending on what else is going on with that machine and how fast the data is changing.  Since the RT side's cycle time is clocked in real time, I'd like the host to send the RT a decimation value based on how much time the host has available.

I gather from your message that a possible way to do this would be to measure the execution time of a single pass through the loop on the host and compare it to the currently commanded time between samples.  I'm a bit annoyed that I didn't think of such a simple solution.  I'll give that option, and several variations I can think of, a look.  Thanks.
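The comparison described above boils down to arithmetic along these lines, sketched here in Python with assumed names and an arbitrary clamping range: divide the measured host loop time by the RT's commanded sample interval and round up to get the decimation value x to send back to the target.

import math

def decimation_factor(host_loop_ms, rt_sample_interval_ms, max_x=1000):
    # How many RT samples elapse while the host finishes one display pass?
    # Send every x-th sample so the host can keep up; never less than 1.
    x = math.ceil(host_loop_ms / rt_sample_interval_ms)
    return max(1, min(x, max_x))

# Example: a 45 ms display pass with samples arriving every 2 ms -> x = 23.
print(decimation_factor(45.0, 2.0))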
Message 3 of 3