
Actor Framework Discussions


Timed delayed message

So, I have been using the Time Delayed Send Message VI to send periodic messages to retrieve data from an actor, and I was curious: why does it use the notifier wait time to clock the execution loop rather than a Timed Loop (or some other internal loop timer)?  It seems to me that if you want to send a message at regular intervals, that would be a better way.  With the wait, there is a finite period over which all the code after the Get Notification executes, and this causes the loop to drift in time.  So, for instance, if you have something that you want to execute at 1 Hz, then in order to reliably get the message every second, you actually have to set up the loop to send the message at a rate greater than 1 Hz (i.e., with a wait shorter than one second).  Am I missing something here?  Does anyone have any thoughts on this?
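To make the drift concrete, here is a rough sketch of the pattern as I understand it, written as hypothetical Python rather than G (the names and numbers are made up; this is not the actual VI code): the loop waits a fixed timeout and then sends, so every iteration lasts wait plus work, and the send times creep later and later.

    import time

    SEND_INTERVAL_S = 1.0   # desired 1 Hz send rate
    WORK_S = 0.02           # time the loop body itself takes (enqueue, bookkeeping)

    start = time.monotonic()
    for i in range(5):
        time.sleep(SEND_INTERVAL_S)   # stand-in for the notifier wait that clocks the loop
        time.sleep(WORK_S)            # stand-in for enqueuing the "get data" message
        elapsed = time.monotonic() - start
        print(f"send {i + 1}: {elapsed:.2f} s (ideal {i + 1:.2f} s)")
    # Each iteration lasts wait + work, so the sends land near 1.02 s, 2.04 s, 3.06 s, ...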

Cheers, Matt

0 Kudos
Message 1 of 27
(10,987 Views)

One reason could be that the AF is designed to work with Real-Time targets. If I remember correctly from the Real-Time training class, RT targets such as the cRIO have a limited number of cores and threads.  Each Timed Loop takes up one of those cores (or threads, I forget which).  So using up one of those cores/threads on a simple "heartbeat" timed loop, versus a required deterministic loop, would be a waste of resources.   I am sure someone else can provide a more elegant response.

Kenny

0 Kudos
Message 2 of 27
(6,185 Views)

Hey Kenny,

Thanks for the reply.  Just to clear up a couple of things: first, the initial target for the AF was not RT.  This is very evident in the early releases, which did not run on RT targets (or ran poorly; that was several years ago, so the details are eluding me).  Second, while a timed loop can potentially gobble up resources by forcing its iterations ahead of other, lower-priority items on a block diagram, I would argue that this is not the case with the Time Delayed Send Message VI.  Here, the VI has one task and one task only: queue up a message at a regular interval.  This is computationally lightweight, so unless you are bombarding your system with messages, the timed loop should not interfere with resource allocation (and if you are bombarding your system with messages, you are going to have a resource issue anyway, so this seems irrelevant).  Finally, while cRIOs are more limited in processing power (although I would argue that with some of the newer models this is no longer the case), cRIOs are not the only RT targets.  I am using a >3 GHz quad-core processor on a PXIe system, so computational power is not at such a premium as on the old cRIOs.  And if you are shooting for the lowest common denominator in hardware, then we really ought to be talking about the sbRIO.

Anyway, those are my thoughts.  Thanks for the reply.

Cheers, Matt

0 Kudos
Message 3 of 27
(6,185 Views)

I use Notifiers for delays also.   They are cancelable waits (by posting to or destroying the Notifier), which is very useful.   I'm not sure exactly what the AF's "Time Delayed Send Message" does, but I prevent "drift" by calculating the next required send time, subtracting the current time, and waiting for that difference.   So if I want to send messages when the Millisecond Timer is a multiple of 1000, and the current time (after code execution) is 345672005 ms, then I wait 995 ms.  This self-corrects for any delays, so there is no drift over time.  — James
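In text form, the idea is just this (a hypothetical Python sketch of the approach, since I can't paste G here; all the names are mine):

    import time

    PERIOD_S = 1.0   # send on every 1 s boundary

    next_deadline = time.monotonic() + PERIOD_S
    for i in range(5):
        # Wait only for the time remaining until the next absolute deadline, so any
        # time spent in the loop body is subtracted out and the schedule self-corrects.
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)   # stand-in for the cancelable Notifier wait
        print(f"send {i + 1} at {time.monotonic():.3f} s")   # stand-in for sending the message
        next_deadline += PERIOD_S   # advance by a whole period, not by "now + period"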

Message 4 of 27
(6,185 Views)

The use of the Timed Loop would be problematic for multiple reasons.

1. Timed Loop is not supported on Mac or Linux.

2. The Timed Loop is meaningless on Windows, where it is only simulated. If you want the Timed Loop to really be a Timed Loop, you need to be on a deterministic real-time system. But on RT, you would NOT want a Timed Loop for a message sender. The Timed Loop is useful for maintaining determinism, and just because you send the message at a regular frequency implies nothing about the rate at which it is handled. All that a Timed Loop would do on RT is interfere with your actual deterministic code loops. The framework part of the Actor Framework cannot ever be deterministic, by definition. Given that, the much lighter-weight Notifier wait creates significantly less system drag than the Timed Loop.

0 Kudos
Message 5 of 27
(6,185 Views)

AQ, let's be honest - there are many LabVIEW features not supported on Mac or Linux, so why is this an issue?  Anyone in my group who will be doing any LV development does it on a Windows machine simply because too many features are missing to make it useful on any other OS.  LV ceased being Mac-centric many years ago...

Can you expand on the "drag" comment?  How does enqueuing a message in a timed loop create drag or a resource burden?  Maybe I am missing something here.

And, I would take issue with your comment regarding rate of handling - what you say is indeed true, and for many of my cases is absolutely spot on, but I would still like to send messages at regular intervals, regardless of whether there might be jitter in the delivery and non-deterministic behavior in the handling. 

As an example: if I request data from a serial device that is represented as an actor in my system, I can never guarantee deterministic behavior in the device's response, regardless of how I request it; but I can guarantee that the message to retrieve data is always sent at regular intervals, thus doing the best I can to ensure that I have fresh data every period.  With the notifier, I am absolutely guaranteed to drift, simply because there is a finite execution time for the loop as it is coded.  Thus, if I want to ensure that I have fresh data every second, I actually have to code my wait time to be less than one second.  Now I have the increased burden of actually having to talk to that port at a rate greater than 1 Hz.  The resource burden associated with the increased calls to the port (and all of the code that entails) cannot possibly be smaller than the burden of enqueuing messages in a timed loop, can it?

Now, as James pointed out, there are other ways to produce messages at regular intervals.  Maybe his approach would be better...  I am just suggesting that the way the delayed message is currently implemented produces results that surprised me (as I did not look under the hood initially).  Maybe the VI documentation says something and I missed it, but if it doesn't, it probably ought to state that the implementation is prone to drift.  My feeling is that many programmers who use this might actually be looking for something that produces a message at regular intervals, and this VI does not do that.

0 Kudos
Message 6 of 27
(6,185 Views)

mtat76 wrote:

AQ, let's be honest - there are many LabVIEW features not supported on Mac or Linux, so why is this an issue?  Anyone in my group who will be doing any LV development does it on a Windows machine simply because too many features are missing to make it useful on any other OS.  LV ceased being Mac-centric many years ago...

a) Because I started off building the AF with plans (still working on them) for building a distributed computing platform similar to SETI-At-Home and I didn't want to restrict myself from the rapidly growing number of users that only have Macs for home desktops/laptops.

b) Because if I develop initially for Mac, Linux and Windows, I can be much much more certain that the code will at least work when I eventually turn my attention to Real Time.

c) Because I support many customers who use LabVIEW without hardware and for them, LabVIEW is a great tool.

d) Because I personally develop G code almost exclusively on my Mac. So code I build is going to be cross-platform.

Answering the rest of your comments a bit out of order.

mtat76 wrote:

but I would still like to send messages at regular intervals

It should be a regular interval as written. Let's say the timeout you provide is X milliseconds. Let's say the loop takes Y milliseconds to execute every time. Then the message is going to send every X+Y milliseconds -- a regular interval. Outside of a real-time system, this is well within the noise level of the operating system and the rest of the multithreading environment.
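For example, if X is 1000 ms and Y is a fairly steady 5 ms, the message goes out every 1005 ms, iteration after iteration: regular, just offset from the 1000 ms you asked for.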

I wasn't aiming for metronome precision in the send. That simply wasn't a use case for a send like this. This was meant to provide a regular heartbeat, and drift isn't relevant as long as the heartbeat keeps coming. If you need metronome precision, then you're talking determinism. But it is -- as I said -- very misleading, in my opinion, to write a VI that goes to massive lengths to send the message on a fixed pulse, with all the error monitoring and drift detection junk, for something that is inherently not going to be handled with any regularity.

To put it another way: As I see it, if this VI as written isn't good enough for your use case, any timing structure for sending messages isn't going to be good enough for your use cases. You can't use it for pumping data from a nested actor to a caller. You can't use it for keeping a UI boolean flashing at a constant rate. And no amount of fixing the *send* will fix those use cases. So why include that effort?

mtat76 wrote:

Can you expand on the "drag" comment?  How does enqueing a message in a timed loop create drag or a resource burden?  Maybe I am missing something here.

Timed loops elevate their own priority to make their schedule commitments. They demand processor cycles from other systems. That isn't something that this VI needs to be doing. Also, if you were to put this on a Real Time system, you now have thread contention between the various timed loops, and this sort of message pump is clearly not part of your determinism. If this VI used a Timed Loop, it would be ruled out from being usable on RT.

0 Kudos
Message 7 of 27
(6,185 Views)

AristosQueue wrote:

mtat76 wrote:

but I would still like to send messages at regular intervals


It should be a regular interval as written. Let's say the timeout you provide is X milliseconds. Let's say the loop takes Y milliseconds to execute every time. Then the message is going to send every X+Y milliseconds -- a regular interval. Outside of a real-time system, this is well within the noise level of the operating system and the rest of the multithreading environment.


OK, I am not arguing with this, but X+Y is not determinable a priori (and by determinable, I simply mean that you have no idea even approximately what Y is unless you sit down and time it before operation).  Although I write a lot of RT code, I rarely use timing structures, because I often let the hardware control the rate at which things occur.  Although the code within a particular loop may take an indeterminate amount of time (and this is the case even within timed loops), I can guarantee that the loop itself will return (possibly with jitter) at the rate that I expect, and therefore I know that the data structures I will be processing are available down the line at regular intervals.  So this statement...

AristosQueue wrote:

To put it another way: As I see it, if this VI as written isn't good enough for your use case, any timing structure for sending messages isn't going to be good enough for your use cases. You can't use it for pumping data from a nested actor to a caller. You can't use it for keeping a UI boolean flashing at a constant rate. And no amount of fixing the *send* will fix those use cases. So why include that effort?

demonstrates some lack of understanding (likely because my problem statement lacked specifics).  As I said above, although I cannot guarantee when a message will be processed or how long that processing will take, if I send that message off at well-defined intervals, I will be able to say that the processing is occurring somewhere within the interval of interest.  So, if I have a full second to do something, then I know (as long as I have not stuffed anything in there that requires processing beyond the allotted interval) that the message will be processed somewhere within that interval (and I am not saying I know where in the interval, so strict determinism is not the issue here).

Going back to the X+Y statement: if we take the approach as coded, we don't know the interval at which the message is sent without more extensive probing. And there is no way we can say that processing of the message will occur within the period of interest (without setting a higher rate).  So, if I request a wait of 1 s, we may get the message being processed at, say, 1.1 s, and if this is consistent, we have significant drift and ultimately we miss periods where the message is not processed.  And even if we have a general idea of what Y is, we can still get drift when accounting for it (unless the jitter on the execution time is purely Gaussian, which, in many systems, it will not be).
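To put numbers on it: with a 1 s wait and roughly 100 ms of loop work, the sends land near 1.1 s, 2.2 s, 3.3 s, and so on; by the tenth iteration the send goes out around 11 s, so an entire one-second period has passed with no message in it, and the error keeps accumulating after that.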

Although timed structures have inadvertently become the focus, I recognize, as James pointed out, that there are other ways to skin this cat.  All I am saying is that it would be possible to provide more metronome-like behavior without significant overhead.

After this discussion, I realize that I am now using this in a way that was clearly not intended and I think that I am going to move some stuff around in my code.  But, in my foggy brain, I am having trouble understanding exactly what the use case is for this VI other than a heartbeat.  My feeling is that few developers are likely to send regular messages on periods that are not well defined (other than something like a watchdog timer).

Thanks for the enlightening discussion, AQ.  Cheers, Matt

0 Kudos
Message 8 of 27
(6,185 Views)

This is the kind of discussion that moves the AF forward -- I want to keep discussing this a bit further to see if we should be adding another function to the AF. I encourage other developers to chime in.

> and  ultimately we miss periods where the message is not processed.

I would like you to go into detail here:

What are the consequences for a missed period in your application?

Does the system need to do something special when/if a missed send occurs?

Does there need to be any validation that the receive was done within some time horizon of the send?

Those answers would be key to creating a new AF method.

I am asking because the way I see it, there are only two use cases. First, the one where the time period is very large, human scale, and the precision is not required at all... if you occasionally miss an update, no one will ever notice... the data keeps coming in steadily. Second, the one where the time period is very small and requires computer-speed updates because it is part of keeping the system functional to have that precision interaction on both sender and receiver sides. In the former, the consequences for missing a period now and then are non-existent. In the latter, the AF command-style messaging is probably the wrong choice of architecture.

> So, if I have a full second to do something, then I know (as long as I have not stuffed

> anything in there that requires processing beyond the allotted interval)

At a full-second scale, you have a near guarantee, but as the period gets much smaller, that guarantee evaporates. There may not be anything happening in your thread, but what about all the others? How many other actors exist in the application? How many other applications are running on the OS? How often does memory have to page in and out? It doesn't take much to blow through a 10 ms time period. To me, this means that anyone using a heartbeat send has to assume that most of the time it will miss its time window (and then be pleasantly surprised when it turns out not to).

0 Kudos
Message 9 of 27
(6,185 Views)

I find a metronome-like lack of drift to be most useful for taking evenly spaced data.   If I want a reading per second for 10 hours, then I don't care about millisecond jitter, but I do care about drift.  Evenly spaced data is preferred for many types of analysis and takes up less memory, since one doesn't have to record a timestamp for each point.

— James

BTW> If someone wants to write a new “metronome” component for the AF, consider the analogous components in my LAVA “Messenging” package (soon to be “Messenger Library” on the Tools Network).  Look at the example “Example of Recurring Event Methods”.   “Metronome” is an actor which accepts messages to change its period on the fly.  “Send on Next Millisecond Multiple” is a one-shot action analogous to the “Wait on Next ms Multiple” primitive.  
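"Send on Next Millisecond Multiple" boils down to waiting until the clock hits the next whole multiple of the period.  A hypothetical Python sketch of that idea (not the actual package code) looks like this:

    import time

    def wait_on_next_multiple(period_s):
        # Sleep until the clock reaches the next whole multiple of period_s,
        # analogous in spirit to the "Wait on Next ms Multiple" primitive mentioned above.
        now = time.monotonic()
        next_boundary = (now // period_s + 1) * period_s
        time.sleep(next_boundary - now)
        return next_boundary

    # Fire on every 1.0 s boundary; jitter in the loop body does not accumulate,
    # because each wait re-aligns to an absolute boundary.
    for _ in range(3):
        boundary = wait_on_next_multiple(1.0)
        print(f"boundary reached at {boundary:.3f} s")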

Message 10 of 27
(6,185 Views)