08-17-2019 10:44 AM
Ben, what is a DVR? Digital Virtual Reference?
08-19-2019 08:31 AM - edited 08-19-2019 08:31 AM
@Kevin_Price wrote:
A *much better* approach is that all clones enqueue into 1 common queue that the main app dequeues from. In this approach, the queue ref itself doesn't give you correspondence to a specific clone instance. To know that correspondence, you'll need to include appropriate info in the data carried by the queue.
-Kevin P
I'm in no way saying Kevin is wrong here, but I will add that there are other ways of viewing this.
I personally prefer to create an array of notifiers as return channels (one per clone) as opposed to a single return queue. This way any weird behaviour that a clone might end up displaying affects ONLY its own channel, not the others. It DOES require a little more work on the receiver side, but for me at least (and my view of the world) the advantages (better memory management, no crosstalk) outweigh the disadvantages. Depending on how you view things yourself, your mileage may vary.
But for the "path of least resistance" to a decent, workable solution, the single queue is probably going to deliver more bang for the buck than multiple notifiers. We had specific requirements which pushed us down the path I mention above (multiple notifiers); we only later came to recognise its advantages.
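For readers outside LabVIEW, the single-queue approach Kevin describes can be sketched in Python. This is only an illustrative stand-in: `queue.Queue` plays the role of a LabVIEW queue, threads play the role of clones, and the clone ID carried in each message is the "appropriate info" that restores the correspondence to a specific clone:

```python
import queue
import threading

shared = queue.Queue()  # one common return queue that all clones enqueue into

def clone(clone_id: int, result: int) -> None:
    # Each clone tags its message with its own ID, since the shared
    # queue reference alone gives no correspondence to a clone instance.
    shared.put({"clone": clone_id, "result": result})

threads = [threading.Thread(target=clone, args=(i, i * i)) for i in range(4)]
for t in threads:
    t.start()

received = {}
for _ in range(4):
    msg = shared.get()                 # blocks until a message arrives; no polling
    received[msg["clone"]] = msg["result"]

print(sorted(received.items()))        # → [(0, 0), (1, 1), (2, 4), (3, 9)]
```

The receiver simply dequeues in whatever order messages arrive; the per-clone-channel alternative discussed above trades this simplicity for isolation between channels.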
08-19-2019 10:17 AM
@Intaris wrote:
I personally prefer to create an array of notifiers as return channels (one per clone) as opposed to a single return queue.
Curious and looking to expand my mind. I'm having a tough time picturing a clean way to service dozens (or hundreds) of individual return channels, notifiers in this case. I *do* understand the compartmentalization advantage you can end up with, but I keep picturing the servicing as a CPU-hogging polling process. Since you don't know which one of the 100-odd clones might be notifying you about something, don't you have to iterate through all of them pretty much continuously, letting most of them expire their 0 msec timeout?
I expect that you're doing things a different way, or else my intuitive concerns about massive polling are unfounded. Maybe both. Can you give a little more high-level overview about how you manage to service a massive collection of clones and individual return channels?
-Kevin P
08-19-2019 10:35 AM
The answer lies in the "Wait for notification from Multiple" and reading the documentation for it REALLY carefully.
Used improperly, this can actually cause memory problems instead of solving them, but there are workarounds when it finally becomes clear how this node actually functions.
We use it for asynchronous calls to parallel processes where we don't know how long they'll take. We check the status every 100 ms or so, and it's certainly not a CPU hog when done properly.
The "aha" moment comes when you realise that the array of notifiers returned from the node is NOT the same array as was fed in. It only returns the notifiers which have received something since the last call. With that information, it's possible to do some bookkeeping over each individual call until all asynchronous processes have completed. You essentially get all of the notifier results piecemeal over multiple calls. Checking the references allows you to assign the correct channel as having returned data.
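The bookkeeping pattern described above can be sketched in Python. This is a toy model, not LabVIEW: each per-clone channel is stood in for by a `queue.Queue`, and the poll loop mimics the key behaviour of "Wait for Notification from Multiple", namely that each call yields only the channels that have fired since the last call, so the receiver tracks completions until every asynchronous process has reported in:

```python
import queue
import random
import threading
import time

# One notifier-like return channel per clone (illustrative stand-in).
channels = [queue.Queue() for _ in range(5)]

def clone(i: int) -> None:
    time.sleep(random.uniform(0.05, 0.3))   # unknown completion time
    channels[i].put(f"result-{i}")

for i in range(len(channels)):
    threading.Thread(target=clone, args=(i,)).start()

# Bookkeeping: each poll collects only the channels that fired since the
# last poll, and a channel is removed from "pending" once it has reported.
pending = set(range(len(channels)))
results = {}
while pending:
    fired = [i for i in pending if not channels[i].empty()]
    for i in fired:
        results[i] = channels[i].get()
        pending.discard(i)
    if pending:
        time.sleep(0.1)                     # ~100 ms poll; not a CPU hog

print(len(results))                         # → 5
```

Because a fired channel is checked off and never polled again, the results arrive piecemeal over multiple iterations, and matching on the channel index identifies which clone returned data.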
08-19-2019 11:01 AM
Thanks. I had kinda forgotten about the "wait for notification from multiple" function; it isn't something I've used, IIRC. I only remember a long, long ago discussion on LAVA that seemingly instigated the introduction of that function into LabVIEW.
I get it now. The use of "wait... multiple" is the biggest part of making it work cleanly, the relatively modest "polling" rate (~100 msec) also helps.
-Kevin P
08-19-2019 11:28 AM
I went to look at our code which uses this and found an inconsistency between a comment on the BD and the actual code, which is nice.
We spent some time debating and testing the fine differences between "Wait for Notification from Multiple" and "Wait for Notification from Multiple with History" (longest node name ever?). Apparently, the version with history is required for deadlock-free usage, but it maintains an ever-growing list of timestamps for notifiers it has seen previously.
I believe our solution involved having a strict timing routine governing the possible timestamps of our notifiers, and making the VI containing the callsite for the node a preallocated clone so that we had a guaranteed callsite-to-notifier-array relationship. This is because each "Wait for Notification from Multiple" callsite retains its own history. It was convoluted, but under the conditions where we use it, it's a safe and performant solution.
I think.
Note: Timing details.
For subsequent arrays A and B, all timestamps for notifiers in B must be older than all timestamps for notifiers in A; then no deadlock occurs. Because we treat our array as atomic (we create, handle, and destroy the entire array as a unit) and we only ever handle a single array, this can be guaranteed.
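The timing invariant above can be written down as a simple check. This is only a sketch of the stated rule, with plain floats standing in for LabVIEW notifier timestamps:

```python
def arrays_safe(a_timestamps: list[float], b_timestamps: list[float]) -> bool:
    """Check the no-deadlock invariant described above: every timestamp in
    the subsequent array B must be older (smaller) than every timestamp in
    array A. Empty arrays are trivially safe."""
    return max(b_timestamps, default=float("-inf")) < min(a_timestamps, default=float("inf"))

print(arrays_safe([10.0, 12.0], [3.0, 5.0]))  # True: all of B is older than all of A
print(arrays_safe([10.0, 12.0], [11.0]))      # False: B overlaps A's timestamps
```

Treating the whole array as an atomic unit, as described above, is what makes this strict ordering enforceable in practice.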
Note: Callsite importance
I have to admit I learned a lot about the inner workings of notifiers when working out this code. The fact that timestamps are stored per callsite, as opposed to per VI or per notifier reference, took me a while to realise. It was one of several "aha" moments. Over 20 years working with LabVIEW, and still experiencing "aha" moments. I'm probably at the stage where I'm experiencing "aha" moments I've actually already experienced years ago. Memory fades.
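The "history lives at the callsite" behaviour can be pictured with a toy model (this is not LabVIEW's internals, just an illustration of the consequence): two callsites waiting on the same notifiers each keep an independent record of what they have already consumed.

```python
# Toy model: each callsite keeps its own set of already-seen notifications,
# so two callsites fed the same notifications see independent histories.
class Callsite:
    def __init__(self) -> None:
        self.seen = set()  # (notifier_id, timestamp) pairs this callsite consumed

    def wait_multiple_with_history(self, notifications):
        fresh = [n for n in notifications if n not in self.seen]
        self.seen.update(fresh)
        return fresh

site_a, site_b = Callsite(), Callsite()
posted = [("n1", 1.0), ("n2", 2.0)]
print(site_a.wait_multiple_with_history(posted))  # both fresh for site A
print(site_a.wait_multiple_with_history(posted))  # [] — site A already saw them
print(site_b.wait_multiple_with_history(posted))  # both still fresh for site B
```

This is why pinning the callsite, e.g. via a preallocated clone VI as described above, matters: the history follows the callsite, not the notifier references.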
08-19-2019 12:10 PM - edited 08-19-2019 12:14 PM
Just to close the loop a bit: it's specifically the "...with history" primitive I was trying to recall, because of possible issues with the regular "wait...multiple" when fed a dynamic list of notifier refnums. And I think I managed to find the thread (or one of them, at least) over on LAVA here, from more than a decade ago. (Sorry, dunno whether or not the link to forum content will work if you don't have a login over on LAVA.) More interesting content can be found there by doing an AND search on the keywords wait, notification, multiple.
-Kevin P
P.S. This msg is a fairly comprehensive summary within the referenced LAVA thread, though it may be an even better idea to simply back up all the way to the beginning.