

Communication with Preallocated Re-entrant Clones

Solved!

Ben, what is a DVR?  Digital Virtual Reference?

0 Kudos
Message 11 of 18
(773 Views)
Message 12 of 18
(765 Views)

@Kevin_Price wrote:

 

A *much better* approach is that all clones enqueue into 1 common queue that the main app dequeues from.  In this approach, the queue ref itself doesn't give you correspondence to a specific clone instance.  To know that correspondence, you'll need to include appropriate info in the data carried by the queue.

 

 

-Kevin P


I'm in no way saying Kevin is wrong here, but I will add that there are other ways of viewing this.

 

I personally prefer to create an array of notifiers as return channels (one per clone), as opposed to a single return queue.  This way, any weird behaviour that a clone might end up displaying affects ONLY its own channel, not the others.  It DOES require a little more work on the receiver side, but for me at least (and my view of the world) the advantages (better memory management, no crosstalk) outweigh the disadvantages. Depending on how you view things yourself, your mileage may vary.

 

But for the "path of least resistance" to a decent, workable solution, the single queue is probably going to deliver more bang for the buck than multiple notifiers (we had specific requirements which pushed us down the multiple-notifier path I mention above; only later did we come to recognise its advantages).
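In case it helps to see the pattern outside of LabVIEW: here is a minimal sketch of Kevin's single-return-queue idea in Go, with channels standing in for queue refnums and every name (CloneMsg, runClone, numClones) invented purely for illustration. The key point from Kevin's post is that the clone's identity travels with the data, since the shared queue ref alone can't tell clones apart. The per-clone alternative I describe above would simply replace the single results channel with a slice of channels, one per clone.

// Sketch only: Go channels standing in for LabVIEW queue refnums.
package main

import (
	"fmt"
	"sync"
)

// CloneMsg carries the clone's identity along with its payload, because
// the shared "return queue" by itself can't identify the sender.
type CloneMsg struct {
	CloneID int
	Payload string
}

func runClone(id int, out chan<- CloneMsg, wg *sync.WaitGroup) {
	defer wg.Done()
	// ...the clone's real work would go here...
	out <- CloneMsg{CloneID: id, Payload: fmt.Sprintf("result from clone %d", id)}
}

func main() {
	const numClones = 8
	results := make(chan CloneMsg) // the single common return queue
	var wg sync.WaitGroup

	for i := 0; i < numClones; i++ {
		wg.Add(1)
		go runClone(i, results, &wg) // launch the clones
	}
	go func() { wg.Wait(); close(results) }() // close once every clone has reported

	// The main app dequeues from one place; CloneID restores the correspondence.
	for msg := range results {
		fmt.Printf("clone %d says: %s\n", msg.CloneID, msg.Payload)
	}
}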

0 Kudos
Message 13 of 18
(725 Views)

@Intaris wrote:

I personally prefer to create an array of notifiers as return channels (one per clone) as opposed to a single return queue. 

Curious and looking to expand my mind.   I'm having a tough time picturing a clean way to service dozens (or 100's) of individual return channels, notifiers in this case.  I *do* understand the compartmentalization advantage you can end up with, but I keep picturing the servicing as a CPU-hogging polling process.  Since you don't know which one of the 100-odd clones might be notifying you about something, don't you have to iterate through all of them pretty much continuously, letting most of them expire their 0 msec timeout?

 

I expect that you're doing things a different way, or else my intuitive concerns about massive polling are unfounded.  Maybe both.   Can you give a little more high-level overview about how you manage to service a massive collection of clones and individual return channels?

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 14 of 18
(707 Views)

The answer lies in "Wait for Notification from Multiple" and in reading its documentation REALLY carefully.

Used improperly, this can actually cause memory problems instead of solving them, but there are workarounds when it finally becomes clear how this node actually functions.

 

We use it for asynchronous calls to parallel processes where we don't know how long they'll take. We check the status every 100ms or so, and it's certainly not a CPU hog, when done properly.

 

The "aha" moment comes when you realise that the array of notifiers returned from the node is NOT the same array as was fed in.  It only returns the notifiers which have received something since the last call.  With that information, it's possible to do some bookkeeping over each individual call until all asynchronous processes have completed. You essentially get all of the notifier results piecemeal over multiple calls.  Checking the references allows you to assign the correct channel as having returned data.

Message 15 of 18
(698 Views)

Thanks, I kinda forgot about the "wait for notification from multiple" function; it isn't something I've used, IIRC.  I only remember a discussion on LAVA from long, long ago that seemingly instigated the introduction of that function into LabVIEW.

 

I get it now.  The use of "wait... multiple" is the biggest part of making it work cleanly; the relatively modest "polling" rate (~100 msec) also helps.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
0 Kudos
Message 16 of 18
(690 Views)

I went to look at our code which uses this and found an inconsistency between a comment on the BD and the actual code, which is nice.

 

We spent some time debating and testing the fine differences between "Wait for Notification from Multiple" and "Wait for Notification from Multiple with History" (longest node name ever?). Apparently the version with history is required for deadlock-free usage, but it maintains an ever-increasing list of timestamps for the notifiers it has seen previously.

 

I believe our solution involved having a strict timing routine regarding the possible timestamps for our notifiers, and making the VI containing the node's callsite a preallocated clone so that we had a guaranteed callsite-to-notifier-array relationship.  This is because each "Wait for Notification from Multiple" callsite retains its own history.  It was convoluted, but under the conditions where we use it, it's a safe and performant solution.

 

I think.

 

 

Spoiler

Note: Timing details.

For successive arrays A and B, all timestamps for notifiers in B must be older than all timestamps for notifiers in A; then no deadlock occurs.  Because we treat our array as atomic (we create, handle and destroy the entire array as a unit) and we only ever handle a single array, this can be guaranteed.
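A rough analogue of that "whole array as a unit" discipline, again in Go with invented names (it models the usage rule only, not the "...with history" node itself): one batch of return channels is created, drained to completion and discarded before the next batch ever exists, so no two batches are live at the same time.

// Sketch only: the atomic-batch discipline.
package main

import "fmt"

// runBatch creates one batch of return channels, launches one worker per
// channel, drains every channel, and only then returns.
func runBatch(batchID, size int) {
	chans := make([]chan string, size)
	for i := range chans {
		chans[i] = make(chan string, 1)
		go func(id int) { chans[id] <- fmt.Sprintf("batch %d, worker %d", batchID, id) }(i)
	}
	for _, ch := range chans { // drain the whole batch before moving on
		fmt.Println(<-ch)
	}
	// The channels go out of scope here; the next batch starts from a clean slate.
}

func main() {
	for b := 0; b < 3; b++ {
		runBatch(b, 4)
	}
}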

 

Spoiler

Note: Callsite importance

I have to admit I learned a lot about the inner workings of notifiers when working out this code. The fact that timestamps are stored per callsite, as opposed to per VI or per notifier reference, took me a while to realise. It was one of several "aha" moments. Over 20 years working with LabVIEW, and still experiencing "aha" moments. I'm probably at the stage where I'm experiencing "aha" moments I've actually already experienced years ago. Memory fades.

Spoiler
Wait, that reminds me of an old VHS ad. Was it Memorex? (Google says it was Scotch: "Re-record, don't fade away").

 

0 Kudos
Message 17 of 18
(682 Views)

Just to close the loop a bit.  It's specifically the "...with history" primitive I was trying to recall, because of possible issues with the regular "wait...multiple" when fed a dynamic list of notifier refnums.   And I think I managed to find the thread (or one of them at least) over on LAVA here, from more than a decade ago.  (Sorry, dunno whether or not the link to forum content will work if you don't have a login over on LAVA.)   More interesting content can be found there by doing an AND search on the keywords wait, notification, multiple.

 

 

-Kevin P

 

P.S.  This msg is a more comprehensive summary within the referenced LAVA thread, though it may be an even better idea to simply back up all the way to the beginning.

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
0 Kudos
Message 18 of 18
(672 Views)