DQMH Consortium Toolkits Feature Requests

lderuyter

Generalize communication (Variant Reply Payload)

All,

 

The suggestion/request is mainly captured in Implementation (1) at https://delacor.com/dqmh-generic-networking-module/

 

The proposal is to generalize the 'reply' communication by using a 'variant notifier' instead of a 'typedef notifier'.

 

I agree with the statement on the Delacor website:

"This allows us to send and receive messages without knowing about their actual contents, and that’s the prerequisite for separating the module’s actual use code from the networking code."

 

The proposal is to change the DQMH scripting so that it always uses the variant notifier, instead of requiring a manual change.
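For readers outside LabVIEW, the pattern can be sketched by analogy in C++, with std::any standing in for a LabVIEW variant (all type and function names below are hypothetical illustrations, not the DQMH API):

```cpp
#include <any>
#include <iostream>

// Hypothetical generic reply envelope: the networking/transport layer only
// ever sees std::any, so it can route replies without knowing their type.
struct Reply {
    std::any payload;  // plays the role of the variant notifier payload
};

// Module-specific payload (plays the role of the scripted typedef).
struct TemperaturePayload {
    double celsius;
};

int main() {
    // Sender: wrap the concrete payload; the transport stays generic.
    Reply reply{TemperaturePayload{21.5}};

    // Receiver: only the module that owns the typedef casts back.
    auto data = std::any_cast<TemperaturePayload>(reply.payload);
    std::cout << "Temperature: " << data.celsius << " C\n";
}
```

The point mirrors the quote above: the Reply type never has to change when a module adds new payload types.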

I have already modified the scripting to do this automatically and have been using it for two years. (If there is interest, I can share it with the consortium.)

 

Personally, I don't see any disadvantages to using a variant instead of a typedef in this case.

Writing the reply payload is not a problem either, as the typedef is also scripted automatically (see screenshot).

[Screenshot: the reply payload typedef, scripted automatically]

 

Any comments on potential disadvantages?

 

Regards,

Lucas

 

 

 

7 Comments
FireFist-Redhawk
Active Participant

The main disadvantage I see is with very rapid or very large payload requests. There is now a type conversion that has to happen every time you send the notifier. That's a relatively cheap operation, I'm sure (for smaller payloads), but it's not free either. Maybe it's no issue 99% of the time, but as we know, it's the edge cases that get you.


drjdpowell
Trusted Enthusiast

Variants don't (by themselves, at least) involve changing the actual data, and no copy is made (unlike flattening, for comparison). And Events do, I believe, involve two copies of the data, so if that is a problem, it is already a problem even without a Variant.

 

Getting LabVIEW to not make copies of large data is not easy, but avoiding Variants is not a useful strategy.
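To make that distinction concrete, here is a rough C++ analogy (std::any standing in for a variant; an illustrative sketch only, not a claim about LabVIEW's memory manager):

```cpp
#include <any>
#include <cstring>
#include <vector>

int main() {
    std::vector<double> big(10'000'000, 1.0);

    // "To Variant"-like step: moving into the type-erased container
    // transfers ownership of the buffer; the 10M doubles are untouched.
    std::any boxed = std::move(big);

    // "Flatten"-like step: serializing forces a full pass over the data,
    // rewriting every byte into a new buffer.
    const auto& v = std::any_cast<const std::vector<double>&>(boxed);
    std::vector<char> flat(v.size() * sizeof(double));
    std::memcpy(flat.data(), v.data(), flat.size());
}
```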

FireFist-Redhawk
Active Participant

I stand corrected. I had just assumed there was a conversion, but now I know. And I just read in another post exactly what I was thinking after reading your response: that "To Variant" is essentially a waste of diagram space much of the time. The moooore you knowwww.


joerg.hampel
Active Participant

If I remember correctly, the "To Variant" node will save you one copy (in memory) of the data that is cast/transformed/packed into a variant.

 

I think it was you, James, who posted this finding somewhere?






drjdpowell
Trusted Enthusiast

Joerg, it was this conversation: https://forums.ni.com/t5/LabVIEW-Champion-Discussions/Memory-to-put-an-Array-in-a-Cluster/m-p/418143...

 

Unfortunately that forum is not public.  To summarize:

-- Using the DETT with simple test code, one can start to understand where LabVIEW makes copies and memory allocations. These are often not where you expect.

-- Variants aren't a big concern, except that the implicit to-variant coercion on a subVI input does make a copy (while the To Variant node does not).

-- Bundling into a Cluster makes a copy, even for large arrays. You can see such a Bundle in the original post of this conversation. This can be avoided using a Swap primitive (see the sketch below).
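A rough C++ analogy of the last two points (an assumption-laden sketch of similar copy semantics in another language, not a description of LabVIEW internals):

```cpp
#include <any>
#include <utility>
#include <vector>

// Implicit boxing at a call boundary: a by-value std::any parameter is
// copy-constructed from an lvalue argument -- analogous to the implicit
// to-variant coercion on a subVI input making a copy.
void send(std::any payload) { /* hypothetical transport */ }

int main() {
    std::vector<double> big(10'000'000, 1.0);

    send(big);             // implicit boxing: copies all 10M doubles
    send(std::move(big));  // explicit move: buffer ownership transferred,
                           // analogous to an explicit To Variant node

    // "Bundle into cluster" vs. the Swap primitive:
    struct Cluster { std::vector<double> array; };
    std::vector<double> data(10'000'000, 2.0);

    Cluster c1;
    c1.array = data;            // bundle-style: copy-assigns the buffer
    Cluster c2;
    std::swap(c2.array, data);  // swap-style: pointers exchanged, no copy
}
```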

drjdpowell
Trusted Enthusiast

Here is a repost of the initial post from that link, to show how the DETT can be used to understand what LabVIEW is actually doing:

 

I was investigating how memory is allocated, using the Desktop Execution Trace Toolkit. I am confused by this:

[Image: VI snippet, with the large array bundled into a cluster]

 

In this VI, if the array is passed directly (in the disabled case, not shown, the array is wired straight through), then I see one allocation for the large array. But if I put in a Cluster as shown, there are two memory allocations for the array. This makes no sense to me and seems a major flaw for any handling of large arrays. Can anyone explain this to me? Here is the DETT trace, run twice for the two cases:

 

[Image: DETT trace for the two runs]
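For those without the DETT, the same kind of allocation counting can be re-created by analogy in C++ with a replaced global operator new (a hypothetical reconstruction of the experiment, not LabVIEW's allocator):

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

static long g_allocs = 0;

// Count every heap allocation, the way the DETT trace counts LabVIEW's.
void* operator new(std::size_t n) {
    ++g_allocs;
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

struct Cluster { std::vector<double> array; };

int main() {
    long before = g_allocs;
    std::vector<double> big(10'000'000, 1.0);  // the "direct" case
    std::printf("direct:  %ld allocation(s)\n", g_allocs - before);

    before = g_allocs;
    Cluster c;
    c.array = big;                             // the "bundled" case
    std::printf("bundled: %ld extra allocation(s)\n", g_allocs - before);
}
```

Here the copy made on bundling shows up as a second allocation for the same data, matching the two allocations seen in the trace.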

 

 

Olivier-JOURDAN
Active Participant
Status changed to: Development Started
 
