Cluster of indicators - updating elements cleanly

I have a typedef that is a cluster of indicators.  Is there a good way to update each indicator at will, without property nodes or locals?  Bundle By Name means you have to update the whole cluster, or wire a local variable into its input cluster terminal so you can replace just one element.  Reading front-panel control clusters is clean, but writing indicator clusters, it would seem, isn't.  It seems like we need a Bundle By Name that only writes whatever you bundle and leaves the other elements alone.  Otherwise, how do you typedef a group of indicators other than with a cluster?

Message 1 of 7

@Flohpange wrote:

It seems like we need a Bundle By Name that only writes whatever you bundle and leaves the other elements alone.


Yeah, it's called Bundle By Name.  What you can do is store the cluster value in a shift register, which is then wired to the Bundle By Name's input cluster.  You update only the elements that change, and the output cluster goes both to the indicator and back into the shift register.
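
LabVIEW code is graphical and can't be pasted here, but as a rough text-language sketch of that pattern (all names below are invented for illustration, not any real API):

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Status:                      # stands in for the typedef'd cluster
    voltage: float = 0.0
    current: float = 0.0
    message: str = ""

state = Status()                   # the value carried in the shift register

for reading in (1.2, 3.4, 5.6):    # one pass per loop iteration
    # "Bundle By Name": rewrite only the element that changed,
    # carrying the untouched elements over from the previous value
    state = replace(state, voltage=reading)
    print(state)                   # write the whole cluster to the indicator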


Message 2 of 7

I don't quite understand the question. Are you talking about UI performance issues? Since you are talking about updating an indicator, I'm not sure why you think you need locals and property nodes.

 

Yes, keep the data structure in a shift register and write the indicator whenever the value changes. That's not a problem.

 

Don't forget the In Place Element Structure if the new value also depends on the old value (e.g. an increment).
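
Same caveat as above, a loose text-language analogy with hypothetical names only: the In Place Element Structure amounts to a read-modify-write of individual elements without copying the rest of the cluster.

from dataclasses import dataclass

@dataclass
class Counters:                    # stands in for the cluster in the shift register
    count: int = 0
    total: float = 0.0

def on_sample(state: Counters, value: float) -> None:
    # read-modify-write single elements in place, as the In Place
    # Element Structure does, instead of unbundling, copying, and
    # rebundling the whole cluster
    state.count += 1
    state.total += value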

Message 3 of 7

Thanks for the replies; yes, I'm trying to optimize performance.  I did consider the shift register method to store cluster values.  (Using a QMH architecture, by the way.)  I just thought for something this basic maybe there's some method I'm not aware of.  For updating different cluster elements from different loops, I can use notifiers, which should be better than locals or property nodes.

Message 4 of 7

@Flohpange wrote:

Thanks for the replies; yes, I'm trying to optimize performance.  I did consider the shift register method to store cluster values.  (Using a QMH architecture, by the way.)  I just thought for something this basic maybe there's some method I'm not aware of.  For updating different cluster elements from different loops, I can use notifiers, which should be better than locals or property nodes.


Ok, you just dropped a bomb of very important information: multiple loops updating the values.  So a few things to consider:

1. Each loop should maintain its own state.  Since you are using a QMH, use the queues to send messages to everybody.  You don't need to send the entire cluster in the message, just the data being updated; the receiving QMHs can then decide what to do with the new value (see the sketch after this list).

2. Do you really need all of those values in the cluster in every loop?  It might be worth breaking up the cluster so each loop only maintains the values it cares about.
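
A minimal text-language sketch of point 1 (the message shape and names are made up for illustration; a real QMH would pair a command string with the data):

import queue

ui_queue: queue.Queue = queue.Queue()

# producer loops enqueue only the element that changed
ui_queue.put(("voltage", 3.3))
ui_queue.put(("message", "ramp complete"))

# the UI loop folds each message into its own copy of the state
state = {"voltage": 0.0, "current": 0.0, "message": ""}
while not ui_queue.empty():
    name, value = ui_queue.get()
    state[name] = value            # update one element, keep the rest
print(state)                       # then write the whole cluster to the indicator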


Message 5 of 7

Good points.  I'm actually bypassing the message-handling overhead and using a plain queue for just that loop.  To explain (simplified): three parallel loops, with the cluster in loop 1.  Loop 2 updates some cluster elements; loop 3 updates others at a different rate.  I could split up the cluster, yes, but keeping it together makes too much logical sense.  Also, it seems like there's this disconnect (:D) in LabVIEW: what's best for simple front-panel layout isn't always best for the diagram and dataflow, when in fact the two need not even be related.  I think that would be easy to fix.  Anyway!

 

By the way, I did a quick test (maybe it's been done already) measuring how fast data is transferred to an indicator.  In order from fastest: direct wire, local variable, queue, property node.  Local and queue are reasonable, within a 10x factor of a wire; the property node is several orders of magnitude slower.

Message 6 of 7

@Flohpange wrote:

By the way, I did a quick test (maybe it's been done already) measuring how fast data is transferred to an indicator.  In order from fastest: direct wire, local variable, queue, property node.  Local and queue are reasonable, within a 10x factor of a wire; the property node is several orders of magnitude slower.


Data is transferred to an indicator asynchronously in the UI thread. Simplified, both a direct wire and a local variable just update a transfer buffer and continue. A queue cannot update an indicator, so I am not sure what you are talking about. Property nodes execute synchronously and require a thread switch, which is why they are slower. (See also e.g. this discussion and the links in it.)

 

(That said, 95% of "benchmarks" (for lack of a better word) discussed in the forum are highly flawed and most often meaningless. The success rate is probably even lower for "quick tests" 🙂 )

 

I might get some blowback for my suggestion, but another option would be to keep the cluster data in a DVR (data value reference). Have each loop update it at its own leisure via the IPE, which protects the shared resource from concurrent access. Your UI loop can read the value at its own pace and update the indicator with the current values.
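
Again only as a text-language analogy with invented names (a DVR has no direct Python equivalent): the lock below stands in for the exclusive access the DVR's In Place Element Structure gives you.

import threading

class SharedCluster:
    """Loose stand-in for a DVR holding the cluster data."""
    def __init__(self) -> None:
        self._lock = threading.Lock()              # IPE-style exclusive access
        self._data = {"voltage": 0.0, "current": 0.0}

    def update(self, **fields) -> None:
        with self._lock:                           # writers are serialized
            self._data.update(fields)

    def snapshot(self) -> dict:
        with self._lock:                           # consistent read for the UI loop
            return dict(self._data)

shared = SharedCluster()
shared.update(voltage=5.0)                         # any loop, at its own rate
print(shared.snapshot())                           # UI loop reads and displays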

Message 7 of 7