06-09-2014 03:21 AM
Hi,
I'm beginning to go around in circles, so I thought maybe someone could break me out of the loop.
I have a main actor (call it display) that calls a number of other actors (call them 2D Graphs). The number of 2D Graphs can change at any time. I also have a separate read QSM in display that, when activated, reads increments from a certain file and sends the data in chunks to the 2D Graphs. The flow goes like this: the user presses a button in display, the button event sends a "Start" to the read queue, the read queue initializes and sends read messages to itself until done, then closes the file and waits. On every read it also sends the data to the 2D Graphs.
My question is: how do I pass the 2D Graph actor refs to the read loop? It needs them both to send them the data and to know how much to read (the amount of data read depends on the number of 2D Graphs). I can't just send them to the read loop once, since the number of 2D Graph actors can change at any time.
The options I've considered are:
1. Making the 2D Graph actor array a DVR. This is less elegant in my opinion, but seems like the simplest option.
2. Making the read loop a separate actor that receives messages to update the actor array and messages to read data. I could make the update-actor-array message high priority, I suppose, though I haven't really used high-priority messages until now, so I'm not sure this is the right way to go. It also means duplicating the 2D Graph actor array in two actors, display and reader, which seems wasteful and complicated.
3. Making the reader loop an actor, making the actor array a new actor (2D Manager), and having the reader actor subscribe to changes in the Manager. This is even more complicated and probably overkill.
4. The reader loop sends all the data to the main loop; the main loop takes from it only the data it needs and sends that data to the 2D Graphs. This will cause multiple copies of the data and slow down the response time.
5. Having every 2D Graph read its own data. This means that 5 different actors will be reading from the same file (at different locations) at the same time, which I think will slow the read.
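For what it's worth, option 2 could be sketched in text form. This is only a rough Python stand-in for the Actor Framework design, not real LabVIEW; all names are hypothetical, the "graph refs" are plain lists, and the priority queue plays the role of AF message priorities so that an "update graphs" message overtakes queued "read" self-messages:

```python
import itertools
import queue
import threading

class ReaderActor:
    """Sketch of option 2: a reader actor whose inbox lets 'update graphs'
    messages (high priority) overtake pending 'read' messages."""

    UPDATE, READ = 0, 1  # lower value = higher priority

    def __init__(self, data):
        self._inbox = queue.PriorityQueue()
        self._seq = itertools.count()   # tie-breaker: FIFO within one priority
        self._graphs = []               # current 2D Graph "refs" (plain lists here)
        self._data = list(data)         # stand-in for the file contents
        self._done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, priority, msg, payload=None):
        self._inbox.put((priority, next(self._seq), msg, payload))

    def join(self, timeout=None):
        return self._done.wait(timeout)

    def _run(self):
        while not self._done.is_set():
            _, _, msg, payload = self._inbox.get()
            if msg == "update graphs":
                self._graphs = payload            # takes effect before the next read
            elif msg == "read":
                n = max(len(self._graphs), 1)     # chunk size depends on graph count
                chunk, self._data = self._data[:n], self._data[n:]
                for g in self._graphs:
                    g.append(chunk)               # stand-in for a "here's data" message
                if self._data:
                    self.send(self.READ, "read")  # self-message: keep reading
                else:
                    self._done.set()              # done: "close the file and wait"

g1, g2 = [], []
reader = ReaderActor(range(10))
reader.send(ReaderActor.UPDATE, "update graphs", [g1, g2])
reader.send(ReaderActor.READ, "read")
reader.join(timeout=2)
```

The point of the sketch is only the ordering: because the update message outranks the read self-messages, the graph list can change mid-stream and the very next read uses the new list.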
See? Circles. Any advice?
Thanks,
Danielle
06-09-2014 02:43 PM
—> (4) The User will not notice the microseconds lost.
06-09-2014 04:07 PM
drjdpowell wrote:
—> (4) The User will not notice the microseconds lost.
Agreed. If that actually turns out to be a performance problem, a slight modification would be to have the main loop send the reader loop a filter; the reader loop runs the filter on the data and then sends only the filtered data up to the main loop. But I'd only make that modification if the first proposed solution is actually a performance problem.
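As a text sketch of that modification (hypothetical data shapes, not real AF code): the main loop hands the reader a set of wanted channel ids, and the reader forwards only matching entries.

```python
# The reader applies the filter before sending anything to the main loop,
# so unwanted data never crosses the message boundary.
def apply_filter(chunk, wanted):
    """chunk: list of (channel_id, samples) pairs; wanted: set of channel ids."""
    return [(ch, samples) for ch, samples in chunk if ch in wanted]

chunk = [(0, [1.0, 2.0]), (1, [3.0, 4.0]), (2, [5.0, 6.0])]
filtered = apply_filter(chunk, {0, 2})  # only channels 0 and 2 reach the main loop
```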
06-10-2014 12:54 AM
Thanks for answering!
I was actually more worried about memory than response time. If I send the data to the main loop using an event, I could get a buildup of events carrying this data, causing high memory consumption. The amount of data is on the order of 500x100x32 doubles every second if unfiltered, or 500x100x4 if filtered. I could also store it in 32 single-element queues and pass the references around; that would prevent a number of copies.
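For scale, assuming 8-byte doubles, those rates work out to:

```python
# Back-of-the-envelope check of the rates above, at 8 bytes per double.
unfiltered_bps = 500 * 100 * 32 * 8  # bytes per second, unfiltered
filtered_bps   = 500 * 100 * 4 * 8   # bytes per second, filtered
print(unfiltered_bps / 1e6, "MB/s unfiltered")  # 12.8
print(filtered_bps / 1e6, "MB/s filtered")      # 1.6
```

So roughly 13 MB/s unfiltered and under 2 MB/s filtered, provided events don't pile up.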
However, the user will be able to perform other actions at the same time. Won't handling this in the event loop slow down the response time by more than microseconds?
Thanks,
Danielle
06-10-2014 05:29 AM
Be wary of the trap of premature optimization. Do it the clean and simple way, then benchmark it. If you then have a problem, recode a more complex way, while rerunning the benchmark to see if you are actually gaining anything.
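In LabVIEW that benchmark would typically be tick counts around the code under test; as a minimal text sketch of the same idea (the workload here is a hypothetical stand-in):

```python
import time

# Measure the simple design first; only recode if the number misses the
# requirement, and keep re-measuring to confirm each change actually helps.
def benchmark(fn, reps=100):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps  # mean seconds per call

mean_s = benchmark(lambda: sum(range(10_000)))
print(f"mean: {mean_s * 1e6:.1f} microseconds per call")
```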
06-10-2014 09:56 AM
drjdpowell wrote:
Be wary of the trap of premature optimization. Do it the clean and simple way, then benchmark it. If you then have a problem, recode a more complex way, while rerunning the benchmark to see if you are actually gaining anything.
This. Know what your performance requirements are, benchmark to verify they're being met, and use benchmark data to identify the (hopefully few) hotspots that require 'fancy' design.
06-10-2014 10:59 AM
MattP wrote:
drjdpowell wrote:
Be wary of the trap of premature optimization. Do it the clean and simple way, then benchmark it. If you then have a problem, recode a more complex way, while rerunning the benchmark to see if you are actually gaining anything.
This. Know what your performance requirements are, benchmark to verify they're being met, and use benchmark data to identify the (hopefully few) hotspots that require 'fancy' design.
Keep it simple: make it work, then make it fast.
06-10-2014 11:07 AM
dsavir wrote:
However, there will be other actions the user can do at this time, won't handling this in the event loop slow down the response time more than microseconds?
For UI interactions, you have up to 2 *milli* seconds before it becomes noticeable to a human, and most UI designers at NI treat "fast response" as anything up to 10 milliseconds. That's enough time for many very complex computations.
06-11-2014 02:16 AM
Thanks, all of you, for the great advice! I was overthinking the entire issue (and falling into the trap of premature optimization). Thanks for the reality check!
Danielle
06-11-2014 09:37 AM
Don't worry - premature optimization happens to everyone. It's nothing to be embarrassed about.