Actor Framework Discussions


Processor affinity and Actor Framework

Good morning,

So I have a project I am working on where the customer has a quad-core PXI controller and wants to be able to run multiple test instances of the application on the same controller; in his words, "one instance per core". My thought is, if we're using the Actor Framework, why? This is already an asynchronous process, with each Actor in its own memory space in its own thread, correct? Would there be any benefit to launching each instance of the test system onto its own processor core?

If I did that, I would use a parallel For Loop to launch each root actor for the application onto a specific core. Would the remainder of the actors that spin up in the application run on the same core? I would assume so, but I'm not entirely clear on that.

Suggestions? Feedback? Further Questions? Cross Eyed Look "whaddya freakin nuts?"

-Steven

Message 1 of 6

To the best of my knowledge (and I checked with my chief architect to verify), LabVIEW has no facility to assign a particular VI to a particular CPU.

With the LabVIEW Real-Time Module, you can assign a parallel For Loop to a particular CPU or set of CPUs. You'll need to check that documentation on your own for the details... I've never had cause to do that.

In practice, you shouldn't need to assign actors to specific cores. As long as the number of actors (or, rather, actor hierarchies that run in parallel -- I can clarify that statement if you need me to but I hope you get my meaning) is equal to the number of cores, you should get roughly the same behavior as formal assignment would give you. Yes, true assignment might get you even better performance, but theoretically that should be minimal given the way that LV assigns executable clumps to the execution engine. YMMV.
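If it helps to see the idea outside of LabVIEW, here is a minimal C sketch of what I mean (the names, the thread count, and the busy-loop are mine, purely illustrative): spawn one worker per parallel hierarchy, set no affinity at all, and let the OS scheduler spread the busy threads across the cores on its own.

/* One worker per parallel "hierarchy", no affinity set anywhere.
 * The OS scheduler spreads the busy threads across the available cores. */
#include <pthread.h>
#include <stdio.h>

#define NUM_HIERARCHIES 4   /* e.g. one per core on a quad-core controller */

static void *worker(void *arg)
{
    long id = (long)arg;
    volatile double x = 0.0;
    /* Stand-in for an actor hierarchy doing real work. */
    for (long i = 0; i < 100000000L; i++)
        x += i * 0.5;
    printf("hierarchy %ld done (%f)\n", id, x);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_HIERARCHIES];
    for (long i = 0; i < NUM_HIERARCHIES; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < NUM_HIERARCHIES; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Run that on a quad-core machine and watch a CPU monitor: all four cores get loaded without anyone ever saying which thread belongs where.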

Message 2 of 6

How much CPU time does each instance of the test system need?

This is all handled automatically. If each section of the test system consistently uses up one core, the sections will be scheduled across all the available cores.

I'd be surprised, though, if each one of the four required that much CPU time in the first place.

The only way I know of to assign CPU affinity to threads in LabVIEW is with the timed structures (and these should only be used in Real-Time):

-http://zone.ni.com/reference/en-XX/help/371361H-01/glang/timed_sequence/

-http://zone.ni.com/reference/en-XX/help/371361H-01/glang/timed_loop/

The reasons I can think of to do this are very slight determinism gains in RT applications, and the vast majority of the time you should just let the OS handle scheduling automatically.
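If it helps to picture what a core assignment actually does, here is roughly what "pin this thread to core N" looks like at the OS level on Linux (pthread_setaffinity_np is a GNU extension; I'm not claiming this is what the Timed Loop does internally, it's just an illustration of the kind of mechanism the OS exposes, and the core number is arbitrary):

/* Pin the calling thread to one specific core (Linux, GNU extension). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *pinned_worker(void *arg)
{
    long core = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Restrict this thread to exactly one core. */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("running pinned to core %ld\n", core);
    /* ... deterministic work would go here ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, pinned_worker, (void *)2); /* pin to core 2 */
    pthread_join(t, NULL);
    return 0;
}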

Craig H. | CLA CTA CLED | Applications Engineer | NI Employee 2012-2023
Message 3 of 6

Thank you for the explanation. Now I have something to go on to tell the customer that they don't need to assign cores, that the OS and Run-Time Engine will handle it for them -- if I understood you correctly?

By the way, I'm an old fart, so what is YMMV?

-Steven

Message 4 of 6

Thanks for the information! I don't know at this point how much CPU time each instance will need; the only really time-consuming part of the program is the AI/AO. I am driving a mass spectrometer through its different masses and then taking readings back at a moderate rate (100 kHz).

Message 5 of 6

StevenHowell_XOM wrote:

so what is YMMV?

Your Mileage May Vary. 🙂

StevenHowell_XOM wrote:

Thank you for the explanation, now I have something to go on to tell the customer that they dont need to assign cores, that the OS and Runtime Engine will handle it for them, if I understood you correctly?

"Need" is a weird word. You might need to do so, but you cannot, but you probably don't need to.

A language that truly allowed you to assign an actor tree to a particular core MIGHT get better performance. You MIGHT need that extra edge of performance. But the combined probability THAT a) your particular application will be one that will benefit from such strict assignment AND b) your program has such tight requirements that the small gain will actually be noticeable EQUALS a very slim chance and thus it is unlikely that you would need that. Make sense?

Strict assignment to a processor core assumes knowledge about the data layout in memory, to keep one processor from thrashing the caches of the other processors. In practice, strict assignment can actually slow down an application, because one core might sit idle for some window during which the other actors could have used it.
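A classic way to see the cache point (my own toy example, nothing to do with LabVIEW internals): pin two threads to different cores and have them hammer two counters. If the counters share a cache line, the cores spend their time stealing that line from each other; pad the counters onto separate lines and the same code typically runs several times faster.

/* Two pinned threads incrementing adjacent counters.  With the padding
 * commented out, both counters share a cache line and the cores fight
 * over it; uncomment the padding and the contention disappears. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define ITERS 100000000L

struct counters {
    volatile long a;
    /* char pad[64];   <- uncomment to put b on its own cache line */
    volatile long b;
} c;

static void *bump(void *arg)
{
    volatile long *p = arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(p == &c.a ? 0 : 1, &set);   /* pin to core 0 or core 1 */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    for (long i = 0; i < ITERS; i++)
        (*p)++;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, (void *)&c.a);
    pthread_create(&t2, NULL, bump, (void *)&c.b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

That is exactly the kind of data-layout detail you'd need to know about your whole application before hand-assigning cores could be counted on to help.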

In some real-time apps, users have enough knowledge of their data processing to know that specific core assignment will have benefits, so we have that facility for the parallel For Loop, but in general, users do not have that knowledge about the behavior of their overall app, so we don't provide that at the VI level.

Message 6 of 6