Actor Framework Discussions

Overhead impact of numerous level of class inheritance

I'm working on what will eventually be a pretty large AF system with a Hardware Abstraction Layer, Signal Abstraction Layer, many UIs, state machines, loggers, and so forth. In a large system, I know organization is key. Not enough levels and you can't find anything; too many levels and you can't find anything. With class inheritance, there is also an overhead hit. I'm just not sure how significant it is and whether it will matter in a system with lots of actors running. I'm following the general philosophy that if I have several actors that fall into a common bucket with some similar class attributes and methods, then a parent class is a good call. In some areas of my inheritance tree, I have 4 levels of inheritance below Actor.lvclass.

While I'm concerned about the overhead hit, my goal isn't the most perfectly optimized, performant system in the world. My goal is first that it works, and close behind that is developer efficiency. My HAL will eventually encompass 20+ different instrument types, with many concrete implementations of hardware under each. I want the creation of the vast majority of this to be as efficient as possible (after the tough groundwork is laid, of course).

What is the real impact of class inheritance? Should I just not worry about it?

Message 1 of 14

Dynamic dispatch overhead above a regular subVI call is a fixed amount, regardless of the number of methods on the class and regardless of the depth of the hierarchy.
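In text-language terms, the claim looks like this minimal C++ analogy (the class names are invented for illustration, and LabVIEW's compiler is its own beast): the call site pays a single table indirection whether the concrete object sits one level or four levels below the root.

```cpp
#include <iostream>
#include <memory>

// Four levels below the root, mirroring "4 levels below Actor.lvclass".
struct Actor {
    virtual ~Actor() = default;
    virtual void handle() { std::cout << "Actor\n"; }
};
struct Instrument : Actor      { void handle() override { std::cout << "Instrument\n"; } };
struct Scope      : Instrument { void handle() override { std::cout << "Scope\n"; } };
struct UsbScope   : Scope      { void handle() override { std::cout << "UsbScope\n"; } };

int main() {
    std::unique_ptr<Actor> a = std::make_unique<UsbScope>();
    a->handle();  // one indirect call resolves straight to UsbScope::handle,
                  // not a walk up the four-level hierarchy
}
```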

Lots of actors is a totally different and unrelated problem. In a system with massive amounts of parallelism, obviously only as many things can advance in a given clock cycle as you have CPUs. LabVIEW uses cooperative multitasking to move those CPUs cleanly through lots of code. Most actors are asleep and doing nothing most of the time: they only wake up when they get a message, handle the message, then go back to sleep. This has been true in every large system we have looked at. Functionality tends to move through a system, with a few major ongoing tasks that occasionally call upon subactors to take care of work. Because most actors are mostly asleep, few systems see performance hits from huge numbers of actors.

If you really have many actors that are all actively doing their own thing even when not getting messages from the rest of the system, then you may see more timeslicing begin to drag on your system.
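The "asleep until messaged" behavior is cheap in any language. As a rough C++ sketch of the idea (not the AF's actual queue implementation), an actor is a thread blocked on a condition variable; while blocked, it consumes no CPU at all:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// An actor reduced to its essentials: a thread blocked on a message
// queue. While blocked in receive(), the thread consumes no CPU.
class MessageQueue {
    std::queue<std::string> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(std::string msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    std::string receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });  // "asleep" here
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main() {
    MessageQueue inbox;
    std::thread actor([&] {
        for (;;) {
            std::string msg = inbox.receive();  // wakes only on a message
            if (msg == "stop") break;
            std::cout << "handled: " << msg << '\n';
        }
    });
    inbox.send("configure");
    inbox.send("stop");
    actor.join();
}
```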

Message 2 of 14

That's a good answer... thanks. I wasn't sure about going through multiple levels, but if the overhead is the same for one level as for five, then it doesn't matter.

As for my large AF system, I think what you point out will be true. Many of the actors will spend most of their life in the "downtime" state, just waiting for their next message. I'm just trying to think through the possible issues that may arise from compounding a small overhead many times.

Message 3 of 14

Although this is a good thing to think through, wrkcrw00, it's the sort of thing you probably aren't in a position to think about clearly. What do I mean by that? I mean that for any application at scale, trying to assert the performance characteristics of a code architecture a priori has been demonstrated to be nearly impossible for human beings. You can only really ask about performance problems after the system is built and then track them down. So before the system exists, the best you can do is evaluate the tools. The AF has been shown to have good performance at scale for a wide range of applications relative to other ways of implementing massive parallelism. Ultimately, massive parallelism costs CPUs, and any architecture you pick is going to have to deal with that. So you choose a tool that seems likely to handle it and you work with it.

Leave enough time to do your performance benchmarks at the end of the project AND time to fix code if those benchmarks do not meet your user needs.

Message 4 of 14

I agree. It's too early to really address performance issues. But I'd like to avoid doing something dumb early on when a simple forum post can reveal some useful information. I fully expect to go through a few iterations with this framework and am mentally prepared to do so.

Aside from that, if I get a question in my head, it will bug me until I can quell it. I'd rather just ask.

Message 5 of 14

If you're wondering about performance of large numbers of actors, this conversation includes some "actor ring" tests of 10,000 actors (instances, not classes).
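If you want to reproduce something like that yourself, here is a rough C++20 sketch of a token-passing ring (the counts and names here are mine, and a semaphore stands in for just the wake-on-message part of a real actor's queue):

```cpp
#include <chrono>
#include <iostream>
#include <memory>
#include <semaphore>
#include <thread>
#include <vector>

// Token-passing ring: each "actor" sleeps on its own semaphore and, when
// woken, forwards the token to its neighbor and sleeps again. Requires
// C++20 for <semaphore>; with one OS thread per actor, keep N modest.
int main() {
    constexpr int N = 100;      // actors in the ring
    constexpr int LAPS = 1000;  // full trips around the ring

    std::vector<std::unique_ptr<std::binary_semaphore>> token(N);
    for (auto& s : token) s = std::make_unique<std::binary_semaphore>(0);

    std::vector<std::thread> actors;
    for (int i = 0; i < N; ++i)
        actors.emplace_back([&, i] {
            for (int lap = 0; lap < LAPS; ++lap) {
                token[i]->acquire();            // asleep until messaged
                token[(i + 1) % N]->release();  // forward and sleep again
            }
        });

    auto t0 = std::chrono::steady_clock::now();
    token[0]->release();  // inject the token
    for (auto& a : actors) a.join();
    auto dt = std::chrono::steady_clock::now() - t0;

    std::cout << "ns per hop: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(dt).count()
                 / (static_cast<long long>(N) * LAPS)
              << '\n';
}
```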

Message 6 of 14

AristosQueue wrote:

Dynamic dispatch overhead above a regular subVI call is a fixed amount, regardless of the number of methods on the class and regardless of the depth of the hierarchy.

IIRC, this is not universally true.  Each "Call Parent Method" will incur the DD overhead cost again.  That is to say, if your 4 inheritance levels each call their respective parent, then you have 4x the DD overhead.  It's still not really significant in most cases, but the overhead cost of a single DD call is NOT guaranteed to be constant, as it may queue up other DD calls.
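In C++ terms, the chain looks like the sketch below, with one caveat: the qualified parent call (e.g. `C::handle()`) is statically bound in C++, whereas LabVIEW's Call Parent Method carries its own per-level runtime cost, which is the point here. Class names are illustrative.

```cpp
#include <iostream>
#include <memory>

// Each level's override calls its parent first, so one dispatched call
// to handle() fans out into a chain of four calls up the tree.
struct A     { virtual ~A() = default;
               virtual void handle() { std::cout << "A\n"; } };
struct B : A { void handle() override { A::handle(); std::cout << "B\n"; } };
struct C : B { void handle() override { B::handle(); std::cout << "C\n"; } };
struct D : C { void handle() override { C::handle(); std::cout << "D\n"; } };

int main() {
    std::unique_ptr<A> obj = std::make_unique<D>();
    obj->handle();  // one dynamic dispatch into D, then three parent
                    // calls up the chain: prints A, B, C, D
}
```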

Message 7 of 14

Intaris wrote:

AristosQueue wrote:

Dynamic dispatch overhead above a regular subVI call is a fixed amount, regardless of the number of methods on the class and regardless of the depth of the hierarchy.

IIRC, this is not universally true.  Each "Call Parent Method" will incur the DD overhead cost again.  That is to say, if your 4 inheritance levels each call their respective parent, then you have 4x the DD overhead.  It's still not really significant in most cases, but the overhead cost of a single DD call is NOT guaranteed to be constant, as it may queue up other DD calls.

That's not IIRC in my book... I would rather expect the G compiler to build virtual method tables for each class at compile time. Thus, the 4x should not be relevant when we talk about LabVIEW...
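For reference, a virtual method table hand-rolled in C++ (roughly what a compiler would generate; real implementations differ) shows why the dispatch itself is constant-cost: a couple of pointer loads plus one indirect call, with no dependence on hierarchy depth.

```cpp
#include <cstdio>

struct Object;
struct VTable { void (*handle)(Object*); };

// Every instance carries a pointer to its class's table.
struct Object { const VTable* vt; };

void parent_handle(Object*) { std::puts("parent"); }
void child_handle (Object*) { std::puts("child"); }

const VTable parent_vt = { parent_handle };
const VTable child_vt  = { child_handle };

int main() {
    Object child{ &child_vt };
    Object* o = &child;
    o->vt->handle(o);  // two pointer loads + one indirect call,
                       // the same cost at any hierarchy depth
}
```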

Message 8 of 14

Dmitry, it can never be a compile-time action: because of the dynamic loading of classes, a child does not know exactly how its ancestry is going to look, aside from its parent. Below that, it might all change between compile time and run time. There may be 3 levels of DD, or there may be 35. It all depends on how the classes (and which versions of them) are loaded at run time.

I DO agree, however, that at load time a definite and non-dynamic call chain could be created, because once loaded, things cannot change any more.

Message 9 of 14

The Call Parent Node is faster than a plain dynamic dispatch node, but it is not as fast as a static dispatch node.

The parent class instance to invoke is computed once when the caller is reserved. It does not have to be recomputed every time.

The CPN still has a dynamic aspect to it even though the method to invoke is fixed at reserve time, because we do not do as deep an optimization on inplaceness. That leaves some flexibility for replacing a parent class without necessarily requiring a recompile of the children.

Historically, the ability to replace a parent class with a new-but-should-be-binary-compatible version has been hit-or-miss: we covered most of the cases but still sometimes required a recompile needlessly. We *think* they're all fixed in the upcoming LV 2016. You should be able to put a parent class in one PPL and a child class in a separate PPL, then replace the parent PPL with a later version on a deployed system without recompiling the child PPL, provided you don't change basic type signatures. Private, behind-the-scenes details, including the parent class's private data and the internal implementation of VIs, should not cause a break.
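For readers more at home in text languages, the goal is loosely analogous to the pimpl idiom in C++, where private data hides behind a single pointer so clients compiled against the header need no rebuild when internals change. This is an analogy only, not how PPLs work internally, and the names below are hypothetical.

```cpp
#include <iostream>
#include <memory>
#include <string>

// --- widget.h: the public face of the "parent" component.
// Clients see only stable signatures; the private layout stays hidden.
class Widget {
public:
    Widget();
    ~Widget();                    // defined out of line, where Impl is complete
    std::string status() const;   // the "basic type signature" to keep stable
private:
    struct Impl;                  // private data behind one pointer, so its
    std::unique_ptr<Impl> impl_;  // layout can change between releases
};

// --- widget.cpp: revisable and redeployable without rebuilding clients,
// as long as the interface above is unchanged.
struct Widget::Impl { int revision = 2; };
Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
std::string Widget::status() const { return "rev " + std::to_string(impl_->revision); }

// --- client: compiled against the header only.
int main() {
    Widget w;
    std::cout << w.status() << '\n';  // prints "rev 2"
}
```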

Message 10 of 14