LabVIEW Development Best Practices Discussions

Separate compiled code: possible regression in LabVIEW 2012?

Solved!

Hi Eli,

I'm working in a test development group, and we deal with test software in a regulated environment (the medical business).

Therefore I'm fully with the other users here and must say that this behaviour of LabVIEW 2012 is definitely unacceptable for us. So I can only strongly emphasize the need for a fix of this typedef bug WITHOUT the VI modification propagation issue.

So please keep us updated on how NI plans to deal with this.

For us it is also very important to know whether this fix is possible at all without the nasty side effect we see now.

Markus

Message 11 of 22

Hi Eli, Markus, et al.,

I have a suggestion for a workaround to the 2012 issue.

We are using VSS (I know, hardly source control; I expect us to graduate to SVN soon), and by default source files are read-only. When I try to edit a file, LabVIEW prompts me to check it out, which I usually do. When I save the project, I know that only the files I have actually edited are writable, so I have LabVIEW save only those (i.e., I click the "Continue without prompts" button when prompted). This means that the calling files do not need to be modified; they get the updated type definition when they are loaded/compiled.

I believe that SVN has an option to require a lock as a general property. I realize that this detracts from SVN's main usage model (copy-modify-merge), but that model does not currently work with .lvproj, .lvlib, and .lvclass files anyway.
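
For reference, the property I believe Jim is describing is svn:needs-lock, which keeps working-copy files read-only until a lock is taken, much like the default VSS checkout behaviour. A minimal sketch, with placeholder file names:

    # Require a lock before editing a specific file
    svn propset svn:needs-lock yes Controls/SystemSettings.ctl
    svn commit -m "Require lock on SystemSettings.ctl" Controls/SystemSettings.ctl

    # Or apply it automatically to newly added LabVIEW files via ~/.subversion/config
    # [miscellany]
    # enable-auto-props = yes
    # [auto-props]
    # *.vi  = svn:needs-lock=*
    # *.ctl = svn:needs-lock=*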

Also, you will need to configure source control in LabVIEW NOT to include callers when checking out files, and to prompt to check out files when they are edited.

If this is a bad idea, please chime in! I would hate to discover later that I am missing some source changes with this method of working.

Hope this helps.

Jim "Stone Tools" Grady

Message 12 of 22

Eli

Here in the nuclear sector I have to concur with the others who are in regulated environments. This undermines the expansion of LabVIEW into some of our laboratory areas where source control is mandatory.

Regards

Blair Smith

Message 13 of 22

This problem doesn't stop you from doing source code control.

Message 14 of 22

Hi All,

I want to point you to the other side of Eli's comment.

I am working mostly with LabVIEW FPGA. In LabVIEW 2011 there were a couple of cases where I changed something and it looked as if the VI hadn't changed: LabVIEW reported "bitfile up to date" when I checked the compiled FPGA code for changes.

But the code wouldn't run.

After forcing a recompile, the code executed normally.

For me this behavior is much more of a concern: the code looks OK when you open and test it, but it doesn't run on the machine. These issues seem to be fixed in LabVIEW 2012, and I don't want them back.

Thanks - Amit,

Amit Shachaf
Message 15 of 22

Amit -

I don't think anyone doubts the severity of the bug or the necessity of fixing it; rather, the choice of how to fix it resulted in deprecating a feature that developers in many industries rely critically upon. Hopefully NI can find a way to fix the bug without having to conflate one file's changes with changes to several other files that weren't touched (by the user).

Message 16 of 22

Michael

Thanks, I knew that. We seem to have lost track of the original post; to quote:

"My project is source controlled, and since 2 years all VIs and CTLs have separate object code to minimize the number of changed file at each commit. This was a huge improvement for me not to have tons of VIs modified when I changed a typo in a typedef."

His issue seems to be just the overhead of changed files at commit. I'm coming at it from a slightly different world. It's the discussion of unwarranted code changes across a whole project that I'd like to avoid, when nothing has changed that would affect functionality.

Change management includes predicting, accurately, which project artifacts will be modified during any change activity.  Source control is not just a cool tool to save the developer some time.  If I made the typo fix he mentions, and a file changed that was not on the list of approved deltas, then someone would have some 'splainin to do - hopefully, the analyst that made the delta list; I'd certainly be backing out the change until the change authority reviewed the problem.

And yes, I understand why the change was made; been there, got the bruises.  If this is the final solution, we'll have to get used to it.  But if NI can come up with an alternative, I'd like to hear about it.  Fortunately, this is not a problem for us right now; the only projects I am aware of here that are doing change management at that level are also mired three versions back.  I'm just looking to the future.

Thanks

Blair

Message 17 of 22

TurboPhil wrote:

Has there been any thought to including version numbering or something similar with the typedef to ensure changes are correctly propagated (like backwards-compatibility with objects)?

Adding version numbering to typedefs doesn't solve the problem. You'd need to change typedefs so they keep a revision history in the same way classes do. Without a revision history, there's always a chance LabVIEW will not mutate a bundle/unbundle node correctly, because the VIs bundling/unbundling the data may not be loaded into memory when the typedef is changed.

Unfortunately, adding a revision history also carries additional costs and concerns. Keeping a mutation history means the size of the typedef will grow over time, which will be unacceptable in some cases, like FPGA or RT code. How do you handle runtime issues where a version 3 block of data is sent to a VI that only knows about version 2? What if a data field the v2 VI expects doesn't exist in v3? Even with a revision history it is all too easy for LabVIEW to get the mutations wrong. Imagine this scenario...

Suppose I have a typedef with a double I've labelled "TempLimit", and I'm unbundling it in many different places in my app. Now suppose I make the following changes:

1. Change the name of "TempLimit" to "TempSetpoint."

2. Add a new double and name it "MaxTemp."

3. Save and apply the edited typedef.

My intent is for the old TempLimit field to be replaced by the new MaxTemp field. How does LabVIEW know that's what I want to do? I'm pretty sure LV will mutate all the TempLimit bundle/unbundle nodes to TempSetpoint nodes (although more recent versions might just leave a broken wire). That could have a huge impact on the safety of the system. In this scenario it doesn't matter whether the VI is loaded into memory when I make the edit; with certain combinations of edits it is impossible for LV to know what the developer intended.
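
To make the ambiguity concrete, here is a rough C analogy of the scenario above (the structs and the two "mutation" functions are invented for illustration; this is not LabVIEW's actual mutation logic):

    #include <stdio.h>

    /* Version 1 of the typedef: a single limit field. */
    typedef struct { double TempLimit; } SettingsV1;

    /* Version 2: TempLimit renamed to TempSetpoint, and MaxTemp added. */
    typedef struct { double TempSetpoint; double MaxTemp; } SettingsV2;

    /* Matching by label maps the old value to TempSetpoint... */
    static SettingsV2 migrate_by_name(SettingsV1 old)
    {
        SettingsV2 s = { .TempSetpoint = old.TempLimit, .MaxTemp = 0.0 };
        return s;
    }

    /* ...but the developer's intent was for the old value to become MaxTemp.
       Nothing in the edit itself tells an automated tool which mapping is right. */
    static SettingsV2 migrate_by_intent(SettingsV1 old)
    {
        SettingsV2 s = { .TempSetpoint = 0.0, .MaxTemp = old.TempLimit };
        return s;
    }

    int main(void)
    {
        SettingsV1 old = { .TempLimit = 85.0 };
        SettingsV2 a = migrate_by_name(old);
        SettingsV2 b = migrate_by_intent(old);
        printf("by name:   setpoint=%.1f  max=%.1f\n", a.TempSetpoint, a.MaxTemp);
        printf("by intent: setpoint=%.1f  max=%.1f\n", b.TempSetpoint, b.MaxTemp);
        return 0;
    }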

The more changes that are made to the typedef before those changes are applied to each VI that bundles/unbundles the data, the more likely it is that LV will mutate the changes incorrectly without you being aware of it. The best way to prevent that is to resave each VI that bundles/unbundles the data whenever the typedef is changed.

David_Staab wrote:

I don't think anyone doubts the severity of the bug...

IMO, it wasn't a bug. LabVIEW was doing exactly what it was supposed to do. It's a usability issue, in that people have expectations of typedefs that go beyond what they were originally designed for. Those extended use cases lead to unintended, unexpected, and unintuitive changes to other parts of the application. The change in LV2012 is a usability improvement: people are less likely to run into situations where those kinds of changes are made.

Typedefs work okay when *all* the VIs that use them are guaranteed to be loaded into memory any time the typedef is edited (I strongly suspect that's the reason VI Trees became the recommended best practice) and as long as the changes are applied frequently. In environments where componentized applications or scope-limited changes are the norm, typedefs don't work well without extraordinary precautions.

Blair Smith wrote:

But if NI can come up with an alternative, I'd like to hear about it.

There are a couple of alternatives:

1. Create separate VIs whose sole function is bundling/unbundling data from the typedef, and use those VIs everywhere you would normally bundle/unbundle directly. Before changing a typedef, load these VIs into memory. After the typedef is changed, the recompiles are restricted to that relatively small set of VIs instead of being spread out all over your application (see the C sketch after this list).

2. Use classes. Doing the above is exactly what classes do, except classes have additional benefits and they're already built into LV.
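
For anyone more comfortable reading C than G, option 1 is roughly the following pattern (file and function names are invented for illustration): every caller goes through a small set of accessor functions, so a layout change touches only the one file that actually knows the layout.

    /* settings.h -- callers see only the type name and the accessors. */
    typedef struct Settings Settings;          /* opaque: layout is hidden */
    double settings_get_max_temp(const Settings *s);
    void   settings_set_max_temp(Settings *s, double value);

    /* settings.c -- the only place that "bundles/unbundles" the data.
       Rename or add a field and this is the only file that must be edited;
       callers simply recompile against the unchanged header. */
    #include "settings.h"

    struct Settings {
        double max_temp;
        double setpoint;
    };

    double settings_get_max_temp(const Settings *s) { return s->max_temp; }
    void   settings_set_max_temp(Settings *s, double value) { s->max_temp = value; }

In LabVIEW terms, that is essentially what class data accessors give you, which is option 2 above.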

Message 18 of 22

This is my use case:

1. I use a type def whenever two VIs share a data structure (e.g., I want to send a message over a queue, over a connector pane, via a file, or via any other data-passing scheme of your choice). Note: I also use them to pass data within a VI, but that is a "kind-a boring use case" for this thread.

2. When I change a type def, I expect all VIs that use that type def to be recompiled (not necessarily on the spot, but definitely before they run).

Pretty much end of story.

However, there is a related feature in LabVIEW that, as best as I'm able to tell, keeps track of all external files referenced in a VI and, in addition to their lineage (e.g., member of this library or class) and location on disk, also keeps a timestamp. So if A.vi calls B.vi, and I touch B.vi such that it is saved back to disk, even though I did not change any of B's external properties (say, its connector pane), whenever I load A into memory, LabVIEW will state that A.vi has been modified and will insist that I need to save it. And I did not touch it! For those of us who use source control, this is a royal pain. Back in the 8.20 days, this happened all the time when toolkits were not compiled, and all developers would end up with their own timestamps.
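
A purely illustrative model of that linkage record, assuming the behaviour described above (this is not NI's actual file format; the field names are invented):

    #include <time.h>
    #include <stdbool.h>

    /* One record per external file referenced by the caller VI. */
    typedef struct {
        char   path[260];        /* location on disk                    */
        char   owner[64];        /* lineage: owning library or class    */
        time_t saved_timestamp;  /* callee's save time when last linked */
    } LinkRecord;

    /* If the caller stores the callee's last-saved timestamp, then merely
       resaving the callee makes the caller look out of date -- and in need
       of a save -- even though nothing functional changed. */
    bool caller_needs_save(const LinkRecord *rec, time_t callee_saved_now)
    {
        return rec->saved_timestamp != callee_saved_now;
    }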

Code/object separation did a beautiful job of addressing that issue (so that unless there is a 'code' change to A.vi, it is not flagged as 'dirty', i.e., needing a save).

I think these two issues are related, and I'm not willing to give ground on either one.

So, adding to 1 and 2 above:

3. If I do not change my 'source', don't tell me that I did (because then I have to check it out of source control, save it, check it back in - even though there were no changes).

Note: #3 only applies if you select to separate out the object code (if you don't, then your VI will require new object code, and will need to be saved).

And note that for #2, typedefs can be nested. A change is a change.

The recommendation that we need to get everything into memory brings back oh-so-fond memories of taking all several thousand of my VIs home for the day, loading them into memory to make certain types of changes, and then tucking them back into source control. That is not a recipe for productivity. Or determinism.

Message 19 of 22

I started another thread on this topic (http://forums.ni.com/t5/LabVIEW/separate-compilation-in-LV12/td-p/2219922), and someone referred me to this thread. I'm not surprised to hear that this was an intentional change, but I'd like to reinforce the remarks that this pretty much undoes all the benefit of separate compilation with respect to source control. As I said in the post quoted below, it's a minimum requirement for source control that the comparison tool be able to detect whatever changes are made to the files. If LVCompare says "no change", then the files should be equivalent. Currently LVCompare is saying "no change" even though the files are being modified.

Does using classes avoid this problem? I haven't used classes so far because they seem to be overkill for what I need. Also, many of my typedefs are more or less pieces of user interface where at some point the cluster is presented for manual data entry, and I seem to recall from what I read about classes that there isn't any comparably straightforward way to enter class values.

  Rob

I've verified that redefining strict type defs did not cause the referencing VI to change in LV11, and I have submitted this as a bug report. What it means to have "separate compilation" in LabVIEW is obviously a work in progress, but by comparison with ordinary text-based programming languages, what it ideally ought to mean is that the "source code" only contains a "name" for the referenced entities, and so need not be changed (automatically or otherwise) when the semantic associations of that name change in a way that is semantically compatible with that individual reference. I understand that this is very different from how LabVIEW has worked in the past, but it is a much better fit for source control systems.

I would also say that a minimum invariant needed for sane source control is that when a VI is modified, the before and after VIs should be different according to the comparison tool. In LV 12, they are not different (according to the comparison tool). Whether this is considered a requirement for the comparison tool or a constraint on invisible piddling in the VI file is something that NI's language designers need to work out.
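
The analogy to text-based languages in the quoted post can be sketched in C (hypothetical file names): the source that uses a type refers to it only by name, so editing the type's definition forces the object code to be rebuilt, while the referencing source file stays byte-for-byte identical and source control sees no change to it.

    /* message_type.h -- plays the role of the .ctl type definition. */
    typedef struct {
        double temperature;
        int    channel;
    } Message;

    /* sender.c -- plays the role of the caller VI.  It refers to Message
       only by name; when message_type.h changes, sender.o must be rebuilt,
       but sender.c itself is not modified on disk. */
    #include "message_type.h"

    double read_temperature(const Message *m)
    {
        return m->temperature;
    }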

Message 20 of 22