09-18-2010 08:31 AM
10-04-2010 03:30 PM
Hello,
I am experiencing the slow save response and slow run response that has been discussed on this forum. I have saved all VIs in the project to a separate folder and run a "mass compile." My VIs do not have any large array size requirements, nor do they have any code that would require large amounts of memory.
Are there any updates that I need to know about?
Thanks,
Mike Mitchell
RFMD Carlsbad Design Center
11-02-2010 03:38 AM
Hello,
After a few days of testing I was forced to go back to LV2009, because the long "saving" times of VIs are really not reasonable.
Nobody is willing to pay us for this "additional" LV programming time.
Hopefully the next patch is coming soon.
Fred
11-02-2010 09:57 AM - edited 11-02-2010 10:04 AM
Hello,
So, have you opened an update ticket for:
- LV save speed performance?
- LV global execution in a separate process?
Nevertheless, I applaud the work already done since the first versions.
Best regards
NF
11-08-2010 03:05 PM
Hi all,
after having had too many coffees while LabVIEW 2010 was compiling, I was asking myself the following question:
Is it a good strategy for NI to make 90-95% of programmers wait a considerable time while the compiler searches for mess that they don't produce?
From what I read about the new compiler features and their improvements, I conclude that these are mostly achieved by cleaning up bad code and removing unused (undeleted) code that serious programmers delete themselves... and I ask myself: why do I have to wait for the compiler to finish searching for mess that I do not generate? I just don't get it!
On the other hand, NI's strategy is clear: NI sells more LabVIEW licences if dummies are capable of producing the same quality of code as experienced and knowledgeable software architects.
While I understand and agree with NI's aim to sell more, why does NI not give me the possibility to opt out of the lengthy and (in the case of clean code) unsuccessful search for bad coding? Please give me the option of a fast compiler while abstaining from the "automatic mess cleaning machine".
After several months' experience with the LV 2010 compiler, it is really my conviction that the 20 to 50 times longer compile times are due to the search for bad programming, and that this search does not increase execution speed at all if the code was written properly!
Herbert
11-08-2010 03:26 PM
I would love to see a VI Property option to "Skip DFIR Compiler for this VI".
That way, when I'm developing my top-level VI, which is normally quite large, I could control whether I want to wait an extra 20 s every time I press Ctrl+S.
Cheers,
Mikael
11-08-2010 04:47 PM
DFIR was on in 2009. The slowdown is not DFIR (which includes things like dead code elimination and common subexpression combining). It's LLVM, which gets us a much lower-level optimization, including a much more sophisticated register allocator that allows us to keep data in registers more instead of spilling into the dataspace often. LLVM was found to give us about a 20% improvement in speed for the average VI (not just badly written ones). It turns out that it's also the register allocator that takes up a large portion of the time during a compile now, so we're looking into ways to cut that back down.
As far as the comment about just cleaning up bad code, that's really not the point of the DFIR transforms. The transforms we do in DFIR now are standard kinds of transforms that every compiler does. They let the programmer write code without worrying about low-level optimizations. If it makes sense to write an expression twice for readability (maybe it's far apart on the diagram) then you can do that and our compiler will just combine them. If it makes sense to do a calculation in a loop then you can do that and we'll just move it out of the loop if possible. The idea is to keep the programmer from having to hand-optimize everything.
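[Editor's note: the two transforms described above (combining a repeated expression, and moving a loop-invariant calculation out of a loop) can be sketched in text code. This is an illustrative Python sketch, not LabVIEW and not NI's actual compiler output; the function names and values are invented for the example.]

```python
# "Before": what the programmer writes for readability.
def before(xs, a, b):
    out = []
    for x in xs:
        scale = (a + b) * 2              # loop-invariant: same value every iteration
        out.append(x * scale + (a + b))  # (a + b) written twice for readability
    return out

# "After": what the compiler effectively produces.
def after(xs, a, b):
    t = a + b        # common subexpression computed once
    scale = t * 2    # hoisted out of the loop (loop-invariant code motion)
    return [x * scale + t for x in xs]
```

Both versions produce identical results; the point of the transforms is that the compiler does this rewrite automatically, so the diagram can stay readable.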
For C/C++ compilers at this stage most attempts at hand-optimizing your code (aside from algorithmic optimizations) don't do much good because the compiler is generally smart enough to do all of that optimization automatically. We want LabVIEW's compiler to be the same.
12-10-2010 01:25 AM
Hi Adam,
thanks for the explanations concerning the optimization.
Now, I just have difficulty believing what you express in your last statement:
"most attempts at hand-optimizing your code (aside from algorithmic optimizations) don't do much good"
Last week, I hand-optimized the official NI Levenberg-Marquardt function and was able to increase its speed by more than a factor of 10, simply by using array multiplication instead of for loops. The algorithm was not changed at all.
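[Editor's note: the loop-vs-array-primitive distinction being discussed can be illustrated with a NumPy analogy. This is a hypothetical Python sketch, not the NI Levenberg-Marquardt code itself.]

```python
import numpy as np

def loop_multiply(a, b):
    # Element-by-element multiply in an explicit loop
    # (analogous to a LabVIEW for loop around the Multiply primitive).
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] * b[i]
    return out

def array_multiply(a, b):
    # Whole-array multiply in one operation
    # (analogous to wiring the arrays straight into the primitive).
    return a * b
```

Both compute the same result; in NumPy, as in LabVIEW, the whole-array form is typically much faster because the iteration happens inside optimized native code rather than in the interpreted loop.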
I am now afraid that automatic optimization "cleans" code that I consciously wrote that way. If you provide me an option to override automatic optimization, I am happy. Without such an option, however, I fear I will have to spend a lot of time trying to trick the compiler into doing what I want.
Another obvious drawback of automatic optimization is the attitude it fosters, which my factor of 10 demonstrates: people start to believe that the optimizer does the thinking for them (which I hope never becomes true)!
Herbert
12-10-2010 08:37 AM
"Last week, I hand optimized the official NI Levenberg Marquart function and was able to increase its speed by more than a factor of 10!!"
That's an example of a place where the compiler is not optimizing well enough. There should be no performance difference between using a primitive in a for loop on every element of an array and using that same primitive on the array directly. We should be able to recognize patterns like that and generate the equivalent code (and in fact in some cases we probably do). That said, my comment about hand-optimizing was specifically about C and C++ compilers. My point was that those compilers have added so many optimizations that hand-optimizing is nearly pointless. LabVIEW isn't there yet, but we're working on it. That involves adding the kinds of optimizations you are complaining about, though. In order for us to be as good as C and C++, we have to add those optimizations, because that's how you get the best compiled code.
"I am now afraid that automatic optimization "cleans code" that I conciously wrote that way."
Do you have any examples of a compiler optimization slowing down your code? If so then we'd like to know that so we can fix it.
I feel like I should stress again that optimizations are not about "cleaning" your code. That's not at all the point of doing optimizations. Our goal is to make your code faster so that you can focus on what your code should be doing, not on how to make it fast. Getting the right answer is your job. Calculating it faster is (at least to some extent) our job.
12-10-2010 09:21 AM
Hello Adam,
thanks again for the fast and informative reply.
What you describe sounds like Christmas to me. It sounds marvelous, and so I wish you and the team good luck with this project.
My reservation towards optimization most probably comes from a case where my code started to behave strangely in LV 2010 and I suspected the optimizer to be the culprit.
However, I was not able to reproduce this error outside my large project. To be honest, the culprit could thus just as well be partial corruption of the VI (I have had that happen several times already, where removing some controls brought the VI back to normal life).
With best regards and thanks
Herbert