
Welcome to LabVIEW High Performance Analysis Library

High Performance Analysis Library 2.0 is available now. New features include:

  • More parallelized linear algebra and signal processing functions.
  • Single precision linear algebra and signal processing functions.
  • 3D transforms (FFT, DST, DCT)

Please visit NI Labs to download the installer.
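
For anyone new to the 3D transforms: as a rough, conceptual illustration only (sketched here in Python/SciPy rather than LabVIEW, so none of this is the HPAL API and the volume size is arbitrary), a single-precision 3D FFT amounts to the following:

    import numpy as np
    from scipy import fft

    # Arbitrary 64 x 64 x 64 single-precision (SGL-equivalent) volume.
    volume = np.random.rand(64, 64, 64).astype(np.float32)

    # 3D FFT computed without promoting to double precision: scipy.fft keeps a
    # float32 input as complex64 output, and workers=-1 parallelizes across cores.
    spectrum = fft.fftn(volume, workers=-1)
    print(spectrum.dtype)   # complex64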

Message 11 of 23

Hi,

Could you make the library available on the NI FTP site as well? When I download the zip file, my company's firewall responds with:

The page you've been trying to access was blocked.

Reason: Active content was blocked due to digital signature violation. The violation is Missing Digital Signature.
Transaction ID is 4C1720598AB6DD0AAF94.

Renaming the .exe file in the zip to something like .ex_, or using a different compression method (7-Zip, RAR), might help as well.

Regards,

Chris

Message 12 of 23

Hi Chris,

It is on the NI FTP site now. The download URL is: ftp://ftp.ni.com/pub/devzone/NI_Labs/HighPerformanceAnalysisLibrary2_0Installer.zip.

Please feel free to let me know if you have more questions.

Regards,

Qing

Message 13 of 23

This is looking great, thanks! It's great to finally have fast, native 3D FFTs in LabVIEW, and I'm also impressed to see SGL versions of all the routines. I've quickly benchmarked the 3D FFTs and, on a 2-core machine, see a similar improvement over the ASP routines as for 2D. Nice!

It does raise a few questions for me:

  1. There are other signal processing routines that build on top of these, prime examples being convolution and cross-correlation, both of which are often implemented with FFTs (a rough sketch of the FFT route appears after this list). Are these part of the plans? It would be great to have built-in routines for these that cope with 1D/2D/3D data and various representations. Also, it would be great to finally have a "correct" Normalized Cross-Correlation, which is sadly missing at the moment! (see here)
  2. Having SGL routines is great - I am about to reimplement my 3D deconvolution using SGL/CSG and see if the accuracy is sufficient, which would effectively double the problem size I can fit in memory. However, it does highlight the lack of SGL routines in the rest of the Array/ASP libraries in LabVIEW, even for simple functions like Mean. I wonder whether there is scope to rationalize all of the floating-point routines so that they can all be used with the greatest efficiency.
  3. The only concern I have for my work is the ability to use memory efficiently. Up until now, I've obviously had to code my own 3D FFTs (as a series of 1D FFTs) and have passed an array into the routine to use for the result.
    [Attachment: 3D FFT (DBL).png]
    This minimises the number of arrays created, which is especially important for large 3D deconvolution problems. The Richardson-Lucy algorithm, for example, has 2 FFTs and 2 IFFTs per iteration, but I can ensure no extra memory is allocated along the way.
    [Attachment: 3Ddeconv_RichardsonLucy_BD.png]
    From a quick look at the MKL documentation, it looks as though the option is there either to reuse the input array or to provide separate input and output arrays. Can something like this be added? (A sketch of the buffer-reuse idea also follows below.)
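
Re: point 1, here is a minimal sketch of the FFT route to cross-correlation and one possible normalization, written in Python/NumPy purely to illustrate the idea. It is not the HPAL or ASP implementation, and the normalization shown is the global (zero-mean) convention rather than the sliding-window form used in template matching:

    import numpy as np
    from scipy.signal import fftconvolve

    def xcorr_fft(a, b):
        # Cross-correlation of two real 1-D signals via FFT: correlation is just
        # convolution with the reversed second signal (use np.flip for 2-D/3-D).
        return fftconvolve(a, b[::-1], mode='full')

    def normalized_xcorr_fft(a, b):
        # One common "normalized" convention: subtract the means and scale so that
        # a perfectly aligned copy of the same signal peaks at exactly 1.0.
        a0, b0 = a - a.mean(), b - b.mean()
        return xcorr_fft(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

    # Quick check: a signal correlated with itself should peak at ~1.0.
    x = np.random.rand(1000)
    print(normalized_xcorr_fft(x, x).max())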
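
And re: point 3, a rough Python sketch of a Richardson-Lucy iteration with the 2 FFTs and 2 inverse FFTs mentioned above. It is only meant to show the buffer-reuse idea (MKL's DFT descriptor exposes this as an in-place vs. not-in-place placement setting); SciPy can merely approximate it with overwrite_x and in-place updates, and cannot write into a caller-supplied output array the way the block diagrams above do:

    import numpy as np
    from scipy import fft

    def richardson_lucy(data, psf, n_iter=10):
        # Assumes psf has the same shape as data, is centered, and sums to 1.
        otf = fft.rfftn(np.fft.ifftshift(psf), s=data.shape)   # transfer function, computed once
        estimate = data.astype(np.float32, copy=True)
        for _ in range(n_iter):
            # Forward model: blur the current estimate (FFT #1 + inverse FFT #1).
            blurred = fft.irfftn(fft.rfftn(estimate) * otf, s=data.shape)
            ratio = data / np.maximum(blurred, 1e-12)
            # Correction: correlate the ratio with the PSF (FFT #2 + inverse FFT #2).
            # overwrite_x lets the FFT reuse the ratio buffer instead of copying it.
            correction = fft.irfftn(fft.rfftn(ratio, overwrite_x=True) * np.conj(otf),
                                    s=data.shape)
            estimate *= correction   # update the estimate in place
        return estimate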

Again, many thanks for this.  It looks very useful, even at the current level of implementation.

Message 14 of 23

One further issue I've noted is that the 3D FFT routines are lacking the fftSize parameters.

Message 15 of 23

Is there a plan to migrate the High Performance Analysis Library to the x64 platform to be compatible with LV x64 2010?  If so, that would be great...  Also, if there IS a plan, is there an approximate time frame as to when it's likely to occur?

Message 16 of 23

Yes, 64-bit HPAL is on our roadmap, but I do not have an exact date for when you will see it. This is a very new library, and there is still a lot that could be added ...

By the way, we will have another release very soon. Thanks for your attention; we hope to hear more feedback from you.

Message 17 of 23

@duetcat:

Could you please explain what you mean by "very soon"? What are the changes in the new release?

Message 18 of 23

It has been released. The main new feature is sparse matrix functions. Here is the link:

http://decibel.ni.com/content/docs/DOC-13895
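
For anyone who has not used sparse routines before, the point is that only the nonzero entries are stored, so operations on large, mostly empty matrices scale with the number of nonzeros. A tiny SciPy illustration of the idea (this is not the HPAL API, and the matrix values are arbitrary):

    import numpy as np
    from scipy import sparse

    # 3 x 3 matrix with only four nonzero entries, stored in CSR form.
    rows = np.array([0, 1, 2, 2])
    cols = np.array([0, 1, 0, 2])
    vals = np.array([4.0, 5.0, 1.0, 2.0])
    A = sparse.csr_matrix((vals, (rows, cols)), shape=(3, 3))

    # Sparse matrix-vector product touches only the stored nonzeros.
    x = np.ones(3)
    print(A @ x)   # [4. 5. 3.]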

The next version is under development. Please feel free to send us your suggestions. Thanks.

Message 19 of 23

Just doing some more benchmarking, this time of the Dot Product function. For my calculations, this High Performance function is slightly slower than the built-in Linear Algebra function (LV 2010 and 2011), and considerably slower (3-4x) than the equivalent multiply-add for vectors smaller than 200 elements. All techniques show a speed increase when used inside a parallel loop, but the ordering stays the same.

[Attachment: DotProduct.png]

Have you benchmarked all of the functions against their built-in equivalents?
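
For reference, here is roughly the shape of the micro-benchmark in Python terms (NumPy timings, not LabVIEW/HPAL ones; the vector length and call count are arbitrary, and the numbers will obviously differ). For vectors this small, per-call overhead tends to dominate, which is one plausible reason a plain multiply-add can win:

    import timeit
    import numpy as np

    a = np.random.rand(200)
    b = np.random.rand(200)

    # Time 100,000 calls of a library dot product vs. an explicit multiply-then-sum.
    t_dot = timeit.timeit(lambda: np.dot(a, b), number=100_000)
    t_mul = timeit.timeit(lambda: (a * b).sum(), number=100_000)
    print(f"np.dot:         {t_dot:.3f} s per 100k calls")
    print(f"multiply + sum: {t_mul:.3f} s per 100k calls")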

Message 20 of 23