
Memory Management for Large Arrays

Hello,

 

I am working on a machine learning application where I need to work with very large arrays, primarily 1D string arrays. As the system runs, the arrays expand; there are also integer arrays that start out 1D but become 2D (3-4 or more columns) with around a million rows. Since these integer arrays only store 0s and 1s, I use U8 integers so that they use less memory.
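(For scale, here is the byte arithmetic behind that choice, sketched in Python/NumPy as an analogy, using the shapes mentioned above; the same bytes-per-element reasoning applies to LabVIEW arrays:)

```python
import numpy as np

# Illustrative shapes from the post: ~1 million rows, 4 columns of 0/1 flags.
rows, cols = 1_000_000, 4

as_u8  = np.zeros((rows, cols), dtype=np.uint8)   # 1 byte per element
as_i32 = np.zeros((rows, cols), dtype=np.int32)   # 4 bytes per element

print(as_u8.nbytes // 2**20, "MiB")    # 3 MiB  (~4 MB)
print(as_i32.nbytes // 2**20, "MiB")   # 15 MiB (~16 MB)
```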

 

I am using LabVIEW 2013 SP1 (hoping to get my hands on LabVIEW 2014 in a few weeks) on Windows 8.1 Pro, with an Intel Core i7-3537U CPU @ 2.00 GHz (up to 2.50 GHz), a 64-bit OS on an x64-based processor, and 8 GB RAM.

 

There are several issues:

1) Speed issue: I will explain this with an example. At run time, there is a 1D string array (let's call it "X_str") of around 550,000 elements. Each of its elements is looked up in another 1D string array (let's call it "T_str") of more than 3 million elements. When a string is found in T_str, the match index is used to index a 1D U8 array (let's call it "T_val"), and the corresponding value (either 0 or 1) is stored, at the position the string had in X_str, into a 2D U8 array called X_val. For example, if the string "aaaa" is at X_str[450] and is found at T_str[135000], then the value at T_val[135000] is fetched and stored at X_val[450, column id] (either in column 1 or 2, etc., as per the cycle).

Right now I am doing this in the simple manner, without using the "subroutine" priority. What would be an effective way to do this quickly? The program in "Update Arrays.zip" shows my implementation of this part; the sketch below shows the same logic in text form.
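(Here is the inner loop as a minimal Python sketch; the names X_str, T_str, T_val and X_val come from the description above, everything else is illustrative:)

```python
def update_column(X_str, T_str, T_val, X_val, col):
    """Store T_val[j] at X_val[i][col] whenever X_str[i] is found at T_str[j]."""
    for i, s in enumerate(X_str):
        try:
            j = T_str.index(s)        # linear search, like Search 1D Array
        except ValueError:
            continue                  # no match: leave X_val[i][col] as-is
        X_val[i][col] = T_val[j]      # fetch the 0/1 value and store it
```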

 

2) Memory issue: The program stopped at a very critical juncture, just before producing the results, because it ran out of memory. It ran out of memory both when copying the array and when writing it to a file. The method I am using to write to file is in the attachment (Write to file.zip).

 

The application has given some positive results when the examples were not very complex in terms of size. But slightly larger example strings created arrays so huge that I cannot even copy them as constants into another file for analysis; the memory is full.

 

I am trying to redesign the algorithm so that many unnecessary steps are avoided, but I also want to know what can be improved in terms of my LabVIEW usage.

 

I am also exploring how Data Value References and the In Place Element Structure could be useful in this case.

 

 

Thanks a lot for your time!

 

Vaibhav
Message 1 of 41

You are already on the right track. I would say In Place Element Structures would help any time you are doing anything to the data aside from passing the reference. Avoid splitting your array at all costs outside of the structure, because that duplicates the array in memory.

 

There is some additional reading here: http://www.ni.com/white-paper/3625/en/

 

I don't really like that "GigaLabVIEW" llb, but it does have a few useful tools.

Message 2 of 41

Thanks for the quick reply.

I have already gone through that document and that .llb as well, but I couldn't understand a couple of things there: namely, how a queue compares against the In Place Element Structure, and how writing data in chunks would apply in my case.
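(For concreteness, here is what I understand "writing in chunks" to mean, as a Python sketch: stream the array to disk slice by slice instead of materializing one huge block in memory first. The file name and chunk size are made up.)

```python
import numpy as np

def write_in_chunks(arr, path, chunk_rows=65536):
    """Append fixed-size slices of arr to the file, one slice at a time."""
    with open(path, "wb") as f:
        for start in range(0, arr.shape[0], chunk_rows):
            arr[start:start + chunk_rows].tofile(f)   # only one slice in flight

write_in_chunks(np.zeros((1_000_000, 4), dtype=np.uint8), "X_val.bin")
```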

Also, breaking that 2 GB barrier was not entirely clear. I am not sure whether my setup (Windows 8.1 on an x64-based processor, running LabVIEW 2013 SP1) needs that improvement; I think it is already incorporated.

Vaibhav
Message 3 of 41

Insert Into Array is very expensive in terms of time and memory. Arrays occupy contiguous memory, so continually growing an array requires frequent re-allocations. The not-quite-big-enough segments that have already been set aside are not necessarily released by LabVIEW, and probably will not be reused because of the contiguous-memory requirement. So you can get out-of-memory errors while plenty of memory is unused but no single block is large enough. Preallocating the array and using Replace Array Subset is much better.
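(The same contrast in a Python/NumPy sketch, since a LabVIEW diagram can't be pasted inline; the sizes are made up. Growing element by element is worst-case quadratic work, while preallocate-and-replace touches each element once:)

```python
import numpy as np

n = 1_000_000

# Growing element by element: every np.append allocates a new contiguous
# block and copies everything already there -- O(n^2) work overall.
grown = np.empty(0, dtype=np.uint8)
for i in range(1000):                  # even 1000 iterations is sluggish
    grown = np.append(grown, i & 1)

# Preallocate once, then overwrite in place (the Replace Array Subset idiom):
pre = np.zeros(n, dtype=np.uint8)
for i in range(n):
    pre[i] = i & 1                     # no reallocation, no copying
```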

 

String searches and manipulations are notoriously slow. If you have any knowledge about the strings that would allow you to eliminate some of the searches or other manipulations, it might be worth the extra logic to do that. If you do something like this, it is a good idea to document what you are doing, because it may not be obvious to someone who looks at the code later.

 

Several of the shift registers can be replaced by tunnels. If the data never changes inside the loop, the shift registers are not needed. Note that if a For Loop can ever execute zero times, a shift register may be needed to pass the data through.

 

Boolean Case selectors are slightly faster than numerics, although the compiler may be smart enough to compensate in the -1 or Default situation.

 

You have some duplicated code inside the case structures.

 

Lynn

 

Cleaned up strings.png

Message 4 of 41

@Vaibhav wrote:

I am trying to redesign the algorithm so that many unnecessary steps are avoided, but I also want to know what can be improved in terms of my LabVIEW usage.


Just looking at your "update table", the main speed bump is probably the repeated searching of a string array with millions of elements. Since your array is not sorted, each search is linear in the array size, and that takes a long time. You might try storing the strings as variant attributes instead, giving you "log N" lookup performance instead. I believe that would dramatically speed things up.
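(The same idea in a Python sketch with stand-in data; a dict is a hash table rather than a tree, but the point is identical: build a keyed lookup once, then replace every linear scan with a keyed lookup.)

```python
T_str = ["aaaa", "bbbb", "cccc"]        # stand-ins for the >3M-element array
T_val = [1, 0, 1]
X_str = ["cccc", "zzzz", "aaaa"]
X_val = [[0, 0] for _ in X_str]
col = 0

lookup = dict(zip(T_str, T_val))        # build once: string -> 0/1

for i, s in enumerate(X_str):
    if s in lookup:                     # keyed lookup instead of a linear scan
        X_val[i][col] = lookup[s]

print(X_val)                            # [[1, 0], [0, 0], [1, 0]]
```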

 

Your VI has an incredible amount of dead code and duplicated code. For example, the innermost cases differ only in an increment and a diagram constant that is either 0 or 1, so why duplicate all the other code? Many things are in shift registers that never change; use a plain input tunnel instead. Below is a crude attempt at a simplification; it probably won't be much faster. I have not implemented the variant attribute solution, because it would also require changes in the caller, etc.

 

For more details on using variant attributes, start with this thread. Let me know if you have any questions. Variant attributes are stored in a red-black tree, so lookup is as fast as a binary search. It will be many orders of magnitude faster than your linear search.
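(A rough micro-benchmark sketch of that claim in Python; the container here is a hash set rather than a red-black tree, and the sizes are made up, but the scan-versus-keyed-lookup gap is what matters:)

```python
import random, string, timeit

n = 100_000
T_str = ["".join(random.choices(string.ascii_lowercase, k=8)) for _ in range(n)]
probe = T_str[n // 2]

keyed = set(T_str)                                         # built once

print(timeit.timeit(lambda: probe in T_str, number=100))   # linear scan
print(timeit.timeit(lambda: probe in keyed, number=100))   # keyed lookup
```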

 

Message 5 of 41

@johnsold wrote:

Boolean Case selectors are slightly faster than numerics, although the compiler may be smart enough to compensate in the -1 or Default situation.

 

You have some duplicated code inside the case structures.


It seems Lynn and I think alike. 😄

 

However, I disagree about the numeric versus boolean selector; there is really not much of a difference. More important is the number of cases: a case structure with only two cases is significantly faster than one with more than two, so as long as we only have two cases, we should be OK.

Message 6 of 41

altenbach,

 

Thanks for the correction about the case structures.

 

I rarely use variants but it sounds like I need to learn more about the advantages of the attributes.

 

Lynn

Message 7 of 41

johnsold wrote:

I rarely use variants but it sounds like I need to learn more about the advantages of the attributes.


I find myself using them more and more lately. They are great for lookup tables. It takes a little getting used to, but they do make things a lot simpler.


Message 8 of 41

Here is a quick attempt at a variant version. The order of the output is different, but that might not matter; the entries are still correctly matched to the values. If it matters, a few things need to change.

Of course you would carry the data inside the variant across the caller and subVIs, etc. There is no need to extract the actual data until you save.

 

I don't know how it performs memory-wise. 😉

 

The three LEDs show that the outputs match the expected values for the default inputs.

 

I changed the order of autoindexing to simplify the generation of the rows_S/X output. Modify as needed.

 

Message 9 of 41

Christian,

In the Nugget, Jarrod_S. said in one reply:

"We can store any type of data object as a name/value pair inside the variant and then retrieve those values by name. You pass around the variant data like you would a refnum. The big difference in this analogy is that if you copy the variant, you copy the entire storage, which is not what happens with a refnum or any other type of reference."

 

So will passing the variant into a subVI create an extra copy (and therefore use extra memory)?

Sorry for hijacking the thread, but I think this question is relevant here if the OP is going to use the variant technique.
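(To make the question concrete, here is the distinction the quote draws, as a Python analogy; this is not a claim about what LabVIEW actually does with the wire:)

```python
import copy

store = {f"name{i}": bytes(1024) for i in range(10_000)}  # ~10 MB of name/value pairs

def reader(s):
    return s["name42"]        # s is a reference to store; nothing is copied

value = reader(store)
dup = copy.deepcopy(store)    # this is the "copy the entire storage" case
```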

 

If I remember correctly, somewhere you showed a benchmark of a case structure with multiple cases being slow (or was it a discussion?). I am not able to find it; if you can attach that link, it would be helpful.

 

Message 10 of 41