11-07-2013 09:49 AM
The basic task is to store a very large amount of data (e.g. a 10GB U8 array) in something similar to a global variable. As LabVIEW can sometimes copy data in unexpected ways, I used an array of Data Value References (DVRs) to 1D U8 arrays to avoid that. I first allocate the memory, then perform as many of these operations as I wish, and at the end deallocate it.
I can't use a Functional Global, because this "variable" must have several separate instances. So my colleague and I created a VI used like an FG, but instead of shift registers or feedback nodes it passes a cluster of DVRs in and out. The data written and read will be files of identical size, so we also store an array of file offsets (effectively U64 pointers) so that we can access them at will.
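For readers unfamiliar with LabVIEW, the design described above can be modeled in a few lines of Python: one reference per fixed-size chunk (standing in for a DVR of a 1D U8 array), plus in-place subarray reads and writes that may straddle two chunks. This is only an illustrative sketch of the idea, not the actual LabVIEW code; the class and method names are invented, and the chunk size is shrunk to 1 MiB for the demo (the post uses 1 GB chunks).

```python
class ChunkedStore:
    """Model of a chunked in-memory store with an offset table."""
    CHUNK = 1 << 20  # 1 MiB chunks for the demo; the post uses 1 GB

    def __init__(self, total_bytes):
        n = (total_bytes + self.CHUNK - 1) // self.CHUNK
        # Each bytearray stands in for one DVR-wrapped 1D U8 array.
        self.chunks = [bytearray(self.CHUNK) for _ in range(n)]
        self.offsets = []  # U64-style record offsets, as in the post

    def write(self, offset, data):
        # A record may straddle a chunk boundary, which is why the
        # LabVIEW version accesses "one or two" DVRs per operation.
        pos = 0
        while pos < len(data):
            ci, co = divmod(offset + pos, self.CHUNK)
            span = min(self.CHUNK - co, len(data) - pos)
            self.chunks[ci][co:co + span] = data[pos:pos + span]
            pos += span

    def read(self, offset, size):
        out = bytearray()
        while len(out) < size:
            ci, co = divmod(offset + len(out), self.CHUNK)
            span = min(self.CHUNK - co, size - len(out))
            out += self.chunks[ci][co:co + span]
        return bytes(out)
```

Writing and then reading a record that crosses a chunk boundary exercises the same path that crashes in the LabVIEW version.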
But here comes the actual problem: all of this works only up to a limit, and, curiously enough:
1. Allocation: initializing a 1GB U8 array and creating a given number of DVRs in a loop works fine.
2. Writing: accessing the appropriate DVR (or two of them) through an In Place Element structure and replacing a subarray works fine.
3. Reading: accessing the DVRs in a similar way and getting an array subset crashes LabVIEW.
The tester sequentially allocates memory, writes to it (even 10GB), reads it back, and deallocates it. The problem occurred only after some 6GB had been read correctly.
It is not caused by a lack of physical RAM, as I tested it on a 16GB computer. Moreover, on that computer the crash was a sudden (about one second) shutdown of LabVIEW, without any message from it or from Windows. On two different 8GB computers the behaviour was different: LabVIEW hangs and a Windows message says just "program LabVIEW 12.0.1f5 Development System stopped working".
I never got an "out of memory" error, as all those machines use swap. After RAM had run out, the program kept running, although it was painfully slow. This problem occurs on 64-bit Windows 7 with 64-bit LabVIEW.
Is something wrong with LabVIEW itself, or perhaps with the way Windows manages memory? Or should I use a totally different approach?
11-11-2013 04:59 AM
Hi Andrzej and welcome to NI forums!
My first question would be whether you really need 10GB of data held in memory. Unless access throughput or latency dictates it, it would make more sense (at least to me) to stream to/from a fast disk rather than keeping everything in memory. You can create a huge binary file and a similar FGV-style API to access that file from anywhere in the system. There is even an OpenG toolkit written specifically for accessing large files, available through VI Package Manager.
That being said, if large-scale in-memory storage is what you need, then the array of DVRs you built is a good way to go. There are a few things I would suggest.
Let me know if any of this helped. If you're willing to share, I'd also be curious about your application and the broader goal you're trying to accomplish.
Best regards,
Andrew Valko
NI Hungary