05-12-2016 05:21 PM
05-12-2016 05:39 PM - edited 05-12-2016 06:01 PM
@blessedk wrote:
OK, I am faced with about 300 × 1 million rows, and potentially more
Let's assume these are DBL, so a single copy of the 2D array would take 1M × 300 × 8 bytes = 2.4 GB. You will also have additional copies in memory, e.g. for the graph indicator, so this seems unreasonably large. Also, remember that LabVIEW arrays are contiguous in memory.
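The back-of-envelope arithmetic can be written out explicitly (a plain Python sketch for illustration only; the variable names are mine, not anything from LabVIEW):

```python
# Memory estimate for one copy of the 2D DBL array discussed above.
rows = 1_000_000       # samples per channel
channels = 300
bytes_per_dbl = 8      # a LabVIEW DBL is a 64-bit float

one_copy_gb = rows * channels * bytes_per_dbl / 1e9
print(f"{one_copy_gb:.1f} GB per data copy")   # 2.4 GB
```

And that is per copy; every wire branch or indicator that forces a buffer allocation multiplies it.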
Are you using 64bit LabVIEW?
What you are trying to do reminds me of these pictures you can find on the internet..... 😄
05-12-2016 06:17 PM
05-12-2016 06:22 PM
Arrays cannot be fragmented in memory, so it is not enough to have sufficient free memory overall; that memory must also be available as a single contiguous block.
05-12-2016 06:30 PM
05-12-2016 06:33 PM
05-13-2016 06:08 AM
@blessedk wrote:
Yamaeda,
I went back to your comment and noticed something very interesting (hopefully I understood it correctly): "capturing zoom and scroll events". So I imagine you mean that any desired point spacing can be achieved as long as you are plotting (or displaying) much smaller sections of the dataset at a time. Then, when you display all of it, you basically show a decimated version.
Exactly!
You basically decimate until you get 1 sample/pixel; the decimation factor is easily calculated from the currently shown time range (or similar) and the sample rate.
If you e.g. sample at 10 kHz for a minute but have zoomed in on seconds 20-30, your first visible sample will be the 200,000th, and if the graph is 1000 pixels wide the decimation factor should be 100,000 samples / 1000 pixels = 100.
/Y
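The arithmetic above can be spelled out in a few lines (Python used purely for illustration; the real implementation would of course be a LabVIEW diagram, and the names are hypothetical):

```python
# Decimation factor for the zoomed view described above.
sample_rate = 10_000                 # Hz
zoom_start_s, zoom_end_s = 20, 30    # visible time range, in seconds
graph_width_px = 1000

first_sample = zoom_start_s * sample_rate                     # 200,000
visible_samples = (zoom_end_s - zoom_start_s) * sample_rate   # 100,000
decimation = visible_samples // graph_width_px                # 100
```

Recomputing `decimation` on every zoom or scroll event keeps the plotted data at roughly one sample per pixel regardless of how far in or out the user zooms.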
05-13-2016 10:15 PM
I stumbled upon a blog post (http://culverson.com/tips-and-tricks/) that talks about making a flexible graph with multiple time scales, but with no "scrolling" backward in time (that is, you can see the last 5 minutes or 5 hours of data; the plots are "anchored" at the latest point, so you see the last 5 minutes or hours, not the first 5 minutes or hours).
I actually tried playing with this for a little demo project. It works surprisingly well, and scales nicely. Suppose you are sampling at 1 kHz, your plot shows 500 points, and you sample for 5000 seconds (that's 5 million points). To keep the math simple, I'm going to use scale factors of 10. So plotting every point, you can see the last 500/1000 = 0.5 seconds of data (and it takes 500 points to show that). If you decimate (or average) 10 points at a time, your 500 "averaged" points will represent the last 5 seconds of data (for another 500 points). A third average of averages will show the last 50 seconds of data (again, taking 500 points), a fourth shows the last 500 seconds of data, and a final fifth shows 5000 seconds, or all of the data.
All of this takes 5 * 500 = 2500 points to save all the plots, rather less than 5 million points. [Of course, you want to save all of the data, so you stream the 5 million points to disk, accumulating the 5 plot representations "on the fly", as suggested by the Blog article]. It's actually quite amazing that it works so well -- if done right, it takes surprisingly few computational resources. Of course, I tried this on a 4 channel display, not 300 channels, but that's only 75 times more data ...
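The on-the-fly accumulation Bob describes can be sketched as a small cascade of fixed-size buffers, where each level stores averages of 10 points from the level below (a minimal Python illustration; all names are mine, and the real version would stream the raw data to disk as well):

```python
from collections import deque

LEVELS = 5        # at 1 kHz / 500 pts: last 0.5 s, 5 s, 50 s, 500 s, 5000 s
FACTOR = 10       # averaging factor between adjacent levels
PLOT_POINTS = 500 # each level keeps only its latest 500 points

plots = [deque(maxlen=PLOT_POINTS) for _ in range(LEVELS)]
pending = [[] for _ in range(LEVELS)]   # points awaiting averaging

def add_sample(x):
    """Feed one raw sample; update all decimated plot buffers on the fly."""
    plots[0].append(x)
    for lvl in range(1, LEVELS):
        pending[lvl].append(x)
        if len(pending[lvl]) < FACTOR:
            break                             # nothing to propagate upward yet
        x = sum(pending[lvl]) / FACTOR        # average of 10 lower-level points
        pending[lvl].clear()
        plots[lvl].append(x)                  # x cascades to the next level
```

Total storage is just 5 × 500 = 2500 plot points plus a handful of pending values, no matter how long the acquisition runs, which is why it costs so little.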
See if Culverson's ideas make sense to you and fit what you are trying to do.
Bob Schor