08-28-2023 09:43 AM - edited 08-28-2023 10:35 AM
Hi, I'm currently working on improving the imaging camera's data saving routine.
Previously we needed about 500 MB/s and everything worked just fine. Now we need to write at 900-1000 MB/s.
The data is pretty much a raw bitstream, so there's no need to store metadata with every frame; I'm using the TDMS Advanced Asynchronous Write functions to get the maximum speed.
During development I noticed the following problem (see the speed test results below):
The exact same piece of code, without even the slightest change, opened on the same machine under a non-admin user, achieves less than 60% of the admin user's performance.
Question: how is this possible and how can it be fixed? This code should be able to run without admin privileges.
Attachment: archive with the quick-and-dirty test code (saved in versions for 2018, 2020, 2022, and 2023). The code generates a test array (kept in RAM) and then attempts to write the data, showing statistics after saving completes. No error handling or any other safety measures attached.
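For readers without the attachment handy, the test's structure can be sketched roughly as follows (Python used purely for illustration; the chunk size, count, and file name are my own assumptions, not the attachment's actual parameters):

```python
import os
import time

CHUNK_MB = 16    # size of each write; assumed for this sketch
N_CHUNKS = 8     # total data written = 128 MB
PATH = "speed_test.bin"

# Generate the test data in RAM up front, as the attached VI does.
chunk = bytes(CHUNK_MB * 1024 * 1024)

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(N_CHUNKS):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())  # make sure the data actually reached the disk
elapsed = time.perf_counter() - start

total_mb = CHUNK_MB * N_CHUNKS
print(f"wrote {total_mb} MB in {elapsed:.2f} s ({total_mb / elapsed:.0f} MB/s)")
size_on_disk = os.path.getsize(PATH)
os.remove(PATH)
```

The real test uses the TDMS Advanced API rather than plain file writes, but the measurement loop is the same idea: data generated in RAM, then written and timed.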
Thanks!
Edit: OS is Windows 10 Pro x64 build 19045
08-28-2023 10:27 AM
When "reserve files size=TRUE", I am getting the following (LabVIEW 2020):
Error -2545 occurred at an unidentified location
Possible reason(s):
LabVIEW: (Hex 0xFFFFF60F) TDMS asynchronous mode is not initialized properly. Make sure the enable asynchronous? input of the TDMS Advanced Open function is TRUE. If you are writing data to a file, also make sure the TDMS Configure Asynchronous Writes function exists. If you are reading data from a file, also make sure that both the TDMS Configure Asynchronous Reads and the TDMS Start Asynchronous Reads functions exist.
Small things, but you can probably reduce the flatten time by placing the local variable outside the loop (or using a wire!). Currently the compiler must assume that it can change at any time and must re-read the value on every iteration. I would also disable debugging on the VI.
You did not mention the OS, so I assume Windows. Sorry, I have no clue about the difference depending on user account. Do both accounts have the same settings (e.g. performance options, DEP, etc.)?
08-28-2023 10:34 AM - edited 08-28-2023 10:42 AM
You receive error -2545 if you try to enable the file size reservation without launching LabVIEW as administrator. This is normal according to https://www.ni.com/docs/en-US/bundle/labview/page/standard-versus-advanced-tdms-functions.html
Sorry, I forgot to mention my system: Windows 10 Pro x64 build 19045.
The LabVIEW settings seem to be identical. I use the same user account for testing, and just launch LabVIEW either the regular way or via right-clicking the icon and choosing "Run as Administrator".
Edit:
Concerning the small optimizations: thank you, they do indeed work, but even all of them combined couldn't improve the situation by more than 3-5%, unfortunately.
08-29-2023 02:35 AM
I have also tested the freshly installed LabVIEW 2018 SP1 f4 and LabVIEW 2022 Q3. The problem is present in both of them.
08-29-2023 03:56 AM
This intrigued me as I've never heard of anything that would cause a difference with the user.
So I've had a play. To avoid a VM, I built the VI in LV 2020 64-bit to run on my native system (Windows 11). I don't have an independent disk, so my results weren't as precise as I would like.
I see a less dramatic variation, but there was some:
* Admin with Reserve: 0.892 GB/s
* Admin no Reserve: 0.864 GB/s then 0.777 GB/s
* Normal user: 0.716 GB/s
I ran Process Monitor from Sysinternals. This lets you see the calls made to the OS kernel during the runs, so we can check whether anything differs between the cases.
I've attached the logs, but in short, the admin-no-reserve case differs from the normal user:
* Admin with reserve does as it sounds - it reserves the full file up front and then uses a single write function for each write.
* Normal user also does as it sounds - no reservation and single write function for each write.
* When running as admin without reserve it actually performs a reserve before each write. So each write is essentially reserve-then-write.
I'm surprised this would make such a dramatic difference as you would think the OS would handle this in the write.
Looking at the calls used for the reservation, SetValidDataLengthInformationFile is the one that requires admin privileges (https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-setfilevaliddata).
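For context, there are two different ways a file can be "reserved" on Windows, and only one needs elevation. Extending the logical file size is unprivileged, but NTFS then zero-fills the extended region lazily on first access; SetFileValidData skips that zero-fill by moving the valid-data length, which is exactly why it demands SeManageVolumePrivilege, i.e. an elevated process. A minimal cross-platform sketch of the unprivileged variant (file name and size are arbitrary demo values):

```python
import os

PATH = "prealloc_demo.bin"
SIZE = 8 * 1024 * 1024  # 8 MB; arbitrary demo size

# Unprivileged preallocation: extend the logical file size up front.
# On NTFS the new region is zero-filled on demand at first access,
# which is the cost SetFileValidData avoids (at the price of needing
# SeManageVolumePrivilege, i.e. running elevated).
with open(PATH, "wb") as f:
    f.truncate(SIZE)

size_on_disk = os.path.getsize(PATH)
print(f"logical size after preallocation: {size_on_disk} bytes")
os.remove(PATH)
```

This is presumably why TDMS falls back to plain per-write behaviour for a normal user: the cheap preallocation path is available to everyone, while the zero-fill-skipping path is admin-only.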
So with all of that said - I guess this is expected from the TDMS library, although I'm surprised at the size of the difference. Maybe there are other libraries in other languages that can offer a middle ground or better performance as a non-admin user but I'm not aware of any with APIs already in LabVIEW.
I can't really think of anything else to try as this is all buried deep in the TDMS library.
08-29-2023 04:22 AM - edited 08-29-2023 04:33 AM
Ah, this is why SetFileValidData is required: it is needed for the writes to become truly asynchronous: https://learn.microsoft.com/en-us/troubleshoot/windows/win32/asynchronous-disk-io-synchronous
This means you may find less of a difference if you don't use the asynchronous functions, but you may need to structure your LabVIEW code to maximise utilisation of the disks, or accept some performance loss relative to the peak of running as admin with the async API.
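One common way to structure code for disk utilisation without the async API is double buffering: the acquisition loop hands finished blocks to a dedicated writer, so the disk stays busy while new data is produced. A minimal sketch of the pattern (Python for illustration only; the queue depth, block size, and file name are assumptions):

```python
import os
import queue
import threading

PATH = "writer_demo.bin"
BLOCK = bytes(1024 * 1024)  # 1 MB per block; assumed for the demo
N_BLOCKS = 16

# Bounded queue acts as the buffer pool between producer and writer.
blocks: queue.Queue = queue.Queue(maxsize=4)

def writer() -> None:
    # Dedicated writer thread: the producer only blocks when the
    # queue is full, so disk I/O and acquisition overlap in time.
    with open(PATH, "wb") as f:
        while True:
            block = blocks.get()
            if block is None:  # sentinel: end of stream
                return
            f.write(block)

t = threading.Thread(target=writer)
t.start()

for _ in range(N_BLOCKS):  # the "acquisition" loop produces data
    blocks.put(BLOCK)
blocks.put(None)           # signal the writer to finish
t.join()

written = os.path.getsize(PATH)
print(f"wrote {written // (1024 * 1024)} MB via the background writer")
os.remove(PATH)
```

In LabVIEW the equivalent would be a producer/consumer pair of loops connected by a queue, with the consumer doing the file writes.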
EDIT: I'm not sure this is actually the connection, since it doesn't look like the TDMS library uses the Windows async file access but rather runs its own async thread. If so, it would be a bit annoying, since it isn't clear why the call would be needed.
08-29-2023 04:55 AM
Hello James,
Thank you very much!
Now the test is working as expected, and the space reservation also works without needing an admin account. A one-minute fix, literally.
But why on earth is this not mentioned in the LabVIEW help? ⁉️
Just in case anyone is interested, here are the steps to fix it:
09-29-2023 08:43 AM - edited 09-29-2023 08:43 AM
Interestingly, this story has continued. I have now moved the code to the machine where it is actually supposed to run, and it turns out that exactly the same code saves data approximately 4 times slower, which is far too slow.
Additional information about the PCs:
At the moment I've run out of ideas as to what the reason could be.
10-02-2023 09:35 AM
It appears that writing the actual data to the disk is not the root of this new issue; it seems to be related to RAM speed or something similar. On the target machine, slicing the test-data 3D array into 2D slices significantly slows down the measured write timings. However, when a 2D array is used instead, with some data randomly changed just before writing, speeds return to normal.
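If the per-slice copies are indeed the culprit, the effect can be mimicked outside LabVIEW: carving a large contiguous buffer into per-frame copies costs memory bandwidth that zero-copy views avoid. A byte-buffer analogy (not LabVIEW code; frame size and count are assumptions, and this only models the copy cost, not TDMS itself):

```python
import time

FRAME = 1024 * 1024  # 1 MB per "2D frame"; assumed
FRAMES = 32

# One contiguous buffer standing in for the 3D test array.
data = bytearray(FRAME * FRAMES)

# Variant 1: slicing copies each frame out of the big buffer,
# analogous to indexing a 3D array into a fresh 2D array per frame.
t0 = time.perf_counter()
copied = 0
for i in range(FRAMES):
    frame = data[i * FRAME:(i + 1) * FRAME]  # slice -> new copy
    copied += len(frame)
t_copy = time.perf_counter() - t0

# Variant 2: memoryview hands out zero-copy windows into the buffer,
# analogous to reusing a single preallocated 2D array in place.
view = memoryview(data)
t0 = time.perf_counter()
viewed = 0
for i in range(FRAMES):
    frame = view[i * FRAME:(i + 1) * FRAME]  # no copy
    viewed += len(frame)
t_view = time.perf_counter() - t0

print(f"copying slices: {t_copy:.4f} s, zero-copy views: {t_view:.4f} s")
```

If the target machine has much slower RAM than the development machine, the copy-heavy variant would be hit disproportionately hard, which would match the symptom described above.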