LabVIEW


TDMS write speeds are very different for normal and admin users

Solved!

Hi, I'm currently working on improving the imaging camera's data saving routine.

 

Before, we'd need about 500 MB/sec and everything was working just fine. Now we need to write at 900-1000 MB/sec.

The data is essentially a raw bitstream, so there is no need to store metadata with every frame; I'm using the TDMS Advanced Asynchronous Write functions to get the maximum speed.

During development I noticed the following problem (see speed test results below):

 

  1. Under the admin user, with the file size reservation: ~1440 MB/sec, very close to the test SSD speed limit.
  2. Under the admin user, without the file size reservation: ~1350 MB/sec, still very good.
  3. Normal user, without the file size reservation: ~750 MB/sec. That's very bad, less than 60% of the speed in #2!

In other words, the exact same piece of code, without the slightest change, opened on the same machine under a non-admin account, achieves less than 60% of the admin user's performance.

 

Question: how is this possible and how can it be fixed? This code should be able to run without admin privileges.

 

Attachment: an archive with the quick-and-dirty test code (versions saved for LabVIEW 2018, 2020, 2022, and 2023). The code generates a test array (kept in RAM) and then attempts to write the data, showing statistics after saving completes. No error handling or other safety measures attached.

 

Thanks!

 

Edit: OS is Windows 10 Pro x64 build 19045

 

Message 1 of 9

When "reserve file size" = TRUE, I am getting the following (LabVIEW 2020):

 

Error -2545 occurred at an unidentified location

Possible reason(s):

LabVIEW: (Hex 0xFFFFF60F) TDMS asynchronous mode is not initialized properly. Make sure the enable asynchronous? input of the TDMS Advanced Open function is TRUE. If you are writing data to a file, also make sure the TDMS Configure Asynchronous Writes function exists. If you are reading data from a file, also make sure that both the TDMS Configure Asynchronous Reads and the TDMS Start Asynchronous Reads functions exist.

 

Small things, but you can probably reduce the flatten time by reading the local variable outside the loop (or use a wire!). Currently, the compiler must assume that it can change at any time and must re-read the value on every iteration. I would also disable debugging on the VI.

 

You did not mention the OS, so I assume Windows. Sorry, I have no clue about the difference depending on user account. Do both accounts have the same settings (e.g. performance options, DEP, etc.)?

Message 2 of 9

You receive error -2545 if you try to enable file size reservation without launching LabVIEW as admin. This is expected, according to https://www.ni.com/docs/en-US/bundle/labview/page/standard-versus-advanced-tdms-functions.html

 

Sorry, I forgot to mention my system: Windows 10 Pro x64, build 19045.

 

LabVIEW settings seem to be identical. I use the same user account for testing and just launch LabVIEW either normally or via right-clicking the icon and choosing "Run as Administrator".

 

Edit:

Concerning the small optimizations: thank you, they do indeed work, but even with everything combined, they couldn't improve the situation by more than 3-5%, unfortunately.

Message 3 of 9

I have also tested freshly installed LabVIEW 2018 SP1 f4 and LabVIEW 2022 Q3. The problem is present in both.

Message 4 of 9

This intrigued me as I've never heard of anything that would cause a difference with the user.

 

So I've had a play. To avoid a VM, I built the VI in LV 2020 64-bit to run on my native system (Windows 11). I don't have an independent disk, so my results weren't as precise as I would like.

 

I see a less dramatic variation, but there was some:

 

* Admin with Reserve: 0.892 GB/s

* Admin no Reserve: 0.864 GB/s then 0.777 GB/s

* Normal user: 0.716 GB/s

 

I ran Process Explorer from Sysinternals. This lets you see the calls made to the OS kernel during the runs, so we can see whether anything differs between the cases.

I've attached the traces, but basically the admin no-reserve case differs from the normal user:

 

* Admin with reserve does as it sounds - it reserves the full file up front and then uses a single write function for each write.

* Normal user also does as it sounds - no reservation and single write function for each write.

* When running as admin without reserve it actually performs a reserve before each write. So each write is essentially reserve-then-write.

 

I'm surprised this would make such a dramatic difference as you would think the OS would handle this in the write.
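The difference between those two syscall patterns can be mimicked outside LabVIEW with a small sketch (Python here, purely illustrative; the real TDMS library does this with Win32 calls rather than `truncate`):

```python
import os
import tempfile

FRAME = b"\x00" * (1 << 16)   # one 64 KiB "frame" of raw data
N_FRAMES = 64

def write_preallocated(path):
    """Reserve the full file size once up front, then issue plain writes
    (the pattern seen under admin with reserve=TRUE)."""
    with open(path, "wb") as f:
        f.truncate(len(FRAME) * N_FRAMES)   # single reservation
        for _ in range(N_FRAMES):
            f.write(FRAME)

def write_extend_each_time(path):
    """Grow the file before every single write (the reserve-then-write
    pattern seen under admin without reserve)."""
    with open(path, "wb") as f:
        size = 0
        for _ in range(N_FRAMES):
            size += len(FRAME)
            f.truncate(size)                # one reservation per write
            f.write(FRAME)

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.bin")
    b = os.path.join(d, "b.bin")
    write_preallocated(a)
    write_extend_each_time(b)
    size_a, size_b = os.path.getsize(a), os.path.getsize(b)
```

Both produce identical files; the difference is only in how many metadata operations the filesystem has to process per write.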

 

Looking at the calls used for the reservation, SetValidDataLengthInformationFile is the one that requires admin privileges (https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-setfilevaliddata).

 

So with all of that said - I guess this is expected from the TDMS library, although I'm surprised at the size of the difference. Maybe there are other libraries in other languages that can offer a middle ground or better performance as a non-admin user but I'm not aware of any with APIs already in LabVIEW.

 

I can't really think of anything else to try as this is all buried deep in the TDMS library.

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 5 of 9
Solution
Accepted by D_mitriy

Ah, this is why SetFileValidData is required: it is what allows the writes to be truly asynchronous. https://learn.microsoft.com/en-us/troubleshoot/windows/win32/asynchronous-disk-io-synchronous

 

This means you may find less difference if you don't use the asynchronous functions but you may need to structure your LabVIEW code to maximize utilisation of the disks or accept some performance loss from the peak of running as admin with the async API.
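For example, one way to keep the disk busy with the synchronous functions is a classic producer/consumer structure: the acquisition loop only queues frames while a dedicated loop does nothing but write. A minimal sketch of the idea (Python stand-in for the LabVIEW queue-plus-writer-loop pattern; names are illustrative):

```python
import os
import queue
import tempfile
import threading

def stream_to_file(path, frames, depth=4):
    """Producer/consumer writer: the caller queues frames while a
    dedicated thread keeps the disk busy with plain synchronous writes.
    A bounded queue provides back-pressure if the disk falls behind."""
    q = queue.Queue(maxsize=depth)

    def drain():
        with open(path, "wb") as f:
            while True:
                frame = q.get()
                if frame is None:        # sentinel: acquisition finished
                    return
                f.write(frame)

    t = threading.Thread(target=drain)
    t.start()
    for frame in frames:                 # "acquisition" loop
        q.put(frame)
    q.put(None)
    t.join()

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "out.bin")
    frames = [bytes([i]) * 1024 for i in range(8)]
    stream_to_file(path, frames)
    written = os.path.getsize(path)
```

The writer never waits on the producer as long as the queue has data, which is the same overlap the async API buys, just scheduled in user code.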


EDIT: I'm not sure this is actually the link, since it doesn't look like the TDMS library uses Windows async file access but rather runs its own async thread. If so, it would be a bit annoying, since it isn't clear why the privilege would be needed.

 

 

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 6 of 9
Solution
Accepted by D_mitriy

Hello James,

 

Thank you very much!

Now the test works as expected, and the space reservation works without needing an admin account. A one-minute fix, literally.

 

But why on earth is this not mentioned in the LabVIEW help? ⁉️

 

Just in case anyone is interested, here are the steps to fix it:

  1. Win+R -> type in gpedit.msc -> press Enter,
  2. Go to Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Perform volume maintenance tasks,
  3. Right click -> Properties -> Add User or Group,
  4. Add the user you need,
  5. Apply,
  6. Close,
  7. Log out from the system and login back (or just reboot),
  8. Done.

 

Message 7 of 9

Interestingly, this story has continued. I have now moved the code to the machine where it is actually supposed to run, and it turns out that exactly the same code saves data approximately 4 times slower, which is far too slow.

Additional information about the PCs:

  • Same version of Windows and same updates installed.
  • Same versions of all the NI drivers and LabVIEW (installed after I realized something was wrong).
  • Faster SSD on the target PC (confirmed with CrystalDiskMark).
  • Same partition layout, i.e. sector and cluster size.
  • Somewhat faster overall CPU speed with quite similar architecture (Ryzen 7700X on the development machine, Threadripper 2950X on the target). During the test runs, CPU load does not exceed 50% of a single core.
  • Double the RAM with about the same effective speed.
  • MKL_DEBUG_CPU_TYPE is NOT set on both machines.

 

I have currently run out of ideas as to what the reason could be.
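One way to narrow it down might be to benchmark raw sequential writes outside LabVIEW, so the disk path of the two machines can be compared in isolation. A rough sketch (Python, buffered I/O with a final fsync; CrystalDiskMark's unbuffered numbers will differ, so only compare this sketch against itself across machines):

```python
import os
import tempfile
import time

def seq_write_speed(path, chunk_mb=8, total_mb=64):
    """Time plain sequential writes and return the rate in MB/s.
    Uses buffered I/O plus a final fsync; a stricter test would use
    unbuffered/direct I/O where the platform supports it."""
    chunk = b"\x00" * (chunk_mb << 20)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    return total_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    mb_per_s = seq_write_speed(os.path.join(d, "bench.bin"))
```

If the raw numbers match between machines, the bottleneck is above the disk (RAM, allocation, or the LabVIEW data path) rather than in it.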

Message 8 of 9

It appears that writing the actual data to disk is not the root of this new issue. It seems to be related to RAM speed or something similar. On the target machine, slicing the test data 3D array into 2D slices significantly slows down the measured write timings. However, when a 2D array is used instead, with some data randomly changed just before each write, speeds return to normal.
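That points at per-frame copies rather than at the disk. The effect can be isolated with a stdlib-only sketch that consumes the same data either as fresh per-frame copies or through reused zero-copy views (Python stand-in; the LabVIEW analogue would be avoiding a new 2D allocation per frame):

```python
import time

FRAME = 1 << 20                 # 1 MiB per 2D slice
N = 64
data3d = bytearray(FRAME * N)   # stand-in for the 3D test array in RAM

def consume(buf):
    """Stand-in for the write call; just touches the buffer's length."""
    return len(buf)

# Pattern A: carve a fresh copy out of the big block for every frame
# (bytearray slicing allocates and copies, costing memory bandwidth).
start = time.perf_counter()
total_copy = sum(consume(data3d[i * FRAME:(i + 1) * FRAME]) for i in range(N))
t_copy = time.perf_counter() - start

# Pattern B: reuse zero-copy views into the same buffer.
view = memoryview(data3d)
start = time.perf_counter()
total_view = sum(consume(view[i * FRAME:(i + 1) * FRAME]) for i in range(N))
t_view = time.perf_counter() - start
```

Comparing t_copy and t_view on both machines would show whether copy bandwidth alone can account for a 4x gap.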

 

Message 9 of 9