
Labview on Win 10 ARM 64 on Raspberry PI

I need your help. Could you explain how you did it? I am trying to use it for my thesis project, but I have not managed to get it working. Please help me; I have little time and so far I have not found a way to make all the LabVIEW programs run on the Raspberry Pi.

 

0 Kudos
Message 11 of 24
(1,555 Views)

@YOM1106 wrote:

I need your help. Could you explain how you did it? I am trying to use it for my thesis project, but I have not managed to get it working. Please help me; I have little time and so far I have not found a way to make all the LabVIEW programs run on the Raspberry Pi.

Please open a new thread and explain what you want to do in detail. Please explain if, and why, it is necessary to use a Raspberry Pi with Windows and LabVIEW in exactly that configuration. If you just want to control some other device, there are probably other options.

0 Kudos
Message 12 of 24
(1,539 Views)

I think this is a potentially important subject (not simply an exercise in "hacking"), because Microsoft Windows 11 on ARM is projected to take up a significant share of the laptop market in the coming years (e.g. 20% by 2025). Even though it's not that common yet, applications built with LabVIEW for deployment may increasingly run into Windows compatibility issues.

As such, I've been trying to get a built application to run on Windows 11 ARM, and have succeeded in getting a "basic" app to run using the following steps:

1. Build a standalone application using LabVIEW 2016 (32-bit), making sure to uncheck "use SSE optimizations" under the Advanced tab.

2. Build an installer and include the 2016 Run-Time Engine in the build.

3. Deploy to Windows 11 ARM (using a .zip or self-extracting .zip).

It is unclear whether this will work with a more complex app that includes links to DLLs and such, but at least the proof of concept seems to work 🙂

 

Message 13 of 24
(1,438 Views)

Just to throw more wood on the fire:

wiebeCARYA_0-1675089189599.png

 

QD, "Execute:SSE Runtime Optimization", CTRL+SHIFT+B should get you that property, even if scripting magic is off.

 

You'd have to 'VI Server' this for all VIs in a build, I guess.

 

This might be a way to get it done in >LV16. SSE could be a requirement of the RTE though...

0 Kudos
Message 14 of 24
(1,397 Views)

Thanks for the hint.

But even if you disable the following option:

Cheaterlow_0-1675100886484.png

The error is still shown.

I've tried several more or less dodgy ways to disable the SSE option, but the actual compile subprogram deep within the LabVIEW compile process is missing this option. Therefore it is not possible to do it, from my point of view...

0 Kudos
Message 15 of 24
(1,381 Views)

Yes, I seem to remember that the LabVIEW 2017 compiler actually lost the ability to disable SSE code generation for floating-point arithmetic. It may have something to do with the LLVM compiler that is used. Disabling SSE and SSE2 for AMD64 is considered unnecessary, since every AMD64-compatible CPU has guaranteed SSE/SSE2 support (except, it seems, the x86/x64 emulation in the Windows ARM build). And it seems LLVM eventually removed that possibility altogether.

Rolf Kalbermatter
My Blog
0 Kudos
Message 16 of 24
(1,372 Views)

This is a long (long, long) shot... Probably another red herring.

 

For a project (LV18) I have to set MKL_DEBUG_CPU_TYPE = 5 as a system variable.

 

Apparently, this fixes some compatibility issues between Intel and AMD CPUs. For me it's mostly lvanlys.dll that won't work properly if this variable isn't set.

 

See performance - When you have an AMD CPU, can you speed up code that uses the Intel-MKL? - Stack Overf...

 

This comment in that thread:

"The detail, that the slow non-intel path uses SSE, while the faster path (triggered by MKL_DEBUG_CPU_TYPE=5) uses AVX2 is probably worth mentioning...", makes me think there is a slim chance it makes some difference.

 

You don't mention which version you used. For my situation, this should be fixed from LV20 onward. The variable might still do something useful when emulating Intel on an ARM...

 

Again, a long shot... But hopefully a quick test?

Message 17 of 24
(1,361 Views)

I had to google how to set this up right. Please correct me if I have done this wrong.

Open cmd and type: "MKL_DEBUG_CPU_TYPE=5", and leave the cmd open.

Restart LabVIEW and compile it again.

The outcome is the same; I'm still getting the 0x464 error related to SSE.

 

Btw, I'm using LV2020.

 

Thanks for the tips and suggestions 🙂

0 Kudos
Message 18 of 24
(1,355 Views)

@Cheaterlow wrote:

I had to google how to set this up right. Please correct me if I have done this wrong.

Open cmd and type: "MKL_DEBUG_CPU_TYPE=5", and leave the cmd open.

Restart LabVIEW and compile it again.

The outcome is the same; I'm still getting the 0x464 error related to SSE.

 

Btw, I'm using LV2020.

 

Thanks for the tips and suggestions 🙂


How to Set Environment Variables in Windows 11/10 (aomeitech.com)

 

You can add a key to the registry as well:

HowTo: Set an Environment Variable in Windows - Command Line and Registry - Dowd and Associates
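Note that typing the bare assignment into cmd does nothing: cmd treats `MKL_DEBUG_CPU_TYPE=5` as a program name, so the variable is never set. It has to live in the environment that LabVIEW is launched from. As a minimal Python sketch (the LabVIEW.exe path below is a hypothetical example, adjust to your install):

```python
import os
import subprocess

# Build an environment for the child process with the MKL override set.
env = os.environ.copy()
env["MKL_DEBUG_CPU_TYPE"] = "5"

# Hypothetical install path -- adjust to the actual LabVIEW location.
labview = r"C:\Program Files (x86)\National Instruments\LabVIEW 2020\LabVIEW.exe"

def launch_with_override(exe_path, env):
    """Start a program whose process inherits the modified environment."""
    return subprocess.Popen([exe_path], env=env)

# launch_with_override(labview, env)  # run on a machine where the path exists
```

From cmd itself, `set MKL_DEBUG_CPU_TYPE=5` affects only that cmd session and programs started from it, while `setx` (or the registry key linked above) makes it persistent for future processes.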

0 Kudos
Message 19 of 24
(1,348 Views)

You are all barking up the wrong tree here. The real problem is that LabVIEW's built-in compiler (based on LLVM) lost, with version 2017, the ability to generate code that will NOT use SSE/SSE2/AVX instructions.

 

The MKL issue is entirely separate from that and in fact the opposite. lvanlys.dll is a thin wrapper around the Intel MKL library. This library can correctly detect which Intel CPU is present and enable different algorithm code paths that make maximal use of the CPU features. But this detection logic fails (not very surprisingly) with certain modern AMD CPUs. And instead of falling back to the minimum possible support, such as only Pentium features, it hangs up in the detection routine. Setting the environment variable tells the detection routine that the system knows better which CPU features to use than its own detection logic, and most of the detection is simply skipped altogether. Setting this environment variable to a value of 5 basically tells the MKL: "Don't worry, I know that the CPU supports SSE, SSE2, AVX and AVX2, and you don't have to try to detect that. Just go ahead and assume I'm telling you the truth!"

On an x86 emulation in Windows ARM, this would be exactly the wrong thing to do. However, since the compiled LabVIEW code can't even initialize, as it tries to set up the non-existent SSE registers, this is a moot point anyway.
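The dispatch behaviour described above can be sketched roughly like this. This is purely illustrative pseudologic, not MKL's actual code; the function name, path names, and vendor strings used for dispatch are assumptions for the sake of the sketch:

```python
import os

def choose_mkl_code_path(vendor: str, detected_features: set) -> str:
    """Illustrative sketch of the dispatch logic described above --
    not MKL's real implementation."""
    override = os.environ.get("MKL_DEBUG_CPU_TYPE")
    if override == "5":
        # Caller asserts SSE/SSE2/AVX/AVX2 are all present; skip detection.
        return "AVX2"
    if vendor != "GenuineIntel":
        # Non-Intel CPUs get the slow generic SSE path
        # (when the detection does not misbehave outright).
        return "SSE"
    # Intel CPUs: pick the widest instruction set that was detected.
    for path in ("AVX2", "AVX", "SSE2", "SSE"):
        if path in detected_features:
            return path
    return "generic"

# On an AMD CPU, the override forces the fast path despite the vendor check:
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"
print(choose_mkl_code_path("AuthenticAMD", {"SSE", "SSE2", "AVX", "AVX2"}))
```

This also makes Rolf's point visible: under x86 emulation on ARM the override would promise instruction sets the (emulated) CPU cannot deliver, which is exactly the wrong thing to do there.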

Rolf Kalbermatter
My Blog
Message 20 of 24
(1,341 Views)