
DWarnInternal 0xAFFA74F1 in LVCPUFunctions.cpp when exiting LabVIEW

When exiting LabVIEW after opening a particular VI, I receive an internal warning:

DWarnInternal 0xAFFA74F1 in LVCPUFunctions.cpp

 

In the lvlog.txt file it states:

DWarnInternal 0xAFFA74F1: LVProcessorHierarchy: CPUs are not symmetric

 

Does anyone know what this means and, more importantly, how to fix it? It is quite alarming to see this message every time I exit LabVIEW after working on the VI. I tried opening a previous version of my project from a time when I am fairly sure the warning did not occur, but the same internal warning appears there too, which makes me suspect my code is not at fault. On the other hand, the warning does not appear when I open other VIs, which points back at this particular VI.

 

If it's of any use, I've pasted in the entire contents of the lvlog.txt file below. Thanks in advance.
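The log below shows LabVIEW's InitExecSystem() querying the logical processor count at startup (12 on this machine). As a purely illustrative aside (this is a Python sketch, not LabVIEW code, and says nothing about the internal topology check that triggers the warning), you can compare what the OS itself reports on the same machine:

```python
import os

# Logical processor count as reported by the OS; LabVIEW's
# InitExecSystem() queries similar information at startup.
logical_cpus = os.cpu_count()
print(f"OS reports {logical_cpus} logical processors")
```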

 

####
#Date: Thu, 1 Jun 2023 12:55:46 PM
#OSName: Windows 10 Enterprise
#OSVers: 10.0
#OSBuild: 19045
#AppName: LabVIEW
#Version: 19.0.1f5 64-bit
#AppKind: FDS
#AppModDate: 1/21/2021 15:54 GMT
#LabVIEW Base Address: 0x00007FF747DC0000


InitExecSystem() call to GetCurrProcessNumProcessors() reports: 12 processors
InitExecSystem() call to GetNumProcessors() reports: 12 processors
InitExecSystem() will use: 12 processors
starting LabVIEW Execution System 2 Thread 0 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 1 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 2 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 3 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 4 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 5 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 6 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 7 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 8 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 9 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 10 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]
starting LabVIEW Execution System 2 Thread 11 , capacity: 24 at [3768432951.56669426, (12:55:51.566694260 2023:06:01)]

<DEBUG_OUTPUT>
1/06/2023 12:56:19.967 PM
DWarnInternal 0xAFFA74F1: LVProcessorHierarchy: CPUs are not symmetric
d:\builds\penguin\labview\branches\2019\dev\source\execsupp\LVCPUFunctions.cpp(33) : DWarnInternal 0xAFFA74F1: LVProcessorHierarchy: CPUs are not symmetric
minidump id: 0a2f511d-7f22-483b-83a4-59c64d17d916
$Id: //labview/branches/2019/dev/source/execsupp/LVCPUFunctions.cpp#1 $

</DEBUG_OUTPUT>
0x00007FF7481B33FC - LabVIEW <unknown> + 0
0x00007FFCA1D65389 - mgcore_SH_19_0 <unknown> + 0
0x00007FF7488E5A97 - LabVIEW <unknown> + 0
0x00007FF4D5ABF5D8 - <unknown> <unknown> + 0
0x0000017857074FE8 - <unknown> <unknown> + 0
0x00007FF7488EE410 - LabVIEW <unknown> + 0
0x0000017857070080 - <unknown> <unknown> + 0
0x0000017857D84F10 - <unknown> <unknown> + 0
0x00007FF7488EE410 - LabVIEW <unknown> + 0
*** Dumping Bread Crumb Stack ***
*** LabVIEW Base Address: 0x00007FF747DC0000 ***
#** Loading: "C:\Users\ban106\OneDrive - CSIRO\Documents\LabVIEW Projects\Serendipity\Main.vi"
*** End Dump ***
stopping LabVIEW Execution System 2 Thread 0 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 1 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 2 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 3 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 4 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 5 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 6 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 7 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 8 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 9 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 10 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
stopping LabVIEW Execution System 2 Thread 11 , capacity: 24 at [3768432991.17576313, (12:56:31.175763131 2023:06:01)]
Possible path leak, unable to purge elements of base #0

Message 1 of 2

I can add a little more testing information: on a whim, I disabled loop iteration parallelism on the few For Loops where I had enabled it and saved the VI. Barring the first time I exited LabVIEW after making the change, each subsequent time I exited LabVIEW after opening the VI, the internal warning no longer appeared. When I re-enabled For Loop parallelism, the internal warning came back.
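For readers unfamiliar with the feature: enabling iteration parallelism on a LabVIEW For Loop divides its iterations among a pool of parallel loop instances, which is presumably why it interacts with LabVIEW's CPU-topology detection. A rough conceptual equivalent in Python (a sketch only; LabVIEW is graphical and its scheduler works differently) looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Stand-in for the work done by one loop iteration.
    return i * i

# Sequential loop (iteration parallelism disabled):
sequential = [body(i) for i in range(8)]

# Iteration-parallel loop (roughly what LabVIEW's parallel For Loop
# does: iterations are distributed across worker instances):
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(body, range(8)))

# The results are identical; only the scheduling differs.
assert sequential == parallel
```

The iterations must be independent for this to be valid, which is the same requirement LabVIEW imposes when you enable iteration parallelism.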

Message 2 of 2