LabVIEW


Error 200621, memory underflow NI USB 6212, Do not allow regeneration

Solved!

Hello.

I have a problem with error 200621. My application works properly, but sometimes (about 1 launch in 100) this error occurs. I found some solutions here, but I still cannot solve my problem. I need to keep regeneration off.

Thank you for your help!

JJ

 

HW - NI USB 6212
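For readers who can't open the attachments, here is a minimal sketch of the kind of task this question is about, written with the Python nidaqmx API rather than the attached LabVIEW VIs; the device name, rate, and waveform are placeholder assumptions, not the actual code:

```python
# Minimal sketch (Python nidaqmx, not the attached LabVIEW code) of a continuous
# AO task with regeneration disabled -- the configuration this question is about.
# "Dev1/ao0", the rate and the waveform are placeholders.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE, CHUNK = 16000, 8000  # 16 kHz sample clock, 0.5 s of data per write

with nidaqmx.Task() as ao:
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=CHUNK)
    # With regeneration off, the device never reuses old buffer data; the host
    # must keep supplying fresh samples. If the data transfer can't keep up with
    # the 16 kHz output rate, the task stops with error -200621 (memory underflow).
    ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    t = np.arange(CHUNK) / RATE
    ao.write(np.sin(2 * np.pi * 100.0 * t).tolist(), auto_start=False)  # pre-fill buffer
    ao.start()
    for _ in range(20):  # keep feeding the buffer (forever, in a real app)
        ao.write(np.sin(2 * np.pi * 100.0 * t).tolist())
```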

Message 1 of 9

Could my computer's performance be the problem? Intel i5, 2.2 GHz, 8 GB RAM, no SSD, only a 1 TB HDD.

I found out that I get this error (about 1 launch in 3) when I close and reopen Chrome with many (20-30) windows and then launch something performance-intensive. I know this is not a forum about computers, but I don't know whether the fault is in the program or in the PC, and I am really confused. If anyone has solved this, it would help me a lot.

JJ

Message 2 of 9
Solution
Accepted by topic author Johnny_J

Hi Johnny,

 

I found out that I get this error (about 1 launch in 3) when I close and reopen Chrome with many (20-30) windows and then launch something performance-intensive.

Simple solution: don't use Chrome or other performance-intensive software in parallel with your DAQ software…

 

(In general it is good advice to use computers exclusively for DAQ tasks and NOT use other unrelated software in parallel.)

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 9

OK, it looks like the problem is the PC's performance. Thank you for your time.
I hope this thread will help someone else.

JJ

Message 4 of 9

I couldn't look at your code because I'm still at LV 2016.

 

I agree that PC performance is at least a significant *contributor* to the problem.  But there's also the possibility that changes to the code might make the app less vulnerable to such performance limits.  Meanwhile, it's generally wise to follow GerdW's advice to try to run your DAQ apps solo as much as possible.  Especially (I would add) when using a USB device.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 5 of 9

Hi Kevin.

I saved it for you in the older LabVIEW version. If you have time to take a look, I would be very happy.

For now, limiting the load on the PC is my only solution.

 

Thank you.

JJ

 

Message 6 of 9

Several coding improvements are possible.  I can't walk through all details, but hopefully can give you some ideas and topics to look into.

 

1. Loop speed is limited by the fact that you do everything in a single loop -- AO, AI, calculations, graph displays.  Putting AO into one dedicated loop and AI into another would likely be a *huge* help toward reducing your DAQ errors.  Look into the topic "Producer - Consumer" (see the rough sketch after point 5 below).

 

2. AO and AI are not truly synced in hardware, though you would probably like them to be.  Look into topics like "AO AI synchronization".

 

3. Presently, dataflow causes AO and then AI to run in series.  That doesn't appear functionally important, and is probably slowing your loop unnecessarily.

    Actually, after further consideration, this might be the single biggest problem you have.  On the first iteration, you write 0.5 seconds worth of data to your AO task.  In the background, that starts being generated as actual AO signals.  Next you request 0.5 seconds worth of data from your AI task.  You'll be waiting most of 0.5 seconds for it to arrive.  At the end of that wait time, while you do calcs and output display data, your AO task is getting perilously close to running out of defined data to generate.

   On the next iteration, you give it another 0.5 sec of data, only to spend most of that 0.5 sec waiting for the next chunk of AI data and then (perhaps) getting around to the next loop iteration barely in time once again.   Etc.

 

4. I'm always leery of Express VIs, like the sine generator that you call every loop iteration.  Maybe it's fine, but I'd be more trusting of one of the "normal" signal generation functions.

 

5. Your AO task writes 8000 samples at a time with a task configured for a 16000 Hz sample rate.  So you deliver data in 0.5 second chunks.  I wouldn't particularly look to change that; it seems pretty reasonable.
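To make point 1 more concrete, here's a rough text-form sketch of the producer-consumer structure (in Python, since the original is a LabVIEW VI that isn't reproduced here); the chunk size, timing, and simulated read are illustrative assumptions, not your actual code:

```python
# Rough producer-consumer sketch: a Python stand-in for two LabVIEW loops passing
# data through a queue. The DAQ loop only reads and enqueues; the processing loop
# does the calculations and display, so slow processing can't stall the DAQ loop.
import queue
import threading
import time

data_q = queue.Queue()

def daq_loop(n_chunks=10, chunk=8000):
    """Producer: stands in for the AI read loop."""
    for _ in range(n_chunks):
        time.sleep(0.5)               # a 0.5 s DAQmx Read would block about this long
        data_q.put([0.0] * chunk)     # replace with the real DAQmx Read data
    data_q.put(None)                  # sentinel: tell the consumer to stop

def processing_loop():
    """Consumer: stands in for calculations, graphs, logging."""
    while (samples := data_q.get()) is not None:
        mean = sum(samples) / len(samples)   # replace with your real processing
        print(f"processed {len(samples)} samples, mean = {mean:.3f}")

threading.Thread(target=daq_loop, daemon=True).start()
processing_loop()   # heavy work here no longer delays the next hardware read
```

In LabVIEW the equivalent is the classic Producer-Consumer template: a queue between a dedicated DAQ loop and a separate processing/display loop.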

 

 

-Kevin P

Message 7 of 9

I made a couple of minimal and imperfect changes with comments.  Your code referred to some subVIs that you didn't include and I don't have, so you may need to relink the wires that show up broken in my copy.  Briefly:

 

- I set up AI to "borrow" its sample clock from the AO task.  Because AI is started before the loop while AO doesn't start until inside the loop, this will *automatically* keep the tasks in hardware sync.

 

- I removed the dataflow dependence from AO to AI, so now they are free to execute in parallel instead of in sequence.

 

- I wrapped AI Read in a case structure so that it skips the read on iteration 0 only.  This gives the AO task buffer a head start and will help prevent underflow.

 

On iteration 0, you'll write 8000 samples (0.5 sec worth) to the AO buffer and start the generation.  AI also starts *actual* sampling because it's been waiting for the AO sample clock to come into existence.  You do not read any AI data yet.  You'll proceed almost immediately to iteration 1.

    On iteration 1, you'll write another 8000 samples to the AO buffer.  At the same time you ask AI to read 8000 samples.  You'll wait most of the 0.5 sec for them to accumulate.  But by the time they do, you'll have *already* defined the next 0.5 sec worth of the AO buffer.  So you're not at such close risk of the underflow error.

   Subsequent iterations continue the same way: you're always writing to the AO buffer the samples it will start to generate about 0.5 sec in the future.
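If it helps to see the same three changes in text form, here is a hedged sketch using the Python nidaqmx API (the actual fix is the LabVIEW VI attached above; the "Dev1/..." device and channel names are placeholders for the USB-6212):

```python
# Hedged sketch of the three changes described above, in Python nidaqmx.
# Rates and chunk sizes match the thread: 16 kHz sample clock, 0.5 s chunks.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE, CHUNK = 16000, 8000

with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=CHUNK)
    ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Change 1: AI "borrows" the AO sample clock, so the two tasks share timing.
    ai.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ao/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=CHUNK)
    ai.start()  # armed, but no samples are taken until the AO clock exists

    t = np.arange(CHUNK) / RATE
    chunk_data = np.sin(2 * np.pi * 100.0 * t).tolist()  # placeholder waveform

    for i in range(20):  # the real program loops until stopped
        # Change 2: in the LabVIEW VI the write and read run in parallel; here
        # the blocking DAQmx Write paces the loop in much the same way.
        ao.write(chunk_data)
        if i == 0:
            ao.start()  # generation (and therefore AI sampling) begins now
        else:
            # Change 3: no read on iteration 0, so the AO buffer always holds
            # roughly 0.5 sec of not-yet-generated data -- the "head start".
            data = ai.read(number_of_samples_per_channel=CHUNK)
            # ... hand `data` off to processing/display ...
```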

 

 

-Kevin P

 

Message 8 of 9

Thank you for your help, Kevin.

JJ

Message 9 of 9