01-07-2012 03:00 AM - edited 01-07-2012 03:09 AM
I am assuming your code is running on a PC which does not have an RTOS. This then begs the question: if the point of the RT FIFO is determinism, and it is being used on a non-deterministic OS, is there any reason to be using it instead of a queue? My initial answer is no. The speed will probably depend more on your processor, and any time difference between a queue and a FIFO will probably not be noticeable to the user, especially if the queue is only used for an event-driven queued state machine. What you need to be more careful about is that the loop dequeuing elements can keep up with the loop enqueuing them, or else you have bigger issues!
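The keep-up concern can be sketched outside LabVIEW. In this Python sketch (the standard `queue` module stands in for a fixed-size LabVIEW queue; the names are illustrative, not anything from NI's API), a bounded queue that nobody is draining eventually rejects new elements:

```python
import queue

# A bounded queue stands in for a fixed-size LabVIEW queue.
# If the dequeuing loop is not keeping up, the producer either
# blocks (put) or fails fast (put_nowait) once the queue is full.
q = queue.Queue(maxsize=3)

dropped = []
for i in range(5):
    try:
        q.put_nowait(i)       # non-blocking enqueue
    except queue.Full:
        dropped.append(i)     # consumer did not keep up; data is lost

print(dropped)  # -> [3, 4]
```

Whether you block, drop, or grow the queue when the consumer falls behind is a design decision; what you cannot do is ignore the mismatch.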
01-08-2012 07:15 PM
@Ryan,
Thanks for the link to the FAQ; that was the info that I was looking for on the RT FIFO.
The key information I was looking for was that queues use blocking calls and that the RT FIFO preallocates memory.
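That preallocation point can be made concrete with a sketch. This is an illustrative Python class, not NI's implementation: all storage is allocated once up front, and reads and writes never block and never allocate, which is what makes an RT-FIFO-style buffer deterministic:

```python
class RTFifoSketch:
    """Illustrative sketch (not NI's implementation) of an RT-FIFO-style
    ring buffer: storage is preallocated, and operations never block."""

    def __init__(self, size):
        self.buf = [None] * size  # all memory allocated here, once
        self.size = size
        self.head = 0
        self.tail = 0
        self.count = 0

    def write(self, item):
        if self.count == self.size:
            return False          # full: report overflow instead of blocking
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.size
        self.count += 1
        return True

    def read(self):
        if self.count == 0:
            return None           # empty: report underflow instead of blocking
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return item
```

Contrast this with a blocking queue, where a reader on an empty queue simply waits: convenient on a desktop OS, but a source of unbounded latency in a deterministic loop.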
The execution determinism is not really a concern to me in my application but is good to know for future work.
Here is some (hopefully constructive) feedback to NI/LabVIEW.
I have posted on this forum many times seeking wisdom and knowledge, and more often than not an NI applications engineer or LabVIEW guru replies with a link to a helpful KnowledgeBase page.
I do my best to search for these nuggets but I just can't seem to find them.
Is there a special "Pro" search page that I should use?
01-08-2012 07:35 PM - edited 01-08-2012 07:45 PM
@for(imstuck),
I use Compact FieldPoint, RT PXI, Windows, and, when the postman arrives, hopefully CompactRIO.
Our code is used on all of these platforms, and in the case of cFP it needs to be written in a way that is respectful of execution time.
It is not such an issue with the brute force of VxWorks on a dual-core PXI-8108, but on a cFP-2100 every execution cycle counts.
I recently learned how to overload a cFP processor, making it "forget" to service the UART driver and lose important comms data.
@Nathan,
Thanks for the suggestion,
I am reluctant to base my architectural choices on experimentation and would prefer to use theory before empirical methods.
I have done my share of benchmarking, but I tend to use it only when I can't find information or documentation.
I use 4 different operating systems and 3 versions of LabVIEW; benchmarking can consume a lot of time and still not give a definitive answer.
01-09-2012 07:28 AM
I have actually benchmarked all these methods. Unfortunately, I cannot find my actual benchmark. Results were about as expected.
Note that all this benchmarking was done on a desktop machine, and I know that things change from platform to platform.
In general, I use queues for point-to-point communication and user events for broadcast communications. I would use RT FIFOs on an RT platform if I needed the determinism.
01-09-2012 10:00 AM
DFGray wrote:
- Queues are fastest if they do not have to allocate memory (and they do allocate memory the first time through, and in subsequent times when needed. This is done in an intelligent fashion). But they have a lot of jitter due to said possible memory allocation.
Is that still true if you specify the queue size?
01-09-2012 10:34 AM
If you specify the queue size, it removes most of the jitter from using a queue. You take a one-time hit when the queue is created, but nothing is allocated when you actually use the queue.
01-09-2012 10:42 AM
@ Timmar
Unfortunately there is no "pro" search page. We search the same database you have access to on ni.com. My "pro" tip would be to search for KnowledgeBase articles only and scan through the results. To find this page, search for "rt fifo" and then, on the left-hand side of the page, select Show me "KnowledgeBase". The article I linked should be about the fourth one down. In my experience, the three best filters for this type of thing are "KnowledgeBase", "Tutorials", and "Examples".
01-09-2012 11:50 AM
@DFGray wrote:
If you specify the queue size, it removes most of the jitter from using a queue. You take a one-time hit when the queue is created, but nothing is allocated when you actually use the queue.
Does it actually do the allocation at the time of queue creation? I was under the perhaps-mistaken impression that queues grow as elements are added, and that setting the queue size simply sets an upper bound on how much it can grow. I think I'd seen examples where it was suggested to pre-fill and then flush a queue when a program starts in order to make sure the queue was fully allocated prior to using it for any real work, although I couldn't find such an example in a quick search.
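The pre-fill-and-flush idiom described above can be sketched in Python with a `collections.deque` (purely illustrative; this is not a claim about what LabVIEW does internally). The intent is to force the container to grow its storage before any time-critical work begins; whether the runtime actually retains that storage after the flush is implementation-specific, which is exactly the open question in this thread:

```python
from collections import deque

# Pre-fill and flush: force the container to grow before real work.
q = deque()
q.extend(range(100_000))   # pre-fill: provoke any lazy allocation up front
q.clear()                  # flush: queue is empty again before real use

# From here on, the hope is that enqueues reuse already-grown storage
# instead of allocating mid-loop. Verify this on your own platform.
q.append("first real element")
```

If the runtime frees storage on flush, this idiom buys you nothing, which is why it is worth benchmarking rather than assuming.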
01-09-2012 12:40 PM
I do not know for sure, and it could have changed since the last time I looked at it. I would suggest you benchmark it. Use a flat sequence with every other frame populated with <vi.lib>\Utility\High Resolution Relative Seconds.vi.
01-09-2012 06:45 PM
I don't have that VI, but with the code below, I'm getting consistent results that suggest that setting the queue size does not cause that number of elements to be allocated. Enqueuing 1 million elements initially takes about 140 ms; after flushing the queue, enqueuing the same elements takes only about 115 ms. To me, this looks like space in the queue is not allocated until it is needed.
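For what it's worth, the same grow-versus-preallocate effect shows up in a plain Python sketch (timings are machine-dependent and only illustrative; the point is that filling a buffer allocated up front avoids the incremental growth cost inside the loop):

```python
import time

N = 1_000_000

# Append into an empty list: storage grows incrementally as we go.
t0 = time.perf_counter()
grown = []
for i in range(N):
    grown.append(i)
t_grow = time.perf_counter() - t0

# Fill a buffer allocated up front: the one-time hit is taken here,
# and no growth happens inside the timed loop.
t0 = time.perf_counter()
prealloc = [None] * N
for i in range(N):
    prealloc[i] = i
t_fill = time.perf_counter() - t0

print(f"grow: {t_grow:.3f}s  prealloc-fill: {t_fill:.3f}s")
```

As with the LabVIEW numbers above, the absolute values matter less than the difference between the first (allocating) pass and subsequent passes over already-allocated storage.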