11-22-2010 10:57 AM
I need to understand the Camera Link standard. Say I have an image in the format: metadata + image data bytes. In the case of a Base Camera Link interface that simply has three 8-bit-wide data lines plus FVAL, LVAL, and DVAL, do I just put the image data three bytes at a time on the data lines while raising the FVAL, LVAL, and DVAL signals until the end of my data bytes? Any help would be really appreciated.
Thank you in advance,
-Shervin
11-22-2010 11:38 AM
I would suggest you get (purchase) the CameraLink specification at http://www.machinevisiononline.org
11-22-2010 11:43 AM
You may also have a look at the timing diagrams that are included in some camera manuals. You could have a look at this one from Photonfocus, for instance: http://www.photonfocus.com/upload/manuals/MAN037_MV_D1024E_3D01_V1_1.pdf
Hope this helps,
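If it helps, the framing those timing diagrams show can be sketched in software. Below is a minimal Python model of one frame on a Base interface (three 8-bit taps plus FVAL/LVAL/DVAL); the tap geometry (three adjacent pixels per clock) and the single-cycle blanking are my assumptions for illustration, since real cameras document their own timing:

```python
# Sketch: per-pixel-clock signal states for one Camera Link Base frame.
# Assumes 3 adjacent pixels per clock and one blanking cycle per line/frame;
# actual blanking durations and tap geometry are camera-specific.

def clock_cycles(frame, taps=3):
    """Yield (FVAL, LVAL, DVAL, data_bytes) per pixel clock for one frame.

    `frame` is a list of lines; each line is a bytes object whose length
    is a multiple of `taps`.
    """
    for line in frame:
        for i in range(0, len(line), taps):
            # FVAL high for the whole frame, LVAL high for the whole line,
            # DVAL high while the taps carry valid pixel data.
            yield (1, 1, 1, line[i:i + taps])
        # Line blanking: LVAL and DVAL drop, FVAL stays high.
        yield (1, 0, 0, b"")
    # Frame blanking: everything drops.
    yield (0, 0, 0, b"")

frame = [bytes(range(6)), bytes(range(6, 12))]  # two 6-pixel lines
cycles = list(clock_cycles(frame))
```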
11-22-2010 11:45 AM
I already have the Camera Link specification, but it reads more like a hardware pin description than a thorough overview. It explains, for example, which signals are available in each configuration, but not the protocol per se.
11-22-2010 12:38 PM
Sami, thanks a lot for the reference, it was really useful and now I get the idea. The funny thing is that none of this is described in the specification itself. Maybe they didn't want to make it too specific.
Just one last thing, if you could clarify: the data being sent doesn't need to be of a specific type, say R, G, B bytes in an RGB bitmap image. It can be anything, for example the data bytes of a JPEG 2000-encoded image in my case. Is this assumption correct? Basically, can we say that Camera Link is, in general, more like a parallel interface for communication?
thanks,
-Shervin
11-23-2010 10:14 AM
The type of data has no influence on the transmission: for instance, we use a 3-tap, 8-bit monochrome camera that we "declared" as RGB24 to the frame grabber.
It is then up to your software to "cast" the data.
Another example is the Photonfocus camera whose manual I suggested you download: it is a camera for profilometry applications that returns many different kinds of data for each image line output to the grabber.
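To show what I mean by "cast", here is a minimal Python sketch. It assumes the common interleaved tap geometry (tap k carries pixel 3n+k of the line), in which case the bytes from the grabber already sit in pixel order and the cast is just a reinterpretation of the buffer; a segmented tap geometry would need a real rearrangement. The function name and geometry are my assumptions, not anything from the grabber's API:

```python
# Sketch: reinterpret an "RGB24" buffer from a mono 3-tap camera as a
# mono image three times wider. Assumes interleaved tap geometry, so the
# bytes are already in pixel order and no reordering is needed.

def rgb24_to_mono(buf, rgb_width, height):
    """Split a flat RGB24-declared buffer into mono lines of 3x the width."""
    mono_width = rgb_width * 3
    assert len(buf) == mono_width * height
    return [buf[r * mono_width:(r + 1) * mono_width] for r in range(height)]

# A 2x2 "RGB" image from the grabber becomes a 6x2 mono image.
lines = rgb24_to_mono(bytes(range(12)), rgb_width=2, height=2)
```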
However, I think you should take care of one thing: usually images must have a predefined size when you start an acquisition session. JPEG data, whose size may vary from one image to another, may conflict with this. You should perhaps consider sending dummy bytes to make sure you always send fixed-size data blocks.
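A rough sketch of that dummy-byte idea, in Python. The block size, the zero filler, and the length prefix (so the receiver knows where the real data ends) are my assumptions for illustration, not anything from the spec:

```python
import struct

# Sketch: pad a variable-length payload (e.g. a JPEG buffer) up to the
# fixed block size the acquisition expects. A 4-byte big-endian length
# prefix lets the receiving software strip the padding again.

def pad_to_block(payload, block_size, filler=0x00):
    """Prefix payload with its length, then pad with filler to block_size."""
    framed = struct.pack(">I", len(payload)) + payload
    if len(framed) > block_size:
        raise ValueError("payload exceeds fixed block size")
    return framed + bytes([filler]) * (block_size - len(framed))

block = pad_to_block(b"\xff\xd8jpeg data\xff\xd9", 64)
```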
11-23-2010 10:47 AM
I see, this makes sense and helps my case. Another issue is defining what a line is for LVAL. I am not too familiar with the format of JPEG 2000 images (*.j2k or *.jp2), but as you said the line size is not fixed. Can LVAL be treated the same as FVAL in general, i.e. treat the whole JPEG 2000 image as one big line and take care of it on the other side in software? Or does this again depend on my device provider and how they built it?
11-24-2010 12:01 PM
I tried to output a whole image as a single line many years ago and it led to very strange results.
There have been many changes and improvements on IMAQ boards these last years, and the issues I was facing at that time probably do not exist anymore.
So you can try; otherwise, it is probably not too difficult to split your maximum image data size into equal-size blocks that you would output as lines.
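The splitting I have in mind would look something like this. A minimal Python sketch where the line width and the zero padding of the last partial line are assumptions; your grabber's declared line width is what actually fixes them:

```python
# Sketch: chop a data block into equal-width "lines" for the grabber,
# padding the last one so every line matches the declared width.
# Assumed line_width and zero filler; pick whatever your camera file declares.

def split_into_lines(data, line_width, filler=0x00):
    """Return a list of fixed-width lines covering `data`."""
    lines = []
    for i in range(0, len(data), line_width):
        chunk = data[i:i + line_width]
        if len(chunk) < line_width:
            # Pad the final partial line up to the declared width.
            chunk += bytes([filler]) * (line_width - len(chunk))
        lines.append(chunk)
    return lines

lines = split_into_lines(bytes(range(10)), line_width=4)
```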