Reading Image Stream from RCCC Bayer Camera Sensor in Ubuntu










I am working with a LI-AR0820 GMSL2 camera, which uses the On-Semi AR0820 sensor that captures images in a 12-bit RCCC Bayer format. I want to read the real-time image stream from the camera, convert it into a grayscale image (using this demosaicing algorithm), and then feed it into an object detection algorithm. However, since OpenCV does not support the RCCC format, I can't use the VideoCapture class to get image data from the camera. I am looking for something similar that gives me the streamed image data in an array-like format so that I can manipulate it further. Any ideas?



I'm running Ubuntu 18.04 with OpenCV 3.2.0 and Python 3.7.1.



EDIT: I am using the following code.



#include <vector>
#include <cstdio>
#include <cstring>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Each pixel is made up of 16 bits, with the high 4 bits always equal to 0
    unsigned char bytes[2];

    // Hold the data in a vector
    std::vector<unsigned short int> data;

    // Read the camera data (little-endian 16-bit words)
    FILE *fp = fopen("test.raw", "rb");
    while (fread(bytes, 2, 1, fp) != 0)
        data.push_back(bytes[0] | (bytes[1] << 8));
    fclose(fp);

    // Make a 720x1280 matrix of 16-bit unsigned integers
    cv::Mat imBayer = cv::Mat(720, 1280, CV_16U);

    // Make a matrix to hold RGB data
    cv::Mat imRGB;

    // Copy the data in the vector into the matrix
    memmove(imBayer.data, data.data(), data.size()*2);

    // Convert the GR Bayer pattern into RGB, putting it into the RGB matrix
    cv::cvtColor(imBayer, imRGB, CV_BayerGR2RGB);

    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    // *15 because the image is dark
    cv::imshow("Display window", 15*imRGB);

    cv::waitKey(0);

    return 0;
}



There are two problems with the code. First, I have to get a raw image file using fswebcam and then use the code above to read the raw file and display the image. I want to be able to access the /dev/video1 node and directly read the raw data from there instead of having to first save it and then read it separately. Second, OpenCV does not support the RCCC Bayer format so I have to come up with a demosaicing method.
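For reference, the raw dump that fswebcam writes can be loaded in one step with NumPy, assuming the same 1280x720, little-endian, 16-bit-per-pixel layout as the C++ snippet above. Since no real capture is available here, the sketch fabricates a synthetic stand-in file first:

```python
import numpy as np

# synthetic stand-in for a real fswebcam dump: 720x1280 little-endian
# 16-bit words whose top 4 bits are zero (12-bit samples)
np.random.randint(0, 4096, 720 * 1280, dtype=np.uint16).astype('<u2').tofile('test.raw')

# load the raw file back as a 720x1280 16-bit frame -- no manual byte assembly needed
frame = np.fromfile('test.raw', dtype='<u2').reshape(720, 1280)
```

This only replaces the file-reading step, not the streaming problem, but the resulting array can be handed directly to a custom demosaicer.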



The camera outputs serialized data through a coax cable, so I use a deserializer board with a USB 3.0 connection to connect the camera to my laptop. The setup can be seen here.










Comments:

  • Please provide some code on how you're currently reading images from the sensor. If you have a byte stream (encoded or raw) you can convert it to an OpenCV Mat. – zindarod, Nov 5 at 20:57
  • If your camera supports Video4Linux, you'll be able to read data. – dhanushka, Nov 7 at 16:47
  • Is the camera connected to your machine via USB 3.0? – Ulrich Stern, Nov 9 at 20:39
  • According to the camera’s data sheet, it is UVC compliant. So in principle, VideoCapture should work. Can you share more details on how VideoCapture failed for you? – Ulrich Stern, Nov 10 at 9:24
  • VideoCapture reads a weird green-looking image, like the one found at the bottom left corner of the first page here: leopardimaging.com/uploads/LI-USB30-OV13850_datasheet.pdf – Goodarz Mehr, Nov 10 at 18:16















python image opencv video-streaming

asked Nov 3 at 17:52 by Goodarz Mehr, edited Nov 9 at 23:25

1 Answer
If your camera supports the CAP_PROP_CONVERT_RGB property, you might be able to get raw RCCC data from VideoCapture. Setting this property to False disables the conversion to RGB, so you can capture raw frames with code like the following (no error checking, for simplicity):



import cv2

cap = cv2.VideoCapture(0)
# disable conversion of captured frames to RGB
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # other processing ...
cap.release()


I don't know if this works for your camera.



If you can get the raw images somehow, you can apply the de-mosaicing method described in the Analog Devices app note.



[Figures from the app note: filter, with optimal filter, optimal filter]



I wrote the following Python code, as described in the app note, to test the RCCC -> GRAY conversion.



import cv2
import numpy as np

rgb = cv2.cvtColor(cv2.imread('RGB.png'), cv2.COLOR_BGR2RGB)
c = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
r = rgb[:, :, 0]

# no error checking: c's dimensions must be even
rmask = np.tile([[1, 0], [0, 0]], [c.shape[0]//2, c.shape[1]//2])
cmask = np.tile([[0, 1], [1, 1]], [c.shape[0]//2, c.shape[1]//2])

# create an RCCC image by replacing 1 pixel out of each 2x2 pixel region
# in the monochrome image (c) with a red pixel
rccc = (rmask*r + cmask*c).astype(np.uint8)

# RCCC -> GRAY conversion
def rccc_demosaic(rccc, rmask, cmask, filt):
    # use border type REFLECT_101 to give correct results for border pixels
    filtered = cv2.filter2D(src=rccc, ddepth=-1, kernel=filt,
                            anchor=(-1, -1), borderType=cv2.BORDER_REFLECT_101)
    demos = (rmask*filtered + cmask*rccc).astype(np.uint8)
    return demos

# demo of the optimal filter
zeta = 0.5
kernel_4neighbor = np.array([[0, 0, 0, 0, 0],
                             [0, 0, 1, 0, 0],
                             [0, 1, 0, 1, 0],
                             [0, 0, 1, 0, 0],
                             [0, 0, 0, 0, 0]])/4.0
kernel_optimal = np.array([[0, 0, -1, 0, 0],
                           [0, 0, 2, 0, 0],
                           [-1, 2, 4, 2, -1],
                           [0, 0, 2, 0, 0],
                           [0, 0, -1, 0, 0]])/8.0
kernel_param = np.array([[0, 0, -1./4, 0, 0],
                         [0, 0, 0, 0, 0],
                         [-1./4, 0, 1., 0, -1./4],
                         [0, 0, 0, 0, 0],
                         [0, 0, -1./4, 0, 0]])

# apply the optimal filter (Figure 7)
opt1 = rccc_demosaic(rccc, rmask, cmask, kernel_optimal)
# parametric filter with zeta = 0.5 (Figure 5)
opt2 = rccc_demosaic(rccc, rmask, cmask, kernel_4neighbor + zeta * kernel_param)

# PSNR (cast to float first to avoid uint8 wrap-around in the difference)
print(10 * np.log10(255**2/((c.astype(float) - opt1)**2).mean()))
print(10 * np.log10(255**2/((c.astype(float) - opt2)**2).mean()))


Input RGB image: [rgb]

Simulated RCCC image: [rccc]

Gray image from the de-mosaicing algorithm: [gray]



One more thing:



If your camera vendor provides an SDK for Linux, it may have an API to do the RCCC -> GRAY conversion, or at least get the raw image. If RCCC -> GRAY conversion is not in the SDK, the C# sample code should have it, so I suggest you take a look at their code.
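A real frame from this camera would carry 12-bit samples in 16-bit words, so the kernel above can be applied after scaling down to 8 bits. The sketch below assumes the question's 1280x720 geometry and a red pixel at (0, 0); the helpers `conv2_reflect101` and `demosaic_raw12` are hypothetical names, and the pure-NumPy convolution merely stands in for cv2.filter2D so the example is self-contained:

```python
import numpy as np

def conv2_reflect101(img, kernel):
    # correlation with REFLECT_101 borders, pure NumPy (stand-in for cv2.filter2D);
    # np.pad's 'reflect' mode matches OpenCV's BORDER_REFLECT_101
    k = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), k, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += kernel[dy + k, dx + k] * padded[k + dy: k + dy + img.shape[0],
                                                   k + dx: k + dx + img.shape[1]]
    return np.clip(out, 0, 255)

def demosaic_raw12(raw16, kernel):
    # demosaic a 12-bit RCCC frame stored in 16-bit words (red pixel at (0, 0) assumed)
    h, w = raw16.shape
    img8 = (raw16 >> 4).astype(np.uint8)   # drop the 4 LSBs: 12-bit -> 8-bit
    rmask = np.tile([[1, 0], [0, 0]], [h // 2, w // 2])
    filtered = conv2_reflect101(img8, kernel)
    return (rmask * filtered + (1 - rmask) * img8).astype(np.uint8)

kernel_optimal = np.array([[0, 0, -1, 0, 0],
                           [0, 0, 2, 0, 0],
                           [-1, 2, 4, 2, -1],
                           [0, 0, 2, 0, 0],
                           [0, 0, -1, 0, 0]]) / 8.0

# synthetic 12-bit frame standing in for a capture from /dev/video1
frame = np.random.randint(0, 4096, (720, 1280), dtype=np.uint16)
gray = demosaic_raw12(frame, kernel_optimal)
```

Whether the red site really sits at (0, 0) depends on the sensor's readout configuration, so the mask may need shifting for actual captures.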






  • Thanks for the solution! I'll test it as soon as I can. The vendor does provide an SDK, but it is for Windows only, and the output image is mosaiced (like the second image above).
    – Goodarz Mehr
    Nov 11 at 18:22











answered Nov 11 at 8:41 by dhanushka, edited Nov 12 at 11:04