Android Camera2 API Showing Processed Preview Image





















The new Camera2 API is very different from the old one. The part of the pipeline where manipulated camera frames are shown to the user confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function after some processing, while preserving efficiency and speed in the Camera2 API pipeline?



Example flow:

camera.addTarget(imageReader.getSurface()) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)

Workaround idea: send a Bitmap to an ImageView every time a new frame is processed.
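A rough sketch of that workaround idea (yuvToBitmap() is just a placeholder for whatever conversion/processing you do; width, height, imageView and backgroundHandler are assumed to already exist):

    // Receive frames, process them off the UI thread, then hand a Bitmap to an ImageView.
    ImageReader reader = ImageReader.newInstance(width, height,
            ImageFormat.YUV_420_888, /*maxImages=*/ 3);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;                    // frame already dropped
        Bitmap bitmap = yuvToBitmap(image);           // your own conversion + processing
        image.close();                                // release the Image promptly
        imageView.post(() -> imageView.setImageBitmap(bitmap));  // back on the UI thread
    }, backgroundHandler);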












































android android-camera

asked Sep 22 '15 at 19:32 by rcmalli
edited May 23 '17 at 11:33 by Community

  • Are you trying to do this for every preview frame, or only once in a while (for every high-resolution still capture, for example)? Depending on the rate and resolution, different display approaches might be more appropriate.
    – Eddy Talvala, Sep 23 '15 at 17:15

  • Every frame, actually. I know there will be frames that cannot be shown because of the time lost to image processing. If the YUV format gives me 30 fps on preview and I can process 20 of those 30 frames per second, I want to show those 20 frames on screen.
    – rcmalli, Sep 23 '15 at 19:38


























1 Answer

















answered Sep 22 '15 at 21:20 by Eddy Talvala, edited Sep 25 '15 at 18:54 (accepted, 14 upvotes)










Edit after clarification of the question; original answer at bottom



Depends on where you're doing your processing.



If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
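A minimal sketch of that path, assuming an existing RenderScript context rs, a Surface-backed view, and a user-written kernel (processingScript and its forEach_process are illustrative names, not part of the framework):

    // Output Allocation bound to the display Surface (e.g. from a SurfaceView).
    Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(width).setY(height).create();
    Allocation outputAlloc = Allocation.createTyped(rs, rgbaType,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
    outputAlloc.setSurface(surfaceView.getHolder().getSurface());

    // Per frame: run the processing kernel into outputAlloc, then push it to the screen.
    processingScript.forEach_process(inputAlloc, outputAlloc);
    outputAlloc.ioSend();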



If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
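Roughly, with EGL14 (display, config, and context setup omitted; drawProcessedFrame() stands in for your own GL rendering):

    // Wrap the view's Surface as an EGL window surface.
    int[] surfaceAttribs = { EGL14.EGL_NONE };
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, outputSurface, surfaceAttribs, 0);
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

    // Per frame: render the processed texture with your shaders, then present it.
    drawProcessedFrame();
    EGL14.eglSwapBuffers(eglDisplay, eglSurface);   // buffer goes to the screen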



If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.



If you're doing Java-level processing, that's really slow and you probably don't want to; but if you do, you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
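For the ImageWriter route (API 23+), a sketch might look like this; fillWithProcessedPixels() is a placeholder for copying your processed data into the Image's planes:

    // previewSurface comes from the SurfaceView/TextureView you want to display on.
    ImageWriter writer = ImageWriter.newInstance(previewSurface, /*maxImages=*/ 2);

    // Per processed frame:
    Image outImage = writer.dequeueInputImage();
    fillWithProcessedPixels(outImage);    // write processed pixels into its planes
    writer.queueInputImage(outImage);     // hands the buffer to the Surface for display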



Or as you say, draw to an ImageView every frame, but that'll be slow.




Original answer:



If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
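For example (error handling omitted):

    // JPEG case: the compressed bytes are all in plane 0.
    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] jpegBytes = new byte[buffer.remaining()];
    buffer.get(jpegBytes);
    image.close();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);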



If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
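A skeleton of what that conversion involves (the per-pixel YCbCr-to-RGB math is left out; row and pixel strides must be respected):

    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    int[] argb = new int[width * height];
    // For each output pixel: read Y using yPlane's rowStride/pixelStride, read the
    // subsampled U/V values at half resolution, apply the YCbCr -> RGB matrix,
    // and pack the result into argb[].
    Bitmap bitmap = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);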



If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
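Saving a DNG is straightforward with DngCreator; a sketch, assuming you have the CameraCharacteristics and the CaptureResult that produced the RAW_SENSOR Image (IOException handling omitted):

    DngCreator dngCreator = new DngCreator(cameraCharacteristics, captureResult);
    try (OutputStream out = new FileOutputStream(dngFile)) {
        dngCreator.writeImage(out, rawImage);   // rawImage is the RAW_SENSOR Image
    }
    dngCreator.close();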




























  • As I stated in the question, I asked how to show already-read and processed frames on screen, not how to convert data types, but thanks anyway.
    – rcmalli, Sep 22 '15 at 21:41

  • Updated the answer to hopefully match the question.
    – Eddy Talvala, Sep 25 '15 at 18:56

  • Very well explained. I assume this answer will guide people who use the new API. I am sorry about the unclear question at the beginning.
    – rcmalli, Sep 25 '15 at 19:09

  • @EddyTalvala, what do you suggest using for filtered video recording (with a preview shown)? Or maybe I should use a completely different approach?
    – Oleksandr, May 29 '16 at 15:10

  • If you want the recorded video to be filtered, you need to receive a frame from the camera, filter it, and then send it to the screen and to a video encoder. You can get a Surface from a MediaRecorder or MediaCodec, and send data to it from OpenGL by using the Surface to create a new EGLImage, or from Java with an ImageWriter, or from RenderScript with Allocation.ioSend(). Which works best depends on how you want to write your filter.
    – Eddy Talvala, May 31 '16 at 3:36
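As a concrete note on that last comment, obtaining the encoder's input Surface (so filtered frames can be rendered into it with EGL, just like the display Surface) might look like this; format is an assumed, pre-built MediaFormat:

    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    Surface encoderSurface = encoder.createInputSurface();  // only valid after configure(), before start()
    encoder.start();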










