Hi All,
I am trying to test some image analysis applications with the Huawei P9. Is it possible to extract two images (one from each camera) from a single shot? I know one of the cameras has a monochrome lens, and I know how to obtain just the monochrome image, but it would be extremely valuable if I could obtain both images from just one shot.
Looking forward to your assistance,
Josh
I don't want to dampen your enthusiasm, but from my tests there are no two images from a single shot.
I didn't approach this like an engineer; I only ran some empirical tests, and from those I gathered that:
- when you set up Monochrome mode, the P9 activates the left camera (on the left when facing the back of the phone)
- in all the other modes, the P9 activates the right camera (the one between the flash and the left camera)
The P9 doesn't create two images and then combine them; it always shoots just one. How did I come to this conclusion? You can try it at home:
I chose a few static subjects and photographed them with the phone on a tripod, first in the normal way and then while alternately covering each of the two cameras with black tape.
Both with the naked eye and with image comparison software (I used Beyond Compare from Scooter Software) I found no difference at all: no extra brightness, no extra contrast, no better image definition.
I tested in bright and dark environments, with PRO mode enabled and disabled, and I tried to be as thorough as I could (honestly, I skipped RAW mode and tested only JPEGs). My conclusion is that the two cameras do different jobs, but they are definitely NOT working together.
Thanks for testing, but did you also try this outside on a landscape view? Maybe then we'd see different results?
Otherwise this is yet ANOTHER thing Huawei lied about.
Yes, I did.
I'm thinking about making a full post with photo comparisons. Let's see.
ScareIT said:
Yes, I did.
I'm thinking about making a full post with photo comparisons. Let's see.
That would be nice!
Hey guys. I did a quick test shooting in bokeh mode, or the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, to extract two images from one shot, your best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Great oTToToTenTanz!
I can confirm that! Both cameras are essential to enable the wide aperture effect: when you try to shoot in bokeh mode with a lens covered, an alert appears asking you to check that the lens is clear, the blurred effect disappears, and it's impossible to edit the depth in post-production.
I have two hypotheses:
- the phone really combines the two pictures to reconstruct depth (a strategy used in all 3D cameras), so in some way it should be possible to get both pictures
- the phone uses the laser emitter to project IR around the subject, and the monochrome camera then captures the infrared information (considering that its sensor lacks an RGB filter, it will be very efficient at that) and stores it to obtain an accurate depth map (I mean something like this: https://www.youtube.com/watch?v=dgrMVp7fMIE)
Nice things to try!
Additional Info on Depth
oTToToTenTanz said:
Hey guys. I did a quick test shooting in bokeh mode, or the aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, to extract two images from one shot, your best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Hey oTToToTenTanz,
Really appreciate your (and everyone else's) help on this! Can you give me some more info on how you actually extract the depth info in a usable form e.g. a matrix? Does the image just produce an RGB-D image once saved?
Thanks so much,
Josh
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones; they'll look the same...
As far as I understand it, there are two cases in which both cameras are used.
One is for the wide-aperture ("bokeh") mode, in which a depth map is created from both pictures that have a slightly different perspective. I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
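The "way too large" observation is easy to check programmatically: a JPEG stream ends with the End-Of-Image marker FF D9, so any bytes stored past it are appended data. Below is a minimal sketch written for this thread, not anything from Huawei; it assumes a baseline JFIF layout and walks the marker segments so that an EXIF thumbnail's own EOI marker doesn't trip it up:

```python
def trailing_data(jpeg: bytes) -> bytes:
    """Return bytes stored after the main image's EOI (FF D9) marker.

    Walks the marker segments so that a thumbnail embedded inside an
    APPn segment (which contains its own EOI) is skipped correctly.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("missing SOI marker; not a JPEG")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            raise ValueError("corrupt marker segment stream")
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            eoi = jpeg.find(b"\xff\xd9", i)
            if eoi == -1:
                raise ValueError("no EOI marker after SOS")
            return jpeg[eoi + 2:]
        # every other segment carries a 2-byte big-endian length
        seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")
        i += 2 + seg_len
    return b""
```

Running this on a wide-aperture shot versus a normal one should show a large trailing blob only for the former, if the theory above is right.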
The other case is landscape shots in low light. Several people reported that covering the second camera in this scenario results in much darker images. This seems like a silly limitation, but I believe I understand why it's there. The two images that the cameras take differ in perspective (obviously, due to the fact that the cameras are mounted next to each other), which is quite difficult to adjust for when trying to combine both sensors' data. However, when focusing at infinity, for example when taking landscape shots, the difference in perspective is negligible, so that in this case the two sensors' data can be easily combined to improve low-light performance.
Maybe it would be possible to combine both sensors' output at closer distances in a satisfactory way, but it seems that Huawei chose not to implement that. If I find a way to extract the second sensor's data from a wide-aperture image, I'll poke around a bit to see if it would be possible to combine them.
I did some poking around on my lunch break. I threw a wide-aperture image into JPEGsnoop and it came up with two images in the file (four if you count the thumbnails, as well), the first one being the processed, "bokeh" image, while the second is the original color image without any processing. I assume that this is the image that is used to re-process the wide-aperture image when editing the focus point or aperture through the gallery app.
JPEGsnoop also told me that there's more data after the image segments. Since it couldn't work out what that data is for (it is past the end of the actual JFIF file), I checked it out using a hex editor. I found a marker "edof" (extended depth-of-field?) followed by what looks like some header data, followed by lots of repeating bytes. This block is about 1/16 the size of the image in pixels (so 1 byte for each 4x4 pixel block). I'm not sure whether that's a small greyscale version of the image itself or a depth map, but I suspect it's the latter.
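For anyone who wants to reproduce this without JPEGsnoop, a rough scan for embedded SOI markers and the "edof" tag takes only a few lines. This is a heuristic sketch of my own, not what JPEGsnoop actually does: the byte sequence FF D8 FF can in principle also occur inside compressed data, so treat the hits as candidates, not proof.

```python
def scan_jpeg(data: bytes):
    """List byte offsets of candidate embedded JPEG SOI markers and
    the offset of the 'edof' tag (-1 if absent)."""
    soi_offsets = []
    pos = data.find(b"\xff\xd8\xff")  # SOI followed by another marker byte
    while pos != -1:
        soi_offsets.append(pos)
        pos = data.find(b"\xff\xd8\xff", pos + 1)
    edof_offset = data.find(b"edof")
    return soi_offsets, edof_offset
```

On a wide-aperture file you would expect at least two SOI hits (plus thumbnails) and a valid "edof" offset near the end of the file.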
So, I'm afraid that it will be impossible to extract the monochrome image sensor data from a wide-aperture image, as it's not there anymore.
PerpulaX said:
I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
I can confirm that. I took a few shots of a single subject (always using a tripod):
- the pictures in normal mode, and in wide aperture mode with the B&W camera covered, come in at 2.5 MB (max resolution); the photo's Title/Subject/Description is marked "edh"
- the same subject in wide aperture mode (with the B&W camera fully working) comes in at 5.5 MB (more than double); the photo's Title/Subject/Description is marked "edf"; if this photo is opened in image editing software, no alpha layers or other visual information appears anywhere; if the photo is saved back, its size becomes comparable to that of the same photo without the wide aperture effect
Since the depth information doesn't appear in any editing software, I suppose it is hidden inside the JPEG file with some kind of steganography technique. I tried examining the file with some ready-to-use tools (like stegdetect, which should be able to detect whether a JPEG file is standard or has something hidden in it), but I only got some mismatching-header errors, nothing that tells me where and how the depth information is stored or, above all, whether the black-and-white picture is also stored inside.
The camera seems to be making two images for every shot. You can, for instance, take a picture and then edit it with the onboard effects. If I make the picture, e.g., partially B&W, I can see that it uses an original B&W picture taken with the original shot. This is not an artificial B&W conversion.
The question is where it is stored, or whether the necessary information is only "combined"?
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is encoded as 8 bits per pixel at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" and "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF data is the original shot, without blur processing
That explains why you can re-edit your picture anytime on your P9, even after renaming it... or you can simply have fun with the depth map, for instance for cutting out subjects in Photoshop.
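The HxD recipe above can be scripted. The sketch below assumes, as described in this thread, an 8-bit 992x744 depth map sitting after an "edof" tag at the tail of the file, and wraps it as a binary PGM so any image viewer can open it. The exact header layout between the tag and the pixel data is not documented, so taking the final width x height bytes of the file is an assumption, not a verified offset:

```python
WIDTH, HEIGHT = 992, 744  # depth-map size reported earlier in the thread

def extract_depth_pgm(jpeg_bytes: bytes) -> bytes:
    """Pull the trailing 8-bit depth map and wrap it as a binary PGM image."""
    if b"edof" not in jpeg_bytes:
        raise ValueError("no 'edof' tag found; not a wide-aperture shot?")
    n = WIDTH * HEIGHT
    pixels = jpeg_bytes[-n:]  # assumption: the map occupies the file's tail
    if len(pixels) < n:
        raise ValueError("file too small to hold the depth map")
    header = f"P5\n{WIDTH} {HEIGHT}\n255\n".encode()
    return header + pixels
```

Save the returned bytes to a `.pgm` file and open it in GIMP or IrfanView to eyeball whether it actually looks like a depth map.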
Made a python script to automate the EDOF and image extraction. It's simple but it works.
https://github.com/jpbarraca/dual-camera-edof
zoubla88 said:
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is encoded as 8 bits per pixel at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" and "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF data is the original shot, without blur processing
That explains why you can re-edit your picture anytime on your P9, even after renaming it... or you can simply have fun with the depth map, for instance for cutting out subjects in Photoshop.
Can you explain what is possible to do in post-process? What can I do with the photo?
You can do exactly the same thing as the Huawei gallery app (at least).
For Photoshop, there are plenty of tutorials on using depth maps with the Lens Blur filter.
ScareIT said:
Yes, I did.
I'm thinking about making a full post with photo comparisons. Let's see.
Looking forward to more details and shared experiences from you!
Tijauna said:
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones; they'll look the same...
Hi!
I think the P9 does take two pictures and combine them in low-light conditions. Here are two examples where something went wrong with the combination and the two images became visible: https://goo.gl/photos/cK5q2TEisEU7rmpz9
What do you think?
Abel
So the file size increases when the B&W lens is uncovered, but there's no actual benefit to the picture? Damn it, as useless as interpolation!
How do I take B&W photos, but without the filters?
Just use a photo editor and reduce saturation to zero.
Also, there are many camera apps on the store that take B&W photos, so you can preview before taking the shot.
turdboman said:
How do I take B&W photos, but without the filters?
Not possible, as it's limited by hardware. The sensors on the S21 Ultra have a physical color filter array on them, which lets them capture the RGB spectrum.
So unless you are talented enough to remove this RGB filter and replace it with a monochrome one, you will have to rely on software simulation to enjoy B&W photos.
When it records the image, it records a grayscale value for each channel, so the B&W image is already there. Simply edit and save with the saturation at zero.
That's how I set up the proper contrast and black level, then bring up the saturation when editing sometimes.
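For what it's worth, "saturation at zero" boils down to a per-pixel grey computation. Editors vary in how they do it (HSL desaturation averages the max and min channels; grayscale conversion uses luma weights), so here is just the luma variant as a pure-Python sketch, using the standard Rec. 601 weights:

```python
def desaturate(pixels):
    """Map a list of (R, G, B) tuples to 8-bit grey values (Rec. 601 luma)."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]
```

Note how pure red maps to a fairly dark grey while pure green maps to a light one; the weights reflect the eye's sensitivity, which is why zero-saturation output doesn't look like a naive channel average.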
Or shoot in RAW mode and post-edit that image, since you have all the raw sensor data. I doubt you'll see much, if any, difference...
This isn't rocket science.
I downloaded the Wichaya_V1.4 GCam on my Zenfone 8 and found that it can record 64MP JPEGs as well as the normal 16MP and RAW file options.
I did a quick test to compare the quality and usability of each version: I took a simple shot that included highlight, shadow, fine detail and strong/subtle colours. The attached screen-grabs are from a small central area of the picture.
The storage sizes of the shots are 1.8 MB, 6.2 MB and 13.7 MB respectively, so the RAW file is over 7x the size of the smaller JPEG. All details are shown at the same size so the quality and sharpness can be compared.
I found that the 64MP jpeg was both softer and grainier than the normal 16MP file. I sharpened it a little but I don’t think it offers any advantage over the smaller file because of the graininess. If you want a more retro film grain look this could be an advantage.
Straight from the camera, the RAW/DNG file is very flat and requires post-processing to get it into shape. It's a 16MP file that contains far more information than the JPEG recorded with it. I opened it in Capture One (Lightroom, Affinity Photo, etc. could be used) and spent time adjusting parameters to get what I considered the best version of the shot. There may be smartphone apps that could be used, but I prefer to do this on my laptop.
The outcome is that I will continue to shoot most pictures as normal 16MP jpegs but whenever there is something very special that I wish to capture I’ll switch to Raw + Jpeg mode. This gives me the regular Jpeg for reference and a Raw file to process later when I want to create the best quality of picture from the GCam.
Thank you for your review. Can you add a 64MP RAW shot and process it? It should give some extra detail.
Dave.a said:
Thank you for your review. Can you add a 64MP RAW shot and process it? It should give some extra detail.
A 64MP Raw option would be interesting and theoretically lead to higher quality results after post processing. Best to make a request to the developer, Wichaya.
I'm interested in getting the Zenfone 8. Does OpenCamera shoot in 64mp? I'd love to see some examples!
ActiveWave said:
I'm interested in getting the Zenfone 8. Does OpenCamera shoot in 64mp? I'd love to see some examples!
I don't know whether Open Camera can record 64MP images, as this isn't mentioned on its website, but it's free to download and worth trying out. In a previous post I compared GCam, Asus camera and Open Camera night shots. Open Camera gave poor results, so I personally would not use it, as the quality of low-light shots is important to me.
Tom100% said:
I downloaded the Wichaya_V1.4 GCam on my Zenfone 8 and found that it can record 64MP JPEGs as well as the normal 16MP and RAW file options.
I did a quick test to compare the quality and usability of each version: I took a simple shot that included highlight, shadow, fine detail and strong/subtle colours. The attached screen-grabs are from a small central area of the picture.
The storage sizes of the shots are 1.8 MB, 6.2 MB and 13.7 MB respectively, so the RAW file is over 7x the size of the smaller JPEG. All details are shown at the same size so the quality and sharpness can be compared.
I found that the 64MP jpeg was both softer and grainier than the normal 16MP file. I sharpened it a little but I don’t think it offers any advantage over the smaller file because of the graininess. If you want a more retro film grain look this could be an advantage.
Straight from the camera, the RAW/DNG file is very flat and requires post-processing to get it into shape. It's a 16MP file that contains far more information than the JPEG recorded with it. I opened it in Capture One (Lightroom, Affinity Photo, etc. could be used) and spent time adjusting parameters to get what I considered the best version of the shot. There may be smartphone apps that could be used, but I prefer to do this on my laptop.
The outcome is that I will continue to shoot most pictures as normal 16MP jpegs but whenever there is something very special that I wish to capture I’ll switch to Raw + Jpeg mode. This gives me the regular Jpeg for reference and a Raw file to process later when I want to create the best quality of picture from the GCam.
How were you able to change the resolution? I can't seem to find it in the (Wichaya) settings.
xXeqiunoxXx said:
How were you able to change the resolution? I can't seem to find it in the (Wichaya) settings.
Switch to 64MP mode in the GCam interface, not in the Wichaya settings. See the attached screenshot.