Corrupted image instead of photo at some zoom levels - Huawei P20 Pro Questions & Answers

When I take a photo, usually within an app, at some zoom levels I get a corrupted image like the one attached.
Has anyone seen this issue?

Related

Improving the stock "gallery" app

Is there any way to improve the gallery app by allowing zooming of images up to 100% of their resolution? Even with photos taken with the camera, if you zoom in, the quality looks poor because of the downscaled image.
I heard somewhere that this bug is fixed in Froyo - but can't remember where exactly
navmanyeah said:
Is there any way to improve the gallery app by allowing zooming of images up to 100% of their resolution? Even with photos taken with the camera, if you zoom in, the quality looks poor because of the downscaled image.
Install "large image viewer"

Why does a picture taken in portrait SAVE in landscape, even if rotated?

As stated...
I take a picture in portrait mode, click the little button to view the picture I took (inside the camera app), and it saves the picture sideways in landscape mode.
But when I go to the Gallery app to view the picture, it displays correctly in portrait mode. So if I try to upload it to Facebook or send it as an MMS, it turns the picture back to landscape mode and sends it like that, even though it displayed correctly in the Gallery. Even if I save the photo with it rotated incorrectly (thinking it would turn the correct way when I sent or uploaded it), it still shows up incorrectly.
The only fix I have found is to go to the Gallery, view the photo, and crop it (you don't actually have to crop anything, just select the whole picture) with it in the correct orientation; it will then save a new picture in the correct orientation.
Come on, there has to be a fix for why it saves the picture incorrectly. I shouldn't have to do a full-picture crop every time I want to upload a picture or send it via MMS.
bump....
is this a common topic or what??
Use Vignette to take photos and JustPictures as a gallery replacement; set the resolution to high in JustPictures. This will keep photos sharper when zooming and cropping. The stock gallery app plain stinks.
Swyped.....you may have to interpret

Extracting Both Images from P9 Dual Camera

Hi All,
I am trying to test some image analysis applications with the Huawei P9. Is it possible to extract two images (one from each camera) from a single shot? I know one of the cameras has a monochrome lens, and I know how to obtain just the monochrome image, but it would be extremely valuable if I could obtain both images from just one shot.
Looking forward to your assistance,
Josh
I don't want to dampen your enthusiasm, but from my tests, there are no two images from one shot.
I didn't approach my tests like an engineer; I only ran some empirical tests, and from these I gathered that:
- when you set up Monochrome mode, the P9 activates the left camera (on the left when facing the back of the phone)
- with all the other modes, the P9 activates the right camera (the one between the flash and the left camera)
The P9 doesn't create two images and then combine them; it always shoots just one. How did I come to this conclusion? You can try it at home:
I chose a few static subjects and took my photos with the phone on a tripod, then did many shoots in the normal way and also with the two cameras alternately covered with black tape.
Both by naked eye and using image comparison software (I used Beyond Compare from Scooter Software), I found no difference at all: no extra brightness, no extra contrast, no better image definition.
I tested in a bright environment and in a dark one, with PRO mode enabled and disabled, and I tried to make the testing as complete as I could (honestly, I skipped RAW mode and tested only JPEGs), but my conclusion is that the two cameras are doing different jobs and are definitely NOT working together.
Thanks for testing, but did you also try this outdoors on a landscape view? Maybe then we would see different results?
Otherwise this is yet ANOTHER thing Huawei lied about.
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see.
ScareIT said:
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see.
That would be nice!
Hey guys. I did a quick test shooting in bokeh mode, or aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, to extract two images from one shot, your best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Great oTToToTenTanz!
I confirm that! Both cameras are essential to enable the wide aperture effect: when you try to shoot in bokeh mode, an alert appears asking you to check that the lens is clear, the blurred effect disappears, and it's impossible to edit the depth in post-production.
I have two hypotheses:
- the phone really combines the two pictures in order to recreate the depth (a strategy used in all 3D cameras), so in some way there should be a possibility to get both pictures
- the phone uses the laser pointer to shoot IR around the subject; the monochrome camera then captures the infrared information (and, considering that its sensor lacks the RGB filter, it will be very efficient at that) and stores it in order to obtain an accurate depth map (I mean something like this: https://www.youtube.com/watch?v=dgrMVp7fMIE)
Nice things to try!
Additional Info on Depth
oTToToTenTanz said:
Hey guys. I did a quick test shooting in bokeh mode, or aperture effect (I guess you know what I mean). If you cover the black-and-white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it's supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, to extract two images from one shot, your best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Hey oTToToTenTanz,
Really appreciate your (and everyone else's) help on this! Can you give me some more info on how you actually extract the depth info in a usable form e.g. a matrix? Does the image just produce an RGB-D image once saved?
Thanks so much,
Josh
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself, as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones; they'll look the same...
As far as I understand it, there are two cases in which both cameras are used.
One is for the wide-aperture ("bokeh") mode, in which a depth map is created from both pictures that have a slightly different perspective. I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
The other case is landscape shots in low light. Several people reported that covering the second camera in this scenario results in much darker images. This seems like a silly limitation, but I believe I understand why it's there. The two images that the cameras take differ in perspective (obviously, due to the fact that the cameras are mounted next to each other), which is quite difficult to adjust for when trying to combine both sensors' data. However, when focusing at infinity, for example when taking landscape shots, the difference in perspective is negligible, so that in this case the two sensors' data can be easily combined to improve low-light performance.
Maybe it would be possible to combine both sensors' output at closer distances in a satisfactory way, but it seems that Huawei chose not to implement that. If I find a way to extract the second sensor's data from a wide-aperture image, I'll poke around a bit to see if it would be possible to combine them.
I did some poking around on my lunch break. I threw a wide-aperture image into JPEGsnoop and it came up with two images in the file (four if you count the thumbnails, as well), the first one being the processed, "bokeh" image, while the second is the original color image without any processing. I assume that this is the image that is used to re-process the wide-aperture image when editing the focus point or aperture through the gallery app.
JPEGsnoop also told me that there's more data after the image segments. Since it couldn't work out what that data is for (it's past the end of the actual JFIF file), I checked it out using a hex editor. I found a marker "edof" (extended depth-of-field?) followed by what looks like some header data, followed by lots of repeating bytes. This block is about 1/16 the size of the image in pixels (so one byte for each 4x4 pixel block). I'm not sure whether that's a small greyscale version of the image itself or a depth map, but I suspect it's the latter.
So, I'm afraid that it will be impossible to extract the monochrome image sensor data from a wide-aperture image, as it's not there anymore.
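For what it's worth, the two-images-per-file finding can be checked without JPEGsnoop. Here's a minimal Python sketch (the multi-image layout is an assumption taken from the post above, not a documented Huawei format) that lists the offsets of every JPEG start-of-image and end-of-image marker; note that EXIF thumbnails carry their own SOI/EOI pair, so expect extra hits, which fits the "four if you count the thumbnails" count:

```python
# Locate every occurrence of a two-byte JPEG marker in a file's raw bytes.
# SOI = b"\xff\xd8" (start of image), EOI = b"\xff\xd9" (end of image).
# Multiple SOI hits suggest multiple embedded JPEG streams (or thumbnails).

def find_markers(data: bytes, marker: bytes) -> list:
    offsets, pos = [], 0
    while True:
        hit = data.find(marker, pos)
        if hit == -1:
            return offsets
        offsets.append(hit)
        pos = hit + len(marker)

# Usage (the filename is hypothetical):
# with open("wide_aperture_shot.jpg", "rb") as f:
#     data = f.read()
# print(find_markers(data, b"\xff\xd8"))  # offsets of each embedded image
```

Comparing the SOI count of a normal shot against a wide-aperture shot should show the extra embedded image.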
PerpulaX said:
I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
I confirm that: I did a few shots of a single subject (always using a tripod):
- the pictures in normal mode, and in wide aperture mode with the B&W camera covered, come to 2.5 MB (max resolution); the photo's Title/Subject/Description is marked "edh"
- the same subject in wide aperture mode (with the B&W camera fully working) comes to 5.5 MB (more than double); the photo's Title/Subject/Description is marked "edf"; if this photo is opened in image editing software, no alpha layers or other visual information appear anywhere; if the photo is saved back, its size becomes comparable to the same photo without the wide aperture effect
As the depth information doesn't appear in any editing software, I suppose it's hidden inside the JPEG file with some kind of steganography technique. I tried examining the file with some ready-to-use tools (like stegdetect, which should be able to detect whether a JPEG is standard or has something hidden in it), but I only get mismatching-header errors, nothing that lets me understand where and how the depth information is stored and, primarily, whether the black-and-white picture is also stored inside.
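It may not be steganography in the strict sense: the simplest layout consistent with the doubled file size is extra data plainly appended after the JPEG's end-of-image (EOI) marker, which would also explain stegdetect's header errors. A quick check for that, assuming that layout (a sketch, not a documented format):

```python
# Measure how many bytes follow the last EOI (b"\xff\xd9") marker.
# A standard JPEG ends at EOI; a large trailing block suggests data
# appended after the image (e.g. a depth map) rather than data hidden
# inside the JPEG's own segments.

def trailing_bytes(data: bytes) -> int:
    eoi = data.rfind(b"\xff\xd9")
    if eoi == -1:
        return 0  # no EOI marker: not a complete JPEG
    return len(data) - (eoi + 2)
```

On a normal shot this should be (near) zero; on a wide-aperture shot it should be roughly the size of the extra block.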
The camera seems to be taking two images for every shot. You can, for instance, take a picture and then edit it with the on-board effects. If I make the picture, e.g., partially B&W, I can see that it uses a real B&W picture taken with the original shot; this is not an artificial B&W conversion.
The question is where it is stored, or whether the necessary information is only "combined".
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is encoded as 8-bit values at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" and "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF is the original shot, without blur processing
So that explains why you can re-edit your picture any time on your P9, even after renaming it... or simply have fun with the depth map, for cutting out subjects in Photoshop for instance.
Made a python script to automate the EDOF and image extraction. It's simple but it works.
https://github.com/jpbarraca/dual-camera-edof
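The extraction zoubla88 describes can also be sketched in a few lines of Python. This assumes the 992x744 8-bit depth map really is the very last block in the file (verify the "edof"/"DepthEn" tags in a hex editor first) and writes it out as a binary PGM, which most image viewers can open:

```python
# Extract the trailing 8-bit depth map from a P9 wide-aperture JPEG and
# save it as a binary PGM. The dimensions and the "map is the last block"
# layout are assumptions from this thread, not a documented format.

def extract_depth_map(jpeg_path: str, pgm_path: str,
                      width: int = 992, height: int = 744) -> None:
    with open(jpeg_path, "rb") as f:
        data = f.read()
    if b"edof" not in data:
        raise ValueError("no 'edof' tag found; not a wide-aperture shot?")
    depth = data[-width * height:]          # last width*height bytes
    with open(pgm_path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))  # binary PGM header
        f.write(depth)
```

For the full thing (depth map plus the hidden original JPEG), the script linked above does both.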
zoubla88 said:
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is encoded as 8-bit values at the end of the file; use the HxD editor to extract the image (look for the ASCII tags "edof" and "DepthEn")
- the displayed JPG is the one saved to your SD card after blur processing
- the hidden JPEG in the EXIF is the original shot, without blur processing
So that explains why you can re-edit your picture any time on your P9, even after renaming it... or simply have fun with the depth map, for cutting out subjects in Photoshop for instance.
Can you explain what's possible in post-processing? What can I do with the photo?
You can do exactly the same thing as the Huawei gallery app (at least).
For Photoshop there are plenty of tutorials using Depth Maps with the Lens Blur plugin
ScareIT said:
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see.
Looking forward to more details and shared experience from you.
Tijauna said:
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself, as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones; they'll look the same...
Hi!
I think the P9 does take two pictures and combine them in low-light conditions. Here are two examples where something went wrong with the combination and the two images became visible: https://goo.gl/photos/cK5q2TEisEU7rmpz9
What do you think?
Abel
So the file size increases when the B&W lens is uncovered, but it gives no actual benefit to the picture? Damn it, as useless as interpolation!

P20 Pro Zoom Issue After EMUI 9 Update and Artefacts Present When Zooming

After installing the EMUI 9 update, when I zoom, especially at 10x, the photos taken are not what appears on the screen before taking them. Instead, the photo captures something lower down than what I was aiming at. For example, if I try to take a picture of the B key on my laptop, it takes a picture of the space bar.
Furthermore (and this hasn't only been occurring since the EMUI 9 update), very small artefacts that stay in the same position are present at high zoom, especially 10x, when the camera focuses on an object (they are invisible at no zoom and only become slightly visible at 5x). They appear as small white dots (most visible when you zoom in on something black in colour). However, they don't appear in the captured photo, because post-processing smooths it out. They are not present when taking pictures in pitch black, though, so surely they are not dead pixels? You can see them on video at 10x zoom. The lens is clean. Is this normal?
Any help is much appreciated,
Thank you.
I've investigated the first issue further, and I believe it must be something to do with the telephoto lens not always functioning, which started straight after updating to EMUI 9. Only the main sensor then takes the picture, hence why the final photo appears further down than the preview. Also, if I take a photo, look at it in the gallery and then go back to the camera, the preview image is shifted and the image quality is worse, which would make sense if the telephoto lens is not working when zooming. However, it does seem to work sometimes.
Can someone else test this, as it must be a software issue?
I'm on EMUI 9.0.0.168
Thank you.
The telephoto lens preview and capture will only work when the phone decides that the light is sufficient. Otherwise it will use an interpolated image captured from the main sensor.
This has been discussed in other threads as well.

Panorama mode needs some improvement

Perhaps I'm doing it wrong, but I just can't get the panorama mode to work properly with the S10+.
I have 2 major gripes with it:
The panorama doesn't go all the way around, so I don't get a full 360 degree panoramic photo
The photos always come out distorted/stretched when viewed elsewhere
With issue one, it only seems to take about 350 degrees, so I end up with a stitch that doesn't quite work out when I view the photo later. Apps like Google Photos helpfully turn these types of images into something similar to being there and rotating on the spot but because the phone doesn't seem to take a full photo this doesn't work properly.
On issue two, when viewing the photos in Google Photos or even Facebook, they try to make these images into a photosphere type image, so 360 degrees along the X and Y axis. The trouble is, even though these photos are taken with the wide angle camera, it stretches the image upwards and distorts it to get that 360 view. LG wide angle cameras (at least on the G6 which is the last one I owned) add about 10 degrees to the top and the bottom of a pano photo using AI to match the colours at the top and bottom (e.g. extra blue at the top for sky). This results in a blurry patch at the top and bottom of the photosphere, but also prevents the main part of the image being distorted.
It's a shame about both of these "features", as otherwise I'm pretty happy with the camera, but having distorted panoramas is not great.
