In-camera HDR processing
Publication Date: 2015-Dec-07
The IP.com Prior Art Database
Digital cameras cannot process the wide range of luminance that the human eye can. This solution proposes intelligent exposure processing during the image capture.
Digital cameras cannot process the wide range of luminance that the human eye can. A scene that contains both bright and dark areas will be either under- or overexposed.
The current solution
For most digital cameras, the solution is to take multiple images at different exposures and merge them to create the final image. This is known as high dynamic range (HDR) processing and can be done automatically by the camera (if it has that function) or manually by the user in software. However, this solution either prevents use of the RAW format (because RAW images can't be merged in-camera) or requires post-processing by the user. Even then, images might still not preserve the entire range of luminance present in the scene.
Our solution proposes intelligent exposure processing during image capture. The exposure time is calculated on a per-pixel basis, comparing each pixel with its neighbours to ensure relative luminance is preserved. The advantages are:
- Only one image capture is required
- RAW format can be used (this format preserves the most information and is therefore preferred by professional and keen amateur photographers)
- No post-processing of images is required
- Images are much easier to take (the user does not have to select the camera's HDR option, if available, or make extra settings such as the number of separate images to take and the exposure for each)
- Because the user sees the result instantly, they can be sure they have a good image before leaving the scene
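The per-pixel idea above can be illustrated with a minimal sketch. The disclosure specifies no algorithm, so everything here is a hypothetical interpretation: linear sensor readings are kept as floating point, and each pixel is expressed as a ratio to the mean of its local neighbourhood, so relative luminance survives even when absolute luminance far exceeds the displayable range. NumPy and the function name `per_pixel_exposure` are assumptions for illustration only.

```python
import numpy as np

def per_pixel_exposure(linear, neighbourhood=1):
    """Hypothetical sketch of the proposed per-pixel exposure:
    express each pixel relative to its local neighbourhood mean,
    so relative luminance is preserved without clipping."""
    n = neighbourhood
    win = 2 * n + 1
    h, w = linear.shape
    padded = np.pad(linear, n, mode="edge")
    # Mean over a (2n+1)x(2n+1) window via shifted sums (simple box filter).
    acc = np.zeros_like(linear, dtype=float)
    for dy in range(win):
        for dx in range(win):
            acc += padded[dy:dy + h, dx:dx + w]
    local_mean = acc / (win * win)
    # Ratio of each pixel to its surroundings: relative luminance, unclipped.
    return linear / np.maximum(local_mean, 1e-6)

# Usage: a scene with a 10,000:1 brightness range still yields finite,
# meaningful ratios rather than a block of clipped 'pure white' pixels.
scene = np.array([[1.0, 2.0, 4.0],
                  [2.0, 4.0, 8.0],
                  [4.0, 8.0, 10000.0]])
ratios = per_pixel_exposure(scene)
```

The extremely bright corner pixel comes out brighter than its surroundings (ratio above 1) while the dim pixels come out below 1, which is exactly the relative information a clipped 8-bit capture would have thrown away.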
A digital camera can show the range of brightness in an image as a graph known as a histogram. It plots how many pixels are jet black (the low end of the graph) through to how many are pure white (the high end). With today's technology, once a pixel exceeds a certain brightness the camera bunches it together with every other 'too bright' pixel and records them all as pure white, and likewise at the dark end. Hence, when you try to photograph a scene with extremes at both ends of the scale, the picture degrades at one end or the other.
Fig 1. Overexposed histogram.
Fig 2. Underexposed histogram.
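The 'bunching' described above can be demonstrated with a short sketch, assuming a simulated scene whose brightness range exceeds the sensor's 0-255 output (the scene values and NumPy usage are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

# Simulated scene luminance spanning a wider range than the sensor can record.
rng = np.random.default_rng(0)
scene = rng.uniform(-100, 400, size=(64, 64))  # values outside 0..255 exist

# Today's behaviour: everything too bright or too dark is clipped ("bunched")
# into pure white (255) or jet black (0).
clipped = np.clip(scene, 0, 255).astype(np.uint8)

# Histogram of the clipped image: large spikes at both ends of the scale.
counts, _ = np.histogram(clipped, bins=256, range=(0, 256))
print("pixels bunched at black:", counts[0])
print("pixels bunched at white:", counts[255])
```

Every pixel that was originally below 0 or above 255 lands in the first or last bin, which is the pile-up visible in an over- or underexposed histogram.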
Our solution proposes intelligent exposure processing during the image capture.
At the point the user presses the button, the camera should take a snap reading of how each pixel's brightness relates to the others. It should effectively ignore the current exposure-time setting and, instead of declaring 'the rest are all white', continue to calculate how each pixel relates to the others. This builds up a graph that is not clipped at either end of the scale, so no information is lost.
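The difference between today's clipped reading and the proposed unclipped snap reading could be sketched as follows. This is a hypothetical illustration: the raw readings are simulated, and keeping them as floating point with histogram bins spanning the full measured range stands in for the per-pixel reading the disclosure describes.

```python
import numpy as np

# Hypothetical "snap reading": keep the raw per-pixel brightness values
# instead of clipping them at the sensor's white point.
rng = np.random.default_rng(1)
raw = rng.uniform(0.0, 4000.0, size=(64, 64))  # readings well beyond 255

# Clipped histogram (today): everything above 255 piles up in the last bin.
clipped_counts, _ = np.histogram(np.clip(raw, 0, 255),
                                 bins=256, range=(0, 256))

# Unclipped histogram (proposed): bins span the full measured range, so the
# graph is not truncated and each pixel keeps its place relative to the rest.
full_counts, _ = np.histogram(raw, bins=256, range=(0.0, raw.max()))
```

In the clipped version the last bin dwarfs every other bin; in the unclipped version the counts spread evenly across the full range, so no brightness information is lost.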
So the result would be data which would enable us to draw a hist...