AP0101CS
www.onsemi.com
The AdaCD Noise Reduction Filter is able to adapt its
noise filtering process to local image structure and noise
level, removing most objectionable color noise while
preserving edge details.
Black Level Subtraction and Digital Gain
After noise reduction, the pixel data undergoes black
level subtraction followed by multiplication of all pixel values by
a programmable digital gain. Digital gain can be adjusted
independently for each color channel through register settings.
Black level subtraction (to compensate for the sensor data pedestal)
is a single value applied to all color channels. If the black level
subtraction produces a negative result for a particular pixel,
the value of this pixel is set to 0.
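These two steps can be sketched as follows (a minimal NumPy model for illustration; the actual hardware operates on register-controlled fixed-point values):

```python
import numpy as np

def black_level_and_gain(raw, pedestal, gain):
    """Subtract the single-valued sensor pedestal from all color
    channels, clamp negative results to 0, then apply the
    programmable digital gain (register-controlled in the AP0101CS,
    modeled here as a plain multiplier)."""
    out = raw.astype(np.int64) - pedestal
    out = np.clip(out, 0, None)      # negative results are set to 0
    return out * gain

raw = np.array([10, 42, 100, 300])
corrected = black_level_and_gain(raw, pedestal=42, gain=2)
# the first pixel underflows the pedestal and is clamped to 0
```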
Positional Gain Adjustments (PGA)
Lenses tend to produce images whose brightness is
significantly attenuated near the edges. There are also other
factors causing fixed pattern signal gradients in images
captured by image sensors. The cumulative result of all these
factors is known as image shading. The AP0101CS has an
embedded shading correction module that can be
programmed to counter the shading effects on each
individual R, Gb, Gr, and B color signal.
The correction functions can then be applied to each pixel
value to equalize the response across the image as follows:
P_corrected(row, col) = P_sensor(row, col) × f(row, col)
(eq. 1)
where P denotes the pixel values and f is the color-dependent
correction function for each color channel.
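Equation 1 can be sketched as follows, substituting a hypothetical radial correction surface for the programmed per-channel functions (the quadratic falloff model and the strength knob k are illustrative assumptions, not AP0101CS registers):

```python
import numpy as np

def radial_correction_surface(shape, k):
    """Hypothetical color-dependent correction function f(row, col):
    unity gain at the optical center, rising quadratically toward
    the edges to counter lens falloff.  k is an illustrative
    strength parameter, not an AP0101CS register."""
    rows, cols = np.indices(shape)
    d2 = (rows - shape[0] / 2) ** 2 + (cols - shape[1] / 2) ** 2
    return 1.0 + k * d2 / d2.max()

def shading_correct(plane, f):
    """Eq. 1: P_corrected(row, col) = P_sensor(row, col) * f(row, col),
    applied to one color plane (R, Gr, Gb, or B)."""
    return plane * f

flat = np.full((4, 4), 100.0)          # a uniformly lit sensor plane
f = radial_correction_surface(flat.shape, k=0.5)
corrected = shading_correct(flat, f)   # corner pixels are boosted most
```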
Adaptive Local Tone Mapping (ALTM)
Real world scenes often have very high dynamic range
(HDR) that far exceeds the electrical dynamic range of the
imager. Dynamic range is defined as the luminance ratio
between the brightest and the darkest object in a scene. In
recent years many technologies have been developed to
capture the full dynamic range of real world scenes. For
example, the multiple exposure method is widely adopted for
capturing high dynamic range images: it combines a series of low
dynamic range images of the same scene, taken at different exposure
times, into a single HDR image.
Even though the new digital imaging technology enables
the capture of the full dynamic range, low dynamic range
display devices are the limiting factor. Today’s typical LCD
monitor has a contrast ratio of around 1,000:1, whereas it is not
atypical for an HDR image to have a contrast ratio of around
250,000:1. Therefore, in order to reproduce HDR images on
a low dynamic range display device, the captured high
dynamic range must be compressed to the available range of
the display device. This is commonly called tone mapping.
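Expressed in bits, the ratios quoted above make the required compression concrete:

```python
import math

# Contrast ratios quoted above, expressed as bit depths (log2 of the ratio)
scene_bits = math.log2(250_000)   # ~17.9 bits of scene dynamic range
display_bits = math.log2(1_000)   # ~10.0 bits available on a typical LCD
compression = scene_bits - display_bits   # ~8 bits must be tone mapped away
```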
Tone mapping methods can be classified into global tone
mapping and local tone mapping. Global tone mapping
methods apply the same mapping function to all pixels.
While global tone mapping methods provide
computationally simple and easy to use solutions, they often
cause loss of contrast and detail. Local tone mapping is thus
necessary in addition to global tone mapping for the
reproduction of visually more appealing images that also
reveal scene details that are important for automotive safety
and surveillance applications. Local tone mapping methods
use a spatially varying mapping function determined by the
neighborhood of each pixel, which allows them to increase the local
contrast and the visibility of some details of the image. Local
methods usually yield more pleasing results because they
exploit the fact that human vision is more sensitive to local
contrast.
ON Semiconductor’s ALTM solution significantly
improves the performance over global tone mapping.
ALTM is directly applied to the Bayer domain to compress
the dynamic range from 20−bit to 12−bit. This allows the
regular color pipeline to be used for HDR image rendering.
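For contrast, the purely global mapping that ALTM improves upon can be sketched in a few lines. The 20-bit input and 12-bit output ranges match the text, but the power curve and its exponent are illustrative assumptions, not the ALTM algorithm:

```python
import numpy as np

def global_tone_map(bayer20, gamma=0.45):
    """Compress 20-bit Bayer data to 12-bit with a single global power
    curve.  Every pixel gets the same mapping, so local contrast is
    lost; this is the limitation local tone mapping addresses.
    gamma=0.45 is an illustrative choice, not an AP0101CS parameter."""
    x = bayer20 / float(2**20 - 1)      # normalize to [0, 1]
    y = np.power(x, gamma)              # global compression curve
    return np.round(y * (2**12 - 1)).astype(np.uint16)

dark, bright = global_tone_map(np.array([1000, 2**20 - 1]))
# the full 20-bit input range now fits in the 12-bit output range
```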
Color Interpolation
In the raw data stream fed by the sensor core to the IFP,
each pixel is represented by a 20− or 12−bit integer number,
which can be considered proportional to the pixel’s response
to a one−color light stimulus, red, green, or blue, depending
on the pixel’s position under the color filter array. Initial data
processing steps, up to and including ALTM, preserve the
one−color−per−pixel nature of the data stream, but after
ALTM it must be converted to a three−colors−per−pixel
stream appropriate for standard color processing. The
conversion is done by an edge−sensitive color interpolation
module. The module pads the incomplete color information
available for each pixel with information extracted from an
appropriate set of neighboring pixels. The algorithm used to
select this set and extract the information seeks the best
compromise between preserving edges and filtering out
high frequency noise in flat field areas. The edge threshold
can be set through register settings.
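The edge-sensitive selection can be illustrated with a classic gradient-based rule (a sketch only; the AP0101CS's actual interpolation kernel is not published, and the threshold here merely plays the role of the register-set edge threshold):

```python
import numpy as np

def green_at_red(bayer, r, c, threshold=0):
    """Interpolate the missing green value at a red Bayer site (r, c):
    average along the direction with the smaller gradient so edges
    are preserved; when both gradients fall below the threshold
    (flat field), average all four neighbors to filter out noise."""
    h_grad = abs(int(bayer[r, c - 1]) - int(bayer[r, c + 1]))
    v_grad = abs(int(bayer[r - 1, c]) - int(bayer[r + 1, c]))
    if h_grad <= threshold and v_grad <= threshold:   # flat field
        return (int(bayer[r, c - 1]) + int(bayer[r, c + 1]) +
                int(bayer[r - 1, c]) + int(bayer[r + 1, c])) // 4
    if h_grad < v_grad:     # values vary less horizontally: a vertical edge
        return (int(bayer[r, c - 1]) + int(bayer[r, c + 1])) // 2
    return (int(bayer[r - 1, c]) + int(bayer[r + 1, c])) // 2

# Strong vertical gradient (10 vs 200): interpolate horizontally instead
bayer = np.array([[0, 10, 0],
                  [100, 0, 102],
                  [0, 200, 0]])
g = green_at_red(bayer, 1, 1)
```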
Color Correction and Aperture Correction
To achieve good color fidelity of the IFP output,
interpolated RGB values of all pixels are subjected to color
correction. The IFP multiplies each vector of three pixel
colors by a 3 x 3 color correction matrix. The three
components of the resulting color vector are all sums of three
10−bit numbers. The color correction matrix can be either
programmed by the user or automatically selected by the
auto white balance (AWB) algorithm implemented in the
IFP. Color correction should ideally produce output colors
that are corrected for the spectral sensitivity and color
crosstalk characteristics of the image sensor. The optimal
values of the color correction matrix elements depend on
those sensor characteristics and on the spectrum of light
incident on the sensor. The color correction variables can be
adjusted through register settings.
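The per-pixel matrix multiply can be sketched as follows (the matrix values are illustrative placeholders, not AP0101CS defaults or AWB-selected coefficients):

```python
import numpy as np

# Illustrative 3x3 color correction matrix: off-diagonal terms undo
# color crosstalk, and each row sums to 1 so neutral gray is preserved.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

def color_correct(rgb):
    """Multiply one interpolated (R, G, B) vector by the 3x3 color
    correction matrix, as the IFP does for every pixel, clamping
    each component to the valid 10-bit range."""
    out = CCM @ np.asarray(rgb, dtype=float)
    return np.clip(out, 0, 1023)

gray = color_correct([512, 512, 512])   # a neutral gray is preserved
```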
To increase image sharpness, a programmable 2D
aperture correction (sharpening filter) is applied to