IJSRD - International Journal for Scientific Research & Development | Vol. 3, Issue 11, 2016 | ISSN (online): 2321-0613
A Review on Removal of Shadow from a Single Image
Shoyeb Pathan1 S. S. Banait2
1Student 2Assistant Professor
1,2Department of Computer Engineering
1,2K.K.Wagh College of Engineering, Nashik, Savitribai Phule Pune University, Maharashtra, India.
Abstract— This paper reviews a framework for detecting shadows and removing them from a single real-world image. Until now, work on shadow detection has required considerable effort in the design of hand-crafted features. Shadows distort images in computer vision tasks; for instance, they degrade the performance of object recognition and scene analysis. Existing shadow removal techniques do not perform well on curved surfaces or on highly non-uniform shadows, and for shadows in dark environments they tend to over-increase the contrast of the recovered region. To address these issues, the reviewed framework learns the relevant features automatically through deep neural networks, and a Bayesian formulation accurately extracts the shadow matte and removes the shadows. The formulation is based on a novel model which accurately describes the shadow generation process in both the shadowed and non-shadowed parts of the image. The method improves image quality on curved surfaces and the visual quality of photographs and real-world images.
Key words: Bayesian shadow removal, Conditional Random
Field, Convolutional Neural Networks (ConvNets), Shadow
Matting
I. INTRODUCTION
In computer graphics applications there are many algorithms for image inpainting; this review focuses on shadow inpainting. Shadows are a frequently occurring natural phenomenon, whose detection and manipulation are important in many computer vision (e.g., visual scene understanding) and computer graphics applications.
Shadows have been used to infer object shape, size, movement, the number of light sources and the illumination conditions. Shadows have a particular practical importance
in augmented reality applications, where the illumination
conditions in a scene can be used to seamlessly render
virtual objects and their cast shadows. Shadows can also
cause complications in many fundamental computer vision
tasks. For instance, they can degrade the performance of
object recognition, stereo, shape reconstruction, image
segmentation and scene analysis. In digital photography,
information about shadows and their removal can
help to improve the visual quality of photographs. Inspired
by the hierarchical architecture of the human visual cortex,
many deep representation learning architectures have been
proposed in the last decade. We draw our motivation from
the recent successes of these deep learning methods in many
computer vision tasks where learned features out-performed
hand-crafted features. On that basis, we propose to use
multiple convolutional neural networks (ConvNets) to learn
useful feature representations for the task of shadow
detection. Our formulation is based on a generalized shadow
generation model which models both the umbra and
penumbra regions. To the best of our knowledge, we are the first to use learned features in the context of shadow detection, as opposed to the common carefully designed and hand-crafted features [1].
The proposed shadow detection approach combines
local information at image patches with the local
information across boundaries. Since the regions and the
boundaries exhibit different types of features, we split the
detection procedure into two respective portions. Separate
ConvNets are consequently trained for patches extracted
around the scene boundaries and the super-pixels.
Predictions made by the ConvNets are local and we
therefore need to exploit the higher level interactions
between the neighbouring pixels. For this purpose, we
incorporate local beliefs in a Conditional Random Field
(CRF) model which enforces the labelling consistency over
the nodes of a grid graph defined on an image. This removes
isolated and spurious labelling outcomes and encourages
neighbouring pixels to adopt the same label.
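The CRF smoothing step described above can be sketched in miniature. The toy below uses iterated conditional modes (ICM) with a Potts pairwise penalty as a simple stand-in for the inference actually used in the reviewed method; the function name, `beta`, and iteration count are illustrative assumptions.

```python
import numpy as np

def icm_smooth(prob_shadow, beta=1.0, iters=5):
    """Clean up per-pixel shadow probabilities with a grid CRF.

    Unary term: negative log of the local (e.g. ConvNet) beliefs.
    Pairwise term: Potts penalty `beta` for each neighbouring pixel
    that disagrees. Inference is iterated conditional modes (ICM),
    a simple stand-in for the paper's CRF inference.
    """
    p = np.clip(prob_shadow, 1e-6, 1 - 1e-6)
    unary = np.stack([-np.log(1 - p), -np.log(p)])   # cost of label 0 / 1
    labels = (p > 0.5).astype(int)
    h, w = labels.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= yy < h and 0 <= xx < w]
                costs = [unary[l, y, x] + beta * sum(n != l for n in nbrs)
                         for l in (0, 1)]
                labels[y, x] = int(np.argmin(costs))
    return labels

# A lone spurious "shadow" belief inside a confident non-shadow area
# is flipped back, which is exactly the labelling consistency the CRF
# is meant to enforce.
probs = np.full((5, 5), 0.1)
probs[2, 2] = 0.9
mask = icm_smooth(probs, beta=1.0)
print(mask.sum())   # → 0
```

The Potts penalty is what removes isolated outcomes: flipping the lone pixel costs one unary term but saves four pairwise disagreements.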
Using the detected shadow mask, the framework identifies the umbra (Latin for shadow), penumbra (Latin for almost-shadow) and shadow-less regions, and proposes a Bayesian formulation to automatically remove shadows.
A generalized shadow generation model separately defines the umbra and penumbra generation processes. The resulting optimization problem has a relatively large number of unknown parameters, whose MAP estimates are efficiently computed by alternately solving for the parameters. The shadow removal process also extracts a smooth shadow matte that can be used in applications such as shadow compositing and editing.
A preliminary version of this research (which solely focuses on shadow detection) appeared previously. In addition, the current study includes: (a) a new approach to estimate shadow statistics, (b) shadow removal and shadow matte extraction, (c) a number of additional analyses and limitations, and (d) applications in many computer vision and graphics tasks.
II. RELATED WORKS
A. Shadow Detection
One of the most popular methods to detect shadows is to use
a variety of shadow variant and invariant cues to capture the
statistical and deterministic characteristics of shadows. The
extracted features model the chromatic, textural and
illumination properties of shadows to determine the
illumination conditions in the scene. Some works give more
importance to features computed across image boundaries,
such as intensity and colour ratios across boundaries and the
computation of text on features on both sides of the edges.
Although these feature representations are useful, they are
based on assumptions that may not hold true in all cases. As
an example, chromatic cues assume that the texture of the
image regions remains the same across shadow boundaries
and only the illumination is different. This approach fails
when the image regions under shadows are barely visible.
Moreover, all of these methods involve a considerable effort

in the design of hand-crafted features for shadow detection and feature selection [1].
Due to the challenging nature of the shadow detection problem, many simplifying assumptions are commonly adopted. Previous works made assumptions related to the illumination sources, the geometry of the objects casting shadows, and the material properties of the surfaces on which shadows are cast.
Salvador et al. [2] consider object cast shadows. In many image analysis and interpretation applications, shadows interfere with fundamental tasks such as object extraction and description. For this reason, shadow segmentation is an important step in image analysis. They propose a cast shadow segmentation algorithm for both static and dynamic images that exploits the spectral and geometrical properties of shadows in a scene. The presence of a shadow is first hypothesized from the simple initial evidence that shadows darken the surface on which they are cast. The validity of the detected regions as shadows is then checked using further hypotheses on the colour invariance and geometric properties of shadows. Finally, a decision stage accepts or rejects the initial hypothesis for each detected region.
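The first-stage hypothesis (shadows darken the surface they fall on) can be sketched as a simple luminance-ratio test. This assumes a reference (background or shadow-free) image is available, and the threshold value is an illustrative choice, not one taken from the paper.

```python
import numpy as np

def initial_shadow_hypothesis(frame, reference, ratio_thresh=0.6):
    """First-stage shadow hypothesis in the spirit of Salvador et al. [2]:
    flag pixels that are noticeably darker than a reference image.
    `ratio_thresh` is an illustrative value, not from the paper."""
    lum_f = frame.mean(axis=-1)        # crude luminance: channel mean
    lum_r = reference.mean(axis=-1)
    ratio = lum_f / np.maximum(lum_r, 1e-6)
    return ratio < ratio_thresh        # True where the scene darkened

ref = np.full((4, 4, 3), 200.0)
cur = ref.copy()
cur[1:3, 1:3] *= 0.4                   # a synthetic cast shadow
print(initial_shadow_hypothesis(cur, ref).sum())   # → 4
```

Later stages would then prune these candidates with the colour-invariance and geometric checks described above, since darkening alone also flags dark objects.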
Using 3D graphics, they report psychophysical results which show that: 1) the information provided by the motion of an object's shadow overrides other sources of information and perceptual biases, such as the assumption of constant object shape and a fixed viewing point; 2) the natural constraint of shadow darkness plays a role in the interpretation of a patch as a shadow; 3) when a shadow is produced by a moving light source, the visual system nevertheless misinterprets the shadow. The results support the hypothesis that the human visual system incorporates a static light source constraint in the perceptual processing of the spatial layout of scenes.
Joshi et al. [3], different from traditional methods that explore pixel or edge information, employ a region-based approach. In addition to considering individual regions separately, they predict relative illumination conditions between segmented regions from their appearances and perform pairwise classification based on such information. Classification results are used to build a graph of segments, and a graph-cut method is used to solve the labelling of shadow and non-shadow regions. Detection results are later refined by image matting, and the shadow-free image is recovered by relighting each pixel based on their lighting model. In addition, they created a new dataset with shadow-free ground truth images, which provides a quantitative basis for evaluating shadow removal.
Vazquez et al. [7] present a technique that can detect both cast and self shadows. The method exploits local colour constancy properties that result from reflectance suppression over shadowed regions. To detect shadowed areas in a scene, the values of the background image are divided by the values of the current frame in the true colour (RGB) space; all three colour channels are used. An illumination map is extracted using a steerable filter framework based on global and local correlations in low and high frequency bands, respectively. The lighting and colour features so extracted are then input to decision trees, trained with AdaBoost, that are designed to detect shadow edges. The simulation results indicate that the proposed method performs well, marking the boundary between shadow and non-shadow regions with high accuracy.
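The background-division step can be illustrated on a single pixel. The idea (a hedged reading of the description above) is that a roughly neutral shadow scales all three RGB channels by a similar factor, so the per-channel ratios stay consistent, whereas a real foreground object changes them unevenly; the pixel values below are invented for the example.

```python
import numpy as np

def channel_ratios(background, frame):
    """Divide the background image by the current frame in RGB space,
    as in the detection step described for Vazquez et al. [7]."""
    return background / np.maximum(frame, 1e-6)

bg = np.array([120.0, 110.0, 100.0])       # one background pixel (R, G, B)
shadowed = bg * 0.5                        # same surface, half the light
obj = np.array([30.0, 200.0, 90.0])        # a different surface entirely

r_shadow = channel_ratios(bg, shadowed)    # consistent across channels
r_obj = channel_ratios(bg, obj)            # uneven ratios
print(np.allclose(r_shadow, r_shadow[0]))  # → True
print(np.allclose(r_obj, r_obj[0]))        # → False
```

A detector can therefore threshold on the agreement of the three ratios to separate shadow from genuine scene change, before the illumination-map and AdaBoost stages refine the result.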
B. Shadow Removal and Matting
Almost all approaches that are employed to either edit or
remove shadows are based on models that are derived from
the image formation process. A popular choice is to
physically model the image into a decomposition of its
intrinsic images along with some parameters that are
responsible for the generation of shadows. As a result, the
shadow removal process is reduced to the estimation of the
model parameters.
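The model-based view above can be made concrete with its simplest special case: a per-pixel multiplicative attenuation. The reviewed paper's model is more general (separate umbra and penumbra terms), so the formation below and its attenuation values are an illustrative assumption, not the paper's exact model.

```python
import numpy as np

# Simplest formation model: observed = attenuation * shadow-free image,
# with attenuation a(x) in (0, 1] inside the shadow and 1 outside.
rng = np.random.default_rng(0)
shadow_free = rng.uniform(50, 250, size=(8, 8, 3))
atten = np.ones((8, 8, 1))
atten[2:6, 2:6] = 0.35                       # uniform umbra attenuation

observed = atten * shadow_free               # shadow generation
# Once the attenuation (the shadow matte) is estimated, removal
# reduces to inverting the model, here a per-pixel divide:
recovered = observed / atten
print(np.allclose(recovered, shadow_free))   # → True
```

This is why the literature frames removal as parameter estimation: with the model fixed, the only unknown is the attenuation field, and the hard part is estimating it robustly, especially in the penumbra.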
Finlayson et al. [5] addressed this problem by nullifying the shadow edges and reintegrating the image, which results in the estimation of the additive scaling factor. Since such global integration (which requires the solution of a 2D Poisson equation) causes artifacts, the integration was later restricted to a 1D path.
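The gradient-nullification idea of Finlayson et al. can be sketched on a single scanline: in the log domain a shadow adds a roughly constant offset, so its edges appear as two large gradient spikes; zeroing them and re-integrating removes the offset. The signal, offset, and threshold below are illustrative assumptions.

```python
import numpy as np

log_lit = np.full(20, 5.0)             # log intensity of the lit surface
log_img = log_lit.copy()
log_img[6:14] -= 1.2                   # constant log-offset = shadow

grad = np.diff(log_img)
grad[np.abs(grad) > 0.5] = 0.0         # nullify the shadow-edge gradients
# Reintegrate by cumulative summation from the first (lit) pixel:
reintegrated = log_img[0] + np.concatenate([[0.0], np.cumsum(grad)])
print(np.allclose(reintegrated, log_lit))   # → True
```

The 1D version has no artifacts because integration along a path is exact; the difficulty in 2D is that zeroing gradients makes the field non-integrable, which is what the Poisson solve (and later path-based variants) must cope with.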
A Hamiltonian path based method [4] was proposed for shadow removal, though gradient based methods do not account for the shadow variations inside the umbra region. To address this shortcoming, Arbel and Hel-Or [15] treat the illumination
recovery problem as a 3D surface reconstruction and use a
thin plate model to successfully remove shadows lying on
curved surfaces. Alternatively, information theory based techniques have been proposed, and a bilateral filtering based approach was recently introduced to recover intrinsic (illumination and reflectance) images. However, these approaches require either user assistance, calibrated imaging sensors, careful parameter selection or considerable processing times. To overcome these shortcomings, some reasonably fast and accurate approaches have been proposed which aim to transfer the colour statistics from the non-shadow regions to the shadow regions.
Several assumptions are made in the shadow removal literature due to the ill-posed nature of recovering the model parameters for each pixel. Some methods need the camera sensor parameters, others require multiple narrowband sensor outputs for each scene, and many techniques employ a sequence of images to recover the intrinsic components.
III. ANALYSIS
A. Shadow Detection
The method tries to detect and localize shadows precisely at the pixel level, using ConvNets to learn multi-level hierarchies of features. The final layer of the network is fully connected and comes just before the output layer. This layer works as a traditional MLP with one hidden layer followed by a logistic regression


output layer which provides a distribution over the classes. Overall, after the network has been trained, it takes an RGB patch as input and processes it to give a posterior distribution over the binary classes. ConvNets operate on equal-sized windows, so patches must be extracted around the desired points of interest.
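The equal-sized window requirement amounts to a patch-extraction step like the one below. Reflect padding keeps border points usable; the patch size of 32 is an illustrative choice, not one taken from the paper.

```python
import numpy as np

def extract_patches(image, points, size=32):
    """Extract equal-sized RGB patches centred on points of interest,
    reflect-padding the image so border points still yield full windows."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)),
                    mode="reflect")
    # After padding, point (y, x) sits at (y + half, x + half), so the
    # window starting at (y, x) in padded coordinates is centred on it.
    return np.stack([padded[y:y + size, x:x + size] for y, x in points])

img = np.zeros((64, 48, 3))
patches = extract_patches(img, [(0, 0), (10, 20), (63, 47)], size=32)
print(patches.shape)   # → (3, 32, 32, 3)
```

In the reviewed pipeline, such patches would be sampled around superpixels and scene boundaries and fed to the respective ConvNets.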
During the training process, stochastic gradient descent is used to automatically learn feature representations in a supervised manner. The gradients are computed using back-propagation to minimize the cross-entropy loss function. The training parameters are set using a cross-validation process. The training samples are shuffled randomly before training, since the network can learn faster from unexpected samples.
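The training recipe above (per-epoch shuffling, SGD, cross-entropy) can be shown on the smallest possible "network", a logistic regression, as a stand-in for the ConvNet; the hyper-parameters and toy data are illustrative.

```python
import numpy as np

def train_sgd(X, y, lr=0.5, epochs=200, seed=0):
    """Supervised SGD with per-epoch shuffling and a cross-entropy loss,
    on a logistic-regression model standing in for the ConvNet."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        order = rng.permutation(len(X))      # shuffle before each epoch
        for i in order:
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))   # sigmoid output
            grad = p - y[i]      # d(cross-entropy)/d(logit) via backprop
            w -= lr * grad * X[i]
            b -= lr * grad
    return w, b

# Toy separable data: label 1 iff the first feature is positive.
X = np.array([[1.0, 0.2], [2.0, -1.0], [-1.5, 0.3], [-0.5, -0.8]])
y = np.array([1, 1, 0, 0])
w, b = train_sgd(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).all())   # → True
```

For the sigmoid with cross-entropy loss, the gradient with respect to the logit collapses to `p - y`, which is the same identity the output layer of the ConvNet uses during back-propagation.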
B. Shadow Removal
The first step is to identify the umbra, penumbra and the
corresponding non-shadowed regions in an image. We also
need to identify the boundary where the actual object and its
shadow meet. This identification helps to avoid any errors
during the estimation of shadow/non-shadow statistics (e.g., colour distribution). Previous work required manual processing through human interaction; the proposed system automatically estimates the umbra and penumbra regions and the object-shadow boundary.
Heuristically, the object-shadow boundary is
relatively darker compared to other shadow boundaries
where differences in light intensity are significant.
Therefore, given a shadow mask, we calculate the boundary
normal at each point. We cluster the boundary points
according to the direction of their normal. This results in
separate boundary segments which join to form the
boundary contour around the shadow.
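The normal-direction clustering described above can be sketched as follows. The normal at each boundary point is approximated by rotating the tangent between its neighbours; binning the normal angles into a fixed number of direction clusters is an illustrative stand-in for whatever clustering the method actually uses.

```python
import numpy as np

def cluster_boundary_by_normal(contour, n_bins=8):
    """Group the points of a closed boundary contour by the direction
    of their normals. The tangent at each point is the difference of
    its two neighbours (central difference along the closed contour),
    and the normal is that tangent rotated by 90 degrees."""
    pts = np.asarray(contour, dtype=float)
    tangents = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)
    angles = np.arctan2(normals[:, 1], normals[:, 0])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return bins

# An axis-aligned square: each side's points share one normal direction,
# so four direction clusters emerge, one boundary segment per side.
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2),
          (2, 1), (2, 0), (1, 0)]
labels = cluster_boundary_by_normal(square, n_bins=4)
print(len(set(labels.tolist())))   # → 4
```

Consecutive points that fall in the same cluster then form the boundary segments which, joined together, give the contour around the shadow.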
IV. SUMMARY & CONCLUSION
This review describes a data-driven approach that learns the most relevant features for the detection of shadows from a single image. The proposed shadow removal framework extracts the shadow matte along with the recovered image. A Bayesian formulation constitutes the basis of the shadow removal procedure and makes use of an improved shadow generation model. Shadow detection uses a combination of boundary and region ConvNets incorporated in a CRF model. For shadow removal, a multi-level colour transfer followed by Bayesian refinement is used. The proposed framework has a number of applications, including image editing and enhancement tasks.

REFERENCES
[1] D. H. Hubel and T. N. Wiesel, “Receptive fields,
binocular interaction and functional architecture in the
cat’s visual cortex,” The Journal of Physiology, vol.
160, no. 1, p. 106, 1962.
[2] E. Salvador, A. Cavallaro, and T. Ebrahimi, “Cast
shadow segmentation using invariant color features,”
CVIU, vol. 95, no. 2, pp. 238–259, 2004.
[3] A. J. Joshi and N. P. Papanikolopoulos, “Learning to
detect moving shadows in dynamic environments,”
TPAMI, vol. 30, no. 11, pp. 2055–2063, 2008.
[4] I. Huerta, M. Holte et al., “Detection and removal of
chromatic moving shadows in surveillance scenarios,”
in ICCV. IEEE, 2009, pp. 1499–1506.
[5] G. D. Finlayson, S. D. Hordley, C. Lu, and M. S. Drew,
“On the removal of shadows from images,” TPAMI,
vol. 28, no. 1, pp. 59–68, 2006.
[6] G. D. Finlayson, S. D. Hordley, and M. S. Drew,
“Removing shadows from images,” in ECCV, vol.
2353. Springer, 2002, pp. 823–836.
[7] E. Vazquez, R. Baldrich et al., “Describing reflectances
for color segmentation robust to shadows, highlights,
and textures,” TPAMI, vol. 33, no. 5, pp. 917–930,
2011.
[8] A. Bousseau, S. Paris, and F. Durand, “User-assisted
intrinsic images,” in TOG, vol. 28, no. 5. ACM, 2009.
[9] G. D. Finlayson, S. D. Hordley, C. Lu, and M. S. Drew,
“On the removal of shadows from images,” TPAMI,
vol. 28, no. 1, pp. 59–68, 2006.
[10] G. D. Finlayson, S. D. Hordley, and M. S. Drew,
“Removing shadows from images,” in ECCV, vol.
2353. Springer, 2002, pp. 823–836.
[11] G. D. Finlayson, M. S. Drew, and C. Lu, “Entropy
minimization for shadow removal,” IJCV, vol. 85, no.
1, pp. 35–57, 2009.
[12] C. Fredembach and G. D. Finlayson, “Hamiltonian path
based shadow removal.” in BMVC, 2005, pp. 502–511.
[13] F. Liu and M. Gleicher, “Texture-consistent shadow
removal,” in ECCV. Springer, 2008, pp. 437–450.
[14] A. Mohan, J. Tumblin, and P. Choudhury, “Editing soft
shadows in a digital photograph,” Computer Graphics
and Applications, IEEE, vol. 27, no. 2, pp. 23–31, 2007.
[15] E. Arbel and H. Hel-Or, “Shadow removal using
intensity surfaces and texture anchor points,” TPAMI,
vol. 33, no. 6, pp. 1202–1216, 2011.
[16] V. Kwatra, M. Han, and S. Dai, “Shadow removal for
aerial imagery by information theoretic intrinsic image
analysis,” in ICCP. IEEE, 2012, pp. 1–8.

ACKNOWLEDGEMENT
I am glad to express my sentiments of gratitude to all who rendered their valuable help for the successful completion of this paper. I am thankful to my guide Prof. S. S. Banait, Associate Professor, Computer Engineering, K.K.W.I.E.E.R., Nashik, for his guidance and encouragement. His expert suggestions and scholarly feedback have greatly enhanced the effectiveness of this work. I would also like to thank Prof. Dr. S. S. Sane, Head, Department of Computer Engineering, K.K.W.I.E.E.R., Nashik.

All rights reserved by www.ijsrd.com
