Copyright (c) 2010 IEEE. Personal use is permitted. For any other purposes, permission must be obtained from the IEEE by emailing pubs-permissions@ieee.org.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
Sparse Signal Methods for 3D Radar Imaging
Christian D. Austin, Emre Ertin, and Randolph L. Moses
The Ohio State University, Department of Electrical and Computer Engineering
2015 Neil Avenue, Columbus, OH 43210, USA
Email: {austinc, ertine, randy}@ece.osu.edu
Abstract: Synthetic aperture radar (SAR) imaging is a valuable tool in a number of defense surveillance and monitoring applications. There is increasing interest in three-dimensional (3D) reconstruction of objects from radar measurements. Traditional 3D SAR image formation requires data collection over a densely sampled azimuth-elevation sector. In practice, such a dense measurement set is difficult or impossible to obtain, and effective 3D reconstructions using sparse measurements are sought. This paper presents wide-angle three-dimensional image reconstruction approaches for object reconstruction that exploit reconstruction sparsity in the signal domain to ameliorate the limitations of sparse measurements. Two methods are presented: first, we use ℓp-penalized (for p ≤ 1) least-squares inversion, and second, we utilize tomographic SAR processing to derive wide-angle 3D reconstruction algorithms that are computationally attractive but apply to a specific class of sparse aperture samplings. All approaches rely on high-frequency radar backscatter phenomenology so that sparse signal representations align with physical radar scattering properties of the objects of interest. We
present full 360° azimuth reconstruction results as well as results from sparse apertures. The sparse squiggle path spans azimuth angles up to 114.1° and elevation angles in [18°, 42.1°]; Figure 1 shows the sparse measurement samples in this azimuth/elevation sector.
Fig. 1. Sparse squiggle path radar measurements as a function of azimuth and elevation angle in degrees.
B. Multipass Circular SAR Dataset
The second sparse dataset we consider is the multipass CSAR data from the AFRL GOTCHA Volumetric SAR Data Set, Version 1.0 [35], [46]. This dataset consists of sampled, dechirped radar return values that have been transformed to the form of G(k_x, k_y, k_z; θ, φ, pol) in (2). The data is fully polarimetric, collected over eight complete 360° circular passes at elevation angles near 45°. The actual flight path is not perfectly circular, as shown in Figure 3, and not at perfectly constant and equally-spaced elevations. The center frequency of the radar is f_c = 9.6 GHz,
(a) Squiggle Path
(b) CSAR Path
Fig. 2. Data domes of all k-space data that can be collected by a radar
for (a) the pseudorandom synthetic squiggle path backhoe dataset, and (b)
the GOTCHA dataset; units are in rad/m. Support of the k-space data is
contained between the inner and outer dome. Inner and outer domes show the
minimum and maximum radar interrogating frequencies. The outlines on the
outer domes show the locations of the sparse k-space data collected, which
extends from the outline radially to the inner dome.
and the bandwidth of the radar is 640 MHz, significantly lower
than that of the squiggle path collection. Figure 2(b) shows the
k-space data collected by the eight GOTCHA passes. The k-
space radial extent from the outer dome to inner dome of
data collected, dictated by radar bandwidth, is seen to be
significantly smaller than in the squiggle path case. Figure 2(b)
also illustrates that the GOTCHA k-space data is very limited
in elevation extent, in contrast to the squiggle path.
Fig. 3. Actual GOTCHA passes. Scale is in meters.
IV. ℓp REGULARIZED LEAST-SQUARES IMAGING ALGORITHM
In this section we present the first of two 3D imaging algorithms; this algorithm applies to general data collection scenarios, but will be used for sparse collections here. The proposed approach assumes that the number of 3D locations in which nonzero backscattering occurs is sparse in the 3D reconstruction space, and applies sparse reconstruction techniques. We pose the reconstruction as an ℓp regularized least-squares (LS) problem, in which a regularizing term encourages sparse solutions. This ℓp regularized LS imaging algorithm attempts to fit an image-domain scattering model to the measured k-space data under a penalty on the number of non-zero voxels. The algorithm assumes that the complex magnitude response of each scattering center is approximately constant over narrow aspect angles and across the radar frequency bandwidth. The algorithm in this section applies to general apertures; this is in contrast to the second algorithm presented in Section V, which applies to apertures with specific structure.
Define a set of N locations in image reconstruction space as candidate scattering center locations,
C = {(x_n, y_n, z_n)}_{n=1}^{N}. (5)
Typically these locations are chosen on a uniform rectilinear grid. The M × N data measurement matrix is given by

A = [ e^{−j(k_{x,m} x_n + k_{y,m} y_n + k_{z,m} z_n)} ]_{m,n},
where m indexes the M measured k-space frequencies down
rows, and n indexes the N coordinates in C across columns.
Under the assumption that scattering center amplitude is constant over the aspect angle extent and radar bandwidth considered, the measured (subaperture) data from the scattering center model, (2), can be written in matrix form as

b = Aα + n, (6)

where α is the N-dimensional vectorized 3D image that we wish to reconstruct; it has complex amplitude value α_n in row n if a scattering center is located at (x_n, y_n, z_n) and is zero in row n otherwise; the image vector maps to the 3D image, I(x_n, y_n, z_n), by the relation I(x_n, y_n, z_n) = α(i) if and only if column i of A is from coordinate (x_n, y_n, z_n). The vector n is an M-dimensional i.i.d. circular complex Gaussian noise vector with zero mean and variance σ_n², and b is an M-dimensional vector of noisy k-space radar measurements.
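The measurement model (6) can be sketched numerically. The following is a minimal illustration with hypothetical dimensions, coordinates, and wavenumbers (not the paper's data): it builds the M × N matrix A from candidate voxel locations, a sparse complex voxel vector (called `alpha` here), and noisy measurements b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative problem: N candidate voxels and M k-space samples.
# Coordinates and wavenumbers are hypothetical stand-ins for the paper's
# candidate set C and measured (k_x, k_y, k_z) samples.
N, M = 50, 200
coords = rng.uniform(-2.0, 2.0, size=(N, 3))   # (x_n, y_n, z_n), meters
k = rng.uniform(380.0, 420.0, size=(M, 3))     # (k_x, k_y, k_z), rad/m

# A[m, n] = exp(-j k_m . r_n): the M x N measurement matrix.
A = np.exp(-1j * k @ coords.T)

# Sparse scene: a few nonzero complex voxel amplitudes.
alpha = np.zeros(N, dtype=complex)
alpha[[3, 17, 41]] = [1.0 + 0.5j, -0.7j, 2.0]

# Noisy k-space measurements b = A alpha + n, as in (6).
sigma_n = 0.1
noise = sigma_n / np.sqrt(2) * (rng.standard_normal(M)
                                + 1j * rng.standard_normal(M))
b = A @ alpha + noise

print(A.shape, b.shape)
```

In practice A is never formed densely at the paper's problem sizes; this dense construction is only to make the model concrete.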
The reconstructed image, α̂, is the solution to the sparse optimization problem [27], [28]

α̂ = argmin_α { ‖b − Aα‖₂² + λ‖α‖_p^p }, (7)

where the p-norm is denoted as ‖·‖_p, 0 < p ≤ 1, and λ is a
sparsity penalty weighting parameter. Note that the solution to (7) applies for general A matrices, and the radar flight path locations that index the rows of A can be arbitrary. In particular, flight paths such as the squiggle path in Figure 2(a) can be used. Many algorithms exist for solving (7) or the constrained version of this problem when p = 1 (e.g. [26], [47]–[50]), or in the more general case, when 0 < p ≤ 1 (e.g.
[28], [51]). We use the iterative majorization-minimization algorithm in [28] to implement (7). This algorithm is suitable for the general case when 0 < p ≤ 1. The algorithm has two loops: an outer loop which iterates on a surrogate function, and an inner loop that solves a matrix inverse using a conjugate gradient algorithm. In our experience, the inner loop terminates after very few iterations when using a Fourier operator, as considered here. Empirical evidence also indicates that this majorization-minimization algorithm terminates faster than a split Bregman iteration approach [50]. An outline of the
majorization-minimization algorithm implementation is provided in the Appendix. For algorithm implementation, define a uniform rectilinear grid on the x, y, and z spatial axes with voxel spacings of Δx, Δy, and Δz, respectively. Let the set of candidate coordinates C in (5) consist of all permutations of (x, y, z) coordinates from the partitioned axes; then, the set C defines a uniform 3D grid on the scene. If, in addition, the k-space samples are on a uniform 3D frequency grid centered at the origin, the operation Aα can be implemented using the computationally efficient 3D Fast Fourier Transform (FFT) operation. In many scenarios, including the one here, the measured k-space samples are not on a uniform grid, and the FFT cannot be used directly. Instead an interpolation step followed by an FFT is needed. An alternative approach would be to use Type-2 nonuniform FFTs (NUFFTs) as the operator A to process data directly on the non-uniform k-space grid, at added computational cost [52], [53]. Nonuniform FFT algorithms require an interpolation step, which is executed each time the operator A is evaluated; whereas, in the FFT implementation, interpolation occurs only once and the interpolated data becomes b. When using an iterative algorithm to solve (7), as used here, having to perform interpolation only once can result in significant computational savings. Our empirical results on the X-band data sets considered here suggest that nearest neighbor interpolation results in well-resolved images at low computational cost, and so it is adopted here.
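The one-time nearest neighbor gridding step described above can be sketched as follows; the sample locations, grid size, and wavenumber normalization are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonuniform k-space samples and values (stand-ins for the
# measured data); the grid size and normalized wavenumber range are
# illustrative only.
M = 1000
k = rng.uniform(-1.0, 1.0, size=(M, 3))     # normalized wavenumber samples
vals = rng.standard_normal(M) + 1j * rng.standard_normal(M)

D = 32                                      # uniform grid size per axis

# Nearest neighbor interpolation: accumulate each sample into its nearest
# uniform grid cell.  This is done once, before any iterations.
idx = np.clip(((k + 1.0) / 2.0 * D).astype(int), 0, D - 1)
grid = np.zeros((D, D, D), dtype=complex)
np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)

# With the data on a uniform grid, adjoint-type operator evaluations
# reduce to a single 3D FFT per iteration.
image = np.fft.ifftn(np.fft.ifftshift(grid))
print(image.shape)
```

Because every sample lands in exactly one cell, the gridded data preserves the total (complex) sum of the samples; more accurate kernels would spread each sample over several cells at higher cost, as the text notes.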
Implementing the optimization algorithm solving (7) for large-scale problems can be challenging from a memory and computational standpoint. In iterative algorithms, like the one utilized here, typically the data vector b as well as the current iterate of α and a gradient with the same dimension as α are stored. For example, in the simulations below, we reconstruct a scene with N = 182 × 250 × 252 ≈ 1.1 × 10⁷ voxels to cover a single vehicle. So, at the very least, it would be necessary to store the data vector in addition to two vectors of double or single precision in 1.1 × 10⁷-dimensional complex space. For algorithms that utilize a conjugate gradient approach to calculate matrix inverses, it is also necessary to store a conjugate vector of the same dimension N, and in a Newton-Raphson approach, it is necessary to store a Hessian of dimension N × N. During each iteration of an algorithm, it is commonly required to evaluate the operator A and its adjoint. These operations can become very computationally expensive when the problem size grows and may result in a computationally intractable algorithm, unless a fast operator such as the FFT is employed.
Specifically, since A is an M × N matrix, direct multiplication by A requires MN multiplies and additions per evaluation. In examples using the squiggle path and the nine subapertures chosen, the average value of these nine M values is ≈ 10⁵, so MN ≈ 10¹² operations. After initial interpolation, an FFT implementation of A requires O(D³ log(D³)) operations, where D is the maximum number of samples across the image dimensions. For the imaging example with dimensions 182 × 250 × 252, D = 252. For concreteness, assuming the constant multiple on the order of operations in the FFT is close to unity, FFT implementation of the operator A requires approximately 252³ log₂(252³) ≈ 3.8 × 10⁸ operations; so, FFT implementation results in computational savings greater than a factor of 2500.
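The operation counts quoted above can be checked directly (assuming, as in the approximation above, a base-2 logarithm and a unit constant in the FFT cost):

```python
import numpy as np

# Direct evaluation of the M x N matrix-vector product versus an FFT
# implementation, using the dimensions quoted in the text.
N = 182 * 250 * 252             # number of voxels (~1.1e7)
M = 10**5                       # average k-space samples per subaperture
direct_ops = M * N              # ~1e12 multiply-adds per evaluation

D = 252                         # max samples across the image dimensions
fft_ops = D**3 * np.log2(D**3)  # O(D^3 log D^3), unit constant assumed

print(f"direct: {direct_ops:.2e}  fft: {fft_ops:.2e}  "
      f"savings factor: {direct_ops / fft_ops:.0f}")
```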
Since the scattering centers in model (2) are anisotropic and
polarization dependent, we apply (7) to form the image for
each narrow-angle subaperture and polarization, and combine
the images using equation (4). Recent approaches for joint
reconstruction of multiple images [54] may also be applied to
simultaneously reconstruct all polarizations for each subaper-
ture.
V. WIDE-ANGLE TOMOGRAPHIC SAR IMAGING
The second approach we consider for 3D reconstruction is a tomographic SAR approach [11]–[20], in which the relative phase information from several closely spaced collection paths is used to estimate the height scattering profile using interferometric techniques. Applying this approach in combination with angle subapertures, one can divide the 3D problem into a set of 2D subaperture image formation problems followed by 1D spectral estimation computations. This approach results in significantly lower computation and memory requirements as compared with the method presented in Section IV. On the other hand, the Tomo-SAR-based approach applies only to multi-baseline images, and thus applies only to a particular subclass of sparse data collection geometries. As a result, the algorithm proposed in this section does not have the generality of the ℓp regularized LS approach, but does provide reduced computation for those cases in which the data collection geometry is amenable to this approach. Tomographic SAR approaches have been considered for forest canopy and building height estimation using relatively narrow-angle linear collection geometries [13]–[16], [18]–[20]. Here,
we adapt this approach to full-360° circular collection geometries by dividing the data into azimuth subapertures with center angles {θ_m}. The backscatter measurements, r(f_j; θ_i, φ_ℓ), are grouped into the subaperture centered at azimuth θ_m.
Rather than store the k-space data directly, we can provide compact image products matched to scatterers with limited persistence, and maintain one-to-one correspondence with the original k-space data. These image products are 2D ground plane (z = 0) image sequences {I(x, y, 0; θ_m, φ_ℓ, pol)}_m, where each image is the output of a filter matched to a limited-persistence reflector over the azimuth angles in azimuth window W_m(θ). Specifically, the m-th subaperture images are constructed as
I(x, y, 0; θ_m, φ_ℓ, pol) = F⁻¹_{(x,y)} { G(k_x, k_y, √(k_x² + k_y²) tan(φ_ℓ); θ_m, φ_ℓ, pol) · W_m( tan⁻¹(k_x / k_y) ) },  ℓ = 1, …, L,
where F⁻¹_{(x,y)} is the 2D inverse Fourier transform, and the azimuthal window function W_m(θ) is defined as:

W_m(θ) = W(θ − θ_m) for −Δ/2 < θ − θ_m < Δ/2, and 0 otherwise. (8)
Here, θ_m is the center azimuth angle for the m-th window and Δ describes the hypothesized azimuth persistence width. The window function W(θ) is an invertible tapered window used for cross-range sidelobe reduction; typical choices may be the Hamming or Taylor windows that are commonly used in SAR images. Each image can be modulated to baseband
and sampled at a lower resolution in (x, y) without causing aliasing. Each baseband ground image I_B(x, y, 0; θ_m, φ_ℓ, pol) is calculated as:

I_B(x, y, 0; θ_m, φ_ℓ, pol) = I(x, y, 0; θ_m, φ_ℓ, pol) e^{−j(k⁰_{x,m} x + k⁰_{y,m} y)}, (9)
where the center frequency (k⁰_{x,m}, k⁰_{y,m}) is determined by the center aperture angle θ_m, mean elevation angle φ̄, and center frequency f_c:

k⁰_{x,m} = (4π f_c / c) cos(φ̄) cos(θ_m),  k⁰_{y,m} = (4π f_c / c) cos(φ̄) sin(θ_m).
An important property of this subaperture imaging approach is that Nyquist sampling of (x, y) in subaperture images is dictated by the baseband downrange and crossrange k-space extents, and therefore, the image sample spacing is (much) less fine than if the full SAR image is formed using all k-space data jointly [21]. For modest azimuth window extent Δθ in radians, the Nyquist sampling in the downrange is dictated by the inverse of the radar bandwidth, 1/BW, and the crossrange sampling is dictated by 1/(Δθ(f_c + BW/2)); these sample spacings are much coarser than the 1/(2(f_c + BW/2)) spacing that would be needed for the full Circular SAR k-space data. The result is a significantly smaller storage requirement for CSAR imagery data.
B. Tomographic SAR
We next present a method for using the set of ground plane images I_B(x, y, 0; θ_m, φ_ℓ).
The input to the wide-angle Tomo-SAR algorithm is a set of baseband modulated ground plane images {I_B(x, y, 0; θ_m, φ_ℓ)} of data collected at elevation cuts {φ_ℓ}. We process each subaperture separately; for each subaperture, denote the image sequence as {I(x, y; φ_ℓ, pol)}_{ℓ=1}^{L} and consider without loss of generality θ_m = 0. We consider a finite (and small) number,
p, of scattering centers at each resolution cell (x, y) and reparameterize the scene reflectivity g(x, y, z) as

g_p(x, y) ≜ g(x, y, h_p(x, y)), (10)
where g_p(x, y) denotes the complex-valued reflectivity of the scattering center at location (x, y, h_p(x, y)). In general, the number of scattering centers per resolution cell varies spatially and needs to be estimated from the data. The ground plane image for elevation φ_ℓ is

I(x_l, y_l; φ_ℓ, pol) = s(x, y) ∗ [ Σ_p g_p(x, y) e^{−j tan(φ_ℓ) k⁰_x h_p(x,y)} e^{−j x k⁰_x} ], (11)
where s(x, y), the inverse Fourier transform of the 2D windowing function used in imaging, is the 2D point spread function of the imaging operator, and k⁰_x = (4π f_c / c) cos(φ̄) is the center frequency used in baseband modulation. The ground locations (x, y, h_p(x, y)) and the image coordinates (x_l, y_l) are related through layover:

x_l = x + tan(φ_ℓ) h_p(x, y),  y_l = y. (12)
We assume that the difference between the elevation angles for the different passes is sufficiently small so that for each elevation pass the scattering center (x, y, h(x, y)) falls in the same resolution cell (x_l, y_l); for practical object or scene heights, radar point spread functions, and elevation diversity, this assumption is generally satisfied. Then the baseband images from each pass can be modeled as

I(x_l, y_l; φ_ℓ, pol) = Σ_p g̃_p(x_l, y_l) e^{−j k⁰_x tan(φ_ℓ) h_p(x_l, y_l)}, (13)
where g̃_p(x_l, y_l) ≜ s(x, y) ∗ [ g_p(x, y) e^{−j x k⁰_x} ]. This can be expressed as a sum-of-complex-exponentials model

I(x_l, y_l; φ_ℓ, pol) = Σ_p g̃_p(x_l, y_l) e^{−j k_p(x_l, y_l) tan(φ_ℓ)}, (14)

where the frequency factor k_p is given by

k_p(x_l, y_l) = (4π f_c cos(φ̄) / c) h_p(x_l, y_l). (15)
In general, the elevation spacing of the L measurements in (14) is not equally spaced. As an example, even though the GOTCHA CSAR passes have a planned (ideal) equally-spaced separation of Δφ = 0.18°, the actual elevations deviate from the plan, as noted in Section III. The problem of estimating heights from {I(x_l, y_l; φ_ℓ, pol)}_{ℓ=1}^{L} is one of estimating a set of complex exponentials from L measurements; see also [11]–[20], [55].
In Tomo-SAR, the number of scattering centers in a resolution cell must be estimated before spectral estimation methods can be applied to estimate height parameters. This is a model order selection problem, and different methods exist for model order selection [55]–[57]; a discussion of model order selection in the context of Tomo-SAR has been treated extensively in the literature (see e.g. [17], [20]). In a recent study [21] using CSAR X-band data of vehicles, the estimated model order was 1 in a large majority of cases, and when the model order was > 1, one dominant (large amplitude) scattering center was often seen. Thus, the complex exponential signal in (14) is sparse, with typically only 1 scattering center in the height dimension. This suggests that the estimation bias resulting from forcing the model order to be 1 may be small for a large fraction of pixels. Choosing the model order to be 1 presents a computational advantage, because for the single-exponential case, a maximum likelihood estimator of its frequency in white measurement noise is given by the peak of the Fourier transform of the data, and this Fourier transform is easy to compute. We thus adopt this model order approximation, and estimate, for each pixel (x_l, y_l), the single dominant height location k₁(x_l, y_l) as the peak of the Fourier transform of the L samples I(x_l, y_l; φ_ℓ, pol).
Fig. 4. Magnitude of k-space data subset from azimuth range [66°, 76°). Lighter colors and smaller points are used for smaller magnitude samples; darker colors and larger points are used for larger magnitude samples. Axes units are in rad/m.
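The model-order-1 height estimate described above (the peak of the Fourier transform of the L samples across elevation passes) can be sketched for a single pixel. The pass geometry, reflectivity, and height below are hypothetical, and the Fourier transform is evaluated on a dense grid of candidate heights above the ground plane.

```python
import numpy as np

c = 299_792_458.0                              # speed of light, m/s
f_c = 9.6e9                                    # GOTCHA center frequency, Hz
phi = np.deg2rad(np.linspace(43.0, 45.0, 8))   # hypothetical 8 elevation passes
phi_bar = phi.mean()

# One pixel of the model (14): a single scattering center at height h_true.
h_true = 1.3                                   # meters (illustrative)
k1 = 4.0 * np.pi * f_c * np.cos(phi_bar) / c * h_true   # frequency factor, (15)
g = 1.0 - 0.4j                                 # complex reflectivity (illustrative)
samples = g * np.exp(-1j * k1 * np.tan(phi))   # L = 8 measurements

# Model order 1: estimate the height as the peak of the Fourier transform
# of the L samples, evaluated over candidate heights above the ground plane.
h_grid = np.linspace(0.0, 2.0, 2001)
k_grid = 4.0 * np.pi * f_c * np.cos(phi_bar) / c * h_grid
spectrum = np.abs(np.exp(1j * np.outer(k_grid, np.tan(phi))) @ samples)
h_hat = h_grid[np.argmax(spectrum)]
print(round(h_hat, 3))
```

Evaluating the transform directly on a height grid, as here, also accommodates the unequal tan(φ_ℓ) spacing of real passes; restricting candidate heights keeps the estimate away from the grating-lobe ambiguities of the small elevation aperture.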
Each subset of data is contained in a bounding box with bandwidths in each dimension of (X_BW, Y_BW, Z_BW) = (142.80, 314.2, 285.6) rad/m. At these bandwidths, spatial samples are critically sampled with sample spacings of (Δx, Δy, Δz) = (0.044, 0.02, 0.022) meters in each respective dimension. Both the image reconstruction and k-space interpolation are performed on uniformly spaced 182 × 250 × 252 grids. With this size grid, the spatial extent of the reconstructed images is [−4, 4) × [−2.5, 2.5) × [−2.77, 2.77) meters in the x, y, and z dimensions respectively. Each subset of k-space data is interpolated using nearest neighbor interpolation. In simulations not presented here, more accurate interpolations using both the Epanechnikov and Gaussian kernels were found to result in nearly identical images, but at much higher computational cost.
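The critical sample spacings quoted above follow from the bounding-box k-space extents as 2π divided by the bandwidth in each dimension:

```python
import numpy as np

# Bounding-box k-space bandwidths quoted in the text, in rad/m.
X_BW, Y_BW, Z_BW = 142.80, 314.2, 285.6

# Critical (Nyquist) spatial sample spacing in each dimension is
# 2*pi divided by the k-space extent in that dimension.
dx, dy, dz = 2 * np.pi / X_BW, 2 * np.pi / Y_BW, 2 * np.pi / Z_BW
print(round(dx, 3), round(dy, 3), round(dz, 3))
```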
The squiggle path dataset is noiseless. To simulate the effect of radar measurement noise, we corrupt the k-space data with i.i.d. circular complex Gaussian noise with zero mean and variance σ_n² = 0.9. Real and imaginary parts of the k-space data have a mean of approximately zero and variance, σ_s², of approximately 9; thus, the noise variance is chosen so that the signal to noise ratio (SNR) is 10 dB, where SNR in decibels is defined as 10 log₁₀(σ_s²/σ_n²).
First, we show in Figure 5 a side view of a gold standard benchmark 3D reconstructed backhoe image corresponding to the squiggle path dataset [45]. The image was formed using a windowed 3D inverse Fourier transform of a dense k-space dataset covering the azimuth and elevation range of the squiggle path; this dense data is given for every 1/14° in azimuth and elevation angle along an azimuth range of [65.5°, 114.5°] and elevation range of [17.5°, 42.5°].
Fig. 7. Reconstructed backhoe image from the azimuth subaperture [66°, 76°). Light colors and small points are used for small magnitude voxels; darker colors and large points are used for large magnitude voxels. Axis units are in meters.
Figure 8 shows the side and top view of a reconstructed squiggle path backhoe image using the ℓp regularized LS reconstruction algorithm in Section IV. The top 30 dB magnitude voxels are displayed. The images in Figure 8 were formed by first reconstructing 27 3D images, one from each subaperture and polarization; images are the solution to the optimization problem (7). All images are reconstructed using a norm with p = 1 and sparsity parameter λ = 10, which were selected manually. Automatic selection of λ is an ongoing area of research [58]–[60]. Here, p and λ were chosen empirically through visual inspection of images. Final images are formed by combining the subset images over the maximum of polarizations in addition to aspect angles according to (4).
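Assuming, per the description above, that (4) combines the subaperture and polarization images by a voxelwise maximum of magnitudes, the combination step might look like the following (image stack dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stack of subaperture/polarization image magnitudes:
# 9 subapertures x 3 polarizations of a small 16 x 16 x 8 voxel grid.
images = np.abs(rng.standard_normal((9, 3, 16, 16, 8))
                + 1j * rng.standard_normal((9, 3, 16, 16, 8)))

# Noncoherent combination: each final voxel is the maximum magnitude over
# polarizations and aspect (subaperture) angles.
combined = images.max(axis=(0, 1))
print(combined.shape)
```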
In addition to the scattering point plots displayed in the top of Figure 8, it is possible to accentuate surfaces of 3D reconstructed images for visualization by smoothing image voxels; visualizations are shown in Figures 8(e) and 8(f)¹.
There are a large array of scientific visualization tools for accomplishing such a task, such as Maya and ParaView. Maya visualization examples are given in [61]. Here we apply a Gaussian kernel with diagonal covariance and equal standard deviation, σ, to smooth the voxels. Smoothed images are formed on a grid with the same dimensions as the original grid. To speed up the smoothing, the kernel is given a fixed support within some radius of the grid voxel being smoothed. In Figures 8(e) and 8(f), a standard deviation of σ = 0.4 m and grid radius of 3 is used. Voxel magnitude is then displayed using color and transparency coding. Blue, transparent colors indicate low relative voxel magnitude and red, opaque colors indicate large relative voxel magnitude.
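The truncated-kernel Gaussian smoothing can be sketched as follows; the separable implementation and the grid-unit parameters are illustrative choices, not necessarily those used to produce Figures 8(e) and 8(f).

```python
import numpy as np

def smooth_voxels(vol, sigma, radius):
    """Smooth a 3D voxel magnitude image with an isotropic Gaussian kernel
    truncated to a fixed support of `radius` grid cells, as in the
    visualization step (sigma and radius are in grid units here)."""
    ax = np.arange(-radius, radius + 1)
    g1 = np.exp(-0.5 * (ax / sigma) ** 2)
    g1 /= g1.sum()                          # normalized 1D kernel
    # Separable convolution: apply the 1D kernel along each axis in turn.
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, g1, mode="same"), axis, out)
    return out

vol = np.zeros((9, 9, 9))
vol[4, 4, 4] = 1.0                          # single bright voxel
sm = smooth_voxels(vol, sigma=1.5, radius=3)
print(sm.shape, round(sm.sum(), 3))
```

Because the kernel is normalized and fully supported here, the total voxel mass is preserved while the single bright voxel is spread over its neighborhood.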
As can be seen from Figure 8, features in the sparse reconstructions are well-resolved. For example, the hood, roof, and front and back scoops are clearly visible, in the correct location, and do not exhibit the large sidelobe spreading seen in Figure 7. The side panels of the driver cab are not visible, and the arm on the back scoop is not as prominent as in the benchmark in Figure 5, but most backhoe features in the benchmark backhoe image are also visible in the squiggle path reconstruction. There are a small number of artifacts in the image that do not lie close to the backhoe, namely below the front and back scoop. These artifacts appear to be due to multiple-bounce effects that are present in the given scattering data, rather than to an error artifact of the reconstruction process. From the top view of the backhoe, the group of voxels at the top left also appears to be present in the benchmark image as viewed from an angle not shown in Figure 5; these voxels are also likely the result of multibounce from the back scoop and are not artifacts specific to squiggle path reconstruction.
Simulation results presented above were performed in MATLAB on a system with an Intel 3 GHz Dual Core Xeon processor and 4 GB of memory. Both the interpolation and sparse optimization in image reconstruction can be computationally intensive. The nearest-neighbor interpolation method
¹A movie of this visualization rotating 360° in azimuth.
Fig. 9. ℓp regularized LS reconstructions, 5° subapertures from 0° to 360°.
Scattering is assumed to be above the ground plane in calculations; so, unlike in the ℓp regularized LS reconstruction, there are no non-zero voxels below the vehicle. As in the ℓp regularized LS reconstruction, a set of 72 subaperture image sets were formed, each spanning 5° in azimuth.
³A movie of the combined VV and HH polarization visualization rotating 360° in azimuth.
J(x, xⁿ) = ‖y − Ax‖₂² + λ Σ_{i=1}^{N} φ(x_i, xⁿ_i), (16)

where superscript n is the sequence index, subscript i is the component index of the x vector, and

φ(x_i, xⁿ_i) = |xⁿ_i|^p + Re{ p (xⁿ_i)* |xⁿ_i|^{p−2} (x_i − xⁿ_i) } + (1/2) p |xⁿ_i|^{p−2} |x_i − xⁿ_i|². (17)
It was shown in [28] that the sequence of solutions

x^{n+1} = argmin_x J(x, xⁿ) = ( A^H A + (λ/2) D(xⁿ) )^{−1} A^H y, (18)

where D(xⁿ) = diag{ p |xⁿ_i|^{p−2} }, converges to a solution to (7) as n → ∞. For the imaging problems considered here,
direct inversion of the matrix in (18) can be computationally intensive, and we utilize the conjugate gradient (CG) method to solve the inverse. The algorithm decomposes into a nested loop. The outer loop iterates on the solution xⁿ, and the inner loop is the CG loop that solves the inverse in (18). To arrive at an exact solution to the original optimization problem, the outer loop must be executed an infinite number of times. Here, we terminate the outer loop when the relative change in the original objective function is small between iterations, and we terminate the inner CG loop when the relative magnitude of the residual becomes small. The tolerances used here for algorithm termination were qualitatively chosen. These tolerances affect image quality and execution speed, but empirically there does not appear to be much improvement in image quality by decreasing tolerance past a certain level.
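The nested-loop iteration of (18) with a CG inner solve can be sketched as follows on a small dense problem. Fixed iteration counts stand in for the relative-change tolerances described above, a small eps guards |x|^{p−2} at zero, and all parameter values and the demo problem are illustrative.

```python
import numpy as np

def lp_mm(A, y, lam, p=1.0, outer_iters=15, cg_iters=20, eps=1e-8):
    """Sketch of the majorization-minimization iteration (18): each outer
    step solves (A^H A + (lam/2) D(x^n)) x = A^H y by conjugate gradients.
    Iteration counts and eps are illustrative choices."""
    N = A.shape[1]
    AHy = A.conj().T @ y
    x = AHy / N                                     # simple initialization
    for _ in range(outer_iters):
        d = p * np.maximum(np.abs(x), eps) ** (p - 2)   # diagonal of D(x^n)
        op = lambda v: A.conj().T @ (A @ v) + (lam / 2.0) * d * v
        # Inner loop: CG on the Hermitian positive definite system.
        x_new = np.zeros(N, dtype=complex)
        r = AHy.copy()
        q = r.copy()
        rs = np.vdot(r, r).real
        for _ in range(cg_iters):
            Aq = op(q)
            step = rs / np.vdot(q, Aq).real
            x_new += step * q
            r -= step * Aq
            rs_new = np.vdot(r, r).real
            if np.sqrt(rs_new) < 1e-12:
                break
            q = r + (rs_new / rs) * q
            rs = rs_new
        x = x_new
    return x

# Small demo: recover a 3-sparse complex vector from M = 80 measurements.
rng = np.random.default_rng(4)
A = (rng.standard_normal((80, 50)) + 1j * rng.standard_normal((80, 50))) / np.sqrt(80)
x_true = np.zeros(50, dtype=complex)
x_true[[5, 20, 44]] = [2.0, -1.5j, 1.0 + 1.0j]
y = A @ x_true + 0.01 * (rng.standard_normal(80) + 1j * rng.standard_normal(80))
x_hat = lp_mm(A, y, lam=0.05, p=1.0)
support = sorted(np.argsort(np.abs(x_hat))[-3:].tolist())
print(support)
```

In the imaging problems of Section IV, the dense products with A and A^H here would be replaced by the (nonuniform-grid) FFT operator.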
REFERENCES
[1] G. Titi, E. Zelnio, K. Naidu, R. Dilsavor, M. Minardi, N. Subotic,
R. Moses, L. Potter, L. Lin, R. Bhalla, and J. Nehrbass, Visual
SAR using all degrees of freedom, in Proc. MSS Tri-Service Radar
Symposium, Albuquerque, NM, June 21-25 2004.
[2] C. V. Jakowatz Jr., D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and
P. A. Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal
Processing Approach. Boston: Kluwer Academic Publishers, 1996.
[3] S. DeGraaf, 3-D fully polarimetric wide-angle superresolution-based SAR imaging, in Thirteenth Annual Adaptive Sensor Array Processing Workshop (ASAP 2005). Lexington, MA: MIT Lincoln Laboratory, June 7–8 2005.
[4] C. V. Jakowatz Jr. and D. Wahl, Three-dimensional tomographic imaging for foliage penetration using multiple-pass spotlight-mode SAR, in Signals, Systems and Computers, 2001. Conference Record of the Thirty-Fifth Asilomar Conference on, vol. 1, 2001, pp. 121–125.
[5] K. Knaell, Three-dimensional SAR from curvilinear apertures, in Proceedings of the 1996 IEEE National Radar Conference, May 1996, pp. 220–225.
[6] J. Li, Z. Bi, Z.-S. Liu, and K. Knaell, Use of curvilinear SAR for three-dimensional target feature extraction, IEE Proceedings Radar, Sonar and Navigation, vol. 144, no. 5, pp. 275–283, October 1997.
[7] S. Axelsson, Beam characteristics of three-dimensional SAR in curved or random paths, IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 10, pp. 2324–2334, October 2004.
[8] O. Frey, C. Magnard, M. Ruegg, and E. Meier, Focusing of airborne synthetic aperture radar data from highly nonlinear flight tracks, IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 6, pp. 1844–1858, June 2009.
[9] M. Stuff, M. Biancalana, G. Arnold, and J. Garbarino, Imaging moving objects in 3D from single aperture synthetic aperture radar, in Proc. IEEE 2004 Radar Conference, April 26–29 2004, pp. 94–98.
[10] W. G. Carrara, R. M. Majewski, and R. S. Goodman, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House, 1995.
[11] S. Xiao and D. C. Munson, Spotlight-mode SAR imaging of a three-dimensional scene using spectral estimation techniques, in Proceedings of IGARSS 98, vol. 2, 1998, pp. 624–644.
[12] Z. She, D. Gray, R. Bogner, and J. Homer, Three-dimensional SAR imaging via multiple pass processing, in IEEE International Geoscience and Remote Sensing Symposium, 1999 (IGARSS 99), vol. 5, 1999, pp. 2389–2391.
[13] A. Reigber and A. Moreira, First demonstration of airborne SAR tomography using multibaseline L-band data, IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 5, pp. 2142–2152, September 2000.
[14] G. Fornaro, F. Serafino, and F. Soldovieri, Three-dimensional focusing with multipass SAR data, IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 3, pp. 507–517, March 2003.
[15] F. Lombardini, M. Montanari, and F. Gini, Reflectivity estimation for multibaseline interferometric radar imaging of layover extended sources, IEEE Transactions on Signal Processing, vol. 51, no. 6, pp. 1508–1519, June 2003.
[16] F. Lombardini and A. Reigber, Adaptive spectral estimation for multibaseline SAR tomography with airborne L-band data, in Geoscience and Remote Sensing Symposium, 2003 (IGARSS 03), Proceedings, 2003 IEEE International, vol. 3, July 2003, pp. 2014–2016.
[17] F. Gini and F. Lombardini, Multibaseline cross-track SAR interferometry: a signal processing perspective, IEEE AES Magazine, vol. 20, no. 8, pp. 71–93, Aug 2005.
[18] S. Tebaldini, Single and multipolarimetric SAR tomography of forested areas: A parametric approach, IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 5, pp. 2375–2387, May 2010.
[19] X. X. Zhu and R. Bamler, Tomographic SAR inversion by ℓ1-norm regularization: the compressive sensing approach, IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3839–3846, October 2010.
[20] X. X. Zhu and R. Bamler, Very high resolution spaceborne SAR tomography in urban environment, IEEE Transactions on Geoscience and Remote Sensing, vol. PP, no. 99, pp. 1–13, 2010.
[21] E. Ertin, R. L. Moses, and L. C. Potter, Interferometric methods for 3-D target reconstruction with multi-pass circular SAR, IET Radar, Sonar and Navigation, vol. 4, no. 3, pp. 464–473, June 2010.
[22] F. Lombardini, Differential tomography: a new framework for SAR interferometry, IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 1, pp. 37–44, January 2005.
[23] R. Moses and L. Potter, "Noncoherent 2D and 3D SAR reconstruction from wide-angle measurements," in Thirteenth Annual Adaptive Sensor Array Processing Workshop (ASAP 2005). Lexington, MA: MIT Lincoln Laboratory, June 7–8, 2005.
[24] L. C. Potter and R. L. Moses, "Attributed scattering centers for SAR ATR," IEEE Transactions on Image Processing, vol. 6, no. 1, pp. 79–91, 1997.
[25] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
[26] M. Figueiredo, R. Nowak, and S. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, December 2007.
[27] M. Çetin and W. Karl, "Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization," IEEE Trans. on Image Processing, vol. 10, no. 4, pp. 623–631, April 2001.
[28] T. Kragh and A. Kharbouch, "Monotonic iterative algorithms for SAR image restoration," in IEEE 2006 Int. Conf. on Image Processing, October 2006, pp. 645–648.
[29] R. Moses, L. Potter, and M. Çetin, "Wide angle SAR imaging," in Algorithms for Synthetic Aperture Radar Imagery XI. Orlando, FL: SPIE Defense and Security Symposium, April 12–16, 2004.
[30] E. Ertin, L. Potter, and R. Moses, "Enhanced imaging over complete circular apertures," in Fortieth Asilomar Conf. on Signals, Systems and Computers (ACSSC '06), Oct. 29–Nov. 1, 2006, pp. 1580–1584.
[31] C. D. Austin and R. L. Moses, "Wide-angle sparse 3D synthetic aperture radar imaging for nonlinear flight paths," in IEEE National Aerospace and Electronics Conference (NAECON) 2008, July 16–18, 2008, pp. 330–336.
[32] C. D. Austin, E. Ertin, and R. L. Moses, "Sparse multipass 3D SAR imaging: Applications to the GOTCHA data set," in Algorithms for Synthetic Aperture Radar Imagery XVI, E. G. Zelnio and F. D. Garber, Eds. Orlando, FL: SPIE Defense and Security Symposium, April 13–17, 2009.
[33] E. Ertin, R. L. Moses, and L. C. Potter, "Interferometric methods for 3-D target reconstruction with multi-pass circular SAR," in 7th European Conference on Synthetic Aperture Radar (EUSAR 2008), Friedrichshafen, Germany, June 2–5, 2008.
[34] K. Naidu and L. Lin, "Data dome: full k-space sampling data for high-frequency radar research," in Algorithms for Synthetic Aperture Radar Imagery XI. Orlando, FL: SPIE Defense and Security Symposium, April 12–16, 2004.
[35] C. H. Casteel, L. A. Gorham, M. J. Minardi, S. Scarborough, and K. D. Naidu, "A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment," in Algorithms for Synthetic Aperture Radar Imagery XIV, E. G. Zelnio and F. D. Garber, Eds. Orlando, FL: SPIE Defense and Security Symposium, April 9–13, 2007.
[36] D. E. Dudgeon, R. T. Lacoss, C. H. Lazott, and J. G. Verly, "Use of persistent scatterers for model-based recognition," in Algorithms for Synthetic Aperture Radar Imagery (Proc. SPIE 2230), D. A. Giglio, Ed., 1994, pp. 356–368.
[37] R. Bhalla, J. Moore, and H. Ling, "A global scattering center representation of complex targets using the shooting and bouncing ray technique," IEEE Trans. on Antennas and Propagation, vol. 45, no. 6, pp. 1850–1856, 1997.
[38] D. Rossi and A. Willsky, "Reconstruction from projections based on detection and estimation of objects, Parts I and II: Performance analysis and robustness analysis," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 32, pp. 886–906, 1984.
[39] R. L. Moses, E. Ertin, and C. Austin, "Synthetic aperture radar visualization," in Proceedings of the 38th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2004.
[40] K. E. Dungan and L. C. Potter, "Classifying sets of attributed scattering centers using a hash coded database," in Algorithms for Synthetic Aperture Radar Imagery XVII, E. G. Zelnio and F. D. Garber, Eds. Orlando, FL: SPIE Defense and Security Symposium, April 5–9, 2010.
[41] ——, "Classifying transformation-variant attributed point patterns," Pattern Recognition, vol. 43, no. 11, pp. 3805–3816, November 2010.
[42] K. Varshney, M. Çetin, J. Fisher, and A. Willsky, "Sparse representation in structured dictionaries with application to synthetic aperture radar," IEEE Transactions on Signal Processing, vol. 56, no. 8, pp. 3548–3561, August 2008.
[43] I. Stojanovic, M. Çetin, and W. C. Karl, "Joint space-aspect reconstruction of wide-angle SAR exploiting sparsity," in Algorithms for Synthetic Aperture Radar Imagery XV. Orlando, FL: SPIE Defense and Security Symposium, March 17–18, 2008.
[44] J. A. Jackson and R. L. Moses, "An algorithm for 3D target scatterer feature estimation from sparse SAR apertures," in Algorithms for Synthetic Aperture Radar Imagery XVI (Proc. SPIE vol. 7337), E. G. Zelnio and F. D. Garber, Eds., 2009.
[45] Air Force Research Laboratory. (2010, January) Backhoe sample public release and Visual-D challenge problem. [Online]. Available: https://www.sdms.afrl.af.mil/request/data request.php#Visual-D
[46] ——. (2010, January) Gotcha 2D/3D imaging challenge problem. [Online]. Available: https://www.sdms.afrl.af.mil/datasets/gotcha/
[47] E. Candès and J. Romberg, "ℓ1-MAGIC: Recovery of sparse signals via convex programming," California Institute of Technology, Tech. Rep., October 2005.
[48] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, January 2009.
[49] I. Daubechies, M. Defrise, and C. D. Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Comm. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, 2004.
[50] T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
[51] R. Saab, R. Chartrand, and Ö. Yılmaz, "Stable sparse approximations via nonconvex optimization," in 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008.
[52] L. Greengard and J.-Y. Lee, "Accelerating the nonuniform Fast Fourier Transform," SIAM Review, vol. 43, no. 3, pp. 443–454, 2004.
[53] J. Fessler and B. Sutton, "Nonuniform Fast Fourier Transforms using min-max interpolation," IEEE Transactions on Signal Processing, vol. 51, no. 2, pp. 560–574, February 2003.
[54] N. Ramakrishnan, E. Ertin, and R. Moses, "Enhancement of coupled multichannel images using sparsity constraints," IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 2115–2126, August 2010.
[55] P. Stoica and R. Moses, Spectral Analysis of Signals. Prentice Hall, 2005.
[56] M. Wax and T. Kailath, "Detection of signals by information theoretic criteria," IEEE Trans. ASSP, vol. 33, pp. 387–392, April 1985.
[57] D. N. Lawley, "Tests of significance of the latent roots of the covariance and correlation matrices," Biometrika, vol. 43, pp. 128–136, 1956.
[58] D. Malioutov, M. Çetin, and A. Willsky, "Homotopy continuation for sparse signal representation," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '05), vol. 5, March 2005, pp. 733–736.
[59] Ö. Batu and M. Çetin, "Hyper-parameter selection in non-quadratic regularization-based radar image formation," in Algorithms for Synthetic Aperture Radar Imagery XV. Orlando, FL: SPIE Defense and Security Symposium, March 17–20, 2008.
[60] C. Austin, R. Moses, J. Ash, and E. Ertin, "On the relation between sparse reconstruction and parameter estimation with model order selection," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 3, pp. 560–570, June 2010.
[61] R. Moses, P. Adams, and T. Biddlecome, "Three-dimensional target visualization from wide-angle IFSAR data," in Algorithms for Synthetic Aperture Radar Imagery XII. Orlando, FL: SPIE Defense and Security Symposium, March 28–April 1, 2005.
[62] E. Ertin, C. D. Austin, S. Sharma, R. L. Moses, and L. C. Potter, "GOTCHA experience report: Three-dimensional SAR imaging with complete circular apertures," in Algorithms for Synthetic Aperture Radar Imagery XIV, E. G. Zelnio and F. D. Garber, Eds. Orlando, FL: SPIE Defense and Security Symposium, April 9–13, 2007.
Christian D. Austin (S'02) received the B.E. degree in Computer Engineering and the B.S. degree in Mathematics from the State University of New York (SUNY) at Stony Brook in 2003, and the M.S. degree in Electrical Engineering from The Ohio State University, Columbus, OH, in 2006. He is currently pursuing the Ph.D. degree in Electrical Engineering at The Ohio State University. His research interests include statistical signal processing, compressive sensing, and synthetic aperture radar.
Emre Ertin is a Research Assistant Professor with the Department of Electrical and Computer Engineering at The Ohio State University. He received the B.S. degree in Electrical Engineering and Physics from Boğaziçi University, Turkey, in 1992, the M.Sc. degree in Telecommunication and Signal Processing from Imperial College, U.K., in 1993, and the Ph.D. degree in Electrical Engineering from The Ohio State University in 1999. From 1999 to 2002, he was with the Core Technology Group at Battelle Memorial Institute. His current research interests are statistical signal processing, wireless sensor networks, radar signal processing, biomedical sensors, and distributed optimization and control.
Randolph L. Moses (S'78–M'85–SM'90) received the B.S., M.S., and Ph.D. degrees in electrical engineering from Virginia Polytechnic Institute and State University in 1979, 1980, and 1984, respectively. During summer 1983, he was a SCEEE Summer Faculty Research Fellow with Rome Air Development Center, Rome, NY. From 1984 to 1985, he was with the Eindhoven University of Technology, Eindhoven, The Netherlands, as a NATO Postdoctoral Fellow. Since 1985, he has been with the Department of Electrical Engineering, The Ohio State University, Columbus, where he is currently a professor and serves as Director of the Institute for Sensing Systems. From 1994 to 1995, he was on sabbatical leave as a visiting researcher with the System and Control Group, Uppsala University, Sweden. His research interests are in time series analysis, radar signal processing, sensor array processing, and sensor networks. Dr. Moses is an Associate Editor for the IEEE TRANSACTIONS ON IMAGE PROCESSING and serves on the Sensor Array and Multichannel (SAM) Technical Committee of the IEEE Signal Processing Society. He is a member of Eta Kappa Nu, Tau Beta Pi, Phi Kappa Phi, and Sigma Xi.