Chien-Chia Chen
The reconstruction formula complementing the above sampling theorem is

    f(x) = Σ_{k∈Z} f(kT) sinc(x/T − k),    (1)

where the equidistant samples of f(x) may be interpreted as coefficients of some basis functions obtained by appropriate shifting and rescaling of the sinc function: sinc(x) = sin(πx)/(πx). Dr. Shannon's paper shows that eq. (1) is exact if f(x) is bandlimited to π/T, which is called the Nyquist rate, a term coined by Shannon in recognition of Nyquist's important contributions to communication theory [3].

Recently, sub-Nyquist sampling has attracted a lot of attention from both mathematicians and computer scientists. Sub-Nyquist sampling, also known as compressive sampling or compressed sensing, refers to the problem of recovering signals from far fewer samples than the Nyquist rate suggests. Compressive sensing is of interest in scenarios where sampling at the Nyquist rate is either not feasible or not efficient. For example, in sensor networks the number of sensors may be limited. Measurements might also be considerably expensive, as in certain imaging processes via neutron scattering. Or the sampling rate may be bounded by hardware limitations, so that sampling as fast as the Nyquist rate is not achievable. These circumstances raise important questions. Is an accurate recovery possible if we have only far fewer samples than the Nyquist rate suggests? How can one approximate the signal based on this information?

One formulation is the sparsest-solution problem

    (P0)  min_{v∈ℝ^m} ‖v‖₀  subject to  Φv = y,    (2)

where Φ is the coding/sampling matrix with rows φ_j ∈ ℝ^m. However, such a minimization problem is NP-hard [11, 27], since it requires an exhaustive search over all subsets of columns of Φ. Fortunately, a number of studies [9-12] have shown that P0 is equivalent to an l1-norm minimization problem, which can be recast as a linear program. Several upper bounds on sparsity and coherence under different conditions are given in [7-8, 13].

This paper conducts a survey of the above sub-Nyquist sampling techniques, from the fundamental theory and signal recovery through to recent applications. The rest of this survey is organized as follows. Section 2 introduces several important foundations of compressive sensing. Section 3 briefly describes a number of signal recovery techniques for compressive sensing. Innovative applications of compressive sensing in a variety of areas are presented in Section 4, and Section 5 concludes the paper.

2. COMPRESSIVE SENSING BASICS

2.1 Sparsity
Sparsity is important in compressive sensing as it determines how efficiently one can acquire signals nonadaptively. The most common definition of sparsity used in compressive sensing is as follows. Let the vector u ∈ ℝ^m denote the recorded values of a signal f(t), which is expanded in an orthonormal basis Ψ = [ψ₁ ψ₂ ⋯ ψ_m] as follows:
    v = Σ_{i=1}^{m} x_i ψ_i,    (3)

where x is the coefficient sequence of v, x_i = ⟨v, ψ_i⟩. Let Ψ denote the m×m matrix with ψ₁ ⋯ ψ_m as columns. If a signal has a sparse expansion, discarding small coefficients will not cause much perceptual loss. A signal is called K-sparse if its expansion has at most K nonzero entries. This principle is what underlies most modern lossy coders such as JPEG-2000 [14] and many others.

2.2 Coherence
Let the pair (Φ, Ψ) be orthobases of ℝ^m, where Φ is the sensing modality and Ψ is the signal model. The definition of coherence between the sensing basis Φ and the representation basis Ψ given in [15] is

    μ(Φ, Ψ) = √m · max_{1≤k,j≤m} |⟨φ_k, ψ_j⟩|.    (4)

This coherence stands for the largest correlation between any two elements of Φ and Ψ. The coherence will be large if they contain correlated elements; otherwise, it is small. For μ to be close to its minimum value of 1, each of the measurement vectors (rows of Φ) must be spread out in the Ψ domain. [15] proves that a K-sparse signal can be reconstructed from K·log(m) measurements in any domain where the test vectors are flat, i.e., the coherence is O(1).

2.3 Random Sampling
The concepts of sparsity and incoherence quantify the compressibility of a signal. A signal is more compressible if it is sparser in some representation domain Ψ that is less coherent to the sensing domain Φ. However, such compression is obviously infeasible to perform on most sensing devices, as transforming from a dense domain to a sparse domain can be computationally expensive.

Fortunately, it turns out that random matrices are largely incoherent with any fixed basis [4]. With high probability, a uniformly randomly selected orthobasis Φ, which can be obtained by orthonormalizing m vectors sampled independently and uniformly on the unit sphere, has a coherence of about √(2 log m) with any fixed basis Ψ. Also, random waveforms with independent identically distributed (i.i.d.) entries, e.g., Gaussian or Bernoulli random entries, exhibit a very low coherence with any fixed representation Ψ [4]. The general rule for making measurements Φ incoherent in the sparsity domain Ψ is to make Φ unstructured with respect to Ψ. Taking random measurements is in some sense an optimal strategy for acquiring sparse signals [15]. It requires a near-minimal number of measurements [6, 7, 13, 16, 17], and all of the constants appearing in the analysis are small [18].

The following theorem, proposed in [15], proves the above observation.

Theorem 2 [15]: Fix u ∈ ℝ^m and suppose that the coefficient sequence x of u in the basis Ψ is K-sparse. Select n measurements in the Φ domain uniformly at random. Then if

    n ≥ C · μ²(Φ, Ψ) · K · log m    (5)

for some positive constant C, the sparsest solution to recover u is exact with overwhelming probability.

In other words, if a signal is known to be K-sparse in some domain Ψ, then by randomly selecting samples in the Φ domain, the minimum number of selected samples n is O(C · μ²(Φ, Ψ) · K · log m). This theorem suggests a practical acquisition strategy that moves the compression step from the sensors to the post-processing computers. Sensors can simply sample nonadaptively in an incoherent domain, which essentially acquires the signal in a compressed form. All that is needed is a decoder, usually a non-power-constrained computer, to decompress the data. If the signal happens to be sufficiently sparse, exact recovery occurs.

However, this brings up another potential issue: the quality of recovery now depends on the quality of the random number generators.¹ A naive way to select n measurements in the Φ domain uniformly at random is to first collect enough samples and then drop the redundant ones. Such an approach is not preferred, as it wastes power on sensing and space on storing samples. Another way is to randomly generate the interval until the next sample, such that the samples are equivalent to being selected uniformly or normally. This approach allows sensors to sample only the desired measurements, and thus the sampling rate is indeed reduced. This approach, however, raises the potential problem that the quality of the simulated uniform random variables may not match that of the previous approach, which might degrade the quality of the recovered signal. There are also random sampling schemes that encode a number of samples into a single measurement, such as random linear sampling or Bernoulli random sampling.

¹ Note that, to the best of the author's knowledge, there is no reference regarding this issue; all of this was learnt by the author during work on a course project on a compressive sensing framework on TinyOS.

3. CS RECOVERY ALGORITHMS
Now that the previous section has depicted the fundamental theory of compressed sensing, this section discusses several compressed sensing recovery (data decompression) algorithms.

3.1 l1-norm Minimization
Since [4], several studies [6, 7, 18] have demonstrated the effectiveness of l1-norm minimization for recovering sparse signals from a limited number of measurements. Following the notation of P0 given in Section 1, the l1-norm minimization problem can be formulated as follows:

    (P1)  min_{v∈ℝ^m} ‖v‖₁  subject to  Φv = y.    (6)

It is known that problem (P1) can be solved by linear programming techniques. Several algorithms, such as fixed-point continuation (FPC) [19], interior point methods, and homotopy methods, have been proposed specifically for solving the above l1-norm minimization problem. Recovery algorithms based on l1-norm minimization are mostly accurate and robust for reconstructing sparse signals, though some of them may take a long time to obtain the sparsest solution.

Based on the l1-norm minimization recovery algorithm, it has been shown in [20] that exact reconstruction occurs for a K-sparse signal when K < (1 + 1/M)/2, where M is the coherence parameter of the measurement matrix Φ and the signal model Ψ. This allows one to build rather simple deterministic matrices Φ with K on the order of √n. Some later work [21-22] further proves the existence of matrices Φ with K on the order of n/log(m/n), which is substantially larger than √n.

… the proposed recovery algorithm and the LARS algorithm with LASSO modification [25]. Note that the constant K stands for a K-sparse signal.
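The coherence parameter of eq. (4) is straightforward to evaluate numerically. The sketch below treats basis elements as matrix columns and uses two illustrative pairs: the spike/Fourier pair, whose coherence is exactly 1 (maximal incoherence), and a random orthobasis against the spike basis, whose coherence lands on the order of √(2 log m), as stated in Section 2.3. The size m = 256 and the seed are arbitrary choices for illustration:

```python
import numpy as np

def coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(m) * max_{k,j} |<phi_k, psi_j>|  (eq. (4)).
    Basis elements are taken as the columns of Phi and Psi."""
    m = Phi.shape[0]
    return np.sqrt(m) * np.max(np.abs(Phi.conj().T @ Psi))

m = 256
spike = np.eye(m)                           # identity (spike) basis
fourier = np.fft.fft(spike, norm="ortho")   # orthonormal DFT basis

# Spike vs. Fourier: every inner product has magnitude 1/sqrt(m), so mu = 1.
mu_sf = coherence(spike, fourier)

# A uniformly random orthobasis (QR of a Gaussian matrix) vs. the spike basis:
# with high probability mu is on the order of sqrt(2 log m).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
mu_rq = coherence(Q, spike)

print(f"mu(spike, Fourier) = {mu_sf:.3f}")
print(f"mu(random, spike)  = {mu_rq:.3f}  (sqrt(2 log m) = {np.sqrt(2*np.log(m)):.3f})")
```

Plugging either value of μ into the sample-count bound (5) shows why incoherent sampling pays off: the required n scales with μ², so the maximally incoherent spike/Fourier pair needs the fewest measurements.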
Fig. 1 shows an example of l1 recovery from [4]. Fig. 1(a) is the original sparse real-valued signal. Fig. 1(b) is the reconstruction from 60 (complex-valued) Fourier coefficients by l1-norm minimization. It turns out the reconstruction is exact.
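Recovery of this kind can be reproduced with an off-the-shelf LP solver: splitting v into nonnegative parts v = p − q turns (P1) into a standard-form linear program. The sketch below is a minimal illustration assuming NumPy/SciPy, with Gaussian measurements instead of the Fourier samples of Fig. 1; the sizes m, n, K are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, K = 128, 40, 5          # signal length, measurements, sparsity (illustrative)

# K-sparse ground-truth signal and an i.i.d. Gaussian sensing matrix Phi
v_true = np.zeros(m)
support = rng.choice(m, K, replace=False)
v_true[support] = rng.standard_normal(K)
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
y = Phi @ v_true

# (P1) min ||v||_1 s.t. Phi v = y, rewritten with v = p - q, p, q >= 0:
#      min 1'p + 1'q  s.t.  [Phi, -Phi] [p; q] = y
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
v_hat = res.x[:m] - res.x[m:]

print("recovery error:", np.linalg.norm(v_hat - v_true))
```

For sizes like these, with n comfortably above K·log(m/K), the LP typically returns the K-sparse signal exactly, mirroring the exact recovery seen in Fig. 1.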
3.4.3 l1 Filtering
All of the above algorithms are for the most part static: they focus on finding the solution for a fixed set of measurements. A recent work proposes a method for quickly updating the solution of certain l1-norm minimization problems as new measurements are added [33]. The authors propose the l1 filter, which can be implemented using standard techniques from numerical linear algebra. Their scheme is homotopy based: as new measurements are added to the system, instead of solving the updated problem directly, a series of simple intermediate problems leading to the desired solution is solved. Fig. 4 shows the average number of iterations per new measurement, which indicates that their scheme is fairly efficient at updating the solution.
Fig. 4 Average number of homotopy steps with one new measurement at different sparsity levels (n = 256, m = 150).
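The l1 filter itself involves more machinery than fits here, but the payoff of warm-starting from the previous solution when a new measurement arrives can be illustrated with a much simpler solver. The sketch below uses iterative soft-thresholding (ISTA, in the spirit of the thresholding algorithms of [31]) on the LASSO relaxation of the problem; it is an illustrative stand-in under assumed sizes and a hand-picked λ, not the algorithm of [33]:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, v0, tol=1e-7, max_iter=100000):
    """Minimize 0.5*||Phi v - y||^2 + lam*||v||_1 by ISTA, starting from v0.
    Returns the solution and the number of iterations used."""
    v = v0.copy()
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L, L = Lipschitz const of gradient
    for it in range(1, max_iter + 1):
        v_next = soft(v - step * Phi.T @ (Phi @ v - y), lam * step)
        if np.linalg.norm(v_next - v) < tol:
            return v_next, it
        v = v_next
    return v, max_iter

rng = np.random.default_rng(1)
m, n0, K = 64, 30, 4                       # illustrative sizes
v_true = np.zeros(m)
v_true[rng.choice(m, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((n0, m)) / np.sqrt(n0)
lam = 0.01

v, _ = ista(Phi, Phi @ v_true, lam, np.zeros(m))
for _ in range(5):                          # five new measurements arrive one by one
    row = rng.standard_normal((1, m)) / np.sqrt(n0)
    Phi = np.vstack([Phi, row])
    y = Phi @ v_true
    v_warm, it_warm = ista(Phi, y, lam, v)              # warm start: old solution
    v_cold, it_cold = ista(Phi, y, lam, np.zeros(m))    # cold start, for comparison
    v = v_warm
    print(f"warm: {it_warm:6d} iters   cold: {it_cold:6d} iters")
```

On typical runs the warm-started solve converges in far fewer iterations than the cold start, since one extra row barely moves the minimizer; the homotopy scheme of [33] exploits the same observation far more aggressively.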
4. APPLICATIONS OF CS
As the fundamental theory is now mostly settled, the number of compressed sensing based applications has grown rapidly in recent years. The application areas span sensor networks, image processing, medical imaging, compressive radar, astronomy, communications, remote sensing, robotics, and control. This section selects and presents a number of the most recent and (from the author's point of view) most innovative applications.
Fig. 8 Reconstruction from 5-fold accelerated acquisition of first-pass contrast-enhanced abdominal angiography. (a) Reconstruction from a complete data set. (b) LR. (c) ZF-w/dc. (d) CS reconstruction from random undersampling. The patient has an aorto-bifemoral bypass graft, which is meant to carry blood from the aorta to the lower extremities. There is a high-grade stenosis in the native right common iliac artery, indicated by the arrows. In figure parts (a) and (d) flow across the stenosis is visible, but it is not in (b) and (c).

6. ACKNOWLEDGMENTS
Special thanks to the DSP group at Rice University for their great paper-collection page on compressed sensing [45]. Most of the papers described in this survey were found on their website. Readers interested in more detail are encouraged to further explore their website for more literature.

7. REFERENCES
[1] C. E. Shannon, "Communication in the presence of noise," Proc. IRE, vol. 37, pp. 10-21, 1949.
[2] C. E. Shannon, "Classic paper: Communication in the presence of noise," Proc. IEEE, vol. 86, no. 2, pp. 447-457, 1998.
[3] H. Nyquist, "Certain topics in telegraph transmission theory," Trans. Amer. Inst. Elect. Eng., vol. 47, pp. 617-644, 1928.
[4] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, 2008.
[5] E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[6] E. Candes and T. Tao, "Near optimal signal recovery from random projections: universal encoding strategies," IEEE Trans. Information Theory, vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[7] D. Donoho, "Compressed sensing," IEEE Trans. Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[8] B. S. Kashin and V. N. Temlyakov, "A remark on compressed sensing," Mathematical Notes, vol. 82, no. 5-6, pp. 748-755, Nov. 2007.
[9] D. L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition," IEEE Trans. Information Theory, vol. 47, pp. 2845-2862, 2001.
[10] M. Elad and A. M. Bruckstein, "A generalized uncertainty principle and sparse representation in pairs of R^N bases," IEEE Trans. Information Theory, vol. 48, pp. 2558-2567, 2002.
[11] R. Gribonval and M. Nielsen, "Sparse representations in unions of bases," IEEE Trans. Information Theory, vol. 49, pp. 3320-3325, 2003.
[12] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," technical report, The University of Texas at Austin, 2003.
[13] E. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Information Theory, vol. 51, no. 12, pp. 4203-4215, Dec. 2005.
[14] D. S. Taubman and M. W. Marcellin, JPEG 2000: Image Compression Fundamentals, Standards and Practice. Norwell, MA: Kluwer, 2001.
[15] E. Candes and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, no. 3, pp. 969-985, 2007.
[16] R. G. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253-263, Dec. 2008.
[17] D. L. Donoho, "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Comm. Pure Appl. Math., vol. 59, no. 6, pp. 797-829, 2006.
[18] D. L. Donoho and J. Tanner, "Neighborliness of randomly-projected simplices in high dimensions," Proc. Natl. Acad. Sci., vol. 102, no. 27, pp. 9452-9457, 2005.
[19] E. T. Hale, W. Yin, and Y. Zhang, "A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing," Rice University CAAM Tech. Report TR07-07, Jul. 2007.
[20] D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Trans. Information Theory, vol. 52, pp. 6-18, 2006.
[21] R. DeVore, "Optimal computation," ICM Proceedings, Madrid, vol. 1, pp. 187-215, 2006.
[22] E. Candes, "Compressive sampling," ICM Proceedings, Madrid, vol. 3, pp. 1433-1452, 2006.
[23] P. Boufounos and R. G. Baraniuk, "1-bit compressive sensing," in Proc. Conference on Information Sciences and Systems, Mar. 2008.
[24] P. Boufounos and R. G. Baraniuk, "Reconstructing sparse signals from their zero crossings," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 08), 2008.
[25] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, no. 2, pp. 407-499, Apr. 2004.
[26] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition," in Proc. 27th Annu. Asilomar Conference on Signals, Systems, and Computers, vol. 1, pp. 40-44, Nov. 1993.
[27] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Constr. Approximation, vol. 13, pp. 57-98, 1997.
[28] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Information Theory, vol. 53, no. 12, Dec. 2007.
[29] Y. E. Nesterov and A. S. Nemirovski, Interior Point Polynomial Algorithms in Convex Programming, SIAM, 1994.
[30] S. Kunis and H. Rauhut, "Random sampling of sparse trigonometric polynomials, II. Orthogonal matching pursuit versus basis pursuit," Foundations of Computational Mathematics, vol. 8, no. 6, Nov. 2008.
[31] M. Fornasier and H. Rauhut, "Iterative thresholding algorithms," Applied and Computational Harmonic Analysis, vol. 25, no. 2, pp. 187-208, Sep. 2008.
[32] A. Chambolle and P.-L. Lions, "Image recovery via total variation minimization and related problems," Numerische Mathematik, vol. 76, no. 2, pp. 167-188, Apr. 1997.
[33] M. S. Asif and J. Romberg, "Streaming measurements in compressive sensing: l1 filtering," in Proc. Asilomar Conference on Signals, Systems and Computers, Oct. 2008.
[34] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, Mar. 2008.
[35] R. Baraniuk and P. Steeghs, "Compressive radar imaging," in Proc. IEEE Radar Conference, Apr. 2007.
[36] L. Potter, P. Schniter, and J. Ziniel, "Sparse reconstruction for RADAR," SPIE Algorithms for Synthetic Aperture Radar Imagery XV, 2008.
[37] S. R. Bhaskaran, L. Davis, A. Grant, S. Hanly, and P. Tune, "Downlink scheduling using compressed sensing," in Proc. Information Theory Workshop (ITW), 2009.
[38] M. Sheikh, O. Milenkovic, and R. Baraniuk, "Compressed sensing DNA microarrays," Rice ECE Department Technical Report TREE 0706, May 2007.
[39] W. Dai, M. Sheikh, O. Milenkovic, and R. Baraniuk, "Compressive sensing DNA microarrays," EURASIP Journal on Bioinformatics and Systems Biology, 2009.
[40] N. Shental, A. Amir, and O. Zuk, "Rare-allele detection using compressed se(que)nsing," CoRR, 2009.
[41] M. F. Duarte, M. B. Wakin, D. Baron, and R. G. Baraniuk, "Universal distributed sensing via random projections," in Proc. IPSN 2006.
[42] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182-1195, Dec. 2007.
[43] C. Qiu, W. Lu, and N. Vaswani, "Real-time dynamic MR image reconstruction using Kalman filtered compressed sensing," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Taipei, Taiwan, Apr. 2009.
[44] H. Jung and J. C. Ye, "Performance evaluation of accelerated functional MRI acquisition using compressed sensing," in Proc. IEEE International Symposium on Biomedical Imaging (ISBI), pp. 702-705, Jun. 2009.
[45] Compressive Sensing Resources, http://www.compressedsensing.com/