The central problems in compressed sensing can be framed in terms of linear algebra. In this model, a signal is a vector v in some high-dimensional vector space R^N. The sampling process can be described as multiplication by a specially chosen n × N matrix Φ, called the sensing matrix. Typically we will have n ≪ N, so that the problem of recovering v from Φv is massively under-determined.
However, for k-sparse vectors v (i.e. v contains at most k non-zero entries) it turns out that we can recover v from y = Φv if Φ has certain properties, as detailed by Candès, Donoho, and Tao.
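As an illustration of the recovery problem (this is not the paper's code), the following Python sketch recovers a k-sparse v from n ≪ N Gaussian measurements by ℓ1 minimisation, written as a linear program; the dimensions, seed, and choice of a Gaussian Φ are arbitrary example values.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 50, 200, 5

# A Gaussian sensing matrix and a k-sparse signal
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
v = np.zeros(N)
v[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ v

# l1 minimisation as a linear program over variables (x, t):
#   minimise sum(t)  subject to  -t <= x <= t  and  Phi x = y
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])          # x - t <= 0 and -x - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((n, N))])     # Phi x = y (t unconstrained here)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print("max recovery error:", np.abs(x_hat - v).max())
```

Despite the system being under-determined, the ℓ1 solution coincides with the sparse v for matrices of this kind.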

Definition: If Φ and Φ' are matrices such that Φ'_{i,j} = Φ_{i,j} for every non-zero entry of Φ', then we say that Φ' is a sparsification of Φ.
The density δ(Φ) of Φ is the proportion of entries of Φ which are non-zero.
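Concretely, a sparsification can be produced by zeroing entries at random. The Python helper below is a hypothetical illustration (the names sparsify and density are mine, and the paper's Sp(Φ, δ) construction may select entries differently):

```python
import numpy as np

def sparsify(Phi, delta, rng):
    """Zero out entries of Phi independently so that roughly a
    delta-fraction of entries survive. (Illustrative helper; the
    paper's Sp(Phi, delta) need not be defined this way.)"""
    mask = rng.random(Phi.shape) < delta
    return Phi * mask

def density(Phi):
    """Proportion of entries of Phi which are non-zero."""
    return np.count_nonzero(Phi) / Phi.size

rng = np.random.default_rng(1)
Phi = rng.standard_normal((100, 1000))
Phi5 = sparsify(Phi, 0.05, rng)
print("density:", round(density(Phi5), 3))
```

Every non-zero entry of the result agrees with Φ, so the output is a sparsification in the sense of the definition above.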

In my paper with Ó Catháin and Zhao, Sparsification of Matrices and Compressed Sensing, we analyse the effect of the sparsification of the sensing matrix Φ on its recovery capabilities.
The graph below is similar to the motivational figure from our paper. It shows that the sparsified matrices (δ=0.05 and 0.1) outperform the original matrix (δ=1).
We also include the interesting section of the table which was used to generate the graph. Entries marked with * are those where at most 2% of the 500 recovery attempts failed.


Signal Recovery Comparison of Sp(Φ,1), Sp(Φ,0.1), Sp(Φ,0.05)

k-sparsity   δ=1    δ=0.1   δ=0.05
    37       499*   500*    499*
    38       498*   500*    499*
    39       498*   499*    497*
    40       486    500*    499*
    41       479    498*    500*
    42       469    493*    499*
    43       449    489     498*
    44       422    487     496*
    45       385    472     495*
    46       359    451     486
    47       323    419     481
    48       274    394     469
    49       237    338     444
    50       186    308     400
    51       134    262     382
    52       104    207     345
    53        76    151     293
    54        48    126     245
    55        34     91     173
    56        24     67     160
    57        13     47      97
    58         7     28      80
    59         5     30      58
    60         2      9      35
The Matlab files used to generate the relevant data and these graphs can be found here, along with further information on parameters etc.


Algorithms

We considered three different algorithms in the paper: the standard linear programming solver in Matlab, linprog; Orthogonal Matching Pursuit (OMP, see Tropp and Gilbert's paper); and Needell and Tropp's CoSaMP.
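OMP is short enough to sketch in full. The Python version below is a simplified illustration (not the implementation benchmarked in the paper); it greedily adds the column most correlated with the residual, then re-fits by least squares on the support found so far.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit sketch: pick the column most correlated
    with the current residual, then solve least squares on the support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, N, k = 80, 200, 3
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
v = np.zeros(N)
v[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ v, k)
print("max error:", np.abs(x_hat - v).max())
```

Note that k is passed in explicitly: the algorithm needs the sparsity of the target vector up front.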
Both OMP and CoSaMP take as input a priori knowledge of the sparsity of the vector which is to be recovered, and are significantly faster than linprog. All three, however, exhibit some degree of improvement in recovery for sparsified sensing matrices. Figure 2 illustrates this.
Signal Recovery Comparison of LP, CoSaMP and OMP for varying levels of sparsity

Algorithm   δ=1   δ=0.9  δ=0.8  δ=0.7  δ=0.6  δ=0.5  δ=0.4  δ=0.3  δ=0.2  δ=0.1  δ=0.05  δ=0.03  δ=0.02  δ=0.01
LP           40     40     40     40     40     40     40     41     40     43     46      45      37       3
OMP           6      8      8      9     11     12     13     15     17     16     11       5       2       1
CoSaMP       27     30     35     37     37     40     45     47     50     52     49      25       1       1
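CoSaMP admits a similarly compact sketch. This Python version follows the broad structure of Needell and Tropp's algorithm (merge the strongest correlations into the support, solve least squares, prune back to k terms), though the iteration cap and stopping rule here are simplified example choices:

```python
import numpy as np

def cosamp(Phi, y, k, iters=30):
    """Simplified CoSaMP sketch: merge the 2k strongest correlations into
    the current support, fit by least squares, then keep the k largest."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        residual = y - Phi @ x
        if np.linalg.norm(residual) < 1e-10:
            break
        proxy = np.abs(Phi.T @ residual)
        omega = np.argsort(proxy)[-2 * k:]                  # 2k largest correlations
        T = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)   # least squares on merged support
        x = np.zeros(N)
        keep = np.argsort(np.abs(b))[-k:]                   # prune to k largest entries
        x[T[keep]] = b[keep]
    return x

rng = np.random.default_rng(3)
n, N, k = 80, 200, 3
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
v = np.zeros(N)
v[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = cosamp(Phi, Phi @ v, k)
print("max error:", np.abs(x_hat - v).max())
```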



Speed

In the two graphs below, we examine the effect of sparsification on the run-time of the algorithms. For a range of vector sparsities, we timed 100 recovery attempts from 100 × 1000 sensing matrices, using both the standard linear programming solver in Matlab and the specialised CoSaMP algorithm developed by Needell and Tropp. We see that CoSaMP is significantly faster than the linear programming solver, and that the sparsified matrices recover noticeably more vectors than the unsparsified ones.
These timing computations were run on an Intel(R) Core(TM) i5-3570 @ 3.4GHz with 8GB of RAM.
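Part of the speed story is that each iteration of these solvers multiplies by Φ or its transpose, and that multiplication gets cheaper when Φ is stored sparsely. A rough, self-contained Python illustration (not the Matlab benchmark code used for the figures; the sizes and density are example values):

```python
import time
import numpy as np
from scipy import sparse

rng = np.random.default_rng(4)
Phi = rng.standard_normal((100, 1000))
Phi_sp = Phi * (rng.random(Phi.shape) < 0.05)  # roughly 5% of entries survive
Phi_csr = sparse.csr_matrix(Phi_sp)            # store only the non-zero entries
x = rng.standard_normal(1000)

t0 = time.perf_counter()
for _ in range(2000):
    Phi @ x                                    # dense matrix-vector product
t_dense = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(2000):
    Phi_csr @ x                                # sparse matrix-vector product
t_sparse = time.perf_counter() - t0

print(f"dense: {t_dense:.4f}s, sparse: {t_sparse:.4f}s")
```

The two products agree on every input; only the storage format, and hence the cost per multiplication, differs.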