Compressive sensing – the most magical idea in signal processing

We learn about the Nyquist rate and the sampling theorem in undergraduate studies. At least some of us might have felt it is a bit of an overkill. Why do we always sample at twice the highest frequency of the signal? Can't we make use of some of the innate patterns of the signal and reduce the sampling frequency? In Nyquist sampling, we reconstruct the signal using sinc functions. What if we had some other set of functions for reconstruction?

Compressive sensing techniques actually formalize this idea and can reduce the number of sampling points required considerably. The key insight is that the signal is very sparse in some basis we already know! From a practical point of view, we know some matrix such that multiplying it by a sparse coefficient vector gives back the signal with very high accuracy. We can then solve an optimization problem that enforces both the reconstruction and the sparsity of the coefficients, so that we recover the signal.

In a nutshell, the idea can be explained in the following three steps (subject to conditions which I won't explain here):

\textbf{s} = \textbf{M}\textbf{x}\\  \textbf{x} = \textbf{B}\textbf{c} \quad \text{(we know the basis } \textbf{B} \text{ in which } \textbf{c} \text{ is sparse)}\\  \text{Find } \textbf{c} \text{ such that } \textbf{s} = \textbf{M}\textbf{B}\textbf{c} \text{, and } \textbf{c} \text{ is sparse.}
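The three steps above can be sketched numerically. Here is a minimal NumPy toy example (all names, sizes, and parameters are my own illustrative choices, not from any particular paper): B is an inverse-DCT basis, M is a random Gaussian measurement matrix with far fewer rows than the signal length, and the sparse coefficients c are recovered with ISTA (iterative soft-thresholding), one simple way to encourage sparsity in the solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3          # signal length, number of samples (m << n), sparsity

# Basis B: columns are orthonormal inverse-DCT atoms, so x = B @ c.
j = np.arange(n)
B = np.cos(np.pi * (j[:, None] + 0.5) * j[None, :] / n) * np.sqrt(2.0 / n)
B[:, 0] /= np.sqrt(2.0)

# A k-sparse coefficient vector and the corresponding signal.
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = B @ c_true

# Step 1: take m random linear samples s = M x.
M = rng.standard_normal((m, n)) / np.sqrt(m)
s = M @ x

# Step 3: find a sparse c with s = (M B) c, via iterative soft-thresholding.
A = M @ B
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
lam = 0.01                             # sparsity penalty (illustrative value)
c = np.zeros(n)
for _ in range(2000):
    g = c + t * A.T @ (s - A @ c)                      # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft-threshold

x_rec = B @ c   # reconstructed signal from only m = 40 samples of length-128 x
```

With only 40 samples of a 128-point signal, the reconstruction is close to exact because the signal is 3-sparse in the DCT basis; the conditions under which this works are exactly the ones glossed over above.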

A tutorial is given in my paper.

Further reading: Orthogonal matching pursuit.
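Orthogonal matching pursuit itself is short enough to sketch. This is a bare-bones NumPy version (the function name, sizes, and toy data are my own illustrative assumptions): given s = A c with c known to be k-sparse, OMP greedily picks the column of A most correlated with the current residual, then re-fits by least squares on the support chosen so far.

```python
import numpy as np

def omp(A, s, k):
    """Greedily recover a k-sparse c from s = A @ c."""
    n = A.shape[1]
    support = []
    residual = s.copy()
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on all selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], s, rcond=None)
        residual = s - A[:, support] @ coef
    c = np.zeros(n)
    c[support] = coef
    return c

# Toy check: random sensing matrix, 3-sparse vector, 30 measurements of length 100.
rng = np.random.default_rng(1)
m, n, k = 30, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
s = A @ c_true
c_hat = omp(A, s, k)
```

The least-squares re-fit after each selection is what makes the pursuit "orthogonal": the residual stays orthogonal to everything already chosen, so no atom is picked twice.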
