## Convolution as a basis expansion!

We’re doing a course called ‘EP211’, titled ‘Mathematical Physics’, as part of our BTech (Engineering Physics) programme at IITM. Currently, we’re working with Fourier series expansions in Hilbert spaces.

In another course, ‘EC204’, titled ‘Networks and Systems’ and offered by the Electrical Engineering Department, we’re working with linear, time-invariant (LTI) systems, Fourier transforms, frequency-domain analysis, etc.

Within a month, we can already draw parallels between the courses. Since signals are functions in a function space, and linear time-invariant systems are like linear operators on this function space, whatever we learn in the Mat. Phy. course can probably be applied to the Net. Sys. course!

The first realisation of some connection was when we wrote eigenvalue equations in Net. Sys.: we showed that e^(st) is an eigenstate of any LTI system. Today, though it might be a trivial thing, I realised that “convolution” with the Dirac Delta function delta(t), which we kept using repeatedly in the Net. Sys. course, was just a kind of series expansion of a function f(t) in the basis of Dirac Delta functions delta(x – t):
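That eigenvalue fact is easy to verify numerically in discrete time. Here is a sketch of my own (the impulse response and the choice of z below are arbitrary, not from the course): for y[n] = sum_k h[k] x[n-k], the input x[n] = z^n comes out scaled by the eigenvalue H(z) = sum_k h[k] z^(-k).

```python
# Numerical sketch: z**n is an eigenfunction of a discrete-time LTI system,
# with eigenvalue H(z). The impulse response h and frequency z are arbitrary.
import numpy as np

h = np.array([0.5, 0.3, 0.2])        # an arbitrary impulse response
z = 0.9 * np.exp(1j * 0.4)           # an arbitrary complex "frequency"

n = np.arange(10, 30)
x = z ** n                           # the candidate eigenfunction
# y[n] = sum_k h[k] * x[n-k], computed directly from the convolution sum
y = sum(h[k] * z ** (n - k) for k in range(len(h)))

H = sum(h[k] * z ** (-k) for k in range(len(h)))  # transfer function at z
assert np.allclose(y, H * x)         # eigenvalue equation: y = H(z) * x
print("eigenvalue H(z) =", H)
```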

f(x) = integral[ f(t)dt.delta(x – t) , -inf, +inf ]

Think of f(x) as a function in function space. f(t)dt is a coefficient in the expansion of f(x) in the basis of Dirac Delta functions delta(x – t). ‘t’ is a kind of ‘label’ for the basis, just as, for a basis with a countable number of basis vectors, each vector e_{i} is labelled with an index ‘i’.
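This expansion can be sanity-checked numerically by replacing the Dirac Delta with a narrow Gaussian (a “nascent” delta) and the integral with a Riemann sum. All the particular functions, widths and points below are my own arbitrary choices:

```python
# Numerical sketch of f(x) = integral[ f(t) delta(x - t) dt ]: a narrow
# Gaussian stands in for the delta, a Riemann sum for the integral.
import numpy as np

def nascent_delta(u, eps=0.01):
    """Narrow normalised Gaussian approximating the Dirac Delta."""
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

t = np.linspace(-5, 5, 2001)
dt = t[1] - t[0]
f = np.cos(t) * np.exp(-t**2 / 4)    # an arbitrary test function

x0 = 1.3                             # reconstruct f at one point
f_rec = np.sum(f * nascent_delta(x0 - t)) * dt   # sum of f(t_i) delta(x0-t_i) dt
f_true = np.cos(x0) * np.exp(-x0**2 / 4)
print(f_rec, f_true)
assert abs(f_rec - f_true) < 1e-3    # the "expansion" recovers f(x0)
```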

Now, Dirac Delta provides an orthonormal basis, because:

integral [ delta(x – t1).delta(x – t2)dx, -inf, +inf ] = delta(t1 – t2)

because

integral [ f(y).delta(y – y_{0})dy, -inf, +inf ] = f(y_{0})

However, convolution with any other function, as in f(t)*h(t), is similar too, except that the basis vectors {h(x – t) for all ‘x’} are not orthonormal:

integral [ f(t).h(x – t)dt, -inf, +inf ] = (f*h)(x)
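The same reading of convolution is easy to check in discrete time. A small sketch with toy sequences of my own: (f*h)[n] is built explicitly as a superposition of shifted copies of h, each weighted by the coefficient f[k], and compared against the library convolution.

```python
# Discrete sketch: convolution as a weighted sum of shifted kernels.
import numpy as np

f = np.array([1.0, -2.0, 0.5, 3.0])  # arbitrary "coefficient" sequence
h = np.array([0.25, 0.5, 0.25])      # arbitrary kernel

# Build (f*h) explicitly as a weighted superposition of shifted kernels
out = np.zeros(len(f) + len(h) - 1)
for k, weight in enumerate(f):
    shifted = np.zeros_like(out)
    shifted[k:k + len(h)] = h        # the "basis vector" h[n - k]
    out += weight * shifted          # weighted by the coefficient f[k]

assert np.allclose(out, np.convolve(f, h))
print(out)
```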

Again, f(t)dt looks like a weight for the basis vector h(x – t). However, {h(x – t)} (which I like to call h_{x}(t) to lay emphasis on the fact that ‘x’ is acting like a continuous ‘index’) need not constitute an orthonormal basis:

integral [ h(x1 – t).h(x2 – t)dt, -inf, +inf ] != delta(x1 – x2) in general.
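To see the failure of orthonormality concretely, here is a numerical sketch with a Gaussian kernel of my own choosing: the overlap integral comes out as a broadened Gaussian in (x1 – x2), not anything like a delta.

```python
# The overlap integral[ h(x1 - t) h(x2 - t) dt ] for a Gaussian h depends
# only on x1 - x2, and distinct shifts still overlap strongly.
import numpy as np

t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
h = lambda u: np.exp(-u**2 / 2)      # unnormalised Gaussian kernel (my choice)

x1, x2 = 0.0, 1.0
overlap = np.sum(h(x1 - t) * h(x2 - t)) * dt
# Analytically this Gaussian overlap is sqrt(pi) * exp(-(x1 - x2)**2 / 4)
expected = np.sqrt(np.pi) * np.exp(-(x1 - x2)**2 / 4)
assert abs(overlap - expected) < 1e-6
# Far from delta(x1 - x2): the overlap at x1 != x2 is nowhere near zero
assert overlap > 0.5
print("overlap at separation 1:", overlap)
```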

Can we do Gram–Schmidt orthogonalisation on a general {h(x – t)} basis? What do we get then? I still have to work that out! I guessed I’d get delta(x – t), but it doesn’t seem to work out that way!

Another interesting thing is that the Dirac Delta orthonormal basis comes from the eigenvectors of the Hermitian operator ‘x_{op}’ (multiplication by the independent variable) in the function space — Hermitian because, for a real variable t, integral[ f*(t).t.g(t)dt ] = integral[ (t.f(t))*.g(t)dt ]:

x_{op}|delta_{y}> = y|delta_{y}>

i.e.

t.delta(x – t) = x.delta(x – t)
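This eigenvalue equation can be checked in the distributional sense by pairing both sides with a test function. A numerical sketch, with a narrow Gaussian standing in for the delta and my own arbitrary choices of test function and eigenvalue:

```python
# Distributional check of t.delta(x - t) = x.delta(x - t): pair both sides
# with a test function g; a narrow Gaussian stands in for the Dirac Delta.
import numpy as np

eps = 0.005
delta = lambda u: np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

t = np.linspace(-5, 5, 4001)
dt = t[1] - t[0]
g = np.sin(t)                        # arbitrary smooth test function
x = 0.8                              # the eigenvalue / eigenstate label

lhs = np.sum(g * t * delta(x - t)) * dt   # pairing of g with t.delta(x - t)
rhs = np.sum(g * x * delta(x - t)) * dt   # pairing of g with x.delta(x - t)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-3         # both sides are approximately x*g(x)
```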

Here I take ‘t’ to be the independent variable, and ‘x’ to be the eigenvalue and the label for the eigenstates. But are these the only eigenvectors?

(We used the ‘x_{op}’ operator in class today for something, that I’m yet to understand :-D)

I wish WordPress had a LaTeX feature in the free version itself.

## anonymous24, 5:55 pm on February 15, 2008

Good post. I remember Fishy telling me last year (in his 4th semester) that he is sort of seeing Elec and Phy courses merge together. Engg Physics Rocks da. Much better than Electrical Engineering 😛

## Akarsh Simha, 7:12 pm on February 15, 2008

Yes… I’m glad I’m doing Engineering Physics.

It’s nice to know that the courses kind of merge.

Probably we can do a complete treatment of Linear Time Invariant Systems in terms of vectors and operators in Function Space.
