LINEAR SYSTEM

I would rather consider this work an appreciation of the linear systems that govern the entire 20th century. We are going to see how the Fourier transform applies to linear systems in the following aspects:

---- the impulse response and the transfer function

---- complex exponentials as eigen-functions of linear time-invariant systems


Basic Definition of Linear System

A system is nothing other than a mapping from inputs to outputs. A linear system, L, is one that satisfies the principle of superposition, i.e.,

$$L(v_1 + v_2) = L v_1 + L v_2, \qquad L(\alpha v) = \alpha L v.$$
For a finite sum, the extension is given by

$$L\Big(\sum_{n=1}^{N} \alpha_n v_n\Big) = \sum_{n=1}^{N} \alpha_n L v_n.$$
One can extend this further to infinite sums and to integrals; however, this generally requires additional assumptions on L, e.g., continuity, which allows one to take limits and handle the convergence of the integral.

Remarkably, direct proportionality,

$$w = \alpha v,$$

where α is a constant, is in a sense the only example of a linear system; that is, all linear systems harken back to direct proportionality.

Multiplication by a function, where α = α(t) now depends on t, is a generalization of direct proportionality and certainly defines a linear system,

$$w(t) = \alpha(t)\, v(t).$$
For instance, the act of switching a light bulb on and off can be written as

$$w(t) = \Pi_a(t)\, v(t),$$

where Π_a(t) is the rectangle function indicating that the switch is on for a duration a. This act is a linear operation, since it is modeled by multiplication by Π_a(t).
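As a quick numerical sanity check (a minimal sketch; the grid, the input signals, and the switch duration are all made-up illustrations, not from the original), multiplication by a fixed rectangle mask indeed satisfies superposition:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 2001)              # time grid (illustrative)
v1 = np.sin(2 * np.pi * 3 * t)                # two arbitrary input signals
v2 = np.exp(-t**2)

rect = (np.abs(t) <= 0.25).astype(float)      # Pi_a(t): the switch is on for |t| <= a/2

lhs = rect * (2.0 * v1 + 3.0 * v2)            # L(2 v1 + 3 v2)
rhs = 2.0 * rect * v1 + 3.0 * rect * v2       # 2 L v1 + 3 L v2
assert np.allclose(lhs, rhs)                  # superposition holds
```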

Besides, sampling is also a linear operation, since it can be written as multiplication by the Shah function, i.e.,

$$w(t) = \mathrm{III}_p(t)\, v(t).$$
And apparently, the sampling of a sum is the sum of the samplings,

$$\mathrm{III}_p(t)\,\big(v_1(t) + v_2(t)\big) = \mathrm{III}_p(t)\, v_1(t) + \mathrm{III}_p(t)\, v_2(t).$$

Matrix Multiplication

A slight but important generalization, direct proportionality plus addition, always takes the form of matrix multiplication, which in linear algebra is exactly a combination of scaling and adding. Given an n-by-m matrix A and an m-dimensional input vector v, the n-dimensional output vector w is obtained via the linear system characterized by matrix multiplication, w = Av, where

$$w_i = \sum_{j=1}^{m} a_{ij}\, v_j, \qquad i = 1, \dots, n,$$

and this defines a linear system since the basic definition holds:

$$A(v_1 + v_2) = A v_1 + A v_2, \qquad A(\alpha v) = \alpha A v.$$
Therefore, special properties of a linear system derive from those of the matrix A. In most cases, the eigenvectors and eigenvalues of the matrix are of interest, since they satisfy direct proportionality, which is, as mentioned above, the only example of a linear system; i.e., v is an eigenvector with eigenvalue λ if

$$A v = \lambda v.$$
Hence, if you have a whole family of eigenvectors v_1, …, v_n that span the set of all possible inputs (i.e., form a basis for all inputs), with eigenvalues λ_1, …, λ_n respectively, then we can analyze A easily in the following way. Given any input u, write

$$u = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n;$$
then we can obtain the output w by

$$w = A u = \sum_{i=1}^{n} \alpha_i A v_i = \sum_{i=1}^{n} \alpha_i \lambda_i v_i.$$
The problem is, when can we do this? That is, when does a linear system have a basis of eigenvectors? The spectral theorem in finite dimensions gives a straightforward answer: an orthonormal basis of eigenvectors can be found for any symmetric (or, in the complex case, Hermitian) matrix.
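As a sketch of this eigenbasis analysis (the symmetric matrix and the input below are arbitrary examples, not from the notes), we can expand an input in the orthonormal eigenbasis supplied by the spectral theorem and read off the output as a weighted sum of eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                          # symmetric, so the spectral theorem applies

lams, V = np.linalg.eigh(A)          # eigenvalues and an orthonormal basis of eigenvectors

u = rng.standard_normal(4)           # an arbitrary input
alpha = V.T @ u                      # coordinates of u in the eigenbasis: u = sum_i alpha_i v_i

w = V @ (lams * alpha)               # w = sum_i alpha_i lambda_i v_i
assert np.allclose(w, A @ u)         # agrees with direct matrix multiplication
```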

In conclusion, it is not just that matrix multiplication is a good example of a finite-dimensional linear system; it is the only example. Any finite-dimensional linear system (operator) can be realized as matrix multiplication. For instance, for inputs that are polynomials of degree less than or equal to n,

$$v(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n,$$
the operator L,


describes a linear system as



Integration against a Kernel

There is an analogous statement for the infinite-dimensional continuous case, where a linear system can be characterized by integration against a kernel. In this case, the input is a function v, the kernel is a function k(x, y), and the output w(x) is obtained by

$$w(x) = \int_{-\infty}^{\infty} k(x, y)\, v(y)\, dy.$$
Interestingly, one can think of this as the infinite dimensional continuous analog of matrix multiplication, where x is the row index, y is the column index and the integral is a summation over the whole real axis.
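Here is a minimal numerical sketch of that analogy (the grid, kernel, and input are illustrative choices, not from the original): discretizing the integral turns the kernel into a matrix K[i, j] = k(x_i, y_j) and the integral into a matrix-vector product weighted by the grid spacing.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 501)                 # shared grid for x and y
dx = x[1] - x[0]

k = np.exp(-(x[:, None] - x[None, :])**2)       # example kernel k(x, y) = exp(-(x - y)^2)
v = np.cos(x) * np.exp(-x**2 / 4)               # example input

w = (k * v[None, :]).sum(axis=1) * dx           # w(x_i) ~ sum_j k(x_i, y_j) v(y_j) dy
w_matmul = k @ v * dx                           # the same thing as a matrix-vector product
assert np.allclose(w, w_matmul)
```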

Similarly, special properties of a linear system arise from extra assumptions on the kernel. Analogous to the symmetric matrix case, A^T = A, the corresponding kernel is characterized by symmetry, k(x, y) = k(y, x). For the Hermitian case, we have

$$k(x, y) = \overline{k(y, x)}.$$
For instance, the Fourier transform,

$$\mathcal{F}v(s) = \int_{-\infty}^{\infty} e^{-2\pi i s t}\, v(t)\, dt,$$
is a symmetric linear system, since the kernel k(s, t) satisfies

$$k(s, t) = e^{-2\pi i s t} = k(t, s).$$
Besides, given a function h(·), convolution defines a special linear system by

$$w(x) = (h * v)(x) = \int_{-\infty}^{\infty} h(x - y)\, v(y)\, dy.$$
Note that its kernel k(x, y) = h(x - y) depends only on the difference (x - y), not on x and y separately. In particular, if x and y are shifted by the same amount a, (x - y) stays the same and the integral is unchanged. Consequently, convolution is a linear time-invariant system.

In conclusion, it is not just that "integration against a kernel" is a good example of a linear system; it is the only example. As the fantastically provocative statement goes, any linear system can be realized as integration against a kernel.

Given that, in the finite-dimensional discrete case, we can find a basis for the space of inputs and express the matrix in terms of it, how can we produce the kernel of a general linear system in the infinite-dimensional continuous case?

Note (interpreting the integral as a pairing of distributions) that any function v(x) can be written as

$$v(x) = \int_{-\infty}^{\infty} \delta(x - y)\, v(y)\, dy.$$
The output w of a linear system L can then be computed by applying L (acting in the x variable) to this integral,

$$w(x) = Lv(x) = L\!\left(\int_{-\infty}^{\infty} \delta(x - y)\, v(y)\, dy\right) = \int_{-\infty}^{\infty} L\delta(x - y)\, v(y)\, dy.$$
Setting

$$k(x, y) = L\delta(x - y),$$

we strikingly find the kernel of a general linear system, called the impulse response, which is the system applied to a shifted delta function (it is how the system L responds to an impulsive input δ(x - y)).
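Continuing the discretized kernel sketch from above (same illustrative grid and kernel), feeding the system a narrow unit-area spike, a stand-in for δ(x - y), returns the corresponding column of the kernel, which is exactly the impulse-response picture:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 501)
dx = x[1] - x[0]
k = np.exp(-(x[:, None] - x[None, :])**2)        # the same example kernel as before

def L(v):
    """The linear system: (discretized) integration against the kernel k."""
    return k @ v * dx

j = 250                                          # grid index of the impulse location y = x[j]
delta_approx = np.zeros_like(x)
delta_approx[j] = 1.0 / dx                       # unit-area spike approximating delta(x - y)

impulse_response = L(delta_approx)               # the system's response to the impulse
assert np.allclose(impulse_response, k[:, j])    # recovers the kernel column k(., y)
```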

This leads us formally to the Schwartz Kernel Theorem:

If L is a linear operator on distributions, then there is a unique kernel k, which is another distribution, such that

$$Lv(x) = \int_{-\infty}^{\infty} k(x, y)\, v(y)\, dy.$$
For instance, the impulse response of the Fourier transform is

$$k(s, y) = \mathcal{F}\big(\delta(t - y)\big)(s) = e^{-2\pi i s y}.$$
In the finite-dimensional discrete case, the impulse response of Lv = Av is A itself. (This can be proved in a similar way to the infinite-dimensional continuous case.)
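A sketch of the discrete claim (the matrix is an arbitrary example): feeding the system the shifted discrete deltas, i.e. the standard basis vectors, returns exactly the columns of A, so the impulse response of Lv = Av is A itself.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))             # an arbitrary discrete linear system L v = A v
N = A.shape[1]

deltas = np.eye(N)                          # delta_0, ..., delta_{N-1} as columns
responses = np.column_stack([A @ deltas[:, j] for j in range(N)])   # L applied to each delta
assert np.allclose(responses, A)            # the impulse responses are the columns of A
```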

Besides, in the case of switching on and off, where Lv = Π_a v, the impulse response is given by

$$k(x, y) = L\delta(x - y) = \Pi_a(x)\,\delta(x - y).$$
To check this, we can plug the kernel back in,

$$\int_{-\infty}^{\infty} \Pi_a(x)\,\delta(x - y)\, v(y)\, dy = \Pi_a(x)\, v(x).$$
Moreover, recall that a linear system characterized by convolution is defined as

$$w(x) = (h * v)(x) = \int_{-\infty}^{\infty} h(x - y)\, v(y)\, dy.$$
According to the Schwartz Kernel Theorem, its kernel must be

$$k(x, y) = h(x - y).$$
This can be directly checked.
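Here is a discretized version of that direct check (the grid, the Gaussian h, and the rectangular input are illustrative): integrating against the kernel k(x, y) = h(x - y) gives the same output as a discretized convolution.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]

h = np.exp(-x**2)                                # example convolution kernel h
v = np.where(np.abs(x) <= 1.0, 1.0, 0.0)         # example input: a rectangle

K = np.exp(-(x[:, None] - x[None, :])**2)        # kernel k(x, y) = h(x - y)
w_kernel = K @ v * dx                            # integration against the kernel

w_conv = np.convolve(h, v, mode="same") * dx     # discretized convolution h * v
assert np.allclose(w_kernel, w_conv)
```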


Time (Shift) Invariant System

The relationship between convolution and delay (shift) is worth singling out. Defining the delay operator

$$(\tau_a v)(x) = v(x - a),$$
we already know that the convolution of a delayed signal is the delay of the convolution (this can be proved directly by a change of variables), i.e.,

$$h * (\tau_a v) = \tau_a (h * v).$$
Formally, a linear system w = Lv is time (or shift) invariant if delaying the input delays the output by the same amount, i.e. w(x - a) = L(v(x - a)). Therefore, if a system is given by convolution, then it is time invariant. Furthermore, the converse is also true: if L is time invariant, it must be given by convolution. Here is the proof. A time-invariant linear system is, like any linear system, given by integration against a kernel, and time invariance forces the kernel to satisfy a shift relation,

$$Lv(x) = \int_{-\infty}^{\infty} k(x, y)\, v(y)\, dy, \qquad k(x - a, y) = k(x, y + a) \ \text{for all } a.$$
Letting

$$h(x) = k(x, 0),$$
then, setting y = 0 in the shift relation and renaming a as y,

$$k(x, y) = k(x - y, 0) = h(x - y).$$
Thus, a time-invariant linear system is given by convolution,

$$w(x) = \int_{-\infty}^{\infty} h(x - y)\, v(y)\, dy = (h * v)(x).$$
Since the difference between a TIS (time-invariant system) and a general linear system is whether or not the kernel depends only on (x - y), the case of switching on and off, whose kernel is Π_a(x)δ(x - y), is not a TIS.
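A quick numerical check of both points, in the cyclic discrete setting (a sketch; the signals, the delay, and the switch mask are made up): cyclic convolution commutes with delay, while multiplication by a fixed on/off mask does not.

```python
import numpy as np

def delay(v, a):
    """Cyclic delay: (tau_a v)[n] = v[n - a]."""
    return np.roll(v, a)

def cconv(h, v):
    """Cyclic convolution computed through the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(v)))

N = 32
rng = np.random.default_rng(2)
h = rng.standard_normal(N)
v = rng.standard_normal(N)
a = 7

# Convolution commutes with delay: it is time invariant.
assert np.allclose(cconv(h, delay(v, a)), delay(cconv(h, v), a))

# Multiplication by a fixed mask (the on/off switch) does not commute with delay.
mask = (np.arange(N) < 8).astype(float)          # "switch on" only for the first 8 samples
assert not np.allclose(mask * delay(v, a), delay(mask * v, a))
```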

To summarize the satisfactory state of affairs,

(1) Any linear system is integration against a kernel (impulse response).

(2) A linear system is time invariant if and only if it is given by convolution.

The same considerations hold for discrete linear systems, which are given by multiplication by a matrix,

$$w = A v,$$
where w is the output vector and v is the input vector. Further, L is a linear time-invariant (LTI) system iff w = h ∗ v, where

$$w[n] = (h * v)[n] = \sum_{m=0}^{N-1} h[n - m]\, v[m],$$

with the indices taken modulo N (cyclic convolution).
If we write an LTI system as matrix multiplication, then the matrix A has a special form. What does A look like? For instance, for an LTI system with kernel h = (1, 2, 3, 4)^T, the output is w = Av = h ∗ v. What is A? Its columns can be computed as follows.

The ith column is

$$c_i = A\delta_i = h * \delta_i,$$
where δ_i is the discrete delta at index i and the jth component of c_i is

$$c_i[j] = h[j - i] \quad (\text{indices mod } 4);$$
therefore,

$$A = \begin{pmatrix} 1 & 4 & 3 & 2 \\ 2 & 1 & 4 & 3 \\ 3 & 2 & 1 & 4 \\ 4 & 3 & 2 & 1 \end{pmatrix}.$$
A matrix like this is called a circulant matrix. In conclusion, an LTI system is given by multiplication by a circulant matrix.
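This can be checked in code (a sketch using scipy.linalg.circulant, whose first column is the given kernel and whose remaining columns are its cyclic shifts):

```python
import numpy as np
from scipy.linalg import circulant

h = np.array([1.0, 2.0, 3.0, 4.0])
A = circulant(h)                            # first column is h; each column is a cyclic shift

expected = np.array([[1, 4, 3, 2],
                     [2, 1, 4, 3],
                     [3, 2, 1, 4],
                     [4, 3, 2, 1]], dtype=float)
assert np.allclose(A, expected)             # matches the matrix displayed above

v = np.array([1.0, -1.0, 2.0, 0.5])         # an arbitrary input
w_conv = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(v)))   # cyclic convolution h * v
assert np.allclose(A @ v, w_conv)           # multiplication by A is the same LTI system
```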


Fourier Analysis of LTI System

Any time anybody mentions convolution, bells should go off in your head and buzzers should go off in your pocket, suggesting that you take the Fourier transform. Recall that an LTI system, in the continuous case, is given by convolution,

$$w(x) = (h * v)(x).$$
Taking the Fourier transform,

$$W(s) = H(s)\, V(s),$$

where H(s), the Fourier transform of h, is called the transfer function. In the frequency domain, the LTI system is given by direct proportionality.
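A discrete sketch of this fact (illustrative random signals; the DFT and cyclic convolution stand in for their continuous counterparts): convolving in the time domain is the same as multiplying by the transfer function in the frequency domain.

```python
import numpy as np

N = 64
rng = np.random.default_rng(3)
h = rng.standard_normal(N)                       # impulse response
v = rng.standard_normal(N)                       # input

# Frequency domain: direct proportionality, W = H V (pointwise).
w_freq = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(v)))

# Time domain: cyclic convolution computed from the definition.
w_time = np.array([sum(h[(n - m) % N] * v[m] for m in range(N)) for n in range(N)])

assert np.allclose(w_freq, w_time)
```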

Notably, the last general fact about LTI systems is that complex exponentials are eigen-functions. Here is the proof.

Suppose the input is a complex exponential,

$$v(t) = e^{2\pi i \mu t};$$
the output in the frequency domain is

$$W(s) = H(s)\, V(s) = H(s)\,\delta(s - \mu) = H(\mu)\,\delta(s - \mu),$$

since V(s) = δ(s - μ);
and back in the time domain,

$$w(t) = H(\mu)\, e^{2\pi i \mu t},$$
which indicates that a complex exponential is an eigen-function of LTI systems; the corresponding eigenvalue is H(μ), the value of the transfer function at μ. In applications you might take the real part of the complex exponential, but it is not true that sine and cosine are separately eigen-functions of an LTI system. For instance, suppose the input is the real part of a complex exponential,

$$v(t) = \cos(2\pi\mu t) = \tfrac{1}{2}\Big(e^{2\pi i \mu t} + e^{-2\pi i \mu t}\Big);$$
then the output of the LTI system is

$$w(t) = \tfrac{1}{2}\Big(H(\mu)\, e^{2\pi i \mu t} + H(-\mu)\, e^{-2\pi i \mu t}\Big).$$
This gets stuck without the assumption that h(·) is real, which implies that H(·) has the symmetry

$$H(-\mu) = \overline{H(\mu)}.$$
Therefore,

$$w(t) = \tfrac{1}{2}\Big(H(\mu)\, e^{2\pi i \mu t} + \overline{H(\mu)\, e^{2\pi i \mu t}}\Big) = \mathrm{Re}\Big(H(\mu)\, e^{2\pi i \mu t}\Big) = |H(\mu)|\cos\big(2\pi\mu t + \phi(\mu)\big),$$
where

$$H(\mu) = |H(\mu)|\, e^{i\phi(\mu)}.$$
The same considerations hold for discrete LTI systems, where

$$w = h * v,$$
whose Fourier-transformed (DFT) form is

$$W[k] = H[k]\, V[k].$$
Again, discrete complex exponentials are eigenvectors of discrete LTI systems. Here is the proof. Given the input

$$v = \omega_k, \qquad \omega_k[n] = e^{2\pi i k n / N}, \quad n = 0, 1, \dots, N - 1,$$
in the frequency domain,

$$V = \mathcal{F}\omega_k = N\,\delta_k, \qquad W = H\, V = N\, H[k]\,\delta_k,$$

where δ_k is the discrete delta at index k;
taking the inverse discrete Fourier transform,

$$w = \mathcal{F}^{-1}\big(N H[k]\,\delta_k\big) = H[k]\, \omega_k.$$
This indicates that the discrete complex exponentials

$$\omega_0, \omega_1, \dots, \omega_{N-1}$$

form a basis of eigenvectors (orthogonal, and orthonormal up to a factor of √N) for any discrete LTI system, whereas the corresponding eigenvalues H[k] depend on the system itself.
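A numerical sketch of the eigenvector property (arbitrary impulse response; np.fft.fft uses the e^{-2πikn/N} convention assumed above):

```python
import numpy as np

N = 8
n = np.arange(N)
rng = np.random.default_rng(5)
h = rng.standard_normal(N)                       # impulse response of a discrete LTI system
H = np.fft.fft(h)                                # H[k] = sum_m h[m] e^{-2 pi i k m / N}

for k in range(N):
    omega_k = np.exp(2j * np.pi * k * n / N)     # discrete complex exponential
    w = np.fft.ifft(np.fft.fft(h) * np.fft.fft(omega_k))   # cyclic convolution h * omega_k
    assert np.allclose(w, H[k] * omega_k)        # omega_k is an eigenvector with eigenvalue H[k]
```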

Here is an application. Suppose a discrete LTI system is,


with


We can then obtain its impulse response,


As described, the eigenvectors of this LTI system,

$$\omega_0,\ \omega_1,\ \omega_2,\ \omega_3,$$

are the eigenvectors of A. Then, what are the eigenvalues of the matrix A?

One might solve this in the conventional way by calculating the characteristic determinant,

$$\det(A - \lambda I) = 0.$$
Alternatively, however, this can be done via the LTI viewpoint by noticing that A is a circulant matrix, which is exactly the characterization of a discrete LTI system. Therefore, the eigenvalues are H[0], H[1], H[2], H[3], the DFT values of the impulse response h.
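A sketch of this shortcut, reusing the h = (1, 2, 3, 4) kernel from the circulant example above (the specific system of this application is not reproduced here): each discrete complex exponential is an eigenvector of the circulant matrix, with eigenvalue H[k] taken from the DFT of h.

```python
import numpy as np
from scipy.linalg import circulant

h = np.array([1.0, 2.0, 3.0, 4.0])
A = circulant(h)                     # the discrete LTI system as a circulant matrix
H = np.fft.fft(h)                    # H[0], H[1], H[2], H[3]

N = len(h)
n = np.arange(N)
for k in range(N):
    omega_k = np.exp(2j * np.pi * k * n / N)           # eigenvector omega_k
    assert np.allclose(A @ omega_k, H[k] * omega_k)    # eigenvalue H[k]
```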


Considering that the eigenvectors of an LTI system form an orthogonal basis, any LTI system, given by convolution in the discrete case, is diagonalizable by complex exponentials. Therefore, in general, an N-by-N circulant matrix A can be diagonalized as

$$A = F^{-1} \Lambda F,$$

where F is the N-by-N DFT matrix with entries F_mn = e^{-2πimn/N}, and

$$\Lambda = \mathrm{diag}\big(H[0], H[1], \dots, H[N-1]\big), \qquad H = F h.$$

ACKNOWLEDGEMENT

I would like to thank Professor Osgood of the Department of Electrical Engineering at Stanford University. This work is a gift for my family, Mum and Dad in particular.


May 8th, 2015
