The Wavelet Tutorial Part III

Reposted from: http://users.rowan.edu/~polikar/WAVELETS/WTpart3.html

MULTIRESOLUTION ANALYSIS
&
THE CONTINUOUS WAVELET TRANSFORM

Robi Polikar

MULTIRESOLUTION ANALYSIS

Although the time and frequency resolution problems are the result of a physical phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it is possible to analyze any signal by using an alternative approach called multiresolution analysis (MRA). MRA, as implied by its name, analyzes the signal at different frequencies with different resolutions. Every spectral component is not resolved equally, as was the case in the STFT.

MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies. This approach makes sense especially when the signal at hand has high frequency components for short durations and low frequency components for long durations. Fortunately, the signals that are encountered in practical applications are often of this type. For example, the signal shown in Figure 3.1 is of this type. It has a relatively low frequency component throughout the entire signal and relatively high frequency components for a short duration somewhere around the middle.

THE CONTINUOUS WAVELET TRANSFORM

The continuous wavelet transform was developed as an alternative approach to the short time Fourier transform to overcome the resolution problem. The wavelet analysis is done in a similar way to the STFT analysis, in the sense that the signal is multiplied with a function, the wavelet, similar to the window function in the STFT, and the transform is computed separately for different segments of the time-domain signal. However, there are two main differences between the STFT and the CWT:

1. The Fourier transforms of the windowed signals are not taken, and therefore a single peak will be seen corresponding to a sinusoid, i.e., negative frequencies are not computed.

2. The width of the window is changed as the transform is computed for every single spectral component, which is probably the most significant characteristic of the wavelet transform.

The continuous wavelet transform is defined as follows:

Equation 3.1

CWT_x^psi(tau,s) = Psi_x^psi(tau,s) = (1/sqrt(|s|)) \int x(t) psi^*( (t-tau)/s ) dt

As seen in the above equation, the transformed signal is a function of two variables, tau and s, the translation and scale parameters, respectively. psi(t) is the transforming function, and it is called the mother wavelet. The term mother wavelet gets its name due to two important properties of the wavelet analysis, as explained below:

The term wavelet means a small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported). The wave refers to the condition that this function is oscillatory. The term mother implies that the functions with different regions of support that are used in the transformation process are derived from one main function, or the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions.

The term translation is used in the same sense as it was used in the STFT; it is related to the location of the window, as the window is shifted through the signal. This term, obviously, corresponds to time information in the transform domain. However, we do not have a frequency parameter, as we had before for the STFT. Instead, we have a scale parameter, which is defined as 1/frequency. The term frequency is reserved for the STFT. Scale is described in more detail in the next section.

The Scale

The parameter scale in the wavelet analysis is similar to the scale used in maps. As in the case of maps, high scales correspond to a non-detailed global view (of the signal), and low scales correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high scales) correspond to global information about a signal (that usually spans the entire signal), whereas high frequencies (low scales) correspond to detailed information about a hidden pattern in the signal (that usually lasts a relatively short time). Cosine signals corresponding to various scales are given as examples in the following figure.

Figure 3.2

Fortunately, in practical applications, low scales (high frequencies) do not last for the entire duration of the signal, unlike those shown in the figure, but they usually appear from time to time as short bursts, or spikes. High scales (low frequencies) usually last for the entire duration of the signal.

Scaling, as a mathematical operation, either dilates or compresses a signal. Larger scales correspond to dilated (or stretched out) signals and small scales correspond to compressed signals. All of the signals given in the figure are derived from the same cosine signal, i.e., they are dilated or compressed versions of the same function. In the above figure, s=0.05 is the smallest scale, and s=1 is the largest scale.

In terms of mathematical functions, if f(t) is a given function, then f(st) corresponds to a contracted (compressed) version of f(t) if s > 1, and to an expanded (dilated) version of f(t) if s < 1.

However, in the definition of the wavelet transform, the scaling term is used in the denominator, and therefore the opposite of the above statement holds, i.e., scales s > 1 dilate the signal, whereas scales s < 1 compress the signal. This interpretation of scale will be used throughout this text.
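To make this scale convention concrete, here is a minimal numerical sketch (in Python with NumPy, an illustrative choice not used in the original tutorial): a window function is evaluated at t/s, and its effective width grows in proportion to s, so s > 1 dilates and s < 1 compresses.

    import numpy as np

    def window(t):
        """An illustrative window function (a unit-width Gaussian)."""
        return np.exp(-t**2 / 2.0)

    t = np.linspace(-10, 10, 2001)
    dt = t[1] - t[0]

    # In the wavelet convention the scale sits in the denominator, psi(t/s),
    # so s > 1 stretches (dilates) the window and s < 1 compresses it.
    for s in (0.5, 1.0, 2.0):
        w = window(t / s)
        width = np.sum(w) * dt / w.max()   # effective width (area / peak)
        print(f"s = {s}: effective width = {width:.2f}")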

COMPUTATION OF THE CWT

Interpretation of the above equation will be explained in this section. Let x(t) be the signal to be analyzed. The mother wavelet is chosen to serve as a prototype for all windows in the process. All the windows that are used are the dilated (or compressed) and shifted versions of the mother wavelet. There are a number of functions that are used for this purpose. The Morlet wavelet and the Mexican hat function are two candidates, and they are used for the wavelet analysis of the examples which are presented later in this chapter.

Once the mother wavelet is chosen, the computation starts with s=1 and the continuous wavelet transform is computed for all values of s, smaller and larger than 1. However, depending on the signal, a complete transform is usually not necessary. For all practical purposes, the signals are bandlimited, and therefore, computation of the transform for a limited interval of scales is usually adequate. In this study, some finite interval of values for s was used, as will be described later in this chapter.

For convenience, the procedure will be started from scale s=1 and will continue for increasing values of s, i.e., the analysis will start from high frequencies and proceed towards low frequencies. This first value of s will correspond to the most compressed wavelet. As the value of s is increased, the wavelet will dilate.

The wavelet is placed at the beginning of the signal, at the point which corresponds to time=0. The wavelet function at scale 1 is multiplied by the signal and then integrated over all times. The result of the integration is then multiplied by the constant number 1/sqrt(s). This multiplication is for energy normalization purposes, so that the transformed signal will have the same energy at every scale. The final result is the value of the transformation, i.e., the value of the continuous wavelet transform at time zero and scale s=1. In other words, it is the value that corresponds to the point tau=0, s=1 in the time-scale plane.

The wavelet at scale s=1 is then shifted towards the right by tau amount to the location t=tau, and the above equation is computed to get the transform value at t=tau, s=1 in the time-frequency plane.

This procedure is repeated until the wavelet reaches the end of the signal. One row of points on the time-scale plane for the scale s=1 is now completed.

Then, s is increased by a small value. Note that this is a continuous transform, and therefore both tau and s must be incremented continuously. However, if this transform needs to be computed by a computer, then both parameters are increased by a sufficiently small step size. This corresponds to sampling the time-scale plane.

The above procedure is repeated for every value of s. Every computation for a given value of s fills the corresponding single row of the time-scale plane. When the process is completed for all desired values of s, the CWT of the signal has been calculated.
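The procedure just described maps directly onto a short brute-force program. The sketch below (Python/NumPy) is an illustration, not the program used to generate the figures in this tutorial: for every (tau, s) pair on a sampled grid it multiplies the signal by the shifted, dilated wavelet, integrates over time, and applies the 1/sqrt(s) energy normalization. The Mexican hat mother wavelet, the test signal, and the scale grid are arbitrary choices.

    import numpy as np

    def mexican_hat(t):
        """Mexican hat mother wavelet (second derivative of a Gaussian)."""
        return (1.0 - t**2) * np.exp(-t**2 / 2.0)

    def cwt_brute_force(x, t, scales):
        """CWT on a sampled grid, following the step-by-step procedure above:
        for every scale s and every translation tau, multiply the signal by
        the shifted/dilated wavelet, integrate over time, and normalize by
        1/sqrt(s) so that every scale carries the same energy."""
        dt = t[1] - t[0]
        W = np.zeros((len(scales), len(t)))
        for i, s in enumerate(scales):
            for j, tau in enumerate(t):
                psi = mexican_hat((t - tau) / s) / np.sqrt(s)
                W[i, j] = np.sum(x * psi) * dt      # numerical integration
        return W

    # Illustrative test signal: a low frequency throughout, plus a short
    # high-frequency burst (the kind of signal discussed in this tutorial).
    t = np.linspace(0.0, 1.0, 500)
    x = np.cos(2 * np.pi * 5 * t)
    x += np.cos(2 * np.pi * 60 * t) * ((t > 0.45) & (t < 0.55))
    W = cwt_brute_force(x, t, scales=np.linspace(0.002, 0.05, 30))
    print(W.shape)   # (number of scales, number of translations)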

The figures below illustrate the entire process step by step.

Figure 3.3

In Figure 3.3, the signal and the wavelet function are shown for four different values of tau. The signal is a truncated version of the signal shown in Figure 3.1. The scale value is 1, corresponding to the lowest scale, or highest frequency. Note how compact it is (the blue window). It should be as narrow as the highest frequency component that exists in the signal. Four distinct locations of the wavelet function are shown in the figure, including to=40, to=90, and to=140. At every location, it is multiplied by the signal. Obviously, the product is nonzero only where the signal falls in the region of support of the wavelet, and it is zero elsewhere. By shifting the wavelet in time, the signal is localized in time, and by changing the value of s, the signal is localized in scale (frequency).

If the signal has a spectral component that corresponds to the current value of s (which is 1 in this case), the product of the wavelet with the signal at the location where this spectral component exists gives a relatively large value. If the spectral component that corresponds to the current value of s is not present in the signal, the product value will be relatively small, or zero. The signal in Figure 3.3 has spectral components comparable to the window's width at s=1 around t=100 ms.

The continuous wavelet transform of the signal in Figure 3.3 will yield large values for low scales around time 100 ms, and small values elsewhere. For high scales, on the other hand, the continuous wavelet transform will give large values for almost the entire duration of the signal, since low frequencies exist at all times.

Figure 3.4
Figure 3.5

Figures 3.4 and 3.5 illustrate the same process for the scales s=5 and s=20, respectively. Note how the window width changes with increasing scale (decreasing frequency). As the window width increases, the transform starts picking up the lower frequency components.

As a result, for every scale and for every time (interval), one point of the time-scale plane is computed. The computations at one scale construct the rows of the time-scale plane, and the computations at different scales construct the columns of the time-scale plane.

Now, let's take a look at an example and see what the wavelet transform really looks like. Consider the non-stationary signal in Figure 3.6. This is similar to the example given for the STFT, except at different frequencies. As stated on the figure, the signal is composed of four frequency components at 30 Hz, 20 Hz, 10 Hz and 5 Hz.

Figure 3.6
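For readers who want to reproduce a signal of this kind, the sketch below (Python/NumPy) builds four back-to-back sinusoids at 30, 20, 10 and 5 Hz. Only the four frequencies are stated in the text; the 1-second duration, the 1 kHz sampling rate and the equal 250 ms segments are assumptions made purely for illustration.

    import numpy as np

    fs = 1000                               # sampling rate in Hz (assumed)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.zeros_like(t)

    # Four frequency components played back to back, highest frequency first,
    # as described for the signal in Figure 3.6 (segment lengths assumed).
    for k, f in enumerate([30, 20, 10, 5]):
        segment = (t >= 0.25 * k) & (t < 0.25 * (k + 1))
        x[segment] = np.cos(2 * np.pi * f * t[segment])

This signal can be fed to the brute-force CWT sketch given earlier to obtain a coarse version of the transform shown in Figures 3.7 and 3.8.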

Figure 3.7 is the continuous wavelet transform (CWT) of this signal. Note that the axes are translation and scale, not time and frequency. However, translation is strictly related to time, since it indicates where the mother wavelet is located. The translation of the mother wavelet can be thought of as the time elapsed since t=0. The scale, however, has a whole different story. Remember that the scale parameter s in Equation 3.1 is actually the inverse of frequency. In other words, whatever we said about the properties of the wavelet transform regarding the frequency resolution will appear inverted in the figures showing the WT of the time-domain signal.

Figure 3.7

Note in Figure 3.7 that smaller scales correspond to higher frequencies, i.e., frequency decreases as scale increases; therefore, the portion of the graph with scales around zero actually corresponds to the highest frequencies in the analysis, and that with high scales corresponds to the lowest frequencies. Remember that the signal had the 30 Hz (highest frequency) component first, and this appears at the lowest scales, at translations of 0 to 30. Then comes the 20 Hz component, the second highest frequency, and so on. The 5 Hz component appears at the end of the translation axis (as expected), and at higher scales (lower frequencies), again as expected.

Figure 3.8

Now, recall these resolution properties: unlike the STFT, which has a constant resolution at all times and frequencies, the WT has good time and poor frequency resolution at high frequencies, and good frequency and poor time resolution at low frequencies. Figure 3.8 shows the same WT as in Figure 3.7 from another angle to better illustrate the resolution properties. In Figure 3.8, lower scales (higher frequencies) have better scale resolution (narrower support in scale, which means that there is less ambiguity about the exact value of the scale), which corresponds to poorer frequency resolution. Similarly, higher scales have poorer scale resolution (wider support in scale, which means there is more ambiguity about the exact value of the scale), which corresponds to better frequency resolution at lower frequencies.

The axes in Figures 3.7 and 3.8 are normalized and should be evaluated accordingly. Roughly speaking, the 100 points on the translation axis correspond to 1000 ms, and the 150 points on the scale axis correspond to a frequency band of 40 Hz (the numbers on the translation and scale axes do not correspond to seconds and Hz, respectively; they are just the number of samples in the computation).

TIME AND FREQUENCY RESOLUTIONS

In this section we will take a closer look at the resolution properties ofthe wavelet transform. Remember that the resolution problem was the main reasonwhy we switched from STFT to WT.

The illustration in Figure 3.9 is commonly used to explain how time and frequency resolutions should be interpreted. Every box in Figure 3.9 corresponds to a value of the wavelet transform in the time-frequency plane. Note that the boxes have a certain non-zero area, which implies that the value of a particular point in the time-frequency plane cannot be known. All the points in the time-frequency plane that fall into a box are represented by one value of the WT.

Figure 3.9

Let's take a closer look at Figure 3.9: the first thing to notice is that although the widths and heights of the boxes change, the area is constant. That is, each box represents an equal portion of the time-frequency plane, but gives different proportions to time and frequency. Note that at low frequencies, the heights of the boxes are shorter (which corresponds to better frequency resolution, since there is less ambiguity regarding the exact value of the frequency), but their widths are longer (which corresponds to poor time resolution, since there is more ambiguity regarding the exact value of the time). At higher frequencies the widths of the boxes decrease, i.e., the time resolution gets better, and the heights of the boxes increase, i.e., the frequency resolution gets poorer.

Before concluding this section, it is worthwhile to mention how the partition looks in the case of the STFT. Recall that in the STFT the time and frequency resolutions are determined by the width of the analysis window, which is selected once for the entire analysis, i.e., both the time and frequency resolutions are constant. Therefore the time-frequency plane consists of squares in the STFT case.

Regardless of the dimensions of the boxes, the areas of all boxes, both in the STFT and the WT, are the same and are determined by Heisenberg's inequality. In summary, the area of a box is fixed for each window function (STFT) or mother wavelet (CWT), whereas different windows or mother wavelets can result in different areas. However, all areas are lower bounded by 1/(4 pi). That is, we cannot reduce the areas of the boxes as much as we want, due to Heisenberg's uncertainty principle. On the other hand, for a given mother wavelet the dimensions of the boxes can be changed, while keeping the area the same. This is exactly what the wavelet transform does.
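As a quick numerical illustration of this lower bound, the sketch below (Python/NumPy) estimates the time spread and the frequency spread of a Gaussian window from |g(t)|^2 and |G(f)|^2; their product comes out close to 1/(4 pi), the value the Gaussian attains. The particular window width is an arbitrary choice.

    import numpy as np

    dt = 1e-3
    t = np.arange(-5.0, 5.0, dt)
    g = np.exp(-t**2 / (2 * 0.1**2))          # Gaussian window, sigma = 0.1 s

    # Temporal spread: standard deviation of |g(t)|^2 treated as a density.
    p_t = np.abs(g)**2
    p_t /= np.sum(p_t) * dt
    sigma_t = np.sqrt(np.sum(t**2 * p_t) * dt)

    # Spectral spread: standard deviation of |G(f)|^2 treated as a density.
    G = np.fft.fftshift(np.fft.fft(g))
    f = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
    df = f[1] - f[0]
    p_f = np.abs(G)**2
    p_f /= np.sum(p_f) * df
    sigma_f = np.sqrt(np.sum(f**2 * p_f) * df)

    print(sigma_t * sigma_f, 1 / (4 * np.pi))  # both approximately 0.0796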

THE WAVELET THEORY: A MATHEMATICAL APPROACH

This section describes the main idea of wavelet analysis theory, which can also be considered the underlying concept of most signal analysis techniques. The FT defined by Fourier uses basis functions to analyze and reconstruct a function. Every vector in a vector space can be written as a linear combination of the basis vectors in that vector space, i.e., by multiplying the vectors by some constant numbers, and then taking the summation of the products. The analysis of the signal involves the estimation of these constant numbers (transform coefficients, or Fourier coefficients, wavelet coefficients, etc.). The synthesis, or the reconstruction, corresponds to computing the linear combination equation.

All the definitions and theorems related to this subject can be found in Kaiser's book, A Friendly Guide to Wavelets, but an introductory-level knowledge of how basis functions work is necessary to understand the underlying principles of wavelet theory. Therefore, this information will be presented in this section.

Basis Vectors

Note: Most of the equations include letters of the Greek alphabet. These letters are written out explicitly in the text with their names, such as tau, psi, phi, etc. For capital letters, the first letter of the name is capitalized, such as Psi, Phi, etc. Also, subscripts are shown by the underscore character _, and superscripts are shown by the ^ character. Also note that all letters or letter names written in bold type face represent vectors; some important points are also written in bold face, but the meaning should be clear from the context.

A basis of a vector space V is a set of linearly independent vectors, such that any vector v in V can be written as a linear combination of these basis vectors. There may be more than one basis for a vector space. However, all of them have the same number of vectors, and this number is known as the dimension of the vector space. For example, in two-dimensional space, the basis will have two vectors.


Equation 3.2

v = sum_k nu^k b_k

Equation 3.2 shows how any vector v can be written as a linear combination of the basis vectors b_k and the corresponding coefficients nu^k.

This concept, given in terms of vectors, can easily be generalized to functions, by replacing the basis vectors b_k with basis functions phi_k(t), and the vector v with a function f(t). Equation 3.2 then becomes


Equation 3.2a

f(t) = sum_k mu_k phi_k(t)

The complex exponential (sines and cosines) functions are the basis functions for the FT. Furthermore, they are orthogonal functions, which provides some desirable properties for reconstruction.

Let f(t) and g(t) be two functions in L^2[a,b]. (L^2[a,b] denotes the set of square integrable functions on the interval [a,b].) The inner product of two functions is defined by Equation 3.3:


Equation 3.3

< f, g > = \int_a^b f(t) g^*(t) dt

According to the above definition of the inner product, the CWT can be thought of as the inner product of the test signal with the basis functions psi_(tau,s)(t):


Equation 3.4

CWT_x^psi(tau,s) = Psi_x^psi(tau,s) = \int x(t) psi^*_(tau,s)(t) dt

where,


Equation 3.5

psi_(tau,s)(t) = (1/sqrt(|s|)) psi( (t-tau)/s )

This definition of the CWT shows that the wavelet analysis is a measure of the similarity between the basis functions (wavelets) and the signal itself. Here the similarity is in the sense of similar frequency content. The calculated CWT coefficients refer to the closeness of the signal to the wavelet at the current scale.

This further clarifies the previous discussion on the correlation of the signal with the wavelet at a certain scale. If the signal has a major component at the frequency corresponding to the current scale, then the wavelet (the basis function) at the current scale will be similar or close to the signal at the particular location where this frequency component occurs. Therefore, the CWT coefficient computed at this point in the time-scale plane will be a relatively large number.

Inner Products, Orthogonality, and Orthonormality

Two vectors v and w are said to be orthogonal if their inner product equals zero:


Equation 3.6

< v, w > = sum_n v_n w_n^* = 0

Similarly, two functions f and g are said to be orthogonal to each other if their inner product is zero:


Equation 3.7

< f, g > = \int f(t) g^*(t) dt = 0

A set of vectors {v_1, v_2, ...} is said to be orthonormal if the vectors are pairwise orthogonal to each other and all have length 1. This can be expressed as:


Equation 3.8

< v_m, v_n > = delta_mn

Similarly, a set of functions {phi_k(t)}, k=1,2,3,..., is said to be orthonormal if


Equation 3.9

\int phi_k(t) phi_l^*(t) dt = 0   for k != l


Equation 3.10

\int |phi_k(t)|^2 dt = 1

or, equivalently,


Equation 3.11

\int phi_k(t) phi_l^*(t) dt = delta_kl

where delta_kl is the Kronecker delta function, defined as:


Equation 3.12

delta_kl = 1 if k = l, and 0 if k != l

As stated above, there may be more than one set of basis functions (or vectors). Among them, the orthonormal basis functions (or vectors) are of particular importance because of the nice properties they provide in finding the analysis coefficients. Orthonormal bases allow computation of these coefficients in a very simple and straightforward way using the orthonormality property.

For orthonormal bases, the coefficients mu_k can be calculated as


Equation 3.13

mu_k = < f, phi_k > = \int f(t) phi_k^*(t) dt

The function f(t) can then be reconstructed by Equation 3.2a, by substituting the mu_k coefficients. This yields

Equation 3.14

f(t) = sum_k mu_k phi_k(t) = sum_k < f, phi_k > phi_k(t)
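A finite-dimensional sketch of Equations 3.13 and 3.14 (Python/NumPy, with an arbitrary orthonormal basis of R^N obtained from a QR factorization): the coefficients are obtained by inner products with the basis vectors, and the vector is reconstructed exactly from them.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8

    # Columns of B form an orthonormal basis of R^N (illustrative choice).
    B, _ = np.linalg.qr(rng.standard_normal((N, N)))

    f = rng.standard_normal(N)          # the "signal" to be analyzed

    mu = B.T @ f                        # analysis,  Eq. 3.13: mu_k = <f, b_k>
    f_rec = B @ mu                      # synthesis, Eq. 3.14: f = sum_k mu_k b_k

    print(np.allclose(f, f_rec))        # True: perfect reconstruction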

Orthonormal bases may not be available for every type of application, in which case a generalized version, biorthogonal bases, can be used. The term biorthogonal refers to two different bases which are orthogonal to each other, but each of which does not form an orthogonal set on its own.

In some applications, however, biorthogonal bases may also not be available, in which case frames can be used. Frames constitute an important part of wavelet theory, and interested readers are referred to Kaiser's book mentioned earlier.

Following the same order as in Chapter 2 for the STFT, some examples of the continuous wavelet transform are presented next. The figures given in the examples were generated by a program written to compute the CWT.

Before we close this section, I would like to include two mother wavelets commonly used in wavelet analysis. The Mexican hat wavelet is defined as the second derivative of the Gaussian function:


Equation 3.15

psi(t) = d^2/dt^2 [ (1/(sqrt(2 pi) sigma)) e^(-t^2 / (2 sigma^2)) ]

which is

Equation 3.16

psi(t) = (1/(sqrt(2 pi) sigma^3)) ( t^2/sigma^2 - 1 ) e^(-t^2 / (2 sigma^2))

The Morlet wavelet is defined as


Equation 3.16a

psi(t) = e^(i a t) e^(-t^2 / (2 sigma))

where a is a modulation parameter, and sigma is the scaling parameter that affects the width of the window.
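The sketch below transcribes the two mother wavelets into Python/NumPy, using the forms of Equations 3.16 and 3.16a as written above; since the original rendered equations are not reproduced here, the exact normalization constants should be treated as assumptions.

    import numpy as np

    def mexican_hat(t, sigma=1.0):
        """Mexican hat wavelet: second derivative of a Gaussian (Eq. 3.16 form)."""
        c = 1.0 / (np.sqrt(2 * np.pi) * sigma**3)
        return c * (t**2 / sigma**2 - 1.0) * np.exp(-t**2 / (2 * sigma**2))

    def morlet(t, a=5.0, sigma=1.0):
        """Morlet wavelet: a complex exponential under a Gaussian envelope
        (Eq. 3.16a form); a is the modulation parameter, sigma sets the width."""
        return np.exp(1j * a * t) * np.exp(-t**2 / (2 * sigma))

    t = np.linspace(-5, 5, 1001)
    print(mexican_hat(t).shape, morlet(t).dtype)   # (1001,), complex128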

EXAMPLES

All of the examples given below correspond to real-life non-stationary signals. These signals are drawn from a database of signals that includes event related potentials of normal people and of patients with Alzheimer's disease. Since these are not test signals like simple sinusoids, they are not as easy to interpret. They are shown here only to give an idea of what real-life signals look like.

The signal shown in Figure 3.11 belongs to a normal person.

Figure 3.11

The following is its CWT. The numbers on the axes are of no importance to us; those numbers simply show that the CWT was computed at 350 translation and 60 scale locations on the translation-scale plane. The important point to note here is the fact that the computation is not a true continuous WT, as is apparent from the computation at a finite number of locations. This is only a discretized version of the CWT, which is explained later on this page. Note, however, that this is NOT the discrete wavelet transform (DWT), which is the topic of Part IV of this tutorial.

Figure 3.12

Figure 3.13 plots the same transform from a different angle for better visualization.

Figure 3.13

Figure 3.14 plots an event related potential of a patient diagnosed with Alzheimer's disease.

Figure 3.14

Figure 3.15 illustrates its CWT:

Figure 3.15

Here is another view from a different angle:

Figure 3.16

THE WAVELET SYNTHESIS

The continuous wavelet transform is a reversible transform, provided that Equation 3.18 is satisfied. Fortunately, this is a very non-restrictive requirement. The continuous wavelet transform is reversible if Equation 3.18 is satisfied, even though the basis functions are, in general, not orthonormal. The reconstruction is possible by using the following reconstruction formula:

Equation 3.17 Inverse Wavelet Transform

x(t) = (1/C_psi^2) \int_s \int_tau Psi_x^psi(tau,s) (1/s^2) psi( (t-tau)/s ) dtau ds

where C_psi is a constant that depends on the wavelet used. The success of the reconstruction depends on this constant, called the admissibility constant, satisfying the following admissibility condition:

Equation 3.18 Admissibility Condition

C_psi = { 2 pi \int ( |psi^hat(xi)|^2 / |xi| ) dxi }^(1/2) < infinity

where psi^hat(xi) is the FT of psi(t). Equation 3.18 implies that psi^hat(0) = 0, which is


Equation 3.19

\int psi(t) dt = 0

As stated above, Equation 3.19 is not a very restrictive requirement, since many wavelet functions can be found whose integral is zero. For Equation 3.19 to be satisfied, the wavelet must be oscillatory.
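A quick numerical check of Equation 3.19 (Python/NumPy): the Mexican hat, being oscillatory, integrates to (numerically) zero over the real line.

    import numpy as np

    t = np.linspace(-10, 10, 20001)
    dt = t[1] - t[0]
    psi = (1.0 - t**2) * np.exp(-t**2 / 2.0)   # Mexican hat (unnormalized)

    # Eq. 3.19: an admissible wavelet must have zero mean.
    print(np.sum(psi) * dt)                    # approximately 0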

Discretization of the Continuous Wavelet Transform: The Wavelet Series

In today's world, computers are used to do most computations (well, ok... almost all computations). It is apparent that neither the FT, nor the STFT, nor the CWT can be practically computed by using analytical equations, integrals, etc. It is therefore necessary to discretize the transforms. As in the FT and STFT, the most intuitive way of doing this is simply sampling the time-frequency (scale) plane. Again intuitively, sampling the plane with a uniform sampling rate sounds like the most natural choice. However, in the case of the WT, the scale change can be used to reduce the sampling rate.

At higher scales (lower frequencies) the sampling rate can be decreased, according to Nyquist's rule. In other words, if the time-scale plane needs to be sampled with a sampling rate of N_1 at scale s_1, the same plane can be sampled with a sampling rate of N_2 at scale s_2, where s_1 < s_2 (corresponding to frequencies f_1 > f_2) and N_2 < N_1. The actual relationship between N_1 and N_2 is


Equation 3.20

N_2 = (s_1 / s_2) N_1


Equation 3.21

N_2 = (f_2 / f_1) N_1

In other words, at lower frequencies the sampling rate can be decreased, which saves a considerable amount of computation time.

It should be noted at this time, however, that the discretization can be done in any way, without any restriction, as far as the analysis of the signal is concerned. If synthesis is not required, even the Nyquist criterion does not need to be satisfied. The restrictions on the discretization and the sampling rate become important if, and only if, signal reconstruction is desired. Nyquist's sampling rate is the minimum sampling rate that allows the original continuous-time signal to be reconstructed from its discrete samples. The basis vectors that were mentioned earlier are of particular importance for this reason.

As mentioned earlier, the wavelet psi(tau,s) satisfying Equation 3.18 allows reconstruction of the signal by Equation 3.17. However, this is true for the continuous transform. The question is: can we still reconstruct the signal if we discretize the time and scale parameters? The answer is yes, under certain conditions (as they always say in commercials: certain restrictions apply!).

The scale parameter s is discretized first on a logarithmic grid. The time parameter is then discretized with respect to the scale parameter, i.e., a different sampling rate is used for every scale. In other words, the sampling is done on the dyadic sampling grid shown in Figure 3.17:

Figure 3.17

Think of the area covered by the axes as the entire time-scale plane. The CWT assigns a value to the continuum of points on this plane. Therefore, there are an infinite number of CWT coefficients. First consider the discretization of the scale axis. Among that infinite number of points, only a finite number are taken, using a logarithmic rule. The base of the logarithm depends on the user. The most common value is 2 because of its convenience. If 2 is chosen, only the scales 2, 4, 8, 16, 32, 64, etc. are computed. If the value were 3, the scales 3, 9, 27, 81, 243, etc. would have been computed. The time axis is then discretized according to the discretization of the scale axis. Since the discrete scale changes by factors of 2, the sampling rate is reduced for the time axis by a factor of 2 at every scale.

Note that at the lowest scale (s=2), only 32 points of the time axis are sampled (for the particular case given in Figure 3.17). At the next scale value, s=4, the sampling rate of the time axis is reduced by a factor of 2, since the scale is increased by a factor of 2, and therefore only 16 samples are taken. At the next step, s=8, 8 samples are taken in time, and so on.
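The dyadic grid just described can be written down in a few lines. The sketch below (plain Python) takes powers of 2 as the scales and halves the number of translation samples each time the scale doubles, which is exactly the N_2 = (s_1/s_2) N_1 rule of Equation 3.20; the 32 samples at the lowest scale and the 256-sample time support are assumptions matching the particular case of Figure 3.17.

    # Dyadic sampling grid: scales are powers of 2, and the number of time
    # samples is halved every time the scale doubles (Eq. 3.20).
    n_samples = 32            # translations taken at the lowest scale, s = 2
    signal_length = 256       # total time support in samples (assumed)

    grid = {}
    s = 2
    while n_samples >= 1:
        step = signal_length // n_samples
        grid[s] = list(range(0, signal_length, step))   # sampled time indices
        s *= 2
        n_samples //= 2

    for s, taus in grid.items():
        print(f"s = {s:3d}: {len(taus)} translation samples")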

Although it is called the time-scale plane, it is more accurate to call it the translation-scale plane, because "time" in the transform domain actually corresponds to the shifting of the wavelet in time. For the wavelet series, the actual time is still continuous.

Similar to the relationship between the continuous Fourier transform, the Fourier series and the discrete Fourier transform, there is a continuous wavelet transform, a semi-discrete wavelet transform (also known as the wavelet series) and a discrete wavelet transform.

Expressing the above discretization procedure in mathematical terms, the scale discretization is s = s_0^j, and the translation discretization is tau = k . s_0^j . tau_0, where s_0 > 1 and tau_0 > 0. Note how the translation discretization depends on the scale discretization through s_0^j.

The continuous wavelet function


Equation 3.22

psi_(tau,s)(t) = (1/sqrt(|s|)) psi( (t - tau)/s )

becomes

Equation 3.23

psi_(j,k)(t) = s_0^(-j/2) psi( s_0^(-j) t - k tau_0 )

by inserting s = s_0^j and tau = k . s_0^j . tau_0.
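In code, the discretized family of Equation 3.23 can be generated directly from a mother wavelet. The sketch below (Python/NumPy) uses s_0 = 2 and tau_0 = 1, the convenient values mentioned later in this section, and the Mexican hat as an illustrative prototype.

    import numpy as np

    def mother(t):
        """Mother wavelet used as the prototype (Mexican hat, illustrative)."""
        return (1.0 - t**2) * np.exp(-t**2 / 2.0)

    def psi_jk(t, j, k, s0=2.0, tau0=1.0):
        """Discretized wavelet psi_(j,k)(t) = s0^(-j/2) psi(s0^(-j) t - k tau0)."""
        return s0**(-j / 2.0) * mother(s0**(-j) * t - k * tau0)

    t = np.linspace(-8, 8, 1601)
    # j selects the scale (s = s0^j), k the translation on that scale's grid.
    family = {(j, k): psi_jk(t, j, k) for j in (0, 1, 2) for k in (-1, 0, 1)}
    print(len(family))   # 9 discretized wavelets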

If {psi_(j,k)(t)} constitutes an orthonormal basis, the wavelet series transform becomes


Equation 3.24

Psi_x^(psi_jk) = \int x(t) psi_(j,k)^*(t) dt

Equation 3.25

x(t) = c_psi sum_j sum_k Psi_x^(psi_jk) psi_(j,k)(t)

A wavelet series requires that {psi_(j,k)(t)} be either orthonormal, biorthogonal, or a frame. If {psi_(j,k)(t)} are not orthonormal, Equation 3.24 becomes


Equation 3.26

Psi_x^(psi_jk) = \int x(t) psi^hat_(j,k)^*(t) dt

where psi^hat_(j,k)^*(t) is either the dual biorthogonal basis or the dual frame (note that * denotes the complex conjugate).

If the {psi_(j,k)(t)} are orthonormal or biorthogonal, the transform will be non-redundant, whereas if they form a frame, the transform will be redundant. On the other hand, it is much easier to find frames than it is to find orthonormal or biorthogonal bases.

The following analogy may clarify this concept. Consider the whole process as looking at a particular object. The human eyes first determine the coarse view, which depends on the distance of the eyes to the object. This corresponds to adjusting the scale parameter s_0^(-j). When looking at a very close object, with great detail, j is negative and large (low scale, high frequency, analyzing the detail in the signal). Moving the head (or eyes) very slowly and with very small increments (of angle, of distance, depending on the object that is being viewed) corresponds to small values of tau = k . s_0^j . tau_0. Note that when j is negative and large, it corresponds to small changes in time, tau (high sampling rate), and large changes in s_0^(-j) (low scale, high frequencies, where the sampling rate is high). The scale parameter can be thought of as magnification, too.

How low can the sampling rate be and still allow reconstruction of the signal? This is the main question to be answered to optimize the procedure. The most convenient value (in terms of programming) is found to be 2 for s_0 and 1 for tau_0. Obviously, when the sampling rate is forced to be as low as possible, the number of available orthonormal wavelets is also reduced.

The continuous wavelet transform examples that were given in this chapter were actually the wavelet series of the given signals. The parameters were chosen depending on the signal. Since reconstruction was not needed, the sampling rates were sometimes far below the critical value; s_0 varied from 2 to 10, and tau_0 varied from 2 to 8, in the different examples.

This concludes Part III of this tutorial. I hope you now have a basic understanding of what the wavelet transform is all about. There is one thing left to be discussed, however. Even though the discretized wavelet transform can be computed on a computer, this computation may take anywhere from a couple of seconds to a couple of hours, depending on your signal size and the resolution you want. An amazingly fast algorithm is actually available to compute the wavelet transform of a signal. The discrete wavelet transform (DWT) is introduced in the final chapter of this tutorial, in Part IV.

Let's meet at the grand finale, shall we?

Wavelet Tutorial Main Page
Robi Polikar Main Page 

The Wavelet Tutorial is hosted by Rowan University, College of Engineering Web Servers.

All Rights Reserved. This tutorial is intended for educational purposes only. Unauthorized copying, duplicating and publishing is strictly prohibited.

Robi Polikar
136 Rowan Hall

Dept. of Electrical and Computer Engineering

Rowan University

Glassboro, NJ 08028

Phone: (856) 256 5372

