
Linear fits using Matlab

http://www.helsinki.fi/~fyl_tlpk/luento/data-sovitus-e.html

In this section linear fits using Matlab are discussed. The least squares method was the topic of the lecture http://www.helsinki.fi/~fyl_tlpk/luento/data-pns-e.html

Polynomial fit using the function polyfit, no weighting

Assume that the measured data are in the two-column array Data. The input parameters of polyfit are the data and the degree of the fitted polynomial:

>> coefficients = polyfit(Data(:,1), Data(:,2), degree)

The values of the model polynomial at the points Data(:,1) may be computed using the function polyval:

>> P = polyval(coefficients, Data(:,1))

The polynomial fit may also be done by plotting the data using the command plot (or errorbar) and then choosing 'Tools/Basic Fitting' in the figure window. The residuals and the norm of the residuals may also be computed.
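
As a minimal sketch (assuming the two-column array Data and a chosen degree as above), the residuals and their norm may also be computed directly:

coefficients = polyfit(Data(:,1), Data(:,2), degree);
P = polyval(coefficients, Data(:,1));  % model values at the data points
r = Data(:,2) - P;                     % residuals
norm(r)                                % norm of the residuals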

Example. Fitting a line using polyfit

Model function f(x) = x + 2. Add statistical errors to the data (std = 1).

x = 1:10; x = x'; a = 1; b = 2; y = a*x+b; y = y + randn(size(y))*1;

Matlab demo m-file

echo on

% Make simulated data

x = 1:10; x = x'; a = 1; b = 2; n = length(x);

y = a*x+b; y = y + randn(size(y))*1;

% Fit the line

[P,S] = polyfit(x,y,1) % P contains a and b

% Standard errors according to help text

Rinv = inv(S.R);

C = (Rinv*Rinv')*S.normr^2/S.df; sqrt(diag(C))

% Form the coefficient matrix

A = [x ones(size(x))];

% Weighted fit, assuming errors of 3 (covariance matrix 3^2*I)

[P2, C2, mse] = lscov(A, y, eye(n,n)*3^2)

% Error bars for the coefficients P2

C2/sqrt(mse)

echo off

Execution in Matlab (the echoed commands are the same as above) gives the output:

P =

0.9401 2.6561

S =

R: [2x2 double]

df: 8

normr: 2.5209

ans =

0.0981

0.6089

P2 =

0.9401

2.6561

C2 =

0.0981

0.6089

mse =

0.0883

ans =

0.3303

2.0494

The results thus are: slope 0.9 with error 0.3, and constant 3 with error 2 (compare with the true values a = 1 and b = 2).

Plotting the result

errorbar(x, y, ones(size(x))*3)

hold on

plot(x, a*x+b)

hold off

The function linsolve and the operator \ for an unweighted fit

The function linsolve and the operator \ solve an unweighted set of linear equations.

The set of equations needs to be presented in the form Ab = y.

Example. Find the crossing point of the lines y = 2x + 1 and y = -3x + 2:

y - 2x = 1

y + 3x = 2

The coefficient matrix and data column vector:

A = [1 -2; 1 3]; d = [1; 2];

Solution

A\d

ans =

1.4

0.2

The crossing point (0.2, 1.4) was thus obtained (the unknowns are ordered [y; x]). Because A is a square matrix, inv(A)*d = [1.4; 0.2] also works, but it is a numerically inefficient way.
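
As a sketch, the same crossing point may also be computed with linsolve (the same A and d as above):

A = [1 -2; 1 3]; d = [1; 2];
b = linsolve(A, d)   % returns [1.4; 0.2], the same as A\d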

Linear least squares fit using the function lscov

This function solves the equation Ab = y, where the matrix A, the data y and the errors dy are known. With this function one may do a weighted fit and obtain error estimates for the parameters on the basis of the covariance matrix of the data y.

Unweighted fit

The function lscov may be used simply as

b = lscov(A,y)

The same result would be obtained with b = A\y.

Weighted fit

The usual weighting matrix is the inverse of the covariance matrix of the data, S = inv(cov(y)).

If the components of y are statistically independent, the covariance matrix is diagonal and the diagonal contains the variances of the components of y. Then the sum to be minimized may be written as

F(a, x) = (1/n) ∑ᵢ |yᵢ − f(a, xᵢ)|² / var(yᵢ).

First form the weighting vector

S = 1./dy.^2

or the whole covariance matrix covy. Assuming that the components of y are independent:

covy = diag(dy.^2)

The fitting is done by

[b, stdb, mse] = lscov(A,y,1./dy.^2)

[b, stdb, mse] = lscov(A,y,covy)

Both the solution b and the standard errors stdb are formed. The error estimate is obtained as

errb = stdb/sqrt(mse)

The function lscov assumes that the covariance matrix of y is known only up to a scale factor, and the parameter mse is an estimate of that unknown scale factor. If the covariance matrix is known, the standard errors should be scaled by 1/sqrt(mse); see the help text of the function lscov. The function lscov actually also outputs the covariance matrix of b, which should be scaled by 1/mse.
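
As a summary, here is a minimal weighted-fit sketch for a straight line; the data and the assumed error vector dy are simulated here purely for illustration:

% simulated data: y = 2x + 1 with errors of std 2
x = (1:10)'; dy = 2*ones(size(x));
y = 2*x + 1 + dy.*randn(size(x));
A = [x ones(size(x))];        % coefficient matrix
covy = diag(dy.^2);           % covariance matrix of y
[b, stdb, mse] = lscov(A, y, covy)
errb = stdb/sqrt(mse)         % errors when the covariance is exactly known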

Error estimation on the basis of the covariance matrix of the data

Denote the equation to be solved by Ab = y as previously.

Start from the weighted normal equations

AᵀSAb = AᵀSy,

where S is the weighting matrix.

The solution may be written as

b = Gy, where G = inv(AᵀSA) AᵀS.

Then the covariance matrix of the solution b may be written in terms of the covariance matrix of the data y:

cov(b) = G cov(y) Gᵀ

Thus simple error estimates for the parameters b are obtained as follows:

S = inv(covy);            % weighting matrix
G = inv(A'*S*A)*A'*S;
covb = G*covy*G';         % covariance matrix of b
errb = sqrt(diag(covb))   % error estimates

Example. Polynomial fit, model f(x) = Ax² + Bx + C

Here the base functions are the monomials x², x¹ and x⁰, and the goal is to find the coefficients A, B and C.

Denote the experimental data by (xᵢ, yᵢ). Assume that the values of x are exact and that the errors of y are stored in the array sig. Assume also - exceptionally - that the exact data yt, the correct result, is known as well. The data are stored in the columns x, yt, y and sig.

>> [x yt y sig]

1.0000 5.0000 7.3406 4.6811

1.5000 11.0000 11.2147 0.4293

2.0000 19.0000 8.9606 20.0789

2.5000 29.0000 19.5285 18.9429

3.0000 41.0000 37.2557 7.4886

3.5000 55.0000 43.1411 23.7177

4.0000 71.0000 60.4410 21.1181

4.5000 89.0000 103.7248 29.4496

5.0000 109.0000 109.5574 1.1149

5.5000 131.0000 118.8268 24.3463

6.0000 155.0000 154.5877 0.8245

6.5000 181.0000 169.7166 22.5669

7.0000 209.0000 195.5072 26.9856

7.5000 239.0000 236.3890 5.2220

8.0000 271.0000 280.5347 19.0693

Forming the coefficient matrix A

A = [x.^2 x ones(size(x))];

The condition number of A:

cond(A)

ans = 157.6907

On the basis of the condition number, the problem is well enough conditioned for a least squares fit.

Exact solution using the exact data yt (exceptionally known in this case) and the 'measured' data y:

>> [A\yt A\y]

ans =

4.0000 4.5037

2.0000 -2.3234

-1.0000 2.0201

Covariance matrix of y

covy = diag(sig.^2);

Plot it for fun:

surf(covy), title('cov(y)')

Errors of the fitting parameters based on the covariance matrix of the data covy:

G = inv(A'*A)*A'; covc = G*covy*G';

Error estimates:

sqrt(diag(covc))

The error estimates are 1.0, 8.7 and 13.7 for the coefficients A, B and C, respectively.

Weighted fit using lscov with error estimates for the fitting parameters

The matrix A, the data column y and the covariance matrix of y are already formed, so everything is ready for using the function lscov.

[b, errb, mse] = lscov(A, y, covy)

b =

3.8728

2.8255

-1.7252

errb =

0.1436

1.0573

1.3110

mse =

0.2599

Error estimates for the fitting parameters:

>> errb/sqrt(mse)

ans =

0.2816

2.0740

2.5717

Was the weighting helpful? In this case the residuals are about the same:

b0 = A\y;

>> [norm(y-A*b), norm(y-A*b0)]

ans =

35.6 30.7

plot(x, A*b, x, A*b0, x, y)

Another way to obtain errors:

S = inv(covy);
G = inv(A'*S*A)*A'*S;
covc = G*covy*G';
errorestimate = sqrt(diag(covc))

errorestimate =

0.2816

2.0740

2.5717

The results are:

coefficient    error estimate
3.8            0.3
2.8            2.1
-1.7           2.6

Example. Erroneous values in the data

This demonstration shows that weighting is beneficial. The test data (generated in the script) contain one clearly wrong value, whose weight is reduced to zero.

% make test data

% assume that it obeys Poisson statistics

x = 0:0.5:10; x = x';

yt = 100*x + 1000; y = poissrnd(yt);

% introduce one erroneous value into the data

y(18) = 800;

% error bars in the case of Poisson statistics

subplot(2,2,1), errorbar(x,y,sqrt(y));

% fit using polyfit

coe = polyfit(x,y,1),

subplot(2,2,3), plot(x,yt,x,y,'.',x,polyval(coe,x))

% weighted fit using lscov; Poisson weights 1/var(y) = 1/y

wei = 1./y; wei(18) = 0;

coe2 = lscov([x ones(size(x))], y, wei)

subplot(2,2,4), plot(x,yt,x,y,'.',x,polyval(coe2,x))

Results for a test run

>> sovituskoe

coe =

87.125 1020.7

coe2 =

106.63

973.47

Fitting using lscov gave much better results in this case.

How do you know when a value is erroneous? Check the measurement parameters. Repeat the experiment, if possible.

Functions of the statistics package

The function regress outputs the solution and confidence intervals. The package also includes the function polytool, which is a graphical interface for polynomial fits.

Example. Function regress.

% solution br, confidence intervals bintr

>> [br, bintr] = regress(y, A)

br =

0.9702

2.2430

bintr =

0.7619 1.1784

0.9511 3.5350
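
The interactive tool may be tried with a one-line sketch like the following (Statistics Toolbox; the data vectors x, y and the degree 1 are assumptions matching the line example):

polytool(x, y, 1)   % opens an interactive polynomial fitting window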

Linear trend in the data

The function detrend removes a linear 'background'. Try it with f(x) = sin(x) + 5x:

x = 0:0.1:8*pi; x=x'; f = 5*x+sin(x);

The linear trend is removed:

plot(x,detrend(f),x,sin(x),x,f)

Another way is to fit the model

g(x, a, b) = ax + b sin(x).

The base functions are f1(x) = x and f2(x) = sin(x), and the coefficient matrix is

A = [f1(x) f2(x)] = [x sin(x)].

One may use lscov, \ or linsolve.
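
A minimal sketch using the operator \; the small added noise is an assumption to make the example nontrivial:

% fit g(x) = a*x + b*sin(x)
x = (0:0.1:8*pi)'; f = 5*x + sin(x) + 0.1*randn(size(x));
A = [x sin(x)];   % base functions as columns
ab = A\f          % ab(1) is close to 5, ab(2) close to 1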

If the function f2 were f2(x) = sin(cx), where c is an unknown fitting parameter, the model would be

g(x, a, b, c) = ax + b sin(cx).

Solving this requires a function for nonlinear fits. This can be done, for instance, with the function fminsearch.
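
A minimal fminsearch sketch for this model; the simulated data and the starting guess p0 are assumptions:

% nonlinear fit of g(x) = a*x + b*sin(c*x)
x = (0:0.1:8*pi)'; y = 5*x + 2*sin(1.5*x) + 0.1*randn(size(x));
g = @(p, x) p(1)*x + p(2)*sin(p(3)*x);
ssq = @(p) sum((y - g(p, x)).^2);   % sum of squares to be minimized
p0 = [4 1 1.4];                     % starting guess (must be reasonably close)
p = fminsearch(ssq, p0)             % p should converge towards [5 2 1.5]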
