[Finite Element Method] Newton-Raphson Method


Linear vs Nonlinear Analysis:

  • At this point, we can carry out a linear analysis without difficulty:

$$\begin{aligned} &\int \sum_{i,j=1}^{3}\sigma_{ij}\,\varepsilon_{ij}^{*}\,dv=\int t_n\cdot u^{*}\,ds+\int \rho b\cdot u^{*}\,dv \\ \Rightarrow\; &\underbrace{\int_e [B]^T[C][B]\,dx}_{k_e}\,u_e=\int_{\partial e}[N]^T t_n\,ds+\int_e [N]^T \rho b\,dv \end{aligned}$$

  • Two Assumptions:

(1) Small Deformations
(2) Linear Elastic Materials

  • This allowed us to solve our system using a system of linear equations:

$$[K][u]=[F]$$

  • Unfortunately, the world is a cruel place and the majority of realistic scenarios require a nonlinear analysis:

Stress-Strain Relationship:

$$\int \sum_{i,j=1}^{3}\sigma_{ij}\,\varepsilon_{ij}^{*}\,dv$$

$t_n$ and $\rho b$ are solution dependent:

$$\int t_n\cdot u^{*}\,ds+\int \rho b\cdot u^{*}\,dv$$

Types of Nonlinearities

There are many types of nonlinearities:

1. Geometric Nonlinearity

  • Buckling
  • Large deformation of the material

2. Material Nonlinearity

  • Plasticity
  • Damage


3. Kinematic Nonlinearity

  • Contact


Newton-Raphson Method (Single Variable)

To solve nonlinear equations we typically employ the Newton-Raphson method (also referred to as Newton’s method). To see how this method works, let’s examine a single variable scenario:

If we have a continuous function $f$, we can perform a Taylor series expansion:

$$f(x^{(i+1)})\approx f(x^{(i)})+\frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)}-x^{(i)}\right)+\frac{1}{2!}\,\frac{\partial^2 f(x^{(i)})}{\partial x^2}\left(x^{(i+1)}-x^{(i)}\right)^2+\cdots$$

Neglecting higher-order terms:

$$f(x^{(i+1)})\approx f(x^{(i)})+\frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)}-x^{(i)}\right)$$

Rearranging the equation and introducing notation:

$$x^{(i+1)}-x^{(i)}=\frac{f(x^{(i+1)})-f(x^{(i)})}{f'(x^{(i)})},\qquad \Delta x = x^{(i+1)}-x^{(i)},\qquad K_t=f'(x^{(i)})$$

Simplifying, with the target value $F$ in place of $f(x^{(i+1)})$:

$$\Delta x=\frac{F-f(x^{(i)})}{K_t}=K_t^{-1}\left(F-f(x^{(i)})\right)$$

Now let’s examine the parameters of the equation:

$F$ = desired (target) value
$x^{(i)}$ = current value of $x$
$f(x^{(i)})$ = current value of the function
$K_t$ = slope of the tangent line
$\Delta x$ = increment in $x$ (the unknown we solve for)


In FEA:

  • $F - f(x^{(i)})$ = difference between the applied forces $(F)$ and the internal forces $(f(x^{(i)}))$
  • $\Delta x$ = displacement increment

Example 1 - Single Variable Newton-Raphson Method

If $f(x)=\sqrt{x}$, find $x$ such that $f(x)=3$.

Function Information:

$$f(x)=\sqrt{x}\;\Rightarrow\;K_t=\frac{\partial f}{\partial x}=\frac{1}{2\sqrt{x}}$$

$$\Delta x=K_t^{-1}\left(F-f(x^{(i)})\right)$$

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |

$$\Delta x=K_t^{-1}\left(F-f(x^{(i)})\right)=(0.5)^{-1}(3-1)=4$$


| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.22 | 3.42 | 8.42 |


| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.22 | 3.42 | 8.42 |
| 8.42 | 2.90 | 0.17 | 0.57 | 8.99 |

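The iterations above can be reproduced with a short script (a minimal sketch, not from the original notes; the helper `newton_raphson` and its tolerance are illustrative choices):

```python
import math

def newton_raphson(f, df, F, x0, tol=1e-3, max_iter=50):
    """Iterate x <- x + Kt^-1 (F - f(x)) until f(x) is within tol of F."""
    x = x0
    for _ in range(max_iter):
        Kt = df(x)                # slope of the tangent line
        dx = (F - f(x)) / Kt      # increment in x
        x += dx
        if abs(F - f(x)) <= tol:
            break
    return x

# Example 1: f(x) = sqrt(x), find x such that f(x) = 3 (exact answer: 9)
root = newton_raphson(math.sqrt, lambda x: 1 / (2 * math.sqrt(x)), F=3, x0=1)
print(root)
```

The first three iterates (5, 8.42, 8.99) match the table above; a fourth iteration lands essentially on the exact answer $x = 9$.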

Newton-Raphson Method (Multi-Variable)

Unfortunately, most scenarios (especially in finite element analysis) involve systems of multi-variable equations:

$$\begin{aligned} f_1(x_1, x_2, \ldots, x_n) &= 0 \\ f_2(x_1, x_2, \ldots, x_n) &= 0 \\ &\;\;\vdots \\ f_n(x_1, x_2, \ldots, x_n) &= 0 \end{aligned}$$

We can perform the Taylor series expansion for each equation:

$$\begin{aligned} f_1(x_1^{(i+1)}, x_2^{(i+1)}, \ldots, x_n^{(i+1)})\approx\; &f_1(x_1^{(i)}, x_2^{(i)}, \ldots, x_n^{(i)}) \\ &+\frac{\partial f_1}{\partial x_1}\Big|_{x^{(i)}}\left(x_1^{(i+1)}-x_1^{(i)}\right)+\frac{\partial f_1}{\partial x_2}\Big|_{x^{(i)}}\left(x_2^{(i+1)}-x_2^{(i)}\right)+\cdots+\frac{\partial f_1}{\partial x_n}\Big|_{x^{(i)}}\left(x_n^{(i+1)}-x_n^{(i)}\right) \end{aligned}$$

Writing all the equations in matrix form:

$$\begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix}=\begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix}+\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}\begin{bmatrix} x_1^{(i+1)}-x_1^{(i)} \\ x_2^{(i+1)}-x_2^{(i)} \\ \vdots \\ x_n^{(i+1)}-x_n^{(i)} \end{bmatrix}$$

We can move the vector containing the value of the functions at the initial guess to the other side to obtain the following:

$$\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}\begin{bmatrix} x_1^{(i+1)}-x_1^{(i)} \\ x_2^{(i+1)}-x_2^{(i)} \\ \vdots \\ x_n^{(i+1)}-x_n^{(i)} \end{bmatrix}=\begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix}-\begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix}$$

or, more simply:

$$[K_t][\Delta x]=[F]-[f(x^{(i)})]$$

Moving $[K_t]$ to the other side:

$$[\Delta x]=[K_t]^{-1}\left([F]-[f(x^{(i)})]\right)$$

This is the same as the single-variable case, but now with vectors and matrices.

Convergence Checks:

With multiple variables it is difficult to determine if the method has converged or not. To make things easier, the norm of the vectors is used:

1. Force (and Moment) Convergence:

  • Force convergence is achieved when the norm of the vector $f(x^{(i)}) - F$ divided by the norm of the vector $F$ is less than the specified tolerance $e_R$:

$$\frac{\|f(x^{(i)})-F\|}{\|F\|}\leq e_R$$

2. Displacement Convergence:

  • Displacement convergence is achieved when the norm of the vector $\Delta x$ divided by the norm of the vector $x^{(i)}$ is less than the specified tolerance $e_u$:

$$\frac{\|\Delta x\|}{\|x^{(i)}\|}\leq e_u$$
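The two checks can be sketched as a small numpy helper (an illustrative function, not from the original notes; the default tolerances are arbitrary):

```python
import numpy as np

def converged(f_i, F, dx, x_i, e_R=0.01, e_u=0.01):
    """Force and displacement convergence checks via vector norms."""
    force_ok = np.linalg.norm(f_i - F) / np.linalg.norm(F) <= e_R
    disp_ok = np.linalg.norm(dx) / np.linalg.norm(x_i) <= e_u
    return bool(force_ok and disp_ok)

# Values from the final row of Example 2 pass both checks
print(converged(np.array([19.979, 1.001]), np.array([20.0, 1.0]),
                np.array([0.001, -0.0001]), np.array([1.348, 0.302])))
```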

Example 2 - Multi-Variable Newton-Raphson Method:

If $f(x_1,x_2)=[20x_1^4x_2+3x_2^3,\;20x_1^2x_2^3]$, find $\{x_1,x_2\}$ such that $f(x_1,x_2)=[20,1]$.

Function Information:

$$\begin{aligned} f_1(x_1,x_2)&=20x_1^4x_2+3x_2^3 \\ f_2(x_1,x_2)&=20x_1^2x_2^3 \\ \Rightarrow [K_t]&=\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix}=\begin{bmatrix} 80x_1^3x_2 & 20x_1^4+9x_2^2 \\ 40x_1x_2^3 & 60x_1^2x_2^2 \end{bmatrix} \end{aligned}$$

Sample Calculations ($1^{st}$ Iteration):

$$\begin{aligned} f_1(1,1)&=20(1)^4(1)+3(1)^3=23 \\ f_2(1,1)&=20(1)^2(1)^3=20 \\ [K_t]&=\begin{bmatrix} 80(1)^3(1) & 20(1)^4+9(1)^2 \\ 40(1)(1)^3 & 60(1)^2(1)^2 \end{bmatrix}=\begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix} \\ [\Delta x]&=[K_t]^{-1}([F]-[f])=\begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix}^{-1}\left(\begin{bmatrix} 20 \\ 1 \end{bmatrix}-\begin{bmatrix} 23 \\ 20 \end{bmatrix}\right)=\begin{bmatrix} 0.102 \\ -0.385 \end{bmatrix} \end{aligned}$$

| $x_1^{(i)}$ | $x_2^{(i)}$ | $f_1(x^{(i)})$ | $f_2(x^{(i)})$ | $\Delta x_1$ | $\Delta x_2$ | $e_R$ (%) | $e_u$ (%) |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 23 | 20 | 0.102 | -0.385 | 96.1 | 28.1 |
| 1.102 | 0.615 | 18.838 | 5.650 | 0.125 | -0.215 | 23.9 | 19.7 |
| 1.227 | 0.400 | 18.325 | 1.927 | 0.096 | -0.085 | 9.6 | 9.9 |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| 1.348 | 0.302 | 19.979 | 1.001 | 0.001 | -0.0001 | 0.1 | 0.04 |

6 Iterations
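The iteration history above can be reproduced with numpy (a sketch, not from the original notes; `np.linalg.solve` is used instead of forming $[K_t]^{-1}$ explicitly, and the 0.1% stopping tolerance is an illustrative choice):

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([20 * x1**4 * x2 + 3 * x2**3, 20 * x1**2 * x2**3])

def Kt(x):
    x1, x2 = x
    return np.array([[80 * x1**3 * x2, 20 * x1**4 + 9 * x2**2],
                     [40 * x1 * x2**3, 60 * x1**2 * x2**2]])

F = np.array([20.0, 1.0])
x = np.array([1.0, 1.0])          # initial guess, as in the table
for i in range(50):
    dx = np.linalg.solve(Kt(x), F - f(x))   # [Kt] dx = [F] - [f]
    x = x + dx
    if np.linalg.norm(f(x) - F) / np.linalg.norm(F) <= 1e-3:
        break
print(np.round(x, 3))
```

The first increment matches the sample calculation ($\Delta x = [0.102, -0.385]$), and the iterates settle near $(1.348, 0.302)$.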

Other Methods:

Over the years the Newton-Raphson method has been modified to help improve stability and reduce computation time:

1. Modified Newton-Raphson Method

  • $[K_t]$ is recalculated only at selected iterations (say every $5^{th}$ iteration) or kept constant during the entire analysis
  • Requires more iterations to converge, but each iteration is cheaper, which can reduce the overall computational cost

2. Secant Method (Quasi-Newton)

  • Approximates $[K_t]$ after the first iteration:

$$[K_t]_i=\frac{f(x^{(i)})-f(x^{(i-1)})}{x^{(i)}-x^{(i-1)}}$$

  • Don’t need to calculate derivatives!
  • Provides a good convergence rate while reducing the computational cost
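A single-variable sketch of the secant update (an illustrative helper, not from the original notes; two starting points are needed to form the first secant slope):

```python
import math

def secant(f, F, x0, x1, tol=1e-3, max_iter=50):
    """Quasi-Newton: approximate Kt from the last two iterates (no derivatives)."""
    g0, g1 = f(x0) - F, f(x1) - F
    for _ in range(max_iter):
        Kt = (g1 - g0) / (x1 - x0)     # secant approximation to the tangent slope
        x0, g0 = x1, g1
        x1 = x1 - g1 / Kt
        g1 = f(x1) - F
        if abs(g1) <= tol:
            break
    return x1

# Same problem as Example 1: sqrt(x) = 3, exact answer x = 9
root = secant(math.sqrt, 3, 1.0, 5.0)
print(root)
```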

Example 3 - Single Variable Modified Newton-Raphson Method:

If $f(x)=\sqrt{x}$, find $x$ such that $f(x)=3$.

Function Information:

$$f(x)=\sqrt{x}\;\Rightarrow\;K_t=\frac{\partial f}{\partial x}=\frac{1}{2\sqrt{x}}$$

$$\Delta x=K_t^{-1}\left(F-f(x^{(i)})\right)$$

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |


| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.5 | 1.52 | 6.52 |


| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.5 | 1.52 | 6.52 |
| 6.52 | 2.55 | 0.5 | 0.89 | 7.41 |


| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
|---|---|---|---|---|
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.5 | 1.52 | 6.52 |
| 6.52 | 2.55 | 0.5 | 0.89 | 7.41 |
| 7.41 | 2.72 | 0.5 | 0.55 | 7.97 |


Slower Convergence!!
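The table above can be reproduced by freezing $K_t$ at its initial value (a sketch; the iteration cap and tolerance are illustrative, and small differences from the table come from carrying full precision between iterations):

```python
import math

# Modified Newton-Raphson: Kt is computed once at x0 = 1 and never updated
F, x = 3.0, 1.0
Kt = 1 / (2 * math.sqrt(x))
iterates = []
for _ in range(100):
    x += (F - math.sqrt(x)) / Kt
    iterates.append(round(x, 2))
    if abs(F - math.sqrt(x)) <= 1e-3:
        break
print(iterates[:4], len(iterates))
```

Where full Newton-Raphson reached $x \approx 9$ in about four iterations, the frozen-slope version takes well over a dozen: the cheap iterations come at the price of slower convergence.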

Material Nonlinearity Example


Wolfram Mathematica

(* Material Nonlinearity Example *)
Clear["Global`*"]

(* Parameters *)
L = 1.0;
A = 1;
p = X1^2;

(* Newton-Raphson Parameters *)
Xi = 1;
ErrorLimit = 0.5;
MaxIter = 20;

(* Stress-Strain Relationship *)
s = 10*u'[X1] + 100000*u'[X1]^3;

(* Exact Solution (numerical) *)
DE = D[s*A, X1] + p;
sol = NDSolve[{DE == 0, u[0] == 0, u[L] == 0}, u[X1], X1];
uExact = u[X1] /. sol[[1]];

Plot[uExact, {X1, 0, L}]


(* RR Approximation *)

(* Approximation Function *)
uApprox = a0 + a1*X1 + a2*X1^2;

(* Essential Boundary Conditions *)
BC1 = uApprox /. {X1 -> 0};
BC2 = uApprox /. {X1 -> L};
sol1 = Solve[{BC1 == 0, BC2 == 0}, {a0, a1}];
{a0, a1} = {a0, a1} /. sol1[[1]];

(* Total Internal Strain Energy *)
S = 10*ee + 100000*ee^3;                (* stress in terms of dummy strain ee *)
SED = Integrate[S, {ee, 0, e}];         (* 5 e^2 + 25000 e^4, strain energy density *)
e = D[uApprox, X1];
TSE = Integrate[SED, {X1, 0, L}];       (* 1.66667 a2^2 + 5000 a2^4, total strain energy *)

(* Work *)
W = Integrate[p*uApprox, {X1, 0, L}];   (* -0.05 a2 *)

(* Potential Energy *)
PE = TSE - W;                           (* 0.05 a2 + 1.66667 a2^2 + 5000 a2^4 *)

(* Newton-Raphson Method: solve dPE/da2 == 0 for a2 *)
ai = {a2};
Func = D[PE, a2];    (* f(x) *)
Kt = D[Func, a2];    (* f'(x) *)


For[i = 1, i <= MaxIter, i++,

 (* Calculate Function and Kt at Current Iteration *)
 FuncIter = Func /. {a2 -> Xi};
 KtIter = Kt /. {a2 -> Xi};

 (* Solving for Delta X *)
 Delta = (-1)*FuncIter/KtIter;

 (* Finding Xi for Next Iteration *)
 XInit = Xi;
 Xi = Xi + Delta;

 (* Break Loop Once Convergence Occurs *)
 Error = (Delta/XInit)*100;
 If[Abs[Error] <= ErrorLimit,
  Print["Analysis Converged!"];
  Print["  Coefficient a2 = ", Xi];
  Print["  Error = ", Error, " (%)"];
  a2 = Xi;
  Conv = 1;
  Break[]];

 (* Track Results *)
 Print["Iteration: ", i];
 Print["  Coefficient a2 = ", Xi];
 Print["  Error = ", Error, " (%)"];

 (* Error Message if Max Number of Iterations Reached *)
 If[i == MaxIter,
  Print["Max Number of Iterations Reached"]];
 ]

Plot[{uExact, uApprox}, {X1, 0, L}]
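For comparison, the Newton-Raphson solve at the heart of the script can be sketched in Python, using the potential-energy coefficients noted in the Mathematica comments ($PE = 0.05a_2 + 1.66667a_2^2 + 5000a_2^4$); the starting value and 0.5% tolerance follow the script:

```python
# f(a2) = dPE/da2 and Kt = d^2PE/da2^2 for PE = 0.05 a2 + 1.66667 a2^2 + 5000 a2^4
def func(a2):
    return 0.05 + 2 * 1.66667 * a2 + 4 * 5000 * a2**3

def kt(a2):
    return 2 * 1.66667 + 12 * 5000 * a2**2

a2 = 1.0                               # Xi = 1 in the script
for i in range(100):
    delta = -func(a2) / kt(a2)         # Newton-Raphson increment
    a2 += delta
    if abs(delta / a2) * 100 <= 0.5:   # ErrorLimit = 0.5 (%)
        break
print(a2)
```

The iteration converges to a small negative coefficient, so the one-term Rayleigh-Ritz approximation bows the displacement field between the fixed ends.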

