Calculus Outline + Formula Summary (Year 1, Semester 2)

        Hello, fellow XJTLU students! With finals approaching, I put a lot of effort into compiling this outline as best I could so that we don't fail. The content is fairly basic, but if it reaches you and helps your revision even a little, that would be wonderful. Also, I organized it following the MTH008 module, so depending on your course some topics may be missing — feel free to add them in the comments. My knowledge is limited, so if anything here is wrong, please point it out. ^_^

This post is for study and exchange only; it will be removed upon request in case of infringement.


文章目录



Chapter 9 : Infinite Series



9.1 Infinite Sequences


Description of a Sequence:

Enough initial terms : $1, 4, 7, 10, \cdots$
Explicit formula : $a_n=3n-2,\ n\geq1$
Recursion formula : $a_1=1,\ a_n=a_{n-1}+3,\ n\geq2$


9.1.1 Limit of a Sequence


Definition:

A sequence $\{a_n\}$ has the limit $L$, and we write $\lim\limits_{n\to\infty}a_n=L$ or $a_n\to L$ as $n\to\infty$, if we can make the terms $a_n$ as close to $L$ as we like by taking $n$ sufficiently large.


$\epsilon$ - $N$ Definition:

For every $\epsilon>0$, there is a corresponding integer $N>0$ such that $|a_n-L|<\epsilon$ for all $n\geq N$.


9.1.2 Convergence & Divergence


Definition:

If $\lim\limits_{n\to\infty}a_n=L$, we say the sequence $\{a_n\}$ converges (or is convergent) to $L$.
If $\lim\limits_{n\to\infty}a_n$ does not exist, we say the sequence $\{a_n\}$ diverges (is divergent).


Remark

The convergence or divergence of a sequence does not depend on its initial terms, but only on the behavior of the terms for large $n$ (the tail of the sequence).


Theorem A : Properties of Convergent Sequences

Let $\{a_n\}$ and $\{b_n\}$ be convergent sequences and $k$ a constant.

Then :

  1. $\lim\limits_{n\to\infty}k=k$
  2. $\lim\limits_{n\to\infty}ka_n=k\lim\limits_{n\to\infty}a_n$
  3. $\lim\limits_{n\to\infty}(a_n\pm b_n)=\lim\limits_{n\to\infty}a_n\pm\lim\limits_{n\to\infty}b_n$
  4. $\lim\limits_{n\to\infty}(a_n\cdot b_n)=\lim\limits_{n\to\infty}a_n\cdot\lim\limits_{n\to\infty}b_n$
  5. $\lim\limits_{n\to\infty}\frac{a_n}{b_n}=\frac{\lim\limits_{n\to\infty}a_n}{\lim\limits_{n\to\infty}b_n},\ (\lim\limits_{n\to\infty}b_n\ne0)$

L’Hospital’s Rule:

If $a_n=f(n)$ and $\lim\limits_{x\to\infty}f(x)=L$, then $\lim\limits_{n\to\infty}a_n=L$. So the sequence limit is a special case of $\lim\limits_{x\to\infty}f(x)=L$, and L’Hospital’s Rule can be applied to the corresponding function of $x$ when evaluating limits of sequences.


Theorem B : Squeeze Theorem

Suppose that $\{a_n\}$ and $\{c_n\}$ both converge to $L$ and that $a_n\leq b_n\leq c_n$ for $n\geq N$ ($N$ a fixed integer). Then $\{b_n\}$ also converges to $L$.


Theorem C

If $\lim\limits_{n\to\infty}|a_n|=0$, then $\lim\limits_{n\to\infty}a_n=0$.


Theorem D : Monotonic Sequence Theorem

Every bounded, monotonic sequence is convergent.

If the sequence $\{a_n\}$ satisfies $a_n\leq a_{n+1}$ and $a_n\leq U$ for all $n\geq N$ (non-decreasing and bounded above by $U$), then $\lim\limits_{n\to\infty}a_n=A\leq U$.

If the sequence $\{a_n\}$ satisfies $a_n\geq a_{n+1}$ and $a_n\geq L$ for all $n\geq N$ (non-increasing and bounded below by $L$), then $\lim\limits_{n\to\infty}a_n=A\geq L$.


Remark :

It is not necessary that the sequences be monotonic initially, only that they are monotonic from some term on, that is for n ≥ N n ≥ N nN , where N N N is a fixed integer.



9.2 Infinite Series


9.2.1 Convergence and Divergence of Infinite Series


Definition : Convergence and Divergence

If the sequence of partial sums $\{S_n\}$ converges and $\lim\limits_{n\to\infty}S_n=S$, then the infinite series $\sum\limits_{k=1}^{\infty}a_k$ is said to converge and has sum $S$. We denote this by writing $S=\sum\limits^{\infty}_{k=1}a_k$.
If $\{S_n\}$ diverges, then the series is said to diverge. A divergent series has no sum.


Geometric Series

$\sum\limits^{\infty}_{k=1}ar^{k-1}=a+ar+ar^2+ar^3+\cdots$ , where $a\ne0$

$$\sum\limits^{\infty}_{k=1}ar^{k-1}:\begin{cases}\text{converges to }\frac{a}{1-r} & \text{if } |r|<1 \\ \text{diverges} & \text{if } |r|\geq1\end{cases}$$
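
As a quick numerical sanity check (my own illustrative Python sketch, not part of the course material; the values of $a$, $r$ and the cut-offs are arbitrary), the partial sums of a geometric series with $|r|<1$ can be compared against $\frac{a}{1-r}$:

```python
# Numerically compare partial sums of a geometric series with a/(1-r).
# Illustrative sketch only; a, r and the numbers of terms are arbitrary choices.
a, r = 2.0, 0.5

def geometric_partial_sum(a, r, n):
    """Return S_n = a + a*r + ... + a*r**(n-1)."""
    return sum(a * r**k for k in range(n))

limit = a / (1 - r)          # theoretical sum, valid because |r| < 1
for n in (5, 10, 20, 40):
    print(n, geometric_partial_sum(a, r, n), limit)
# The printed partial sums approach 4.0 = a/(1-r) as n grows.
```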


Collapsing Series

Find $S_n$ by telescoping: write each term as a difference so that the intermediate terms cancel.


Harmonic Series

$\sum\limits^{\infty}_{n=1}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots$ is divergent.

But the $p$-series $\sum\limits^{\infty}_{n=1}\frac{1}{n^k}$ is convergent when $k>1$.
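
A small numerical illustration of the difference (my own sketch, not from the notes; the cut-off values are arbitrary): the harmonic partial sums keep growing, while the $k=2$ partial sums level off near $\pi^2/6$.

```python
# Partial sums: the harmonic series grows without bound (slowly),
# while the p-series with k = 2 levels off near pi^2/6.
import math

for n in (10, 10_000, 1_000_000):
    harmonic = sum(1 / k for k in range(1, n + 1))
    p_series = sum(1 / k**2 for k in range(1, n + 1))
    print(f"n={n:>8}  harmonic={harmonic:8.4f}  sum 1/k^2={p_series:8.6f}")
print("pi^2/6 =", math.pi**2 / 6)   # the k = 2 series converges to this value
```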


9.2.2 Properties of Convergent Series


Theorem A : nth-Term Test for Divergence

If the series ∑ n = 1 ∞ a n \sum\limits^{\infty}_{n=1}a_n n=1an converges, then lim ⁡ n → ∞ a n = 0 \lim\limits_{n\to\infty}a_n=0 nliman=0
Equivalently, if lim ⁡ n → ∞ a n ≠ 0 \lim\limits_{n\to\infty}a_n\ne0 nliman=0 or if lim ⁡ n → ∞ a n \lim\limits_{n\to\infty}a_n nliman does not exist, then the series diverges


Remark :

If $\lim\limits_{n\to\infty}a_n=0$, we cannot conclude that the series $\sum\limits^{\infty}_{n=1}a_n$ converges (the harmonic series is a counterexample).


Theorem B : Linearity of Convergent Series

If $\sum\limits_{k=1}^{\infty}a_k$ and $\sum\limits_{k=1}^{\infty}b_k$ both converge and $c$ is a constant, then $\sum\limits_{k=1}^{\infty}ca_k$ and $\sum\limits_{k=1}^{\infty}(a_k+b_k)$ also converge, and

  1. $\sum\limits_{k=1}^{\infty}ca_k=c\sum\limits_{k=1}^{\infty}a_k$
  2. $\sum\limits_{k=1}^{\infty}(a_k+b_k)=\sum\limits_{k=1}^{\infty}a_k+\sum\limits_{k=1}^{\infty}b_k$

Theorem C

If ∑ k = 1 ∞ a k \sum\limits_{k=1}^{\infty}a_k k=1ak diverges and c ≠ 0 c\ne0 c=0, then ∑ k = 1 ∞ c a k \sum\limits_{k=1}^{\infty}ca_k k=1cak diverges.



9.3 Convergence of positive series


Note: everything discussed in this section assumes a positive series; series containing negative terms are discussed later.


Theorem A : Bounded Sum Test

The positive series $\sum\limits_{n=1}^{\infty}a_n$ converges if and only if all of its partial sums are bounded above by a constant; that is, $S_n<A$ for all $n\in \mathbb N$, where $A$ is a constant.


Theorem B : Integral Test

Let $f$ be a continuous, positive, non-increasing function on $[1,\infty)$ such that $f(n)=a_n$ for all $n\in \mathbb N$. Then the positive series $\sum\limits_{n=1}^\infty a_n$ converges if and only if the improper integral $\int_1^\infty f(x)\,\mathrm{d}x$ converges.


Theorem C : Ordinary Comparison Test

Assume that $0\le a_n\le C\cdot b_n$ for all $n\in \mathbb N$, where $C>0$ is a constant. Then :

  • If $\sum\limits_{n=1}^\infty b_n$ converges, then $\sum\limits_{n=1}^\infty a_n$ converges.
  • If $\sum\limits_{n=1}^\infty a_n$ diverges, then $\sum\limits_{n=1}^\infty b_n$ diverges.

Theorem D : Limit Comparison Test

Assume that $a_n\ge 0$ and $b_n>0$ for all $n\in \mathbb N$. Assume that $\lim\limits_{n\to\infty}\frac{a_n}{b_n}$ exists and put $L=\lim\limits_{n\to\infty}\frac{a_n}{b_n}$. Then :

  • If $L\in(0,\infty)$, then $\sum\limits_{n=1}^\infty a_n$ converges (diverges) $\Leftrightarrow$ $\sum\limits_{n=1}^\infty b_n$ converges (diverges).
  • If $L=0$, then $\sum\limits_{n=1}^\infty b_n$ converges $\Rightarrow$ $\sum\limits_{n=1}^\infty a_n$ converges.
  • If $L=\infty$, then $\sum\limits_{n=1}^\infty b_n$ diverges $\Rightarrow$ $\sum\limits_{n=1}^\infty a_n$ diverges.

Theorem E : Ratio Test

Let $a_n>0$ for all $n\in \mathbb N$. Assume that $L=\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}$ exists. Then :

  • If $L<1$, then the series $\sum\limits^\infty_{n=1}a_n$ converges.
  • If $L>1$, then the series $\sum\limits^\infty_{n=1}a_n$ diverges.

Theorem F : Root Test

Let $a_n\ge 0$ for all $n\in \mathbb N$. Assume that $r=\lim\limits_{n\to\infty}\sqrt[n]{a_n}$ exists. Then :

  • If $r<1$, then the series $\sum\limits_{n=1}^\infty a_n$ converges.
  • If $r>1$, then the series $\sum\limits_{n=1}^\infty a_n$ diverges.


9.4 Alternating Series


9.4.1 Alternating Series Convergence


Theorem : Alternating Series Test

Let $\sum\limits^\infty_{n=1}(-1)^{n-1}a_n\ \ (a_n>0)$ be an alternating series. If the sequence $\{a_n\}$ satisfies

  1. $a_n>a_{n+1}$
  2. $\lim\limits_{n\to\infty}a_n=0$

then the series converges.

If $S$ is the sum of the series, then $|S_n-S|\le a_{n+1}$. That is to say, if we use $S_n$ to approximate $S$, then the error is less than $a_{n+1}$.
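
The error bound is easy to see numerically on the alternating harmonic series, whose sum is $\ln 2$ (my own illustrative sketch; the number of terms is an arbitrary choice):

```python
# Check the alternating-series error bound |S_n - S| <= a_{n+1}
# on the alternating harmonic series sum (-1)^(n-1)/n, whose sum is ln 2.
import math

S = math.log(2)                       # exact sum
S_n = 0.0
for n in range(1, 11):
    S_n += (-1) ** (n - 1) / n        # partial sum S_n
    error = abs(S_n - S)
    bound = 1 / (n + 1)               # a_{n+1}
    print(f"n={n:2d}  |S_n - S|={error:.5f}  a_(n+1)={bound:.5f}  bound holds: {error <= bound}")
```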


9.4.2 Absolute Convergence


Theorem A : Absolute Convergence Test

If ∑ ∣ u n ∣ \sum|u_n| un converges, then ∑ u n \sum u_n un converges.


Theorem B : Absolute Ratio Test

Let $\sum u_n$ be a series of nonzero terms and suppose that $\lim\limits_{n\to\infty}\frac{|u_{n+1}|}{|u_n|}=L$.

  1. If $L<1$, the series converges absolutely.
  2. If $L>1$, or if $\lim\limits_{n\to\infty}\frac{|u_{n+1}|}{|u_n|}=\infty$, the series diverges.
  3. If $L=1$, the test is inconclusive.

9.4.3 Conditional convergence


Definition

A series ∑ u n \sum u_n un is called conditionally convergent if ∑ u n \sum u_n un converges but ∑ ∣ u n ∣ \sum|u_n| un diverges.


Theorem : Alternating p p p - series

$$\sum\limits^\infty_{n=1}(-1)^{n+1}\frac{1}{n^p}\ \begin{cases}\text{converges absolutely} & p>1\\ \text{converges conditionally} & 0<p\le1\\ \text{diverges} & p\le0\end{cases}$$



9.5 Power Series


Series of constants : series of the form $\sum\limits_{n=1}^\infty a_n$ where each term $a_n$ is a number.
Series of functions : series of the form $\sum\limits_{n=1}^\infty u_n(x)$ where each term $u_n(x)$ is a function of $x$.
Power Series : series of the form $\sum\limits_{n=0}^\infty a_nx^n=a_0+a_1x+a_2x^2+\cdots$

The geometric series with ratio $r=x$ has sum $S(x)=\frac{a}{1-x}$, and is convergent when $-1<x<1$.

We also say that the function $\frac{a}{1-x}$ has this power series representation.


Definition : the Convergence Set for Power Series

The set on which a power series converges is called its Convergence Set.


Theorem A : Possible Types of the Convergence Set

The convergence set for a power series $\sum\limits^\infty_{n=0}a_nx^n$ is always an interval of the following three types:

  1. The single point $x=0$
  2. An interval $(-R,R)$, plus possibly one or both end points
  3. The infinite interval $(-\infty,\infty)$

In the three types, the series is said to have radius of convergence $0$, $R$ and $\infty$ respectively.

This theorem says that the convergence set must be an interval.


Theorem B

A power series ∑ a n x n \sum\limits a_nx^n anxn converges absolutely on the interior of its convergence set.



9.6 Operations on Power Series


Theorem A : Term-by-Term Differentiation and Integration

Suppose that $S(x)$ is the sum of a power series on an interval $I$. Then, if $x$ is an interior point of $I$,

  1. $S^\prime(x)=\sum\limits^\infty_{n=0}(a_nx^n)^\prime$
  2. $\int^x_0S(t)\,\mathrm dt=\sum\limits^\infty_{n=0}\int^x_0a_nt^n\,\mathrm dt$

Remarks :

  1. The sum S ( x ) S(x) S(x) is both differentiable and integrable, and its derivative and integral can be calculated by term-by-term differentiation and integration.
  2. The radius of convergence of both the differentiated and integrated series is the same as for original series. But the convergence and divergence of the differentiated and integrated series at the end points of I I I might change.
  3. We can apply Theorem A to a power series with a known sum to obtain sum formulas for other series.
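
For example (a worked derivation along the lines of Remark 3, starting from the geometric series listed as a known result below): integrating $\frac{1}{1+t}=\sum\limits_{n=0}^{\infty}(-1)^n t^n$ term by term on $[0,x]$ gives

$$\ln(1+x)=\int_0^x\frac{\mathrm dt}{1+t}=\sum_{n=0}^{\infty}(-1)^n\int_0^x t^n\,\mathrm dt=\sum_{n=0}^{\infty}(-1)^n\frac{x^{n+1}}{n+1},\qquad -1<x<1$$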

Theorem B : Addition and Subtraction

Let f ( x ) = ∑ a n x n f(x)=\sum\limits a_nx^n f(x)=anxn and g ( x ) = ∑ b n x n g(x)=\sum\limits b_nx^n g(x)=bnxn for ∣ x ∣ < r |x|<r x<r. Then the operations of addition and subtraction can be performed on these series as if they were polynomials,

f ( x ) ± g ( x ) = ∑ ( a n ± b n ) x n f(x)\pm g(x)=\sum(a_n\pm b_n)x^n f(x)±g(x)=(an±bn)xn for ∣ x ∣ < r |x|<r x<r.


Important Maclaurin Series

These formulas can be used as known results

  1. $\frac{1}{1-x}=\sum\limits^\infty_{n=0}x^n=1+x+x^2+x^3+\cdots,\ \ -1<x<1$

  2. $\ln(1+x)=\sum\limits^\infty_{n=0}(-1)^{n}\frac{x^{n+1}}{n+1}=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots,\ \ -1<x<1$

  3. $\arctan x=\sum\limits^\infty_{n=0}(-1)^n\frac{x^{2n+1}}{2n+1}=x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}+\cdots,\ \ -1<x<1$

  4. $e^x=\sum\limits^\infty_{n=0}\frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots,\ \ -\infty<x<\infty$

  5. $\sin x=\sum\limits^\infty_{n=0}(-1)^n\frac{x^{2n+1}}{(2n+1)!}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots,\ \ -\infty<x<\infty$

  6. $\cos x=\sum\limits^\infty_{n=0}(-1)^n\frac{x^{2n}}{(2n)!}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\cdots,\ \ -\infty<x<\infty$
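
These are easy to check numerically by truncating the series (my own illustrative sketch; the point $x$ and the truncation order $N$ are arbitrary choices):

```python
# Compare a few truncated Maclaurin series with the library functions.
import math

def maclaurin_exp(x, N):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

def maclaurin_sin(x, N):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(N + 1))

x, N = 0.7, 10
print(maclaurin_exp(x, N), math.exp(x))   # nearly identical for modest N
print(maclaurin_sin(x, N), math.sin(x))
```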



9.8 Taylor and Maclaurin Series


Important Power Series

  1. 1 1 − x = ∑ n = 0 ∞ x n ,    − 1 < x < 1 \frac{1}{1-x}=\sum\limits^\infty_{n=0}x^n,\ \ -1<x<1 1x1=n=0xn,  1<x<1
  2. 1 1 + x = ∑ n = 0 ∞ ( − 1 ) n x n ,    − 1 < x < 1 \frac{1}{1+x}=\sum\limits^\infty_{n=0}(-1)^nx^n,\ \ -1<x<1 1+x1=n=0(1)nxn,  1<x<1

We say that

  1. 1 1 − x \frac{1}{1-x} 1x1 has power series representation ∑ n = 0 ∞ x n \sum\limits^\infty_{n=0}x^n n=0xn, when − 1 < x < 1 -1<x<1 1<x<1.
  2. 1 1 + x \frac{1}{1+x} 1+x1 has power series representation ∑ n = 0 ∞ ( − 1 ) n x n \sum\limits^\infty_{n=0}(-1)^nx^n n=0(1)nxn, when − 1 < x < 1 -1<x<1 1<x<1.

Theorem A : Uniqueness Theorem

If $f$ has a power series representation in $x-a$, that is, if $f(x)=c_0+c_1(x-a)+c_2(x-a)^2+\cdots+c_n(x-a)^n+\cdots,\ \ |x-a|<R$,

then its coefficients are given by the formula : $c_n=\frac{f^{(n)}(a)}{n!}$

Thus $f(x)$ must be represented uniquely as : $$f(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2+\frac{f^{\prime\prime\prime}(a)}{3!}(x-a)^3+\cdots=\sum\limits^\infty_{n=0}\frac{f^{(n)}(a)}{n!}(x-a)^n,\ \ |x-a|<R$$

For $n=0$, we define $f^{(0)}(a)=f(a)$ and $0!=1$.
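
The coefficient formula $c_n=\frac{f^{(n)}(a)}{n!}$ can be verified symbolically (my own sketch; $f=e^x$ and $a=0$ are arbitrary example choices):

```python
# Verify c_n = f^(n)(a)/n! against sympy's own series expansion.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
a, N = 0, 5

coeffs = [sp.diff(f, x, n).subs(x, a) / sp.factorial(n) for n in range(N + 1)]
print(coeffs)                                   # [1, 1, 1/2, 1/6, 1/24, 1/120]
print(sp.series(f, x, a, N + 1).removeO())      # same coefficients as a polynomial
```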


Definition : Taylor Series & Maclaurin Series

The power series ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n \sum\limits^\infty_{n=0}\frac{f^{(n)}(a)}{n!}(x-a)^n n=0n!f(n)(a)(xa)n is called the Taylor Series of f ( x ) f(x) f(x) at a a a.
If a = 0 a=0 a=0, the corresponding series ∑ n = 0 ∞ f ( n ) ( 0 ) n ! x n \sum\limits^\infty_{n=0}\frac{f^{(n)}(0)}{n!}x^n n=0n!f(n)(0)xn is called the Maclaurin Series.


Theorem B

Elementary functions can be expanded into power series.

That is :

If $f(x)$ is an elementary function, and the Taylor series of $f(x)$ has radius of convergence $R$, then in the interval $(a-R,a+R)$ we have : $$f(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2+\frac{f^{\prime\prime\prime}(a)}{3!}(x-a)^3+\cdots$$

In particular, if $a=0$, then in the interval $(-R,R)$ we have : $$f(x)=f(0)+f^\prime(0)x+\frac{f^{\prime\prime}(0)}{2!}x^2+\frac{f^{\prime\prime\prime}(0)}{3!}x^3+\cdots$$



Chapter 11 : Geometry in Space and Vectors



11.1 Cartesian Coordinates in Three-Space


Definition : Rectangular Coordinate System

The making of the rectangular coordinate system:

  1. Take a point O O O as the origin of the coordinate system
  2. Through the origin O O O, draw three mutually perpendicular coordinate lines O x ,   O y ,   O z Ox,\ Oy,\ Oz Ox, Oy, Oz
  3. The positive directions of the 3 axes follows the right-handed rule.

Theorem A : Distance Formula

$|P_1P_2|=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}$


Theorem B : Midpoint Formula

$m_1=\frac{x_1+x_2}{2},\ m_2=\frac{y_1+y_2}{2},\ m_3=\frac{z_1+z_2}{2}$


Theorem C : Arc Length Formula

$L=\int^b_a\sqrt{[f^\prime(t)]^2+[g^\prime(t)]^2+[h^\prime(t)]^2}\,\mathrm{d}t$
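
For instance (my own illustrative sketch; the helix and step count are arbitrary choices), the arc length integral for the helix $(\cos t,\sin t,t)$, $0\le t\le 2\pi$, can be evaluated numerically and compared with the exact value $2\pi\sqrt2$:

```python
# Numerically evaluate the arc length integral for the helix (cos t, sin t, t).
import math

def speed(t):
    # sqrt(f'(t)^2 + g'(t)^2 + h'(t)^2) with f = cos, g = sin, h = t
    return math.sqrt(math.sin(t)**2 + math.cos(t)**2 + 1.0)

a, b, n = 0.0, 2 * math.pi, 100_000
h = (b - a) / n
L = h * (0.5 * speed(a) + sum(speed(a + i*h) for i in range(1, n)) + 0.5 * speed(b))  # trapezoid rule
print(L, 2 * math.pi * math.sqrt(2))
```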



11.2 Vector


Definition

A quantity that has both a magnitude and a direction.


Remark : Free Vector

If two vectors have the same length and point in the same direction, then these two vectors are equivalent or equal.

Thus, if a vector translates in space, the vector will not change. We may say that the vectors discussed in Calculus are free vectors.


Theorem A : Operation Laws

For any vector u ⃗ ,   v ⃗ \vec u,\ \vec v u , v and w ⃗ \vec w w , and any scalars a a a and b b b, the following relationships hold.

  1. u ⃗ + v ⃗ = v ⃗ + u ⃗ \vec u+\vec v=\vec v+\vec u u +v =v +u
  2. ( u ⃗ + v ⃗ ) + w ⃗ = u ⃗ + ( v ⃗ + w ⃗ ) (\vec u+\vec v)+\vec w=\vec u+(\vec v+\vec w) (u +v )+w =u +(v +w )
  3. u ⃗ + 0 = 0 + u ⃗ = u ⃗ \vec u+0=0+\vec u=\vec u u +0=0+u =u
  4. u ⃗ − u ⃗ = 0 \vec u-\vec u=0 u u =0
  5. a ( b u ⃗ ) = ( a b ) u ⃗ a(b\vec u)=(ab)\vec u a(bu )=(ab)u
  6. a ( u ⃗ + v ⃗ ) = a u ⃗ + a v ⃗ a(\vec u+\vec v)=a\vec u+a\vec v a(u +v )=au +av
  7. ( a + b ) u ⃗ = a u ⃗ + b u ⃗ (a+b)\vec u=a\vec u+b\vec u (a+b)u =au +bu
  8. 1 u ⃗ = u ⃗ 1\vec u=\vec u 1u =u

Theorem B : Unit Vector

e v = v ∣ ∣ v ∣ ∣ e_v=\frac{v}{||v||} ev=vv



11.3 The Dot Product

$\vec u\cdot\vec v=\langle u_1,u_2,u_3\rangle\cdot\langle v_1,v_2,v_3\rangle=u_1v_1+u_2v_2+u_3v_3$


Theorem A : Properties of the Dot Product

If u ⃗ ,   v ⃗ \vec u,\ \vec v u , v and w ⃗ \vec w w are vectors, and c c c is a scalar, then

  1. u ⃗ ⋅ v ⃗ = v ⃗ ⋅ u ⃗ \vec u\cdot\vec v=\vec v\cdot\vec u u v =v u
  2. u ⃗ ⋅ ( v ⃗ + w ⃗ ) = u ⃗ ⋅ v ⃗ + u ⃗ ⋅ w ⃗ \vec u\cdot(\vec v+\vec w)=\vec u\cdot\vec v+\vec u\cdot\vec w u (v +w )=u v +u w
  3. c ( u ⃗ ⋅ v ⃗ ) = ( c u ⃗ ) ⋅ v ⃗ c(\vec u\cdot\vec v)=(c\vec u)\cdot\vec v c(u v )=(cu )v
  4. 0 ⋅ u ⃗ = 0 0\cdot\vec u=0 0u =0
  5. u ⃗ ⋅ u ⃗ = ∣ ∣ u ⃗ ∣ ∣ 2 \vec u\cdot\vec u=||\vec u||^2 u u =u 2

Theorem B

If θ \theta θ is the smallest non-negative angle between the nonzero vectors u ⃗ \vec u u and v ⃗ \vec v v , then u ⃗ ⋅ v ⃗ = ∣ ∣ u ⃗ ∣ ∣   ∣ ∣ v ⃗ ∣ ∣ cos ⁡ θ \vec u\cdot\vec v=||\vec u||\ ||\vec v||\cos\theta u v =u  v cosθ


Definition : Orthogonal

Vectors that are perpendicular are said to be orthogonal.


Theorem C : Perpendicularity Criterion

Two nonzero vectors u ⃗ \vec u u and v ⃗ \vec v v are perpendicular if and only if their dot product u ⃗ ⋅ v ⃗ \vec u\cdot\vec v u v is 0.


Theorem D : Projections

$\mathrm{pr}_{\vec v}\vec u=||\vec u||\cos\theta=\frac{\vec u\cdot\vec v}{||\vec v||}$ (the scalar projection of $\vec u$ onto $\vec v$)


Theorem E : Direction Angles and Cosines

$\cos\alpha=\frac{\vec a\cdot\vec i}{|\vec a||\vec i|}=\frac{a_1}{|\vec a|}$

$\cos\beta=\frac{\vec a\cdot\vec j}{|\vec a||\vec j|}=\frac{a_2}{|\vec a|}$

$\cos\gamma=\frac{\vec a\cdot\vec k}{|\vec a||\vec k|}=\frac{a_3}{|\vec a|}$

$\frac{\vec a}{|\vec a|}=\langle\cos\alpha,\cos\beta,\cos\gamma\rangle$ and $\cos^2\alpha+\cos^2\beta+\cos^2\gamma=1$


Theorem F : Distance Formula from a Point to a Plane

$L=\frac{|Ax_0+By_0+Cz_0-D|}{\sqrt{A^2+B^2+C^2}}$


Theorem G : Distance Formula between the parallel planes

$L=\frac{|D_1-D_2|}{\sqrt{A^2+B^2+C^2}}$



11.4 The Cross Product


Algebraic Definition

$\vec u\times\vec v=\langle u_2v_3-u_3v_2,u_3v_1-u_1v_3,u_1v_2-u_2v_1\rangle$

$$\vec u\times\vec v=\begin{vmatrix}i&j&k\\u_1&u_2&u_3\\v_1&v_2&v_3\end{vmatrix}$$
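
The component formula is easy to check against a library routine (my own illustrative sketch; $\vec u$ and $\vec v$ are arbitrary vectors):

```python
# Cross product from the component formula, checked against numpy.cross.
import numpy as np

def cross(u, v):
    u1, u2, u3 = u
    v1, v2, v3 = v
    return (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(cross(u, v))              # (-3.0, 6.0, -3.0)
print(np.cross(u, v))           # same components
print(np.dot(u, cross(u, v)))   # 0: the cross product is orthogonal to u
```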


Theorem A

Let u ⃗ \vec u u and v ⃗ \vec v v be vectors in three-space and θ \theta θ be the smallest non-negative angle between them. Then

  1. u ⃗ ⋅ ( u ⃗ × v ⃗ ) = 0 = v ⃗ ⋅ ( u ⃗ × v ⃗ ) \vec u\cdot(\vec u\times\vec v)=0=\vec v\cdot(\vec u\times\vec v) u (u ×v )=0=v (u ×v )
  2. u ⃗ , v ⃗ \vec u, \vec v u ,v , and u ⃗ × v ⃗ \vec u\times\vec v u ×v form a right-handed triple.
  3. ∣ ∣ u ⃗ × v ⃗ ∣ ∣ = ∣ ∣ u ⃗ ∣ ∣   ∣ ∣ v ∣ ∣ sin ⁡ θ ||\vec u\times\vec v||=||\vec u||\ ||v||\sin\theta u ×v =u  vsinθ

Remark

The geometric significance of u ⃗ ⋅ v ⃗ \vec u\cdot\vec v u v and u ⃗ × v ⃗ \vec u\times\vec v u ×v shows that the results of these two products depend only on the lengths of u ⃗ \vec u u and v ⃗ \vec v v , and the angles from u ⃗ \vec u u to v ⃗ \vec v v , that are independent of the coordinate system.


In particular

  1. a ⃗ × a ⃗ = 0 \vec a\times\vec a=0 a ×a =0
  2. i ⃗ × j ⃗ = k \vec i\times\vec j=k i ×j =k

Theorem C : Algebraic Properties

If u ⃗ , v ⃗ \vec u,\vec v u ,v and w ⃗ \vec w w are vectors in three-space and k k k is a scalar, then

  1. u ⃗ × v ⃗ = − ( v ⃗ × u ⃗ ) \vec u\times\vec v=-(\vec v\times\vec u) u ×v =(v ×u )
  2. u ⃗ × ( v ⃗ + w ⃗ ) = ( u ⃗ × v ⃗ ) + ( u ⃗ × w ⃗ ) \vec u\times(\vec v+\vec w)=(\vec u\times\vec v)+(\vec u\times\vec w) u ×(v +w )=(u ×v )+(u ×w )
  3. k ( u ⃗ × v ⃗ ) = ( k u ⃗ ) × v ⃗ = u ⃗ × ( k v ⃗ ) k(\vec u\times\vec v)=(k\vec u)\times\vec v=\vec u\times(k\vec v) k(u ×v )=(ku )×v =u ×(kv )
  4. ( u ⃗ × v ⃗ ) ⋅ w ⃗ = u ⃗ ⋅ ( v ⃗ × w ⃗ ) (\vec u\times\vec v)\cdot\vec w=\vec u\cdot(\vec v\times\vec w) (u ×v )w =u (v ×w )

Theorem D

Two vectors u ⃗ \vec u u and v ⃗ \vec v v in three-space are parallel if and only if u ⃗ × v ⃗ = 0 \vec u\times\vec v=0 u ×v =0


Theorem E

$$\vec a\cdot(\vec b\times\vec c)=\begin{vmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\end{vmatrix}$$

$||\vec b\times\vec c||$ is the area of the parallelogram determined by the vectors $\vec b,\ \vec c$.

$|\vec a\cdot(\vec b\times\vec c)|$ is the volume of the parallelepiped determined by the vectors $\vec a,\ \vec b$ and $\vec c$.
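
A quick numerical check of both claims (my own illustrative sketch; the three vectors are arbitrary):

```python
# The scalar triple product a . (b x c) equals the 3x3 determinant with rows a, b, c;
# its absolute value is the volume of the parallelepiped spanned by a, b, c.
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([1.0, 1.0, 3.0])

triple = np.dot(a, np.cross(b, c))
det = np.linalg.det(np.array([a, b, c]))
print(triple, det, abs(triple))   # triple == det; the volume here is 6
```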



11.5 Vector-valued Functions and Curvilinear Motion


11.5.1 Vector-valued Functions


Theorem A : Limit of Vector-valued Functions

Let F ⃗ ( t ) = f ( t ) i ⃗ + g ( t ) j ⃗ + h ( t ) k ⃗ = ⟨ f ( t ) , g ( t ) , h ( t ) ⟩ \vec F(t)=f(t)\vec i+g(t)\vec j+h(t)\vec k=\langle f(t),g(t),h(t)\rangle F (t)=f(t)i +g(t)j +h(t)k =f(t),g(t),h(t)

F ⃗ ( t ) \vec F(t) F (t) has a limit at c c c if and only if f , g f,g f,g and h h h have limits at c c c.

lim ⁡ t → c F ⃗ ( t ) = [ lim ⁡ t → c f ( t ) ] i ⃗ + [ lim ⁡ t → c g ( t ) ] j ⃗ + [ lim ⁡ t → c h ( t ) ] k ⃗ \lim\limits_{t\to c}\vec F(t)=[\lim\limits_{t\to c}f(t)]\vec i+[\lim\limits_{t\to c}g(t)]\vec j+[\lim\limits_{t\to c}h(t)]\vec k tclimF (t)=[tclimf(t)]i +[tclimg(t)]j +[tclimh(t)]k


Theorem B : Derivative of a Vector-valued Function

  1. F ⃗ ′ ( t ) = f ′ ( t ) i ⃗ + g ′ ( t ) j ⃗ + h ′ ( t ) k ⃗ \vec F^\prime(t)=f^\prime(t)\vec i+g^\prime(t)\vec j+h^\prime(t)\vec k F (t)=f(t)i +g(t)j +h(t)k

  2. D t [ F ⃗ ( t ) + G ⃗ ( t ) ] = F ⃗ ′ ( t ) + G ⃗ ′ ( t ) D_t[\vec F(t)+\vec G(t)]=\vec F^\prime(t)+\vec G^\prime(t) Dt[F (t)+G (t)]=F (t)+G (t)

  3. D t [ c F ⃗ ( t ) ] = c F ⃗ ′ ( t ) D_t[c\vec F(t)]=c\vec F^\prime(t) Dt[cF (t)]=cF (t)

  4. D t [ p ( t ) ⋅ F ⃗ ( t ) ] = p ( t ) F ⃗ ′ ( t ) + p ′ ( t ) F ⃗ ( t ) D_t[p(t)\cdot\vec F(t)]=p(t)\vec F^\prime(t)+p^\prime(t)\vec F(t) Dt[p(t)F (t)]=p(t)F (t)+p(t)F (t)

  5. D t [ F ⃗ ( t ) ⋅ G ⃗ ( t ) ] = F ⃗ ( t ) ⋅ G ⃗ ′ ( t ) + F ⃗ ′ ( t ) ⋅ G ⃗ ( t ) D_t[\vec F(t)\cdot\vec G(t)]=\vec F(t)\cdot\vec G^\prime(t)+\vec F^\prime(t)\cdot\vec G(t) Dt[F (t)G (t)]=F (t)G (t)+F (t)G (t)

  6. D t [ F ⃗ ( t ) × G ⃗ ( t ) ] = F ⃗ ( t ) × G ⃗ ′ ( t ) + F ⃗ ′ ( t ) × G ⃗ ( t ) D_t[\vec F(t)\times\vec G(t)]=\vec F(t)\times\vec G^\prime(t)+\vec F^\prime(t)\times\vec G(t) Dt[F (t)×G (t)]=F (t)×G (t)+F (t)×G (t)

  7. D t [ F ⃗ ( p ( t ) ) ] = F ⃗ ′ ( p ( t ) ) p ′ ( t ) D_t[\vec F(p(t))]=\vec F^\prime(p(t))p^\prime(t) Dt[F (p(t))]=F (p(t))p(t)


Theorem C : Integration of a Vector-valued Function

∫ a b F ⃗ ( t ) d t = [ ∫ a b f ( t ) d t ] i ⃗ + [ ∫ a b g ( t ) d t ] j ⃗ + [ ∫ a b h ( t ) d t ] k ⃗ \int^b_a\vec F(t)\mathrm{d}t=[\int^b_a f(t)\mathrm{d}t]\vec i+[\int^b_a g(t)\mathrm{d}t]\vec j+[\int^b_a h(t)\mathrm{d}t]\vec k abF (t)dt=[abf(t)dt]i +[abg(t)dt]j +[abh(t)dt]k


11.5.2 Curvilinear Motion


Position : $r(t)=f(t)\vec i+g(t)\vec j+h(t)\vec k$

Velocity : $v(t)=r^\prime(t)$

Acceleration : $a(t)=v^\prime(t)$

Arc length : $s=\int^t_a\sqrt{[f^\prime(u)]^2+[g^\prime(u)]^2+[h^\prime(u)]^2}\,\mathrm{d}u$


11.6 Lines and Tangent Lines in Three-space


Equations for Lines

  1. parametric equations : x = x 0 + a t ,   y = y 0 + b t ,   z = z 0 + c t x=x_0+at,\ y=y_0+bt,\ z=z_0+ct x=x0+at, y=y0+bt, z=z0+ct
  2. symmetric equations : x − x 0 a = y − y 0 b = z − z 0 c \frac{x-x_0}{a}=\frac{y-y_0}{b}=\frac{z-z_0}{c} axx0=byy0=czz0

Remark :

If one of a , b , c a,b,c a,b,c is 0 0 0, for example if c = 0 c=0 c=0, then the symmetric equations should be written as :

x − x 0 a = y − y 0 b   a n d   z = z 0 \frac{x-x_0}{a}=\frac{y-y_0}{b}\ \mathrm{and}\ z=z_0 axx0=byy0 and z=z0


Tangent Line to a Curve

The tangent line to the curve has direction vector :
r ′ ( t ) = f ′ ( t ) i ⃗ + g ′ ( t ) j ⃗ + h ′ ( t ) k ⃗ r^\prime(t)=f^\prime(t)\vec i+g^\prime(t)\vec j+h^\prime(t)\vec k r(t)=f(t)i +g(t)j +h(t)k
and through the point :
r ( t ) = f ( t ) i ⃗ + g ( t ) j ⃗ + h ( t ) k ⃗ r(t)=f(t)\vec i+g(t)\vec j+h(t)\vec k r(t)=f(t)i +g(t)j +h(t)k


11.8 Surface in Three-space

If $P(x,y,z)\in S\Leftrightarrow x,\ y,\ z$ satisfy $F(x,y,z)=0$, then we say that $S$ is the graph of the equation $F(x,y,z)=0$ and $F(x,y,z)=0$ is the equation of the graph $S$. The graph of $F(x,y,z)=0$ is usually called a surface.


11.8.1 Cylinders

A cylinder is a surface generated by moving a given line $l$ parallel to itself along a plane curve $C$.


Equations for Cylinders

If a cylinder Σ \Sigma Σ satisfies that the line l l l is parallel to z z z-axis and the given curve C C C is the plane curve F ( x , y ) = 0 F(x,y)=0 F(x,y)=0 in the x y xy xy-plane. Then : M ( x , y , z ) ∈ Σ ⇔ M ′ ( x , y , 0 ) ∈ C : F ( x , y ) = 0 M(x,y,z)\in\Sigma\Leftrightarrow M^\prime(x,y,0)\in C:F(x,y)=0 M(x,y,z)ΣM(x,y,0)C:F(x,y)=0
Therefore, the equation of the cylinder Σ \Sigma Σ is also F ( x , y ) = 0 F(x,y)=0 F(x,y)=0

Remark

C : { ( x , y , z ) : F ( x , y ) = 0 , z = 0 } C : \{(x,y,z):F(x,y)=0,z=0\} C:{(x,y,z):F(x,y)=0,z=0}
Σ : { ( x , y , z ) : F ( x , y ) = 0 } \Sigma:\{(x,y,z):F(x,y)=0\} Σ:{(x,y,z):F(x,y)=0}


In general, the equation of a surface in three-space is an equation in the three variables $x,y,z$: $F(x,y,z)=0$


11.8.2 Surface of Revolution

A surface generated by revolving a plane curve C C C about a given line l l l is called the surface of revolution. The curve C C C is called the generator. The line l l l is called axis of revolution.


  1. $F(\pm\sqrt{x^2+y^2},z)=0$
  2. $F(y,\pm\sqrt{x^2+z^2})=0$
  3. $x^2+y^2=2pz$ (circular paraboloid)
  4. $\frac{y^2}{a^2}+\frac{x^2+z^2}{b^2}=1$ (circular ellipsoid)
  5. $z^2=k^2(x^2+y^2)$ (circular cone or right cone)

11.8.3 Quadric Surface


Definition

A quadric surface is the graph in three-space of a second-degree equation in three variables $x$, $y$ and $z$.


Method of Tracing

The best way to visualize the graph of a quadric surface is to find the intersections of the surface with planes that are parallel to the coordinate planes.

These intersections are called cross sections; those with the coordinate planes are also called traces.


  1. $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$ (Ellipsoid)
  2. $\frac{x^2}{a^2}+\frac{y^2}{b^2}=z$ (Elliptic Paraboloid)
  3. $\frac{x^2}{a^2}-\frac{y^2}{b^2}=z$ (Hyperbolic Paraboloid)
  4. $\frac{x^2}{a^2}+\frac{y^2}{b^2}-\frac{z^2}{c^2}=1$ (Hyperboloid of One Sheet)
  5. $\frac{x^2}{a^2}+\frac{y^2}{b^2}-\frac{z^2}{c^2}=-1$ (Hyperboloid of Two Sheets)
  6. $\frac{x^2}{a^2}+\frac{y^2}{b^2}-\frac{z^2}{c^2}=0$ (Elliptic Cone)

Chapter 12 : Derivatives for Functions of Two or More Variables



12.1 Functions of Two or More Variables


12.1.1 Functions of Two Variables

A function of two variables is a rule f f f that assigns to each ordered pair of real numbers ( x , y ) (x, y) (x,y) in some set D D D of the plane a unique real number denoted by f ( x , y ) f(x, y) f(x,y)
Domain : The set D ⊂ R 2 = { ( x , y ) : x , y ∈ R } D\subset R^2=\{(x, y):x, y\in R\} DR2={(x,y):x,yR}
Range : The set of values that f f f takes on, that is : { f ( x , y ) : ( x , y ) ∈ D } ⊂ R \{f(x, y) : (x, y)\in D\}\subset R {f(x,y):(x,y)D}R
Natural Domain : If a function is given by a formula and no domain is specified, then the domain of the function is taken to be the set of all pairs ( x , y ) (x, y) (x,y) for which the given expression makes sense and gives a real number.


We often write z = f ( x , y ) z=f(x, y) z=f(x,y). The variables x x x and y y y are called independent variables and z z z is called the dependent variable.


12.1.2 Graphs of Functions of Two Variables

The graph of a function $f$ of two variables with domain $D$ is the graph of the equation $z=f(x, y),\ (x, y)\in D$, which will normally be a surface in 3-space. In other words, it is the set of all points $(x, y, z)$ in $R^3$ such that $z=f(x, y)$ and $(x, y)$ is in $D$.
For the graph of a function of two variables, each line perpendicular to the xy-plane intersects the surface in at most one point.


12.1.3 Level Curves and Contour Map

The level curves of a function f f f of two variables are the curves in xy-plane with equation f ( x , y ) = c f(x, y)=c f(x,y)=c, where c c c is a constant (in the range of f f f)
Geometrically, each horizontal plane z = c z=c z=c intersects the surface in a curve. The projection of this curve on xy-plane is a level curve. A collection of such curves is called a contour map.


12.1.4 Functions of Three or More Variables

A function of three variables is a function $f$ that assigns to each ordered triple $(x, y, z)$ in a domain $D\subset R^3$ a unique real number denoted by $f(x, y, z)$.
Domain : The set D ⊂ R 3 = { ( x , y , z ) : x , y , z ∈ R } D\subset R^3=\{(x, y, z):x, y, z\in R\} DR3={(x,y,z):x,y,zR}
Range : The set of values that f f f takes on, that is : { f ( x , y , z ) : ( x , y , z ) ∈ D } ⊂ R \{f(x, y, z) : (x, y, z)\in D\}\subset R {f(x,y,z):(x,y,z)D}R


A function of n n n variables is a function f f f that assigns to each ordered n-tuple ( x 1 , x 2 , ⋯   , x n ) (x_1, x_2, \cdots, x_n) (x1,x2,,xn) in a domain D ⊂ R n D\subset R^n DRn a unique real number denoted by f ( x 1 , x 2 , ⋯   , x n ) f(x_1, x_2, \cdots, x_n) f(x1,x2,,xn).


12.1.5 Level Surface


The level surface of a function f f f of three variables are the surfaces in three space with f ( x , y , z ) = c f(x, y, z)=c f(x,y,z)=c, where c c c is a constant (in the range of f f f).



12.2 Partial Derivatives


12.2.1 Partial Derivatives for Functions of Two Variables


If $f$ is a function of two variables, its partial derivatives are the functions $f_x$ and $f_y$ defined by
$f_x(x, y)=\lim\limits_{h\to0}\frac{f(x+h, y)-f(x,y)}{h}$
$f_y(x, y)=\lim\limits_{h\to0}\frac{f(x, y+h)-f(x,y)}{h}$


Notations:

If z = f ( x , y ) z=f(x, y) z=f(x,y)

f x ( x , y ) = ∂ z ∂ x = ∂ ∂ x f ( x , y ) f_x(x,y)=\frac{\partial z}{\partial x}=\frac{\partial}{\partial x}f(x, y) fx(x,y)=xz=xf(x,y)

f x ( x 0 , y 0 ) = ∂ z ∂ x ∣ ( x 0 , y 0 ) f_x(x_0, y_0)=\frac{\partial z}{\partial x}|_{(x_0, y_0)} fx(x0,y0)=xz(x0,y0)

∂ ∂ x \frac{\partial}{\partial x} x and ∂ ∂ y \frac{\partial}{\partial y} y represent linear operators.


12.2.2 Geometric Interpretation of Partial Derivatives


The equation $z=f(x, y)$ represents a surface $S$.
The point $P(x_0, y_0, f(x_0, y_0))$ lies on the surface.
The plane $y=y_0$ intersects this surface in a curve $QPR$.
Then $f_x(x_0, y_0)$ can be interpreted geometrically as the slope of the tangent line to this curve at $P$.

12.2.3 Physical Interpretation of Partial Derivatives


If $z=f(x, y)$, then

∂ z ∂ x \frac{\partial z}{\partial x} xz represents the rate of change of z z z with respect to x x x when y y y is fixed.

∂ z ∂ y \frac{\partial z}{\partial y} yz represents the rate of change of z z z with respect to y y y when x x x is fixed.


12.2.4 Higher Partial Derivatives


$f_{xx}=\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}\right)=\frac{\partial^2f}{\partial x^2}$

$f_{yy}=\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial y}\right)=\frac{\partial^2f}{\partial y^2}$

$f_{xy}=(f_x)_y=\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right)=\frac{\partial^2f}{\partial y\partial x}$

$f_{yx}=(f_y)_x=\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right)=\frac{\partial^2f}{\partial x\partial y}$
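
These are straightforward to compute symbolically (my own illustrative sketch; the function $f$ is an arbitrary smooth example). The last line also previews the Equality of Mixed Partials theorem below: $f_{xy}=f_{yx}$.

```python
# Symbolic partial derivatives for f(x, y) = x**3 * y + sin(x*y).
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y + sp.sin(x * y)

f_x  = sp.diff(f, x)
f_y  = sp.diff(f, y)
f_xy = sp.diff(f_x, y)
f_yx = sp.diff(f_y, x)

print(f_x)                         # 3*x**2*y + y*cos(x*y)
print(f_y)                         # x**3 + x*cos(x*y)
print(sp.simplify(f_xy - f_yx))    # 0, so f_xy == f_yx for this f
```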



12.3 Limits and Continuity


12.3.1 Limit of a Function of Two Variable


If the values of f ( x , y ) f(x,y) f(x,y) get closer and closer to L L L as the point ( x , y ) (x, y) (x,y) approaches ( a , b ) (a,b) (a,b) along any path that is in the domain of f f f, then we say that the limit of f ( x , y ) f(x,y) f(x,y) as ( x , y ) (x,y) (x,y) approach ( a , b ) (a,b) (a,b) is L L L and we write lim ⁡ ( x , y ) → ( a , b ) f ( x , y ) = L \lim\limits_{(x,y)\to(a,b)}f(x,y)=L (x,y)(a,b)limf(x,y)=L


To say that lim ⁡ ( x , y ) → ( a , b ) f ( x , y ) = L \lim\limits_{(x,y)\to(a,b)}f(x,y)=L (x,y)(a,b)limf(x,y)=L means that for every ϵ > 0 \epsilon>0 ϵ>0 there is a corresponding δ > 0 \delta>0 δ>0 such that ∣ f ( x , y ) − L ∣ < ϵ |f(x,y)-L|<\epsilon f(x,y)L<ϵ whenever 0 < ( x − a ) 2 + ( y − b ) 2 < δ 0<\sqrt{(x-a)^2+(y-b)^2}<\delta 0<(xa)2+(yb)2 <δ

The definition can be immediately extended to functions of three or more variables.


Geometric Interpretation

If any small interval ( L − ϵ , L + ϵ ) (L-\epsilon,L+\epsilon) (Lϵ,L+ϵ) is given around L L L, then we can find a disk D δ ⊂ D D_\delta\subset D DδD with center ( a , b ) (a,b) (a,b) and radius δ \delta δ such that f f f maps all the points in D δ D_\delta Dδ except possibly ( a , b ) (a,b) (a,b) into the interval ( L − ϵ , L + ϵ ) (L-\epsilon,L+\epsilon) (Lϵ,L+ϵ)

The Limit Theorem and Squeeze Theorem can be extended to functions of two or more variables.


Remark :

The path of approach to ( a , b ) (a, b) (a,b) is irrelevant. That is if the limit exists , then f ( a , b ) f(a, b) f(a,b) must approach the same limit no matter how ( x , y ) (x, y) (x,y) approaches ( a , b ) (a, b) (a,b).

So if we can find two different paths of approach along which the function $f(x, y)$ has different limits, then the limit $\lim\limits_{(x,y)\to(x_0,y_0)}f(x, y)$ does not exist.


Theorem A : Two Paths Test

If f ( x , y ) → L 1 f(x, y)\to L_1 f(x,y)L1 as ( x , y ) → ( a , b ) (x,y)\to(a,b) (x,y)(a,b) along a path C 1 C_1 C1 and f ( x , y ) → L 2 f(x,y)\to L_2 f(x,y)L2 as ( x , y ) → ( a , b ) (x, y)\to(a,b) (x,y)(a,b) along a path C 2 C_2 C2, where L 1 ≠ L 2 L_1\ne L_2 L1=L2, then lim ⁡ ( x , y ) → ( a , b ) f ( x , y ) \lim\limits_{(x, y)\to(a,b)}f(x,y) (x,y)(a,b)limf(x,y) does not exist.


12.3.2 Continuity


A function f f f of two variables is called continuous at ( a , b ) (a,b) (a,b) if
lim ⁡ ( x , y ) → ( a , b ) f ( x , y ) = f ( a , b ) \lim\limits_{(x,y)\to(a,b)}f(x,y)=f(a,b) (x,y)(a,b)limf(x,y)=f(a,b)
We say f f f is continuous on D D D if f f f is continuous at every point in D D D.


Remark :

  1. Intuitively, continuity means that the graph of f f f has no hole or break.
  2. Polynomial functions of two variables are continuous everywhere, since they are sums and products of the continuous functions x , y x,y x,y and c c c.
  3. Rational functions of two variables are quotients of polynomial functions and thus are continuous wherever the denominator is not zero.

Theorem B : Composition of Functions

If a function g g g of two variables is continuous at ( a , b ) (a,b) (a,b) and a function f f f of one variable is continuous at g ( a , b ) g(a,b) g(a,b), then the composite function f ∘ g f\circ g fg, defined by ( f ∘ g ) ( x , y ) = f ( g ( x , y ) ) (f\circ g)(x, y)=f(g(x,y)) (fg)(x,y)=f(g(x,y)), is continuous at ( a , b ) (a,b) (a,b).


Language relative to Sets in the Plane

  1. Neighborhood : A Neighborhood of radius δ \delta δ of a point P P P is the set of all points inside of the circle with center P P P and radius δ \delta δ
  2. interior point : A point $P$ is an interior point of a set $S$ if there is a neighborhood of $P$ contained in $S$.
  3. boundary point : A point $P$ is a boundary point of a set $S$ if every neighborhood of $P$ contains points that are in $S$ and points that are not in $S$.
  4. interior : Interior of a set S S S is the set of all interior points of S S S.
  5. boundary : Boundary of a set S S S is the set of all boundary points of S S S.
  6. open set : All points are interior points.
  7. closed set : A set contains all its boundary points.
  8. bounded set : If there exists an R > 0 R>0 R>0 such that all points in D D D are inside the circle of radius R R R centered at the origin.

Remark :

  1. If S S S is an open set, to say that f f f is continuous on S S S means precisely that f f f is continuous at every point of S S S. To say that f f f is continuous at a boundary point P P P of S means precisely that f ( Q ) f(Q) f(Q) must approach f ( P ) f(P) f(P) as Q Q Q approaches P P P through points in S S S.
  2. Continuous functions on closed and bounded sets have many properties like that of continuous functions on closed intervals, such as Intermediate Theorem and Max-Min Existence Theorem.

Theorem C : Equality of Mixed Partials

If f x y f_{xy} fxy and f y x f_{yx} fyx are continuous on an open set S S S, then f x y = f y x f_{xy}=f_{yx} fxy=fyx at each point of S S S.



12.4 Differentiability


12.4.1 Tangent Plane to the Graph of f ( x , y ) f(x, y) f(x,y)


The normal vector $\vec{n}$ of the tangent plane can be chosen as : $\vec{n}=\vec{v_1}\times\vec{v_2}=\langle f_x(x_0, y_0), f_y(x_0, y_0),-1\rangle$

The equation of the tangent plane to $z=f(x, y)$ at $(x_0, y_0)$ is : $f_x(x_0, y_0)(x-x_0)+f_y(x_0, y_0)(y-y_0)-(z-f(x_0, y_0))=0$


12.4.2 Differentiability for Functions of One Variable


Define ϵ = f ( x 0 + Δ x ) − f ( x 0 ) − f ′ ( x 0 ) Δ x Δ x \epsilon=\frac{f(x_0+\Delta x)-f(x_0)-f^\prime(x_0)\Delta x}{\Delta x} ϵ=Δxf(x0+Δx)f(x0)f(x0)Δx, and Δ y = f ( x 0 + Δ x ) − f ( x 0 ) \Delta y=f(x_0+\Delta x)-f(x_0) Δy=f(x0+Δx)f(x0), if f f f is differentiable at x 0 x_0 x0, then :

Δ y = f ′ ( x 0 ) Δ x + ϵ Δ x \Delta y=f^\prime(x_0)\Delta x+\epsilon\Delta x Δy=f(x0)Δx+ϵΔx, where ϵ → 0 \epsilon\to0 ϵ0 as Δ x → 0 \Delta x\to0 Δx0


12.4.3 Differentiability for Functions of Two Variables


Let Δ z = f ( x 0 + Δ x , y 0 + Δ y ) − f ( x 0 , y 0 ) \Delta z=f(x_0+\Delta x, y_0+\Delta y)-f(x_0, y_0) Δz=f(x0+Δx,y0+Δy)f(x0,y0)


z = f ( x , y ) z=f(x, y) z=f(x,y) is said to be differentiable or locally linear at ( x 0 , y 0 ) (x_0, y_0) (x0,y0) if the increment Δ z \Delta z Δz can be expressed in the form
Δ z = f x ( x 0 , y 0 ) Δ x + f y ( x 0 , y 0 ) Δ y + ϵ 1 Δ x + ϵ 2 Δ y \Delta z=f_x(x_0, y_0)\Delta x+f_y(x_0, y_0)\Delta y+\epsilon_1\Delta x+\epsilon_2\Delta y Δz=fx(x0,y0)Δx+fy(x0,y0)Δy+ϵ1Δx+ϵ2Δy, where ϵ 1 → 0 \epsilon_1\to0 ϵ10 and ϵ 2 → 0 \epsilon_2\to0 ϵ20 as ( Δ x , Δ y ) → ( 0 , 0 ) (\Delta x, \Delta y)\to(0,0) (Δx,Δy)(0,0)


Linear Approximation for f ( x , y ) f(x, y) f(x,y)

f ( x , y ) ≈ f ( x 0 , y 0 ) + f x ( x 0 , y 0 ) ( x − x 0 ) + f y ( x 0 , y 0 ) ( y − y 0 ) = L ( x , y ) f(x, y)\approx f(x_0, y_0)+f_x(x_0, y_0)(x-x_0)+f_y(x_0, y_0)(y-y_0)=L(x, y) f(x,y)f(x0,y0)+fx(x0,y0)(xx0)+fy(x0,y0)(yy0)=L(x,y)


If $z=f(x, y)$ and $f$ is differentiable at $(x_0, y_0)$, then $f_x(x_0, y_0)\,\mathrm{d}x+f_y(x_0, y_0)\,\mathrm{d}y$
is called the differential of $f$ at $(x_0, y_0)$. If $f$ is a differentiable function, then the differential $\mathrm{d}z$ of $f$ is defined by $\mathrm{d}z=f_x(x, y)\,\mathrm{d}x+f_y(x, y)\,\mathrm{d}y$
where $\mathrm{d}x$ and $\mathrm{d}y$ are differentials of the independent variables.


Theorem A : Sufficient Condition for Differentiability

If f ( x , y ) f(x,y) f(x,y) has continuous partial derivatives f x ( x , y ) f_x(x, y) fx(x,y) and f y ( x , y ) f_y(x,y) fy(x,y) on a disk D D D, whose interior contains ( a , b ) (a, b) (a,b), then f ( x , y ) f(x, y) f(x,y) is differentiable at ( a , b ) (a, b) (a,b)


Theorem B : Differentiability Implies Continuity

If f ( x , y ) f(x, y) f(x,y) is differentiable at ( x 0 , y 0 ) (x_0, y_0) (x0,y0), then f ( x , y ) f(x, y) f(x,y) is continuous at ( x 0 , y 0 ) (x_0, y_0) (x0,y0).


Vector Notation

If identifying the point ( x , y ) (x, y) (x,y) with the vector ⟨ x , y ⟩ \langle x,y\rangle x,y, we can write ( x , y ) = p ⃗ (x, y)=\vec{p} (x,y)=p and f ( x , y ) = f ( p ⃗ ) f(x, y)=f(\vec{p}) f(x,y)=f(p ). We define p 0 ⃗ = ( x 0 , y 0 ) ,   h ⃗ = ( Δ x , Δ y ) ,   ϵ ⃗ = ( ϵ 1 , ϵ 2 ) \vec{p_0}=(x_0, y_0),\ \vec{h}=(\Delta x, \Delta y),\ \vec{\epsilon}=(\epsilon_1, \epsilon_2) p0 =(x0,y0), h =(Δx,Δy), ϵ =(ϵ1,ϵ2)

Thus, the expression f ( x 0 + Δ x , y 0 + Δ y ) − f ( x 0 , y 0 ) = f x ( x 0 , y 0 ) Δ x + f y ( x 0 , y 0 ) Δ y + ϵ 1 Δ x + ϵ 2 Δ y \begin{aligned}&f(x_0+\Delta x,y_0+\Delta y)-f(x_0, y_0)\\=&f_x(x_0, y_0)\Delta x+f_y(x_0, y_0)\Delta y+\epsilon_1\Delta x+\epsilon_2\Delta y\end{aligned} =f(x0+Δx,y0+Δy)f(x0,y0)fx(x0,y0)Δx+fy(x0,y0)Δy+ϵ1Δx+ϵ2Δy

can be simplified as f ( p 0 ⃗ + h ⃗ ) − f ( p 0 ⃗ ) = ( f x ( p 0 ⃗ ) , f y ( p 0 ⃗ ) ) ⋅ h ⃗ + ϵ ⃗ ⋅ h ⃗ f(\vec{p_0}+\vec{h})-f(\vec{p_0})=(f_x(\vec{p_0}),f_y(\vec{p_0}))\cdot\vec{h}+\vec\epsilon\cdot\vec h f(p0 +h )f(p0 )=(fx(p0 ),fy(p0 ))h +ϵ h


12.4.4 Gradient of Functions of Two Variables

For z = f ( x , y ) = f ( p ) z=f(x,y)=f(p) z=f(x,y)=f(p), the vector ⟨ f x ( p ⃗ ) , f y ( p ⃗ ) ⟩ \langle f_x(\vec p),f_y(\vec p)\rangle fx(p ),fy(p ) is called the gradient of f f f at p ⃗ \vec p p and is denoted by ∇ f ( p ⃗ ) \nabla f(\vec p) f(p )
Thus, if f f f is differentiable at p ⃗ \vec p p , then f ( p ⃗ + h ⃗ ) = f ( p ⃗ ) + ∇ f ( p ⃗ ) ⋅ h ⃗ + ϵ ⋅ h ⃗ f(\vec p+\vec h)=f(\vec p)+\nabla f(\vec p)\cdot\vec h+\epsilon\cdot\vec h f(p +h )=f(p )+f(p )h +ϵh , where ϵ → 0 \epsilon\to0 ϵ0 as h ⃗ → 0 \vec h\to0 h 0.


Rules for Gradients

∇ \nabla is often called the del operator. In many respects, gradients behave like derivatives.


Theorem C : Properties of ∇ \nabla

∇ \nabla is a linear operator; that is,

  1. ∇ [ f ( p ⃗ ) + g ( p ⃗ ) ] = ∇ f ( p ⃗ ) + ∇ g ( p ⃗ ) \nabla[f(\vec p) + g(\vec p)]=\nabla f(\vec p)+\nabla g(\vec p) [f(p )+g(p )]=f(p )+g(p )
  2. ∇ [ α f ( p ⃗ ) ] = α ∇ f ( p ⃗ ) \nabla[\alpha f(\vec p)]=\alpha\nabla f(\vec p) [αf(p )]=αf(p )
  3. ∇ [ f ( p ⃗ ) g ( p ⃗ ) ] = f ( p ⃗ ) ∇ g ( p ⃗ ) + g ( p ⃗ ) ∇ f ( p ⃗ ) \nabla[f(\vec p)g(\vec p)]=f(\vec p)\nabla g(\vec p)+g(\vec p)\nabla f(\vec p) [f(p )g(p )]=f(p )g(p )+g(p )f(p )

12.4.5 Differentiability for Functions of more variables


Gradient

∇ f = ⟨ f x , f y , f z ⟩ \nabla f=\langle f_x,f_y,f_z\rangle f=fx,fy,fz

Let p ⃗ = ( x , y , z ) ,   h ⃗ = ( Δ x , Δ y , Δ z ) ,   ϵ ⃗ = ⟨ ϵ 1 , ϵ 2 , ϵ 3 ⟩ \vec p=(x,y,z),\ \vec h=(\Delta x,\Delta y,\Delta z),\ \vec\epsilon=\langle \epsilon_1,\epsilon_2,\epsilon_3\rangle p =(x,y,z), h =(Δx,Δy,Δz), ϵ =ϵ1,ϵ2,ϵ3

Then, differentiability

f f f is differentiable or locally linear at p ⃗ \vec p p if f ( p ⃗ + h ⃗ ) = f ( p ⃗ ) + ∇ f ( p ⃗ ) ⋅ h ⃗ + ϵ ⃗ ⋅ h ⃗ f(\vec p+\vec h)=f(\vec p)+\nabla f(\vec p)\cdot\vec h+\vec\epsilon\cdot \vec h f(p +h )=f(p )+f(p )h +ϵ h , where ϵ ⃗ → 0 \vec\epsilon\to0 ϵ 0 as h ⃗ → 0 \vec h\to0 h 0



12.5 Directional Derivatives and Gradients


12.5.1 Directional Derivatives

Let P 0 ( x 0 , y 0 ) P_0(x_0, y_0) P0(x0,y0) be a given point. For any unit vector u ⃗ = ⟨ u 1 , u 2 ⟩ \vec u=\langle u_1,u_2\rangle u =u1,u2, let D u f ( x 0 , y 0 ) = lim ⁡ h → 0 f ( x 0 + h u 1 , y 0 + h u 2 ) − f ( x 0 , y 0 ) h D_uf(x_0, y_0)=\lim\limits_{h\to0}\frac{f(x_0+hu_1,y_0+hu_2)-f(x_0, y_0)}{h} Duf(x0,y0)=h0limhf(x0+hu1,y0+hu2)f(x0,y0)
This limit, if it exists, is called the directional derivative of f f f at P 0 ( x 0 , y 0 ) P_0(x_0, y_0) P0(x0,y0) in the direction of u ⃗ \vec u u


Theorem A : Computation of Directional Derivatives

If f f f is differentiable at p ⃗ \vec p p , then for any unit vector u ⃗ = ⟨ u 1 , u 2 ⟩ \vec u=\langle u_1,u_2\rangle u =u1,u2, the function f f f has a directional derivative at p ⃗ \vec p p in the direction of u ⃗ \vec u u and D u f ( p ⃗ ) = ∇ f ( p ⃗ ) ⋅ u ⃗ D_uf(\vec p)=\nabla f(\vec p)\cdot\vec u Duf(p )=f(p )u
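
The formula $D_uf(\vec p)=\nabla f(\vec p)\cdot\vec u$ can be compared with the limit definition numerically (my own illustrative sketch; the function, point and direction are arbitrary choices):

```python
# Check D_u f(p) = grad f(p) . u for f(x, y) = x**2 + x*y at p = (1, 2).
import math

def f(x, y):
    return x**2 + x*y

p = (1.0, 2.0)
u = (1/math.sqrt(2), 1/math.sqrt(2))          # unit direction vector

grad = (2*p[0] + p[1], p[0])                  # (f_x, f_y) at p, computed by hand
dot = grad[0]*u[0] + grad[1]*u[1]

h = 1e-6                                      # difference quotient from the definition
quotient = (f(p[0] + h*u[0], p[1] + h*u[1]) - f(*p)) / h
print(dot, quotient)                          # both are about 5/sqrt(2) = 3.5355...
```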


Theorem B : Maximum and Minimum Rate of Change

If ∇ f ≠ 0 \nabla f\ne0 f=0, then function f ( x , y ) f(x, y) f(x,y) increases most rapidly at P ( x 0 , y 0 ) P(x_0, y_0) P(x0,y0) in the direction of the gradient with rate ∣ ∣ ∇ f ( x 0 , y 0 ) ∣ ∣ ||\nabla f(x_0, y_0)|| f(x0,y0) and decreases most rapidly in the opposite direction with rate − ∣ ∣ ∇ f ( x 0 , y 0 ) ∣ ∣ -||\nabla f(x_0, y_0)|| f(x0,y0).


12.5.2 Level Curves and Gradients


Theorem C

The gradient of f f f at a point P P P is perpendicular to the level curve of f f f that goes through P P P.



12.6 The Chain Rule


12.6.1 Two Versions of Chain Rule


Theorem A

If $z=f(x, y)$ is a differentiable function of $x$ and $y$, where $x=g(t),\ y=h(t)$ are both differentiable functions of $t$, then $z=f(g(t), h(t))$ is a differentiable function of $t$ and $\frac{\mathrm{d}z}{\mathrm{d}t}=\frac{\partial z}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}+\frac{\partial z}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}$


Remark :

The derivative in Theorem A can be interpreted as the rate of change of z z z with respect to t t t as the point ( x , y ) (x, y) (x,y) moves along the curve C C C with parametric equations x = g ( t ) ,   y = h ( t ) x=g(t),\ y=h(t) x=g(t), y=h(t)
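
A symbolic check of Theorem A (my own illustrative sketch; $z=x^2y$, $x=\cos t$, $y=\sin t$ are arbitrary example choices):

```python
# Chain rule (one-parameter version): dz/dt = z_x * dx/dt + z_y * dy/dt.
import sympy as sp

t = sp.symbols('t')
x_t, y_t = sp.cos(t), sp.sin(t)

x, y = sp.symbols('x y')
z = x**2 * y

# Right-hand side of the chain rule, then substitute x(t), y(t)
rhs = sp.diff(z, x)*sp.diff(x_t, t) + sp.diff(z, y)*sp.diff(y_t, t)
rhs = rhs.subs({x: x_t, y: y_t})

# Direct differentiation after substituting x(t), y(t) into z
direct = sp.diff(z.subs({x: x_t, y: y_t}), t)

print(sp.simplify(rhs - direct))   # 0: both computations agree
```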


Theorem B

If $z=f(x, y)$ is a differentiable function of $x$ and $y$, where $x=g(s, t),\ y=h(s, t)$ are both differentiable functions of $s$ and $t$, then $z=f(g(s, t), h(s, t))$ is a differentiable function of $s$ and $t$, and $\frac{\partial z}{\partial s}=\frac{\partial z}{\partial x}\frac{\partial x}{\partial s}+\frac{\partial z}{\partial y}\frac{\partial y}{\partial s}$, $\frac{\partial z}{\partial t}=\frac{\partial z}{\partial x}\frac{\partial x}{\partial t}+\frac{\partial z}{\partial y}\frac{\partial y}{\partial t}$


Generalization

d u d t = ∂ u ∂ x d x d t + ∂ u ∂ y d y d t + ∂ u ∂ z d z d t \frac{\mathrm{d}u}{\mathrm{d}t}=\frac{\partial u}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}+\frac{\partial u}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t}+\frac{\partial u}{\partial z}\frac{\mathrm{d}z}{\mathrm{d}t} dtdu=xudtdx+yudtdy+zudtdz

∂ u ∂ s = ∂ u ∂ x ∂ x ∂ s + ∂ u ∂ y ∂ y ∂ s + ∂ u ∂ z ∂ z ∂ s \frac{\partial u}{\partial s}=\frac{\partial u}{\partial x}\frac{\partial x}{\partial s}+\frac{\partial u}{\partial y}\frac{\partial y}{\partial s}+\frac{\partial u}{\partial z}\frac{\partial z}{\partial s} su=xusx+yusy+zusz


12.6.2 Implicit Differentiation


Theorem A

Suppose that F ( x , y ) = 0 F(x, y)=0 F(x,y)=0 defines y y y implicitly as a function of x x x, namely y = f ( x ) y=f(x) y=f(x) where F ( x , f ( x ) ) = 0 F(x, f(x))=0 F(x,f(x))=0 for all x x x in the domain of f f f. Since both x x x and y y y are functions of x x x, by applying chain rule to differentiate both sides of the equation F ( x , y ) = 0 F(x, y)=0 F(x,y)=0 we obtain d y d x = − ∂ F ∂ x ∂ F ∂ y = − F x F y \frac{\mathrm{d}y}{\mathrm{d}x}=-\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}}=-\frac{F_x}{F_y} dxdy=yFxF=FyFx


Theorem B

Suppose that F ( x , y , z ) = 0 F(x, y, z)=0 F(x,y,z)=0 defines z z z implicitly as a function of x x x and y y y, namely z = f ( x , y ) z=f(x, y) z=f(x,y) where F ( x , y , f ( x , y ) ) = 0 F(x, y, f(x, y))=0 F(x,y,f(x,y))=0 for all ( x , y ) (x, y) (x,y) in the domain of f f f. Differentiating both sides of the equation F ( x , y , z ) = 0 F(x, y, z)=0 F(x,y,z)=0 with respect to x x x by holding y y y fixed, we obtain ∂ z ∂ x = − F x F z ,   ∂ z ∂ y = − F y F z \frac{\partial z}{\partial x}=-\frac{F_x}{F_z},\ \frac{\partial z}{\partial y}=-\frac{F_y}{F_z} xz=FzFx, yz=FzFy
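The formulas $\frac{\partial z}{\partial x}=-\frac{F_x}{F_z}$ can be checked symbolically (my own illustrative sketch; the unit sphere is an arbitrary example surface):

```python
# Implicit partial derivative dz/dx = -F_x / F_z for F(x, y, z) = x**2 + y**2 + z**2 - 1 = 0,
# checked against differentiating z = sqrt(1 - x**2 - y**2) (upper hemisphere) directly.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**2 + y**2 + z**2 - 1

dzdx_formula = -sp.diff(F, x) / sp.diff(F, z)          # -F_x / F_z = -x/z

z_explicit = sp.sqrt(1 - x**2 - y**2)
dzdx_direct = sp.diff(z_explicit, x)

print(sp.simplify(dzdx_formula.subs(z, z_explicit) - dzdx_direct))   # 0
```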



12.7 Tangent Planes and Differentials


12.7.1 Tangent Planes


If ∇ F ( x 0 , y 0 , z 0 ) ≠ 0 \nabla F(x_0, y_0, z_0)\ne 0 F(x0,y0,z0)=0, then the plane through P P P perpendicular to ∇ F ( x 0 , y 0 , z 0 ) \nabla F(x_0, y_0, z_0) F(x0,y0,z0) is called the tangent plane to the surface at P P P. ∇ F ( x 0 , y 0 , z 0 ) \nabla F(x_0, y_0, z_0) F(x0,y0,z0) is called a normal vector to the surface at P P P.


Theorem A : Equation of Tangent Plane

F x ( x 0 , y 0 , z 0 ) ( x − x 0 ) + F y ( x 0 , y 0 , z 0 ) ( y − y 0 ) + F z ( x 0 , y 0 , z 0 ) ( z − z 0 ) = 0 F_x(x_0,y_0,z_0)(x-x_0)+F_y(x_0,y_0,z_0)(y-y_0)+F_z(x_0,y_0,z_0)(z-z_0)=0 Fx(x0,y0,z0)(xx0)+Fy(x0,y0,z0)(yy0)+Fz(x0,y0,z0)(zz0)=0


Remarks :

  1. ∇ F \nabla F F is a vector normal to the level surface of F ( x , y , z ) F(x, y, z) F(x,y,z), that is, F ( x , y , z ) = k F(x, y, z)=k F(x,y,z)=k
  2. Normal line of S S S through P P P is given by x − x 0 F x ( x 0 , y 0 , z 0 ) = y − y 0 F y ( x 0 , y 0 , z 0 ) = z − z 0 F z ( x 0 , y 0 , z 0 ) \frac{x-x_0}{F_x(x_0,y_0,z_0)}=\frac{y-y_0}{F_y(x_0,y_0,z_0)}=\frac{z-z_0}{F_z(x_0,y_0,z_0)} Fx(x0,y0,z0)xx0=Fy(x0,y0,z0)yy0=Fz(x0,y0,z0)zz0

12.7.2 Differentials


If $z=f(x, y)$ is a differentiable function, then $f_x(x,y)\,\mathrm dx+f_y(x,y)\,\mathrm dy$ is called the (total) differential of $f$, denoted by $\mathrm dz$ or $\mathrm df(x,y)$, where $\mathrm dx=\Delta x$ and $\mathrm dy=\Delta y$ are called the differentials of the independent variables $x$ and $y$.



12.8 Maxima and Minima


12.8.1 Local Maximum and Local Minimum

Let f f f be a function with domain S S S and p ⃗ 0 \vec p_0 p 0 be a point in S S S.

  1. $f(\vec p_0)$ is a local maximum value of $f$ if $f(\vec p_0)\ge f(\vec p)$ for all points $\vec p\in N\cap S$, where $N$ is some neighborhood of $\vec p_0$.
  2. $f(\vec p_0)$ is a local minimum value of $f$ if $f(\vec p_0)\le f(\vec p)$ for all points $\vec p\in N\cap S$, where $N$ is some neighborhood of $\vec p_0$.
  3. $f(\vec p_0)$ is a local extreme value of $f$ if it is either a local maximum value or a local minimum value.
  4. $f(\vec p_0)$ is a global maximum value or global minimum value if the corresponding inequality above holds for all points $\vec p$ in $S$.

Theorem A : First Derivative Test for Local Extreme Value

If $f$ has a local maximum or minimum value at an interior point $(a,b)$ of its domain and the first partial derivatives exist there, then $f_x(a, b)=0$ and $f_y(a, b)=0$.


A point ( a , b ) (a, b) (a,b) is called a stationary point if ∇ f ( a , b ) = 0 \nabla f(a,b)=0 f(a,b)=0


Theorem B : Second Partials Test for Local Extreme Values

Suppose that $f(x, y)$ has continuous second partial derivatives in a neighborhood of $(a, b)$ and that $\nabla f(a,b)=\vec 0$. Let $D=D(a, b)=f_{xx}(a,b)f_{yy}(a,b)-(f_{xy}(a,b))^2$. Then

  1. if D > 0 D>0 D>0 and f x x ( a , b ) < 0 , f ( a , b ) f_{xx}(a,b)<0,f(a,b) fxx(a,b)<0,f(a,b) is a local maximum value.
  2. if D > 0 D>0 D>0 and f x x ( a , b ) > 0 , f ( a , b ) f_{xx}(a,b)>0,f(a,b) fxx(a,b)>0,f(a,b) is a local minimum value.
  3. if D < 0 , f ( a , b ) D<0, f(a,b) D<0,f(a,b) is not an extreme value, ( a , b ) (a, b) (a,b) is a saddle point.
  4. if D = 0 D=0 D=0, the test is inconclusive.

$D$ can also be written as a determinant (the Hessian of $f$): $D=\begin{vmatrix}f_{xx}&f_{xy}\\f_{yx}&f_{yy}\end{vmatrix}=f_{xx}f_{yy}-(f_{xy})^2$


Procedure for Finding Local Extreme of z = f ( x , y ) z=f(x, y) z=f(x,y)

  1. Identify all stationary points by solving $\nabla f(x, y)=\vec 0$.
  2. Calculate $D(x, y)$ and $f_{xx}(x, y)$ at each stationary point.
  3. Classify each stationary point using Theorem B (see the sketch after this list).
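A minimal sympy sketch of this procedure (sympy and the function $f(x,y)=x^3-3x+y^2$ are my own illustrative assumptions): find the stationary points, then evaluate $D$ and $f_{xx}$ there.

```python
# Sketch: second partials test for a made-up function f(x, y) = x**3 - 3*x + y**2.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy, fxy = sp.diff(fx, x), sp.diff(fy, y), sp.diff(fx, y)

# Step 1: stationary points from grad f = 0
points = sp.solve([fx, fy], [x, y], dict=True)

# Steps 2-3: evaluate D and f_xx at each stationary point and classify
for p in points:
    D = (fxx*fyy - fxy**2).subs(p)
    print(p, 'D =', D, 'f_xx =', fxx.subs(p))
# (1, 0):  D > 0 and f_xx > 0  -> local minimum
# (-1, 0): D < 0               -> saddle point
```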

12.8.2 Global Maximum and Minimum Values


Theorem A : Max-Min Existence Theorem

If $f$ is continuous on a closed bounded set $S$, then $f$ attains both a maximum value and a minimum value on $S$.


Theorem B : Critical Point Theorem

Let f f f be defined on a set S S S containing p ⃗ 0 \vec p_0 p 0. If f ( p ⃗ 0 ) f(\vec p_0) f(p 0) is an extreme value, then p ⃗ 0 \vec p_0 p 0 must be a critical point.


The critical points of f f f on S S S are of three types.

  1. stationary point : p ⃗ 0 \vec p_0 p 0 is an interior point where ∇ f ( p ⃗ 0 ) = 0 \nabla f(\vec p_0)=0 f(p 0)=0
  2. singular point : p ⃗ 0 \vec p_0 p 0 is an interior point, where f f f is not differentiable
  3. boundary point of S S S

Procedure for finding the maximum and minimum values on a closed bounded set

  1. Find the values of f f f at stationary points and singular points in S S S.
  2. Find the extreme values of f f f on the boundary of S S S.
  3. The largest of these values is the maximum value; the smallest is the minimum value.


12.9 The Method of Lagrange Multipliers


Theorem A : Lagrange’s Method

To find extreme values of z = f ( x , y ) z=f(x,y) z=f(x,y) subject to the constraint g ( x , y ) = 0 g(x, y)=0 g(x,y)=0

  1. Find all values of x , y x, y x,y and λ \lambda λ such that: { ∇ f ( x , y ) = λ ∇ g ( x , y ) g ( x , y ) = 0 \begin{cases}\nabla f(x,y)=\lambda\nabla g(x,y)\\g(x,y)=0\end{cases} {f(x,y)=λg(x,y)g(x,y)=0
  2. Evaluate $f$ at all points obtained in Step 1. The largest of these values is the maximum value and the smallest is the minimum value.

The corresponding λ \lambda λ is called a Lagrange Multiplier.
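A minimal sympy sketch of Lagrange's method (sympy and the problem "extremize $f(x,y)=xy$ subject to $x^2+y^2=2$" are my own illustrative assumptions): solve the system from Step 1, then compare the values of $f$.

```python
# Sketch: Lagrange multipliers for a made-up constrained problem.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x*y
g = x**2 + y**2 - 2                         # constraint g(x, y) = 0

# Step 1: grad f = lambda * grad g together with g = 0
eqs = [sp.diff(f, x) - lam*sp.diff(g, x),
       sp.diff(f, y) - lam*sp.diff(g, y),
       g]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Step 2: evaluate f at every candidate point; largest = max, smallest = min
values = [(s[x], s[y], f.subs(s)) for s in sols]
print(values)   # f = 1 at (1, 1) and (-1, -1); f = -1 at (1, -1) and (-1, 1)
```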



Chapter 13 : Double Integral



13.1 Double Integral on Rectangular Regions


Theorem A

If $f$ is bounded on a closed rectangle $R$ and is continuous there except on a finite number of smooth curves, then it is integrable on $R$.



13.2 Iterated Integrals


∬ [ a , b ] × [ c , d ] f ( x , y ) d A = ∫ c d [ ∫ a b f ( x , y ) d x ] d y = ∫ a b [ ∫ c d f ( x , y ) d y ] d x \iint\limits_{[a,b]\times[c,d]}f(x,y)\mathrm dA=\int^d_c\left[\int^b_af(x,y)\mathrm dx\right]\mathrm dy=\int^b_a\left[\int^d_cf(x,y)\mathrm dy\right]\mathrm dx [a,b]×[c,d]f(x,y)dA=cd[abf(x,y)dx]dy=ab[cdf(x,y)dy]dx
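A minimal sympy sketch of the formula above (sympy, the rectangle $[0,1]\times[0,2]$ and the integrand $f(x,y)=xy^2$ are my own illustrative assumptions): both iteration orders give the same value.

```python
# Sketch: iterated integrals in both orders over a rectangle (made-up integrand).
import sympy as sp

x, y = sp.symbols('x y')
f = x*y**2

dy_first = sp.integrate(sp.integrate(f, (y, 0, 2)), (x, 0, 1))   # inner dy, outer dx
dx_first = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 2))   # inner dx, outer dy
print(dy_first, dx_first)   # 4/3  4/3
```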


Average value of f f f

  1. f ( x ) f(x) f(x) on I = [ a , b ] I=[a,b] I=[a,b] : a v e ( f , I ) = ∫ a b f ( x ) d x l e n g t h ( I ) l e n g t h ( I ) = ∫ a b ( 1 ) d x = ( b − a ) \begin{aligned}&ave(f,I)=\frac{\int^b_af(x)\mathrm dx}{length(I)}\\&length(I)=\int^b_a(1)\mathrm dx=(b-a)\end{aligned} ave(f,I)=length(I)abf(x)dxlength(I)=ab(1)dx=(ba)
  2. f ( x , y ) f(x,y) f(x,y) on R ⊂ R 2 R\sub R^2 RR2 : a v e ( f , R ) = ∬ R f ( x , y ) d A a r e a ( R ) a r e a ( R ) = ∬ R ( 1 ) d A \begin{aligned}&ave(f,R)=\frac{\iint\limits_Rf(x,y)\mathrm dA}{area(R)}\\&area(R)=\iint\limits_R(1)\mathrm dA\end{aligned} ave(f,R)=area(R)Rf(x,y)dAarea(R)=R(1)dA
  3. f ( x , y , z ) f(x,y,z) f(x,y,z) on D ⊂ R 3 D\sub R^3 DR3 : a v e ( f , D ) = ∭ D f ( x , y , z ) d V v o l u m e ( D ) v o l u m e ( D ) = ∭ D ( 1 ) d V \begin{aligned}&ave(f,D)=\frac{\iiint\limits_Df(x,y,z)\mathrm dV}{volume(D)}\\&volume(D)=\iiint\limits_D(1)\mathrm dV\end{aligned} ave(f,D)=volume(D)Df(x,y,z)dVvolume(D)=D(1)dV
  4. f ( x , y ) f(x,y) f(x,y) on C : r ⃗ ( t ) , t ∈ [ a , b ] C:\vec r(t),t\in[a,b] C:r (t),t[a,b] : a v e ( f , C ) = ∫ a b f ( r ⃗ ( t ) ) ∣ r ⃗ ′ ( t ) ∣ d t l e n g t h ( C ) l e n g t h ( C ) = ∫ a b ( 1 ) ∣ r ⃗ ′ ( t ) ∣ d t \begin{aligned}&ave(f,C)=\frac{\int^b_af(\vec r(t))|\vec r^\prime(t)|\mathrm dt}{length(C)}\\&length(C)=\int^b_a(1)|\vec r^\prime(t)|\mathrm dt\end{aligned} ave(f,C)=length(C)abf(r (t))r (t)dtlength(C)=ab(1)r (t)dt
  5. f ( x , y , z ) f(x,y,z) f(x,y,z) on S : S ⃗ ( u , v ) , ( u , v ) ∈ R 2 S:\vec S(u,v),(u,v)\in R^2 S:S (u,v),(u,v)R2 : a v e ( f , S ) = ∬ R f ( S ⃗ ( u , v ) ) ∣ S ⃗ u × S ⃗ v ∣ d A a r e a ( S ) a r e a ( S ) = ∫ R ( 1 ) ∣ S ⃗ u × S ⃗ v ∣ d A \begin{aligned}&ave(f,S)=\frac{\iint\limits_Rf(\vec S(u,v))|\vec S_u\times\vec S_v|\mathrm dA}{area(S)}\\&area(S)=\int\limits_R(1)|\vec S_u\times\vec S_v|\mathrm dA\end{aligned} ave(f,S)=area(S)Rf(S (u,v))S u×S vdAarea(S)=R(1)S u×S vdA


13.3 Double Integrals over General Regions


13.3.1 Double Integrals over General Regions


Let z = f ( x , y ) z=f(x,y) z=f(x,y) be defined on a general bounded region D.

  • Enclose D D D by a rectangle R R R.
  • Define a new function F F F with domain R R R. F ( x , y ) = { f ( x , y ) if  ( x , y ) ∈ D 0 if  ( x , y ) ∈ R \ D F(x,y)=\begin{cases}f(x,y)&\text{if}\ (x,y)\in D\\0&\text{if}\ (x,y)\in R\backslash D\end{cases} F(x,y)={f(x,y)0if (x,y)Dif (x,y)R\D
  • Then, ∬ D f ( x , y ) d A : = ∬ R F ( x , y ) d A \iint\limits_Df(x,y)\mathrm dA:=\iint\limits_RF(x,y)\mathrm dA Df(x,y)dA:=RF(x,y)dA

Remarks :

  • The definition makes sense and does not depend on the rectangle we use, as long as it contains $D$.
  • The double integral over a general region is also (1) linear, (2) additive on regions, and (3) satisfies the comparison property.

13.3.2 Evaluation of Double Integrals over General Regions


  • y-simple region : it lies between the graphs of two continuous functions of $x$, that is, $D=\{(x,y):g_1(x)\le y\le g_2(x),\ a\le x\le b\}$. Choose a rectangle $R=[a,b]\times[c,d]$ containing $D$. Then $$\iint\limits_Df(x,y)\,\mathrm dA=\iint\limits_RF(x,y)\,\mathrm dA=\int^b_a\int^d_cF(x,y)\,\mathrm dy\,\mathrm dx=\int^b_a\int^{g_2(x)}_{g_1(x)}f(x,y)\,\mathrm dy\,\mathrm dx$$
  • x-simple region : it can be expressed as $D=\{(x,y):h_1(y)\le x\le h_2(y),\ c\le y\le d\}$. By the same method, $$\iint\limits_Df(x,y)\,\mathrm dA=\int^d_c\int^{h_2(y)}_{h_1(y)}f(x,y)\,\mathrm dx\,\mathrm dy$$

Remark :

If the region $D$ is neither x-simple nor y-simple, it can usually be decomposed into a union of x-simple or y-simple regions.
Procedure for Evaluation of Double Integrals

  1. Sketch the region of integration D D D.
  2. Write D D D as y-simple or x-simple region
  3. Convert double integral into iterated integral
  4. Evaluate the iterated integral
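A minimal sympy sketch of this procedure (sympy, the y-simple region $\{0\le x\le1,\ x^2\le y\le x\}$ and the integrand $f(x,y)=x+y$ are my own illustrative assumptions):

```python
# Sketch: double integral over a made-up y-simple region, inner integral in y first.
import sympy as sp

x, y = sp.symbols('x y')
f = x + y

# D: 0 <= x <= 1, g1(x) = x**2 <= y <= g2(x) = x
inner = sp.integrate(f, (y, x**2, x))       # integrate with respect to y first
result = sp.integrate(inner, (x, 0, 1))     # then with respect to x
print(result)                               # 3/20
```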


13.4 Double Integrals in Polar Coordinates


13.4.1 Double Integrals over Polar Rectangles


Partition R R R into small polar rectangles R 1 , R 2 , ⋯   , R n R_1,R_2,\cdots,R_n R1,R2,,Rn by means of a polar grid. Let Δ r k \Delta r_k Δrk and Δ θ k \Delta\theta_k Δθk denote the dimensions of the typical piece R k R_k Rk

Choose center points $(\bar r_k,\bar\theta_k)$ in each $R_k$. The area of $R_k$ is $\Delta A_k=\bar r_k\,\Delta r_k\,\Delta\theta_k$, so $$V\approx\sum^n_{k=1}F(\bar r_k,\bar\theta_k)\,\Delta A_k=\sum^n_{k=1}F(\bar r_k,\bar\theta_k)\,\bar r_k\,\Delta r_k\,\Delta\theta_k$$
Hence the double integral can be rewritten in polar coordinates as $$\iint\limits_Rf(x,y)\,\mathrm dA=\iint\limits_Rf(r\cos\theta,r\sin\theta)\,r\,\mathrm dr\,\mathrm d\theta$$


13.4.2 For General Region


If f f f is continuous on a polar region of the form S = { ( r , θ ) : ϕ 1 ( θ ) ≤ r ≤ ϕ 2 ( θ ) , α ≤ θ ≤ β } S=\{(r,\theta):\phi_1(\theta)\le r\le\phi_2(\theta),\alpha\le\theta\le\beta\} S={(r,θ):ϕ1(θ)rϕ2(θ),αθβ} Then we can change the double integral into an iterated integral in polar coordinates ∬ S f ( x , y ) d A = ∫ α β ∫ ϕ 1 ( θ ) ϕ 2 ( θ ) f ( r cos ⁡ θ , r sin ⁡ θ ) r d r d θ \iint\limits_Sf(x,y)\mathrm dA=\int^\beta_\alpha\int^{\phi_2(\theta)}_{\phi_1(\theta)}f(r\cos\theta,r\sin\theta)r\mathrm dr\mathrm d\theta Sf(x,y)dA=αβϕ1(θ)ϕ2(θ)f(rcosθ,rsinθ)rdrdθ In particular, if f ( x , y ) = 1 f(x,y)=1 f(x,y)=1, then A ( S ) = ∬ S r d r d θ A(S)=\iint\limits_Sr\mathrm dr\mathrm d\theta A(S)=Srdrdθ
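A minimal sympy sketch of a polar double integral (sympy, the disk of radius 2 and the integrand $f(x,y)=x^2+y^2$ are my own illustrative assumptions): in polar form the integrand becomes $r^2$, and $\mathrm dA$ contributes the extra factor $r$.

```python
# Sketch: integrate x**2 + y**2 over the disk r <= 2 using polar coordinates (made-up example).
import sympy as sp

r, theta = sp.symbols('r theta', nonnegative=True)

integrand = r**2 * r     # f(r cos t, r sin t) = r**2, times the Jacobian factor r
result = sp.integrate(integrand, (r, 0, 2), (theta, 0, 2*sp.pi))
print(result)            # 8*pi
```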



13.6 Surface Area


Form a partition P P P of S S S with lines parallel to the x − x- x and y − y- yaxes. This divides S S S into n n n subrectangles R m R_m Rm with the lengths of sides Δ x m \Delta x_m Δxm and Δ y m , m = 1 , 2 , ⋯   , n \Delta y_m,m=1,2,\cdots,n Δym,m=1,2,,n

For each $m$, let $G_m$ be the part of the surface that projects onto $R_m$. Let $P_m$ be the corner of $G_m$ closest to the origin and $T_m$ the parallelogram in the tangent plane at $P_m$ that projects onto $R_m$. Then we can use the area of $T_m$ to approximate the area of $G_m$.
We next find the area of the parallelogram T m T_m Tm. Let u m u_m um and v m v_m vm denote the vectors that form the sides of T m T_m Tm. Then u ⃗ m = Δ x m i ⃗ + f x ( x m , y m ) Δ x m k ⃗ \vec u_m=\Delta x_m\vec i+f_x(x_m,y_m)\Delta x_m\vec k u m=Δxmi +fx(xm,ym)Δxmk v ⃗ m = Δ y m j ⃗ + f y ( x m , y m ) Δ y m k ⃗ \vec v_m=\Delta y_m\vec j+f_y(x_m,y_m)\Delta y_m\vec k v m=Δymj +fy(xm,ym)Δymk The area of T m T_m Tm is A ( T m ) = ∣ ∣ u ⃗ m × v ⃗ m ∣ ∣ = f x 2 ( x m , y m ) + f y 2 ( x m , y m ) + 1   Δ A m ≈ A ( G m ) A(T_m)=||\vec u_m\times\vec v_m||=\sqrt{f_x^2(x_m,y_m)+f_y^2(x_m,y_m)+1}\ \Delta A_m\approx A(G_m) A(Tm)=u m×v m=fx2(xm,ym)+fy2(xm,ym)+1  ΔAmA(Gm)Thus A ( G ) = ∬ S f x 2 + f y 2 + 1   d A \red{A(G)=\iint\limits_S\sqrt{f^2_x+f^2_y+1}\ \mathrm dA} A(G)=Sfx2+fy2+1  dA
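A minimal sympy sketch of the surface-area formula (sympy, the graph $z=(x^2+y^2)/2$ above the unit disk, and the switch to polar coordinates are my own illustrative assumptions): here $f_x^2+f_y^2+1=x^2+y^2+1=r^2+1$.

```python
# Sketch: area of a made-up graph surface via sqrt(fx**2 + fy**2 + 1), in polar coordinates.
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', nonnegative=True)
f = (x**2 + y**2)/2

fx, fy = sp.diff(f, x), sp.diff(f, y)        # fx = x, fy = y
# fx**2 + fy**2 + 1 = x**2 + y**2 + 1 = r**2 + 1 over the unit disk
integrand = sp.sqrt(r**2 + 1) * r            # extra factor r from dA = r dr dtheta
area = sp.integrate(integrand, (r, 0, 1), (theta, 0, 2*sp.pi))
print(sp.simplify(area))                     # (4*sqrt(2) - 2)*pi/3, about 3.83
```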



13.7 Triple Integrals


13.7.1 Triple Integrals over Cuboid

We define the triple integral by ∭ B f ( x , y , z ) d V = lim ⁡ ∣ ∣ P ∣ ∣ → 0 ∑ k = 1 n f ( x ˉ k , y ˉ k , z ˉ k ) Δ V k \iiint\limits_Bf(x,y,z)\mathrm dV=\lim\limits_{||P||\to0}\sum\limits^n_{k=1}f(\bar x_k,\bar y_k,\bar z_k)\Delta V_k Bf(x,y,z)dV=P0limk=1nf(xˉk,yˉk,zˉk)ΔVkif the limit exists. Here ∣ ∣ P ∣ ∣ ||P|| P is the maximal length of the diagonals of the small boxes, not the maximal volume of the small boxes.

  1. If f f f is continuous, then the triple integral always exists.

  2. The triple integrals have the standard properties:

    • Linearity : ∭ B f ( x , y , z ) + g ( x , y , z ) d V = ∭ B f ( x , y , z ) d V + ∭ B g ( x , y , z ) d V \iiint\limits_Bf(x,y,z)+g(x,y,z)\mathrm dV=\iiint\limits_Bf(x,y,z)\mathrm dV+\iiint\limits_Bg(x,y,z)\mathrm dV Bf(x,y,z)+g(x,y,z)dV=Bf(x,y,z)dV+Bg(x,y,z)dV
    • Additivity on Regions : ∭ D f ( x , y , z ) d V = ∭ D 1 f ( x , y , z ) d V + ∭ D 2 f ( x , y , z ) d V \iiint\limits_Df(x,y,z)\mathrm dV=\iiint\limits_{D_1}f(x,y,z)\mathrm dV+\iiint\limits_{D_2}f(x,y,z)\mathrm dV Df(x,y,z)dV=D1f(x,y,z)dV+D2f(x,y,z)dV
    • The Comparison Property : ∭ D f ( x , y , z ) d V ≤ ∭ D g ( x , y , z ) d V \iiint\limits_Df(x,y,z)\mathrm dV\le\iiint\limits_Dg(x,y,z)\mathrm dV Df(x,y,z)dVDg(x,y,z)dV if f ≤ g f\le g fg on D D D

13.7.2 General Regions: z-simple


Assume that $S=\{(x,y,z):a_1\le x\le a_2,\ \phi_1(x)\le y\le\phi_2(x),\ \psi_1(x,y)\le z\le\psi_2(x,y)\}$. Then $$\iiint\limits_Sf(x,y,z)\,\mathrm dV=\int^{a_2}_{a_1}\int^{\phi_2(x)}_{\phi_1(x)}\int^{\psi_2(x,y)}_{\psi_1(x,y)}f(x,y,z)\,\mathrm dz\,\mathrm dy\,\mathrm dx$$
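A minimal sympy sketch of a z-simple triple integral (sympy and the tetrahedron bounded by $x+y+z=1$ and the coordinate planes are my own illustrative assumptions): integrating the constant 1 gives the volume.

```python
# Sketch: volume of a made-up tetrahedron written as a z-simple region.
import sympy as sp

x, y, z = sp.symbols('x y z')

# 0 <= x <= 1, 0 <= y <= 1 - x, 0 <= z <= 1 - x - y; innermost integral first
vol = sp.integrate(1, (z, 0, 1 - x - y), (y, 0, 1 - x), (x, 0, 1))
print(vol)   # 1/6
```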


13.7.3 Cylindrical Coordinate

For $P=(x,y,z)=(r,\theta,z)$: $$\begin{cases}x=r\cos\theta\\y=r\sin\theta\\z=z\end{cases}$$ Therefore $f(P)=f(x,y,z)=f(r\cos\theta,r\sin\theta,z)=F(r,\theta,z)$, and in a triple integral $\mathrm dV=r\,\mathrm dz\,\mathrm dr\,\mathrm d\theta$ (the extra factor $r$ is the same Jacobian as in polar coordinates).



13.8 Spherical Coordinates

A point P P P in R 3 \mathbb{R}^3 R3 has spherical coordinates ( ρ , θ , ϕ ) (\rho,\theta,\phi) (ρ,θ,ϕ) if ρ \rho ρ is the distance from the origin to P P P, θ \theta θ is the same as in cylindrical coordinates and ϕ \phi ϕ is the angle between the positive z − z- zaxis and the line segment O P OP OP

By the definition, we can get : { x = ρ sin ⁡ ϕ cos ⁡ θ y = ρ sin ⁡ ϕ sin ⁡ θ z = ρ cos ⁡ ϕ \begin{cases}x=\rho\sin\phi\cos\theta\\y=\rho\sin\phi\sin\theta\\z=\rho\cos\phi\end{cases} x=ρsinϕcosθy=ρsinϕsinθz=ρcosϕ ρ ≥ 0 ,   0 ≤ θ ≤ 2 π ,   0 ≤ ϕ ≤ π \rho\ge0,\ 0\le\theta\le2\pi,\ 0\le\phi\le\pi ρ0, 0θ2π, 0ϕπ

In cylindrical coordinates, r = x 2 + y 2 r=\sqrt{x^2+y^2} r=x2+y2 , while ρ = x 2 + y 2 + z 2 \rho=\sqrt{x^2+y^2+z^2} ρ=x2+y2+z2 in spherical coordinates.

In spherical coordinates, cos ⁡ ϕ = z x 2 + y 2 + z 2 \cos\phi=\frac{z}{\sqrt{x^2+y^2+z^2}} cosϕ=x2+y2+z2 z


Jacobian :

∭ S f ( x , y , z ) d V = ∭ a p p r o p r i a t e   l i m i t s f ( ρ sin ⁡ ϕ cos ⁡ θ , ρ sin ⁡ ϕ sin ⁡ θ , ρ cos ⁡ ϕ ) ρ 2 sin ⁡ ϕ d ρ d θ d ϕ \iiint\limits_Sf(x,y,z)\mathrm dV=\iiint\limits_{appropriate\ limits}f(\rho\sin\phi\cos\theta,\rho\sin\phi\sin\theta,\rho\cos\phi)\rho^2\sin\phi\mathrm d\rho\mathrm d\theta\mathrm d\phi Sf(x,y,z)dV=appropriate limitsf(ρsinϕcosθ,ρsinϕsinθ,ρcosϕ)ρ2sinϕdρdθdϕ
Here the factor $\rho^2\sin\phi$ comes from the Jacobian $$J(\rho,\theta,\phi)=\begin{bmatrix}\frac{\partial x}{\partial\rho}&\frac{\partial x}{\partial\theta}&\frac{\partial x}{\partial\phi}\\\frac{\partial y}{\partial\rho}&\frac{\partial y}{\partial\theta}&\frac{\partial y}{\partial\phi}\\\frac{\partial z}{\partial\rho}&\frac{\partial z}{\partial\theta}&\frac{\partial z}{\partial\phi}\end{bmatrix}$$ That is, $\rho^2\sin\phi=|J(\rho,\theta,\phi)|$.
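A minimal sympy sketch (sympy itself and the ball-volume example are my own illustrative assumptions): compute the Jacobian determinant symbolically; its absolute value is $\rho^2\sin\phi$, and using it recovers the volume of a ball.

```python
# Sketch: spherical-coordinate Jacobian and the volume of a ball of radius a (made-up example).
import sympy as sp

rho, theta, phi, a = sp.symbols('rho theta phi a', positive=True)

x = rho*sp.sin(phi)*sp.cos(theta)
y = rho*sp.sin(phi)*sp.sin(theta)
z = rho*sp.cos(phi)

J = sp.Matrix([x, y, z]).jacobian([rho, theta, phi])
print(sp.simplify(J.det()))   # -rho**2*sin(phi); absolute value is rho**2*sin(phi)

# Volume of a ball of radius a: integrand 1 times the Jacobian factor
vol = sp.integrate(rho**2*sp.sin(phi), (rho, 0, a), (theta, 0, 2*sp.pi), (phi, 0, sp.pi))
print(vol)                    # 4*pi*a**3/3
```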



Chapter 14 : Vector Fields



14.1 Gradient, Divergence and Curl

Let S S S be a subset of R 2 \mathbb R^2 R2. A vector field over S S S is a map F ⃗ : S → R 2 \vec F:S\to\mathbb R^2 F :SR2such that ( x , y ) → ( F 1 ( x , y ) , F 2 ( x , y ) ) (x,y)\to(F_1(x,y),F_2(x,y)) (x,y)(F1(x,y),F2(x,y))

  • The vector field F ⃗ \vec F F is called continuous, if both F 1 ( x , y ) F_1(x,y) F1(x,y) and F 2 ( x , y ) F_2(x,y) F2(x,y) are continuous functions over S S S.
  • Suppose $S\subset\mathbb R^2$ is open. The vector field $\vec F$ is called differentiable if both $F_1(x,y)$ and $F_2(x,y)$ are differentiable functions over $S$.

14.1.1 The Gradient of a Scalar Field


Let f ( x , y , z ) f(x,y,z) f(x,y,z) be a scalar field and it is differentiable. We know the gradient of f f f : ∇ f ( x , y , z ) = < ∂ f ∂ x , ∂ f ∂ y , ∂ f ∂ z > \nabla f(x,y,z)=<\frac{\partial f}{\partial x},\frac{\partial f}{\partial y },\frac{\partial f}{\partial z}> f(x,y,z)=<xf,yf,zf>

A vector field $\vec F$ is called a conservative vector field if there is a scalar function $f$ such that $\nabla f=\vec F$. Such an $f$ is called a potential function of $\vec F$.


14.1.2 The Divergence and Curl of a Vector Field

Let $\vec F=M\vec i+N\vec j+P\vec k$ be a vector field for which the first partial derivatives of $M$, $N$, and $P$ exist. Then: $$\mathrm{div}\,\vec F=\frac{\partial M}{\partial x}+\frac{\partial N}{\partial y}+\frac{\partial P}{\partial z}=\nabla\cdot\vec F$$ $$\mathrm{curl}\,\vec F=\left(\frac{\partial P}{\partial y}-\frac{\partial N}{\partial z}\right)\vec i+\left(\frac{\partial M}{\partial z}-\frac{\partial P}{\partial x}\right)\vec j+\left(\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}\right)\vec k=\nabla\times\vec F$$

If $\vec F$ is a conservative vector field, then $\mathrm{curl}\,\vec F=\vec 0$. Conversely, if $\mathrm{curl}\,\vec F=\vec 0$ and $\vec F$ has continuous first partial derivatives on a simply connected region (for example, all of $\mathbb R^3$), then $\vec F$ is a conservative vector field.
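A minimal sympy sketch of these formulas (sympy, the field $\vec F=\langle xy,\,yz,\,xz\rangle$ and the potential $f=x^2y+\sin z$ are my own illustrative assumptions): compute div and curl directly from the partial derivatives, and check that the curl of a gradient field vanishes.

```python
# Sketch: divergence and curl from the component formulas (made-up fields).
import sympy as sp

x, y, z = sp.symbols('x y z')

def div(M, N, P):
    return sp.diff(M, x) + sp.diff(N, y) + sp.diff(P, z)

def curl(M, N, P):
    return (sp.diff(P, y) - sp.diff(N, z),
            sp.diff(M, z) - sp.diff(P, x),
            sp.diff(N, x) - sp.diff(M, y))

print(div(x*y, y*z, x*z))        # x + y + z
print(curl(x*y, y*z, x*z))       # (-y, -z, -x)

# A conservative field F = grad f has zero curl
f = x**2*y + sp.sin(z)
grad_f = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))
print(curl(*grad_f))             # (0, 0, 0)
```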





These notes run to some fifty thousand characters; I have done my best to write down every topic. I do not know how much use they will be, but if they have helped you, please follow, like, and bookmark — I would be very grateful. Writing these closing lines, I hardly know what more to say.
