Calculus/Multivariable calculus


In your previous study of calculus, we looked at functions and their behaviour. Most of the functions we examined were of the form

f : R → R,

with only occasional examination of functions of two variables. However, the study of functions of several variables is quite rich in itself, and has applications in several fields.

We write functions of vectors - many variables - as follows:

f : R^m → R^n

and f(x) for the function that maps a vector in R^m to a vector in R^n.

Before we can do calculus in R^n, we must familiarise ourselves with the structure of R^n. We need to know which properties of R can be extended to R^n.

Topology in R^n

We are already familiar with the nature of the regular real number line, which is the set R, and the two-dimensional plane, R^2. This examination of topology in R^n attempts to look at a generalization of the nature of n-dimensional spaces: R, R^2, R^3, and so on up to R^n.

Lengths and distances

If we have a vector in R^2, we can calculate its length using the Pythagorean theorem. For instance, the length of the vector (2, 3) is

$$\sqrt{2^2 + 3^2} = \sqrt{13}$$

We can generalize this to R^n. We define a vector's length, written |x|, as the square root of the sum of the squares of each of its components. That is, if we have a vector x = (x_1, ..., x_n),

|๐ฑ|=x12+x22++xn2

Now that we have established some concept of length, we can establish the distance between two vectors. We define this distance to be the length of the two vectors' difference. We write this distance d(x, y), and it is

d(๐ฑ,๐ฒ)=|๐ฑ๐ฒ|=(xiyi)2

This distance function is sometimes referred to as a metric. Other metrics arise in different circumstances. The metric we have just defined is known as the Euclidean metric.
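As a quick check of these formulas, here is a minimal sketch in Python (assuming NumPy is available; the function names are our own, chosen for illustration):

```python
import numpy as np

def length(x):
    """Euclidean length: the square root of the sum of squared components."""
    return np.sqrt(np.sum(np.asarray(x, dtype=float) ** 2))

def distance(x, y):
    """Euclidean metric: the length of the difference of two vectors."""
    return length(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))

print(length([2, 3]))                   # sqrt(13), about 3.6056
print(distance([1, 0, 0], [0, 1, 0]))   # sqrt(2), about 1.4142
```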

Open and closed balls

In R, we have the concept of an interval: we choose the points within a certain distance of some central point. For example, the interval [-1, 1] is centered about the point 0, and includes points to the left and right of zero.

In R^2 and up, the idea is a little more difficult to carry over. For R^2, we need to consider points to the left, right, above, and below a certain point. This may be fine, but for R^3 we need to include points in still more directions.

We generalize the idea of the interval by considering all the points that are within a given, fixed distance of a certain point. Since we now know how to calculate distances in R^n, we can make our generalization as follows, by introducing the concept of an open ball and a closed ball, which are analogous to the open and closed interval respectively.

an open ball
$$B(\mathbf{a}, r)$$
is a set of the form {x ∈ R^n | d(x, a) < r}
a closed ball
$$\bar{B}(\mathbf{a}, r)$$
is a set of the form {x ∈ R^n | d(x, a) ≤ r}

In R, we have seen that the open ball is simply an open interval centered about the point x = a. In R^2 this is a circle with no boundary, and in R^3 it is a sphere with no outer surface. (What would the closed ball be?)
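Membership in these balls is straightforward to test numerically; here is a minimal sketch (assuming NumPy, with our own illustrative function names):

```python
import numpy as np

def in_open_ball(x, a, r):
    """True when d(x, a) < r, i.e. x lies in the open ball B(a, r)."""
    return np.linalg.norm(np.asarray(x, float) - np.asarray(a, float)) < r

def in_closed_ball(x, a, r):
    """True when d(x, a) <= r, i.e. x lies in the closed ball."""
    return np.linalg.norm(np.asarray(x, float) - np.asarray(a, float)) <= r

print(in_open_ball([1, 0], [0, 0], 1))    # False: the boundary is excluded
print(in_closed_ball([1, 0], [0, 0], 1))  # True: the boundary is included
```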


Boundary points

If we have some area, say a field, then the common-sense notion of the boundary is the set of points 'next to' both the inside and outside of the field. For a set S, we can define this rigorously: the boundary of the set contains all those points for which we can find points both inside and outside the set in every surrounding ball, no matter how small. We call the set of such points ∂S.

Typically, when it exists, the dimension of ∂S is one lower than the dimension of S; e.g. the boundary of a volume is a surface, and the boundary of a surface is a curve.

This isn't always true, but it is true of all the sets we will be using.


Bounded sets

A set S is bounded if we can enclose it in some closed ball about the origin; that is, if every point in it is within a finite distance of the origin, i.e. there exists some r > 0 such that x in S implies |x| < r.

Curves and parametrizations

If we have a function f : R → R^n, we say that f's image (the set {f(t) | t ∈ R} - or the image of some subset of R) is a curve in R^n, and f is its parametrization.

Parametrizations are not necessarily unique - for example, f(t) = (cos t, sin t) with t ∈ [0, 2π) is one parametrization of the unit circle, and g(t) = (cos at, sin at) with t ∈ [0, 2π/a) gives a whole family of parametrizations of that circle, one for each a > 0.

Collision and intersection points

Say we have two different curves. It may be important to consider

  • when the two curves cross each other - where they intersect
  • when the two curves hit each other at the same time - where they collide.

Intersection points

Firstly, suppose we have two parametrizations f(t) and g(s), and we want to find out where the curves intersect. This means that we want to know where the function values of the two parametrizations are the same, so we need to solve

f(t) = g(s)

because we are seeking the points where the function values coincide, independent of the times at which they are reached.

For example, if we have f(t) = (t, 3t) and g(s) = (s, s^2), and we want to find intersection points:

f(t) = g(s)
(t, 3t) = (s, s^2)
t = s and 3t = s^2

with solutions (t, s) = (0, 0) and (3, 3).

So, the two curves intersect at the points (0, 0) and (3, 9).
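We can reproduce this calculation symbolically; here is a minimal sketch with SymPy (assuming it is available):

```python
from sympy import symbols, solve

t, s = symbols('t s')

# f(t) = (t, 3t), g(s) = (s, s**2); intersections solve f(t) = g(s).
solutions = solve([t - s, 3*t - s**2], [t, s], dict=True)
print(solutions)  # expected: [{t: 0, s: 0}, {t: 3, s: 3}]

# The intersection points themselves:
for sol in solutions:
    print((sol[t], 3*sol[t]))  # (0, 0) and (3, 9)
```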

Collision points

However, if we want to know when the points "collide", with f(t) and g(t), we need to know when both the function values and the times are the same, so we need to solve instead

f(t) = g(t)

For example, using the same functions as before, f(t) = (t, 3t) and g(t) = (t, t^2), we want to find the collision points:

f(t) = g(t)
(t, 3t) = (t, t^2)
t = t and 3t = t^2

which gives solutions t = 0 and t = 3, so the collision points are (0, 0) and (3, 9).
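The same computation in SymPy, this time with a single shared parameter (a sketch, assuming SymPy is available):

```python
from sympy import symbols, solve

t = symbols('t')

# Collisions require equal values at the same time: f(t) = g(t).
times = solve(3*t - t**2, t)
print(times)                         # [0, 3]
print([(tc, 3*tc) for tc in times])  # collision points (0, 0) and (3, 9)
```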

We may want to do this to model physical problems, such as in ballistics.

Continuity and differentiability

If we have a parametrization f : R → R^n, which is built up out of component functions in the form f(t) = (f_1(t), ..., f_n(t)), then f is continuous if and only if each component function is also.

In this case the derivative of f(t) is

$$\mathbf{f}'(t) = (f_1'(t), \ldots, f_n'(t)).$$

This is actually a specific consequence of a more general fact we will see later.


Tangent vectors

Recall from single-variable calculus that, at a certain point on a curve, we can draw a line that is tangent to that curve at exactly that point. This line is called a tangent. In the several-variable case, we can do something similar.

We can expect the tangent vector to depend on f′(t) and we know that a line is its own tangent, so looking at a parametrised line will show us precisely how to define the tangent vector for a curve.

An arbitrary line is f(t) = at + b, with f_i(t) = a_i t + b_i, so

f_i'(t) = a_i and
f'(t) = a, which is the direction of the line - its tangent vector.

Similarly, for any curve, the tangent vector is f′(t).



Angle between curves

We can then formulate the concept of the angle between two curves by considering the angle between the two tangent vectors. If two curves, parametrized by f_1 and f_2, intersect at some point, which means that

f_1(s) = f_2(t) = c,

the angle between these two curves at c is the angle between the tangent vectors f_1'(s) and f_2'(t), which is given by

$$\arccos \frac{\mathbf{f}_1'(s) \cdot \mathbf{f}_2'(t)}{|\mathbf{f}_1'(s)|\,|\mathbf{f}_2'(t)|}$$

Tangent lines

With the tangent vector as the analogue of the slope of the tangent line in the one-variable case, we can form the idea of the tangent line. Recall that we need a point on the line and its direction.

If we want to form the tangent line at a point on the curve, say p = f(t_0), we have the direction of the line f'(t_0), so we can form the tangent line

$$\mathbf{x}(t) = \mathbf{p} + t\,\mathbf{f}'(t_0)$$


Different parametrizations

A parametrization of a curve is not necessarily unique; curves can have several different parametrizations. For example, we already saw that the unit circle can be parametrized by g(t) = (cos at, sin at) with t ∈ [0, 2π/a).

Generally, if f is one parametrization of a curve, and g is another, with

f(t_0) = g(s_0)

there is a function u(t) such that u(t_0) = s_0, and g(u(t)) = f(t) near t_0.

This means, in a sense, that the function u(t) "speeds up" or "slows down" the traversal of the curve, but keeps the curve's shape.

Surfaces

A surface in space can be described as the image of a function f : R^2 → R^n; f is said to be a parametrization of that surface.

For example, consider the function

f(α, β) = α(2,1,3)+β(-1,2,0)

This describes an infinite plane in R^3. If we restrict α and β to some domain, we get a parallelogram-shaped surface in R^3.

Surfaces can also be described explicitly, as the graph of a function z = f(x, y), which has the standard parametrization g(x, y) = (x, y, f(x, y)), or implicitly, in the form f(x, y, z) = c.

Level sets

The concept of the level set (or contour) is an important one. If we have a function f(x, y, z), a level set in R^3 is a set of the form {(x, y, z) | f(x, y, z) = c}. Each of these level sets is, typically, a surface.

Level sets can be similarly defined in any R^n.

Level sets in two dimensions may be familiar from maps, or weather charts. Each line represents a level set. For example, on a map, each contour represents all the points where the height is the same. On a weather chart, the contours represent all the points where the air pressure is the same.


Limits and continuity

Before we can look at derivatives of multivariate functions, we need to look at how limits work with functions of several variables, just as in the single-variable case.

If we have a function f : R^m → R^n, we say that f(x) approaches b (in R^n) as x approaches a (in R^m) if, for all positive ε, there is a corresponding positive number δ such that |f(x) - b| < ε whenever |x - a| < δ, with x ≠ a.

This means that by making the difference between x and a smaller, we can make the difference between f(x) and b as small as we want.

If the above is true, we say

  • f(x) has limit b at a
  • $$\lim_{\mathbf{x} \to \mathbf{a}} \mathbf{f}(\mathbf{x}) = \mathbf{b}$$
  • f(x) approaches b as x approaches a
  • f(x) → b as x → a

These four statements are all equivalent.

Rules

Since this is an almost identical formulation of limits in the single variable case, many of the limit rules in the one variable case are the same as in the multivariate case.

For f and g, mapping R^m to R^n, and h(x) a scalar function mapping R^m to R, with

  • f(x) → b as x → a
  • g(x) → c as x → a
  • h(x) → H as x → a

then:

  • lim๐ฑ๐š(๐Ÿ+๐ )=๐›+๐œ
  • lim๐ฑ๐š(h๐Ÿ)=H๐›

and consequently

  • lim๐ฑ๐š(๐Ÿ๐ )=๐›๐œ
  • lim๐ฑ๐š(๐Ÿ×๐ )=๐›×๐œ

when H≠0

  • lim๐ฑ๐š(๐Ÿh)=๐›H

Continuity

Again, we can use a similar definition to the one variable case to formulate a definition of continuity for multiple variables.

If f : R^m → R^n, then f is continuous at a point a in R^m if f(a) is defined and

$$\lim_{\mathbf{x} \to \mathbf{a}} \mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{a})$$

Just as for functions of one variable, if f and g are both continuous at p, then f + g, λf (for a scalar λ), f · g, and f × g are continuous also. If φ : R^m → R is continuous at p, then φf is too, and f/φ is continuous if φ is never zero.

From these facts we also have that if A is some matrix of size n×m, with x in R^m, then the function f(x) = Ax is continuous, because the function can be expanded in the form x_1 a_1 + ... + x_m a_m (where the a_i are the columns of A), which can be easily verified from the points above.

A function f : R^m → R^n of the form f(x) = (f_1(x), ..., f_n(x)) is continuous if and only if each of its component functions is. In particular, components that are polynomials or rational functions are continuous wherever they are defined.

Finally, if f is continuous at p and g is continuous at f(p), then g(f(x)) is continuous at p.

Special note about limits

It is important to note that we can approach a point from more than one direction, and thus the direction from which we approach that point matters in evaluating the limit. It may be that the values approach a limit along one direction of approach but not along another; for the limit to exist, the value must be the same along every path of approach.
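A standard example of this (our own illustration, not from the text above) is f(x, y) = xy/(x^2 + y^2), which approaches different values along different lines through the origin. A minimal numerical check in Python:

```python
# f(x, y) = x*y / (x**2 + y**2) has no limit at the origin:
# along y = 0 the values are 0, but along y = x they are 1/2.
def f(x, y):
    return x * y / (x**2 + y**2)

for t in [0.1, 0.01, 0.001]:
    print(f(t, 0.0), f(t, t))  # first column -> 0, second column -> 0.5
```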

Differentiable functions

We will start from the one-variable definition of the derivative at a point p, namely

$$\lim_{x \to p} \frac{f(x) - f(p)}{x - p} = f'(p)$$

We can't divide by vectors, so this definition can't be immediately extended to the multiple-variable case. However, we can divide by the absolute value of a vector, so let's rewrite this definition in terms of absolute values:

$$\lim_{x \to p} \frac{\left|f(x) - f(p) - f'(p)(x - p)\right|}{|x - p|} = 0$$

after pulling f'(p) inside and putting it over a common denominator.

So, how can we use this for the several-variable case?

If we switch all the variables over to vectors and replace the constant (which performs a linear map in one dimension) with a matrix (which is also a linear map), we have

$$\lim_{\mathbf{x} \to \mathbf{p}} \frac{|\mathbf{f}(\mathbf{x}) - \mathbf{f}(\mathbf{p}) - A(\mathbf{x} - \mathbf{p})|}{|\mathbf{x} - \mathbf{p}|} = 0$$

If this limit exists for some f : R^m → R^n, and there is an n×m matrix A for which it does, we refer to this matrix as the derivative, and we write it as D_p f.

A point on terminology - in referring to the action of taking the derivative, we write D_p f, but in referring to this matrix itself, it is known as the Jacobian matrix and is also written J_p f. More on the Jacobian later.

Properties

There are a number of important properties of this formulation of the derivative.

Affine approximations

If f is differentiable at p, then for x close to p, |f(x) - (f(p) + A(x - p))| is small compared to |x - p|, which means that f(x) is approximately equal to f(p) + A(x - p).

We call an expression of the form g(x)+c affine, when g(x) is linear and c is a constant. f(p)+A(x-p) is an affine approximation to f(x).

Jacobian matrix and partial derivatives

The Jacobian matrix of a function has entries

$$(J_\mathbf{p}\mathbf{f})_{ij} = \left.\frac{\partial f_i}{\partial x_j}\right|_\mathbf{p}$$

For f : R^m → R^n, J_p f is an n×m matrix.

The consequence of this is that if f is differentiable at p, all the partial derivatives of f exist at p.

However, it is possible that all the partial derivatives of a function exist at some point yet that function is not differentiable there.
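SymPy can compute a Jacobian directly; here is a minimal sketch (the particular f is our own illustrative choice):

```python
from sympy import symbols, Matrix, sin

x, y = symbols('x y')

# f : R^2 -> R^3, f(x, y) = (x*y, sin(x), x + y).
f = Matrix([x*y, sin(x), x + y])
J = f.jacobian([x, y])
print(J)        # Matrix([[y, x], [cos(x), 0], [1, 1]])
print(J.shape)  # (3, 2): an n-by-m matrix for f : R^m -> R^n
```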

Continuity and differentiability

Furthermore, if all the partial derivatives exist and are continuous in some neighbourhood of a point p, then f is differentiable at p. This has the consequence that a function whose component functions are built from functions with continuous partial derivatives (such as polynomials and rational functions) is differentiable everywhere it is defined.

We use the terminology continuously differentiable for a function differentiable at p whose partial derivatives all exist and are continuous in some neighbourhood of p.

Rules of taking Jacobians

If f, g : R^m → R^n and h : R^m → R are differentiable at p:

  • $$J_\mathbf{p}(\mathbf{f} + \mathbf{g}) = J_\mathbf{p}\mathbf{f} + J_\mathbf{p}\mathbf{g}$$
  • $$J_\mathbf{p}(h\mathbf{f}) = h(\mathbf{p})\,J_\mathbf{p}\mathbf{f} + \mathbf{f}(\mathbf{p})\,J_\mathbf{p}h$$
  • $$J_\mathbf{p}(\mathbf{f} \cdot \mathbf{g}) = \mathbf{g}(\mathbf{p})^T J_\mathbf{p}\mathbf{f} + \mathbf{f}(\mathbf{p})^T J_\mathbf{p}\mathbf{g}$$

Important: make sure the order is right - matrix multiplication is not commutative!

Chain rule

The chain rule for functions of several variables is as follows. For f : R^m → R^n and g : R^n → R^p, if g ∘ f is differentiable at p, then its Jacobian is given by

$$J_\mathbf{p}(\mathbf{g} \circ \mathbf{f}) = (J_{\mathbf{f}(\mathbf{p})}\mathbf{g})\,(J_\mathbf{p}\mathbf{f})$$

Again, we have matrix multiplication, so one must preserve this exact order. Compositions in one order may be defined, but not necessarily in the other way.
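We can check the chain rule symbolically for a concrete pair of functions (a sketch with SymPy; the particular f and g are our own choices):

```python
from sympy import symbols, Matrix, simplify

x, y = symbols('x y')
u, v = symbols('u v')

f = Matrix([x + y, x * y])       # f : R^2 -> R^2
g = Matrix([u**2, u + v, v**3])  # g : R^2 -> R^3

Jf = f.jacobian([x, y])
Jg = g.jacobian([u, v]).subs({u: f[0], v: f[1]})  # J g evaluated at f(x, y)

composite = g.subs({u: f[0], v: f[1]})            # g o f, written out directly
J_composite = composite.jacobian([x, y])

print(simplify(J_composite - Jg * Jf))  # zero matrix: the chain rule holds
```

Note the order of the product: Jg is 3-by-2 and Jf is 2-by-2, so the multiplication is only defined one way round.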


Alternate notations

For simplicity, we will often use various standard abbreviations, so we can write most of the formulae on one line. This can make it easier to see the important details.

We can abbreviate partial derivatives with a subscript, e.g.,

$$\partial_x h(x, y) = h_x \qquad \partial_x \partial_y h = \partial_y \partial_x h$$

When we are using a subscript this way we will generally use the Heaviside D rather than ∂,

$$D_x h(x, y) = h_x \qquad D_x D_y h = D_y D_x h$$

Mostly, to make the formulae even more compact, we will put the subscript on the function itself.

$$D_x h = h_x \qquad h_{xy} = h_{yx}$$

If we are using subscripts to label the axes, x_1, x_2, ..., then, rather than having two layers of subscripts, we will use the number as the subscript.

$$h_1 = D_1 h = \partial_1 h = \partial_{x_1} h = \frac{\partial h}{\partial x_1}$$

We can also use subscripts for the components of a vector function: u = (u_x, u_y, u_z) or u = (u_1, u_2, ..., u_n).

If we are using subscripts for both the components of a vector and for partial derivatives we will separate them with a comma.

$$u_{x,y} = \frac{\partial u_x}{\partial y}$$

The most widely used notation is h_x. Both h_1 and ∂_1 h are also quite widely used whenever the axes are numbered. The notation ∂_x h is used least frequently.

We will use whichever notation best suits the equation we are working with.

Directional derivatives

Normally, a partial derivative of a function with respect to one of its variables, say x_j, takes the derivative of the "slice" of that function parallel to the x_j'th axis.

More precisely, we can think of cutting the function f(x_1, ..., x_n) in space along the x_j'th axis, keeping everything but the x_j variable constant.

From the definition, we have the partial derivative at a point p of the function along this slice as

๐Ÿxj=limt0๐Ÿ(๐ฉ+t๐žj)๐Ÿ(๐ฉ)t

provided this limit exists.

Instead of the basis vector, which corresponds to taking the derivative along that axis, we can pick a vector in any direction (which we usually take as being a unit vector), and we take the directional derivative of a function as

๐Ÿ๐=limt0๐Ÿ(๐ฉ+t๐)๐Ÿ(๐ฉ)t

where d is the direction vector.

If we want to calculate directional derivatives, calculating them from the limit definition is rather painful, but we have the following: if f : R^n → R^m is differentiable at a point p, and |d| = 1, then

$$\frac{\partial \mathbf{f}}{\partial \mathbf{d}} = D_\mathbf{p}\mathbf{f}(\mathbf{d})$$

There is a closely related formulation which we'll look at in the next section.

Gradient vectors

The partial derivatives of a scalar tell us how much it changes if we move along one of the axes. What if we move in a different direction?

We'll call the scalar f, and consider what happens if we move an infinitesimal amount dr = (dx, dy, dz). Using the chain rule,

$$df = dx\,\frac{\partial f}{\partial x} + dy\,\frac{\partial f}{\partial y} + dz\,\frac{\partial f}{\partial z}$$

This is the dot product of dr with a vector whose components are the partial derivatives of f, called the gradient of f:

$$\operatorname{grad} f = \nabla f = \left(\frac{\partial f(\mathbf{p})}{\partial x_1}, \ldots, \frac{\partial f(\mathbf{p})}{\partial x_n}\right)$$

We can then form the directional derivative at a point p, in the direction d, by taking the dot product of the gradient with d:

$$\nabla f(\mathbf{p}) \cdot \mathbf{d} = \frac{\partial f}{\partial \mathbf{d}}(\mathbf{p}).$$

Notice that grad f looks like a vector multiplied by a scalar. This particular combination of partial derivatives is commonplace, so we abbreviate it to

$$\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)$$

We can regard taking the gradient as applying an operator. Recall that in the one-variable case we can write d/dx for the action of taking the derivative with respect to x. This case is similar, but ∇ acts like a vector.

We can also write the action of taking the gradient vector as:

$$\nabla = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n}\right)$$

Properties of the gradient vector

Geometry
  • Grad f(p) is a vector pointing in the direction of steepest slope of f at p. |grad f(p)| is the rate of increase of f in that direction.

For example, consider h(x, y) = x^2 + y^2. The level sets of h are concentric circles, centred on the origin, and

$$\nabla h = (h_x, h_y) = 2(x, y) = 2\mathbf{r}$$

grad h points directly away from the origin, at right angles to the contours.

  • Along a level set, (∇f)(p) is perpendicular to the level set {x | f(x) = f(p)} at x = p.

If dr points along the contours of f, where the function is constant, then df will be zero. Since df is a dot product, that means that the two vectors, dr and grad f, must be at right angles, i.e. the gradient is at right angles to the contours.

Algebraic properties

Like d/dx, ∇ is linear. For any pair of constants, a and b, and any pair of scalar functions, f and g

$$\frac{d}{dx}(af + bg) = a\frac{d}{dx}f + b\frac{d}{dx}g \qquad \nabla(af + bg) = a\nabla f + b\nabla g$$

Since it's a vector, we can try taking its dot and cross product with other vectors, and with itself.

Divergence

If the vector function u maps R^n to itself, then we can take the dot product of ∇ and u. This dot product is called the divergence.

$$\operatorname{div}\,\mathbf{u} = \nabla \cdot \mathbf{u} = \frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2} + \cdots + \frac{\partial u_n}{\partial x_n}$$

If we look at a vector function like v = (1 + x^2, xy), we can see that to the left of the origin all the v vectors are converging towards the origin, but on the right they are diverging away from it.

Div u tells us how much u is converging or diverging. It is positive when the vector is diverging from some point, and negative when the vector is converging on that point.

Example:
For v = (1 + x^2, xy), div v = 3x, which is positive to the right of the origin, where v is diverging, and negative to the left of the origin, where v is converging.
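The same divergence, computed symbolically (a sketch assuming SymPy):

```python
from sympy import symbols

x, y = symbols('x y')
v = (1 + x**2, x * y)

div_v = v[0].diff(x) + v[1].diff(y)
print(div_v)  # 3*x: positive for x > 0, negative for x < 0
```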

Like grad, div is linear.

(a๐ฎ+b๐ฏ)=a๐ฎ+b๐ฏ

Later in this chapter we will see how the divergence of a vector function can be integrated to tell us more about the behaviour of that function.

To find the divergence we took the dot product of ∇ and a vector, with ∇ on the left. If we reverse the order we get

$$\mathbf{u} \cdot \nabla = u_x D_x + u_y D_y + u_z D_z$$

To see what this means, consider i · ∇. This is D_x, the partial differential in the i direction. Similarly, u · ∇ is the partial differential in the u direction, multiplied by |u|.

Curl

If u is a three-dimensional vector function on R^3, then we can take the cross product of ∇ with u. This cross product is called the curl.

$$\operatorname{curl}\,\mathbf{u} = \nabla \times \mathbf{u} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ D_x & D_y & D_z \\ u_x & u_y & u_z \end{vmatrix}$$

Curl u tells us if the vector u is rotating round a point. The direction of curl u is the axis of rotation.

We can treat vectors in two dimensions as a special case of three dimensions, with u_z = 0 and D_z u = 0. We can then extend the definition of curl u to two-dimensional vectors:

$$\operatorname{curl}\,\mathbf{u} = D_x u_y - D_y u_x$$

This two-dimensional curl is a scalar. In four or more dimensions there is no vector equivalent to the curl.

Example:
Consider u=(-y, x). These vectors are tangent to circles centred on the origin, so appear to be rotating around it anticlockwise.

curl๐ฎ=Dy(y)Dxx=2

Example
Consider u=(-y, x-z, y), which is similar to the previous example.

curl๐ฎ=|๐ข๐ฃ๐คDxDyDzyxzy|=2๐ข+2๐ค

This u is rotating round the axis i+k
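Both examples can be checked with a hand-rolled symbolic curl (a sketch assuming SymPy; the helper function is our own):

```python
from sympy import symbols, Matrix

x, y, z = symbols('x y z')

def curl(u):
    """Curl of a 3-d vector field, from the determinant formula."""
    return Matrix([
        u[2].diff(y) - u[1].diff(z),
        u[0].diff(z) - u[2].diff(x),
        u[1].diff(x) - u[0].diff(y),
    ])

print(curl(Matrix([-y, x, 0])).T)      # [[0, 0, 2]]
print(curl(Matrix([-y, x - z, y])).T)  # [[2, 0, 2]], i.e. 2i + 2k
```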

Later in this chapter we will see how the curl of a vector function can be integrated to tell us more about the behaviour of that function.

Product and chain rules

Just as with ordinary differentiation, there are product rules for grad, div and curl.

  • If g is a scalar and v is a vector, then
the divergence of gv is
$$\nabla \cdot (g\mathbf{v}) = g\,\nabla \cdot \mathbf{v} + \mathbf{v} \cdot \nabla g$$
the curl of gv is
$$\nabla \times (g\mathbf{v}) = g\,(\nabla \times \mathbf{v}) + (\nabla g) \times \mathbf{v}$$
  • If u and v are both vectors then
the gradient of their dot product is
$$\nabla(\mathbf{u} \cdot \mathbf{v}) = \mathbf{u} \times (\nabla \times \mathbf{v}) + \mathbf{v} \times (\nabla \times \mathbf{u}) + (\mathbf{u} \cdot \nabla)\mathbf{v} + (\mathbf{v} \cdot \nabla)\mathbf{u}$$
the divergence of their cross product is
$$\nabla \cdot (\mathbf{u} \times \mathbf{v}) = \mathbf{v} \cdot (\nabla \times \mathbf{u}) - \mathbf{u} \cdot (\nabla \times \mathbf{v})$$
the curl of their cross product is
$$\nabla \times (\mathbf{u} \times \mathbf{v}) = (\mathbf{v} \cdot \nabla)\mathbf{u} - (\mathbf{u} \cdot \nabla)\mathbf{v} + \mathbf{u}\,(\nabla \cdot \mathbf{v}) - \mathbf{v}\,(\nabla \cdot \mathbf{u})$$


We can also write chain rules. In the general case, when both functions are vectors and the composition is defined, we can use the Jacobian defined earlier.

๐ฎ(๐ฏ)|๐ซ=๐‰๐ฏ๐ฏ|๐ซ

where Ju is the Jacobian of u at the point v.

Normally J is a matrix, but if either the range or the domain of u is R^1 then it becomes a vector. In these special cases we can compactly write the chain rule using only vector notation.

  • If g is a scalar function of a vector and h is a scalar function of g then
$$\nabla h(g) = \frac{dh}{dg}\,\nabla g$$
  • If g is a scalar function of a vector then
$$\nabla = (\nabla g)\,\frac{d}{dg}$$

This substitution can be made in any of the equations containing ∇.

Second order differentials

We can also consider dot and cross products of ∇ with itself, whenever they can be defined. Once we know how to simplify products of two ∇'s, we'll know how to simplify products with three or more.

The divergence of the gradient of a scalar f is

$$\nabla^2 f(x_1, x_2, \ldots, x_n) = \frac{\partial^2 f}{\partial x_1^2} + \frac{\partial^2 f}{\partial x_2^2} + \cdots + \frac{\partial^2 f}{\partial x_n^2}$$

This combination of derivatives is the Laplacian of f. It is commonplace in physics and multidimensional calculus because of its simplicity and symmetry.

We can also take the Laplacian of a vector,

2๐ฎ(x1,x2,xn)=2๐ฎx12+2๐ฎx22++2๐ฎxn2

The Laplacian of a vector is not the same as the gradient of its divergence:

$$\nabla(\nabla \cdot \mathbf{u}) - \nabla^2 \mathbf{u} = \nabla \times (\nabla \times \mathbf{u})$$

Both the curl of the gradient and the divergence of the curl are always zero.

$$\nabla \times \nabla f = 0 \qquad \nabla \cdot (\nabla \times \mathbf{u}) = 0$$

This pair of rules will prove useful.
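Both rules can be verified for generic fields; a sketch assuming SymPy (the helper function is our own, and the component names are arbitrary):

```python
from sympy import symbols, Function, Matrix, simplify

x, y, z = symbols('x y z')
X = [x, y, z]

def curl(u):
    return Matrix([u[2].diff(y) - u[1].diff(z),
                   u[0].diff(z) - u[2].diff(x),
                   u[1].diff(x) - u[0].diff(y)])

f = Function('f')(x, y, z)                  # a generic scalar field
grad_f = Matrix([f.diff(v) for v in X])
print(simplify(curl(grad_f).T))             # zero vector: curl grad f = 0

u = Matrix([Function(n)(x, y, z) for n in ('ux', 'uy', 'uz')])
print(simplify(sum(curl(u)[i].diff(X[i]) for i in range(3))))  # 0: div curl u = 0
```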

Integration

We have already considered differentiation of functions of more than one variable, which leads us to consider how we can meaningfully look at integration.

In the single variable case, we interpret the definite integral of a function to mean the area under the function. There is a similar interpretation in the multiple variable case: for example, if we have a paraboloid in R^3, we may want to look at the integral of that paraboloid over some region of the xy plane, which will be the volume under that surface and above that region.

Riemann sums

When looking at these forms of integrals, we look at the Riemann sum. Recall that in the one-variable case we divide the interval we are integrating over into rectangles and sum the areas of these rectangles as their widths get smaller and smaller. For the multiple-variable case, we need to do something similar, but the problem arises of how to split up R^2, or R^3, for instance.

To do this, we extend the concept of the interval and consider what we call an n-interval. An n-interval is a set of points in some rectangular region with sides of some fixed width in each dimension; that is, a set in the form {x ∈ R^n | a_i ≤ x_i ≤ b_i with i = 1, ..., n}, and its area/size/volume (which we simply call its measure to avoid confusion) is the product of the lengths of all its sides.

So, an n-interval in R^2 could be some rectangular partition of the plane, such as {(x, y) | x ∈ [0, 1] and y ∈ [0, 2]}. Its measure is 2.

If we are to consider the Riemann sum now in terms of sub-n-intervals of a region Ω, it is

$$\sum_{i;\, S_i \subset \Omega} f(\mathbf{x}_i^*)\, m(S_i)$$

where we have divided Ω into k sub-n-intervals S_i, m(S_i) is the measure of S_i, and x_i^* is a point in S_i. The index is important - we only perform the sum where S_i falls completely within Ω - any S_i that is not completely contained in Ω we ignore.

As we take the limit as k goes to infinity, that is, as we divide up Ω into finer and finer sub-n-intervals, if this sum is the same no matter how we divide up Ω, we get the integral of f over Ω, which we write

$$\int_\Omega f$$

For two dimensions, we may write

$$\iint_\Omega f$$

and likewise for n dimensions.

Iterated integrals

Thankfully, we need not always work with Riemann sums every time we want to calculate an integral in more than one variable. There are some results that make life a bit easier for us.

For R^2, if we have some region bounded between two functions of the other variable (so two curves in the form y = f(x) and y = g(x), or x = f(y) and x = g(y)), between constant boundaries (so, between x = a and x = b, or y = a and y = b), we have

$$\int_a^b \int_{f(x)}^{g(x)} h(x, y)\,dy\,dx$$

An important theorem (called Fubini's theorem) assures us that this integral is the same as

$$\iint_\Omega h,$$

where Ω is the region between the two curves.
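An iterated integral over a hypothetical region (our own choice, for illustration): the region between y = 0 and y = x^2 for x from 0 to 1, integrating h(x, y) = x + y. A sketch assuming SymPy:

```python
from sympy import symbols, integrate, sqrt

x, y = symbols('x y')

# Inner integral in y, then outer integral in x.
inner = integrate(x + y, (y, 0, x**2))
print(integrate(inner, (x, 0, 1)))  # 7/20

# Fubini: integrating over the same region in the other order agrees.
print(integrate(integrate(x + y, (x, sqrt(y), 1)), (y, 0, 1)))  # 7/20
```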

Order of integration

In some cases the first integral of the entire iterated integral is difficult or impossible to solve; therefore, it can be to our advantage to change the order of integration:

$$\int_a^b \int_{f(x)}^{g(x)} h(x, y)\,dy\,dx$$

$$\int_c^d \int_{e(y)}^{f(y)} h(x, y)\,dx\,dy$$

As of this writing, there is no set method to change an order of integration from dx dy to dy dx or to some other variable order. For a region whose x and y boundaries are simple, it is possible to change the order by simply switching the limits of integration around; in non-simple cases, the best method so far is to recreate the limits of integration from the graph of the region of integration.

In higher-order integration, where the region can't be graphed, the process can be very tedious. For example, dx dy dz can be rewritten as dz dy dx, but first dx dy dz must be switched to dy dx dz, then to dy dz dx, and then to dz dy dx (though since three-dimensional regions can be graphed, it is usually easier to work from the graph).

Parametric integrals

If we have a vector function u of a scalar parameter s, we can integrate with respect to s simply by integrating each component of u separately.

$$\mathbf{v}(s) = \int \mathbf{u}(s)\,ds \quad \Leftrightarrow \quad v_i(s) = \int u_i(s)\,ds$$

Similarly, if u is given as a function of a vector of parameters, s, lying in R^n, integration with respect to the parameters reduces to a multiple integral of each component.

Line integrals

In one dimension, saying we are integrating from a to b uniquely specifies the integral.

In higher dimensions, saying we are integrating from a to b is not sufficient. In general, we must also specify the path taken between a and b.

We can then write the integrand as a function of the arclength along the curve, and integrate by components.

E.g., given a scalar function h(r) we write

$$\int_C h(\mathbf{r})\,d\mathbf{r} = \int_C h(\mathbf{r})\,\frac{d\mathbf{r}}{ds}\,ds = \int_C h(\mathbf{r}(s))\,\mathbf{t}(s)\,ds$$

where C is the curve being integrated along, and t is the unit vector tangent to the curve.

There are some particularly natural ways to integrate a vector function, u, along a curve,

C๐ฎdsC๐ฎd๐ซC๐ฎ×d๐ซC๐ฎ๐งds

where the third possibility only applies in 3 dimensions.

Again, these integrals can all be written as integrals with respect to the arclength, s.

C๐ฎd๐ซ=C๐ฎ๐ญdsC๐ฎ×d๐ซ=C๐ฎ×๐ญds

If the curve is planar and u a vector lying in the same plane, the second integral can be usefully rewritten. Say

๐ฎ=ut๐ญ+un๐ง+ub๐›

where t, n, and b are the tangent, normal, and binormal vectors uniquely defined by the curve.

Then

๐ฎ×๐ญ=๐›un+๐งub

For the 2-d curves specified, b is the constant unit vector normal to their plane, and u_b is always zero.

Therefore, for such curves,

C๐ฎ×d๐ซ=C๐ฎ๐งds

Green's Theorem

Let C be a piecewise smooth, simple closed curve that bounds a region S on the Cartesian plane. If two functions M(x, y) and N(x, y) are continuous, and their partial derivatives are continuous, then

$$\iint_S \left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right) dA = \oint_C M\,dx + N\,dy = \oint_C \mathbf{F} \cdot d\mathbf{r}$$

where F = (M, N).

In order for Green's theorem to work there must be no singularities in the vector field within the boundaries of the curve.

Green's theorem works by summing the circulation in each infinitesimal segment of area enclosed within the curve.
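As a sanity check, here is a sketch (assuming SymPy) verifying Green's theorem for the hypothetical fields M = -y, N = x over the unit square [0, 1] x [0, 1]:

```python
from sympy import symbols, integrate

x, y = symbols('x y')
M, N = -y, x

# Left side: double integral of N_x - M_y over the square.
area_side = integrate(integrate(N.diff(x) - M.diff(y), (x, 0, 1)), (y, 0, 1))

# Right side: circulation of M dx + N dy around the boundary, counterclockwise.
bottom = integrate(M.subs(y, 0), (x, 0, 1))   # along y = 0, dy = 0
right  = integrate(N.subs(x, 1), (y, 0, 1))   # along x = 1, dx = 0
top    = integrate(M.subs(y, 1), (x, 1, 0))   # along y = 1, right to left
left   = integrate(N.subs(x, 0), (y, 1, 0))   # along x = 0, top to bottom

print(area_side, bottom + right + top + left)  # both 2
```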

Inverting differentials

We can use line integrals to calculate functions with specified divergence, gradient, or curl.

  • If grad V = u, then
$$V(\mathbf{p}) = \int_{\mathbf{p}_0}^{\mathbf{p}} \mathbf{u} \cdot d\mathbf{r} + h(\mathbf{p})$$
where h is any function of zero gradient, and curl u must be zero.
  • If div u = V, then
$$\mathbf{u}(\mathbf{p}) = \int_{\mathbf{p}_0}^{\mathbf{p}} V\,d\mathbf{r} + \mathbf{w}(\mathbf{p})$$
where w is any function of zero divergence.
  • If curl u = v, then
$$\mathbf{u}(\mathbf{p}) = \frac{1}{2} \int_{\mathbf{p}_0}^{\mathbf{p}} \mathbf{v} \times d\mathbf{r} + \mathbf{w}(\mathbf{p})$$
where w is any function of zero curl.

For example, if V = r^2 then

$$\operatorname{grad} V = 2(x, y, z) = 2\mathbf{r}$$

and

๐ŸŽ๐ซ2๐ฎd๐ฎ=๐ŸŽ๐ซ2(udu+vdv+wdw)=[u2]๐ŸŽ๐ซ+[v2]๐ŸŽ๐ซ+[w2]๐ŸŽ๐ซ=x2+y2+z2=r2

so this line integral of the gradient gives the original function.

Similarly, if v = k then

$$\mathbf{u}(\mathbf{p}) = \frac{1}{2} \int_{\mathbf{p}_0}^{\mathbf{p}} \mathbf{k} \times d\mathbf{r}$$

Consider any curve from 0 to p = (x, y, z), given by r = r(s) with r(0) = 0 and r(S) = p for some S, and do the above integral along that curve.

๐ฎ(๐ฉ)=0S๐ค×d๐ซdsds=0S(drxds๐ฃdryds๐ข)ds=๐ฃ0Sdrxdsds๐ข0Sdrydsds=๐ฃ[rx(s)]0S๐ข[ry(s)]0S=px๐ฃpy๐ข=x๐ฃy๐ข

and curl u is

$$\frac{1}{2}\begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ D_x & D_y & D_z \\ -y & x & 0 \end{vmatrix} = \mathbf{k} = \mathbf{v}$$

as expected.

We will soon see that these three integrals do not depend on the path, apart from a constant.

Surface and Volume Integrals

Just as with curves, it is possible to parameterise surfaces and then integrate over those parameters without regard to the geometry of the surface.

That is, to integrate a scalar function V over a surface A parameterised by r and s, we calculate

$$\int_A V(x, y, z)\,dS = \iint_A V(r, s)\,\det J\,dr\,ds$$

where J is the Jacobian of the transformation to the parameters.

To integrate a vector this way, we integrate each component separately.

However, in three dimensions, every surface has an associated normal vector n, which can be used in integration. We write dS=ndS.

For a scalar function, V, and a vector function, v, this gives us the integrals

AV๐๐’A๐ฏ๐๐’A๐ฏ×๐๐’

These integrals can be reduced to parametric integrals but, written this way, it is clear that they reflect more of the geometry of the surface.

When working in three dimensions, dV is a scalar, so there is only one option for integrals over volumes.

Gauss's divergence theorem

We know that, in one dimension,

$$\int_a^b Df\,dx = f\big|_a^b$$

Integration is the inverse of differentiation, so integrating the differential of a function returns the original function.

This can be extended to two or more dimensions in a natural way, drawing on the analogies between single variable and multivariable calculus.

The analog of D is ∇, so we should consider cases where the integrand is a divergence.

Instead of integrating over a one-dimensional interval, we need to integrate over an n-dimensional volume.

In one dimension, the integral depends on the values at the edges of the interval, so we expect the result to be connected with values on the boundary.

This suggests a theorem of the form,

V๐ฎdV=V๐ง๐ฎdS

This is indeed true, for vector fields in any number of dimensions.

This is called Gauss's theorem.

There are two other, closely related, theorems for grad and curl:

  • $$\int_V \nabla u\,dV = \oint_{\partial V} u\,\mathbf{n}\,dS,$$
  • $$\int_V \nabla \times \mathbf{u}\,dV = \oint_{\partial V} \mathbf{n} \times \mathbf{u}\,dS,$$

with the last theorem only being valid where curl is defined.
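A numerical sanity check of the divergence theorem (a sketch assuming SymPy; the field and the unit-cube volume are our own illustrative choices):

```python
from sympy import symbols, integrate

x, y, z = symbols('x y z')
u = (x * y, y**2, z**3)   # a hypothetical field on the unit cube [0,1]^3

div_u = u[0].diff(x) + u[1].diff(y) + u[2].diff(z)
volume = integrate(div_u, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Flux through the six faces, with outward normals.
flux  = integrate(u[0].subs(x, 1) - u[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
flux += integrate(u[1].subs(y, 1) - u[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
flux += integrate(u[2].subs(z, 1) - u[2].subs(z, 0), (x, 0, 1), (y, 0, 1))

print(volume, flux)  # both 5/2
```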

Stokes' curl theorem

These theorems also hold in two dimensions, where they relate surface and line integrals. Gauss's divergence theorem becomes

S๐ฎdS=S๐ง๐ฎds

where s is arclength along the boundary curve, and the vector n is the unit normal to the curve that lies in the surface S, i.e. in the tangent plane of the surface at its boundary. This is not necessarily the same as the unit normal associated with the boundary curve itself.

Similarly, we get

S×๐ฎdS=C๐ง×๐ฎds(1),

where C is the boundary of S.

In this case the integral does not depend on the surface S.

To see this, suppose we have different surfaces, S1 and S2, spanning the same curve C, then by switching the direction of the normal on one of the surfaces we can write

S1+S2×๐ฎdS=S×๐ฎdSS×๐ฎdS(2)

The left hand side is an integral over a closed surface bounding some volume V so we can use Gauss's divergence theorem.

S1+S2×๐ฎdS=V×๐ฎdV

but we know this integrand is always zero, so the right hand side of (2) must always be zero, i.e. the integral is independent of the surface.

This means we can choose the surface so that the normal to the curve lying in the surface is the same as the curve's intrinsic normal.

Then, if u itself lies in the surface, we can write

๐ฎ=(๐ฎ๐ง)๐ง+(๐ฎ๐ญ)๐ญ

just as we did for line integrals in the plane earlier, and substitute this into (1) to get

S×๐ฎdS=C๐ฎd๐ซ

This is Stokes' curl theorem.
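As a final sanity check, here is a sketch (assuming SymPy; the field and surface are our own illustrative choices) verifying Stokes' theorem for u = (-y, x, 0), whose curl is (0, 0, 2), over the unit disc in the plane z = 0:

```python
from sympy import symbols, integrate, sin, cos, pi

t, r, s = symbols('t r s')

# Surface side: the disc has normal k, so curl u . dS = 2 dA,
# integrated in polar coordinates (r, s).
surface_side = integrate(integrate(2 * r, (r, 0, 1)), (s, 0, 2 * pi))

# Boundary side: the unit circle r(t) = (cos t, sin t, 0), so
# u . dr/dt = (-sin t)(-sin t) + (cos t)(cos t) = 1.
line_side = integrate((-sin(t)) * (-sin(t)) + cos(t) * cos(t), (t, 0, 2 * pi))

print(surface_side, line_side)  # both 2*pi
```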