MATH3401 — Complex Analysis

Table of Contents

Lecture 1 — The Complex Field

Lecture 1 — The Complex Field

Joseph Grotowski: [email protected]. 06-206 (Tues 10-12, Thurs 11-12, or by appointment).

Introduction

This course is compulsory for many students. Its prerequisites are MATH2000 and MATH2400 (or equivalents). This is the capstone course for math majors and math undergraduate degrees.

The techniques and problem solving strategies in this course will be beneficial in many ways.

Lecture recordings will be on the Blackboard. Tutorials start in week 1.

Assessment

The best 5 of 6 assignments together count for 20%. The midsemester counts for 20% (one page of handwritten notes is allowed, single-sided). The final exam counts for 60% (one page of handwritten notes, double-sided).

Complex analysis

Cool stuff

A ‘nice’ result which shows all different parts of maths coming together: e^{i\pi} = -1.

There are a number of fascinating things about this equation. Another example: despite i not appearing in the expression \int_{0}^\infty \frac{\sin x}{x}\,dx = \frac \pi 2, it still belongs to complex analysis. A reasonable question is whether this integral even converges: if we replace \sin x with 1, the integral diverges by the p-test. Proving the integral exists is a bad time without complex analysis, but is really really nice with it. This will be done towards the end of the semester, making use of contour integrals around a path in the complex plane.

In the more applied realm, we can also do things with fluid flow. A very expensive method would be constructing a physical model and then running experiments. With complex analysis, we can perform the analysis on a straight pipe, then map to the pipe above without having to build the channel. We can just tweak the parameters in the map to test different scenarios. This is called a conformal transformation.

Similarly, Joukowski transformations can be used to model air flow around a wing.

We can also get nice results about series like \begin{aligned} \frac 1 {1^2} + \frac 1{2^2} + \frac1 {3^2} + \cdots &= \frac {\pi^2} 6 \\ \frac 1 {1^2} - \frac 1{2^2} + \frac1 {3^2} - \frac 1 {4^2} +\cdots &= \frac {\pi^2} {12} \\ \sum_{k=1}^\infty \frac 1 {1 + 4 k^2 \pi^2}& = \frac 1 2 \left(\frac 1 {e-1} - \frac 1 2\right) \end{aligned}
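As a rough numerical sanity check (my own sketch in standard-library Python, not part of the lectures), the first two sums can be approximated by truncating at a large number of terms:

```python
import math

N = 100_000
s1 = sum(1 / k**2 for k in range(1, N + 1))                 # 1/1^2 + 1/2^2 + ...
s2 = sum((-1) ** (k + 1) / k**2 for k in range(1, N + 1))   # alternating version

print(s1, math.pi**2 / 6)     # both ~ 1.6449
print(s2, math.pi**2 / 12)    # both ~ 0.8225
```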

Riemann Zeta

\zeta (s) = \sum_{n=1}^\infty \frac 1 {n^s} = \prod_{p\ \text{prime}}(1-p^{-s})^{-1} (The product over primes is due to Euler. This is called the Riemann zeta function.)
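A similar hedged check (my own sketch; truncating both the sum and the product, and taking real s > 1, are assumptions of the sketch, not part of the definition):

```python
def zeta_sum(s, terms=200_000):
    # truncated Dirichlet series
    return sum(n ** (-s) for n in range(1, terms + 1))

def zeta_product(s, prime_limit=1000):
    # truncated Euler product, using a naive prime test
    primes = [p for p in range(2, prime_limit + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    prod = 1.0
    for p in primes:
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

print(zeta_sum(2.0), zeta_product(2.0))   # both close to pi^2/6 ~ 1.6449
```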

Riemann hypothesis: \zeta has infinitely many non-trivial zeros and they all lie on the line \operatorname{Re}(s) = 1/2.

Note that the expression for \zeta only makes sense for \operatorname{Re}(s) > 1, so we need to extend it to \mathbb C via analytic continuation. In doing this, the trivial zeros are -2, -4, -6, \dots

Lecture 2 — Complex Numbers

Lecture 2 — Complex Numbers

\renewcommand\Re{\operatorname{Re}} \renewcommand\Im{\operatorname{Im}}

Complex numbers have been around for a while.

C\mathbb C as a field

B.C section 1-3

\begin{aligned} \mathbb N &= \{1, 2, 3, \ldots \} \\ \mathbb N_0 &= \{0, 1, 2, 3, \ldots \} \\ \mathbb Z &= \{0, \pm1, \pm2, \pm3, \ldots \} \\ \mathbb Q &= \{p/q : p, q \in \mathbb Z, q \ne 0 \} \\ \mathbb R &= \text{real numbers} \\ \mathbb C &= \text{complex numbers} \end{aligned}

Note that Q\mathbb Q is actually equivalence classes of “quotients” of integers because certain expressions are equivalent (see MATH2401). R\mathbb R can be defined in several technical ways, such as Dedekind cuts or limits of sequences.

C\mathbb C can be represented in various (equivalent) ways:

ii is the complex number represented by (0,1)(0,1). We say RC\mathbb R \subset \mathbb C by identifying the complex number x+0ix + 0i with the real number xx.

Addition in C\mathbb C

(x1,y1)+(x2,y2)=(x1+x2,y1+y2)(x1+iy1)+(x2+iy2)=(x1+x2)+i(y1+y2) \begin{aligned} (x_1, y_1) + (x_2, y_2) &= (x_1 + x_2, y_1 + y_2) \\ (x_1 + iy_1) + (x_2 + iy_2) &= (x_1 + x_2) + i(y_1 + y_2) \end{aligned}

Multiplication in C\mathbb C

Denoted by ×\times or \cdot or juxtaposition (that is, putting things next to each other). (x1,y1)(x2,y2)=(x1x2y1y2,y1x2+x1y2)(x1+iy1)(x2+iy2)=(x1x2y1y2)+i(y1x2+x1y2) \begin{aligned} (x_1, y_1)\cdot(x_2, y_2) &= (x_1x_2 - y_1y_2, y_1x_2 + x_1y_2) \\ (x_1 + iy_1) \cdot (x_2 + iy_2) &= (x_1 x_2 - y_1y_2) + i(y_1x_2 + x_1y_2) \end{aligned} The definition of multiplication formally applies if we use the usual rules for algebra in R\mathbb R and set i2=1i^2 = -1.

Note: Multiplication of two complex numbers sums their angles (where positive is CCW) and multiplies their moduli.

C\mathbb C is a field

With this addition and multiplication, C\mathbb C is a field. Check: C\mathbb C must be closed under the binary operations ++ and \cdot.

F2: ++ has identity 0+0i0+0i and inverse (x)+i(y)(-x) + i(-y). F5: \cdot has identity 1+0i1 + 0i and inverse z1=1/(x+iy)(xiy)/(xiy)=xx2+y2iyx2+y2 z^{-1} = 1/(x+iy) \cdot (x-iy)/(x-iy) = \frac x{x^2+y^2} - i\frac{y}{x^2+y^2}

Since \mathbb C is a field, the following holds: z_1 z_2 = 0 \implies z_1 = 0 \text{ or } z_2 = 0. This is the null-factor law and holds in any field. Also, we have (z_1z_2)^{-1} = z_1^{-1}z_2^{-1}.

Note: i2=1i^2 = -1 and (i)2=1(-i)^2 = -1. These are the only two solutions of z2=1z^2 = -1 in the complex numbers (we cannot check this yet). This is due to the Fundamental Theorem of Algebra.

Remark: \mathbb C is not ordered and, in fact, cannot be ordered. Thus, i is no more special than -i.

B.C. 4, 5

Given z = x + iy \in \mathbb C, there are a few useful functions to have:
- modulus: |\cdot| : \mathbb C \to [0, \infty), where |z| = \sqrt{x^2 + y^2},
- real part: \operatorname{Re}(z) = x, and imaginary part: \operatorname{Im}(z) = y (both \mathbb C \to \mathbb R).

Lecture 3 — Functions of Complex Numbers

Lecture 3 — Functions of Complex Numbers

Complex conjugate

The complex conjugate is defined as a function ˉ:CC\bar \cdot : \Complex \to \Complex, where (x+iy)(xiy)(x + iy) \mapsto (x - iy). Geometrically, this reflects a complex number about the real axis.

Properties

z=zˉ    Im(z)=0 (i.e. zR)(zˉ)=zzw=zˉwˉz+w=zˉ+wˉz1=(zˉ)1,z0z2=zzˉRe(z)=z+zˉ2Im(z)=zzˉ2 \begin{aligned} z = \bar z \iff \operatorname{Im}(z)&= 0 \text{ (i.e. z} \in \mathbb R \text{)} \\ \overline {(\bar z)} &= z \\ \overline {zw} &= \bar z \bar w \\ \overline{z+w} &= \bar z + \bar w \\ \overline {z^{-1}} &= (\bar z)^{-1}, z \ne 0 \\ |z|^2 &= z \bar z \\ \operatorname{Re}(z) &= \frac{z + \bar z} 2 \\ \operatorname{Im}(z) &= \frac{z - \bar z} 2 \end{aligned}

A very useful property (from MATH1051) is the triangle inequality: z+wz+w. |z+w| \le |z| + |w|. Proof: More specifically using the cosine rule, z+w2=z2+w22zwcosA. |z+w|^2 = |z|^2 + |w|^2 - 2|z||w|\cos A. This is a true and exact statement. However, in analysis, we often want to make these statements less precise but more useful. Because 1cos1-1 \le \cos \le 1, z+w2z2+w2+2zw=(z+w)2    z+wz+w \begin{aligned} |z+w|^2 &\le |z|^2 + |w|^2 + 2|z||w| \\ &= (|z| + |w|)^2\\ \implies |z+w| &\le |z| + |w| \end{aligned}

Polar coordinates

B.C. 6-9

Given a complex number z=x+iyz = x+iy, we can find rr and θ\theta such that x=rcosθ, and  y=rsinθ. x = r \cos \theta, \text{ and }\ y = r \sin \theta. Then, we can also write it using Euler’s formula (as a formal convention for the moment): z=reiθ=r(cosθ+isinθ). z = re^{i\theta} = r(\cos \theta + i \sin \theta). Remark: this formula follows formally from the Taylor series of eiθe^{i\theta}.

Here, θ\theta is an (as opposed to the) argument of the complex number zz. We write θ=argz\theta = \arg z. Here, arg\arg is not a (single-valued) function. Given a θ\theta, we can always take θ+2π\theta + 2\pi which will satisfy the xx and yy equations. Also, for z=0z=0, any θ\theta will work.

To make \arg a function, we need to restrict its range. There are two common options: 0 to 2\pi and -\pi to \pi. In complex analysis, we normally use the second. Specifically, \operatorname{Arg} z is defined to be the unique value of \theta such that -\pi < \theta \le \pi.

Examples: - Arg(1+i)=π/4\operatorname{Arg}(1+i) = \pi / 4 but arg(1+i)=,7π/4,π/4,9π/4,\arg (1+i) = \ldots, -7\pi/4, \pi/4, 9\pi/4, \ldots. - Arg(1)=π\operatorname{Arg}(-1) = \pi. - Arg(0)\operatorname{Arg}(0) is undefined, but arg(0)=R\arg (0) = \mathbb R.

In summary, Arg\operatorname{Arg} is a function C{0}(π,π]\mathbb C \setminus \{0\} \to (-\pi, \pi]. Alternative notation for C{0}\mathbb C \setminus \{0\} is C\mathbb C^* or C\mathbb C_*.

Note: \begin{aligned} |e^{i\theta}| &= 1 \text{ (easy to check)}\\ (e^{i\theta})^{-1} = e^{-i\theta} &= \overline {e^{i\theta}} \\ (re^{i\theta})(\rho e^{i\phi}) &= (r\rho) e^{i(\theta + \phi)} \\ \implies |zw| &= |z| |w|,\enspace \arg (zw) = \arg z + \arg w \end{aligned} However, the last equality does not necessarily hold for \operatorname{Arg}. For example, with z = w = (-1 + i)/\sqrt 2, we have \operatorname{Arg} z + \operatorname{Arg} w = 3\pi/2 but \operatorname{Arg}(zw) = \operatorname{Arg}(-i) = -\pi/2.

De Moivre’s formula

z=reiθ    zn=rneinθ,nZ. z = re^{i\theta} \implies z^n = r^n e^{in\theta}, \quad n \in \mathbb Z. In particular, einθ=(cosθ+isinθ)n=cos(nθ)+isin(nθ). e^{in\theta} = (\cos \theta + i \sin \theta)^n = \cos(n\theta) + i \sin (n\theta).

Lecture 4 — Functions as Mappings

Lecture 4 — Functions as Mappings

Roots of a complex number

What are the numbers w such that w^n gives us back the original number z? By the fundamental theorem of algebra, we know there are exactly n n-th roots in \mathbb C.

Consider the nn-th roots of z=reiθz = re^{i\theta}, for zCz \in \mathbb C_*. That is, we want all wCw \in \mathbb C such that wn=zw^n = z. Notation: exp(ξ)=eξ\exp (\xi) = e^\xi.

Then, we can use de Moivre’s theorem “in reverse” to see that zz has nn distinct roots: {r1/nexp(iθn),r1/nexp(iθn+i2πn),,r1/nexp(iθn+i2π(n1)n)} \left\{ r^{1/n}\exp\left(\frac{i\theta}n\right), r^{1/n}\exp\left(\frac{i\theta}n + \frac{i2\pi}n\right), \ldots, r^{1/n}\exp\left(\frac{i\theta}n + \frac{i2\pi(n-1)}n\right) \right\}
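A small illustrative sketch (standard-library Python, assuming z \ne 0): compute the n distinct n-th roots from the formula above and confirm each one recovers z when raised to the n-th power.

```python
import cmath

def nth_roots(z, n):
    r, theta = cmath.polar(z)           # z = r e^{i theta}
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

z = 1 - 1j
for w in nth_roots(z, 3):
    print(w, w ** 3)                    # each w**3 reproduces z up to rounding
```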

B.C. 13 (8th ed 12-13)

Functions & mappings

Suppose we have \Omega \subseteq \mathbb C; a function f : \Omega \to \mathbb C can be viewed as a mapping on \Omega, the domain of f. If \Omega is not specified, then we take \Omega to be as large as possible.

Example: For f(z) = 1/z we can take \Omega = \mathbb C \setminus\{0\}, so f : \mathbb C_* \to \mathbb C. As notation, we can also write f : z \mapsto 1/z, or w = 1/z, or just 1/z if the meaning is clear.

The usual notation is w:(x,y)(u,v)w : (x,y) \mapsto (u,v), i.e. w(x+iy)=u(x+iy)+iv(x+iy)w(x+iy)=u(x+iy)+iv(x+iy) or w(x,y)=u(x,y)+iv(x,y)w(x,y)=u(x,y)+iv(x,y).

This notation is not completely rigorous; uu is both a function from C\mathbb C and from R2\mathbb R^2. We could introduce a map φ:(x,y)(x+iy)\varphi : (x,y)\mapsto (x+iy) but this is excessively verbose. There is no real problem with this, but be aware.

Definitions

Examples:
- Consider f(z) = 1/z. \operatorname{dom}f = \mathbb C_*. The inverse f^{-1}(\xi) = 1/\xi is a function \mathbb C_* \to \mathbb C_*.
- For g(z) = 1/(1-|z|^2), \operatorname{dom}g = \mathbb C \setminus \{z : |z| = 1\}, so g : \{z : |z| \ne 1\} \to \mathbb R. The inverse is not a function.
- For h(z) = z^n where h : \mathbb C \to \mathbb C, the inverse is also not a function.

Geometric intuition

Let’s aim to get a geometric picture of what a given ff does.

Examples: - w=1+zw = 1+z moves each point one unit to the right (in the positive real direction). - reiθrei(θ+π/2)re^{i\theta} \mapsto re^{i(\theta+\pi/2)} rotates points through an angle of π/2\pi/2 in the counter-clockwise direction about the origin.

For new and unfamiliar mappings, break them down into compositions of known or easy maps.

Examples: - w=Az+bw = Az + b where A,bCA, b \in \mathbb C and A0A \ne 0. We can think of AA as a dilation and rotation, then +b+b as a translation. - For zAzz \mapsto Az, write A=aeiαA=a e^{i\alpha} for α,aR\alpha, a \in \mathbb R. This gives us reiθarei(θ+α)re^{i\theta}\mapsto ar e^{i(\theta+\alpha)}. Specifically, it dilates the modulus by a factor of a=Aa=|A| and rotates through α=argA\alpha = \arg A. - For zz+bz \mapsto z+b where b=b1+b2ib = b_1+b_2i, b1,b2Rb_1, b_2 \in \mathbb R. This translates b1b_1 to the right and b2b_2 up. If negative, goes in the opposite direction.

Note: The maps above have domain and image C\mathbb C.

Lecture 5 — Mappings 2

Lecture 5 — Mappings 2

Another very important map to look at is z1/zz \mapsto 1/z on C\mathbb C_*. We can write this as the composition of two slightly more complicated functions.

Define ξ(z)=z/z2\xi(z) = z/|z|^2 on C\mathbb C_* and η(z)=zˉ\eta(z) = \bar z. For zCz \in \mathbb C_*, we can compose these two as ηξ(z)=η(ξ(z))=(zz2)=zˉz2=zˉzzˉ=1z. \begin{aligned} \eta \circ \xi(z) = \eta(\xi(z)) = \overline {\left(\frac z {|z|^2}\right)} = \frac{\bar z}{|z|^2} = \frac {\bar z}{z \bar z} = \frac 1 z. \end{aligned}

ξ\xi is called inversion, with respect to the unit circle. η\eta is just reflection about the real axis.

For w = 1/z = \bar z / |z|^2 we can write it as x+iy \mapsto u+iv, where w=\frac{x-iy}{x^2 + y^2} \quad\implies\quad u = \frac{x}{x^2+y^2}, \quad v=\frac{-y}{x^2+y^2}. We can use this to show the following statement: 1/z maps circles and lines in the z-plane to circles and lines in the w-plane. Note that this does not require circles to map to circles, or lines to map to lines.

The key point is that both circles and lines in the z-plane can be represented as A(x^2+y^2)+Bx+Cy+D=0,\quad\text{ where } B^2+C^2 > 4AD, for A,B,C,D \in \mathbb R. If A=0, the equation describes a line; if A \ne 0, it describes a circle, and the inequality constraint guarantees a positive radius: \left(x+\frac B {2A}\right)^2 + \left(y+\frac C {2A}\right)^2 = \left(\frac{\sqrt{B^2+C^2-4AD}}{2A}\right)^2. Note, for w = 1/z, the u and v expressions earlier tell us that w satisfies D(u^2+v^2)+Bu-Cv+A=0, which is again a circle or line.
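The following sketch (my own, standard-library Python; the particular coefficients are arbitrary) samples points on a circle that avoids the origin, applies w = 1/z, and checks that the images satisfy an equation of the stated form:

```python
import cmath

A, B, C, D = 1.0, -2.0, 0.0, 0.5        # a circle: A(x^2+y^2) + Bx + Cy + D = 0
# its centre and radius: (x + B/2A)^2 + (y + C/2A)^2 = (B^2 + C^2 - 4AD) / (2A)^2
cx, cy = -B / (2 * A), -C / (2 * A)
rad = ((B**2 + C**2 - 4 * A * D) ** 0.5) / (2 * A)

for k in range(6):
    z = complex(cx, cy) + rad * cmath.exp(2j * cmath.pi * k / 6)
    w = 1 / z
    u, v = w.real, w.imag
    print(D * (u**2 + v**2) + B * u - C * v + A)   # all ~ 0
```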

Terminology

Examples: Affine transformations are bijections \mathbb C \to\mathbb C, and 1/z is a bijection \mathbb C_* \to \mathbb C_*.

Möbius transformations

B.C. 99 (8th ed 93)

Let a, b, c, d \in \mathbb C where ad-bc \ne 0. Then, w = T(z) = \frac{az+b}{cz+d} is called a Möbius (or linear fractional) transformation. The natural domain of definition is
- if c = 0, then \operatorname{dom}w = \mathbb C (because c = 0\implies d \ne 0), or
- if c \ne 0, then \operatorname{dom}w = \mathbb C \setminus \{-d/c\}.

Let’s try to understand TT geometrically.

Claim: T is injective and surjective (on its natural domain). Proof. For c = 0, to prove injectivity suppose T(z) = T(\xi); we want to show z = \xi. Substituting into the formula for T, \frac a d z + \frac b d = \frac a d \xi + \frac b d \implies z = \xi. To prove it is surjective, given w \in \mathbb C, we need z \in \mathbb C such that T(z) = w. The value z = \frac da\left(w-\frac bd\right) satisfies this.

For c \ne 0, consider \begin{aligned} w &= \frac{az+b}{cz+d} = \frac{a(z+d/c) - ad/c + b}{c(z+d/c)}\\ &= \frac a c + \left(\frac{bc-ad}c\right)\frac 1 {cz+d}. \end{aligned} This is a composition of a linear transformation, 1/z and another linear transformation.

Thus, T is the composition of linear and 1/z maps. That is, Z_1 = cz+d, \quad W = 1/Z_1, \quad w = \frac a c + \frac{bc-ad}cW. In both cases, Möbius transformations are compositions of maps previously studied. This means they are bijective on their natural domains.
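A quick numerical check (my own sketch; the coefficients are arbitrary, subject only to ad - bc \ne 0) that the three-step decomposition really reproduces T:

```python
a, b, c, d = 2 + 1j, -1, 1j, 3           # arbitrary coefficients
assert a * d - b * c != 0

def T(z):
    return (a * z + b) / (c * z + d)

def T_composed(z):
    Z1 = c * z + d                        # first linear map
    W = 1 / Z1                            # 1/z map
    return a / c + ((b * c - a * d) / c) * W   # second linear map

for z in (0, 1 + 1j, -2 + 0.5j):
    print(T(z), T_composed(z))            # agree up to rounding
```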

Lecture 6 — Möbius Transformations 2 & The Extended Complex Plane

Lecture 6 — Möbius Transformations 2 & The Extended Complex Plane

Recall that T(z)=w=az+bcz+dT(z) = w = \frac{az+b}{cz+d} (adbc0ad-bc \ne 0) is a Möbius transformation.

It can be rewritten as Azw + Bz + Cw + D = 0 where A = c, B = -a, C = d, D = -b. This is called the implicit form.

Recall that case 1 was c=0c = 0, which reduces TT to a linear transformation which is a bijection CC\mathbb C \to \mathbb C. Case 2 was also a bijection from C{d/c}C{a/c}\mathbb C \setminus \{-d/c\} \to \mathbb C \setminus \{a/c\}, with inverse T1(w)=dw+bcwa. T^{-1}(w) = \frac{-dw+b}{cw-a}.

A question might be can we extend TT to a function CC\mathbb C \to \mathbb C in case 2? In particular, such that the extension is injective and surjective. The answer is yes, by “plugging the hole”. We simply define T(d/c)=a/c. T(-d/c) = a/c. However, this is unsatisfying because the function becomes discontinuous.

An important concept: We are going to extend C\mathbb C to the extended complex plane, written Cˉ\bar{ \mathbb C}. This is done by adding a point at infinity, which is called \infty. We can think of the complex plane as a sphere with the origin at one pole and this \infty at the other, with distances expanding as you go further from 00.

We then define T(d/c)=T(-d/c) = \infty and T()=a/cT(\infty) = a/c. This extends TT to a map CˉCˉ\bar {\mathbb C }\to \bar {\mathbb C} which is injective and surjective.

Remark: Cˉ\bar {\mathbb C} is a topological space and the above extension is continuous. A topology on a set is a space with so-called “open sets”. Intuitively, points can be ‘nearby’ to other points.

Cˉ\bar {\mathbb C} can be visualised as the Riemann sphere. The origin 0+0i0+0i is at the south pole. A point on the complex plane is mapped uniquely to a point on the sphere. This is done by picking the point on the sphere’s surface on the line between the point and the north pole. “Infinity” can be thought of as the north pole.

A few final remarks on Möbius transformations. Given 3 distinct points z_1, z_2, z_3\in\bar{ \mathbb C} and 3 distinct target points w_1, w_2, w_3 \in \bar {\mathbb C}, there exists a unique Möbius transformation T such that T(z_1) = w_1, \ T(z_2)=w_2, \text{ and }T(z_3)=w_3. In fact, T is given by \frac{(w-w_1)(w_2-w_3)}{(w-w_3)(w_2-w_1)} = \frac{(z-z_1)(z_2-z_3)}{(z-z_3)(z_2-z_1)}. In practice, it may be easier to solve directly for a, b, c, d than to use the above expression.
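A sketch of this recipe (finite points only; the cases where one of the six points is \infty need the limiting interpretation noted below): solve the cross-ratio identity above for w to evaluate the Möbius map determined by three point pairs.

```python
def mobius_through(zs, ws):
    z1, z2, z3 = zs
    w1, w2, w3 = ws
    def T(z):
        # right-hand side of the cross-ratio identity, then solve the left for w
        R = (z - z1) * (z2 - z3) / ((z - z3) * (z2 - z1))
        return (w1 * (w2 - w3) - R * w3 * (w2 - w1)) / ((w2 - w3) - R * (w2 - w1))
    return T

T = mobius_through([0, 1, 1j], [1, 0, -1])
print(T(0), T(1), T(1j + 1e-9))   # ~ 1, 0, -1  (exactly at z_3 the cross-ratio blows up)
print(T(2 - 1j))                  # any other point, just to show the map is defined there
```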

Note: How does this work with infinity? T()=a/c    limzT(z)=a/cT(d/c)=    limzd/c1/T(z)=0 \begin{aligned} T(\infty) = a/c &\iff \lim_{|z|\to\infty} T(z) = a/c\\ T(-d/c) = \infty &\iff \lim_{z\to -d/c} 1/T(z) = 0 \end{aligned}

Lecture 7 — Exponential Maps

Lecture 7 — Exponential Maps

A note on coronavirus about the recent mail from Joanne Wright, the DVC(A).

Recall the Möbius transformation, and note that it is determined only up to scaling of its coefficients (here shown for \lambda > 0, though any \lambda \ne 0 works): w = \frac{az+b}{cz+d} = \frac{\lambda az+\lambda b}{\lambda cz+\lambda d}

Remark: Any Möbius transformation mapping the (open) upper half-plane onto the inside of the unit circle has the form w = e^{-i\alpha} \frac{z-z_0}{z-\bar z_0}\quad \text{ for some }\alpha \in \mathbb R, z_0 \in \mathbb C, \operatorname{Im} z_0 > 0.

Exponential map

B.C. 103 (8Ed 104)

z \mapsto e^z = \exp z = w, \quad \operatorname{dom} w = \mathbb C. Given z = x+iy for x, y \in \mathbb R, w = e^z = e^{x+iy} = e^x e^{iy} = e^x (\cos y + i \sin y) = u+iv\\[0.7em] \begin{aligned} \text{ where }\quad u &= e^x \cos y\\ v &= e^x \sin y. \end{aligned} This is easier to see by writing w = \rho e^{i\phi} where \rho = e^x, \phi = y + 2k\pi for k \in \mathbb Z. This function is periodic in \mathbb C (with period 2\pi i).
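A minimal check with the standard library (the choice of test point is arbitrary) of the polar description and the periodicity just mentioned:

```python
import cmath, math

z = 0.3 + 2.0j
w = cmath.exp(z)
print(abs(w), math.exp(z.real))          # the modulus of e^z is e^x
print(cmath.phase(w), z.imag)            # an argument of e^z is y (here y already lies in (-pi, pi])
print(cmath.exp(z + 2j * math.pi), w)    # exp is periodic with period 2*pi*i
```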

Images under exp

Properties

Many of the properties of the real \exp extend to \mathbb C, such as:
- e^0 = 1,
- e^{-z} = 1/e^z,
- e^{z_1+z_2} = e^{z_1}e^{z_2},
- e^{z_1-z_2} = e^{z_1}/e^{z_2},
- (e^{z_1})^n = e^{nz_1} for n \in \mathbb Z (general complex exponents require a choice of branch).

However, some things do not extend:
- e^x > 0~\forall x \in \mathbb R but, for example, e^{i\pi} = -1;
- x \mapsto e^x is monotone increasing for x \in \mathbb R but z \mapsto e^z is periodic with period 2\pi i.

Note: As in R\mathbb R, ez=0e^z = 0 has no solution in C\mathbb C. If there was some z=x+iyz = x+iy such that ez=0e^z = 0, then exeiy=0    ex=0e^x e^{iy} = 0 \implies e^x = 0 because eiy=1|e^{iy}| = 1, contradiction.

Inverses

B.C. 31-33 (8Ed 30-32)

We have a function f:ΩCf : \Omega \to \mathbb C. Then, g:RangefΩg : \operatorname{Range}f \to \Omega is an inverse of ff if gf:ΩΩg \circ f : \Omega \to \Omega is the identity. That is, (gf)(z)=z(g \circ f)(z) = z for all zΩz \in \Omega.

Example: zz+1z \mapsto z+1 and zz1z \mapsto z-1 are inverses for CC\mathbb C \to \mathbb C. z1/zz \mapsto 1/z is its own inverse CC\mathbb C_* \to \mathbb C_*.

Lecture 8 — Logarithm

Lecture 8 — Logarithm

Inverse of the exponential

The inverse of the exponential! It’s probably too much to hope for log=loge\log = \log_e to be the inverse, because exp\exp is periodic (with period 2πi2\pi i) in C\mathbb C.

Begin with ew=ze^w = z. Write z=reiΘ,r>0z = re^{i\Theta}, r > 0, where Θ=Argz(π,π]\Theta = \operatorname{Arg} z \in (-\pi, \pi].

We can make our calculations clearer by using polar coordinates in the domain and rectangular coordinates in the range. That is, w = u+iv \implies z=e^w = e^{u+iv}=e^ue^{iv}\\ \implies e^u = r,\quad v=\Theta + 2k\pi, \quad k \in \mathbb Z. So u = \ln r, which (notation in this course) means the logarithm base e of the positive real number r. Thus, \begin{aligned} w &= u+iv \\ &= \ln r + i(\Theta + 2k\pi), \quad k \in \mathbb Z \\ &= \ln |z| + i \arg z. \end{aligned} This defines the multi-valued function \log on \mathbb C_*, with \begin{aligned} \exp (\log z) &= z\\ \log(\exp z) &= z + 2k\pi i. \end{aligned} We can check that the properties of \log translate into \mathbb C. For example (note that these are statements about multi-valued functions):
- \log (z\xi) = \log z + \log \xi,
- \log (z / \xi) = \log z - \log \xi.

As with Arg\operatorname{Arg} and arg\arg, we can define the principal logarithm, denoted Log:CC\operatorname{Log} : \mathbb C_* \to \mathbb C_*, as Logz=lnz+iArgz \operatorname{Log} z = \ln |z| + i \operatorname{Arg} z This function is single-valued but has the disadvantage of being discontinuous on the negative real axis and 00, since Arg\operatorname{Arg} is discontinuous there. Indeed, Log\operatorname{Log} and Arg\operatorname{Arg} are not even defined at 00.

As with Arg\operatorname{Arg}, it may be the case that Log(z1z2)Logz1+Logz2\operatorname{Log}(z_1 z_2) \ne \operatorname{Log} z_1 + \operatorname{Log} z_2.

Complex exponents

Remark: In the reals, we could define something like 2^{\sqrt 2} as \lim_{n\to\infty}2^{a_n} where \{a_n\}\to\sqrt 2. This doesn’t quite work in complex.

Set z^c = \exp(c \log z). Because \log is multi-valued, this may produce multiple values. For c \in \mathbb Z, and for c = 1/n with n \in \mathbb Z, we recover the formulas from the fourth lecture.

Remark: B.C. defines z^{1/n} as a multi-valued function and defines the principal value as \operatorname{PV}(z^{1/n}) = |z|^{1/n}\exp (i\operatorname{Arg} z / n). Similarly for z \mapsto z^c, \operatorname{PV}(z^{c}) = \exp(c \operatorname{Log} z) = \exp (c \ln |z| + ic \operatorname{Arg}z).

Example: As a concrete example, doable but easy to make mistakes, PV[(1i)4i]=exp(4i(ln1i+iArg(1i)))=exp(4iln24(π/4))=eπexp(4iln2)=eπ(cos(2ln2)+isin(2ln2)) \begin{aligned} \operatorname{PV}[(1-i)^{4i}] &= \exp(4i (\ln |1-i| + i\operatorname{Arg}(1-i)))\\ &= \exp (4i \ln \sqrt 2 -4(-\pi/4))\\ &= e^{\pi}\exp(4i\ln \sqrt 2) \\ &= e^\pi (\cos(2\ln 2) + i\sin (2\ln 2)) \end{aligned}
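Checking the worked example with the standard library; Python's complex power uses the principal branch, so it should agree with \operatorname{PV}[(1-i)^{4i}]:

```python
import cmath, math

z, c = 1 - 1j, 4j
direct = z ** c                                                    # principal branch
via_log = cmath.exp(c * (math.log(abs(z)) + 1j * cmath.phase(z)))  # exp(c * Log z)
closed_form = math.exp(math.pi) * cmath.exp(2j * math.log(2))      # e^pi (cos(2 ln 2) + i sin(2 ln 2))

print(direct, via_log, closed_form)      # all three agree up to rounding
```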

Sometimes, we need to use a different single-valued \operatorname{Log} or \operatorname{Arg}. For example, if we need to integrate around a contour that must avoid the negative imaginary axis. In this case, we would define \operatorname{\mathcal {Arg}} z such that -\pi/2 < \operatorname{\mathcal{Arg}} z \le 3\pi/2. This leads to an alternative single-valued \mathcal {Log} and derived functions.

Next: square roots, branch cuts.

Lecture 9 — Branch Cuts & Trigonometric Functions

Lecture 9 — Branch Cuts & Trigonometric Functions

B-C 108.

A branch is a half-open interval of the form αθ<α+2π\alpha \le \theta < \alpha + 2\pi or α<θ~α+2π\alpha < \tilde \theta \le \alpha + 2\pi of R\mathbb R.

This is good because we can define a single-valued Arg\operatorname{Arg} with values in this interval, a single-valued Log\operatorname{Log}, as well as a single-valued branch of, for example, z1/2z^{1/2}.

A branch cut is a subset of C\mathbb C, of the form {z:argz=α}{0}\{z : \arg z = \alpha\}\cup \{0\}. This is where a particular branch is discontinuous.

For example, PV(z1/2)\operatorname{PV}(z^{1/2}) which maps zz1/2exp(iArgz2)reiθrexp(iθ/2) \begin{aligned} z &\mapsto |z|^{1/2} \exp \left(\frac{i\operatorname{Arg}z } 2\right) \\ re^{i\theta}&\mapsto \sqrt r \exp(i\theta/2) \end{aligned} The branch is π<θπ-\pi < \theta \le \pi and the branch cut is the negative real axis union with zero.

Consider the behaviour of zz1/2z \mapsto z^{1/2} under two different branches, π<θπ-\pi < \theta \le \pi and 0θ<2π0 \le \theta < 2\pi. Exercise: Repeat for (zz0)1/2(z-z_0)^{1/2}.

Trigonometric functions

B-C 37-39 (8Ed 34-35)

For xRx \in \mathbb R, eix=cosx+isinxeix=cosxisinx    cosx=eix+eix2    sinx=eixeix2i \begin{aligned} e^{ix} &= \cos x + i\sin x \\ e^{-ix} &= \cos x - i \sin x\\ \implies \cos x &= \frac{e^{ix}+e^{-ix}}2\\ \implies \sin x &= \frac{e^{ix}-e^{-ix}}{2i} \end{aligned} We can use these expressions to define cos\cos and sin\sin on C\mathbb C. Specifically, cosz=eiz+eiz2andsinz=eizeiz2i. \cos z = \frac{e^{iz}+e^{-iz}}2\quad \text{and}\quad \sin z = \frac{e^{iz}-e^{-iz}}{2i}. This gives us the following properties: - cosz=cos(z)\cos z = \cos (-z) - sinz=sin(z)\sin z = - \sin (-z) - cos(z+ξ)=coszcosξsinzsinξ\cos(z+\xi) = \cos z \cos \xi - \sin z \sin \xi - sin(z+ξ)=sinzcosξ+coszsinξ\sin (z+\xi) = \sin z \cos \xi + \cos z \sin \xi - sin2z+cos2z=1\sin^2 z + \cos^2 z = 1 (this does not imply that they are bounded in C\mathbb C) - sin(z+π/2)=cosz\sin (z+\pi/2) = \cos z - sin(zπ/2)=cosz\sin (z-\pi/2) = -\cos z (these two proven using properties of exp)
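A short sketch (standard-library Python; the test point is arbitrary) that the exponential definitions above match the built-in complex cos and sin, and that \sin^2 z + \cos^2 z = 1 even at points where |\sin z| is large:

```python
import cmath

def ccos(z): return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
def csin(z): return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (2j)

z = 1.0 + 3.0j
print(ccos(z), cmath.cos(z))             # agree
print(csin(z), cmath.sin(z))             # agree
print(csin(z) ** 2 + ccos(z) ** 2)       # ~ 1, even though |sin z| here is already > 10
```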

Hyperbolic functions

On \mathbb R, the hyperbolic functions were \sinh x = \frac{e^x-e^{-x}}2\\ \cosh x = \frac{e^x + e^{-x}}2 Recall that \sinh looks somewhat like an exaggerated cubic and \cosh like a steeper parabola. Also, \cosh can be used to model a hanging cable under its own weight (a catenary).

Similarly to the trigonometric functions, we can define the hyperbolic functions on \mathbb C as \cosh z = \frac{e^{z}+e^{-z}}2\quad \text{and}\quad \sinh z = \frac{e^{z}-e^{-z}}{2}. Interestingly, \sin (iy) = i \sinh y \quad \text{and}\quad \cos(iy) = \cosh y. Take z = x and \xi = iy in the sum formulas and we get \begin{aligned} \sin(x+iy) &= \sin x \cos(iy) + \cos x \sin (iy)\\ &= \sin x \cosh y + i \cos x \sinh y\\ \cos(x+iy) &= \cos x \cosh y - i\sin x \sinh y \end{aligned} Together, the two above equalities imply \sin(z+2\pi) = \sin z and \cos(z+2\pi) = \cos z. Additionally, we have \cosh^2 z = 1+\sinh^2 z and \begin{aligned} |\sin z|^2 &= \sin^2 x \cosh^2 y + \cos^2 x \sinh^2 y \\ &= \sin^2 x(1+\sinh^2y) +(1-\sin^2x)\sinh^2 y\\ &= \sin^2 x + \sinh^2 y\\ |\cos z|^2 &= \cos^2x + \sinh^2 y \end{aligned}

Recall that a function f : \Omega \to \mathbb C is called bounded if there exists M such that |f(z)| \le M for all z \in \Omega. Note that there can exist unbounded functions with finite area.

Finally, sin\sin and cos\cos are unbounded on C\mathbb C, because with a sufficiently large imaginary component they can become arbitrarily large.

Lecture 10 — Bounded Functions & Topology

Recall that we can have unbounded functions with bounded area.

Examples:

Definition. A zero of a function is a value of zz such that f(z)=0f(z) = 0.

For example, the zeros of sin\sin are nπ+0in\pi + 0i for nZn \in \mathbb Z. This can be derived from the sin(x+iy)=sinxcoshy+icosxsinhy\sin(x+iy) = \sin x \cosh y + i \cos x \sinh y equation. Similarly, the zeros of cos\cos are (n+1/2)π(n+1/2)\pi. The zeros of sinh\sinh and cosh\cosh are nπin\pi i and (n+1/2)πi(n+1/2)\pi i, respectively.

Inverse Trig Functions

If w = \arcsin z, then z = \sin w and \begin{aligned} z &= \sin w \\ &= \frac {e^{iw}-e^{-iw}}{2i} \cdot \frac{e^{iw}}{e^{iw}}\\ &= \frac{e^{2iw}-1}{2ie^{iw}}\\ \implies 2iz\,e^{iw} &= e^{2iw}-1\\ \implies (e^{iw})^2 - 2iz(e^{iw}) - 1& = 0 \end{aligned} We can solve this quadratic using the complex quadratic formula, which doesn’t use \pm but instead uses (\cdot)^{1/2} as a multi-valued square root. So, \begin{aligned} \implies e^{iw} &= \frac{2iz + (-4z^2 + 4)^{1/2}}{2} \\ &= iz + (1-z^2)^{1/2}\\ \implies iw &= \log(iz + (1-z^2)^{1/2})\\ w =\arcsin z&= -i\log(iz + (1-z^2)^{1/2}) \end{aligned} Note that we have a multi-valued logarithm and, for each of those, a double-valued square root. This makes it a lot more fun than real numbers.

Example: \arcsin (-i) = -i\log(1+2^{1/2})=-i\log(1\pm\sqrt 2). So we need to consider two logarithms. \log(1+\sqrt 2) = \ln (1+ \sqrt 2) + 2n\pi i is relatively fine. Then, \begin{aligned} \log (1-\sqrt 2) &= \ln|1-\sqrt 2| + i\arg(1-\sqrt 2)\\ &=\ln (\sqrt 2 - 1) + (2n+1)\pi i \end{aligned} Putting these together, we get that \arcsin (-i) is -i(\ln(1+\sqrt 2)+2n\pi i) and -i(\ln(\sqrt 2-1)+(2m+1)\pi i) for n, m \in \mathbb Z.
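A hedged check of this example: every value in both families should satisfy \sin w = -i (the bookkeeping of n and m is done by hand below, over a small range only):

```python
import cmath, math

values = []
for n in range(-2, 3):
    values.append(-1j * (math.log(1 + math.sqrt(2)) + 2 * n * math.pi * 1j))
    values.append(-1j * (math.log(math.sqrt(2) - 1) + (2 * n + 1) * math.pi * 1j))

print(all(abs(cmath.sin(w) - (-1j)) < 1e-9 for w in values))   # True: every value satisfies sin w = -i
```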

Topology

Topology is the study of topos, space. Our basic building block is some ball around an arbitrary point in C\mathbb C.

Definition. Given z0Cz_0 \in \mathbb C and ϵ>0\epsilon > 0, Bϵ(z0)B_\epsilon(z_0) denotes the (open) ball of radius ϵ\epsilon about z0z_0, a.k.a. an ϵ\epsilon-neighbourhood of z0z_0. In set notation, Bϵ(z0)={z:zz0<ϵ}B_\epsilon(z_0) = \{z : |z-z_0| < \epsilon\}. Similarly, Bϵ(z0)\overline B_\epsilon(z_0) is the closed ball of radius ϵ\epsilon about z0z_0 (a closed ϵ\epsilon-neighbourhood of z0z_0) given by {z:zz0ϵ}\{ z : |z-z_0| \le \epsilon\}. A deleted ϵ\epsilon-neighbourhood of z0z_0 is {z:0<zz0<ϵ}\{z : 0 < |z-z_0| < \epsilon\}.

Note that the only feature of C\mathbb C used by this definition is |\cdot|, the modulus. That is, zz0=(xx0)2+(yy0)2=(x,y)(x0,y0)R2=d((x,y),(x0,y0))R=d(z,z0)C \begin{aligned} |z-z_0| &= \sqrt{(x-x_0)^2 + (y-y_0)^2} \\ &= \|(x,y)-(x_0,y_0)\|_{\mathbb R^2} \\ &= d((x,y), (x_0,y_0))_\mathbb R \\ &= d(z, z_0)_\mathbb C \end{aligned} This has obvious analogues to R\mathbb R with d(x,y)R=xyd(x,y)_\mathbb R = |x-y| being the absolute value distance. Balls in R\mathbb R are just intervals.

Definition. Given ΩC\Omega \subseteq \mathbb C, zCz \in \mathbb C is an interior point of Ω\Omega if there exists ϵ>0\epsilon > 0 such that Bϵ(z)ΩB_\epsilon(z) \subset \Omega. Note that this implies Bϵ(z)ΩB_{\epsilon'}(z) \subset \Omega for all 0<ϵ<ϵ0<\epsilon' < \epsilon.

Definition. zCz \in \mathbb C is an exterior point of Ω\Omega if there exists ϵ>0\epsilon > 0 such that Bϵ(z)Ω=B_\epsilon(z) \cap \Omega = \emptyset.

Definition. zCz \in \mathbb C is a boundary point of Ω\Omega if for all ϵ>0\epsilon > 0, Bϵ(z)ΩB_\epsilon(z) \cap \Omega \ne \emptyset and Bϵ(z)ΩcB_\epsilon(z) \cap \Omega^c \ne \emptyset. That is, any ϵ\epsilon-neighbourhood around zz contains points inside and outside Ω\Omega. Here, $^c $ denotes the complement, that is CΩ\mathbb C \setminus \Omega.

Lecture 11 — Topology Definitions

Definition. The boundary of Ω\Omega, denoted Ω\partial \Omega, is defined as {zC:z is a boundary point}\{z \in \mathbb C : z \text{ is a boundary point}\}.

Recall that interior points are in Ω\Omega and exterior points are in Ωc\Omega^c. What about the boundary points?

Let’s look at a circle Ω={z:z=1}\Omega = \{z : |z| = 1\}. In this case, we have Ω=Ω\partial \Omega = \Omega. Let’s consider a blob:

(Figure: a blob-shaped set \Omega with marked points z_1, z_2, z_3, z_4.)

Here, z1z_1 is an interior point, z2z_2 is an exterior point, z3z_3 is a boundary point in Ω\Omega, and z4z_4 is a boundary point not in Ω\Omega.

Definition. IntΩ\operatorname{Int}\Omega is the interior of Ω\Omega, the set of all interior points. ExtΩ\operatorname{Ext}\Omega is the exterior of Ω\Omega, the set of all exterior points.

Definition. Ω\Omega is open if Ω=IntΩ\Omega = \operatorname{Int}\Omega, and Ω\Omega is closed if ΩΩ\partial \Omega \subseteq \Omega.

Examples:

Note that Ω1\Omega_1 is open and Ω1c\Omega_1^c is closed. Ω2\Omega_2 is closed and Ω2c\Omega_2^c is open. Both Ω3\Omega_3 and Ω3c\Omega_3^c are neither open nor closed.

Definition. A set which is both closed and open is called clopen.

Definition. A set ΩC\Omega \subseteq \mathbb C is called connected if there do not exist non-empty, open, disjoint sets Ω\Omega' and Ω\Omega'' such that ΩΩΩ\Omega \subseteq \Omega' \cup \Omega'' and ΩΩ\Omega' \cap \Omega \ne \emptyset and ΩΩ\Omega'' \cap \Omega \ne \emptyset.

That is, we can’t find two ‘separated’ sets which together contain all of Ω\Omega and each contain parts of Ω\Omega.

(Figure: two sets, \Omega_1 disconnected and \Omega_2 connected.)

Above, Ω1\Omega_1 is disconnected because we can find such Ω\Omega' and Ω\Omega''. However, Ω2\Omega_2 is connected.

Lecture 12 — Path Connected, Domains and Limits

Definition. A set ΩC\Omega \subseteq \mathbb C is piecewise affinely path connected if any two points in Ω\Omega can be connected by a finite number of line segments in Ω\Omega, joined end to end.

(Figure: two points of \Omega joined by finitely many line segments lying in \Omega.)

For open sets in C\mathbb C, this is equivalent to the original definition of connected. (This will not be proved in MATH3401.)

However, it is not so in general. For example, there is the comb space, which is connected but not path connected. It is connected because we cannot find two disjoint open sets separating it (see Lecture 13).

(Figure: the comb space.)

Claim. If Ω1\Omega_1 and Ω2\Omega_2 are open subsets of C\mathbb C, then so is Ω1Ω2\Omega_1 \cap \Omega_2.

Proof. If Ω1Ω2=\Omega_1 \cap \Omega_2 =\emptyset, then we are done because the empty set is open. Otherwise, for any zΩ1Ω2z \in \Omega_1 \cap \Omega_2, there exist ϵ1,ϵ2>0\epsilon_1, \epsilon_2 > 0 such that Bϵ1(z)Ω1B_{\epsilon_1}(z) \subseteq \Omega_1 and Bϵ2(z)Ω2B_{\epsilon_2}(z) \subseteq \Omega_2. Take ϵ=min{ϵ1,ϵ2}\epsilon = \min \{\epsilon_1, \epsilon_2\}. Then, Bϵ(z)Ω1B_\epsilon(z) \subseteq \Omega_1 and Bϵ(z)Ω2B_\epsilon(z) \subseteq \Omega_2 which implies Bϵ(z)Ω1Ω2B_\epsilon(z) \subseteq \Omega_1 \cap \Omega_2. Since zz was arbitrary and this is the definition of interior point, we see that Int(Ω1Ω2)=Ω1Ω2\operatorname{Int}(\Omega_1 \cap \Omega_2) = \Omega_1 \cap \Omega_2. Therefore, Ω1Ω2\Omega_1 \cap \Omega_2 is open. \square

Definition. A domain is an open, connected subset of C\mathbb C. A region is a set whose interior is a domain.

Definition. A point zCz \in \mathbb C is called an accumulation point of ΩC\Omega \subseteq \mathbb C if any deleted neighbourhood of zz intersects Ω\Omega. Note that zz need not be in Ω\Omega.

Examples:

Limits

B-C 15-16.

Definition. Let ff be a complex-valued function defined on a deleted neighbourhood of z0Cz_0 \in \mathbb C. Then, we say limzz0f(z)=w0\lim_{z \to z_0} f(z) = w_0 if for all ϵ>0\epsilon > 0, there exists δ>0\delta > 0 such that 0<zz0<δ    f(z)w0<ϵ. 0 < |z-z_0| < \delta \implies |f(z) - w_0| < \epsilon. Note that ff does not need to be defined at z0z_0.

Examples:

Remark. If a limit exists, then it is unique.

Limit Theorems

B-C 17 (8 Ed 16)

Suppose z=x+iyz=x+iy and f(z)=u(x,y)+iv(x,y)f(z) = u(x,y)+iv(x,y). Let z0=x0+iy0z_0 = x_0 + iy_0 and w0=u0+iv0w_0 = u_0 + iv_0.

Theorem 1. \lim_{z\to z_0} f(z)= w_0 \iff \begin{cases} \lim_{(x,y)\to(x_0, y_0)} u(x,y) = u_0, & \text{and}\\ \lim_{(x,y)\to(x_0, y_0)} v(x,y) = v_0. \end{cases} Theorem 2. (Non-exciting facts about operations on limits.) Suppose \lim_{z \to z_0}f(z) = w_0, \lim_{z \to z_0}g(z) = \xi_0 and \lambda \in \mathbb C. Then, \begin{align} \lim_{z \to z_0}(f \pm g)(z) &= w_0 \pm \xi_0 \tag{1}\\ \lim_{z \to z_0}(\lambda f)(z) &= \lambda w_0 \tag{2}\\ \lim_{z \to z_0}(fg)(z) &= w_0\xi_0 \tag{3}\\ \lim_{z \to z_0}\frac{f(z)}{g(z)} &=\frac{ w_0}{\xi_0}\qquad\text{ if } \xi_0 \ne 0\tag{4} \end{align} Note that \lim_{z\to z_0}g(z) = \xi_0 and \xi_0 \ne 0 implies g(z) \ne 0 within a deleted neighbourhood of z_0.

Lecture 13 — Limits, Continuity and Differentiability

Recall the comb space: vertical lines of length 1 at x = 1/2^i together with a horizontal line of length 1, with the origin removed. This is not piecewise affinely path connected because we cannot move through the origin.

However, it is connected because any open set containing the x=0x=0 line must extend some distance towards the other lines, hence containing the rest of the comb lines. So there do not exist two disjoint open sets which contain this comb, and it’s connected.

Limits at infinity

Recall in R\mathbb R that limxx0f(x)=\lim_{x\to x_0}f(x) = \infty means: given M>0M> 0, δ>0\exists \delta > 0 such that 0<xx0<δ0 < |x-x_0| < \delta implies f(x)>Mf(x) > M.

In C\mathbb C, a neighbourhood of z0Cz_0 \in \mathbb C is a ball and a neighbourhood of \infty has the form {z:z>M}\{z : |z| > M\}. Note that in the Riemann sphere model, this would be some region around the “north pole”.

(Figure: a neighbourhood of \infty, the set \{z : |z| > M\}, shown as a cap around the north pole of the Riemann sphere.)

So, “close to \infty    \iff z|z| is large     \iff 1/z1/|z| is small. Keeping that in mind, this means limzz0f(z)=    limzz01f(z)=0limzf(z)=w0    limz0f(1/z)=w0limzf(z)=    limz01f(1/z)=0 \begin{aligned} \lim_{z \to z_0} f(z) = \infty &\iff \lim_{z \to z_0} \frac 1 {f(z)} = 0\\ \lim_{z \to \infty} f(z) = w_0 &\iff \lim_{z \to 0} f(1/z) = w_0 \\ \lim_{z \to \infty} f(z) = \infty &\iff \lim_{z \to 0} \frac 1{f(1/z)} = 0 \end{aligned} Examples:

Continuity & Differentiability

B.C. 19 (8 Ed 18)

Let ff be defined in some neighbourhood of z0z_0.

Definition. We say ff is continuous at z0z_0 if limzz0f(z)=f(z0)\lim_{z \to z_0} f(z) = f(z_0). That is, given ϵ>0\epsilon > 0 there exists δ>0\delta > 0 such that zz0<δ    f(z)f(z0)<ϵ. |z-z_0| < \delta \implies |f(z) - f(z_0)| < \epsilon.

Basic results

Differentiability

Recall that f:ΩRRf : \Omega \subseteq \mathbb R \to \mathbb R is differentiable if limh0f(x+h)f(x)h\lim_{h \to 0} \frac{f(x+h)-f(x)}h exists, and the limit defines f(x)f'(x) in R\mathbb R.

Definition. f : \Omega \subseteq \mathbb C\to \mathbb C is differentiable at z_0 if \lim_{\xi \to 0} \frac{f(z_0+\xi)-f(z_0)}\xi exists, and the limit defines f'(z_0).

This definition implies f'(z_0) = \lim_{\Delta z \to 0} \frac{f(z_0+\Delta z) - f(z_0)}{\Delta z}. Writing w = f(z) and \Delta w = f(z_0 + \Delta z) - f(z_0), we can write f'(z_0) = \lim_{\Delta z \to 0} \frac{\Delta w}{\Delta z} = \frac {dw}{dz}(z_0). These are equivalent ways to write the derivative.

Lecture 14 — Derivatives and Complex Differentiation

Example: Take the derivative of f(z)=4z2f(z) = 4z^2 from first principles. Put w=f(z)w = f(z) and take z0Cz_0 \in \mathbb C. limΔz0ΔwΔz=limΔz0f(z0+Δz)f(z0)Δz=limΔz04(z0+Δz)24z02Δz=limΔz04z02+8z0Δz+4(Δz)24z02Δz=8z0    f(z)=8z \begin{aligned} \lim_{\Delta z \to 0} \frac{\Delta w}{\Delta z} &= \lim_{\Delta z \to 0} \frac{f(z_0 + \Delta z) - f(z_0)}{\Delta z} \\ &= \lim_{\Delta z \to 0} \frac{4(z_0 + \Delta z)^2 - 4z_0^2}{\Delta z} \\ &= \lim_{\Delta z \to 0} \frac{4z_0^2 + 8z_0\Delta z + 4(\Delta z)^2 - 4z_0^2}{\Delta z}\\ &= 8z_0\\ \implies f'(z) &= 8z \end{aligned}
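A numerical illustration (my own sketch; the base point and directions are arbitrary) that the difference quotient for f(z) = 4z^2 approaches 8z_0 regardless of the direction in which \Delta z \to 0:

```python
f = lambda z: 4 * z ** 2
z0 = 1.5 - 0.5j

for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    dz = 1e-6 * direction
    print((f(z0 + dz) - f(z0)) / dz)     # all ~ 8 * z0 = 12 - 4j
```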

Example: For f(z)=z2f(z) = |z|^2, ff' doesn’t exist except at z=0z=0. This is a very different situation from the case in R\mathbb R, where the function is differentiable everywhere.

B.C. 23 Ex 2 (8 Ed 22 Ex 2)

Note. Differentiability implies continuity, but the converse does not hold. An example of the converse failing is z2|z|^2 or z|z|.

Formulae (compare to f:RRf : \mathbb R \to \mathbb R)

ddz(c)=0cCddzzn=nzn1nZddzez=ezddzsinz=coszddzcosz=sinz \begin{aligned} \frac{d}{dz}(c) &= 0\qquad c \in \mathbb C \\ \frac{d}{dz}\,z^n &= n z^{n-1} \quad n \in \mathbb Z\\ \frac{d}{dz} \,e^z &= e^z \\ \frac{d}{dz} \,\sin z &= \cos z \\ \frac{d}{dz} \,\cos z &= -\sin z \end{aligned}

The usual rules apply. For f, g differentiable, \begin{aligned} (f \pm g)' &= f' \pm g' \\ (fg)' &= fg' + f'g \\ (f/g)' &= \frac{gf' - fg'}{g^2} \quad g \ne 0 \end{aligned} We also have the chain rule: if f is differentiable at z_0 and g is differentiable at f(z_0), then the composition g \circ f is differentiable at z_0 and the derivative is (g\circ f)'(z_0) = g'(f(z_0))f'(z_0) and this can be written as \frac{dg}{dz} = \frac{dg}{dw} \frac{dw}{dz}\quad \text{where } w = f(z).

Cauchy-Riemann

Let z=x+iyz = x+iy and suppose f:zw=u(x,y)+iv(x,y)f : z \mapsto w = u(x,y) + iv(x,y) is differentiable at z0=x0+iy0z_0 = x_0 + iy_0. Set Δz=Δx+iΔy\Delta z = \Delta x + i \Delta y, then f(z0)=limΔz0ΔwΔzf'(z_0) = \lim_{\Delta z\to 0} \frac{\Delta w}{\Delta z}.

Key point: If the derivative exists, its value is independent of how Δz0\Delta z \to 0.

Note that \Delta w = f(z_0 + \Delta z) - f(z_0) = u(x_0 + \Delta x, y_0 + \Delta y) + iv (x_0 + \Delta x, y_0 + \Delta y) - u(x_0,y_0) - iv(x_0, y_0). We can decompose the limit into real and imaginary parts, f'(z_0) = \lim_{(\Delta x, \Delta y) \to (0,0)} \operatorname{Re}\left(\frac{\Delta w} {\Delta z}\right) + i \lim_{(\Delta x, \Delta y) \to (0,0)}\operatorname{Im}\left(\frac{\Delta w} {\Delta z}\right). These limits must still be independent of the path (\Delta x, \Delta y) \to (0,0). To start, let (\Delta x, \Delta y) \to (0,0) along the x-axis, i.e. along (\Delta x, 0) for \Delta x \ne 0. So, \frac{\Delta w}{\Delta z} = \frac{u(x_0 + \Delta x, y_0) - u(x_0, y_0)}{\Delta x} + i\frac{v(x_0 + \Delta x, y_0) - v(x_0, y_0)}{\Delta x}, which implies (below, u_x is the partial derivative of u w.r.t. x) \begin{aligned} \lim_{(\Delta x, \Delta y) \to (0,0)} \operatorname{Re}\left(\frac{\Delta w} {\Delta z}\right) &= u_x(x_0, y_0) = \frac{\partial u}{\partial x}(x_0, y_0) \\ \lim_{(\Delta x, \Delta y) \to (0,0)} \operatorname{Im}\left(\frac{\Delta w} {\Delta z}\right) &= v_x(x_0, y_0) = \frac{\partial v}{\partial x}(x_0, y_0) \end{aligned} We can derive similar expressions for \Delta z \to 0 along the y-axis. For this, we get \frac{\Delta w}{\Delta z} = \frac{u(x_0, y_0 + \Delta y) - u(x_0, y_0)}{i\Delta y} + i\frac{v(x_0, y_0+ \Delta y) - v(x_0, y_0)}{i\Delta y} Being careful with the i, we get \begin{aligned} \lim_{(\Delta x, \Delta y) \to (0,0)} \operatorname{Re}\left(\frac{\Delta w}{\Delta z}\right) &= v_y(x_0, y_0) \\ \lim_{(\Delta x, \Delta y) \to (0,0)} \operatorname{Im}\left(\frac{\Delta w}{\Delta z}\right) &= -u_y(x_0, y_0) \\ \end{aligned} Together, because the limit as \Delta z \to 0 must be path independent and we have found its value along two paths, these must coincide. This gives us the Cauchy-Riemann equations.

Theorem. (Cauchy-Riemann equations) If f=u+ivf = u+iv is differentiable at z0=x0+iy0z_0 = x_0 + iy_0, then ux=vyu_x = v_y and vx=uy-v_x= u_y at (x0,y0)(x_0, y_0).
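An illustrative finite-difference check (my own sketch; the step size and sample point are arbitrary) of these equations for e^z, which is differentiable everywhere, against \bar z, which is differentiable nowhere:

```python
import cmath

h = 1e-6
def partials(f, x, y):
    # central differences in the x- and y-directions
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2 * h)
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2 * h)
    return fx.real, fx.imag, fy.real, fy.imag    # u_x, v_x, u_y, v_y

for f in (cmath.exp, lambda z: z.conjugate()):
    ux, vx, uy, vy = partials(f, 0.7, -0.4)
    print(ux - vy, uy + vx)   # both ~ 0 for exp; not both 0 for conj(z)
```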

Note. We have shown that C/R are necessary for complex differentiability, but they are not sufficient. There are sufficient conditions.

Sufficient conditions

If we know
- u, v, u_x, u_y, v_x, v_y are defined on a neighbourhood of (x_0, y_0),
- these partial derivatives are continuous at (x_0, y_0), and
- the Cauchy-Riemann equations hold at (x_0, y_0),

then f(z0)f'(z_0) exists.

Note that there are no iis in this board; it is a statement on functions of R2\mathbb R^2.

Remark. There are no necessary and sufficient conditions for complex differentiability. Otherwise, we would have reduced complex analysis to R2\mathbb R^2 analysis (how boring!).

Lecture 15 — Wirtinger Operators and Analytic Functions

What does Cauchy-Riemann mean in polar coordinates? Take z = x+iy = re^{i\theta} so x = r \cos \theta and y = r \sin \theta. By the chain rule, we get \begin{aligned} u_r &= u_x \cos \theta + u_y \sin \theta \\ u_\theta &= -u_x r \sin \theta + u_y r \cos \theta \\ v_r &= v_x \cos \theta + v_y \sin \theta \\ v_\theta &= -v_x r \sin \theta + v_y r \cos \theta \end{aligned} We can derive C/R in polar coordinates as r u_r = v_\theta and u_\theta = -r v_r.

Therefore, if ff' exists, then f=ux+ivxf' = u_x + iv_x. By using the polar coordinates expression, we also get f(z)=eiθ(ur+ivr)f'(z) = e^{-i\theta}(u_r + iv_r).

Wirtinger operators

Formally, we are going to change variables from (x,y)(x,y) to (z,zˉ)(z, \bar z), where z=x+iyz = x+iy and zˉ=xiy\bar z = x-iy. This means that x=(z+zˉ)/2x = (z + \bar z)/2 and y=(zzˉ)/(2i)y = (z - \bar z)/(2i).

This derivation makes use of the multivariate chain rule. Specifically, if x(t) and y(t) are differentiable functions of t and z = f(x,y) is a differentiable function of x and y, then z = f(x(t),y(t)) is differentiable and \frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial t}.

fx=fzzx+fzˉzˉx=fz+fzˉfy=fzzy+fzˉzˉy=ifzifzˉ \begin{aligned} \frac{\partial f}{\partial x} &= \frac{\partial f}{\partial z}\frac{\partial z}{\partial x} + \frac{\partial f}{\partial \bar z} \frac{\partial \bar z}{\partial x} \\ &= \frac {\partial f}{\partial z} + \frac {\partial f}{\partial \bar z} \\ \frac{\partial f}{\partial y} &= \frac{\partial f}{\partial z}\frac{\partial z}{\partial y} + \frac{\partial f}{\partial \bar z} \frac{\partial \bar z}{\partial y} \\ &= i\frac {\partial f}{\partial z} -i \frac {\partial f}{\partial \bar z} \\ \end{aligned}

Then, fxify=2fz    z=12(xiy)fx+ify=2fzˉ    zˉ=12(x+iy) \begin{aligned} \frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y} &= 2 \frac{\partial f}{\partial z} \implies \frac{\partial }{\partial z} = \frac 1 2 \left(\frac \partial {\partial x} - i \frac \partial {\partial y}\right) \\ \frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y} &= 2 \frac{\partial f}{\partial \bar z} \implies \frac{\partial }{\partial \bar z} = \frac 1 2 \left(\frac \partial {\partial x}+ i \frac \partial {\partial y}\right) \end{aligned} z\frac{\partial}{\partial z} and zˉ\frac{\partial}{\partial \bar z} are called the Wirtinger operators.

Example: Consider f(z)=zn=(x+iy)nf(z) = z^n = (x+iy)^n. Then, fz=12(xiy)(x+iy)n=12(n(x+iy)n1i2n(x+iy)n1)=12(n(x+iy)n1+n(x+iy)n1)=n(x+iy)n1=nzn1=f(z)fzˉ=0(follows from above) \begin{aligned} \frac{\partial f}{\partial z} &= \frac 1 2 \left(\frac \partial {\partial x} - i \frac \partial {\partial y}\right)(x+iy)^n \\ &= \frac 1 2(n(x+iy)^{n-1} -i^2n(x+iy)^{n-1}) \\ &= \frac 1 2(n(x+iy)^{n-1} +n(x+iy)^{n-1})\\ &= n(x+iy)^{n-1} = nz^{n-1}=f'(z) \\ \frac{\partial f}{\partial \bar z} &= 0\quad \text{(follows from above)} \end{aligned}

For f=u+ivf = u+iv complex differentiable, 12fx=12(ux+ivx)=CR12(vyiuy)=i2(uy+ivy)=i2fy \begin{aligned} \frac 1 2 \frac{\partial f}{\partial x} &= \frac 1 2 (u_x + iv_x) \overset{\text{CR}}= \frac 1 2 (v_y -iu_y) \\ &= -\frac i 2 (u_y + iv_y) = -\frac i 2\frac{\partial f}{\partial y} \end{aligned} So C/R holds if and only if fzˉ=0\frac{\partial f}{\partial \bar z} = 0. This is version II of the Cauchy-Riemann equations.

But why is this partial derivative equal to the full derivative? From f=ux+ivxf' = u_x + iv_x, dfdz=ux+ivx=fx=ify(by CR)=12(fxify)=fz \begin{aligned} \frac{df}{dz} &= u_x + iv_x = \frac{\partial f}{\partial x} \\ &= -i \frac{\partial f}{\partial y}\quad \text{(by CR)} \\ &= \frac 1 2 \left(\frac{\partial f}{\partial x} -i \frac{\partial f}{\partial y}\right) \\ &= \frac{\partial f}{\partial z} \end{aligned} Example 1: Find f(z)f'(z) for f(z)=ezf(z) = e^z. First, we check the sufficient conditions for ff' to exist. Writing f(z)=u+iv=ex+iy=ex(cosy+isiny)f(z) = u+iv = e^{x+iy} = e^x(\cos y + i \sin y), it is defined on C\mathbb C. Moreover, the components are u=excosyu = e^x \cos y and v=exsinyv = e^x \sin y which have partials defined and continuous on C\mathbb C. Then, we need to check C/R by testing ux=vyu_x = v_y and uy=vxu_y = -v_x or just checking fzˉ=0\frac{\partial f}{\partial \bar z} = 0.

Example 2: When is g(z)=z2g(z) = |z|^2 differentiable? Note that g(z)=zzˉ=x2+y2g(z) = z \bar z = x^2 + y^2. Checking C/R II, gzˉ=0    z=0\frac{\partial g}{\partial \bar z} = 0 \implies z = 0, so gg cannot be differentiable for z0z \ne 0 because C/R is necessary. At z=0z = 0, we check the sufficient conditions. It is easy to show that u,v,ux,vx,uy,vyu, v, u_x, v_x, u_y, v_y are defined and continuous on a neighbourhood of 0. Therefore, g(0)=0g'(0) = 0.

Exercise: Go through the same exercise for z1/zz \mapsto 1/z on C\mathbb C_*.

Definition. A function f:ΩCf : \Omega \to \mathbb C is analytic at z0z_0 if ff is differentiable on a neighbourhood of z0z_0.

Definition. A function is singular at z0z_0 if it is not analytic at z0z_0 but is analytic at some point in any neighbourhood of z0z_0. For example, f(z)=1/zf(z) = 1/z is analytic on C\mathbb C_* and singular at 00.

That is, given Bϵ(0)B_\epsilon(0), ff is analytic on Bϵ(z0)B_{\epsilon'}(z_0) for some z0Bϵ(0)z_0 \in B_\epsilon(0) and ϵ<z0\epsilon' < |z_0|.

Definition. A function is entire if it is analytic on all of C\mathbb C. For example, polynomials, sine, cosine, exponential, etc.

Note: If a function is differentiable at precisely one point, it is not analytic there or anywhere (e.g. z2|z|^2).

Also, note that we are calling once-differentiable functions analytic. In real analysis, analytic functions were smooth and equal to their power series (infinitely differentiable). What’s going on?

Lecture 16 — Examples of Derivatives and Taylor Series

Mid-semester exam: Wednesday 22/04/2020 9am.

Remember from real analysis that we have functions differentiable once but not twice.

Continuing with derivatives, consider ddzlogz\frac{d}{dz} \log z where z>0|z| > 0. Recall that in C\mathbb C, logz=lnz+iargz=lnr+iθ\log z = \ln |z| + i \arg z=\ln r + i\theta. Looking at the second expression in its components, u=lnru = \ln r and v=θv = \theta so ur=1/ru_r = 1/r, uθ=0u_\theta = 0, vr=0v_r = 0 and vθ=1v_\theta = 1. Checking C/R in polar coordinates, we need rur=vθanduθ=rvr ru_r = v_\theta \quad \text{and}\quad u_\theta = -rv_r which we do have. We need to make log\log a function so it can be continuous; we need to choose a branch. Pick a subset of C\mathbb C_* such that α<θ<α+2π\alpha < \theta < \alpha + 2\pi then log\log is differentiable. From Lecture 15, ddzlogz=eiθ(ur+ivr)=eiθ/r=1/z. \frac d{dz} \log z = e^{-i\theta}(u_r + iv_r) = e^{-i\theta}/r = 1/z. For example, ddzLogz=1/z\frac d{dz} \operatorname{Log} z = 1/z for π<Argz<π-\pi < \operatorname{Arg}z < \pi and z>0|z| > 0.

For f(z)=zcf(z) = z^c where cCc \in \mathbb C_* is fixed, we have f(z)=exp(clogz)f(z) = \exp (c \log z) and f(z)=cexp(clogz)/zf'(z) = c\exp (c \log z)/z by the chain rule and using the derivative of log\log. We can also write this as zcc/z=czc1z^c c/z = cz^{c-1} which is valid on any domain of the form {z:z>0,α<argz<α+2π}\{z : |z| > 0, \alpha < \arg z < \alpha + 2\pi\}, due to the branch cut of log\log.

Remark: Try this for g(z)=czg(z) = c^z.

Notation from real analysis

Given ΩRn\Omega \subseteq \mathbb R^n,

Note that (i) implies ff is smooth, and in Rn\mathbb R^n, (i) does not imply (ii).

Example: Consider an example to illustrate this past point. f(x)={e1/x2x>00x0 f(x) = \begin{cases} e^{-1/x^2} & x >0 \\ 0 & x \le 0 \end{cases} Then, f(n)(x)f^{(n)}(x) exists for all x0x \ne 0 trivially and f(n)(0)=0f^{(n)}(0) = 0 for all nn. Also, f(n)f^{(n)} is continuous on R\mathbb R. However, the Taylor series of ff about 00 is n=0f(n)(0)xnn!0 \sum_{n=0}^\infty \frac{f^{(n)}(0) x^n}{n!} \equiv 0 so ff is not equal to its Taylor series in a neighbourhood of 00. Therefore, fC(R)f \in C^\infty(\mathbb R) but fCω(R)f \notin C^\omega(\mathbb R).

In real analysis, we have CωCC1000C1C0. C^\omega \subsetneq C^\infty \subsetneq \cdots \subsetneq C^{1000} \subsetneq \cdots \subsetneq C^1 \subsetneq C^0. Next, we will be moving onto integration but there are some problems. There was the intuition of ‘area’ but how does this translate to C\mathbb C? We could look at something like a two-dimensional volume under a hypersurface but that doesn’t really work. Instead, we can revert to a complex valued function of real parameters. Next lecture, we will see why this makes sense and how it leads to the familiar integration.

Lecture 17 — Integration, Rules, FToC, Contours

We want integration to give us some notion of (signed) area, as well as reversing differentiation, with the goal of building up to the fundamental theorem of calculus.

Integration

B-C §41-43 (8 Ed §37-39)

Consider a C\mathbb C-valued function of one real variable. That is, w(t)=u(t)+iv(t)w(t) = u(t) + iv(t) for tRt \in \mathbb R. Define w(t)=u(t)+iv(t)w'(t) = u'(t) + iv'(t).

The usual rules for real-valued differentiation apply:

We can also define definite and indefinite integrals for such functions. For a,bRa, b \in \mathbb R, abw(t)dt=abu(t)dt+iabv(t)dtRe(abw(t)dt)=abRe(w(t))dtIm(abw(t)dt)=abIm(w(t))dt \begin{aligned} \int_a^b w(t)\, dt &= \int_a^b u(t)\,dt + i\int_a^bv(t)\,dt \\ \operatorname{Re}\left(\int_a^b w(t)\,dt\right) &= \int_a^b \operatorname{Re}(w(t))\,dt \\ \operatorname{Im}\left(\int_a^b w(t)\,dt\right) &= \int_a^b \operatorname{Im}(w(t))\,dt \end{aligned} 0w(t)dt\int_0^\infty w(t)\,dt and similar can be defined analogously. The above expressions certainly make sense if ww is continuous, that is wC0([a,b])w \in C^0([a,b]).
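A numerical version of these definitions (a sketch only; the midpoint rule and the choice of w(t) = e^{it} are my own assumptions): integrate the real and imaginary parts separately.

```python
import cmath, math

def integrate(w, a, b, n=20_000):
    # midpoint rule applied to Re(w) and Im(w) separately, as in the definition above
    h = (b - a) / n
    vals = [w(a + (k + 0.5) * h) for k in range(n)]
    return complex(sum(v.real for v in vals) * h, sum(v.imag for v in vals) * h)

print(integrate(lambda t: cmath.exp(1j * t), 0.0, math.pi))   # exact value is 2i
```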

Somewhat more generally, it also holds for piecewise continuous functions on [a,b][a,b]. That is, ww such that there exist c1<c2<<cn(a,b)c_1 < c_2 < \cdots < c_n \in (a,b) such that

Of course, the limits existing for ww imply the limits exist for uu and vv.

(Figure: a piecewise continuous function on [a,b] with finitely many jump discontinuities.)

Suppose there exists W(t) = U(t) + iV(t) such that W' = w on [a,b]. Then, the fundamental theorem of calculus holds, in the form of \int_a^b w(t)\,dt = W(b) - W(a). The next estimate is crucial.

Lemma. Suppose w=u+ivw = u+iv is piecewise continuous on [a,b][a,b]. Then, abw(t)dtabw(t)dt. \left|\int_a^b w(t)\,dt\right|\le \int_a^b \left|w(t)\right|\,dt. Proof. If abw(t)dt=0\int_a^b w(t)\,dt = 0, then the left is 00 and right is 0\ge 0 so we are done. Otherwise, there exists r>0r > 0 and θ0R\theta_0 \in \mathbb R such that abw(t)dt=reiθ0\int_a^b w(t)\,dt = re^{i\theta_0} which implies abw(t)dt=r\left|\int_a^b w(t)\,dt\right| = r. Then, abw(t)dt=reiθ0abeiθ0w(t)dt=r    r=abeiθ0w(t)dt=Re(abeiθ0w(t)dt)=abRe(eiθ0w(t))dt \begin{aligned} \int_a^b w(t)\,dt &= re^{i\theta_0}\\ \int_a^b e^{-i\theta_0} w(t)\,dt &= r\\ \implies r=\int_a^b e^{-i\theta_0} w(t)\,dt &=\operatorname{Re}\left(\int_a^b e^{-i\theta_0} w(t)\,dt\right) \\ &= \int_a^b \operatorname{Re}\left(e^{-i\theta_0}w(t)\right)\,dt \end{aligned} However, Re(eiθ0w(t))eiθ0w(t)=w(t)\operatorname{Re}\left(e^{-i\theta_0}w(t)\right) \le \left|e^{-i\theta_0}w(t)\right| = |w(t)| because eiθ0=1\left|e^{-i\theta_0} \right|= 1. Combining this with the expression for abw(t)dt=r\left|\int_a^b w(t)\,dt\right| = r from earlier, abw(t)dt=rabRe(eiθ0w(t))dtabw(t)dt. \left|\int_a^b w(t)\,dt\right| = r \le \int_a^b \operatorname{Re}\left(e^{-i\theta_0}w(t)\right)\,dt \le \int_a^b \left|w(t)\right|\,dt. \square

Contours and arcs

A contour will be a particular kind of parametrised curve in $\mathbb C$ (the precise definition comes shortly). Given continuous functions $x, y : [a,b] \to \mathbb R$,
$$ z(t) = x(t) + iy(t), \quad a \le t \le b $$
defines an arc in $\mathbb C$.

This is both a set of points $z([a,b])$, called the trace of the arc, and also a recipe for drawing the arc (the parametrisation).

Lecture 18 - Jordan Curves, Simple Closed Contours

Recall that z(t)=x(t)+iy(t)z(t) = x(t) + iy(t) for t[a,b]t \in [a,b]. The parameter tt can be thought of as time.

Definition. A Jordan arc (or simple arc) does not intersect itself. That is, z(t1)z(t2)z(t_1) \ne z(t_2) for t1t2t_1 \ne t_2.

Definition. A Jordan curve (or simple closed curve) is an arc with $z(a) = z(b)$ and no other self-intersections; that is, $z(t_1) \ne z(t_2)$ for $a \le t_1 < t_2 < b$.

Example 1: $z = \begin{cases} t + it & 0 \le t \le 1 \\ t + i & 1 < t \le 2 \end{cases}$ is a simple arc; its trace is the segment from $0$ to $1+i$ followed by the horizontal segment from $1+i$ to $2+i$. The arc is traced out with a 'speed' of $\sqrt 2$ between $0$ and $1$ because it covers a distance of $\sqrt 2$ in one time unit.

Example 2: z=z0+Reiθz = z_0 + Re^{i\theta} for 0θ2π0 \le \theta \le 2\pi is an arc whose trace is a circle, centred at z0z_0 of radius RR.

Example 3: z=z0+Reiθz = z_0 + Re^{-i\theta} for 0θ2π0 \le \theta \le 2\pi traces the same circle, but in the opposite direction. We use a negative in the exponent to allow the parameter to be increasing (fitting the time analogy).

Example 4: z=z0+Re2iθz = z_0 + Re^{2i\theta} for 0θ2π0 \le \theta \le 2\pi again has the same trace, but it “covers” the circle twice.

In these examples, 2 and 3 are Jordan curves and 4 is not.

Definition. An arc/curve is called differentiable if z(t)z'(t) exists (at all t(a,b)t \in (a,b) for an arc, and at t[a,b]t \in [a,b] for a curve).

Definition. If zz' exists and is continuous, then abz(t)dt\int_a^b |z'(t)|\,dt exists and defines the arc length.

The arc length is well defined because it does not depend on the particular parametrisation. More specifically, if $z(t)$ is any parametrisation of the arc, we can define another one by $t = \Phi(\tau)$ with $\Phi(\alpha) = a$ and $\Phi(\beta) = b$, where $\Phi \in C([\alpha, \beta])$ and $\Phi' \in C((\alpha, \beta))$. Then $Z(\tau) = z(\Phi(\tau))$ parametrises the same arc.

We will prove that the arc length is the same. Assume $\Phi'(\tau) > 0$ for all $\tau$ (that is, we always move forwards in time). Then, substituting $t = \Phi(\tau)$,
$$ \int_a^b|z'(t)|\,dt = \int_\alpha^\beta |z'(\Phi(\tau))|\, \Phi'(\tau)\,d\tau = \int_\alpha^\beta \left|Z'(\tau)\right|\,d\tau, $$
since $Z'(\tau) = z'(\Phi(\tau))\Phi'(\tau)$ and $\Phi' > 0$. Hence arc length is independent of the parametrisation.
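As a quick numerical sanity check (not part of the notes; it assumes NumPy is available and approximates arc length by polygonal chord lengths), the snippet below computes the length of the upper half of the unit circle with two different parametrisations and gets $\pi$ both times. The helper name `arc_length` is my own.

```python
# Arc length is parametrisation-independent: upper half of the unit circle via
#   z(t) = e^{it}, t in [0, pi],  and  Z(tau) = z(Phi(tau)) with Phi(tau) = tau^2.
import numpy as np

def arc_length(z, a, b, n=200_000):
    """Approximate arc length as the total length of a fine inscribed polygon."""
    t = np.linspace(a, b, n)
    pts = z(t)
    return np.sum(np.abs(np.diff(pts)))

z = lambda t: np.exp(1j * t)
Z = lambda tau: z(tau**2)              # reparametrisation with Phi'(tau) = 2*tau > 0

print(arc_length(z, 0.0, np.pi))           # ~3.14159...
print(arc_length(Z, 0.0, np.sqrt(np.pi)))  # ~3.14159... (same length)
```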

Definition. A contour is an arc/curve/Jordan curve such that zz is continuous and zz is piecewise differentiable. Additionally, if initial and final values coincide and there are no other self-intersections, it is a simple closed contour.

Theorem (Jordan curve theorem). Any simple closed contour divides $\mathbb C$ into three parts: the contour itself, a bounded interior, and an unbounded exterior.

Although it seems obvious, this is surprisingly delicate; for instance, the analogous statement fails for curves on a Möbius strip. A full proof would take about 8 lectures, so we'll trust Jordan on this one.

Remark: The theorem still holds if we remove the requirement that zz is piecewise differentiable. This leads to very freaky things such as space-filling curves.

Contour integrals

Given a contour $C$, a contour integral is written
$$ \int_C f(z)\,dz \quad \text{ or }\quad \int_{z_1}^{z_2} f(z)\,dz. $$
We can write the second expression when the value of the integral depends only on the endpoints $z_1$ and $z_2$, and not on the particular contour joining them (or when the contour is clear from context).

Suppose the contour CC is specified by z(t)z(t) with z1=z(a)z_1 = z(a) and z2=z(b)z_2 = z(b), with atba \le t \le b, and suppose ff is piecewise continuous on CC. Then (reminiscent of line integrals), Cf(z)dz=abf(z(t))z(t)dt. \int_C f(z)\,dz = \int_a^b f(z(t))z'(t)\,dt.

Glossary

Lecture 19 — Contour Integrals

Recall that an arc consists of the trace (the image set of points) together with the parametrisation (a way of driving along the curve).

Suppose CC is a contour given by z(t)z(t) for t[a,b]t \in [a,b] with z1=z(a)z_1 = z(a) and z2=z(b)z_2 = z(b). Suppose ff is piecewise continuous on CC.

Contour integrals

Basic properties

Example: Evaluate $I = \int_C \bar z\,dz$ where $C$ is given by $z(\theta) = 2e^{i\theta}$ for $-\pi/2 \le \theta \le \pi/2$. This traces the right half of the circle of radius 2 counter-clockwise. We check that $z(\theta)$ is continuous (indeed differentiable) and that $f(z) = \bar z$ is continuous on $C$. Note that $z'(\theta) = 2ie^{i\theta}$. Then,
$$ I = \int_{-\pi/2}^{\pi/2} f(z(\theta))\, z'(\theta)\,d\theta = \int_{-\pi/2}^{\pi/2}\overline{(2e^{i\theta})}\,2ie^{i\theta}\,d\theta = 4i\int_{-\pi/2}^{\pi/2}e^{-i\theta}e^{i\theta}\,d\theta = 4i\int_{-\pi/2}^{\pi/2}d\theta = 4\pi i. $$
On $C$, $z \bar z = 4$, which implies $\bar z = 4/z$. As a corollary, $\int_C \frac{dz}z = \pi i$. See §45 (8 Ed §41) for more examples.
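As a numerical sanity check (not part of the notes; it assumes NumPy), the sketch below evaluates $\int_a^b f(z(\theta))z'(\theta)\,d\theta$ for this example with a hand-rolled trapezoid rule and recovers $4\pi i$.

```python
# C: z(theta) = 2 e^{i theta}, -pi/2 <= theta <= pi/2,  f(z) = conj(z).
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
z = 2 * np.exp(1j * theta)
dz = 2j * np.exp(1j * theta)               # z'(theta)
integrand = np.conj(z) * dz

dt = theta[1] - theta[0]
I = np.sum((integrand[:-1] + integrand[1:]) / 2) * dt   # trapezoid rule by hand
print(I, 4j * np.pi)                       # both approximately 0 + 12.566j
```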

Antidifferentiation

Let DD be a domain in C\mathbb C (that is, an open connected subset of C\mathbb C).

Definition. An antiderivative of ff on DD is FF such that F(z)=f(z)F'(z) = f(z) on DD.

Theorem. For $f$ continuous on $D$, the following three statements are equivalent:
(i) $f$ has an antiderivative $F$ on $D$;
(ii) integrals of $f$ along contours in $D$ depend only on the endpoints (path independence);
(iii) integrals of $f$ around closed contours lying in $D$ are all zero.

Proof. (i) to (ii) follows from the fundamental theorem of calculus. For (ii) to (iii), take a closed contour $C$ in $D$ with $z(a) = z(b) = z_1$. Fix $\gamma \in (a,b)$ such that $z(\gamma) \ne z_1$, and split $C$ into two contours: $C_1$ with $t \le \gamma$ and $C_2$ with $t \ge \gamma$. Then $C_1 + C_2 = C$ and
$$ \int_C f = \int_{C_1 + C_2} f = \int_{C_1}f + \int_{C_2} f = \int_{C_1} f - \int_{-C_2} f = 0, $$
because $-C_2$ and $C_1$ have the same start and end points, so their integrals are equal by (ii). For (iii) to (ii) to (i), see B/C. $\square$

In particular, for a contour $C$ from $z_1$ to $z_2$ in $D$, it holds that
$$ \int_C f(z)\,dz = F(z_2) - F(z_1) $$
for any antiderivative $F$ of $f$.

Further examples of contour integrals

Keep in mind that we are doing integration, which is more of an art than a science: it can be very difficult to get a closed-form answer for even simple-looking integrands.

Example 2: I=01+iz2dzI = \int_0^{1 + i} z^2\,dz. Here, f(z)=z2f(z) = z^2 has an antiderivative, such as F(z)=z3/3F(z) = z^3/3. By the FToC, I=F(1+i)F(0)=23(1+i). I = F(1 + i) - F(0) = \frac{2}3(-1 + i). Example 3: I=Cdz/z2I = \int_C dz / z^2, with C=2eiθC = 2e^{i\theta} and 0θ2π0 \le \theta \le 2\pi. The integrand 1/z21/z^2 has an antiderivative on C\mathbb C_*, namely 1/z-1/z. Because CC is a closed contour lying completely within C\mathbb C_*, (iii) implies I=0I = 0.

More generally, the same argument shows that $\int_C z^n\,dz = 0$ for all $n \in \mathbb Z \setminus \{-1\}$ and all closed contours $C$ (avoiding the origin when $n < 0$).

Lecture 20 — Cauchy-Goursat

Example 4: $I = \int_C \frac{dz}z$ where $C = 2e^{i\theta}$ and $0 \le \theta\le 2\pi$. We cannot use the argument from the earlier examples, because $1/z$ has no antiderivative on any domain containing the whole circle (whatever branch cut we choose). Instead we split $C$ into $C_1$ and $C_2$: $C_1$ is the right half of the circle (from $-2i$ to $2i$ through $2$) and $C_2$ is the left half (from $2i$ to $-2i$ through $-2$). Then $I = I_1 + I_2$, where $I_1$ and $I_2$ are the integrals along $C_1$ and $C_2$ respectively.

On a domain D=C{R<0{0}}D = \mathbb C \setminus \{\mathbb R_{<0} \cup \{0\}\}, Log\operatorname{Log} is a primitive for 1/z1/z on C1DC_1 \subset D. The previous lecture’s theorem tells us that I1=Log(2i)Log(2i)=πiI_1 = \operatorname{Log}(2i) - \operatorname{Log}(-2i) = \pi i (recall, Log(z)=lnz+iArgz\operatorname{Log}(z) = \ln |z| + i \operatorname{Arg}z). Note that this agrees with our corollary from lecture 19.

For $I_2$: on $D' = \mathbb C \setminus \{\mathbb R_{>0} \cup \{0\}\}$, $1/z$ has a primitive such as $\operatorname{\mathcal {Log}}z = \ln |z| + i \operatorname{\mathcal {Arg}}z$, where $0 < \operatorname{\mathcal {Arg}}z < 2\pi$. Note that $C_2 \subset D'$. By the theorem, $I_2 = \operatorname{\mathcal {Log}}(-2i) - \operatorname{\mathcal{Log}}(2i) = \pi i$ (being careful to use the modified argument function).

Therefore, $I = I_1 + I_2 = 2\pi i$. We can conclude that
$$ \int_C z^n\,dz = \begin{cases} 0 & n \in \mathbb Z \setminus \{-1\},\\ 2\pi i & n = -1, \end{cases} $$
for any circle $C$ centred at the origin and positively oriented (counter-clockwise).
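As a quick numerical check of this conclusion (not from the notes; it assumes NumPy), the snippet below integrates $z^n$ around the positively oriented circle $|z| = 2$ for several values of $n$.

```python
# Integral of z^n around |z| = 2: 0 for n != -1, and 2*pi*i for n = -1.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 200_001)
z = 2 * np.exp(1j * theta)
dz = 2j * np.exp(1j * theta)
dt = theta[1] - theta[0]

for n in (-3, -2, -1, 0, 1, 2):
    vals = z**n * dz
    I = np.sum((vals[:-1] + vals[1:]) / 2) * dt
    print(n, np.round(I, 6))           # ~0 except n = -1, which gives ~6.283185j
```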

Cauchy-Goursat

§50 (8 Ed §46).

Theorem. Let $C$ be a simple closed curve in $\mathbb C$. If $f$ is analytic on $C$ and its interior, then
$$ \int_C f(z)\,dz = 0. $$
Remark: The converse does not hold. For example, $\int_C z^n\,dz = 0$ for $n = -2, -3, \ldots$ over any circle around the origin, even though $z^n$ fails to be analytic at $0$.

Proof (idea). Prove the result for a rectangle, then approximate the region enclosed by $C$ by small squares. The contributions from shared interior edges cancel, and the outer edges approach the integral over $C$.

M-\ell estimate: (This forms a key step of the proof.) Suppose ff is continuous on a contour CC, given by z=z(t)z = z(t) and atba \le t \le b. Then, there exists MM such that f(z)M|f(z)| \le M for all zCz \in C (by extreme value theorem in R\mathbb R). So, Cf(z)dz=abf(z(t))z(t)dtabf(z(t))z(t)dtMabz(t)dt=M \begin{aligned} \left|\int_C f(z)\,dz\right| &= \left|\int_a^b f(z(t))\,z'(t)\,dt\right| \\ &\le \int_a^b |f(z(t))|\,|z'(t)|\,dt \\ &\le M \int_a^b |z'(t)|\,dt = M\ell \end{aligned} where =(C)\ell = \ell(C) is the arc length of CC.

Cauchy-Goursat extension

Recall (?) that a domain $D$ is simply connected if for every simple closed contour $C$ in $D$, it holds that $\operatorname{Int}C\subseteq D$. Roughly speaking, this means that $D$ has "no holes"; that is, all simple closed contours are null homotopic.

If DD is not simply connected, it is multiply connected.

Theorem. Suppose $f$ is analytic on a contour $C$, on contours $C_1, \ldots, C_n \subset \operatorname{Int}C$, and on the region interior to $C$ and exterior to each of $C_1, \ldots, C_n$, and that $C, C_1, \ldots, C_n$ are all positively oriented. Then
$$ \int_C f(z)\,dz + \sum_{j=1}^n \int_{C_j}f(z)\,dz = 0. $$
Here positively oriented means that while traversing the contour, the region in question is on your left. This is particularly important for the orientation of $C_1, \ldots, C_n$ (with this convention they end up traversed clockwise).

Visually,

*(figure: the contour $C$ with the inner contours $C_1, \ldots, C_n$, oriented so the region between them is on the left)*
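As a numerical illustration of this orientation convention (not from the notes; it assumes NumPy, and the helper name `circle_integral` is my own), take $f(z) = 1/z$, $C$ the circle $|z| = 2$ counter-clockwise and $C_1$ the circle $|z| = 1$ clockwise: the two contributions should cancel.

```python
import numpy as np

def circle_integral(f, radius, n=200_001, clockwise=False):
    """Approximate the integral of f around the circle |z| = radius."""
    theta = np.linspace(0.0, 2 * np.pi, n)
    if clockwise:
        theta = theta[::-1]
    z = radius * np.exp(1j * theta)
    dz = np.diff(z)
    mid = (f(z[:-1]) + f(z[1:])) / 2
    return np.sum(mid * dz)

f = lambda z: 1 / z
total = circle_integral(f, 2.0) + circle_integral(f, 1.0, clockwise=True)
print(total)    # approximately 0, as the theorem predicts
```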

Lecture 21 — Cauchy Integral Formula

Theorem (Cauchy integral formula). Let ff be analytic on and inside a simple closed curve CC that is positively oriented (interior is to the left of the curve’s direction). Then, if z0IntCz_0 \in \operatorname{Int}C we have f(z0)=12πiCf(z)zz0dz,or2πif(z0)=Cf(z)zz0dz. \begin{aligned} f(z_0) &= \frac 1 {2\pi i} \int_C \frac{f(z)}{z-z_0}\,dz, \quad\text{or}\quad 2\pi if(z_0) = \int_C \frac{f(z)}{z-z_0}\,dz. \end{aligned} This is quite an amazing result. Roughly, ff is differentiable and we can know the value of ff at a point by the integral of any curve around that point.


Proof. Note that the integrand is not analytic on IntC\operatorname{Int}C because it is not defined at z0z_0. We will “cut out” this discontinuity so we can apply the Cauchy-Goursat theorem. Set Cρ={z(θ)=z0+ρeiθ,0θ2π}C_\rho = \{z(\theta) = z_0 + \rho e^{i\theta}, 0 \le \theta \le 2\pi\} as a curve around our point z0z_0, for ρ\rho sufficiently small such that IntCρIntC\operatorname{Int} C_\rho \subset \operatorname{Int} C.

We have f(z)/(zz0)f(z)/(z-z_0) is analytic on IntCIntCρ\operatorname{Int}C \setminus \operatorname{Int}C_\rho as well as CC and CρC_\rho. We apply Cauchy-Goursat’s extension to multiply connected domains and that gives us Cf(z)zz0dz=Cρf(z)zz0dz    Cf(z)zz0dzf(z0)Cρdzzz0=Cρf(z)f(z0)zz0dz \begin{aligned} \int_C \frac{f(z)}{z-z_0}\,dz &= \int_{C_\rho} \frac{f(z)}{z-z_0}\,dz \\ \implies \int_C \frac{f(z)}{z-z_0}\,dz -f(z_0)\int_{C_\rho}\frac{dz}{z-z_0}&= \int_{C_\rho} \frac{f(z)-f(z_0)}{z-z_0}\,dz \end{aligned} From lecture 20, we know that Cρdzzz0=2πi\int_{C_\rho} \frac{dz}{z-z_0} = 2\pi i because CρC_\rho is a circle centered at z0z_0 and this holds for any ρ>0\rho > 0. Since ff is analytic at z0z_0, it is continuous at z0z_0 so given ϵ>0\epsilon > 0 there exists δ>0\delta > 0 such that f(z)f(z0)<ϵ|f(z) - f(z_0)|<\epsilon for all zz0<δ|z-z_0| < \delta. Choose ρ<δ\rho < \delta and we will have f(z0+ρeiθ)f(z0)<ϵ|f(z_0 + \rho e^{i\theta})-f(z_0)|<\epsilon.

Returning to the equations from above, Cf(z)zz0dz2πif(z0)Cρf(z)f(z0)zz0dz \begin{aligned} \left|\int_C \frac{f(z)}{z-z_0}\,dz -2\pi i\,f(z_0) \right| &\le \int_{C_\rho} \frac{|f(z)-f(z_0)|}{|z-z_0|}\,dz \end{aligned} Note that all points on CρC_\rho are exactly ρ\rho away from z0z_0. Thus, 1/zz0=1/ρ1/|z-z_0| = 1/\rho. Moreover, the integral Cρf(z)f(z0)dz\int_{C_\rho} |f(z) - f(z_0)|\,dz is bounded by ϵ2πρ\epsilon \cdot 2\pi \rho by the M-\ell estimate (here, MM is ϵ\epsilon and \ell is the circumference of a circle with radius ρ\rho). This gives us, Cρf(z)f(z0)zz0dz=1ρCρf(z)f(z0)dz<1ρϵ2πρ=2πϵ \begin{aligned} \int_{C_\rho} \frac{|f(z)-f(z_0)|}{|z-z_0|}\,dz &= \frac 1 \rho \int_{C_\rho} |f(z) - f(z_0)|\,dz \\ &< \frac 1 \rho \epsilon \cdot 2\pi\rho = 2\pi\epsilon \end{aligned} By sending ϵ0\epsilon \to 0, we can make this arbitrarily small which tells us Cf(z)zz0dz2πif(z0)=0    f(z0)=12πiCf(z)zz0dz, \begin{aligned} \left|\int_C \frac{f(z)}{z-z_0}\,dz -2\pi i\,f(z_0) \right| = 0 \iff f(z_0) = \frac 1 {2\pi i}\int_C \frac{f(z)}{z-z_0}\,dz, \end{aligned} as required. \square

Lecture 22 — Morera, Liouville Theorem

Recall the Cauchy integral formula: If ff is analytic on and inside the simple closed curve CC, traversed positively, and z0IntCz_0 \in \operatorname{Int} C, then f(z0)=12πiCf(z)zz0dz. f(z_0) = \frac 1 {2\pi i}\int_C \frac{f(z)}{z-z_0}\,dz. Theorem. Under the same conditions, f(n)(z0)=n!2πiCf(z)(zz0)n+1dz. f^{(n)}(z_0) = \frac{n!}{2\pi i}\int_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz. Proof. See exercise 9 of §57 (8 Ed §52). \square
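As a quick numerical sanity check of this formula (not part of the notes; it assumes NumPy), the sketch below uses $f(z) = e^z$, whose derivatives of every order at $z_0$ all equal $e^{z_0}$, and approximates the contour integral over a circle of radius $1$ about an arbitrarily chosen $z_0$.

```python
import numpy as np
from math import factorial

z0 = 0.3 + 0.2j
theta = np.linspace(0.0, 2 * np.pi, 400_001)
z = z0 + np.exp(1j * theta)            # circle of radius 1 about z0
dz = np.diff(z)
zm = (z[:-1] + z[1:]) / 2              # segment midpoints

for n in range(3):
    integrand = np.exp(zm) / (zm - z0) ** (n + 1)
    I = factorial(n) / (2j * np.pi) * np.sum(integrand * dz)
    print(n, I)                        # each ~ exp(z0) = 1.3230... + 0.2682...j

print(np.exp(z0))                      # the exact value for comparison
```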

As a result, this tells us that for $f = u+iv$ analytic at $z_0 = x_0 + iy_0$, the partial derivatives of all orders of $u$ and $v$ exist and are continuous at $(x_0, y_0)$. This is very different from the situation in $\mathbb R$, where it is easy to find functions that are differentiable some number of times but no further. For example, with $f(x) = |x|^3$, the functions $f$, $f'$ and $f''$ are continuous, but $f'''(0)$ does not exist.

Note: If ff is analytic at z0z_0, then its derivatives of all orders exist and are analytic at z0z_0.

Theorem (Morera). Let ff be continuous on a domain Ω\Omega. If Cf(z)dz=0\int_C f(z)\,dz = 0 for all closed contours CC in Ω\Omega, then ff is analytic on Ω\Omega.

Proof. By the theorem from lecture 19, $f$ has a primitive $F$ because $\int_C f(z)\,dz = 0$ for all closed contours. But then $F' = f$ exists and is continuous on $\Omega$ by the assumptions of the theorem, so $F$ is analytic. Hence, by the note above, $f = F'$ is also analytic. $\square$

A number of nice results follow from the theorem with f(n)(z0)f^{(n)}(z_0) above.

Result (I). Let ff be analytic in and on CR(z0)C_R(z_0) (curve of a circle of radius RR around z0z_0) and set MR=maxzCRf(z)M_R = \max_{z \in C_R}|f(z)|. Then, f(n)(z0)n!MRRn. \left|f^{(n)}(z_0)\right| \le \frac{n!M_R}{R^n}. This tells us that if we know what the function does on the circle, we can estimate the size of its derivatives at a point. In fact, the closer we get, the worse this estimate becomes because of the division by RnR^n.

Proof. $M_R$ is well defined by the extreme value theorem. Then, applying the aforementioned theorem,
$$ \left|f^{(n)}(z_0)\right| = \left|\frac{n!}{2\pi i} \int_{C_R}\frac{f(z)}{(z-z_0)^{n+1}}\,dz\right| \le \frac{n!}{2\pi}\int_{C_R}\frac{|f(z)|}{|z-z_0|^{n+1}}\,|dz| \le \frac{n!M_R}{2\pi R^{n+1}}\int_{C_R}|dz| = \frac{n!M_R}{R^n}. $$
Above, note that $|z-z_0|=R$ on this contour, and $\int_{C_R}|dz|$ is just the arc length of $C_R$ (equal to $2\pi R$). $\square$

As a brief discussion, we have all these powerful results about analytic functions in C\mathbb C. However, this hints that being complex differentiable is actually a very restrictive condition.

Result (II – Liouville). If f:CCf : \mathbb C \to \mathbb C is bounded and entire (everywhere differentiable), then ff is constant.

Proof. Suppose fM|f| \le M on all of C\mathbb C and it is entire. Apply result I for n=1n=1 on CR(z0)C_R(z_0), an arbitrary circle around z0z_0. The result implies that f(z0)1!MR=MR. |f'(z_0)| \le \frac{1!M}R = \frac M R. Letting RR \to \infty, we see that f(z0)=0f'(z_0) = 0. Since z0z_0 was arbitrary, we have the result. \square

This is clearly not the case in $\mathbb R$: for example, $\sin$ is bounded, infinitely differentiable, and not constant.

Result (III – Fundamental theorem of algebra). A polynomial of degree $n \ge 1$ has exactly $n$ zeros, counted with multiplicity.

Lecture 23 — Conformal Maps, Harmonic Functions

Conformal maps

§112 (8 Ed §101)

Definition. A map $f : z \mapsto w$ is conformal at $z_0$ if $f$ is analytic at $z_0$ and $f'(z_0) \ne 0$. Then locally (near $z_0$), $f$ preserves angles and orientation (and, infinitesimally, shape).

In the image below, $\Gamma_1 = f(C_1)$ and $\Gamma_2 = f(C_2)$ are the images of $C_1$ and $C_2$ under $f$. The curves $C_1$ and $C_2$ intersect at $z_0$ at angle $\alpha$, and $\Gamma_1$ and $\Gamma_2$ intersect at $f(z_0)$ at angle $\beta$. Conformality tells us that $\alpha = \beta$.

If orientation (i.e. sense, direction) is not necessarily preserved but the angle’s magnitude is, the map is called isogonal.

*(figure: the curves $C_1, C_2$ through $z_0$ and their images $\Gamma_1, \Gamma_2$ through $f(z_0)$, with equal angles $\alpha = \beta$)*

If instead we had an analytic function with f(z0)=0f'(z_0)=0, then z0z_0 is a critical point of ff. This means the angle is not preserved around z0z_0. However, the angle will be multiplied by mm where mm is the smallest integer such that f(m)(z0)0f^{(m)}(z_0) \ne 0.

§113 (8 Ed §103)

Conformality means the map is locally 1-to-1 and onto. That is, ff has a local inverse. This follows from MATH2400/1’s inverse function theorem. Specifically, it is locally invertible if detJf0\det J_f \ne 0. In this case, detJf=uxuyvxvy=uxvyuyvx=ux2+vx2=ux+ivx2=f20 \begin{aligned} \det J_f = \begin{vmatrix}u_x & u_y \\ v_x & v_y\end{vmatrix} = u_x v_y - u_y v_x = u_x^2 + v_x^2 = |u_x + iv_x|^2 = |f'|^2 \ne 0 \end{aligned} due to f(z0)0f'(z_0) \ne 0 and analyticity of ff.
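As a small symbolic check of $\det J_f = |f'|^2$ (not from the notes; it assumes SymPy), the sketch below uses $f(z) = z^2$, i.e. $u = x^2 - y^2$, $v = 2xy$, with $f'(z) = 2z$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2
v = 2 * x * y

J = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
               [sp.diff(v, x), sp.diff(v, y)]])
print(sp.simplify(J.det()))                       # 4*x**2 + 4*y**2

fp = 2 * (x + sp.I * y)                           # f'(z) = 2z
print(sp.simplify(sp.re(fp)**2 + sp.im(fp)**2))   # 4*x**2 + 4*y**2 = |f'|^2
```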

Harmonic functions

We look for a function U:ΩRU : \Omega \to \mathbb R such that ΔU=0(or alternatively, 2U=0). \Delta U = 0 \quad\text{(or alternatively, }\nabla^2U=0\text{)}.

Here, Δ\Delta or 2\nabla^2 is the Laplacian/Laplace operator defined as ΔU=Uxx+Uyy\Delta U = U_{xx} + U_{yy} or more generally in Rn\mathbb R^n, Δ=j=1nUjj\Delta = \sum_{j=1}^n U_{jj}. This is used to model many physical situations in “steady state”.

Motivation

Take a region ΩR2\Omega \subset \mathbb R^2 or R3\mathbb R^3. Let Λ\Lambda be a “sufficiently smooth subdomain of Ω\Omega”. Some intuition is that an arbitrary point x\mathbf x on the Λ\partial \Lambda has an external normal, denoted ν(x)\boldsymbol{\nu}(\mathbf x) with unit normal ν(x)\boldsymbol{\nu}'(\mathbf x).

UU is the density of something “in equilibrium”, and F\mathbf F is the flux density of UU in Ω\Omega “in equilibrium”.

This means that along the boundary of Λ\Lambda, ΛFνdS=0, \int_{\partial \Lambda} \mathbf F \cdot \boldsymbol{\nu}'\,dS = 0, where dSdS is the surface measure on Λ\partial \Lambda (i.e. one dimension lower). This means the net in-flow and out-flow are equal. In terms of fluids, this means there are no sources and sinks.

We apply Gauss divergence theorem with the above integral which tells us that ΛFνdS=ΛdivFdx=0 \int_{\partial \Lambda} \mathbf F \cdot \boldsymbol{\nu}'\,dS=\int_\Lambda \operatorname{div}\mathbf F\,d\mathbf x = 0 where dx=dxdyd\mathbf x = dx\,dy in 2D, etc. Since Λ\Lambda is essentially arbitrary, there holds divF=0\operatorname{div}\mathbf F = 0 in Ω\Omega. That is, j=1njFj=0\sum_{j=1}^n \partial_j F_j = 0 in Ω\Omega.

In many physical situations, F=cU\mathbf F = c \nabla U with cc usually negative (corresponding to repelling forces). This means that divF=cdivU=0    divU=ΔU=0. \begin{aligned} \operatorname{div}\mathbf F &= c\operatorname{div}\nabla U = 0 \implies \operatorname{div} \nabla U = \Delta U = 0. \end{aligned}

Lecture 24 — Harmonic Conjugates

Recall from last lecture, conformal maps and Laplacian of harmonic functions.

If UU is the concentration of something “in equilibrium”, that implies (somewhat) that ΔU=0\Delta U = 0. There are many solutions to this in general (constants, linear, etc) however we are often interested in boundary conditions.

Can we also study Ut=αΔU\frac{\partial U}{\partial t} = \alpha \Delta U? As the left hand side approaches 0, the Laplacian approaches 0 and the system approaches steady state. This has many physical applications.

Examples: $U(\mathbf x) = \ln|\mathbf x|$ on $\mathbb R^2 \setminus \{0\}$ and $U(\mathbf x) = |\mathbf x|^{2-n}$ on $\mathbb R^n \setminus \{0\}$, $n \ge 3$.

Note that these are radial functions around 0. But how badly do they behave?

Theorem. If f(z)=u(x,y)+iv(x,y)f(z) = u(x,y) + iv(x,y) is analytic in ΩC\Omega \subseteq \mathbb C, then uu and vv are harmonic in Ω\Omega.

Proof. Recall that if ff is analytic then uu and vv have continuous partials of all orders and C/R holds. That is, ux=vyu_x = v_y and uy=vxu_y = -v_x. We can differentiate these and apply C/R again to get uxx=vyxuxy=vxxuyx=vyyuyy=vyx \begin{aligned} u_{xx} &= v_{yx} & u_{xy} &= -v_{xx} \\ u_{yx} &= v_{yy} & u_{yy} &= -v_{yx} \end{aligned} Since partials of all orders are continuous, by Clairaut’s theorem, uxy=uyxu_{xy} = u_{yx} and vxy=vyxv_{xy} = v_{yx}. Therefore, uxx=vyx=uyyu_{xx} = v_{yx} = -u_{yy} and similarly for vv, so Δu=0\Delta u = 0 and Δv=0\Delta v = 0. \square

Definition. If uu and vv are harmonic and satisfy C/R, then vv is called a (not the) harmonic conjugate of uu. Note that this is not symmetric.

Theorem. f=u+ivf = u+iv is analytic in Ω\Omega if and only if vv is a harmonic conjugate of uu.

Proof. ($\rightarrow$) is done above. ($\leftarrow$) If $v$ is a harmonic conjugate of $u$, then $u$ and $v$ are harmonic, so their first partial derivatives exist, are continuous, and satisfy C/R throughout $\Omega$. By the sufficient conditions for differentiability (C/R plus continuous first partials), $f$ is analytic. $\square$

Example: Suppose vv and ww are harmonic conjugates of uu. This means that u+ivu+iv and u+iwu+iw are both analytic. Applying C/R, ux=vy=wy,anduy=vx=wx. \begin{aligned} u_x &= v_y = w_y, \quad \text{and}\quad u_y = -v_x = -w_x. \end{aligned} Integrating the derivatives of vv and ww wrt their partial variable, we get v=w+ϕ(x)v = w + \phi(x) and v=w+ψ(y)v = w+\psi(y). Therefore, ϕ(x)=ψ(y)\phi(x) = \psi(y) which must be a constant. This means v=w+cv = w+c. \circ

A similar procedure can be used to find a harmonic conjugate of a given harmonic function uu.

Example: Find a harmonic conjugate of u(x,y)=y33x2yu(x,y) = y^3 - 3x^2y.

$u$ is a polynomial function of $x$ and $y$, so it has continuous partials of all orders; moreover, $u_{xx}+u_{yy} = 0$. Suppose $v$ is a harmonic conjugate of $u$. C/R tells us $u_x = v_y$, so $v_y = -6xy$. Integrating this with respect to $y$ gives $v = -3xy^2 + \phi(x)$. Using this in the second C/R equation,
$$ u_y = -v_x, \qquad 3y^2 - 3x^2 = 3y^2 - \phi'(x), \qquad \phi'(x) = 3x^2, \qquad \phi(x) = x^3 + c. $$
So we can choose $c = 0$, and $v(x,y) = -3xy^2+x^3$ is a harmonic conjugate of $u$. Note that in this example, $u=\operatorname{Re}f$ and $v=\operatorname{Im} f$ where $f(z) = iz^3$.
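As a symbolic check of this worked example (not from the notes; it assumes SymPy), the snippet below verifies that $u$ and $v$ are harmonic, that they satisfy C/R, and that $u + iv = iz^3$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = y**3 - 3*x**2*y
v = -3*x*y**2 + x**3

print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0  (u harmonic)
print(sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2)))   # 0  (v harmonic)
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))          # 0  (u_x = v_y)
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))          # 0  (u_y = -v_x)

z = x + sp.I * y
print(sp.expand(sp.I * z**3 - (u + sp.I * v)))             # 0
```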

Lecture 25 — Transformations of Harmonic Functions

Recall harmonic conjugates. That is, vv is a harmonic conjugate of uu if uu and vv satisfy C/R.

Remark: vv is a harmonic conjugate of uu does not imply uu is a harmonic conjugate of vv.

Example: u=x2y2u = x^2 - y^2 so v=2xyv = 2xy. Then, u+iv=z2u + iv = z^2 is an entire function (analytic everywhere). Therefore, vv is a harmonic conjugate of uu. However, if uu were actually a harmonic conjugate of vv, then v+iuv + iu would be analytic. We can check with C/R that this function is analytic nowhere.

Remark: Suppose uu is harmonic on a simply connected domain Ω\Omega. Then, uu has a harmonic conjugate on Ω\Omega. (§115, 8 Ed §104)

Physical problems

§116 (8 Ed §115)

“Physical” configurations are often modelled by solutions of partial differential equations. Generally, we are interested in solving a PDE subject to associated initial/boundary conditions.

For example,
$$ (D)\begin{cases} \Delta u = 0 & \text{in }\Omega, \\ u|_{\partial \Omega}=\varphi, \end{cases} $$
which means that $\Delta u = 0$ within $\Omega$ and $u = \varphi$ on the boundary. Here $\Omega$ and $\varphi$ are known and $u$ is unknown; in particular, $\varphi : \partial \Omega \to \mathbb R$. This (D) is called the Dirichlet problem for Laplace's equation, a.k.a. the boundary value problem of the first kind.

A practical interpretation is steady-state heat flow with the temperature prescribed on the boundary. (D) can be solved by finding a $u$ that minimises
$$ \int_\Omega |\nabla u|^2\,d\mathbf x \quad\text{such that}\quad u|_{\partial \Omega} = \varphi. $$
This can be done with the calculus of variations and functional derivatives.

There are also boundary conditions of the second kind, called Neumann boundary conditions:
$$ (N) \begin{cases} \Delta u = 0 & \text{in }\Omega,\\ \frac{\partial u}{\partial \boldsymbol{\nu}} = \psi & \text{on }\partial \Omega, \end{cases} $$
where $\boldsymbol{\nu}$ is the unit normal on the boundary and $\frac{\partial u}{\partial \boldsymbol{\nu}} = \nabla u(\mathbf x)\cdot \boldsymbol{\nu}(\mathbf x)$. In practice we often have homogeneous Neumann boundary conditions, i.e. $\psi = 0$; this is a no-flux (insulating) condition.

Transformations of harmonic functions


Theorem. Suppose $f = u + iv$ maps $\Omega$ conformally onto $\Lambda$ and $h$ is harmonic in $\Lambda$. Then $H(x,y) = h(u(x,y), v(x,y))$ is harmonic in $\Omega$.

Proof. Messy in general but straightforward when Λ\Lambda is simply connected. See §115 (8 Ed §104). \square

Example: Take h(u,v)=evsinuh(u,v) = e^{-v}\sin u which is harmonic on the upper half-plane. Define w=z2w = z^2 on Ω\Omega, the first quadrant. Thus, w=u+ivw = u+iv where u=x2y2u = x^2 - y^2 and v=2xyv = 2xy.


Applying this theorem, we know that H(x,y)=e2xysin(x2y2) H(x,y) = e^{-2xy} \sin (x^2-y^2) is harmonic on Ω\Omega. Note that Dirichlet and Neumann boundary conditions are preserved under conformal transformations (more next lecture). \circ

Lecture 26 — Bubbles, Boundary Transformations

We looked at soap film (last year). The key connection is the Neumann boundary conditions. Recall that harmonic functions can be used to minimise some sort of energy function.

In this case, the soap minimises internal potential energy which is done by minimising the surface area of the bubble. This leads to some interesting behaviour for tetrahedral and cubic wire frames with the edges meeting in the middle (as opposed to spanning the face planes).


Transformations

Suppose ff is conformal and CC is a smooth (infinitely differentiable) arc in Ω\Omega (or on the boundary of Ω\Omega with some care). Let Γ=f(C)\Gamma = f(C) and H(x,y)=h(u(x,y),v(x,y)). H(x,y) = h(u(x,y),v(x,y)).

Example: In C\mathbb C (called the ww-plane), the function h(u,v)=v=Imwh(u,v)=v=\operatorname{Im} w is harmonic. In particular, it is harmonic on the horizontal strip Λ\Lambda where π/2<Imw<π/2-\pi/2 < \operatorname{Im}w<\pi/2. We claim that f:zLogzf : z \mapsto \operatorname{Log}z maps Ω\Omega, the right half-plane, onto Λ\Lambda conformally.


Then, z=x+iyLogz=lnz+iArgz=lnx2+y2u+iarctan(y/x)iv    H(x,y)=h(u,v)=arctan(y/x) \begin{aligned} z =x+iy\mapsto \operatorname{Log}z &= \ln |z| + i \operatorname{Arg}z \\ &= \underbrace{\ln \sqrt{x^2 +y^2}}_{u} + \underbrace{i\arctan(y/x)}_{iv} \\ \implies H(x,y) &= h(u,v) =\arctan (y/x) \end{aligned} The boundary of Ω\Omega is of the form A={0+δi:δR}A = \{0+\delta i : \delta \in \mathbb R\}. Therefore, f(A)=LogA=lnA+iArgA=lnδ±iπ/2 \begin{aligned} f(A) = \operatorname{Log}A&= \ln |A| + i \operatorname{Arg}A \\ &= \ln |\delta| \pm i\pi/2 \end{aligned} which is exactly the boundary of Λ\Lambda.

Lecture 27 — Heat

Recall that harmonic functions can be mapped to harmonic functions.

Steady-state temperature in a half-plane

§119 (8 Ed §107).


Let Ω\Omega be the upper half-plane. We apply heat to the boundary such that the temperature is 11 between 1-1 and 11 and 00 everywhere else. We want to find the steady state temperature distribution on Ω\Omega.

Fourier's law of heat conduction gives the heat flux $\mathbf q = -k^2\nabla T$, and conservation of energy then gives
$$ \frac{\partial T}{\partial t} = -\nabla\cdot\mathbf q = k^2\Delta T. $$
In steady state this time derivative is $0$, so $\Delta T = 0$.

So, we want to solve (D){ΔT=0in Ω,T(x,0)={1x<10x1for xR. \begin{aligned} (D)\begin{cases} \Delta T = 0 & \text{in }\Omega,\\ T(x,0)=\begin{cases} 1 & |x| < 1 \\ 0 & |x| \ge 1 \end{cases} & \text{for }x \in \mathbb R. \end{cases} \end{aligned} Because the temperature being added is 11, the temperature on the plane is bounded between 00 and 11. However, allowing exponentially growing functions (in yy) will lead to non-physical solutions.

Note that in C\mathbb C (call it the ww-plane), h(u,v)=v=Imwh(u,v) = v = \operatorname{Im}w is harmonic. Back to (D), we are looking for a bounded solution with limyT(x,y)=0\lim_{y\to\infty}T(x,y)=0 for all xx.

Define Ω~={z:Imz0,z±1}\tilde \Omega = \{z : \operatorname{Im}z \ge 0, z \ne \pm 1\}, i.e. Ω\Omega and its boundary excluding the discontinuities. Define θ1,θ2,r1,r2\theta_1, \theta_2, r_1, r_2 on Ω~\tilde \Omega such that z1=r1exp(iθ1)z+1=r2exp(iθ2) \begin{aligned} z-1 &= r_1 \exp (i\theta_1) \\ z+1 &= r_2 \exp (i\theta_2) \end{aligned} Here, these are defining radial coordinates centred at +1+1 and 1-1. r1,r2>0r_1, r_2 > 0 and 0θ1,θ2π0 \le \theta_1, \theta_2 \le \pi.


We introduce the transformation
$$ w = \operatorname{\mathcal {Log}}\frac {z-1}{z+1}, $$
where $\operatorname{\mathcal {Log}}$ denotes the branch of the logarithm with cut along the negative imaginary axis, so that $-\pi/2 < \operatorname{\mathcal{Arg}} \le 3\pi/2$. Then,
$$ w = \operatorname{\mathcal{Log}}\frac{r_1\exp(i\theta_1)}{r_2\exp(i\theta_2)} = \ln \frac{r_1}{r_2} + i(\theta_1-\theta_2). $$
We claim that $w$ maps the interior of $\Omega$ onto $\Lambda$, the horizontal strip $0 < v<\pi$. We can look at points along the boundary of $\Omega$ and see where they map to on the boundary of $\Lambda$.


We have transformed our boundary conditions into a problem that can be solved much more easily. We just need a bounded harmonic function on $\Lambda$ satisfying $T|_{v=\pi}=1$ and $T|_{v=0}=0$; indeed, $v/\pi$ works. So,
$$ w = \ln \left|\frac{z-1}{z+1}\right| + i \operatorname{\mathcal{Arg}}\frac{z-1}{z+1} \implies v = \operatorname{\mathcal{Arg}}\left(\frac{z-1}{z+1}\cdot\frac{\overline{z+1}}{\overline{z+1}}\right) = \operatorname{\mathcal{Arg}}\left(\frac{x^2 + y^2 - 1 + 2iy}{(x+1)^2+y^2}\right) = \arctan\left(\frac{2y}{x^2+y^2-1}\right), $$
where $0 \le \arctan \le \pi$, with special care when $x^2 + y^2 = 1$. The solution is then
$$ T(x,y) = \frac 1 \pi \arctan \frac{2y}{x^2+y^2-1}. $$
We can check that this is bounded between 0 and 1. It can be visualised using colour, or with the isotherms $T(x,y)=c$, which are circular arcs
$$ x^2 + (y-\cot(\pi c))^2=\csc^2(\pi c). $$
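As a numerical sanity check of this solution (not part of the notes; it assumes NumPy, and uses `np.arctan2` to realise the branch of arctangent taking values in $[0,\pi]$ for $y > 0$), the sketch below estimates the Laplacian by finite differences and looks at the boundary values.

```python
import numpy as np

def T(x, y):
    return np.arctan2(2*y, x**2 + y**2 - 1) / np.pi

# Laplacian at an interior point via a 5-point finite-difference stencil.
x0, y0, h = 0.4, 0.7, 1e-3
lap = (T(x0+h, y0) + T(x0-h, y0) + T(x0, y0+h) + T(x0, y0-h) - 4*T(x0, y0)) / h**2
print(lap)                            # small (~0 up to discretisation error)

# Boundary behaviour as y -> 0+: temperature 1 on |x| < 1, 0 on |x| > 1.
print(T(0.5, 1e-9), T(2.0, 1e-9))     # ~1.0 and ~0.0
```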

Lecture 28 — Scale Factor, Poisson’s Integral Formula

Recall that a conformal map preserves angles, orientations and is 1-to-1. However, it can scale points.

Suppose $f : z \mapsto w$ is a conformal map (i.e. analytic with $f'(z_0) \ne 0$). For $z$ near $z_0$ with $z \ne z_0$,
$$ \frac{|f(z) - f(z_0)|}{|z-z_0|} \approx |f'(z_0)| \quad\implies\quad |f(z) - f(z_0)| \approx |f'(z_0)|\,|z-z_0|. $$
Here $|f'(z_0)|$ is the scaling factor or dilation factor, i.e. the magnitude of the stretching or shrinking effect.

Example: f(z)=z2f(z) = z^2 at z0=1+iz_0 = 1+i (here, z=x+iyz = x+iy and w=u+ivw = u+iv). Then, u=x2y2u=x^2-y^2 and v=2xyv = 2xy.

*(figure: a small region $A$ near $z_0 = 1+i$ and its image $B = f(A)$, with the tangent lines to the curves through $z_0$ and $f(z_0)$)*

Observe that the angles between the tangent lines are preserved under $f$. The scaling factor is $|f'(z_0)| = 2|z_0| = 2\sqrt 2$: a small length $\ell$ near $z_0$ is scaled by a factor of $2\sqrt2$, so $\ell' \approx 2 \sqrt 2\, \ell$ and also $\operatorname{Area}(B) \approx (2\sqrt 2)^2 \operatorname{Area}(A)$. The analogous relationship (with factor $|f'(z_0)|$) holds regardless of the curves $C_1$ and $C_2$, and at every point $z_0$ where $f'(z_0) \ne 0$.

Poisson’s integral formula

§135 (8 Ed §124)


Recall from Cauchy that if $f$ is analytic in and on $C_0$, then for $z \in \operatorname{Int}C_0$,
$$ f(z) = \frac1{2\pi i}\int_{C_0}\frac{f(\xi)}{\xi-z}\,d\xi. $$
Recall also that for $z = re^{i\theta}$, $r>0$, the inverse point to $z$ relative to the circle $C_0$ is $z^* = r^*e^{i\theta}$ with $r^*$ such that $r^*r = r_0^2$. Note that
$$ z^* = r^*e^{i\theta}=\frac{r_0^2}re^{i\theta} = \frac{r_0^2}{re^{-i\theta}}=\frac{r_0^2}{\bar z} = \frac{\xi \bar \xi}{\bar z} \quad\text{ for }\xi \in C_0. $$
Now fix $z \in \operatorname{Int}C_0$ with $z \ne 0$. Since $z^* \in \operatorname{Ext}C_0$, the map $\xi \mapsto\frac{f(\xi)}{\xi-z^*}$ is analytic in and on $C_0$, so the Cauchy–Goursat theorem gives
$$ \int_{C_0} \frac{f(\xi)}{\xi-z^*}\,d\xi=0. $$
Using this in the expression above,
$$ f(z) = \frac 1 {2\pi i} \int_{C_0} \underbrace{\left(\frac 1 {\xi-z}-\frac 1{\xi-z^*}\right)}_{I}f(\xi)\,d\xi, \qquad I = \left(\frac \xi {\xi-z} - \frac \xi {\xi - z^*}\right)\frac 1 \xi = \left(\frac \xi {\xi-z}-\frac 1{1-\bar \xi/\bar z}\right)\frac 1 \xi = \left(\frac \xi {\xi-z}-\frac{\bar z}{\bar z - \bar \xi}\right)\frac 1 \xi = \left(\frac{\xi\bar\xi-z\bar z}{|\xi-z|^2} \right)\frac 1 \xi. $$
Recall that $z = re^{i\theta}$. Put $\xi = r_0 e^{i\phi}$ for $0 \le \phi \le 2\pi$, so that $d\xi = r_0ie^{i\phi}\,d\phi$. Substituting this into the integrand,
$$ I = \frac{r_0^2-r^2}{|\xi-z|^2}\cdot \frac 1 {r_0e^{i\phi}}. $$
This is mostly in nice radial coordinates, except for the $|\xi-z|$ part. Can we rewrite this? Consider the diagram below.

*(figure: the triangle with vertices $0$, $z = re^{i\theta}$ and $\xi = r_0e^{i\phi}$, used for the cosine rule)*

We can appeal to the cosine rule which tells us that ξz2=r02+r22r0rcos(ϕθ). |\xi-z|^2 = r_0^2 + r^2 - 2r_0r\cos(\phi-\theta). Plugging this back into f(z)f(z), we have f(z)=f(reiθ)=12πi02πr02r2ξz2f(r0eiϕ)r0ieiϕr0eiϕdϕ=r02r22π02πf(r0eiϕ)r022r0rcos(ϕθ)+r2dϕ \begin{aligned} f(z)=f(re^{i\theta}) &= \frac 1 {2\pi i}\int_0^{2\pi}\frac{r_0^2-r^2}{|\xi-z|^2}\cdot\frac{f(r_0e^{i\phi})r_0ie^{i\phi}}{r_0e^{i\phi}}\,d\phi \\ &= \frac{r_0^2 - r^2}{2\pi}\int_0^{2\pi} \frac{f(r_0e^{i\phi})}{r_0^2 -2r_0r\cos(\phi-\theta)+r^2}\,d\phi \end{aligned}

Recall that the real part of an analytic function is harmonic. Taking real parts of the above expression, and given "nice enough" boundary data $\Phi(r_0, \phi)$ on the boundary $C_0$ of $B_{r_0}$, a (in fact, the) solution of the Dirichlet problem
$$ \begin{cases} \Delta u = 0 & \text{in }B_{r_0}, \\ u|_{\partial B_{r_0}} = \Phi(r_0, \phi)& \text{on }\partial B_{r_0}, \end{cases} $$
is given by
$$ u(r, \theta) = \frac 1 {2\pi}\int_0^{2\pi}\underbrace{\frac{r_0^2-r^2}{r_0^2-2r_0r\cos(\phi-\theta)+r^2}}_{P(r_0,r,\phi,\theta)}\Phi(r_0, \phi)\,d\phi. $$
The middle fraction in the integrand is called the Poisson kernel, denoted $P(r_0,r,\phi,\theta)$, after Poisson. This is valid for $r=0$ too!
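As a numerical sanity check (not part of the notes; it assumes NumPy, and the helper name `poisson` is my own), take $r_0 = 1$ and boundary data $\Phi(\phi) = \cos\phi$, the boundary trace of the harmonic function $u = \operatorname{Re} z = r\cos\theta$; the Poisson integral should reproduce $r\cos\theta$ at interior points.

```python
import numpy as np

def poisson(Phi, r, theta, r0=1.0, n=200_001):
    """Trapezoid-rule approximation of the Poisson integral on the disc of radius r0."""
    phi = np.linspace(0.0, 2 * np.pi, n)
    P = (r0**2 - r**2) / (r0**2 - 2*r0*r*np.cos(phi - theta) + r**2)
    vals = P * Phi(phi)
    dphi = phi[1] - phi[0]
    return np.sum((vals[:-1] + vals[1:]) / 2) * dphi / (2 * np.pi)

r, theta = 0.5, 0.7
print(poisson(np.cos, r, theta))   # ~0.38242...
print(r * np.cos(theta))           # 0.5 * cos(0.7) = 0.38242...
```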

Lecture 29 — Sequences and Series

The first part of this lecture was finishing the Poisson integral formula and is written in the previous document.

Complex sequences & series

§60 (8 Ed §55)

Compare this to the situation in R\mathbb R. Formally, a sequence is a function f:NCf : \mathbb N \to \mathbb C (or N0C\mathbb N _0\to \mathbb C), nznn \mapsto z_n, written as {zn}\{z_n\}.

Definition (Limit). We say limnzn=z\lim_{n \to \infty}z_n = z or “{zn}\{z_n\} converges to zz” if and only if given ϵ>0\epsilon > 0, there exists NNN \in \mathbb N such that n>Nn > N implies znz<ϵ|z_n-z|<\epsilon. As in R\mathbb R, this definition does not help us find a limit.

Definition (Series). Formally, n=0zn\sum_{n=0}^\infty z_n for znCz_n \in \mathbb C converges as a series if and only if the associated sequence of partial sums {sn}\{s_n\} converges as a sequence, where sn=k=0nzks_n = \sum_{k=0}^n z_k.

A typical question we might ask is: does $\sum z_n$ converge?

An easy test that a series does not converge is the $n$-th term test: if $\sum z_n$ converges, then $z_n \to 0$ (the converse does not hold). Once we know a series converges, $\sum z_n$ is just a complex number.

Remark: A sequence {zn}\{z_n\} is bounded if there exists MM such that zn<M|z_n|<M for all nn.

Convergent implies that a sequence is bounded (converse does not hold, see {(1)n}\{(-1)^n\}).

Definition (Absolute convergence). We say that zn\sum z_n converges absolutely if and only if zn\sum |z_n| converges. Absolute convergence implies convergence (converse does not hold, see (1)n/n\sum (-1)^n/n).

Definition (Remainder). Given n=0zn\sum_{n=0}^\infty z_n, set sn=k=0nzks_n=\sum_{k=0}^n z_k as the partial sums. Then, let ρn=k=n+1zk\rho_n = \sum_{k=n+1}^\infty z_k as the tail or remainder.

Theorem. snss_n \to s if and only if ρn0\rho_n \to 0.

Example: We claim that $\sum_{n=0}^\infty z^n = \frac 1{1-z} = s$ for $|z| < 1$.

Proof.
$$ \begin{aligned} s_n &= 1 + z + \cdots + z^n, \\ zs_n &= z + z^2 + \cdots + z^{n+1}, \\ \implies (1-z)s_n &= 1 - z^{n+1}, \qquad s_n = \frac{1-z^{n+1}}{1-z}, \\ \implies \rho_n=s-s_n&= \frac {z^{n+1}}{1-z}. \end{aligned} $$
Since $|z|<1$, $|\rho_n| \to 0$ as $n \to \infty$, which implies $s_n \to s$. $\square$

Remark: As in $\mathbb R$, we can do simple operations on convergent series: termwise sums, differences and constant multiples of convergent series converge to the corresponding sums, differences and multiples.

Definition (Power series). A power series centred at $z_0$ is a series of the form
$$ \sum_{n=0}^\infty a_n(z-z_0)^n. $$
Such a series has a radius of convergence $R$: it converges absolutely within this radius, diverges outside, and may converge or diverge on the boundary. (These can be checked with the ratio test.)

If R=0R = 0, the series converges only at z0z_0. If R=R = \infty, it converges on all of C\mathbb C.

Lecture 30 — Taylor Series and Taylor’s Theorem

Theorem (Taylor’s Theorem). Let ff be analytic on BR(z0)B_R(z_0). Then, ff has a power series representation on BR(z0)B_R(z_0) for zz0<R|z-z_0|<R as f(z)=n=0an(zz0)nwherean=f(n)(z0)n!. f(z) = \sum_{n=0}^\infty a_n(z-z_0)^n\quad \text{where}\quad a_n = \frac{f^{(n)}(z_0)}{n!}. Note that this is an incredibly powerful statement; unlike R\mathbb R, the function is given by the power series. However, the analytic condition is very restrictive.

For the case of z0=0z_0=0, this is called the Maclaurin series.

Example: What is the Maclaurin series of $f(z) = e^z$? $f$ is entire, which means that $R=\infty$. Also, $f^{(n)}(0) = e^0=1$ for all $n$. Thus,
$$ e^z = \sum_{n=0}^\infty \frac {z^n}{n!}. $$
Proof (of Taylor's theorem). Assume $z_0 = 0$; otherwise translate. Choose $z \in B_R$, let $|z|=r$ and fix $r_0 \in (r, R)$. Set $C = C_{r_0}$, positively oriented.


The Cauchy integral formula tells us that the value of this analytic function at $z$ is
$$ f(z) = \frac 1 {2\pi i}\int_C \frac{f(\xi)}{\xi-z}\,d\xi. $$
Looking at the integrand,
$$ \frac 1 {\xi-z} = \frac 1 \xi \left(\frac 1 {1-z/\xi}\right) = \frac 1 \xi \left(\sum_{n=0}^{N-1}(z/\xi)^n + \frac{(z/\xi)^N}{1-z/\xi}\right) = \sum_{n=0}^{N-1}\frac {z^n}{\xi^{n+1}} + \frac{z^N}{(\xi-z)\xi^N}. $$
The integral becomes
$$ f(z) = \frac 1 {2\pi i}\int_C \frac{f(\xi)}{\xi-z}\,d\xi = \sum_{n=0}^{N-1}\frac 1{2\pi i}\int_C \frac{f(\xi)z^n}{\xi^{n+1}}\,d\xi + \underbrace{\frac{z^N}{2\pi i} \int_C \frac{f(\xi)}{(\xi-z)\xi^N}\,d\xi}_{\rho_{N-1}(z)}. $$
We call the rightmost part $\rho_{N-1}(z)$. Using the Cauchy integral formula together with its extension for derivatives, we get
$$ f(z) = \sum_{n=0}^{N-1}\frac{f^{(n)}(0)z^n}{n!} + \rho_{N-1}(z). $$

At this point, we’d like to show that limNρN1(z)=0\lim_{N \to \infty}\rho_{N-1}(z) = 0. Note that ξC    ξ=r0\xi \in C \implies |\xi| = r_0. Suppose there exists MNM_N such that we can bound the integrand with f(ξ)(ξz)ξNMNon C. \left|\frac{f(\xi)}{(\xi-z)\xi^N}\right| \le M_N \quad \text{on }C. Then, we would be able to say ρN1(z)rN2πMN(C)=r0rNMN. |\rho_{N-1}(z)| \le \frac{r^N}{2\pi}M_N \ell(C) = r_0r^N M_N. To find such an MNM_N, ff is analytic implies f|f| is continuous. CC is closed and bounded, so extreme value theorem (even just of a single parameter along the curve) implies there exists μ\mu such that fμ|f| \le \mu on CC. Furthermore, ξN=r0N|\xi|^N = r_0^N. Using reverse triangle inequality, ξzξz=r0r|\xi-z| \ge \left||\xi|-|z|\right| = r_0-r (note direction of inequality because this is in the denominator).

Putting this all together, $M_N = \frac \mu {r_0^N (r_0-r)}$ suffices for what we want. Then,
$$ |\rho_{N-1}(z)| \le r_0r^N M_N = \frac{r_0r^N\mu}{r_0^N(r_0-r)} = \frac{\mu r_0}{r_0-r}\left(\frac r {r_0}\right)^N. $$
Therefore, $|\rho_{N-1}(z)| \to 0$ as $N \to \infty$ because $r/r_0 < 1$. $\square$

Remark: In R\mathbb R, a Taylor series might converge but fail to converge to the function (see Lecture 16).

To calculate the radius of convergence of a power series $\sum a_n (z-z_0)^n$, we can use the ratio test. First compute
$$ \Lambda = \lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right|, \qquad \text{then} \qquad R = \frac 1 \Lambda. $$
Conventionally, $\Lambda=0 \iff R= \infty$ and $\Lambda = \infty \iff R=0$. Note that the ratio here is the same as in the usual ratio test; the radius of convergence is its reciprocal.

Example: $f(z) = e^z$. For this function, $R = \infty$ because
$$ e^z = \sum_{n=0}^\infty \frac {z^n}{n!} \quad \implies\quad \Lambda = \lim_{n \to \infty} \frac{1/(n+1)!}{1/n!} = \lim_{n \to \infty} \frac 1{n+1} = 0. $$
Example: Find the Maclaurin series of $f(z) = z^2 e^{3z}$. Note that $f$ is entire, so
$$ e^{3z} = \sum_{n=0}^\infty \frac{3^nz^n}{n!} \implies z^2 e^{3z} = \sum_{n=0}^\infty \frac{3^nz^{n+2}}{n!} = \sum_{n=2}^{\infty} \frac{3^{n-2}z^n}{(n-2)!}. $$
The reindexing in the last step is so that the series displays the powers $z^n$ directly.
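As a numerical cross-check (not from the notes; it assumes NumPy), the sketch below computes the Maclaurin coefficients of $z^2e^{3z}$ as contour integrals $a_n = \frac{1}{2\pi i}\int_C f(z)/z^{n+1}\,dz$ over the unit circle and compares them with $3^{n-2}/(n-2)!$.

```python
import numpy as np
from math import factorial

theta = np.linspace(0.0, 2 * np.pi, 400_001)
z = np.exp(1j * theta)
dz = np.diff(z)
zm = (z[:-1] + z[1:]) / 2

f = lambda z: z**2 * np.exp(3 * z)

for n in range(6):
    a_n = np.sum(f(zm) / zm**(n + 1) * dz) / (2j * np.pi)
    expected = 0.0 if n < 2 else 3**(n - 2) / factorial(n - 2)
    print(n, np.round(a_n.real, 6), expected)   # 0, 0, 1, 3, 4.5, 4.5
```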

Lecture 31 — Laurent Series, Residues at Poles

Example: Consider the Maclaurin series of $f(z) = 1/(1-z)$. Note that $f$ is analytic for $|z| < 1$ and indeed on $\mathbb C \setminus \{1\}$. Moreover,
$$ f^{(n)}(z) = \frac{n!}{(1-z)^{n+1}}, \quad z \ne 1. $$
In particular, $f^{(n)}(0) = n!$. This tells us that the Taylor series of $f$ at $0$ is
$$ T_{f,0}(z) = \sum_{n=0}^\infty z^n. $$
This has $\Lambda = \lim_{n \to \infty}1/1=1$, which implies $R = 1$. Taylor's theorem implies that $T_{f,0}$ converges to $f$ for $|z| < 1$.

Some things to note here. This is exactly the geometric series formula, which says 11z=n=0zn\frac 1 {1-z}=\sum_{n=0}^\infty z^n for z<1|z|<1. The series converges “out to the first singularity”, here 11.

Example: Find the Maclaurin series of $1/(2+4z)$, which is analytic on $\mathbb C \setminus \{-1/2\}$. We can manipulate it into the familiar form above:
$$ \frac 1 {2+4z} = \frac{1/2}{1+2z} = \frac {1/2}{1-(-2z)}=\frac 1 2 \sum_{n=0}^\infty(-2z)^n = \sum_{n=0}^\infty (-1)^n 2^{n-1}z^n, \quad \text{for }|{-2z}|<1. $$
Example: $f(z) = (1+2z^2)/(z^3+z^5)$. Some clever algebra tricks lead to
$$ f(z) = \frac 1{z^3}\left(\frac{2+2z^2}{1+z^2}-\frac 1 {1+z^2}\right) = \frac 1 {z^3}\left(2 - \frac 1 {1+z^2}\right), $$
which is analytic on $\mathbb C \setminus \{0, \pm i\}$. Note that $f$ has no Maclaurin series, since it is not analytic at $0$. For $0 < |z|<1$, using $1/(1+z^2) = \sum_{n=0}^\infty (-1)^n z^{2n}$,
$$ f(z) = \frac 1 {z^3} \bigl(2 - (1 - z^2 + z^4 - \cdots)\bigr) = \frac 1 {z^3} + \frac 1 z - z + z^3 - \cdots. $$
Although this is not defined at $0$, it is still useful. It is almost a power series, but has some terms involving negative exponents of $z$.

Laurent series

By Weierstrass (1841) / Laurent (1843).


Theorem. Let $f$ be analytic on the open annulus (donut-like shape) $A = \{z : r_1 < |z-z_0| < r_2\}$, centred at $z_0$. Let $C$ be a positively oriented, simple, closed curve in $A$ going around $z_0$. Then, for every $z \in A$, $f$ has a series representation, the Laurent series,
$$ f(z) = \sum_{n=0}^\infty a_n (z-z_0)^n + \sum_{n=1}^\infty \frac{b_n}{(z-z_0)^n}, $$
where
$$ a_n = \frac 1 {2\pi i}\int_C \frac{f(\xi)}{(\xi-z_0)^{n+1}}\,d\xi, \qquad b_n = \frac 1 {2\pi i}\int_C \frac{f(\xi)}{(\xi-z_0)^{-n+1}}\,d\xi. $$
The $b_n$'s are the coefficients of the terms with negative exponents (notice the negative power of $(\xi-z_0)$ in the formula for $b_n$ and the division by $(z-z_0)^n$ in the series).

Alternatively, this can be written as
$$ f(z) = \sum_{n=-\infty}^\infty c_n(z-z_0)^n, \quad \text{where }c_n = \frac 1 {2\pi i}\int_C \frac{f(\xi)}{(\xi-z_0)^{n+1}}\,d\xi. $$
In particular, for a Laurent series,
$$ b_1 = \frac 1 {2\pi i} \int_C \frac{f(\xi)}{(\xi-z_0)^{-1+1}}\,d\xi = \frac 1 {2\pi i}\int_C f(\xi)\,d\xi. $$
This means that if we know the Laurent series, we know the value of this contour integral. This is so important that it has a name: the residue of $f$ at $z_0$, denoted $\operatorname{res}_{z=z_0}f(z)$. The residue of a function at a point where it is analytic is zero; for example, $1/z$ has residue $1$ at the origin and residue $0$ at every other point. We will be particularly concerned with this coefficient of $(z-z_0)^{-1}$.

Notes:

Example: Find the Laurent series of $e^{1/z}$. We can just substitute $1/z$ into the exponential Taylor series to get
$$ e^{1/z}= \sum_{n=0}^\infty \frac 1 {n!\, z^n} = 1 + \frac 1 z + \frac 1 {2!\,z^2} + \cdots $$
for all $|z| > 0$. Here there is only one term in the Taylor-series (non-negative power) part, namely $1$. Looking at the coefficient of $z^{-1}$, we know that
$$ \frac 1 {2\pi i}\int_C e^{1/\xi}\,d\xi = b_1 = 1 $$
for every positively oriented circle $C$ about the origin.
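As a numerical check (not from the notes; it assumes NumPy), the sketch below approximates $\int_C e^{1/\xi}\,d\xi$ over positively oriented circles of several radii; each should be close to $2\pi i$.

```python
import numpy as np

for radius in (0.5, 1.0, 3.0):
    theta = np.linspace(0.0, 2 * np.pi, 400_001)
    z = radius * np.exp(1j * theta)
    dz = np.diff(z)
    zm = (z[:-1] + z[1:]) / 2
    I = np.sum(np.exp(1 / zm) * dz)
    print(radius, I)                   # each ~ 0 + 6.283185...j = 2*pi*i
```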

Example 7: Compute I=C5z2z(z1)dz I = \int_C \frac{5z-2}{z(z-1)}\,dz where C=2eiθC = 2e^{i\theta} for θ[0,2π]\theta \in [0, 2\pi]. Note that the integrand f(z)=5z2z(z1)f(z) = \frac{5z-2}{z(z-1)} is analytic everywhere except for 0 and 1.

Residues and poles

Definition. We say $f : \Omega \to \mathbb C$ has a singularity at $z_0$ if $f$ fails to be analytic at $z_0$ (it may not even be defined there), but every neighbourhood of $z_0$ contains a point at which $f$ is analytic.

In particular, we say ff has an isolated singularity at z0z_0 if ff is analytic on Bϵ(z0){z0}B_\epsilon(z_0)\setminus \{z_0\} for some ϵ>0\epsilon > 0.

Examples:

Lecture 32 — Cauchy Residue and Product, Singularities

Remark: f(z)=1/(zi)2f(z) = 1/(z-i)^2 is already a Laurent series about ii with b2=1b_2 = 1 and other coefficients zero.

Cauchy product of series

§73 (8 Ed §67)

Theorem. Suppose $f(z) = \sum_{n=0}^\infty a_n z^n$ and $g(z) = \sum_{n=0}^\infty b_nz^n$ both converge at $z$ (inside both radii of convergence). Then $(fg)(z) = \sum_{n=0}^\infty c_nz^n$ there, where $c_n = \sum_{k=0}^n a_k b_{n-k}$.

Example: Consider for z<1|z|<1, ez1+z=ez11(z)=(1+z+z22!+)(1z+z2)=1+(1+1)z+(1+1/21)z2+=1+z22+ \begin{aligned} \frac {e^z}{1+z} = e^z \frac{1}{1-(-z)} &= \left(1 + z + \frac {z^2}{2!} + \cdots\right)\left(1-z+z^2-\cdots\right) \\ &= 1 + (-1+1)z + (1+1/2-1)z^2 + \cdots \\ &= 1 + \frac{z^2}2 + \cdots \end{aligned} Remark: We can take term-by-term derivatives and integrals of series (see §71 or 8 Ed §65).

Cauchy residue theorem

Theorem. Suppose CC is a positively oriented simple closed curve and that ff is analytic in and on CC except at finitely many isolated points {z1,z2,,zk}\{z_1, z_2, \ldots, z_k\}. Then, Cf(z)dz=2πij=1kresz=zjf(z). \int_C f(z)\,dz = 2\pi i \sum_{j=1}^k \underset{z=z_j}{\operatorname{res}}f(z). Proof. Take disjoint positively oriented circles C1,,CkC_1, \ldots, C_k around each z1,,zkz_1, \ldots, z_k with disjoint interiors, all lying in the interior of CC. Then, C,C1,,CkC, C_1, \ldots, C_k form the boundary of a multiply-connected domain Ω\Omega. Then, ff is analytic on Ω\Omega and its boundary, so the Cauchy-Goursat extension implies that Cf(z)dz=j=1kCjf(z)dz=2πij=1kresz=zjf(z). \int_C f(z)\,dz = \sum_{j=1}^k \int_{C_j}f(z)\,dz = 2\pi i \sum_{j=1}^k \operatorname{res}_{z=z_j}f(z). \quad \square Example: Apply this to example 7 from last lecture. Note that ff is analytic on C{0,1}\mathbb C \setminus \{0,1\} and CC is the circle of radius 2 around the origin. I=C5z2z(z1)dz. I = \int_C \frac{5z-2}{z(z-1)}\,dz. There are two methods we can try. First, look close to zero with 0<z<10<|z|<1 to see that f(z)=11z5z2z=(1)(1+z+z2+)(52/z)    resz=0f(z)=2 \begin{aligned} f(z) = -\frac{1}{1-z}\cdot\frac{5z-2}{z} &= (-1)(1+z+z^2+\cdots)(5-2/z) \\ \implies \underset{z=0}{\operatorname{res}}f(z) &= 2 \end{aligned} Now, we also need a Laurent series around 11 (which is even less fun). We can write f(z)=5(z1)+3z111((z1))=(5+3z1)(1+((z1))+((z1))2+)    resz=1f(z)=3 \begin{aligned} f(z) &= \frac{5(z-1)+3}{z-1}\cdot\frac{1}{1-(-(z-1))} \\ &= \left(5+\frac3{z-1}\right)(1+(-(z-1))+(-(z-1))^2+\cdots) \\ &\qquad\implies \underset{z=1}{\operatorname{res}}f(z) = 3 \end{aligned} Because the only coefficient of (z1)1(z-1)^{-1} comes from the 3/(z1)3/(z-1).

Alternatively, we can use partial fractions. Note that $f$ decomposes into parts which are analytic on $\mathbb C \setminus \{1\}$ and $\mathbb C \setminus \{0\}$, respectively:
$$ f(z) = \frac{3}{z-1}+\frac 2 z. $$
This is, in effect, its own Laurent series around each of the points $0$ and $1$: near each singularity, the other fraction is analytic and so does not contribute to the negative-power terms. Therefore,
$$ \operatorname{res}_{z=0}f(z) = 2, \qquad \operatorname{res}_{z=1}f(z)=3, $$
and either way $I = 2\pi i(2 + 3) = 10\pi i$. Note that the expression $3/(z-1)$ describes, in some sense, the prototypical singularity of a whole family of functions.
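As a numerical confirmation of Example 7 (not from the notes; it assumes NumPy), the sketch below approximates the contour integral directly over the circle $|z| = 2$ and compares with $10\pi i$.

```python
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 400_001)
z = 2 * np.exp(1j * theta)
dz = np.diff(z)
zm = (z[:-1] + z[1:]) / 2

I = np.sum((5 * zm - 2) / (zm * (zm - 1)) * dz)
print(I, 10j * np.pi)                  # both approximately 0 + 31.4159...j
```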

Classifying isolated singularities

If z0z_0 is an isolated singularity of ff (i.e. ff analytic on a deleted neighbourhood), then there exists RR such that ff has a Laurent series expansion on BR(z0){z0}B_R(z_0) \setminus \{z_0\} given by f(z)=n=0an(zz0)n+n=1bn(zz0)n. f(z) = \sum_{n=0}^\infty a_n(z-z_0)^n + \sum_{n=1}^\infty b_n(z-z_0)^{-n}. There are three cases to consider.

Lecture 33 — Picard’s Theorem, More Singularities

Recall that isolated singularities fall into three cases. These are points in the complex plane where a function runs into trouble, but is fine in a punctured disk around that point. The three cases are removable, poles, and essential singularities.

These can be described by the form of the Laurent series on a deleted ball $B_r(z_0) \setminus\{z_0\}$:
$$ \begin{aligned} &\text{(Case I, removable)} &&\sum_{n \ge 0}a_n(z-z_0)^n, \\ &\text{(Case II, pole of order }N\text{)} &&\sum_{n \ge -N}a_n(z-z_0)^n, \quad N\ge 1,\ a_{-N} \ne 0, \\ &\text{(Case III, essential)} && \sum_{n=-\infty}^\infty a_n (z-z_0)^n \text{ with infinitely many non-zero negative-power coefficients.} \end{aligned} $$
Example: $f(z) = \sin (z) /z$ is a function $f:\mathbb C^* \to \mathbb C$ with a singularity at $z=0$. We can expand $\sin$ into its Taylor series:
$$ \sin z = \sum_{n=0}^\infty\frac{(-1)^nz^{2n+1}}{(2n+1)!} \implies f(z) = \frac 1 z\left(z - \frac {z^3}{3!}+ \cdots\right) = 1 - \frac{z^2}{3!} + \cdots. $$
All powers of $z$ here are non-negative, so the singularity is removable. If we wanted an entire function, we could set $\hat f = f$ on $\mathbb C^*$ and $\hat f(0) = a_0=1$. Also, the residue at $0$ is $0$.

Example: $f(z) = \frac{z^2-1}{z^2-1}$, a function $f : \mathbb C \setminus \{\pm 1\} \to \mathbb C$; because of its domain it is not literally the constant function $1$. There are (clearly) two removable singularities at $\pm 1$. We cannot simply equate this function with $1$, but we can define $\hat f : \mathbb C \to \mathbb C$ with $\hat f(z) = 1$. This 'fixes' the singularities in such a way that you could never tell they were there.

Example: f(z)=1/z4f(z) = 1/z^4 is again defined on C\mathbb C^*. There is an isolated singularity at z=0z=0 and this is a pole of order 4.

Example: $f(z) = 1/z^4 + 1/z^2$ is the same story. The pole is still of order 4; only the most negative power matters for the order.

Example: f(z)=sinh(z2)/z7f(z) = \sinh (z^2)/z^7, with f:CCf : \mathbb C^* \to \mathbb C. Remember that sinhz=n=0z2n+1(2n+1)!\sinh z = \sum_{n=0}^\infty \frac{z^{2n+1}}{(2n+1)!}. Then, f(z)=1z7n0z4n+2(2n+1)!=n0z4n5(2n+1)!=1z5+1z13!+ f(z) = \frac 1 {z^7}\sum_{n \ge 0} \frac{z^{4n+2}}{(2n+1)!} = \sum_{n \ge 0}\frac{z^{4n-5}}{(2n+1)!} = \frac 1 {z^5} + \frac 1 z \frac 1 {3!}+\cdots We have a pole of order 5 and the residue is 1/3!1/3! in this case.

Example: f(z)=e1/zf(z) = e^{1/z}. In this case, f(z)=n=0(1/z)nn!=1+1z+1z212!+ f(z) = \sum_{n=0}^\infty \frac{(1/z)^n}{n!} = 1 + \frac 1 z + \frac 1 {z^2}\frac 1 {2!} + \cdots so we have an essential singularity at z=0z=0 and the residue is 1.

Picard’s theorem

(This is the big version.)

Theorem. Suppose $f$ has an essential singularity at $z_0$, and let $R > 0$ be arbitrarily small. Then on $B_R(z_0) \setminus\{z_0\}$, the function $f$ attains every value in $\mathbb C$ infinitely often, with the possible exception of one value.

Theorem. If ff has a pole of order NN at z0z_0, then in Br(z0){z0}B_r(z_0)\setminus \{z_0\} we can write f(z)=ϕ(z)(zz0)N f(z) = \frac{\phi(z)}{(z-z_0)^N} with ϕ\phi analytic in BrB_r and ϕ(z0)0\phi(z_0)\ne 0. Moreover, resz=z0f(z)=ϕ(N1)(z0)(N1)!. \operatorname{res}_{z=z_0} f(z) = \frac{\phi^{(N-1)}(z_0)}{(N-1)!}.

Lecture 34 — Zeros, Poles and Cauchy Principal Value

Is $f(z) = 1/(1-z^{-1})$, $z \ne 0$, an example of an essential singularity? What kind of singularity does it have at $0$? Observe,
$$ f(z) = \frac {1}{1-\frac 1 z}=\frac {z}{z-1} = \frac{-z}{1-z}=-z(1+z+z^2+\cdots) \quad \text{for}\quad 0 < |z| < 1. $$
Thus the singularity is removable, because the Laurent series has only non-negative powers. Taking $f(0) = 0$ extends $f$ to an analytic function on $\mathbb C \setminus \{1\}$. Note also that this Laurent series is only valid out to the singularity at $z=1$.

Recall that if ff has a pole of order nn at z0z_0 and we can write f(z)=ϕ(z)/(zz0)nf(z) = \phi(z)/(z-z_0)^n for ϕ\phi analytic in Br(z0)B_r(z_0) and ϕ(z0)0\phi(z_0)\ne 0, then Resz=z0f=ϕ(n1)(z0)(n1)!. \operatorname{Res}_{z=z_0}f= \frac {\phi^{(n-1)}(z_0)}{(n-1)!}. In particular, if z0z_0 is a simple pole, then Resz=z0f=ϕ(z0)\operatorname{Res}_{z=z_0}f=\phi(z_0).

Example: Consider f(z)=(z+i)/(z2+9)f(z) = (z+i)/(z^2+9). ff is analytic on C{±3i}\mathbb C \setminus\{\pm 3i\}, and ±3i\pm 3i are isolated singularities. Near z=3iz=3i, we can write f(z)=ϕ(z)z3iwhereϕ(z)=z+iz+3i f(z) = \frac {\phi(z)}{z-3i}\quad \text{where}\quad \phi(z) = \frac {z+i}{z+3i} noting that ϕ\phi is analytic and non-zero near 3i3i. The theorem tells us that Resz=3if=ϕ(3i)=4i/6i=2/3\operatorname{Res}_{z=3i}f=\phi(3i)=4i/6i=2/3. We can do the same thing at 3i-3i as well.

Example: f(z)=(z3+2z)/(zi)3f(z) = (z^3+2z)/(z-i)^3. Observe that this is analytic except at ii. Near ii, f(z)=ϕ(z)(zi)3andϕ(z)=z3+2z f(z) = \frac {\phi(z)}{(z-i)^3} \quad \text{and}\quad \phi(z) = z^3+2z with ϕ\phi analytic and ϕ(i)0\phi(i) \ne 0. Using the same theorem, Resz=if=ϕ(i)/2!=3i\operatorname{Res}_{z=i}f={\phi''(i)}/{2!}=3i.
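As a symbolic check of these two examples via the $\phi$-formula (not from the notes; it assumes SymPy, and the names `phi1`, `phi2` are my own), the snippet below recovers the residues $2/3$ and $3i$.

```python
import sympy as sp

z = sp.symbols('z')

# (z + i)/(z^2 + 9) at z0 = 3i: phi(z) = (z + i)/(z + 3i), simple pole (N = 1).
phi1 = (z + sp.I) / (z + 3*sp.I)
print(sp.simplify(phi1.subs(z, 3*sp.I)))                                # 2/3

# (z^3 + 2z)/(z - i)^3 at z0 = i: phi(z) = z^3 + 2z, pole of order N = 3.
phi2 = z**3 + 2*z
print(sp.simplify(sp.diff(phi2, z, 2).subs(z, sp.I) / sp.factorial(2)))  # 3*I
```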

Zeros of functions

§82 (8 Ed §75)

We can generalise the theorem to talk about zeros of functions, because poles are essentially zeros of the denominator.

Lemma. If f is analytic at z_0, then f has a zero of order m at z_0 if and only if \begin{cases} f^{(j)}(z_0)=0& \text{for }j=0, \ldots, m-1, \text{ and} \\ f^{(m)}(z_0)\ne 0. \end{cases} Example: f(z) = (z-i)^4(z-4) has a zero of order 4 at i and a simple zero (i.e. zero of order 1) at 4.
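The derivative test is easy to verify on the example above; a minimal sketch assuming sympy:

```python
# Derivative test for the order of the zero of f(z) = (z-i)^4 (z-4) at z = i (assumes sympy).
import sympy as sp

z = sp.symbols('z')
f = (z - sp.I)**4 * (z - 4)

# f, f', f'', f''' all vanish at i; the 4th derivative is 4!*(i-4) != 0, so the zero has order 4.
print([sp.diff(f, z, j).subs(z, sp.I) for j in range(5)])
```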

Theorem. f is analytic at z_0 and has a zero of order m at z_0 if and only if f(z) = (z-z_0)^m g(z) where g is analytic and g(z_0)\ne 0.

Zeros & poles

§83 (8 Ed §76)

Theorem. Suppose p and q are analytic at z_0, p(z_0)\ne 0, and q has a zero of order m at z_0. Then, p/q has a pole of order m at z_0.

Example: p(z)=1 and q(z)=z(e^z-1). We know that p/q has an isolated singularity at 0. p is analytic and non-zero everywhere (obviously). We can check that q(0)=0, q'(0)=0, q''(0)=2\ne 0, so q has a zero of order 2 at 0. Thus, p/q has a pole of order 2 at 0.

Theorem. Let p, q be analytic at z_0. If p(z_0)\ne 0, q(z_0)= 0, and q'(z_0)\ne 0, then p/q has a simple pole at z_0 and \operatorname{Res}_{z=z_0}\frac p q = \frac {p(z_0)}{q'(z_0)}. Note that there exist higher-order analogues, but they become messy.
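The sketch below (assuming sympy is available) re-checks the derivative count for q(z) = z(e^z-1) from the example above, and applies the p/q' rule to a simple pole not taken from the lecture, namely 1/\sin z at 0.

```python
# Pole order of 1/(z(e^z - 1)) at 0, and the p/q' rule applied to 1/sin(z) at 0 (assumes sympy).
import sympy as sp

z = sp.symbols('z')

# q(0) = q'(0) = 0 and q''(0) = 2 != 0, so q has a zero of order 2 and 1/q a pole of order 2.
q = z * (sp.exp(z) - 1)
print([sp.diff(q, z, j).subs(z, 0) for j in range(3)])   # expected: [0, 0, 2]

# For p = 1, q = sin(z): q(0) = 0 and q'(0) = cos(0) = 1, so Res_{z=0} 1/sin(z) = p(0)/q'(0) = 1.
print(sp.residue(1 / sp.sin(z), z, 0))                   # expected: 1
```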

Cauchy principal value

The main application of this is contour integrals. In \mathbb R, recall that \begin{aligned} \int_{-\infty}^\infty f(x)\,dx=\lim_{m_1\to\infty}\int_{-m_1}^0 f(x)\,dx+\lim_{m_2\to\infty}\int_0^{m_2}f(x)\,dx. \end{aligned} Note we can replace the split 0 with any fixed c\in \mathbb R. If the limits on the right exist, then we say the integral exists with the given value.

We cannot in general replace the right-hand side with \lim_{m\to\infty}\int_{-m}^mf(x)\,dx. If we do this anyway, it defines the Cauchy principal value (PV) integral.

Example: Let's look at this in practice. The below improper integral is undefined, because \begin{aligned} \int_{-\infty}^\infty x\,dx &= \lim_{m_1\to\infty}\int_{-m_1}^0x\,dx+\lim_{m_2\to\infty}\int_0^{m_2}x\,dx \\ &= \lim_{m_1\to\infty}\left(-m_1^2/2\right)+\lim_{m_2\to\infty}m_2^2/2, \end{aligned} and neither limit on the right is finite. However, the principal value is \begin{aligned} \operatorname{PV}\int_{-\infty}^\infty x\,dx = \lim_{m\to\infty}\int_{-m}^m x\,dx=\lim_{m\to\infty}\left[\frac{m^2}2-\frac{m^2}2\right]=0. \end{aligned} Question: When does \operatorname{PV}\int_{-\infty}^\infty f=\int_{-\infty}^\infty f? One case is for even functions (another is non-negative functions). Specifically, if f is even (so f(x)=f(-x) for all x \in \mathbb R), then \int_0^\infty f(x)\,dx=\frac 1 2 \int_{-\infty}^\infty f(x)\,dx = \frac 1 2 \operatorname{PV} \int_{-\infty}^\infty f(x)\,dx and these integrals converge or diverge together.
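As a quick illustration (assuming sympy is available), the calculation above can be reproduced symbolically: the two one-sided limits blow up, while the symmetric limit defining the principal value is 0.

```python
# Improper integral of x over R versus its Cauchy principal value (assumes sympy).
import sympy as sp

x = sp.symbols('x')
m = sp.symbols('m', positive=True)

# The two one-sided limits are -oo and +oo, so the improper integral is undefined:
print(sp.limit(sp.integrate(x, (x, -m, 0)), m, sp.oo))   # expected: -oo
print(sp.limit(sp.integrate(x, (x, 0, m)), m, sp.oo))    # expected: oo

# The symmetric limit defining the principal value exists and equals 0:
print(sp.limit(sp.integrate(x, (x, -m, m)), m, sp.oo))   # expected: 0
```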

Lecture 35 — Cauchy Principal Value Examples

What’s the connection between these integrals and complex analysis? We signed up for the square root of 1-1 and that’s what we have fun with.

Suppose f is even and “nice” on \mathbb R and we want to evaluate \int_{-\infty}^\infty f(x)\,dx.

[Figure: the closed contour C = \Gamma_1 + \Gamma_2, with \Gamma_2 the segment [-R, R] on the real axis and \Gamma_1 the semicircular arc of radius R in the upper half-plane.]

Suppose f is analytic in and on C = \Gamma_1 + \Gamma_2, possibly except for isolated singularities in \operatorname{Int}C. We know that \int_Cf=\int_{\Gamma_1}f+\int_{\Gamma_2}f. Hopefully, we can evaluate the left-hand side with the residue theorem (sum of residues at singularities). Then, if we let R \to \infty, the integral over \Gamma_2 is what we want: \operatorname{PV}\int_{-\infty}^\infty f=\int_{-\infty}^\infty f because we assumed f is even.

It remains to deal with \lim_{R\to\infty}\int_{\Gamma_1}f. Hopefully, we can estimate this and show it goes to zero, for example via M-\ell.

Example: Evaluate I = \int_0^\infty x^2/(x^6+1)\,dx. Note that f(x) = x^2/(x^6+1) is even and continuous. As x \to \pm \infty, f \sim 1/x^4 so I converges by the p-test since p>1. Moreover, the complex function f(z)=z^2/(z^6+1) is analytic on \mathbb C except at the 6 zeros of z^6+1, i.e. the sixth roots of -1 (the values of (-1)^{1/6}).

f is analytic in and on C=\Gamma_1+\Gamma_2 except for the 3 zeros of z^6+1 in the upper half-plane. These singularities are z_1=e^{\pi i/6}, z_2=i, and z_3=e^{5\pi i/6}. The residue theorem implies that \int_C f(z)\,dz = 2\pi i\sum_{j=1}^3 \operatorname{Res}_{z=z_j}f. We can see that f has the form p/q and at each z_j, p(z_j)\ne 0, q(z_j)=0, and q'(z_j)\ne 0. Thus, each singularity is a simple pole. From last lecture's theorem that \operatorname{Res}_{z_0}p/q=p(z_0)/q'(z_0), we have \int_Cf(z)\,dz=2\pi i \sum_{j=1}^3 \left.\frac{z^2}{(z^6+1)'}\right|_{z=z_j}=2\pi i\sum_{j=1}^3 \frac{z_j^2}{6z_j^5}=2\pi i\left(\frac 1 {6i}-\frac 1 {6i}+\frac 1 {6i}\right)=\frac \pi 3. Remember that we're looking for \int_C f=\int_{\Gamma_1}f+\int_{\Gamma_2}f. We've got the left-hand side now. As the radius R \to \infty, \int_{\Gamma_2}f \to 2I because we're looking for the integral from 0 to \infty and the integrand is even. Now, we want to show that \int_{\Gamma_1}f \to 0 as R \to \infty. We claim that |\int_{\Gamma_1}f|\le M_R \ell_R where \ell_R is the length of \Gamma_1, which is \pi R. Also, \begin{aligned} M_R &= \max_{z\in\Gamma_1}|f(z)| \le \max_{|z|=R}\left|\frac{z^2}{z^6+1}\right|\le\max_{|z|=R}\frac{|z|^2}{|z|^6-1}\le\frac{R^2}{R^6-1} \end{aligned} where the denominator comes from the reverse triangle inequality. Therefore, \lim_{R \to \infty}M_R\ell_R = \lim_{R\to\infty}\frac{\pi R\cdot R^2}{R^6-1}=0. Finally, \int_C f=\int_{\Gamma_1}f+\int_{\Gamma_2}f \implies\frac \pi 3=2I+0\implies I=\frac \pi 6.

Example: I=\int_0^\infty \sin x/x\,dx. Firstly, notice that this is an improper integral because the integrand is undefined at 0 and the upper limit is infinite. Near 0, the integrand approaches 1. Approaching \infty is an absolute pain to estimate (via real analysis).

The good news is \sin x/x is even. If I exists, then I=\frac 1 2\int_{-\infty}^\infty \frac{\sin x}x\,dx=\frac 1 2 \operatorname{PV}\int_{-\infty}^\infty \frac{\sin x}x\,dx. We need to be careful because our theorems don't work across singularities, despite 0 being a removable singularity. This gives us a 4-part contour.

[Figure: the indented contour \gamma_1 + C_\rho + \gamma_2 + C_R: the segments [-R,-\rho] and [\rho, R] on the real axis, a small clockwise semicircle C_\rho around 0, and the large semicircle C_R of radius R in the upper half-plane.]

A standard trick when working with trig functions is to take a complex exponential and use its real or imaginary part as needed. Take f(z)=e^{iz}/z, which is analytic on \mathbb C^*. Cauchy tells us that the integral across the whole (closed) contour is 0. Thus, 0=\int_{\gamma_1+C_\rho+\gamma_2+C_R}f=\int_{\gamma_1}f+\int_{C_\rho}f+\int_{\gamma_2}f+\int_{C_R}f = I_1 + I_2 + I_3 + I_4. Label the integrals on the right-hand side I_1, \ldots, I_4 respectively. Looking at I_1 and I_3 (substituting w=-x in the first), I_1 = \int_{-R}^{-\rho}\frac{e^{ix}}x\,dx=-\int_\rho^R\frac{e^{-iw}}w\,dw \quad \text{and} \quad I_3 = \int_\rho^R\frac{e^{ix}}x\,dx. Combining these, I_1+I_3=\int_\rho^R\frac{e^{ix}-e^{-ix}}{x}\,dx=2i\int_\rho^R\frac{\sin x}x\,dx \quad\longrightarrow\quad 2i\int_0^\infty\frac {\sin x}{x}\,dx after taking the limits \rho\to0 and R \to \infty. Therefore, \int_0^\infty \frac {\sin x}{x}\,dx = -\frac 1{2i}\left(\lim_{\rho \downarrow 0}I_2 + \lim_{R \to \infty}I_4 \right) assuming these limits exist. Looking at I_2, we substitute z=\rho e^{i\theta} with \theta : \pi \to 0 (note the direction) and the appropriate change of variables formula. \begin{aligned} I_2 &= \int_{C_\rho}\frac {e^{iz}}{z}\,dz = \int_\pi^0\frac {e^{i\rho e^{i\theta}}}{\rho e^{i\theta}} i\rho e^{i\theta}\,d\theta = -i\int_0^\pi e^{i\rho e^{i\theta}}\,d\theta. \end{aligned} Since |\rho e^{i\theta}| = \rho, the expression e^{i\rho e^{i\theta}}\to 1 as \rho \to 0 uniformly for \theta \in [0,\pi]. From real analysis, uniformity means the \delta in this limit depends only on \epsilon and not on \theta, so we may pass the limit inside the integral. This means we can write \lim_{\rho \downarrow 0}I_2 = -i\int_0^\pi \lim_{\rho \downarrow 0}\left(e^{i\rho e^{i\theta}}\right)\,d\theta=-i\int_0^\pi d\theta=-i\pi.

Finally, I_4 \to 0 as R \to \infty by Jordan's lemma (next lecture), because we cannot use M-\ell in the usual way. Therefore, \int_0^\infty \frac {\sin x}{x}\,dx = -\frac 1{2i}\left(\lim_{\rho \downarrow 0}I_2 + \lim_{R \to \infty}I_4 \right)=-\frac 1 {2i}\left[-i\pi+0\right]=\frac\pi 2.
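Both answers from this lecture are easy to corroborate independently; a minimal sketch assuming numpy, scipy and sympy are available:

```python
# Independent checks of the two integrals evaluated above (assumes numpy, scipy, sympy).
import numpy as np
import sympy as sp
from scipy.integrate import quad

# int_0^oo x^2/(x^6+1) dx should be pi/6:
val, _ = quad(lambda x: x**2 / (x**6 + 1), 0, np.inf)
print(val, np.pi / 6)                                # both approximately 0.5235988

# int_0^oo sin(x)/x dx should be pi/2:
x = sp.symbols('x')
print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))    # expected: pi/2
```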

Lecture 36 — Jordan’s Lemma and Rouché’s Theorem

We continued the example of \int_0^\infty \sin x/x\,dx from the previous lecture.

Lemma (Jordan). Suppose f is analytic on the part of the closed upper half plane outside some disc, \left\{z : \operatorname*{Im}z\ge 0\right\}\cap \left\{z : |z| \ge R_0\right\}, and that it satisfies |f(z)| \le M/R^\beta for some M, \beta > 0 on each semicircular arc \Gamma_R = \left\{Re^{i\theta} : 0 \le \theta \le \pi\right\} with R > R_0. Then, for all \alpha > 0, \lim_{R \to \infty}\int_{\Gamma_R}e^{i\alpha z}f(z)\,dz = 0. Proof. Fairly straightforward apart from one estimate. \square

Argument principle & Rouché’s theorem

§93–94 (8 Ed §86–87) – Rouché’s Theorem

The argument principle says that if f is analytic in and on C, a positively oriented simple closed curve, except possibly for poles inside C, and f has no zeros on C, then the total change in the argument of f along the curve is \Delta_C \arg f=2\pi(Z - P), where Z is the number of zeros inside C counting multiplicity, and P is the number of poles inside C counting orders.
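A quick numerical illustration of the argument principle on a made-up function (this example is hypothetical, not from the lecture): f(z) = (z - 1/2)^2/(z - 1/4) has Z = 2 and P = 1 inside the unit circle, so the argument should change by 2\pi. The sketch below assumes numpy is available.

```python
# Numerically tracking the change in arg f around |z| = 1 for the hypothetical
# f(z) = (z - 1/2)^2 / (z - 1/4), which has Z = 2 zeros and P = 1 pole inside (assumes numpy).
import numpy as np

theta = np.linspace(0, 2 * np.pi, 20001)
zs = np.exp(1j * theta)
f = (zs - 0.5)**2 / (zs - 0.25)

# Sum the small phase increments; no 2*pi jumps occur because the steps are tiny.
dphase = np.angle(f[1:] / f[:-1])
print(np.sum(dphase) / (2 * np.pi))   # expected: 1.0 = Z - P
```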

Theorem (Rouché’s theorem). Let f and g be analytic in and on a simple closed curve C (orientation irrelevant). Suppose |g(z)| < |f(z)| for all z on this curve. Then, f and f+g have the same number of zeros (counting multiplicity) inside C.

For example, (z-i)^2(z+i)^3 has 5 zeros counting multiplicity. Make sure to check conditions before applying theorems.

Example: How many zeros of h(z) = z^7-4z^3+z-1 lie inside the unit circle C? Let f(z) = -4z^3 and g(z) = z^7+z-1, so that h=f+g, noting that both are polynomials and entire. (In general, when picking f and g for an annulus, take the larger power outside the annulus; this case is a bit more special.) First, we have |f|=4 on C and, by the triangle inequality, |g(z)| \le |z^7| + |z| + |-1| =3< 4=|f(z)| on C. Because |g| < |f| on C, Rouché's theorem implies that f and f+g=h have the same number of zeros inside C, namely 3 because f has a 3-fold zero at 0.
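A numerical cross-check of the count (assuming numpy is available): find all seven roots of h and count how many lie strictly inside the unit circle.

```python
# Root count of h(z) = z^7 - 4z^3 + z - 1 inside |z| < 1 (assumes numpy).
import numpy as np

# Coefficients from the highest power of z down to the constant term.
coeffs = [1, 0, 0, 0, -4, 0, 1, -1]
roots = np.roots(coeffs)
print(sum(abs(r) < 1 for r in roots))   # expected: 3, matching Rouché's theorem
```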

Lecture 37 — Additional

Proof of Jordan’s lemma

Lemma (Jordan). Suppose f is analytic on the part of the closed upper half plane outside some disc, \left\{z : \operatorname*{Im}z\ge 0\right\}\cap \left\{z : |z| \ge R_0\right\}, and that it satisfies |f(z)| \le M/R^\beta for some M, \beta > 0 on each semicircular arc \Gamma_R = \left\{Re^{i\theta} : 0 \le \theta \le \pi\right\} with R > R_0. Then, for all \alpha > 0, \lim_{R \to \infty}\int_{\Gamma_R}e^{i\alpha z}f(z)\,dz = 0. Proof. On \Gamma_R, we have z=Re^{i\theta} and dz=iRe^{i\theta}\,d\theta. Substituting in, \begin{aligned} \left|\,\int_{\Gamma_R}e^{i\alpha z}f(z)\,dz\, \right|= R\left|\, \int_0^\pi e^{i\alpha R e^{i\theta}}f(Re^{i\theta})\,d\theta\, \right| &\le R\int_0^\pi \left| e^{i\alpha Re^{i\theta}}f(Re^{i\theta})\right|\,d\theta. \end{aligned} Expanding e^{i\theta} in the inner exponent and taking the modulus gives us \begin{aligned} =R\int_0^\pi \left| e^{i\alpha (R\cos \theta + iR\sin \theta)}f(Re^{i\theta}) \right|\,d\theta = R\int_0^\pi e^{-\alpha R \sin \theta} \left|f(Re^{i\theta}) \right|\,d\theta. \end{aligned} Now we use the bound assumption on f and the symmetry of the integrand about \theta=\pi/2, \le \frac M{R^{\beta-1}} \int_0^\pi e^{-\alpha R \sin \theta}\,d\theta = \frac {2M}{R^{\beta-1}}\int_0^{\pi/2}e^{-\alpha R\sin \theta}\,d\theta. Note that \sin \theta \ge 2\theta / \pi for 0 \le \theta \le \pi/2 (can be proven via simple calculus). Using this, we have \begin{aligned} \frac {2M}{R^{\beta-1}}\int_0^{\pi/2}e^{-\alpha R\sin \theta}\,d\theta &\le \frac {2M}{R^{\beta-1}}\int_0^{\pi/2}e^{-2\alpha R\theta/\pi}\,d\theta = \frac {2M}{R^{\beta-1}}\frac {\pi}{2\alpha R}\left( 1 - e^{-\alpha R} \right) = \frac {M\pi}{\alpha R^\beta}\left(1-e^{-\alpha R}\right) \end{aligned} which goes to 0 as R \to \infty because \beta > 0. \square
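To see the lemma in action numerically, the sketch below (assuming numpy and scipy are available) takes f(z) = 1/z and \alpha = 1, so M = \beta = 1, and watches the arc integral shrink; the bound from the proof is then \pi/R.

```python
# Numerical illustration of Jordan's lemma for f(z) = 1/z, alpha = 1 (assumes numpy, scipy).
import numpy as np
from scipy.integrate import quad

def arc_integral(R):
    # Integral over Gamma_R of e^{iz}/z dz with z = R e^{i theta}; the integrand in theta
    # simplifies to i * exp(i R e^{i theta}) because 1/z cancels dz/dtheta = i R e^{i theta}.
    g = lambda t: 1j * np.exp(1j * R * np.exp(1j * t))
    re, _ = quad(lambda t: g(t).real, 0, np.pi)
    im, _ = quad(lambda t: g(t).imag, 0, np.pi)
    return abs(re + 1j * im)

for R in (1, 10, 100):
    print(R, arc_integral(R), np.pi / R)   # the value stays below the pi/R bound and shrinks
```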

Trig substitutions in integrals

To solve integrals of the form \int_0^{2\pi }f(\sin t, \cos t)\,dt we can try the substitutions z=\cos t + i \sin t, \quad z^{-1}=\cos t-i \sin t with the motivation being z=e^{it} for 0 \le t \le 2\pi. Then we just need to integrate around the unit circle. This gives us \begin{gathered} \cos t = \frac 1 2 \left(z+z^{-1}\right)\quad \text{and}\quad \sin t=\frac 1 {2i}\left( z - z^{-1} \right) \end{gathered} which implies that dz=(-\sin t + i \cos t)\,dt=iz\,dt.

Example: Find I = \int_0^{2\pi}1/(2+\cos t)\,dt. Write z=e^{it}; using the substitution above, dt=\frac {dz}{iz} \quad \text{and}\quad\cos t = \frac 1 2 \left( z+z^{-1} \right). For C = \left\{ e^{it}, 0 \le t \le 2\pi \right\}, we have \begin{aligned} I = \int_C \frac {1/(iz)}{2+1/2\left( z+z^{-1} \right)}\,dz = -i\int_C \frac {dz}{2z+1/2\left( z^2+1 \right)} =-2i\int_C\frac {dz}{z^2+4z+1}. \end{aligned} To evaluate the last integral, note that the integrand is analytic except at the zeros of the denominator, -2\pm \sqrt 3. Only -2+\sqrt 3 lies inside C, so by the residue theorem I = (-2i)\,2\pi i \operatorname*{Res}_{z=-2+\sqrt 3}\frac {1}{z^2+4z+1}. To compute the residue, write \frac{1}{z^2+4z+1}=\frac{\phi(z)}{z-(-2+\sqrt 3)}\quad\text{where}\quad \phi(z)=\frac{1}{z-(-2-\sqrt 3)}. Here \phi is analytic and non-zero near -2+\sqrt 3, so the integrand has a simple pole at -2+\sqrt 3. The residue is calculated by \operatorname*{Res}_{z=-2+\sqrt 3}\frac {1}{z^2 + 4z+1}=\phi(-2+\sqrt 3)=\frac {1}{2\sqrt 3} and hence, I = (-2i)2\pi i /(2\sqrt 3)=2\pi / \sqrt 3.
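A quick check of both the final answer and the key residue (assuming numpy, scipy and sympy are available):

```python
# Checking int_0^{2pi} dt/(2 + cos t) = 2*pi/sqrt(3) and the residue at -2+sqrt(3)
# (assumes numpy, scipy, sympy).
import numpy as np
import sympy as sp
from scipy.integrate import quad

val, _ = quad(lambda t: 1.0 / (2.0 + np.cos(t)), 0, 2 * np.pi)
print(val, 2 * np.pi / np.sqrt(3))                             # both approximately 3.6275987

z = sp.symbols('z')
print(sp.residue(1 / (z**2 + 4*z + 1), z, -2 + sp.sqrt(3)))    # expected: sqrt(3)/6 = 1/(2*sqrt(3))
```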

Laurent series of 1/\sinh z

We will try to calculate the Laurent series of 1/\sinh z at 0. This has singularities where \sinh z=0, which is exactly at z=n\pi i, n \in \mathbb Z. Thus, 1/\sinh z has a Laurent series on 0<|z|<\pi. Then, writing out the series, \begin{aligned} \frac {1}{\sinh z} &= \frac {1}{z + z^3/3! + z^5/5! + \cdots} = \frac 1 z \frac {1}{1+z^2/3! + z^4/5! + \cdots}. \end{aligned} Provided |z^2/3! + z^4/5! + \cdots|<1, which holds as long as |z| is sufficiently small, we can use the geometric series formula: \frac 1 {\sinh z} = \frac 1 z \left[ 1 - \left(\frac{z^2}{3!} + \frac{z^4}{5!} + \cdots\right) + \left(\frac{z^2}{3!} + \frac{z^4}{5!} + \cdots\right)^2 - \cdots \right] In particular, there is a pole of order 1 at 0 with \operatorname*{Res}_{z=0}\frac {1}{\sinh z}=1.
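The first few Laurent coefficients can be produced directly with a computer algebra system; a minimal sketch assuming sympy:

```python
# Laurent series and residue of 1/sinh(z) at 0 (assumes sympy).
import sympy as sp

z = sp.symbols('z')
print(sp.series(1 / sp.sinh(z), z, 0, 4))   # prints something like 1/z - z/6 + 7*z**3/360 + O(z**4)
print(sp.residue(1 / sp.sinh(z), z, 0))     # expected: 1
```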

Laurent series of \cot z/z^2

We start with \cot z = \cos z / \sin z and the usual Taylor series of \sin and \cos. \begin{aligned} g(z) = \frac {\cot z}{z^2} = \frac 1 {z^2}\frac {\cos z}{\sin z} &=\frac {1} {z^2}\left( \frac {\sum_{n=0}^\infty (-1)^nz^{2n}/(2n)!}{\sum_{n=0}^\infty (-1)^nz^{2n+1}/(2n+1)!} \right) \\ &=\frac {1}{z^2}\frac {\left(1-z^2/2! + z^4/4! - \cdots\right)} {\left(z-z^3/3!+z^5/5!-\cdots\right)} \\ &= \frac {1}{z^3} \frac {\left(1-z^2/2! + z^4/4! - \cdots\right)} {\left(1-z^2/3!+z^4/5!-\cdots\right)} \\ \end{aligned} Above, we expanded the power series of \sin and \cos, then factored a z out of the denominator with the goal of using the geometric series formula. Continuing, \begin{aligned} \frac {1}{z^3} \frac {\left(1-z^2/2! + z^4/4! - \cdots\right)} {\left(1-z^2/3!+z^4/5!-\cdots\right)} &= \frac {1}{z^3} \frac {\left(1-z^2/2! + z^4/4! - \cdots\right)} {1-(z^2/3!-z^4/5!+\cdots)}. \end{aligned} The tail of the denominator series converges (why?) and, for |z| sufficiently small, has modulus less than 1, so we can write this as a geometric series. Expanding a few terms is sufficient to determine the residue. \begin{aligned} g(z)&=\frac {1}{z^3} \left(1-z^2/2! + z^4/4! - \cdots\right) \sum_{n=0}^\infty (z^2/3!-z^4/5!+\cdots)^n \\ &= \frac {1}{z^3} \Big(1-z^2/2! + z^4/4! - \cdots\Big) \Big(1 + (z^2/3!-\cdots) + (z^2/3!-\cdots)^2+\cdots\Big)\\ &= \frac {1}{z^3}\left(1 + \left( \frac {1}{3!}-\frac {1}{2!} \right)z^2 + \cdots\right) \end{aligned} Looking at the fractions, 1/6-1/2=-1/3 so the residue at 0 is -1/3. Anything more than this is going to be very hard.
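The same machine check works here (assuming sympy is available), confirming the pole of order 3 and the residue -1/3:

```python
# Laurent series and residue of cot(z)/z^2 at 0 (assumes sympy).
import sympy as sp

z = sp.symbols('z')
print(sp.series(sp.cot(z) / z**2, z, 0, 2))   # prints something like z**(-3) - 1/(3*z) - z/45 + O(z**2)
print(sp.residue(sp.cot(z) / z**2, z, 0))     # expected: -1/3
```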