
M337_2
Introduction to complex analysis
About this free course
This free course is an adapted extract from the Open University course M337 Complex analysis.
This version of the content may include video, images and interactive content that may not be optimised for your device.
You can experience this free course as it was originally designed on OpenLearn, the home of free learning from The Open University.
There you’ll also be able to track your progress via your activity record, which you can use to demonstrate your learning.
Copyright © 2022 The Open University
Intellectual property
Unless otherwise stated, this resource is released under the terms of the Creative Commons Licence v4.0 http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_GB. Within that, The Open University interprets this licence in the following way: www.open.edu/openlearn/about-openlearn/frequently-asked-questions-on-openlearn. Copyright and rights falling outside the terms of the Creative Commons Licence are retained or controlled by The Open University. Please read the full text before using any of the content.
We believe the primary barrier to accessing high-quality educational experiences is cost, which is why we aim to publish as much free content as possible under an open licence. If it proves difficult to release content under our preferred Creative Commons licence (e.g. because we can’t afford or gain the clearances or find suitable alternatives), we will still release the materials for free under a personal end-user licence.
This is because the learning experience will always be the same high-quality offering, and that should always be seen as positive – even if at times the licensing is different to Creative Commons.
When using the content you must attribute us (The Open University) (the OU) and any identified author in accordance with the terms of the Creative Commons Licence.
The Acknowledgements section is used to list, amongst other things, third party (Proprietary), licensed content which is not subject to Creative Commons licensing. Proprietary content must be used (retained) intact and in context to the content at all times.
The Acknowledgements section is also used to bring to your attention any other Special Restrictions which may apply to the content. For example there may be times when the Creative Commons NonCommercial-ShareAlike licence does not apply to any of the content even if owned by us (The Open University). In these instances, unless stated otherwise, the content may be used for personal and non-commercial use.
We have also identified as Proprietary other material included in the content which is not subject to Creative Commons Licence. These are OU logos, trading names and may extend to certain photographic and video images and sound recordings and any other material as may be brought to your attention.
Unauthorised use of any of the content may constitute a breach of the terms and conditions and/or intellectual property laws.
We reserve the right to alter, amend or bring to an end any terms and conditions provided here without notice.
All rights falling outside the terms of the Creative Commons licence are retained or controlled by The Open University.
Head of Intellectual Property, The Open University
Session 1: Differentiation
Introduction to differentiation
The derivative of a real function f at a point c is the gradient of the tangent to the graph of f at c. This gradient is calculated by finding the gradient of the chord joining the point (c, f(c)) to a (nearby) point (x, f(x)), and taking the limit as x approaches c (Figure 1).
Now, the gradient of the chord is equal to the ratio
\frac{f(x) - f(c)}{x - c}.
This ratio is often called the difference quotient for f at c, and its limit as x tends to c provides a formal definition of the (real) derivative of f at c, denoted by f^{\prime }(c). Thus
f^{\prime }(c) = \lim _{x\rightarrow c}\frac{f(x)-f(c)}{x-c}.
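If you have Python to hand, this limiting process is easy to watch numerically. In the sketch below (the function f(x) = x^2 and the point c = 3 are illustrative choices, not part of the course), the chord gradients approach f'(3) = 6 as x approaches c.

```python
# Gradient of the chord joining (c, f(c)) to (x, f(x)): the difference
# quotient (f(x) - f(c)) / (x - c) from the definition above.

def difference_quotient(f, c, x):
    return (f(x) - f(c)) / (x - c)

f = lambda x: x**2   # illustrative choice: f(x) = x^2, so f'(c) = 2c
c = 3.0
for h in (1e-1, 1e-3, 1e-6):
    print(h, difference_quotient(f, c, c + h))   # values approach 6
```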
In the case of complex functions, it is difficult to think about derivatives in terms of gradients of tangents, since the graph of a complex function is not drawn in two dimensions. Instead we define the derivative of a complex function directly in terms of difference quotients, using the notion of complex limits.
Fortunately, the derivatives of many complex functions turn out to have the same form as those of the corresponding real functions. For example, the derivative of the complex sine function is the complex cosine function, and the complex exponential function is its own derivative. On the other hand, the complex modulus function fails to be differentiable at any point of \mathbb{C}, even though the real modulus function (Figure 2) is differentiable at every point of \mathbb{R}\setminus \{0\}. This reflects the fact that complex differentiation imposes a much stronger condition on functions than does real differentiation. Indeed, as the course progresses, you will see that differentiable complex functions have remarkably pleasant properties. For example, if a complex function can be differentiated once throughout a region, then it can be differentiated any number of times. There is no equivalent result for real functions.
Section 1 will define complex differentiation and show how the definition can be used to establish whether a function is differentiable. By introducing rules for combining differentiable functions, you will see how complex polynomial and rational functions can be differentiated just as in the real case. The end of Section 1 will give a geometric interpretation of complex differentiation by introducing the idea of a complex scale factor.
Section 2 will introduce the concept of partial differentiation for real functions of two real variables, and use it to establish a relationship between complex differentiation and real differentiation. This relationship sometimes enables us to differentiate a complex function using real derivatives. Indeed, at the end of the section, this approach will be used to show that the complex exponential function is its own derivative.
This OpenLearn course is an extract from the Open University course M337 Complex analysis.
1 Derivatives of complex functions
After working through this section, you should be able to:
use the definition of derivative to show that a given function is differentiable, and to find its derivative
use the Combination Rules for differentiation to differentiate polynomial and rational functions
use various strategies to show that a given function is not differentiable at a point
interpret the derivative of a complex function at a point as a rotation and a scaling of a small disc centred at the point.
1.1 Defining differentiable functions
As with limits and continuity, the way in which the derivative of a complex function is defined is similar to the real case. Thus a complex function is said to have a derivative at a point \alpha \in \mathbb{C} if the difference quotient, defined by
\frac{f(z)-f(\alpha )}{z-\alpha },
tends to a limit as z tends to \alpha . Equivalently, it is sometimes more convenient to replace z by \alpha + h, and examine the corresponding limit as h tends to 0. The difference quotient then has the form
\frac{f(\alpha + h)-f(\alpha )}{h},
where h is a complex number. The equivalence of these two limits can be justified by noting that if z=\alpha +h, then ‘z\rightarrow \alpha ’ is equivalent to ‘h\rightarrow 0’.
Definitions
Let f be a complex function whose domain contains the point \alpha . Then the derivative of {\boldsymbol{f}} at {\boldsymbol{\alpha }} is
\lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha } \quad \left (\text{or } \lim _{h\rightarrow 0}\frac{f(\alpha + h)-f(\alpha )}{h}\right ),
provided that this limit exists. If it does exist, then f is differentiable at {\boldsymbol{\alpha }}. If f is differentiable at every point of a set A, then f is differentiable on {\boldsymbol{A}}. A function is differentiable if it is differentiable on its domain.
The derivative of f at \alpha is denoted by f^{\prime }(\alpha ), and the function
f^{\prime }\colon z\longmapsto f^{\prime }(z)
is called the derivative of {\boldsymbol{f}}. The domain of f^{\prime } is the set of all complex numbers at which f is differentiable.
The function f^{\prime } is sometimes called the derived function of f.
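Python’s built-in complex arithmetic makes it easy to see this definition at work: for a differentiable function, the difference quotient should settle on one value however h tends to 0. A minimal numerical sketch, using the illustrative choices f(z) = z^2 and \alpha = 1 + 2i (so that the quotient should approach 2\alpha = 2 + 4i):

```python
# Complex difference quotient (f(alpha + h) - f(alpha)) / h, evaluated for
# small h taken along several different directions in the complex plane.

def diff_quotient(f, alpha, h):
    return (f(alpha + h) - f(alpha)) / h

f = lambda z: z * z
alpha = 1 + 2j
for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    h = 1e-8 * direction
    print(diff_quotient(f, alpha, h))   # each value is close to 2 + 4j
```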
Remarks
The existence of the limit \lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha } implicitly requires the domain of f to contain \alpha as one of its limit points. This always holds if the domain of f is a region.
The derivative f^{\prime }(z) is sometimes written as \dfrac{df}{dz}(z) or \dfrac{d}{dz} (f(z)).
Some other texts use the phrase complex derivative in place of derivative to draw a distinction with the standard real derivative of a function f\colon \mathbb{R}^2\longrightarrow \mathbb{R}^2 (which we will not need).
In certain cases it is easy to find the derivative of a function directly from the definition above.
Example 1
Use the definition of derivative to find the derivative of the function f(z)=z^2.
Solution
The domain of f(z)=z^2 is the whole of \mathbb{C}, so let \alpha be an arbitrary point of \mathbb{C}. Then
\begin{align*} f^{\prime }(\alpha )&=\lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha }\\ &=\lim _{z\rightarrow \alpha }\frac{z^2-\alpha ^2}{z-\alpha }\\ &=\lim _{z\rightarrow \alpha }(z + \alpha ). \end{align*}
Now z\longmapsto z+\alpha is a basic continuous function, continuous at \alpha , so we see that f^{\prime }(\alpha ) = \alpha + \alpha =2\alpha .
Since \alpha is an arbitrary complex number, the derivative of f is the function f^{\prime }(z)=2z. Its domain is the whole of \mathbb{C}.
Notice the way in which the troublesome z-\alpha factor cancels from the numerator and the denominator in the calculation of f'(\alpha ) in the preceding example. This often happens when you calculate derivatives directly from the definition.
Exercise 1
Use the definition of derivative to find the derivative of
the constant function f(z)=1
the function f(z)=z.
f(z) = 1 is defined on the whole of \mathbb{C}, so let \alpha \in \mathbb{C}. Then \begin{align*} f^{\prime }(\alpha ) &= \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha } \\ &= \lim _{z\rightarrow \alpha } \frac{1 - 1}{z - \alpha }\\ &= \lim _{z \rightarrow \alpha } \frac{0}{z - \alpha } = 0. \end{align*}Since \alpha is an arbitrary complex number, f is differentiable on the whole of \mathbb{C}, and its derivative is the zero function f^{\prime }(z) = 0 \quad (z \in \mathbb{C}).
f(z) = z is defined on the whole of \mathbb{C}, so let \alpha \in \mathbb{C}. Then \begin{align*} f^{\prime }(\alpha ) &= \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha } \\ &= \lim _{z\rightarrow \alpha } \frac{z - \alpha }{z - \alpha }\\ &= \lim _{z\rightarrow \alpha }1 = 1. \end{align*}
Since \alpha is an arbitrary complex number, f is differentiable on the whole of \mathbb{C}, and its derivative is the constant function
f^{\prime }(z) = 1\quad (z \in \mathbb{C}).
Example 1 and Exercise 1 show that the functions f(z)=1, f(z)=z and f(z)=z^2 are differentiable on the whole of \mathbb{C}. Functions that have this property are given a special name.
Definition
A function is entire if it is differentiable on the whole of \mathbb{C}.
Not all functions are entire; indeed, many interesting aspects of complex analysis arise from functions that fail to be differentiable at various points of \mathbb{C}.
Exercise 2
Use the definition of derivative to find the derivative of the function f(z)=1/z. Explain why f is not entire.
The domain of f(z) = 1/z is the region \mathbb{C}\setminus \{0\}. Since f^{\prime }(\alpha ) cannot exist unless f is defined at \alpha , we confine our attention to \alpha \ne 0. Then
\begin{aligned} f^{\prime }(\alpha ) &= \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{(1/z)-(1/\alpha )}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{\alpha - z}{z\alpha (z - \alpha )}\\ &= \lim _{z\rightarrow \alpha } \left (-\frac{1}{z\alpha }\right ). \end{aligned}
Now z\longmapsto -1/(z\alpha ) is a basic continuous function with domain \mathbb{C}\setminus \{0\}, so we see that
f'(\alpha )=\lim _{z\rightarrow \alpha } \left (-\frac{1}{z\alpha }\right )=-\frac{1}{\alpha ^2}.
Since \alpha is an arbitrary nonzero complex number, the derivative of f is
f^{\prime }(z) = -\frac{1}{z^2}\quad (z \ne 0).
The function f is not entire since its domain is not \mathbb{C}.
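As a numerical cross-check of Exercise 2 (the sample point \alpha = 2 - i is an arbitrary illustrative choice), the difference quotients of f(z) = 1/z approach -1/\alpha^2 from several directions:

```python
# Difference quotients of f(z) = 1/z at a nonzero point, compared with
# the derivative -1/alpha^2 found in Exercise 2.

f = lambda z: 1 / z
alpha = 2 - 1j
expected = -1 / alpha**2
for h in (1e-7, 1e-7j, 1e-7 * (1 + 1j)):
    print(abs((f(alpha + h) - f(alpha)) / h - expected))   # all tiny
```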
Although the function f(z)=1/z is not entire, it is differentiable on the whole of its domain \mathbb{C}\setminus \{0\}. This domain is a region because it is obtained by removing the point 0 from \mathbb{C}. (The removal of a point from a region leaves a region.) As the course progresses, you will discover that regions provide an excellent setting for analysing the properties of differentiable functions. We therefore make the following definitions.
Definitions
A function that is differentiable on a region \mathcal{R} is said to be analytic on \boldsymbol{\mathcal{R}}. If the domain of a function f is a region, and if f is differentiable on its domain, then f is said to be analytic. A function is analytic at a point \boldsymbol{\alpha } if it is differentiable on a region containing \alpha .
It follows immediately from the definition that if a function is analytic on a region \mathcal{R}, then it is automatically analytic at each point of \mathcal{R}.
Notice that a function can have a derivative at a point without being analytic at the point. For example, in the next section we will ask you to show that the function g(z) = |z|^2 has a derivative at 0, but at no other point. This means that there is no region on which g is differentiable, and hence no point at which g is analytic.
By contrast, f(z)=1/z is analytic at every point of its domain. It is an analytic function, and it is analytic on any region that does not contain 0. Three such regions are illustrated in Figure 3.
An appropriate choice of region can often simplify the analysis of complex functions.
Exercise 3
Classify each of the following statements as True or False.
An entire function is analytic at every point of \mathbb{C}.
If a function is differentiable at each point of a set, then it is analytic on that set.
True.
False. (The set must be a region.)
There is a close connection between differentiation and continuity. The function f(z)=1/z, for example, is not only differentiable, but also continuous on its domain. This is no accident for, as in real analysis, differentiability implies continuity.
Theorem 1
Let f be a complex function that is differentiable at \alpha . Then f is continuous at \alpha .
Proof
Let f be differentiable at \alpha ; then
\lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha }=f^{\prime }(\alpha ).
To prove that f is continuous at \alpha , we will show that f(z)\rightarrow f(\alpha ) as z\rightarrow \alpha . We do this by proving the equivalent result that f(z)-f(\alpha ) \rightarrow 0 as z\rightarrow \alpha .
By the Product Rule for limits of functions, we have
\begin{aligned} \lim _{z\rightarrow \alpha }(f(z)-f(\alpha )) &=\lim _{z\rightarrow \alpha } \left (\frac{f(z)- f(\alpha )}{z-\alpha }\right )\times \lim _{z\rightarrow \alpha } (z-\alpha )\\ &=f^{\prime }(\alpha )\times 0=0. \end{aligned}
Hence f(z)\rightarrow f(\alpha ) as z\rightarrow \alpha , so f is continuous at \alpha .
In fact, differentiability implies more than continuity. Continuity asserts that for all z close to \alpha , f(z) is close to f(\alpha ). For differentiable functions, this ‘closeness’ has the ‘linear’ form described in the following theorem.
Theorem 2 Linear Approximation Theorem
Let f be a complex function that is differentiable at \alpha . Then f can be approximated near \alpha by a linear polynomial. More precisely,
f(z)=f(\alpha )+(z-\alpha )f^{\prime }(\alpha )+e(z),
where e is an ‘error function’ satisfying e(z)/(z-\alpha )\rightarrow 0 as z\rightarrow \alpha .
Informally speaking, the statement ‘e(z)/(z-\alpha ) \rightarrow 0 as z \rightarrow \alpha ’ means that ‘e(z) tends to zero faster than z - \alpha does’.
Proof
We have to show that the function e defined by
e(z)=f(z)-f(\alpha )-(z-\alpha )f^{\prime }(\alpha )
satisfies e(z)/(z-\alpha )\rightarrow 0 as z \rightarrow \alpha .
Dividing e(z) by z-\alpha and letting z tend to \alpha , we obtain
\begin{aligned} \lim _{z\rightarrow \alpha }\frac{e(z)}{z-\alpha }&=\lim _{z\rightarrow \alpha }\left (\frac{f(z)-f(\alpha )}{z-\alpha }-f^{\prime }(\alpha )\right )\\ &=f^{\prime }(\alpha )-f^{\prime }(\alpha )=0, \end{aligned}
as required.
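The theorem can also be illustrated numerically. For the illustrative choice f(z) = z^2 at \alpha = 1 + i, the error works out to e(z) = (z - \alpha)^2, so |e(z)/(z - \alpha)| shrinks in proportion to |z - \alpha|:

```python
# Error term of the linear approximation f(alpha) + (z - alpha) f'(alpha),
# divided by (z - alpha); it should tend to 0 as z tends to alpha.

f = lambda z: z * z
fprime = lambda z: 2 * z
alpha = 1 + 1j

def error_ratio(z):
    e = f(z) - f(alpha) - (z - alpha) * fprime(alpha)
    return abs(e / (z - alpha))

for h in (1e-1, 1e-2, 1e-3):
    print(h, error_ratio(alpha + h * (1 + 1j)))   # ratios shrink with h
```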
Theorem 1 and Theorem 2 are often used to investigate the properties of differentiable functions. An illustration of this occurs in the next subsection, where Theorem 1 is used in a proof of the Combination Rules for differentiation. Later in this section we use Theorem 2 to give a geometric interpretation of complex differentiation.
1.2 Combining differentiable functions
It would be tedious if we had to use the definition of the derivative every time we needed to differentiate a function. Fortunately, once the derivatives of simple functions like z\longmapsto 1 and z\longmapsto z are known, we can find the derivatives of other more complicated functions by applying the following theorem.
Theorem 3 Combination Rules for Differentiation
Let f and g be complex functions with domains A and B, respectively, and let \alpha be a limit point of A \cap B. If f and g are differentiable at \alpha , then
Sum Rule f+g is differentiable at \alpha , and (f+g)^{\prime }(\alpha )=f^{\prime }(\alpha )+g^{\prime }(\alpha )
Multiple Rule \lambda f is differentiable at \alpha , for \lambda \in \mathbb{C}, and (\lambda f)^{\prime }(\alpha )=\lambda f^{\prime }(\alpha )
Product Rule fg is differentiable at \alpha , and (fg)^{\prime }(\alpha )=f^{\prime }(\alpha )g(\alpha )+f(\alpha )g^{\prime }(\alpha )
Quotient Rule f/g is differentiable at \alpha (provided that g(\alpha )\neq 0), and \left (\dfrac{f}{g}\right )^{\!\prime }(\alpha )=\dfrac{g(\alpha )f^{\prime }(\alpha )-f(\alpha )g^{\prime } (\alpha )}{(g(\alpha ))^2}.
We remark that if the domains A and B in Theorem 3 are regions, then every point of A\cap B is a limit point of A and of B.
In addition to these rules, there is a corollary to Theorem 3, known as the Reciprocal Rule, which is a special case of the Quotient Rule.
Corollary Reciprocal Rule for Differentiation
Let f be a function that is differentiable at \alpha . If f(\alpha )\neq 0, then 1/f is differentiable at \alpha , and
\left (\frac{1}{f}\right )^{\!\prime }(\alpha )=-\frac{f'(\alpha )}{(f(\alpha ))^2}.
The proof of the Combination Rules for differentiation uses the Combination Rules for limits of functions. In the next example we illustrate the method by proving the Product Rule for differentiation. We use the Sum, Product and Multiple Rules for limits of functions, and we also use the fact that if a function g is differentiable at \alpha , then it is continuous at \alpha , so \displaystyle \lim _{z\rightarrow \alpha }g(z)=g(\alpha ).
Example 2
Prove the Product Rule for differentiation.
Solution
Let F=fg. Then
\begin{aligned} & \lim _{z\rightarrow \alpha }\frac{F(z)-F(\alpha )}{z-\alpha }&\\ & =\lim _{z\rightarrow \alpha } \frac{f(z)g(z)-f(\alpha )g(\alpha )}{z-\alpha }&\\ & =\lim _{z\rightarrow \alpha }\frac{(f(z)-f(\alpha ))g(z)+f(\alpha )(g(z)-g(\alpha ))}{z-\alpha }&\\ & =\left (\lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha }\right ) \left (\lim _{z\rightarrow \alpha }g(z)\right )+f(\alpha )\left (\lim _{z\rightarrow \alpha }\frac{g(z)-g(\alpha )}{z-\alpha }\right )&\\ & =f^{\prime }(\alpha )g(\alpha )+f(\alpha )g^{\prime }(\alpha ).& \end{aligned}
The proofs of the other Combination Rules are similar. We ask you to prove the Sum and Multiple Rules in Exercise 4, and the Quotient Rule later in Exercise 12.
Exercise 4
Prove the following rules for differentiation.
Sum Rule
Multiple Rule
Let F = f + g. Then \begin{align*} &\lim _{z\rightarrow \alpha } \frac{F(z) - F(\alpha )}{z - \alpha } \\ &=\lim _{z\rightarrow \alpha } \frac{(f(z) + g(z)) - (f(\alpha ) + g(\alpha ))}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{(f(z) - f(\alpha )) + (g(z) - g(\alpha ))}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha } + \lim _{z \rightarrow \alpha } \frac{g(z) - g(\alpha )}{z - \alpha }\\ &= f^{\prime }(\alpha ) + g^{\prime }(\alpha ). \end{align*}
Let F = \lambda f, for \lambda \in \mathbb{C}. Then \begin{align*} \lim _{z\rightarrow \alpha } \frac{F(z) - F(\alpha )}{z - \alpha } &= \lim _{z\rightarrow \alpha } \frac{\lambda f(z) - \lambda f(\alpha )}{z - \alpha }\\ &= \lambda \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha }\\ &= \lambda f^{\prime }(\alpha ). \end{align*}
The Combination Rules enable us to differentiate any polynomial or rational function. (Recall that a rational function is the quotient of two polynomial functions.)
For example, since the function f(z)=z is entire with derivative f^{\prime }(z)=1, we can use the Product Rule repeatedly to show that the function
f(z)=z^n\quad (z\in \mathbb{C})
is entire, and that its derivative is
f^{\prime }(z)=nz^{n-1}\quad (z\in \mathbb{C}).
(This result can be proved formally using the Principle of Mathematical Induction.) Next, we can use this fact, together with the Sum and Multiple Rules, to prove that any polynomial function is entire, and that its derivative is obtained by differentiating the polynomial function term by term. For example,
\text{if }f(z)=z^4-3z^2+2z+1,\text{ then }f^{\prime }(z)=4z^3-6z+2.
In general, we have the following corollary to Theorem 3.
Corollary Differentiating Polynomial Functions
Let p be the polynomial function
p(z)=a_nz^n+\dots +a_2z^2+a_1z+a_0 \quad (z\in \mathbb{C}),
where a_0, a_1, \ldots ,a_n\in \mathbb{C} and a_n \neq 0. Then p is entire with derivative
p^{\prime }(z)=na_nz^{n-1}+\dots + 2a_2z+ a_1\quad (z\in \mathbb{C}).
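The corollary amounts to a simple rule on coefficient lists, which the following sketch implements (derive is a hypothetical helper name, and the coefficient-list representation [a_0, a_1, ..., a_n] is an illustrative choice, not course notation):

```python
# Term-by-term differentiation: the coefficient a_k of z^k in p
# contributes k * a_k to the coefficient of z^(k-1) in p'.

def derive(coeffs):
    """Coefficients of p', given coefficients [a0, a1, ..., an] of p."""
    return [k * a for k, a in enumerate(coeffs)][1:]

# p(z) = z^4 - 3z^2 + 2z + 1  gives  p'(z) = 4z^3 - 6z + 2
print(derive([1, 2, -3, 0, 1]))   # [2, -6, 0, 4]
```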
Since a rational function is a quotient of two polynomial functions, it follows from the corollary on differentiating polynomial functions and the Quotient Rule that a rational function is differentiable at all points where its denominator is nonzero; that is, at all points of its domain.
Example 3
Find the derivative of
f(z)=\frac{2z^2+z}{z^2+1},
and specify its domain.
Solution
By the corollary on differentiating polynomial functions, the derivative of z\longmapsto 2z^2+z is
z\longmapsto 4z+1,
and the derivative of z\longmapsto z^2+1 is
z\longmapsto 2z.
Provided that z^2+1 is nonzero, we can apply the Quotient Rule to obtain
f^{\prime }(z)=\frac{(z^2 + 1)(4z + 1)-(2z^2+z)(2z)}{(z^2+1)^2}=\frac{-z^2+4z+1}{(z^2+1)^2}.
Since z^2+1 is nonzero everywhere apart from i and -i, it follows that the domain of f^{\prime } is \mathbb{C}\setminus \{i,-i\}.
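As a sanity check on a Quotient Rule calculation like this one, the closed-form derivative can be compared with a numerical difference quotient at a sample point away from i and -i (the point z_0 = 2 + i and the step size are illustrative choices):

```python
# Compare the Quotient Rule derivative of f(z) = (2z^2 + z)/(z^2 + 1)
# with a one-sided numerical difference quotient.

f = lambda z: (2*z*z + z) / (z*z + 1)
fprime = lambda z: (-z*z + 4*z + 1) / (z*z + 1)**2

z0 = 2 + 1j
numeric = (f(z0 + 1e-7) - f(z0)) / 1e-7
print(abs(numeric - fprime(z0)))   # small
```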
Exercise 5
Find the derivative of each of the following functions. In each case specify the domain of the derivative.
f(z)=z^4+3z^3-z^2+4z+2
f(z)=\dfrac{z^2-4z+2}{z^2+z+1}
By the corollary on differentiating polynomial functions, we have f^{\prime }(z) = 4z^3 + 9z^2 - 2z + 4\quad (z \in \mathbb{C}).
By the Quotient Rule, \begin{align*} f'(z)&=\frac{(z^2 + z + 1)(2z - 4) - (z^2 - 4z + 2)(2z + 1)}{(z^2 + z +1)^2}&\\[4pt] &= \frac{5z^2 - 2z - 6}{(z^2 + z + 1)^2}.& \end{align*}Now, z^2+z+1=0 if and only if z=\tfrac 12(-1\pm \sqrt{3}i), so the domain of f' is \mathbb{C}\setminus \{\tfrac 12(-1 + \sqrt{3}i), \tfrac 12(-1 - \sqrt{3}i)\}.
So, any rational function is differentiable on the whole of its domain. What is more, this domain must be a region because it is obtained by removing a finite number of points (zeros of the denominator) from \mathbb{C}.
Corollary
Any rational function is analytic.
A particularly simple example of a rational function is f(z)=1/z^n, where n is a positive integer. This can be differentiated by means of the Reciprocal Rule:
f^{\prime }(z)=-\frac{nz^{n-1}}{(z^n)^2}=-nz^{-n-1}.
If k is used to denote the negative integer -n, then we can write f(z)=z^k and f^{\prime }(z)=kz^{k-1}. In this form, it is apparent that the formula for differentiating a negative integer power is the same as the formula for differentiating a positive integer power. The only difference is that for negative powers, 0 is excluded from the domain. We state these observations as a final corollary to Theorem 3.
Corollary
Let k\in \mathbb{Z}\setminus \{0\}. The function f(z)=z^k has derivative
f^{\prime }(z)=kz^{k-1}.
The domain of f^{\prime } is \mathbb{C} if k > 0 and \mathbb{C}\setminus \{0\} if k < 0.
1.3 Nondifferentiability
In Theorem 1 you saw that differentiability implies continuity. An immediate consequence of this is the following test for nondifferentiability.
Strategy A for nondifferentiability
If f is discontinuous at \alpha , then f is not differentiable at \alpha .
Example 4
Show that there are no points of the negative real axis at which the function f(z)=\sqrt{z} is differentiable.
Solution
The function f(z) =\sqrt{z} is discontinuous at all points of the negative real axis. It follows that there are no points of the negative real axis at which f is differentiable.
Exercise 6
Show that there are no points of the negative real axis at which the principal logarithm function
\operatorname{Log} z=\log |z|+i\operatorname{Arg} z
is differentiable.
The function Arg is discontinuous at each point of the negative real axis. It follows that Log is discontinuous at each point of the negative real axis, and hence that there are no points on it at which Log is differentiable.
The converse of Theorem 1 is not true; if a function is continuous at a point, then it does not follow that it is differentiable at the point. A particularly striking illustration of this is provided by the modulus function f(z)=|z|. This is continuous on the whole of \mathbb{C} and yet, as you will see, it fails to be differentiable at any point of \mathbb{C}.
Since f(z)=|z| is continuous, Strategy A cannot be used to show that f is not differentiable at a given point \alpha . Instead we return to the definition of derivative and show that the difference quotient for f fails to have a limit.
In general, if the domain A of a function f contains \alpha as one of its limit points, then the existence of the limit
\lim _{z\rightarrow \alpha }\frac{f(z)-f(\alpha )}{z-\alpha }
means that for each sequence (z_n) in A\setminus \{\alpha \} that converges to \alpha ,
\lim _{n\rightarrow \infty }\frac{f(z_n)-f(\alpha )}{z_n-\alpha }
exists, and has a value that is independent of the sequence (z_n).
So, if two such sequences (z_n) and (z^{\prime }_n) can be found for which
\lim _{n\rightarrow \infty }\frac{f(z_n)-f(\alpha )}{z_n-\alpha }\neq \lim _{n\rightarrow \infty }\frac{f(z^{\prime }_n)-f(\alpha )}{z^{\prime }_n-\alpha },
then f cannot be differentiable at \alpha .
In the next example, you will see that f(z)=|z| is not differentiable at 0. This result should not surprise you because the real modulus function is not differentiable at 0. Indeed, the proof is identical to that of the real case.
Example 5
Prove that f(z)=|z| is not differentiable at 0.
Solution
We need to find two sequences (z_n) and (z^{\prime }_n) that converge to 0 which, when substituted into the difference quotient, yield sequences with different limits. A simple choice is to pick sequences (z_n) and (z^{\prime }_n) that approach 0 along the real axis: one from the right, and one from the left, as shown in Figure 4.
There is no point in picking sequences that are more complicated than they need to be, so let z_n=1/n, n = 1, 2, \ldots{}. Then
\lim _{n\rightarrow \infty }\frac{|z_n|-0}{z_n-0}=\lim _{n\rightarrow \infty } \frac{1/n}{1/n}=1.
Now let z^{\prime }_n=-1/n, n = 1, 2, \ldots{}. Then
\lim _{n\rightarrow \infty }\frac{|z^{\prime }_n|-0}{z^{\prime }_n-0}= \lim _{n\rightarrow \infty }\frac{1/n}{-1/n}=-1.
Since the two limits do not agree, the difference quotient does not have a limit as z tends to 0. It follows that f(z)=|z| is not differentiable at 0.
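The computation in Example 5 can be replayed numerically (the sample values of z are illustrative; Python’s abs is the modulus function):

```python
# Difference quotient of f(z) = |z| at 0, approached along the positive
# and negative real axes as in Example 5.

f = abs
quotient = lambda z: (f(z) - f(0)) / (z - 0)

print(quotient(0.01))    # 1.0, approaching from the right
print(quotient(-0.01))   # -1.0, approaching from the left
```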
The next exercise asks you to extend the method used in Example 5 to show that f(z)=|z| is not differentiable at any point of \mathbb{C}.
Exercise 7
Let \alpha be any nonzero complex number, and consider the circle through \alpha centred at the origin. By choosing one sequence (z_n) that approaches \alpha along the circumference of the circle, and another sequence (z^{\prime }_n) that approaches \alpha along the ray from 0 through \alpha , prove that f(z)=|z| is not differentiable at \alpha .
Let z_n = \alpha \exp (i/n), n=1,2,\ldots{}. Then (z_n) tends to \alpha along the circumference of the circle, and
\lim _{n\rightarrow \infty } \frac{|z_n| - |\alpha |}{z_n - \alpha } = \lim _{n\rightarrow \infty } \frac{|\alpha | - |\alpha |}{z_n - \alpha } = 0.
Now let z^{\prime }_n = \alpha (1 + 1/n), n=1,2,\ldots{}. Then (z^{\prime }_n) tends to \alpha along the ray from 0 through \alpha , and
\begin{aligned} &\lim _{n\rightarrow \infty } \frac{|z_n^{\prime }| - |\alpha |}{z_n^{\prime } - \alpha } = \lim _{n\rightarrow \infty } \frac{|\alpha |(1 + 1/n) - |\alpha |}{\alpha (1 + 1/n) - \alpha } = \frac{|\alpha |}{\alpha }.& \end{aligned}
Since |\alpha |/\alpha \ne 0 for \alpha \ne 0, these two limits do not agree. It follows that f(z) = |z| is not differentiable at \alpha \ne 0.
The modulus function illustrates an important difference between real and complex differentiation. When the modulus function is treated as a real function, the limit of its difference quotient has to be taken along the real line. But when treated as a complex function, the limit of the difference quotient is required to exist however the limit is taken. This explains why the real modulus function is differentiable at all nonzero real points, whereas the complex modulus function fails to be differentiable at any point of \mathbb{C}. More generally, it shows that complex differentiability is a much stronger condition than real differentiability.
In Exercise 7 you were asked to prove that the modulus function fails to be differentiable by observing that its behaviour along the circumference of a circle centred at 0 is different from its behaviour along a ray. Similar observations can be applied to other functions. For example, in the next exercise you may find it helpful to notice that directions of paths parallel to the imaginary axis are reversed by the function f(z)=\overline{z}, whereas directions of paths parallel to the real axis are left unchanged (Figure 5).
Exercise 8
Show that there are no points of \mathbb{C} at which the complex conjugate function f(z) =\overline{z} is differentiable.
Let \alpha be an arbitrary complex number. Directions of paths parallel to the imaginary axis through \alpha are reversed by f, while directions of paths parallel to the real axis are not. This suggests looking at the sequences z_n = \alpha + 1/n and z_n^{\prime } = \alpha + i/n, n=1,2,\ldots{}.
First let z_n = \alpha + 1/n; then
\begin{aligned} \lim _{n\rightarrow \infty } \frac{\overline{z_n} - \overline{\alpha }}{z_n - \alpha } &= \lim _{n\rightarrow \infty } \frac{\overline{(\alpha + 1/n)} - \overline{\alpha }}{(\alpha + 1/n) - \alpha }\\ &= \lim _{n\rightarrow \infty }\frac{1/n}{1/n} = 1. \end{aligned}
Now let z_n^{\prime } = \alpha + i/n; then
\begin{aligned} \lim _{n\rightarrow \infty } \frac{\overline{z_n^{\prime }} - \overline{\alpha }}{z_n^{\prime } - \alpha } &= \lim _{n\rightarrow \infty } \frac{\overline{(\alpha + i/n)} - \overline{\alpha }}{(\alpha + i/n) - \alpha }\\ & = \lim _{n\rightarrow \infty } \frac{-i/n}{i/n} = -1. \end{aligned}
Since these two limits do not agree, and since \alpha is arbitrary, it follows that there are no points of \mathbb{C} at which f(z) = \overline{z} is differentiable.
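A quick numerical replay of this argument (the point \alpha = 3 - 2i is an illustrative choice): the difference quotient of z \longmapsto \overline{z} works out to \overline{h}/h, which is 1 for real h and -1 for purely imaginary h.

```python
# Difference quotient of f(z) = conj(z) at alpha, for a real step and
# for a purely imaginary step.

def quotient(alpha, h):
    return ((alpha + h).conjugate() - alpha.conjugate()) / h

alpha = 3 - 2j
print(quotient(alpha, 0.01))    # close to 1: real direction
print(quotient(alpha, 0.01j))   # close to -1: imaginary direction
```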
For some functions f, you may be able to find a sequence (z_n) that converges to \alpha for which the sequence
w_n = \frac{f(z_n)-f(\alpha )}{z_n-\alpha }, \quad n=1,2,\ldots ,
is divergent. In such cases, there is no need to look for a second sequence.
Example 6
Show that the function f(z)=\sqrt{z} is not differentiable at 0.
Solution
Strategy A cannot be used here, since f is continuous at 0. Instead we look for a sequence (z_n) that converges to 0 for which the sequence (w_n) of difference quotients defined above is divergent. To make the square roots easy to handle, let z_n=1/n^2, n=1,2,\ldots{}. Then
\frac{f(z_n)-f(0)}{z_n-0}=\frac{\sqrt{1/n^2}-\sqrt{0}}{1/n^2-0}=\frac{1/n}{1/n^2}=n.
This sequence tends to infinity, and is therefore divergent. It follows that f is not differentiable at 0.
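As a small numerical illustration of this computation (not part of the original text), the quotient along z_n=1/n^2 can be evaluated directly and seen to grow without bound.

```python
# The quotient (sqrt(z_n) - sqrt(0)) / (z_n - 0) along z_n = 1/n^2 equals n,
# so the sequence of quotients is unbounded and therefore divergent.
import cmath

quotients = [cmath.sqrt(1/n**2) / (1/n**2) for n in (10, 100, 1000)]
print([abs(q) for q in quotients])  # roughly 10, 100, 1000
```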
The methods exemplified above for showing that a function is not differentiable at a given point can be summarised as follows.
Strategy B for non-differentiability
To prove that a function f is not differentiable at \alpha , apply the strategy for proving that a limit does not exist to the difference quotient
\frac{f(z) - f(\alpha )}{z - \alpha }.
If you think that a given function is not differentiable, then you should try to apply Strategy A or Strategy B. A third strategy for proving that f is not differentiable at a point will appear in Section 2.1. If, on the other hand, you think that the function is differentiable, then you should try to find the derivative.
Exercise 9
Decide whether each of the following functions is differentiable at i. If it is, then find its derivative at i.
f(z)=\operatorname{Re} z
f(z)=2z^2+3z+5
f(z)=\begin{cases} z,&\operatorname{Re} z<0\\ 4,&\operatorname{Re} z \geq 0 \end{cases}
The fact that \operatorname{Re} z is constant along the imaginary axis, but variable parallel to the real axis, suggests that \operatorname{Re} is not differentiable at i (or anywhere else, for that matter). It also suggests looking at the sequences z_n = i + i/n and z_n^{\prime } = i + 1/n, n=1,2,\ldots{}. First let z_n = i + i/n; then \begin{align*} \lim _{n\rightarrow \infty } \frac{\operatorname{Re} z_n - \operatorname{Re} i}{z_n - i} &= \lim _{n \rightarrow \infty } \frac{\operatorname{Re} (i + i/n) - \operatorname{Re} i}{(i + i/n) - i}\\ &= \lim _{n\rightarrow \infty } \frac{0}{i/n} = 0. \end{align*}Now let z_n^{\prime } = i + 1/n; then \begin{align*} \lim _{n\rightarrow \infty } \frac{\operatorname{Re} z_n^{\prime } - \operatorname{Re} i}{z_n^{\prime } - i} &= \lim _{n \rightarrow \infty } \frac{\operatorname{Re} (i + 1/n) - \operatorname{Re} i}{(i + 1/n) - i}\\ &= \lim _{n\rightarrow \infty } \frac{1/n}{1/n} = 1. \end{align*}Since these two limits do not agree, it follows that \operatorname{Re} is not differentiable at i.
f is a polynomial function, so f^{\prime }(z) = 4z + 3 for all z\in \mathbb{C}. Thus f^{\prime }(i) = 3 + 4i.
f is not differentiable at i, since it is not continuous at i.
1.4 Higher-order derivatives
In Exercise 2 you saw that the function f(z)=1/z has derivative f^{\prime }(z)=-1/z^2, a result that you can also obtain using the Reciprocal Rule. If you now apply the Reciprocal Rule to the derivative f^{\prime }(z)=-1/z^2, then you obtain a function
(f^{\prime })^{\prime }(z)=\frac{2}{z^3}\quad (z\neq 0).
In general, for a differentiable function f, the function (f^{\prime })^{\prime } is called the second derivative of {\boldsymbol{f}}, and is denoted by f^{\prime \prime }. Continued differentiation gives the so-called higher-order derivatives of {\boldsymbol{f}}. These are denoted by f^{\prime \prime },f^{\prime \prime \prime },f^{\prime \prime \prime \prime }, \ldots{}, and the values f^{\prime \prime }(\alpha ), f^{\prime \prime \prime }(\alpha ), f^{\prime \prime \prime \prime }(\alpha ),\ldots{}, are called the higher-order derivatives of {\boldsymbol{f}} at \boldsymbol{\alpha }.
Since the dashes in this notation can be rather cumbersome, we often indicate the order of the derivative by a number in brackets. Thus f^{(2)},f^{(3)},f^{(4)},\ldots mean the same as f^{\prime \prime }, f^{\prime \prime \prime }, f^{\prime \prime \prime \prime },\ldots{}, respectively. Here the brackets in f^{(4)} are needed to avoid confusion with the fourth power of f.
When we wish to discuss a derivative of general order, we will refer to the {\boldsymbol{n}}th derivative {\boldsymbol{f}}^{({\boldsymbol{n}})} of {\boldsymbol{f}}. It is often possible to find a formula for the nth derivative in terms of n. For example, if f(z)=1/z, then
f^{\prime \prime }(z)=\frac{2}{z^3},\quad f^{\prime \prime \prime }(z)=-\frac{2\times 3}{z^4},\quad f^{(4)}(z)= \frac{2\times 3\times 4}{z^5},\quad \ldots ,
so the nth derivative is given by
f^{(n)}(z)=\frac{(-1)^n n!}{z^{n+1}}.
(This can be proved formally by the Principle of Mathematical Induction.)
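The closed form can also be sanity-checked numerically. This Python sketch (an illustration with sample values, not part of the course) compares a central-difference estimate of the derivative of f^{(n)} with the formula for f^{(n+1)}, which is essentially the inductive step.

```python
# Check f^(n)(z) = (-1)^n n! / z^(n+1) for f(z) = 1/z: differentiating the
# closed form numerically should reproduce the closed form of order n + 1.
from math import factorial

def nth_deriv(n, z):
    return (-1)**n * factorial(n) / z**(n + 1)

z, h = 1.5 + 0.5j, 1e-5   # sample point away from 0, small real step
for n in range(4):
    numeric = (nth_deriv(n, z + h) - nth_deriv(n, z - h)) / (2 * h)
    exact = nth_deriv(n + 1, z)
    assert abs(numeric - exact) < 1e-4 * abs(exact)
print("closed form consistent for n = 1, ..., 5")
```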
One interesting feature of this formula is that the domain \mathcal{R}=\mathbb{C}-\{0\} remains the same, no matter how often the function f is differentiated. This is a special case of a much more general result, which states that a function that is analytic on a region \mathcal{R} has derivatives of all orders on \mathcal{R}. Here we confine our attention to first derivatives, and we continue to do this in the next subsection by giving a geometric interpretation of the first derivative.
1.5 A geometric interpretation of derivatives
As we mentioned in this session’s introduction, the derivative of a real function is often pictured geometrically as the gradient of the graph of the function. This interpretation is useful in real analysis, but it is of little use in complex analysis, since the graph of a complex function is not two-dimensional.
Fortunately, there is another way of interpreting derivatives that works for complex functions.
If a complex function f is differentiable at a point \alpha , then any point z close to \alpha is mapped by f to a point f(z) close to f(\alpha ).
Indeed, by the Linear Approximation Theorem,
f(z)=f(\alpha )+(z-\alpha )f^{\prime }(\alpha )+e(z),
where e(z)/(z-\alpha )\rightarrow 0 as z\rightarrow \alpha . So if f^{\prime }(\alpha )\neq 0, then, to a close approximation,
f(z)-f(\alpha )\approx f^{\prime }(\alpha )(z-\alpha ).
Multiplication of z - \alpha by f^{\prime }(\alpha ) has the effect of scaling z - \alpha by the factor |f^{\prime }(\alpha )| and rotating it about 0 through the angle \operatorname{Arg} f^{\prime }(\alpha ); see Figure 6. We refer to f'(\alpha ) as a complex scale factor, because it causes both a scaling and a rotation.
We can rewrite the equation above as
\begin{equation} \label{a4get} f(z)\approx f(\alpha )+f^{\prime }(\alpha )(z-\alpha ). \end{equation}
From this we see that f(z) is obtained by scaling and rotating the vector z-\alpha based at f(\alpha ) by the complex scale factor f'(\alpha ), as illustrated in Figure 7.
Another useful way to picture how f behaves geometrically is to consider the effect it has on a small disc centred at \alpha (still assuming that f'(\alpha )\neq 0). From equation 2 (above), we see that, to a close approximation, a small disc centred at \alpha is mapped to a small disc centred at f(\alpha ). In the process, the disc is rotated through the angle \operatorname{Arg} f^{\prime }(\alpha ), and it is scaled by the factor |f^{\prime }(\alpha )| (see Figure 8). As usual, the rotation is anticlockwise if \operatorname{Arg} f^{\prime }(\alpha ) is positive, and clockwise if it is negative.
The geometric interpretation of derivatives is more complicated if f^{\prime }(\alpha )=0, and we do not discuss it here.
Example 7
Using the notion of a complex scale factor, describe what happens to points close to 1+i under the function f(z)=1/z.
Solution
To a close approximation, a small disc centred at 1+i is mapped by f to a small disc centred at
f(1+i)=1/(1 + i)=\tfrac{1}{2}(1-i).
In the process, the disc is scaled by the factor |f^{\prime }(1+i)| and rotated through the angle \operatorname{Arg} f^{\prime }(1+i).
Now f^{\prime }(z)=-1/z^2, so
f^{\prime }(1 + i)=-\frac{1}{(1+i)^2}=-\frac{1}{2i}= \frac{i}{2},
which has modulus 1/2 and principal argument \pi /2.
So f scales the disc by the factor 1/2 and rotates it anticlockwise through the angle \pi /2.
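The following Python sketch (illustrative only, with sample points chosen here) checks this description numerically: it computes f'(1+i), confirms its modulus and principal argument, and compares f with its linear approximation on a small circle around 1+i.

```python
import cmath

alpha = 1 + 1j
deriv = -1 / alpha**2   # f'(z) = -1/z^2, so f'(1+i) = i/2
assert abs(abs(deriv) - 0.5) < 1e-12                  # scale factor 1/2
assert abs(cmath.phase(deriv) - cmath.pi/2) < 1e-12   # rotation through pi/2

r = 1e-4   # radius of a small circle of sample points around alpha
for k in range(8):
    z = alpha + r * cmath.exp(2j * cmath.pi * k / 8)
    exact = 1 / z
    linear = 1 / alpha + deriv * (z - alpha)   # f(alpha) + f'(alpha)(z - alpha)
    assert abs(exact - linear) < 1e-7          # close agreement near alpha
print("a small disc around 1+i is scaled by 1/2 and rotated through pi/2")
```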
Exercise 10
Using the notion of a complex scale factor, describe what happens to points close to i under the function
f(z)=\dfrac{4z+3}{2z^2+1}.
To a close approximation, a small disc centred at i is mapped by f to a small disc centred at
f(i) = \frac{4i + 3}{2i^2 + 1} = -3 - 4i.
In the process the disc is scaled by the factor |f^{\prime }(i)| and rotated through the angle \operatorname{Arg} f^{\prime }(i).
By the Quotient Rule,
\begin{align*} f^{\prime }(z) &= \frac{4(2z^2 + 1)  4z (4z + 3)}{(2z^2 + 1)^2}\\ &= \frac{8z^2  12z + 4}{(2z^2 + 1)^2}. \end{align*}
So
f^{\prime }(i) = \frac{-8i^2 - 12i + 4}{(2i^2 + 1)^2} = 12 - 12i.
This has modulus 12\sqrt{2} and principal argument -\pi /4.
So f scales the disc by the factor 12\sqrt{2} and rotates it clockwise through the angle \pi /4.
It is important to bear in mind that the complex scale factor interpretation of a derivative is only an approximation, and that it is unlikely to be reliable far from the point under consideration.
1.6 Further exercises
Here are some further exercises to end this section.
Exercise 11
Use the definition of derivative to find the derivative of the function
f(z) = 2z^2 + 5.
The function f(z) = 2z^2 + 5 is defined on the whole of \mathbb{C}. Let \alpha \in \mathbb{C}. Then
\begin{aligned} f^{\prime }(\alpha ) &= \lim _{z\rightarrow \alpha } \frac{f(z) - f(\alpha )}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{(2z^2 + 5) - (2\alpha ^2 + 5)}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } \frac{2(z^2 - \alpha ^2)}{z - \alpha }\\ &= \lim _{z\rightarrow \alpha } 2(z + \alpha )\\ &= 4\alpha . \end{aligned}
Since \alpha is an arbitrary complex number, f is differentiable on the whole of \mathbb{C}, and the derivative is the function
f^{\prime }(z) = 4z \quad (z \in \mathbb{C}).
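As a hedged numerical illustration of this limit (not part of the original solution), the difference quotient of f approaches 4\alpha from any direction; the diagonal direction below is an arbitrary sample choice.

```python
# Difference quotient of f(z) = 2z^2 + 5 at alpha: algebraically it equals
# 2(z + alpha), so it tends to 4*alpha as z -> alpha along any path.
def f(z):
    return 2 * z**2 + 5

alpha = 1 - 2j                        # sample point
direction = (1 + 1j) / abs(1 + 1j)    # a diagonal direction of approach
for h in (1e-2, 1e-4, 1e-6):
    q = (f(alpha + h * direction) - f(alpha)) / (h * direction)
    assert abs(q - 4 * alpha) < 5 * h   # error shrinks like 2h
print("difference quotient tends to", 4 * alpha)
```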
Exercise 12
Prove the Quotient Rule for differentiation.
Let F = f/g. Then
\begin{aligned} & \frac{F(z) - F(\alpha )}{z - \alpha }\\ & = \frac{f(z)/g(z) - f(\alpha )/g(\alpha )}{z - \alpha } \\ & = \frac{f(z)g(\alpha ) - f(\alpha )g(z)}{(z-\alpha )g(z)g(\alpha )} \\ & = \frac{g(\alpha )(f(z) - f(\alpha )) - f(\alpha ) (g(z) - g(\alpha ))}{(z -\alpha )g(z)g(\alpha )} \\ & = \frac{g(\alpha ) \left (\dfrac{f(z) - f(\alpha )}{z-\alpha }\right ) - f(\alpha )\left (\dfrac{g(z) - g(\alpha )}{z - \alpha }\right )}{g(z)g(\alpha )}. \end{aligned}
Using the Combination Rules for limits of functions, the continuity of g, and the fact that g(\alpha ) \ne 0, we can take limits to obtain
F^{\prime }(\alpha ) = \frac{g(\alpha ) f^{\prime }(\alpha ) - f(\alpha ) g^{\prime }(\alpha )}{(g(\alpha ))^2}.
Exercise 13
Find the derivative of each of the following functions f. In each case specify the domain of f^{\prime }.
f(z) = \dfrac{z^2 + 2z + 1}{3z + 1}
f(z) = \dfrac{z^3 + 1}{z^2 - z - 6}
f(z) = \dfrac{1}{z^2 + 2z + 2}
f(z) = z^2 + 5z - 2 + \dfrac{1}{z} + \dfrac{1}{z^2}
By the Combination Rules, \begin{aligned} f^{\prime }(z) &= \frac{(3z + 1)(2z + 2) - 3(z^2 + 2z + 1)}{(3z + 1)^2}\\ &= \frac{3z^2 + 2z - 1}{(3z + 1)^2}. \end{aligned}The domain of f^{\prime } is \mathbb{C} - \{-1/3\}.
By the Combination Rules, \begin{aligned} f^{\prime }(z) &= \frac{(z^2 - z - 6)(3z^2) - (z^3 + 1)(2z - 1)}{(z^2 - z - 6)^2}\\ &= \frac{z^4 - 2z^3 - 18z^2 - 2z + 1}{(z^2 - z - 6)^2}. \end{aligned}Since z^2 - z - 6 = (z + 2)(z - 3), the domain of f^{\prime } is \mathbb{C} - \{-2, 3\}.
By the Reciprocal Rule, f^{\prime }(z) = \frac{-(2z + 2)}{(z^2 + 2z + 2)^2}. The roots of z^2 + 2z + 2 are \dfrac{-2\pm \sqrt{-4}}{2} = -1 \pm i. The domain of f^{\prime } is therefore \mathbb{C} - \{-1 + i, -1 - i\}.
By the Sum Rule and the rule for differentiating integer powers, f^{\prime }(z) = 2z + 5 - \frac{1}{z^2} - \frac{2}{z^3}. The domain of f^{\prime } is \mathbb{C} - \{0\}.
Exercise 14
Use Strategy B to show that there are no points of \mathbb{C} at which the function
f(z) = \operatorname{Im} z
is differentiable.
Consider an arbitrary complex number \alpha = a + ib, where a,b\in \mathbb{R}. Let z_n = \alpha +1/n, n=1,2,\ldots{}. Then z_n \rightarrow \alpha , and
\begin{aligned} \lim _{n\rightarrow \infty } \frac{\operatorname{Im} z_n - \operatorname{Im} \alpha }{z_n - \alpha } &=\lim _{n\rightarrow \infty } \frac{b - b}{1/n}\\ &= \lim _{n \rightarrow \infty } \frac{0}{1/n} =0. \end{aligned}
Now let z_n^{\prime } = \alpha + i/n, n=1,2,\ldots{}. Then z_n^{\prime } \rightarrow \alpha , and
\begin{align*} \lim _{n\rightarrow \infty } \frac{\operatorname{Im} z_n^{\prime } - \operatorname{Im} \alpha }{z_n^{\prime } - \alpha } &= \lim _{n\rightarrow \infty } \frac{(b + 1/n) - b}{i/n}\\ &= \lim _{n\rightarrow \infty } \frac{1/n}{i/n} = -i. \end{align*}
Since the two limits do not agree, it follows that \operatorname{Im} fails to be differentiable at each point of \mathbb{C}.
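These two directional limits can be reproduced numerically; the sketch below (illustrative, with a sample point) evaluates both difference quotients for f(z)=\operatorname{Im} z.

```python
# For f(z) = Im z, the horizontal quotient is 0 and the vertical quotient
# tends to -i, so the difference quotient has no limit at any point.
alpha = -1 + 2j   # sample point; the result is the same for every alpha
n = 10**6
horizontal = ((alpha + 1/n).imag - alpha.imag) / (1/n)    # -> 0
vertical = ((alpha + 1j/n).imag - alpha.imag) / (1j/n)    # -> -i
print(horizontal, vertical)
```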
Exercise 15
Describe the approximate geometric effect of the function
f(z) = \dfrac{z^3 + 8}{z - 6}
on a small disc centred at the point 2.
To a close approximation, a small disc centred at 2 is mapped by f to a small disc centred at f(2) = -4. In the process, the disc is scaled by the factor |f^{\prime }(2)| and rotated through the angle \operatorname{Arg} f^{\prime }(2).
By the Quotient Rule,
\begin{align*} f^{\prime }(z) &= \frac{3z^2(z - 6) - (z^3 + 8)}{(z - 6)^2}\\ &= \frac{2z^3 - 18z^2 - 8}{(z - 6)^2}. \end{align*}
Hence f^{\prime }(2) = \dfrac{16 - 72 - 8}{16} = -4, which has modulus 4 and principal argument \pi . So f scales the disc by the factor 4 and rotates it anticlockwise through the angle \pi .
2 The Cauchy–Riemann equations
After working through this section, you should be able to:
find the partial derivatives of a function from \mathbb{R}^2 to \mathbb{R}
use the Cauchy–Riemann equations to show that a function is not differentiable at a given point
use the Cauchy–Riemann equations to show that a function, such as the exponential function, is differentiable at a given point, and to find the derivative.
This section is challenging, so you may find that you do not appreciate some of the details on a first reading. Most importantly, you should try to understand the definitions, strategies and theorems, and apply them in the examples and exercises.
2.1 The Cauchy–Riemann theorems
Here we will explore the relationship between complex differentiation and real differentiation. To do this, we introduce the notion of a partial derivative and use it to derive the Cauchy–Riemann equations (pronounced ‘koh-shee ree-man’). These equations are conditions that any differentiable complex function must satisfy, so they can be used to test whether a given complex function is differentiable. In particular, we use them to investigate the differentiability of the complex exponential function. The technique is to split the exponential function
\exp (x+iy)=e^x(\cos y+i\sin y)
into its real and imaginary parts:
u(x,y)= e^x \cos y\quad \text{and}\quad v(x,y)=e^x\sin y,
each of which is a realvalued function of the real variables x and y. The derivative of exp is then calculated by using the derivatives of the real trigonometric and exponential functions, which we assume to be known.
Before we deal with the exponential function, however, let us first consider the simpler function f(z)=z^3. By writing z=x+iy, we see that
f(x+iy)=(x+iy)^3= (x^3-3xy^2)+i(3x^2y-y^3).
Let us define
u(x,y)=x^3-3xy^2\quad \text{and}\quad v(x,y)= 3x^2y-y^3.
Then u and v are the real and imaginary parts of f, respectively; that is, u=\operatorname{Re} f and v=\operatorname{Im} f. For the moment we will concentrate on the real part u; part of its graph (given by the equation s=u(x,y)) is shown in Figure 9. Since u is a function of two real variables, its graph is a surface. The height of the surface above the (x,y)plane represents the value of the function at the point (x,y). For instance, the point P on the surface has coordinates (2,1,2) because u(2,1)=2^3-3\times 2\times 1^2=2.
Let us now explore the concept of the gradient of the surface at a point such as P. We will find that the answer depends on the ‘direction’ from which we approach the point. To make this more precise, consider Figure 10, in which the vertical plane with equation y=1 is shown intersecting the surface in a curve that passes through P. By substituting y=1 into u(x,y)=x^3-3xy^2, we see that the curve has equation x\longmapsto x^3-3x, so we can calculate its gradient at P; this is the gradient of the surface in the x-direction at P.
More generally, whenever we intersect the surface with a vertical plane with equation y=\text{constant}, we obtain a curve on the surface with equation x\longmapsto x^3-3xy^2 (where y is considered to be fixed). We can find the gradient at any point (a,b,u(a,b)) on this curve by differentiating with respect to x and then substituting x=a and y=b. The resulting expression is called the partial derivative of u with respect to x at (a,b), and it is denoted by
\frac{\partial u}{\partial x}(a,b).
A curly \partial is used rather than a straight d to emphasise that this is a partial derivative, for which we differentiate with respect to one variable and keep the other variable fixed. In our particular case, differentiating u(x,y)=x^33xy^2 with respect to x (and keeping y fixed) gives
\frac{\partial u}{\partial x}(x,y)=3x^2-3y^2,
and substituting x=2 and y=1 gives
\frac{\partial u}{\partial x}(2,1)=9.
Hence the gradient of the surface in the x-direction at the point P is 9. This is a positive value because near the point P, u increases as x increases (with y=1), as you can see from Figure 10.
Figure 11 shows the vertical plane with equation x=2 intersecting the surface in a different curve that passes through P.
Reasoning similarly to before, we see that intersecting the surface with a vertical plane with equation x=\text{constant} gives a curve on the surface, and we can obtain the gradient at a point (a,b,u(a,b)) on this curve by differentiating u(x,y) with respect to y while keeping x fixed (and then substituting x=a and y=b). The resulting expression is called the partial derivative of u with respect to y at (a,b), and it is denoted by
\frac{\partial u}{\partial y}(a,b).
Differentiating u(x,y)=x^33xy^2 with respect to y (and keeping x fixed) gives
\frac{\partial u}{\partial y}(x,y) = -6xy,\quad \text{so}\quad \frac{\partial u}{\partial y}(2,1) = -12;
this is the gradient of the surface in the y-direction at the point P. It is a negative value this time, because when x and y are positive, u decreases as y increases (keeping x fixed), as you can see from Figure 11.
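Both partial derivatives at (2,1) can be confirmed numerically; this Python sketch (offered as an illustration only) estimates them by central differences.

```python
# Central-difference estimates of the partials of u(x, y) = x^3 - 3xy^2 at (2, 1).
def u(x, y):
    return x**3 - 3 * x * y**2

h = 1e-6
du_dx = (u(2 + h, 1) - u(2 - h, 1)) / (2 * h)   # expect 3x^2 - 3y^2 = 9
du_dy = (u(2, 1 + h) - u(2, 1 - h)) / (2 * h)   # expect -6xy = -12
print(du_dx, du_dy)
```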
You will need to work with partial derivatives a good deal here, so let us state the definitions formally.
Definitions
Let u\colon A\longrightarrow \mathbb{R} be a function with domain A a subset of \mathbb{R}^2 that contains the point (a,b).
The partial derivative of \boldsymbol{u} with respect to \boldsymbol{x} at \boldsymbol{(a,b)}, denoted \dfrac{\partial u}{\partial x}(a,b), is the derivative of the function x\longmapsto u(x,b) at x=a, provided that this derivative exists.
The partial derivative of \boldsymbol{u} with respect to \boldsymbol{y} at \boldsymbol{(a,b)}, denoted \dfrac{\partial u}{\partial y}(a,b), is the derivative of the function y\longmapsto u(a,y) at y=b, provided that this derivative exists.
Partial derivatives are real derivatives, not complex derivatives.
The next exercise asks you to work out the partial derivatives of the imaginary part of the complex function f(z)=z^3.
Exercise 16
Calculate the partial derivatives of v(x,y)=3x^2y-y^3.
Evaluate these partial derivatives at (2,1).
Differentiating v(x,y) = 3x^2 y - y^3 with respect to x while keeping y fixed, we obtain \frac{\partial v}{\partial x} (x, y) = 6xy.Differentiating v with respect to y while keeping x fixed, we obtain \frac{\partial v}{\partial y} (x, y) = 3x^2 - 3y^2.
So, at (x, y) = (2, 1) the partial derivatives have the values \frac{\partial v}{\partial x} (2, 1) = 12\quad \text{and} \quad \frac{\partial v}{\partial y} (2, 1) = 9.
Let us collect together the partial derivatives of the real and imaginary parts u and v of the function f(z)=z^3:
\begin{alignat*}{2} \frac{\partial u}{\partial x}(a,b) &=3a^2-3b^2, \quad \,& \frac{\partial v}{\partial x}(a,b) &=6ab,\\[2pt] \frac{\partial u}{\partial y}(a,b) &=-6ab, \quad \,& \frac{\partial v}{\partial y}(a,b) &=3a^2-3b^2. \end{alignat*}
As you can see, we have
\frac{\partial u}{\partial x}(a,b)= \frac{\partial v}{\partial y}(a,b)\quad \text{and}\quad \frac{\partial v}{\partial x}(a,b)= -\frac{\partial u}{\partial y}(a,b).
This pair of equations is called the Cauchy–Riemann equations, and they hold true for the real and imaginary parts of any differentiable complex function, as the following important theorem testifies.
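These two equations for f(z)=z^3 can also be checked numerically. The sketch below (an illustration, with an arbitrarily chosen sample point) estimates the four partial derivatives by central differences and verifies both Cauchy–Riemann equations.

```python
# Verify du/dx = dv/dy and dv/dx = -du/dy for u = x^3 - 3xy^2, v = 3x^2y - y^3.
def u(x, y):
    return x**3 - 3 * x * y**2

def v(x, y):
    return 3 * x**2 * y - y**3

a, b, h = 0.7, -1.3, 1e-6   # sample point and step size
ux = (u(a + h, b) - u(a - h, b)) / (2 * h)
uy = (u(a, b + h) - u(a, b - h)) / (2 * h)
vx = (v(a + h, b) - v(a - h, b)) / (2 * h)
vy = (v(a, b + h) - v(a, b - h)) / (2 * h)
assert abs(ux - vy) < 1e-6 and abs(vx + uy) < 1e-6
print("Cauchy-Riemann equations hold at", (a, b))
```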
Theorem 4 Cauchy–Riemann Theorem
Let f(x+iy)=u(x,y)+iv(x,y) be defined on a region \mathcal{R} containing a+ib.
If f is differentiable at a+ib, then \dfrac{\partial u}{\partial x}, \dfrac{\partial u}{\partial y}, \dfrac{\partial v}{\partial x}, \dfrac{\partial v}{\partial y} exist at (a,b) and satisfy the Cauchy–Riemann equations
\frac{\partial u}{\partial x}(a,b)=\frac{\partial v}{\partial y}(a, b) \text{ and } \frac{\partial v}{\partial x} (a,b)=-\frac{\partial u}{\partial y} (a, b).
Proof
Let \alpha =a+ib. Suppose that (z_n) is any sequence in \mathcal{R}-\{\alpha \} that converges to \alpha . Let us write z_n=x_n+iy_n. According to the definition of a derivative, we have
f'(\alpha ) = \lim _{z\to \alpha } \frac{f(z)-f(\alpha )}{z-\alpha } = \lim _{n\to \infty } \frac{f(z_n)-f(\alpha )}{z_n-\alpha }.
Observe that, by expressing f in terms of its real and imaginary parts, we can write
\begin{equation} \label{a4urk} \frac{f(z_n)-f(\alpha )}{z_n-\alpha } = \left (\frac{u(x_n,y_n)-u(a,b)}{(x_n-a)+i(y_n-b)}\right )+i\left (\frac{v(x_n,y_n)-v(a,b)}{(x_n-a)+i(y_n-b)}\right ). \end{equation}
We proceed by choosing two different types of sequences (z_n), and observing the behaviour of the expressions in large brackets in equation 3 (above) in each case.
For our first choice, let us begin by defining (x_n) to be any sequence in \mathbb{R}-\{a\} that converges to a. Let z_n=x_n+ib, so the sequence (z_n) converges to \alpha =a+ib. By removing a finite number of terms from (x_n), if need be, we can assume that each point z_n belongs to the open set \mathcal{R}-\{\alpha \}. Substituting z_n=x_n+ib into equation 3 gives
\frac{f(z_n)-f(\alpha )}{z_n-\alpha } = \left (\frac{u(x_n,b)-u(a,b)}{x_n-a}\right )+i\left (\frac{v(x_n,b)-v(a,b)}{x_n-a}\right ).
We know that the expression on the left-hand side converges (to f'(\alpha )), so its real and imaginary parts (indicated by the bracketed expressions on the right-hand side) converge too. Since (x_n) was chosen to be any sequence in \mathbb{R}-\{a\} that converges to a, we see from the definition of partial derivatives that \dfrac{\partial u}{\partial x} and \dfrac{\partial v}{\partial x} exist at (a,b) and
\frac{u(x_n,b)-u(a,b)}{x_n-a}\rightarrow \frac{\partial u}{\partial x}(a,b)\quad \text{and}\quad \frac{v(x_n,b)-v(a,b)}{x_n-a}\rightarrow \frac{\partial v}{\partial x}(a,b).
In summary, we have
\begin{equation} \label{a4urk1} f'(\alpha ) = \frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x}(a,b). \end{equation}
Next let (y_n) be any sequence in \mathbb{R}-\{b\} that converges to b, and define z_n=a+iy_n, so z_n\to \alpha . Again, by omitting a finite number of terms from (y_n), if need be, we can assume that z_n\in \mathcal{R}-\{\alpha \} for all n. Substituting z_n=a+iy_n into equation 3 gives
\begin{aligned} \frac{f(z_n)-f(\alpha )}{z_n-\alpha } &= \left (\frac{u(a,y_n)-u(a,b)}{i(y_n-b)}\right )+i\left (\frac{v(a,y_n)-v(a,b)}{i(y_n-b)}\right )\\ &=\left (\frac{v(a,y_n)-v(a,b)}{y_n-b}\right )-i\left (\frac{u(a,y_n)-u(a,b)}{y_n-b}\right ). \end{aligned}
Reasoning as before, we see that \dfrac{\partial u}{\partial y} and \dfrac{\partial v}{\partial y} exist at (a,b) and
\begin{equation} \label{a4urk2} f'(\alpha ) = \frac{\partial v}{\partial y}(a,b)-i\frac{\partial u}{\partial y}(a,b). \end{equation}
Comparing equation 4 and equation 5 (both above), and equating real and imaginary parts, we obtain the Cauchy–Riemann equations, as required.
Origin of the Cauchy–Riemann equations
The Cauchy–Riemann equations are named after the mathematicians Augustin-Louis Cauchy (1789–1857) and Bernhard Riemann (1826–1866), who were among the first to recognise the importance of these equations in complex analysis.
The Cauchy–Riemann equations first appeared in the work of another mathematician, however: the Frenchman Jean le Rond d’Alembert (1717–1783), who is perhaps best remembered for his work in classical mechanics. Indeed, the Cauchy–Riemann equations were written down by d’Alembert in an essay on fluid dynamics in 1752 to describe the velocity components of a two-dimensional irrotational fluid flow.
The Cauchy–Riemann Theorem gives us another strategy for proving the nondifferentiability of a complex function. (Two other strategies were described earlier in Section 1.3.) If a complex function is differentiable, then it must satisfy the Cauchy–Riemann equations. So if those equations do not hold, then the function cannot be differentiable.
Strategy C for non-differentiability
Let f(x + iy) = u(x,y) + iv(x,y). If either
\frac{\partial u}{\partial x}(a,b) \neq \frac{\partial v}{\partial y}(a,b) \quad \text{or} \quad \frac{\partial v}{\partial x}(a,b) \neq -\frac{\partial u}{\partial y}(a,b),
then f is not differentiable at a + ib.
To illustrate this strategy, consider the function
f(x+iy)=(x^2+y^2)+i(2x+4y).
The real part u and imaginary part v of this function are given by
u(x,y)=x^2+y^2\quad \text{and}\quad v(x,y)=2x+4y.
Hence
\begin{alignat*}{2} \frac{\partial u}{\partial x}(x,y) &=2x, \quad \, & \frac{\partial v}{\partial x}(x,y) &=2,\\[2pt] \frac{\partial u}{\partial y}(x,y) &=2y, \quad \, & \frac{\partial v}{\partial y}(x,y) &=4. \end{alignat*}
As you can see, the partial derivatives have been grouped into two pairs according to the Cauchy–Riemann equations
\frac{\partial u}{\partial x}(x,y)=\frac{\partial v}{\partial y}(x, y) \text{ and } \frac{\partial v}{\partial x} (x,y)=-\frac{\partial u}{\partial y} (x, y).
In this case, these equations are 2x=4 and 2=-2y, which are satisfied only when x=2 and y=-1; that is, they are satisfied only when z=2-i. If z\neq 2-i, then the Cauchy–Riemann equations fail, so Strategy C tells us that f is not differentiable at z.
Notice that the Cauchy–Riemann Theorem and Strategy C do not tell us whether f is differentiable at the point 2-i at which the Cauchy–Riemann equations are satisfied. To deal with points of this type we need another theorem, which we will come to shortly. First, however, try the following exercise, to practise applying Strategy C.
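The way the Cauchy–Riemann equations single out one point can be seen in a short numerical sketch (illustrative only, using the partial derivatives computed above).

```python
# For f(x + iy) = (x^2 + y^2) + i(2x + 4y): du/dx = 2x, du/dy = 2y,
# dv/dx = 2, dv/dy = 4, so the Cauchy-Riemann equations reduce to
# 2x = 4 and 2 = -2y, which hold only at (x, y) = (2, -1), i.e. z = 2 - i.
def cr_satisfied(x, y, tol=1e-12):
    du_dx, du_dy = 2 * x, 2 * y
    dv_dx, dv_dy = 2, 4
    return abs(du_dx - dv_dy) < tol and abs(dv_dx + du_dy) < tol

assert cr_satisfied(2, -1)
assert not any(cr_satisfied(x, y)
               for x in (-1, 0, 1, 3)
               for y in (-2, 0, 1, 2))
print("Cauchy-Riemann equations hold only at z = 2 - i")
```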
Exercise 17
Show that each of the following functions fails to be differentiable at all points of \mathbb{C}.
f(x+iy)=e^x-ie^y
f(z)=\overline{z}
Writing f in the form f(x + iy) = u(x, y) + iv(x, y),we obtain u(x, y) = e^x \quad \text{and} \quad v(x, y) = -e^y.Hence \frac{\partial u}{\partial x} (x, y) = e^x \quad \text{and} \quad \frac{\partial v}{\partial y} (x, y) = -e^y.Since e^x is always positive, whereas -e^y is always negative, the first of the Cauchy–Riemann equations fails to hold for each (x, y). It follows that f fails to be differentiable at all points of \mathbb{C}.
Writing f(z) = \overline{z}=x-iy in the form f(x + iy) = u(x, y) + iv(x, y),we obtain u(x, y) = x \quad \text{and} \quad v(x, y) = -y.Hence \frac{\partial u}{\partial x} (x, y) = 1 \quad \text{and} \quad \frac{\partial v}{\partial y} (x, y) = -1.It follows that the first of the Cauchy–Riemann equations fails to hold for each (x, y), so f fails to be differentiable at all points of \mathbb{C}.
We have seen that if the Cauchy–Riemann equations are not satisfied, then the function is not differentiable. Let us now describe an example to show that even if the Cauchy–Riemann equations are satisfied, the function may still not be differentiable.
Consider the function f(x+iy)=u(x,y)+iv(x,y), where v(x,y)=0 for all x and y, and
u(x,y)= \begin{cases} \min \{x,y\},& x,y>0,\\ 0, & \text{otherwise}. \end{cases}
The graph of u is shown in Figure 12.
Since u and v take the value 0 at all points on the x and yaxes, we see that all the partial derivatives vanish at (0,0); that is,
\begin{alignat*}{2} \frac{\partial u}{\partial x}(0,0) &=0, \quad \, & \frac{\partial v}{\partial x}(0,0) &=0,\\ \frac{\partial u}{\partial y}(0,0) &=0, \quad \, & \frac{\partial v}{\partial y}(0,0) &=0. \end{alignat*}
However, even though the Cauchy–Riemann equations are satisfied at the origin, f is not differentiable there. To see this, observe that if z_n=1/n, n=1,2,\ldots{}, then
\frac{f(z_n)-f(0)}{z_n-0} = \frac{u(1/n,0)-0}{1/n-0}=0 \rightarrow 0,
whereas if z_n = 1/n+i/n, n=1,2,\ldots{}, then
\frac{f(z_n)-f(0)}{z_n-0} = \frac{u(1/n,1/n)-0}{1/n+i/n-0}=\frac{1/n}{1/n+i/n}=\frac{1}{1+i} \rightarrow \frac{1}{1+i}.
The two limits 0 and 1/(1+i) differ, so f is not differentiable at 0.
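The two sequence computations in this counterexample can be replicated numerically; the sketch below (an illustration, not part of the course text) evaluates both difference quotients at a single large n.

```python
# u(x, y) = min(x, y) for x, y > 0 and 0 otherwise; v = 0, so f = u.
def f(z):
    x, y = z.real, z.imag
    return min(x, y) if x > 0 and y > 0 else 0.0

n = 10**6
q_real = f(1/n) / (1/n)                  # along z_n = 1/n:       u(1/n, 0) = 0
q_diag = f(1/n + 1j/n) / (1/n + 1j/n)    # along z_n = 1/n + i/n: tends to 1/(1+i)
print(q_real, q_diag)
```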
This example demonstrates that the differentiability of a complex function does not follow from the Cauchy–Riemann equations alone. However, if certain extra conditions are satisfied, then f is differentiable, as the following theorem reveals.
Theorem 5 Cauchy–Riemann Converse Theorem
Let f(x+iy)=u(x,y)+iv(x,y) be defined on a region \mathcal{R} containing a+ib. If the partial derivatives \dfrac{\partial u}{\partial x}, \dfrac{\partial u}{\partial y}, \dfrac{\partial v}{\partial x}, \dfrac{\partial v}{\partial y}
exist at (x,y) for each x+iy\in \mathcal{R}
are continuous at (a,b)
satisfy the Cauchy–Riemann equations at (a,b),
then f is differentiable at a+ib and
f^{\prime }(a+ib)=\frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x} (a,b).
The proof of this theorem is postponed until the next subsection.
Let us now return to the function f(x+iy)=(x^2+y^2)+i(2x+4y), considered earlier, which satisfies the Cauchy–Riemann equations at the point z=2-i only, and is therefore not differentiable at any other point. You saw earlier that the partial derivatives exist for every point (x,y) (so we can choose \mathcal{R}=\mathbb{C} in applying Theorem 5) and they satisfy
\begin{alignat*}{2} \frac{\partial u}{\partial x}(x,y) &=2x, \quad \, & \frac{\partial v}{\partial x}(x,y) &=2,\\[2pt] \frac{\partial u}{\partial y}(x,y) &=2y, \quad \, & \frac{\partial v}{\partial y}(x,y) &=4. \end{alignat*}
Each of these functions is continuous at (2,-1) because each of them is either constant or a multiple of one of the basic continuous functions \operatorname{Re} z or \operatorname{Im} z. For example, the function (x,y)\longmapsto 2x can be thought of as z\longmapsto 2\operatorname{Re} z.
It follows, then, from the Cauchy–Riemann Converse Theorem that f is differentiable at 2-i. In fact, the theorem even tells us the value of f'(2-i), namely
f^{\prime }(2-i)=\frac{\partial u}{\partial x}(2,-1)+i\frac{\partial v}{\partial x} (2,-1)=2\times 2 + i\times 2 = 4+2i.
We now investigate the differentiability of the complex exponential function, as promised earlier.
Example 8
Prove that the complex exponential function f(z)=e^z is entire, and find its derivative.
Solution
The real part u and the imaginary part v of f are given by
u(x,y)= e^x \cos y\quad \text{and}\quad v(x,y)=e^x\sin y.
Hence the partial derivatives of u and v exist for every point (x,y) and satisfy
\begin{alignat*}{2} \frac{\partial u}{\partial x}(x,y) &=e^x\cos y, \quad \, & \frac{\partial v}{\partial x}(x,y) &=e^x\sin y,\\ \frac{\partial u}{\partial y}(x,y) &=-e^x\sin y, \quad \, & \frac{\partial v}{\partial y}(x,y) &=e^x\cos y. \end{alignat*}
Since the real exponential and trigonometric functions are continuous, and the real and imaginary part functions \operatorname{Re} z and \operatorname{Im} z are basic continuous functions, we see from the Combination Rules and Composition Rule for continuous functions that each partial derivative is continuous at every point (x,y).
The Cauchy–Riemann equations are satisfied at all points (x,y), so the Cauchy–Riemann Converse Theorem tells us that f is differentiable at every point of the complex plane (it is entire) and
f^{\prime }(z)=\frac{\partial u}{\partial x}(x,y)+i\frac{\partial v}{\partial x}(x,y)=e^x\cos y +ie^x\sin y=e^z.
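As a hedged numerical check of this example (not part of the original text), a central-difference estimate of the derivative of exp agrees with exp itself at a few sample points.

```python
# The derivative of exp, estimated by a central difference, matches exp(z).
import cmath

h = 1e-6
for z in (0j, 1 + 1j, -2 + 0.5j):   # arbitrary sample points
    quotient = (cmath.exp(z + h) - cmath.exp(z - h)) / (2 * h)
    assert abs(quotient - cmath.exp(z)) < 1e-8
print("exp'(z) = exp(z) at all sample points")
```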
Exercise 18
Use the Cauchy–Riemann theorems to find the derivatives of the following functions. In each case specify the domain of the derivative.
f(z)=\sin z
f(z)=|z|^2
(Hint: For part (a), write \sin z=\sin (x+iy) and use a trigonometric addition identity to find the real and imaginary parts of \sin z.)
From the trigonometric identities, \begin{align*} \sin (x + iy) &= \sin x \cos iy + \cos x \sin iy\\ &= \sin x \cosh y + i \cos x \sinh y, \end{align*}so f(x+iy)=u(x,y)+iv(x,y), where \begin{align*} u(x, y) &= \sin x \cosh y\quad \text{and}\\ v(x, y) &= \cos x \sinh y. \end{align*}Hence \begin{aligned} \dfrac{\partial u}{\partial x} (x, y) &= \cos x \cosh y,\\[3pt] \dfrac{\partial v}{\partial x} (x, y) &= -\sin x \sinh y,\\[3pt] \dfrac{\partial u}{\partial y} (x, y) &= \sin x \sinh y, \\[3pt] \dfrac{\partial v}{\partial y} (x, y) &= \cos x \cosh y. \end{aligned}These partial derivatives are defined and continuous on the whole of \mathbb{C}. Furthermore, \begin{aligned} \frac{\partial u}{\partial x} (x, y)& = \frac{\partial v}{\partial y} (x, y)\quad \text{and}\\ \frac{\partial v}{\partial x} (x, y)& = -\frac{\partial u}{\partial y} (x, y), \end{aligned}so the Cauchy–Riemann equations are satisfied at every point of \mathbb{C}. By the Cauchy–Riemann Converse Theorem, f(z) = \sin z is entire, and \begin{align*} f^{\prime }(x + iy) &= \frac{\partial u}{\partial x} (x, y) + i\frac{\partial v}{\partial x} (x, y)\\ &= \cos x \cosh y - i \sin x \sinh y\\ &= \cos x \cos iy - \sin x \sin iy\\ &= \cos (x + iy). \end{align*}Hence f^{\prime } has domain \mathbb{C} and f^{\prime }(z) = \cos z.
Here f(x + iy) = |x + iy|^2 = x^2 + y^2, so u(x,y) = x^2 + y^2 \quad \text{and} \quad v(x, y) = 0.Hence \begin{array}{@{}ll@{}} \dfrac{\partial u}{\partial x} (x, y) = 2x, &\dfrac{\partial v}{\partial x} (x, y) = 0,\\[6pt] \dfrac{\partial u}{\partial y} (x, y) = 2y,&\dfrac{\partial v}{\partial y} (x, y) = 0. \end{array}The Cauchy–Riemann equations cannot be satisfied unless 2x = 0 and 2y = 0, so f fails to be differentiable at all non-zero points of \mathbb{C}. However, the Cauchy–Riemann equations are satisfied at (0, 0), and the partial derivatives are defined on \mathbb{C} and continuous (at (0,0)), so by the Cauchy–Riemann Converse Theorem, f is differentiable at 0, and \begin{align*} f^{\prime }(0) &= \frac{\partial u}{\partial x} (0, 0) + i\frac{\partial v}{\partial x} (0, 0)\\ &= 0 + i0=0. \end{align*}Thus f^{\prime } has domain \{0\} and f^{\prime }(0) = 0. (This is the example referred to in Section 1.1 of a function that is differentiable at a point, but not analytic at that point.)
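The behaviour of f(z)=|z|^2 can also be seen numerically by evaluating difference quotients directly. The sample points, step sizes and tolerances below are illustrative choices, not part of the course text.

```python
# f(z) = |z|^2 has real part u = x^2 + y^2 and imaginary part v = 0.
def f(z):
    return abs(z) ** 2

# Difference quotient (f(z0 + h) - f(z0)) / h for a small complex step h.
def diff_quotient(z0, h):
    return (f(z0 + h) - f(z0)) / h

# At z0 = 0 the quotient tends to 0 whatever the direction of approach,
# matching f'(0) = 0.
for direction in (1, 1j, 1 + 1j):
    q = diff_quotient(0, 1e-6 * direction)
    assert abs(q) < 1e-5

# At z0 = 1 the limit depends on the direction: along the real axis the
# quotient is near 2, along the imaginary axis near 0, so f'(1) cannot exist.
q_real = diff_quotient(1, 1e-6)
q_imag = diff_quotient(1, 1e-6j)
assert abs(q_real - 2) < 1e-3
assert abs(q_imag) < 1e-3
```

This direction-dependence of the limit is exactly why the Cauchy–Riemann equations fail away from the origin.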
2.2 Proof of the Cauchy–Riemann Converse Theorem
The proof of the Cauchy–Riemann Converse Theorem is rather involved and may require more than one reading.
We will need two results from real analysis. The first result is known as the Mean Value Theorem.
Theorem 6 Mean Value Theorem
Let f be a real function that is continuous on the closed interval [a,x] and differentiable on the open interval (a,x). Then there is a number c\in (a,x) such that
\begin{equation} \label{a4eat} f(x)=f(a)+(x-a)f^{\prime }(c). \end{equation}
To appreciate why this theorem is true, imagine pushing the chord between (a,f(a)) and (x,f(x)) in Figure 13 parallel to itself until it becomes a tangent to the graph of f at a point (c,f(c)), where c lies somewhere between a and x. Clearly, the gradient of the original chord must be equal to the gradient of the tangent, so
\frac{f(x)-f(a)}{x-a}=f^{\prime }(c).
Multiplication by x-a gives f(x)=f(a)+(x-a)f^{\prime }(c). Notice that this equation is also true if x = a (for any choice of c).
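For a concrete illustration of the theorem, take f(x)=x^2 on [0,1] (an assumed example, not one from the course text): the chord gradient is 1, and since f'(x)=2x, the point c guaranteed by the theorem is c=1/2.

```python
# Mean Value Theorem sketch for the illustrative example f(x) = x^2 on [0, 1].
def f(x):
    return x ** 2

a, x = 0.0, 1.0
chord_gradient = (f(x) - f(a)) / (x - a)  # equals 1 here

# For this particular f, solving f'(c) = chord_gradient means 2c = 1.
c = chord_gradient / 2
assert a < c < x
assert abs(2 * c - chord_gradient) < 1e-12  # f'(c) matches the chord gradient
```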
The second result that we will need is a Linear Approximation Theorem, which asserts that if u is a real-valued function of two real variables x and y, then for (x,y) near (a,b), the value of u(x,y) can be approximated by the value of the linear function t defined by
t(x,y)=u(a,b)+(x-a)\frac{\partial u}{\partial x}{(a,b)}+(y-b) \frac{\partial u}{\partial y} (a,b).
Now, the graph of t is a plane passing through the point P=(a,b,u(a,b)) on the graph of u (Figure 14). Moreover, the partial x- and y-derivatives of t coincide with the partial x- and y-derivatives of u at (a,b). This means that both have the same gradient in the x- and y-directions, so you can think of the plane as the tangent plane to the graph of u at P.
The accuracy with which this tangent plane approximates the graph of u depends on the smoothness of the graph of u. If the graph exhibits the kind of kink shown in Figure 12, then the approximation is not as good as for a function with continuous partial derivatives.
Theorem 7 Linear Approximation Theorem (\mathbb{R}^2 to \mathbb{R})
Let u be a real-valued function of two real variables, defined on a region \cal R in \mathbb{R}^2 containing (a,b). If the partial x- and y-derivatives of u exist on \cal R and are continuous at (a,b), then there is an ‘error function’ e such that
u(x,y)=u(a,b)+(x-a)\frac{\partial u}{\partial x}(a,b)+(y-b)\frac{\partial u}{\partial y} (a,b)+e(x,y),
where \dfrac{e(x,y)}{\sqrt{(x-a)^2+(y-b)^2}}\rightarrow 0 as (x,y)\rightarrow (a,b).
Since \sqrt{(x-a)^2+(y-b)^2} is the distance from (a,b) to (x,y), the theorem asserts that the error function tends to zero ‘faster’ than this distance. Theorem 7 is the real-valued function analogue of Theorem 2.
Proof
We have to show that the function e defined by
e(x,y)=u(x,y)-u(a,b)-(x-a) \frac{\partial u}{\partial x}(a,b)-(y-b) \frac{\partial u}{\partial y}(a,b)
satisfies
\dfrac{e(x,y)}{\sqrt{(x-a)^2+(y-b)^2}}\rightarrow 0 \ \text{as}\ (x,y)\rightarrow (a,b).
Since the partial derivatives exist on \cal R, they must be defined on some disc centred at (a,b). Let us begin by finding an expression for u(x,y)-u(a,b) on this disc. If we apply the Mean Value Theorem to the real functions x\longmapsto u(x,y) (where y is kept constant) and y\longmapsto u(a,y), then we obtain
u(x,y)=u(a,y)+(x-a)\frac{\partial u}{\partial x}(r,y),
where r is between a and x, and
u(a,y)=u(a,b)+(y-b)\frac{\partial u}{\partial y}(a,s),
where s is between b and y (see Figure 15). Hence
\begin{align*} u(x,y)-u(a,b)&=(u(x,y)-u(a,y))+(u(a,y)-u(a,b))\\ &=(x-a)\frac{\partial u}{\partial x}(r,y)+(y-b)\frac{\partial u}{\partial y}(a,s). \end{align*}
Substituting this expression for u(x,y)-u(a,b) into the definition of e, we obtain
e(x,y) =(x-a)\left (\frac{\partial u}{\partial x}(r,y)-\frac{\partial u}{\partial x}(a, b)\right ) +(y-b)\left (\frac{\partial u}{\partial y}(a,s)-\frac{\partial u}{\partial y}(a,b)\right ).
Dividing both sides by \sqrt{(x-a)^2+(y-b)^2}, and noting that
\frac{|x-a|}{\sqrt{(x-a)^2+(y-b)^2}}\leq 1\quad \text{and}\quad \frac{|y-b|}{\sqrt{(x-a)^2+(y-b)^2}}\leq 1
(because neither (x-a)^2 nor (y-b)^2 exceeds (x-a)^2+(y-b)^2), we see that
\left |\frac{e(x,y)}{\sqrt{(x-a)^2+(y-b)^2}}\right | \leq \left |\frac{\partial u}{\partial x}(r,y)-\frac{\partial u}{\partial x}(a,b)\right | +\left |\frac{\partial u}{\partial y}(a,s)- \frac{\partial u}{\partial y}(a,b)\right |.
Figure 15 illustrates that as (x,y) tends to (a,b), so do (a,s) and (r,y). So, by the continuity of the partial x- and y-derivatives at (a,b), the two terms on the right of the inequality above must both tend to 0 as (x,y) tends to (a,b). It follows that e(x,y)/\sqrt{(x-a)^2+(y-b)^2} tends to 0 as (x,y) tends to (a,b).
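The conclusion of Theorem 7 can be watched numerically. The function u(x,y)=x^2 y and the point (a,b)=(1,2) below are assumed for illustration; the ratio e(x,y)/\sqrt{(x-a)^2+(y-b)^2} visibly shrinks as (x,y) approaches (a,b).

```python
import math

# Illustrative example for Theorem 7: u(x, y) = x^2 * y at (a, b) = (1, 2),
# where u(1, 2) = 2, du/dx(1, 2) = 4 and du/dy(1, 2) = 1.
def u(x, y):
    return x ** 2 * y

a, b = 1.0, 2.0
ux, uy = 4.0, 1.0  # the partial derivatives evaluated at (a, b)

def error_ratio(x, y):
    tangent = u(a, b) + (x - a) * ux + (y - b) * uy  # tangent-plane value t(x, y)
    e = u(x, y) - tangent                            # the error function e(x, y)
    return abs(e) / math.hypot(x - a, y - b)         # e divided by the distance

# The ratio tends to 0 as (x, y) -> (a, b), as the theorem asserts.
ratios = [error_ratio(a + d, b + d) for d in (0.1, 0.01, 0.001)]
assert ratios[0] > ratios[1] > ratios[2]
assert ratios[2] < 1e-2
```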
We are now in a position to prove the Cauchy–Riemann Converse Theorem.
Theorem 5 Cauchy–Riemann Converse Theorem (revisited)
Let f(x+iy)=u(x,y)+iv(x,y) be defined on a region \mathcal{R} containing a+ib. If the partial derivatives \dfrac{\partial u}{\partial x}, \dfrac{\partial u}{\partial y}, \dfrac{\partial v}{\partial x}, \dfrac{\partial v}{\partial y}
exist at (x,y) for each x+iy\in \mathcal{R}
are continuous at (a,b)
satisfy the Cauchy–Riemann equations at (a,b),
then f is differentiable at a+ib and
f^{\prime }(a+ib)=\frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x} (a,b).
Proof
We need to show that the limit of the difference quotient for f at \alpha =a+ib exists and has the value indicated in the theorem. In order to calculate the difference quotient for f at \alpha , we find an expression for f(z)-f(\alpha ). Since u and v fulfil the conditions of Theorem 7, it follows that
\begin{align*} f(z)-f(\alpha )&=(u(x,y)-u(a,b))+i(v(x,y)-v(a,b))\\ &=\left ((x-a)\frac{\partial u}{\partial x}(a,b)+(y-b) \frac{\partial u}{\partial y}(a,b)+e_u(x,y)\right )\\ &\phantom{={}}+i\left ((x-a)\frac{\partial v}{\partial x}(a,b)+(y-b)\frac{\partial v}{\partial y} (a,b)+e_v(x,y)\right ), \end{align*}
where e_u and e_v are the error functions associated with u and v, respectively.
Collecting together terms, we see that
\begin{align*} f(z)-f(\alpha )&=(x-a)\left (\frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x}(a,b)\right )\\ &\phantom{={}}+i(y-b)\left (\frac{\partial v}{\partial y}(a,b)-i\frac{\partial u}{\partial y}(a,b)\right )+e_u(x,y)+ie_v(x,y). \end{align*}
Since u and v satisfy the Cauchy–Riemann equations, both expressions in the large brackets must be equal, so
\begin{align*} f(z)-f(\alpha )&=((x-a)+i(y-b))\left (\frac{\partial u}{\partial x}(a,b)+i \frac{\partial v}{\partial x}(a,b)\right )\\ &\phantom{={}}+e_u(x,y)+ie_v(x,y). \end{align*}
Dividing by z-\alpha =(x-a)+i(y-b) gives
\frac{f(z)-f(\alpha )}{z-\alpha }=\left (\frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x}(a, b)\right ) +\left (\frac{e_u(x,y)+ie_v(x, y)}{(x-a)+i(y-b)}\right ).
The limit f^{\prime }(\alpha ) of this difference quotient exists, and has the required value
\frac{\partial u}{\partial x}(a,b)+i\frac{\partial v}{\partial x}(a,b),
provided that we can show that the expression involving the error functions e_u and e_v tends to 0 as z=x+iy tends to \alpha . To this end, notice that |(x-a)+i(y-b)| is equal to \sqrt{(x-a)^2+(y-b)^2} and so, by the Triangle Inequality,
\left |\frac{e_u(x,y)+ie_v(x,y)}{(x-a)+i(y-b)}\right | \leq \left |\frac{e_u(x,y)}{\sqrt{(x-a)^2+(y-b)^2}}\right | +\left |\frac{e_v(x,y)}{\sqrt{(x-a)^2+(y-b)^2}}\right |.
By Theorem 7, both expressions on the right tend to 0 as x+iy tends to \alpha . Consequently, the expression on the left must also tend to 0, and the theorem follows.
2.3 Further exercises
Here are some further exercises to end this section.
Exercise 19
Calculate the partial derivatives \dfrac{\partial u}{\partial x} (x, y) and \dfrac{\partial u}{\partial y} (x, y) of each of the following functions.
u(x, y) = 3x + xy + 2x^2y^2
u(x, y) = x \cos y + \exp (xy)
u(x, y) = (x + y)^3
Differentiating u(x, y) = 3x + xy + 2x^2y^2 with respect to x while keeping y fixed, we obtain \frac{\partial u}{\partial x} (x, y) = 3 + y + 4xy^2.Differentiating with respect to y while keeping x fixed, we obtain \frac{\partial u}{\partial y} (x, y) = x + 4x^2y.
Here u(x, y) = x \cos y + \exp (xy), so \begin{align*} \frac{\partial u}{\partial x} (x, y) &= \cos y + y \exp (xy)\quad \text{and}\\ \frac{\partial u}{\partial y} (x, y) &= -x \sin y + x \exp (xy). \end{align*}
Here u(x, y) = (x + y)^3, so \begin{align*} \frac{\partial u}{\partial x} (x, y) &= 3(x + y)^2 \quad \text{and}\\ \frac{\partial u}{\partial y} (x, y) &= 3(x + y)^2. \end{align*}
Exercise 20
Calculate the partial derivatives \dfrac{\partial u}{\partial x} (x, y) and \dfrac{\partial u}{\partial y} (x, y) of each of the following functions, and evaluate these partial derivatives at (1,0).
u(x,y)=x^3y-y\cos y
u(x,y)=ye^x-xy^3
Here u(x, y) = x^3 y - y \cos y, so \begin{align*} \frac{\partial u}{\partial x} (x, y) &= 3x^2y \quad \text{and}\\ \dfrac{\partial u}{\partial y} (x, y) &= x^3 - \cos y + y\sin y. \end{align*}So, at (x, y) = (1, 0) the partial derivatives have the values \frac{\partial u}{\partial x} (1, 0) = 0 \quad \text{and} \quad \frac{\partial u}{\partial y} (1, 0) = 0.
Here u(x, y) = ye^x - xy^3, so \begin{align*} \frac{\partial u}{\partial x} (x, y) &= ye^x - y^3\quad \text{and}\\ \frac{\partial u}{\partial y} (x, y) &= e^x - 3xy^2. \end{align*}So, at (x, y) = (1, 0) the partial derivatives have the values \frac{\partial u}{\partial x} (1, 0) = 0 \quad \text{and} \quad \frac{\partial u}{\partial y} (1, 0) = e.
Exercise 21
Find the gradient of the graph of u(x, y) = x^2 + 2xy at the point (1,2,5) in the x-direction and in the y-direction.
Since u(x, y) = x^2 + 2xy, it follows that
\frac{\partial u}{\partial x} (x, y) = 2x + 2y \quad \text{and} \quad \frac{\partial u}{\partial y} (x, y) = 2x.
The gradient of the graph at (x,y)=(1,2) in the x-direction is
\frac{\partial u}{\partial x} (1,2) = 2 \times 1 + 2 \times 2 = 6.
The gradient of the graph at (x,y)=(1,2) in the y-direction is
\frac{\partial u}{\partial y} (1,2) = 2 \times 1 = 2.
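These two gradients can be confirmed by finite differences; the step size below is an illustrative choice.

```python
# Finite-difference check of the gradients of u(x, y) = x^2 + 2xy at (1, 2).
def u(x, y):
    return x ** 2 + 2 * x * y

h = 1e-6
grad_x = (u(1 + h, 2) - u(1, 2)) / h   # approximates du/dx(1, 2) = 6
grad_y = (u(1, 2 + h) - u(1, 2)) / h   # approximates du/dy(1, 2) = 2
assert abs(grad_x - 6) < 1e-4
assert abs(grad_y - 2) < 1e-4
```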
Exercise 22
Use the Cauchy–Riemann equations to show that there is no point of \mathbb{C} at which the function
f(x + iy) = e^x (\sin y + i \cos y)
is differentiable.
Writing f in the form
f(x + iy) = u(x, y) + iv(x, y),
we obtain
u(x, y) = e^x \sin y \quad \text{and} \quad v(x, y) = e^x \cos y.
Hence
\begin{align*} \dfrac{\partial u}{\partial x} (x, y) = e^x \sin y,\quad &\dfrac{\partial v}{\partial x} (x, y) = e^x \cos y,\\[3pt] \dfrac{\partial u}{\partial y} (x, y) = e^x \cos y,\quad &\dfrac{\partial v}{\partial y} (x, y) = -e^x \sin y. \end{align*}
If f is differentiable at x+iy, then the Cauchy–Riemann equations require that
\begin{align*} e^x \sin y &= -e^x \sin y \quad \text{and}\\ e^x \cos y &= -e^x \cos y; \end{align*}
that is,
e^x \sin y = 0 \quad \text{and} \quad e^x \cos y = 0.
But e^x is never zero, so \sin y = \cos y = 0, which is impossible. It follows that there is no point of \mathbb{C} at which f is differentiable.
Exercise 23
Use the Cauchy–Riemann equations to show that the function
f(x + iy) = (x^2 + x - y^2) + i(2xy + y)
is entire, and find its derivative.
In this case,
\begin{align*} u(x, y) &= x^2 + x - y^2\quad \text{and}\\ v(x, y) &= 2xy + y, \end{align*}
so
\begin{alignat*}{3} &\dfrac{\partial u}{\partial x} (x, y) = 2x + 1, \quad &\dfrac{\partial v}{\partial x} (x, y)& = 2y,\\[3pt] &\dfrac{\partial u}{\partial y} (x, y) = -2y,\quad &\dfrac{\partial v}{\partial y} (x, y)&= 2x + 1. \end{alignat*}
These partial derivatives are defined and continuous on the whole of \mathbb{C}. Furthermore,
\begin{align*} \frac{\partial u}{\partial x} (x, y) &= \frac{\partial v}{\partial y} (x, y)\quad \text{and}\\ \frac{\partial v}{\partial x} (x, y) &= -\frac{\partial u}{\partial y} (x, y), \end{align*}
so the Cauchy–Riemann equations are satisfied at every point of \mathbb{C}.
By the Cauchy–Riemann Converse Theorem, f is entire, and
\begin{align*} f^{\prime }(x + iy) &= \frac{\partial u}{\partial x} (x, y) + i\frac{\partial v}{\partial x} (x, y)\\ &= (2x + 1) + 2yi. \end{align*}
(So f'(z)=2z+1, and in fact f(z)=z^2+z.)
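Both identities can be spot-checked numerically; the sample point and tolerances below are illustrative.

```python
# Check that f(x + iy) = (x^2 + x - y^2) + i(2xy + y) agrees with z^2 + z,
# and that its difference quotient approaches 2z + 1, at a sample point.
def f(z):
    x, y = z.real, z.imag
    return (x ** 2 + x - y ** 2) + 1j * (2 * x * y + y)

z0 = 1.5 - 0.5j
assert abs(f(z0) - (z0 ** 2 + z0)) < 1e-12

h = 1e-6 * (1 + 1j)
quotient = (f(z0 + h) - f(z0)) / h
assert abs(quotient - (2 * z0 + 1)) < 1e-5
```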
Exercise 24
Use the Cauchy–Riemann equations to find all the points at which the following functions are differentiable, and calculate their derivatives.
f(x + iy) = (x^2 + y^2) + i (x^2 - y^2)
f(x + iy) = xy
Here u(x, y) = x^2 + y^2 \quad \text{and}\quad v(x, y) = x^2 - y^2,so \begin{align*} \dfrac{\partial u}{\partial x} (x, y) = 2x, \quad &\dfrac{\partial v}{\partial x} (x, y) = 2x,\\[3pt] \dfrac{\partial u}{\partial y} (x, y) = 2y, \quad &\dfrac{\partial v}{\partial y} (x, y)= -2y. \end{align*}The Cauchy–Riemann equations are satisfied only if x = -y. So f cannot be differentiable at x+iy unless x = -y. Since the partial derivatives above exist, and are continuous on \mathbb{C} (and in particular when x = -y), it follows from the Cauchy–Riemann Converse Theorem that f is differentiable on the set \{x+iy: x = -y\}. On this set, \begin{align*} f^{\prime }(x+iy) &= \frac{\partial u}{\partial x} (x, y) + i\frac{\partial v}{\partial x} (x, y)\\ &= 2x + 2xi = 2x(1 + i). \end{align*}
Here u(x, y) = xy\quad \text{and}\quad v(x, y) = 0,so \begin{align*} \dfrac{\partial u}{\partial x} (x, y) = y, \quad &\dfrac{\partial v}{\partial x} (x, y) = 0,\\[3pt] \dfrac{\partial u}{\partial y} (x, y) = x, \quad &\dfrac{\partial v}{\partial y} (x, y) = 0. \end{align*}The Cauchy–Riemann equations are not satisfied unless y =0 and x = 0. So f is not differentiable except possibly at 0. Since the partial derivatives above exist, and are continuous at (0, 0), it follows from the Cauchy–Riemann Converse Theorem that f is differentiable at 0. Furthermore, f^{\prime }(0) = \frac{\partial u}{\partial x} (0, 0) + i \frac{\partial v}{\partial x} (0, 0) = 0.
2.4 Laplace’s equation and electrostatics
The Cauchy–Riemann equations for a differentiable function f(x+iy)=u(x,y)+iv(x,y) tell us that
\frac{\partial u}{\partial x}(x,y)=\frac{\partial v}{\partial y}(x,y)\quad \text{and}\quad \frac{\partial v}{\partial x}(x,y)=-\frac{\partial u}{\partial y}(x,y).
These partial derivatives are themselves functions of x and y, so, provided that they are suitably well behaved, we can partially differentiate both sides of the first of the two equations with respect to x, and partially differentiate both sides of the second equation with respect to y, to obtain
\frac{\partial ^2 u}{\partial x^2}=\frac{\partial ^2 v}{\partial x \partial y}\quad \text{and}\quad \frac{\partial ^2 v}{\partial y\partial x}=-\frac{\partial ^2 u}{\partial y^2}.
(Here we have omitted the variables (x,y) after each derivative, for simplicity.) For sufficiently wellbehaved functions, the two partial derivatives
\frac{\partial ^2 v}{\partial x \partial y}\quad \text{and}\quad \frac{\partial ^2 v}{\partial y \partial x}
are equal; the order in which you partially differentiate with respect to x and y does not matter. Hence
\frac{\partial ^2 u}{\partial x^2}=\frac{\partial ^2 v}{\partial x \partial y}=\frac{\partial ^2 v}{\partial y\partial x}=-\frac{\partial ^2 u}{\partial y^2},
which implies that
\frac{\partial ^2 u}{\partial x^2}+\frac{\partial ^2 u}{\partial y^2}=0.
This equation for u is called Laplace’s equation. (The imaginary part v of f satisfies Laplace’s equation too.) It is named after the distinguished French mathematician and scientist Pierre-Simon Laplace (1749–1827), who studied the equation in his work on gravitational potentials.
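The derivation can be checked numerically with a five-point finite-difference Laplacian. The function z^3, whose real and imaginary parts are x^3-3xy^2 and 3x^2y-y^3, is an assumed example; the sample point and step size are illustrative.

```python
# Real and imaginary parts of the entire function z^3.
def u(x, y):
    return x ** 3 - 3 * x * y ** 2

def v(x, y):
    return 3 * x ** 2 * y - y ** 3

# Five-point approximation to u_xx + u_yy; exact (up to rounding) for cubics.
def laplacian(g, x, y, h=1e-3):
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4 * g(x, y)) / h ** 2

# Both parts satisfy Laplace's equation at an arbitrary sample point.
assert abs(laplacian(u, 1.3, -0.7)) < 1e-4
assert abs(laplacian(v, 1.3, -0.7)) < 1e-4
```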
Laplace’s equation has proved to have huge importance to physics, with particular significance in fluid mechanics. It also has a key role in the subject of electrostatics. In that theory, it is known that the electrostatic potential V(x,y) at a point (x,y) of a region without charge satisfies Laplace’s equation. It can be shown that V is the real part of some differentiable function f. Using these observations allows one to move between complex analysis and electrostatics: many of the theorems of complex analysis have important physical interpretations in electrostatics.
3 Summary of Session 1
In this session you have seen how differentiation is defined for complex functions, how to check whether a complex function is differentiable, and how to differentiate complex polynomial and rational functions. You have also met partial derivatives of real functions of two variables, and studied the Cauchy–Riemann equations, which link the partial derivatives of the real and imaginary parts of a differentiable complex function.
You can now move on to Session 2: Integration.
Session 2: Integration
Introduction to integration
This session introduces complex integration, an important concept which gives complex analysis its special flavour. We spend most of this session setting up the complex integral, deriving its main properties, and illustrating various techniques for evaluating it.
To define the integral of a complex function, it is instructive to first consider real integrals, such as
\int ^b_a x^2\,dx = \tfrac 13(b^3-a^3),
where a<b, which represents the area of the shaded part of Figure 1 (for a>0). We can express this equation in words by saying that
the integral of the function f(x)=x^2 over the interval [a,b] is \tfrac{1}{3}(b^3-a^3).
Suppose now that we wish to integrate the complex function f(z)=z^2 between two points \alpha and \beta in the complex plane. To do this, we first need to specify exactly how to get from \alpha to \beta . We could, for example, choose the line segment \Gamma from \alpha to \beta , as shown in Figure 2. It turns out (as you will see later) that if we make this choice, then
the integral of the function f(z)=z^2 along the line segment from \alpha to \beta is \tfrac{1}{3}(\beta ^3-\alpha ^3).
We write this as
\int _{\Gamma } z^2\,dz = \tfrac 13(\beta ^3 - \alpha ^3).
But there are many other paths in the complex plane from \alpha to \beta , which raises the following question. Do we get the same answer if we integrate the function f(z)=z^2 along a different path from \alpha to \beta ?
In order to address this question, we first need to explain exactly what it means to ‘integrate a function along a path’. This is one of the objectives of Section 1, where we briefly review the Riemann integral from real analysis, and then use similar ideas to construct the integral of a complex function along a path in the complex plane. We will see that if f is a complex function that is continuous on a smooth path \Gamma : \gamma (t)\ (t\in [a,b]) in the complex plane, then the integral of f along \Gamma , denoted by \displaystyle \int _{\Gamma } f(z)\,dz, is given by the formula
\int _{\Gamma } f(z)\, dz = \displaystyle \int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt.
We can evaluate this integral by splitting f(\gamma (t))\,\gamma \,^\prime (t) into its real and imaginary parts u(t) and v(t), and evaluating the resulting pair of real integrals:
\int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt = \displaystyle \int ^b_a u(t)\, dt + i \displaystyle \int ^b_a v(t)\, dt.
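The formula can be tried numerically before the theory is developed. Here f(z)=z^2 is integrated along a line segment parametrised by \gamma(t)=\alpha+t(\beta-\alpha), t\in[0,1], using a simple midpoint rule; the endpoints \alpha, \beta and the number of steps are illustrative choices.

```python
# Numerical sketch: integrate f(z) = z^2 along the segment from alpha to beta,
# using gamma(t) = alpha + t*(beta - alpha), so gamma'(t) = beta - alpha.
alpha, beta = 1 + 1j, 2 - 1j
n = 20_000
dt = 1.0 / n
total = 0j
for k in range(n):
    t = (k + 0.5) * dt                      # midpoint of the k-th subinterval
    gamma = alpha + t * (beta - alpha)
    total += (gamma ** 2) * (beta - alpha) * dt

# Compare with the claimed value (beta^3 - alpha^3) / 3.
expected = (beta ** 3 - alpha ** 3) / 3
assert abs(total - expected) < 1e-6
```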
Section 2 begins with this definition of the integral of a complex function along a smooth path, and then extends the idea to allow integration along a contour – a finite sequence of smooth paths laid end to end.
In Section 3 we prove the Fundamental Theorem of Calculus, which shows that integration and differentiation are essentially inverse processes. From this result it follows that the integral of f(z)=z^2 along any contour from \alpha to \beta is \tfrac 13(\beta ^3 - \alpha ^3).
We will need to be careful about how we apply results such as the Fundamental Theorem of Calculus. For example, suppose that the endpoints \alpha and \beta of \Gamma coincide, as illustrated in Figure 3. Then
\int _{\Gamma } z^2\,dz = \tfrac 13(\beta ^3 -\alpha ^3)=0.
In this case, the integral of the function f(z)=z^2 along \Gamma is 0.
Consider now the function f(z)=1/z. We will see later in Example 4 that if we integrate f along the smooth paths \Gamma _1 and \Gamma _2 shown in Figure 4, where \Gamma _1 and \Gamma _2 are circles traversed once anticlockwise, then
\int _{\Gamma _1} \frac{1}{z}\,dz = 0,\quad \text{but}\quad \int _{\Gamma _2} \frac{1}{z}\,dz = 2\pi i.
The reason for this difference will become apparent in Section 3.
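A numerical experiment previews this result. The circles below are assumptions for illustration (Figure 4 is not reproduced here): \Gamma_2 is taken as the unit circle about 0, and \Gamma_1 as a circle of radius 1 about 3, which does not enclose the origin; both are traversed once anticlockwise.

```python
import math

# Midpoint-rule approximation to the integral of f along the circle
# gamma(t) = centre + radius * e^{it}, t in [0, 2*pi].
def contour_integral(f, centre, radius, n=20_000):
    total = 0j
    dt = 2 * math.pi / n
    for k in range(n):
        t = (k + 0.5) * dt
        gamma = centre + radius * complex(math.cos(t), math.sin(t))
        dgamma = radius * complex(-math.sin(t), math.cos(t))  # gamma'(t)
        total += f(gamma) * dgamma * dt
    return total

f = lambda z: 1 / z
# Circle not enclosing 0: the integral vanishes.
assert abs(contour_integral(f, 3 + 0j, 1.0)) < 1e-6
# Circle enclosing 0 once anticlockwise: the integral is 2*pi*i.
assert abs(contour_integral(f, 0j, 1.0) - 2j * math.pi) < 1e-6
```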
This OpenLearn course is an extract from the Open University course M337 Complex analysis.
1 Integrating real functions
After working through this section, you should be able to:
appreciate how the Riemann integral is defined
state the main properties of the Riemann integral
appreciate how complex integrals can be defined.
In this section we define the Riemann integral of a continuous real function (named after Bernhard Riemann, whom we met in Session 1 for the Cauchy–Riemann equations) and outline its main properties. We then discuss complex integrals.
1.1 Areas under curves
One of the uses of real integration is to determine the area under a curve. For example, the integral of a continuous function f that takes only positive values between the real numbers a and b, where a<b, is the area bounded by the graph of y=f(x), the xaxis, and the two vertical lines x=a and x=b, as illustrated by the shaded part of Figure 5.
We can estimate this area by first splitting the interval [a,b] into a finite number of subintervals, such as those shown in Figure 6.
We can then underestimate the area under the graph of y=f(x) between a and b by summing the areas of those rectangles that have the various subintervals as bases and for which the top edge of each rectangle touches the graph from below, as shown in Figure 7(a). Similarly, we can overestimate the area under y=f(x) between a and b by summing the areas of those rectangles that have the various subintervals as bases and for which the top edge of each rectangle touches the graph from above, as shown in Figure 7(b).
We now let the number of subintervals tend to infinity, in such a way that the lengths of the subintervals tend to zero. It can be shown that the underestimates and overestimates of the area tend to a common limit A, written as
A=\int ^b_a f(x)\,dx.
We call A the area under the graph of y=f(x) between a and b.
This underestimating and overestimating approach is often how Riemann integration is first introduced, and you may have seen it before. However, we encounter a problem if we try to generalise this particular approach to complex functions. Inequalities between complex numbers have no meaning, so it makes no sense to try to estimate complex numbers from ‘below’ or ‘above’. To get round this problem, we now outline a different approach to defining the integral of a real function – one that does generalise to complex functions.
Rather than underestimating and overestimating the area under the curve with rectangles, we choose a single point inside each subinterval and use this to construct a rectangle whose base is the subinterval, and whose height is the value of the function at the chosen point. The sum of the areas of these rectangles should then be an approximation to the area under the graph. As long as our function f is continuous on the interval [a,b], then this modified approach (which does generalise to complex integrals) agrees with the underestimating and overestimating approach.
In this section we use this modified approach to give a formal definition of the Riemann integral, and then we summarise the main properties of the Riemann integral. We omit all proofs, which can be found in texts on real analysis.
1.2 Integration on the real line
We wish to define the Riemann integral of a continuous real function f in such a way that if f is positive on some interval [a,b], then the integral of f from a to b is the area under the graph of y=f(x) between a and b. This is illustrated by the shaded part of Figure 8. To do this, we first split the interval [a,b] into a collection of subintervals called a partition.
Definitions
A partition P of the interval [a,b] is a finite collection of subintervals of [a,b],
P=\{ [x_0,x_1], [x_1,x_2], \ldots , [x_{n-1},x_n]\},
for which
a=x_0\leq x_1\leq x_2\leq \cdots \leq x_n=b.
The length of the subinterval [x_{k-1},x_k] is \delta x_k =x_k-x_{k-1}.
We use \|P\| to denote the maximum length of all the subintervals, so
\|P\| = \max \{ \delta x_1,\delta x_2,\dots ,\delta x_n \}.
Given a partition P=\{ [x_0,x_1], [x_1,x_2], \ldots , [x_{n-1},x_n]\} of [a,b], we can approximate the area under the graph of y=f(x) between a and b by constructing a sequence of rectangles, as shown in Figure 9.
Here the kth rectangle has base [x_{k-1},x_k] and height f(x_k) (so the top-right corner of the rectangle touches the curve). The area of the rectangle is f(x_k)(x_k - x_{k-1}) (see Figure 10). Note that we could equally have chosen the rectangle to be of height f(c_k) for any point c_k in [x_{k-1},x_k], and the theory would still work. This is because, for a continuous function f, the difference between one set of choices of values for c_k, k=1,2,\dots ,n, and another disappears when we take limits of partitions. We have chosen f(x_k) merely for convenience.
Summing the areas of all the rectangles gives an approximation to the area under the graph. This sum is called the Riemann sum for f, with respect to this particular partition. (You may have seen upper Riemann sum and lower Riemann sum defined slightly differently elsewhere.)
Definition
The Riemann sum for f with respect to the partition
P=\{ [x_0,x_1], [x_1,x_2], \ldots , [x_{n-1},x_n]\}
is the sum
R(f,P) =\sum _{k=1}^n f(x_k)\delta x_k =\sum _{k=1}^n f(x_k) ( x_k -x_{k-1}) .
We now calculate the Riemann sum for a particular choice of function and partition, and then ask you to do the same for a second function.
Example 1
Let f(x)=x^2, where x\in [0,1]. Show that for
P_n = \{ [0,1/n],[1/n,2/n],\ldots , [(n-1)/n,1] \},
we have
R(f,P_n)=\tfrac{1}{6}(1+1/n)(2+1/n),
and determine \displaystyle \lim _{n\to \infty }R(f,P_n).
Solution
Each of the n subintervals of P_n has length 1/n. Therefore
\begin{align*} R(f,P_n) &= \sum _{k=1}^n f\left (\frac{k}{n}\right )\times \frac{1}{n}\\ &= \sum _{k=1}^n \left (\frac{k}{n}\right )^{\!\!2}\times \frac{1}{n}\\ &= \frac{1}{n^3}\sum _{k=1}^n k^2. \end{align*}
Using the identity
\sum _{k=1}^n k^2=1^2+2^2+\cdots +n^2=\tfrac 1{6}n(n+1)(2n+1),
we obtain
R(f,P_n)=\frac{1}{n^3}\times \tfrac{1}{6}n(n+1)(2n+1)=\tfrac{1}{6}(1+1/n)(2+1/n),
as required.
Finally, since (1/n) is a basic null sequence, we see that
\lim _{n\to \infty }R(f,P_n) =\tfrac{1}{6}(1+0)(2+0)=\tfrac{1}{3}.
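The Riemann sums of Example 1 can also be computed directly, confirming both the closed form and the limit; the particular values of n are illustrative.

```python
# Riemann sum R(f, P_n) for the equally-spaced partition of [0, 1] into n
# subintervals, with each rectangle of height f(k/n).
def riemann_sum(f, n):
    return sum(f(k / n) * (1 / n) for k in range(1, n + 1))

f = lambda x: x ** 2
# Matches the closed form (1/6)(1 + 1/n)(2 + 1/n) ...
assert abs(riemann_sum(f, 10) - (1 + 1/10) * (2 + 1/10) / 6) < 1e-12
# ... and approaches the limit 1/3 as n grows.
assert abs(riemann_sum(f, 100_000) - 1/3) < 1e-4
```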
Now try the following exercise, making use of the identity
1^3+2^3+\cdots +n^3=\tfrac{1}{4}n^2(n+1)^2.
Exercise 1
Let f(x)=x^3, where x\in [0,1]. Show that for
P_n = \{ [0,1/n],[1/n,2/n],\ldots , [(n-1)/n,1] \},
we have
R(f,P_n)=\tfrac{1}{4}(1+1/n)^2,
and determine \displaystyle \lim _{n\to \infty }R(f,P_n).
Each of the n subintervals of P_n has length 1/n. Therefore
\begin{align*} R(f,P_n) &= \sum _{k=1}^n f\left (\frac{k}{n}\right )\times \frac{1}{n}\\ &= \sum _{k=1}^n \left (\frac{k}{n}\right )^{\!\!3}\times \frac{1}{n}\\ &= \frac{1}{n^4}\sum _{k=1}^n k^3\\ &= \frac{1}{n^4}\times \tfrac{1}{4}n^2(n+1)^2\\ &= \tfrac{1}{4}(1+1/n)^2, \end{align*}
as required.
Since (1/n) is a basic null sequence, we see that
\lim _{n\to \infty }R(f,P_n)= \tfrac{1}{4}(1+0)^2=\tfrac 14.
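The same direct computation confirms the closed form found in this exercise; the value of n is illustrative.

```python
# Riemann sum for f(x) = x^3 on [0, 1]: equals (1/4)(1 + 1/n)^2 exactly.
def riemann_sum(f, n):
    return sum(f(k / n) * (1 / n) for k in range(1, n + 1))

f = lambda x: x ** 3
n = 1000
assert abs(riemann_sum(f, n) - 0.25 * (1 + 1 / n) ** 2) < 1e-9
```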
The Riemann sums R(f,P_n) of Example 1 approximate the area under the graph of y=x^2 between 0 and 1. The approximation improves as n increases, and we expect the limiting value \tfrac{1}{3} to be the actual area under the graph. However, to be sure that this limit gives us a sensible value, we should check that R(f,P_n)\to \tfrac{1}{3} for any sequence (P_n) of partitions of [0,1] such that \|P_n\|\to 0. The following important theorem, for which we omit the proof, provides this check.
Theorem 1
Let f \colon [a,b]\longrightarrow \mathbb{R} be a continuous function. Then there is a real number A such that
\displaystyle \lim _{n\to \infty }R(f,P_n)=A,
for any sequence (P_n) of partitions of [a,b] such that \|P_n\|\to 0.
We can now define the Riemann integral of a continuous function.
Definition
Let f \colon [a,b]\longrightarrow \mathbb{R} be a continuous function, where a<b. The value A determined by Theorem 1 is called the Riemann integral of f over [a,b] , and it is denoted by
\int _a^b f(x)\, dx.
The theorem tells us that to calculate the Riemann integral of f over [a,b], we can make any choice of partitions (P_n) for which \|P_n\|\to 0 and calculate \displaystyle \lim _{n\to \infty }R(f,P_n). Thus the calculation of Example 1 really does demonstrate that
\int _0^1 x^2\,dx = \tfrac 13.
We define the Riemann integral \displaystyle \int _a^b f(x)\,dx when a\geq b, as follows.
Definitions
Let f be a continuous real function.
If a>b, and [b,a] is contained in the domain of f, then we define
\int _a^b f(x)\,dx =-\int _b^a f(x)\,dx.
Also, for values of a in the domain of f, we define
\int _a^a f(x)\,dx = 0.
As we have discussed, for a continuous real function f that takes only positive values on [a,b], where a<b, the Riemann integral
\int _a^b f(x)\,dx
measures the area under the graph of y=f(x) between a and b. If we no longer require f to be positive, then the integral still has a geometric meaning: it measures the signed area of the set between the curve y=f(x), the xaxis and the vertical lines x=a and x=b, where we count parts of the set above the xaxis as having positive area, and parts of the set below the xaxis as having negative area, as illustrated in Figure 11.
1.3 Properties of the Riemann integral
In practice we do not usually calculate integrals by looking at partitions, but instead use a powerful theorem known as the Fundamental Theorem of Calculus, which allows us to think of integration and differentiation as inverse processes.
To state the theorem, we need the notion of a primitive of a continuous real function f\colon [a,b]\longrightarrow \mathbb{R}; this is a real function F that is differentiable on [a,b] with derivative equal to f, that is, the function F satisfies F'(x)=f(x), for all x\in [a,b]. A primitive of a function is not unique, because if F is a primitive of f, then so is the function with rule F(x)+c, for any constant c.
Theorem 2 Fundamental Theorem of Calculus
Let f\colon [a,b]\longrightarrow \mathbb{R} be a continuous function. If F is a primitive of f, then the Riemann integral of f over [a,b] exists and is given by
\int _a^b f(x)\,dx = F(b)-F(a).
We denote F(b)-F(a) by \left [F(x)\right ]_a^b.
For example, a primitive of f(x)=x^2 is F(x)=x^3/3, so
\int _0^1 x^2\,dx =\left [\frac{x^3}{3}\right ]^1_0=\frac{1^3}{3}-\frac{0^3}{3}= \frac{1}{3},
which agrees with our earlier calculation using Riemann sums.
The Riemann integral has a number of useful properties.
Theorem 3 Properties of the Riemann integral
Let f and g be real functions that are continuous on the interval [a,b].
Sum Rule \displaystyle \int _a^b (f(x)+g(x))\,dx =\displaystyle \int _a^b f(x)\,dx +\displaystyle \int _a^b g(x)\,dx.
Multiple Rule \displaystyle \int _a^b \lambda f(x)\,dx =\lambda \displaystyle \int _a^b f(x)\,dx, for \lambda \in \mathbb{R}.
Additivity Rule \displaystyle \int _a^b f(x)\,dx =\displaystyle \int _a^c f(x)\,dx +\displaystyle \int _c^b f(x)\,dx, \quad \text{for } a\leq c\leq b.
Substitution Rule If g is differentiable on [a,b] and its derivative g' is continuous on [a,b], and if f is continuous on \{g(x): a\leq x\leq b\}, then \displaystyle \int _a^b f(g(x)) g'(x)\,dx = \displaystyle \int _{g(a)}^{g(b)} f(t)\,dt.
Integration by Parts If f and g are differentiable on [a,b] and their derivatives f' and g' are continuous on [a,b], then \displaystyle \int _a^b f'(x)g(x)\,dx = \left [f(x)g(x)\right ]_a^b - \displaystyle \int _a^b f(x)g'(x)\,dx.
Monotonicity Inequality If f(x)\leq g(x) for each x\in [a,b], then \displaystyle \int _a^b f(x)\,dx\leq \displaystyle \int _a^b g(x)\,dx.
Modulus Inequality \left |\displaystyle \int _a^b f(x)\,dx\right |\leq \displaystyle \int _a^b |f(x)|\,dx.
The first five properties are probably familiar to you and we have stated them only for reference. The last two inequalities may be less familiar. The Monotonicity Inequality, illustrated in Figure 12, states that if you replace f by a greater function g, then the integral increases.
The Modulus Inequality, illustrated in Figure 13, says that the modulus of the integral of f over [a,b] (a non-negative number) is less than or equal to the integral of the modulus of f over [a,b] (another non-negative number). If f is positive, then these two numbers are equal, but if f takes negative values, then at least part of the signed area between y=f(x) and the x-axis is negative, so the first number is less than the second.
Exercise 2
Use the Monotonicity Inequality and the fact that
e^{-x}\leq e^{-x^2}\leq \frac{1}{1+x^2},\quad \text{for }0\leq x\leq 1,
to estimate \displaystyle \int ^1_0 e^{-x^2}dx from above and below.
Since
e^{-x}\leq e^{-x^2}\leq \frac{1}{1+x^2},\quad \text{for }0\leq x \leq 1,
it follows from the Monotonicity Inequality that
\int ^1_0 e^{-x}\,dx \leq \int ^1_0 e^{-x^2} dx \leq \int ^1_0 \frac{1}{1+x^2}\,dx.
Hence
\left [-e^{-x}\right ]^1_0 \leq \int ^1_0 e^{-x^2}dx \leq \left [\tan ^{-1}x\right ]^1_0;
that is,
1-e^{-1}\leq \int ^1_0 e^{-x^2}dx \leq \frac{\pi }{4}.
Since 0.63<1-e^{-1} and \pi /4< 0.79, we see that
0.63 < \int ^1_0 e^{-x^2}dx < 0.79.
(In fact, \displaystyle \int ^1_0 e^{-x^2}dx = 0.75 to two decimal places.)
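The bounds in this solution can also be checked numerically. The following Python sketch (not part of the course) approximates the middle integral with a midpoint rule; the helper name `midpoint` is our own.

```python
import math

def midpoint(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lower = 1 - math.exp(-1)     # exact value of the integral of e^(-x)
upper = math.pi / 4          # exact value of the integral of 1/(1+x^2)
middle = midpoint(lambda x: math.exp(-x * x), 0.0, 1.0)
# lower ≈ 0.632, middle ≈ 0.747, upper ≈ 0.785
```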
1.4 Introducing complex integration
We come now to the central theme of this course – integrating complex functions. Informed by the discussion in the introduction, we should expect that the integral of a continuous complex function f from one point \alpha to another point \beta in the complex plane may depend on the path that we choose to take from \alpha to \beta . So it is necessary to first choose a smooth path \Gamma : \gamma (t) (t \in [a,b]) such that \gamma (a) = \alpha and \gamma (b) = \beta (see Figure 14), and then we will define the integral of f along this smooth path, denoting the resulting quantity by
\int _{\Gamma } f(z)\,dz.
There are two ways to achieve this goal.
One method is to imitate the approach of Section 1.2, as follows.
Choose a partition of the path \Gamma into subpaths P=\{\Gamma _1, \Gamma _2, \ldots ,\Gamma _n \}, determined by points \alpha =z_0, z_1, …, z_n=\beta , such as those illustrated in Figure 15.
Define a complex Riemann sum R(f,P)=\operatorname*{\displaystyle\sum}\limits ^n_{k=1} f(z_k)\delta z_k, where \delta z_k = z_k - z_{k-1}, for k=1,2,\dots ,n, and define \|P\| = \max \{|\delta z_1|,|\delta z_2|,\dots ,|\delta z_n|\}.
Define the complex integral \displaystyle \int _{\Gamma } f(z)\,dz to be \displaystyle \lim _{n\rightarrow \infty } R(f, P_n), where (P_n) is any sequence of partitions of \Gamma for which \|P_n\|\to 0 as n\to \infty .
It can be shown (although it is quite hard to do so) that this limit exists when f is continuous, and that it is independent of the choice of partitions of \Gamma . Thus we have defined the integral of a continuous complex function. We can then develop the standard properties of integrals, such as the Additivity Rule and the Combination Rules, by imitating the discussion of the real Riemann integral.
The second, quicker method is to define a complex integral in terms of two real integrals. To do this, we use a parametrisation \gamma \colon [a,b]\longrightarrow \mathbb{C} of the smooth path \Gamma , where \gamma (a)=\alpha and \gamma (b)=\beta . Any set of parameter values
\{t_0,t_1,\dots ,t_n:a=t_0<t_1<\cdots <t_n=b\}
yields a partition
P=\{\Gamma _1,\Gamma _2,\dots ,\Gamma _n\}
of \Gamma , where \Gamma _k is the subpath of \Gamma that joins z_{k-1}=\gamma (t_{k-1}) to z_k=\gamma (t_k), for k=1,2,\dots ,n. We can then define the complex Riemann sum
R(f,P)=\displaystyle \sum ^n_{k=1} f(z_k) \delta z_k,
where \delta z_k = z_k - z_{k-1}, for k=1,2,\dots ,n; see Figure 16.
Notice that
\delta z_k = z_k - z_{k-1} = \gamma (t_k) - \gamma (t_{k-1}).
Hence, if t_k is close to t_{k1}, then, to a good approximation,
\gamma \,^\prime (t_k) \approx \frac{\gamma (t_k)-\gamma (t_{k-1})}{t_k-t_{k-1}}=\frac{\delta z_k}{\delta t_k},
where \delta t_k=t_k-t_{k-1}, so
\delta z_k \approx \gamma \,^\prime (t_k) \delta t_k.
Thus if \max \{\delta t_1,\delta t_2,\dots ,\delta t_n\} is small, then, to a good approximation,
R(f,P)=\sum ^n_{k=1} f(z_k)\delta z_k \approx \sum ^n_{k=1} f(\gamma (t_k))\,\gamma \,^\prime (t_k)\delta t_k.
The expression on the right has the form of a Riemann sum for the integral
\begin{equation} \label{b1eq1} \int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt. \end{equation}
Here the integrand
t\longmapsto f(\gamma (t))\,\gamma \,^\prime (t)\quad (t\in [a,b])
is a complexvalued function of a real variable. We have defined integrals of only real functions so far, but if we split f(\gamma (t))\,\gamma \,^\prime (t) into its real and imaginary parts u(t)+iv(t), then the integral (1) above can be written as
\int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt= \int ^b_a u(t)\,dt+i\int ^b_a v(t)\,dt,
which is a combination of two real integrals. We then define the integral of f along \Gamma by the formula
\begin{equation} \label{b1eq2} \int _\Gamma f(z)\,dz=\int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt. \end{equation}
It can be shown that both of these methods for defining the integral of a continuous complex function f along a smooth path \Gamma give the same value for
\int _\Gamma f(z)\,dz.
In the next section we will develop properties of complex integrals, and there we will use the formula (2) above for the definition of the integral of a complex function f along a path \Gamma .
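To make formula (2) concrete, here is a minimal numerical sketch in Python (not part of the course): it approximates \int _\Gamma f(z)\,dz by a midpoint Riemann sum for f(\gamma (t))\,\gamma \,^\prime (t). The function name `path_integral` and the choice of test path are ours.

```python
def path_integral(f, gamma, dgamma, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f along the smooth
    path parametrised by gamma on [a, b], with derivative dgamma."""
    h = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        total += f(gamma(t)) * dgamma(t) * h
    return total

# f(z) = z along the segment from 0 to 1+i: the integrand is
# (1+i)t * (1+i) = 2it, so the exact value is i.
val = path_integral(lambda z: z,
                    lambda t: (1 + 1j) * t,
                    lambda t: 1 + 1j, 0.0, 1.0)
```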
History of complex integration
The first significant steps in the development of real integration came in the seventeenth century with the work of a number of European mathematicians. Notable among this group was the French lawyer and mathematician Pierre de Fermat (1601–1665), who found areas under curves of the form y=ax^n, for n an integer (possibly negative), using partitions and arguments involving infinitesimals.
A major breakthrough was the discovery of calculus made independently by the English mathematician and scientist Isaac Newton (1642–1727) and the German philosopher and mathematician Gottfried Wilhelm Leibniz (1646–1716). They observed that differentiation and integration are inverse processes, a fact encapsulated in the Fundamental Theorem of Calculus.
Towards the end of the eighteenth century, mathematicians began to consider integrating complex functions.
Two pioneers in this endeavour were Leonhard Euler and Pierre-Simon Laplace. They were mainly concerned with manipulating complex integrals in order to evaluate difficult real integrals such as
\int _{-\infty }^\infty \frac{\sin x}{x}\,dx = \pi \quad \text{and}\quad \int _{-\infty }^\infty e^{-x^2}\,dx=\sqrt{\pi }.
However, it was through the work of AugustinLouis Cauchy that complex integration began to assume the form that is now used in complex analysis. Cauchy’s first paper on complex integrals in 1814 treated complex integrals as purely algebraic objects; it was only much later that he came to properly appreciate their geometric significance.
By the mid to late nineteenth century, mathematicians began to consider how to expand the theory of integration to deal with functions that are not continuous. The first rigorous theory of integration to do this was put forward by Riemann in 1854. The Riemann integral was followed by a number of other formal definitions of integration, some equivalent to Riemann’s, and some more general, such as Lebesgue integration, named after the French mathematician Henri Lebesgue (1875–1941).
2 Integrating complex functions
After working through this section, you should be able to:
define the integral of a continuous function along a smooth path, and evaluate such integrals
explain what is meant by a contour, define the (contour) integral of a continuous function along a contour, and evaluate such integrals
define the reverse contour of a given contour, and state and use the Reverse Contour Theorem.
2.1 Integration along a smooth path
Motivated by the discussion of the preceding section, we make the following definition of the integral of a complex function.
Definition
Let \Gamma : \gamma (t)\ (t\in [a,b]) be a smooth path in \mathbb{C}, and let f be a function that is continuous on \Gamma . Then the integral of {\boldsymbol{f}} along the path {\boldsymbol{\Gamma }}, denoted by \displaystyle \int _{\Gamma } f(z)\,dz, is
\int _{\Gamma } f(z)\,dz = \int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt.
The integral is evaluated by splitting f(\gamma (t))\,\gamma \,^\prime (t) into its real and imaginary parts u(t)=\operatorname{Re} (f(\gamma (t))\,\gamma \,^\prime (t)) and v(t)=\operatorname{Im} (f(\gamma (t))\,\gamma \,^\prime (t)), and evaluating the resulting pair of real integrals,
\int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt=\int ^b_a u(t)\,dt+i\int ^b_av(t)\,dt.
Remarks
Since f is continuous on \Gamma and \gamma is a smooth parametrisation, the functions t\longmapsto f(\gamma (t)) and t\longmapsto \gamma \,^\prime (t) are both continuous on [a,b], so the function t\longmapsto f(\gamma (t))\,\gamma \,^\prime (t) is continuous on [a,b]. It follows that the real functions u and v are continuous on [a,b], and hence \displaystyle \int ^b_a u(t)\,dt\quad \text{and}\quad \displaystyle \int ^b_a v(t)\,dt exist, so \displaystyle \int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt also exists.
An important special case is when \gamma (t)=t (t\in [a,b]), so \Gamma is the real line segment from a to b. Since \gamma \,^\prime (t)=1, we see that \displaystyle \int _\Gamma f(z)\,dz equals \int ^b_a f(t)\,dt=\int ^b_a u(t)\,dt+i\int ^b_a v(t)\,dt, where u=\operatorname{Re} f and v=\operatorname{Im} f. This equation is a formula for the integral of a complex function over a real interval.
An alternative notation for \displaystyle \int _{\Gamma } f(z)\,dz is \displaystyle \int _{\Gamma } f.
If the path of integration \Gamma has a standard parametrisation \gamma , then, unless otherwise stated, we use \gamma in the evaluation of the integral of f along \Gamma .
To help to remember the formula used to define \displaystyle \int _{\Gamma }f(z)\,dz, notice that it can be obtained by ‘substituting’ z=\gamma (t), \quad dz = \gamma \,^\prime (t)\,dt. We consider dz=\gamma \,^\prime (t)\,dt to be a shorthand for \dfrac{dz}{dt} = \gamma \,^\prime (t).
The following examples demonstrate how to evaluate integrals along paths. In each case, we follow the convention of Remark 3 and use the standard parametrisation of the path.
Example 2
Evaluate
\int _{\Gamma } z^2\,dz,
where \Gamma is the line segment from 0 to 1+i.
Solution
Here f(z) = z^2, and we use the standard parametrisation
\gamma (t) = (1+i)t\quad (t\in [0,1])
of \Gamma , which satisfies \gamma \,^\prime (t)=1+i.
Then f\left (\gamma (t)\right ) = \left ((1+i)t\right )^2, so
\begin{align*} \int _{\Gamma }z^2\,dz&=\int ^1_0 f(\gamma (t))\,\gamma \,^\prime (t)\,dt \\ &=\int ^1_0((1+i)t)^2 (1+i)\,dt \\ &=\int ^1_0 2it^2(1+i)\,dt \\ &=\int ^1_0(-2+2i)t^2\,dt \\ &=-2\int ^1_0 t^2\,dt+2i\int ^1_0 t^2\,dt \\ &=(-2+2i)\int ^1_0t^2\,dt\\ &=(-2+2i)\left [\tfrac 13 t^3\right ]^1_0\\ &=-\tfrac 23 + \tfrac 23 i. \end{align*}
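A quick numerical cross-check of this value (a sketch in Python, not part of the course):

```python
# Midpoint Riemann sum for f(gamma(t)) * gamma'(t) with f(z) = z^2 and
# gamma(t) = (1+i)t on [0, 1].
n = 100_000
h = 1.0 / n
total = 0j
for k in range(n):
    t = (k + 0.5) * h
    z = (1 + 1j) * t               # gamma(t)
    total += z * z * (1 + 1j) * h  # f(gamma(t)) * gamma'(t) * dt
# total is close to -2/3 + (2/3)i
```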
You need not include every line of working shown in Example 2 in your own solutions. Here is another example.
Example 3
Evaluate
\displaystyle \int _{\Gamma } \overline{z}\,dz,
where \Gamma is the line segment from 0 to 1+i.
Solution
Here f(z)=\overline{z}, and again we use the standard parametrisation
\gamma (t)=(1+i)t\quad (t\in [0,1])
of \Gamma , which satisfies \gamma \,^\prime (t)=1+i. Then
f(\gamma (t))=\overline{(1+i)t}=(1-i)t,
so
\begin{align*} \int _{\Gamma }\overline{z}\,dz&=\int ^1_0(1-i)t\times (1+i)\,dt \\ &=\int ^1_0 2t\,dt\\ &=\left [t^2\right ]^1_0\\ &= 1. \end{align*}
We set out our solution to the next example using the observation and notation of Remark 4.
Example 4
Evaluate
\displaystyle \int _{\Gamma }\frac{1}{z}\,dz,
where \Gamma is the unit circle \{z:|z|=1\}.
Solution
Here f(z)=1/z, and we use the standard parametrisation
\gamma (t)=e^{it}\quad (t\in [0,2\pi ])
of \Gamma . Then z=e^{it}, 1/z=e^{-it} and dz=ie^{it}\,dt. Hence
\begin{align*} \int _{\Gamma }\frac{1}{z}\,dz&=\int ^{2\pi }_0 e^{-it}\times ie^{it}\,dt \\ &=i\int ^{2\pi }_0 1\,dt\\ &=2\pi i. \end{align*}
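The value 2\pi i can likewise be checked numerically (a sketch, not part of the course):

```python
import math

# Midpoint Riemann sum for (1/gamma(t)) * gamma'(t) with
# gamma(t) = e^{it} on [0, 2*pi].
n = 100_000
h = 2 * math.pi / n
total = 0j
for k in range(n):
    t = (k + 0.5) * h
    z = complex(math.cos(t), math.sin(t))  # gamma(t) = e^{it}
    total += (1 / z) * (1j * z) * h        # gamma'(t) = i e^{it}
# total is close to 2*pi*i
```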
Sometimes when evaluating integrals we will use the alternative notation of Example 4 instead of the notation of Example 2 and Example 3; both notations are commonly used in complex analysis.
In the examples above, we used the standard parametrisation in each case. The following exercise suggests that the value of the integral is not affected by the choice of parametrisation.
Exercise 3
Verify that the result of Example 3 is unchanged if we use the smooth parametrisation \gamma (t) = 2(1+i)t\quad \left (t \in \left [0,\tfrac 12\right ]\right ).
Verify that the result of Example 4 is unchanged if we use the smooth parametrisation \gamma (t) = e^{3it}\quad (t\in [0,2\pi /3]).
Here \gamma (t) = 2(1+i)t\ \left (t\in \left [0,\tfrac 12\right ]\right ). Let f(z)=\overline{z}. Then f(\gamma (t)) = \overline{2(1+i)t} = 2(1-i)t, and, since \gamma \,^\prime (t)=2(1+i), we obtain \begin{align*} \int _{\Gamma } \overline{z}\,dz&=\int ^{1/2}_0 2(1-i)t\times 2(1+i)\,dt \\ &=\int ^{1/2}_0 8t\,dt \\ &=\left [4t^2\right ]^{1/2}_0 \\ &=1, \end{align*}in accordance with Example 3.
We set out this solution in a similar style to Example 4. Here \gamma (t)=e^{3it}\ (t\in [0,2\pi /3 ]). Then z=e^{3it},\quad 1/z=e^{-3it}\quad \text{and}\quad dz=3ie^{3it}\,dt. Hence \begin{align*} \int _{\Gamma } \frac{1}{z}\,dz&=\int ^{2\pi /3}_0 e^{-3it}\times 3ie^{3it}\,dt \\ &= i\int ^{2\pi /3}_0 3\,dt \\ &=i\bigl [3t\bigr ]^{2\pi /3}_0 \\ &= 2\pi i, \end{align*}in accordance with Example 4.
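The independence from the parametrisation in part (b) can also be seen numerically (a sketch, not part of the course): with \gamma (t)=e^{3it} on [0,2\pi /3], the integrand f(\gamma (t))\,\gamma \,^\prime (t) is the constant 3i.

```python
import math

# Midpoint Riemann sum for (1/gamma(t)) * gamma'(t) with
# gamma(t) = e^{3it} on [0, 2*pi/3].
n = 60_000
b = 2 * math.pi / 3
h = b / n
total = 0j
for k in range(n):
    t = (k + 0.5) * h
    z = complex(math.cos(3 * t), math.sin(3 * t))  # gamma(t) = e^{3it}
    total += (1 / z) * (3j * z) * h                # gamma'(t) = 3i e^{3it}
# total is close to 2*pi*i, as with the standard parametrisation
```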
The reason why we have obtained the same values in Exercise 3 as those in Example 3 and Example 4 is because of the following theorem.
Theorem 4
Let \gamma _1\colon [a_1,b_1]\longrightarrow \mathbb{C} and \gamma _2\colon [a_2,b_2]\longrightarrow \mathbb{C} be two smooth parametrisations of paths with the same image set \Gamma , and let f be a function that is continuous on \Gamma . Then
\displaystyle \int _{\Gamma } f(z)\,dz
does not depend on which parametrisation \gamma _1 or \gamma _2 is used.
The proof of Theorem 4 uses the Inverse Function Rule and the Chain Rule for the derivatives of complex functions, which are not covered in this course, so we omit the details.
In practical terms, this theorem allows you to choose any convenient smooth parametrisation when evaluating a complex integral along a given path. We will see how this can be helpful in the next subsection.
For further practice in integration, try the following exercise.
Exercise 4
Evaluate the following integrals.
\displaystyle \int _{\Gamma } \operatorname{Re} z\,dz, where \Gamma is the line segment from 0 to 1 + 2i.
\displaystyle \int _{\Gamma } \dfrac{1}{(z-\alpha )^2}\,dz, where \Gamma is the circle with centre \alpha and radius r.
The standard parametrisation of \Gamma is \gamma (t) = (1+2i)t\quad (t\in [0,1]). Then z=(1+2i)t,\quad \operatorname{Re} z = t,\quad dz=(1+2i)\,dt. Hence \begin{aligned} \int _{\Gamma } \operatorname{Re} z\,dz&=\int ^1_0 t\times (1+2i)\,dt \\ &=(1+2i)\int ^1_0 t\,dt\\ &=(1+2i)\left [\tfrac 12 t^2\right ]^1_0 \\ &=\tfrac 12 + i. \end{aligned}
The standard parametrisation of \Gamma is \gamma (t) = \alpha + re^{it}\quad \left (t\in [0, 2\pi ]\right ). Then \begin{aligned} &z=\alpha + re^{it},\quad 1/(z-\alpha )^2 = 1/(r^2e^{2it}),\\ &dz=rie^{it}\,dt. \end{aligned}Hence \begin{align*} \int _{\Gamma } \frac{1}{(z-\alpha )^2}\,dz&=\int ^{2\pi }_0 \frac{rie^{it}}{r^2e^{2it}}\,dt &\\ &=\int ^{2\pi }_0 \frac{i}{r} e^{-it}\,dt &\\ &=\int ^{2\pi }_0 \frac{i}{r} (\cos t - i\sin t)\,dt &\\ &=\int ^{2\pi }_0 \frac{1}{r} \sin t\,dt + i\int ^{2\pi }_0 \frac{1}{r} \cos t\,dt&\\ &=\left [-\frac{1}{r} \cos t\right ]^{2\pi }_0 + i \left [\frac{1}{r} \sin t\right ]^{2\pi }_0 &\\ &=0+0i=0.& \end{align*}
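The answer 0 in part (b) holds for every centre and radius; here is a numerical spot-check (a sketch, not part of the course) with the sample values \alpha =2+i and r=3, which are our choices.

```python
import cmath
import math

# Midpoint Riemann sum for f(gamma(t)) * gamma'(t) with
# f(z) = 1/(z - alpha)^2 and gamma(t) = alpha + r e^{it} on [0, 2*pi].
alpha, r = 2 + 1j, 3.0
n = 100_000
h = 2 * math.pi / n
total = 0j
for k in range(n):
    t = (k + 0.5) * h
    e = cmath.exp(1j * t)
    z = alpha + r * e              # gamma(t)
    dz = 1j * r * e                # gamma'(t)
    total += dz / (z - alpha) ** 2 * h
# total is close to 0
```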
2.2 Integration along a contour
Consider the path \Gamma from 0 to i in Figure 17, with parametrisation \gamma \colon [0,3]\longrightarrow \mathbb{C} given by
\gamma (t)= \begin{cases} 2t,&0\leq t\leq 1, \\ 2+i(t-1),&1\leq t\leq 2, \\ 2+i-2(t-2),&2\leq t\leq 3. \end{cases}
This path is not smooth, because \gamma is not differentiable at t=1 or t=2. However, \Gamma can be split into three smooth straightline paths, joined end to end. This leads to the idea of a contour: it is simply what we get when we place a finite number of smooth paths end to end.
Definitions
A contour \Gamma is a path that can be subdivided into a finite number of smooth paths \Gamma _1,\Gamma _2,\dots ,\Gamma _n joined end to end. The order of these constituent smooth paths is indicated by writing
\Gamma = \Gamma _1 + \Gamma _2 + \cdots + \Gamma _n.
The initial point of \Gamma is the initial point of \Gamma _1, and the final point of \Gamma is the final point of \Gamma _n.
The definition of a contour is illustrated in Figure 18.
As an example, the contour \Gamma in Figure 17 can be written as \Gamma _1 + \Gamma _2 + \Gamma _3, where \Gamma _1, \Gamma _2 and \Gamma _3 are smooth paths with smooth parametrisations
\begin{equation} \label{b1ijak} \begin{aligned} &\gamma _1 (t)=2t&(t\in [0,1]), \\ &\gamma _2 (t)=2+i(t-1)&(t\in [1,2]), \\ &\gamma _3 (t)=2+i-2(t-2)&(t\in [2,3]). \end{aligned} \end{equation}
Now, we have seen how to integrate a continuous function along a smooth path. It is natural to extend this definition to contours, by splitting the contour into smooth paths and integrating along each in turn. We formalise this idea in the following definition.
Definition
Let \Gamma = \Gamma _1 + \Gamma _2 + \cdots + \Gamma _n be a contour, and let f be a function that is continuous on \Gamma . Then the (contour) integral of {\boldsymbol{f}} along {\boldsymbol{\Gamma }}, denoted by \displaystyle \int _{\Gamma } f(z)\,dz, is
\int _{\Gamma } f(z)\,dz = \int _{\Gamma _1} f(z)\,dz +\int _{\Gamma _2} f(z)\,dz +\cdots + \int _{\Gamma _n} f(z)\,dz.
Remarks
It is clear that a contour can be split into smooth paths in many different ways. Fortunately, all such splittings lead to the same value for the contour integral. We omit the proof of this result, as it is straightforward but tedious.
When evaluating an integral along a contour \Gamma = \Gamma _1 + \Gamma _2 +\cdots + \Gamma _n, we often consider each smooth path \Gamma _1,\Gamma _2,\dots ,\Gamma _n separately, using a convenient parametrisation in each case. For example, consider the contour \Gamma = \Gamma _1 + \Gamma _2 + \Gamma _3 of Figure 17. To evaluate a contour integral of the form \int _{\Gamma }f(z)\,dz=\int _{\Gamma _1} f(z)\,dz+\int _{\Gamma _2}f(z)\,dz + \int _{\Gamma _3}f(z)\,dz, we can use the smooth parametrisations (above) of \Gamma _1, \Gamma _2 and \Gamma _3, or we could use another convenient choice of parametrisations, such as \begin{array}{@{}ll} \gamma _1(t)=t&(t\in [0,2]), \\ \gamma _2(t)=2+it&(t\in [0,1]), \\ \gamma _3(t)=2+i-t&(t\in [0,2]). \end{array}
The alternative notation \displaystyle \int _{\Gamma } f is sometimes used for contour integrals when the omission of the integration variable z will cause no confusion.
Example 5
Evaluate
\displaystyle \int _{\Gamma } z^2\,dz,
where \Gamma is the contour shown in Figure 19.
Solution
We split \Gamma into two smooth paths \Gamma = \Gamma _1 + \Gamma _2, where \Gamma _1 is the line segment from 0 to 1 with parametrisation \gamma _1 (t)=t\ (t\in [0,1]), and \Gamma _2 is the line segment from 1 to 1+i, with parametrisation \gamma _2 (t)=1+it\ (t\in [0,1]). Then
\begin{align*} \int _{\Gamma }z^2\,dz&=\int _{\Gamma _1} z^2\,dz + \int _{\Gamma _2} z^2\,dz \\ &=\int ^1_0t^2\,dt+\int ^1_0(1+it)^2 i\,dt \\ &=\int ^1_0t^2\,dt+\int ^1_0(-2t+i-it^2)\,dt \\ &=\int ^1_0(t^2-2t)\,dt+i\int ^1_0 (1-t^2)\,dt \\ &=\left [\tfrac 13t^3-t^2\right ]^1_0+i\left [t-\tfrac 13t^3\right ]^1_0 \\ &=\left (\tfrac 13 - 1\right ) +i\left (1-\tfrac 13\right )\\ &=-\tfrac 23+\tfrac 23 i. \end{align*}
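Numerically, the two legs can be handled separately and added, mirroring the definition of a contour integral (a sketch in Python, not part of the course; the helper name `leg_integral` is ours).

```python
def leg_integral(f, gamma, dgamma, n=50_000):
    """Midpoint-rule approximation along a smooth path parametrised on [0, 1]."""
    h = 1.0 / n
    return sum(f(gamma((k + 0.5) * h)) * dgamma((k + 0.5) * h) * h
               for k in range(n))

# Gamma_1: segment from 0 to 1; Gamma_2: segment from 1 to 1+i.
leg1 = leg_integral(lambda z: z * z, lambda t: complex(t), lambda t: 1.0)
leg2 = leg_integral(lambda z: z * z, lambda t: 1 + 1j * t, lambda t: 1j)
total = leg1 + leg2
# total is close to -2/3 + (2/3)i
```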
Notice that this answer is the same as that obtained in Example 2 for
\displaystyle \int _{\Gamma }z^2\,dz,
where \Gamma is the line segment from 0 to 1+i. The reason for this will become clear when we get to Theorem 8, the Contour Independence Theorem.
Exercise 5
Evaluate
\displaystyle \int _{\Gamma } \overline{z}\,dz
for each of the following contours \Gamma .
In part (b) the contour consists of a line segment and a semicircle, traversed once anticlockwise. Take 1 to be the initial (and final) point of this contour.
\Gamma =\Gamma _1+\Gamma _2+\Gamma _3, where \Gamma _1 is the line segment from 0 to 1, \Gamma _2 is the line segment from 1 to 1+i, and \Gamma _3 is the line segment from 1+i to i. We choose to use the associated standard parametrisations \begin{aligned} & \gamma _1 (t)=t\quad (t\in [0,1]), \\ & \gamma _2 (t)=1+it\quad (t\in [0,1]), \\ & \gamma _3 (t)=1-t+i\quad (t\in [0,1]). \end{aligned}Then \gamma \,^\prime _1(t)=1, \gamma \,^\prime _2(t)=i, \gamma \,^\prime _3(t)=-1. Hence \begin{align*} \int _{\Gamma } \overline{z}\,dz&=\int _{\Gamma _1} \overline{z}\,dz+ \int _{\Gamma _2}\overline{z}\,dz + \int _{\Gamma _3} \overline{z}\,dz \\ &=\int ^1_0 t\times 1\,dt + \int ^1_0 (1-it)\times i\,dt \\ &\quad + \int ^1_0 (1-t-i)\times (-1)\,dt \\ &=\int ^1_0 (3t+2i-1)\,dt\\ &=\left [\tfrac 32t^2+(2i-1)t\right ]^1_0\\ &= \tfrac 32+2i-1 =\tfrac 12 + 2i. \end{align*}
\Gamma = \Gamma _1 + \Gamma _2, where \Gamma _1 is the line segment from -1 to 1, and \Gamma _2 is the upper half of the circle with centre 0 from 1 to -1. We choose to use the parametrisations \begin{align*} & \gamma _1(t)=t\quad \left (t\in [-1, 1]\right ), \\ & \gamma _2(t)=e^{it}\quad \left (t\in [0,\pi ]\right ). \end{align*}Then \gamma \,^\prime _1(t)=1, \gamma \,^\prime _2(t)=ie^{it}. Hence \begin{align*} \int _{\Gamma }\overline{z}\,dz&=\int _{\Gamma _1} \overline{z}\,dz + \int _{\Gamma _2}\overline{z}\,dz \\ &=\int ^1_{-1} t\times 1\,dt + \int ^{\pi }_0 e^{-it} \times ie^{it}\,dt \\ &=\int ^1_{-1} t\,dt + i \int ^{\pi }_0 1\,dt \\ &=\left [\tfrac 12 t^2\right ]^1_{-1} + i\bigl [t\bigr ]^{\pi }_0 \\ &=0+i\pi =\pi i. \end{align*}
This section will conclude by stating some rules for combining contour integrals. To prove them, we split the contour \Gamma into constituent smooth paths, and use the Sum Rule and Multiple Rule for real integration given in Theorem 3 to prove the results for each path. We omit the details.
Theorem 5 Combination Rules for Contour Integrals
Let \Gamma be a contour, and let f and g be functions that are continuous on \Gamma .
Sum Rule \displaystyle \int _{\Gamma }(f(z)+g(z))\,dz = \int _{\Gamma } f(z)\,dz + \int _{\Gamma } g(z)\,dz.
Multiple Rule \displaystyle \int _{\Gamma }\lambda f(z)\,dz=\lambda \int _{\Gamma } f(z)\,dz,\quad \text{where }\lambda \in \mathbb{C}.
2.3 Reverse paths and contours
We now introduce the concept of the reverse path (some texts use the name opposite path) of a smooth path \Gamma . This is simply the path we obtain by traversing the original path in the opposite direction, starting from the final point of the original path and finishing at the initial point of the original path. In order to define the reverse path formally, we use the fact that as t increases from a to b, so a+b-t decreases from b to a.
Definition
Let \Gamma :\gamma (t)\ (t\in [a,b]) be a smooth path. Then the reverse path of \Gamma , denoted by \widetilde{\Gamma }\vphantom{\widetilde{\Gamma ^2}}, is the path with parametrisation \widetilde{\gamma }, where
\widetilde{\gamma }\,(t)=\gamma (a+b-t)\quad (t\in [a,b]).
Note that the initial point \widetilde{\gamma }(a) of \widetilde{\Gamma } is the final point \gamma (b) of \Gamma , and the final point \widetilde{\gamma }(b) of \widetilde{\Gamma } is the initial point \gamma (a) of \Gamma (see Figure 20). The path \widetilde{\Gamma } is smooth because \Gamma is smooth. Also note that, as sets, \Gamma and \widetilde{\Gamma } are the same.
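A small sketch (not part of the course) of the reverse parametrisation; the helper name `reverse` and the sample path are ours.

```python
def reverse(gamma, a, b):
    """Return the parametrisation t -> gamma(a + b - t) of the reverse path."""
    return lambda t: gamma(a + b - t)

# Sample smooth path: the segment from 0 to 1+i, parametrised on [0, 1].
gamma = lambda t: (1 + 1j) * t
rev = reverse(gamma, 0.0, 1.0)
# rev starts at the final point of gamma and ends at its initial point
```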
Exercise 6
Write down the reverse path of the path \Gamma with parametrisation
\gamma (t)=2+i-t\quad (t\in [0,2]).
Since a=0 and b=2, the reverse path is \widetilde{\Gamma }:\widetilde{\gamma }(t) (t\in [0,2]), where
\begin{align*} \widetilde{\gamma }(t)&=\gamma (2-t)\\ &=2+i-(2-t)\\ &=t+i\quad (t\in [0,2]). \end{align*}
We can also define a reverse contour. This is done in the natural way – namely by reversing each of the constituent smooth paths of a contour and reversing the order in which they are traversed.
Definition
Let \Gamma = \Gamma _1 + \Gamma _2 +\cdots + \Gamma _n be a contour. The reverse contour \widetilde{\Gamma } of \Gamma is
\widetilde{\Gamma } = \widetilde{\Gamma }_n + \widetilde{\Gamma }_{n1} + \cdots +\widetilde{\Gamma }_1.
A contour and its reverse contour are illustrated in Figure 21.
As an example, if \Gamma = \Gamma _1 + \Gamma _2 + \Gamma _3 is the contour from 0 to i in Figure 22(a), with smooth parametrisations
\begin{array}{@{}ll} \gamma _1(t)=t&(t\in [0,2]), \\ \gamma _2(t)=2+it&(t\in [0,1]), \\ \gamma _3(t)=2+i-t\quad &(t\in [0,2]), \end{array}
then \widetilde{\Gamma } = \widetilde{\Gamma }_3+\widetilde{\Gamma }_2+\widetilde{\Gamma }_1 is the contour from i to 0 in Figure 22(b), with smooth parametrisations
\begin{array}{@{}ll} \widetilde{\gamma }_3 (t)=t+i&(t\in [0,2]), \\ \widetilde{\gamma }_2 (t)=2+i(1-t)\quad& (t\in [0,1]), \\ \widetilde{\gamma }_1 (t)=2-t&(t\in [0,2]). \end{array}
Example 6
Evaluate
\int _{\,\widetilde{\Gamma }} \overline{z}\,dz,
where \widetilde{\Gamma } is the reverse path of the line segment \Gamma from 0 to 1+i.
Solution
We use the standard parametrisation
\gamma (t)=(1+i)t\quad (t\in [0,1])
of \Gamma . For the reverse path \widetilde{\Gamma }, the corresponding parametrisation is
\widetilde{\gamma }(t)=\gamma (1-t)=(1+i)(1-t)\quad (t\in [0,1]).
Then \widetilde{\gamma }\,^\prime (t)=-(1+i), so we substitute
z=(1+i)(1-t),\quad \overline{z}=(1-i)(1-t)\quad \text{and}\quad dz=-(1+i)\,dt
to give
\begin{align*} \int _{\,\widetilde{\Gamma }}\overline{z}\,dz&=\int ^{1}_{0}(1-i)(1-t)\times (-(1+i))\,dt \\ &=\int ^1_0 (-2)(1-t)\,dt \\ &=\left [t^2-2t\right ]^1_0=-1. \end{align*}
In Example 3 we saw that
\displaystyle \int _{\,\Gamma } \overline{z}\,dz =1,
which is the negative of the value -1 that we obtained in Example 6. This illustrates the general result that if we integrate a function along a reverse contour \widetilde{\Gamma }, then the answer is the negative of the integral of the function along \Gamma .
Theorem 6 Reverse Contour Theorem
Let \Gamma be a contour, and let f be a function that is continuous on \Gamma . Then the integral of f along the reverse contour \widetilde{\Gamma } of \Gamma satisfies
\int _{\,\widetilde{\Gamma }} f(z)\,dz=-\int _{\Gamma } f(z)\,dz.
Proof
The proof is in two parts. We first prove the result in the case when \Gamma is a smooth path, and then extend the proof to contours.
Let \Gamma : \gamma (t)\ (t\in [a,b]) be a smooth path. Then the parametrisation of \widetilde{\Gamma } is \widetilde{\gamma }(t) = \gamma (a+b-t)\quad (t\in [a,b]). It follows that \widetilde{\gamma }\,^\prime (t)=-\gamma \,^\prime (a+b-t), by the Chain Rule, so \begin{align*} \int _{\,\widetilde{\Gamma }} f(z)\,dz&=\int ^b_a f(\widetilde{\gamma }(t))\,\widetilde{\gamma }\,^\prime (t)\,dt\\ &=\int ^b_a f(\gamma (a+b-t))(-\gamma \,^\prime (a+b-t))\,dt\\ &=\int ^a_b f(\gamma (s))\,\gamma \,^\prime (s)\,ds\\ &=-\int _{\Gamma } f(z)\,dz, \end{align*}where, in the second-to-last line, we have made the real substitution s=a+b-t,\quad ds=-dt.
To extend the proof to a general contour \Gamma , we argue as follows. Let \Gamma = \Gamma _1 + \Gamma _2 + \cdots + \Gamma _n, for smooth paths \Gamma _1,\Gamma _2,\dots ,\Gamma _n. Then \widetilde{\Gamma }=\widetilde{\Gamma }_n + \widetilde{\Gamma }_{n-1} + \cdots + \widetilde{\Gamma }_1, and we can apply part (a) to see that \begin{align*} \int _{\,\widetilde{\Gamma }} f&=\int _{\,\widetilde{\Gamma }_n} f +\int _{\,\widetilde{\Gamma }_{n-1}} f+\cdots + \int _{\,\widetilde{\Gamma }_1} f \\ &=-\int _{\Gamma _n}f-\int _{\Gamma _{n-1}} f - \cdots -\int _{\Gamma _1} f\\ &=-\left (\int _{\Gamma _n}f + \int _{\Gamma _{n-1}} f +\cdots + \int _{\Gamma _1} f\right ) \\ &=-\int _{\Gamma } f. \end{align*}
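Theorem 6 can be illustrated numerically with f(z)=\overline{z} on the segment from 0 to 1+i, the case of Examples 3 and 6 (a sketch, not part of the course; the helper name `integral01` is ours).

```python
def integral01(f, gamma, dgamma, n=50_000):
    """Midpoint-rule approximation along a smooth path parametrised on [0, 1]."""
    h = 1.0 / n
    return sum(f(gamma((k + 0.5) * h)) * dgamma((k + 0.5) * h) * h
               for k in range(n))

conj = lambda z: z.conjugate()
fwd = integral01(conj, lambda t: (1 + 1j) * t, lambda t: 1 + 1j)
rev = integral01(conj, lambda t: (1 + 1j) * (1 - t), lambda t: -(1 + 1j))
# fwd is close to 1 and rev to -1: the integral along the reverse path
# is the negative of the integral along the original path
```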
In Example 4 we saw that
\int _\Gamma \frac{1}{z}\,dz = 2\pi i,
where \Gamma is the unit circle \{z:|z|=1\}. The next exercise asks you to check Theorem 6 for this contour integral.
Exercise 7
Verify that
\int _{\,\widetilde{\Gamma }} \frac{1}{z}\,dz = -2\pi i,
where \Gamma is the unit circle.
In Example 4 we used the parametrisation
\gamma (t) = e^{it}\quad \left (t\in [0,2\pi ]\right ).
For the reverse path \widetilde{\Gamma } we use the parametrisation
\widetilde{\gamma }(t) = \gamma (2\pi - t) = e^{i (2\pi - t)} \quad \left (t\in [0,2\pi ]\right ).
Since e^{2\pi i}=1, we have
\widetilde{\gamma }(t)=e^{-it}\quad \left (t\in [0,2\pi ]\right ),
and \widetilde{\gamma }\,'(t)=-ie^{-it}. Hence
\begin{align*} \int _{\,\widetilde{\Gamma }} \frac 1z\,dz&=\int ^{2\pi }_0 \frac{1}{e^{-it}} \times \left (-ie^{-it}\right )\,dt \\ &=-i\int ^{2\pi }_0 1\,dt \\ &=-2\pi i. \end{align*}
(Therefore, by Example 4,
\int _{\,\widetilde{\Gamma }} \frac 1z\,dz=-\int _{\Gamma } \frac{1}{z}\,dz.)
2.4 Further exercises
Here are some further exercises to end this section.
Exercise 8
Evaluate the following integrals (using the standard parametrisation of the path \Gamma in each case).
\displaystyle \int _{\Gamma }z\,dz, \displaystyle \int _{\Gamma }\operatorname{Im} z\,dz, \displaystyle \int _{\Gamma }\overline{z}\,dz, where \Gamma is the line segment from 1 to i.
\displaystyle \int _{\Gamma } \overline{z}\,dz, \displaystyle \int _{\Gamma } z^2\,dz, where \Gamma is the unit circle \{z:|z|=1\}.
\displaystyle \int _{\Gamma } \frac{1}{z}\,dz, \displaystyle \int _{\Gamma } |z|\,dz, where \Gamma is the upper half of the circle with centre 0 and radius 2 traversed from 2 to -2.
The standard parametrisation of \Gamma , the line segment from 1 to i, is \gamma (t) = 1 - t + it\quad (t\in [0,1]); hence \gamma \,^\prime (t)=i-1. Here f(z)=z, and \begin{align*} \int _{\Gamma } z\,dz&=\int ^1_0 (1-t+it)\times (i-1)\,dt \\ &=\int ^1_0 (-1+(1-2t)i)\,dt \\ &=\int ^1_0 (-1)\,dt + i\int ^1_0 (1-2t)\,dt \\ &=\bigl [-t\bigr ]^1_0 + i \left [t-t^2\right ]^1_0 \\ &=-1. \end{align*} Here f(z)=\operatorname{Im} z, and \begin{align*} \int _{\Gamma } \operatorname{Im} z\,dz&=\int ^1_0 (\operatorname{Im} (1-t+it))\times (i-1)\,dt \\ &=\int ^1_0 t(i-1)\,dt \\ &=(i-1)\int ^1_0 t\,dt \\ &=(i-1)\left [\tfrac 12 t^2\right ]^1_0\\ &=\tfrac 12 (-1+i). \end{align*}(Note that this integral is different from \operatorname{Im} \left (\displaystyle \int _{\Gamma } z \,dz\right ), which from part (a)(i) is 0.) Here f(z)=\overline{z}, and \begin{align*} \int _{\Gamma } \overline{z}\,dz &= \int ^1_0 \overline{(1-t+it)}\times (i-1)\,dt\\ &=\int ^1_0 (1-t-it)\times (i-1)\,dt \\ &=\int ^1_0 (-1+2t+i)\,dt \\ &=\int ^1_0 (-1+2t)\,dt + i\int ^1_0 1\,dt \\ &=\left [t^2-t\right ]^1_0 + i \bigl [t\bigr ]^1_0 \\ &=i. \end{align*}(Again, note that this is different from \overline{ \displaystyle \int _{\Gamma } z\,dz}.)
We set out this solution in a similar style to Example 4. The standard parametrisation of \Gamma , the unit circle \{z:|z|=1\}, is \gamma (t) = e^{it}\quad (t\in [0,2\pi ]); hence z=e^{it},\quad dz=ie^{it}\,dt. Here f(z)=\overline{z}=e^{-it}, and \begin{align*} \int _{\Gamma } \overline{z}\,dz &= \int ^{2\pi }_0 e^{-it}\times ie^{it}\,dt \\ &=i\int ^{2\pi }_01\,dt \\ &=i\bigl [t\bigr ]^{2\pi }_0 \\ &=2\pi i. \end{align*} Here f(z)=z^2=e^{2it}, and \begin{align*} \int _{\Gamma } z^2\,dz&=\int ^{2\pi }_0 e^{2it}\times ie^{it}\,dt \\ &=\int ^{2\pi }_0 ie^{3it}\,dt \\ &=\int ^{2\pi }_0 i(\cos 3t + i\sin 3t)\,dt \\ &=\int ^{2\pi }_0 (-\sin 3t)\,dt + i\int ^{2\pi }_0 \cos 3t\,dt \\ &=\left [\tfrac 13 \cos 3t\right ]^{2\pi }_0 + i \left [\tfrac 13 \sin 3t\right ]^{2\pi }_0 \\ &=0. \end{align*}
The standard parametrisation of \Gamma , the upper half of the circle with centre 0 and radius 2, traversed from 2 to -2, is \gamma (t) = 2e^{it}\quad (t\in [0,\pi ]); hence \gamma \,^\prime (t)=2ie^{it}. Here f(z)=1/z, and \begin{align*} \int _{\Gamma } \frac{1}{z}\,dz&=\int ^{\pi }_0 \frac{1}{2e^{it}}\times 2ie^{it}\,dt \\ &=i\int ^{\pi }_0 1\,dt \\ &=i\bigl [t\bigr ]^{\pi }_0 \\ &=\pi i. \end{align*} Here f(z)=|z|, and \begin{align*} \int _{\Gamma } |z|\,dz&=\int ^{\pi }_0 \left |2e^{it}\right |\times 2ie^{it}\,dt \\ &=\int ^{\pi }_0 4i (\cos t + i\sin t)\,dt \\ &=\int ^{\pi }_0 (-4\sin t)\,dt + i\int ^{\pi }_0 4\cos t\,dt \\ &=\bigl [4\cos t\bigr ]^{\pi }_0 + i\bigl [4\sin t\bigr ]^{\pi }_0 \\ &=-8. \end{align*}
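These hand computations can be sanity-checked numerically. The following Python sketch (our illustration, not part of the course text; the helper name `contour_integral` is our own) approximates the parametrised integral with the trapezium rule for the semicircular contour above.

```python
import cmath
from math import pi

def contour_integral(f, gamma, dgamma, a, b, n=20000):
    # Trapezium-rule approximation of the parametrised integral
    # of f(gamma(t)) * gamma'(t) over [a, b].
    h = (b - a) / n
    total = 0.5 * (f(gamma(a)) * dgamma(a) + f(gamma(b)) * dgamma(b))
    for k in range(1, n):
        t = a + k * h
        total += f(gamma(t)) * dgamma(t)
    return h * total

# Upper half of the circle of radius 2, traversed from 2 to -2.
gamma = lambda t: 2 * cmath.exp(1j * t)
dgamma = lambda t: 2j * cmath.exp(1j * t)

I1 = contour_integral(lambda z: 1 / z, gamma, dgamma, 0.0, pi)  # expect pi*i
I2 = contour_integral(abs, gamma, dgamma, 0.0, pi)              # expect -8
```

The integrand for I1 is constant, so the quadrature is exact there; I2 agrees with the hand calculation to high accuracy.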
Exercise 9
Evaluate
\displaystyle \int _{\Gamma } \operatorname{Re} z\,dz
for each of the following contours \Gamma from 0 to 1+i.
\Gamma = \Gamma _1 + \Gamma _2, where \Gamma _1 is the line segment from 0 to i and \Gamma _2 is the line segment from i to 1+i. We choose to use the standard parametrisations \begin{align*} & \gamma _1 (t) = it\quad (t\in [0,1]), \\ & \gamma _2 (t) = t+i\quad (t\in [0,1]). \end{align*}Then \gamma \,^\prime _1 (t) = i, \gamma \,^\prime _2 (t) = 1. Hence \begin{align*} \int _{\Gamma } \operatorname{Re} z\,dz&=\int _{\Gamma _1} \operatorname{Re} z\,dz + \int _{\Gamma _2} \operatorname{Re} z\,dz &\\ &=\int ^1_0 \operatorname{Re} (it)\times i\,dt + \int ^1_0 \operatorname{Re} (t+i) \times 1\,dt &\\ &=\int ^1_0 0\,dt + \int ^1_0 t\,dt &\\ &=\left [\tfrac 12 t^2\right ]^1_0 = \tfrac 12.& \end{align*}
\Gamma = \Gamma _1 + \Gamma _2, where \Gamma _1 is the line segment from 0 to 1 and \Gamma _2 is the line segment from 1 to 1+i. We choose to use the standard parametrisations \begin{align*} & \gamma _1(t)=t\quad (t\in [0,1]), \\ & \gamma _2(t)=1+it\quad (t\in [0,1]). \end{align*}Then \gamma \,^\prime _1(t)=1, \gamma \,^\prime _2(t)=i. Hence \begin{align*} \int _{\Gamma } \operatorname{Re} z\,dz&=\int _{\Gamma _1} \operatorname{Re} z\,dz + \int _{\Gamma _2} \operatorname{Re} z\,dz &\\ &=\int ^1_0 \operatorname{Re} t\times 1\,dt + \int ^1_0 \operatorname{Re} (1+it)\times i\,dt &\\ &=\int ^1_0 t\,dt + i\int ^1_0 1\,dt &\\ &=\left [\tfrac 12 t^2\right ]^1_0 + i\bigl [t\bigr ]^1_0 = \tfrac 12 + i.& \end{align*}(Note that the integrals in parts (a) and (b) have different values.)
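The path dependence found in parts (a) and (b) can be reproduced numerically. Here is a small illustrative Python sketch (ours, not from the course text), integrating Re z along each polygonal route by straight-segment parametrisation:

```python
def line_integral(f, z0, z1, n=2000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1], so gamma'(t) = z1 - z0; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

re = lambda z: complex(z).real

route_a = line_integral(re, 0, 1j) + line_integral(re, 1j, 1 + 1j)  # via i
route_b = line_integral(re, 0, 1) + line_integral(re, 1, 1 + 1j)    # via 1
```

Since Re z is linear along each segment, the trapezium rule is essentially exact here, giving 1/2 for route (a) and 1/2 + i for route (b).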
3 Evaluating contour integrals
After working through this section, you should be able to:
state and use the Fundamental Theorem of Calculus for contour integrals
state and use the Contour Independence Theorem
use the technique of Integration by Parts
state and use the Closed Contour Theorem, the Grid Path Theorem, the Zero Derivative Theorem and the Paving Theorem.
3.1 The Fundamental Theorem of Calculus
In Example 5 we saw that
\int _{\Gamma } z^2\,dz = -\tfrac{2}{3} + \tfrac{2}{3}i,
where \Gamma is the contour shown in Figure 23. Our method was to write down a smooth parametrisation for each of the two line segments, replace z in the integral by these parametrisations, and then integrate.
It is, however, tempting to approach this integral as you would a corresponding real integral and write
\begin{align*} \int _{\Gamma } z^2\,dz&= \left [\tfrac{1}{3} z^3\right ]^{1+i}_0\\ &=\tfrac{1}{3} (1 + i)^3 - \tfrac{1}{3}\times 0^3\\ &= \tfrac{1}{3}(1+3i+3i^2+i^3)\\ &= -\tfrac{2}{3} + \tfrac{2}{3}i. \end{align*}
The Fundamental Theorem of Calculus for contour integrals tells us that this method of evaluation is permissible under certain conditions. Before stating it, we need the idea of a primitive of a complex function, which is defined in a similar way to the primitive of a real function (Section 1.3).
Definition
Let f and F be functions defined on a region \mathcal{R}. Then F is a primitive of {\boldsymbol{f}} on \boldsymbol{\mathcal{R}} if F is analytic on \mathcal{R} and
F^\prime (z) = f(z),\quad \text{for all }z \in \mathcal{R}.
The function F is also called an antiderivative or indefinite integral of f on \mathcal{R}.
For example, F(z) = \frac{1}{3}z^3 is a primitive of f(z) = z^2 on \mathbb{C}, since F is analytic on \mathbb{C} and F^\prime (z) = z^2, for all z \in \mathbb{C}. Another primitive is F(z) = \frac{1}{3}z^3 + 2i; indeed, any function of the form F(z) = \frac{1}{3} z^3 + c, where c \in \mathbb{C}, is a primitive of f on \mathbb{C}.
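A claimed primitive can be checked informally with a central-difference approximation to F'(z). This Python sketch (our illustration; the test point and step size are arbitrary choices of ours) confirms that F(z) = z^3/3 + 2i differentiates back to z^2:

```python
def approx_deriv(F, z, h=1e-6):
    # Central-difference estimate of F'(z), stepping in the real direction.
    return (F(z + h) - F(z - h)) / (2 * h)

F = lambda z: z**3 / 3 + 2j   # adding any constant gives another primitive
f = lambda z: z**2

z0 = 1.3 - 0.7j
err = abs(approx_deriv(F, z0) - f(z0))   # should be tiny
```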
Exercise 10
Write down a primitive F of each of the following functions f on the given region \mathcal{R}.
f(z) = e^{3iz},\quad \mathcal{R} = \mathbb{C}
f(z) = (1 + iz)^{-2},\quad \mathcal{R} = \mathbb{C} - \{i\}
f(z)=z^{-1},\quad \mathcal{R}=\{z:\operatorname{Re} z>0\}
F(z)=\dfrac{1}{3i}\,e^{3iz}\quad (z\in \mathbb{C})
F(z)=i(1+iz)^{-1}=(z-i)^{-1}\quad (z\in \mathbb{C}-\{i\})
F(z)=\operatorname{Log} z\quad (\operatorname{Re} z>0)
We now state the Fundamental Theorem of Calculus for contour integrals, which gives us a quick way of evaluating a contour integral of a function with a primitive that we can determine. The theorem will be proved later in this section.
Theorem 7 Fundamental Theorem of Calculus
Let f be a function that is continuous and has a primitive F on a region \mathcal{R}, and let \Gamma be a contour in \mathcal{R} with initial point \alpha and final point \beta . Then
\int _{\Gamma } f(z)\,dz = F(\beta )  F(\alpha ).
We often use the notation
\bigl [F(z)\bigr ]^{\beta }_{\alpha }=F(\beta )  F(\alpha ).
Some texts write F(z)\big |^{\beta }_{\alpha } instead of \bigl [F(z)\bigr ]^{\beta }_{\alpha }.
For an example of the use of the Fundamental Theorem of Calculus, observe that if f(z)=z^2, then f is continuous on \mathbb{C} and has a primitive F(z)=\tfrac 13 z^3 there. Hence, for the contour \Gamma in Figure 23, we can write
\int _{\Gamma } z^2\,dz = \left [\tfrac{1}{3} z^3\right ]^{1+i}_0 = \tfrac{1}{3}(1+i)^3 - \tfrac{1}{3}\times 0^3 = -\tfrac{2}{3} + \tfrac{2}{3}i.
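The agreement between the two methods is easy to verify numerically. In this illustrative Python sketch (ours, not part of the course text), the parametrised integral along the line segment from 0 to 1+i is compared with F(1+i) - F(0):

```python
def line_integral(f, z0, z1, n=20000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

f = lambda z: z * z
F = lambda z: z**3 / 3

beta = 1 + 1j
direct = line_integral(f, 0, beta)   # parametrise and integrate
via_ftc = F(beta) - F(0)             # Fundamental Theorem of Calculus
```

Both routes give -2/3 + 2i/3 to within quadrature error.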
Exercise 11
Use the Fundamental Theorem of Calculus to evaluate
\int _{\Gamma } e^{3iz}\,dz,
where \Gamma is the semicircular path shown in Figure 24.
Let f(z)=e^{3iz}, F(z)=e^{3iz}/(3i) and \mathcal{R}=\mathbb{C}. Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus,
\begin{align*} \int _{\Gamma }e^{3iz}\,dz&=F(2)-F(-2) \\ &=\frac{1}{3i}\,(e^{6i}-e^{-6i})= \frac 23 \sin 6. \end{align*}
The final simplification follows from the formula
\sin z = \frac{1}{2i}(e^{iz}-e^{-iz}),
with z=6.
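Since e^{3iz} has a primitive on the whole of \mathbb{C}, the value depends only on the endpoints, so a numerical check may use any convenient contour from -2 to 2. This Python sketch (ours, for illustration only) uses the straight segment:

```python
import cmath
from math import sin

def line_integral(f, z0, z1, n=40000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

val = line_integral(lambda z: cmath.exp(3j * z), -2, 2)
target = (2 / 3) * sin(6)   # value from the Fundamental Theorem of Calculus
```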
You have seen that
\int _{\Gamma } z^2\,dz = -\tfrac 23 + \tfrac 23 i
both when \Gamma is the contour in Figure 23 and also when \Gamma is the line segment from 0 to 1+i (see Example 2). This is not a coincidence: in fact, it is a particular case of the following important consequence of the Fundamental Theorem of Calculus.
Theorem 8 Contour Independence Theorem
Let f be a function that is continuous and has a primitive F on a region \mathcal{R}, and let \Gamma _1 and \Gamma _2 be contours in \mathcal{R} with the same initial point \alpha and the same final point \beta . Then
\int _{\Gamma _1} f(z)\,dz = \int _{\Gamma _2} f(z)\,dz.
Proof
By the Fundamental Theorem of Calculus for contour integrals, the value of each of these integrals is F(\beta ) F(\alpha ).
The idea that a contour integral may, under suitable hypotheses, depend only on the endpoints of the contour (and not on the contour itself) has great significance.
Exercise 12
Use the Fundamental Theorem of Calculus to evaluate the following integrals.
\displaystyle \int _{\Gamma } e^{\pi z}\,dz, where \Gamma is any contour from -i to i.
\displaystyle \int _{\Gamma }(3z-1)^2\,dz, where \Gamma is any contour from 2 to 2i +\frac{1}{3}.
\displaystyle \int _{\Gamma } \sinh z\,dz, where \Gamma is any contour from i to 1.
\displaystyle \int _{\Gamma } e^{\sin z} \cos z\,dz, where \Gamma is any contour from 0 to \pi /2.
\displaystyle \int _{\Gamma } \frac{\sin z}{\cos ^2 z}\,dz, where \Gamma is any contour from 0 to \pi lying in \mathbb{C} - \left \{\left (n+\tfrac 12\right ) \pi :n\in \mathbb{Z}\right \}.
Let f(z)=e^{\pi z}, F(z)=e^{\pi z}/\pi and \mathcal{R}=\mathbb{C}. Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma }e^{\pi z}\,dz &=F(i)-F(-i) \\ &=\left (e^{\pi i}/\pi \right )-\left (e^{-\pi i}/\pi \right )\\ &=-1/\pi +1/\pi =0. \end{align*}
Let f(z)=(3z-1)^2, F(z)=\tfrac{1}{9}\,(3z-1)^3 and \mathcal{R}=\mathbb{C}. Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma }(3z-1)^2\,dz &= F\left (2i+\tfrac 13\right )-F(2) \\ &=\tfrac{1}{9}(6i)^3-\tfrac{1}{9}\times 5^3 \\ &=-\tfrac{1}{9}(125+216i). \end{align*}
Let f(z)=\sinh z, F(z)=\cosh z and \mathcal{R}=\mathbb{C}. Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma } \sinh z\,dz&=F(1)-F(i) \\[3pt] &=\cosh 1 - \cosh i \\ &=\cosh 1 - \cos 1. \end{align*}
The integrand e^{\sin z} \cos z can be written as \exp (\sin z)\times \sin ^{\,\prime \!} z, which equals (\exp \circ \sin )^\prime (z), by the Chain Rule. So let f(z)=\exp (\sin z) \cos z, F(z)=\exp (\sin z) and \mathcal{R}= \mathbb{C}. Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma } e^{\sin z} \cos z\,dz&=F(\pi /2) - F(0) \\[3pt] &=\exp (\sin (\pi /2)) - \exp (\sin 0) \\ &=e-1. \end{align*}Remark: If you have a good deal of experience at differentiating and integrating real and complex functions, then you may have chosen to write down the primitive F(z)=e^{\sin z} of f(z)=e^{\sin z} \cos z straight away.
The integrand \sin z/\cos ^2 z can be written as -\frac{1}{\cos ^2 z}\cos ^{\,\prime \!} z, which equals (h\circ \cos )^\prime (z),\quad \text{where }h(z)=1/z. So let \begin{align*} &f(z) = \sin z/\cos ^2 z, \\ &F(z) = h(\cos z)=1/\cos z,\\ &\mathcal{R} = \mathbb{C} - \left \{\left (n + \tfrac{1}{2}\right )\pi : n \in \mathbb{Z}\right \}. \end{align*}Then f is continuous on \mathcal{R}, and F is a primitive of f on \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma } \frac{\sin z}{\cos ^2 z}\,dz&=F(\pi )-F(0) \\ &=\frac{1}{\cos \pi } - \frac{1}{\cos 0} \\ &=-1-1=-2. \end{align*}(In this solution, note that the region \mathcal{R} does not contain the point \pi /2, as \cos (\pi /2) = 0; thus \Gamma cannot be chosen to be a path that contains \pi /2. In particular, the real integral \displaystyle \int ^{\pi }_0 \dfrac{\sin x}{\cos ^2 x}\,dx does not exist.)
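Part (e) can be checked numerically along one concrete admissible contour, for instance the polygonal route 0 to -i to pi - i to pi, which dips below the real axis and so avoids the excluded points (n + 1/2)pi. An illustrative Python sketch (ours):

```python
import cmath
from math import pi

def line_integral(f, z0, z1, n=20000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

f = lambda z: cmath.sin(z) / cmath.cos(z) ** 2

# Polygonal contour from 0 to pi, dipping below the real axis so that
# cos never vanishes along the path.
corners = [0, -1j, pi - 1j, pi]
val = sum(line_integral(f, corners[k], corners[k + 1]) for k in range(3))
```

The computed value is close to -2, even though the corresponding real integral along [0, pi] does not exist.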
Next we give a version of Integration by Parts for contour integrals.
Theorem 9 Integration by Parts
Let f and g be functions that are analytic on a region \mathcal{R}, and suppose that f^\prime and g^\prime are continuous on \mathcal{R}. Let \Gamma be a contour in \mathcal{R} with initial point \alpha and final point \beta . Then
\int _{\Gamma } f(z) g^\prime (z)\,dz = \bigl [f(z)g(z)\bigr ]^\beta _{\alpha }  \int _{\Gamma } f^\prime (z)g(z)\,dz.
Proof
Let H(z) = f(z)g(z) and h(z) = f^\prime (z)g(z) + f(z)g^\prime (z). Then h is continuous on \mathcal{R}, by hypothesis. Also, h has primitive H, since H is analytic on \mathcal{R} and
H^\prime (z) = h(z),
by the Product Rule for differentiation. It follows from the Fundamental Theorem of Calculus that
\int _{\Gamma } h(z)\,dz = \bigl [H(z)\bigr ]^{\beta }_{\alpha };
that is,
\int _{\Gamma }(f^\prime (z)g(z) + f(z)g^\prime (z))\,dz = \bigl [f(z)g(z)\bigr ]^{\beta }_{\alpha }.
Using the Sum Rule (Theorem 5(a)) and rearranging the resulting equation, we obtain
\int _{\Gamma } f(z)g^\prime (z)\,dz = \bigl [f(z)g(z)\bigr ]^{\beta }_{\alpha }  \int _{\Gamma }f^\prime (z)g(z)\,dz,
as required.
Example 7
Use Integration by Parts to evaluate
\int _{\Gamma } z e^{2z}\,dz,
where \Gamma is any contour from 0 to \pi i.
Solution
We take f(z) = z, g(z) = \frac{1}{2}e^{2z} and \mathcal{R} = \mathbb{C}. Then f and g are analytic on \mathcal{R}, and f^\prime (z) = 1 and g^\prime (z) = e^{2z} are continuous on \mathcal{R}.
Integrating by parts, we obtain
\begin{align*} \int _{\Gamma } z e^{2z}\,dz&= \left [z \times \tfrac{1}{2}e^{2z}\right ]^{\pi i}_0-\int _{\Gamma } 1 \times \tfrac{1}{2}e^{2z}\,dz\\ &= \left (\pi i\times \tfrac{1}{2}e^{2\pi i} - 0\right ) - \left [\tfrac{1}{4}e^{2z}\right ]^{\pi i}_0\\ &= \tfrac 12\pi i - \left (\tfrac{1}{4} - \tfrac{1}{4}\right )\\ &= \tfrac 12\pi i. \end{align*}
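As a numerical cross-check (our illustrative Python sketch, not part of the course text), parametrising the straight segment from 0 to pi*i and integrating z e^{2z} directly reproduces the same value:

```python
import cmath
from math import pi

def line_integral(f, z0, z1, n=20000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

val = line_integral(lambda z: z * cmath.exp(2 * z), 0, pi * 1j)  # expect pi*i/2
```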
Exercise 13
Use Integration by Parts to evaluate the following integrals.
\displaystyle \int _{\Gamma } z \cosh z\,dz, where \Gamma is any contour from 0 to \pi i.
\displaystyle \int _{\Gamma } \operatorname{Log} z\,dz, where \Gamma is any contour from 1 to i lying in the cut plane \mathbb{C}-\{x\in \mathbb{R}: x\leq 0\}.
(Hint: For part (b), take f(z)=\operatorname{Log} z and g(z)=z.)
We take f(z)=z, g(z)=\sinh z and \mathcal{R}=\mathbb{C}. Then f and g are analytic on \mathcal{R}, and f^\prime (z)=1 and g^\prime (z)=\cosh z are continuous on \mathcal{R}. Integrating by parts, we obtain \begin{align*} \int _{\Gamma }z\cosh z\,dz&=\bigl [z\sinh z\bigr ]^{\pi i}_0 - \int _{\Gamma }1\times \sinh z\,dz \\ &=(\pi i \sinh \pi i-0)-\bigl [\cosh z\bigr ]^{\pi i}_0 \\ &=\pi i\times i\sin \pi -(\cos \pi -\cosh 0)\\ &=0-(-1-1)=2. \end{align*}
We take f(z)=\operatorname{Log} z, g(z)=z and \mathcal{R} =\mathbb{C} - \{x\in \mathbb{R}: x\leq 0\}. Then f and g are analytic on \mathcal{R}, and f^\prime (z)=1/z and g^\prime (z)=1 are continuous on \mathcal{R}. Integrating by parts, we obtain \begin{align*} \int _{\Gamma } \operatorname{Log} z\,dz&= \bigl [z\operatorname{Log} z\bigr ]^i_1 - \int _{\Gamma } \frac 1z \times z\,dz &\\ &=i\operatorname{Log} i - \operatorname{Log} 1 - \bigl [z\bigr ]^i_1 &\\ &= -\pi /2 - (i-1) = (1-\pi /2) - i.& \end{align*}
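Python's `cmath.log` computes the principal logarithm, with the same cut along the negative real axis, so part (b) can be checked numerically along the quarter-circle arc gamma(t) = e^{it}, t in [0, pi/2], which lies in the cut plane. A sketch of ours:

```python
import cmath
from math import pi

def contour_integral(f, gamma, dgamma, a, b, n=20000):
    # Trapezium-rule approximation of the parametrised integral
    # of f(gamma(t)) * gamma'(t) over [a, b].
    h = (b - a) / n
    total = 0.5 * (f(gamma(a)) * dgamma(a) + f(gamma(b)) * dgamma(b))
    for k in range(1, n):
        t = a + k * h
        total += f(gamma(t)) * dgamma(t)
    return h * total

gamma = lambda t: cmath.exp(1j * t)        # arc of the unit circle from 1 to i
dgamma = lambda t: 1j * cmath.exp(1j * t)

val = contour_integral(cmath.log, gamma, dgamma, 0.0, pi / 2)
target = (1 - pi / 2) - 1j                 # value from integration by parts
```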
The Fundamental Theorem of Calculus is a useful tool when the function f being integrated has an easily determined primitive F. However, if the function f has no primitive, or if we are unable to find one, then we have to resort to the definition of an integral and use parametrisation. For example, we cannot use the Fundamental Theorem of Calculus to evaluate
\int _{\Gamma }\overline{z}\,dz
along any contour, since the function f(z) = \overline{z} has no primitive on any region.
To see why this is so, suppose that f is a function that is defined on a region in the complex plane. We observe that if f is not differentiable, then f has no primitive F. This is because any differentiable complex function can be differentiated as many times as we like. Thus, if f has a primitive F, then F is differentiable with F^\prime = f. Hence f is also differentiable.
It follows that we cannot use the Fundamental Theorem of Calculus to evaluate integrals of nondifferentiable functions such as
\begin{align*} & z \longmapsto \overline{z},\quad \ z \longmapsto \operatorname{Re} z,\quad z \longmapsto \operatorname{Im} z\quad \text{and}\quad z \longmapsto |z|. \end{align*}
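The path dependence that rules out a primitive is easy to witness numerically. This Python sketch (ours, for illustration) integrates \overline{z} from 0 to 1+i along two different contours and gets two different answers:

```python
def line_integral(f, z0, z1, n=2000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

conj = lambda z: complex(z).conjugate()

direct = line_integral(conj, 0, 1 + 1j)                              # expect 1
corner = line_integral(conj, 0, 1) + line_integral(conj, 1, 1 + 1j)  # expect 1+i
```

If a primitive existed, the two values would have to coincide; here they differ, confirming that no primitive exists on any region containing both paths.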
We conclude this section by proving the Fundamental Theorem of Calculus.
Proof
The proof of the Fundamental Theorem of Calculus is in two parts. We first prove the result in the case when \Gamma is a smooth path, and then extend the proof to contours.
Let \Gamma : \gamma (t)\ (t\in [a,b]) be a smooth path. Then \begin{align*} \int _{\Gamma } f(z)\,dz&= \int ^b_a f(\gamma (t))\,\gamma \,^\prime (t)\,dt \\ &= \int ^b_a F^\prime (\gamma (t))\,\gamma \,^\prime (t)\,dt\\ &=\int ^b_a (F\circ \gamma )^\prime (t)\,dt, \end{align*}by the Chain Rule. Now, if we write (F\circ \gamma )(t) as a sum of its real and imaginary parts u(t)+iv(t), then \int ^b_a (F\circ \gamma )^\prime (t)\,dt = \int _a^bu'(t)\,dt+i\int _a^bv'(t)\,dt. The Fundamental Theorem of Calculus for real integrals (Theorem 2) tells us that \int _a^bu'(t)\,dt=u(b)-u(a)\quad \text{and}\quad \int _a^bv'(t)\,dt=v(b)-v(a). Hence \int _{\Gamma } f(z)\,dz = (u(b)-u(a))+i(v(b)-v(a))=F(\beta ) - F(\alpha ), since \beta =\gamma (b) and \alpha =\gamma (a).
To extend the proof to a general contour \Gamma with initial point \alpha and final point \beta , we argue as follows. Let \Gamma = \Gamma _1 + \Gamma _2 +\cdots + \Gamma _n, for smooth paths \Gamma _1,\Gamma _2,\dots ,\Gamma _n, and let the initial and final points of \Gamma _k be \alpha _k and \beta _k, for k = 1,2,\ldots , n. Then \alpha _1 = \alpha ,\quad \alpha _2 = \beta _1,\quad \ldots ,\quad \alpha _n = \beta _{n-1},\quad \beta _n = \beta . By part (a), \int _{\Gamma _k} f(z)\,dz = F(\beta _k)-F(\alpha _k)=F(\beta _k)-F(\beta _{k-1}), for k=1,2,\dots ,n (where \beta _0=\alpha ). Hence \begin{align*} \int _{\Gamma } f(z)\,dz &= \int _{\Gamma _1} f(z)\,dz + \int _{\Gamma _2} f(z)\,dz +\cdots + \int _{\Gamma _n} f(z)\,dz\\ &=(F(\beta _1)-F(\beta _{0}))+\dots + (F(\beta _n)-F(\beta _{n-1}))\\ &= F(\beta _n) - F(\beta _0)\\ &= F(\beta ) - F(\alpha ). \end{align*}
3.2 Further exercises
Here are some further exercises to end this section.
Exercise 14
For each of the following functions f, evaluate
\displaystyle \int _{\Gamma } f(z)\,dz,
where \Gamma is any contour from -i to i.
f(z)=1
f(z)=z
f(z)=5z^4+3iz^2
f(z)=(1+2iz)^9
f(z)=e^{-iz}
f(z)=\sin z
f(z)=ze^{z^2}
f(z)=z^3\cosh (z^4)
f(z)=ze^{z}
In each case, f is continuous on \mathbb{C} and has a primitive on \mathbb{C}, so we can apply the Fundamental Theorem of Calculus to evaluate the integral using any contour \Gamma from i to i.
\displaystyle \int _{\Gamma } 1\,dz = \bigl [z\bigr ]^i_{-i} = i - (-i) = 2i
\displaystyle \int _{\Gamma } z\,dz = \left [\tfrac 12 z^2\right ]^i_{-i} = \tfrac 12 i^2 - \tfrac 12 (-i)^2 = 0
\begin{aligned}[t] \int _{\Gamma } \left (5z^4 + 3iz^2\right )dz &= \left [z^5 + iz^3\right ]^i_{-i} \\ &=(i+1)-(-i-1)\\ &=2+2i \end{aligned}
\begin{aligned}[t] \int _{\Gamma }(1+2iz)^9 \,dz &= \left [\left (1+2iz\right )^{10}/(10\times 2i)\right ]^i_{-i}\\ &=\left ((-1)^{10}-3^{10}\right )/(20i) \\ &=\frac{3^{10}-1}{20}\, i \end{aligned}
\begin{aligned}[t] \int _{\Gamma } e^{-iz} \,dz&=\left [e^{-iz}/(-i)\right ]^i_{-i} \\ &=\left (e-e^{-1}\right )/(-i) = 2i\sinh 1 \end{aligned}
\begin{aligned}[t] \int _{\Gamma }\sin z\,dz&=\bigl [-\cos z\bigr ]^i_{-i} \\ &=-\cos i + \cos (-i) = 0 \end{aligned}
A primitive of f(z)=z e^{z^2} is F(z)=\tfrac 12 e^{z^2}. Hence \begin{align*} \int _{\Gamma } ze^{z^2} \,dz&=\left [\tfrac 12 e^{z^2}\right ]^i_{-i} \\ &=\tfrac 12 \left (e^{-1}-e^{-1}\right )=0. \end{align*}
A primitive of f(z)=z^3 \cosh (z^4) is F(z)=\tfrac 14 \sinh (z^4). Hence \begin{align*} \int _{\Gamma } z^3 \cosh (z^4) \,dz&=\left [\tfrac 14 \sinh (z^4)\right ]^i_{-i} \\ &=\tfrac 14 (\sinh 1 - \sinh 1) = 0. \end{align*}
Let g(z)=z, h(z)=e^z. Then g and h are entire (that is, g and h are differentiable on the whole of \mathbb{C}), and g^\prime and h^\prime are entire and hence continuous. Then, using Integration by Parts (Theorem 9), we have \begin{align*} \int _{\Gamma }ze^z \,dz&=\bigl [ze^z\bigr ]^i_{-i} - \int _{\Gamma } 1\times e^z \,dz \\ &=\left (ie^i-(-i)e^{-i}\right )-\int _{\Gamma } e^z \,dz\\ &=i\left (e^i+e^{-i}\right ) - \bigl [e^z\bigr ]^i_{-i} \\ &=2i\cos 1 - \left (e^i - e^{-i}\right ) \\ &=2i\cos 1 - 2i\sin 1 \\ &=2(\cos 1 - \sin 1) i. \end{align*}
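Since each integrand above has a primitive on \mathbb{C}, a numerical spot check may use any contour from -i to i; the straight segment is simplest. An illustrative Python sketch (ours) for parts (a) and (i):

```python
import cmath
from math import cos, sin

def line_integral(f, z0, z1, n=20000):
    # Integral of f(z) dz along the segment gamma(t) = z0 + t*(z1 - z0),
    # t in [0, 1]; trapezium rule.
    h = 1.0 / n
    dz = z1 - z0
    total = 0.5 * (f(z0) + f(z1))
    for k in range(1, n):
        total += f(z0 + (k * h) * dz)
    return h * dz * total

v_a = line_integral(lambda z: 1, -1j, 1j)                  # part (a): expect 2i
v_i = line_integral(lambda z: z * cmath.exp(z), -1j, 1j)   # part (i)
```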
Exercise 15
Evaluate the following integrals. (In each case pay special attention to the hypotheses of the theorems you use.)
\displaystyle \int _{\Gamma } \frac{1}{z}\,dz, where \Gamma is the arc of the circle \{z:|z|=1\} from -i to i passing through 1.
\displaystyle \int _{\Gamma } \sqrt{z}\,dz, where \Gamma is as in part (a).
\displaystyle \int _{\Gamma } \sin ^2 z\,dz, where \Gamma is the unit circle \{z:|z|=1\}.
\displaystyle \int _{\Gamma } \frac{1}{z^3}\,dz, where \Gamma is the circle \{z:|z|=27\}.
(Hint: For part (c), use the identity \sin ^2z=\tfrac 12(1-\cos 2z).)
Let f(z)=1/z, F(z)=\operatorname{Log} z and \mathcal{R} = \mathbb{C} - \{x\in \mathbb{R} : x\leq 0\}. Then f is continuous on \mathcal{R}, F is a primitive of f on \mathcal{R}, and \Gamma is a contour in \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma } \frac 1z \,dz&=\bigl [\operatorname{Log} z\bigr ]^i_{-i} \\ &= \operatorname{Log} i - \operatorname{Log} (-i) \\ &= \frac{\pi }{2}i-\left (-\frac{\pi }{2}i\right ) = \pi i. \end{align*}
Let f(z)=\sqrt{z}, F(z)=\frac 23 z^{3/2} and \mathcal{R}=\mathbb{C}-\{x\in \mathbb{R}: x\leq 0\}. Then f is continuous on \mathcal{R}, F is a primitive of f on \mathcal{R}, and \Gamma is a contour in \mathcal{R}. Thus, by the Fundamental Theorem of Calculus, \begin{align*} \int _{\Gamma } \sqrt{z} \,dz&=\left [\tfrac 23 z^{3/2}\right ]^i_{-i} \\ &=\tfrac 23 \left (i^{3/2} - (-i)^{3/2}\right ) \\ &=\tfrac 23 \Big (\exp \left (\tfrac 32 \operatorname{Log} i\right ) - \exp \left (\tfrac 32 \operatorname{Log} (-i)\right )\Big ) \\ &=\frac 23 \left (\exp \left (\frac{3\pi }{4} i\right )-\exp \left (-\frac{3\pi }{4}i\right ) \right ) \\ &=\frac 23 \left (2i\sin \frac{3\pi }{4}\right ) \\ &=\frac{2\sqrt{2}}{3}\, i. \end{align*}
The function f(z)=\sin ^2 z=\tfrac 12 (1-\cos 2z) is continuous and has an entire primitive F(z)=\tfrac 12 (z-\tfrac 12 \sin 2z). Thus, by the Closed Contour Theorem, \int _{\Gamma } \sin ^2 z\,dz = 0.
Let f(z)=1/z^3, F(z)=-1/(2z^2) and \mathcal{R}=\mathbb{C}-\{0\}. Then f is continuous on \mathcal{R}, F is a primitive of f on \mathcal{R}, and \Gamma is a contour in \mathcal{R}. Thus, by the Closed Contour Theorem, \int _{\Gamma } \frac{1}{z^3}\,dz = 0.
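All four parts can be cross-checked numerically with a single trapezium-rule helper (an illustrative Python sketch of ours). Python's `cmath.sqrt` returns the principal square root, matching the primitive in part (b); and since the Closed Contour Theorem gives 0 for any circle lying in the relevant region, the unit circle is used for parts (c) and (d) for simplicity:

```python
import cmath
from math import pi, sqrt

def contour_integral(f, gamma, dgamma, a, b, n=20000):
    # Trapezium-rule approximation of the parametrised integral
    # of f(gamma(t)) * gamma'(t) over [a, b].
    h = (b - a) / n
    total = 0.5 * (f(gamma(a)) * dgamma(a) + f(gamma(b)) * dgamma(b))
    for k in range(1, n):
        t = a + k * h
        total += f(gamma(t)) * dgamma(t)
    return h * total

circle = lambda t: cmath.exp(1j * t)       # unit circle, |z| = 1
dcircle = lambda t: 1j * cmath.exp(1j * t)

# (a), (b): the arc from -i through 1 to i, i.e. t in [-pi/2, pi/2].
Ia = contour_integral(lambda z: 1 / z, circle, dcircle, -pi / 2, pi / 2)
Ib = contour_integral(cmath.sqrt, circle, dcircle, -pi / 2, pi / 2)

# (c), (d): closed contours, traversed once round the circle.
Ic = contour_integral(lambda z: cmath.sin(z) ** 2, circle, dcircle, 0.0, 2 * pi)
Id = contour_integral(lambda z: 1 / z ** 3, circle, dcircle, 0.0, 2 * pi)
```

The arc integrals reproduce pi*i and (2*sqrt(2)/3)*i, and both closed-contour integrals are numerically zero.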
Exercise 16
Construct a grid path from \alpha to \beta in the domain of the function \tan , for each of the following cases.
\alpha = 1, \beta = 6
\alpha = \dfrac{\pi }{2} + 2i, \beta =\dfrac{3\pi }{2} - i
The domain of \tan is the region
\mathcal{R}=\mathbb{C}-\left \{\left (n+\tfrac 12\right )\pi : n \in \mathbb{Z}\right \}.
The figure shows one grid path in \mathcal{R} from 1 to 6 (there are many others).
The figure shows one grid path in \mathcal{R} from \dfrac{\pi }{2} + 2i to \dfrac{3\pi }{2} - i (again, there are many others).
4 Summary of Session 2
In this session you have seen how the idea of integration of real functions can be extended to the integration of complex functions along paths in the complex plane. You have seen the surprising result that for a continuous function the integral is independent of the precise path taken.
Course conclusion
Well done on completing this course, Introduction to complex analysis. As well as being able to understand the terms and definitions, and use the results introduced, you should also find that your skills in understanding complex mathematical texts are improving.
You should now be able to:
use the definition of derivative to show that a given function is or is not differentiable at a point
use the Cauchy–Riemann equations to show that a function is or is not differentiable at a point
interpret the derivative of a complex function at a point as a rotation and a scaling of a small disc
appreciate how complex integrals can be defined by analogy with real integrals
define the integral of a complex function along a contour and evaluate such integrals
state and use several key theorems to evaluate contour integrals.
This OpenLearn course is an extract from the Open University course M337 Complex analysis.
This free course was written by the Open University School of Mathematics and Statistics.
Except for third party materials and otherwise stated (see terms and conditions), this content is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence.
The material acknowledged below is Proprietary and used under licence (not subject to Creative Commons Licence). Grateful acknowledgement is made to the following sources for permission to reproduce material in this free course:
Images
Portrait of Jean le Rond d’Alembert (1717–1783); photographer Bonhams, London, 4 Dec 2013
Portrait of Pierre Simon Marquis de Laplace (1745–1827), by Jean-Baptiste Paulin Guérin (1783–1855); photograph: http://www.photo.rmn.fr
Every effort has been made to contact copyright owners. If any have been inadvertently overlooked, the publishers will be pleased to make the necessary arrangements at the first opportunity.
Don't miss out
If reading this text has inspired you to learn more, you may be interested in joining the millions of people who discover our free learning resources and qualifications by visiting The Open University – www.open.edu/openlearn/freecourses.