%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% The Project Gutenberg EBook of A Course of Pure Mathematics, by
% G. H. (Godfrey Harold) Hardy
%
% This eBook is for the use of anyone anywhere at no cost and with
% almost no restrictions whatsoever.  You may copy it, give it away or
% re-use it under the terms of the Project Gutenberg License included
% with this eBook or online at www.gutenberg.net
%
% Title: A Course of Pure Mathematics
%        Third Edition
%
% Author: G. H. (Godfrey Harold) Hardy
%
% Release Date: February 5, 2012 [EBook #38769]
%
% Language: English
%
% Character set encoding: ISO-8859-1
%
% *** START OF THIS PROJECT GUTENBERG EBOOK A COURSE OF PURE MATHEMATICS ***
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\def\ebook{38769}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%% Packages and substitutions:
%%
%% book:        Required.
%% inputenc:    Standard DP encoding.  Required.
%% ifthen:      Logical conditionals.  Required.
%% amsmath:     AMS mathematics enhancements.  Required.
%% amssymb:     Additional mathematical symbols.  Required.
%% alltt:       Fixed-width font environment.  Required.
%% footmisc:    Extended footnote capabilities.  Required.
%% indentfirst: Indent first word of each sectional unit.  Required.
%% icomma:      Make the comma a decimal separator in math.  Required.
%% calc:        Length calculations.  Required.
%% fancyhdr:    Enhanced running headers and footers.  Required.
%% graphicx:    Standard interface for graphics inclusion.  Required.
%% caption:     Caption customization.  Required.
%% geometry:    Enhanced page layout package.  Required.
%% hyperref:    Hypertext embellishments for pdf output.  Required.
%%
%% Producer's Comments:
%%
%% Changes are noted in this file in multiple ways.
%% 1.
%%    \DPnote{} for in-line `placeholder' notes.
%% 2. \DPtypo{}{} for typographical corrections, showing original
%%    and replacement text side-by-side.
%% 3. \DPchg (stylistic uniformity) and \DPmod (modernization).
%% 4. [** TN: Note]s for lengthier or stylistic comments.
%%
%% Compilation Flags:
%%
%% The following behavior may be controlled by boolean flags.
%%
%% ForPrinting (false by default):
%%   Compile a screen-optimized PDF file.  Set to true for print-
%%   optimized file (two-sided layout, black hyperlinks).
%%
%% Modernize (true by default):
%%   Modernize the mathematical notation (see below for details).
%%
%% PDF pages: 587 (if ForPrinting set to false)
%% PDF page size: 5.5 x 8in (non-standard)
%%
%% Images: 68 pdf diagrams, 1 png image (CUP device)
%%
%% Summary of log file:
%% * One overfull hbox (7.3pt too wide).
%% * Three underfull hboxes, four underfull vboxes.
%%
%% Compile History:
%%
%% January, 2012: (Andrew D. Hwang)
%%   texlive2007, GNU/Linux
%%
%% Command block:
%%   pdflatex x3
%%
%% February 2012: pglatex.
%%   Compile this project with:
%%   pdflatex 38769-t.tex .....
%%   THREE times
%%
%% pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian)
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\listfiles
\documentclass[12pt]{book}[2005/09/16]

%%%%%%%%%%%%%%%%%%%%%%%%%%%%% PACKAGES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage[latin1]{inputenc}[2006/05/05]
\usepackage{ifthen}[2001/05/26]  %% Logical conditionals
\usepackage{amsmath}[2000/07/18] %% Displayed equations
\usepackage{amssymb}[2002/01/22] %% and additional symbols
\usepackage{alltt}[1997/06/16]   %% boilerplate, credits, license
%% extended footnote capabilities
\usepackage[symbol,perpage]{footmisc}[2005/03/17]
\usepackage{indentfirst}[1995/11/23]
\usepackage{icomma}[2002/03/10]
\usepackage{calc}[2005/08/06]
\usepackage{fancyhdr}
\usepackage{graphicx}[1999/02/16] %% For diagrams
\usepackage[labelformat=empty,textfont=small]{caption}[2007/01/07]

% Modernize notation: Use square root signs instead of surds, square
% brackets for closed intervals, reverse roles of delta and epsilon.
\newboolean{Modernize}
%% COMMENT the line below to revert to the original notation.
%% Figures 27 (p117.pdf) and 30 (p176.pdf) will automatically work
%% if all the Project Gutenberg diagram files are present in images/.
%% (This switch does not affect typographical corrections.)
\setboolean{Modernize}{true}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%  Interlude: Set up SCREEN VIEWING (default) or PRINTING  %%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ForPrinting=true            false (default)
% Asymmetric margins          Symmetric margins
% Black hyperlinks            Blue hyperlinks
% Start Preface, ToC, etc.
%   on recto                  No blank verso pages
%
\newboolean{ForPrinting}
%% UNCOMMENT the next line for a PRINT-OPTIMIZED VERSION of the text %%
%\setboolean{ForPrinting}{true}

%% Initialize values to ForPrinting=false
\newcommand{\ChapterSpace}{}
\newcommand{\Margins}{hmarginratio=1:1} % Symmetric margins
\newcommand{\HLinkColor}{blue} % Hyperlink color
\newcommand{\PDFPageLayout}{SinglePage}
\newcommand{\TransNote}{Transcriber's Note} %% TN at the beginning
\newcommand{\TransNoteCommon}{%
  Minor typographical corrections and presentational changes have been
  made without comment.
  \ifthenelse{\boolean{Modernize}}{Notational modernizations are
  listed in the transcriber's note at the end of the book.}{}
  All changes are detailed in the \LaTeX\ source file, which may be
  downloaded from
  \begin{center}
    \texttt{www.gutenberg.org/ebooks/\ebook}.
  \end{center}
  \bigskip
}
%% TN at the end, regarding modernization
\newcommand{\ModernizationNote}{%
  The notational modernizations listed below have been made.  These
  changes may be reverted by commenting out one line in the \LaTeX\
  source file and recompiling the book.
  \begin{itemize}
    \item Closed intervals are denoted with square brackets,
      \eg,~$[a, b]$, instead of round parentheses,~$(a, b)$.
    \item Repeating decimals are denoted with an overline,
      \eg,~$.217\Repeat{13}$, instead of with dot
      accents,~$.217\dot{1}\dot{3}$.
    \item The roles of $\delta$~and~$\epsilon$ in the definition of
      limits, \PageRef{p.}{113}~\textit{ff.}, have been interchanged
      in accordance with modern convention: ``For every
      $\epsilon > 0$, there exists a $\delta > 0$ such that~\dots''.
  \end{itemize}
}
%% TN at the end, regarding change of formula in Figure 16
\newcommand{\ChangeNote}{%
  In Example~11, \PageRef{p.}{57}~\textit{ff.}, the text refers to the
  formula
  \[
    y = \left\{
    \begin{array}{@{}cl}
      \sqrt{(1 + p^{2})(1 + q^{2})} & \text{if $x = p/q$ in lowest terms,} \\
      x & \text{if $x$~is irrational.}
    \end{array}
    \right.
  \]
  The computer-generated \Fig{16} instead depicts the formula
  \[
    y = \left\{
    \begin{array}{@{}cl}
      \sqrt{(10 + p^{2})(10 + q^{2})} & \text{if $x = p/q$ in lowest terms,} \\
      x & \text{if $x$~is irrational,}
    \end{array}
    \right.
  \]
  which exhibits the same mathematical behavior, but better matches
  the hand-drawn diagram in the original.
}
\newcommand{\TransNoteText}{%
  \TransNoteCommon
  This PDF file is optimized for screen viewing, but may easily be
  recompiled for printing.  Please consult the preamble of the \LaTeX\
  source file for instructions.
}

%% Re-set if ForPrinting=true
\ifthenelse{\boolean{ForPrinting}}{%
  \renewcommand{\ChapterSpace}{\vspace*{1in}}
  \renewcommand{\Margins}{hmarginratio=2:3} % Asymmetric margins
  \renewcommand{\HLinkColor}{black} % Hyperlink color
  \renewcommand{\PDFPageLayout}{TwoPageRight}
  \renewcommand{\TransNoteText}{%
    \TransNoteCommon
    This PDF file is optimized for printing, but may easily be
    recompiled for screen viewing.  Please consult the preamble of
    the \LaTeX\ source file for instructions.
  }
  \newcommand{\longpage}{}
}{% If ForPrinting=false, don't skip to recto
  \renewcommand{\cleardoublepage}{\clearpage}
  \newcommand{\longpage}{\enlargethispage{\baselineskip}}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%  End of PRINTING/SCREEN VIEWING code; back to packages  %%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\ifthenelse{\boolean{ForPrinting}}{%
  \setlength{\paperwidth}{8.5in}%
  \setlength{\paperheight}{11in}%
  \usepackage[body={5.25in,8.5in},\Margins]{geometry}[2002/07/08]
}{%
  \setlength{\paperwidth}{5.5in}%
  \setlength{\paperheight}{8in}%
  \usepackage[body={5.25in,6.9in},\Margins,includeheadfoot]{geometry}[2002/07/08]
}

\providecommand{\ebook}{00000} % Overridden during white-washing
\usepackage[pdftex,
            hyperfootnotes=false,
            pdftitle={The Project Gutenberg eBook \#\ebook: A Course of Pure Mathematics},
            pdfauthor={Godfrey Harold Hardy},
            pdfkeywords={Andrew D.
Hwang, Brenda Lewis, Project Gutenberg
              Online Distributed Proofreading Team, Internet
              Archive/American Libraries},
            pdfstartview=Fit, % default value
            pdfstartpage=1, % default value
            pdfpagemode=UseNone, % default value
            bookmarks=true, % default value
            linktocpage=false, % default value
            pdfpagelayout=\PDFPageLayout,
            pdfdisplaydoctitle,
            pdfpagelabels=true,
            bookmarksopen=true,
            bookmarksopenlevel=1,
            colorlinks=true,
            linkcolor=\HLinkColor]{hyperref}[2007/02/07]

%%%% Fixed-width environment to format PG boilerplate %%%%
\newenvironment{PGtext}{%
  \begin{alltt}
  \fontsize{9.2}{10.5}\ttfamily\selectfont}%
{\end{alltt}}

%%%% Global style parameters %%%%
% No hrule in page header
\renewcommand{\headrulewidth}{0pt}
\setlength{\headheight}{15pt}

% Loosen horizontal spacing
\setlength{\emergencystretch}{1.5em}
% Local spacing coercion
\newcommand{\Loosen}{\spaceskip 0.375em plus 0.75em minus 0.25em}
% Used only once, to coax a wide display into the text block
\newcommand{\Squeeze}[2][0.98]{\scalebox{#1}[1]{#2}}
% Allow \quad to compress a bit
\let\oldquad=\quad
\renewcommand{\quad}{\oldquad\hspace{0pt minus 3pt}}

% Misc spacing parameters
\setlength{\multlinegap}{2\parindent}
\newcommand{\Medskip}{\vspace{0pt plus 0.5\baselineskip}}

% "Scratch pad" for length calculations
\newlength{\TmpLen}

%% Parametrized vertical space %%
\newcommand{\Strut}[1][12pt]{\rule{0pt}{#1}}

%%%% Corrections and in-line transcriber's notes %%%%
% In-line notes
\newcommand{\DPnote}[1]{}
% Errors
\newcommand{\DPtypo}[2]{#2}

%%%% Notational modernizations %%%%
% ** If \epsilon -> \varepsilon, figures p117 and p176 must be recompiled
\ifthenelse{\boolean{Modernize}}{%
  % Stylistic changes made for clarity or consistency
  \newcommand{\DPchg}[2]{#2}
  \newcommand{\Add}[1]{#1}
  % Modernize notation
  \newcommand{\DPmod}[2]{#2}
  % ** Incarnations of \sqrt; see below for significance
  \newcommand{\sqrtp}[2][\ ]{\sqrt[#1]{#2}}
  \newcommand{\sqrtb}[2][\ ]{\sqrt[#1]{#2}}
  \newcommand{\sqrtbr}[2][\ ]{\sqrt[#1]{#2}}
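  % ** Usage sketch (transcriber's comment; the radicand below is a
  % ** hypothetical example, not taken from Hardy's text).  With
  % ** Modernize=true every \sqrt incarnation collapses to the modern
  % ** radical, so \sqrtp{1 + x^{2}} prints like \sqrt{1 + x^{2}};
  % ** with Modernize=false (see the matching branch below) it reverts
  % ** to Hardy's surd sign with a parenthesized radicand.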
  \newcommand{\bigsqrt}[2][\ ]{\sqrt[#1]{#2}}
  \newcommand{\bigsqrtb}[2][\ ]{\sqrt[#1]{#2}}
  \newcommand{\bigsqrtp}[2][\ ]{\sqrt[#1]{#2}}
  % Exchange delta, epsilon in the definition of limits
  \newcommand{\DELTA}{\epsilon}
  \newcommand{\EPSILON}{\delta}
  % Add visual delimiters to large integers/long decimals
  \newcommand{\MC}{,}% "Math comma"
  \newcommand{\MS}{\,}% "Math space"
}{% Modernize = false
  \newcommand{\DPchg}[2]{#1}
  \newcommand{\Add}[1]{}
  \newcommand{\DPmod}[2]{#1}
  % Use surd sign...
  \let\oldsqrt=\sqrt%
  \renewcommand*{\sqrt}[2][\ ]{\oldsqrt[#1]{\vphantom{|}}#2}
  % ... with parentheses or curly braces around radicand
  \newcommand{\sqrtp}[2][\ ]{\sqrt[#1]{(#2)}}
  \newcommand{\sqrtb}[2][\ ]{\sqrt[#1]{\{#2\}}}
  \newcommand{\sqrtbr}[2][\ ]{\sqrt[#1]{\,[#2]}}
  \newcommand{\bigsqrt}[2][\ ]{\oldsqrt[#1]{\vphantom{#2}}#2}
  \newcommand{\bigsqrtb}[2][\ ]{\oldsqrt[#1]{\vphantom{\bigg|}}\left\{#2\right\}}
  \newcommand{\bigsqrtp}[2][\ ]{\oldsqrt[#1]{\vphantom{#2}}\!\!\left(#2\right)}
  \newcommand{\DELTA}{\delta}
  \newcommand{\EPSILON}{\epsilon}
  % Don't add visual delimiters to large integers/long decimals
  \newcommand{\MC}{}
  \newcommand{\MS}{}
}
%% End of modernization code %%

%%%% Running heads %%%%
\newcommand{\FlushRunningHeads}{%
  \clearpage
  \pagestyle{fancy}
  \fancyhf{}
  \cleardoublepage
  \thispagestyle{empty}
  \ifthenelse{\boolean{ForPrinting}}
    {\fancyhead[RO,LE]{\thepage}}
    {\fancyhead[R]{\thepage}}
}
% ** \Chapter{X} uses optional argument to set separate running heads
\newcommand{\SetCenterHeads}[2][]{%
  \ifthenelse{\equal{#1}{}}{%
    \fancyhead[C]{{\footnotesize #2}}%
  }{%
    \fancyhead[CE]{{\footnotesize #1}}%
    \fancyhead[CO]{{\footnotesize #2}}%
  }%
}
\newcommand{\SetCornerHeads}[1]{%
  \ifthenelse{\boolean{ForPrinting}}{%
    \fancyhead[RE]{[\ChapNo}%
    \fancyhead[LO]{#1]}%
  }{%
    \fancyhead[L]{[\ChapNo\,:\,#1]}%
  }%
}
\newcommand{\BookMark}[3][]{%
  \phantomsection%
  \ifthenelse{\equal{#1}{}}{%
    \pdfbookmark[#2]{#3}{#3}%
  }{%
    \pdfbookmark[#2]{#3}{#1}%
  }%
}

%%%% Major document divisions %%%%
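% ** Usage sketch for the running-head helpers above (transcriber's
% ** comment; the head texts are hypothetical examples):
% **   \SetCenterHeads{REAL VARIABLES}       -- same centered head on all pages
% **   \SetCenterHeads[EVEN HEAD]{ODD HEAD}  -- distinct even/odd heads in the
% **                                            two-sided ForPrinting layout
% **   \BookMark{0}{Contents}                -- depth-0 PDF bookmark at the
% **                                            current anchor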
\newcommand{\FrontMatter}{%
  \cleardoublepage
  \frontmatter
  \BookMark{-1}{Front Matter}
}
\newcommand{\PGBoilerPlate}{%
  \pagenumbering{Alph}
  \pagestyle{empty}
  \BookMark{0}{PG Boilerplate}
}
\newcommand{\MainMatter}{%
  \FlushRunningHeads
  \mainmatter
  \BookMark{-1}{Main Matter}
}
\newcommand{\BackMatter}{%
  \FlushRunningHeads
  \backmatter
  \BookMark{-1}{Back Matter}
}
\newcommand{\PGLicense}{%
  \FlushRunningHeads
  \pagenumbering{Roman}
  \BookMark{-1}{PG License}
  \SetCenterHeads{License}
}
\newcommand{\TranscribersNote}[2][]{%
  \begin{minipage}{0.85\textwidth}
    \small
    \BookMark[#1]{0}{Transcriber's Note}
    \subsection*{\centering\normalfont\scshape\normalsize\TransNote}
    #2
  \end{minipage}
}

%%%% Table of Contents %%%%
% Misc. macros for internal use
\newcounter{tocentry}
\setcounter{tocentry}{0}
\newcommand{\ToCAnchor}{}
\newcommand{\SectPageLine}{%
  \parbox{\textwidth}{\scriptsize SECT.\hfill PAGE}\\%
}
% Contents heading
\newcommand{\Contents}{%
  \FlushRunningHeads
  \setlength{\headheight}{15pt}
  \SetCenterHeads{CONTENTS}
  \BookMark{0}{Contents}
  \Section{CONTENTS}
}
% Chapter entries
\newcommand{\ToCChap}[2]{%
  \subsection*{\centering\normalfont\small #1}
  \subsubsection*{\centering\normalfont\footnotesize #2}
}
% Section(s) entries
% ** Macro discards third argument (original page number)
\newcommand{\ToCSect}[4]{%
  \noindent\Strut% Issue vertical space to see if we'll be set on a new page
  % If #1 is empty, generate our own label, and update \ToCAnchor
  \ifthenelse{\equal{#1}{}}{% "Miscellaneous examples" line
    \stepcounter{tocentry}\label{toc:special\thetocentry}%
    \ifthenelse{\not\equal{\pageref{toc:special\thetocentry}}{\ToCAnchor}}{%
      \renewcommand{\ToCAnchor}{\pageref{toc:special\thetocentry}}%
      \SectPageLine%
    }{}%
  }{% else use #1 to generate label, and update \ToCAnchor
    \label{toc:#1}%
    \ifthenelse{\not\equal{\pageref{toc:#1}}{\ToCAnchor}}{%
      \renewcommand{\ToCAnchor}{\pageref{toc:#1}}%
      \SectPageLine%
    }{}%
  }%
  \settowidth{\TmpLen}{999--999.
}% Maximum heading width
  % ** Width (2em) must match \ToCPage width below
  \parbox[b]{\textwidth-2em}{\Strut\small\hangindent1.5\TmpLen%
    \makebox[\TmpLen][l]{#1}#2\ \dotfill}\ToCPage{#4}%
}
% Appendix entries
% ** Macro discards third argument (original page number)
\newcommand{\ToCApp}[3]{%
  \noindent\parbox[b]{\textwidth-2em}{%
    \small\textsc{Appendix}~\makebox[2em][l]{#1.} #2\ \dotfill}\ToCPage{appendix:#1}%
}
% Page numbers
\newcommand{\ToCPage}[1]{\makebox[2em][r]{\small\pageref{#1}}}
% ** Approximate; refers to original page separators, no hyperlink
\newcommand{\PgNo}[2][]{%
  \ifthenelse{\equal{#1}{}}{%
    \pageref*{pg:#2}%
  }{%
    \pageref{page:#2}%
  }%
}
\newenvironment{ToCPar}{\begin{quote}\footnotesize}{\end{quote}}

%%%% Document Sectioning %%%%
\newcommand{\RunInHead}[1]{\paragraph*{\indent #1}}
\newcommand{\ChapNo}{}
\newcommand{\Preface}[1]{%
  \section*{\centering\normalfont#1}
}
% \Chapter[Running head]{Number}{Heading title}
\newcommand{\Chapter}[3][]{%
  \FlushRunningHeads
  \phantomsection
  \label{chapter:#2}
  \BookMark{0}{Chapter #2}%
  \renewcommand{\ChapNo}{#2}
  \thispagestyle{plain}
  %
  \ifthenelse{\equal{#1}{}}{%
    \ifthenelse{\equal{#2}{IV}}{% Chapters IV, X get asymmetric heading
      \SetCenterHeads[LIMITS OF FUNCTIONS OF A]
        {POSITIVE INTEGRAL VARIABLE}%
    }{% Not Chapter IV
      \ifthenelse{\equal{#2}{X}}{%
        \SetCenterHeads[THE GENERAL THEORY OF THE LOGARITHMIC,]
          {EXPONENTIAL, AND CIRCULAR FUNCTIONS}%
      }{% Not Chapter X: use chapter title as running head
        \SetCenterHeads{#3}%
      }%
    }%
  }{% Otherwise, use the manually specified running head
    \SetCenterHeads{#1}%
  }
  \ChapterSpace
  \section*{\centering CHAPTER #2}
  \subsection*{\centering\normalfont\small #3}
}
\newcommand{\Section}[1]{\subsection*{\centering\normalfont #1}}
\newcommand{\Appendix}[3]{%
  \FlushRunningHeads
  \label{appendix:#1}
  \BookMark{0}{Appendix #1}
  \renewcommand{\ChapNo}{A.#2}
  \thispagestyle{plain}
  %
  \SetCenterHeads{APPENDIX #1}
  \ChapterSpace
  \section*{\centering\normalfont APPENDIX #1}
  \subsection*{\centering\normalfont\scshape #2}
  \subsubsection*{\centering\normalfont\itshape #3}
}

% Numbered sections; use dedicated counter (not macro arg.) to create labels
\newcounter{ParNo}
\newcommand{\Paragraph}[1]{%
  \RunInHead{#1}%
  \stepcounter{ParNo}\phantomsection\label{par:\theParNo}%
  \SetCornerHeads{\theParNo}%
}
\newcommand{\Par}[1]{\RunInHead{\normalfont\itshape #1}}

%%%% Other semantic units %%%%
% Numbered item
\newcommand{\Item}[1]{\makebox[1.5em][l]{\normalfont\upshape#1\Strut}}
% Parenthesized item
\newcommand{\Itemp}[1]{%
  \ifmmode\makebox[2.25em][l]{\normalfont\upshape#1}%
  \else\makebox[2.25em][l]{\normalfont\upshape#1\Strut}%
  \fi%
}
\newcommand{\SubItem}[1]{\quad\Itemp{#1}}
\newcommand{\Hang}[1][6em]{\hangindent#1}

% Template for definitions, theorems, corollaries
\newenvironment{MyEnvt}[2]{%
  \Medskip\par%
  \ifthenelse{\equal{#1}{}}{%
    \textsc{#2.}
  }{%
    \textsc{#2 #1}
  }%
  \itshape\ignorespaces
}{\normalfont\Medskip}

% Document-level environments
\newenvironment{Theorem}[1][]{\begin{MyEnvt}{#1}{Theorem}}{\end{MyEnvt}}
\newenvironment{Corollary}[1][]{\begin{MyEnvt}{#1}{Corollary}}{\end{MyEnvt}}
\newenvironment{Cor}[1][]{\begin{MyEnvt}{#1}{Cor}}{\end{MyEnvt}}
\newenvironment{Definition}[1][]{\begin{MyEnvt}{#1}{Definition}}{\end{MyEnvt}}
\newenvironment{Definitions}[1][]{\begin{MyEnvt}{#1}{Definitions}}{\end{MyEnvt}}

% Miscellaneous italicized constructs
\newenvironment{Construction}[1][\Medskip]{#1\itshape}{\normalfont\Medskip}
\newenvironment{Defn}{\itshape\ignorespaces}{\normalfont}
\newenvironment{Result}{\itshape\ignorespaces}{\par\Medskip\normalfont}
\newenvironment{ParTheorem}[1]{\RunInHead{#1}\itshape}{\normalfont\Medskip}

% "Examples" sections; auto-number using dedicated counter
\newcounter{ExNo}
\newenvironment{Examples}[1]{%
  \small\ifthenelse{\not\equal{#1}{}}{%
    \RunInHead{Examples #1}%
    \stepcounter{ExNo}\phantomsection\label{examples:\roman{ExNo}}%
  }{
    \phantomsection\label{misc:\ChapNo}%
  }%
}{\par\Medskip\normalsize}

% Passages of small
% text having no special run-in heading
\newenvironment{Remark}{\Medskip\par\small}{\normalsize\Medskip}

\newcommand{\Signature}[2]{%
  \null\hfill#1\hspace*{\parindent}\\
  \hspace*{\parindent}#2%
}

% Equation-like entities with \Item-like numbering
\newcommand{\CenterLine}[3][\qquad]{%
  \[
  \makebox[\textwidth]{\indent#2 \hfill #3 #1 \hfill}
  \]
}
% Same, but with "Df." tag at right margin
\newcommand{\CenterDef}[3][]{%
  \[
  \makebox[\textwidth]{\indent#2 \hfill #3\qquad \hfill\text{Df.}\rlap{#1}\quad}
  \]
}
\newcommand{\MathTrip}[1]{%
  \pagebreak[0]%
  \hfil\allowbreak\null\nobreak\hfill\nobreak\mbox{(\textit{Math.\ Trip.}\ #1)}%
  \pagebreak[1]%
}

%%%% Misc. textual macros %%%%
\newcommand{\First}[1]{\textsc{#1}}
\newcommand{\continued}{{\normalfont\textit{continued}}}
\newcommand{\Emph}[1]{{\textbf{\upshape#1}}}
\newcommand{\Topic}[1]{\textbf{\upshape#1}}
\newcommand{\eg}{\textit{e.g.}}
\newcommand{\ie}{\textit{i.e.}}
\newcommand{\Ie}{\textit{I.e.}}
\renewcommand{\(}{{\upshape(}}
\renewcommand{\)}{{\upshape)}}
% Fixed-width lines on the copyright page
\newcommand{\SetLine}[2][\TmpLen]{\makebox[#1][s]{#2}}

%%%% Illustrations %%%%
% Inclusion wrapper
\newcommand{\Graphic}[3][pdf]{\includegraphics[width=#2]{./images/#3.#1}}
% \Figure[width]{Figure number}{File name}
\newcommand{\Figure}[3][0.8\textwidth]{%
  \begin{figure}[hbt!]
    \centering
    \Graphic{#1}{#3}
    \caption{Fig.~#2.}
    \label{fig:#2}
  \end{figure}\ignorespaces%
}
% \Figures{width1}{fig1}{graphic1}{width2}{fig2}{graphic2}
\newcommand{\Figures}[6]{%
  \begin{figure}[hbt!]
    \begin{minipage}{0.5\textwidth}
      \centering
      \Graphic{#1}{#3}
      \caption{Fig.~#2.}
      \label{fig:#2}
    \end{minipage}%
    \begin{minipage}{0.5\textwidth}
      \centering
      \Graphic{#4}{#6}
      \caption{Fig.~#5.}
      \label{fig:#5}
    \end{minipage}%
  \end{figure}\ignorespaces%
}

%%%% Cross-referencing %%%%
% Original page separators; generated numbers used in the ToC
\newcommand{\PageSep}[1]{\PageLabel[pg]{#1}\ignorespaces}
%% Anchors
\newcommand{\PageLabel}[2][page]{\phantomsection\label{#1:#2}}
% Code stub; cross-referencing eqn numbers not feasible
\newcommand{\Tag}[1]{\tag*{#1}}
%% Links
\newcommand{\PageRef}[2]{\hyperref[page:#2]{#1~\pageref*{page:#2}}}
\newcommand{\Fig}[1]{\hyperref[fig:#1]{Fig.~#1}}
% Chapter/appendix reference
\newcommand{\Ref}[2]{%
  \ifthenelse{\equal{#1}{Appendix}}{%
    \hyperref[appendix:#2]{#1~#2}%
  }{%
    \ifthenelse{\not\equal{#1}{}}{%
      \hyperref[chapter:#2]{#1~#2}%
    }{%
      \hyperref[chapter:#2]{#2}%
    }%
  }%
}
% Paragraph reference
\newcommand{\SecNo}[2][]{%
  \ifthenelse{\equal{#1}{}}{%
    \hyperref[par:#2]{{\normalfont\upshape #2}}%
  }{%
    \hyperref[par:#2]{#1\:{\normalfont\upshape #2}}%
  }%
}
% "Examples" section reference
\newcommand{\Ex}[1]{%
  \hyperref[examples:#1]{Ex.~\textsc{#1}}%
}
\newcommand{\Exs}[2][Exs.~]{%
  \hyperref[examples:#2]{#1\textsc{#2}}%
}
\newcommand{\MiscEx}[1]{\hyperref[misc:#1]{Misc.~Ex.}}
\newcommand{\MiscExs}[1]{\hyperref[misc:#1]{Misc.~Exs.}}
% Code stub; no hyperlinking
\newcommand{\Eq}[1]{{\upshape#1}}

%%%% Typographical conveniences %%%%
\newcommand{\Inum}[1]{{\upshape#1}}
\newcommand{\ia}{\textit{a}}
\newcommand{\ib}{\textit{b}}
\newcommand{\ic}{\textit{c}}
\newcommand{\id}{\textit{d}}
\newcommand{\TEntry}[1]{\multicolumn{1}{c}{#1}}

%%%% Misc.
%%%% mathematical macros %%%%
\newcommand{\ds}{\displaystyle}
\renewcommand{\leq}{\leqq}
\renewcommand{\geq}{\geqq}
\newcommand{\bigint}[1][1.3]{\scalebox{#1}{$\ds\int$}}
\newcommand{\btw}{\mathbin{)\kern-5pt(}}
\newcommand{\dd}{\partial}
\newcommand{\tsum}{{\textstyle\sum}}
\newcommand{\Mu}{\mathrm{M}}

% Duplicate Hardy's notation
\renewcommand{\limsup}{\varlimsup}
\renewcommand{\liminf}{\varliminf}

%% Named operators
\DeclareMathOperator{\ArcCos}{arc\,cos}
\DeclareMathOperator{\ArcCosec}{arc\,cosec}
\DeclareMathOperator{\ArcCot}{arc\,cot}
\DeclareMathOperator{\ArcSec}{arc\,sec}
\DeclareMathOperator{\ArcSin}{arc\,sin}
\DeclareMathOperator{\ArcTan}{arc\,tan}
\DeclareMathOperator{\cosec}{cosec}
\DeclareMathOperator{\sech}{sech}
\DeclareMathOperator{\cosech}{cosech}
\DeclareMathOperator{\argcosh}{arg\,cosh}
\DeclareMathOperator{\argcoth}{arg\,coth}
\DeclareMathOperator{\argsinh}{arg\,sinh}
\DeclareMathOperator{\argtanh}{arg\,tanh}
\newcommand{\arccosec}{\ArcCosec}
\newcommand{\arccot}{\ArcCot}
\newcommand{\arcsec}{\ArcSec}
\renewcommand{\arccos}{\ArcCos}
\renewcommand{\arcsin}{\ArcSin}
\renewcommand{\arctan}{\ArcTan}
\DeclareMathOperator{\Cis}{Cis}
\DeclareMathOperator{\Log}{Log}
\DeclareMathOperator{\sgn}{\textit{sgn}\,}
\DeclareMathOperator{\am}{am}
\DeclareMathOperator{\Real}{\mathbf{R}}
\DeclareMathOperator{\Imag}{\mathbf{I}}
\renewcommand{\Re}{\Real}
\renewcommand{\Im}{\Imag}

% Handle degree symbols and centered dots as Latin-1 characters
\DeclareInputText{176}{\ifmmode{{}^\circ}\else\textdegree\fi}
\DeclareInputText{183}{\ifmmode\cdot\else\textperiodcentered\fi}

% Repeating decimals
\newcommand{\Repeat}[1]{\overline{#1\vphantom{|}}}
% Line segments
\newcommand{\Seg}[1]{\overline{#1\vphantom{P'}}}
% Wide ellipsis (used only once)
\newcommand{\DotRow}[1]{\makebox[#1][c]{\dotfill}}

%%%%%%%%%%%%%%%%%%%%%%%% START OF DOCUMENT %%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\FrontMatter

%%%% PG BOILERPLATE %%%%
\PGBoilerPlate
\begin{center}
\begin{minipage}{\textwidth}
\small
\begin{PGtext}
The Project Gutenberg EBook of A Course of Pure Mathematics, by
G. H. (Godfrey Harold) Hardy

This eBook is for the use of anyone anywhere at no cost and with
almost no restrictions whatsoever.  You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included
with this eBook or online at www.gutenberg.net


Title: A Course of Pure Mathematics
       Third Edition

Author: G. H. (Godfrey Harold) Hardy

Release Date: February 5, 2012 [EBook #38769]

Language: English

Character set encoding: ISO-8859-1

*** START OF THIS PROJECT GUTENBERG EBOOK A COURSE OF PURE MATHEMATICS ***
\end{PGtext}
\end{minipage}
\end{center}
\clearpage

%%%% Credits and transcriber's note %%%%
\begin{center}
\begin{minipage}{\textwidth}
\begin{PGtext}
Produced by Andrew D. Hwang, Brenda Lewis, and the Online
Distributed Proofreading Team at http://www.pgdp.net (This
file was produced from images generously made available by
The Internet Archive/American Libraries.)
\end{PGtext}
\end{minipage}
\vfill
\TranscribersNote{\TransNoteText}
\end{center}

%%%%%%%%%%%%%%%%%%%%%%%%%%% FRONT MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%
\PageSep{i}
\cleardoublepage
\pagenumbering{roman}

\null\vfill
\begin{center}
  \bfseries
  \LARGE A COURSE \\[\baselineskip]
  OF \\[\baselineskip]
  \Huge PURE MATHEMATICS
\end{center}
\vfill
\clearpage

\PageSep{ii}
\null\vfill
\begin{center}
  CAMBRIDGE UNIVERSITY PRESS \\
  C. F. CLAY, \textsc{Manager} \\
  LONDON: FETTER LANE, E.C. 4 \\
  % [Publisher's device]
  \Graphic[png]{1in}{device}

  \small
  \settowidth{\TmpLen}{TOKYO: MARUZEN-KABUSHIKI-KAISHA\quad}%
  \SetLine{NEW YORK : THE MACMILLAN CO.} \\
  \SetLine{$\left.\kern -1pt%\setlength{\arraycolsep}{0pt}
    \begin{array}{@{}l@{}}
      \text{BOMBAY} \\
      \text{CALCUTTA} \\
      \text{MADRAS}
    \end{array}
    \right\}$ MACMILLAN AND CO., \textsc{Ltd.}} \\
  \SetLine{TORONTO : THE MACMILLAN CO.
OF} \\
  \SetLine{\hfil CANADA, \textsc{Ltd.}\hfil} \\
  \SetLine{TOKYO : MARUZEN-KABUSHIKI-KAISHA} \\[24pt]
  \footnotesize ALL RIGHTS RESERVED
\end{center}
\vfill
\clearpage

\PageSep{iii}
\begin{center}
  \bfseries
  \LARGE A COURSE \\[12pt]
  \large OF \\[12pt]
  \Huge PURE MATHEMATICS
  \par
  \vfil
  \normalfont
  \normalsize BY \\
  \Large G.~H. HARDY, M.A., F.R.S.

  \bigskip
  \footnotesize FELLOW OF NEW COLLEGE \\[4pt]
  SAVILIAN PROFESSOR OF GEOMETRY IN THE UNIVERSITY \\[4pt]
  OF OXFORD \\[4pt]
  LATE FELLOW OF TRINITY COLLEGE, CAMBRIDGE
  \vfil\vfil
  \textsf{THIRD EDITION}
  \vfil\vfil\vfil
  \Large Cambridge \\
  at the University Press \\
  1921
\end{center}
\clearpage

\PageSep{iv}
\null\vfill
\begin{center}
  \textit{First Edition} 1908 \\
  \textit{Second Edition} 1914 \\
  \textit{Third Edition} 1921
\end{center}
\vfill\clearpage

\PageSep{v}
\Preface{PREFACE TO THE THIRD EDITION}

\First{No} extensive changes have been made in this edition. The most
important are in \SecNo[§§]{80}--\SecNo{82}, which I have rewritten in
accordance with suggestions made by Mr~S.~Pollard.

The earlier editions contained no satisfactory account of the genesis
of the circular functions. I have made some attempt to meet this
objection in \SecNo[§]{158} and \Ref{Appendix}{III}\@.
\Ref{Appendix}{IV} is also an addition.

It is curious to note how the character of the criticisms I have had
to meet has changed. I was too meticulous and pedantic for my pupils
of fifteen years ago: I am altogether too popular for the Trinity
scholar of to-day. I need hardly say that I find such criticisms very
gratifying, as the best evidence that the book has to some extent
fulfilled the purpose with which it was written.

\Signature{G.~H.~H.}{\textit{August} 1921}

\Preface{EXTRACT FROM THE PREFACE TO THE SECOND EDITION}

\First{The} principal changes made in this edition are as follows.
I have inserted in \Ref{Chapter}{I} a sketch of Dedekind's theory of
real numbers, and a proof of Weierstrass's theorem concerning points
of condensation; in \Ref{Chapter}{IV} an account of `limits of
indetermination' and the `general principle of convergence'; in
\Ref{Chapter}{V} a proof of the `Heine-Borel Theorem', Heine's
theorem concerning uniform continuity, and the fundamental theorem
concerning implicit functions; in \Ref{Chapter}{VI} some additional
matter concerning the integration of algebraical functions; and in
\Ref{Chapter}{VII} a section on differentials. I have also rewritten
in a more general form the sections which deal with the definition of
the definite integral.

In order to find space for these insertions I have deleted a good
deal of the analytical geometry and formal trigonometry contained in
Chapters II~and~III of the first edition. These changes have
naturally involved a large number of minor alterations.

\Signature{G.~H.~H.}{\textit{October} 1914}
\PageSep{vi}

\Preface{EXTRACT FROM THE PREFACE TO THE FIRST EDITION}

\First{This} book has been designed primarily for the use of first
year students at the Universities whose abilities reach or approach
something like what is usually described as `scholarship standard'.
I hope that it may be useful to other classes of readers, but it is
this class whose wants I have considered first. It is in any case a
book for mathematicians: I have nowhere made any attempt to meet the
needs of students of engineering or indeed any class of students
whose interests are not primarily mathematical.

I regard the book as being really elementary. There are plenty of
hard examples (mainly at the ends of the chapters): to these I have
added, wherever space permitted, an outline of the solution. But I
have done my best to avoid the inclusion of anything that involves
really difficult ideas.
For instance, I make no use of the `principle of convergence':
uniform convergence, double series, infinite products, are never
alluded to: and I prove no general theorems whatever concerning the
inversion of limit-operations---I never even define
$\dfrac{\dd^{2} f}{\dd x\, \dd y}$ and $\dfrac{\dd^{2} f}{\dd y\, \dd x}$.
In the last two chapters I have occasion once or twice to integrate a
power-series, but I have confined myself to the very simplest cases
and given a special discussion in each instance. Anyone who has read
this book will be in a position to read with profit Dr~Bromwich's
\textit{Infinite Series}, where a full and adequate discussion of all
these points will be found.

\Signature{}{\textit{September} 1908}
\PageSep{vii}

\Contents

\ToCChap{CHAPTER I}{REAL VARIABLES}
% SECT. PAGE
\ToCSect{1--2.}{Rational numbers}{1}{par:1}
\ToCSect{3--7.}{Irrational numbers}{3}{par:3}
\ToCSect{8.}{Real numbers}{13}{par:8}
\ToCSect{9.}{Relations of magnitude between real numbers}{15}{par:9}
\ToCSect{10--11.}{Algebraical operations with real numbers}{17}{par:10}
\ToCSect{12.}{The number~$\sqrt{2}$}{19}{par:12}
\ToCSect{13--14.}{Quadratic surds}{19}{par:13}
\ToCSect{15.}{The continuum}{23}{par:15}
\ToCSect{16.}{The continuous real variable}{26}{par:16}
\ToCSect{17.}{Sections of the real numbers. Dedekind's Theorem}{27}{par:17}
\ToCSect{18.}{Points of condensation}{29}{par:18}
\ToCSect{19.}{Weierstrass's Theorem}{30}{par:19}
\ToCSect{}{Miscellaneous Examples}{31}{misc:I}
\begin{ToCPar}
  Decimals,~\PgNo{1}.
  Gauss's Theorem,~\PgNo{6}.
  Graphical solution of quadratic equations,~\PgNo{20}.
  Important inequalities,~\PgNo{32}.
  Arithmetical and geometrical means,~\PgNo{32}.
  Schwarz's Inequality,~\PgNo{33}.
  Cubic and other surds,~\PgNo{34}.
  Algebraical numbers,~\PgNo{36}.
\end{ToCPar}

\ToCChap{CHAPTER II}{FUNCTIONS OF REAL VARIABLES}
\ToCSect{20.}{The idea of a function}{38}{par:20}
\ToCSect{21.}{The graphical representation of functions.
Coordinates}{41}{par:21}
\ToCSect{22.}{Polar coordinates}{43}{par:22}
\ToCSect{23.}{Polynomials}{44}{par:23}
\ToCSect{24--25.}{Rational functions}{47}{par:24}
\ToCSect{26--27.}{Algebraical functions}{49}{par:26}
\ToCSect{28--29.}{Transcendental functions}{52}{par:28}
\ToCSect{30.}{Graphical solution of equations}{58}{par:30}
\ToCSect{31.}{Functions of two variables and their graphical representation}{59}{par:31}
\PageSep{viii}
\ToCSect{32.}{Curves in a plane}{60}{par:32}
\ToCSect{33.}{Loci in space}{61}{par:33}
\ToCSect{}{Miscellaneous Examples}{65}{misc:II}
\begin{ToCPar}
  Trigonometrical functions,~\PgNo{53}.
  Arithmetical functions,~\PgNo{55}.
  Cylinders,~\PgNo{62}.
  Contour maps,~\PgNo{62}.
  Cones,~\PgNo{63}.
  Surfaces of revolution,~\PgNo{63}.
  Ruled surfaces,~\PgNo{64}.
  Geometrical constructions for irrational numbers,~\PgNo{66}.
  Quadrature of the circle,~\PgNo{68}.
\end{ToCPar}

\ToCChap{CHAPTER III}{COMPLEX NUMBERS}
\ToCSect{34--38.}{Displacements}{69}{par:34}
\ToCSect{39--42.}{Complex numbers}{78}{par:39}
\ToCSect{43.}{The quadratic equation with real coefficients}{81}{par:43}
\ToCSect{44.}{Argand's diagram}{84}{par:44}
\ToCSect{45.}{\DPchg{de~Moivre's}{De~Moivre's} Theorem}{86}{par:45}
\ToCSect{46.}{Rational functions of a complex variable}{88}{par:46}
\ToCSect{47--49.}{Roots of complex numbers}{98}{par:47}
\ToCSect{}{Miscellaneous Examples}{101}{misc:III}
\begin{ToCPar}
  Properties of a triangle,~\PgNo{90},~\PgNo{101}.
  Equations with complex coefficients,~\PgNo{91}.
  Coaxal circles,~\PgNo{93}.
  Bilinear and other transformations,~\PgNo{94},~\PgNo{97},~\PgNo{104}.
  Cross ratios,~\PgNo{96}.
  Condition that four points should be concyclic,~\PgNo{97}.
  Complex functions of a real variable,~\PgNo{97}.
  Construction of regular polygons by Euclidean methods,~\PgNo{100}.
  Imaginary points and lines,~\PgNo{103}.
\end{ToCPar} \ToCChap{CHAPTER IV}{LIMITS OF FUNCTIONS OF A POSITIVE INTEGRAL VARIABLE} \ToCSect{50.}{Functions of a positive integral variable}{106}{par:50} \ToCSect{51.}{Interpolation}{107}{par:51} \ToCSect{52.}{Finite and infinite classes}{108}{par:52} \ToCSect{53--57.}{Properties possessed by a function of~$n$ for large values of~$n$}{109}{par:53} \ToCSect{58--61.}{Definition of a limit and other definitions}{116}{par:58} \ToCSect{62.}{Oscillating functions}{121}{par:62} \ToCSect{63--68.}{General theorems concerning limits}{125}{par:63} \ToCSect{69--70.}{Steadily increasing or decreasing functions}{131}{par:69} \ToCSect{71.}{Alternative proof of Weierstrass's Theorem}{134}{par:71} \ToCSect{72.}{The limit of~$x^{n}$}{134}{par:72} \ToCSect{73.}{The limit of $\left(1 + \dfrac{1}{n}\right)^{n}$}{137}{par:73} \ToCSect{74.}{Some algebraical lemmas}{138}{par:74} \ToCSect{75.}{The limit of $n(\sqrt[n]{x} - 1)$}{139}{par:75} \ToCSect{76--77.}{Infinite series}{140}{par:76} \ToCSect{78.}{The infinite geometrical series}{143}{par:78} \PageSep{ix} \ToCSect{79.}{The representation of functions of a continuous real variable by means of limits}{147}{par:79} \ToCSect{80.}{The bounds of a bounded aggregate}{149}{par:80} \ToCSect{81.}{The bounds of a bounded function}{149}{par:81} \ToCSect{82.}{The limits of indetermination of a bounded function}{150}{par:82} \ToCSect{83--84.}{The general principle of convergence}{151}{par:83} \ToCSect{85--86.}{Limits of complex functions and series of complex terms}{153}{par:85} \ToCSect{87--88.}{Applications to~$z^{n}$ and the geometrical series}{156}{par:87} \ToCSect{}{Miscellaneous Examples}{157}{misc:IV} \begin{ToCPar} Oscillation of $\sin n\theta\pi$,~\PgNo{121},~\PgNo{123},~\PgNo{151}. Limits of $n^{k} x^{n}$, $\sqrt[n]{x}$, $\sqrt[n]{n}$, $\sqrtp[n]{n!}$, $\dfrac{x^{n}}{n!}$, $\dbinom{m}{n} x^{n}$,~\PgNo{136},~\PgNo{139}. Decimals,~\PgNo{143}. Arithmetical series,~\PgNo{146}. Harmonical series,~\PgNo{147}. 
Equation $x_{n+1} = f(x_{n})$,~\PgNo{158}. Expansions of rational functions,~\PgNo{159}. Limit of a mean value,~\PgNo{160}. \end{ToCPar} \ToCChap{CHAPTER V}{LIMITS OF FUNCTIONS OF A CONTINUOUS VARIABLE\@. CONTINUOUS AND DISCONTINUOUS FUNCTIONS} \ToCSect{89--92.}{Limits as $x\to\infty$ or $x\to-\infty$}{162}{par:89} \ToCSect{93--97.}{Limits as $x\to a$}{165}{par:93} \ToCSect{98--99.}{Continuous functions of a real variable}{174}{par:98} \ToCSect{100--104.}{Properties of continuous functions. Bounded functions. The oscillation of a function in an interval}{179}{par:100} \ToCSect{105--106.}{Sets of intervals on a line. The Heine-Borel Theorem}{185}{par:105} \ToCSect{107.}{Continuous functions of several variables}{190}{par:107} \ToCSect{108--109.}{Implicit and inverse functions}{191}{par:108} \ToCSect{}{Miscellaneous Examples}{194}{misc:V} \begin{ToCPar} Limits and continuity of polynomials and rational functions,~\PgNo{169},~\PgNo{176}. Limit of $\dfrac{x^{m} - a^{m}}{x - a}$,~\PgNo{171}. Orders of smallness and greatness,~\PgNo{172}. Limit of $\dfrac{\sin{x}}{x}$,~\PgNo{173}. Infinity of a function,~\PgNo{177}. Continuity of $\cos x$ and $\sin x$,~\PgNo{177}. Classification of discontinuities,~\PgNo{178}. \end{ToCPar} \ToCChap{CHAPTER VI}{DERIVATIVES AND INTEGRALS} \ToCSect{110--112.}{Derivatives}{197}{par:110} \ToCSect{113.}{General rules for differentiation}{203}{par:113} \ToCSect{114.}{Derivatives of complex functions}{205}{par:114} \ToCSect{115.}{The notation of the differential calculus}{205}{par:115} \ToCSect{116.}{Differentiation of polynomials}{207}{par:116} \ToCSect{117.}{Differentiation of rational functions}{209}{par:117} \ToCSect{118.}{Differentiation of algebraical functions}{210}{par:118} \PageSep{x} \ToCSect{119.}{Differentiation of transcendental functions}{212}{par:119} \ToCSect{120.}{Repeated differentiation}{214}{par:120} \ToCSect{121.}{General theorems concerning derivatives. 
Rolle's Theorem}{217}{par:121} \ToCSect{122--124.}{Maxima and minima}{219}{par:122} \ToCSect{125--126.}{The Mean Value Theorem}{226}{par:125} \ToCSect{127--128.}{Integration. The logarithmic function}{228}{par:127} \ToCSect{129.}{Integration of polynomials}{232}{par:129} \ToCSect{130--131.}{Integration of rational functions}{233}{par:130} \ToCSect{132--139.}{Integration of algebraical functions. Integration by rationalisation. Integration by parts}{236}{par:132} \ToCSect{140--144.}{Integration of transcendental functions}{245}{par:140} \ToCSect{145.}{Areas of plane curves}{249}{par:145} \ToCSect{146.}{Lengths of plane curves}{251}{par:146} \ToCSect{}{Miscellaneous Examples}{253}{misc:VI} \begin{ToCPar} Derivative of $x^{m}$,~\PgNo{201}. Derivatives of $\cos{x}$ and $\sin{x}$,~\PgNo{201}. Tangent and normal to a curve,~\PgNo{201},~\PgNo{214}. Multiple roots of equations,~\PgNo{208},~\PgNo{255}. Rolle's Theorem for polynomials,~\PgNo{209}. Leibniz' Theorem,~\PgNo{215}. Maxima and minima of the quotient of two quadratics,~\PgNo{223},~\PgNo{256}. Axes of a conic,~\PgNo{226}. Lengths and areas in polar coordinates,~\PgNo{253}. Differentiation of a determinant,~\PgNo{254}. Extensions of the Mean Value Theorem,~\PgNo{258}. Formulae of reduction,~\PgNo{259}. \end{ToCPar} \ToCChap{CHAPTER VII}{ADDITIONAL THEOREMS IN THE DIFFERENTIAL AND INTEGRAL CALCULUS} \ToCSect{147.}{Taylor's Theorem}{262}{par:147} \ToCSect{148.}{Taylor's Series}{266}{par:148} \ToCSect{149.}{Applications of Taylor's Theorem to maxima and minima}{268}{par:149} \ToCSect{150.}{Applications of Taylor's Theorem to the calculation of limits}{268}{par:150} \ToCSect{151.}{The contact of plane curves}{270}{par:151} \ToCSect{152--154.}{Differentiation of functions of several variables}{274}{par:152} \ToCSect{155.}{Differentials}{280}{par:155} \ToCSect{156--161.}{Definite Integrals. 
Areas of curves}{283}{par:156} \ToCSect{162.}{Alternative proof of Taylor's Theorem}{298}{par:162} \ToCSect{163.}{Application to the binomial series}{299}{par:163} \ToCSect{164.}{Integrals of complex functions}{299}{par:164} \ToCSect{}{Miscellaneous Examples}{300}{misc:VII} \begin{ToCPar} Newton's method of approximation to the roots of equations,~\PgNo{265}. Series for $\cos{x}$ and $\sin{x}$,~\PgNo{267}. Binomial series,~\PgNo{267}. Tangent to a curve, \PgNo{272},~\PgNo{283},~\PgNo{303}. Points of inflexion,~\PgNo{272}. Curvature,~\PgNo{273},~\PgNo{302}. Osculating conics,~\PgNo{274},~\PgNo{302}. Differentiation of implicit functions,~\PgNo{283}. Fourier's integrals,~\PgNo{290},~\PgNo{294}. The second mean value theorem,~\PgNo{296}. Homogeneous functions,~\PgNo{302}. Euler's Theorem,~\PgNo{302}. Jacobians,~\PgNo{303}. Schwarz's inequality for integrals,~\PgNo{306}. Approximate values of definite integrals,~\PgNo{307}. Simpson's Rule,~\PgNo{307}. \end{ToCPar} \PageSep{xi} \ToCChap{CHAPTER VIII}{THE CONVERGENCE OF INFINITE SERIES AND INFINITE INTEGRALS} \ToCSect{165--168.}{Series of positive terms. Cauchy's and d'Alembert's tests of convergence}{308}{par:165} \ToCSect{169.}{Dirichlet's Theorem}{313}{par:169} \ToCSect{170.}{Multiplication of series of positive terms}{313}{par:170} \ToCSect{171--174.}{Further tests of convergence. Abel's Theorem. 
Maclaurin's integral test}{315}{par:171} \ToCSect{175.}{The series $\sum n^{-s}$}{319}{par:175} \ToCSect{176.}{Cauchy's condensation test}{320}{par:176} \ToCSect{177--182.}{Infinite integrals}{321}{par:177} \ToCSect{183.}{Series of positive and negative terms}{335}{par:183} \ToCSect{184--185.}{Absolutely convergent series}{336}{par:184} \ToCSect{186--187.}{Conditionally convergent series}{338}{par:186} \ToCSect{188.}{Alternating series}{340}{par:188} \ToCSect{189.}{Abel's and Dirichlet's tests of convergence}{342}{par:189} \ToCSect{190.}{Series of complex terms}{344}{par:190} \ToCSect{191--194.}{Power series}{345}{par:191} \ToCSect{195.}{Multiplication of series in general}{349}{par:195} \ToCSect{}{Miscellaneous Examples}{350}{misc:VIII} \begin{ToCPar} The series $\sum n^{k}r^{n}$ and allied series,~\PgNo{311}. Transformation of infinite integrals by substitution and integration by parts,~\PgNo{327},~\PgNo{328},~\PgNo{333}. The series $\sum a_{n} \cos n\theta$, $\sum a_{n} \sin n\theta$,~\PgNo{338},~\PgNo{343},~\PgNo{344}. Alteration of the sum of a series by rearrangement,~\PgNo{341}. Logarithmic series,~\PgNo{348}. Binomial series, \PgNo{348},~\PgNo{349}. Multiplication of conditionally convergent series,~\PgNo{350},~\PgNo{354}. Recurring series,~\PgNo{352}. Difference equations,~\PgNo{353}. Definite integrals,~\PgNo{355}. Schwarz's inequality for infinite integrals,~\PgNo{356}. 
\end{ToCPar} \ToCChap{CHAPTER IX}{THE LOGARITHMIC AND EXPONENTIAL FUNCTIONS OF A REAL VARIABLE} \ToCSect{196--197.}{The logarithmic function}{357}{par:196} \ToCSect{198.}{The functional equation satisfied by $\log x$}{360}{par:198} \ToCSect{199--201.}{The behaviour of $\log x$ as $x$~tends to infinity or to zero}{360}{par:199} \ToCSect{202.}{The logarithmic scale of infinity}{362}{par:202} \ToCSect{203.}{The number~$e$}{363}{par:203} \ToCSect{204--206.}{The exponential function}{364}{par:204} \ToCSect{207.}{The general power~$a^{x}$}{366}{par:207} \ToCSect{208.}{The exponential limit}{368}{par:208} \ToCSect{209.}{The logarithmic limit}{369}{par:209} \ToCSect{210.}{Common logarithms}{369}{par:210} \ToCSect{211.}{Logarithmic tests of convergence}{374}{par:211} \PageSep{xii} \ToCSect{212.}{The exponential series}{378}{par:212} \ToCSect{213.}{The logarithmic series}{381}{par:213} \ToCSect{214.}{The series for $\arctan x$}{382}{par:214} \ToCSect{215.}{The binomial series}{384}{par:215} \ToCSect{216.}{Alternative development of the theory}{386}{par:216} \ToCSect{}{Miscellaneous Examples}{387}{misc:IX} \begin{ToCPar} Integrals containing the exponential function,~\PgNo{370}. The hyperbolic functions,~\PgNo{372}. Integrals of certain algebraical functions,~\PgNo{373}. Euler's constant,~\PgNo{377},~\PgNo{389}. Irrationality of~$e$,~\PgNo{380}. Approximation to surds by the binomial theorem,~\PgNo{385}. Irrationality of~$\log_{10} n$,~\PgNo{387}. Definite integrals,~\PgNo{393}. 
\end{ToCPar} \ToCChap{CHAPTER X}{THE GENERAL THEORY OF THE LOGARITHMIC, EXPONENTIAL, AND CIRCULAR FUNCTIONS} \ToCSect{217--218.}{Functions of a complex variable}{395}{par:217} \ToCSect{219.}{Curvilinear integrals}{396}{par:219} \ToCSect{220.}{Definition of the logarithmic function}{397}{par:220} \ToCSect{221.}{The values of the logarithmic function}{399}{par:221} \ToCSect{222--224.}{The exponential function}{403}{par:222} \ToCSect{225--226.}{The general power~$a^{z}$}{404}{par:225} \ToCSect{227--230.}{The trigonometrical and hyperbolic functions}{409}{par:227} \ToCSect{231.}{The connection between the logarithmic and inverse trigonometrical functions}{413}{par:231} \ToCSect{232.}{The exponential series}{414}{par:232} \ToCSect{233.}{The series for $\cos z$ and $\sin z$}{416}{par:233} \ToCSect{234--235.}{The logarithmic series}{417}{par:234} \ToCSect{236.}{The exponential limit}{421}{par:236} \ToCSect{237.}{The binomial series}{422}{par:237} \ToCSect{}{Miscellaneous Examples}{425}{misc:X} \begin{ToCPar} The functional equation satisfied by $\Log z$,~\PgNo{402}. The function~$e^{z}$,~\PgNo{407}. Logarithms to any base,~\PgNo{408}. The inverse cosine, sine, and tangent of a complex number,~\PgNo{412}. Trigonometrical series,~\PgNo{417},~\PgNo{420},~\PgNo{431}. Roots of transcendental equations,~\PgNo{425}. Transformations,~\PgNo{426},~\PgNo{428}. Stereographic projection,~\PgNo{427}. Mercator's projection,~\PgNo{428}. Level curves,~\PgNo{429}. Definite integrals,~\PgNo{432}. \end{ToCPar} \ToCApp{I}{The proof that every equation has a root}{433} \ToCApp{II}{A note on double limit problems}{439} \ToCApp{III}{The circular functions}{443} \ToCApp{IV}{The infinite in analysis and geometry}{445} \MainMatter \PageSep{1} \Chapter{I}{REAL VARIABLES} \Paragraph{1. Rational numbers.} A fraction $r = p/q$, where $p$~and~$q$ are positive or negative integers, is called a \emph{rational number}. 
We can suppose (i)~that $p$~and~$q$ have no common factor, as if they have a common factor we can divide each of them by it, and (ii)~that $q$~is positive, since \[ p/(-q) = (-p)/q,\quad (-p)/(-q) = p/q. \] To the rational numbers thus defined we may add the `rational number~$0$' obtained by taking $p = 0$. We assume that the reader is familiar with the ordinary arithmetical rules for the manipulation of rational numbers. The examples which follow demand no knowledge beyond this. \begin{Examples}{I.} \Item{1.} If $r$~and~$s$ are rational numbers, then $r + s$, $r - s$, $rs$, and $r/s$ are rational numbers, unless in the last case $s = 0$ (when $r/s$~is of course meaningless). \Item{2.} {\Loosen If $\lambda$,~$m$, and~$n$ are positive rational numbers, and $m > n$, then $\lambda(m^{2} - n^{2})$, $2\lambda mn$, and $\lambda(m^{2} + n^{2})$ are positive rational numbers. Hence show how to determine any number of right-angled triangles the lengths of all of whose sides are rational.} \Item{3.} Any terminated decimal represents a rational number whose denominator contains no factors other than $2$~or~$5$. Conversely, any such rational number can be expressed, and in one way only, as a terminated decimal. [The general theory of decimals will be considered in \Ref{Ch.}{IV}.] \Item{4.} The positive rational numbers may be arranged in the form of a simple series as follows: \[ \tfrac{1}{1},\quad \tfrac{2}{1},\quad \tfrac{1}{2},\quad \tfrac{3}{1},\quad \tfrac{2}{2},\quad \tfrac{1}{3},\quad \tfrac{4}{1},\quad \tfrac{3}{2},\quad \tfrac{2}{3},\quad \tfrac{1}{4},\ \dots. \] Show that $p/q$ is the $[\frac{1}{2}(p + q - 1)(p + q - 2) + q]$th term of the series. [In this series every rational number is repeated indefinitely. Thus $1$ occurs as $\frac{1}{1}$,~$\frac{2}{2}$, $\frac{3}{3}, \dots$. 
We can of course avoid this by omitting every number \PageSep{2} which has already occurred in a simpler form, but then the problem of determining the precise position of~$p/q$ becomes more complicated.] \end{Examples} \Paragraph{2. The representation of rational numbers by points on a line.} It is convenient, in many branches of mathematical analysis, to make a good deal of use of geometrical illustrations. The use of geometrical illustrations in this way does not, of course, imply that analysis has any sort of dependence upon geometry: they are illustrations and nothing more, and are employed merely for the sake of clearness of exposition. This being so, it is not necessary that we should attempt any logical analysis of the ordinary notions of elementary geometry; we may be content to suppose, however far it may be from the truth, that we know what they mean. Assuming, then, that we know what is meant by a \emph{straight line}, a \emph{segment} of a line, and the \emph{length} of a segment, let us take a straight line~$\Lambda$, produced indefinitely in both directions, and a segment~$A_{0}A_{1}$ of any length. We call $A_{0}$ the \emph{origin}, or \emph{the point~$0$}, and $A_{1}$ \emph{the point~$1$}, and we regard these points as representing the numbers $0$~and~$1$. In order to obtain a point which shall represent a positive rational number $r = p/q$, we choose the point~$A_{r}$ such that \[ A_{0}A_{r}/A_{0}A_{1} = r, \] $A_{0}A_{r}$ being a stretch of the line extending in the same direction along the line as~$A_{0}A_{1}$, a direction which we shall suppose to be from left to right when, as in \Fig{1}, the line is drawn horizontally across the paper. In order to obtain a point to represent a %[Illustration: Fig. 1.] 
\Figure[0.9\textwidth]{1}{p002} negative rational number $r = -s$, it is natural to regard length as a magnitude capable of sign, positive if the length is measured in one direction (that of~$A_{0}A_{1}$), and negative if measured in the other, so that $AB = -BA$; and to take as the point representing $r$ the point~$A_{-s}$ such that \[ A_{0}A_{-s} = -A_{-s}A_{0} = -A_{0}A_{s}. \] \PageSep{3} We thus obtain a point~$A_{r}$ on the line corresponding to every rational value of~$r$, positive or negative, and such that \[ A_{0}A_{r} = r · A_{0}A_{1}; \] {\Loosen and if, as is natural, we take $A_{0}A_{1}$ as our unit of length, and write $A_{0}A_{1} = 1$, then we have} \[ A_{0}A_{r} = r. \] We shall call the points~$A_{r}$ the \emph{rational points} of the line. \Paragraph{3. Irrational numbers.} If the reader will mark off on the line all the points corresponding to the rational numbers whose denominators are $1$,~$2$, $3, \dots$ in succession, he will readily convince himself that he can cover the line with rational points as closely as he likes. We can state this more precisely as follows: \emph{if we take any segment~$BC$ on~$\Lambda$, we can find as many rational points as we please on~$BC$}. Suppose, for example, that $BC$~falls within the segment~$A_{1}A_{2}$. It is evident that if we choose a positive integer~$k$ so that \[ k · BC > 1,\footnotemark \Tag{(1)} \] \footnotetext{The assumption that this is possible is equivalent to the assumption of what is known as the Axiom of Archimedes.}% and divide~$A_{1}A_{2}$ into $k$~equal parts, then at least one of the points of division (say~$P$) must fall inside~$BC$, without coinciding with either $B$~or~$C$. For if this were not so, $BC$~would be entirely included in one of the $k$~parts into which~$A_{1}A_{2}$ has been divided, which contradicts the supposition~\Eq{(1)}. But $P$~obviously corresponds to a rational number whose denominator is~$k$. Thus at least one rational point~$P$ lies between $B$~and~$C$. 
But then we can find another such point~$Q$ between $B$~and~$P$, another between $B$~and~$Q$, and so on indefinitely; \ie, as we asserted above, we can find as many as we please. We may express this by saying that $BC$~includes \emph{infinitely many} rational points. \begin{Remark} The meaning of such phrases as `\emph{infinitely many}' or `\emph{an infinity of}', in such sentences as `$BC$~includes infinitely many rational points' or `there are an infinity of rational points on~$BC$' or `there are an infinity of positive integers', will be considered more closely in \Ref{Ch.}{IV}\@. The assertion `there are an infinity of positive integers' means `given any positive integer~$n$, however large, we can find more than~$n$ positive integers'. This is plainly true \PageSep{4} whatever $n$~may be, \eg\ for $n = 100,000$ or $100,000,000$. The assertion means exactly the same as `we can find \emph{as many positive integers as we please}'. The reader will easily convince himself of the truth of the following assertion, which is substantially equivalent to what was proved in the second paragraph of this section: given any rational number~$r$, and any positive integer~$n$, we can find another rational number lying on either side of~$r$ and differing from~$r$ by less than~$1/n$. It is merely to express this differently to say that we can find a rational number lying on either side of~$r$ and differing from~$r$ \emph{by as little as we please}. Again, given any two rational numbers $r$~and~$s$, we can interpolate between them a chain of rational numbers in which any two consecutive terms differ by as little as we please, that is to say by less than~$1/n$, where $n$~is any positive integer assigned beforehand. \end{Remark} From these considerations the reader might be tempted to infer that an adequate view of the nature of the line could be obtained by imagining it to be formed simply by the rational points which lie on it. 
And it is certainly the case that if we imagine the line to be made up solely of the rational points, and all other points (if there are any such) to be eliminated, the figure which remained would possess most of the properties which common sense attributes to the straight line, and would, to put the matter roughly, look and behave very much like a line. A little further consideration, however, shows that this view would involve us in serious difficulties. Let us look at the matter for a moment with the eye of common sense, and consider some of the properties which we may reasonably expect a straight line to possess if it is to satisfy the idea which we have formed of it in elementary geometry. The straight line must be composed of points, and any segment of it by all the points which lie between its end points. With any such segment must be associated a certain entity called its \emph{length}, which must be a \emph{quantity} capable of \emph{numerical measurement} in terms of any standard or unit length, and these lengths must be capable of combination with one another, according to the ordinary rules of algebra, by means of addition or multiplication. Again, it must be possible to construct a line whose length is the sum or product of any two given lengths. If the length~$PQ$, along a given line, is~$a$, and the length~$QR$, along the same straight line, is~$b$, the length~$PR$ must be~$a + b$. \PageSep{5} Moreover, if the lengths $OP$,~$OQ$, along one straight line, are $1$~and~$a$, and the length~$OR$ along another straight line is~$b$, and if we determine the length~$OS$ by Euclid's construction (Euc.~\textsc{vi}.~12) for a fourth proportional to the lines $OP$,~$OQ$,~$OR$, this length must be~$ab$, the algebraical fourth proportional to $1$,~$a$,~$b$. And it is hardly necessary to remark that the sums and products thus defined must obey the ordinary `laws of algebra'; viz. 
\begin{gather*} a + b = b + a,\quad a + (b + c) = (a + b) + c,\\ ab = ba,\quad a(bc) = (ab)c,\quad a(b + c) = ab + ac. \end{gather*} The lengths of our lines must also obey a number of obvious laws concerning inequalities as well as equalities: thus if $A$,~$B$,~$C$ are three points lying along~$\Lambda$ from left to right, we must have $AB < AC$, and so on. Moreover it must be possible, on our fundamental line~$\Lambda$, to find a point~$P$ such that~$A_{0}P$ is equal to any segment whatever taken along~$\Lambda$ or along any other straight line. All these properties of a line, and more, are involved in the presuppositions of our elementary geometry. Now it is very easy to see that the idea of a straight line as composed of a series of points, each corresponding to a rational number, cannot possibly satisfy all these requirements. There are various elementary geometrical constructions, for example, which purport to construct a length~$x$ such that $x^{2} = 2$. For instance, we %[Illustration: Fig. 2.] \Figure{2}{p005} may construct an isosceles right-angled triangle~$ABC$ such that $AB = AC = 1$. Then if $BC = x$, $x^{2} = 2$. Or we may determine the length~$x$ by means of Euclid's construction (Euc.~\textsc{vi}.~13) for a mean proportional to $1$~and~$2$, as indicated in the figure. Our requirements therefore involve the existence of a length measured by a number~$x$, and a point~$P$ on~$\Lambda$ such that \[ A_{0}P = x,\quad x^{2} = 2. \] \PageSep{6} But it is easy to see that \emph{there is no rational number such that its square is~$2$}. In fact we may go further and say that there is no rational number whose square is~$m/n$, where $m/n$~is any positive fraction in its lowest terms, unless $m$~and~$n$ are both perfect squares. For suppose, if possible, that \[ p^{2}/q^{2} = m/n\DPtypo{.}{,} \] $p$~having no factor in common with~$q$, and $m$~no factor in common with~$n$. Then $np^{2} = mq^{2}$. 
Every factor of~$q^{2}$ must divide~$np^{2}$, and as $p$~and~$q$ have no common factor, every factor of~$q^{2}$ must divide~$n$. Hence $n = \lambda q^{2}$, where $\lambda$~is an integer. But this involves $m = \lambda p^{2}$: and as $m$~and~$n$ have no common factor, $\lambda$~must be unity. Thus $m = p^{2}$, $n = q^{2}$, as was to be proved. In particular it follows, by taking $n = 1$, that an integer cannot be the square of a rational number, unless that rational number is itself integral. It appears then that our requirements involve the existence of a number~$x$ and a point~$P$, not one of the rational points already constructed, such that $A_{0}P = x$, $x^{2} = 2$; and (as the reader will remember from elementary algebra) we write $x = \sqrt{2}$. \begin{Remark} The following alternative proof that no rational number can have its square equal to~$2$ is interesting. Suppose, if possible, that $p/q$~is a positive fraction, in its lowest terms, such that $(p/q)^{2} = 2$ or $p^{2} = 2q^{2}$. It is easy to see that this involves $(2q - p)^{2} = 2(p - q)^{2}$; and so $(2q - p)/(p - q)$ is another fraction having the same property. But clearly $q < p < 2q$, and so $p - q < q$. Hence there is another fraction equal to~$p/q$ and having a smaller denominator, which contradicts the assumption that $p/q$~is in its lowest terms. \end{Remark} \begin{Examples}{II.} \Item{1.} Show that no rational number can have its cube equal to~$2$. \Item{2.} Prove generally that a rational fraction~$p/q$ in its lowest terms cannot be the cube of a rational number unless $p$~and~$q$ are both perfect cubes. \Item{3.} A more general proposition, which is due to Gauss and includes those which precede as particular cases, is the following: \emph{an algebraical equation \[ x^{n} + p_{1}x^{n-1} + p_{2}x^{n-2} + \dots + p_{n} = 0, \] with integral coefficients, cannot have a rational but non-integral root}. 
[For suppose that the equation has a root~$a/b$, where $a$~and~$b$ are integers \PageSep{7} without a common factor, and $b$~is positive. Writing~$a/b$ for~$x$, and multiplying by~$b^{n-1}$, we obtain \[ -\frac{a^{n}}{b} = p_{1}a^{n-1} + p_{2}a^{n-2}b + \dots + p_{n}b^{n-1}, \] a fraction in its lowest terms equal to an integer, which is absurd. Thus $b = 1$, and the root is~$a$. It is evident that $a$~must be a divisor of~$p_{n}$.] \Item{4.} Show that if $p_{n} = 1$ and neither of \[ 1 + p_{1} + p_{2} + p_{3} + \dots,\quad 1 - p_{1} + p_{2} - p_{3} + \dots \] is zero, then the equation cannot have a rational root. \Item{5.} Find the rational roots (if any) of \[ x^{4} - 4x^{3} - 8x^{2} + 13x + 10 = 0. \] [The roots can only be integral, and so $±1$, $±2$, $±5$, $±10$ are the only possibilities: whether these are roots can be determined by trial. It is clear that we can in this way determine the rational roots of any such equation.] \end{Examples} \Paragraph{4. Irrational numbers (\continued).} The result of our geometrical representation of the rational numbers is therefore to suggest the desirability of enlarging our conception of `number' by the introduction of further numbers of a new kind. The same conclusion might have been reached without the use of geometrical language. One of the central problems of algebra is that of the solution of equations, such as \[ x^{2} = 1,\quad x^{2} = 2. \] The first equation has the two rational roots $1$~and~$-1$. But, if our conception of number is to be limited to the rational numbers, we can only say that the second equation has no roots; and the same is the case with such equations as $x^{3} = 2$, $x^{4} = 7$. These facts are plainly sufficient to make some generalisation of our idea of number desirable, if it should prove to be possible. Let us consider more closely the equation $x^{2} = 2$. We have already seen that there is no rational number~$x$ which satisfies this equation. 
The square of any rational number is either less than or greater than~$2$. We can therefore divide the rational numbers into two classes, one containing the numbers whose squares are less than~$2$, and the other those whose squares are greater than~$2$. We shall confine our attention to the \emph{positive} rational numbers, and we shall call these two classes \emph{the class~$L$}, or \emph{the lower class}, or \emph{the left-hand class}, and \emph{the class~$R$}, or \emph{the upper \PageSep{8} class}, or \emph{the right-hand class}. It is obvious that every member of~$R$ is greater than all the members of~$L$. Moreover it is easy to convince ourselves that we can find a member of the class~$L$ whose square, though less than~$2$, differs from~$2$ by as little as we please, and a member of~$R$ whose square, though greater than~$2$, also differs from~$2$ by as little as we please. In fact, if we carry out the ordinary arithmetical process for the extraction of the square root of~$2$, we obtain a series of rational numbers, viz. \[ 1,\quad 1.4,\quad 1.41,\quad 1.414,\quad 1.4142,\ \dots \] whose squares \[ 1,\quad 1.96,\quad 1.9881,\quad 1.999\MS396,\quad 1.999\MS961\MS64,\ \dots \] are all less than~$2$, but approach nearer and nearer to it; and by taking a sufficient number of the figures given by the process we can obtain as close an approximation as we want. And if we increase the last figure, in each of the approximations given above, by unity, we obtain a series of rational numbers \[ 2,\quad 1.5,\quad 1.42,\quad 1.415,\quad 1.4143,\ \dots \] whose squares \[ 4,\quad 2.25,\quad 2.0164,\quad 2.002\MS225,\quad 2.000\MS244\MS49,\ \dots \] are all greater than~$2$ but approximate to~$2$ as closely as we please. \begin{Remark} The reasoning which precedes, although it will probably convince the reader, is hardly of the precise character required by modern mathematics. We can supply a formal proof as follows. 
In the first place, we can find a member of~$L$ and a member of~$R$, differing by as little as we please. For we saw in~\SecNo[§]{3} that, given any two rational numbers $a$~and~$b$, we can construct a chain of rational numbers, of which $a$~and~$b$ are the first and last, and in which any two consecutive numbers differ by as little as we please. Let us then take a member~$x$ of~$L$ and a member~$y$ of~$R$, and interpolate between them a chain of rational numbers of which $x$~is the first and $y$~the last, and in which any two consecutive numbers differ by less than~$\delta$, $\delta$~being any positive rational number as small as we please, such as $.01$ or $.0001$ or $.000\MS001$. In this chain there must be a last which belongs to~$L$ and a first which belongs to~$R$, and these two numbers differ by less than~$\delta$. We can now prove that \emph{an~$x$ can be found in~$L$ and a~$y$ in~$R$ such that $2 - x^{2}$ and $y^{2} - 2$ are as small as we please}, say less than~$\delta$. Substituting $\frac{1}{4}\delta$ for~$\delta$ in the argument which precedes, we see that we can choose $x$~and~$y$ so that $y - x < \frac{1}{4}\delta$; and we may plainly suppose that both $x$~and~$y$ are less than~$2$. Thus \[ y + x < 4,\quad y^{2} - x^{2} = (y - x)(y + x) < 4(y - x) < \delta; \] \PageSep{9} and since $x^{2} < 2$ and $y^{2} > 2$ it follows \textit{a~fortiori} that $2 - x^{2}$ and $y^{2} - 2$ are each less than~$\delta$. \end{Remark} It follows also that \emph{there can be no largest member of~$L$ or smallest member of~$R$}. For if $x$~is any member of~$L$, then $x^{2} < 2$. Suppose that $x^{2} = 2 - \delta$. Then we can find a member~$x_{1}$ of~$L$ such that $x_{1}^{2}$~differs from~$2$ by less than~$\delta$, and so $x_{1}^{2} > x^{2}$ or $x_{1} > x$. Thus there are larger members of~$L$ than~$x$; and as $x$~is \emph{any} member of~$L$, it follows that no member of~$L$ can be larger than all the rest. Hence $L$~has no largest member, and similarly $R$~has no smallest. 
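[** TN: Editorial illustration, not part of Hardy's text.] The `ordinary arithmetical process for the extraction of the square root' of §\,4 can be sketched in modern computational terms. The sketch below (in Python; the helper name \texttt{sqrt2\_bounds} is the editor's) uses exact integer arithmetic to produce, for each number of decimal places, a member of~$L$ and a member of~$R$: the largest $d$-place decimal whose square is less than~$2$, and the same decimal with its last figure increased by unity.

```python
# Editorial sketch (not part of Hardy's text): the arithmetical process
# of §4 in exact integer arithmetic. For d decimal places we find the
# largest integer a with a^2 <= 2 * 10^(2d); then a/10^d is in the class L
# (its square is less than 2) and (a+1)/10^d is in the class R.
from math import isqrt

def sqrt2_bounds(d):
    """Return (lower, upper) d-place decimals with lower^2 < 2 < upper^2."""
    scale = 10 ** d
    a = isqrt(2 * scale * scale)       # largest integer with a^2 <= 2*scale^2
    return a / scale, (a + 1) / scale  # e.g. d = 4 gives (1.4142, 1.4143)

for d in range(7):
    lo, hi = sqrt2_bounds(d)
    assert lo * lo < 2 < hi * hi       # lo belongs to L, hi belongs to R
```

Running this reproduces Hardy's two sequences $1, 1.4, 1.41, 1.414, 1.4142, \dots$ and $2, 1.5, 1.42, 1.415, 1.4143, \dots$; equality $a^{2} = 2\,\text{scale}^{2}$ can never occur, precisely because no rational number has its square equal to~$2$.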
\Paragraph{5. Irrational numbers (\continued).} We have thus divided the positive rational numbers into two classes, $L$~and~$R$, such that (i)~every member of~$R$ is greater than every member of~$L$, (ii)~we can find a member of~$L$ and a member of~$R$ whose difference is as small as we please, (iii)~$L$~has no greatest and $R$~no least member. Our common-sense notion of the attributes of a straight line, the requirements of our elementary geometry and our elementary algebra, alike demand \emph{the existence of a number~$x$ greater than all the members of~$L$ and less than all the members of~$R$, and of a corresponding point~$P$ on~$\Lambda$ such that $P$~divides the points which correspond to members of~$L$ from those which correspond to members of~$R$}. %[Illustration: Fig. 3.] \Figure[0.9\textwidth]{3}{p009} Let us suppose for a moment that there is such a number~$x$, and that it may be operated upon in accordance with the laws of algebra, so that, for example, $x^{2}$~has a definite meaning. Then $x^{2}$ cannot be either less than or greater than~$2$. For suppose, for example, that $x^{2}$~is less than~$2$. Then it follows from what precedes that we can find a positive rational number~$\xi$ such that $\xi^{2}$~lies \PageSep{10} between $x^{2}$~and~$2$. That is to say, we can find a member of~$L$ greater than~$x$; and this contradicts the supposition that $x$~divides the members of~$L$ from those of~$R$. Thus $x^{2}$~cannot be less than~$2$, and similarly it cannot be greater than~$2$. We are therefore driven to the conclusion that $x^{2} = 2$, and that $x$~is the number which in algebra we denote by~$\sqrt{2}$. And of course this number $\sqrt{2}$~is not rational, for no rational number has its square equal to~$2$. It is the simplest example of what is called an \Emph{irrational} number. 
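[** TN: Editorial illustration, not part of Hardy's text.] The dividing point of the two classes can also be approached by repeated bisection, a sketch of which follows (in Python; the helper name \texttt{divide\_classes} is the editor's). Starting from a member of~$L$ and a member of~$R$, the rational midpoint of the two must itself fall in one class or the other, so the difference of the pair may be halved as often as we please, in accordance with property~(ii) of §\,5.

```python
# Editorial sketch: closing in on the number that divides L from R.
# We keep x in L (x^2 < 2) and y in R (y^2 > 2); the midpoint is
# rational, hence belongs to one class or the other, and y - x is
# halved at every step. The single dividing point is sqrt(2).
from fractions import Fraction

def divide_classes(steps):
    x, y = Fraction(1), Fraction(2)  # 1 belongs to L, 2 belongs to R
    for _ in range(steps):
        m = (x + y) / 2              # rational midpoint; m*m == 2 is impossible
        if m * m < 2:
            x = m                    # m belongs to L
        else:
            y = m                    # m belongs to R
    return x, y

x, y = divide_classes(30)
assert x * x < 2 < y * y and y - x == Fraction(1, 2 ** 30)
```

Exact rational arithmetic (`Fraction`) keeps every intermediate value a genuine member of $L$ or~$R$; no rational value is ever equal to the dividing point itself.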
But the preceding argument may be applied to equations other than $x^{2} = 2$, almost word for word; for example to $x^{2} = N$, where $N$~is any integer which is not a perfect square, or to \[ x^{3} = 3,\quad x^{3} = 7,\quad x^{4} = 23, \] or, as we shall see later on, to $x^{3} = 3x + 8$. We are thus led to believe in the existence of irrational numbers~$x$ and points~$P$ on~$\Lambda$ such that $x$~satisfies equations such as these, even when these lengths cannot (as $\sqrt{2}$~can) be constructed by means of elementary geometrical methods. \begin{Remark} The reader will no doubt remember that in treatises on elementary algebra the root of such an equation as $x^{q} = n$ is denoted by $\sqrt[q]n$~or~$n^{1/q}$, and that a meaning is attached to such symbols as \[ n^{p/q},\quad n^{-p/q} \] by means of the equations \[ n^{p/q} = (n^{1/q})^{p},\quad n^{p/q} n^{-p/q} = 1. \] And he will remember how, in virtue of these definitions, the `laws of indices' such as \[ n^{r} × n^{s} = n^{r+s},\quad (n^{r})^{s} = n^{rs} \] are extended so as to cover the case in which $r$~and~$s$ are any rational numbers whatever. \end{Remark} The reader may now follow one or other of two alternative courses. 
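The `laws of indices' quoted in the Remark above admit a quick numerical illustration. The following Python sketch is an editorial addition: it evaluates $n^{p/q}$ by its definition $(n^{1/q})^{p}$ in floating point, so the identities are verified only to machine precision, not exactly.

```python
from fractions import Fraction
import math

def power(n, r):
    """n**r for rational r, by the definition n^(p/q) = (n^(1/q))^p."""
    return (n ** (1.0 / r.denominator)) ** r.numerator

n = 7.0
r, s = Fraction(2, 3), Fraction(-1, 4)
# n^r * n^s = n^(r+s)
assert math.isclose(power(n, r) * power(n, s), power(n, r + s))
# (n^r)^s = n^(rs)
assert math.isclose(power(power(n, r), s), power(n, r * s))
# n^(p/q) * n^(-p/q) = 1
assert math.isclose(power(n, r) * power(n, -r), 1.0)
```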
He may, if he pleases, be content to assume that `irrational numbers' such as $\sqrt{2}$,~$\sqrt[3]{3}, \dots$ exist and are amenable to the algebraical laws with which he is familiar.\footnote {This is the point of view which was adopted in the first edition of this book.} If he does this he will be able to avoid the more abstract discussions of the next few sections, and may pass on at once to \SecNo[§§]{13}~\textit{et~seq.} If, on the other hand, he is not disposed to adopt so \textit{naive} an \PageSep{11} attitude, he will be well advised to pay careful attention to the sections which follow, in which these questions receive fuller consideration.\footnote {In these sections I have borrowed freely from Appendix~I of Bromwich's \textit{Infinite Series}.} \begin{Examples}{III.} \Item{1.} Find the difference between~$2$ and the squares of the decimals given in \SecNo[§]{4} as approximations to~$\sqrt{2}$. \Item{2.} Find the differences between~$2$ and the squares of \[ \tfrac{1}{1},\quad \tfrac{3}{2},\quad \tfrac{7}{5},\quad \tfrac{17}{12},\quad \tfrac{41}{29},\quad \tfrac{99}{70}. \] \Item{3.} Show that if $m/n$ is a good approximation to~$\sqrt{2}$, then~$(m + 2n)/(m + n)$ is a better one, and that the errors in the two cases are in opposite directions. Apply this result to continue the series of approximations in the last example. \Item{4.} If $x$~and~$y$ are approximations to~$\sqrt{2}$, by defect and by excess respectively, and $2 - x^{2} < \delta$, $y^{2} - 2 < \delta$, then $y - x < \delta$. \Item{5.} The equation $x^{2} = 4$ is satisfied by $x = 2$. Examine how far the argument of the preceding sections applies to this equation (writing~$4$ for~$2$ throughout). [If we define the classes $L$,~$R$ as before, they do not include \emph{all} rational numbers. The rational number~$2$ is an exception, since~$2^{2}$ is neither less than \DPtypo{or}{nor} greater than~$4$.] \end{Examples} \Paragraph{6.

Irrational numbers (\continued).} In \SecNo[§]{4} we discussed a special mode of division of the positive rational numbers~$x$ into two classes, such that $x^{2} < 2$ for the members of one class and $x^{2} > 2$ for those of the \DPtypo{others}{other}. Such a mode of division is called a \Emph{section} of the numbers in question. It is plain that we could equally well construct a section in which the numbers of the two classes were characterised by the inequalities $x^{3} < 2$ and $x^{3} > 2$, or $x^{4} < 7$ and $x^{4} > 7$. Let us now attempt to state the principles of the construction of such `sections' of the positive rational numbers in quite general terms. Suppose that $P$~and~$Q$ stand for two properties which are mutually exclusive and one of which must be possessed by every positive rational number. Further, suppose that every such number which possesses~$P$ is less than any such number which possesses~$Q$. Thus $P$~might be the property `$x^{2} < 2$' and $Q$~the property `$x^{2} > 2$.' Then we call the numbers which possess~$P$ the lower or left-hand class~$L$ and those which possess~$Q$ the upper or \PageSep{12} right-hand class~$R$. In general both classes will exist; but it may happen in special cases that one is non-existent and that every number belongs to the other. This would obviously happen, for example, if $P$ (or~$Q$) were the property of being rational, or of being positive. For the present, however, we shall confine ourselves to cases in which both classes do exist; and then it follows, as in \SecNo[§]{4}, that we can find a member of~$L$ and a member of~$R$ whose difference is as small as we please. In the particular case which we considered in \SecNo[§]{4}, $L$~had no greatest member and $R$~no least. This question of the existence of greatest or least members of the classes is of the utmost importance. We observe first that it is impossible in any case that $L$~should have a greatest member \emph{and} $R$~a least.
For if $l$ were the greatest member of~$L$, and $r$~the least of~$R$, so that $l < r$, then $\frac{1}{2}(l + r)$ would be a positive rational number lying between $l$~and~$r$, and so could belong neither to~$L$ nor to~$R$; and this contradicts our assumption that every such number belongs to one class or to the other. This being so, there are but three possibilities, which are mutually exclusive. Either (i)~$L$~has a greatest member~$l$, or (ii)~$R$~has a least member~$r$, or (iii)~$L$~has no greatest member and $R$~no least. \begin{Remark} The section of \SecNo[§]{4} gives an example of the last possibility. An example of the first is obtained by taking~$P$ to be `$x^{2} \leq 1$' and $Q$~to be `$x^{2} > 1$'; here $l = 1$. If $P$~is `$x^{2} < 1$' and $Q$~is `$x^{2} \geq 1$', we have an example of the second possibility, with $r = 1$. It should be observed that we do not obtain a section at all by taking $P$ to be `$x^{2} < 1$' and $Q$~to be `$x^{2} > 1$'; for the special number~$1$ escapes classification (cf.\ \Ex{iii}.~5). \end{Remark} \Paragraph{7. Irrational numbers (\continued).} In the first two cases we say that the section \emph{corresponds} to a positive rational number~$a$, which is~$l$ in the one case and $r$~in the other. Conversely, it is clear that to any such number~$a$ corresponds a section which we shall denote by~$\alpha$.\footnote {It will be convenient to denote a section, corresponding to a rational number denoted by an English letter, by the corresponding Greek letter.} For we might take $P$~and~$Q$ to be the properties expressed by \[ x \leq a,\quad x > a \] respectively, or by $x < a$ and $x \geq a$. In the first case $a$~would be the greatest member of~$L$, and in the second case the least member \PageSep{13} of~$R$. There are in fact just two sections corresponding to any positive rational number. In order to avoid ambiguity we select one of them; let us select that in which the number itself belongs to the \emph{upper} class. 
In other words, let us agree that we will consider only sections in which the lower class~$L$ has no greatest number. There being this correspondence between the positive rational numbers and the sections defined by means of them, it would be perfectly legitimate, for mathematical purposes, to replace the numbers by the sections, and to regard the symbols which occur in our formulae as standing for the sections instead of for the numbers. Thus, for example, $\alpha > \alpha'$ would mean the same as $a > a'$, if $\alpha$~and~$\alpha'$ are the sections which correspond to $a$~and~$a'$. But when we have in this way substituted sections of rational numbers for the rational numbers themselves, we are almost forced to a generalisation of our number system. For there are sections (such as that of \SecNo[§]{4}) which do \emph{not} correspond to any rational number. The aggregate of sections is a larger aggregate than that of the positive rational numbers; it includes sections corresponding to all these numbers, and more besides. It is this fact which we make the basis of our generalisation of the idea of number. We accordingly frame the following definitions, which will however be modified in the next section, and must therefore be regarded as temporary and provisional. \begin{Defn} A section of the positive rational numbers, in which both classes exist and the lower class has no greatest member, is called a \Emph{positive real number}. \end{Defn} \begin{Defn} A positive real number which does not correspond to a positive rational number is called a positive \Emph{irrational} number. \end{Defn} \Paragraph{8. Real numbers.} We have confined ourselves so far to certain sections of the positive rational numbers, which we have agreed provisionally to call `positive real numbers.' Before we frame our final definitions, we must alter our point of view a little. 
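Before carrying this change of view through, the correspondence just described can be sketched computationally. In the editorial Python illustration below a section of the positive rationals is represented by the membership test of its lower class, and $\alpha < \beta$ is witnessed by finding a rational in $\beta$'s lower class that is not in $\alpha$'s. The search runs over a finite grid of rationals, so it is only a semi-decision; the bound `max_den` is an arbitrary choice made for the example.

```python
from fractions import Fraction

# A section is represented by the membership test of its lower class L.
sqrt2 = lambda x: x * x < 2       # the section of §4
def rational(a):                  # the section corresponding to rational a;
    return lambda x: x < a        # a itself goes into the upper class

def less(alpha, beta, max_den=50):
    """Witness alpha < beta by finding a positive rational that lies in
    beta's lower class but not in alpha's.  A finite search over p/q
    with q < max_den and p/q < 3, hence only a semi-decision."""
    for q in range(1, max_den):
        for p in range(1, 3 * q):
            r = Fraction(p, q)
            if beta(r) and not alpha(r):
                return True
    return False

assert less(rational(Fraction(7, 5)), sqrt2)    # 7/5 is less than the section of §4
assert less(sqrt2, rational(Fraction(3, 2)))    # which in turn is less than 3/2
assert not less(sqrt2, sqrt2)
```

This is precisely the sense in which `$\alpha > \alpha'$ means the same as $a > a'$': order among sections is inclusion of lower classes.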
We shall consider sections, or divisions into two classes, not merely of the positive rational numbers, but of all rational numbers, including zero. We may then repeat all that we have said about sections of the positive rational numbers in \SecNo[§§]{6},~\SecNo{7}, merely omitting the word positive occasionally. \PageSep{14} \begin{Definitions} A section of the rational numbers, in which both classes exist and the lower class has no greatest member, is called a \Emph{real number}, or simply a \Emph{number}. A real number which does not correspond to a rational number is called an \Emph{irrational} number. \end{Definitions} If the real number does correspond to a rational number, we shall use the term `rational' as applying to the real number also. \begin{Remark} The term `rational number' will, as a result of our definitions, be ambiguous; it may mean the rational number of \SecNo[§]{1}, or the corresponding real number. If we say that $\frac{1}{2} > \frac{1}{3}$, we may be asserting either of two different propositions, one a proposition of elementary arithmetic, the other a proposition concerning sections of the rational numbers. Ambiguities of this kind are common in mathematics, and are perfectly harmless, since the relations between different propositions are exactly the same whichever interpretation is attached to the propositions themselves. From $\frac{1}{2} > \frac{1}{3}$ and $\frac{1}{3} > \frac{1}{4}$ we can infer $\frac{1}{2} > \frac{1}{4}$; the inference is in no way affected by any doubt as to whether $\frac{1}{2}$,~$\frac{1}{3}$, and~$\frac{1}{4}$ are arithmetical fractions or real numbers. Sometimes, of course, the context in which (\eg)~`$\frac{1}{2}$' occurs is sufficient to fix its interpretation. When we say (see \SecNo[§]{9}) that $\frac{1}{2} < \sqrt{\frac{1}{3}}$, we \emph{must} mean by~`$\frac{1}{2}$' the real number~$\frac{1}{2}$. 
The reader should observe, moreover, that no particular logical importance is to be attached to the precise form of definition of a `real number' that we have adopted. We defined a `real number' as being a section, \ie\ a pair of classes. We might equally well have defined it as being the lower, or the upper, class; indeed it would be easy to define an infinity of classes of entities each of which would possess the properties of the class of real numbers. What is essential in mathematics is that its symbols should be capable of \emph{some} interpretation; generally they are capable of \emph{many}, and then, so far as mathematics is concerned, it does not matter which we adopt. Mr~Bertrand Russell has said that `mathematics is the science in which we do not know what we are talking about, and do not care whether what we say about it is true', a remark which is expressed in the form of a paradox but which in reality embodies a number of important truths. It would take too long to analyse the meaning of Mr~Russell's epigram in detail, but one at any rate of its implications is this, that the symbols of mathematics are capable of varying interpretations, and that we are in general at liberty to adopt whichever we prefer. \end{Remark} There are now three cases to distinguish. It may happen that all negative rational numbers belong to the lower class and zero and all positive rational numbers to the upper. We describe this section as the \Emph{real number zero}. Or again it may happen that the lower class includes some positive numbers. Such a section \PageSep{15} we describe as a \Emph{positive real number}. Finally it may happen that some negative numbers belong to the upper class. Such a section we describe as a \Emph{negative real number}.\footnote {There are also sections in which every number belongs to the lower or to the upper class. 
The reader may be tempted to ask why we do not regard these sections also as defining numbers, which we might call the \emph{real numbers positive and negative infinity}. There is no logical objection to such a procedure, but it proves to be inconvenient in practice. The most natural definitions of addition and multiplication do not work in a satisfactory way. Moreover, for a beginner, the chief difficulty in the elements of analysis is that of learning to attach precise senses to phrases containing the word `infinity'; and experience seems to show that he is likely to be confused by any addition to their number.} \begin{Remark} The difference between our present definition of a positive real number~$a$ and that of \SecNo[§]{7} amounts to the addition to the lower class of zero and all the negative rational numbers. An example of a negative real number is given by taking the property~$P$ of \SecNo[§]{6} to be $x + 1 < 0$ and $Q$~to be $x + 1 \geq 0$. This section plainly corresponds to the negative rational number~$-1$. If we took $P$~to be $x^{3} < -2$ and $Q$~to be $x^{3} > -2$, we should obtain a negative real number which is not rational. \end{Remark} \Paragraph{9. Relations of magnitude between real numbers.} It is plain that, now that we have extended our conception of number, we are bound to make corresponding extensions of our conceptions of equality, inequality, addition, multiplication, and so on. We have to show that these ideas can be applied to the new numbers, and that, when this extension of them is made, all the ordinary laws of algebra retain their validity, so that we can operate with real numbers in general in exactly the same way as with the rational numbers of \SecNo[§]{1}. To do all this systematically would occupy a considerable space, and we shall be content to indicate summarily how a more systematic discussion would proceed. 
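The Remark's first example of a negative real number can be exhibited directly. The short Python sketch below (an editorial addition, using exact rational arithmetic) takes the section with lower class $x + 1 < 0$ over all rationals, and checks that $-1$ itself falls in the upper class, in accordance with our convention that the lower class has no greatest member.

```python
from fractions import Fraction

# The section with P: x + 1 < 0 and Q: x + 1 >= 0, over all rationals,
# corresponds to the rational number -1.
lower = lambda x: x + 1 < 0

assert not lower(Fraction(-1))       # -1 itself lies in the upper class
assert lower(Fraction(-101, 100))    # members of L come arbitrarily near -1
# L has no greatest member: any member x is exceeded by the midpoint
# between x and -1, which still belongs to L.
x = Fraction(-3, 2)
midpoint = (x + Fraction(-1)) / 2
assert lower(midpoint) and midpoint > x
```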
We denote a real number by a Greek letter such as $\alpha$, $\beta$, $\gamma, \dots$; the rational numbers of its lower and upper classes by the corresponding English letters $a$,~$A$; $b$,~$B$; $c$,~$C$;~\dots. The classes themselves we denote by $(a)$,~$(A), \dots$. If $\alpha$~and~$\beta$ are two real numbers, there are three possibilities: \Itemp{(i)} every~$a$ is a~$b$ and every~$A$ a~$B$; in this case $(a)$~is identical with~$(b)$ and $(A)$~with~$(B)$; \PageSep{16} \Itemp{(ii)} every~$a$ is a~$b$, but not all~$A$'s are~$B$'s; in this case $(a)$~is a proper part of~$(b)$,\footnote {\Ie\ is included in but not identical with~$(b)$.} and $(B)$~a proper part of~$(A)$; \Itemp{(iii)} every~$A$ is a~$B$, but not all~$a$'s are~$b$'s. These three cases may be indicated graphically as in \Fig{4}. In case~(i) we write $\alpha = \beta$, in case~(ii) $\alpha < \beta$, and in case~(iii) $\alpha > \beta$. It is clear that, when $\alpha$~and~$\beta$ are both rational, these %[Illustration: Fig. 4.] \Figure[0.4\textwidth]{4}{p016} definitions agree with the ideas of equality and inequality between rational numbers which we began by taking for granted; and that any positive number is greater than any negative number. It will be convenient to define at this stage the negative~$-\alpha$ of a positive number~$\alpha$. If $(a)$,~$(A)$ are the classes which constitute~$\alpha$, we can define another section of the rational numbers by putting all numbers~$-A$ in the lower class and all numbers~$-a$ in the upper. The real number thus defined, which is clearly negative, we denote by~$-\alpha$. Similarly we can define~$-\alpha$ when $\alpha$~is negative or zero; if $\alpha$~is negative, $-\alpha$~is positive. It is plain also that $-(-\alpha) = \alpha$. Of the two numbers $\alpha$~and~$-\alpha$ one is always positive (unless $\alpha = 0$). The one which is positive we denote by~$|\alpha|$ and call the \emph{modulus} of~$\alpha$. \begin{Examples}{IV.} \Item{1.} Prove that $0 = -0$. 
\Item{2.} Prove that $\beta = \alpha$, $\beta < \alpha$, or $\beta > \alpha$ according as $\alpha = \beta$, $\alpha > \beta$, or $\alpha < \beta$. \Item{3.} If $\alpha = \beta$ and $\beta = \gamma$, then $\alpha = \gamma$. \Item{4.} If $\alpha \leq \beta$, $\beta < \gamma$, or $\alpha < \beta$, $\beta \leq \gamma$, then $\alpha < \gamma$. \Item{5.} Prove that $-\beta = -\alpha$, $-\beta < -\alpha$, or $-\beta > -\alpha$, according as $\alpha = \beta$, $\alpha < \beta$, or $\alpha > \beta$. \Item{6.} Prove that $\alpha > 0$ if $\alpha$~is positive, and $\alpha < 0$ if $\alpha$~is negative. \Item{7.} Prove that $\alpha \leq |\alpha|$. \Item{8.} Prove that $1 < \sqrt{2} < \sqrt{3} < 2$. \Item{9.} Prove that, if $\alpha$~and~$\beta$ are two different real numbers, we can always find an infinity of rational numbers lying between $\alpha$~and~$\beta$. [All these results are immediate consequences of our definitions.] \end{Examples} \PageSep{17} \Paragraph{10. Algebraical operations with real numbers.} We now proceed to define the meaning of the elementary algebraical operations such as addition, as applied to real numbers in general. \Par{\Itemp{(i)} Addition.} In order to define the sum of two numbers $\alpha$~and~$\beta$, we consider the following two classes: (i)~the class~$(c)$ formed by all sums $c = a + b$, (ii)~the class~$(C)$ formed by all sums $C = A + B$. Plainly $c < C$ in all cases. Again, there cannot be more than one rational number which does not belong either to~$(c)$ or to~$(C)$. For suppose there were two, say $r$~and~$s$, and let $s$~be the greater. Then both $r$~and~$s$ must be greater than every~$c$ and less than every~$C$; and so $C - c$ cannot be less than $s - r$. But \[ C - c = (A - a) + (B - b); \] and we can choose $a$, $b$, $A$, $B$ so that both $A - a$ and $B - b$ are as small as we like; and this plainly contradicts our hypothesis. 
If every rational number belongs to~$(c)$ or to~$(C)$, the classes $(c)$,~$(C)$ form a section of the rational numbers, that is to say, a number~$\gamma$. If there is one which does not, we add it to~$(C)$. We have now a section or real number~$\gamma$, which must clearly be rational, since it corresponds to the least member of~$(C)$. \emph{In any case we call~$\gamma$ the sum of $\alpha$~and~$\beta$, and write} \[ \gamma = \alpha + \beta. \] \begin{Remark} If both $\alpha$~and~$\beta$ are rational, they are the least members of the upper classes $(A)$~and~$(B)$. In this case it is clear that $\alpha + \beta$ is the least member of~$(C)$, so that our definition agrees with our previous ideas of addition. \end{Remark} \Par{\Itemp{(ii)} Subtraction.} We define $\alpha - \beta$ by the equation \[ \alpha - \beta = \alpha + (-\beta). \] The idea of subtraction accordingly presents no fresh difficulties. \begin{Examples}{V.} \Item{1.} Prove that $\alpha + (-\alpha) = 0$. \Item{2.} Prove that $\alpha + 0 = 0 + \alpha = \alpha$. \Item{3.} Prove that $\alpha + \beta = \beta + \alpha$. [This follows at once from the fact that the classes $(a + b)$~and~$(b + a)$, or $(A + B)$~and~$(B + A)$, are the same, since, \eg, $a + b = b + a$ when $a$~and~$b$ are rational.] \Item{4.} Prove that $\alpha + (\beta + \gamma) = (\alpha + \beta) + \gamma$. \PageSep{18} \Item{5.} Prove that $\alpha - \alpha = 0$. \Item{6.} Prove that $\alpha - \beta = -(\beta - \alpha)$. \Item{7.} From the definition of subtraction, and Exs.\ 4,~1, and~2 above, it follows that \[ (\alpha - \beta) + \beta = \{\alpha + (-\beta)\} + \beta = \alpha + \{(-\beta) + \beta\} = \alpha + 0 = \alpha. \] We might therefore define the difference $\alpha - \beta = \gamma$ by the equation $\gamma + \beta = \alpha$. \Item{8.} Prove that $\alpha - (\beta - \gamma) = \alpha - \beta + \gamma$. \Item{9.} Give a definition of subtraction which does not depend upon a previous definition of addition. 
[To define $\gamma = \alpha - \beta$, form the classes $(c)$,~$(C)$ for which $c = a - B$, $C = A - b$. It is easy to show that this definition is equivalent to that which we adopted in the text.] \Item{10.} Prove that \[ \big||\alpha| - |\beta|\big| \leq |\alpha ± \beta| \leq |\alpha| + |\beta|. \] \end{Examples} \Paragraph{11. Algebraical operations with real numbers (\continued).} \Itemp{(iii)}~\emph{Multiplication}. When we come to multiplication, it is most convenient to confine ourselves to \emph{positive} numbers (among which we may include~$0$) in the first instance, and to go back for a moment to the sections of positive rational numbers only which we considered in \SecNo[§§]{4}--\SecNo{7}. We may then follow practically the same road as in the case of addition, taking~$(c)$ to be~$(ab)$ and $(C)$ to be~$(AB)$. The argument is the same, except when we are proving that all rational numbers with at most one exception must belong to $(c)$~or~$(C)$. This depends, as in the case of addition, on showing that we can choose $a$,~$A$, $b$, and~$B$ so that $C - c$ is as small as we please. Here we use the identity \[ C - c = AB - ab = (A - a)B + a(B - b). \] Finally we include negative numbers within the scope of our definition by agreeing that, if $\alpha$~and~$\beta$ are positive, then \[ (-\alpha)\beta = -\alpha\beta,\quad \alpha(-\beta) = -\alpha\beta,\quad (-\alpha)(-\beta) = \alpha\beta. \] \Par{\Itemp{(iv)} Division.} In order to define division, we begin by defining the reciprocal~$1/\alpha$ of a number~$\alpha$ (other than zero). Confining ourselves in the first instance to positive numbers and sections of positive rational numbers, we define the reciprocal of a positive number~$\alpha$ by means of the lower class~$(1/A)$ and the upper class~$(1/a)$. We then define the reciprocal of a negative number~$-\alpha$ by the equation $1/(-\alpha) = -(1/\alpha)$. Finally we define $\alpha/\beta$ by the equation \[ \alpha/\beta = \alpha × (1/\beta). 
\] \PageSep{19} We are then in a position to apply to all real numbers, rational or irrational, the whole of the ideas and methods of elementary algebra. Naturally we do not propose to carry out this task in detail. It will be more profitable and more interesting to turn our attention to some special, but particularly important, classes of irrational numbers. \begin{Examples}{VI.} Prove the theorems expressed by the following formulae: %[** TN: One-off two-column layout] \begin{minipage}{0.5\textwidth-\parindent} \Item{1.} $\alpha × 0 = 0 × \alpha = 0$. \Item{2.} $\alpha × 1 = 1 × \alpha = \alpha$. \Item{3.} $\alpha × (1/\alpha) = 1$. \Item{4.} $\alpha\beta = \beta\alpha$. \end{minipage}% \begin{minipage}{0.5\textwidth} \Item{5.} $\alpha(\beta\gamma) = (\alpha\beta)\gamma$. \Item{6.} $\alpha(\beta + \gamma) = \alpha\beta + \alpha\gamma$. \Item{7.} $(\alpha + \beta)\gamma = \alpha\gamma + \beta\gamma$. \Item{8.} $|\alpha\beta| = |\alpha|\, |\beta|$. \end{minipage} \end{Examples} \Paragraph{12. The number $\sqrt{2}$.} Let us now return for a moment to the particular irrational number which we discussed in \SecNo[§§]{4}--\SecNo{5}. We there constructed a section by means of the inequalities $x^{2} < 2$, $x^{2} > 2$. This was a section of the positive rational numbers only; but we replace it (as was explained in \SecNo[§]{8}) by a section of all the rational numbers. We denote the section or number thus defined by the symbol~$\sqrt{2}$. The classes by means of which the product of $\sqrt{2}$ by itself is defined are (i)~$(aa')$, where $a$~and~$a'$ are positive rational numbers whose squares are less than~$2$, (ii)~$(AA')$, where $A$~and~$A'$ are positive rational numbers whose squares are greater than~$2$. These classes exhaust all positive rational numbers save one, which can only be~$2$ itself. Thus \[ (\sqrt{2})^{2} = \sqrt{2}\sqrt{2} = 2. \] Again \[ (-\sqrt{2})^{2} = (-\sqrt{2})(-\sqrt{2}) = \sqrt{2}\sqrt{2} = (\sqrt{2})^{2} = 2. 
\] Thus \emph{the equation $x^{2} = 2$ has the two roots $\sqrt{2}$~and~$-\sqrt{2}$}. Similarly we could discuss the equations $x^{2} = 3$, $x^{3} = 7, \dots$ and the corresponding irrational numbers $\sqrt{3}$,~$-\sqrt{3}$, $\sqrt[3]{7}, \dots$. \Paragraph{13. Quadratic surds.} A number of the form~$±\sqrt{a}$, where $a$~is a positive rational number which is not the square of another rational number, is called a \emph{pure quadratic surd}. A number of the form $a ± \sqrt{b}$, where $a$~is rational, and $\sqrt{b}$~is a pure quadratic surd, is sometimes called a mixed quadratic surd. \PageSep{20} \begin{Remark} The two numbers $a ± \sqrt{b}$ are the roots of the quadratic equation \[ x^{2} - 2ax + a^{2} - b = 0. \] Conversely, the equation $x^{2} + 2px + q = 0$, where $p$~and~$q$ are rational, and $p^{2} - q > 0$, has as its roots the two quadratic surds $-p ± \sqrtp{p^{2} - q}$. \end{Remark} The only kind of irrational numbers whose existence was suggested by the geometrical considerations of \SecNo[§]{3} are these quadratic surds, pure and mixed, and the more complicated irrationals which may be expressed in a form involving the repeated extraction of square roots, such as \[ \sqrt{2} + \sqrtp{2 + \sqrt{2}} + \sqrtb{2 + \sqrtp{2 + \sqrt{2}}}. \] It is easy to construct geometrically a line whose length is equal to any number of this form, as the reader will easily see for himself. That irrational numbers of these kinds \emph{only} can be constructed by Euclidean methods (\ie~by geometrical constructions with ruler and compasses) is a point the proof of which must be deferred for the present.\footnote {See \Ref{Ch.}{II}, \MiscExs{II}~22.} This property of quadratic surds makes them especially interesting. \begin{Examples}{VII.} \Item{1.} Give geometrical constructions for \[ \sqrt{2},\quad \sqrtp{2 + \sqrt{2}},\quad \sqrtb{2 + \sqrtp{2 + \sqrt{2}}}. 
\] \Item{2.} The quadratic equation $ax^{2} + 2bx + c = 0$ has two real roots\footnote {\Ie\ there are two values of~$x$ for which $ax^{2} + 2bx + c = 0$. If $b^{2} - ac < 0$ there are no such values of~$x$. The reader will remember that in books on elementary algebra the equation is said to have two `complex' roots. The meaning to be attached to this statement will be explained in \Ref{Ch.}{III}\@. When $b^{2} = ac$ the equation has only one root. For the sake of uniformity it is generally said in this case to have `two equal' roots, but this is a mere convention.} if $b^{2} - ac > 0$. Suppose $a$,~$b$,~$c$ rational. Nothing is lost by taking all three to be integers, for we can multiply the equation by the least common multiple of their denominators. The reader will remember that the roots are $\{-b ± \sqrtp{b^{2} - ac}\}/a$. It is easy to construct these lengths geometrically, first constructing $\sqrtp{b^{2} - ac}$. A much more elegant, though less straightforward, construction is the following. \PageSep{21} \begin{Construction} Draw a circle of unit radius, a diameter~$PQ$, and the tangents at the ends of the \DPtypo{diameters}{diameter}. %[Illustration: Fig. 5.] \Figure[0.7\textwidth]{5}{p021} Take $PP' = -2a/b$ and $QQ' = -c/2b$, having regard to sign.\footnote {The figure is drawn to suit the case in which $b$~and~$c$ have the same and $a$ the opposite sign. The reader should draw figures for other cases.} Join $P'Q'$, cutting the circle in $M$~and~$N$. Draw $PM$~and~$PN$, cutting~$QQ'$ in $X$~and~$Y$. Then $QX$~and~$QY$ are the roots of the equation with their proper signs.\footnote {I have taken this construction from Klein's \textit{Leçons sur certaines questions de géométrie élémentaire} (French translation by J.~Griess, Paris, 1896).} \end{Construction} The proof is simple and we leave it as an exercise to the reader. Another, perhaps even simpler, construction is the following. \begin{Construction}[]Take a line $AB$ of unit length.
Draw $BC = -2b/a$ perpendicular to~$AB$, and $CD = c/a$ perpendicular to~$BC$ and in the same direction as~$BA$. On~$AD$ as diameter describe a circle cutting~$BC$ in $X$~and~$Y$. Then $BX$~and~$BY$ are the roots. \end{Construction} \Item{3.} If $ac$ is positive $PP'$~and~$QQ'$ will be drawn in the same direction. Verify that $P'Q'$~will not meet the circle if $b^{2} < ac$, while if $b^{2} = ac$ it will be a tangent. Verify also that if $b^{2} = ac$ the circle in the second construction will touch~$BC$. \Item{4.} Prove that \[ \sqrtp{pq} = \sqrt{p} × \sqrt{q},\quad \sqrtp{p^{2}q} = p\sqrt{q}. \] \end{Examples} \Paragraph{14. Some theorems concerning quadratic surds.} Two pure quadratic surds are said to be \emph{similar} if they can be expressed as rational multiples of the same surd, and otherwise to be \emph{dissimilar}. Thus \[ \sqrt{8} = 2\sqrt{2},\quad \sqrt{\tfrac{25}{2}} = \tfrac{5}{2}\sqrt{2}, \] and so $\sqrt{8}$,~$\sqrt{\frac{25}{2}}$ are similar surds. On the other hand, if $M$~and~$N$ are integers which have no common factor, and neither of which is a perfect square, $\sqrt{M}$~and~$\sqrt{N}$ are dissimilar surds. For suppose, if possible, \[ \sqrt{M} = \frac{p}{q}\bigsqrt{\frac{t}{u}},\quad \sqrt{N} = \frac{r}{s}\bigsqrt{\frac{t}{u}}, \] where all the letters denote integers. \PageSep{22} Then $\DPtypo{\sqrt{MN}}{\sqrtp{MN}}$ is evidently rational, and therefore (\Ex{ii}.~3) integral. Thus $MN = P^{2}$, where $P$~is an integer. Let $a$,~$b$, $c, \dots$ be the prime factors of~$P$, so that \[ MN = a^{2\alpha} b^{2\beta} c^{2\gamma}\ \dots, \] where $\alpha$,~$\beta$, $\gamma, \dots$ are positive integers. Then $MN$~is divisible by~$a^{2\alpha}$, and therefore either (1)~$M$~is divisible by~$a^{2\alpha}$, or (2)~$N$~is divisible by~$a^{2\alpha}$, or (3)~$M$ and~$N$ are both divisible by~$a$. The last case may be ruled out, since $M$~and~$N$ have no common factor. 
This argument may be applied to each of the factors $a^{2\alpha}$,~$b^{2\beta}$, $c^{2\gamma}, \dots$, so that $M$~must be divisible by some of these factors and $N$~by the remainder. Thus \[ M = P_{1}^{2},\quad N = P_{2}^{2}, \] where $P_{1}^{2}$~denotes the product of some of the factors $a^{2\alpha}$,~$b^{2\beta}$, $c^{2\gamma}, \dots$ and $P_{2}^{2}$~the product of the rest. Hence $M$~and~$N$ are both perfect squares, which is contrary to our hypothesis. \begin{Theorem} If $A$, $B$, $C$, $D$ are rational and \[ A + \sqrt{B} = C + \sqrt{D}, \] then either \Inum{(i)}~$A = C$, $B = D$ or \Inum{(ii)}~$B$ and~$D$ are both squares of rational numbers. \end{Theorem} For $B - D$ is rational, and so is \[ \sqrt{B} - \sqrt{D} = C - A. \] \DPtypo{If $B$~is not equal to~$D$ (in which case it is obvious that $A$~is also equal to~$C$), it}{If $B$~is equal to~$D$, it is obvious that $A$~is also equal to~$C$. If $B$~is not equal to~$D$, it} follows that \[ \sqrt{B} + \sqrt{D} = (B - D)/(\sqrt{B}- \sqrt{D}) \] is also rational. Hence $\sqrt{B}$~and~$\sqrt{D}$ are rational. \begin{Corollary} If $A + \sqrt{B} = C + \sqrt{D}$, then $A - \sqrt{B} = C - \sqrt{D}$ \(unless $\sqrt{B}$~and~$\sqrt{D}$ are both rational\). \end{Corollary} \begin{Examples}{VIII.} \Item{1.} Prove \textit{ab initio} that $\sqrt{2}$~and~$\sqrt{3}$ are not similar surds. \Item{2.} Prove that $\sqrt{a}$~and~$\sqrtp{1/a}$, where $a$~is rational, are similar surds (unless both are rational). \Item{3.} If $a$~and~$b$ are rational, then $\sqrt{a} + \sqrt{b}$ cannot be rational unless $\sqrt{a}$~and~$\sqrt{b}$ are rational. The same is true of $\sqrt{a}- \sqrt{b}$, unless $a = b$. \PageSep{23} \Item{4.} If \[ \sqrt{A} + \sqrt{B} = \sqrt{C} + \sqrt{D}, \] then either (\ia)~$A = C$ and~$B = D$, or (\ib)~$A = D$ and~$B = C$, or (\ic)~$\sqrt{A}$, $\sqrt{B}$, $\sqrt{C}$, $\sqrt{D}$ are all rational or all similar surds. [Square the given equation and apply the theorem above.] \Item{5.} Neither $(a + \sqrt{b})^{3}$ nor $(a - \sqrt{b})^{3}$ can be rational unless $\sqrt{b}$~is rational.
\Item{6.} Prove that if $x = p + \sqrt{q}$, where $p$~and~$q$ are rational, then $x^{m}$, where $m$~is any integer, can be expressed in the form $P + Q \sqrt{q}$, where $P$~and~$Q$ are rational. For example, \[ (p + \sqrt{q})^{2} = p^{2} + q + 2p\sqrt{q},\quad (p + \sqrt{q})^{3} = p^{3} + 3pq + (3p^{2} + q)\sqrt{q}. \] Deduce that any polynomial in~$x$ with rational coefficients (\ie~any expression of the form \[ a_{0}x^{n} + a_{1}x^{n-1} + \dots + a_{n}, \] where $a_{0}$,~\dots\Add{,} $a_{n}$ are rational numbers) can be expressed in the form $P + Q\sqrt{q}$. \Item{7.} If $a + \sqrt{b}$, where $b$~is not a perfect square, is the root of an algebraical equation with rational coefficients, then $a - \sqrt{b}$ is another root of the same equation. \Item{8.} Express $1/(p + \sqrt{q})$ in the form prescribed in Ex.~6. [Multiply numerator and denominator by~$p - \sqrt{q}$.] \Item{9.} Deduce from Exs.\ 6~and~8 that any expression of the form $G(x)/H(x)$, where $G(x)$~and~$H(x)$ are polynomials in~$x$ with rational coefficients, can be expressed in the form $P + Q\sqrt{q}$, where $P$~and~$Q$ are rational. \Item{10.} If $p$,~$q$, and $p^{2} - q$ are positive, we can express $\sqrtp{p + \sqrt{q}}$ in the form $\sqrt{x} + \sqrt{y}$, where \[ x = \tfrac{1}{2}\{p + \sqrtp{p^{2} - q}\},\quad y = \tfrac{1}{2}\{p - \sqrtp{p^{2} - q}\}. \] \Item{11.} Determine the conditions that it may be possible to express $\sqrtp{p + \sqrt{q}}$, where $p$~and~$q$ are rational, in the form $\sqrt{x} + \sqrt{y}$, where $x$~and~$y$ are rational. \Item{12.} If $a^{2} - b$ is positive, the necessary and sufficient conditions that \[ \sqrtp{a + \sqrt{b}} + \sqrtp{a - \sqrt{b}} \] should be rational are that $a^{2} - b$ and $\frac{1}{2}\{a + \sqrtp{a^{2} - b}\}$ should both be squares of rational numbers. \end{Examples} \Paragraph{15. The continuum.} The aggregate of all real numbers, rational and irrational, is called the \Emph{arithmetical continuum}. 
It is convenient to suppose that the straight line~$\Lambda$ of \SecNo[§]{2} is composed of points corresponding to all the numbers of the arithmetical continuum, and of no others.\footnote {This supposition is merely a hypothesis adopted (i)~because it suffices for the purposes of our geometry and (ii)~because it provides us with convenient geometrical illustrations of analytical processes. As we use geometrical language only for purposes of illustration, it is not part of our business to study the foundations of geometry.} The points of the \PageSep{24} line, the aggregate of which may be said to constitute the \Emph{linear continuum}, then supply us with a convenient image of the arithmetical continuum. We have considered in some detail the chief properties of a few classes of real numbers, such, for example, as rational numbers or quadratic surds. We add a few further examples to show how very special these particular classes of numbers are, and how, to put it roughly, they comprise only a minute fraction of the infinite variety of numbers which constitute the continuum. \begin{Remark} \Itemp{(i)} Let us consider a more complicated surd expression such as \[ z = \sqrtp[3]{4 + \sqrt{15}} + \sqrtp[3]{4 - \sqrt{15}}. \] Our argument for supposing that the expression for~$z$ has a meaning might be as follows. We first show, as in \SecNo[§]{12}, that there is a number $y = \sqrt{15}$ such that $y^{2} = 15$, and we can then, as in \SecNo[§]{10}, define the numbers $4 + \sqrt{15}$, $4 - \sqrt{15}$. Now consider the equation in~$z_{1}$, \[ z_{1}^{3} = 4 + \sqrt{15}. \] The right-hand side of this equation is not rational: but exactly the same reasoning which leads us to suppose that there is a real number~$x$ such that $x^{3} = 2$ (or any other rational number) also leads us to the conclusion that there is a number~$z_{1}$ such that $z_{1}^{3} = 4 + \sqrt{15}$. 
We thus define $z_{1} = \sqrtp[3]{4 + \sqrt{15}}$, and similarly we can define $z_{2} = \sqrtp[3]{4 - \sqrt{15}}$; and then, as in \SecNo[§]{10}, we define $z = z_{1} + z_{2}$. Now it is easy to verify that \[ z^{3} = 3z + 8; \] for $z_{1}^{3} + z_{2}^{3} = 8$ and $z_{1}z_{2} = \sqrtp[3]{(4 + \sqrt{15})(4 - \sqrt{15})} = 1$, so that $z^{3} = z_{1}^{3} + z_{2}^{3} + 3z_{1}z_{2}(z_{1} + z_{2}) = 8 + 3z$. And we might have given a direct proof of the existence of a unique number~$z$ such that $z^{3} = 3z + 8$. It is easy to see that there cannot be two such numbers. For if $z_{1}^{3} = 3z_{1} + 8$ and $z_{2}^{3} = 3z_{2} + 8$, we find on subtracting and dividing by $z_{1} - z_{2}$ that $z_{1}^{2} + z_{1}z_{2} + z_{2}^{2} = 3$. But if $z_{1}$~and~$z_{2}$ are positive $z_{1}^{3}>8$, $z_{2}^{3}>8$ and therefore $z_{1} > 2$, $z_{2} > 2$, $z_{1}^{2} + z_{1}z_{2} + z_{2}^{2} > 12$, and so the equation just found is impossible. And it is easy to see that neither $z_{1}$ nor~$z_{2}$ can be negative. For if $z_{1}$~is negative and equal to~$-\zeta$, $\zeta$~is positive and $\zeta^{3} - 3\zeta + 8 = 0$, or $3 - \zeta^{2} = 8/\zeta$. Hence $3 - \zeta^{2} > 0$, and so $\zeta < 2$. But then $8/\zeta > 4$, and so $8/\zeta$ cannot be equal to~$3 - \zeta^{2}$, which is less~than~$3$. Hence there is at most one~$z$ such that $z^{3} = 3z + 8$. And it cannot be rational. For any rational root of this equation must be integral and a factor of~$8$ (\Ex{ii}.~3), and it is easy to verify that no one of $1$, $2$, $4$,~$8$ is a root. Thus $z^{3} = 3z + 8$ has at most one root and that root, if it exists, is positive and not rational. We can now divide the positive rational numbers~$x$ into two classes $L$,~$R$ according as $x^{3} < 3x + 8$ or $x^{3} > 3x + 8$. It is easy to see that if $x^{3} > 3x + 8$ and $y$~is any number greater than~$x$, then also $y^{3} > 3y + 8$. For suppose if possible $y^{3} \leq 3y + 8$. Then since $x^{3} > 3x + 8$ we obtain on subtracting $y^{3} - x^{3} < 3(y - x)$, or $y^{2} + xy + x^{2} < 3$, which is impossible; for $y$~is \PageSep{25} positive and $x > 2$ (since $x^{3} > 8$).
Similarly we can show that if $x^{3} < 3x + 8$ and $y < x$ then also $y^{3} < 3y + 8$. Finally, it is evident that the classes $L$~and~$R$ both exist; and they form a section of the positive rational numbers, that is to say a positive real number~$z$, which satisfies the equation $z^{3} = 3z + 8$. The reader who knows how to solve cubic equations by Cardan's method will be able to obtain the explicit expression of~$z$ directly from the equation. \end{Remark} \Itemp{(ii)} The direct argument applied above to the equation $x^{3} = 3x + 8$ could be applied (though the application would be a little more difficult) to the equation \[ x^{5} = x + 16\DPtypo{.}{,} \] and would lead us to the conclusion that a unique positive real number exists which satisfies this equation. In this case, however, it is not possible to obtain a simple explicit expression for~$x$ composed of any combination of surds. It can in fact be proved (though the proof is difficult) that it is \emph{generally} impossible to find such an expression for the root of an equation of higher degree than~$4$. Thus, besides irrational numbers which can be expressed as pure or mixed quadratic or other surds, or combinations of such surds, there are others which are roots of algebraical equations but cannot be so expressed. It is only in very special cases that such expressions can be found. \Itemp{(iii)} But even when we have added to our list of irrational numbers roots of equations (such as $x^{5} = x + 16$) which cannot be explicitly expressed as surds, we have not exhausted the different kinds of irrational numbers contained in the continuum. Let us draw a circle whose diameter is equal to~$A_{0}A_{1}$, \ie~to unity. It is natural to suppose\footnote {A proof will be found in \Ref{Ch.}{VII}\@.} that the circumference of such a circle has a length capable of numerical measurement. This length is usually denoted by~$\pi$.
And it has been shown\footnote {See Hobson's \textit{Trigonometry} (3rd~edition), pp.~305 \textit{et~seq.}, or the same writer's \textit{Squaring the Circle} (Cambridge,~1913).} (though the proof is unfortunately long and difficult) that this number~$\pi$ is not the root of any algebraical equation with integral coefficients, such, for example, as \[ \pi^{2} = n,\quad \pi^{3} = n,\quad \pi^{5} = \pi + n, \] \PageSep{26} where $n$~is an~integer. In this way it is possible to define a number which is not rational nor yet belongs to any of the classes of irrational numbers which we have so far considered. And this number~$\pi$ is no isolated or exceptional case. Any number of other examples can be constructed. In fact it is only special classes of irrational numbers which are roots of equations of this kind, just as it is only a still smaller class which can be expressed by means of surds. \Paragraph{16. The continuous real variable.} The `real numbers' may be regarded from two points of view. We may think of them \emph{as an aggregate}, the `arithmetical continuum' defined in the preceding section, or \emph{individually}. And when we think of them individually, we may think either of a particular \emph{specified} number (such as $1$, $-\frac{1}{2}$, $\sqrt{2}$, or~$\pi$) or we may think of \emph{any} number, \emph{an unspecified} number, \emph{the number~$x$}. This last is our point of view when we make such assertions as `$x$~is a~number', `$x$~is the measure of a length', `$x$~may be rational or irrational'\DPtypo{,}{.} The~$x$ which occurs in propositions such as these is called \emph{the continuous real variable}: and the individual numbers are called the \emph{values} of the variable. A `variable', however, need not necessarily be continuous. Instead of considering the aggregate of \emph{all} real numbers, we might consider some partial aggregate contained in the former aggregate, such as the aggregate of rational numbers, or the aggregate of positive integers. 
Let us take the last case. Then in statements about \emph{any} positive integer, or \emph{an unspecified} positive integer, such as `$n$~is either odd or~even', $n$~is called the variable, a \emph{positive integral variable}, and the individual positive integers are its values. Naturally `$x$'~and~`$n$' are only examples of variables, the variable whose `field of variation' is formed by all the real numbers, and that whose field is formed by the positive integers. These are the most important examples, but we have often to consider other cases. In the theory of decimals, for instance, we may denote by~$x$ any figure in the expression of any number as a decimal. Then $x$~is a variable, but a variable which has only ten different values, viz.\ $0$, $1$, $2$, $3$, $4$, $5$, $6$, $7$, $8$,~$9$. The reader should \PageSep{27} think of other examples of variables with different fields of variation. He will find interesting examples in ordinary life: policeman~$x$, the driver of cab~$x$, the year~$x$, the $x$th~day of the week. The values of these variables are naturally not numbers. \Paragraph{17. Sections of the real numbers.} In \SecNo[§§]{4}--\SecNo{7} we considered `sections' of the rational numbers, \ie\ modes of division of the rational numbers (or of the positive rational numbers only) into two classes $L$~and~$R$ possessing the following characteristic properties: \Itemp{(i)} that every number of the type considered belongs to one and only one of the two classes; \Itemp{(ii)} that both classes exist; \Itemp{(iii)} that any member of~$L$ is less than any member of~$R$. It is plainly possible to apply the same idea to the aggregate of all real numbers, and the process is, as the reader will find in later chapters, of very great importance. Let us then suppose\footnote {The discussion which follows is in many ways similar to that of \SecNo[§]{6}. We have not attempted to avoid a certain amount of repetition. 
The idea of a `section,' first brought into prominence in Dedekind's famous pamphlet \textit{Stetigkeit und irrationale Zahlen}, is one which can, and indeed must, be grasped by every reader of this book, even if he be one of those who prefer to omit the discussion of the notion of an irrational number contained in \SecNo[§§]{6}--\SecNo{12}.} that $P$~and~$Q$ are two properties which are mutually exclusive, and one of which is possessed by every real number. Further let us suppose that any number which possesses~$P$ is less than any which possesses~$Q$. We call the numbers which possess~$P$ the \emph{lower} or \emph{left-hand class}~$L$, and those which possess~$Q$ the \emph{upper} or \emph{right-hand class}~$R$. \begin{Remark} Thus~$P$ might be $x \leq \sqrt{2}$ and $Q$~be $x > \sqrt{2}$. It is important to observe that a pair of properties which suffice to define a section of the rational numbers may not suffice to define one of the real numbers. This is so, for example, with the pair `$x < \sqrt{2}$' and `$x > \sqrt{2}$' or (if we confine ourselves to positive numbers) with `$x^{2} < 2$' and `$x^{2} > 2$'. Every rational number possesses one or other of the properties, but not every real number, since in either case $\sqrt{2}$~escapes classification. \end{Remark} There are now two possibilities.\footnote {There were three in \SecNo[§]{6}.} Either $L$~has a greatest member~$l$, or $R$~has a least member~$r$. \emph{Both} of these events \PageSep{28} cannot occur. For if $L$~had a greatest member~$l$, and $R$~a least member~$r$, the number $\frac{1}{2}(l + r)$ would be greater than all members of~$L$ and less than all members of~$R$, and so could not belong to either class. On the other hand \emph{one} event must occur.\footnote {This was not the case in \SecNo[§]{6}.} For let $L_{1}$~and~$R_{1}$ denote the classes formed from $L$~and~$R$ by taking only the rational members of $L$~and~$R$. Then the classes $L_{1}$~and~$R_{1}$ form a section of the rational numbers. 
There are now two cases to distinguish. It may happen that $L_{1}$~has a greatest member~$\alpha$. In this case $\alpha$~must be also the greatest member of~$L$. For if not, we could find a greater, say~$\beta$. There are rational numbers lying between $\alpha$~and~$\beta$, and these, being less than~$\beta$, belong to~$L$, and therefore to~$L_{1}$; and this is plainly a contradiction. Hence $\alpha$~is the greatest member of~$L$. On the other hand it may happen that $L_{1}$~has no greatest member. In this case the section of the rational numbers formed by $L_{1}$~and~$R_{1}$ is a real number~$\alpha$. This number~$\alpha$ must belong to~$L$ or to~$R$. If it belongs to~$L$ we can \DPtypo{shew}{show}, precisely as before, that it is the greatest member of~$L$, and similarly, if it belongs to~$R$, it is the least member of~$R$. Thus in any case either $L$~has a greatest member or $R$~a least. Any section of the real numbers therefore `corresponds' to a real number in the sense in which a section of the rational numbers sometimes, but not always, corresponds to a rational number. This conclusion is of very great importance; for it shows that the consideration of sections of all the real numbers does not lead to any further generalisation of our idea of number. Starting from the rational numbers, we found that the idea of a section of the rational numbers led us to a new conception of a number, that of a real number, more general than that of a rational number; and it might have been expected that the idea of a section of the real numbers would have led us to a conception more general still. The discussion which precedes shows that this is not the case, and that the aggregate of real numbers, or the continuum, has a kind of completeness which the aggregate of the rational numbers lacked, a completeness which is expressed in technical language by saying that the continuum is closed. 
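%[** TN: Illustrative remark added; not in Hardy's text.]
\begin{Remark}
Thus the pair of properties `$x^{3} \leq 3x + 8$' and `$x^{3} > 3x + 8$' divides \emph{all} the real numbers into two classes $L$~and~$R$ of the kind just described; and here $L$~has a greatest member, viz.\ the number~$z$ of \SecNo[§]{15} for which $z^{3} = 3z + 8$, so that no real number now escapes classification.
\end{Remark}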
\PageSep{29} The result which we have just proved may be stated as follows: \begin{ParTheorem}{Dedekind's Theorem.} If the real numbers are divided into two classes $L$~and~$R$ in such a way that \Itemp{(i)} every number belongs to one or other of the two classes, \Itemp{(ii)} each class contains at least one number, \Itemp{(iii)} any member of~$L$ is less than any member of~$R$, \\ then there is a number~$\alpha$, which has the property that all the numbers less than it belong to~$L$ and all the numbers greater than it to~$R$. The number~$\alpha$ itself may belong to either class. \end{ParTheorem} \begin{Remark} In applications we have often to consider sections not of \emph{all} numbers but of all those contained in an \emph{interval} $\DPmod{(\beta, \gamma)}{[\beta, \gamma]}$, that is to say of all numbers~$x$ such that $\beta \leq x \leq \gamma$. A `section' of such numbers is of course a division of them into two classes possessing the properties (i),~(ii), and~(iii). Such a section may be converted into a section of \emph{all} numbers by adding to~$L$ all numbers less than~$\beta$ and to~$R$ all numbers greater than~$\gamma$. It is clear that the conclusion stated in Dedekind's Theorem still holds if we substitute `the real numbers of the interval $\DPmod{(\beta, \gamma)}{[\beta, \gamma]}$' for `the real numbers', and that the number~$\alpha$ in this case satisfies the inequalities $\beta \leq \alpha \leq \gamma$. \end{Remark} \Paragraph{18. Points of accumulation.} A system of real numbers, or of the points on a straight line corresponding to them, defined in any way whatever, is called an \Emph{aggregate} or \Emph{set} of numbers or points. The set might consist, for example, of all the positive integers, or of all the rational points. 
It is most convenient here to use the language of geometry.\footnote {The reader will hardly require to be reminded that this course is adopted solely for reasons of linguistic convenience.} Suppose then that we are given a set of points, which we will denote by~$S$. Take any point~$\xi$, which may or may not belong to $S$. Then there are two possibilities. Either (i)~it is possible to choose a positive number~$\delta$ so that the interval $\DPmod{(\xi - \delta, \xi + \delta)}{[\xi - \delta, \xi + \delta]}$ does not contain any point of~$S$, other than $\xi$~itself,\footnote {This clause is of course unnecessary if $\xi$~does not itself belong to~$S$.} or (ii)~this is not possible. \begin{Remark} Suppose, for example, that $S$~consists of the points corresponding to all the positive integers. If $\xi$~is itself a positive integer, we can take $\delta$ to be any number less than~$1$, and (i)~will be true; or, if $\xi$~is halfway between two positive integers, we can take $\delta$ to be any number less than~$\frac{1}{2}$. On the other hand, if $S$~consists of all the rational points, then, whatever the value of~$\xi$, (ii)~is true; for any interval whatever contains an infinity of rational points. \end{Remark} \PageSep{30} Let us suppose that (ii)~is true. Then any interval $\DPmod{(\xi - \delta, \xi + \delta)}{[\xi - \delta, \xi + \delta]}$, however small its length, contains at least one point~$\xi_{1}$ which belongs to~$S$ and does not coincide with~$\xi$; and this whether $\xi$~itself be a member of~$S$ or not. In this case we shall say that $\xi$~is a \Emph{point of accumulation} of~$S$. It is easy to see that the interval $\DPmod{(\xi - \delta, \xi + \delta)}{[\xi - \delta, \xi + \delta]}$ must contain, not merely one, but infinitely many points of~$S$. For, when we have determined~$\xi_{1}$, we can take an interval $\DPmod{(\xi - \delta_{1}, \xi + \delta_{1})}{[\xi - \delta_{1}, \xi + \delta_{1}]}$ surrounding~$\xi$ but not reaching as far as~$\xi_{1}$. 
But this interval also must contain a point, say~$\xi_{2}$, which is a member of~$S$ and does not coincide with~$\xi$. Obviously we may repeat this argument, with $\xi_{2}$~in the place of~$\xi_{1}$; and so on indefinitely. In this way we can determine as many points \[ \xi_{1},\quad \xi_{2},\quad \xi_{3},\ \dots \] as we please, all belonging to~$S$, and all lying inside the interval $\DPmod{(\xi - \delta, \xi + \delta)}{[\xi - \delta, \xi + \delta]}$. A point of accumulation of~$S$ may or may not be itself a point of~$S$. The examples which follow illustrate the various possibilities. \begin{Examples}{IX.} \Item{1.} If $S$~consists of the points corresponding to the positive integers, or all the integers, there are no points of accumulation. \Item{2.} If $S$~consists of all the rational points, every point of the line is a point of accumulation. \Item{3.} If $S$~consists of the points $1$, $\frac{1}{2}$, $\frac{1}{3}, \dots$, there is one point of accumulation, viz.\ the origin. \Item{4.} If $S$~consists of all the positive rational points, the points of accumulation are the origin and all positive points of the line. \end{Examples} \Paragraph{19. Weierstrass's Theorem.} The general theory of sets of points is of the utmost interest and importance in the higher branches of analysis; but it is for the most part too difficult to be included in a book such as this. There is however one fundamental theorem which is easily deduced from Dedekind's Theorem and which we shall require later. \begin{Theorem} If a set~$S$ contains infinitely many points, and is entirely situated in an interval $\DPmod{(\alpha, \beta)}{[\alpha, \beta]}$, then at least one point of the interval is a point of accumulation of~$S$. \end{Theorem} We divide the points of the line~$\Lambda$ into two classes in the following manner. The point~$P$ belongs to~$L$ if there are an \PageSep{31} infinity of points of~$S$ to the right of~$P$, and to~$R$ in the contrary case. 
Then it is evident that conditions (i)~and~(iii) of Dedekind's Theorem are satisfied; and since $\alpha$~belongs to~$L$ and $\beta$~to~$R$, condition~(ii) is satisfied also. Hence there is a point~$\xi$ such that, however small be~$\delta$, $\xi - \delta$ belongs to~$L$ and $\xi + \delta$ to~$R$, so that the interval $\DPmod{(\xi-\delta, \xi+\delta)}{[\xi - \delta, \xi + \delta]}$ contains an infinity of points of~$S$. Hence $\xi$~is a~point of accumulation of~$S$. \begin{Remark} This point may of course coincide with $\alpha$~or~$\beta$, as for instance when $\alpha = 0$, $\beta = 1$, and $S$~consists of the points $1$, $\frac{1}{2}$, $\frac{1}{3}, \dots$. In this case $0$~is the sole point of accumulation. \end{Remark} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER I.} \begin{Examples}{} \Item{1.} What are the conditions that $ax + by + cz = 0$, (1)~for all values of $x$, $y$,~$z$; (2)~for all values of $x$, $y$,~$z$ subject to $\alpha x + \beta y + \gamma z=0$; (3)~for all values of $x$, $y$,~$z$ subject to both $\alpha x + \beta y + \gamma z = 0$ and $Ax + By + Cz = 0$? \Item{2.} Any positive rational number can be expressed in one and only one way in the form \[ a_{1} + \frac{a_{2}}{1·2} + \frac{a_{3}}{1·2·3} + \dots + \frac{a_{k}}{1·2·3\dots k}, \] where $a_{1}$, $a_{2}$, \dots,~$a_{k}$ are integers, and \[ 0 \leq a_{1},\quad 0 \leq a_{2} < 2,\quad 0 \leq a_{3} < 3,\ \dots\quad 0 < a_{k} < k. \] \Item{3.} Any positive rational number can be expressed in one and one way only as a simple continued fraction \[ %[** TN: Modernized notation, added second-to-last numerator] a_{1} + \cfrac{1}{a_{2} + \cfrac{1}{a_{3} + \cfrac{1}{\dots + \cfrac{1}{a_{n}}}}}\;, \] where $a_{1}$, $a_{2}, \dots$ are positive integers, of which the first only may be zero. [Accounts of the theory of such continued fractions will be found in text-books of algebra. 
For further information as to modes of representation of rational and irrational numbers, see Hobson, \textit{Theory of Functions of a Real Variable}, pp.~45--49.] \Item{4.} Find the rational roots (if any) of $9x^{3} - 6x^{2} + 15x - 10 = 0$. \Item{5.} A line~$AB$ is divided at~$C$ \textit{in aurea sectione} (Euc.~\textsc{ii}.~11)---\ie\ so that $AB·AC = BC^{2}$. Show that the ratio~$AC/AB$ is irrational. [A direct geometrical proof will be found in Bromwich's \textit{Infinite Series}, §\:143, p.~363.] \Item{6.} $A$~is irrational. In what circumstances can $\smash{\dfrac{aA + b}{cA + d}}$, where $a$, $b$, $c$,~$d$ are rational, be rational? \PageSep{32} \Item{7.} \Topic{Some elementary inequalities.} In what follows $a_{1}$, $a_{2}, \dots$ denote positive numbers (including zero) and $p$, $q, \dots$ positive integers. Since $a_{1}^{p} - a_{2}^{p}$ and $a_{1}^{q} - a_{2}^{q}$ have the same sign, we have $(a_{1}^{p} - a_{2}^{p}) (a_{1}^{q} - a_{2}^{q}) \geq 0$, or \[ a_{1}^{p+q} + a_{2}^{p+q} \geq a_{1}^{p} a_{2}^{q} + a_{1}^{q} a_{2}^{p}, \Tag{(1)} \] an inequality which may also be written in the form \[ \frac{a_{1}^{p+q} + a_{2}^{p+q}}{2} \geq \left(\frac{a_{1}^{p} + a_{2}^{p}}{2}\right) \left(\frac{a_{1}^{q} + a_{2}^{q}}{2}\right). \Tag{(2)} \] By repeated application of this formula we obtain \[ \frac{a_{1}^{p+q+r+\dots} + a_{2}^{p+q+r+\dots}}{2} \geq \left(\frac{a_{1}^{p} + a_{2}^{p}}{2}\right) \left(\frac{a_{1}^{q} + a_{2}^{q}}{2}\right) \left(\frac{a_{1}^{r} + a_{2}^{r}}{2}\right) \dots, \Tag{(3)} \] and in particular \[ \frac{a_{1}^{p} + a_{2}^{p}}{2} \geq \left(\frac{a_{1} + a_{2}}{2}\right)^{p}. \Tag{(4)} \] When $p = q = 1$ in~\Eq{(1)}, or $p = 2$ in~\Eq{(4)}, the inequalities are merely different forms of the inequality $a_{1}^{2} + a_{2}^{2} \geq 2a_{1} a_{2}$, which expresses the fact that the arithmetic mean of two positive numbers is not less than their geometric mean. 
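%[** TN: Numerical illustration added; not in Hardy's text.]
Thus with $a_{1} = 1$, $a_{2} = 2$, $p = 1$, $q = 2$, \Eq{(1)} asserts that $1 + 8 \geq 4 + 2$; and with $p = 3$, \Eq{(4)} asserts that $\tfrac{9}{2} \geq \tfrac{27}{8}$.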
\Item{8.} \Topic{Generalisations for $n$~numbers.} If we write down the $\frac{1}{2} n(n - 1)$ inequalities of the type~\Eq{(1)} which can be formed with $n$~numbers $a_{1}$, $a_{2}$, \dots,~$a_{n}$, and add the results, we obtain the inequality \[ n \tsum{a^{p+q}} \geq \tsum a^{p} \tsum a^{q}, \Tag{(5)} \] or \[ \left(\tsum a^{p+q}\right)/n \geq \left\{\left(\tsum a^{p}\right)/n\right\} \left\{\left(\tsum a^{q}\right)/n\right\}. \Tag{(6)} \] Hence we can deduce an obvious extension of~\Eq{(3)} which the reader may formulate for himself, and in particular the inequality \[ \left(\tsum a^{p}\right)/n \geq \left\{\left(\tsum a\right)/n\right\}^{p}. \Tag{(7)} \] \Item{9.} \Topic{The general form of the theorem concerning the arithmetic and geometric means.} An inequality of a slightly different character is that which asserts that the arithmetic mean of $a_{1}$, $a_{2}$, \dots,~$a_{n}$ is not less than their geometric mean. Suppose that $a_{r}$~and~$a_{s}$ are the greatest and least of the~$a$'s (if there are several greatest or least~$a$'s we may choose any of them indifferently), and let $G$ be their geometric mean. We may suppose $G > 0$, as the truth of the proposition is obvious when $G = 0$. If now we replace $a_{r}$~and~$a_{s}$ by \[ a_{r}' = G,\quad a_{s}' = a_{r}a_{s}/G, \] we do not alter the value of the geometric mean; and, since \[ a_{r}' + a_{s}' - a_{r} - a_{s} = (a_{r} - G)(a_{s} - G)/G \leq 0, \] we certainly do not increase the arithmetic mean. It is clear that we may repeat this argument until we have replaced each of $a_{1}$, $a_{2}$, \dots,~$a_{n}$ by~$G$; at most $n$~repetitions will be necessary. As the final value of the arithmetic mean is~$G$, the initial value cannot have been less. \PageSep{33} \Item{10.} \Topic{Schwarz's inequality.} Suppose that $a_{1}$, $a_{2}$, \dots,~$a_{n}$ and $b_{1}$, $b_{2}$, \dots,~$b_{n}$ are any two sets of numbers positive or negative. 
It is easy to verify the identity \[ \left(\tsum a_{r} b_{r}\right)^{2} = \tsum a_{r}^{2} \DPtypo{\tsum a_{s}^{2}}{\tsum b_{s}^{2}} - \tsum (a_{r} b_{s} - a_{s} b_{r})^{2}, \] where $r$~and~$s$ assume the values $1$, $2$, \dots,~$n$. It follows that \[ \left(\tsum a_{r} b_{r}\right)^{2} \leq \tsum a_{r}^{2} \tsum b_{r}^{2}, \] an inequality usually known as Schwarz's (though due originally to Cauchy). \Item{11.} If $a_{1}$, $a_{2}$, \dots,~$a_{n}$ are all positive, and $s_{n} = a_{1} + a_{2} + \dots + a_{n}$, then \[ (1 + a_{1})(1 + a_{2}) \dots (1 + a_{n}) \leq 1 + s_{n} + \frac{s_{n}^{2}}{2!} + \dots + \frac{s_{n}^{n}}{n!}. \] \MathTrip{1909.} \Item{12.} If $a_{1}$, $a_{2}$, \dots,~$a_{n}$ and $b_{1}$, $b_{2}$, \dots,~$b_{n}$ are two sets of positive numbers, arranged in descending order of magnitude, then \[ (a_{1} + a_{2} + \dots + a_{n}) (b_{1} + b_{2} + \dots + b_{n}) \leq n(a_{1}b_{1} + a_{2}b_{2} + \dots + a_{n}b_{n}). \] \Item{13.} If $a$, $b$, $c$, \dots~$k$ and $A$, $B$, $C$, \dots~$K$ are two sets of numbers, and all of the first set are positive, then \[ \frac{aA + bB + \dots + kK}{a + b + \dots + k} \] lies between the algebraically least and greatest of $A$, $B$, \dots,~$K$. \Item{14.} If $\sqrt{p}$,~$\sqrt{q}$ are dissimilar surds, and $a + b\sqrt{p} + c\sqrt{q} + d\sqrtp{pq} = 0$, where $a$, $b$, $c$,~$d$ are rational, then $a = 0$, $b = 0$, $c = 0$, $d = 0$. [Express $\sqrt{p}$ in the form $M + N \sqrt{q}$, where $M$~and~$N$ are rational, and apply the theorem of \SecNo[§]{14}.] \Item{15.} Show that if $a\sqrt{2} + b\sqrt{3} + c\sqrt{5} = 0$, where $a$,~$b$,~$c$ are rational numbers, then $a = 0$, $b = 0$, $c = 0$. \Item{16.} Any polynomial in $\sqrt{p}$~and~$\sqrt{q}$, with rational coefficients (\ie\ any sum of a finite number of terms of the form $A(\sqrt{p})^{m}(\sqrt{q})^{n}$, where $m$~and~$n$ are integers, and $A$~rational), can be expressed in the form \[ a + b\sqrt{p} + c\sqrt{q} + d\DPtypo{\sqrt{pq}}{\sqrtp{pq}}, \] where $a$, $b$, $c$,~$d$ are rational.
\Item{17.} Express $\dfrac{a + b\sqrt{p} + c\sqrt{q}}{d + e\sqrt{p} + f\sqrt{q}}$, where $a$,~$b$,~etc.\ are rational, in the form \[ A + B\sqrt{p} + C\sqrt{q} + D\DPtypo{\sqrt{pq}}{\sqrtp{pq}}, \] where $A$, $B$, $C$,~$D$ are rational. [Evidently %[** TN: Set on one line in the original] \begin{align*} \frac{a + b\sqrt{p} + c\sqrt{q}}{d + e\sqrt{p} + f\sqrt{q}} &= \frac{(a + b\sqrt{p} + c\sqrt{q})(d + e\sqrt{p} - f\sqrt{q})} {(d + e\sqrt{p})^{2} - f^{2}q} \\ &= \frac{\alpha + \beta\sqrt{p} + \gamma\sqrt{q} + \delta\DPtypo{\sqrt{pq}}{\sqrtp{pq}}} {\epsilon + \zeta\sqrt{p}}, \end{align*} where $\alpha$, $\beta$,~etc.\ are rational numbers which can easily be found. The required \PageSep{34} reduction may now be easily completed by multiplication of numerator and denominator by $\epsilon - \zeta\sqrt{p}$. For example, prove that \[ \frac{1}{1 + \sqrt{2} + \sqrt{3}} = \frac{1}{2} + \frac{1}{4}\sqrt{2} - \frac{1}{4}\sqrt{6}.] \] \Item{18.} If $a$, $b$, $x$, $y$ are rational numbers such that \[ (ay - bx)^{2} + 4(a - x)(b - y) = 0, \] then either (i)~$x = a$, $y = b$ or (ii)~$1 - ab$ and~$1 - xy$ are squares of rational numbers. \MathTrip{1903.} \Item{19.} If all the values of $x$~and~$y$ given by \[ ax^{2} + 2hxy + by^{2} = 1,\quad a'x^{2} + 2h'xy + b'y^{2} = 1 \] (where $a$,~$h$,~$b$, $a'$,~$h'$,~$b'$ are rational) are rational, then \[ (h - h')^{2} - (a - a')(b - b'),\quad (ab' - a'b)^{2} + 4(ah' - a'h)(bh' - b'h) \] are both squares of rational numbers. \MathTrip{1899.} \Item{20.} Show that $\sqrt{2}$~and~$\sqrt{3}$ are cubic functions of $\sqrt{2} + \sqrt{3}$, with rational coefficients, and that $\sqrt{2} - \sqrt{6} + 3$ is the ratio of two linear functions of $\sqrt{2} + \sqrt{3}$. \MathTrip{1905.} \Item{21.} The expression \[ \sqrtb{a + 2m\sqrtp{a - m^{2}}} + \sqrtb{a - 2m\sqrtp{a - m^{2}}} \] is equal to~$2m$ if $2m^{2} > a > m^{2}$, and to $2\sqrtp{a - m^{2}}$ if $a > 2m^{2}$. 
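%[** TN: Hint added; not in Hardy's text.]
[Observe that $a \pm 2m\sqrtp{a - m^{2}} = \{\sqrtp{a - m^{2}} \pm m\}^{2}$, and that $\sqrtp{a - m^{2}}$ is less than~$m$ if $2m^{2} > a > m^{2}$, and greater than~$m$ if $a > 2m^{2}$.]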
\Item{22.} Show that any polynomial in~$\sqrt[3]{2}$, with rational coefficients, can be expressed in the form \[ a + b\sqrt[3]{2} + c\sqrt[3]{4}, \] where $a$,~$b$,~$c$ are rational. More generally, if $p$~is any rational number, any polynomial in~$\sqrt[m]{p}$ with rational coefficients can be expressed in the form \[ a_{0} + a_{1}\alpha + a_{2}\alpha^{2} + \dots + a_{m-1}\alpha^{m-1}, \] where $a_{0}$, $a_{1}, \dots$ are rational and $\alpha = \sqrt[m]{p}$. For any such polynomial is of the form \[ b_{0} + b_{1}\alpha + b_{2}\alpha^{2} + \dots + b_{k}\alpha^{k}, \] where the~$b$'s are rational. If $k \leq m - 1$, this is already of the form required. If $k > m - 1$, let $\alpha^{r}$~be any power of~$\alpha$ higher than the~$(m - 1)$th. Then $r = \lambda m + s$, where $\lambda$~is an integer and $0 \leq s \leq m - 1$; and $\alpha^{r} = \alpha^{\lambda m + s} = p^{\lambda}\alpha^{s}$. Hence we can get rid of all powers of~$\alpha$ higher than the~$(m - 1)$th. \Item{23.} Express $(\sqrt[3]{2} - 1)^{5}$ and $(\sqrt[3]{2} - 1)/(\sqrt[3]{2} + 1)$ in the form $a + b\sqrt[3]{2} + c\sqrt[3]{4}$, where $a$,~$b$,~$c$ are rational. [Multiply numerator and denominator of the second expression by $\sqrt[3]{4} - \sqrt[3]{2} + 1$.] \Item{24.} If \[ a + b\sqrt[3]{2} + c\sqrt[3]{4} = 0, \] where $a$,~$b$,~$c$ are rational, then $a = 0$, $b = 0$, $c = 0$. \PageSep{35} [Let $y = \sqrt[3]{2}$. Then $y^{3} = 2$ and \[ cy^{2} + by + a = 0. \] Hence $2cy^{2} + 2by + ay^{3} = 0$ or \[ ay^{2} + 2cy + 2b = 0. \] Multiplying these two quadratic equations by $a$~and~$c$ and subtracting, we obtain $(ab - 2c^{2})y + a^{2} - 2bc = 0$, or $y = -(a^{2} - 2bc)/(ab - 2c^{2})$, a rational number, which is impossible. The only alternative is that $ab - 2c^{2} = 0$, $a^{2} - 2bc = 0$. Hence $ab = 2c^{2}$, $a^{4} = 4b^{2}c^{2}$. 
If neither $a$~nor~$b$ is zero, we can divide the second equation by the first, which gives $a^{3} = 2b^{3}$: and this is impossible, since $\sqrt[3]{2}$~cannot be equal to the rational number~$a/b$. Hence $ab = 0$, $c = 0$, and it follows from the original equation that $a$,~$b$, and~$c$ are all zero. As a corollary, if $a + b\sqrt[3]{2} + c\sqrt[3]{4} = d + e\sqrt[3]{2} + f\sqrt[3]{4}$, then $a = d$, $b = e$, $c = f$. It may be proved, more generally, that if \[ a_{0} + a_{1}p^{1/m} + \dots + a_{m-1}p^{(m-1)/m} = 0, \] $p$ not being a perfect $m$th~power, then $a_{0} = a_{1} = \dots = a_{m-1} = 0$; but the proof is less simple.] \Item{25.} If $A + \sqrt[3]{B} = C + \sqrt[3]{D}$, then either $A = C$, $B = D$, or $B$~and~$D$ are both cubes of rational numbers. \Item{26.} If $\sqrt[3]{A} + \sqrt[3]{B} + \sqrt[3]{C} = 0$, then either one of $A$,~$B$,~$C$ is zero, and the other two equal and opposite, or $\sqrt[3]{A}$,~$\sqrt[3]{B}$,~$\sqrt[3]{C}$ are rational multiples of the same surd~$\sqrt[3]{X}$. \Item{27.} Find rational numbers $\alpha$,~$\beta$ such that \[ \sqrtp[3]{7 + 5\sqrt{2}} = \alpha + \beta\sqrt{2}. \] \Item{28.} If $(a - b^{3})b > 0$, then \[ \bigsqrtb[3]{a + \frac{9b^{3} + a}{3b}\bigsqrtp{\frac{a - b^{3}}{3b}}} + \bigsqrtb[3]{a - \frac{9b^{3} + a}{3b}\bigsqrtp{\frac{a - b^{3}}{3b}}} \] is rational. [Each of the numbers under a cube root is of the form \[ \left\{\alpha + \beta\bigsqrtp{\frac{a - b^{3}}{3b}}\right\}^{3} \] where $\alpha$~and~$\beta$ are rational.] \Item{29.} If $\alpha = \sqrt[n]{p}$, any polynomial in~$\alpha$ is the root of an equation of degree~$n$, with rational coefficients. [We can express the polynomial ($x$~say) in the form \[ x = l_{1} + m_{1}\alpha + \dots + r_{1}\alpha^{(n-1)}, \] where $l_{1}$,~$m_{1}, \dots$ are rational, as in Ex.~22. \PageSep{36} Similarly \begin{alignat*}{4} x^{2} &= l_{2} &&+ m_{2}\DPtypo{a}{\alpha} &&+ \dots &&+ r_{2}\DPtypo{a}{\alpha}^{(n-1)}, \\ \multispan{9}{\dotfill} \\ x^{n} &= l_{n} &&+ m_{n}\DPtypo{a}{\alpha} &&+ \dots &&+ r_{n}\DPtypo{a}{\alpha}^{(n-1)}.
\end{alignat*} Hence \[ L_{1}x + L_{2}x^{2} + \dots + L_{n}x^{n} = \Delta, \] where $\Delta$~is the determinant \[ \left| \begin{array}{cccc} l_{1} & m_{1} & \dots & r_{1} \\ l_{2} & m_{2} & \dots & r_{2} \\ \hdotsfor{4} \\ l_{n} & m_{n} & \dots & r_{n} \\ \end{array} \right| \] and $L_{1}$, $L_{2}, \dots$ the minors of $l_{1}$, $l_{2}, \dots$.] \Item{30.} Apply this process to $x = p + \sqrt{q}$, and deduce the theorem of \SecNo[§]{14}. \Item{31.} Show that $y = a + bp^{1/3} + cp^{2/3}$ satisfies the equation \[ y^{3} - 3ay^{2} + 3y(a^{2} - bcp) - a^{3} - b^{3}p - c^{3}p^{2} + 3abcp = 0. \] \Item{32.} \Topic{Algebraical numbers.} We have seen that some irrational numbers (such as~$\sqrt{2}$) are roots of equations of the type \[ a_{0}x^{n} + a_{1}x^{n-1} + \dots + a_{n} = 0, \] where $a_{0}$, $a_{1}$, \dots,~$a_{n}$ are integers. Such irrational numbers are called \emph{algebraical} numbers: all other irrational numbers, such as~$\pi$ (\SecNo[§]{15}), are called \emph{transcendental} numbers. Show that if $x$~is an algebraical number, then so are~$kx$, where $k$~is any rational number, and~$x^{m/n}$, where $m$~and~$n$ are any integers. \Item{33.} If $x$~and~$y$ are algebraical numbers, then so are $x + y$, $x - y$, $xy$ and~$x/y$. [We have equations \begin{alignat*}{4} a_{0}x^{m} &+ a_{1}x^{m-1} &&+ \dots &&+ a_{m} &&= 0, \\ b_{0}y^{n} &+ b_{1}y^{n-1} &&+ \dots &&+ b_{n} &&= 0, \end{alignat*} where the $a$'s~and~$b$'s are integers. Write $x + y = z$, $y = z - x$ in the second, and eliminate~$x$. We thus get an equation of similar form \[ c_{0}z^{p} + c_{1}z^{p-1} + \dots + c_{p} = 0, \] satisfied by~$z$. Similarly for the other cases.] \Item{34.} If \[ a_{0}x^{n} + a_{1}x^{n-1} + \dots + a_{n} = 0, \] where $a_{0}$, $a_{1}$, \dots,~$a_{n}$ are any algebraical numbers, then $x$~is an algebraical number. 
[We have $n + 1$~equations of the type \[ a_{0, r}a_{r}^{m_{r}} + a_{1, r}a_{r}^{m_{r}-1} + \dots + a_{m_{r}, r} = 0\quad (r = 0,\ 1,\ \dots,\ n), \] in which the coefficients $a_{0, r}$, $a_{1, r}, \dots$ are integers\Add{.} Eliminate $a_{0}$\DPtypo{.}{,} $a_{1}$, \dots,~$a_{n}$ between these and the original equation for~$x$.] \Item{35.} Apply this process to the equation $x^{2} - 2x\sqrt{2} + \sqrt{3} = 0$. [The result is $x^{8} - 16x^{6} + 58x^{4} - 48x^{2} + 9 = 0$.] \PageSep{37} \Item{36.} Find equations, with rational coefficients, satisfied by \[ 1 + \sqrt{2} + \sqrt{3},\quad \frac{\sqrt{3} + \sqrt{2}} {\sqrt{3} - \sqrt{2}},\quad \sqrtb{\sqrt{3}+ \sqrt{2}} + \sqrtb{\sqrt{3} - \sqrt{2}},\quad \sqrt[3]{2} + \sqrt[3]{3}. \] \Item{37.} If $x^{3} = x + 1$, then $x^{3n} = a_{n}x + b_{n} + c_{n}/x$, where \[ a_{n+1} = a_{n} + b_{n},\quad b_{n+1} = a_{n} + b_{n} + c_{n},\quad c_{n+1} = a_{n} + c_{n}. \] \Item{38.} If $x^{6} + x^{5} - 2x^{4} - x^{3} + x^{2} + 1 = 0$ and $y = x^{4} - x^{2} + x - 1$, then $y$~satisfies a quadratic equation with rational coefficients. \MathTrip{1903.} [It will be found that $y^{2} + y + 1 = 0$.] \end{Examples} \PageSep{38} \Chapter{II}{FUNCTIONS OF REAL VARIABLES} \Paragraph{20. The idea of a function.} Suppose that $x$~and~$y$ are two continuous real variables, which we may suppose to be represented geometrically by distances $A_{0}P = x$, $B_{0}Q = y$ measured from fixed points $A_{0}$,~$B_{0}$ along two straight lines $\Lambda$,~$\Mu$. And let us suppose that the positions of the points $P$~and~$Q$ are not independent, but connected by a relation which we can imagine to be expressed as a relation between $x$~and~$y$: so that, when $P$~and~$x$ are known, $Q$~and~$y$ are also known. We might, for example, suppose that $y = x$, or $y = 2x$, or~$\frac{1}{2}x$, or~$x^{2} + 1$. In all of these cases the value of~$x$ determines that of~$y$. 
Or again, we might suppose that the relation between $x$~and~$y$ is given, not by means of an explicit formula for~$y$ in terms of~$x$, but by means of a geometrical construction which enables us to determine~$Q$ when~$P$ is known. In these circumstances $y$~is said to be a \emph{function} of~$x$. This notion of functional dependence of one variable upon another is perhaps the most important in the whole range of higher mathematics. In order to enable the reader to be certain that he understands it clearly, we shall, in this chapter, illustrate it by means of a large number of examples. But before we proceed to do this, we must point out that the simple examples of functions mentioned above possess three characteristics which are by no means involved in the general idea of a function, viz.: \Item{(1)} $y$~is determined \emph{for every value of~$x$}; \Item{(2)} to each value of~$x$ for which $y$~is given corresponds \emph{one and only one value of~$y$}; \Item{(3)} the relation between $x$~and~$y$ is expressed by means of \emph{an analytical formula}, from which the value of~$y$ corresponding to a given value of~$x$ can be calculated by direct substitution of the latter. \PageSep{39} It is indeed the case that these particular characteristics are possessed by many of the most important functions. But the consideration of the following examples will make it clear that they are by no means essential to a function. All that is essential is that there should be some relation between $x$~and~$y$ such that to some values of~$x$ at any rate correspond values of~$y$. \begin{Examples}{X.} \Item{1.} Let $y = x$ or~$2x$ or~$\frac{1}{2}x$ or $x^{2} +1$. Nothing further need be said at present about cases such as these. \Item{2.} Let $y = 0$ whatever be the value of~$x$. Then $y$~is a function of~$x$, for we can give~$x$ any value, and the corresponding value of~$y$ (viz.~$0$) is known. 
In this case the functional relation makes the same value of~$y$ correspond to all values of~$x$. The same would be true were $y$~equal to~$1$ or~$-\frac{1}{2}$ or~$\sqrt{2}$ instead of~$0$. Such a function of~$x$ is called \emph{a constant}. \Item{3.} Let $y^{2} = x$. Then if $x$~is positive this equation defines \emph{two} values of~$y$ corresponding to each value of~$x$, viz.~$\pm\sqrt{x}$. If $x = 0$, $y = 0$. Hence to the particular value~$0$ of~$x$ corresponds \emph{one} and only one value of~$y$. But if $x$~is negative there is \emph{no} value of~$y$ which satisfies the equation. That is to say, the function~$y$ is not defined for negative values of~$x$. This function therefore possesses the characteristic~(3), but neither (1)~nor~(2). \Item{4.} Consider a volume of gas maintained at a constant temperature and contained in a cylinder closed by a sliding piston.\footnote {I borrow this instructive example from Prof.\ H.~S. Carslaw's \textit{Introduction to the Calculus.}} Let $A$ be the area of the cross section of the piston and $W$~its weight. The gas, held in a state of compression by the piston, exerts a certain pressure $p_{0}$~per unit of area on the piston, which balances the weight~$W$, so that \[ W = Ap_{0}. \] Let $v_{0}$ be the volume of the gas when the system is thus in equilibrium. If additional weight is placed upon the piston the latter is forced downwards. The volume~($v$) of the gas diminishes; the pressure~($p$) which it exerts upon unit area of the piston increases. Boyle's experimental law asserts that the product of $p$~and~$v$ is very nearly constant, a correspondence which, if exact, would be represented by an equation of the type \[ pv = a, \Tag{(i)} \] where $a$~is a number which can be determined approximately by experiment. Boyle's law, however, only gives a reasonable approximation to the facts provided the gas is not compressed too much.
When $v$~is decreased and $p$~increased beyond a certain point, the relation between them is no longer expressed with tolerable exactness by the equation~\Eq{(i)}. It is known that a \PageSep{40} much better approximation to the true relation can then be found by means of what is known as `van~der Waals' law', expressed by the equation \[ \left(p + \frac{\alpha}{v^{2}}\right)(v - \beta) = \gamma, \Tag{(ii)} \] where $\alpha$, $\beta$, $\gamma$ are numbers which can also be determined approximately by experiment. Of course the two equations, even taken together, do not give anything like a complete account of the relation between $p$~and~$v$. This relation is no doubt in reality much more complicated, and its form changes, as $v$~varies, from a form nearly equivalent to~\Eq{(i)} to a form nearly equivalent to~\Eq{(ii)}. But, from a mathematical point of view, there is nothing to prevent us from contemplating an ideal state of things in which, for all values of~$v$ not less than a certain value~$V$, \Eq{(i)}~would be exactly true, and \Eq{(ii)}~exactly true for all values of~$v$ less than~$V$. And then we might regard the two equations as together defining~$p$ as a function of~$v$. It is an example of a function which for some values of~$v$ is defined by one formula and for other values of~$v$ is defined by another. This function possesses the characteristic~(2)\DPtypo{.}{;} to any value of~$v$ only one value of~$p$ corresponds: but it does not possess~(1). For $p$~is not defined as a function of~$v$ for negative values of~$v$; a `negative volume' means nothing, and so negative values of~$v$ do not present themselves for consideration at all. \Item{5.} Suppose that a perfectly elastic ball is dropped (without rotation) from a height~$\frac{1}{2}g\tau^{2}$ on to a fixed horizontal plane, and rebounds continually. 
The ordinary formulae of elementary dynamics, with which the reader is probably familiar, show that $h = \frac{1}{2}gt^{2}$ if $0 \leq t \leq \tau$, $h = \frac{1}{2}g(2\tau - t)^{2}$ if $\tau \leq t \leq 3\tau$, and generally \[ h = \tfrac{1}{2}g(2n\tau - t)^{2} \] if $(2n - 1)\tau \leq t \leq (2n + 1)\tau$, $h$~being the depth of the ball, at time~$t$, below its original position. Obviously $h$~is a function of~$t$ which is only defined for positive values of~$t$. \Item{6.} Suppose that $y$~is defined as being \emph{the largest prime factor of~$x$}. This is an instance of a definition which only applies to a particular class of values of~$x$, viz.\ \emph{integral} values. `The largest prime factor of~$\frac{11}{3}$ or of~$\sqrt{2}$ or of~$\pi$' means nothing, and so our defining relation fails to define for such values of~$x$ as these. Thus this function does not possess the characteristic~(1). It does possess~(2), but not~(3), as there is no simple formula which expresses~$y$ in terms of~$x$. \Item{7.} Let $y$~be defined as \emph{the denominator of~$x$ when $x$~is expressed in its lowest terms}. This is an example of a function which is defined if and only if $x$~is \emph{rational}. Thus $y = 7$ if $x = -11/7$: but $y$~is not defined for $x = \sqrt{2}$, `the denominator of~$\sqrt{2}$' being a meaningless form of words. \PageSep{41} \Item{8.} Let $y$~be defined as \emph{the height in inches of policeman~$Cx$, in the Metropolitan Police, at {\upshape5.30}~\DPchg{p.m.}{\textsc{p.m.}}\ on {\upshape8}~Aug.~{\upshape1907}}. Then $y$~is defined for a certain number of integral values of~$x$, viz.\ $1$, $2$, \dots,~$N$, where $N$~is the total number of policemen in division~$C$ at that particular moment of time. \end{Examples} \Paragraph{21. The graphical representation of functions.} Suppose that the variable~$y$ is a function of the variable~$x$. 
It will generally be open to us also to regard~$x$ as a function of~$y$, in virtue of the functional relation between $x$~and~$y$. But for the present we shall look at this relation from the first point of view. We shall then call~$x$ the \emph{independent variable} and~$y$ the \emph{dependent variable}; and, when the particular form of the functional relation is not specified, we shall express it by writing \[ y = f(x) \] (or $F(x)$, $\phi(x)$, $\psi(x), \dots$, as the case may be). The nature of particular functions may, in very many cases, be illustrated and made easily intelligible as follows. Draw two lines $OX$,~$OY$ at right angles to one another and produced indefinitely in both directions. We can represent values of $x$~and~$y$ by distances measured from~$O$ along the lines $OX$,~$OY$ respectively, regard being paid, of course, to sign, and the positive directions of measurement being those indicated by arrows in \Fig{6}. %[Illustration: Fig. 6.] \Figure[1.75in]{6}{p041} Let $a$ be any value of~$x$ for which $y$~is defined and has (let us suppose) the single value~$b$. Take $OA = a$, $OB = b$, and complete the rectangle~$OAPB$. Imagine the point~$P$ marked on the diagram. This marking of the point~$P$ may be regarded as showing that the value of~$y$ for $x = a$ is~$b$. If to the value~$a$ of~$x$ correspond several values of~$y$ (say $b$,~$b'$,~$b''$), we have, instead of the single point~$P$, a number of points $P$,~$P'$,~$P''$. We shall call~$P$ the \emph{point} $(a, b)$; $a$~and~$b$ the \emph{coordinates of~$P$ referred to the axes $OX$,~$OY$}; $a$~the \emph{abscissa}, $b$~the \emph{ordinate} of~$P$; $OX$~and~$OY$ the \emph{axis of~$x$} and the \emph{axis of~$y$}, or together the \PageSep{42} \emph{axes of coordinates}, and $O$~the \emph{origin of coordinates}, or simply the \emph{origin}. 
Let us now suppose that for all values~$a$ of~$x$ for which $y$~is defined, the value~$b$ (or values $b$,~$b'$,~$b'', \dots$) of~$y$, and the corresponding point~$P$ (or points $P$,~$P'$,~$P'', \dots$), have been determined. We call the aggregate of all these points the \Emph{graph} of the function~$y$. To take a very simple example, suppose that $y$~is defined as a function of~$x$ by the equation \[ Ax + By + C = 0, \Tag{(1)} \] where $A$, $B$, $C$ are any fixed numbers.\footnote {If $B = 0$, $y$~does not occur in the equation. We must then regard~$y$ as a function of~$x$ defined for one value only of~$x$, viz.\ $x = -C/A$, and then having \emph{all} values.} Then $y$~is a function of~$x$ which possesses all the characteristics (1),~(2),~(3) of \SecNo[§]{20}. It is easy to show that \emph{the graph of~$y$ is a straight line}. The reader is in all probability familiar with one or other of the various proofs of this proposition which are given in text-books of Analytical Geometry. We shall sometimes use another mode of expression. We shall say that when $x$~and~$y$ vary in such a way that equation~\Eq{(1)} is always true, \emph{the locus of the point~$(x, y)$ is a straight line}, and we shall call~\Eq{(1)} \emph{the equation of the locus}, and say that the equation \emph{represents} the locus. This use of the terms `locus', `equation of the locus' is quite general, and may be applied whenever the relation between $x$~and~$y$ is capable of being represented by an analytical formula. The equation $Ax + By + C = 0$ is \emph{the general equation of the first degree}, for $Ax + By + C$ is the most general polynomial in $x$~and~$y$ which does not involve any terms of degree higher than the first in $x$~and~$y$. Hence \emph{the general equation of the first degree represents a straight line}. It is equally easy to prove the converse proposition that \emph{the equation of any straight line is of the first degree}. 
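\begin{Remark}
When $B \neq 0$, the fact that the graph is a straight line may be seen by solving equation~\Eq{(1)} for~$y$, which gives
\[
  y = -\frac{A}{B}\,x - \frac{C}{B},
\]
an equation of the familiar form $y = mx + c$, in which the constants $m$~and~$c$ are the slope of the line and its intercept on the axis of~$y$.
\end{Remark}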
We may mention a few further examples of interesting geometrical loci defined by equations. An equation of the form \[ (x - \alpha)^{2} + (y - \beta)^{2} = \rho^{2}, \] \PageSep{43} or \[ x^{2} + y^{2} + 2Gx + 2Fy + C = 0, \] where $G^{2} + F^{2} - C > 0$, represents a circle. The equation \[ Ax^{2} + 2Hxy + By^{2} + 2Gx + 2Fy + C = 0 \] (\emph{the general equation of the second degree}) represents, assuming that the coefficients satisfy certain inequalities, a conic section, \ie\ an ellipse, parabola, or hyperbola. For further discussion of these loci we must refer to books on Analytical Geometry. \Paragraph{22. Polar coordinates.} In what precedes we have determined the position of~$P$ by the lengths of its coordinates $OM = x$, $MP = y$. If $OP = r$ and $MOP = \theta$, $\theta$~being an angle between $0$~and~$2\pi$ (measured in the positive direction), it is evident that \begin{gather*} x = r\cos\theta,\qquad y = r\sin\theta, \\ r = \sqrtp{x^{2} + y^{2}},\quad \cos\theta : \sin\theta : 1 :: x : y : r, \end{gather*} and that the position of~$P$ is equally well determined by a knowledge of $r$~and~$\theta$. We call $r$~and~$\theta$ the \emph{polar coordinates} of~$P$. The former, it should be observed, is essentially positive.\footnote {Polar coordinates are sometimes defined so that $r$~may be positive or negative. In this case two pairs of coordinates---\eg\ $(1, 0)$ and $(-1, \pi)$---correspond to the same point. The distinction between the two systems may be illustrated by means of the equation $l/r = 1 - e\cos\theta$, where $l > 0$, $e > 1$. According to our definitions $r$~must be positive and therefore $\cos\theta < 1/e$: the equation represents one branch only of a hyperbola, the other having the equation $-l/r = 1 - e\cos\theta$. With the system of coordinates which admits negative values of~$r$, the equation represents the whole hyperbola.} %[Illustration: Fig. 7.] 
\Figure[1.75in]{7}{p043} If $P$~moves on a locus there will be some relation between $r$~and~$\theta$, say $r = f(\theta)$ or $\theta = F(r)$. This we call the \emph{polar equation} of the locus. The polar equation may be deduced from the $(x, y)$ equation (or \textit{vice versa}) by means of the formulae above. Thus the polar equation of a straight line is of the form \[ r\cos(\theta - \alpha) = p, \] where $p$~and~$\alpha$ are constants. The equation $r = 2a\cos\theta$ represents a circle passing through the origin; and the general equation of a circle is of the form \[ r^{2} + c^{2} - 2rc\cos(\theta - \alpha) = A^{2}, \] where $A$, $c$, and~$\alpha$ are constants. \PageSep{44} \Paragraph{23. Further examples of functions and their graphical representation.} The examples which follow will give the reader a better notion of the infinite variety of possible types of functions. \Topic{\Item{A.} Polynomials.} A \emph{polynomial} in~$x$ is a function of the form \[ a_{0}x^{m} + a_{1}x^{m-1} + \dots + a_{m}, \] where $a_{0}$, $a_{1}$, \dots,~$a_{m}$ are constants. The simplest polynomials are the simple powers $y = x$, $x^{2}$, $x^{3}$,~\dots, $x^{m}, \dots$. The graph of the function~$x^{m}$ is of two distinct types, according as $m$~is even or odd. First let $m = 2$. Then three points on the graph are $(0, 0)$, $(1, 1)$, $(-1, 1)$. Any number of additional points on the graph may be found by assigning other special values to~$x$: thus the values \begin{alignat*}{6} x &= \tfrac{1}{2},\quad &&2,\quad &&3,\quad -&&\tfrac{1}{2},\quad -&&2,\quad -&&3 \\ \intertext{give} y &= \tfrac{1}{4},\quad &&4,\quad &&9,\quad &&\tfrac{1}{4},\quad &&4,\quad &&9. \end{alignat*} If the reader will plot off a fair number of points on the graph, he will be led to conjecture that the form of the graph is something like that shown in \Fig{8}.
If he draws a curve through the special points which he has proved to lie on the graph and then tests its accuracy by giving~$x$ new values, and calculating the corresponding values of~$y$, he will find that they lie as near to the curve as it is reasonable to expect, when the inevitable inaccuracies of drawing are considered. The curve is of course a parabola. %[Illustration: Fig. 8.] \Figure[2.25in]{8}{p044} There is, however, one fundamental question which we cannot answer adequately at present. The reader has no doubt some notion as to what is meant by a \emph{continuous} curve, a curve without breaks or jumps; such a curve, in fact, as is roughly represented in \Fig{8}. The question is whether the graph of the function $y = x^{2}$ is in fact such a curve. This cannot be \emph{proved} by merely \PageSep{45} constructing any number of isolated points on the curve, although the more such points we construct the more probable it will appear. This question cannot be discussed properly until \Ref{Ch.}{V}\@. In that chapter we shall consider in detail what our common sense idea of continuity really means, and how we can prove that such graphs as the one now considered, and others which we shall consider later on in this chapter, are really continuous curves. For the present the reader may be content to draw his curves as common sense dictates. \begin{Remark} It is easy to see that the curve $y = x^{2}$ is everywhere convex to the axis of~$x$. Let $P_{0}$,~$P_{1}$ (\Fig{8}) be the points $(x_{0}, x_{0}^{2})$, $(x_{1}, x_{1}^{2})$. Then the coordinates of a point on the chord~$P_{0}P_{1}$ are $x = \lambda x_{0} + \mu x_{1}$, $y = \lambda x_{0}^{2} + \mu x_{1}^{2}$, where $\lambda$~and~$\mu$ are positive numbers whose sum is~$1$. And \[ y - x^{2} = (\lambda + \mu)(\lambda x_{0}^{2} + \mu x_{1}^{2}) - (\lambda x_{0} + \mu x_{1} )^{2} = \lambda\mu(x_{1} - x_{0})^{2} \geq 0, \] so that the chord lies entirely above the curve. 
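Thus, to take a numerical instance, if $x_{0} = 0$, $x_{1} = 2$, and $\lambda = \mu = \frac{1}{2}$, the point of the chord is $(1, 2)$, while the point of the curve with the same abscissa is $(1, 1)$.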
\end{Remark} The curve $y = x^{4}$ is similar to $y = x^{2}$ in general appearance, but flatter near~$O$, and steeper beyond the points $A$,~$A'$ (\Fig{9}), and $y = x^{m}$, where $m$~is even and greater than~$4$, is still more so. As $m$~gets larger and larger the flatness and steepness grow more and more pronounced, until the curve is practically indistinguishable from the thick line in the figure. %[Illustration: Fig. 9.] %[Illustration: Fig. 10.] %[** TN: Captions vertically aligned in the original] \Figures{2.25in}{9}{p045a}{2.25in}{10}{p045b}\PageLabel{45} The reader should next consider the curves given by $y = x^{m}$, when $m$~is odd. The fundamental difference between the two cases is that whereas when $m$~is even $(-x)^{m} = x^{m}$, so that the curve is symmetrical about~$OY$, when $m$~is odd $(-x)^{m} = -x^{m}$, so \PageSep{46} that $y$~is negative when $x$~is negative. \Fig{10} shows the curves $y = x$, $y = x^{3}$, and the form to which $y = x^{m}$ approximates for larger odd values of~$m$. It is now easy to see how (theoretically at any rate) the graph of any polynomial may be constructed. In the first place, from the graph of $y = x^{m}$ we can at once derive that of~$Cx^{m}$, where $C$~is a constant, by multiplying the ordinate of every point of the curve by~$C$. And if we know the graphs of $f(x)$~and~$F(x)$, we can find that of $f(x) + F(x)$ by taking the ordinate of every point to be the sum of the ordinates of the corresponding points on the two original curves. The drawing of graphs of polynomials is however so much facilitated by the use of more advanced methods, which will be explained later on, that we shall not pursue the subject further here. \begin{Examples}{XI.} \Item{1.} Trace the curves $y = 7x^{4}$, $y = 3x^{5}$, $y = x^{10}$. 
[The reader should draw the curves carefully, and all three should be drawn in one figure.\footnote {It will be found convenient to take the scale of measurement along the axis of~$y$ a good deal smaller than that along the axis of~$x$, in order to prevent the figure becoming of an awkward size.} He will then realise how rapidly the higher powers of~$x$ increase, as $x$~gets larger and larger, and will see that, in such a polynomial as \[ x^{10} + 3x^{5} + 7x^{4} \] (or even $x^{10} + 30x^{5} + 700x^{4}$), it is the \emph{first} term which is of really preponderant importance when $x$~is fairly large. Thus even when $x = 4$, $x^{10} > 1,000,000$, while $30x^{5} < 35,000$ and $700x^{4} < 180,000$; while if $x = 10$ the preponderance of the first term is still more marked.] \Item{2.} Compare the relative magnitudes of $x^{12}$, $1,000,000x^{6}$, $1,000,000,000,000x$ when $x = 1$, $10$, $100$,~etc. [The reader should make up a number of examples of this type for himself. This idea of the \emph{relative rate of growth} of different functions of~$x$ is one with which we shall often be concerned in the following chapters.] \Item{3.} Draw the graph of $ax^{2} + 2bx + c$. [Here $y - \{(ac - b^{2})/a\} = a\{x + (b/a)\}^{2}$. If we take new axes parallel to the old and passing through the point $x = -b/a$, $y = (ac - b^{2})/a$, the new equation is $y' = ax'^{2}$. The curve is a parabola.] \Item{4.} Trace the curves $y = x^{3} - 3x + 1$, $y = x^{2}(x - 1)$, $y = x(x - 1)^{2}$. \end{Examples} \PageSep{47} \Paragraph{24.} \Topic{\Item{B.} Rational Functions.} The class of functions which ranks next to that of polynomials in simplicity and importance is that of \emph{rational functions}. A rational function is the quotient of one polynomial by another: thus if $P(x)$,~$Q(x)$ are polynomials, we may denote the general rational function by \[ R(x) = \frac{P(x)}{Q(x)}. 
\] In the particular case when $Q(x)$~reduces to unity or any other constant (\ie\ does not involve~$x$), $R(x)$~reduces to a polynomial: thus the class of rational functions includes that of polynomials as a sub-class. The following points concerning the definition should be noticed. \begin{Remark} \Item{(1)} We usually suppose that $P(x)$~and~$Q(x)$ have no common factor $x + a$ or $x^{p} + ax^{p-1} + bx^{p-2} + \dots + k$, all such factors being removed by division. \Item{(2)} It should however be observed that this removal of common factors \emph{does as a rule change the function}. Consider for example the function~$x/x$, which is a rational function. On removing the common factor~$x$ we obtain $1/1 = 1$. But the original function is not \emph{always} equal to~$1$: it is equal to~$1$ only so long as $x\neq 0$. If $x = 0$ it takes the form~$0/0$, which is meaningless. Thus the function~$x/x$ is equal to~$1$ if $x\neq 0$ and is undefined when $x = 0$. It therefore differs from the function~$1$, which is \emph{always} equal to~$1$. \Item{(3)} Such a function as \[ \left(\frac{1}{x + 1} + \frac{1}{x - 1}\right) \bigg/ \left(\frac{1}{x} + \frac{1}{x - 2}\right) \] may be reduced, by the ordinary rules of algebra, to the form \[ \frac{x^{2}(x - 2)}{(x - 1)^{2} (x + 1)}, \] which is a rational function of the standard form. But here again it must be noticed that the reduction is not \emph{always} legitimate. In order to calculate the value of a function for a given value of~$x$ we must substitute the value for~$x$ in the function \emph{in the form in which it is given}. In the case of this function the values $x = -1$, $1$,~$0$,~$2$ all lead to a meaningless expression, and so the function is not defined for these values. The same is true of the reduced form, so far as the values $-1$~and~$1$ are concerned. But $x = 0$ and $x = 2$ give the value~$0$. Thus once more the two functions are not the same. 
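The algebra of the reduction, it may be noted, runs as follows: since
\[
  \frac{1}{x + 1} + \frac{1}{x - 1} = \frac{2x}{x^{2} - 1},\qquad
  \frac{1}{x} + \frac{1}{x - 2} = \frac{2(x - 1)}{x(x - 2)},
\]
the quotient of the first of these expressions by the second is
\[
  \frac{2x}{x^{2} - 1} \cdot \frac{x(x - 2)}{2(x - 1)}
  = \frac{x^{2}(x - 2)}{(x - 1)^{2}(x + 1)}.
\]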
\Item{(4)} But, as appears from the particular example considered under~(3), there will generally be a certain number of values of~$x$ for which the function is not defined even when it has been reduced to a rational function of the standard form. These are the values of~$x$ (if any) for which the denominator vanishes. Thus $(x^{2} - 7)/(x^{2} - 3x + 2)$ is not defined when $x = 1$ or~$2$. \PageSep{48} \Item{(5)} Generally we agree, in dealing with expressions such as those considered in (2)~and~(3), to disregard the exceptional values of~$x$ for which such processes of simplification as were used there are illegitimate, and to reduce our function to the standard form of rational function. The reader will easily verify that (on this understanding) the sum, product, or quotient of two rational functions may themselves be reduced to rational functions of the standard type. And generally \emph{a rational function of a rational function is itself a rational function}: \ie\ if in $z = P(y)/Q(y)$, where $P$~and~$Q$ are polynomials, we substitute $y = P_{1}(x)/Q_{1}(x)$, we obtain on simplification an equation of the form $z = P_{2}(x)/Q_{2}(x)$. \Item{(6)} It is in no way presupposed in the definition of a rational function that the constants which occur as coefficients should be rational \emph{numbers}. The word rational has reference solely to the way in which the variable~$x$ appears in the function. Thus \[ \frac{x^{2} + x + \sqrt{3}}{x\sqrt[3]{2} - \pi} \] is a rational function. The use of the word rational arises as follows. The rational function $P(x)/Q(x)$ may be generated from~$x$ by a finite number of operations upon~$x$, including only multiplication of $x$ by itself or a constant, addition of terms thus obtained and division of one function, obtained by such multiplications and additions, by another. 
In so far as the variable~$x$ is concerned, this procedure is very much like that by which all rational numbers can be obtained from unity, a procedure exemplified in the equation \[ \frac{5}{3} = \frac{1 + 1 + 1 + 1 + 1}{1 + 1 + 1}. \] Again, \emph{any} function which can be deduced from~$x$ by the elementary operations mentioned above using at each stage of the process functions which have already been obtained from~$x$ in the same way, can be reduced to the standard type of rational function. The most general kind of function which can be obtained in this way is sufficiently illustrated by the example \[ \Biggl(\frac{x}{x^{2} + 1} + \frac{2x + 7}{x^{2} + \dfrac{11x - 3\sqrt{2}}{9x + 1}}\Biggr) \Bigg/ \left(17 + \frac{2}{x^{3}}\right), \] which can obviously be reduced to the standard type of rational function. \end{Remark} \Paragraph{25.} The drawing of graphs of rational functions, even more than that of polynomials, is immensely facilitated by the use of methods depending upon the differential calculus. We shall therefore content ourselves at present with a very few examples. \begin{Examples}{XII.} \Item{1.} Draw the graphs of $y = 1/x$, $y = 1/x^{2}$, $y = 1/x^{3}$,~\dots. [The figures show the graphs of the first two curves. It should be observed that since $1/0$,~$1/0^{2}$,~\dots\ are meaningless expressions, these functions are not defined for $x = 0$.] \PageSep{49} %[Illustration: Fig. 11.] %[Illustration: Fig. 12.] %[** TN: Moved up three paragraphs] \Figures{2.25in}{11}{p049a}{2.25in}{12}{p049b} \Item{2.} Trace $y = x + (1/x)$, $x - (1/x)$, $x^{2} + (1/x^{2})$, $x^{2} - (1/x^{2})$ and $ax + (b/x)$ taking various values, positive and negative, for $a$~and~$b$. \Item{3.} Trace \[ y = \frac{x + 1}{x - 1},\quad \left(\frac{x + 1}{x - 1}\right)^{2},\quad \frac{1}{(x - 1)^{2}},\quad \frac{x^{2} + 1}{x^{2} - 1}. \] \Item{4.} Trace $y = 1/(x - a)(x - b)$, $1/(x - a)(x - b)(x - c)$, where $a < b < c$. 
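[In tracing the first of the curves in Ex.~4 it may be found helpful to resolve the function into partial fractions, viz.
\[
  \frac{1}{(x - a)(x - b)}
  = \frac{1}{a - b}\left\{\frac{1}{x - a} - \frac{1}{x - b}\right\},
\]
so that the graph may be regarded as derived from two curves of the type $y = 1/x$.]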
\Item{5.} Sketch the general form assumed by the curves $y = 1/x^{m}$ as $m$~becomes larger and larger, considering separately the cases in which $m$~is odd or even. \end{Examples} \Paragraph{26.} \Topic{\Item{C.} Explicit Algebraical Functions.} The next important class of functions is that of \emph{explicit algebraical functions}. These are functions which can be generated from~$x$ by a finite number of operations such as those used in generating rational functions, together with a finite number of operations of root extraction. Thus \[ %[** TN: On two lines in the original] \frac{\sqrtp{1 + x} - \sqrtp[3]{1 - x}} {\sqrtp{1 + x} + \sqrtp[3]{1 - x}},\quad \sqrt{x} + \sqrtp{x +\sqrt{x}},\quad \left(\frac{x^{2} + x + \sqrt{3}}{x\sqrt[3]{2} - \pi}\right)^{\frac{2}{3}} \] are explicit algebraical functions, and so is $x^{m/n}$ (\ie~$\sqrt[n]{x^{m}}$), where $m$~and~$n$ are any integers. It should be noticed that there is an ambiguity of notation involved in such an equation as $y = \sqrt{x}$. We have, up to the present, regarded (\eg)~$\sqrt{2}$ as denoting the \emph{positive} square root of~$2$, and it would be natural to denote by~$\sqrt{x}$, where $x$~is any \PageSep{50} positive number, the positive square root of~$x$, in which case $y = \sqrt{x}$ would be a one-valued function of~$x$. It is however often more convenient to regard~$\sqrt{x}$ as standing for the two-valued function whose two values are the positive and negative square roots of~$x$. The reader will observe that, when this course is adopted, the function~$\sqrt{x}$ differs fundamentally from rational functions in two respects. In the first place a rational function is always defined for all values of~$x$ with a certain number of isolated exceptions. But $\sqrt{x}$~is undefined for a \emph{whole range} of values of~$x$ (\ie\ all negative values). Secondly the function, when $x$~has a value for which it is defined, has generally two values of opposite signs. 
The function~$\sqrt[3]{x}$, on the other hand, is one-valued and defined for all values of~$x$. \begin{Examples}{XIII.} \Item{1.} $\sqrtb{(x - a)(b - x)}$, where $a < b$, is defined only for $a \leq x \leq b$. If $a < x < b$ it has two values: if $x = a$ or $b$ only one, viz.~$0$. \Item{2.} Consider similarly \begin{gather*} \sqrtb{(x - a)(x - b)(x - c)} \quad (a < b < c), \\ \sqrtb{x(x^{2} - a^{2})},\quad \sqrtb[3]{(x - a)^{2}(b - x)}\quad (a < b), \\ \frac{\sqrtp{1 + x} - \sqrtp{1 - x}} {\sqrtp{1 + x} + \sqrtp{1 - x}},\quad \sqrtb{x + \sqrt{x}}. \end{gather*} \Item{3.} Trace the curves $y^{2} = x$, $y^{3} = x$, $y^{2} = x^{3}$. \Item{4.} Draw the graphs of the functions %[** TN: Not displayed in the original] \[ y = \sqrtp{a^{2} - x^{2}},\quad y = b\sqrtb{1 - (x^{2}/a^{2})}. \] \end{Examples} \Paragraph{27.} \Topic{\Item{D.} Implicit Algebraical Functions.} It is easy to verify that if \[ y = \frac{\sqrtp{1 + x} - \sqrtp[3]{1 - x}} {\sqrtp{1 + x} + \sqrtp[3]{1 - x}}, \] then \[ \left(\frac{1 + y}{1 - y}\right)^{6} = \frac{(1 + x)^{3}}{(1 - x)^{2}}; \] or if \[ y = \sqrt{x} + \sqrtp{x + \sqrt{x}}, \] then \[ y^{4} - (4y^{2} + 4y + 1)x = 0. \] Each of these equations may be expressed in the form \[ y^{m} + R_{1}y^{m-1} + \dots + R_{m} = 0, \Tag{(1)} \] where $R_{1}$, $R_{2}$, \dots,~$R_{m}$ are rational functions of~$x$: and the reader will easily verify that, if $y$~is any one of the functions considered in the last set of examples, $y$~satisfies an equation of this form. \PageSep{51} It is naturally suggested that the same is true of any explicit algebraic function. And this is in fact true, and indeed not difficult to prove, though we shall not delay to write out a formal proof here. An example should make clear to the reader the lines on which such a proof would proceed. Let \[ y = \frac{x + \sqrt{x} + \sqrtb{x + \sqrt{x}} + \sqrtp[3]{1 + x}} {x - \sqrt{x} + \sqrtb{x + \sqrt{x}} - \sqrtp[3]{1 + x}}. 
\] Then we have the equations \begin{gather*} y = \frac{x + u + v + w} {x - u + v - w}, \\ u^{2} = x,\quad v^{2} = x + u,\quad w^{3} = 1 + x, \end{gather*} and we have only to eliminate $u$,~$v$,~$w$ between these equations in order to obtain an equation of the form desired. We are therefore led to give the following definition: \emph{a function $y = f(x)$ will be said to be an algebraical function of~$x$ if it is the root of an equation such as~\Eq{(1)}, \ie~the root of an equation of the $m$\textsuperscript{th}~degree in~$y$, whose coefficients are rational functions of~$x$}. There is plainly no loss of generality in supposing the first coefficient to be unity. This class of functions includes all the explicit algebraical functions considered in \SecNo[§]{26}. But it also includes other functions which cannot be expressed as explicit algebraical functions. For it is known that in general such an equation as~\Eq{(1)} cannot be solved explicitly for~$y$ in terms of~$x$, when $m$~is greater than~$4$, though such a solution is always possible if $m = 1$, $2$,~$3$, or~$4$ and in special cases for higher values of~$m$. The definition of an algebraical function should be compared with that of an algebraical number given in the last chapter (\MiscExs{I}~32). \begin{Examples}{XIV.} \Item{1.} If $m = 1$, $y$~is a rational function. \Item{2.} If $m = 2$, the equation is $y^{2} + R_{1}y + R_{2} = 0$, so that \[ y = \tfrac{1}{2}\{-R_{1} ± \sqrtp{R_{1}^{2} - 4R_{2}}\}. \] This function is defined for all values of~$x$ for which $R_{1}^{2} \geq 4R_{2}$. It has two values if $R_{1}^{2} > 4R_{2}$ and one if $R_{1}^{2} = 4R_{2}$. If $m = 3$ or~$4$, we can use the methods explained in treatises on Algebra for the solution of cubic and biquadratic equations. But as a rule the process is complicated and the results inconvenient in form, and we can generally study the properties of the function better by means of the original equation. 
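As a check on the work of \SecNo[§]{27}, the second identity there — that $y = \sqrt{x} + \sqrtp{x + \sqrt{x}}$ satisfies $y^{4} - (4y^{2} + 4y + 1)x = 0$ — may be verified numerically. A minimal sketch in Python (the function name and tolerance are ours, not the text's):

```python
from math import sqrt

def check(x, tol=1e-8):
    # Sec. 27 claims: if y = sqrt(x) + sqrt(x + sqrt(x)),
    # then y^4 - (4y^2 + 4y + 1) x = 0.  Verify numerically.
    y = sqrt(x) + sqrt(x + sqrt(x))
    return abs(y**4 - (4 * y**2 + 4 * y + 1) * x) < tol

# the identity holds for every x >= 0
assert all(check(x) for x in (0.0, 0.25, 1.0, 2.0, 9.0))
```

The check mirrors the elimination: squaring $y - \sqrt{x} = \sqrtp{x + \sqrt{x}}$ gives $y^{2} = \sqrt{x}\,(2y + 1)$, and squaring once more gives the stated quartic.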
\PageSep{52} \Item{3.} Consider the functions defined by the equations \[ y^{2} - 2y - x^{2} = 0,\quad y^{2} - 2y + x^{2} = 0,\quad y^{4} - 2y^{2} + x^{2} = 0, \] in each case obtaining~$y$ as an explicit function of~$x$, and stating for what values of~$x$ it is defined. \Item{4.} Find algebraical equations, with coefficients rational in~$x$, satisfied by each of the functions \[ \sqrt{x} + \sqrtp{1/x},\quad \sqrt[3]{x} + \sqrtp[3]{1/x},\quad \sqrtp{x + \sqrt{x}},\quad \sqrt{x + \sqrtp{x + \sqrt{x}}}. \] \Item{5.} Consider the equation $y^{4} = x^{2}$. [Here $y^{2} = ±x$. If $x$~is positive, $y = \sqrt{x}$: if negative, $y = \sqrtp{-x}$. Thus the function has two values for all values of~$x$ save $x = 0$.] \Item{6.} An algebraical function of an algebraical function of~$x$ is itself an algebraical function of~$x$. [For we have \begin{alignat*}{4} y^{m} &+ R_{1}(z)y^{m-1} &&+ \dots &&+ R_{m}(z) &&= 0, \intertext{where} z^{n} &+ S_{1}(x)z^{n-1} &&+ \dots &&+ S_{n}(x) &&= 0. \intertext{Eliminating~$z$ we find an equation of the form} y^{p} &+ T_{1}(x)y^{p-1} &&+ \dots &&+ T_{p}(x) &&= 0. \end{alignat*} Here all the capital letters denote rational functions.] \Item{7.} An example should perhaps be given of an algebraical function which cannot be expressed in an explicit algebraical form. Such an example is the function~$y$ defined by the equation \[ y^{5} - y - x = 0. \] But the proof that we cannot find an explicit algebraical expression for~$y$ in terms of~$x$ is difficult, and cannot be attempted here. \end{Examples} \Paragraph{28. Transcendental functions.} All functions of~$x$ which are not rational or even algebraical are called \emph{transcendental} functions. This class of functions, being defined in so purely negative a manner, naturally includes an infinite variety of whole kinds of functions of varying degrees of simplicity and importance. Among these we can at present distinguish two kinds which are particularly interesting. 
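Before leaving algebraical functions it is worth observing that the function of Ex.~XIV.~7, though it admits no explicit algebraical expression, is perfectly computable. A hedged numerical sketch by bisection (the bracket and the helper name are ours):

```python
def algebraic_y(x, tol=1e-12):
    """Real root y of y^5 - y - x = 0, located by bisection.

    Since y^5 dominates for large |y|, the bracket [-(1+|x|), 1+|x|]
    always contains a real root (the polynomial changes sign on it).
    """
    f = lambda y: y**5 - y - x
    lo, hi = -(1 + abs(x)), 1 + abs(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# y^5 - y - 30 = 0 has the real root y = 2  (32 - 2 - 30 = 0)
assert abs(algebraic_y(30.0) - 2.0) < 1e-9
```

For values of $x$ where the quintic has three real roots, bisection returns whichever root the shrinking bracket happens to retain; the point is only that the implicit definition determines computable values.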
\Topic{\Item{E.} The direct and inverse trigonometrical or circular functions.} These are the sine and cosine functions of elementary trigonometry, and their inverses, and the functions derived from them. We may assume provisionally that the reader is familiar with their most important properties.\footnote {The definitions of the circular functions given in elementary trigonometry presuppose that any sector of a circle has associated with it a definite number called its \emph{area}. How this assumption is justified will appear in \Ref{Ch.}{VII}\@.} \PageSep{53} \begin{Examples}{XV.} \Item{1.} Draw the graphs of $\cos x$, $\sin x$, and $a\cos x + b\sin x$. [Since $a\cos x + b\sin x = \beta\cos(x - \alpha)$, where $\beta = \sqrtp{a^{2} + b^{2}}$, and $\alpha$~is an angle whose cosine and sine are $a/\sqrtp{a^{2} + b^{2}}$ and $b/\sqrtp{a^{2} + b^{2}}$, the graphs of these three functions are similar in character.] \Item{2.} Draw the graphs of $\cos^{2} x$, $\sin^{2} x$, $a\cos^{2} x + b\sin^{2} x$. \Item{3.} Suppose the graphs of $f(x)$~and~$F(x)$ drawn. Then the graph of \[ f(x)\cos^{2} x + F(x)\sin^{2} x \] is a wavy curve which oscillates between the curves $y = f(x)$, $y = F(x)$. Draw the graph when $f(x) = x$, $F(x) = x^{2}$. \Item{4.} Show that the graph of $\cos px + \cos qx$ lies between those of $2\cos\frac{1}{2}(p - q)x$ and $-2\cos\frac{1}{2}(p + q)x$, touching each in turn. Sketch the graph when $(p - q)/(p + q)$ is small. \MathTrip{1908.} \Item{5.} Draw the graphs of $x + \sin x$, $(1/x) + \sin x$, $x\sin x$, $(\sin x)/x$. \Item{6.} Draw the graph of~$\sin(1/x)$. [If $y = \sin(1/x)$, then $y = 0$ when $x = 1/m\pi$, where $m$~is any integer. Similarly $y = 1$ when $x = 1/(2m + \frac{1}{2})\pi$ and $y = -1$ when $x = 1/(2m - \frac{1}{2})\pi$. The curve is entirely comprised between the lines $y = -1$ and $y = 1$ (\Fig{13}). It oscillates up and down, the rapidity of the oscillations becoming greater and greater as $x$~approaches~$0$. 
For $x = 0$ the function is undefined. When $x$~is large $y$~is small.\footnote {See \Ref{Chs.}{IV}~and~\Ref{}{V} for explanations as to the precise meaning of this phrase.} The negative half of the curve is similar in character to the positive half.] \Item{7.} Draw the graph of $x\sin(1/x)$. [This curve is comprised between the lines $y = -x$ and $y = x$ just as the last curve is comprised between the lines $y = -1$ and $y = 1$ (\Fig{14}).] %[Illustration: Fig. 13.] %[Illustration: Fig. 14.] \Figures{2.25in}{13}{p053a}{2.25in}{14}{p053b}\PageLabel{53} \PageSep{54} \Item{8.} Draw the graphs of $x^{2}\sin(1/x)$, $(1/x)\sin(1/x)$, $\sin^{2}(1/x)$, $\{x\sin(1/x)\}^{2}$, $a\cos^{2}(1/x) + b\sin^{2}(1/x)$, $\sin x + \sin(1/x)$, $\sin x\sin(1/x)$. \Item{9.} Draw the graphs of $\cos x^{2}$, $\sin x^{2}$, $a\cos x^{2} + b\sin x^{2}$. \Item{10.} Draw the graphs of $\arccos x$ and $\arcsin x$. [If $y = \arccos x$, $x = \cos y$. This enables us to draw the graph of~$x$, considered as a function of~$y$, and the same curve shows $y$~as a function of~$x$. It is clear that $y$~is only defined for $-1 \leq x \leq 1$, and is infinitely many-valued for these values of~$x$. As the reader no doubt remembers, there is, when $-1 < x < 1$, a value of~$y$ between $0$~and~$\pi$, say~$\alpha$, and the other values of~$y$ are given by the formula~$2n\pi ± \alpha$, where $n$~is any integer, positive or negative.] \Item{11.} Draw the graphs of \[ \tan x,\quad \cot x,\quad \sec x,\quad \cosec x,\quad \tan^{2} x,\quad \cot^{2} x,\quad \sec^{2} x,\quad \cosec^{2} x. \] \Item{12.} Draw the graphs of $\arctan x$, $\arccot x$, $\arcsec x$, $\arccosec x$. Give formulae (as in Ex.~10) expressing all the values of each of these functions in terms of any particular value. \Item{13.} Draw the graphs of $\tan(1/x)$, $\cot(1/x)$, $\sec(1/x)$, $\cosec(1/x)$. \Item{14.} Show that $\cos x$ and $\sin x$ are not rational functions of~$x$. 
[A function is said to be \emph{periodic}, with period~$a$, if $f(x) = f(x + a)$ for all values of~$x$ for which $f(x)$~is defined. Thus $\cos x$ and $\sin x$ have the period~$2\pi$. It is easy to see that no periodic function can be a rational function, unless it is a constant. For suppose that \[ f(x) = P(x)/Q(x), \] where $P$~and~$Q$ are polynomials, and that $f(x) = f(x + a)$, each of these equations holding for all values of~$x$. Let $f(0) = k$. Then the equation $P(x) - kQ(x) = 0$ is satisfied by an infinite number of values of~$x$, viz.\ $x = 0$, $a$,~$2a$,~etc., and therefore for all values of~$x$. Thus $f(x) = k$ for all values of~$x$, \ie\ $f(x)$~is a constant.] \Item{15.} Show, more generally, that no function with a period can be an algebraical function of~$x$. [Let the equation which defines the algebraical function be \[ y^{m} + R_{1}y^{m-1} + \dots + R_{m} = 0 \Tag{(1)} \] where $R_{1}$,~\dots\ are rational functions of~$x$. This may be put in the form \[ P_{0}y^{m} + P_{1}y^{m-1} + \dots + P_{m} = 0, \] where $P_{0}$, $P_{1}$,~\dots\ are polynomials in~$x$. Arguing as above, we see that \[ P_{0}k^{m} + P_{1}k^{m-1} + \dots + P_{m} = 0 \] \PageSep{55} for all values of~$x$. Hence $y = k$ satisfies the equation~\Eq{(1)} for all values of~$x$, and one set of values of our algebraical function reduces to a constant. Now divide~\Eq{(1)} by $y - k$ and repeat the argument. Our final conclusion is that our algebraical function has, for any value of~$x$, the same set of values $k$,~$k'$,~\dots; \ie\ it is composed of a certain number of constants.] \Item{16.} The inverse sine and inverse cosine are not rational or algebraical functions. [This follows from the fact that, for any value of~$x$ between $-1$ and~$+1$, $\arcsin x$ and $\arccos x$ have infinitely many values.] 
\end{Examples} \Paragraph{29.} \Topic{\Item{F.} Other classes of transcendental functions.} Next in importance to the trigonometrical functions come the exponential and logarithmic functions, which will be discussed in \Ref{Chs.}{IX}~and~\Ref{}{X}\@. But these functions are beyond our range at present. And most of the other classes of transcendental functions whose properties have been studied, such as the elliptic functions, Bessel's and Legendre's functions, Gamma-functions, and so forth, lie altogether beyond the scope of this book. There are however some elementary types of functions which, though of much less importance theoretically than the rational, algebraical, or trigonometrical functions, are particularly instructive as illustrations of the possible varieties of the functional relation. \begin{Examples}{XVI.} \Item{1.} Let $y = [x]$, where $[x]$~denotes the greatest integer not greater than~$x$. The graph is shown in \Fig{15a}. The left-hand end points of the thick lines, but not the right-hand ones, belong to the graph. \Item{2.} $y = x - [x]$. (\Fig{15b}.) %[Illustration: Fig. 15a.] %[Illustration: Fig. 15b.] \Figures{2.25in}{15a}{p055a}{2.25in}{15b}{p055b} \PageSep{56} \Item{3.} $y = \sqrtb{x - [x]}$. (\Fig{15c}.) \Item{4.} $y = [x] + \sqrtb{x - [x]}$. (\Fig{15d}.) \Item{5.} $y = (x - [x])^{2}$, $[x] + (x - [x])^{2}$. \Item{6.} $y = [\sqrt{x}]$, $[x^{2}]$, $\sqrt{x} - [\sqrt{x}]$, $x^{2} - [x^{2}]$, $[1 - x^{2}]$. %[Illustration: Fig. 15c.] %[Illustration: Fig. 15d.] \Figures{2.25in}{15c}{p056a}{2.25in}{15d}{p056b} \Item{7.} Let $y$~be defined as \emph{the largest prime factor of~$x$} (cf.\ \Exs{x}.~6). Then $y$~is defined only for integral values of~$x$. If \begin{alignat*}{3} x &= 1,\ 2,\ 3,\ 4,\ 5,\ 6,\ 7,\ 8,\ 9,\ &10,&\ 11,\ &12,&\ 13,\ \dots, \\ \intertext{then} y &= 1,\ 2,\ 3,\ 2,\ 5,\ 3,\ 7,\ 2,\ 3,\ & 5,&\ 11,\ & 3,&\ 13,\ \dots. \end{alignat*} The graph consists of a number of isolated points. 
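The table in Ex.~7 can be reproduced mechanically. A small Python sketch (trial division; the function name is ours, and we adopt the convention of the text that $y = 1$ when $x = 1$):

```python
def largest_prime_factor(n):
    # trial division, removing each factor completely before moving on;
    # for n = 1 the loop never runs and we return 1, as in Ex. 7
    factor, d = 1, 2
    while n > 1:
        while n % d == 0:
            factor, n = d, n // d
        d += 1
    return factor

table = [largest_prime_factor(n) for n in range(1, 14)]
assert table == [1, 2, 3, 2, 5, 3, 7, 2, 3, 5, 11, 3, 13]
```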
\Item{8.} Let $y$~be \emph{the denominator of~$x$} (\Exs{x}.~7). In this case $y$~is defined only for rational values of~$x$. We can mark off as many points on the graph as we please, but the result is not in any ordinary sense of the word a curve, and there are no points corresponding to any irrational values of~$x$. Draw the straight line joining the points $(N - 1, N)$,~$(N, N)$, where $N$~is a positive integer. Show that the number of points of the locus which lie on this line is equal to the number of positive integers less than and prime to~$N$. \Item{9.} Let $y = 0$ when $x$~is an integer, $y = x$ when $x$~is not an integer. The graph is derived from the straight line $y = x$ by taking out the points \[ \dots\ (-1, -1),\quad (0, 0),\quad (1, 1),\quad (2, 2),\ \dots \] and adding the points $(-1, 0)$, $(0, 0)$, $(1, 0)$,~\dots\ on the axis of~$x$. The reader may possibly regard this as an unreasonable function. \emph{Why}, he may ask, if $y$~is equal to~$x$ for all values of~$x$ save integral values, should it not be equal to~$x$ for integral values too? The answer is simply, \emph{why should it}? The function~$y$ does in point of fact answer to the definition of a function: there is a relation between $x$~and~$y$ such that when $x$~is known $y$~is known. We are perfectly at liberty to take this relation to be what we please, however arbitrary and apparently futile. This function~$y$ is, of course, a quite different function from that one which is \emph{always} equal to~$x$, whatever value, integral or otherwise, $x$~may have. \PageSep{57} \Item{10.} Let $y = 1$ when $x$~is rational, but $y = 0$ when $x$~is irrational. The graph consists of two series of points arranged upon the lines $y = 1$ and $y = 0$. To the eye it is not distinguishable from two continuous straight lines, but in reality an infinite number of points are missing from each line. 
\Item{11.} Let $y = x$ when $x$~is irrational and $y = \sqrtb{(1 + p^{2})/(1 + q^{2})}$ when $x$~is a\PageLabel{57} rational fraction~$p/q$. %[Illustration: Fig. 16.] %[** TN: The formula depicted for x = p/q, chosen to match the book visually, % is sqrt{(10 + p^2)/(10 + q^2)}. See Transcriber's Note at end for details.] \Figure{16}{p057} The irrational values of~$x$ contribute to the graph a curve in reality discontinuous, but apparently not to be distinguished from the straight line $y = x$. Now consider the rational values of~$x$. First let $x$~be positive. Then $\sqrtb{(1 + p^{2})/(1 + q^{2})}$ cannot be equal to~$p/q$ unless $p = q$, \ie\ $x = 1$. Thus all the points which correspond to rational values of~$x$ lie off the line, except the one point~$(1, 1)$. Again, if $p < q$, $\sqrtb{(1 + p^{2})/(1 + q^{2})} > p/q$; if $p > q$, $\sqrtb{(1 + p^{2})/(1 + q^{2})} < p/q$. Thus the points lie above the line $y = x$ if $0 < x < 1$, below if $x > 1$. If $p$~and~$q$ are large, $\sqrtb{(1 + p^{2})/(1 + q^{2})}$ is nearly equal to~$p/q$. Near any value of~$x$ we can find any number of rational fractions with large numerators and denominators. Hence the graph contains a large number of points which crowd round the line $y = x$. Its general appearance (for positive values of~$x$) is that of a line surrounded by a swarm of isolated points which gets denser and denser as the points approach the line. The part of the graph which corresponds to negative values of~$x$ consists of the rest of the discontinuous line together with the reflections of all these isolated points in the axis of~$y$. Thus to the left of the axis of~$y$ the swarm of points is not round $y = x$ but round $y = -x$, which is not itself part of the graph. See \Fig{16}. \end{Examples} \PageSep{58} \Paragraph{30. 
Graphical solution of equations containing a single unknown number.} Many equations can be expressed in the form \[ f(x) = \phi(x), \Tag{(1)} \] where $f(x)$~and~$\phi(x)$ are functions whose graphs are easy to draw. And if the curves \[ y = f(x),\quad y = \phi(x) \] intersect in a point~$P$ whose abscissa is~$\xi$, then $\xi$~is a root of the equation~\Eq{(1)}. \begin{Examples}{XVII.} \Item{1.} \Topic{The quadratic equation $ax^{2} + 2bx + c = 0$.} This may be solved graphically in a variety of ways. For instance we may draw the graphs of \[ y = ax + 2b,\quad y = -c/x, \] whose intersections, if any, give the roots. Or we may take \[ y = x^{2},\quad y = -(2bx + c)/a. \] But the most elementary method is probably to draw the circle \[ a(x^{2} + y^{2}) + 2bx + c = 0, \] whose centre is~$(-b/a, 0)$ and radius $\{\sqrtp{b^{2} - ac}\}/a$. The abscissae of its intersections with the axis of~$x$ are the roots of the equation. \Item{2.} Solve by any of these methods \[ x^{2} + 2x - 3 = 0,\quad x^{2} - 7x + 4 = 0,\quad 3x^{2} + 2x - 2 = 0. \] \Item{3.} \Topic{The equation $x^{m} + ax + b = 0$.} This may be solved by constructing the curves $y = x^{m}$, $y = -ax - b$. Verify the following table for the number of roots of \begin{gather*} x^{m} + ax + b = 0: \\ \begin{alignedat}{3} &\Item{(\ia)} &&m~\emph{even} &&\left\{ \begin{aligned} &\text{$b$~positive, \emph{two or none},}\\ &\text{$b$~negative, \emph{two}\Add{;}} \end{aligned} \right. \\ &\Item{(\ib)} &&m~\emph{odd} &&\left\{ \begin{aligned} &\text{$a$~positive, \emph{one},}\\ &\text{$a$~negative, \emph{three or one}.\qquad\qquad\qquad\qquad\qquad} \end{aligned} \right. \end{alignedat} \end{gather*} Construct numerical examples to illustrate all possible cases. \Item{4.} Show that the equation $\tan x = ax + b$ has always an infinite number of roots. \Item{5.} Determine the number of roots of \[ \sin x = x,\quad \sin x = \tfrac{1}{3} x,\quad \sin x = \tfrac{1}{8} x,\quad \sin x = \tfrac{1}{120} x. 
\] \Item{6.} Show that if $a$~is small and positive (\eg\ $a = .01$), the equation \[ x - a = \tfrac{1}{2}\pi\sin^{2} x \] has three roots. Consider also the case in which $a$~is small and negative. Explain how the number of roots varies as $a$~varies. \end{Examples} \PageSep{59} \Paragraph{31. Functions of two variables and their graphical representation.} In \SecNo[§]{20} we considered two variables connected by a relation. We may similarly consider \emph{three} variables ($x$,~$y$, and~$z$) connected by a relation such that when the values of $x$~and~$y$ are both given, the value or values of~$z$ are known. In this case we call~$z$ a \emph{function of the two variables} $x$~and~$y$; $x$~and~$y$ the \emph{independent} variables, $z$~the \emph{dependent} variable; and we express this dependence of~$z$ upon $x$~and~$y$ by writing \[ z = f(x, y). \] The remarks of \SecNo[§]{20} may all be applied, \textit{mutatis mutandis}, to this more complicated case. The method of representing such functions of two variables graphically is exactly the same in principle as in the case of functions of a single variable. We must take three axes, $OX$, $OY$, $OZ$ in space of three dimensions, each axis being perpendicular to the other two. The point~$(a, b, c)$ is the point whose distances from the planes $YOZ$, $ZOX$, $XOY$, measured parallel to $OX$, $OY$, $OZ$, are $a$,~$b$, and~$c$. Regard must of course be paid to sign, lengths measured in the directions $OX$, $OY$, $OZ$ being regarded as positive. The definitions of \emph{coordinates}, \emph{axes}, \emph{origin} are the same as before. Now let \[ z = f(x, y). \] As $x$~and~$y$ vary, the point~$(x, y, z)$ will move in space. The aggregate of all the positions it assumes is called the \emph{locus} of the point $(x, y, z)$ or the \emph{graph} of the function $z = f(x, y)$. 
When the relation between $x$,~$y$, and~$z$ which defines~$z$ can be expressed in an analytical formula, this formula is called the \emph{equation} of the locus. It is easy to show, for example, that the equation \[ Ax + By + Cz + D = 0 \] (\emph{the general equation of the first degree}) represents a \emph{plane}, and that the equation of any plane is of this form. The equation \[ (x - \alpha)^{2} + (y - \beta)^{2} + (z - \gamma)^{2} = \rho^{2}, \] or \[ x^{2} + y^{2} + z^{2} + 2Fx + 2Gy + 2Hz + C = 0, \] where $F^{2} + G^{2} + H^{2} - C > 0$, represents a \emph{sphere}; and so on. For proofs of these propositions we must again refer to text-books of Analytical Geometry. \PageSep{60} \Paragraph{32. Curves in a plane.} We have hitherto used the notation \[ y = f(x) \Tag{(1)} \] to express functional dependence of~$y$ upon~$x$. It is evident that this notation is most appropriate in the case in which $y$~is expressed explicitly in terms of~$x$ by means of a formula, as when for example \[ y = x^{2},\quad \sin x,\quad a\cos^{2}x + b\sin^{2}x. \] We have however very often to deal with functional relations which it is impossible or inconvenient to express in this form. If, for example, $y^{5} - y - x = 0$ or $x^{5} + y^{5} - ay = 0$, it is known to be impossible to express~$y$ explicitly as an algebraical function of~$x$. If \[ x^{2} + y^{2} + 2Gx + 2Fy+ C = 0, \] $y$~can indeed be so expressed, viz.\ by the formula \[ y = -F + \sqrtp{F^{2} - x^{2} - 2Gx - C}; \] but the functional dependence of~$y$ upon~$x$ is better and more simply expressed by the original equation. It will be observed that in these two cases the functional relation is fully expressed \emph{by equating a function of the two variables $x$~and~$y$ to zero}, \ie\ by means of an equation \[ f(x, y) = 0. \Tag{(2)} \] We shall adopt this equation as the standard method of expressing the functional relation. 
It includes the equation~\Eq{(1)} as a special case, since $y - f(x)$ is a special form of a function of $x$~and~$y$. We can then speak of the locus of the point $(x, y)$ subject to $f(x, y) = 0$, the graph of the function~$y$ defined by $f(x, y) = 0$, the curve or locus $f(x, y) = 0$, and the equation of this curve or locus. There is another method of representing curves which is often useful. Suppose that $x$~and~$y$ are both functions of a third variable~$t$, which is to be regarded as essentially auxiliary and devoid of any particular geometrical significance. We may write \[ x = f(t),\quad y = F(t). \Tag{(3)} \] If a particular value is assigned to~$t$, the corresponding values of $x$ and of~$y$ are known. Each pair of such values defines a point~$(x, y)$. \PageSep{61} If we construct all the points which correspond in this way to different values of~$t$, we obtain \emph{the graph of the locus defined by the equations}~\Eq{(3)}. Suppose for example \[ x = a\cos t,\quad y = a\sin t. \] Let $t$~vary from~$0$ to~$2\pi$. Then it is easy to see that the point $(x, y)$ describes the circle whose centre is the origin and whose radius is~$a$. If $t$~varies beyond these limits, $(x, y)$ describes the circle over and over again. We can in this case at once obtain a direct relation between $x$~and~$y$ by squaring and adding: we find that $x^{2} + y^{2} = a^{2}$, $t$~being now eliminated. \begin{Examples}{XVIII.} \Item{1.} The points of intersection of the two curves whose equations are $f(x, y) = 0$, $\phi(x, y) = 0$, where $f$~and~$\phi$ are polynomials, can be determined if these equations can be solved as a pair of simultaneous equations in $x$~and~$y$. The solution generally consists of a finite number of pairs of values of $x$~and~$y$. The two equations therefore generally represent a finite number of isolated points. \Item{2.} Trace the curves $(x + y)^{2} = 1$, $xy = 1$, $x^{2} - y^{2} = 1$. 
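The elimination of~$t$ by squaring and adding, described above for $x = a\cos t$, $y = a\sin t$, is easily confirmed numerically. A trivial sketch (the sample radius is ours):

```python
from math import cos, sin, pi, isclose

a = 2.5  # any radius will do
for k in range(32):
    t = 2 * pi * k / 32
    x, y = a * cos(t), a * sin(t)
    # squaring and adding eliminates t: x^2 + y^2 = a^2
    assert isclose(x * x + y * y, a * a, rel_tol=1e-12)
```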
\Item{3.} The curve $f(x, y) + \lambda\phi(x, y) = 0$ represents a curve passing through the points of intersection of $f = 0$ and $\phi = 0$. \Item{4.} What loci are represented by \[ \Item{$(\alpha)$}\ x = at + b,\quad y = ct + d,\qquad \Item{$(\beta)$}\ x/a = 2t/(1 + t^{2}),\quad y/a = (1 - t^{2})/(1 + t^{2}), \] when $t$~varies through all real values? \end{Examples} \Paragraph{33. Loci in space.} In space of three dimensions there are two fundamentally different kinds of loci, of which the simplest examples are the plane and the straight line. A particle which moves along a straight line has only \emph{one degree of freedom}. Its direction of motion is fixed; its position can be completely fixed by one measurement of position, \eg\ by its distance from a fixed point on the line. If we take the line as our fundamental line~$\Lambda$ of \Ref{Chap.}{I}, the position of any of its points is determined by a single coordinate~$x$. A particle which moves in a plane, on the other hand, has \emph{two} degrees of freedom; its position can only be fixed by the determination of two coordinates. A locus represented by a single equation \[ z = f(x, y) \] plainly belongs to the second of these two classes of loci, and is called a \emph{surface}. It may or may not (in the obvious simple cases \PageSep{62} it will) satisfy our common-sense notion of what a surface should be. The considerations of \SecNo[§]{31} may evidently be generalised so as to give definitions of a function $f(x, y, z)$ of \emph{three} variables (or of functions of any number of variables). And as in \SecNo[§]{32} we agreed to adopt $f(x, y) = 0$ as the standard form of the equation of a plane curve, so now we shall agree to adopt \[ f(x, y, z) = 0 \] as the standard form of equation of a surface. {\Loosen The locus represented by \emph{two} equations of the form $z = f(x, y)$ or $f(x, y, z) = 0$ belongs to the first class of loci, and is called a \emph{curve}. 
Thus a \emph{straight line} may be represented by two equations of the type $Ax + By + Cz + D = 0$. A \emph{circle} in space may be regarded as the intersection of a sphere and a plane; it may therefore be represented by two equations of the forms} \[ (x - \alpha)^{2} + (y - \beta)^{2} + (z - \gamma)^{2} = \rho^{2},\quad Ax + By + Cz + D = 0. \] \begin{Examples}{XIX.} \Item{1.} What is represented by \emph{three} equations of the type $f(x, y, z) = 0$? \Item{2.} Three linear equations in general represent a single point. What are the exceptional cases? \Item{3.} What are the equations of a plane curve $f(x, y) = 0$ in the plane~$XOY$, when regarded as a curve in space? [$f(x, y) = 0$, $z = 0$.] \Item{4.} \Topic{Cylinders.} What is the meaning of a single equation $f(x, y) = 0$, considered as a locus in space of three dimensions? [All points on the surface satisfy $f(x, y) = 0$, whatever be the value of~$z$. The curve $f(x, y) = 0$, $z = 0$ is the curve in which the locus cuts the plane~$XOY$. The locus is the surface formed by drawing lines parallel to~$OZ$ through all points of this curve. Such a surface is called a \emph{cylinder}.] \Item{5.} \Topic{Graphical representation of a surface on a plane. Contour Maps.} It might seem to be impossible to represent a surface adequately by a drawing on a plane; and so indeed it is: but a very fair notion of the nature of the surface may often be obtained as follows. Let the equation of the surface be $z = f(x, y)$. If we give~$z$ a particular value~$a$, we have an equation $f(x, y) = a$, which we may regard as determining a plane curve on the paper. We trace this curve and mark it~$(a)$. Actually the curve~$(a)$ is the projection on the plane~$XOY$ \PageSep{63} of the section of the surface by the plane $z = a$. We do this for all values of~$a$ (practically, of course, for a selection of values of~$a$). We obtain some such figure as is shown in \Fig{17}. 
It will at once suggest a contoured Ordnance Survey map: and in fact this is the principle on which such maps are constructed. The contour line~$1000$ is the projection, on the plane of the sea level, of the section of the surface of the land by the plane parallel to the plane of the sea level and $1000$~ft.\ above it.\footnote {We assume that the effects of the earth's curvature may be neglected.} %[Illustration: Fig. 17.] \Figure{17}{p063} \Item{6.} {\Loosen Draw a series of contour lines to illustrate the form of the surface $2z = 3xy$.} \Item{7.} \Topic{Right circular cones.} Take the origin of coordinates at the vertex of the cone and the axis of~$z$ along the axis of the cone; and let~$\alpha$ be the semi-vertical angle of the cone. The equation of the cone (which must be regarded as extending both ways from its vertex) is $x^{2} + y^{2} - z^{2}\tan^{2} \alpha = 0$. \Item{8.} \Topic{Surfaces of revolution in general.} The cone of Ex.~7 cuts~$ZOX$ in two lines whose equations may be combined in the equation $x^{2} = z^{2}\tan^{2}\alpha$. That is to say, the equation of the surface generated by the revolution of the curve $y = 0$, $x^{2} = z^{2}\tan^{2}\alpha$ round the axis of~$z$ is derived from the second of these equations by changing~$x^{2}$ into~$x^{2} + y^{2}$. Show generally that the equation of the surface generated by the revolution of the curve $y = 0$, $x = f(z)$, round the axis of~$z$, is \[ \sqrtp{x^{2} + y^{2}} = f(z). \] \Item{9.} \Topic{Cones in general.} A surface formed by straight lines passing through a fixed point is called a \emph{cone}: the point is called the \emph{vertex}. A particular case is given by the right circular cone of Ex.~7. Show that the equation of a cone whose vertex is~$O$ is of the form $f(z/x, z/y) = 0$, and that any equation of this form represents a cone. [If $(x, y, z)$ lies on the cone, so must $(\lambda x, \lambda y, \lambda z)$, for any value of~$\lambda$.] 
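The bracketed hint of Ex.~9 — that a cone with vertex~$O$ contains $(\lambda x, \lambda y, \lambda z)$ whenever it contains $(x, y, z)$ — may be illustrated on the right circular cone of Ex.~7. A sketch (the particular numbers are ours):

```python
from math import tan, radians

def on_cone(x, y, z, alpha, tol=1e-9):
    # right circular cone of Ex. 7: x^2 + y^2 - z^2 tan^2(alpha) = 0
    return abs(x * x + y * y - (z * tan(alpha)) ** 2) < tol

alpha = radians(30)
x, y, z = 3.0, 4.0, 5.0 / tan(alpha)  # a point on the cone
assert on_cone(x, y, z, alpha)
for lam in (0.5, 2.0, -3.0):          # every scaling stays on it
    assert on_cone(lam * x, lam * y, lam * z, alpha)
```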
\PageSep{64} \Item{10.} \Topic{Ruled surfaces.} Cylinders and cones are special cases of \emph{surfaces composed of straight lines}. Such surfaces are called \emph{ruled surfaces}. The two equations \[ x = az + b,\quad y = cz + d, \Tag{(1)} \] represent the intersection of two planes, \ie\ a straight line. Now suppose that $a$, $b$, $c$, $d$ instead of being fixed are \emph{functions of an auxiliary variable~$t$}. For any particular value of~$t$ the equations~\Eq{(1)} give a line. As $t$~varies, this line moves and generates a surface, whose equation may be found by eliminating~$t$ between the two equations~\Eq{(1)}. For instance, in Ex.~7 the equations of the line which generates the cone are \[ x = z\tan \alpha\cos t,\quad y = z\tan \alpha\sin t, \] where $t$~is the angle between the plane~$XOZ$ and a plane through the line and the axis of~$z$. Another simple example of a ruled surface may be constructed as follows. Take two sections of a right circular cylinder perpendicular to the axis and at a distance~$l$ apart (\Fig{18a}). We can imagine the surface of the cylinder to be made up of a number of thin parallel rigid rods of length~$l$, such as~$PQ$, the ends of the rods being fastened to two circular rods of radius~$a$. Now let us take a third circular rod of the same radius and place it round the surface of the cylinder at a distance~$h$ from one of the first two rods (see \Fig{18a}, where $Pq = h$). Unfasten the end~$Q$ of the rod~$PQ$ and turn~$PQ$ about~$P$ until $Q$~can be fastened to the third circular rod in the position~$Q'$. The angle $qOQ' = \alpha$ in the figure is evidently given by \[ l^{2} - h^{2} = qQ'^{2} = \left (2a\sin\tfrac{1}{2} \alpha\right)^{2}. \] Let all the other rods of which the cylinder was composed be treated in the same way. We obtain a ruled surface whose form is indicated in \Fig{18b}. 
It is entirely built up of straight lines; but the surface is curved everywhere, and is in general shape not unlike certain forms of table-napkin rings (\Fig{18c}). %[Illustration: Fig. 18a.] %[Illustration: Fig. 18b.] %[Illustration: Fig. 18c.] \begin{figure}[hbt!] \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p064a} \caption{Fig.~18a.} \label{fig:18a} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p064b} \caption{Fig.~18b.} \label{fig:18b} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p064c} \caption{Fig.~18c.} \label{fig:18c} \end{minipage} \end{figure} \end{Examples} \PageSep{65} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER II.} \begin{Examples}{} \Item{1.} Show that if $y = f(x) = (ax + b)/(cx - a)$ then $x = f(y)$. \Item{2.} If $f(x) = f(-x)$ for all values of~$x$, $f(x)$~is called an \emph{even} function. If $f(x) = -f(-x)$, it is called an \emph{odd} function. Show that any function of~$x$, defined for all values of~$x$, is the sum of an even and an odd function of~$x$. [Use the identity $f(x) = \frac{1}{2}\{f(x) + f(-x)\} + \frac{1}{2}\{f(x) - f(-x)\}$.] \Item{3.} Draw the graphs of the functions \[ 3\sin x + 4\cos x,\quad \sin\left(\frac{\pi}{\sqrt{2}} \sin x\right). \] \MathTrip{1896.} \Item{4.} Draw the graphs of the functions \[ \sin x(a\cos^{2} x + b\sin^{2} x),\quad \frac{\sin x}{x}(a\cos^{2} x + b\sin^{2} x),\quad \left(\frac{\sin x}{x}\right)^{2}. \] \Item{5.} Draw the graphs of the functions $x[1/x]$, $[x]/x$. \Item{6.} Draw the graphs of the functions \begin{align*} \Itemp{(i)} & \arccos(2x^{2} - 1) - 2 \arccos{x}, \\ \Itemp{(ii)} & \arctan \frac{a + x}{1 - ax} - \arctan{a} - \arctan{x}, \end{align*} where the symbols $\arccos a$, $\arctan a$ denote, for any value of~$a$, the least positive (or zero) angle, whose cosine or tangent is~$a$. 
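The decomposition of Ex.~2 into even and odd parts admits a direct numerical check; the sample function below is an arbitrary choice.

```python
# Ex. 2: any function defined for all x is the sum of an even and an odd
# function, by the identity
#     f(x) = 1/2 {f(x) + f(-x)} + 1/2 {f(x) - f(-x)}.

def f(x):
    return x**3 + 2*x**2 - 5*x + 1    # an arbitrary sample function

def even_part(x):
    return 0.5 * (f(x) + f(-x))

def odd_part(x):
    return 0.5 * (f(x) - f(-x))

for x in (-2.0, -0.3, 0.0, 1.7, 4.0):
    assert abs(even_part(x) - even_part(-x)) < 1e-9      # even
    assert abs(odd_part(x) + odd_part(-x)) < 1e-9        # odd
    assert abs(even_part(x) + odd_part(x) - f(x)) < 1e-9 # their sum is f
```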
\Item{7.} Verify the following method of constructing the graph of $f\{\phi(x)\}$ by means of the line $y = x$ and the graphs of $f(x)$~and~$\phi(x)$: take $OA = x$ along~$OX$, draw $AB$ parallel to~$OY$ to meet $y = \phi(x)$ in~$B$, $BC$~parallel to~$OX$ to meet $y = x$ in~$C$, $CD$~parallel to~$OY$ to meet $y = f(x)$ in~$D$, and $DP$~parallel to~$OX$ to meet~$AB$ in~$P$; then $P$~is a point on the graph required. \Item{8.} Show that the roots of $x^{3} + px + q = 0$ are the abscissae of the points of intersection (other than the origin) of the parabola $y = x^{2}$ and the circle \[ x^{2} + y^{2} + (p - 1)y + qx = 0. \] \Item{9.} The roots of $x^{4} + nx^{3} + px^{2} + qx + r = 0$ are the abscissae of the points of intersection of the parabola $x^{2} = y - \frac{1}{2}nx$ and the circle \[ x^{2} + y^{2} + (\tfrac{1}{8}n^{2} - \tfrac{1}{2}pn + \tfrac{1}{2}n + q)x + (p - 1 - \tfrac{1}{4}n^{2})y + r = 0. \] \Item{10.} Discuss the graphical solution of the equation \[ x^{m} + ax^{2} + bx + c = 0 \] by means of the curves $y = x^{m}$, $y = -ax^{2} - bx - c$. Draw up a table of the various possible numbers of roots. \Item{11.} Solve the equation $\sec\theta + \cosec\theta = 2\sqrt{2}$; and show that the equation $\sec\theta + \cosec\theta = c$ has two roots between $0$~and~$2\pi$ if $c^{2} < 8$ and four if $c^{2} > 8$. \PageSep{66} \Item{12.} Show that the equation \[ 2x = (2n + 1)\pi(1 - \cos x), \] where $n$~is a positive integer, has $2n + 3$ roots and no more, indicating their localities roughly. \MathTrip{1896.} \Item{13.} Show that the equation $\frac{2}{3}x\sin x = 1$ has four roots between $-\pi$~and~$\pi$. \Item{14.} Discuss the number and values of the roots of the equations %[** TN: Items in multiple columns in the original] \SubItem{(1)} $\cot x + x - \frac{3}{2}\pi = 0$, \SubItem{(2)} $x^{2} + \sin^{2} x = 1$, \SubItem{(3)} $\tan x = 2x/(1 + x^{2})$, \SubItem{(4)} $\sin x - x + \frac{1}{6}x^{3} = 0$, \SubItem{(5)} $(1 - \cos x)\tan\alpha - x + \sin x = 0$. 
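For Ex.~8, substituting $y = x^{2}$ into the circle gives $x^{2} + x^{4} + (p - 1)x^{2} + qx = x(x^{3} + px + q)$, so each root $r$ of the cubic yields an intersection $(r, r^{2})$ other than the origin. A numerical check, with an illustratively chosen cubic:

```python
# Ex. 8: a root r of x^3 + p x + q = 0 gives a point (r, r^2) common to
# the parabola y = x^2 and the circle x^2 + y^2 + (p - 1) y + q x = 0.

def circle(x, y, p, q):
    return x*x + y*y + (p - 1)*y + q*x

def cubic_root(p, q, lo, hi):
    """A real root of x^3 + p x + q = 0 by bisection on [lo, hi]."""
    g = lambda x: x**3 + p*x + q
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p, q = -4.0, 1.0                    # illustrative cubic x^3 - 4x + 1 = 0
r = cubic_root(p, q, 0.0, 1.0)      # it has a root between 0 and 1
assert abs(r**3 + p*r + q) < 1e-9
assert abs(circle(r, r*r, p, q)) < 1e-8   # (r, r^2) lies on the circle
```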
\Item{15.} The polynomial of the second degree which assumes, when $x = a$, $b$,~$c$ the values $\alpha$,~$\beta$,~$\gamma$ is \[ \alpha\frac{(x - b)(x - c)}{(a - b)(a - c)} + \beta \frac{(x - c)(x - a)}{(b - c)(b - a)} + \gamma\frac{(x - a)(x - b)}{(c - a)(c - b)}. \] Give a similar formula for the polynomial of the $(n - 1)$th~degree which assumes, when $x = a_{1}$, $a_{2}$, \dots~$a_{n}$, the values $\alpha_{1}$, $\alpha_{2}$, \dots~$\alpha_{n}$. \Item{16.} Find a polynomial in~$x$ of the second degree which for the values $0$,~$1$,~$2$ of~$x$ takes the values $1/c$, $1/(c + 1)$, $1/(c + 2)$; and show that when $x = c + 2$ its value is~$1/(c + 1)$. \MathTrip{1911.} \Item{17.} Show that if $x$~is a rational function of~$y$, and $y$~is a rational function of~$x$, then $Axy + Bx + Cy + D = 0$. \Item{18.} If $y$~is an algebraical function of~$x$, then $x$~is an algebraical function of~$y$. \Item{19.} Verify that the equation \[ \cos\tfrac{1}{2}\pi x = 1 - \frac{x^{2}}{x + (x - 1)\bigsqrtp{\dfrac{2 - x}{3}}} \] is approximately true for all values of~$x$ between $0$~and~$1$. [Take $x = 0$, $\frac{1}{6}$, $\frac{1}{3}$, $\tfrac{1}{2}$, $\frac{2}{3}$, $\frac{5}{6}$,~$1$, and use tables. For which of these values is the formula exact?] \Item{20.} What is the form of the graph of the functions \[ z = [x] + [y],\quad z = x + y - [x] - [y]? \] \Item{21.} What is the form of the graph of the functions $z = \sin x + \sin y$, $z = \sin x\sin y$, $z = \sin xy$, $z = \sin(x^{2} + y^{2})$? \Item{22.} \Topic{Geometrical constructions for irrational numbers.} In \Ref{Chapter}{I} we indicated one or two simple geometrical constructions for a length equal to~$\sqrt{2}$, starting from a given unit length. We also showed how to construct the roots of any quadratic equation $ax^{2} + 2bx + c = 0$, it being supposed that we can construct lines whose lengths are equal to any of the ratios of the coefficients $a$,~$b$,~$c$, as is certainly the case if $a$,~$b$,~$c$ are rational. 
All these constructions were what may be called Euclidean constructions; they depended on the ruler and compasses only. \PageSep{67} It is fairly obvious that we can construct by these methods the length measured by any irrational number which is defined by any combination of square roots, however complicated. Thus \[ \bigsqrtb[4]{\bigsqrtp{\frac{17 + 3\sqrt{11}}{17 - 3\sqrt{11}}} - \bigsqrtp{\frac{17 - 3\sqrt{11}}{17 + 3\sqrt{11}}}} \] is a case in point. This expression contains a fourth root, but this is of course the square root of a square root. We should begin by constructing~$\sqrt{11}$, \eg\ as the mean between $1$~and~$11$: then $17 + 3\sqrt{11}$ and $17 - 3\sqrt{11}$, and so on. Or these two mixed surds might be constructed directly as the roots of $x^{2} - 34x + 190 = 0$. Conversely, \emph{only} irrationals of this kind can be constructed by Euclidean methods. Starting from a unit length we can construct any \emph{rational} length. And hence we can construct the line $Ax + By + C = 0$, provided that the ratios of $A$,~$B$,~$C$ are rational, and the circle \[ (x - \alpha)^{2} + (y - \beta)^{2} = \rho ^{2} \] (or $x^{2} + y^{2} + 2gx + 2fy + c = 0$), provided that $\alpha$,~$\beta$,~$\rho$ are rational, a condition which implies that $g$,~$f$,~$c$ are rational. Now in any Euclidean construction each new point introduced into the figure is determined as the intersection of two lines or circles, or a line and a circle. But if the coefficients are rational, such a pair of equations as \[ Ax + By + C = 0,\quad x^{2} + y^{2} + 2gx + 2fy + c = 0 \] give, on solution, values of $x$~and~$y$ of the form $m + n\sqrt{p}$, where $m$,~$n$,~$p$ are rational: for if we substitute for~$x$ in terms of~$y$ in the second equation we obtain a quadratic in~$y$ with rational coefficients. Hence the coordinates of all points obtained by means of lines and circles with rational coefficients are expressible by rational numbers and quadratic surds. 
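The elimination just described can be traced with exact rational arithmetic: substituting $x = -(By + C)/A$ into the circle and clearing $A^{2}$ leaves a quadratic in~$y$ with rational coefficients, whence $y = m + n\sqrt{p}$ with $m$,~$n$,~$p$ rational. A sketch, with illustratively chosen rational coefficients:

```python
import math
from fractions import Fraction as F

# Line A x + B y + C = 0 and circle x^2 + y^2 + 2 g x + 2 f y + c = 0,
# all coefficients rational. Substituting x = -(B y + C)/A gives
#   (A^2 + B^2) y^2 + (2BC - 2gAB + 2fA^2) y + (C^2 - 2gAC + cA^2) = 0.

A, B, C = F(1), F(-2), F(3)          # illustrative rational coefficients
g, f, c = F(1, 2), F(-1), F(-4)

a2 = A*A + B*B
a1 = 2*B*C - 2*g*A*B + 2*f*A*A
a0 = C*C - 2*g*A*C + c*A*A
assert all(isinstance(t, F) for t in (a2, a1, a0))     # all rational

# Hence y = m + n*sqrt(p) with m, n, p rational:
m, n, p = -a1 / (2*a2), 1 / (2*a2), a1*a1 - 4*a2*a0
assert all(isinstance(t, F) for t in (m, n, p))

y = float(m) + float(n) * math.sqrt(p)   # one intersection ordinate
x = float(-(B*y + C) / A)
assert abs(float(A)*x + float(B)*y + float(C)) < 1e-9              # on the line
assert abs(x*x + y*y + 2*float(g)*x + 2*float(f)*y + float(c)) < 1e-9  # on the circle
```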
And so the same is true of the distance $\sqrtb{(x_{1} - x_{2})^{2} + (y_{1} - y_{2})^{2}}$ between any two points so obtained. With the irrational distances thus constructed we may proceed to construct a number of lines and circles whose coefficients may now themselves involve quadratic surds. It is evident, however, that all the lengths which we can construct by the use of such lines and circles are still expressible by square roots only, though our surd expressions may now be of a more complicated form. And this remains true however often our constructions are repeated. Hence \emph{Euclidean methods will construct any surd expression involving square roots only, and no others}. One of the famous problems of antiquity was that of the duplication of the cube, that is to say of the construction by Euclidean methods of a length measured by~$\sqrt[3]{2}$. It can be shown that $\sqrt[3]{2}$~cannot be expressed by means of any finite combination of rational numbers and square roots, and so that the problem is an impossible one. See Hobson, \textit{Squaring the Circle}, pp.~47~\textit{et~seq.}; the first stage of the proof, viz.\ the proof that $\sqrt[3]{2}$~cannot be a root of a quadratic equation $ax^{2} + 2bx + c = 0$ with rational coefficients, was given in \Ref{Ch.}{I} (\MiscExs{I}~24). \PageSep{68} \Item{23.} \Topic{Approximate quadrature of the circle.} Let $O$~be the centre of a circle of radius~$R$. On the tangent at~$A$ take $AP = \frac{11}{5}R$ and $AQ = \frac{13}{5}R$, in the same direction. On~$AO$ take $AN = OP$ and draw~$NM$ parallel to~$OQ$ and cutting~$AP$ in~$M$. Show that \[ AM/R = \tfrac{13}{25}\sqrt{146}, \] and that to take~$AM$ as being equal to the circumference of the circle would lead to a value of~$\pi$ correct to five places of decimals. If $R$~is the earth's radius, the error in supposing $AM$ to be its circumference is less than $11$~yards. 
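The figures asserted in Ex.~23 can be verified directly. With $R = 1$ we have $OP^{2} = 1 + (11/5)^{2} = 146/25$, and the similar triangles $ANM$, $AOQ$ give $AM = AQ \cdot AN/AO$. The earth's radius is taken below as a round $3960$ miles, an assumed figure for illustration.

```python
import math

# Ex. 23 with R = 1: AP = 11/5, AQ = 13/5, AN = OP, NM parallel to OQ.
OP = math.hypot(1.0, 11.0 / 5.0)          # OP^2 = R^2 + AP^2 = 146/25
AM = (13.0 / 5.0) * OP                    # AM = AQ * AN / AO (similar triangles)
assert abs(AM - (13.0 / 25.0) * math.sqrt(146.0)) < 1e-12

# Taking AM for the circumference 2*pi*R gives pi correct to five places:
approx_pi = AM / 2.0
assert round(approx_pi, 5) == round(math.pi, 5)

# With an assumed earth radius of 3960 miles, the error is under 11 yards:
error_yards = abs(AM - 2.0 * math.pi) * 3960.0 * 1760.0
assert error_yards < 11.0
```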
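The interpolation formula of Ex.~15, and the claim of Ex.~16, can also be checked with exact rational arithmetic; the value of $c$ below is an arbitrary choice.

```python
from fractions import Fraction

# Ex. 15: the quadratic taking values alpha, beta, gamma at x = a, b, c is
#   alpha (x-b)(x-c)/((a-b)(a-c)) + beta (x-c)(x-a)/((b-c)(b-a))
#                                 + gamma (x-a)(x-b)/((c-a)(c-b)).

def quadratic_through(a, b, c, alpha, beta, gamma):
    def p(x):
        return (alpha * (x - b) * (x - c) / ((a - b) * (a - c))
              + beta  * (x - c) * (x - a) / ((b - c) * (b - a))
              + gamma * (x - a) * (x - b) / ((c - a) * (c - b)))
    return p

# Ex. 16: values 1/c, 1/(c+1), 1/(c+2) at x = 0, 1, 2; then p(c+2) = 1/(c+1).
C = Fraction(5)                     # any c will do; Fraction keeps it exact
p = quadratic_through(Fraction(0), Fraction(1), Fraction(2),
                      1 / C, 1 / (C + 1), 1 / (C + 2))
assert p(Fraction(0)) == 1 / C and p(Fraction(1)) == 1 / (C + 1)
assert p(C + 2) == 1 / (C + 1)      # the value asserted in Ex. 16
```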
\Item{24.} Show that the only lengths which can be constructed with the ruler only, starting from a given unit length, are rational lengths. \Item{25.} \Topic{Constructions for $\sqrt[3]{2}$.} $O$~is the vertex and $S$~the focus of the parabola $y^{2} = 4x$, and $P$~is one of its points of intersection with the parabola $x^{2} = 2y$. Show that $OP$~meets the latus rectum of the first parabola in a point~$Q$ such that $SQ = \sqrt[3]{2}$. \Item{26.} Take a circle of unit diameter, a diameter~$OA$ and the tangent at~$A$. Draw a chord~$OBC$ cutting the circle at~$B$ and the tangent at~$C$. On this line take $OM = BC$. Taking $O$~as origin and $OA$~as axis of~$x$, show that the locus of~$M$ is the curve \[ (x^{2} + y^{2})x - y^{2} = 0 \] (the \emph{Cissoid of Diocles}). Sketch the curve. Take along the axis of~$y$ a length $OD = 2$. Let $AD$~cut the curve in~$P$ and $OP$~cut the tangent to the circle at~$A$ in~$Q$. Show that $AQ = \sqrt[3]{2}$. \end{Examples} \PageSep{69} \Chapter{III}{COMPLEX NUMBERS} \Paragraph{34. Displacements along a line and in a plane.} The `real number'~$x$, with which we have been concerned in the two preceding chapters, may be regarded from many different points of view. It may be regarded as a pure number, destitute of geometrical significance, or a geometrical significance may be attached to it in at least three different ways. It may be regarded as \emph{the measure of a length}, viz.~the length~$A_{0}P$ along the line~$\Lambda$ of \Ref{Chap.}{I}\@. It may be regarded as \emph{the mark of a point}, viz.~the point~$P$ whose distance from~$A_{0}$ is~$x$. Or it may be regarded as \emph{the measure of a displacement} or \emph{change of position} on the line~$\Lambda$. It is on this last point of view that we shall now concentrate our attention. Imagine a small particle placed at~$P$ on the line~$\Lambda$ and then displaced to~$Q$. 
We shall call the displacement or change of position which is needed to transfer the particle from $P$ to~$Q$ \emph{the displacement~$\Seg{PQ}$}. To specify a displacement completely three things are needed, its \emph{magnitude}, its \emph{sense} forwards or backwards along the line, and what may be called its \emph{point of application}, \ie\ the original position~$P$ of the particle. But, when we are thinking merely of the change of position produced by the displacement, it is natural to disregard the point of application and to consider all displacements as equivalent whose lengths and senses are the same. Then the displacement is completely specified by the length $PQ = x$, the sense of the displacement being fixed by the sign of~$x$. We may therefore, without ambiguity, speak of \emph{the displacement~$[x]$},\footnote {It is hardly necessary to caution the reader against confusing this use of the symbol~$[x]$ and that of \Ref{Chap.}{II} (\Exs{xvi}.\ and \MiscExs{II}).} and we may write $\Seg{PQ} = [x]$. \PageSep{70} We use the square bracket to distinguish the displacement~$[x]$ from the length or number~$x$.\footnote {Strictly speaking we ought, by some similar difference of notation, to distinguish the actual length~$x$ from the number~$x$ which measures it. The reader will perhaps be inclined to consider such distinctions futile and pedantic. But increasing experience of mathematics will reveal to him the great importance of distinguishing clearly between things which, however intimately connected, are not the same. If cricket were a mathematical science, it would be very important to distinguish between the \emph{motion} of the batsman between the wickets, the \emph{run} which he scores, and the \emph{mark} which is put down in the score-book.} If the coordinate of~$P$ is~$a$, that of~$Q$ will be~$a + x$; the displacement~$[x]$ therefore transfers a particle from the point~$a$ to the point~$a + x$. We come now to consider \emph{displacements in a plane}. 
We may define the displacement~$\Seg{PQ}$ as before. But now more data are required in order to specify it completely. We require to know: (i)~the \emph{magnitude} of the displacement, \ie\ the length of the straight line~$PQ$; (ii)~the \emph{direction} of the displacement, which is determined by the angle which $PQ$~makes with some fixed line in the plane; (iii)~the \emph{sense} of the displacement; and (iv)~its \emph{point of application}. Of these requirements we may disregard the fourth, if we consider two displacements as equivalent if they are %[Illustration: Fig. 19.] \Figure[2in]{19}{p070} the same in magnitude, direction, and sense. In other words, if $PQ$~and~$RS$ are equal and parallel, and the sense of motion from $P$~to~$Q$ is the same as that of motion from $R$~to~$S$, we regard the displacements $\Seg{PQ}$ and~$\Seg{RS}$ as equivalent, and write \[ \Seg{PQ} = \Seg{RS}. \] Now let us take any pair of coordinate axes in the plane (such as $OX$,~$OY$ in \Fig{19}). Draw a line~$OA$ equal and parallel to~$PQ$, the sense of motion from $O$ to~$A$ being the same as that from $P$ to~$Q$. Then $\Seg{PQ}$~and~$\Seg{OA}$ are equivalent displacements. Let $x$~and~$y$ be the coordinates of~$A$. Then it is evident that $\Seg{OA}$~is completely specified if $x$~and~$y$ are given. We call $\Seg{OA}$ \emph{the displacement $[x, y]$} and write \[ \Seg{OA} = \Seg{PQ} = \Seg{RS} = [x, y]. \] \PageSep{71} \Paragraph{35. Equivalence of displacements. Multiplication of displacements by numbers.} If $\xi$~and~$\eta$ are the coordinates of~$P$, and $\xi'$~and~$\eta'$ those of~$Q$, it is evident that \[ x = \xi' - \xi,\quad y = \eta' - \eta. \] The displacement from $(\xi, \eta)$ to $(\xi', \eta')$ is therefore \[ [\xi' - \xi, \eta' - \eta]. \] It is clear that two displacements $[x, y]$, $[x', y']$ are equivalent if, and only if, $x = x'$, $y = y'$. Thus $[x, y] = [x', y']$ if and only if \[ x = x',\quad y = y'. 
\Tag{(1)} \] The reverse displacement $\Seg{QP}$ would be $[\xi - \xi', \eta - \eta']$, and it is natural to agree that \begin{align*} [\xi - \xi', \eta - \eta'] &= -[\xi' - \xi, \eta' - \eta],\\ \Seg{QP} &= -\Seg{PQ}, \end{align*} {\Loosen these equations being really definitions of the meaning of the symbols $-[\xi' - \xi, \eta' - \eta]$, $-\Seg{PQ}$. Having thus agreed that} \[ -[x, y] = [-x, -y], \] it is natural to agree further that \[ \alpha[x, y] = [\alpha x, \alpha y], \Tag{(2)} \] {\Loosen where $\alpha$~is any real number, positive or negative. Thus (\Fig{19}) if $OB = -\frac{1}{2}OA$ then} \[ \Seg{OB} = -\tfrac{1}{2}\Seg{OA} = -\tfrac{1}{2}[x, y] = [-\tfrac{1}{2}x, -\tfrac{1}{2}y]. \] The equations \Eq{(1)}~and~\Eq{(2)} define the first two important ideas connected with displacements, viz.\ \emph{equivalence} of displacements, and \emph{multiplication of displacements by numbers}. \Paragraph{36. Addition of displacements.} We have not yet given any definition which enables us to attach any meaning to the expressions \[ \Seg{PQ} + \Seg{P'Q'},\quad [x, y] + [x', y']. \] Common sense at once suggests that we should define the sum of two displacements as the displacement which is the result of the successive application of the two given displacements. In \PageSep{72} other words, it suggests that if $QQ_{1}$~be drawn equal and parallel to~$P'Q'$, so that the result of successive displacements $\Seg{PQ}$,~$\Seg{P'Q'}$ on a particle at~$P$ is to transfer it first to~$Q$ and then to~$Q_{1}$ then we should define the sum of $\Seg{PQ}$~and~$\Seg{P'Q'}$ as being~$\Seg{PQ_{1}}$. If then we draw $OA$~equal and parallel to~$PQ$, and $OB$~equal and parallel to~$P'Q'$, and complete the parallelogram~$OACB$, we have %[Illustration: Fig. 20.] \Figure[3.5in]{20}{p072} \[ \Seg{PQ} + \Seg{P'Q'} = \Seg{PQ_{1}} = \Seg{OA} + \Seg{OB} = \Seg{OC}. \] Let us consider the consequences of adopting this definition. 
If the coordinates of~$B$ are $x'$,~$y'$, then those of the middle point of~$AB$ are $\frac{1}{2}(x + x')$, $\frac{1}{2} (y + y')$, and those of~$C$ are $x + x'$, $y + y'$. Hence \[ [x, y] + [x', y'] = [x + x', y + y'], \Tag{(3)} \] which may be regarded as the symbolic definition of addition of displacements. We observe that \begin{align*} [x', y'] + [x, y] &= [x' + x, y' + y]\\ &= [x + x', y + y'] = [x, y] + [x', y'] \end{align*} In other words, \emph{addition of displacements obeys the commutative law} expressed in ordinary algebra by the equation $a + b = b + a$. This law expresses the obvious geometrical fact that if we move from~$P$ first through a distance~$PQ_{2}$ equal and parallel to~$P'Q'$, and then through a distance equal and parallel to~$PQ$, we shall arrive at the same point~$Q_{1}$ as before. \PageSep{73} In particular \[ [x, y] = [x, 0] + [0, y]. \Tag{(4)} \] Here $[x, 0]$ denotes a displacement through a distance~$x$ in a direction parallel to~$OX$. It is in fact what we previously denoted by~$[x]$, when we were considering only displacements along a line. We call $[x, 0]$~and~$[0, y]$ the \emph{components} of~$[x, y]$, and $[x, y]$ their \emph{resultant}. When we have once defined addition of two displacements, there is no further difficulty in the way of defining addition of any number. Thus, by definition, \begin{gather*} [x, y] + [x', y'] + [x'', y''] = ([x, y] + [x', y']) + [x'', y'']\\ = [x + x', y + y'] + [x'', y''] = [x + x' + x'', y + y' + y'']. \end{gather*} We define \emph{subtraction} of displacements by the equation \[ [x, y] - [x', y'] = [x, y] + (-[x', y']), \Tag{(5)} \] which is the same thing as $[x, y] + [-x', -y']$ or as $[x - x', y - y']$. In particular \[ [x, y] - [x, y] = [0, 0]. \] The displacement~$[0, 0]$ leaves the particle where it was; it is the \emph{zero displacement}, and we agree to write $[0, 0] = 0$. 
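The displacement algebra of the last two paragraphs (equivalence, scalar multiples, addition~(3), subtraction~(5)) can be rendered as a minimal sketch, with pairs of numbers standing for $[x, y]$; the sample values are arbitrary.

```python
# Displacements [x, y] as pairs, with the operations of Sections 35-36.

def add(d, e):
    (x, y), (xp, yp) = d, e
    return (x + xp, y + yp)          # equation (3)

def scale(a, d):
    x, y = d
    return (a * x, a * y)            # equation (2)

def sub(d, e):
    return add(d, scale(-1, e))      # equation (5)

d, e = (3.0, -1.0), (0.5, 2.0)           # sample displacements
assert add(d, e) == add(e, d)            # commutative law
assert add((3.0, 0.0), (0.0, -1.0)) == d # components: [x,y] = [x,0] + [0,y]
assert sub(d, d) == (0.0, 0.0)           # the zero displacement [0, 0]
assert add(d, sub(e, e)) == d            # [x,y] + [0,0] = [x,y]
```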
\begin{Examples}{XX.} \Item{1.} Prove that \SubItem{(i)} $\alpha [\beta x, \beta y] = \beta [\alpha x, \alpha y] = [\alpha \beta x, \alpha \beta y]$, \SubItem{(ii)} $([x, y] + [x', y']) + [x'', y''] = [x, y] + ([x', y'] + [x'', y''])$, \SubItem{(iii)} $[x, y] + [x', y'] = [x', y'] + [x, y]$, \SubItem{(iv)} $(\alpha + \beta) [x, y] = \alpha [x, y] + \beta [x, y]$, \SubItem{(v)} $\alpha \{[x, y] + [x', y']\} = \alpha [x, y] + \alpha [x', y']$. [We have already proved~(iii). The remaining equations follow with equal ease from the definitions. The reader should in each case consider the geometrical significance of the equation, as we did above in the case of~(iii).] \Item{2.} If $M$~is the middle point of~$PQ$, then $\Seg{OM} = \frac{1}{2}(\Seg{OP} + \Seg{OQ})$. More generally, if $M$~divides~$PQ$ in the ratio~$\mu : \lambda$, then \[ \Seg{OM} = \frac{\lambda}{\lambda + \mu}\, \Seg{OP} + \frac{\mu}{\lambda + \mu}\, \Seg{OQ}. \] \Item{3.} If $G$~is the centre of mass of equal particles at $P_{1}$, $P_{2}$, \dots,~$P_{n}$, then \[ \Seg{OG} = (\Seg{OP_{1}} + \Seg{OP_{2}} + \dots + \Seg{OP_{n}})/n. \] \PageSep{74} \Item{4.} If $P$,~$Q$,~$R$ are collinear points in the plane, then it is possible to find real numbers $\alpha$,~$\beta$,~$\gamma$, not all zero, and such that \[ \alpha · \Seg{OP} + \beta · \Seg{OQ} + \gamma · \Seg{OR} = 0; \] and conversely. [This is really only another way of stating Ex.~2.] \Item{5.} If $\Seg{AB}$~and~$\Seg{AC}$ are two displacements not in the same straight line, and \[ \alpha · \Seg{AB} + \beta · \Seg{AC} = \gamma · \Seg{AB} + \delta · \Seg{AC}, \] then $\alpha = \gamma$ and $\beta = \delta$. [Take $AB_{1} = \alpha · AB$, $AC_{1} = \beta · AC$. Complete the parallelogram $AB_{1}P_{1}C_{1}$. Then $\Seg{AP_{1}} = \alpha · \Seg{AB} + \beta · \Seg{AC}$. It is evident that $\Seg{AP_{1}}$~can only be expressed in this form in one way, whence the theorem follows.] \Item{6.} $ABCD$~is a parallelogram. 
Through~$Q$, a point inside the parallelogram, $RQS$~and~$TQU$ are drawn parallel to the sides. Show that $RU$,~$TS$ intersect on~$AC$. %[Illustration: Fig. 21.] \Figure[2.75in]{21}{p074} [Let the ratios $AT:AB$, $AR:AD$ be denoted by $\alpha$,~$\beta$. Then \begin{gather*} \Seg{AT} = \alpha · \Seg{AB},\quad \Seg{AR} = \beta · \Seg{AD}, \\ \Seg{AU} = \alpha · \Seg{AB} + \Seg{AD},\quad \Seg{AS} = \Seg{AB} + \beta · \Seg{AD}. \end{gather*} Let $RU$~meet $AC$ in~$P$. Then, since $R$,~$U$,~$P$ are collinear, \[ \Seg{AP} = \frac{\lambda}{\lambda + \mu}\, \Seg{AR} + \frac{\mu}{\lambda + \mu}\, \Seg{AU}, \] where $\mu/\lambda$ is the ratio in which $P$~divides~$RU$. That is to say \[ \Seg{AP} = \frac{\alpha\mu}{\lambda + \mu}\, \Seg{AB} + \frac{\beta\lambda + \mu}{\lambda + \mu}\, \Seg{AD}. \] But since $P$~lies on~$AC$, $\Seg{AP}$~is a numerical multiple of~$\Seg{AC}$; say \[ \Seg{AP} = k · \Seg{AC} = k · \Seg{AB} + k · \Seg{AD}. \] Hence (Ex.~5) $\alpha\mu = \beta\lambda + \mu = (\lambda + \mu)k$, from which we deduce \[ k = \frac{\alpha\beta}{\alpha + \beta - 1}. \] The symmetry of this result shows that a similar argument would also give \[ \Seg{AP'} = \frac{\alpha\beta}{\alpha + \beta - 1}\, \Seg{AC}, \] if $P'$~is the point where $TS$~meets~$AC$. Hence $P$~and~$P'$ are the same point.] \Item{7.} $ABCD$~is a parallelogram, and $M$~the middle point of~$AB$. Show that $DM$~trisects and is trisected by~$AC$.\footnote {The two preceding examples are taken from Willard Gibbs' \textit{Vector Analysis}.} \end{Examples} \PageSep{75} \Paragraph{37. Multiplication of displacements.} So far we have made no attempt to attach any meaning whatever to the notion of the \emph{product} of two displacements. The only kind of multiplication which we have considered is that in which a displacement is multiplied by a number. The expression \[ [x, y] × [x', y'] \] so far means nothing, and we are at liberty to define it to mean anything we like. 
It is, however, fairly clear that if any definition of such a product is to be of any use, the product of two displacements must itself be a displacement. We might, for example, define it as being equal to \[ [x + x', y + y']; \] in other words, we might agree that the product of two displacements was to be always equal to their sum. But there would be two serious objections to such a definition. In the first place our definition would be futile. We should only be introducing a new method of expressing something which we can perfectly well express without it. In the second place our definition would be inconvenient and misleading for the following reasons. If $\alpha$~is a real number, we have already defined $\alpha [x, y]$ as~$[\alpha x, \alpha y]$. Now, as we saw in \SecNo[§]{34}, the real number~$\alpha$ may itself from one point of view be regarded as a displacement, viz.\ the displacement~$[\alpha]$ along the axis~$OX$, or, in our later notation, the displacement $[\alpha, 0]$. It is therefore, if not absolutely necessary, at any rate most desirable, that our definition should be such that \[ [\alpha, 0] [x, y] = [\alpha x, \alpha y], \] and the suggested definition does not give this result. A more reasonable definition might appear to be \[ [x, y] [x', y'] = [xx', yy']. \] But this would give \[ [\alpha, 0] [x, y] = [\alpha x, 0]; \] and so this definition also would be open to the second objection. In fact, it is by no means obvious what is the best meaning to attach to the product $[x, y] [x', y']$. 
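Both rejected candidates fail the requirement $[\alpha, 0][x, y] = [\alpha x, \alpha y]$, as a short numerical check confirms (the sample values are arbitrary):

```python
# The two rejected definitions of Section 37, each failing the requirement
#     [alpha, 0][x, y] = [alpha x, alpha y].

def sum_as_product(d, e):            # first rejected definition: product = sum
    return (d[0] + e[0], d[1] + e[1])

def coordinatewise(d, e):            # second rejected definition: [xx', yy']
    return (d[0] * e[0], d[1] * e[1])

alpha, x, y = 2.0, 3.0, 4.0
want = (alpha * x, alpha * y)        # what scalar multiplication demands
assert sum_as_product((alpha, 0.0), (x, y)) != want    # gives (5, 4)
assert coordinatewise((alpha, 0.0), (x, y)) != want    # gives (6, 0)
```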
All that is clear is (1)~that, if our definition is to be of any use, this product must itself be \PageSep{76} a displacement whose coordinates depend on $x$~and~$y$, or in other words that we must have \[ [x, y] [x', y'] = [X, Y], \] where $X$~and~$Y$ are functions of $x$,~$y$,~$x'$, and~$y'$; (2)~that the definition must be such as to agree with the equation \[ [x, 0] [x', y'] = [xx', xy']; \] and (3)~that the definition must obey the ordinary commutative, distributive, and associative laws of multiplication, so that \begin{align*} [x, y] [x', y'] &= [x', y'] [x, y],\\ ([x, y] + [x', y']) [x'', y''] &= [x, y] [x'', y''] + [x', y'] [x'', y''],\\ [x, y] ([x', y'] + [x'', y'']) &= [x, y] [x', y'] + [x, y] [x'', y''],\\ \intertext{and} [x, y] ([x', y'] [x'', y'']) &= ([x, y] [x', y']) [x'', y'']. \end{align*} \Paragraph{38.} The right definition to take is suggested as follows. We know that, if $OAB$,~$OCD$ are two similar triangles, the angles corresponding in the order in which they are written, then \[ OB/OA = OD/OC, \] or $OB · OC = OA · OD$. This suggests that we should try to define multiplication and division of displacements in such a way that \[ \Seg{OB}/\Seg{OA} = \Seg{OD}/\Seg{OC},\quad \Seg{OB} · \Seg{OC} = \Seg{OA} · \Seg{OD}. \] %[Illustration: Fig. 22.] \Figure[3.5in]{22}{p076} Now let \[ \Seg{OB} = [x, y],\quad \Seg{OC} = [x', y'],\quad \Seg{OD} = [X, Y], \] \PageSep{77} and suppose that $A$~is the point~$(1, 0)$, so that $\Seg{OA} = [1, 0]$. Then \[ \Seg{OA} · \Seg{OD} = [1, 0] [X, Y] = [X, Y], \] and so \[ [x, y] [x', y'] = [X, Y]. \] The product $\Seg{OB} · \Seg{OC}$ is therefore to be defined as $\Seg{OD}$, $D$~being obtained by constructing on~$OC$ a triangle similar to~$OAB$. In order to free this definition from ambiguity, it should be observed that on~$OC$ we can describe \emph{two} such triangles, $OCD$~and~$OCD'$. We choose that for which the angle~$COD$ is equal to~$AOB$ in sign as well as in magnitude. 
We say that the two triangles are then \emph{similar in the same sense}. If the polar coordinates of $B$~and~$C$ are $(\rho, \theta)$ and $(\sigma, \phi)$, so that \[ x = \rho\cos\theta,\quad y = \rho\sin\theta,\quad x' = \sigma\cos\phi,\quad y' = \sigma\sin\phi, \] then the polar coordinates of~$D$ are evidently $\rho\sigma$ and $\theta + \phi$. Hence \begin{alignat*}{2} X &= \rho\sigma\cos(\theta + \phi) &&= xx' - yy',\\ Y &= \rho\sigma\sin(\theta + \phi) &&= xy' + yx'. \end{alignat*} The required definition is therefore \[ [x, y] [x', y'] = [xx' - yy', xy' + yx']. \Tag{(6)} \] We observe (1)~that if $y = 0$, then $X = xx'$, $Y = xy'$, as we desired; (2)~that the right-hand side is not altered if we interchange $x$~and~$x'$, and $y$~and~$y'$, so that \[ [x, y] [x', y'] = [x', y'] [x, y]; \] and (3)~that \begin{multline*} \{[x, y] + [x', y']\} [x'', y''] = [x + x', y + y'] [x'', y'']\\ \begin{aligned}[t] &= [(x + x') x'' - (y + y') y'', (x + x') y'' + (y + y') x'']\\ &= [xx'' - yy'', xy'' + yx''] + [x'x'' - y'y'', x'y'' + y'x'']\\ &= [x, y] [x'', y''] + [x', y'] [x'', y'']. \end{aligned} \end{multline*} Similarly we can verify that all the equations at the end of \SecNo[§]{37} are satisfied. Thus the definition~\Eq{(6)} fulfils all the requirements which we made of it in \SecNo[§]{37}. \begin{Remark} \Par{Example.} Show directly from the geometrical definition given above that multiplication of displacements obeys the commutative and distributive laws. [Take the commutative law for example. The product $\Seg{OB} · \Seg{OC}$ is~$\Seg{OD}$ (\Fig{22}), $COD$~being similar to~$AOB$. To construct the product $\Seg{OC} · \Seg{OB}$ we \PageSep{78} should have to construct on~$OB$ a triangle~$BOD_{1}$ similar to~$AOC$; and so what we want to prove is that $D$~and~$D_{1}$ coincide, or that $BOD$~is similar to~$AOC$. This is an easy piece of elementary geometry.] \end{Remark} \Paragraph{39. 
Complex numbers.} Just as to a displacement~$[x]$ along~$OX$ correspond a point~$(x)$ and a real number~$x$, so to a displacement~$[x, y]$ in the plane correspond a point~$(x, y)$ and a \emph{pair of real numbers $x$,~$y$}. We shall find it convenient to denote this pair of real numbers $x$,~$y$ by the symbol \[ x + yi. \] The reason for the choice of this notation will appear later. For the present the reader must regard $x + yi$ as \emph{simply another way of writing $[x, y]$}. The expression $x + yi$ is called a \emph{complex number}. We proceed next to define \emph{equivalence}, \emph{addition}, and \emph{multiplication} of complex numbers. To every complex number corresponds a displacement. Two complex numbers are equivalent if the corresponding displacements are equivalent. The sum or product of two complex numbers is the complex number which corresponds to the sum or product of the two corresponding displacements. Thus \[ x + yi = x' + y'i, \Tag{(1)} \] if and only if $x = x'$, $y = y'$; \begin{gather*} (x + yi) + (x' + y'i) = (x + x') + (y + y')i; \Tag{(2)}\\ (x + yi) (x' + y'i) = xx' - yy' + (xy' + yx')i. \Tag{(3)} \end{gather*} In particular we have, as special cases of \Eq{(2)}~and~\Eq{(3)}, \begin{gather*} x + yi = (x + 0i) + (0 + yi),\\ (x + 0i) (x' + y'i) = xx' + xy'i; \end{gather*} and these equations suggest that there will be no danger of confusion if, when dealing with complex numbers, we write $x$~for $x + 0i$ and $yi$~for $0 + yi$, as we shall henceforth. Positive integral powers and polynomials of complex numbers are then defined as in ordinary algebra. Thus, by putting $x = x'$, $y = y'$ in~\Eq{(3)}, we obtain \[ (x + yi)^{2} = (x + yi) (x + yi) = x^{2} - y^{2} + 2xyi. 
\] \PageSep{79} The reader will easily verify for himself that addition and multiplication of complex numbers obey the laws of algebra expressed by the equations \begin{gather*} \DPchg{x + yi}{(x + yi)} + (x' + y'i) = (x' + y'i) + (x + yi),\\ \{(x + yi) + (x' + y'i)\} + (x'' + y''i) = (x + yi) + \{(x' + y'i) + (x'' + y''i)\},\\ (x + yi) (x' + y'i) = (x' + y'i) (x + yi),\\ (x + yi)\{(x' + y'i) + (x'' + y''i)\} = (x + yi)(x' + y'i) + (x + yi)(x'' + y''i),\\ \Squeeze{$\{(x + yi) + (x' + y'i)\}(x'' + y''i) = (x + yi)(x'' + y''i) + (x' + y'i)(x'' + y''i)$,}\\ (x + yi) \{(x' + y'i) (x'' + y''i)\} = \{(x + yi) (x' + y'i)\} (x'' + y''i), \end{gather*} the proofs of these equations being practically the same as those of the corresponding equations for the corresponding displacements. Subtraction and division of complex numbers are defined as in ordinary algebra. Thus we may define $(x + yi) - (x' + y'i)$ as \[ (x + yi) + \{- (x' + y'i)\} = x + yi + (-x' - y'i) = (x - x') + (y - y')i; \] or again, as the number $\xi + \eta i$ such that \[ (x' + y'i) + (\xi + \eta i) = x + yi, \] which leads to the same result. And $(x + yi)/(x' + y'i)$ is defined as being the complex number $\xi + \eta i$ such that \[ (x' + y'i) (\xi + \eta i) = x + yi, \] or \[ x' \xi - y' \eta + (x' \eta + y' \xi)i = x + yi, \] or \[ x' \xi - y' \eta = x,\quad x' \eta + y' \xi = y. \Tag{(4)} \] Solving these equations for $\xi$~and~$\eta$, we obtain \[ \xi = \frac{xx' + yy'}{x'^{2} + y'^{2}},\quad \eta = \frac{yx' - xy'}{x'^{2} + y'^{2}}. \] This solution fails if $x'$~and~$y'$ are both zero, \ie\ if $x' + y'i = 0$. Thus subtraction is always possible; division is always possible unless the divisor is zero. 
\PageSep{80} \begin{Remark} \Par{Examples.} \Item{(1)} From a geometrical point of view, the problem of the division of the displacement~$\Seg{OB}$ by~$\Seg{OC}$ is that of finding~$D$ so that the triangles $COB$,~$AOD$ are similar, and this is evidently possible (and the solution unique) unless $C$~coincides with~$0$, or $\Seg{OC} = 0$. %[Illustration: Fig. 23.] \Figure[2.5in]{23}{p080} \Item{(2)} The numbers $x + yi$, $x - yi$ are said to be \emph{conjugate}. Verify that \[ (x + yi)(x - yi) = x^{2} + y^{2}, \] so that the product of two conjugate numbers is real, and that %[** TN: Set on two lines in the original] \[ \frac{x + yi}{x' + y'i} = \frac{(x + yi)(x' - y'i)}{(x' + y'i)(x' - y'i)}\\ = \frac{xx' + yy' + (x'y - xy')i}{x'^{2} + y'^{2}}. \] \end{Remark} \Paragraph{40.} One most important property of real numbers is that known as \emph{the factor theorem}, which asserts that \emph{the product of two numbers cannot be zero unless one of the two is itself zero}. To prove that this is also true of complex numbers we put $x = 0$, $y = 0$ in the equations~\Eq{(4)} of the preceding section. Then \[ x'\xi - y'\eta = 0,\quad x'\eta + y'\xi = 0. \] These equations give $\xi = 0$, $\eta = 0$, \ie \[ \xi + \eta i = 0, \] unless $x' = 0$ and $y' = 0$, or $x' + y'i = 0$. Thus $x + yi$ cannot vanish unless either $x' + y'i$ or $\xi + \eta i$ vanishes. \Paragraph{41. The equation $i^{2} = -1$.} We agreed to simplify our notation by writing~$x$ instead of $x + 0i$ and $yi$~instead of $0 + yi$. The particular complex number~$1i$ we shall denote simply by~$i$. It is the number which corresponds to a unit displacement along~$OY$. Also \[ i^{2} = ii = (0 + 1i) (0 + 1i) = (0 · 0 - 1 · 1) + (0 · 1 + 1 · 0)i = -1. \] Similarly $(-i)^{2} = -1$. Thus the complex numbers $i$~and~$-i$ satisfy the equation $x^{2} = -1$. 
The reader will now easily satisfy himself that the upshot of the rules for addition and multiplication of complex numbers is this, that \emph{we operate with complex numbers in exactly the same way as with real numbers, treating the symbol~$i$ as itself a number, \PageSep{81} but replacing the product $ii = i^{2}$ by~$-1$ whenever it occurs}. Thus, for example, \begin{align*} (x + yi) (x' + y'i) &= xx' + xy'i + yx'i + yy'i^{2}\\ &= (xx' - yy') + (xy'+ yx')i. \end{align*} \Paragraph{42. The geometrical interpretation of multiplication by~$i$.} Since \[ (x + yi)i = -y + xi, \] it follows that if $x + yi$ corresponds to~$\Seg{OP}$, and $OQ$~is drawn equal to~$OP$ and so that $POQ$~is a positive right angle, then $(x + yi)i$ corresponds to~$\Seg{OQ}$. In other words, \emph{multiplication of a complex number by~$i$ turns the corresponding displacement through a right angle}. We might have developed the whole theory of complex numbers from this point of view. Starting with the ideas of $x$~as representing a displacement along~$OX$, and of $i$~as a symbol of operation equivalent to turning~$x$ through a right angle, we should have been led to regard~$yi$ as a displacement of magnitude~$y$ along~$OY$. It would then have been natural to define $x + yi$ as in \SecNo[§§]{37}~and~\SecNo{40}, and $(x + yi)i$ would have represented the displacement obtained by turning $x + yi$ through a right angle, \ie\ $-y + xi$. Finally, we should naturally have defined $(x + yi)x'$ as $xx' + yx'i$, $(x + yi)y'i$ as $-yy' + xy'i$, and $(x + yi) (x' + y'i)$ as the sum of these displacements, \ie\ as \[ xx' - yy' + (xy' + yx')i. \] \Paragraph{43. The equations $z^{2} + 1 = 0$, $az^{2} + 2bz + c = 0$.} There is no real number~$z$ such that $z^{2} + 1 = 0$; this is expressed by saying that the equation has \emph{no real roots}. But, as we have just seen, the two complex numbers $i$~and~$-i$ satisfy this equation. 
We express this by saying that the equation has \emph{the two complex roots} $i$~and~$-i$. Since $i$~satisfies $z^{2} = -1$, it is sometimes written in the form~$\sqrtp{-1}$. Complex numbers are sometimes called \emph{imaginary}.\footnote {The phrase `real number' was introduced as an antithesis to `imaginary number'.} The expression is by no means a happily chosen one, but it is firmly \PageSep{82} established and has to be accepted. It cannot, however, be too strongly impressed upon the reader that an `imaginary number' is no more `imaginary', in any ordinary sense of the word, than a `real' number; and that it is not a number at all, in the sense in which the `real' numbers are numbers, but, as should be clear from the preceding discussion, \emph{a pair of numbers $(x, y)$}, united symbolically, for purposes of technical convenience, in the form $x + yi$. Such a pair of numbers is no less `real' than any ordinary number such as~$\frac{1}{2}$, or than the paper on which this is printed, or than the Solar System. Thus \[ i = 0 + 1i \] stands for the pair of numbers $(0, 1)$, and may be represented geometrically by a point or by the displacement $[0, 1]$. And when we say that $i$~is a root of the equation $z^{2} + 1 = 0$, what we mean is simply that we have defined a method of combining such pairs of numbers (or displacements) which we call `multiplication', and which, when we so combine $(0, 1)$ with itself, gives the result~$(-1, 0)$. Now let us consider the more general equation \[ az^{2} + 2bz + c = 0, \] where $a$,~$b$,~$c$ are real numbers. If $b^{2} > ac$, the ordinary method of solution gives two real roots \[ \{-b ± \sqrtp{b^{2} - ac}\}/a. \] If $b^{2} < ac$, the equation has no real roots. 
It may be written in the form \[ \{z + (b/a)\}^{2} = -(ac - b^{2})/a^{2}, \] an equation which is evidently satisfied if we substitute for $z + (b/a)$ either of the complex numbers $±i\sqrtp{ac - b^{2}}/a$.\footnote {We shall sometimes write $x + iy$ instead of $x + yi$ for convenience in printing.} We express this by saying that the equation has \emph{the two complex roots} \[ \{-b ± i\sqrtp{ac - b^{2}}\}/a. \] If we agree as a matter of convention to say that when $b^{2} = ac$ (in which case the equation is satisfied by \emph{one} value of~$x$ only, viz.~$-b/a$), the equation has \emph{two equal roots}, we can say that \emph{a quadratic equation with real coefficients has two roots in all cases, either two distinct real roots, or two equal real roots, or two distinct complex roots}. \PageSep{83} The question is naturally suggested whether a quadratic equation may not, when complex roots are once admitted, have more than two roots. It is easy to see that this is not possible. Its impossibility may in fact be proved by precisely the same chain of reasoning as is used in elementary algebra to prove that an equation of the $n$th~degree cannot have more than $n$ real roots. Let us denote the complex number $x + yi$ by the single letter~$z$, a convention which we may express by writing $z = x + yi$. Let $f(z)$~denote any polynomial in~$z$, with real or complex coefficients. Then we prove in succession: \Item{(1)} that the remainder, when $f(z)$~is divided by~$z - a$, $a$~being any real or complex number, is~$f(a)$; \Item{(2)} that if $a$~is a root of the equation $f(z) = 0$, then $f(z)$~is divisible by~$z - a$; \Item{(3)} that if $f(z)$~is of the $n$th~degree, and $f(z) = 0$ has the $n$~roots $a_{1}$, $a_{2}$, \dots,~$a_{n}$, then \[ f(z) = A(z - a_{1}) (z - a_{2}) \dots (z - a_{n}), \] where $A$~is a constant, real or complex, in fact the coefficient of~$z^{n}$ in~$f(z)$. 
From the last result, and the theorem of \SecNo[§]{40}, it follows that $f(z)$~cannot have more than $n$ roots. We conclude that a quadratic equation with real coefficients has exactly two roots. We shall see later on that a similar theorem is true for an equation of any degree and with either real or complex coefficients: \emph{an equation of the $n$th~degree has exactly $n$~roots}. The only point in the proof which presents any difficulty is the first, viz.\ the proof that any equation must have \emph{at least one} root. This we must postpone for the present.\footnote {See \Ref{Appendix}{I}.} We may, however, at once call attention to one very interesting result of this theorem. In the theory of number we start from the positive integers and from the ideas of addition and multiplication and the converse operations of subtraction and division. We find that these operations are not always possible unless we admit new kinds of numbers. We can only attach a meaning to~$3 - 7$ if we admit \emph{negative} numbers, or to~$\frac{3}{7}$ if we admit \emph{rational fractions}. When we extend our list of arithmetical operations so as to include root extraction and the solution of equations, we find that some of \PageSep{84} them, such as that of the extraction of the square root of a number which (like~$2$) is not a perfect square, are not possible unless we widen our conception of a number, and admit the \emph{irrational} numbers of \Ref{Chap.}{I}. Others, such as the extraction of the square root of~$-1$, are not possible unless we go still further, and admit the \emph{complex} numbers of this chapter. And it would not be unnatural to suppose that, when we come to consider equations of higher degree, some might prove to be insoluble even by the aid of complex numbers, and that thus we might be led to the considerations of higher and higher types of, so to say, \emph{hyper-complex} numbers. 
The fact that the roots of any algebraical equation whatever are ordinary complex numbers shows that this is not the case. The application of any of the ordinary algebraical operations to complex numbers will yield only complex numbers. In technical language `the field of the complex numbers is closed for algebraical operations'. Before we pass on to other matters, let us add that all theorems of elementary algebra which are proved merely by the application of the rules of addition and multiplication are true \emph{whether the numbers which occur in them are real or complex}, since the rules referred to apply to complex as well as real numbers. For example, we know that, if $\alpha$~and~$\beta$ are the roots of \[ az^{2} + 2bz + c = 0, \] then \[ \alpha + \beta = -(2b/a),\quad \alpha\beta = (c/a). \] Similarly, if $\alpha$,~$\beta$,~$\gamma$ are the roots of \[ az^{3} + 3bz^{2} + 3cz + d = 0, \] then \[ \alpha + \beta + \gamma = -(3b/a),\quad \beta\gamma + \gamma\alpha + \alpha\beta = (3c/a),\quad \alpha\beta\gamma = -(d/a). \] All such theorems as these are true whether $a$,~$b$,~\dots\Add{,} $\alpha$,~$\beta$,~\dots\ are real or complex. \Paragraph{44. Argand's diagram.} Let $P$ (\Fig{24}) be the point $(x, y)$, $r$~the length~$OP$, and $\theta$~the angle~$XOP$, so that \[ x = r\cos\theta,\quad y = r\sin\theta,\quad r = \sqrtp{x^{2} + y^{2}},\quad \cos\theta : \sin\theta : 1 :: x : y : r. \] \PageSep{85} We denote the complex number $x + yi$ by~$z$, as in \SecNo[§]{43}, and we call~$z$ the \emph{complex variable}. %[Illustration: Fig. 24.] \Figure[2.5in]{24}{p085} We call~$P$ \emph{the point}~$z$, or the point corresponding to~$z$; $z$~the \emph{argument} of~$P$, $x$~the \emph{real part}, $y$~the \emph{imaginary part}, $r$~the \emph{modulus}, and $\theta$~the \emph{amplitude} of~$z$; and we write \[ %[** TN: Set on two lines in the original] x = \Real(z),\quad y = \Imag(z),\quad r = |z|,\quad \theta = \am z. 
\] When $y = 0$ we say that \emph{$z$~is real}, when $x = 0$ that \emph{$z$~is purely imaginary}. Two numbers $x + yi$, $x - yi$ which differ only in the signs of their imaginary parts, we call \emph{conjugate}. It will be observed that the sum~$2x$ of two conjugate numbers and their product $x^{2} + y^{2}$ are both real, that they have the same modulus $\sqrtp{x^{2} + y^{2}}$ and that their product is equal to the square of the modulus of either. The roots of a quadratic with real coefficients, for example, are conjugate, when not real. It must be observed that $\theta$ or $\am z$ is a many-valued function of $x$~and~$y$, having an infinity of values, which are angles differing by multiples of~$2\pi$.\footnote {It is evident that $|z|$~is identical with the polar coordinate~$r$ of~$P$, and that the other polar coordinate~$\theta$ is one value of~$\am z$. This value is not necessarily the \emph{principal} value, as defined below, for the polar coordinate of \SecNo[§]{22} lies between $0$~and~$2\pi$, and the principal value between $-\pi$~and~$\pi$.} A line originally lying along~$OX$ will, if turned through any of these angles, come to lie along~$OP$. We shall describe that one of these angles which lies between $-\pi$~and~$\pi$ as the \emph{principal value} of the amplitude of~$z$. This definition is unambiguous except when one of the values is~$\pi$, in which case $-\pi$~is also a value. In this case we must make some special provision as to which value is to be regarded as the principal value. In general, when we speak of the amplitude of~$z$ we shall, unless the contrary is stated, mean the principal value of the amplitude. \Fig{24} is usually known as Argand's diagram. \PageSep{86} \Paragraph{45. De~Moivre's Theorem.} The following statements follow immediately from the definitions of addition and multiplication. \Item{(1)} The real (or imaginary) part of the sum of two complex numbers is equal to the sum of their real (or imaginary) parts. 
\Item{(2)} The modulus of the product of two complex numbers is equal to the product of their moduli. \Item{(3)} The amplitude of the product of two complex numbers is either equal to the sum of their amplitudes, or differs from it by~$2\pi$. \begin{Remark} It should be observed that it is not always true that the principal value of~$\am(zz')$ is the sum of the principal values of $\am z$ and~$\am z'$. For example, if $z = z' = -1 + i$, then the principal values of the amplitudes of $z$~and~$z'$ are each~$\frac{3}{4}\pi$. But $zz' = -2i$, and the principal value of~$\am(zz')$ is~$-\frac{1}{2}\pi$ and not~$\frac{3}{2}\pi$. \end{Remark} The two last theorems may be expressed in the equation \[ %[** TN: Set on two lines in the original] r(\cos\theta + i\sin\theta) × \rho(\cos\phi + i\sin\phi) = r\rho\{\cos(\theta + \phi) + i\sin(\theta + \phi)\}, \] which may be proved at once by multiplying out and using the ordinary trigonometrical formulae for $\cos(\theta + \phi)$ and $\sin(\theta + \phi)$. More generally \begin{gather*} r_{1}(\cos\theta_{1} + i\sin\theta_{1}) × r_{2}(\cos\theta_{2} + i\sin\theta_{2}) × \dots × r_{n}(\cos\theta_{n} + i\sin\theta_{n})\\ = r_{1}r_{2} \dots r_{n} \{\cos(\theta_{1} + \theta_{2} + \dots + \theta_{n}) + i \sin(\theta_{1} + \theta_{2} + \dots + \theta_{n})\}. 
\end{gather*} A particularly interesting case is that in which \[ r_{1} = r_{2} = \dots = r_{n} = 1, \quad \theta_{1} = \theta_{2} = \dots = \theta_{n} = \theta\Add{.} \] We then obtain the equation \[ (\cos\theta + i\sin\theta)^{n} = \cos n\theta + i\sin n\theta, \] where $n$~is any positive integer: a result known as \emph{De~Moivre's Theorem}.\footnote {It will sometimes be convenient, for the sake of brevity, to denote $\cos\theta + i\sin\theta$ by~$\Cis\theta$: in this notation, suggested by Profs.\ Harkness and Morley, De~Moivre's theorem is expressed by the equation $(\Cis\theta)^{n} = \Cis n\theta$.} Again, if \[ z = r(\cos\theta + i\sin\theta) \] then \[ 1/z = (\cos\theta - i\sin\theta)/r. \] Thus the modulus of the reciprocal of~$z$ is the reciprocal of the modulus of~$z$, and the amplitude of the reciprocal is the negative of the amplitude of~$z$. We can now state the theorems for quotients which correspond to \Eq{(2)}~and~\Eq{(3)}. \PageSep{87} \Item{(4)} The modulus of the quotient of two complex numbers is equal to the quotient of their moduli. \Item{(5)} The amplitude of the quotient of two complex numbers either is equal to the difference of their amplitudes, or differs from it by~$2\pi$. Again \begin{align*} (\cos\theta + i\sin\theta)^{-n} &= (\cos\theta - i\sin\theta)^{n}\\ &= \{\cos(-\theta) + i\sin(-\theta)\}^{n}\\ &= \cos(-n\theta) + i\sin(-n\theta). \end{align*} Hence \emph{De Moivre's Theorem holds for all integral values of~$n$, positive or negative}. To the theorems (1)--(5) we may add the following theorem, which is also of very great importance. \Item{(6)} The modulus of the sum of any number of complex numbers is not greater than the sum of their moduli. %[Illustration: Fig. 25.] \Figure{25}{p087} Let $\Seg{OP}$, $\Seg{OP'}$,~\dots\ be the displacements corresponding to the various complex numbers. Draw $PQ$ equal and parallel to~$OP'$, $QR$~equal and parallel to~$OP''$, and so on. 
Finally we reach a point~$U$, such that \[ \Seg{OU} = \Seg{OP} + \Seg{OP'} + \Seg{OP''} + \dots. \] The length~$OU$ is the modulus of the sum of the complex numbers, whereas the sum of their moduli is the total length of the broken line $OPQR\dots U$, which is not less than~$OU$. A purely arithmetical proof of this theorem is outlined in \Exs{xxi}.~1. \PageSep{88} \Paragraph{46.} We add some theorems concerning rational functions of complex numbers. A \emph{rational function} of the complex variable~$z$ is defined exactly as is a rational function of a real variable~$x$, viz.\ as the quotient of two polynomials in~$z$. \begin{Theorem}[1.] Any rational function~$R(z)$ can be reduced to the form $X + Yi$, where $X$~and~$Y$ are rational functions of $x$~and~$y$ with real coefficients. \end{Theorem} In the first place it is evident that any polynomial $P(x + yi)$ can be reduced, in virtue of the definitions of addition and multiplication, to the form $A + Bi$, where $A$~and~$B$ are polynomials in $x$~and~$y$ with real coefficients. Similarly $Q(x + yi)$ can be reduced to the form $C + Di$. Hence \[ R(x + yi) = P(x + yi)/Q(x + yi) \] can be expressed in the form \begin{align*} (A + Bi)/(C + Di) &= (A + Bi) (C - Di)/(C + Di) (C - Di)\\ &= \frac{AC + BD}{C^{2} + D^{2}} + \frac{BC - AD}{C^{2} + D^{2}} i, \end{align*} which proves the theorem. \begin{Theorem}[2.] If $R(x + yi) = X + Yi$, $R$~denoting a rational function as before, but with \Emph{real} coefficients, then $R(x - yi) = X - Yi$. \end{Theorem} In the first place this is easily verified for a power $(x + yi)^{n}$ by actual expansion. It follows by addition that the theorem is true for any polynomial with real coefficients. Hence, in the notation used above, \[ R(x - yi) = \frac{A - Bi}{C - Di} = \frac{AC + BD}{C^{2} + D^{2}} - \frac{BC - AD}{C^{2} + D^{2}}i, \] the reduction being the same as before except that the sign of~$i$ is changed throughout. 
It is evident that results similar to those of Theorems 1~and~2 hold for functions of any number of complex variables. \begin{Theorem}[3.] The roots of an equation \[ a_{0}z^{n} + a_{1}z^{n-1} + \dots + a_{n} = 0, \] whose coefficients are real, may, in so far as they are not themselves real, be arranged in conjugate pairs. \end{Theorem} \PageSep{89} For it follows from Theorem~2 that if $x + yi$~is a root then so is~$x - yi$. A particular case of this theorem is the result (\SecNo[§]{43}) that the roots of a quadratic equation with real coefficients are either real or conjugate. This theorem is sometimes stated as follows: \emph{in an equation with real coefficients complex roots occur in conjugate pairs}. It should be compared with the result of \Exs{viii}.~7, which may be stated as follows: \emph{in an equation with rational coefficients irrational roots occur in conjugate pairs}.\footnote {The numbers $a + \sqrt{b}$, $a - \sqrt{b}$, where $a$,~$b$ are rational, are sometimes said to be `conjugate'.} \begin{Examples}{XXI.} \Item{1.} Prove theorem~(6) of \SecNo[§]{45} directly from the definitions and without the aid of geometrical considerations. [First, to prove that $|z + z'| \leq |z| + |z'|$ is to prove that \[ (x + x')^{2} + (y + y')^{2} \leq \{\sqrtp{x^{2} + y^{2}} + \sqrtp{x'^{2} + y'^{2}}\}^{2}. \] The theorem is then easily extended to the general case.] \Item{2.} The one and only case in which \[ |z| + |z'| + \dots = |z + z' + \dots|, \] is that in which the numbers $z$, $z'$,~\dots\ have all the same amplitude. Prove this both geometrically and analytically. \Item{3.} The modulus of the sum of any number of complex numbers is not less than the sum of their real (or imaginary) parts. \Item{4.} If the sum and product of two complex numbers are both real, then the two numbers must either be real or conjugate. 
\Item{5.} If \[ a + b\sqrt{2} + (c + d \sqrt{2})i = A + B\sqrt{2} + (C + D\sqrt{2})i, \] where $a$,~$b$,~$c$,~$d$, $A$,~$B$,~$C$,~$D$ are real rational numbers, then \[ a = A,\quad b = B,\quad c = C,\quad d = D. \] \Item{6.} Express the following numbers in the form $A + Bi$, where $A$~and~$B$ are real numbers: \[ (1 + i)^{2},\quad \left(\frac{1 + i}{1 - i}\right)^{2},\quad \left(\frac{1 - i}{1 + i}\right)^{2},\quad \frac{\lambda + \mu i}{\lambda - \mu i},\quad \left(\frac{\lambda + \mu i}{\lambda - \mu i}\right)^{2} - \left(\frac{\lambda - \mu i}{\lambda + \mu i}\right)^{2}, \] where $\lambda$~and~$\mu$ are real numbers. \Item{7.} Express the following functions of $z = x + yi$ in the form $X + Yi$, where $X$~and~$Y$ are real functions of $x$~and~$y$: $z^{2}$,~$z^{3}$,~$z^{n}$, $1/z$, $z + (1/z)$, $(\alpha + \beta z)/(\gamma + \delta z)$, where $\alpha$,~$\beta$,~$\gamma$,~$\delta$ are real numbers. \Item{8.} Find the moduli of the numbers and functions in the two preceding examples. \PageSep{90} \Item{9.} The two lines joining the points $z = a$, $z = b$ and $z = c$, $z = d$ will be perpendicular if \[ \am\left(\frac{a - b}{c - d}\right) = ±\tfrac{1}{2} \pi, \] \ie\ if $(a - b)/(c - d)$ is purely imaginary. What is the condition that the lines should be parallel? \Item{10.} The three angular points of a triangle are given by $z = \alpha$, $z = \beta$, $z = \gamma$, where $\alpha$,~$\beta$,~$\gamma$ are complex numbers. 
Establish the following propositions: \SubItem{(i)} \emph{the centre of gravity is given by $z = \frac{1}{3}(\alpha + \beta + \gamma)$}; %[** TN: Sole instance of circum-centre, keeping hyphenation] \SubItem{(ii)} \emph{the circum-centre is given by $|z - \alpha| = |z - \beta| = |z - \gamma|$}; \SubItem{(iii)} \emph{the three perpendiculars from the angular points on the opposite sides meet in a point given by} \[ \Re\left(\frac{z - \alpha}{\beta - \gamma}\right) = \Re\left(\frac{z - \beta}{\gamma - \alpha}\right) = \Re\left(\frac{z - \gamma}{\alpha - \beta}\right) = 0; \] \SubItem{(iv)} \emph{there is a point~$P$ inside the triangle such that \[ CBP = ACP = BAP = \omega, \] and} \[ \cot\omega = \cot A + \cot B + \cot C. \] [To prove~(iii) we observe that if $A$,~$B$,~$C$ are the vertices, and $P$~any point~$z$, then the condition that $AP$~should be perpendicular to~$BC$ is (Ex.~9) that $(z - \alpha)/(\beta - \gamma)$ should be purely imaginary, or that \[ \Re(z - \alpha) \Re(\beta - \gamma) + \Im(z - \alpha) \Im(\beta - \gamma) = 0. \] This equation, and the two similar equations obtained by permuting $\alpha$,~$\beta$,~$\gamma$ cyclically, are satisfied by the same value of~$z$, as appears from the fact that the sum of the three left-hand sides is zero. To prove~(iv), take $BC$~parallel to the positive direction of the axis of~$x$. Then\footnote {We suppose that as we go round the triangle in the direction~$ABC$ we leave it on our left.} \[ \gamma - \beta = a,\quad \alpha - \gamma = - b\Cis(-C),\quad \beta - \alpha = - c\Cis B. 
\] We have to determine $z$~and~$\omega$ from the equations \[ \frac{(z - \alpha)(\beta_{0} - \alpha_{0})} {(z_{0} - \alpha_{0})(\beta - \alpha)} = \frac{(z - \beta)(\gamma_{0} - \beta_{0})} {(z_{0} - \beta_{0})(\gamma - \beta)} = \frac{(z - \gamma)(\alpha_{0} - \gamma_{0})} {(z_{0} - \gamma_{0})(\alpha - \gamma)} = \Cis 2\omega, \] where $z_{0}$, $\alpha_{0}$,~$\beta_{0}$,~$\gamma_{0}$ denote the conjugates of $z$, $\alpha$,~$\beta$,~$\gamma$. Adding the numerators and denominators of the three equal fractions, and using the equation \[ i\cot\omega = (1 + \Cis 2\omega)/(1 - \Cis 2\omega), \] we find that \[ i\cot\omega = \frac{(\beta - \gamma)(\beta_{0} - \gamma_{0}) + (\gamma - \alpha)(\gamma_{0} - \alpha_{0}) + (\alpha - \beta)(\alpha_{0} - \beta_{0})} {\beta\gamma_{0} - \beta_{0}\gamma + \gamma\alpha_{0} - \gamma_{0}\alpha + \alpha\beta_{0} - \alpha_{0}\beta}. \] From this it is easily deduced that the value of~$\cot\omega$ is $(a^{2} + b^{2} +c^{2})/4\Delta$, where $\Delta$~is the area of the triangle; and this is equivalent to the result given. \PageSep{91} To determine~$z$, we multiply the numerators and denominators of the equal fractions by $(\gamma_{0} - \beta_{0})/(\beta - \alpha)$, $(\alpha_{0} - \gamma_{0})/(\gamma - \beta)$, $(\beta_{0} - \alpha_{0})/(\alpha - \gamma)$, and add to form a new fraction. It will be found that \[ z = \frac{a\alpha \Cis A + b\beta \Cis B + c\gamma \Cis C} {a\Cis A + b\Cis B + c\Cis C}.] 
\] \Item{11.} The two triangles whose vertices are the points $a$,~$b$,~$c$ and $x$,~$y$,~$z$ respectively will be similar if \[ \begin{vmatrix} 1 & 1 & 1\\ a & b & c \\ x & y & z \end{vmatrix} = 0 \] {\Loosen[The condition required is that $\Seg{AB}/\Seg{AC} = \Seg{XY}/\Seg{XZ}$ (large letters denoting the points whose arguments are the corresponding small letters), or $(b - a)/(c - a) = (y - x)/(z - x)$, which is the same as the condition given.]} \Item{12.} Deduce from the last example that if the points $x$,~$y$,~$z$ are collinear then we can find \emph{real} numbers $\alpha$,~$\beta$,~$\gamma$ such that $\alpha + \beta + \gamma = 0$ and $\alpha x + \beta y + \gamma z = 0$, and conversely (cf.\ \Exs{xx}.~4). [Use the fact that in this case the triangle formed by $x$,~$y$,~$z$ is similar to a certain line-triangle on the axis~$OX$, and apply the result of the last example.] \Item{13.} \Topic{The general linear equation with complex coefficients.} The equation $\alpha z + \beta = 0$ has the one solution $z = -(\beta/\alpha)$, unless $\alpha = 0$. If we put \[ \alpha = a + Ai,\quad \beta = b + Bi,\quad z = x + yi, \] and equate real and imaginary parts, we obtain two equations to determine the two real numbers $x$~and~$y$. The equation will have a real root if $y = 0$, which gives $ax + b = 0$, $Ax + B = 0$, and the condition that these equations should be consistent is~$aB - bA = 0$. \Item{14.} \Topic{The general quadratic equation with complex coefficients.} This equation is \[ (a + Ai)z^{2} + 2(b + Bi)z + (c + Ci) = 0. \] Unless $a$~and~$A$ are both zero we can divide through by~$a + iA$. Hence we may consider \[ z^{2} + 2(b + Bi)z + (c + Ci) = 0 \Tag{(1)} \] as the standard form of our equation. Putting $z = x + yi$ and equating real and imaginary parts, we obtain a pair of simultaneous equations for $x$~and~$y$, viz. \[ x^{2} - y^{2} + 2(bx - By) + c = 0,\quad 2xy + 2(by + Bx) + C = 0. 
\] If we put \[ x + b = \xi,\quad y + B = \eta,\quad b^{2} - B^{2} - c = h,\quad 2bB - C = k, \] these equations become \[ \xi^{2} - \eta^{2} = h,\quad 2\xi\eta = k. \] \PageSep{92} Squaring and adding we obtain \[ \xi^{2} + \eta^{2} = \sqrtp{h^{2} + k^{2}},\quad \xi = ±\sqrtbr{\tfrac{1}{2}\{\sqrtp{h^{2} + k^{2}} + h\}},\quad \eta = ±\sqrtbr{\tfrac{1}{2}\{\sqrtp{h^{2} + k^{2}} - h\}}. \] We must choose the signs so that $\xi\eta$~has the sign of~$k$: \ie\ if $k$~is positive we must take like signs, if $k$~is negative unlike signs. \Par{Conditions for equal roots.} The two roots can only be equal if both the square roots above vanish, \ie\ if $h = 0$, $k = 0$, or if $c = b^{2} - B^{2}$, $C = 2bB$. These conditions are equivalent to the single condition $c + Ci = (b + Bi)^{2}$, which obviously expresses the fact that the left-hand side of~\Eq{(1)} is a perfect square. \Par{Condition for a real root.} If $x^{2} + 2(b + Bi) x + (c + Ci) = 0$, where $x$~is real, then $x^{2} + 2bx + c = 0$, $2Bx + C = 0$. Eliminating~$x$ we find that the required condition is \[ C^{2} - 4bBC + 4cB^{2} = 0. \] \Par{Condition for a purely imaginary root.} This is easily found to be \[ C^{2} - 4bBC - 4b^{2}c = 0. \] \Par{Conditions for a pair of conjugate complex roots.} Since the sum and the product of two conjugate complex numbers are both real, $b + Bi$ and $c + Ci$ must both be real, \ie\ $B = 0$, $C = 0$. Thus the equation~\Eq{(1)} can have a pair of conjugate complex roots only if its coefficients are real. The reader should verify this conclusion by means of the explicit expressions of the roots. Moreover, if $b^{2}\geq c$, the roots will be real even in this case. Hence for a pair of conjugate roots we must have $B = 0$, $C = 0$, $b^{2} < c$. 
\Item{15.} \Topic{The Cubic equation.} Consider the cubic equation \[ z^{3} + 3Hz + G = 0, \] where $G$~and~$H$ are complex numbers, it being given that the equation has (\ia)~a real root, (\ib)~a purely imaginary root, (\ic)~a pair of conjugate roots. If $H = \lambda + \mu i$, $G = \rho + \sigma i$, we arrive at the following conclusions. \Par{\Item{(\ia)} Conditions for a real root.} If $\mu$~is not zero, then the real root is~$-\sigma/3\mu$, and $\sigma^{3} + 27\lambda\mu^{2}\sigma - 27\mu^{3}\rho = 0$. On the other hand, if $\mu = 0$ then we must also have $\sigma = 0$, so that the coefficients of the equation are real. In this case there may be three real roots. \Par{\Item{(\ib)} Conditions for a purely imaginary root.} If $\mu$~is not zero then the purely imaginary root is~$(\rho/3\mu)i$, and $\rho^{3} - 27\lambda\mu^{2}\rho - 27\mu^{3}\sigma = 0$. If $\mu = 0$ then also $\rho = 0$, and the root is~$yi$, where $y$~is given by the equation $y^{3} - 3\lambda y - \sigma = 0$, which has real coefficients. In this case there may be three purely imaginary roots. \Par{\Item{(\ic)} Conditions for a pair of conjugate complex roots.} Let these be $x + yi$ and $x - yi$. Then since the sum of the three roots is zero the third root must be~$-2x$. From the relations between the coefficients and the roots of an equation we deduce \[ y^{2} - 3x^{2} = 3H,\quad 2x(x^{2} + y^{2}) = G. \] Hence $G$~and~$H$ must both be real. In each case we can either find a root (in which case the equation can be reduced to a quadratic by dividing by a known factor) or we can reduce the solution of the equation to the solution of a cubic equation with real coefficients. \PageSep{93} \Item{16.} The cubic equation $x^{3} + a_{1}x^{2} + a_{2}x + a_{3} = 0$, where $a_{1} = A_{1} + A_{1}'i$,~\dots, has a pair of conjugate complex roots. Prove that the remaining root is $-A_{1}'a_{3}/A_{3}'$, unless $A_{3}' = 0$. Examine the case in which $A_{3}' = 0$. 
\Item{17.} Prove that if $z^{3} + 3Hz + G = 0$ has two complex roots then the equation \[ 8\alpha^{3} + 6\alpha H - G = 0 \] has one real root which is the real part~$\alpha$ of the complex roots of the original equation; and show that $\alpha$~has the same sign as~$G$. \Item{18.} An equation of any order with complex coefficients will in general have no real roots nor pairs of conjugate complex roots. How many conditions must be satisfied by the coefficients in order that the equation should have (\ia)~a real root, (\ib)~a pair of conjugate roots? \Item{19.} \Topic{Coaxal circles.} In \Fig{26}, let $a$,~$b$,~$z$ be the arguments of $A$,~$B$,~$P$. Then \[ \am\frac{z - b}{z - a} = APB, \] if the principal value of the amplitude is chosen. If the two circles shown in the figure are equal, and $z'$,~$z_{1}$,~$z_{1}'$ are the arguments of $P'$,~$P_{1}$,~$P_{1}'$, and $APB = \theta$, it is easy to see that \[ \am\frac{z' - b}{z' - a} = \pi - \theta,\quad \am\frac{z_{1} - b}{z_{1} - a} = -\theta, \] and \[ \am\frac{z_{1}' - b}{z_{1}' - a} = -\pi + \theta. \] The locus defined by the equation \[ \am\frac{z - b}{z - a} = \theta, \] where $\theta$~is constant, is the arc~$APB$. By writing $\pi - \theta$,~$-\theta$,~$-\pi + \theta$ for~$\theta$, we obtain the other three arcs shown. %[Illustration: Fig. 26.] \Figure[2.25in]{26}{p093} The system of equations obtained by supposing that $\theta$~is a parameter, varying from~$-\pi$ to~$+\pi$, represents \emph{the system of circles which can be drawn through the points $A$,~$B$}. It should however be observed that each circle has to be divided into two parts to which correspond different values of~$\theta$. \Item{20.} Now let us consider the equation \[ \left|\frac{z - b}{z - a}\right| = \lambda, \Tag{(1)} \] where $\lambda$~is a constant. Let $K$ be the point in which the tangent to the circle~$ABP$ at~$P$ meets~$AB$. Then the triangles $KPA$,~$KBP$ are similar, and so \[ AP/PB = PK/BK = KA/KP = \lambda. 
\] \PageSep{94} Hence $KA/KB = \lambda^{2}$, and therefore $K$~is a fixed point for all positions of~$P$ which satisfy the equation~\Eq{(1)}. Also $KP^{2} = KA · KB$, and so is constant. Hence \emph{the locus of~$P$ is a circle whose centre is~$K$}. The system of equations obtained by varying~$\lambda$ represents a system of circles, and every circle of this system cuts at right angles every circle of the system of Ex.~19. The system of Ex.~19 is called \emph{a system of coaxal circles of the common point kind}. The system of Ex.~20 is called \emph{a system of coaxal circles of the limiting point kind}, $A$~and~$B$ being the \emph{limiting points} of the system. If $\lambda$~is very large or very small then the circle is a very small circle containing $A$~or~$B$ in its interior. \Item{21.} \Topic{Bilinear Transformations.} Consider the equation \[ z = Z + a, \Tag{(1)} \] where $z = x + yi$ and $Z = X + Yi$ are two complex variables which we may suppose to be represented in two planes $xoy$,~$XOY$. To every value of~$z$ corresponds one of~$Z$, and conversely. If $a = \alpha + \beta i$ then \[ x = X + \alpha,\quad y = Y + \beta, \] and to the point $(x, y)$ corresponds the point $(X, Y)$. If $(x, y)$ describes a curve of any kind in its plane, $(X, Y)$ describes a curve in its plane. Thus to any figure in one plane corresponds a figure in the other. A passage of this kind from a figure in the plane~$xoy$ to a figure in the plane~$XOY$ by means of a relation such as~\Eq{(1)} between $z$~and~$Z$ is called a \emph{transformation}. In this particular case the relation between corresponding figures is very easily defined. The $(X, Y)$ figure is the same in size, shape, and orientation as the $(x, y)$ figure, but is shifted a distance~$\alpha$ to the left, and a distance~$\beta$ downwards. Such a transformation is called a \emph{translation}. Now consider the equation \[ z = \rho Z, \Tag{(2)} \] where $\rho$~is real. This gives $x = \rho X$, $y = \rho Y$. 
The two figures are similar and similarly situated about their respective origins, but the scale of the $(x, y)$ figure is $\rho$~times that of the $(X, Y)$ figure. Such a transformation is called a \emph{magnification}. Finally consider the equation \[ z = (\cos\phi + i \sin\phi)Z. \Tag{(3)} \] It is clear that $|z| = |Z|$ and that one value of $\am z$ is $\am Z + \phi$, and that the two figures differ only in that the $(x, y)$ figure is the $(X, Y)$ figure turned about the origin through an angle~$\phi$ in the positive direction. Such a transformation is called a \emph{rotation}. The general linear transformation \[ z = aZ + b \Tag{(4)} \] \PageSep{95} is a combination of the three transformations \Eq{(1)},~\Eq{(2)},~\Eq{(3)}. For, if $|a| = \rho$ and $\am a = \phi$, we can replace~\Eq{(4)} by the three equations \[ z = z' + b,\quad z' = \rho Z',\quad Z' = (\cos\phi + i\sin\phi)Z. \] Thus \emph{the general linear transformation is equivalent to the combination of a translation, a magnification, and a rotation}. Next let us consider the transformation \[ z = 1/Z. \Tag{(5)} \] If $|Z| = R$ and $\am Z = \Theta$, then $|z| = 1/R$ and $\am z = -\Theta$, and to pass from the $(x, y)$ figure to the $(X, Y)$ figure we invert the former with respect to~$o$, with unit radius of inversion, and then construct the image of the new figure in the axis~$ox$ (\ie\ the symmetrical figure on the other side of~$ox$). Finally consider the transformation \[ z = \frac{aZ + b}{cZ + d}. \Tag{(6)} \] This is equivalent to the combination of the transformations \[ z = (a/c) + (bc - ad)(z'/c),\quad z' = 1/Z',\quad Z' = cZ + d, \] \ie\ to a certain combination of transformations of the types already considered. The transformation~\Eq{(6)} is called the \emph{general bilinear transformation}. Solving for~$Z$ we obtain \[ Z = \frac{dz - b}{cz - a}. 
\] The general bilinear transformation is the most general type of transformation for which one and only one value of~$z$ corresponds to each value of~$Z$, and conversely. \Par{\Item{22.} The general bilinear transformation transforms circles into circles.} This may be proved in a variety of ways. We may assume the well-known theorem in pure geometry, that inversion transforms circles into circles (which may of course in particular cases be straight lines). Or we may use the results of Exs.\ 19~and~20. If, \eg, the $(x, y)$ circle is \[ |(z - \sigma)/(z - \rho)| = \lambda, \] and we substitute for~$z$ in terms of~$Z$, we obtain \[ |(Z - \sigma')/(Z - \rho')| = \lambda', \] where \[ \sigma' = -\frac{b - \sigma d}{a - \sigma c},\quad \rho' = -\frac{b - \rho d}{a - \rho c},\quad \lambda' = \left|\frac{a - \rho c}{a - \sigma c}\right|\lambda. \] \Item{23.} Consider the transformations $z = 1/Z$, $z = (1 + Z)/(1 - Z)$, and draw the $(X, Y)$ curves which correspond to (1)~circles whose centre is the origin, (2)~straight lines through the origin. \PageSep{96} \Item{24.} The condition that the transformation $z = (aZ + b)/(cZ + d)$ should make the circle $x^{2} + y^{2} = 1$ correspond to a straight line in the $(X, Y)$ plane is $|a| = |c|$. \Item{25.} \Topic{Cross ratios.} The cross ratio $\DPmod{(z_{1}z_{2}, z_{3}z_{4})}{(z_{1}, z_{2}; z_{3}, z_{4})}$ is defined to be \[ \frac{(z_{1} - z_{3}) (z_{2} - z_{4})}{(z_{1} - z_{4}) (z_{2} - z_{3})}. \] If the four points $z_{1}$,~$z_{2}$,~$z_{3}$,~$z_{4}$ are on the same line, this definition agrees with that adopted in elementary geometry. There are $24$~cross ratios which can be formed from $z_{1}$,~$z_{2}$,~$z_{3}$,~$z_{4}$ by permuting the suffixes. These consist of six groups of four equal cross ratios. If one ratio is~$\lambda$, then the six distinct cross ratios are $\lambda$, $1 - \lambda$, $1/\lambda$, $1/(1 - \lambda)$, $(\lambda - 1)/\lambda$, $\lambda/(\lambda - 1)$. 
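Thus for the collinear points $z_{1} = 0$, $z_{2} = 1$, $z_{3} = 2$, $z_{4} = 3$ the definition gives
\[ \frac{(0 - 2)(1 - 3)}{(0 - 3)(1 - 2)} = \frac{4}{3}, \]
in agreement with the elementary definition; and the six distinct cross ratios of these points are $\frac{4}{3}$, $-\frac{1}{3}$, $\frac{3}{4}$, $-3$, $\frac{1}{4}$,~$4$.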
The four points are said to be \emph{harmonic} or \emph{harmonically related} if any one of these is equal to~$-1$. In this case the six ratios are $-1$, $2$, $-1$, $\frac{1}{2}$, $2$,~$\frac{1}{2}$. \emph{If any cross ratio is real then all are real and the four points lie on a circle}. For in this case \[ \am\frac{(z_{1} - z_{3}) (z_{2} - z_{4})}{(z_{1} - z_{4}) (z_{2} - z_{3})} \] must have one of the three values $-\pi$,~$0$,~$\pi$, so that $\am\{(z_{1} - z_{3})/(z_{1} - z_{4})\}$ and $\am\{(z_{2} - z_{3})/(z_{2} - z_{4})\}$ must either be equal or differ by~$\pi$ (cf.~Ex.~19). If $\DPmod{(z_{1}z_{2}, z_{3}z_{4})}{(z_{1}, z_{2}; z_{3}, z_{4})} = - 1$, we have the two equations \[ \am\frac{z_{1} - z_{3}}{z_{1} - z_{4}} = ±\pi + \am\frac{z_{2} - z_{3}}{z_{2} - z_{4}},\quad \left|\frac{z_{1} - z_{3}}{z_{1} - z_{4}}\right| = \left|\frac{z_{2} - z_{3}}{z_{2} - z_{4}}\right|. \] The four points $A_{1}$, $A_{2}$, $A_{3}$,~$A_{4}$ lie on a circle, $A_{1}$~and~$A_{2}$ being separated by $A_{3}$~and~$A_{4}$. Also $A_{1}A_{3}/A_{1}A_{4} = A_{2}A_{3}/A_{2}A_{4}$. Let $O$~be the middle point of~$A_{3}A_{4}$. The equation \[ \frac{(z_{1} - z_{3}) (z_{2} - z_{4})}{(z_{1} - z_{4}) (z_{2} - z_{3})} = -1 \] may be put in the form \[ (z_{1} + z_{2}) (z_{3} + z_{4}) = 2(z_{1}z_{2} + z_{3}z_{4}), \] or, what is the same thing, \[ \{z_{1} - \tfrac{1}{2}(z_{3} + z_{4})\} \{z_{2} - \tfrac{1}{2}(z_{3} + z_{4})\} = \{\tfrac{1}{2}(z_{3} - z_{4})\}^{2}. \] But this is equivalent to $\Seg{OA_{1}} · \Seg{OA_{2}} = \Seg{OA_{3}}^{2} = \Seg{OA_{4}}^{2}$. Hence $OA_{1}$ and $OA_{2}$ make equal angles with~$A_{3}A_{4}$, and $OA_{1} · OA_{2} = OA_{3}^{2} = OA_{4}^{2}$. It will be observed that the relation between the pairs $A_{1}$,~$A_{2}$ and $A_{3}$,~$A_{4}$ is symmetrical. Hence, if $O'$~is the middle point of~$A_{1}A_{2}$, $O'A_{3}$~and~$O'A_{4}$ are equally inclined to~$A_{1}A_{2}$, and $O'A_{3} · O'A_{4} = O'A_{1}^{2} = O'A_{2}^{2}$. 
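These results may be illustrated by the points $z_{1} = i$, $z_{2} = -i$, $z_{3} = 1$, $z_{4} = -1$, which lie on the unit circle. Here
\[ \frac{(z_{1} - z_{3})(z_{2} - z_{4})}{(z_{1} - z_{4})(z_{2} - z_{3})} = \frac{(i - 1)(1 - i)}{(i + 1)(-i - 1)} = \frac{2i}{-2i} = -1, \]
so that the four points are harmonically related. In this case $O$~is the origin, $OA_{1}$~and~$OA_{2}$ are equally inclined to~$A_{3}A_{4}$ (the real axis), and $OA_{1} · OA_{2} = 1 = OA_{3}^{2} = OA_{4}^{2}$, in agreement with the general result.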
\Item{26.} If the points $A_{1}$,~$A_{2}$ are given by $az^{2} + 2bz + c = 0$, and the points $A_{3}$,~$A_{4}$ by $a'z^{2} + 2b'z + c' = 0$, and $O$~is the middle point of~$A_{3}A_{4}$, and $ac' + a'c - 2bb' = 0$, then $OA_{1}$,~$OA_{2}$ are equally inclined to~$A_{3}A_{4}$ and $OA_{1} · OA_{2} = OA_{3}^{2} = OA_{4}^{2}$. \MathTrip{1901.} \PageSep{97} \Item{27.} $AB$,~$CD$ are two intersecting lines in Argand's diagram, and $P$,~$Q$ their middle points. Prove that, if $AB$~bisects the angle~$CPD$ and $PA^{2} = PB^{2} = PC · PD$, then $CD$~bisects the angle~$AQB$ and $QC^{2} = QD^{2} = QA · QB$. \MathTrip{1909.} \Item{28.} \Topic{The condition that four points should lie on a circle.} A sufficient condition is that one (and therefore all) of the cross ratios should be real (Ex.~25); this condition is also necessary. Another form of the condition is that it should be possible to choose real numbers $\alpha$,~$\beta$,~$\gamma$ such that \[ \begin{vmatrix} 1 & 1 & 1\\ \alpha & \beta & \gamma\\ z_{1}z_{4} + z_{2}z_{3} & z_{2}z_{4} + z_{3}z_{1} & z_{3}z_{4} + z_{1}z_{2} \end{vmatrix} = 0. \] {\Loosen[To prove this we observe that the transformation $Z = 1/(z - z_{4})$ is equivalent to an inversion with respect to the point~$z_{4}$, coupled with a certain reflexion (Ex.~21). If $z_{1}$,~$z_{2}$,~$z_{3}$ lie on a circle through~$z_{4}$, the corresponding points $Z_{1} = 1/(z_{1} - z_{4})$, $Z_{2} = 1/(z_{2} - z_{4})$, $Z_{3} = 1/(z_{3} - z_{4})$ lie on a straight line. 
Hence (Ex.~12) we can find real numbers $\alpha'$,~$\beta'$,~$\gamma'$ such that $\alpha' + \beta' + \gamma' = 0$ and $\alpha'/(z_{1} - z_{4}) + \beta'/(z_{2} - z_{4}) + \gamma'/(z_{3} - z_{4}) = 0$, and it is easy to prove that this is equivalent to the given condition.]} \Item{29.} Prove the following analogue of De~Moivre's Theorem for real numbers: if $\phi_{1}$,~$\phi_{2}$, $\phi_{3}$,~\dots\ is a series of positive acute angles such that \begin{alignat*}{2} \tan\phi_{m+1} &= \tan\phi_{m} \sec\phi_{1} &&+ \sec\phi_{m} \tan\phi_{1},\\ \intertext{then} \tan\phi_{m+n} &= \tan\phi_{m} \sec\phi_{n} &&+ \sec\phi_{m} \tan\phi_{n},\\ \sec\phi_{m+n} &= \sec\phi_{m} \sec\phi_{n} &&+ \tan\phi_{m} \tan\phi_{n}, \end{alignat*} and \[ \tan\phi_{m} + \sec\phi_{m} = (\tan\phi_{1} + \sec\phi_{1})^{m}. \] [Use the method of mathematical induction.] \Item{30.} \Topic{The transformation $z = Z^{m}$.} In this case $r = R^{m}$, and $\theta$~and~$m\Theta$ differ by a multiple of~$2\pi$. If $Z$~describes a circle round the origin then $z$~describes a circle round the origin $m$~times. The whole $(x, y)$ plane corresponds to any one of $m$~sectors in the $(X, Y)$ plane, each of angle~$2\pi/m$. To each point in the $(x, y)$ plane correspond $m$~points in the $(X, Y)$ plane. \Item{31.} \Topic{Complex functions of a real variable.} If $f(t)$,~$\phi(t)$ are two real functions of a real variable~$t$ defined for a certain range of values of~$t$, we call \[ z = f(t) + i\phi(t) \Tag{(1)} \] a complex function of~$t$. We can represent it graphically by drawing the curve \[ x = f(t),\quad y = \phi(t); \] \PageSep{98} the equation of the curve may be obtained by eliminating~$t$ between these equations. If $z$~is a polynomial in~$t$, or rational function of~$t$, with complex coefficients, we can express it in the form~\Eq{(1)} and so determine the curve represented by the function. \SubItem{(i)} Let \[ z = a + (b - a)t, \] where $a$~and~$b$ are complex numbers. 
If $a = \alpha + \alpha' i$, $b = \beta + \beta' i$, then \[ x = \alpha + (\beta - \alpha)t,\quad y = \alpha' + (\beta' - \alpha')t. \] The curve is the straight line joining the points $z = a$ and $z = b$. The segment between the points corresponds to the range of values of~$t$ from~$0$ to~$1$. Find the values of~$t$ which correspond to the two produced segments of the line. \SubItem{(ii)} If \[ z = c + \rho\left(\frac{1 + ti}{1 - ti}\right), \] where $\rho$~is positive, then the curve is the circle of centre~$c$ and radius~$\rho$. As $t$~varies through all real values, $z$~describes the circle once. \SubItem{(iii)} In general the equation $z = (a + bt)/(c + dt)$ represents a circle. This can be proved by calculating $x$~and~$y$ and eliminating: but this process is rather cumbrous. A simpler method is obtained by using the result of Ex.~22. Let $z = (a + bZ)/(c + dZ)$, $Z = t$. As $t$~varies, $Z$~describes a straight line, viz.\ the axis of~$X$. Hence $z$~describes a circle. \SubItem{(iv)} The equation \[ z = a + 2bt + ct^{2} \] represents a parabola generally, a straight line if $b/c$~is real. \SubItem{(v)} The equation $z = (a + 2bt + ct^{2})/(\alpha + 2\beta t + \gamma t^{2})$, where $\alpha$,~$\beta$,~$\gamma$ are real, represents a conic section. [Eliminate~$t$ from \[ x = (A + 2Bt + Ct^{2})/(\alpha + 2\beta t + \gamma t^{2}),\quad y = (A' + 2B't + C't^{2})/(\alpha + 2\beta t + \gamma t^{2}), \] where $A + A'i = a$, $B + B'i = b$, $C + C'i = c$.] \end{Examples} \Paragraph{47. Roots of complex numbers.} We have not, up to the present, attributed any meaning to symbols such as $\sqrt[n]{a}$,~$a^{m/n}$, when $a$~is a complex number, and $m$~and~$n$ integers. It is, however, natural to adopt the definitions which are given in elementary algebra for real values of~$a$.
Thus we define~$\sqrt[n]{a}$ or~$a^{1/n}$, where $n$~is a positive integer, as a number~$z$ which satisfies the equation $z^{n} = a$; and $a^{m/n}$, where $m$~is an integer, as~$(a^{1/n})^{m}$. These definitions do not prejudge the question as to whether there are or are not more than one (or any) roots of the equation. \Paragraph{48. Solution of the equation $z^{n} = a$.} Let \[ a = \rho(\cos\phi + i\sin\phi), \] where $\rho$~is positive and $\phi$~is an angle such that $-\pi < \phi \leq \pi$. If \PageSep{99} we put $z = r(\cos\theta + i\sin\theta)$, the equation takes the form \[ r^{n}(\cos n\theta + i\sin n\theta) = \rho(\cos\phi + i \sin\phi); \] so that \[ r^{n} = \rho,\quad \cos n\theta = \cos\phi,\quad \sin n\theta = \sin\phi. \Tag{(1)} \] The only possible value of~$r$ is~$\sqrt[n]{\rho}$, the ordinary arithmetical $n$th~root of~$\rho$; and in order that the last two equations should be satisfied it is necessary and sufficient that $n\theta = \phi + 2k\pi$, where $k$~is an integer, or \[ \theta = (\phi + 2k\pi)/n. \] If $k = pn + q$, where $p$~and~$q$ are integers, and $0 \leq q < n$, the value of~$\theta$ is~$2p\pi + (\phi + 2q\pi)/n$, and in this the value of~$p$ is a matter of indifference. Hence \emph{the equation \[ z^{n} = a = \rho(\cos\phi + i\sin\phi) \] has $n$~roots and $n$~only, given by $z = r(\cos\theta + i\sin\theta)$, where} \[ r = \sqrt[n]{\rho},\quad \theta = (\phi + 2q\pi)/n,\quad (q = 0,\ 1,\ 2,\ \dots\Add{,} n - 1). \] That these $n$~roots are in reality all distinct is easily seen by plotting them on Argand's diagram. The particular root \[ \sqrt[n]{\rho}\{\cos(\phi/n) + i\sin(\phi/n)\} \] is called the \emph{principal value} of~$\sqrt[n]{a}$. The case in which $a = 1$, $\rho = 1$, $\phi = 0$ is of particular interest. The $n$~roots of the equation $x^{n} = 1$ are \[ \cos(2q\pi/n) + i\sin(2q\pi/n),\quad (q = 0,\ 1,\ \dots\Add{,} n - 1). \] These numbers are called the $n$th~roots of unity; the principal value is unity itself. 
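Thus the equation $z^{3} = 8i$ has $\rho = 8$, $\phi = \frac{1}{2}\pi$, and its three roots are
\[ 2\{\cos(\pi/6) + i\sin(\pi/6)\} = \sqrt{3} + i,\quad 2\{\cos(5\pi/6) + i\sin(5\pi/6)\} = -\sqrt{3} + i,\quad 2\{\cos(3\pi/2) + i\sin(3\pi/2)\} = -2i, \]
of which the first is the principal value of~$\sqrt[3]{8i}$. The reader may verify directly that $(\sqrt{3} + i)^{3} = 8i$.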
If we write $\omega_{n}$ for $\cos(2\pi/n) + i\sin(2\pi/n)$, we see that the $n$th~roots of unity are \[ 1,\quad \omega_{n},\quad \omega_{n}^{2},\ \dots,\quad \omega_{n}^{n-1}. \] \begin{Examples}{XXII.} \Item{1.} The two square roots of~$1$ are $1$,~$-1$; the three cube roots are $1$, $\frac{1}{2}(-1 + i\sqrt{3})$, $\frac{1}{2}(-1 - i\sqrt{3})$; the four fourth roots are $1$, $i$, $-1$, $-i$; and the five fifth roots are \begin{alignat*}{4} 1,\quad &\tfrac{1}{4} \Bigl[ &&\sqrt{5} - 1 + i\sqrtb{10 + 2\sqrt{5}}\Bigr],\quad && \tfrac{1}{4} \Bigl[-&&\sqrt{5} - 1 + i\sqrtb{10 - 2\sqrt{5}}\Bigr],\\ &\tfrac{1}{4} \Bigl[-&&\sqrt{5} - 1 - i\sqrtb{10 - 2\sqrt{5}}\Bigr],\quad && \tfrac{1}{4} \Bigl[ &&\sqrt{5} - 1 - i\sqrtb{10 + 2\sqrt{5}}\Bigr]. \end{alignat*} \Item{2.} Prove that \[ 1 + \omega_{n} + \omega_{n}^{2} + \dots + \omega_{n}^{n-1} = 0. \] \Item{3.} Prove that \[ (x + y\omega_{3} + z\omega_{3}^{2}) (x + y\omega_{3}^{2} + z\omega_{3}) = x^{2} + y^{2} + z^{2} - yz - zx - xy. \] \Item{4.} The $n$th~roots of~$a$ are the products of the $n$th~roots of unity by the principal value of~$\sqrt[n]{a}$. \PageSep{100} \Item{5.} It follows from \Exs{xxi}.~14 that the roots of \[ z^{2} = \alpha + \beta i \] are \[ ± \sqrtbr{\tfrac{1}{2} \{\sqrtp{\alpha^{2} + \beta^{2}} + \alpha\}} ± i\sqrtbr{\tfrac{1}{2} \{\sqrtp{\alpha^{2} + \beta^{2}} - \alpha\}}, \] like or unlike signs being chosen according as $\beta$~is positive or negative. Show that this result agrees with the result of \SecNo[§]{48}. \Item{6.} Show that $(x^{2m} - a^{2m})/(x^{2} - a^{2})$ is equal to \[ \Bigl(x^{2} - 2ax\cos\frac{\pi}{m} + a^{2}\Bigr) \Bigl(x^{2} - 2ax\cos\frac{2\pi}{m} + a^{2}\Bigr) \dots \Bigl(x^{2} - 2ax\cos\frac{(m - 1)\pi}{m} + a^{2}\Bigr). \] [The factors of $x^{2m} - a^{2m}$ are \[ (x - a),\quad (x - a\omega_{2m}),\quad (x - a\omega_{2m}^{2}),\ \dots\quad (x - a\omega_{2m}^{2m-1}). \] The factor $x - a\omega_{2m}^{m}$ is $x + a$. 
The factors $(x - a\omega_{2m}^{s})$, $(x - a\omega_{2m}^{2m-s})$ taken together give a factor $x^{2} - 2ax \cos(s\pi/m) + a^{2}$.] \Item{7.} Resolve $x^{2m+1} - a^{2m+1}$, $x^{2m} + a^{2m}$, and $x^{2m+1} + a^{2m+1}$ into factors in a similar way. \Item{8.} Show that $x^{2n} - 2x^{n}a^{n} \cos\theta + a^{2n}$ is equal to \begin{multline*} \left(x^{2} - 2xa\cos\frac{\theta}{n} + a^{2}\right) \left(x^{2} - 2xa\cos\frac{\theta + 2\pi}{n} + a^{2}\right) \dots \\ \dots\left(x^{2} - 2xa\cos\frac{\theta + 2(n - 1)\pi}{n} + a^{2}\right). \end{multline*} [Use the formula \[ x^{2n} - 2x^{n}a^{n} \cos\theta + a^{2n} = \{x^{n} - a^{n}(\cos\theta + i\sin\theta)\} \{x^{n} - a^{n}(\cos\theta - i\sin\theta)\}, \] and split up each of the last two expressions into $n$~factors.] \Item{9.} Find all the roots of the equation $x^{6} - 2x^{3} + 2 = 0$. \MathTrip{1910.} \Item{10.} The problem of finding the accurate value of~$\omega_{n}$ in a numerical form involving square roots only, as in the formula $\omega_{3} = \frac{1}{2}(-1 + i\sqrt{3})$, is the algebraical equivalent of the geometrical problem of inscribing a regular polygon of $n$~sides in a circle of unit radius by Euclidean methods, \ie\ by ruler and compasses. For this construction will be possible if and only if we can construct lengths measured by $\cos(2\pi/n)$ and $\sin(2\pi/n)$; and this is possible (\Ref{Ch.}{II}, \MiscExs{II}~22) if and only if these numbers are expressible in a form involving square roots only. Euclid gives constructions for $n = 3$, $4$, $5$, $6$, $8$, $10$, $12$, and~$15$. It is evident that the construction is possible for any value of~$n$ which can be found from these by multiplication by any power of~$2$. There are other special values of~$n$ for which such constructions are possible, the most interesting being~$n = 17$. \end{Examples} \PageSep{101} \Paragraph{49. 
The general form of De~Moivre's Theorem.} It follows from the results of the last section that if $q$~is a positive integer then one of the values of $(\cos\theta + i\sin\theta)^{1/q}$ is \[ \cos(\theta/q) + i\sin(\theta/q). \] Raising each of these expressions to the power~$p$ (where $p$~is any integer positive or negative), we obtain the theorem that one of the values of $(\cos\theta + i\sin\theta)^{p/q}$ is $\cos(p\theta/q) + i\sin(p\theta/q)$, or that \emph{if $\alpha$~is any rational number then one of the values of $(\cos\theta + i\sin\theta)^{\alpha}$ is} \[ \cos\alpha\theta + i\sin\alpha\theta. \] This is a generalised form of De~Moivre's Theorem (\SecNo[§]{45}). \Section{MISCELLANEOUS EXAMPLES ON CHAPTER III.} \begin{Examples}{} \Item{1.} The condition that a triangle~$(xyz)$ should be equilateral is that \[ x^{2} + y^{2} + z^{2} - yz - zx - xy = 0. \] {\Loosen[Let $XYZ$~be the triangle. The displacement $\Seg{ZX}$ is $\Seg{YZ}$ turned through an angle~$\frac{2}{3}\pi$ in the positive or negative direction. Since $\Cis\frac{2}{3}\pi = \omega_{3}$, $\Cis(-\frac{2}{3}\pi) = 1/\omega_{3} = \omega_{3}^{2}$, we have $x - z = (z - y)\omega_{3}$ or $x - z = (z - y)\omega_{3}^{2}$. Hence $x + y\omega_{3} + z\omega_{3}^{2} = 0$ or $x + y\omega_{3}^{2} + z\omega_{3} = 0$. The result follows from \Exs{xxii}.~3.]} \Item{2.} If $XYZ$, $X'Y'Z'$ are two triangles, and \[ \Seg{YZ} · \Seg{Y'Z'} = \Seg{ZX} · \Seg{Z'X'} = \Seg{XY} · \Seg{X'Y'}, \] then both triangles are equilateral. [From the equations \[ (y - z)(y' - z') = (z - x)(z' - x') = (x - y)(x' - y') = \kappa^{2}, \] say, we deduce $\sum 1/(y' - z') = 0$, or $\sum x'^{2} - \sum y'z' = 0$. Now apply the result of the last example.] \Item{3.} Similar triangles $BCX$, $CAY$, $ABZ$ are described on the sides of a triangle~$ABC$. Show that the centres of gravity of $ABC$,~$XYZ$ are coincident. [We have $(x - c)/(b - c) = (y - a)/(c - a) = (z - b)/(a - b) = \lambda$, say. 
Express $\frac{1}{3}(x + y + z)$ in terms of $a$,~$b$,~$c$.] \Item{4.} If $X$,~$Y$,~$Z$ are points on the sides of the triangle $ABC$, such that \[ BX/XC = CY/YA = AZ/ZB = r, \] and if $ABC$, $XYZ$ are similar, then either $r = 1$ or both triangles are equilateral. \Item{5.} If $A$,~$B$,~$C$,~$D$ are four points in a plane, then \[ AD · BC \leq BD · CA + CD · AB. \] \PageSep{102} [Let $z_{1}$,~$z_{2}$,~$z_{3}$,~$z_{4}$ be the complex numbers corresponding to $A$,~$B$,~$C$,~$D$. Then we have identically \[ (z_{1} - z_{4})(z_{2} - z_{3}) + (z_{2} - z_{4})(z_{3} - z_{1}) + (z_{3} - z_{4})(z_{1} - z_{2}) = 0. \] Hence \begin{align*} |(z_{1} - z_{4})(z_{2} - z_{3})| &= |(z_{2} - z_{4})(z_{3} - z_{1}) + (z_{3} - z_{4})(z_{1} - z_{2})|\\ &\leq |(z_{2} - z_{4})(z_{3} - z_{1})| + |(z_{3} - z_{4})(z_{1} - z_{2})|.] \end{align*} \Item{6.} Deduce Ptolemy's Theorem concerning cyclic quadrilaterals from the fact that the cross ratios of four concyclic points are real. [Use the same identity as in the last example.] \Item{7.} If $z^{2} + z'^{2} = 1$, then the points $z$,~$z'$ are ends of conjugate diameters of an ellipse whose foci are the points $1$,~$-1$. [If $CP$,~$CD$ are conjugate semi-diameters of an ellipse and $S$,~$H$ its foci, then $CD$~is parallel to the external bisector of the angle~$SPH$, and $SP · HP = CD^{2}$.] \Item{8.} Prove that $|a + b|^{2} + |a - b|^{2} = 2\{|a|^{2} + |b|^{2}\}$. [This is the analytical equivalent of the geometrical theorem that, if $M$~is the middle point of~$PQ$, then $OP^{2} + OQ^{2} = 2OM^{2} + 2MP^{2}$.] \Item{9.} Deduce from Ex.~8 that \[ |a + \sqrtp{a^{2} - b^{2}}| + |a - \sqrtp{a^{2} - b^{2}}| = |a + b| + |a - b|.
\] [If $a + \sqrtp{a^{2} - b^{2}} = z_{1}$, $a - \sqrtp{a^{2} - b^{2}} = z_{2}$, we have \[ |z_{1}|^{2} + |z_{2}|^{2} = \tfrac{1}{2}|z_{1} + z_{2}|^{2} + \tfrac{1}{2}|z_{1} - z_{2}|^{2} = 2|a|^{2} + 2|a^{2} - b^{2}|, \] and so \[ (|z_{1}| + |z_{2}|)^{2} = 2\{|a|^{2} + |a^{2} - b^{2}| + |b|^{2}\} = |a + b|^{2} + |a - b|^{2} + 2|a^{2} - b^{2}|. \] {\Loosen Another way of stating the result is: if $z_{1}$~and~$z_{2}$ are the roots of $\alpha z^{2} + 2\beta z + \gamma = 0$, then} \[ |z_{1}| + |z_{2}| = (1/|\alpha|) \{(|-\beta + \DPtypo{\sqrt{\alpha\gamma}}{\sqrtp{\alpha\gamma}}|) + (|-\beta - \DPtypo{\sqrt{\alpha\gamma}}{\sqrtp{\alpha\gamma}}|)\}.] \] \Item{10.} Show that the necessary and sufficient conditions that both the roots of the equation $z^{2} + az + b = 0$ should be of unit modulus are \[ |a| \leq 2,\quad |b| = 1,\quad \am b = 2\am a. \] [The amplitudes have not necessarily their principal values.] \Item{11.} If $x^{4} + 4a_{1}x^{3} + 6a_{2}x^{2} + 4a_{3}x + a_{4} = 0$ is an equation with real coefficients and has two real and two complex roots, concyclic in the Argand diagram, then \[ a_{3}^{2} + a_{1}^{2}a_{4} + a_{2}^{3} - a_{2}a_{4} - 2a_{1}a_{2}a_{3} = 0. \] \Item{12.} The four roots of $a_{0}x^{4} + 4a_{1}x^{3} + 6a_{2}x^{2} + 4a_{3}x + a_{4} = 0$ will be harmonically related if \[ a_{0}a_{3}^{2} + a_{1}^{2}a_{4} + a_{2}^{3} - a_{0}a_{2}a_{4} - 2a_{1}a_{2}a_{3} = 0. \] [Express $Z_{23, 14} Z_{31, 24} Z_{12, 34}$, where $Z_{23, 14} = (z_{1} - z_{2}) (z_{3} - z_{4}) + (z_{1} -z_{3}) (z_{2} - z_{4})$ and $z_{1}$,~$z_{2}$, $z_{3}$,~$z_{4}$ are the roots of the equation, in terms of the coefficients.] \PageSep{103} \Item{13.} \Topic{Imaginary points and straight lines.} Let $ax + by + c = 0$ be an equation with complex coefficients (which of course may be real in special cases). If we give $x$ any particular real or complex value, we can find the corresponding value of~$y$. 
The aggregate of pairs of real or complex values of $x$~and~$y$ which satisfy the equation is called an \emph{imaginary straight line}; the pairs of values are called \emph{imaginary points}, and are said \emph{to lie on the line}. The values of $x$~and~$y$ are called the \emph{coordinates} of the point $(x, y)$. When $x$~and~$y$ are real, the point is called a \emph{real point}: when $a$,~$b$,~$c$ are all real (or can be made all real by division by a common factor), the line is called a \emph{real line}. The points $x = \alpha + \beta i$, $y = \gamma + \delta i$ and $x = \alpha - \beta i$, $y = \gamma - \delta i$ are said to be \emph{conjugate}; and so are the lines \[ (A + A'i)x + (B + B'i)y + C + C'i = 0,\quad (A - A'i)x + (B - B'i)y + C - C'i = 0. \] Verify the following assertions:---every real line contains infinitely many pairs of conjugate imaginary points; an imaginary line in general contains one and only one real point; an imaginary line cannot contain a pair of conjugate imaginary points:---and find the conditions (\ia)~that the line joining two given imaginary points should be real, and (\ib)~that the point of intersection of two imaginary lines should be real. \Item{14.} Prove the identities \begin{gather*} (x + y + z) (x + y\omega_{3} + z\omega_{3}^{2}) (x + y\omega_{3}^{2} + z\omega_{3}) = x^{3} + y^{3} + z^{3} - 3xyz,\\ (x + y + z) (x + y\omega_{5} + z\omega_{5}^{4}) (x + y\omega_{5}^{2} + z\omega_{5}^{3}) (x + y\omega_{5}^{3} + z\omega_{5}^{2}) (x + y\omega_{5}^{4} + z\omega_{5})\\ = x^{5} + y^{5} + z^{5} - 5x^{3}yz + 5xy^{2}z^{2}. \end{gather*} \Item{15.} Solve the equations \[ x^{3} - 3ax + (a^{3} + 1) = 0,\quad x^{5} - 5ax^{3} + 5a^{2}x + (a^{5} + 1) = 0. 
\] \Item{16.} If $f(x) = a_{0} + a_{1}x + \dots + a_{k}x^{k}$, then \[ \{f(x) + f(\omega x) + \dots + f(\omega^{n-1}x)\}/n = a_{0} + a_{n}x^{n} + a_{2n}x^{2n} + \dots + a_{\lambda n}x^{\lambda n}, \] $\omega$~being any root of $x^{n} = 1$ (except $x = 1$), and $\lambda n$~the greatest multiple of~$n$ contained in~$k$. Find a similar formula for $a_{\mu} + a_{\mu+n}x^{n} + a_{\mu+2n}x^{2n} + \dots$. \Item{17.} If \[ (1 + x)^{n} = p_{0} + p_{1}x + p_{2}x^{2} + \dots, \] $n$~being a positive integer, then \[ p_{0} - p_{2} + p_{4} - \dots = 2^{\frac{1}{2} n} \cos\tfrac{1}{4}n\pi,\quad p_{1} - p_{3} + p_{5} - \dots = 2^{\frac{1}{2} n} \sin\tfrac{1}{4}n\pi. \] \Item{18.} Sum the series \[ \frac{x}{2! \DPchg{n - 2!}{(n - 2)!}} + \frac{x^{2}}{5! \DPchg{n - 5!}{(n - 5)!}} + \frac{x^{3}}{8! \DPchg{n - 8!}{(n - 8)!}} + \dots + \frac{x^{n/3}}{\DPchg{n - 1!}{(n - 1)!}}, \] $n$~being a multiple of~$3$. \MathTrip{1899.} \Item{19.} {\Loosen If $t$~is a complex number such that $|t| = 1$, then the point $x = (at + b)/(t - c)$ describes a circle as $t$~varies, unless $|c| = 1$, when it describes a straight line.} \PageSep{104} \Item{20.} If~$t$ varies as in the last example then the point $x = \frac{1}{2}\{at + (b/t)\}$ in general describes an ellipse whose foci are given by $x^{2} = ab$, and whose axes are $|a| + |b|$ and $|a| - |b|$. But if $|a| = |b|$ then $x$~describes the finite straight line joining the points $-\sqrtp{ab}$, $\sqrtp{ab}$. \Item{21.} Prove that if $t$~is real and $z = t^{2} - 1 + \sqrtp{t^{4} - t^{2}}$, then, when $t^{2} < 1$, $z$~is represented by a point which lies on the circle $x^{2} + y^{2} + x = 0$. Assuming that, when $t^{2} > 1$, $\sqrtp{t^{4} - t^{2}}$ denotes the positive square root of $t^{4} - t^{2}$, discuss the motion of the point which represents~$z$, as $t$~diminishes from a large positive value to a large negative value. 
\MathTrip{1912.} \Item{22.} The coefficients of the transformation $z = (aZ + b)/(cZ + d)$ are subject to the condition $ad - bc = 1$. Show that, if $c \neq 0$, there are two \emph{fixed points} $\alpha$,~$\beta$, \ie\ points unaltered by the transformation, except when $(a + d)^{2} = 4$, when there is only one fixed point~$\alpha$; and that in these two cases the transformation may be expressed in the forms \[ \frac{z - \alpha}{z - \beta} = K\frac{Z - \alpha}{Z - \beta},\quad \frac{1}{z - \alpha} = \frac{1}{Z - \alpha} + K. \] Show further that, if $c = 0$, there will be one fixed point~$\alpha$ unless $a = d$, and that in these two cases the transformation may be expressed in the forms \[ z - \alpha = K(Z - \alpha),\quad z = Z + K. \] Finally, if $a$,~$b$,~$c$,~$d$ are further restricted to positive integral values (including zero), show that the only transformations with less than two fixed points are of the forms $(1/z) = (1/Z) + K$, $z = Z + K$. \MathTrip{1911.} \Item{23.} Prove that the relation $z = (1 + Zi)/(Z + i)$ transforms the part of the axis of~$x$ between the points $z = 1$ and $z = -1$ into a semicircle passing through the points $Z = 1$ and $Z = -1$. Find all the figures that can be obtained from the originally selected part of the axis of~$x$ by successive applications of the transformation. \MathTrip{1912.} \Item{24.} If $z = 2Z + Z^{2}$ then the circle $|Z| = 1$ corresponds to a cardioid in the plane of~$z$. \Item{25.} Discuss the transformation $z = \frac{1}{2}\{Z + (1/Z)\}$, showing in particular that to the circles $X^{2} + Y^{2} = \alpha^{2}$ correspond the confocal ellipses \[ \frac{x^{2}}{\left\{\dfrac{1}{2}\left(\alpha + \dfrac{1}{\alpha}\right)\right\}^{2}} + \frac{y^{2}}{\left\{\dfrac{1}{2}\left(\alpha - \dfrac{1}{\alpha}\right)\right\}^{2}} = 1. 
\] \Item{26.} If $(z + 1)^{2} = 4/Z$ then the unit circle in the $z$-plane corresponds to the parabola $R\cos^{2} \frac{1}{2}\Theta = 1$ in the $Z$-plane, and the inside of the circle to the outside of the parabola. \Item{27.} Show that, by means of the transformation $z = \{(Z - ci)/(Z + ci)\}^{2}$, the upper half of the $z$-plane may be made to correspond to the interior of a certain semicircle in the $Z$-plane. \PageSep{105} \Item{28.} If $z = Z^{2} - 1$, then as $z$~describes the circle $|z| = \kappa$, the two corresponding positions of~$Z$ each describe the Cassinian oval $\rho_{1}\rho_{2} = \kappa$, where $\rho_{1}$,~$\rho_{2}$ are the distances of~$Z$ from the points $-1$,~$1$. Trace the ovals for different values of~$\kappa$. \Item{29.} Consider the relation $az^{2} + 2hzZ + bZ^{2} + 2gz + 2fZ + c = 0$. Show that there are two values of~$Z$ for which the corresponding values of~$z$ are equal, and \textit{vice versa}. We call these the \emph{branch points} in the $Z$ and $z$-planes respectively. Show that, if $z$~describes an ellipse whose foci are the branch points, then so does~$Z$. [We can, without loss of generality, take the given relation in the form \[ %[** TN: [sic] colon] z^{2} + 2zZ\cos\omega + Z^{2} = 1: \] the reader should satisfy himself that this is the case. The branch points in either plane are $\cosec\omega$ and $-\cosec\omega$. An ellipse of the form specified is given by \[ |z + \cosec\omega| + |z - \cosec\omega| = C, \] where $C$~is a constant. This is equivalent (Ex.~9) to \[ |z + \sqrtp{z^{2} - \cosec^{2}\omega}| + |z - \sqrtp{z^{2} - \cosec^{2}\omega}| = C. \] Express this in terms of~$Z$.] \Item{30\Add{.}} If $z = aZ^{m} + bZ^{n}$, where $m$,~$n$ are positive integers and $a$,~$b$ real, then as $Z$~describes the unit circle, $z$~describes a hypo- or epi-cycloid. 
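[Thus when $m = 1$, $n = 2$, $a = 2$, $b = 1$ we have $z = 2Z + Z^{2}$, and, putting $Z = \cos\theta + i\sin\theta$,
\[ x = 2\cos\theta + \cos 2\theta,\quad y = 2\sin\theta + \sin 2\theta, \]
the ordinary parametric equations of an epicycloid of one cusp, \ie\ the cardioid of Ex.~24.]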
\Item{31.} Show that the transformation \[ z = \frac{(a + di)Z_{0} + b}{cZ_{0} - (a - di)}, \] where $a$,~$b$,~$c$,~$d$ are real and $a^{2} + d^{2} + bc > 0$, and $Z_{0}$~denotes the conjugate of~$Z$, is equivalent to an inversion with respect to the circle \[ c(x^{2} + y^{2}) - 2ax - 2dy - b = 0. \] What is the geometrical interpretation of the transformation when \[ a^{2} + d^{2} + bc < 0? \] \Item{32.} The transformation \[ \frac{1 - z}{1 + z} = \left(\frac{1 - Z}{1 + Z}\right)^{c}, \] where $c$~is rational and $0 < c < 1$, transforms the circle $|z| = 1$ into the boundary of a circular lune of angle~$\pi/c$. \end{Examples} \PageSep{106} \Chapter{IV}{LIMITS OF FUNCTIONS OF A POSITIVE INTEGRAL VARIABLE} \Paragraph{50. Functions of a positive integral variable.} In \Ref{Chapter}{II} we discussed the notion of a function of a real variable~$x$, and illustrated the discussion by a large number of examples of such functions. And the reader will remember that there was one important particular with regard to which the functions which we took as illustrations differed very widely. Some were defined for \emph{all} values of~$x$, some for \emph{rational} values only, some for \emph{integral} values only, and so on. \begin{Remark} Consider, for example, the following functions: (i)~$x$, (ii)~$\sqrt{x}$, (iii)~the denominator of~$x$, (iv)~the square root of the product of the numerator and the denominator of~$x$, (v)~the largest prime factor of~$x$, (vi)~the product of $\sqrt{x}$ and the largest prime factor of~$x$, (vii)~the $x$th~prime number, (viii)~the height measured in inches of convict~$x$ in Dartmoor prison. 
Then the aggregates of values of~$x$ for which these functions are defined or, as we may say, the \emph{fields of definition} of the functions, consist of (i)~\emph{all} values of~$x$, (ii)~\emph{all positive} values of~$x$, (iii)~\emph{all rational} values of~$x$, (iv)~\emph{all positive rational} values of~$x$, (v)~\emph{all integral} values of~$x$, (vi),~(vii)~\emph{all positive integral} values of~$x$, (viii)~a certain number of positive integral values of~$x$, viz., $1$,~$2$, \dots,~$N$, where $N$~is the total number of convicts at Dartmoor at a given moment of time.\footnote {In the last case $N$~depends on the time, and convict~$x$, where $x$~has a definite value, is a different individual at different moments of time. Thus if we take different moments of time into consideration we have a simple example of a function $y = F(x, t)$ of two variables, defined for a certain range of values of~$t$, viz.\ from the time of the establishment of Dartmoor prison to the time of its abandonment, and for a certain number of positive integral values of~$x$, this number varying with~$t$.} \end{Remark} Now let us consider a function, such as (vii) above, which is defined for all positive integral values of~$x$ and no others. This \PageSep{107} function may be regarded from two slightly different points of view. We may consider it, as has so far been our custom, as a function of the real variable~$x$ defined for some only of the values of~$x$, viz.\ positive integral values, and say that for all other values of~$x$ the definition fails. Or we may leave values of~$x$ other than positive integral values entirely out of account, and regard our function as a function of the \emph{positive integral variable~$n$}, whose values are the positive integers \[ 1,\ 2,\ 3,\ 4,\ \dots. \] In this case we may write \[ y = \phi(n) \] and regard $y$~now as a function of~$n$ defined for all values of~$n$. 
It is obvious that any function of~$x$ defined for all values of~$x$ gives rise to a function of~$n$ defined for all values of~$n$. Thus from the function $y = x^{2}$ we deduce the function $y = n^{2}$ by merely omitting from consideration all values of~$x$ other than positive integers, and the corresponding values of~$y$. On the other hand from any function of~$n$ we can deduce any number of functions of~$x$ by merely assigning values to~$y$, corresponding to values of~$x$ other than positive integral values, in any way we please. \begin{Remark} \Paragraph{51. Interpolation.} The problem of determining a function of~$x$ which shall assume, for all positive integral values of~$x$, values agreeing with those of a given function of~$n$, is of extreme importance in higher mathematics. It is called the \emph{problem of functional interpolation}. Were the problem however merely that of finding \emph{some} function of~$x$ to fulfil the condition stated, it would of course present no difficulty whatever. We could, as explained above, simply fill in the missing values as we pleased: we might indeed simply regard the given values of the function of~$n$ as \emph{all} the values of the function of~$x$ and say that the definition of the latter function failed for all other values of~$x$. But such purely theoretical solutions are obviously not what is usually wanted. What is usually wanted is some \emph{formula} involving~$x$ (of as simple a kind as possible) which assumes the given values for $x = 1$, $2$,~\dots. In some cases, especially when the function of~$n$ is itself defined by a formula, there is an obvious solution. If for example $y = \phi(n)$, where $\phi(n)$ is a function of~$n$, such as $n^{2}$ or $\cos n\pi$, which would have a meaning even were $n$ not a positive integer, we naturally take our function of~$x$ to be $y = \phi(x)$. But even in this very simple case it is easy to write down other almost equally obvious solutions of the problem. 
For example \[ y = \phi(x) + \sin x\pi \] assumes the value $\phi(n)$ for $x = n$, since $\sin n\pi = 0$. \PageSep{108} In other cases $\phi(n)$ may be defined by a formula, such as~$(-1)^{n}$, which ceases to define for some values of~$x$ (as here in the case of fractional values of~$x$ with even denominators, or irrational values). But it may be possible to transform the formula in such a way that it does define for all values of~$x$. In this case, for example, \[ (-1)^{n} = \cos n\pi, \] if $n$~is an integer, and the problem of interpolation is solved by the function~$\cos x\pi$. In other cases $\phi(x)$ may be defined for some values of~$x$ other than positive integers, but not for all. Thus from $y = n^{n}$ we are led to $y = x^{x}$. This expression has a meaning for some only of the remaining values of~$x$. If for simplicity we confine ourselves to positive values of~$x$, then $x^{x}$~has a meaning for all rational values of~$x$, in virtue of the definitions of fractional powers adopted in elementary algebra. But when $x$~is \emph{irrational} $x^{x}$~has (so far as we are in a position to say at the present moment) no meaning at all. Thus in this case the problem of interpolation at once leads us to consider the question of extending our definitions in such a way that $x^{x}$~shall have a meaning even when $x$~is irrational. We shall see later on how the desired extension may be effected. Again, consider the case in which \[ y = 1 \cdot 2 \dots n = n!. \] In this case there is no obvious formula in~$x$ which reduces to~$n!$ for $x = n$, as $x!$~means nothing for values of~$x$ other than the positive integers. This is a case in which attempts to solve the problem of interpolation have led to important advances in mathematics. For mathematicians have succeeded in discovering a function (the Gamma-function) which possesses the desired property and many other interesting and important properties besides. \end{Remark} \Paragraph{52. 
Finite and infinite classes.} Before we proceed further it is necessary to make a few remarks about certain ideas of an abstract and logical nature which are of constant occurrence in Pure Mathematics. In the first place, the reader is probably familiar with the notion of \Emph{a class}. It is unnecessary to discuss here any logical difficulties which may be involved in the notion of a `class': roughly speaking we may say that a class is the aggregate or collection of all the entities or objects which possess a certain property, simple or complex. Thus we have the class of British subjects, or members of Parliament, or positive integers, or real numbers. \PageSep{109} Moreover, the reader has probably an idea of what is meant by a \Emph{finite} or \Emph{infinite} class. Thus the class of \emph{British subjects} is a finite class: the aggregate of all British subjects, past, present, and future, has a finite number~$n$, though of course we cannot tell at present the actual value of~$n$. The class of \emph{present British subjects}, on the other hand, has a number~$n$ which could be ascertained by counting, were the methods of the census effective enough. On the other hand the class of positive integers is not finite but infinite. This may be expressed more precisely as follows. If $n$~is any positive integer, such as $1000$, $1,000,000$ or any number we like to think of, then there are more than $n$ positive integers. Thus, if the number we think of is $1,000,000$, there are obviously at least $1,000,001$ positive integers. Similarly the class of rational numbers, or of real numbers, is infinite. It is convenient to express this by saying that there are \Emph{an infinite number} of positive integers, or rational numbers, or real numbers. But the reader must be careful always to remember that by saying this we mean \emph{simply} that the class in question has not a finite number of members such as $1000$ or $1,000,000$. \Paragraph{53. 
Properties possessed by a function of~$n$ for large values of~$n$.} We may now return to the `functions of~$n$' which we were discussing in \SecNo[§§]{50}--\SecNo{51}. They have many points of difference from the functions of~$x$ which we discussed in \Ref{Chap.}{II}\@. But there is one fundamental characteristic which the two classes of functions have in common: \emph{the values of the variable for which they are defined form an infinite class}. It is this fact which forms the basis of all the considerations which follow and which, as we shall see in the next chapter, apply, \textit{mutatis mutandis}, to functions of~$x$ as well. Suppose that $\phi(n)$ is any function of~$n$, and that $P$~is any property which $\phi(n)$ may or may not have, such as that of being a positive integer or of being greater than~$1$. Consider, for each of the values $n = 1$, $2$, $3$,~\dots, whether $\phi(n)$ has the property~$P$ or not. Then there are three possibilities:--- \Item{(\ia)} $\phi(n)$ may have the property~$P$ for \emph{all} values of~$n$, or for all values of~$n$ except a finite number~$N$ of such values: \PageSep{110} \Item{(\ib)} $\phi(n)$ may have the property for \emph{no} values of~$n$, or only for a finite number~$N$ of such values: \Item{(\ic)} neither~(\ia) nor~(\ib) may be true. If (\ib)~is true, the values of~$n$ for which $\phi(n)$ has the property form a finite class. If (\ia)~is true, the values of~$n$ for which $\phi(n)$ has not the property form a finite class. In the third case neither class is finite. Let us consider some particular cases. \begin{Remark} \Item{(1)} Let $\phi(n) = n$, and let $P$~be the property of being a positive integer. Then $\phi(n)$ has the property~$P$ for all values of~$n$. If on the other hand $P$~denotes the property of being a positive integer greater than or equal to~$1000$, then $\phi(n)$ has the property for all values of~$n$ except a finite number of values of~$n$, viz.\ $1$,~$2$, $3$, \dots,~$999$. 
In either of these cases (\ia)~is true. \Item{(2)} If $\phi(n) = n$, and $P$~is the property of being less than~$1000$, then (\ib)~is true. \Item{(3)} If $\phi(n) = n$, and $P$~is the property of being odd, then~(\ic) is true. For $\phi(n)$~is odd if $n$~is odd and even if $n$~is even, and both the odd and the even values of~$n$ form an infinite class. \Par{Example.} Consider, in each of the following cases, whether (\ia),~(\ib), or (\ic) is true: \Itemp{(i)} $\phi(n) = n$, $P$~being the property of being a perfect square, \Itemp{(ii)} \Hang $\phi(n) = p_{n}$, where $p_{n}$~denotes the $n$th~prime number, $P$~being the \\ %[** TN: Explicit line break avoids minor glue problem] property of being odd, \Itemp{(iii)} $\phi(n) = p_{n}$, $P$~being the property of being even, \Itemp{(iv)} $\phi(n) = p_{n}$, $P$~being the property $\phi(n) > n$, \Itemp{(v)} $\phi(n) = 1 - (-1)^{n}(1/n)$, $P$~being the property $\phi(n) < 1$, \Itemp{(vi)} $\phi(n) = 1 - (-1)^{n}(1/n)$, $P$~being the property $\phi(n) < 2$, \Itemp{(vii)} $\phi(n) = 1000\{1 + (-1)^{n}\}/n$, $P$~being the property $\phi(n) < 1$, \Itemp{(viii)} $\phi(n) = 1/n$, $P$~being the property $\phi(n) < .001$, \Itemp{(ix)} $\phi(n) = (-1)^{n}/n$, $P$~being the property $|\phi(n)| < .001$, \Itemp{(x)} \Hang $\phi(n) = 10\MC000/n$, or $(-1)^{n}10\MC000/n$, $P$~being either of the properties $\phi(n) < .001$ or $|\phi(n)| < .001$, \Itemp{(xi)} $\phi(n) = (n - 1)/(n + 1)$, $P$~being the property $1 - \phi(n) < .0001$. \end{Remark} \Paragraph{54.} Let us now suppose that $\phi(n)$~and~$P$ are such that the assertion~(\ia) is true, \ie\ that $\phi(n)$~has the property~$P$, if not for all values of~$n$, at any rate for all values of~$n$ except a finite number~$N$ of such values. We may denote these exceptional values by \[ n_{1},\ n_{2},\ \dots,\ n_{N}. 
\] \PageSep{111} There is of course no reason why these $N$~values should be the \emph{first} $N$~values $1$,~$2$, \dots,~$N$, though, as the preceding examples show, this is frequently the case in practice. But whether this is so or not we know that $\phi(n)$ has the property~$P$ if $n > n_{N}$. Thus the $n$th~prime is odd if $n > 2$, $n = 2$ being the only exception to the statement; and $1/n < .001$ if $n > 1000$, the first $1000$~values of~$n$ being the exceptions; and \[ 1000\{1 + (-1)^{n}\}/n < 1 \] if $n > 2000$, the exceptional values being $2$,~$4$,~$6$, \dots,~$2000$. That is to say, in each of these cases the property is possessed \emph{for all values of~$n$ from a definite value onwards}. We shall frequently express this by saying that $\phi(n)$ has the property for \Emph{large}, or \emph{very large}, or \emph{all sufficiently large} values of~$n$. Thus when we say that \emph{$\phi(n)$~has the property~$P$} (which will as a rule be a property expressed by some relation of inequality) \emph{for large values of~$n$}, what we mean is that we can determine some definite number, $n_{0}$~say, such that $\phi(n)$ has the property for all values of~$n$ greater than or equal to~$n_{0}$. This number~$n_{0}$, in the examples considered above, may be taken to be any number greater than~$n_{N}$, the greatest of the exceptional numbers: it is most natural to take it to be~$n_{N} + 1$. Thus we may say that `all large primes are odd', or that `$1/n$~is less than~$.001$ for large values of~$n$'. And the reader must make himself familiar with the use of the word \emph{large} in statements of this kind. \emph{Large} is in fact a word which, standing by itself, has no more absolute meaning in mathematics than in the language of common life. 
It is a truism that in common life a number which is large in one connection is small in another; $6$~goals is a large score in a football match, but $6$~runs is not a large score in a cricket match; and $400$~runs is a large score, but £$400$~is not a large income: and so of course in mathematics \emph{large} generally means \emph{large enough}, and what is large enough for one purpose may not be large enough for another. We know now what is meant by the assertion `$\phi(n)$~has the property~$P$ for large values of~$n$'. It is with assertions of this kind that we shall be concerned throughout this chapter. \PageSep{112} \Paragraph{55. The phrase `$n$~tends to infinity'.} There is a somewhat different way of looking at the matter which it is natural to adopt. Suppose that $n$~assumes successively the values $1$,~$2$, $3$,~\dots. The word `successively' naturally suggests succession in time, and we may suppose~$n$, if we like, to assume these values at successive moments of time (\eg\ at the beginnings of successive seconds). Then as the seconds pass $n$~gets larger and larger and there is no limit to the extent of its increase. However large a number we may think of (\eg\ $2\MC147\MC483\MC647$), a time will come when $n$~has become larger than this number. It is convenient to have a short phrase to express this unending growth of~$n$, and we shall say that \Emph{$n$~tends to infinity}, or $n \to \infty$, this last symbol being usually employed as an abbreviation for `infinity'. The phrase `tends to' like the word `successively' naturally suggests the idea of change in time, and it is convenient to think of the variation of~$n$ as accomplished in time in the manner described above. This however is a mere matter of convenience. The variable~$n$ is a purely logical entity which has in itself nothing to do with time. 
The reader cannot too strongly impress upon himself that when we say that $n$~`tends to~$\infty$' we mean simply that $n$~is supposed to assume a series of values which increase continually and without limit. \Emph{There is no number `infinity'}: such an equation as \[ n = \infty \] is as it stands \emph{absolutely meaningless}: $n$~cannot be equal to~$\infty$, because `equal to~$\infty$' means nothing. So far in fact the symbol~$\infty$ means nothing at all except in the one phrase `tends to~$\infty$', the meaning of which we have explained above. Later on we shall learn how to attach a meaning to other phrases involving the symbol~$\infty$, but the reader will always have to bear in mind \Item{(1)} that \emph{$\infty$~by itself} means nothing, although \emph{phrases containing it} sometimes mean something, \Item{(2)} that in every case in which a phrase containing the symbol~$\infty$ means something it will do so simply because we have previously attached a meaning to this particular phrase by means of a special definition. \PageSep{113} Now it is clear that if $\phi(n)$~has the property~$P$ for large values of~$n$, and if $n$~`tends to~$\infty$', in the sense which we have just explained, then $n$~will ultimately assume values large enough to ensure that $\phi(n)$ has the property~$P$. And so another way of putting the question `what properties has $\phi(n)$ for sufficiently large values of~$n$?'\ is `how does $\phi(n)$ behave as $n$~tends to~$\infty$?' \Paragraph{56. The behaviour of a function of~$n$ as $n$~tends to infinity.} {\Loosen We shall now proceed, in the light of the remarks made in the preceding sections, to consider the meaning of some kinds of statements which are perpetually occurring in higher mathematics. Let us consider, for example, the two following statements: (\ia)~\emph{$1/n$ is small for large values of~$n$}, (\ib)~\emph{$1 - (1/n)$ is nearly equal to~$1$ for large values of~$n$}. 
Obvious as they may seem, there is a good deal in them which will repay the reader's attention. Let us take (\ia) first, as being slightly the simpler.} We have already considered the statement `\emph{$1/n$~is less than~$.01$ for large values of~$n$}'. This, we saw, means that the inequality $1/n < .01$ is true for all values of~$n$ greater than some definite value, in fact greater than~$100$. Similarly it is true that `\emph{$1/n$~is less than $.0001$ for large values of~$n$}': in fact $1/n < .0001$ if $n > 10\MC000$. And instead of $.01$ or $.0001$ we might take $.000\MS001$ or $.000\MS000\MS01$, or indeed any positive number we like. It is obviously convenient to have some way of expressing the\PageLabel{113} fact that \emph{any} such statement as `\emph{$1/n$~is less than~$.01$ for large values of~$n$}' is true, when we substitute for~$.01$ any smaller number, such as $.0001$ or $.000\MS001$ or any other number we care to choose. And clearly we can do this by saying that `\emph{however small $\DELTA$ may be} (provided of course it is positive), \emph{then $1/n < \DELTA$ for sufficiently large values of~$n$}'. That this is true is obvious. For $1/n < \DELTA$ if $n > 1/\DELTA$, so that our `sufficiently large' values of~$n$ need only all be greater than~$1/\DELTA$. The assertion is however a complex one, in that it really stands for the whole class of assertions which we obtain by giving to~$\DELTA$ special values such as~$.01$. And of course the smaller $\DELTA$~is, and the larger~$1/\DELTA$, the larger must be the least of the `sufficiently large' values of~$n$: values which are sufficiently large when $\DELTA$~has one value are inadequate when it has a smaller. The last statement italicised is what is really meant by the statement~(\ia), that $1/n$~is small when $n$~is large. 
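The reader may apply the same analysis to Ex.~(x) of \SecNo[§]{53}, a case in which the `sufficiently large' values of~$n$ are determined equally explicitly. If $\phi(n) = 10\MC000/n$ then, however small the positive number~$\DELTA$ may be, $\phi(n) < \DELTA$ for sufficiently large values of~$n$: for
\[
10\MC000/n < \DELTA
\]
if and only if $n > 10\MC000/\DELTA$. Thus if $\DELTA = .001$ it is enough that $n$ should be greater than $10\MC000\MC000$; the effect of the larger constant in the numerator is merely to increase the least of the `sufficiently large' values of~$n$.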
Similarly \PageSep{114} (\ib)~really means ``\emph{if $\phi(n) = 1 - (1/n)$, then the statement `$1 - \phi(n) < \DELTA$ for sufficiently large values of~$n$' is true whatever positive value \(such as $.01$ or $.0001$\) we attribute to~$\DELTA$}''. That the statement~(\ib) is true is obvious from the fact that $1 - \phi(n) = 1/n$. There is another way in which it is common to state the facts expressed by the assertions (\ia)~and~(\ib). This is suggested at once by \SecNo[§]{55}. Instead of saying `$1/n$~is small for large values of~$n$' we say `$1/n$~tends to~$0$ as $n$~tends to~$\infty$'. Similarly we say that `$1 - (1/n)$ tends to~$1$ as $n$~tends to~$\infty$': and these statements are to be regarded as precisely equivalent to (\ia)~and~(\ib). Thus the statements \begin{align*} &\text{`$1/n$~is small when $n$~is large',} \\ &\text{`$1/n$~tends to~$0$ as $n$~tends to~$\infty$',} \end{align*} are equivalent to one another and to the more formal statement \begin{quotation} `if $\DELTA$ is any positive number, however small, then $1/n < \DELTA$ for sufficiently large values of~$n$', \end{quotation} or to the still more formal statement \begin{quotation} `if $\DELTA$ is any positive number, however small, then we can find a number~$n_{0}$ such that $1/n < \DELTA$ for all values of~$n$ greater than or equal to~$n_{0}$'. \end{quotation} The number~$n_{0}$ which occurs in the last statement is of course a function of~$\DELTA$. We shall sometimes emphasize this fact by writing $n_{0}$ in the form~$n_{0}(\DELTA)$. \begin{Remark} The reader should imagine himself confronted by an opponent who questions the truth of the statement. He would name a series of numbers growing smaller and smaller. He might begin with~$.001$. The reader would reply that $1/n < .001$ as soon as $n > 1000$. The opponent would be bound to admit this, but would try again with some smaller number, such as~$.000\MS000\MS1$. 
The reader would reply that $1/n < .000\MS000\MS1$ as soon as $n > 10\MC000\MC000$: and so on. In this simple case it is evident that the reader would always have the better of the argument. \end{Remark} We shall now introduce yet another way of expressing this property of the function~$1/n$. We shall say that `\emph{the \Emph{limit} of~$1/n$ as $n$~tends to~$\infty$ is~$0$}', a statement which we may express symbolically in the form \[ \lim_{n\to\infty} \frac{1}{n} = 0, \] \PageSep{115} or simply $\lim(1/n) = 0$. We shall also sometimes write %[** TN: Next expression displayed in the original] `$1/n \to 0$ as $n \to \infty$', which may be read `$1/n$~tends to~$0$ as $n$~tends to~$\infty$'; or simply `$1/n \to 0$'. In the same way we shall write \[ \lim_{n\to\infty} \left(1 - \frac{1}{n}\right) = 1,\quad \lim \left(1 - \frac{1}{n}\right) = 1, \] or $1 - (1/n) \to 1$. \Paragraph{57.} Now let us consider a different example: let $\phi(n) = n^{2}$. Then `\emph{$n^{2}$~is large when $n$~is large}'. This statement is equivalent to the more formal statements \begin{quotation} `if $\Delta$ is any positive number, however large, then $n^{2} > \Delta$ for sufficiently large values of~$n$', \bigskip `we can find a number $n_{0}(\Delta)$ such that $n^{2} > \Delta$ for all values of~$n$ greater than or equal to~$n_{0}(\Delta)$'. \end{quotation} And it is natural in this case to say that `$n^{2}$~tends to~$\infty$ as $n$~tends to~$\infty$', or `$n^{2}$~tends to~$\infty$ with~$n$', and to write \[ n^2 \to \infty. \] Finally consider the function $\phi(n) = -n^{2}$. In this case $\phi(n)$ is large, but negative, when $n$~is large, and we naturally say that `$-n^{2}$~tends to~$-\infty$ as $n$~tends to~$\infty$' and write \[ -n^{2} \to -\infty. 
\] And the use of the symbol~$-\infty$ in this sense suggests that it will sometimes be convenient to write $n^{2} \to +\infty$ for $n^{2} \to \infty$ and generally to use~$+\infty$ instead of~$\infty$, in order to secure greater uniformity of notation. But we must once more repeat that in all these statements the symbols $\infty$,~$+\infty$,~$-\infty$ mean nothing whatever by themselves, and only acquire a meaning when they occur in certain special connections in virtue of the explanations which we have just given. \PageSep{116} \Paragraph{58. Definition of a limit.} After the discussion which precedes the reader should be in a position to appreciate the general notion of a \emph{limit}. Roughly we may say that \emph{$\phi(n)$~tends to a limit~$l$ as $n$~tends to~$\infty$ if $\phi(n)$~is nearly equal to~$l$ when $n$~is large}. But although the meaning of this statement should be clear enough after the preceding explanations, it is not, as it stands, precise enough to serve as a strict mathematical definition. It is, in fact, equivalent to a whole class of statements of the type `\emph{for sufficiently large values of~$n$, $\phi(n)$~differs from~$l$ by less than~$\DELTA$}'. This statement has to be true for $\DELTA = .01$ or $.0001$ or \emph{any} positive number; and for any such value of~$\DELTA$ it has to be true for \emph{any} value of~$n$ after a certain definite value~$n_{0}(\DELTA)$, though the smaller~$\DELTA$ is the larger, as a rule, will be this value~$n_{0}(\DELTA)$. We accordingly frame the following formal definition: \begin{Definition}[I.] 
The function $\phi(n)$ is said to tend to the limit~$l$ as $n$~tends to~$\infty$, if, however small be the positive number~$\DELTA$, $\phi(n)$~differs from~$l$ by less than~$\DELTA$ for sufficiently large values of~$n$; that is to say if, however small be the positive number~$\DELTA$, we can determine a number~$n_{0}(\DELTA)$ corresponding to~$\DELTA$, such that $\phi(n)$~differs from~$l$ by less than~$\DELTA$ for all values of~$n$ greater than or equal to~$n_{0}(\DELTA)$. \end{Definition} It is usual to denote the difference between $\phi(n)$~and~$l$, taken positively, by $|\phi(n) - l|$. It is equal to $\phi(n) - l$ or to $l - \phi(n)$, whichever is positive, and agrees with the definition of the \emph{modulus} of~$\phi(n) - l$, as given in \Ref{Chap.}{III}, though at present we are only considering real values, positive or negative. With this notation the definition may be stated more shortly as follows: `\emph{if, given any positive number,~$\DELTA$, however small, we can find~$n_{0}(\DELTA)$ so that $|\phi(n) - l| < \DELTA$ when $n \geq n_{0}(\DELTA)$, then we say that $\phi(n)$~tends to the limit~$l$ as $n$~tends to~$\infty$, and write} \[ \lim_{n \to \infty} \phi(n) = l\text{'.} \] \begin{Remark} Sometimes we may omit the `$n \to \infty$'; and sometimes it is convenient, for brevity, to write $\phi(n) \to l$. The reader will find it instructive to work out, in a few simple cases, the explicit expression of~$n_{0}$ as a function of~$\DELTA$. Thus if $\phi(\DPtypo{x}{n}) = 1/n$ then $l = 0$, and the condition reduces to $1/n < \DELTA$ for $n \geq n_{0}$, which is satisfied if $n_{0} = 1 + [1/\DELTA]$.\footnote {Here and henceforward we shall use $[x]$ in the sense of \Ref{Chap.}{II}, \ie\ as the greatest integer not greater than~$x$.} There is one and only one case in which \emph{the same~$n_{0}$} will do for \emph{all} values of~$\DELTA$. 
\PageSep{117} If, from a certain value~$N$ of~$n$ onwards, $\phi(n)$~is constant, say equal to~$C$, then it is evident that $\phi(n) - C = 0$ for $n \geq N$, so that the inequality $|\phi(n) - C| < \DELTA$ is satisfied for $n \geq N$ and all positive values of~$\DELTA$. And if $|\phi(n) - l| < \DELTA$ for $n \geq N$ and all positive values of~$\DELTA$, then it is evident that $\phi(n) = l$ when $n \geq N$, so that $\phi(n)$~is constant for all such values of~$n$. \end{Remark} \Paragraph{59.} The definition of a limit may be illustrated geometrically as follows. The graph of~$\phi(n)$ consists of a number of points corresponding to the values $n = 1$, $2$,~$3$,~\dots. Draw the line $y = l$, and the parallel lines $y = l - \DELTA$, $y = l + \DELTA$ at distance~$\DELTA$ from it. Then \[ \lim_{n \to \infty} \phi(n) = l, \] %[Illustration: Fig. 27.] \ifthenelse{\boolean{Modernize}}{% \Figure{27}{p117} }{% \Figure{27}{p117_orig_notation}% } if, when once these lines have been drawn, no matter how close they may be together, we can always draw a line $x = n_{0}$, as in the figure, in such a way that the point of the graph on this line, and all points to the right of it, lie between them. We shall find this geometrical way of looking at our definition particularly useful when we come to deal with functions defined for all values of a real variable and not merely for positive integral values. \Paragraph{60.} So much for functions of~$n$ which tend to a limit as~$n$ tends to~$\infty$. We must now frame corresponding definitions for functions which, like the functions $n^{2}$~or~$-n^{2}$, tend to positive or negative infinity. The reader should by now find no difficulty in appreciating the point of \begin{Definition}[II.] 
The function~$\phi(n)$ is said to tend to~$+\infty$ (positive infinity) with~$n$, if, when any number~$\Delta$, however large, is assigned, we can determine~$n_{0}(\Delta)$ so that $\phi(n) > \Delta$ when $n \geq n_{0}(\Delta)$; \PageSep{118} that is to say if, however large~$\Delta$ may be, $\phi(n) > \Delta$ for sufficiently large values of~$n$. \end{Definition} Another, less precise, form of statement is `\emph{if we can make $\phi(n)$~as large as we please by sufficiently increasing~$n$}'. This is open to the objection that it obscures a fundamental point, viz.\ that $\phi(n)$~must be greater than~$\Delta$ for \emph{all} values of~$n$ such that $n \geq n_{0}(\Delta)$, and not merely for \emph{some} such values. But there is no harm in using this form of expression if we are clear what it means. When $\phi(n)$ tends to~$+\infty$ we write \[ \phi(n) \to +\infty. \] We may leave it to the reader to frame the corresponding definition for functions which tend to negative infinity. \Paragraph{61. Some points concerning the definitions.} The reader should be careful to observe the following points. \Item{(1)} We may obviously alter the values of~$\phi(n)$ for any finite number of values of~$n$, in any way we please, without in the least affecting the behaviour of~$\phi(n)$ as $n$~tends to~$\infty$. For example $1/n$~tends to~$0$ as $n$~tends to~$\infty$. We may deduce any number of new functions from~$1/n$ by altering a finite number of its values. For instance we may consider the function~$\phi(n)$ which is equal to~$3$ for $n = 1$,~$2$, $7$, $11$, $101$, $107$, $109$,~$237$ and equal to~$1/n$ for all other values of~$n$. For this function, just as for the original function~$1/n$, $\lim\phi(n) = 0$. Similarly, for the function~$\phi(n)$ which is equal to~$3$ if $n = 1$,~$2$, $7$, $11$, $101$, $107$, $109$,~$237$, and to~$n^{2}$ otherwise, it is true that $\phi(n) \to +\infty$. 
\Item{(2)} On the other hand we cannot as a rule alter an \emph{infinite} number of the values of~$\phi(n)$ without affecting fundamentally its behaviour as $n$~tends to~$\infty$. If for example we altered the function~$1/n$ by changing its value to~$1$ whenever $n$~is a multiple of~$100$, it would no longer be true that $\lim\phi(n) = 0$. So long as a finite number of values only were affected we could always choose the number~$n_{0}$ of the definition so as to be greater than the greatest of the values of~$n$ for which $\phi(n)$ was altered. In the examples above, for instance, we could always take $n_{0} > 237$, and indeed we should be compelled to do so as soon as our imaginary opponent \PageSep{119} of \SecNo[§]{56} had assigned a value of~$\DELTA$ as small as~$3$ (in the first example) or a value of~$\Delta$ as great as~$3$ (in the second). But now \emph{however} large $n_{0}$ may be there will be greater values of~$n$ for which $\phi(n)$~has been altered. \Item{(3)} In applying the test of Definition~I it is of course %[xref] absolutely essential that we should have $|\phi(n) - l| < \DELTA$ not merely when $n = n_{0}$ but when $n \geq n_{0}$, \ie\ \emph{for $n_{0}$ and for all larger values of~$n$}. It is obvious, for example, that, if $\phi(n)$~is the function last considered, then given~$\DELTA$ we can choose~$n_{0}$ so that $|\phi(n)| < \DELTA$ when $n = n_{0}$: we have only to choose a sufficiently large value of~$n$ which is not a multiple of~$100$. But, when $n_{0}$ is thus chosen, it is not true that $|\phi(n)| < \DELTA$ when $n \geq n_{0}$: all the multiples of~$100$ which are greater than~$n_{0}$ are exceptions to this statement. \Item{(4)} If $\phi(n)$ is always greater than~$l$, we can replace $|\phi(n) - l|$ by $\phi(n) - l$. Thus the test whether $1/n$~tends to the limit~$0$ as $n$~tends to~$\infty$ is simply whether $1/n < \DELTA$ when $n \geq n_{0}$. 
If however $\phi(n) = (-1)^{n}/n$, then $l$~is again~$0$, but $\phi(n) - l$ is sometimes positive and sometimes negative. In such a case we must state the condition in the form $|\phi(n) - l| < \DELTA$, for example, in this particular case, in the form $|\phi(n)| < \DELTA$. \Item{(5)} \emph{The limit~$l$ may itself be one of the actual values of $\phi(n)$.} Thus if $\phi(n) = 0$ for all values of~$n$, it is obvious that $\lim\phi(n) = 0$. Again, if we had, in (2)~and~(3) above, altered the value of the function, when $n$~is a multiple of~$100$, to~$0$ instead of to~$1$, we should have obtained a function $\phi(n)$ which is equal to~$0$ when $n$~is a multiple of~$100$ and to~$1/n$ otherwise. The limit of this function as $n$~tends to~$\infty$ is still obviously zero. This limit is itself the value of the function for an infinite number of values of~$n$, viz.\ all multiples of~$100$. On the other hand \emph{the limit itself need not \(and in general will not\) be the value of the function for any value of~$n$}. This is sufficiently obvious in the case of $\phi(n) = 1/n$. The limit is zero; but the function is never equal to zero for any value of~$n$. The reader cannot impress these facts too strongly on his mind. \Emph{A limit is not a value of the function}: it is something quite distinct from these values, though it is defined by its relations \PageSep{120} to them and may possibly be equal to some of them. 
For the functions \[ \phi(n) = 0,\ 1, \] the limit is equal to \emph{all} the values of~$\phi(n)$: for \[ \phi(n) = 1/n,\quad (-1)^{n}/n,\quad 1 + (1/n),\quad 1 + \{(-1)^{n}/n\} \] it is not equal to \emph{any} value of~$\phi(n)$: for \[ \phi(n) = (\sin\tfrac{1}{2}n\pi)/n,\quad 1 + \{(\sin\tfrac{1}{2}n\pi)/n\} \] (whose limits as $n$~tends to~$\infty$ are easily seen to be $0$~and~$1$, since $\sin\frac{1}{2}n\pi$ is never numerically greater than~$1$) the limit is equal to the value which $\phi(n)$ assumes for all even values of~$n$, but the values assumed for odd values of~$n$ are all different from the limit and from one another. \Item{(6)} A function may be always numerically very large when $n$~is very large without tending either to~$+\infty$ or to~$-\infty$. A sufficient illustration of this is given by $\phi(n) = (-1)^{n} n$. A function can only tend to~$+\infty$ or to~$-\infty$ if, after a certain value of~$n$, it maintains a constant sign. \begin{Examples}{XXIII.} Consider the behaviour of the following functions of~$\DPtypo{x}{n}$ as $n$~tends to~$\infty$: \Item{1.} $\phi(n) = n^{k}$, where $k$~is a positive or negative integer or rational fraction. If $k$~is positive, then $n^{k}$~tends to~$+\infty$ with~$n$. If $k$~is negative, then $\lim n^{k} = 0$. If $k = 0$, then $n^{k} = 1$ for all values of~$n$. Hence $\lim n^{k} = 1$. The reader will find it instructive, even in so simple a case as this, to write down a formal proof that the conditions of our definitions are satisfied. Take for instance the case of $k > 0$. Let $\Delta$ be any assigned number, however large. We wish to choose~$n_{0}$ so that $n^{k} > \Delta$ when $n \geq n_{0}$. We have in fact only to take for~$n_{0}$ any number greater than~$\sqrt[k]{\Delta}$. If \eg\ $k = 4$, then $n^{4} > 10\MC000$ when $n \geq 11$, $n^{4}> 100\MC000\MC000$ when $n \geq 101$, and so on. \Item{2.} $\phi(n) = p_{n}$, where $p_{n}$~is the $n$th~prime number. 
If there were only a finite number of primes then $\phi(n)$ would be defined only for a finite number of values of~$n$. There are however, as was first shown by Euclid, infinitely many primes. Euclid's proof is as follows. If there are only a finite number of primes, let them be $1$,~$2$, $3$, $5$, $7$, $11$,~\dots~$N$. Consider the number $1 + (1 \cdot 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \dots N)$. This number is evidently not divisible by any of $2$,~$3$, $5$,~\dots~$N$, since the remainder when it is divided by any of these numbers is~$1$. It is therefore not divisible by any prime save~$1$, and is therefore itself prime, which is contrary to our hypothesis. It is moreover obvious that $\phi(n) > n$ for all values of~$n$ (save $n = 1$, $2$,~$3$). Hence $\phi(n) \to +\infty$. \PageSep{121} \Item{3.} Let $\phi(n)$~be the number of primes less than~$n$. Here again $\phi(n) \to +\infty$. \Item{4.} $\phi(n) = [\alpha n]$, where $\alpha$~is any positive number. Here \[ \phi(n) = 0\quad (0 \leq n < 1 / \alpha),\qquad \phi(n) = 1\quad (1/\alpha \leq n < 2/\alpha), \] and so on; and $\phi(n) \to +\infty$. \Item{5.} If $\phi(n) = 1\MC000\MC000/n$, then $\lim\phi(n) = 0$: and if $\psi(n) = n/1\MC000\MC000$, then $\psi(n) \to +\infty$. These conclusions are in no way affected by the fact that at first $\phi(n)$~is much larger than~$\psi(n)$, being in fact larger until $n = 1\MC000\MC000$. \Item{6.} $\phi(n) = 1/\{n - (-1)^{n}\}$, $n - (-1)^{n}$, $n\{1 - (-1)^{n}\}$. The first function tends to~$0$, the second to~$+\infty$, the third does not tend either to a limit or to~$+\infty$. \Item{7.} $\phi(n) = (\sin n\theta\pi)/n$, where $\theta$~is any real number. Here $|\phi(n)| < 1/n$, since $|\sin n\theta\pi| \leq 1$, and $\lim\phi(n) = 0$. \Item{8.} $\phi(n) = (\sin n\theta\pi)/\sqrt{n}$, $(a\cos^{2} n\theta + b\sin^{2}n\theta)/n$, where $a$~and~$b$ are any real numbers. \Item{9.} $\phi(n) = \sin n\theta\pi$.
If $\theta$~is integral then $\phi(n) = 0$ for all values of~$n$, and therefore $\lim\phi(n) = 0$. Next let $\theta$~be rational, \eg\ $\theta = p/q$, where $p$~and~$q$ are positive integers. Let $n = aq + b$ where $a$~is the quotient and $b$~the remainder when $n$~is divided by~$q$. Then $\sin(np\pi/q) = (-1)^{ap}\sin(bp\pi/q)$. Suppose, for example, $p$~even; then, as $n$~increases from~$0$ to~$q - 1$, $\phi(n)$~takes the values \[ 0,\quad \sin(p\pi/q),\quad \sin(2p\pi/q),\ \dots\quad \sin\{(q - 1)p\pi/q\}. \] When $n$~increases from~$q$ to~$2q - 1$ these values are repeated; and so also as $n$~goes from $2q$~to~$3q - 1$, $3q$~to~$4q - 1$, and so on. Thus the values of~$\phi(n)$ form \emph{a perpetual cyclic repetition of a finite series of different values}. It is evident that when this is the case $\phi(n)$~cannot tend to a limit, nor to~$+\infty$, nor to~$-\infty$, as $n$~tends to infinity. The case in which $\theta$~is irrational is a little more difficult. It is discussed in the next set of examples. \end{Examples} \Paragraph{62. Oscillating Functions.} \begin{Definition} When $\phi(n)$ does not tend to a limit, nor to~$+\infty$, nor to~$-\infty$, as $n$~tends to~$\infty$, we say that $\phi(n)$ \Emph{oscillates} as $n$~tends to~$\infty$. \end{Definition} A function $\phi(n)$ certainly oscillates if its values form, as in the case considered in the last example above, a continual repetition of a cycle of values. But of course it may oscillate without possessing this peculiarity. Oscillation is defined in a purely negative manner: a function oscillates when it does not do certain other things. \PageSep{122} The simplest example of an oscillatory function is given by \[ \phi(n) = (-1)^{n}, \] which is equal to~$+1$ when $n$~is even and to~$-1$ when $n$~is odd. In this case the values recur cyclically. But consider \[ \phi(n) = (-1)^{n} + (1/n), \] the values of which are \[ -1 + 1,\quad 1 + (1/2),\quad -1 + (1/3),\quad 1 + (1/4),\quad -1 + (1/5),\ \dots. 
\] When $n$~is large every value is nearly equal to~$+1$ or~$-1$, and obviously $\phi(n)$~does not tend to a limit or to~$+\infty$ or to~$-\infty$, and therefore it oscillates: but the values do not recur. It is to be observed that in this case every value of~$\phi(n)$ is numerically less than or equal to~$3/2$. Similarly \[ \phi(n) = (-1)^{n} 100 + (1000/n) \] oscillates. When $n$~is large, every value is nearly equal to~$100$ or to~$-100$. The numerically greatest value is~$900$ (for $n = 1$). But now consider $\phi(n) = (-1)^{n}n$, the values of which are $-1$, $2$, $-3$, $4$, $-5$,~\dots. This function oscillates, for it does not tend to a limit, nor to~$+\infty$, nor to~$-\infty$. And in this case we cannot assign any limit beyond which the numerical value of the terms does not rise. The distinction between these two examples suggests a further definition. \begin{Definition} If $\phi(n)$ oscillates as $n$~tends to~$\infty$, then $\phi(n)$~will be said to \Emph{oscillate finitely} or \Emph{infinitely} according as it is or is not possible to assign a number~$K$ such that all the values of~$\phi(n)$ are numerically less than~$K$, \ie\ $|\phi(n)| < K$ for all values of~$n$. \end{Definition} These definitions, as well as those of \SecNo[§§]{58}~and~\SecNo{60}, are further illustrated in the following examples. \begin{Examples}{XXIV.} Consider the behaviour as $n$~tends to~$\infty$ of the following functions: \Item{1.} $(-1)^{n}$, $5 + 3(-1)^{n}$, $(1\MC000\MC000/n) + (-1)^{n}$, $1\MC000\MC000(-1)^{n} + (1/n)$. \Item{2.} $(-1)^{n}n$, $1\MC000\MC000 + (-1)^{n}n$. \Item{3.} $1\MC000\MC000 - n$, $(-1)^{n}(1\MC000\MC000 - n)$. \Item{4.} $n\{1 + (-1)^{n}\}$. In this case the values of~$\phi(n)$ are \[ 0,\quad 4,\quad 0,\quad 8,\quad 0,\quad 12,\quad 0,\quad 16,\ \dots. \] The odd terms are all zero and the even terms tend to~$+\infty$: $\phi(n)$~oscillates infinitely. \PageSep{123} \Item{5.} $n^{2} + (-1)^{n}2n$. 
The second term oscillates infinitely, but the first is very much larger than the second when $n$~is large. In fact $\phi(n) \geq n^{2} - 2n$ and $n^{2} - 2n = (n - 1)^{2} - 1$ is greater than any assigned value~$\Delta$ if $n > 1 + \sqrtp{\Delta + 1}$. Thus $\phi(n) \to +\infty$. It should be observed that in this case $\phi(2k + 1)$~is always less than~$\phi(2k)$, so that the function progresses to infinity by a continual series of steps forwards and backwards. It does not however `oscillate' according to our definition of the term. \Item{6.} $n^{2}\{1 + (-1)^{n}\}$, $(-1)^{n}n^{2} + n$, $n^{3} + (-1)^{n}n^{2}$. \Item{7.} $\sin n\theta\pi$. We have already seen (\Exs{xxiii}.~9) that $\phi(n)$~oscillates finitely when $\theta$~is rational, unless $\theta$~is an integer, when $\phi(n)= 0$, $\phi(n) \to 0$. The case in which $\theta$~is irrational is a little more difficult. But it is not difficult to see that $\phi(n)$~still oscillates finitely. We can without loss of generality suppose $0 < \theta < 1$. In the first place $|\phi(n)| < 1$. Hence $\phi(n)$~must oscillate finitely or tend to a limit. We shall consider whether the second alternative is really possible. Let us suppose that \[ \lim \sin n\theta\pi = l. \] {\Loosen Then, however small $\DELTA$ may be, we can choose~$n_{0}$ so that $\sin n\theta\pi$ lies between $l - \DELTA$ and $l + \DELTA$ for all values of~$n$ greater than or equal to~$n_{0}$. Hence $\sin(n + 1)\theta\pi - \sin n\theta\pi$ is numerically less than~$2\DELTA$ for all such values of~$n$, and so $|\sin \frac{1}{2}\theta\pi \cos(n + \frac{1}{2})\theta\pi| < \DELTA$.} Hence \[ \cos(n + \tfrac{1}{2})\theta\pi = \cos n\theta\pi \cos\tfrac{1}{2}\theta\pi - \sin n\theta\pi \sin\tfrac{1}{2}\theta\pi \] must be numerically less than~$\DELTA/|\sin\frac{1}{2}\theta\pi|$. 
Similarly \[ \cos(n - \tfrac{1}{2})\theta\pi = \cos n\theta\pi \cos\tfrac{1}{2}\theta\pi + \sin n\theta\pi \sin\tfrac{1}{2}\theta\pi \] must be numerically less than~$\DELTA/|\sin\frac{1}{2}\theta\pi|$; and so each of $\cos n\theta\pi \cos\frac{1}{2}\theta\pi$, $\sin n\theta\pi \sin\frac{1}{2}\theta\pi$ must be numerically less than $\DELTA/|\sin\frac{1}{2}\theta\pi|$. That is to say, $\cos n\theta\pi \cos\frac{1}{2}\theta\pi$ is very small if $n$~is large, and this can only be the case if $\cos n\theta\pi$ is very small. Similarly $\sin n\theta\pi$ must be very small, so that $l$~must be zero. But it is impossible that $\cos n\theta\pi$ and $\sin n\theta\pi$ can \emph{both} be very small, as the sum of their squares is unity. Thus the hypothesis that $\sin n\theta\pi$ tends to a limit~$l$ is impossible, and therefore $\sin n\theta\pi$ oscillates as $n$~tends to~$\infty$. {\Loosen The reader should consider with particular care the argument `$\cos n\theta\pi \cos\frac{1}{2}\theta\pi$ is very small, and this can only be the case if $\cos n\theta\pi$ is very small'. Why, he may ask, should it not be the other factor $\cos\frac{1}{2}\theta\pi$ which is `very small'? The answer is to be found, of course, in the meaning of the phrase `very small' as used in this connection. When we say `$\phi(n)$~is very small' for large values of~$n$, we mean that we can choose~$n_{0}$ so that $\phi(n)$~is numerically smaller than \emph{any} assigned number, if \DPchg{$n$~is sufficiently large}{$n \geq n_{0}$}. Such an assertion is palpably absurd when made of a \emph{fixed} number such as~$\cos\frac{1}{2}\theta\pi$, which is not zero.} Prove similarly that $\cos n\theta\pi$ oscillates finitely, unless $\theta$~is an even integer. \Item{8.} $\sin n\theta\pi + (1/n)$, $\sin n\theta\pi + 1$, $\sin n\theta\pi + n$, $(-1)^{n} \sin n\theta\pi$. \Item{9.} $a\cos n\theta\pi + b\sin n\theta\pi$, $\sin^{2}n\theta\pi$, $a\cos^{2}n\theta\pi + b\sin^{2}n\theta\pi$. 
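[The second of these functions may be reduced to \Exs[]{xxiv}.~7 by means of the identity
\[
\sin^{2} n\theta\pi = \tfrac{1}{2}(1 - \cos 2n\theta\pi):
\]
thus it oscillates finitely unless $\theta$~is integral, when $\phi(n) = 0$ and $\phi(n) \to 0$.]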
\PageSep{124} \Item{10.} $a + bn + (-1)^{n} (c + dn) + e\cos n\theta\pi + f\sin n\theta\pi$. \Item{11.} $n\sin n\theta\pi$. If $\DPtypo{n}{\theta}$ is integral, then $\phi(n) = 0$, $\phi(n) \to 0$. If $\theta$~is rational but not integral, or irrational, then $\phi(n)$~oscillates infinitely. \Item{12.} $n(a\cos^{2} n\theta\pi + b\sin^{2} n\theta\pi)$. In this case $\phi(n)$~tends to~$+\infty$ if $a$~and~$b$ are both positive, but to~$-\infty$ if both are negative. Consider the special cases in which $a = 0$, $b > 0$, or $a > 0$, $b = 0$, or $a = 0$, $b = 0$. If $a$~and~$b$ have opposite signs $\phi(n)$~generally oscillates infinitely. Consider any exceptional cases. \Item{13.} $\sin(n^{2}\theta\pi)$. If $\theta$~is integral, then $\phi(n) \to 0$. Otherwise $\phi(n)$~oscillates finitely, as may be shown by arguments similar to though more complex than those used in \Exs{xxiii}.~9 and \Exs[]{xxiv}.~7.\footnote {See Bromwich's \textit{Infinite Series}, p.~485.} \Item{14.} $\sin(n!\, \theta\pi)$. If $\theta$~has a rational value~$p/q$, then $n!\, \theta$~is certainly integral for all values of $n$ greater than or equal to~$q$. Hence $\phi(n) \to 0$. The case in which $\theta$~is irrational cannot be dealt with without the aid of considerations of a much more difficult character. \Item{15.} $\cos(n!\, \theta\pi)$, $a\cos^{2}(n!\, \theta\pi) + b\sin^{2}(n!\, \theta\pi)$, where $\theta$~is rational. \Item{16.} $an - [bn]$, $(-1)^{n}(an - [bn])$. \Item{17.} $[\sqrt{n}]$, $(-1)^{n}[\sqrt{n}]$, $\sqrt{n} - [\sqrt{n}]$. \Item{18.} \emph{The smallest prime factor of~$n$}. When $n$~is a prime, $\phi(n) = n$. When $n$~is even, $\phi(n) = 2$. Thus $\phi(n)$ oscillates infinitely. \Item{19.} \emph{The largest prime factor of~$n$}. \Item{20.} \emph{The number of days in the year~$n$~\textsc{a.d.}} \end{Examples} \begin{Examples}{XXV.} \Item{1.} If $\phi(n) \to +\infty$ and $\psi(n) \geq \phi(n)$ for all values of~$n$, then $\psi(n) \to +\infty$. 
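[For, given any~$\Delta$, we can choose~$n_{0}$ so that $\phi(n) > \Delta$ when $n \geq n_{0}$; and then also $\psi(n) \geq \phi(n) > \Delta$ when $n \geq n_{0}$, which is what the definition requires.]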
\Item{2.} If $\phi(n) \to 0$, and $|\psi(n)| \leq |\phi(n)|$ for all values of~$n$, then $\psi(n) \to 0$. \Item{3.} If $\lim |\phi(n)| = 0$, then $\lim \phi(n) = 0$. \Item{4.} If $\phi(n)$ tends to a limit or oscillates finitely, and $|\psi(n)| \leq |\phi(n)|$ when $n \geq n_{0}$, then $\psi(n)$~tends to a limit or oscillates finitely. \Item{5.} If $\phi(n)$ tends to~$+\infty$, or to~$-\infty$, or oscillates infinitely, and \[ |\psi(n)| \geq |\phi(n)| \] when $n \geq n_{0}$, then $\psi(n)$~tends to~$+\infty$ or to~$-\infty$ or oscillates infinitely. \Item{6.} `If $\phi(n)$~oscillates and, however great be~$n_{0}$, we can find values of~$n$ greater than~$n_{0}$ for which $\psi(n) > \phi(n)$, and values of~$n$ greater than~$n_{0}$ for which $\psi(n) < \phi(n)$, then $\psi(n)$ oscillates'. Is this true? If not give an example to the contrary. \Item{7.} If $\phi(n) \to l$ as $n \to \infty$, then also $\phi(n + p) \to l$, $p$~being any fixed integer. [This follows at once from the definition. Similarly we see that if $\phi(n)$~tends to~$+\infty$ or~$-\infty$ or oscillates so also does~$\phi(n + p)$.] \Item{8.} The same conclusions hold (except in the case of oscillation) if $p$~varies with~$n$ but is always numerically less than a fixed positive integer~$N$; or if $p$~varies with~$n$ in any way, so long as it is always positive. \PageSep{125} \Item{9.} Determine the least value of~$n_{0}$ for which it is true that \[ \Item{(\ia)}\ n^{2} + 2n > 999\MC999\quad (n \geq n_{0}),\qquad \Item{(\ib)}\ n^{2} + 2n > 1\MC000\MC000\quad (n \geq n_{0}). \] \Item{10.} Determine the least value of~$n_{0}$ for which it is true that \[ \Item{(\ia)}\ n + (-1)^{n} > 1000\quad (n \geq n_{0}),\qquad \Item{(\ib)}\ n + (-1)^{n} > 1\MC000\MC000\quad (n \geq n_{0}). 
\] \Item{11.} Determine the least value of~$n_{0}$ for which it is true that \[ \Item{(\ia)}\ n^{2} + 2n > \Delta\quad (n \geq n_{0}),\qquad \Item{(\ib)}\ n + (-1)^{n} > \Delta\quad (n \geq n_{0}), \] $\Delta$~being any positive number. [(\ia)~$n_{0} = [\sqrtp{\Delta + 1}]$: (\ib)~$n_{0} = 1 + [\Delta]$ or $2 + [\Delta]$, according as $[\Delta]$~is odd or even, \ie\ $n_{0} = 1 + [\Delta] + \frac{1}{2} \{1 + (-1)^{[\Delta]}\}$.] \Item{12.} Determine the least value of~$n_{0}$ such that \[ \Item{(\ia)}\ n/(n^{2} + 1) < .0001,\qquad \Item{(\ib)}\ (1/n) + \{(-1)^{n}/n^{2}\} < .000\MS001, \] when $n \geq n_{0}$. [Let us take the latter case. In the first place \[ (1/n) + \{(-1)^{n}/n^{2}\} \leq (n + 1)/n^{2}, \] with equality when $n$~is even; and it is easy to verify that the least value of~$n_{0}$, such that $(n + 1)/n^{2} < .000\MS001$ when $n \geq n_{0}$, is~$1\MC000\MC001$, since $10^{6}(n + 1) < n^{2}$ when $n = 1\MC000\MC001$ but not when $n = 1\MC000\MC000$. And since the inequality given fails when $n = 1\MC000\MC000$, the function being then equal to $(n + 1)/n^{2}$, this is also the value of~$n_{0}$ required.] \end{Examples} \Paragraph{63. Some general theorems with regard to limits.} \Topic{\Item{A.} The behaviour of the sum of two functions whose behaviour is known.} \begin{Theorem}[I.] If $\phi(n)$~and~$\psi(n)$ tend to limits $a$,~$b$, then $\phi(n) + \psi(n)$ tends to the limit $a + b$. \end{Theorem} This is almost obvious.\footnote {There is a certain ambiguity in this phrase which the reader will do well to notice. When one says `such and such a theorem is almost obvious' one may mean one or other of two things. One may mean `it is difficult to doubt the truth of the theorem', `the theorem is such as common-sense instinctively accepts', as it accepts, for example, the truth of the propositions `$2 + 2 = 4$' or `the base-angles of an isosceles triangle are equal'.
That a theorem is `obvious' in this sense does not prove that it is true, since the most confident of the intuitive judgments of common sense are often found to be mistaken; and even if the theorem is true, the fact that it is also `obvious' is no reason for not proving it, if a proof can be found. The object of mathematics is to prove that certain premises imply certain conclusions; and the fact that the conclusions may be as `obvious' as the premises never detracts from the necessity, and often not even from the interest of the proof. But sometimes (as for example here) we mean by `this is almost obvious' something quite different from this. We mean `a moment's reflection should not only convince the reader of the truth of what is stated, but should also suggest to him the general lines of a rigorous proof'. And often, when a statement is `obvious' in this sense, one may well omit the proof, not because the proof is in any sense unnecessary, but because it is a waste of time and space to state in detail what the reader can easily supply for himself.} The argument which the reader will \PageSep{126} at once form in his mind is roughly this: `when $n$~is large, $\phi(n)$~is nearly equal to~$a$ and $\psi(n)$ to~$b$, and therefore their sum is nearly equal to $a + b$'. It is well to state the argument quite formally, however. Let $\DELTA$ be any assigned positive number (\eg\ $.001$, $.000\MS000\MS1$,~\dots). We require to show that a number~$n_{0}$ can be found such that \[ |\phi(n) + \psi(n) - a - b| < \DELTA, \Tag{(1)} \] when $n \geq n_{0}$. Now by a proposition proved in \Ref{Chap.}{III} (more generally indeed than we need here) the modulus of the sum of two numbers is less than or equal to the sum of their moduli. Thus \[ |\phi(n) + \psi(n) - a - b| \leq |\phi(n) - a| + |\psi(n) - b|. \] It follows that the desired condition will certainly be satisfied if $n_{0}$~can be so chosen that \[ |\phi(n) - a| + |\psi(n) - b| < \DELTA, \Tag{(2)} \] when $n \geq n_{0}$. 
But this is certainly the case. For since $\lim\phi(n) = a$ we can, by the definition of a limit, find~$n_{1}$ so that $|\phi(n) - a| < \DELTA'$ when $n \geq n_{1}$, and this however small $\DELTA'$ may be. Nothing prevents our taking $\DELTA' = \frac{1}{2}\DELTA$, so that $|\phi(n) - a| < \frac{1}{2}\DELTA$ when $n \geq n_{1}$. Similarly we can find~$n_{2}$ so that $|\psi(n) - b| < \frac{1}{2}\DELTA$ when $n \geq n_{2}$. Now take $n_{0}$ to be \emph{the greater of the two numbers $n_{1}$,~$n_{2}$}. Then $|\phi(n) - a| < \frac{1}{2}\DELTA$ and $|\psi(n) - b| < \frac{1}{2}\DELTA$ when $n \geq n_{0}$, and therefore \Eq{(2)}~is satisfied and the theorem is proved. \begin{Remark} The argument may be concisely stated thus: since $\lim\phi(n) = a$ and $\lim\psi(n) = b$, we can choose $n_{1}$,~$n_{2}$ so that \[ |\phi(n) - a| < \tfrac{1}{2}\DELTA\quad (n \geq n_{1}),\qquad |\psi(n) - b| < \tfrac{1}{2}\DELTA\quad (n \geq n_{2}); \] and then, if $n$~is not less than either $n_{1}$~or~$n_{2}$, \[ |\phi(n) + \psi(n) - a - b| \leq |\phi(n) - a| + |\DPtypo{\phi}{\psi}(n) - b| < \DELTA; \] and therefore \[ \lim\{\phi(n) + \psi(n)\} = a + b. \] \end{Remark} \Paragraph{64. Results subsidiary to Theorem~I.} The reader should have no difficulty in verifying the following subsidiary results. \begin{Result} \Item{1.} If $\phi(n)$~tends to a limit, but $\psi(n)$~tends to~$+\infty$ or to~$-\infty$ or oscillates finitely or infinitely, then $\phi(n) + \psi(n)$ behaves like~$\psi(n)$. \end{Result} \begin{Result} \Item{2.} {\Loosen If $\phi(n) \to +\infty$, and $\psi(n) \to +\infty$ or oscillates finitely, then $\phi(n) + \psi(n) \to +\infty$.} \end{Result} \PageSep{127} In this statement we may obviously change $+\infty$ into~$-\infty$ throughout. \begin{Result} \Item{3.} If $\phi(n) \to \DPchg{\infty}{+\infty}$ and $\psi(n) \to -\infty$, then $\phi(n) + \psi(n)$ may tend either to a limit or to~$+\infty$ or to~$-\infty$ or may oscillate either finitely or infinitely. 
\end{Result} \begin{Remark} These five possibilities are illustrated in order by (i)~$\phi(n) = n$, $\psi(n) = -n$, (ii)~$\phi(n) = n^{2}$, $\psi(n) = -n$, (iii)~$\phi(n) = n$, $\psi(n) = -n^{2}$, (iv)~$\phi(n) = n + (-1)^{n}$, $\psi(n) = -n$, (v)~$\phi(n) = n^{2} + (-1)^{n}n$, $\psi(n) = -n^{2}$. The reader should construct additional examples of each case. \end{Remark} \begin{Result} \Item{4.} If $\phi(n) \to +\infty$ and $\psi(n)$~oscillates infinitely, then $\phi(n) + \psi(n)$ may tend to~$+\infty$ or oscillate infinitely, but cannot tend to a limit, or to~$-\infty$, or oscillate finitely. \end{Result} \begin{Remark} For $\psi(n) = \{\phi(n) + \psi(n)\} - \phi(n)$; and, if $\phi(n) + \psi(n)$ behaved in any of the three last ways, it would follow, from the previous results, that $\psi(n) \to -\infty$, which is not the case. As examples of the two cases which are possible, consider (i)~$\phi(n) = n^{2}$, $\psi(n) = (-1)^{n}n$, (ii)~$\phi(n) = n$, $\psi(n) = (-1)^{n}n^{2}$. Here again the signs of~$+\infty$ and~$-\infty$ may be permuted throughout. \end{Remark} \begin{Result} \Item{5.} If $\phi(n)$ and $\psi(n)$ both oscillate finitely, then $\phi(n) + \psi(n)$ must tend to a limit or oscillate finitely. \end{Result} \begin{Remark} As examples take \[ \Itemp{(i)} \phi(n) = (-1)^{n},\quad \psi(n) = (-1)^{n+1},\qquad \Itemp{(ii)} \phi(n) = \psi(n) = (-1)^{n}. \] \end{Remark} \begin{Result} \Item{6.} If $\phi(n)$ oscillates finitely, and $\psi(n)$~infinitely, then $\phi(n) + \psi(n)$ oscillates infinitely. \end{Result} \begin{Remark} For $\phi(n)$ is in absolute value always less than a certain constant, say~$K$. On the other hand $\psi(n)$, since it oscillates infinitely, must assume values numerically greater than any assignable number (\eg\ $10K$, $100K$,~\dots). Hence $\phi(n) + \psi(n)$ must assume values numerically greater than any assignable number (\eg\ $9K$, $99K$,~\dots). 
Hence $\phi(n) + \psi(n)$ must either tend to~$+\infty$ or~$-\infty$ or oscillate infinitely. But if it tended to~$+\infty$ then \[ \psi(n) = \{\phi(n) + \psi(n)\} - \phi(n) \] would also tend to~$+\infty$, in virtue of the preceding results. Thus $\phi(n) + \psi(n)$ cannot tend to~$+\infty$, nor, for similar reasons, to~$-\infty$: hence it oscillates infinitely. \end{Remark} \begin{Result} \Item{7.} If both $\phi(n)$ and $\psi(n)$ oscillate infinitely, then $\phi(n) + \psi(n)$ may tend to a limit, or to~$+\infty$, or to~$-\infty$, or oscillate either finitely or infinitely. \end{Result} \begin{Remark} Suppose, for instance, that $\phi(n) = (-1)^{n}n$, while $\psi(n)$~is in turn each of the functions $(-1)^{n+1}n$, $\{1 + (-1)^{n+1}\}n$, $-\{1 + (-1)^{n}\}n$, $(-1)^{n+1}(n + 1)$, $(-1)^{n}n$. We thus obtain examples of all five possibilities. \end{Remark} \PageSep{128} The results 1--7 cover all the cases which are really distinct. Before passing on to consider the product of two functions, we may point out that the result of Theorem~I may be immediately extended to the sum of three or more functions which tend to limits as $n\to\infty$. \Paragraph{65.} \Topic{\Item{B.} The behaviour of the product of two functions whose behaviour is known.} We can now prove a similar set of theorems concerning the product of two functions. The principal result is the following. \begin{Theorem}[II.] If $\lim\phi(n) = a$ and $\lim\psi(n) = b$, then \[ \lim\phi(n)\psi(n) = ab. \] \end{Theorem} Let \[ \phi(n) = a + \phi_{1}(n),\quad \psi(n) = b + \psi_{1}(n), \] so that $\lim\phi_{1}(n) = 0$ and $\lim\psi_{1}(n) = 0$. Then \[ \phi(n)\psi(n) = ab + a\psi_{1}(n) + b\phi_{1}(n) + \phi_{1}(n)\psi_{1}(n). \] Hence the numerical value of the difference $\phi(n)\psi(n) - ab$ is certainly not greater than the sum of the numerical values of $a\psi_{1}(n)$, $b\phi_{1}(n)$, $\phi_{1}(n)\psi_{1}(n)$. From this it follows that \[ \lim\{\phi(n)\psi(n) - ab\} = 0, \] which proves the theorem. 
\begin{Remark} The following is a strictly formal proof. We have \[ |\phi(n)\psi(n) - ab| \leq |a\psi_{1}(n)| + |b\phi_{1}(n)| + |\phi_{1}(n)| |\psi_{1}(n)|. \] Assuming that neither $a$~nor~$b$ is zero, we may choose~$n_{0}$ so that \[ |\phi_{1}(n)| < \tfrac{1}{3}\DELTA/|b|,\quad |\psi_{1}(n)| < \tfrac{1}{3}\DELTA/|a|, \] when $n \geq n_{0}$. Then \[ |\phi(n)\psi(n) - ab| < \tfrac{1}{3}\DELTA + \tfrac{1}{3}\DELTA + \{\tfrac{1}{9}\DELTA^{2}/(|a||b|)\}, \] which is certainly less than~$\DELTA$ if $\DELTA < \frac{1}{3}|a||b|$. That is to say we can choose~$n_{0}$ so that $|\phi(n)\psi(n) - ab| < \DELTA$ when $n \geq n_{0}$, and so the theorem follows. The reader should supply a proof for the case in which at least one of $a$~and~$b$ is zero. \end{Remark} We need hardly point out that this theorem, like Theorem~I, may be immediately extended to the product of any number of functions of~$n$. There is also a series of subsidiary theorems concerning products analogous to those stated in \SecNo[§]{64} for sums. We must distinguish now \emph{six} different ways in which $\phi(n)$~may behave as $n$~tends to~$\infty$. It may (1)~tend to a limit \emph{other than \PageSep{129} zero}, (2)~tend to zero, (3\ia)~tend to~$+\infty$, (3\ib)~tend to~$-\infty$, (4)~oscillate finitely, (5)~oscillate infinitely. It is not necessary, as a rule, to take account separately of (3\ia)~and~(3\ib), as the results for one case may be deduced from those for the other by a change of sign. \begin{Remark} To state these subsidiary theorems at length would occupy more space than we can afford. We select the two which follow as examples, leaving the verification of them to the reader. He will find it an instructive exercise to formulate some of the remaining theorems himself. \begin{Result} \Itemp{(i)} If $\phi(n) \to +\infty$ and~$\psi(n)$~oscillates finitely, then $\phi(n)\psi(n)$ must tend to~$+\infty$ or to~$-\infty$ or oscillate infinitely. 
\end{Result} Examples of these three possibilities may be obtained by taking $\phi(n)$ to be~$n$ and $\psi(n)$ to be one of the three functions $2 + (-1)^{n}$, $-2 - (-1)^{n}$, $(-1)^{n}$. \begin{Result} \Itemp{(ii)} If $\phi(n)$ and~$\psi(n)$ oscillate finitely, then $\phi(n)\psi(n)$ must tend to a limit \(which may be zero\) or oscillate finitely. \end{Result} {\Loosen For examples, take (\ia)~$\phi(n) = \psi(n) = (-1)^{n}$, (\ib)~$\phi(n) = 1 + (-1)^{n}$, $\psi(n) = 1 - (-1)^{n}$, and (\ic)~$\phi(n) = \cos\frac{1}{3}n\pi$, $\psi(n) = \sin\tfrac{1}{3} n\pi$.} \end{Remark} A particular case of Theorem~II which is important is that in which $\psi(n)$~is constant. The theorem then asserts simply that $\lim k\phi(n) = ka$ if $\lim\phi(n) = a$. To this we may join the subsidiary theorem that if $\phi(n) \to +\infty$ then $k\phi(n) \to +\infty$ or $k\phi(n) \to -\infty$, according as $k$~is positive or negative, unless $k = 0$, when of course $k\phi(n) = 0$ for all values of~$n$ and $\lim k\phi(n) = 0$. And if $\phi(n)$~oscillates finitely or infinitely, then so does $k\phi(n)$, unless $k = 0$. \Paragraph{66.} \Topic{\Item{C.} The behaviour of the difference or quotient of two functions whose behaviour is known.} There is, of course, a similar set of theorems for the difference of two given functions, which are obvious corollaries from what precedes. In order to deal with the quotient \[ \frac{\phi(n)}{\psi(n)}, \] we begin with the following theorem. \begin{Theorem}[III.] If $\lim\phi(n) = a$, and $a$~is not zero, then \[ \lim\frac{1}{\phi(n)} = \frac{1}{a}. \] \end{Theorem} Let \[ \phi(n) = a + \phi_{1}(n), \] \PageSep{130} so that $\lim\phi_{1}(n) = 0$. Then \[ \left|\frac{1}{\phi(n)} - \frac{1}{a}\right| = \frac{|\phi_{1}(n)|}{|a| |a + \phi_{1}(n)|}, \] and it is plain, since $\lim\phi_{1}(n) = 0$, that we can choose~$n_{0}$ so that this is smaller than any assigned number~$\DELTA$ when $n \geq n_{0}$. 
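\begin{Remark}
The choice of~$n_{0}$ may be made quite explicit. Since $\lim\phi_{1}(n) = 0$, we can choose~$n_{0}$ so that $|\phi_{1}(n)|$ is less than each of $\frac{1}{2}|a|$ and $\frac{1}{2}|a|^{2}\DELTA$ when $n \geq n_{0}$. Then $|a + \phi_{1}(n)| \geq |a| - |\phi_{1}(n)| > \frac{1}{2}|a|$, and so
\[
\left|\frac{1}{\phi(n)} - \frac{1}{a}\right|
= \frac{|\phi_{1}(n)|}{|a| |a + \phi_{1}(n)|}
< \frac{\tfrac{1}{2}|a|^{2}\DELTA}{\tfrac{1}{2}|a|^{2}} = \DELTA
\]
when $n \geq n_{0}$.
\end{Remark}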
From Theorems II~and~III we can at once deduce the principal theorem for quotients, viz.\ \begin{Theorem}[IV.] If $\lim\phi(n) = a$ and $\lim\psi(n) = b$, and $b$~is not zero, then \[ \lim\frac{\phi(n)}{\psi(n)} = \frac{a}{b}. \] \end{Theorem} The reader will again find it instructive to formulate, prove, and illustrate by examples some of the `subsidiary theorems' corresponding to Theorems III~and~IV. \Paragraph{67.} \begin{Theorem}[V.] If $R\{\phi(n), \psi(n), \chi(n), \dots\}$ is any rational function of $\phi(n)$, $\psi(n)$, $\chi(n)$,~\dots, \ie\ any function of the form \[ P\{\phi(n), \psi(n), \chi(n), \dots\}/Q\{\phi(n), \psi(n), \chi(n), \dots\}, \] where $P$~and~$Q$ denote polynomials in $\phi(n)$, $\psi(n)$, $\chi(n)$,~\dots: and if \[ \lim\phi(n) = a,\quad \lim\psi(n) = b,\quad \lim\chi(n) = c,\ \dots, \] and \[ Q(a, b, c, \dots) \neq 0; \] then \[ \lim R\{\phi(n), \psi(n), \chi(n), \dots\} = R(a, b, c, \dots). \] \end{Theorem} For $P$~is a sum of a finite number of terms of the type \[ A\{\phi(n)\}^{p} \{\psi(n)\}^{q} \dots, \] where $A$~is a constant and $p$,~$q$,~\dots\ positive integers. This term, by Theorem~II (or rather by its obvious extension to the product of any number of functions) tends to the limit $Aa^{p}b^{q}\dots$, and so $P$~tends to the limit $P(a, b, c, \dots)$, by the similar extension of Theorem~I\@. Similarly $Q$~tends to $Q(a, b, c, \dots)$; and the result then follows from Theorem~IV. \Paragraph{68.} The preceding general theorem may be applied to the following very important particular problem: \emph{what is the behaviour of the most general rational function of~$n$, viz. 
\[ S(n) = \frac{a_{0}n^{p} + a_{1}n^{p-1} + \dots + a_{p}} {b_{0}n^{q} + b_{1}n^{q-1} + \dots + b_{q}}, \] as $n$~tends to~$\infty$?}\footnote {We naturally suppose that neither $a_{0}$~nor~$b_{0}$ is zero.} \PageSep{131} In order to apply the theorem we transform $S(n)$ by writing it in the form \[ n^{p-q}\left\{ \biggl(a_{0} + \frac{a_{1}}{n} + \dots + \frac{a_{p}}{n^{p}}\biggr)\bigg/ \biggl(b_{0} + \frac{b_{1}}{n} + \dots + \frac{b_{q}}{n^{q}}\biggr) \right\}. \] The function in curly brackets is of the form $R\{\phi(n)\}$, where $\phi(n) = 1/n$, and therefore tends, as $n$~tends to~$\infty$, to the limit $R(0) = a_{0}/b_{0}$. Now $n^{p-q} \to 0$ if $p < q$; $n^{p-q} = 1$ and $n^{p-q} \to 1$ if $p = q$; and $n^{p-q} \to +\infty$ if $p > q$. Hence, by Theorem~II, \begin{gather*} \lim S(n) = 0\quad (p < q), \\ \lim S(n) = a_{0}/b_{0}\quad (p = q), \\ S(n) \to +\infty\quad (p > q,\ \text{$a_{0}/b_{0}$ \emph{positive}}), \\ S(n) \to -\infty\quad (p > q,\ \text{$a_{0}/b_{0}$ \emph{negative}}). \end{gather*} \begin{Examples}{XXVI.} \Item{1.} What is the behaviour of the functions \[ \left(\frac{n - 1}{n + 1}\right)^{2},\quad (-1)^{n} \left(\frac{n - 1}{n + 1}\right)^{2},\quad \frac{n^{2} + 1}{n},\quad (-1)^{n} \frac{n^{2} + 1}{n}, \] as $n\to\infty$? \Item{2.} Which (if any) of the functions \begin{gather*} 1/(\cos^{2}\tfrac{1}{2}n\pi + n\sin^{2}\tfrac{1}{2}n\pi),\quad 1/\{n(\cos^{2}\tfrac{1}{2}n\pi + n\sin^{2}\tfrac{1}{2}n\pi)\}, \\ (n\cos^{2}\tfrac{1}{2}n\pi + \sin^{2}\tfrac{1}{2}n\pi)/ \{n(\cos^{2}\tfrac{1}{2}n\pi + n\sin^{2}\tfrac{1}{2}n\pi)\} \end{gather*} tend to a limit as $n \to \infty$? \Item{3.} Denoting by~$S(n)$ the general rational function of~$n$ considered above, show that in all cases \[ \lim\frac{S(n + 1)}{S(n)} = 1,\quad \lim\frac{S\{n + (1/n)\}}{S(n)} = 1. \] \end{Examples} \Paragraph{69. 
Functions of~$n$ which increase steadily with~$n$.} A special but particularly important class of functions of~$n$ is formed by those whose variation as $n$~tends to~$\infty$ is always in the same direction, that is to say those which always increase (or always decrease) as $n$~increases. Since $-\phi(n)$ always increases if $\phi(n)$ always decreases, it is not necessary to consider the two kinds of functions separately; for theorems proved for one kind can at once be extended to the other. \begin{Definition} The function $\phi(n)$ will be said to increase steadily with~$n$ if $\phi(n + 1) \geq \phi(n)$ for all values of~$n$. \end{Definition} \PageSep{132} It is to be observed that we do not exclude the case in which $\phi(n)$ has the \emph{same} value for several values of~$n$; all we exclude is possible \emph{decrease}. Thus the function \[ \phi(n) = 2n + (-1)^{n}, \] whose values for $n = 0$, $1$, $2$, $3$, $4$,~\dots\ are \[ 1,\ 1,\ 5,\ 5,\ 9,\ 9,\ \dots \] is said to increase steadily with~$n$. Our definition would indeed include even functions which remain constant from some value of~$n$ onwards; thus $\phi(n) = 1$ steadily increases according to our definition. However, as these functions are extremely special ones, and as there can be no doubt as to their behaviour as $n$~tends to~$\infty$, this apparent incongruity in the definition is not a serious defect. There is one exceedingly important theorem concerning functions of this class. \begin{Theorem} If $\phi(n)$ steadily increases with~$n$, then either \Inum{(i)}~$\phi(n)$ tends to a limit as $n$~tends to~$\infty$, or \Inum{(ii)}~$\phi(n)\to +\infty$. \end{Theorem} That is to say, while there are in general \emph{five} alternatives as to the behaviour of a function, there are \emph{two} only for this special kind of function. This theorem is a simple corollary of Dedekind's Theorem (\SecNo[§]{17}). 
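\begin{Remark}
Thus, for example, $\phi(n) = (n - 1)/n$ increases steadily with~$n$ and tends to the limit~$1$; while $\phi(n) = n^{2}$ increases steadily and tends to~$+\infty$. For functions of this class the alternatives of tending to~$-\infty$, and of oscillation, whether finite or infinite, are excluded.
\end{Remark}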
We divide the real numbers~$\xi$ into two classes $L$~and~$R$, putting $\xi$~in $L$~or~$R$ according as $\phi(n) \geq \xi$ for some value of~$n$ (and so of course for all greater values), or $\phi(n) < \xi$ for all values of~$n$. The class~$L$ certainly exists; the class~$R$ may or may not. If it does not, then, given any number~$\Delta$, however large, $\phi(n) > \Delta$ for all sufficiently large values of~$n$, and so \[ \phi(n) \to +\infty. \] If on the other hand $R$~exists, the classes $L$~and~$R$ form a section of the real numbers in the sense of \SecNo[§]{17}. Let $a$~be the number corresponding to the section, and let $\DELTA$~be any positive number. Then $\phi(n) < a + \DELTA$ for all values of~$n$, and so, since $\DELTA$~is arbitrary, $\phi(n) \leq a$. On the other hand $\phi(n) > a - \DELTA$ for some value of~$n$, and so for all sufficiently large values. Thus \[ a - \DELTA < \phi(n) \leq a \] \PageSep{133} for all sufficiently large values of~$n$; \ie\ \[ \phi(n)\to a. \] \begin{Remark} It should be observed that in general $\phi(n) < a$ for all values of~$n$; for if $\phi(n)$~is equal to~$a$ for any value of~$n$ it must be equal to~$a$ for all greater values of~$n$. Thus $\phi(n)$~can never be equal to~$a$ except in the case in which the values of~$\phi(n)$ are ultimately all the same. If this is so, $a$~is the largest member of~$L$; otherwise $L$~has no largest member. \end{Remark} \begin{Cor}[1.] If $\phi(n)$ increases steadily with~$n$, then it will tend to a limit or to~$+\infty$ according as it is or is not possible to find a number~$K$ such that $\phi(n) < K$ for all values of~$n$. \end{Cor} We shall find this corollary exceedingly useful later on. \begin{Cor}[2.] If $\phi(n)$ increases steadily with~$n$, and $\phi(n) < K$ for all values of~$n$, then $\phi(n)$~tends to a limit and this limit is less than or equal to~$K$. 
\end{Cor} {\Loosen It should be noticed that the limit may be equal to~$K$: if \eg\ $\phi(n) = 3 - (1/n)$, then every value of~$\phi(n)$ is less than~$3$, but the limit is equal to~$3$.} \begin{Cor}[3.] If $\phi(n)$ increases steadily with~$n$, and tends to a limit, then \[ \phi(n) \leq \lim\phi(n) \] for all values of~$n$. \end{Cor} The reader should write out for himself the corresponding theorems and corollaries for the case in which $\phi(n)$~\emph{decreases} as $n$~increases. \Paragraph{70.} The great importance of these theorems lies in the fact that they give us (what we have so far been without) a means of deciding, in a great many cases, whether a given function of~$n$ does or does not tend to a limit as $n \to \infty$, \emph{without requiring us to be able to guess or otherwise infer beforehand what the limit is}. If we know what the limit, if there is one, must be, we can use the test \[ |\phi(n) - l| < \DELTA\quad (n \geq n_{0}): \] as for example in the case of $\phi(n) = 1/n$, where it is obvious that the limit can only be zero. But suppose we have to determine whether \[ \phi(n) = \left(1 + \frac{1}{n}\right)^{n} \] \PageSep{134} tends to a limit. In this case it is not obvious what the limit, if there is one, will be: and it is evident that the test above, which involves~$l$, cannot be used, at any rate directly, to decide whether $l$~exists or not. \begin{Remark} Of course the test can sometimes be used indirectly, to prove by means of a \textit{reductio ad absurdum} that $l$~\emph{cannot} exist. If \eg\ $\phi(n) = (-1)^{n}$, it is clear that $l$~would have to be equal to~$1$ and also equal to~$-1$, which is obviously impossible. \Paragraph{71. Alternative proof of Weierstrass's Theorem of \SecNo[§]{19}.} The results of \SecNo[§]{69} enable us to give an alternative proof of the important theorem proved in \SecNo[§]{19}. If we divide~$PQ$ into two equal parts, one at least of them must contain infinitely many points of~$S$. 
We select the one which does, or, if both do, we select the left-hand half; and we denote the selected half by~$P_{1}Q_{1}$ (\Fig{28}). If $P_{1}Q_{1}$ is the left-hand half, $P_{1}$~is the same point as~$P$. %[Illustration: Fig. 28.] \Figure[0.9\textwidth]{28}{p134} Similarly, if we divide $P_{1}Q_{1}$ into two halves, one at least of them must contain infinitely many points of~$S$. We select the half $P_{2}Q_{2}$ which does so, or, if both do so, we select the left-hand half. Proceeding in this way we can define a sequence of intervals \[ PQ,\quad P_{1}Q_{1},\quad P_{2}Q_{2},\quad P_{3}Q_{3},\ \dots, \] each of which is a half of its predecessor, and each of which contains infinitely many points of~$S$. The points $P$, $P_{1}$, $P_{2}$,~\dots\ progress steadily from left to right, and so $P_{n}$~tends to a limiting position~$T$. Similarly $Q_{n}$~tends to a limiting position~$T'$. But $TT'$~is plainly less than~$P_{n}Q_{n}$, whatever the value of~$n$; and $P_{n}Q_{n}$, being equal to~$PQ/2^{n}$, tends to zero. Hence $T'$~coincides with~$T$, and $P_{n}$~and~$Q_{n}$ both tend to~$T$. Then $T$~is a point of accumulation of~$S$. For suppose that $\xi$~is its coordinate, and consider any interval of the type $\DPmod{(\xi - \DELTA, \xi + \DELTA)}{[\xi - \DELTA, \xi + \DELTA]}$. If $n$~is sufficiently large, $P_{n}Q_{n}$ will lie entirely inside this interval.\footnote {This will certainly be the case as soon as $PQ/2^{n} < \DELTA$.} Hence $\DPmod{(\xi - \DELTA, \xi + \DELTA)}{[\xi - \DELTA, \xi + \DELTA]}$ contains infinitely many points of~$S$. \end{Remark} \Paragraph{72. The limit of~$x^{n}$ as $n$~tends to~$\infty$.} Let us apply the results of \SecNo[§]{69} to the particularly important case in which $\phi(n) = x^{n}$. If $x = 1$ then $\phi(n) = 1$, $\lim\phi(n) = 1$, and if $x = 0$ then $\phi(n) = 0$, $\lim \phi(n) = 0$, so that these special cases need not detain us. \PageSep{135} First, suppose $x$ positive. 
Then, since $\phi(n + 1) = x\phi(n)$, $\phi(n)$ increases with~$n$ if $x > 1$, decreases as $n$~increases if $x < 1$. {\Loosen If $x > 1$, then $x^{n}$~must tend either to a limit (which must obviously be greater than~$1$) or to~$+\infty$. Suppose it tends to a limit~$l$. Then $\lim\phi(n + 1) = \lim\phi(n) = l$, by \Exs{xxv}.~7; but} \[ \lim\phi(n + 1) = \lim x\phi(n) = x\lim\phi(n) = xl, \] and therefore $l = xl$: and as $x$~and~$l$ are both greater than~$1$, this is impossible. Hence \[ x^{n} \to +\infty\quad (x > 1). \] \begin{Remark} \Par{Example.} The reader may give an alternative proof, showing by the binomial theorem that $x^{n} > 1 + n\delta$ if $\delta$~is positive and $x = 1 + \delta$, and so that \[ x^{n} \to +\infty. \] \end{Remark} On the other hand $x^{n}$~is a decreasing function if $x < 1$, and must therefore tend to a limit or to~$-\infty$. Since $x^{n}$~is positive the second alternative may be ignored. Thus $\lim x^{n} = l$, say, and as above $l = xl$, so that $l$~must be zero. Hence \[ \lim x^{n} = 0\quad (0 < x < 1). \] \begin{Remark} \Par{Example.} Prove as in the preceding example that $(1/x)^{n}$ tends to~$+\infty$ if $0 < x < 1$, and deduce that $x^{n}$~tends to~$0$. \end{Remark} {\Loosen We have finally to consider the case in which $x$~is negative. If $-1 < x < 0$ and $x = -y$, so that $0 < y < 1$, then it follows from what precedes that $\lim y^{n} = 0$ and therefore $\lim x^{n} = 0$. If $x = -1$ it is obvious that $x^{n}$~oscillates, taking the values $-1$,~$1$ \DPtypo{alternatively}{alternately}. Finally if $x < -1$, and $x = -y$, so that $y > 1$, then $y^{n}$~tends to~$+\infty$, and therefore $x^{n}$~takes values, both positive and negative, numerically greater than any assigned number. Hence $x^{n}$~oscillates infinitely.
To sum up:} \begin{alignat*}{2} &\phi(n) = x^{n} \to +\infty &&(x > 1),\\ &\lim \phi(n) = 1 &&(x = 1),\\ &\lim \phi(n) = 0 &&(-1 < x < 1),\\ &\text{$\phi(n)$ \emph{oscillates finitely}} &&(x = -1),\\ &\text{$\phi(n)$ \emph{oscillates infinitely}}\qquad &&(x < -1). \end{alignat*} \begin{Examples}{XXVII.\protect\footnotemark} \Item{1.} If $\phi(n)$~is positive and $\phi(n + 1) > K \phi(n)$, where $K > 1$, for all values of~$n$, then $\phi(n) \to +\infty$.\footnotetext {These examples are particularly important and several of them will be made use of later in the text. They should therefore be studied very carefully.} \PageSep{136} [For \[ \phi(n) > K\phi(n - 1) > K^{2}\phi(n - 2) \dots > K^{n-1}\phi(1), \] from which the conclusion follows at once, as $K^{n} \to\infty$.] \Item{2.} The same result is true if the conditions above stated are satisfied only when $n \geq n_{0}$. \Item{3.} If $\phi(n)$~is positive and $\phi(n + 1) < K\phi(n)$, where $0 < K < 1$, then $\lim\phi(n) = 0$. This result also is true if the conditions are satisfied only when $n \geq n_{0}$. \Item{4.} If $|\phi(n + 1)| < K|\phi(n)|$ when $n \geq n_{0}$, and $0 < K < 1$, then $\lim\phi(n) = 0$. \Item{5.} If $\phi(n)$ is positive and $\lim\{\phi(n + 1)\}/\{\phi(n)\} = l > 1$, then $\phi(n) \to +\infty$. [For we can determine~$n_{0}$ so that $\{\phi(n + 1)\}/\{\phi(n)\} > K > 1$ when $n \geq n_{0}$: we may, \eg, take $K$ \DPchg{half-way}{halfway} between $1$~and~$l$. Now apply Ex.~1.] \Item{6.} If $\lim\{\phi(n + 1)\}/\{\phi(n)\} = l$, where $l$~is numerically less than unity, then $\lim\phi(n) = 0$. [This follows from Ex.~4 as Ex.~5 follows from Ex.~1.] \Item{7.} Determine the behaviour, as $n \to \infty$, of $\phi(n) = n^{r}x^{n}$, where $r$~is any positive integer. [If $x = 0$ then $\phi(n) = 0$ for all values of~$n$, and $\phi(n) \to 0$. In all other cases \[ \frac{\phi(n + 1)}{\phi(n)} = \left(\frac{n + 1}{n}\right)^{r}x \to x. \] First suppose $x$~positive. 
Then $\phi(n) \to +\infty$ if $x > 1$ (Ex.~5) and $\phi(n) \to 0$ if $x < 1$ (Ex.~6). If $x = 1$, then $\phi(n) = n^{r} \to +\infty$. Next suppose $x$~negative. Then $|\phi(n)| = n^{r}|x|^{n}$ tends to~$+\infty$ if $|x| \geq 1$ and to~$0$ if $|x| < 1$. Hence $\phi(n)$~oscillates infinitely if $x \leq -1$ and $\phi(n) \to 0$ if $-1 < x < 0$.] \Item{8.} Discuss $n^{-r}x^{n}$ in the same way. [The results are the same, except that $\phi(n) \to 0$ when $x = 1$ or~$-1$.] \Item{9.} Draw up a table to show how $n^{k}x^{n}$ behaves as $n \to \infty$, for all real values of~$x$, and all positive and negative integral values of~$k$. [The reader will observe that the value of~$k$ is immaterial except in the special cases when $x = 1$ or~$-1$. Since $\lim\{(n + 1)/n\}^{k} = 1$, whether $k$~be positive or negative, the limit of the ratio $\phi(n + 1)/\phi(n)$ depends only on~$x$, and the behaviour of~$\phi(n)$ is in general dominated by the factor~$x^{n}$. The factor~$n^{k}$ only asserts itself when $x$~is numerically equal to~$1$.] \Item{10.} Prove that if $x$~is positive then $\sqrt[n]{x} \to 1$ as $n \to \infty$. [Suppose, \eg, $x > 1$. Then $x$,~$\sqrt{x}$, $\sqrt[3]{x}$,~\dots\ is a decreasing sequence, and $\sqrt[n]{x} > 1$ for all values of~$n$. Thus $\sqrt[n]{x} \to l$, where $l \geq 1$. But if $l > 1$ we can find values of~$n$, as large as we please, for which $\sqrt[n]{x} > l$ or $x > l^{n}$; and, since $l^{n} \to +\infty$ as $n \to \infty$, this is impossible.] \Item{11.} $\sqrt[n]{n}\to 1$. [For $\sqrtp[n+1]{n + 1} < \sqrt[n]{n}$ if $(n + 1)^{n} < n^{n+1}$ or $\{1 + (1/n)\}^{n} < n$, which is certainly satisfied if $n \geq 3$ (see \SecNo[§]{73} for a proof). Thus $\sqrt[n]{n}$~decreases as $n$~increases from $3$ onwards, and, as it is always greater than unity, it tends to a limit which is greater than or equal to unity. 
But if $\sqrt[n]{n}\to l$, where $l > 1$, then $n > l^{n}$, which is certainly untrue for sufficiently large values of~$n$, since $l^{n}/n \to +\infty$ with~$n$ (Exs.~7,~8).] \PageSep{137} \Item{12.} $\sqrtp[n]{n!} \to +\infty$. [However large~$\Delta$ may be, $n! > \Delta^{n}$ if $n$~is large enough. For if $u_{n} = \Delta^{n}/n!$ then $u_{n+1}/u_{n} = \Delta/(n + 1)$, which tends to zero as $n \to \infty$, so that $u_{n}$~does the same (Ex.~6).] \Item{13.} Show that if $-1 < x < 1$ then \[ u_{n} = \frac{m(m - 1) \dots (m - n + 1)}{n!} x^{n} = \binom{m}{n} x^{n} \] tends to zero as $n \to \infty$. [If $m$~is a positive integer, $u_{n} = 0$ for $n > m$. Otherwise \[ \frac{u_{n+1}}{u_{n}} = \frac{m - n}{n + 1}x \to -x, \] unless $x = 0$.] \end{Examples} \Paragraph{73. The limit of $\left(1 + \dfrac{1}{n}\right)^{n}$.} A more difficult problem which can be solved by the help of \SecNo[§]{69} arises when $\phi(n) = \{1 + 1/n\}^{n}$. It follows from the binomial theorem\footnote {The binomial theorem for a positive integral exponent, which is what is used here, is a theorem of elementary algebra. The other cases of the theorem belong to the theory of infinite series, and will be considered later.} that {\setlength{\multlinegap}{0pt}% \begin{multline*} \begin{aligned} \biggl(1 + \frac{1}{n}\biggr)^{n} &= 1 + n · \frac{1}{n} + \frac{n(n - 1)}{1·2}\, \frac{1}{n^{2}} + \dots + \frac{n(n - 1)\dots (n - n + 1)}{1·2\dots n}\, \frac{1}{n^{n}}\\ &= 1 + 1 + \frac{1}{1·2} \biggl(1 - \frac{1}{n}\biggr) + \frac{1}{1·2·3} \biggl(1 - \frac{1}{n}\biggr) \biggl(1- \frac{2}{n}\biggr) + \dots\\ \end{aligned} \\ + \frac{1}{1·2\dots n} \biggl(1 - \frac{1}{n}\biggr) \biggl(1 - \frac{2}{n}\biggr)\dots \biggl(1 - \frac{n - 1}{n}\biggr). \end{multline*}} The $(p + 1)$th~term in this expression, viz. 
\[ \frac{1}{1·2\dots p} \left(1 - \frac{1}{n}\right) \left(1 - \frac{2}{n}\right)\dots \left(1 - \frac{p - 1}{n}\right), \] is positive and an increasing function of~$n$, and the number of terms also increases with~$n$. Hence $\left(1 + \dfrac{1}{n}\right)^{n}$ increases with~$n$, and so tends to a limit or to~$+\infty$, as $n \to \infty$. But \begin{align*} \left(1 + \frac{1}{n}\right)^{n} &< 1 + 1 + \frac{1}{1·2} + \frac{1}{1·2·3} + \dots + \frac{1}{1·2·3 \dots n}\\ &< 1 + 1 + \frac{1}{2} + \frac{1}{2^{2}} + \dots + \frac{1}{2^{n-1}} < 3. \end{align*} Thus $\left(1 + \dfrac{1}{n}\right)^{n}$ cannot tend to~$+\infty$, and so \[ \lim_{n \to\infty} \left(1 + \frac{1}{n}\right)^{n} = e, \] where $e$~is a number such that $2 < e \leq 3$. \PageSep{138} \begin{Remark} \Paragraph{74. Some algebraical lemmas.} It will be convenient to prove at this stage a number of elementary inequalities which will be useful to us later on. \Itemp{(i)} It is evident that if $\alpha > 1$ and $r$~is a positive integer then \[ r\alpha^{r} > \alpha^{r-1} + \alpha^{r-2} + \dots + 1. \] Multiplying both sides of this inequality by $\alpha - 1$, we obtain \[ r\alpha^{r}(\alpha - 1) > \alpha^{r} - 1; \] and adding $r(\alpha^{r} - 1)$ to each side, and dividing by $r(r + 1)$, we obtain \[ \frac{\alpha^{r+1} - 1}{r + 1} > \frac{\alpha^{r} - 1}{r}\quad (\alpha > 1). \Tag{(1)} \] Similarly we can prove that \[ \frac{1 - \beta^{r+1}}{r + 1} < \frac{1 - \beta^{r}}{r}\quad (0 < \beta < 1). \Tag{(2)} \] It follows that if $r$~and~$s$ are positive integers, and $r > s$, then \[ \frac{\alpha^{r} - 1}{r} > \frac{\DPtypo{a}{\alpha}^{s} - 1}{s},\quad \frac{1 - \beta^{r}}{r} < \frac{1 - \beta^{s}}{s}. \Tag{(3)} \] Here $0 < \beta < 1 < \alpha$. In particular, when $s = 1$, we have \[ \alpha^{r} - 1 > r(\alpha - 1),\quad 1 - \beta^{r} < r(1 - \beta). \Tag{(4)} \] \Itemp{(ii)} The inequalities \Eq{(3)}~and~\Eq{(4)} have been proved on the supposition that $r$~and~$s$ are positive integers.
But it is easy to see that they hold under the more general hypothesis that $r$~and~$s$ are any positive rational numbers. Let us consider, for example, the first of the inequalities~\Eq{(3)}. Let $r = a/b$, $s = c/d$, where $a$,~$b$, $c$,~$d$ are positive integers; so that $ad > bc$. If we put $\alpha = \gamma^{bd}$, the inequality takes the form \[ (\gamma^{ad} - 1)/ad > (\gamma^{bc} - 1)/bc; \] and this we have proved already. The same argument applies to the remaining inequalities; and it can evidently be proved in a similar manner that \[ \alpha^{s} - 1 < s(\alpha - 1),\quad 1 - \beta^{s} > s(1 - \beta), \Tag{(5)} \] if $s$~is a positive rational number less than~$1$. \Itemp{(iii)} In what follows it is to be understood \emph{that all the letters denote positive numbers, that $r$~and~$s$ are rational, and that $\alpha$~and~$r$ are greater than $1$,~$\beta$ and $s$~less than~$1$}. Writing $1/\beta$ for~$\alpha$, and $1/\alpha$ for~$\beta$, in~\Eq{(4)}, we obtain \[ \alpha^{r} - 1 < r\alpha^{r-1}(\alpha - 1),\quad 1 - \beta^{r} > r\beta^{r-1}(1 - \beta). \Tag{(6)} \] Similarly, from~\Eq{(5)}, we deduce \[ \alpha^{s} - 1 > s\alpha^{s-1}(\alpha - 1),\quad 1 - \beta^{s} < s\beta^{s-1}(1 - \beta). \Tag{(7)} \] Combining \Eq{(4)}~and~\Eq{(6)}, we see that \[ r\alpha^{r-1}(\alpha - 1) > \alpha^{r} - 1 > r(\alpha - 1). \Tag{(8)} \] \PageSep{139} Writing $x/y$ for~$\alpha$, we obtain \[ rx^{r-1} (x - y) > x^{r} - y^{r} > ry^{r-1} (x - y) \Tag{(9)} \] if $x > y > 0$. And the same argument, applied to \Eq{(5)}~and~\Eq{(7)}, leads to \[ sx^{s-1} (x - y) < x^{s} - y^{s} < sy^{s-1} (x - y). \Tag{(10)} \] \end{Remark} \begin{Examples}{XXVIII.} \Item{1.} Verify \Eq{(9)} for $r = 2$,~$3$, and \Eq{(10)} for $s = \frac{1}{2}$,~$\frac{1}{3}$. \Item{2.} Show that \Eq{(9)}~and~\Eq{(10)} are also true if $y > x > 0$. \Item{3.} Show that \Eq{(9)}~also holds for $r < 0$. [See Chrystal's \textit{Algebra}, vol.~ii, pp.~43--45.] 
\Item{4.} If $\phi(n) \to l$, where $l > 0$, as $n \to \infty$, then $\phi^{k} \to l^{k}$, $k$~being any rational number. [We may suppose that $k > 0$, in virtue of Theorem~III of \SecNo[§]{66}; and that $\frac{1}{2}l < \phi < 2l$, as is certainly the case from a certain value of $n$ onwards. If $k > 1$, \[ k\phi^{k-1}(\phi - l) > \phi^{k} - l^{k} > kl^{k-1}(\phi - l) \] or \[ kl^{k-1}(l - \phi) > l^{k} - \phi^{k} > k\phi^{k-1}(l - \phi), \] according as $\phi > l$ or $\phi < l$. It follows that the ratio of $|\phi^{k} - l^{k}|$ and $|\phi - l|$ lies between $k(\frac{1}{2}l)^{k-1}$ and $k(2l)^{k-1}$. The proof is similar when $0 < k < 1$. The result is still true when $l = 0$, if $k > 0$.] \Item{5.} Extend the results of \Exs{xxvii}.\ 7,~8,~9 to the case in which $r$~or~$k$ are any rational numbers. \end{Examples} \begin{Remark} \Paragraph{75. The limit of $n(\sqrt[n]{x} - 1)$.} If in the first inequality~\Eq{(3)} of \SecNo[§]{74} we put $r = 1/(n - 1)$, $s = 1/n$, we see that \[ (n - 1)(\sqrt[n-1]{\alpha} - 1) > n(\sqrt[n]{\alpha} - 1) \] when $\alpha > 1$. Thus if $\phi(n) = n(\sqrt[n]{\alpha} - 1)$ then $\phi(n)$~decreases steadily as $n$~increases. Also $\phi(n)$~is always positive. Hence $\phi(n)$~tends to a limit~$l$ as $n \to \infty$, and $l \geq 0$. Again if, in the first inequality~\Eq{(7)} of \SecNo[§]{74}, we put $s = 1/n$, we obtain \[ n(\sqrt[n]{\alpha} - 1) > \sqrt[n]{\alpha}\left(1 - \frac{1}{\alpha}\right) > 1 - \frac{1}{\alpha}. \] Thus $l \geq 1 - (1/\alpha) > 0$. Hence, if $\alpha > 1$, we have \[ \lim_{n \to \infty} n(\sqrt[n]{\alpha} - 1) = f(\alpha), \] where $f(\alpha) > 0$. Next suppose $\beta < 1$, and let $\beta = 1/\alpha$; then $n(\sqrt[n]{\beta} - 1) = -n(\DPtypo{\sqrt{\alpha}}{\sqrt[n]{\alpha}} - 1)/\sqrt[n]{\alpha}$. Now $n(\sqrt[n]{\alpha} - 1) \to f(\alpha)$, and (\Exs{xxvii}.~10) \[ \sqrt[n]{\alpha} \to 1. \] Hence, if $\beta = 1/\alpha < 1$, we have \[ n(\sqrt[n]{\beta} - 1) \to -f(\alpha).
\] Finally, if $x = 1$, then $n(\sqrt[n]{x} - 1) = 0$ for all values of $n$. \PageSep{140} Thus we arrive at the result: \emph{the limit \[ \lim n(\sqrt[n]{x} - 1) \] defines a function of~$x$ for all positive values of~$x$. This function~$f(x)$ possesses the properties \[ f(1/x) = -f(x),\quad f(1) = 0, \] and is positive or negative according as $x > 1$ or $x < 1$.} Later on we shall be able to identify this function with the \emph{Napierian logarithm} of~$x$. \Par{Example.} Prove that $f(xy) = f(x) + f(y)$. [Use the equations \[ f(xy) = \lim n(\DPtypo{\sqrt[n]{xy}}{\sqrtp[n]{xy}} - 1) = \lim \{n(\sqrt[n]{x} - 1)\sqrt[n]{y} + n(\sqrt[n]{y} - 1)\}.] \] \end{Remark} \Paragraph{76. Infinite Series.} Suppose that $u(n)$~is any function of~$n$ defined for all values of~$n$. If we add up the values of~$u(\nu)$ for $\nu = 1$, $2$,~\dots~$n$, we obtain another function of~$n$, viz. \[ s(n) = u(1) + u(2) + \dots + u(n), \] also defined for all values of~$n$. It is generally most convenient to alter our notation slightly and write this equation in the form \[ s_{n} = u_{1} + u_{2} + \dots + u_{n}, \] or, more shortly, \[ s_{n} = \sum_{\nu=1}^{n} u_{\nu}. \] If now we suppose that $s_{n}$~tends to a limit~$s$ when $n$~tends to~$\infty$, we have \[ \lim_{n\to\infty} \sum_{\nu=1}^{n} u_{\nu} = s. \] This equation is usually written in one of the forms \[ \sum_{\nu=1}^{\infty} u_{\nu} = s,\quad u_{1} + u_{2} + u_{3} + \dots = s, \] the dots denoting the indefinite continuance of the series of~$u$'s. The meaning of the above equations, expressed roughly, is that by adding more and more of the~$u$'s together we get nearer and nearer to the limit~$s$. More precisely, if any small positive number~$\DELTA$ is chosen, we can choose~$n_{0}(\DELTA)$ so that the sum of the first $n_{0}(\DELTA)$~terms, or any \DPtypo{of greater}{greater} number of terms, lies between $s - \DELTA$ and $s + \DELTA$; or in symbols \[ s - \DELTA < s_{n} < s + \DELTA, \] if $n \geq n_{0}(\DELTA)$.
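\begin{Remark}
\Par{Example.} Thus if $u_{\nu} = 1/2^{\nu}$, then
\[ s_{n} = \frac{1}{2} + \frac{1}{2^{2}} + \dots + \frac{1}{2^{n}} = 1 - \frac{1}{2^{n}}, \]
and $s_{n}$~tends to the limit $s = 1$ as $n \to \infty$.
\end{Remark}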
In these circumstances we shall call the series \[ u_{1} + u_{2} + \dots \] a \Emph{convergent infinite series}, and we shall call~$s$ the \emph{sum} of the series, or the \emph{sum of all the terms} of the series. \PageSep{141} Thus to say that the series $u_{1} + u_{2} + \dots$ \emph{converges and has the sum~$s$}, or \emph{converges to the sum~$s$} or simply \emph{converges to~$s$}, is merely another way of stating that the sum $s_{n} = u_{1} + u_{2} + \dots + u_{n}$ of the first $n$~terms tends to the limit~$s$ as $n \to \infty$, and the consideration of such infinite series introduces no new ideas beyond those with which the early part of this chapter should already have made the reader familiar. In fact the sum~$s_{n}$ is merely a function~$\phi(n)$, such as we have been considering, expressed in a particular form. Any function~$\phi(n)$ may be expressed in this form, by writing \[ \phi(n) = \phi(1) + \{\phi(2) - \phi(1)\} + \dots + \{\phi(n) - \phi(n - 1)\}; \] and it is sometimes convenient to say that $\phi(n)$~\emph{converges} (instead of `tends') to the limit~$l$, say, as $n \to \infty$. If $s_{n} \to +\infty$ or $s_{n} \to -\infty$, we shall say that the series $u_{1} + u_{2} + \dots$ is \Emph{divergent} or \emph{diverges to~$+\infty$}, or~$-\infty$, as the case may be. These phrases too may be applied to any function~$\phi(n)$: thus if $\phi(n) \to +\infty$ we may say that \emph{$\phi(n)$~diverges to~$+\infty$}. If $s_{n}$~does not tend to a limit or to~$+\infty$ or to~$-\infty$, then it oscillates finitely or infinitely: in this case we say that the series $u_{1} + u_{2} + \dots$ oscillates finitely or infinitely.\footnote {The reader should be warned that the words `divergent' and `oscillatory' are used differently by different writers. The use of the words here agrees with that of Bromwich's \textit{Infinite Series}. 
In Hobson's \textit{Theory of Functions of a Real Variable} a series is said to oscillate only if it oscillates \emph{finitely}, series which oscillate infinitely being classed as `divergent'. Many foreign writers use `divergent' as meaning merely `not convergent'.} \Paragraph{77. General theorems concerning infinite series.} When we are dealing with infinite series we shall constantly have occasion to use the following general theorems. \Item{(1)} If $u_{1} + u_{2} + \dots$ is convergent, and has the sum~$s$, then $a + u_{1} + u_{2} + \dots$ is convergent and has the sum $a + s$. Similarly $a + b + c + \dots + k + u_{1} + u_{2} + \dots$ is convergent and has the sum $a + b + c + \dots + k + s$. \Item{(2)} {\Loosen If $u_{1} + u_{2} + \dots$ is convergent and has the sum~$s$, then $u_{m+1} + u_{m+2} + \dots$ is convergent and has the sum} \[ s - u_{1} - u_{2} - \dots - u_{m}. \] \Item{(3)} If any series considered in (1)~or~(2) diverges or oscillates, then so do the others. \Item{(4)} If $u_{1} + u_{2} + \dots$ is convergent and has the sum~$s$, then $ku_{1} + ku_{2} + \dots$ is convergent and has the sum~$ks$. \PageSep{142} \Item{(5)} If the first series considered in~(4) diverges or oscillates, then so does the second, unless $k = 0$. \Item{(6)} If $u_{1} + u_{2} + \dots$ and $v_{1} + v_{2} + \dots$ are both convergent, then the series $(u_{1} + v_{1}) + (u_{2} + v_{2}) + \dots$ is convergent and its sum is the sum of the first two series. {\Loosen All these theorems are almost obvious and may be proved at once from the definitions or by applying the results of \SecNo[§§]{63}--\SecNo{66} to the sum $s_{n} = u_{1} + u_{2} + \dots + u_{n}$. Those which follow are of a somewhat different character.} \begin{Result} \Item{(7)} If $u_{1} + u_{2} + \dots$ is convergent, then $\lim u_{n} = 0$. \end{Result} For $u_{n} = s_{n} - s_{n-1}$, and $s_{n}$~and~$s_{n-1}$ have the same limit~$s$. Hence $\lim u_{n} = s - s = 0$. 
\begin{Remark} The reader may be tempted to think that the converse of the theorem is true and that if $\lim u_{n} = 0$ then the series~$\sum u_{n}$ must be convergent. That this is not the case is easily seen from an example. Let the series be \[ 1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} + \dots \] so that $u_{n} = 1/n$. The sum of the first four terms is \[ 1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} > 1 + \tfrac{1}{2} + \tfrac{2}{4} = 1 + \tfrac{1}{2} + \tfrac{1}{2}. \] The sum of the next four terms is $\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} > \frac{4}{8} = \frac{1}{2}$; the sum of the next eight terms is greater than $\frac{8}{16} = \frac{1}{2}$, and so on. The sum of the first \[ 4 + 4 + 8 + 16 + \dots + 2^{n} = 2^{n+1} \] terms is greater than \[ 2 + \tfrac{1}{2} + \tfrac{1}{2} + \tfrac{1}{2} + \dots + \tfrac{1}{2} = \tfrac{1}{2} (n + 3), \] and this increases beyond all limit with~$n$: hence the series diverges to~$+\infty$. \end{Remark} \begin{Result} \Item{(8)} If $u_{1} + u_{2} + u_{3} + \dots$ is convergent, then so is any series formed by grouping the terms in brackets in any way to form new single terms, and the sums of the two series are the same. \end{Result} \begin{Remark} The reader will be able to supply the proof of this theorem. Here again the converse is not true. Thus $1 - 1 + 1 - 1 + \dots$ oscillates, while \[ (1 - 1) + (1 - 1) + \dots \] or $0 + 0 + 0 + \dots$ converges to~$0$. \end{Remark} \begin{Result} \Item{(9)} If every term~$u_{n}$ is positive \(or zero\), then the series~$\sum u_{n}$ must either converge or diverge to~$+\infty$. If it converges, its sum must be positive {\upshape(unless all the terms are zero, when of course its sum is zero)}. \end{Result} For $s_{n}$~is an increasing function of~$n$, according to the definition of \SecNo[§]{69}, and we can apply the results of that section to~$s_{n}$. 
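\begin{Remark}
\Par{Example.} Thus the series whose general term is $u_{n} = 1/\{n(n + 1)\}$ has all its terms positive; and, since
\[ s_{n} = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + \dots + \left(\frac{1}{n} - \frac{1}{n + 1}\right) = 1 - \frac{1}{n + 1}, \]
$s_{n}$~increases with~$n$ and is always less than~$1$. The series therefore converges, and its sum is~$1$.
\end{Remark}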
\PageSep{143} \begin{Result} \Item{(10)} If every term~$u_{n}$ is positive \(or zero\), then the necessary and sufficient condition that the series~$\sum u_{n}$ should be convergent is that it should be possible to find a number~$K$ such that the sum of any number of terms is less than~$K$; and, if $K$ can be so found, then the sum of the series is not greater than~$K$. \end{Result} This also follows at once from \SecNo[§]{69}. It is perhaps hardly necessary to point out that the theorem is not true if the condition that every~$u_{n}$ is positive is not fulfilled. For example \[ 1 - 1 + 1 - 1 + \dots \] obviously oscillates, $s_{n}$~being alternately equal to~$1$ and to~$0$. \begin{Result} \Item{(11)} If $u_{1} + u_{2} + \dots$, $v_{1} + v_{2} + \dots$ are two series of positive \(or zero\) terms, and the second series is convergent, and if $u_{n} \leq Kv_{n}$, where $K$~is a constant, for all values of~$n$, then the first series is also convergent, and its sum is less than or equal to\DPtypo{}{ $K$~times} that of the second. \end{Result} For if $v_{1} + v_{2} + \dots = t$ then $v_{1} + v_{2} + \dots + v_{n} \leq t$ for all values of~$n$, and so $u_{1} + u_{2} + \dots + u_{n} \leq Kt$; which proves the theorem. \begin{Result} Conversely, if $\sum u_{n}$ is divergent, and $v_{n} \geq Ku_{n}$, then $\sum v_{n}$~is divergent. \end{Result} \Paragraph{78. The infinite geometrical series.} We shall now consider the `geometrical' series, whose general term is $u_{n} = r^{n-1}$. In this case \[ s_{n} = 1 + r + r^{2} + \dots + r^{n-1} = (1 - r^{n})/(1 - r), \] except in the special case in which $r = 1$, when \[ s_{n} = 1 + 1 + \dots + 1 = n. \] In the last case $s_{n} \to +\infty$. In the general case $s_{n}$~will tend to a limit if and only if $r^{n}$ does so. Referring to the results of \SecNo[§]{72} we see that \begin{Result} the series $1 + r + r^{2} + \dots$ is convergent and has the sum $1/(1 - r)$ if and only if $-1 < r < 1$. 
\end{Result} If $r \geq 1$, then $s_{n} \geq n$, and so $s_{n} \to +\infty$; \ie\ the series diverges to~$+\infty$. If $r = -1$, then $s_{n} = 1$ or $s_{n} = 0$ according as $n$~is odd or even: \ie\ $s_{n}$~oscillates finitely. If $r < -1$, then $s_{n}$~oscillates infinitely. Thus, to sum up, \begin{Result} the series $1 + r + r^{2} + \dots$ diverges to~$+\infty$ if $r \geq 1$, converges to $1/(1 - r)$ if $-1 < r < 1$, oscillates finitely if $r = -1$, and oscillates infinitely if $r < -1$. \end{Result} \begin{Examples}{XXIX.} \Item{1.} \Topic{Recurring decimals.} The commonest example of an infinite geometric series is given by an ordinary recurring decimal. \PageSep{144} Consider, for example, the decimal $.217\DPmod{\dot{1}\dot{3}}{\Repeat{13}}$. This stands, according to the ordinary rules of arithmetic, for \[ \frac{2}{10} + \frac{1}{10^{2}} + \frac{7}{10^{3}} + \frac{1}{10^{4}} + \frac{3}{10^{5}} + \frac{1}{10^{6}} + \frac{3}{10^{7}} + \dots = \frac{217}{1000} + \frac{13}{10^{5}} \bigg/ \left(1 - \frac{1}{10^{2}}\right) = \frac{2687}{12\MC375}. \] The reader should consider where and how any of the general theorems of \SecNo[§]{77} have been used in this reduction. \Item{2.} Show that in general \[ .a_{1}a_{2}\dots a_{m} \DPmod{\dot{\alpha}_{1}\alpha_{2}\dots\dot{\alpha}_{n}} {\Repeat{\alpha_{1}\alpha_{2}\dots \alpha_{n}}} = \frac{a_{1}a_{2}\dots a_{m}\alpha_{1}\dots \alpha_{n} - a_{1}a_{2}\dots a_{n}} {99\dots 900\dots 0}, \] the denominator containing~$n$ $9$'s and $m$~$0$'s. \Item{3.} Show that a pure recurring decimal is always equal to a proper fraction whose denominator does not contain $2$~or~$5$ as a factor. \Item{4.} A decimal with $m$~non-recurring and $n$~recurring decimal figures is equal to a proper fraction whose denominator is divisible by $2^{m}$~or~$5^{m}$ but by no higher power of either. \Item{5.} The converses of Exs.~3,~4 are also true. Let $r = p/q$, and suppose first that $q$~is prime to~$10$. 
If we divide all powers of~$10$ by~$q$ we can obtain at most $q$~different remainders. It is therefore possible to find two numbers $n_{1}$~and~$n_{2}$, where $\DPtypo{n_{2} > n_{1}}{n_{1} > n_{2}}$, such that $10^{n_{1}}$ and $10^{n_{2}}$ give the same remainder. Hence $10^{n_{1}} - 10^{n_{2}} = 10^{n_{2}}(10^{n_{1}-n_{2}} - 1)$ is divisible by~$q$, and so $10^{n} - 1$, where $n = n_{1} - n_{2}$, is divisible by~$q$. Hence $r$~may be expressed in the form~$P/(10^{n} - 1)$, or in the form \[ \frac{P}{10^{n}} + \frac{P}{10^{2n}} + \dots, \] \ie\ as a pure recurring decimal with $n$~figures. If on the other hand $q = 2^{\alpha}5^{\beta}Q$, where $Q$~is prime to~$10$, and $m$~is the greater of $\alpha$~and~$\beta$, then $10^{m}r$~has a denominator prime to~$10$, and is therefore expressible as the sum of an integer and a pure recurring decimal. But this is not true of~$10^{\mu}r$, for any value of~$\mu$ less than~$m$; hence the decimal for~$r$ has exactly~$m$ non-recurring figures. \Item{6.} To the results of Exs.~2--5 we must add that of \Ex{i}.~3. Finally, if we observe that \[ .\DPmod{\dot{9}}{\Repeat{9}} = \frac{9}{10} + \frac{9}{10^{2}} + \frac{9}{10^{3}} + \dots = 1, \] we see that every terminating decimal can also be expressed as a mixed recurring decimal whose recurring part is composed entirely of~$9$'s. For example, $.217 = .216\DPmod{\dot{9}}{\Repeat{9}}$. Thus every proper fraction can be expressed as a recurring decimal, and conversely. \Item{7.} \Topic{Decimals in general. The expression of irrational numbers as non-recurring decimals.} Any decimal, whether recurring or not, corresponds to a definite number between $0$~and~$1$. For the decimal $.a_{1}a_{2}a_{3}a_{4}\dots$ stands for the series \[ \frac{a_{1}}{10} + \frac{a_{2}}{10^{2}} + \frac{a_{3}}{10^{3}} + \dots. 
\] \PageSep{145} Since all the digits~$a_{r}$ are positive, the sum~$s_{n}$ of the first $n$~terms of this series increases with~$n$, and it is certainly not greater than~$.\DPmod{\dot{9}}{\Repeat{9}}$ or~$1$. Hence $s_{n}$~tends to a limit between $0$~and~$1$. Moreover no two decimals can correspond to the same number (except in the special case noticed in Ex.~6). For suppose that $.a_{1}a_{2}a_{3} \dots$, $.b_{1}b_{2}b_{3} \dots$ are two decimals which agree as far as the figures $a_{r-1}$,~$b_{r-1}$, while $a_{r} > b_{r}$. Then $a_{r}\geq b_{r} + 1 > b_{r}.b_{r+1}b_{r+2} \dots$ (unless $b_{r+1}$, $b_{r+2}$,~\dots\ are all~$9$'s), and so \[ .a_{1}a_{2} \dots a_{r}a_{r+1} \dots > .b_{1}b_{2} \dots b_{r}b_{r+1} \dots. \] It follows that the expression of a rational fraction as a recurring decimal (Exs.\ 2--6) is unique. It also follows that every decimal which does not recur represents some \emph{irrational} number between $0$~and~$1$. Conversely, any such number can be expressed as such a decimal. For it must lie in one of the intervals \[ 0,\ 1/10;\quad 1/10,\ 2/10;\ \dots;\quad 9/10,\ 1. \] If it lies between $r/10$ and $(r + 1)/10$, then the first figure is~$r$. By subdividing this interval into $10$~parts we can determine the second figure; and so on. But (Exs.~3,~4) the decimal cannot recur. Thus, for example, the decimal $1.414\dots$, obtained by the ordinary process for the extraction of~$\sqrt{2}$, cannot recur. \Item{8.} The decimals $.101\MS001\MS000\MS100\MS001\MS0\dots$ and $.202\MS002\MS000\MS200\MS002\MS0\dots$, in which the number of zeros between two~$1$'s or $2$'s increases by one at each stage, represent irrational numbers. \Item{9.} The decimal $.111\MS010\MS100\MS010\MS10\dots$, in which the $n$th~figure is~$1$ if $n$~is prime, and zero otherwise, represents an irrational number. [Since the number of primes is infinite the decimal does not terminate. 
Nor can it recur: for if it did we could determine $m$~and~$p$ so that $m$,~$m + p$, $m + 2p$, $m + 3p$,~\dots\ are all prime numbers; and this is absurd, since the series includes $m + mp$.]\footnote {All the results of \Exs{xxix} may be extended, with suitable modifications, to decimals in any scale of notation. For a fuller discussion see Bromwich, \textit{Infinite Series}, Appendix~I.} \end{Examples} \begin{Examples}{XXX.} \Item{1.} {\Loosen The series $r^{m} + r^{m+1} + \dots$ is convergent if $-1 < r < 1$, and its sum is $1/(1 - r) - 1 - r - \dots - r^{m-1}$ (\SecNo[§]{77},~\Eq{(2)}).} \Item{2.} The series $r^{m} + r^{m+1} + \dots$ is convergent if $-1 < r < 1$, and its sum is $r^{m}/(1 - r)$ (\SecNo[§]{77},~\Eq{(4)}). Verify that the results of Exs.\ 1~and~2 are in agreement. \Item{3.} Prove that the series $1 + 2r + 2r^{2} + \dots$ is convergent, and that its sum is~$(1 + r)/(1 - r)$, ($\alpha$)~by writing it in the form $-1 + 2(1 + r + r^{2} + \dots)$, ($\beta$)~by writing it in the form $1 + 2(r + r^{2} + \dots)$, ($\gamma$)~by adding the two series $1 + r + r^{2} + \dots$, $r + r^{2} + \dots$. In each case mention which of the theorems of \SecNo[§]{77} are used in your proof. \PageSep{146} \Item{4.} Prove that the `arithmetic' series \[ a + (a + b) + (a + 2b) + \dots \] is always divergent, unless both $a$~and~$b$ are zero. Show that, if $b$ is not zero, the series diverges to~$+\infty$ or to~$-\infty$ according to the sign of~$b$, while if $b = 0$ it diverges to~$+\infty$ or~$-\infty$ according to the sign of~$a$. \Item{5.} What is the sum of the series \[ (1 - r) + (r - r^{2}) + (r^{2} - r^{3}) + \dots \] when the series is convergent? [The series converges only if $-1 < r \leq 1$. Its sum is~$1$, except when $r = 1$, when its sum is~$0$.] \Item{6.} Sum the series \[ %[** TN: In-line equation in the original] r^{2} + \frac{r^{2}}{1 + r^{2}} + \frac{r^{2}}{(1 + r^{2})^{2}} + \dots. \] [The series is always convergent. 
Its sum is~$1 + r^{2}$, except when $r = 0$, when its sum is~$0$.] \Item{7.} If we assume that $1 + r + r^{2} + \dots$ is convergent then we can prove that its sum is~$1/(1 - r)$ by means of \SecNo[§]{77}, \Eq{(1)}~and~\Eq{(4)}. For if $1 + r + r^{2} + \dots = s$ then \[ s = 1 + r(1 + r^{2} + \dots) = 1 + rs. \] \Item{8.} Sum the series \[ r + \frac{r}{1 + r} + \frac{r}{(1 + r)^{2}} + \dots \] when it is convergent. [The series is convergent if $-1 < 1/(1 + r) < 1$, \ie\ if $r < -2$ or if $r > 0$, and its sum is~$1 + r$. It is also convergent when $r = 0$, when its sum is~$0$.] \Item{9.} Answer the same question for the series \begin{align*} & r - \frac{r}{1 + r} + \frac{r}{(1 + r)^{2}} - \dots, && r + \frac{r}{1 - r} + \frac{r}{(1 - r)^{2}} + \dots,\\ & 1 - \frac{r}{1 + r} + \left(\frac{r}{1 + r}\right)^{2} - \dots, && 1 + \frac{r}{1 - r} + \left(\frac{r}{1 - r}\right)^{2} + \dots. \end{align*} \Item{10.} Consider the convergence of the series \begin{align*} & (1 + r) + (r^{2} + r^{3}) + \dots, && (1 + r + r^{2}) + (r^{3} + r^{4} + r^{5}) + \dots,\\ & 1 - 2r + r^{2} + r^{3} - 2r^{4} + r^{5} + \dots, && (1 - 2r + r^{2}) + (r^{3} - 2r^{4} + r^{5}) + \dots, \end{align*} and find their sums when they are convergent. \Item{11.} If $0 \leq a_{n} \leq 1$ then the series $a_{0} + a_{1}r + a_{2}r^{2} + \dots$ is convergent for $0 \leq r < 1$, and its sum is not greater than~$1/(1 - r)$. \Item{12.} If in addition the series $a_{0} + a_{1} + a_{2} + \dots$ is convergent, then the series $a_{0} + a_{1}r + a_{2}r^{2} + \dots$ is convergent for $0 \leq r \leq 1$, and its sum is not greater than the lesser of $a_{0} + a_{1} + a_{2} + \dots$ and~$1/(1 - r)$. \Item{13.} The series \[ 1 + \frac{1}{1} + \frac{1}{1·2} + \frac{1}{1·2·3} + \dots \] is convergent. [For $1/(1·2 \dots n) \leq 1/2^{n-1}$.] \PageSep{147} \Item{14.} The series \[ 1 + \frac{1}{1·2} + \frac{1}{1·2·3·4} + \dots,\quad \frac{1}{1} + \frac{1}{1·2·3} + \frac{1}{1·2·3·4·5} + \dots \] are convergent. 
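The comparison test of \SecNo[§]{77},~(11) behind Exs.\ 13~and~14 is easy to watch in action. The following Python fragment is a minimal numerical sketch, not part of Hardy's text: it sums the series of Ex.~13 with exact rational arithmetic and verifies the bracketed hint $1/(1·2 \dots n) \leq 1/2^{n-1}$, which bounds every partial sum by the geometric sum~$3$.

```python
from fractions import Fraction

# Ex. 13: 1 + 1/1 + 1/(1*2) + 1/(1*2*3) + ...
# Each term 1/(1*2*...*n) is at most 1/2^(n-1), so every partial sum is
# below 1 + (1 + 1/2 + 1/4 + ...) = 3, and the series converges by (10) of § 77.
s = Fraction(1)
term = Fraction(1)
for n in range(1, 25):
    term /= n                                  # term = 1/(1*2*...*n)
    assert term <= Fraction(1, 2 ** (n - 1))   # the bracketed comparison
    s += term
assert s < 3
```

The exact fractions make the comparison verifiable without any rounding error; the partial sums are in fact creeping up on the number later called~$e$.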
\Item{15.} The general harmonic series \[ \frac{1}{a} + \frac{1}{a + b} + \frac{1}{a + 2b} + \dots, \] where $a$~and~$b$ are positive, diverges to~$+\infty$. [For $u_{n} = 1/(a + nb) > 1/\{n(a + b)\}$. Now compare with $1 + \frac{1}{2} + \frac{1}{3} + \dots$.] \Item{16.} Show that the series \[ (u_{0} - u_{1}) + (u_{1} - u_{2}) + (u_{2} - u_{3}) + \dots \] is convergent if and only if $u_{n}$~tends to a limit as $n \to \infty$. \Item{17.} If $u_{1} + u_{2} + u_{3} + \dots$ is divergent then so is any series formed by grouping the terms in brackets in any way to form new single terms. \Item{18.} Any series, formed by taking a selection of the terms of a convergent series of positive terms, is itself convergent. \end{Examples} \Paragraph{79. The representation of functions of a continuous real variable by means of limits.} In the preceding sections we have frequently been concerned with limits such as \[ \lim_{n \to \infty} \phi_{n}(x), \] and series such as \[ u_{1}(x) + u_{2}(x) + \dots = \lim_{n \to \infty}\{u_{1}(x) + u_{2}(x) + \dots + u_{n}(x)\}, \] in which the function of~$n$ whose limit we are seeking involves, besides~$n$, another variable~$x$. In such cases the limit is of course a function of~$x$. Thus in \SecNo[§]{75} we encountered the function \[ f(x) = \lim_{n \to \infty} n(\sqrt[n]{x} - 1): \] and the sum of the geometrical series $1 + x + x^{2} + \dots$ is a function of~$x$, viz.~the function which is equal to $1/(1 - x)$ if $-1 < x < 1$ and is undefined for all other values of~$x$. Many of the apparently `arbitrary' or `unnatural' functions considered in \Ref{Ch.}{II} are capable of a simple representation of this kind, as will appear from the following examples. \PageSep{148} \begin{Examples}{XXXI.} \Item{1.} $\phi_{n}(x) = x$. Here $n$~does not appear at all in the expression of~$\phi_{n}(x)$, and $\phi(x) = \lim\phi_{n}(x) = x$ for all values of~$x$. \Item{2.} $\phi_{n}(x) = x/n$. Here $\phi(x) = \lim\phi_{n}(x) = 0$ for all values of~$x$. 
\Item{3.} $\phi_{n}(x) = nx$. If $x > 0$, $\phi_{n}(x) \to +\infty$; if $x < 0$, $\phi_{n}(x) \to -\infty$: only when $x = 0$ has $\phi_{n}(x)$ a limit (viz.~$0$) as $n \to \infty$. Thus $\phi(x) = 0$ when $x = 0$ and is not defined for any other value of~$x$. \Item{4.} $\phi_{n}(x) = 1/nx$, $nx/(nx + 1)$. \Item{5.} $\phi_{n}(x) = x^{n}$. Here $\phi(x) = 0$, ($-1 < x < 1$); $\phi(x) = 1$, ($x = 1$); and $\phi(x)$ is not defined for any other value of~$x$. \Item{6.} $\phi_{n}(x) = x^{n}(1 - x)$. Here $\phi(x)$~differs from the $\phi(x)$ of Ex.~5 in that it has the value~$0$ when $x = 1$. \Item{7.} $\phi_{n}(x) = x^{n}/n$. Here $\phi(x)$ differs from the $\phi(x)$ of Ex.~6 in that it has the value~$0$ when $x = -1$ as well as when $x = 1$. \Item{8.} $\phi_{n}(x) = x^{n}/(x^{n} + 1)$. [$\phi(x) = 0$, ($-1 < x < 1$); $\phi(x) = \frac{1}{2}$, ($x = 1$); $\phi(x) = 1$, ($x < -1$ or $x > 1$); and $\phi(x)$~is not defined when $x = -1$.] \Item{9.} $\phi_{n}(x) = x^{n}/(x^{n} - 1)$, $1/(x^{n} + 1)$, $1/(x^{n} - 1)$, $1/(x^{n} + x^{-n})$, $1/(x^{n} - x^{-n})$. \Item{10.} $\phi_{n}(x) = (x^{n} - 1)/(x^{n} + 1)$, $(nx^{n} - 1)/(nx^{n} + 1)$, $(x^{n} - n)/(x^{n} + n)$. [In the first case $\phi(x) = 1$ when $|x| > 1$, $\phi(x) = -1$ when $|x| < 1$, $\phi(x) = 0$ when $x = 1$ and $\phi(x)$~is not defined when $x = -1$. The second and third functions differ from the first in that they are defined both when $x = 1$ and when $x = -1$: the second has the value~$1$ and the third the value~$-1$ for both these values of~$x$.] \Item{11.} Construct an example in which $\phi(x) = 1$, ($|x| > 1$); $\phi(x) = -1$, ($|x| < 1$); and $\phi(x) = 0$, ($x = 1$ and $x = -1$). \Item{12.} $\phi_{n}(x) = x\{(x^{2n} - 1)/(x^{2n} + 1)\}^{2}$, $n/(x^{n} + x^{-n} + n)$. \Item{13.} $\phi_{n}(x) = \{x^{n}f(x) + g(x)\}/(x^{n} + 1)$. [Here $\phi(x) = f(x)$, ($|x| > 1$); $\phi(x) = g(x)$, ($|x| < 1$); $\phi(x) = \frac{1}{2}\{f(x) + g(x)\}$, ($x = 1$); and $\phi(x)$~is undefined when $x = -1$.] 
\Item{14.} $\phi_{n}(x) = (2/\pi) \arctan(nx)$. [$\phi(x) = 1$, ($x > 0$); $\phi(x) = 0$, ($x = 0$); $\phi(x) = -1$, ($x < 0$). This function is important in the Theory of Numbers, and is usually denoted by~$\sgn x$.] \Item{15.} $\phi_{n}(x) = \sin nx\pi$. [$\phi(x) = 0$ when $x$~is an integer; and $\phi(x)$~is otherwise undefined (\Ex{xxiv}.~7).] \Item{16.} If $\phi_{n}(x) = \sin (n!\, x\pi)$ then $\phi(x) = 0$ for all rational values of~$x$ (\Ex{xxiv}.~14). [The consideration of irrational values presents greater difficulties.] \Item{17.} $\phi_{n}(x) = (\cos^{2} x\pi)^{n}$. [$\phi(x) = 0$ except when $x$~is integral, when $\phi(x) = 1$.] \Item{18.} If $N \geq 1752$ then the number of days in the year $N$~\textsc{a.d.}\ is \[ \lim \{365 + (\cos^{2} \tfrac{1}{4} N\pi)^{n} - (\cos^{2} \tfrac{1}{100} N\pi)^{n} + (\cos^{2} \tfrac{1}{400} N\pi)^{n}\}. \] \end{Examples} \PageSep{149} \begin{Remark} \Paragraph{80. The bounds of a bounded aggregate.} Let $S$~be any system or aggregate of real numbers~$s$. If there is a number~$K$ such that $s \leq K$ for every~$s$ of~$S$, we say that $S$~is \emph{bounded above}. If there is a number~$k$ such that $s \geq k$ for every~$s$, we say that $S$~is \emph{bounded below}. If $S$~is both bounded above and bounded below, we say simply that $S$~is \emph{bounded}. Suppose first that $S$~is bounded above (but not necessarily below). There will be an infinity of numbers which possess the property possessed by~$K$; any number greater than~$K$, for example, possesses it. We shall prove that \emph{among these numbers there is a least},\footnote {An infinite aggregate of numbers does not necessarily possess a least member. The set consisting of the numbers \[ 1,\ \frac{1}{2},\ \frac{1}{3},\ \dots,\ \frac {1}{n},\ \dots, \] for example, has no least member.} which we shall call~$M$. This number~$M$ is not exceeded by any member of~$S$, but every number less than~$M$ is exceeded by at least one member of~$S$. 
We divide the real numbers~$\xi$ into two classes $L$~and~$R$, putting $\xi$ into $L$~or~$R$ according as it is or is not exceeded by members of~$S$. Then every~$\xi$ belongs to one and one only of the classes $L$~and~$R$. Each class exists; for any number less than any member of~$S$ belongs to~$L$, while $K$~belongs to~$R$. Finally, any member of~$L$ is less than some member of~$S$, and therefore less than any member of~$R$. Thus the three conditions of Dedekind's Theorem (\SecNo[§]{17}) are satisfied, and there is a number~$M$ dividing the classes. The number~$M$ is the number whose existence we had to prove. In the first place, $M$~cannot be exceeded by any member of~$S$. For if there were such a member~$s$ of~$S$, we could write $s = M + \eta$, where $\eta$~is positive. The number $M + \frac{1}{2}\eta$ would then belong to~$L$, because it is less than~$s$, and to~$R$, because it is greater than~$M$; and this is impossible. On the other hand, any number less than~$M$ belongs to~$L$, and is therefore exceeded by at least one member of~$S$. Thus $M$~has all the properties required. This number~$M$ we call the \emph{upper bound} of~$S$, and we may enunciate the following theorem. \begin{Result} Any aggregate~$S$ which is bounded above has an upper bound~$M$. No member of~$S$ exceeds~$M$; but any number less than~$M$ is exceeded by at least one member of~$S$. \end{Result} In exactly the same way we can prove the corresponding theorem for an aggregate bounded below (but not necessarily above). \begin{Result} Any aggregate~$S$ which is bounded below has a lower bound~$m$. No member of~$S$ is less than~$m$; but there is at least one member of~$S$ which is less than any number greater than~$m$. \end{Result} It will be observed that, when $S$~is bounded above, $M \leq K$, and when $S$~is bounded below, $m \geq k$. When $S$~is bounded, $k \leq m \leq M \leq K$. \Paragraph{81. 
The bounds of a bounded function.} Suppose that $\phi(n)$~is a function of the positive integral variable~$n$. The aggregate of all the values~$\phi(n)$ defines a set~$S$, to which we may apply all the arguments of \SecNo[§]{80}. If $S$~is bounded above, or bounded below, or bounded, we say that $\phi(n)$~is bounded \PageSep{150} above, or bounded below, or bounded. If $\phi(n)$~is bounded above, that is to say if there is a number~$K$ such that $\phi(n) \leq K$ for all values of~$n$, then there is a number~$M$ such that \Itemp{(i)} \emph{$\phi(n) \leq M$ for all values of~$n$}; \Itemp{(ii)} \emph{if $\DELTA$ is any positive number then $\phi(n) > M - \DELTA$ for at least one value of~$n$.} This number~$M$ we call the \Emph{upper bound} of~$\phi(n)$. Similarly, if $\phi(n)$~is bounded below, that is to say if there is a number~$k$ such that $\phi(n) \DPtypo{\leq}{\geq} k$ for all values of~$n$, then there is a number~$m$ such that \Itemp{(i)} \emph{$\phi(n) \geq m$ for all values of $n$}; \Itemp{(ii)} \emph{if $\DELTA$ is any positive number then $\phi(n) < m + \DELTA$ for at least one value of~$n$.} This number~$m$ we call the \Emph{lower bound} of~$\phi(n)$. If $K$~exists, $M \leq K$; if $k$~exists, $m \geq k$; and if both $k$~and~$K$ exist then \[ k \leq m \leq M \leq K. \] \Paragraph{82. The limits of indetermination of a bounded function.} Suppose that $\phi(n)$~is a bounded function, and $M$~and~$m$ its upper and lower bounds. Let us take any real number~$\xi$, and consider now the relations of inequality which may hold between~$\xi$ and the values assumed by~$\phi(n)$ for \emph{large} values of~$n$. There are three mutually exclusive possibilities: \Item{(1)} $\xi \geq \phi(n)$ for all sufficiently large values of~$n$; \Item{(2)} $\xi \leq \phi(n)$ for all sufficiently large values of~$n$; \Item{(3)} $\xi < \phi(n)$ for an infinity of values of~$n$, and also $\xi > \phi(n)$ for an infinity of values of~$n$.
In case~(1) we shall say that $\xi$~is a \emph{superior} number, in case~(2) that it is an \emph{inferior} number, and in case~(3) that it is an \emph{intermediate} number. It is plain that no superior number can be less than~$m$, and no inferior number greater than~$M$. Let us consider the aggregate of all superior numbers. It is bounded below, since none of its members are less than~$m$, and has therefore a lower bound, which we shall denote by~$\Lambda$. Similarly the aggregate of inferior numbers has an upper bound, which we denote by~$\lambda$. We call $\Lambda$~and~$\lambda$ respectively the \emph{upper and lower limits of indetermination of~$\phi(n)$ as $n$~tends to infinity}; and write \[ \Lambda = \limsup \phi(n),\quad \lambda = \liminf \phi(n). \] These numbers have the following properties: \Item{(1)} $m \leq \lambda \leq \Lambda \leq M$; \Item{(2)} $\Lambda$~and~$\lambda$ are the upper and lower bounds of the aggregate of intermediate numbers, if any such exist; \Item{(3)} if $\DELTA$ is any positive number, then $\phi(n) < \Lambda + \DELTA$ for all sufficiently large values of~$n$, and $\phi(n) > \Lambda - \DELTA$ for an infinity of values of~$n$; \Item{(4)} {\Loosen similarly $\phi(n) > \lambda - \DELTA$ for all sufficiently large values of~$n$, and $\phi(n) < \lambda + \DELTA$ for an infinity of values of~$n$;} \PageSep{151} \Item{(5)} the necessary and sufficient condition that $\phi(n)$ should tend to a limit is that $\Lambda = \lambda$, and in this case the limit is~$l$, the common value of $\lambda$~and~$\Lambda$. Of these properties, (1)~is an immediate consequence of the definitions; and we can prove~(2) as follows. If $\Lambda = \lambda = l$, there can be at most one intermediate number, viz.~$l$, and there is nothing to prove. Suppose then that $\Lambda > \lambda$. Any intermediate number~$\xi$ is less than any superior and greater than any inferior number, so that $\lambda \leq \xi \leq \Lambda$. 
But if $\lambda < \xi < \Lambda$ then $\xi$~must be intermediate, since it is plainly neither superior nor inferior. Hence there are intermediate numbers as near as we please to either $\lambda$~or~$\Lambda$. To prove~(3) we observe that $\Lambda + \DELTA$ is superior and $\Lambda - \DELTA$ intermediate or inferior. The result is then an immediate consequence of the definitions; and the proof of~(4) is substantially the same. Finally (5)~may be proved as follows. If $\Lambda = \lambda = l$, then \[ l - \DELTA < \phi(n) < l + \DELTA \] for every positive value of~$\DELTA$ and all sufficiently large values of~$n$, so that $\phi(n)\to l$. Conversely, if $\phi(n) \to l$, then the inequalities above written hold for all sufficiently large values of~$n$. Hence $l - \DELTA$ is inferior and $l + \DELTA$ superior, so that \[ \lambda \geq l - \DELTA,\quad \Lambda \leq l + \DELTA, \] and therefore $\Lambda - \lambda \leq 2\DELTA$. As $\Lambda - \lambda \geq 0$, this can only be true if $\Lambda = \lambda$. \end{Remark} \begin{Examples}{XXXII.} \Item{1.} Neither $\Lambda$~nor~$\lambda$ is affected by any alteration in any finite number of values of~$\phi(n)$. \Item{2.} If $\phi(n) = a$ for all values of~$n$, then $m = \lambda = \Lambda = M = a$. \Item{3.} If $\phi(n) = 1/n$, then $m = \lambda = \Lambda = 0$ and $M = 1$. \Item{4.} If $\phi(n) = (-1)^{n}$, then $m = \lambda = -1$ and $\Lambda = M = 1$. \Item{5.} If $\phi(n) = (-1)^{n}/n$, then $m = -1$, $\lambda = \Lambda = 0$, $M = \frac{1}{2}$. \Item{6.} If $\phi(n) = (-1)^{n}\{1 + (1/n)\}$, then $m = -2$, $\lambda = -1$, $\Lambda = 1$, $M = \frac{3}{2}$. \Item{7.} Let $\phi(n) = \sin n\theta\pi$, where $\theta > 0$. If $\theta$~is an integer then $m = \lambda = \Lambda = M = 0$. If $\theta$~is rational but not integral a variety of cases arise. Suppose, \eg, that $\theta = p/q$, $p$~and~$q$ being positive, odd, and prime to one another, and $q > 1$. 
Then $\phi(n)$~assumes the cyclical sequence of values \[ \sin(p\pi/q),\quad \sin(2p\pi/q),\ \dots,\quad \sin\{(2q - 1)p\pi/q\},\quad \sin(2qp\pi/q),\ \dots. \] It is easily verified that the numerically greatest and least values of~$\phi(n)$ are $\cos(\pi/2q)$ and $-\cos(\pi/2q)$, so that \[ m = \lambda = -\cos(\pi/2q),\quad \Lambda = M = \cos(\pi/2q). \] The reader may discuss similarly the cases which arise when $p$~and~$q$ are not both odd. The case in which $\theta$~is irrational is more difficult: it may be shown that in this case $m = \lambda = -1$ and $\Lambda = M = 1$. It may also be shown that the values of~$\phi(n)$ are scattered all over the interval $\DPmod{(-1, 1)}{[-1, 1]}$ in such a way that, if $\xi$~is \PageSep{152} \emph{any} number of the interval, then there is a sequence $n_{1}$, $n_{2}$,~\dots\ such that $\phi(n_{k}) \to \xi$ as $k \to \infty$.\footnote {A number of simple proofs of this result are given by Hardy and Littlewood, ``Some Problems of Diophantine Approximation'', \textit{Acta Mathematica}, vol.~xxxvii.} The results are very similar when $\phi(n)$~is the fractional part of~$n\theta$. \end{Examples} \begin{Remark} \Paragraph{83. The general principle of convergence for a bounded function.} The results of the preceding sections enable us to formulate a very important necessary and sufficient condition that a bounded function~$\phi(n)$ should tend to a limit, a condition usually referred to as \emph{the general principle of convergence} to a limit. \begin{Theorem}[1.] The necessary and sufficient condition that a bounded function~$\phi(n)$ should tend to a limit is that, when any positive number~$\DELTA$ is given, it should be possible to find a number~$n_{0}(\DELTA)$ such that \[ |\phi(n_{2}) - \phi(n_{1})| < \DELTA \] for all values of $n_{1}$~and~$n_{2}$ such that $n_{2} > n_{1} \geq n_{0}(\DELTA)$. \end{Theorem} In the first place, the condition is \emph{necessary}. 
For if $\phi(n) \to l$ then we can find~$n_{0}$ so that \[ l - \tfrac{1}{2}\DELTA < \phi(n) < l + \tfrac{1}{2}\DELTA \] when $n \geq n_{0}$, and so \[ |\phi(n_{2}) - \phi(n_{1})| < \DELTA \Tag{(1)} \] when $n_{1} \geq n_{0}$ and $n_{2} \geq n_{0}$. In the second place, the condition is \emph{sufficient}. In order to prove this we have only to show that it involves $\lambda = \Lambda$. But if $\lambda < \Lambda$ then there are, however small $\DELTA$~may be, infinitely many values of~$n$ such that $\phi(n) < \lambda + \DELTA$ and infinitely many such that $\phi(n) > \Lambda - \DELTA$; and therefore we can find values of $n_{1}$~and~$n_{2}$, each greater than any assigned number~$n_{0}$, and such that \[ \phi(n_{2}) - \phi(n_{1}) > \Lambda - \lambda - 2\DELTA, \] which is greater than $\frac{1}{2}(\Lambda - \lambda)$ if $\DELTA$~is small enough. This plainly contradicts the inequality~\Eq{(1)}. Hence $\lambda = \Lambda$, and so $\phi(n)$~tends to a limit. \Paragraph{84. Unbounded functions.} So far we have restricted ourselves to bounded functions; but the `general principle of convergence' is the same for unbounded as for bounded functions, and the words `\emph{a bounded function}' may be omitted from the enunciation of Theorem~1. In the first place, if $\phi(n)$~tends to a limit~$l$ then it is certainly bounded; for all but a finite number of its values are less than $l + \DELTA$ and greater than $l - \DELTA$. In the second place, if the condition of Theorem~1 is satisfied, we have \[ |\phi(n_{2}) - \phi(n_{1})| < \DELTA \] whenever $n_{1} \geq n_{0}$ and $n_{2} \geq n_{0}$. Let us choose some particular value~$n_{1}$ greater than~$n_{0}$. Then \[ \phi(n_{1}) - \DELTA < \phi(n_{2}) < \phi(n_{1}) + \DELTA \] when $n_{2} \geq n_{0}$. Hence $\phi(n)$~is bounded; and so the second part of the proof of the last section applies also. \PageSep{153} The theoretical importance of the `general principle of convergence' can hardly be overestimated. 
Like the theorems of \SecNo[§]{69}, it gives us a means of deciding whether a function~$\phi(n)$ tends to a limit or not, without requiring us to be able to tell beforehand what the limit, if it exists, must be; and it has not the limitations inevitable in theorems of such a special character as those of \SecNo[§]{69}. But in elementary work it is generally possible to dispense with it, and to obtain all we want from these special theorems. And it will be found that, in spite of the importance of the principle, practically no applications are made of it in the chapters which follow.\footnote {A few proofs given in \Ref{Ch.}{VIII} can be simplified by the use of the principle.} We will only remark that, if we suppose that \[ \phi(n) = s_{n} = u_{1} + u_{2} + \dots + u_{n}, \] we obtain at once a necessary and sufficient condition for the convergence of an infinite series, viz: \begin{Theorem}[2.] The necessary and sufficient condition for the convergence of the series $u_{1} + u_{2} + \dots$ is that, given any positive number~$\DELTA$, it should be possible to find~$n_{0}$ so that \[ |u_{n_{1}+1} + u_{n_{1}+2} + \dots + u_{n_{2}}| < \DELTA \] for all values of $n_{1}$~and~$n_{2}$ such that $n_{2} > n_{1} \geq n_{0}$. \end{Theorem} \end{Remark} \Paragraph{85. Limits of complex functions and series of complex terms.} In this chapter we have, up to the present, concerned ourselves only with real functions of~$n$ and series all of whose terms are real. There is however no difficulty in extending our ideas and definitions to the case in which the functions or the terms of the series are complex. Suppose that $\phi(n)$~is complex and equal to \[ \rho(n) + i\sigma(n), \] where $\rho(n)$,~$\sigma(n)$ are real functions of~$n$. Then \emph{if $\rho(n)$~and~$\sigma(n)$ converge respectively to limits $r$~and~$s$ as $n \to \infty$, we shall say that $\phi(n)$~converges to the limit $l = r + is$, and write} \[ \lim\phi(n) = l. 
\] Similarly, when $u_{n}$~is complex and equal to $v_{n} + iw_{n}$, we shall say that \emph{the series \[ u_{1} + u_{2} + u_{3} + \dots \] is convergent and has the sum $l = r + is$, if the series \[ v_{1} + v_{2} + v_{3} + \dots,\quad w_{1} + w_{2} + w_{3} + \dots \] are convergent and have the sums $r$,~$s$ respectively}. \PageSep{154} To say that $u_{1} + u_{2} + u_{3} + \dots$ is convergent and has the sum~$l$ is of course the same as to say that the sum \[ s_{n} = u_{1} + u_{2} + \dots + u_{n} = (v_{1} + v_{2} + \dots + v_{n}) + i(w_{1} + w_{2} + \dots + w_{n}) \] converges to the limit~$l$ as $n \to \infty$. In the case of real functions and series we also gave definitions of \emph{divergence} and \emph{oscillation}, \emph{finite} or \emph{infinite}. But in the case of complex functions and series, where we have to consider the behaviour both of~$\rho(n)$ and of~$\sigma(n)$, there are so many possibilities that this is hardly worth while. When it is necessary to make further distinctions of this kind, we shall make them by stating the way in which the real or imaginary parts behave when taken separately. \Paragraph{86.} The reader will find no difficulty in proving such theorems as the following, which are obvious extensions of theorems already proved for real functions and series. \Item{(1)} If $\lim\phi(n) = l$ then $\lim\phi(n + p) = l$ for any fixed value of~$p$. \Item{(2)} If $u_{1} + u_{2} + \dots$ is convergent and has the sum~$l$, then $a + b + c + \dots + k + u_{1} + u_{2} + \dots$ is convergent and has the sum $a + b + c + \dots + k + l$, and $u_{p+1} + u_{p+2} + \dots$ is convergent and has the sum $l - u_{1} - u_{2} - \dots - u_{p}$. \Item{(3)} If $\lim\phi(n) = l$ and $\lim\psi(n) = m$, then \[ \lim\{\phi(n) + \psi(n)\} = l + m. \] \Item{(4)} If $\lim\phi(n) = l$, then $\lim k\phi(n) = kl$. \Item{(5)} If $\lim\phi(n) = l$ and $\lim\psi(n) = m$, then $\lim \phi(n)\psi(n) = lm$. 
\Item{(6)} If $u_{1} + u_{2} + \dots$ converges to the sum~$l$, and $v_{1} + v_{2} + \dots$ to the sum~$m$, then $(u_{1} + v_{1}) + (u_{2}+ v_{2}) + \dots$ converges to the sum~$l + m$. \Item{(7)} If $u_{1} + u_{2} + \dots$ converges to the sum~$l$ then $ku_{1} + ku_{2} + \dots$ converges to the sum~$kl$. \Item{(8)} If $u_{1} + u_{2} + u_{3} + \dots$ is convergent then $\lim u_{n} = 0$. \Item{(9)} If $u_{1} + u_{2} + u_{3} + \dots$ is convergent, then so is any series formed by grouping the terms in brackets, and the sums of the two series are the same. \PageSep{155} \begin{Remark} As an example, let us prove theorem~(5). Let \[ \phi(n) = \rho(n) + i\sigma(n),\quad \psi(n) = \rho'(n) + i\sigma'(n),\quad l = r + is,\quad m = r' + is'. \] Then \[ \rho(n) \to r,\quad \sigma(n) \to s,\quad \rho'(n) \to r',\quad \sigma'(n) \to s'. \] But \[ \phi(n)\psi(n) = \rho\rho' - \sigma\sigma' + i(\rho\sigma' + \rho'\sigma), \] and \[ \rho\rho' - \sigma\sigma' \to rr' - ss',\quad \rho\sigma' + \rho'\sigma \to rs' + r's; \] so that \[ \phi(n)\psi(n) \to rr' - ss' + i(rs' + r's), \] \ie \[ \phi(n)\psi(n) \to (r + is)(r' + is') = lm. \] \end{Remark} The following theorems are of a somewhat different character. \begin{Result} \Item{(10)} In order that $\phi(n) = \rho(n) + i\sigma(n)$ should converge to zero as $n \to \infty$, it is necessary and sufficient that \[ |\phi(n)| = \sqrtbr{\{\rho(n)\}^{2} + \{\sigma(n)\}^{2}} \] should converge to zero. \end{Result} \begin{Remark} If $\rho(n)$ and~$\sigma(n)$ both converge to zero then it is plain that $\sqrtp{\rho^{2} + \sigma^{2}}$ does so. The converse follows from the fact that the numerical value of~$\rho$ or~$\sigma$ cannot be greater than $\sqrtp{\rho^{2} + \sigma^{2}}$. \end{Remark} \begin{Result} \Item{(11)} More generally, in order that $\phi(n)$~should converge to a limit~$l$, it is necessary and sufficient that \[ |\phi(n) - l| \] should converge to zero. 
\end{Result} \begin{Remark} For $\phi(n) - l$ converges to zero, and we can apply~(10). \end{Remark} \begin{Result} \Item{(12)} Theorems {\upshape1}~and~{\upshape2} of \SecNo[§§]{83}--\SecNo{84} are still true when $\phi(n)$~and~$u_{n}$ are complex. \end{Result} \begin{Remark} We have to show that the necessary and sufficient condition that $\phi(n)$ should tend to~$l$ is that \[ |\phi(n_{2}) - \phi(n_{1})| < \DELTA \Tag{(1)} \] when $n_{2} > n_{1} \geq n_{0}$. If $\phi(n) \to l$ then $\rho(n) \to r$ and $\sigma(n) \to s$, and so we can find numbers $n_{0}'$ and $n_{0}''$ depending on~$\DELTA$ and such that \[ |\rho(n_{2}) - \rho(n_{1})| < \tfrac{1}{2}\DELTA,\quad |\sigma(n_{2}) - \sigma(n_{1})| < \tfrac{1}{2}\DELTA, \] {\Loosen the first inequality holding when $n_{2} > n_{1} \geq n_{0}'$, and the second when $n_{2} > n_{1} \geq n_{0}''$. Hence} \[ |\phi(n_{2}) - \phi(n_{1})| \leq |\rho(n_{2}) - \rho(n_{1})| + |\sigma(n_{2}) - \sigma(n_{1})| < \DELTA \] when $n_{2} > n_{1} \geq n_{0}$, where $n_{0}$~is the greater of $n_{0}'$~and~$n_{0}''$. Thus the condition~\Eq{(1)} is \emph{necessary}. To prove that it is \emph{sufficient} we have only to observe that \[ |\rho(n_{2}) - \rho(n_{1})| \leq |\phi(n_{2}) - \phi(n_{1})| < \DELTA \] when $n_{2} > n_{1} \geq n_{0}$. Thus $\rho(n)$~tends to a limit~$r$, and in the same way it may be shown that $\sigma(n)$~tends to a limit~$s$. \end{Remark} \PageSep{156} \Paragraph{87. The limit of~$z^{n}$ as $n \to \infty$, $z$~being any complex number.} Let us consider the important case in which $\phi(n) = z^{n}$. This problem has already been discussed for real values of~$z$ in~\SecNo[§]{72}. If $z^{n} \to l$ then $z^{n+1} \to l$, by~\Eq{(1)} of \SecNo[§]{86}. But, by~\Eq{(4)} of \SecNo[§]{86}, \[ z^{n+1} = zz^{n} \to zl, \] and therefore $l = zl$, which is only possible if (\ia)~$l = 0$ or (\ib)~$z = 1$. If $z = 1$ then $\lim z^{n} = 1$. Apart from this special case the limit, if it exists, can only be zero. 
Now if $z = r(\cos\theta + i\sin\theta)$, where $r$~is positive, then \[ z^{n} = r^{n} (\cos n\theta + i\sin n\theta), \] so that $|z^{n}| = r^{n}$. Thus $|z^{n}|$~tends to zero if and only if $r < 1$; and it follows from~\Eq{(10)} of \SecNo[§]{86} that \[ \lim z^{n} = 0 \] if and only if $r < 1$. In no other case does $z^{n}$~converge to a limit, except when $z = 1$ and $z^n \to 1$. \Paragraph{88. The geometric series $1 + z + z^{2} + \dots$ when $z$~is complex.} Since \[ s_{n} = 1 + z + z^{2} + \dots + z^{n-1} = (1 - z^{n})/(1 - z), \] {\Loosen unless $z = 1$, when the value of~$s_{n}$ is~$n$, it follows that \emph{the series $1 + z + z^{2} + \dots$ is convergent if and only if $r = |z| < 1$. And its sum when convergent is $1/(1 - z)$}.} Thus if $z = r(\cos\theta + i\sin\theta) = r\Cis\theta$, and $r < 1$, we have \begin{align*} 1 + z + z^{2} + \dots &= 1/(1 - r\Cis\theta), \intertext{or} 1 + r \Cis\theta + r^{2} \Cis 2\theta + \dots &= 1/(1 - r\Cis\theta)\\ &= (1 - r\cos\theta + ir\sin\theta)/(1 - 2r\cos\theta + r^{2}). \end{align*} Separating the real and imaginary parts, we obtain \begin{align*} 1 + r\cos\theta + r^{2}\cos 2\theta + \dots &= (1 - r\cos\theta)/(1 - 2r\cos\theta + r^{2}),\\ r\sin\theta + r^{2}\sin 2\theta + \dots &= r\sin\theta/(1 - 2r\cos\theta + r^{2}), \end{align*} provided $r < 1$. If we change~$\theta$ into~$\theta + \pi$, we see that these results hold also for negative values of~$r$ numerically less than~$1$. Thus they hold when $-1 < r < 1$. \PageSep{157} \begin{Examples}{XXXIII.} \Item{1.} Prove directly that $\phi(n) = r^{n} \cos n\theta$ converges to~$0$ when $r < 1$ and to~$1$ when $r = 1$ and $\theta$~is a multiple of~$2\pi$. Prove further that if $r = 1$ and $\theta$~is not a multiple of~$2\pi$, then $\phi(n)$~oscillates finitely; if $r > 1$ and $\theta$~is a multiple of~$2\pi$, then $\phi(n) \to +\infty$; and if $r > 1$ and $\theta$~is not a multiple of~$2\pi$, then $\phi(n)$~oscillates infinitely. 
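The four behaviours distinguished in Ex. 1 are easy to sample numerically. The Python sketch below is an editorial illustration only (not the proof the example asks for); the particular values $r = 0.9$, $1$, $1.1$ and $\theta = 2$ radians are arbitrary choices.

```python
import math

def phi(n, r, theta):
    # phi(n) = r^n cos(n*theta)
    return r**n * math.cos(n * theta)

# r < 1: phi(n) -> 0
assert abs(phi(200, 0.9, 1.0)) < 1e-8

# r = 1, theta a multiple of 2*pi: phi(n) = 1 for every n
assert phi(57, 1.0, 0.0) == 1.0

# r = 1, theta not a multiple of 2*pi: bounded, but keeps oscillating
values = [phi(n, 1.0, 2.0) for n in range(1000)]
assert max(values) > 0.9 and min(values) < -0.9
assert all(abs(v) <= 1 for v in values)

# r > 1, theta not a multiple of 2*pi: oscillates with unbounded swings
values = [phi(n, 1.1, 2.0) for n in range(200)]
assert max(values) > 1e3 and min(values) < -1e3
```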
\Item{2.} Establish a similar series of results for $\phi(n) = r^{n} \sin n\theta$. \Item{3.} Prove that \begin{gather*} z^{m} + z^{m+1} + \dots = z^{m}/(1 - z),\\ z^{m} + 2z^{m+1} + 2z^{m+2} + \dots = z^{m}(1 + z)/(1 - z), \end{gather*} if and only if $|z| < 1$. Which of the theorems of \SecNo[§]{86} do you use? \Item{4.} Prove that if $-1 < r < 1$ then \[ 1 + 2r\cos\theta + 2r^{2}\cos 2\theta + \dots = (1 - r^{2})/(1 - 2r\cos\theta + r^{2}). \] \Item{5.} The series \[ 1 + \frac{z}{1 + z} + \left(\frac{z}{1 + z}\right)^{2} + \dots \] converges to the sum $1\bigg/\left(1 - \dfrac{z}{1 + z}\right) = 1 + z$ if $|z/(1 + z) | < 1$. Show that this condition is equivalent to the condition that $z$~has a real part greater than~$-\frac{1}{2}$. \end{Examples} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER IV.} \begin{Examples}{} \Item{1.} The function~$\phi(n)$ takes the values $1$, $0$, $0$, $0$, $1$, $0$, $0$, $0$, $1$,~\dots\ when $n = 0$, $1$, $2$,~\dots. Express $\phi(n)$ in terms of~$n$ by a formula which does not involve trigonometrical functions. [$\phi(n) = \frac{1}{4}\{1 + (-1)^{n} + i^{n} + (-i)^{n}\}$.] \Item{2.} If $\phi(n)$~steadily increases, and $\psi(n)$~steadily decreases, as $n$~tends to~$\infty$, and if $\psi(n) > \phi(n)$ for all values of~$n$, then both $\phi(n)$~and~$\psi(n)$ tend to limits, and $\lim\phi(n) \leq \lim\psi(n)$. [This is an \DPchg{intermediate}{immediate} corollary from~\SecNo[§]{69}.] \Item{3.} Prove that, if \[ \phi(n) = \left(1 + \frac{1}{n}\right)^{n},\quad \psi(n) = \left(1 - \frac{1}{n}\right)^{-n}, \] then $\phi(n + 1) > \phi(n)$ and $\psi(n + 1) < \psi(n)$. [The first result has already been proved in~\SecNo[§]{73}.] 
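The monotonicity claimed in Ex. 3 (and the squeezing toward a common limit anticipated in Ex. 4) can be checked numerically. The following Python sketch is an editorial illustration under floating-point arithmetic; the range of $n$ tested is an arbitrary choice.

```python
def phi(n):
    return (1 + 1/n) ** n

def psi(n):
    return (1 - 1/n) ** (-n)

for n in range(2, 100):
    assert phi(n + 1) > phi(n)   # phi steadily increases
    assert psi(n + 1) < psi(n)   # psi steadily decreases
    assert psi(n) > phi(n)       # psi stays above phi

# the gap closes: both functions approach a common limit (about 2.71828)
assert psi(10**6) - phi(10**6) < 1e-5
```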
\Item{4.} Prove also that $\psi(n) > \phi(n)$ for all values of~$n$: and deduce (by means of the preceding examples) that both $\phi(n)$~and~$\psi(n)$ tend to limits as $n$~tends to~$\infty$.\footnote {A proof that $\lim\{\psi(n) - \phi(n)\} = 0$, and that therefore each function tends to the limit~$e$, will be found in Chrystal's \textit{Algebra}, vol.~ii, p.~78. We shall however prove this in \Ref{Ch.}{IX} by a different method.} \Item{5.} The arithmetic mean of the products of all distinct pairs of positive integers whose sum is~$n$ is denoted by~$S_{n}$. Show that $\lim(S_{n}/n^{2}) = 1/6$. \MathTrip{1903.} \PageSep{158} \Item{6.} Prove that if $x_{1} = \frac{1}{2}\{x + (A/x)\}$, $x_{2} = \frac{1}{2}\{x_{1} + (A/x_{1})\}$, and so on, $x$~and~$A$ being positive, then $\lim x_{n} = \sqrt{A}$. [Prove first that $\dfrac{x_{n} - \sqrt{A}}{x_{n} + \sqrt{A}} = \biggl(\dfrac{x - \sqrt{A}}{x + \sqrt{A}}\biggr)^{2^{n}}$.] \Item{7.} If $\phi(n)$~is a positive integer for all values of~$n$, and tends to~$\infty$ with~$n$, then $x^{\phi(n)}$~tends to~$0$ if $0 < x < 1$ and to~$+\infty$ if $x > 1$. Discuss the behaviour of~$x^{\phi(n)}$, as $n \to \infty$, for other values of~$x$. \Item{8.\footnotemark} If $a_{n}$~increases or decreases steadily as $n$~increases, then the same is true of $(a_{1} + a_{2} + \dots + a_{n})/n$.\footnotetext {Exs.\ 8--12 are taken from Bromwich's \textit{Infinite Series}.}% \Item{9.} If $x_{n+1} = \sqrtp{k + x_{n}}$, and $k$~and~$x_{1}$ are positive, then the sequence $x_{1}$,~$x_{2}$, $x_{3}$,~\dots\ is an increasing or decreasing sequence according as $x_{1}$~is less than or greater than~$\alpha$, the positive root of the equation $x^{2} = x + k$; and in either case $x_{n} \to \alpha$ as $n \to \infty$. 
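Both recurrences just stated (Exs. 6 and 9) can be iterated numerically. In this editorial Python sketch the values $A = 10$, $x_{1} = 3$ and $k = 2$, $x_{1} = 5$ are arbitrary choices; with $k = 2$ the positive root of $x^{2} = x + k$ is $\alpha = 2$.

```python
import math

# Ex. 6: the iteration x_{n+1} = (x_n + A/x_n)/2 tends to sqrt(A)
A, x = 10.0, 3.0          # A > 0, starting value x > 0
for _ in range(20):
    x = 0.5 * (x + A / x)
assert abs(x - math.sqrt(A)) < 1e-12

# Ex. 9: x_{n+1} = sqrt(k + x_n) tends to alpha, the positive root of x^2 = x + k
k, x = 2.0, 5.0           # with k = 2 the root is alpha = 2
for _ in range(60):
    x = math.sqrt(k + x)
assert abs(x - 2.0) < 1e-12

# the sequence increases when x_1 < alpha and decreases when x_1 > alpha
assert math.sqrt(k + 0.5) > 0.5 and math.sqrt(k + 5.0) < 5.0
```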
\Item{10.} If $x_{n+1} = k/(1 + x_{n})$, and $k$~and~$x_{1}$ are positive, then the sequences $x_{1}$,~$x_{3}$, $x_{5}$,~\dots\ and $x_{2}$,~$x_{4}$, $x_{6}$,~\dots\ are one an increasing and the other a decreasing sequence, and each sequence tends to the limit~$\alpha$, the positive root of the equation $x^{2} + x = k$. \Item{11.} The function~$f(x)$ is increasing and continuous (see \Ref{Ch.}{V}) for all values of~$x$, and a sequence $x_{1}$,~$x_{2}$, $x_{3}$,~\dots\ is defined by the equation $x_{n+1} = f(x_{n})$. Discuss on general graphical grounds the question as to whether $x_{n}$~tends to a root of the equation $x = f(x)$. Consider in particular the case in which this equation has only one root, distinguishing the cases in which the curve $y = f(x)$ crosses the line $y = x$ from above to below and from below to above. \Item{12.} If $x_{1}$,~$x_{2}$ are positive and $x_{n+1} = \frac{1}{2} (x_{n} + x_{n-1})$, then the sequences $x_{1}$,~$x_{3}$, $x_{5}$,~\dots\ and $x_{2}$,~$x_{4}$, $x_{6}$,~\dots\ are one a decreasing and the other an increasing sequence, and they have the common limit $\frac{1}{3}(x_{1} + 2x_{2})$. \Item{13.} Draw a graph of the function~$y$ defined by the equation \[ y = \lim_{n \to \infty} \frac{x^{2n} \sin\frac{1}{2}\pi x + x^{2}}{x^{2n} + 1}. \] \MathTrip{1901.} \Item{14.} The function \[ y = \lim_{n \to \infty} \frac{1}{1 + n\sin^{2} \pi x} \] is equal to~$0$ except when $x$~is an integer, and then equal to~$1$. The function \[ y = \lim_{n \to \infty} \frac{\psi(x) + n\phi(x) \sin^{2}\pi x}{1 + n\sin^{2}\pi x} \] is equal to~$\phi(x)$ unless $x$~is an integer, and then equal to~$\psi(x)$. \Item{15.} Show that the graph of the function \[ y = \lim_{n \to \infty} \frac{x^{n}\phi(x) + x^{-n}\psi(x)}{x^{n} + x^{-n}} \] \PageSep{159} is composed of parts of the graphs of $\phi(x)$~and~$\psi(x)$, together with (as a rule) two isolated points. Is $y$~defined when (\ia)~$x = 1$, (\ib)~$x = -1$, (\ic)~$x = 0$? 
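The pointwise limits in Exs. 13–15 can be probed numerically by taking $n$ large. This editorial Python sketch treats the first function of Ex. 14, using $n = 10^{12}$ as a stand-in for the limit (an arbitrary choice):

```python
import math

def y(x, n=10**12):
    # y = 1/(1 + n sin^2(pi x)) with a large finite n approximating the limit
    return 1 / (1 + n * math.sin(math.pi * x) ** 2)

# at an integer the limit is 1 (floating-point sin(3*pi) is only ~1e-16)
assert y(3.0) > 0.999
# away from the integers the limit is 0
assert y(0.5) < 1e-11
```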
\Item{16.} Prove that the function~$y$ which is equal to~$0$ when $x$~is rational, and to~$1$ when $x$~is irrational, may be represented in the form \[ y = \lim_{m \to \infty} \sgn\{\sin^{2}(m!\, \pi x)\}, \] where \[ \sgn x = \lim_{n \to \infty} (2/\pi)\arctan(nx), \] {\Loosen as in \Ex{xxxi}.~14. [If $x$~is rational then $\sin^{2}(m!\, \pi x)$, and therefore $\sgn\{\sin^{2}(m!\, \pi x)\}$, is equal to zero from a certain value of~$m$ onwards: if $x$~is irrational then $\sin^{2}(m!\, \pi x)$ is always positive, and so $\sgn\{\sin^{2}(m!\, \pi x)\}$ is always equal to~$1$.]} Prove that $y$~may also be represented in the form \[ 1 - \lim_{m \to\infty} [\lim_{n \to\infty}\{\cos(m!\, \pi x)\}^{2n}]. \] \Item{17.} Sum the series \[ \sum_{1}^{\infty} \frac{1}{\nu(\nu + 1)},\quad \sum_{1}^{\infty} \frac{1}{\nu(\nu + 1)\dots(\nu + k)}. \] [Since \[ %[** TN: Using \bigg to squeeze a bit of horizontal space] \frac{1}{\nu(\nu + 1)\dots(\nu + k)} = \frac{1}{k} \biggl\{\frac{1}{\nu(\nu + 1)\dots(\nu + k - 1)} - \frac{1}{(\nu + 1)(\nu + 2)\dots(\nu + k)}\biggr\}, \] we have \[ \sum_{1}^{n} \frac{1}{\nu(\nu + 1)\dots(\nu + k)} = \frac{1}{k}\left\{\frac{1}{1·2\dots k} - \frac{1}{(n + 1)(n + 2)\dots (n + k)}\right\} \] and so \[ \sum_{1}^{\infty} \frac{1}{\nu(\nu + 1)\dots (\nu + k)} = \frac{1}{k(k!)}.] \] \Item{18.} If $|z| < |\alpha|$, then \begin{align*} \frac{L}{z - \alpha} = -&\frac{L}{\alpha}\left(1 + \frac{z}{\alpha} + \frac{z^{2}}{\alpha^{2}} + \dots\right); \\ \intertext{and if $|z|>|\alpha|$, then} \frac{L}{z - \alpha} = &\frac{L}{z}\left(1 + \frac{\alpha}{z} + \frac{\alpha^{2}}{z^{2}} + \dots\right). \end{align*} \Item{19.} \Topic{Expansion of $(Az + B)/(az^{2} + 2bz + c)$ in powers of~$z$.} Let $\alpha$,~$\beta$ be the roots of $az^{2} + 2bz + c = 0$, so that $az^{2} + 2bz + c = a(z - \alpha)(z - \beta)$. We shall suppose that $A$,~$B$, $a$,~$b$,~$c$ are all real, and $\alpha$~and~$\beta$ unequal. 
It is then easy to verify that \[ \frac{Az + B}{az^{2} + 2bz + c} = \frac{1}{a(\alpha - \beta)} \left(\frac{A\alpha + B}{z - \alpha} - \frac{A\beta + B}{z - \beta}\right). \] There are two cases, according as $b^{2} > ac$ or $b^{2} < ac$. \Item{(1)} If $b^{2} > ac$ then the roots $\alpha$,~$\beta$ are real and distinct. If $|z|$~is less than either $|\alpha|$ or~$|\beta|$ we can expand $1/(z - \alpha)$ and $1/(z - \beta)$ in ascending powers of~$z$ (Ex.~18). If $|z|$~is greater than either $|\alpha|$ or~$|\beta|$ we must expand in descending powers of~$z$; while if $|z|$~lies between $|\alpha|$ and~$|\beta|$ one fraction must be expanded in ascending and one in descending powers of~$z$. The reader should write down the actual results. If $|z|$~is equal to~$|\alpha|$ or~$|\beta|$ then no such expansion is possible. \PageSep{160} \Item{(2)} If $b^{2} < ac$ then the roots are conjugate complex numbers (\Ref{Ch.}{III} \SecNo[§]{43}), and we can write \[ \alpha = \rho\Cis\phi, \quad \beta = \rho\Cis(-\phi), \] where $\rho^{2} = \alpha\beta = c/a$, $\rho\cos\phi = \frac{1}{2}(\alpha + \beta) = - b/a$, so that $\cos\phi = -\sqrtp{b^{2}/ac}$, $\sin\phi = \sqrtb{1 - (b^{2}/ac)}$. If $|z| < \rho$ then each fraction may be expanded in ascending powers of~$z$. The coefficient of~$z^{n}$ will be found to be \[ \frac{A\rho\sin n\phi + B\sin\{(n + 1)\phi\}}{a\rho^{n+1} \sin\phi}. \] If $|z| > \rho$ we obtain a similar expansion in descending powers, while if $|z| = \rho$ no such expansion is possible. \Item{20.} Show that if $|z| < 1$ then \[ 1 + 2z + 3z^{2} + \dots + (n + 1)z^{n} + \dots = 1/(1 - z)^{2}. \] [The sum to $n$~terms is $\dfrac{1 - z^{n}}{(1 - z)^{2}} - \dfrac{nz^{n}}{1 - z}$.] \Item{21.} Expand $L/(z - \alpha)^{2}$ in powers of~$z$, ascending or descending according as $|z| < |\alpha|$ or $|z| > |\alpha|$. 
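The closed forms in Ex. 20, which also underlie the expansion asked for in Ex. 21, are easy to verify numerically. In this editorial Python sketch the complex value $z = 0.3 + 0.4i$ (so $|z| = 0.5 < 1$) and the truncation points are arbitrary choices.

```python
# Ex. 20: partial sum 1 + 2z + 3z^2 + ... + n z^{n-1}
def partial_sum(z, n):
    return sum((m + 1) * z**m for m in range(n))

z = 0.3 + 0.4j            # |z| = 0.5 < 1
n = 30

# check against the stated sum to n terms: (1 - z^n)/(1-z)^2 - n z^n/(1-z)
closed = (1 - z**n) / (1 - z)**2 - n * z**n / (1 - z)
assert abs(partial_sum(z, n) - closed) < 1e-12

# and against the limit 1/(1 - z)^2 for |z| < 1
assert abs(partial_sum(z, 200) - 1 / (1 - z)**2) < 1e-12
```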
\Item{22.} Show that if $b^{2} = ac$ and $|az| < |b|$ then \[ \frac{Az + B}{az^{2} + 2bz + c} = \sum_{0}^{\infty} p_{n}z^{n}, \] where $p_{n} = \{(-a)^{n}/b^{n+2}\} \{(n + 1)aB - nbA\}$; and find the corresponding expansion, in descending powers of~$z$, which holds when $|az| > |b|$. \Item{23.} Verify the result of Ex.~19 in the case of the fraction $1/(1 + z^{2})$. [We have $1/(1 + z^{2}) = \sum z^{n} \sin\{\frac{1}{2}(n + 1)\pi\} = 1 - z^{2} + z^{4} - \dots$.] \Item{24.} Prove that if $|z| < 1$ then \[ \frac{1}{1 + z + z^{2}} = \frac{2}{\sqrt{3}} \sum_{0}^{\infty} z^{n} \sin\{\tfrac{2}{3}(n + 1)\pi\}. \] \Item{25.} Expand $(1 + z)/(1 + z^{2})$, $(1 + z^{2})/(1 + z^{3})$ and $(1 + z + z^{2})/(1 + z^{4})$ in ascending powers of~$z$. For what values of~$z$ do your results hold? \Item{26.} If $a/(a + bz + cz^{2}) = 1 + p_{1}z + p_{2}z^{2} + \dots$ then \[ 1 + p_{1}^{2}z + p_{2}^{2}z^{2} + \dots = \frac{a + cz}{a - cz}\, \frac{a^{2}}{a^{2} - (b^{2} - 2ac)z + c^{2}z^{2}}. \] \MathTrip{1900.} \Item{27.} If $\lim\limits_{n \to \infty} s_{n} = l$ then \[ \lim_{n \to \infty} \frac{s_{1} + s_{2} + \dots + s_{n}}{n} = l. \] [Let $s_{n} = l + t_{n}$. Then we have to prove that $(t_{1} + t_{2} + \dots + t_{n})/n$ tends to zero if $t_{n}$~does so. \PageSep{161} We divide the numbers $t_{1}$, $t_{2}$,~\dots\Add{,} $t_{n}$ into two sets $t_{1}$, $t_{2}$,~\dots, $t_{p}$ and $t_{p+1}$, $t_{p+2}$,~\dots, $t_{n}$. Here we suppose that $p$~is a function of~$n$ which tends to~$\infty$ as $n \to \infty$, but \emph{more slowly than~$n$}, so that $p \to \infty$ and $p/n \to 0$: \eg\ we might suppose $p$ to be the integral part of~$\sqrt{n}$. Let $\DPtypo{\epsilon}{\DELTA}$ be any positive number. However small $\DELTA$~may be, we can choose~$n_{0}$ so that $t_{p+1}$, $t_{p+2}$,~\dots, $t_{n}$ are all numerically less than~$\frac{1}{2}\DELTA$ when $n \geq n_{0}$, and so \[ |(t_{p+1} + t_{p+2} + \dots + t_{n})/n| < \tfrac{1}{2}\DELTA(n - p)/n < \tfrac{1}{2} \DELTA. 
\] But, if $A$~is the greatest of the moduli of all the numbers $t_{1}$, $t_{2}$,~\dots, we have \[ |(t_{1} + t_{2} + \dots + t_{p})/n| < pA/n, \] and this also will be less than~$\frac{1}{2}\DELTA$ when $n \geq n_{0}$, if $n_{0}$~is large enough, since $p/n \to 0$ as $n \to \infty$. Thus \[ |(t_{1} + t_{2} + \dots + t_{n})/n| \leq |(t_{1} + t_{2} + \dots + t_{p})/n| + |(t_{p+1} + \dots + t_{n})/n| < \DELTA \] when $n \geq n_{0}$; which proves the theorem. The reader, if he desires to become expert in dealing with questions about limits, should study the argument above with great care. It is very often necessary, in proving the limit of some given expression to be zero, to split it into two parts which have to be proved to have the limit zero in slightly different ways. When this is the case the proof is never very easy. The point of the proof is this: we have to prove that $(t_{1} + t_{2} + \dots + t_{n})/n$ is small when $n$~is large, the~$t$'s being small when their suffixes are large. We split up the terms in the bracket into two groups. The terms in the first group are not all small, but their number is small compared with~$n$. The number in the second group is \emph{not} small compared with~$n$, but the terms are all small, and their number at any rate less than~$n$, so that their sum is small compared with~$n$. Hence each of the parts into which $(t_{1} + t_{2} + \dots + t_{n})/n$ has been divided is small when $n$~is large.] \Item{28.} If $\phi(n) - \phi(n - 1)\to l$ as $n \to \infty$, then $\phi(n)/n \to l$. [If $\phi(n) = s_{1} + s_{2} + \dots + s_{n}$ then $\phi(n) - \phi(n - 1) = s_{n}$, and the theorem reduces to that proved in the last example.] \Item{29.} If $s_{n} = \frac{1}{2}\{1 - (-1)^{n}\}$, so that $s_{n}$~is equal to~$1$ or~$0$ according as $n$~is odd or even, then $(s_{1} + s_{2} + \dots + s_{n})/n \to \frac{1}{2}$ as $n \to \infty$. [This example proves that the converse of Ex.~27 is not true: for $s_{n}$~oscillates as $n \to \infty$.] 
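The theorem of Ex. 27 and the failure of its converse (Ex. 29) can both be observed numerically. The editorial Python sketch below uses $s_{n} = 1 + 1/n$ as an example of a convergent sequence (an arbitrary choice) alongside the oscillating sequence of Ex. 29.

```python
# arithmetic mean of s_1, ..., s_n
def mean_of_partial(s, n):
    return sum(s(m) for m in range(1, n + 1)) / n

# Ex. 27: if s_n -> l then the means tend to l; here s_n = 1 + 1/n -> 1
assert abs(mean_of_partial(lambda m: 1 + 1/m, 10**5) - 1) < 1e-3

# Ex. 29: s_n = (1 - (-1)^n)/2 oscillates, yet the means tend to 1/2,
# so the converse of Ex. 27 is false
s = lambda m: (1 - (-1)**m) / 2
assert abs(mean_of_partial(s, 10**5) - 0.5) < 1e-4
```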
\Item{30.} If $c_{n}$,~$s_{n}$ denote the sums of the first $n$~terms of the series \[ \tfrac{1}{2} + \cos\theta + \cos 2\theta + \dots,\quad \sin\theta + \sin 2\theta + \dots, \] then \[ \lim (c_{1} + c_{2} + \dots + c_{n})/n = 0,\quad \lim (s_{1} + s_{2} + \dots + s_{n})/n = \tfrac{1}{2} \cot\tfrac{1}{2} \theta. \] \end{Examples} \PageSep{162} \Chapter[LIMITS OF FUNCTIONS OF A CONTINUOUS VARIABLE] {V}{LIMITS OF FUNCTIONS OF A CONTINUOUS VARIABLE. CONTINUOUS AND DISCONTINUOUS FUNCTIONS} \Paragraph{89. Limits as $x$~tends to~$\infty$.} We shall now return to functions of a continuous real variable. We shall confine ourselves entirely to \emph{one-valued} functions,\footnote {Thus $\sqrt{x}$ stands in this chapter for the one-valued function~$+\sqrt{x}$ and not (as in \SecNo[§]{26}) for the two-valued function whose values are $+\sqrt{x}$~and~$-\sqrt{x}$.} and we shall denote such a function by~$\phi(x)$. We suppose $x$ to assume successively all values corresponding to points on our fundamental straight line~$\Lambda$, starting from some definite point on the line and progressing always to the right. In these circumstances we say that \emph{$x$~tends to infinity}, or \emph{to~$\infty$}, and write $x \to \infty$. The only difference between the `tending of~$n$ to~$\infty$' discussed in the last chapter, and this `tending of~$x$ to~$\infty$', is that $x$~assumes all values as it tends to~$\infty$, \ie\ that the point~$P$ which corresponds to~$x$ coincides in turn with every point of~$\Lambda$ to the right of its initial position, whereas $n$~tended to~$\infty$ by a series of jumps. We can express this distinction by saying that $x$~tends \emph{continuously} to~$\infty$. As we explained at the beginning of the last chapter, there is a very close correspondence between functions of~$x$ and functions of~$n$. Every function of~$n$ may be regarded as a selection from the values of a function of~$x$. 
In the last chapter we discussed the peculiarities which may characterise the behaviour of a function~$\phi(n)$ as $n$~tends to~$\infty$. Now we are concerned with the same problem for a function~$\phi(x)$; and the definitions and theorems to which we are led are practically repetitions of those of the last chapter. Thus corresponding to Def.~1 of \SecNo[§]{58} we have: \PageSep{163} \begin{Definition}[1.] The function~$\phi(x)$ is said to tend to the limit~$l$ as $x$~tends to~$\infty$ if, when any positive number~$\DELTA$, however small, is assigned, a number~$x_{0}(\DELTA)$ can be chosen such that, for all values of~$x$ equal to or greater than~$x_{0}(\DELTA)$, $\phi(x)$~differs from~$l$ by less than~$\DELTA$, \ie\ if \[ |\phi(x) - l| < \DELTA \] when $x \geq x_{0}(\DELTA)$. \end{Definition} When this is the case we may write \[ \lim_{x \to \infty} \phi(x) = l, \] or, when there is no risk of ambiguity, simply $\lim\phi(x) = l$, or $\phi(x) \to l$. Similarly we have: \begin{Definition}[2.] The function~$\phi(x)$ is said to tend to~$\infty$ with~$x$ if, when any number~$\Delta$, however large, is assigned, we can choose a number~$x_{0}(\Delta)$ such that \[ \phi(x) > \Delta \] when $x \geq x_{0}(\Delta)$. \end{Definition} We then write \[ \phi(x) \to \infty. \] Similarly we define $\phi(x) \to -\infty$.\footnote {We shall sometimes find it convenient to write~$+\infty$, $x \to +\infty$, $\phi(x) \to +\infty$ instead of~$\infty$, $x \to \infty$, $\phi(x) \to \infty$.} Finally we have: \begin{Definition}[3.] If the conditions of neither of the two preceding definitions are satisfied, then $\phi(x)$~is said to oscillate as $x$~tends to~$\infty$. If $|\phi(x)|$~is less than some constant~$K$ when $x \geq x_{0}$,\footnote {In the corresponding definition of \SecNo[§]{62}, we postulated that $|\phi(n)| < K$ for \emph{all} values of~$n$, and not merely when $n \geq n_{0}$. 
But then the two hypotheses would have been equivalent; for if $|\phi(n)| < K$ when $n \geq n_{0}$, then $|\phi(n)| < K'$ for all values of~$n$, where $K'$~is the greatest of \DPtypo{$\phi(1)$, $\phi(2)$,~\dots, $\phi(n_{0} - 1)$} {$|\phi(1)|$, $|\phi(2)|$,~\dots, $|\phi(n_{0} - 1)|$} and~$K$. Here the matter is not quite so simple, as there are infinitely many values of~$x$ less than~$x_{0}$.} then $\phi(x)$~is said to oscillate finitely, and otherwise infinitely. \end{Definition} The reader will remember that in the last chapter we considered very carefully various less formal ways of expressing the facts represented by the formulae $\phi(n) \to l$, $\phi(n) \to \infty$. Similar modes of expression may of course be used in the present case. Thus we may say that $\phi(x)$~is small or nearly equal to~$l$ or large when $x$~is large, using the words `small', `nearly', `large' in a sense similar to that in which they were used in \Ref{Ch.}{IV}\@. \PageSep{164} \begin{Examples}{XXXIV.} \Item{1.} Consider the behaviour of the following functions as $x \to \infty$: $1/x$, $1 + (1/x)$, $x^{2}$, $x^{k}$, $[x]$, $x - [x]$, $[x] + \sqrtb{x - [x]}$. The first four functions correspond exactly to functions of~$n$ fully discussed in \Ref{Ch.}{IV}\@. The graphs of the last three were constructed in \Ref{Ch.}{II} (\Exs{xvi}.\ 1,~2,~4), and the reader will see at once that $[x] \to \infty$, $x - [x]$ oscillates finitely, and $[x] + \sqrtb{x - [x]} \to \infty$. One simple remark may be inserted here. The function $\phi(x) = x - [x]$ oscillates between $0$~and~$1$, as is obvious from the form of its graph. It is equal to zero whenever $x$~is an integer, so that the function~$\phi(n)$ derived from it is always zero and so tends to the limit zero. The same is true if \[ \phi(x) = \sin x\pi,\quad \phi(n) = \sin n\pi = 0. 
\] It is evident that $\phi(x) \to l$ or $\phi(x) \to \infty$ or $\phi(x) \to -\infty$ involves the corresponding property for~$\phi(n)$, but that the converse is by no means always true. \Item{2.} Consider in the same way the functions: \[ (\sin x\pi)/x,\quad x\sin x\pi,\quad (x\sin x\pi)^{2},\quad \tan x\pi,\quad a\cos^{2} x\pi + b\sin^{2} x\pi, \] illustrating your remarks by means of the graphs of the functions. \Item{3.} Give a geometrical explanation of Def.~1, analogous to the geometrical explanation of \Ref{Ch.}{IV}, \SecNo[§]{59}. \Item{4.} If $\phi(x) \to l$, and $l$~is not zero, then $\phi(x)\cos x\pi$ and $\phi(x)\sin x\pi$ oscillate finitely. If $\phi(x) \to \infty$ or $\phi(x) \to -\infty$, then they oscillate infinitely. The graph of either function is a wavy curve oscillating between the curves $y = \phi(x)$ and $y = -\phi(x)$. \Item{5.} Discuss the behaviour, as $x \to \infty$, of the function \[ y = f(x)\cos^{2} x\pi + F(x)\sin^{2} x\pi, \] where $f(x)$~and~$F(x)$ are some pair of simple functions (\eg\ $x$~and~$x^{2}$). [The graph of~$y$ is a curve oscillating between the curves $y = f(x)$, $y = F(x)$.] \end{Examples} \Paragraph{90. Limits as $x$~tends to~$-\infty$.} The reader will have no difficulty in framing for himself definitions of the meaning of the assertions `$x$~tends to~$-\infty$', or `$x \to -\infty$' and \[ \lim_{x \to -\infty} \phi(x) = l,\quad \phi(x) \to \infty,\quad \phi(x) \to -\infty. \] In fact, if $x = -y$ and $\phi(x) = \phi(-y) = \psi(y)$, then $y$~tends to~$\infty$ as $x$~tends to~$-\infty$, and the question of the behaviour of~$\phi(x)$ as $x$~tends to~$-\infty$ is the same as that of the behaviour of~$\psi(y)$ as $y$~tends to~$\infty$. \PageSep{165} \begin{Remark} \Paragraph{91. 
Theorems corresponding to those of \Ref{Ch.}{IV}, \SecNo[§§]{63}--\SecNo{67}.} The theorems concerning the sums, products, and quotients of functions proved in \Ref{Ch.}{IV} are all true (with obvious verbal alterations which the reader will have no difficulty in supplying) for functions of the continuous variable~$x$. Not only the enunciations but the proofs remain substantially the same. \Paragraph{92. Steadily increasing or decreasing functions.} The definition which corresponds to that of \SecNo[§]{69} is as follows: \emph{the function~$\phi(x)$ will be said to increase steadily with~$x$ if $\phi(x_{2}) \geq \phi(x_{1})$ whenever $x_{2} > x_{1}$}. In many cases, of course, this condition is only satisfied from a definite value of $x$ onwards, \ie\ when $x_{2} > x_{1} \geq x_{0}$. The theorem which follows in that section requires no alteration but that of~$n$ into~$x$: and the proof is the same, except for obvious verbal changes. {\Loosen If $\phi(x_{2}) > \phi(x_{1})$, the possibility of equality being excluded, whenever $x_{2} > x_{1}$, then $\phi(x)$~will be said to be \emph{steadily increasing in the stricter sense}. We shall find that the distinction is often important (cf.\ \SecNo[§§]{108}--\SecNo{109}).} The reader should consider whether or no\DPnote{** [sic], not "not"} the following functions increase steadily with~$x$ (or at any rate increase steadily from a certain value of $x$ onwards): $x^{2} - x$, $x + \sin x$, $x + 2\sin x$, $x^{2} + 2\sin x$, $[x]$, $[x] + \sin x$, $[x] + \sqrtb{x - [x]}$. All these functions tend to~$\infty$ as $x \to \infty$. \end{Remark} \Paragraph{93. Limits as $x$~tends to~$0$.} Let $\phi(x)$~be such a function of~$x$ that $\lim\limits_{x \to \infty} \phi(x) = l$, and let $y = 1/x$. Then \[ \phi(x) = \phi(1/y) = \psi(y), \] say. As $x$~tends to~$\infty$, $y$~tends to the limit~$0$, and $\psi(y)$~tends to the limit~$l$. Let us now dismiss~$x$ and consider $\psi(y)$ simply as a function of~$y$. 
We are for the moment concerned only with those values of~$y$ which correspond to large positive values of~$x$, that is to say with small positive values of~$y$. And $\psi(y)$ has the property that by making $y$ sufficiently small we can make $\psi(y)$ differ by as little as we please from~$l$. To put the matter more precisely, the statement expressed by $\lim\phi(x) = l$ means that, when any positive number~$\DELTA$, however small, is assigned, we can choose~$x_{0}$ so that $|\phi(x) - l| < \DELTA$ for all values of~$x$ greater than or equal to~$x_{0}$. But this is the same thing as saying that we can choose $y_{0} = 1/x_{0}$ so that $|\psi(y) - l| < \DELTA$ for all positive values of~$y$ less than or equal to~$y_{0}$. We are thus led to the following definitions: \PageSep{166} \begin{Defn} \Item{A.} If, when any positive number~$\DELTA$, however small, is assigned, we can choose~$y_{0}(\DELTA)$ so that \[ |\phi(y) - l| < \DELTA \] when $0 < y \leq y_{0}(\DELTA)$, then we say that $\phi(y)$~tends to the limit~$l$ as $y$~tends to~$0$ by positive values, and we write \[ \lim_{y \to +0} \phi(y) = l. \] \end{Defn} \begin{Defn} \Item{B.} If, when any number~$\Delta$, however large, is assigned, we can choose $y_{0}(\Delta)$ so that \[ \phi(y) > \Delta \] when $0 < y \leq y_{0}(\Delta)$, then we say that $\phi(y)$~tends to~$\infty$ as $y$~tends to~$0$ by positive values, and we write \[ \phi(y) \to \infty. \] \end{Defn} We define in a similar way the meaning of `$\phi(y)$~tends to the limit~$l$ as $y$~tends to~$0$ by negative values', or `$\lim\phi(y) = l$ when $y \to -0$'. We have in fact only to alter $0 < y \leq y_{0}(\DELTA)$ to $-y_{0}(\DELTA) \leq y < 0$ in definition~A\@. There is of course a corresponding analogue of definition~B, and similar definitions in which \[ \phi(y) \to -\infty \] as $y \to +0$ or $y \to -0$. If $\lim\limits_{y \to +0} \phi(y) = l$ \emph{and} $\lim\limits_{y \to -0} \phi(y) = l$, we write simply \[ \lim_{y \to 0} \phi(y) = l. 
\] This case is so important that it is worth while to give a formal definition. \begin{Defn} If, when any positive number~$\DELTA$, however small, is assigned, we can choose $y_{0}(\DELTA)$ so that, for all values of~$y$ different from zero but numerically less than or equal to~$y_{0}(\DELTA)$, $\phi(y)$~differs from~$l$ by less than~$\DELTA$, then we say that $\phi(y)$~tends to the limit~$l$ as $y$~tends to~$0$, and write \[ \lim_{y \to 0} \phi(y) = l. \] \end{Defn} So also, if $\phi(y) \to \infty$ as $y \to +0$ and also as $y \to -0$, we say that $\phi(y) \to \infty$ as $y \to 0$. We define in a similar manner the statement that $\phi(y) \to -\infty$ as $y \to 0$. \PageSep{167} Finally, if $\phi(y)$~does not tend to a limit, or to~$\infty$, or to~$-\infty$, as $y \to +0$, we say that $\phi(y)$~oscillates as $y \to +0$, finitely or infinitely as the case may be; and we define oscillation as $y \to -0$ in a similar manner. The preceding definitions have been stated in terms of a variable denoted by~$y$: what letter is used is of course immaterial, and we may suppose $x$ written instead of~$y$ throughout them. \Paragraph{94. Limits as $x$~tends to~$a$.} Suppose that $\phi(y) \to l$ as $y \to 0$, and write \[ y = x - a,\quad \phi(y) = \phi(x - a) = \psi(x). \] If $y \to 0$ then $x \to a$ and $\psi(x) \to l$, and we are naturally led to write \[ \lim_{x \to a} \psi(x) = l, \] or simply $\lim\psi(x) = l$ or $\psi(x) \to l$, and to say that \emph{$\psi(x)$~tends to the limit~$l$ as $x$~tends to~$a$}. The meaning of this equation may be formally and directly defined as follows: \begin{Defn}if, given~$\DELTA$, we can always determine~$\EPSILON(\DELTA)$ so that \[ |\phi(x) - l| < \DELTA \] when $0 < |x - a| \leq \EPSILON(\DELTA)$, then \[ \lim_{x \to a} \phi(x) = l. 
\] \end{Defn} By restricting ourselves to values of~$x$ greater than~$a$, \ie\ by replacing $0 < |x - a| \leq \EPSILON(\DELTA)$ by $a < x \leq a + \EPSILON(\DELTA)$, we define `$\phi(x)$~tends to~$l$ when $x$~approaches~$a$ from the right', which we may write as \[ \lim_{x \to a+0} \phi(x) = l. \] In the same way we can define the meaning of \[ \lim_{x \to a-0} \phi(x) = l. \] Thus $\lim\limits_{x \to a} \phi(x) = l$ is equivalent to the two assertions \[ \lim_{x \to a+0} \phi(x) = l,\quad \lim_{x \to a-0} \phi(x) = l. \] We can give similar definitions referring to the cases in which $\phi(x) \to \infty$ or $\phi(x) \to -\infty$ as $x \to a$ through values greater or less than~$a$; but it is probably unnecessary to dwell further on these definitions, since they are exactly similar to those stated above in \PageSep{168} the special case when $a = 0$, and we can always discuss the behaviour of~$\phi(x)$ as $x \to a$ by putting $x - a = y$ and supposing that $y \to 0$. \begin{Remark} \Paragraph{95. Steadily increasing or decreasing functions.} If there is a number~$\EPSILON$ such that $\phi(x') \leq \phi(x'')$ whenever $a - \EPSILON < x' < x'' < a + \EPSILON$, then $\phi(x)$~will be said to \emph{increase steadily in the neighbourhood of $x = a$}. Suppose first that $x < a$, and put $y = 1/(a - x)$. Then $y \to \infty$ as $x \to a-0$, and $\phi(x) = \psi(y)$ is a steadily increasing function of~$y$, never greater than~$\phi(a)$. It follows from \SecNo[§]{92} that $\phi(x)$~tends to a limit not greater than~$\phi(a)$. We shall write \[ \lim_{x \to a-0} \phi(x) = \phi(a-0). \] We can define~$\phi(a+0)$ in a similar manner; and it is clear that \[ \phi(a-0) \leq \phi(a) \leq \phi(a+0). \] It is obvious that similar considerations may be applied to \emph{decreasing} functions.
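A concrete numerical illustration (an editorial addition): the function $[x]$, already used in \Ex{xxxiv}, is steadily increasing, and at an integer its one-sided limits differ. In Python, with `math.floor` standing for $[x]$ and small offsets approximating the one-sided limits:

```python
import math

# phi(x) = [x] is steadily increasing; take a = 1
phi = math.floor
a = 1.0

left = phi(a - 1e-12)     # approximates phi(a-0)
right = phi(a + 1e-12)    # approximates phi(a+0)

# phi(a-0) = 0, phi(a) = 1, phi(a+0) = 1
assert left == 0 and phi(a) == 1 and right == 1
# consistent with phi(a-0) <= phi(a) <= phi(a+0)
assert left <= phi(a) <= right
```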
{\Loosen If $\phi(x') < \phi(x'')$, the possibility of equality being excluded, whenever $a - \EPSILON < x' < x'' < a + \EPSILON$, then $\phi(x)$~will be said to be \emph{steadily increasing in the stricter sense}.} \Paragraph{96. Limits of indetermination and the principle of convergence.} All of the argument of \SecNo[§§]{80}--\SecNo{84} may be applied to functions of a continuous variable~$x$ which tends to a limit~$a$. In particular, if $\phi(x)$~is \emph{bounded} in an interval including~$a$ (\ie\ if we can find $\EPSILON$,~$H$, and~$K$ so that $H < \phi(x) < K$ when $a - \EPSILON \leq x \leq a + \EPSILON$),\footnote {For some further discussion of the notion of \emph{a function bounded in an interval} see \SecNo[§]{102}.} then we can define $\lambda$~and~$\Lambda$, the lower and upper limits of indetermination of~$\phi(x)$ as $x \to a$, and prove that the necessary and sufficient condition that $\phi(x) \to l$ as $x \to a$ is that $\lambda = \Lambda = l$. We can also establish the analogue of the principle of convergence, \ie\ prove that \emph{the necessary and sufficient condition that $\phi(x)$ should tend to a limit as $x \to a$ is that, when $\DELTA$~is given, we can choose~$\EPSILON(\DELTA)$ so that $|\phi(x_{2}) - \phi(x_{1})| < \DELTA$ when $0 < |x_{2} - a| < |x_{1} - a| \leq \EPSILON(\DELTA)$}. \end{Remark} \begin{Examples}{XXXV.} \Item{1.} If \[ \phi(x) \to l,\quad \psi(x) \to l', \] as $x \to a$, then $\phi(x) + \psi(x) \to l + l'$, $\phi(x)\psi(x) \to ll'$, and $\phi(x)/\psi(x) \to l/l'$, unless in the last case $l' = 0$. [We saw in \SecNo[§]{91} that the theorems of \Ref{Ch.}{IV}, \SecNo[§§]{63}~\textit{et~seq.}\ hold also for functions of~$x$ when $x \to \infty$ or $x \to -\infty$. By putting $x = 1/y$ we may extend them to functions of~$y$, when $y \to 0$, and by putting $y = z - a$ to functions of~$z$, when $z \to a$. \PageSep{169} The reader should however try to prove them directly from the formal definition given above.
Thus, in order to obtain a strict direct proof of the first result he need only take the proof of Theorem~I of \SecNo[§]{63} and write throughout $x$~for~$n$, $a$~for~$\infty$ and $0 < |x - a| \leq \EPSILON$ for $n \geq n_{0}$.] \Item{2.} If $m$~is a positive integer then $x^{m} \to 0$ as $x \to 0$. \Item{3.} If $m$~is a negative integer then $x^{m} \to +\infty$ as $x \to +0$, while $x^{m} \to -\infty$ or $x^{m} \to +\infty$ as $x \to -0$, according as $m$~is odd or even. If $m = 0$ then $x^{m} = 1$ and $x^{m} \to 1$. \Item{4.} $\lim\limits_{x \to 0} (a + bx + cx^{2} + \dots + kx^{m}) = a$. \Item{5.} $\lim\limits_{x \to 0} \left\{(a + bx + \dots + kx^{m})/(\alpha + \beta x + \dots + \kappa x^{\mu})\right\} = a/\alpha$, unless $\alpha = 0$. If $\alpha = 0$ and $a \neq 0$, $\beta \neq 0$, then the function tends to $+\infty$~or~$-\infty$, as $x \to +0$, according as $a$~and~$\beta$ have like or unlike signs; the case is reversed if $x \to -0$. The case in which both $a$~and~$\alpha$ vanish is considered in \Ex{xxxvi}.~5. Discuss the cases which arise when $a \neq 0$ and more than one of the first coefficients in the denominator vanish. \Item{6.} $\lim\limits_{x \to a} x^{m} = a^{m}$, if $m$~is any positive or negative integer, except when $a = 0$ and $m$~is negative. [If $m > 0$, put $x = y + a$ and apply Ex.~4. When $m < 0$, the result follows from Ex.~1 above. It follows at once that $\lim P(x) = P(a)$, if $P(x)$~is any polynomial.] \Item{7.} $\lim\limits_{x \to a} R(x) = R(a)$, if $R$~denotes any rational function and $a$~is not one of the roots of its denominator. \Item{8.} Show that $\lim\limits_{x \to a} x^{m} = a^{m}$ for all rational values of~$m$, except when $a = 0$ and $m$~is negative. [This follows at once, when $a$~is positive, from the inequalities \Eq{(9)}~or~\Eq{(10)} of \SecNo[§]{74}. For $|x^{m} - a^{m}| < H|x - a|$, where $H$~is the greater of the absolute values of $mx^{m-1}$ and~$ma^{m-1}$ (cf.\ \Ex{xxviii}.~4). 
If $a$~is negative we write $x = -y$ and $a = -b$. Then \[ \lim x^{m} = \lim (-1)^{m}y^{m} = (-1)^{m}b^{m} = a^{m}.] \] \end{Examples} \Paragraph{97.} The reader will probably fail to see at first that any proof of such results as those of Exs.\ 4,~5, 6, 7,~8 above is necessary. He may ask `why not simply put $x = 0$, or $x = a$? Of course we then get $a$,~$a/\alpha$, $a^{m}$, $P(a)$,~$R(a)$'\Add{.} It is very important that he should see exactly where he is wrong. We shall therefore consider this point carefully before passing on to any further examples. The statement \[ \lim_{x \to 0} \phi(x) = l \] is a statement about the values of~$\phi(x)$ when $x$~has any value \PageSep{170} \emph{distinct from but differing by little from zero}.\footnote {Thus in Def.~A of \SecNo[§]{93} we make a statement about values of~$y$ such that $0 < y \leq y_{0}$, the first of these inequalities being inserted expressly in order to exclude the value $y = 0$.} It is \emph{not} a statement about the \emph{value of~$\phi(x)$ when $x = 0$}. When we make the statement we assert that, when $x$~is \emph{nearly} equal to zero, $\phi(x)$~is nearly equal to~$l$. We assert nothing whatever about what happens when $x$~is \emph{actually} equal to~$0$. So far as we know, $\phi(x)$~may not be defined at all for $x = 0$; or it may have some value other than~$l$. For example, consider the function defined for all values of~$x$ by the equation $\phi(x) = 0$. It is obvious that \[ \lim\phi(x) = 0. \Tag{(1)} \] {\Loosen Now consider the function~$\psi(x)$ which differs from~$\phi(x)$ only in that $\psi(x) = 1$ when $x = 0$. Then} \[ \lim\psi(x) = 0, \Tag{(2)} \] for, when $x$~is nearly equal to zero, $\psi(x)$~is not only nearly but exactly equal to zero. But $\psi(0) = 1$. The graph of this function consists of the axis of~$x$, with the point $x = 0$ left out, and one isolated point, viz.\ the point $(0, 1)$. 
The equation~\Eq{(2)} expresses the fact that if we move along the graph towards the axis of~$y$, from either side, then the ordinate of the curve, being always equal to zero, tends to the limit zero. This fact is in no way affected by the position of the isolated point~$(0, 1)$. The reader may object to this example on the score of artificiality: but it is easy to write down simple formulae representing functions which behave precisely like this near $x = 0$. One is \[ \psi(x) = [1 - x^{2}], \] where $[1 - x^{2}]$ denotes as usual the greatest integer not greater than $1 - x^{2}$. For if $x = 0$ then $\psi(x) = [1] = 1$; while if $0 < x < 1$, or $-1 < x < 0$, then $0 < 1 - x^{2} < 1$ and so $\psi(x) = [1 - x^{2}] = 0$. Or again, let us consider the function \[ y = x/x \] already discussed in \Ref{Ch.}{II}, \SecNo[§]{24},~\Eq{(2)}. This function is equal to~$1$ for all values of~$x$ save $x = 0$. It is \emph{not} equal to~$1$ when $x = 0$: it is in fact not defined at all for $x = 0$. For when we say \PageSep{171} that $\phi(x)$~is defined for $x = 0$ we mean (as we explained in \Ref{Ch.}{II}, \textit{l.c.})\ that we can calculate its value for $x = 0$ by putting $x = 0$ in the actual expression of~$\phi(x)$. In this case we cannot. When we put $x = 0$ in~$\phi(x)$ we obtain~$0/0$, which is a meaningless expression. The reader may object `divide numerator and denominator by~$x$'. But he must admit that when $x = 0$ this is impossible. Thus $y = x/x$ is a function which differs from $y = 1$ solely in that it is not defined for $x = 0$. None the less \[ \lim(x/x) = 1, \] for $x/x$~is equal to~$1$ so long as $x$~differs from zero, however small the difference may be. Similarly $\phi(x) = \{(x + 1)^{2} - 1\}/x = x + 2$ so long as $x$~is not equal to zero, but is undefined when $x = 0$. None the less $\lim\phi(x) = 2$. 
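This last assertion is easily verified from the formal definition of \SecNo[§]{93}. Since $\phi(x) = x + 2$ when $x$~is not zero, we have $|\phi(x) - 2| = |x|$ for every such~$x$; when $\DELTA$~is given it is therefore enough to take $\EPSILON(\DELTA) = \frac{1}{2}\DELTA$, for then \[ |\phi(x) - 2| = |x| \leq \tfrac{1}{2}\DELTA < \DELTA \] whenever $0 < |x| \leq \EPSILON(\DELTA)$.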
On the other hand there is of course nothing to prevent the limit of~$\phi(x)$ as $x$~tends to zero from being equal to~$\phi(0)$, the value of~$\phi(x)$ for $x = 0$. Thus if $\phi(x) = x$ then $\phi(0) = 0$ and $\lim\phi(x) = 0$. This is in fact, from a practical point of view, \ie\ from the point of view of what most frequently occurs in applications, the ordinary case. \begin{Examples}{XXXVI.} \Item{1.} $\lim\limits_{x \to a} (x^{2} - a^{2})/(x - a) = 2a$. \Item{2.} $\lim\limits_{x \to a} (x^{m} - a^{m})/(x - a) = ma^{m-1}$, if $m$~is any integer (zero included). \Item{3.} Show that the result of Ex.~2 remains true for all rational values of~$m$, provided $a$~is positive. [This follows at once from the inequalities \Eq{(9)}~and~\Eq{(10)} of~\SecNo[§]{74}.] \Item{4.} $\lim\limits_{x \to 1} (x^{7} - 2x^{5} + 1)/(x^{3} - 3x^{2} + 2) = 1$. [Observe that $x - 1$ is a factor of both numerator and denominator.] \Item{5.} Discuss the behaviour of \[ \phi(x) = (a_{0}x^{m} + a_{1}x^{m+1} + \dots + a_{k}x^{m+k}) /(b_{0}x^{n} + b_{1}x^{n+1} + \dots + b_{l}x^{n+l}) \] as $x$~tends to~$0$ by positive or negative values. [If $m > n$, $\lim\phi(x) = 0$. If $m = n$, $\lim\phi(x) = a_{0}/b_{0}$. If $m < n$ and $n - m$ is even, $\phi(x) \to +\infty$ or $\phi(x) \to -\infty$ according as $a_{0}/b_{0} > 0$ or $a_{0}/b_{0} < 0$. If $m < n$ and $n - m$ is odd, $\phi(x) \to +\infty$ as $x \to +0$ and $\phi(x) \to -\infty$ as $x \to -0$, or $\phi(x) \to -\infty$ as $x \to +0$ and $\phi(x) \to +\infty$ as $x \to -0$, according as $a_{0}/b_{0} > 0$ or $a_{0}/b_{0} < 0$.] \PageSep{172} \Item{6.} \Topic{Orders of smallness}. When $x$~is small $x^{2}$~is very much smaller, $x^{3}$~much smaller still, and so on: in other words \[ \lim_{x\to 0} (x^{2}/x) = 0,\quad \lim_{x\to 0} (x^{3}/x^{2}) = 0,\ \dots. 
\] Another way of stating the matter is to say that, when $x$~tends to~$0$, $x^{2}$, $x^{3}$,~\dots\ all also tend to~$0$, but $x^{2}$~tends to~$0$ more rapidly than~$x$, $x^{3}$~than~$x^{2}$, and so on. It is convenient to have some scale by which to measure the rapidity with which a function, whose limit, as $x$~tends to~$0$, is~$0$, diminishes with~$x$, and it is natural to take the simple functions $x$,~$x^{2}$, $x^{3}$,~\dots\ as the measures of our scale. We say, therefore, that \emph{$\phi(x)$~is of the first order of smallness} if $\phi(x)/x$ tends to a limit other than~$0$ as $x$~tends to~$0$. Thus $2x + 3x^{2} + x^{7}$ is of the first order of smallness, since $\lim(2x + 3x^{2} + x^{7})/x = 2$. Similarly we define the second, third, fourth,~\dots\ orders of smallness. It must not be imagined that this scale of orders of smallness is in any way complete. If it were complete, then every function~$\phi(x)$ which tends to zero with~$x$ would be of either the first or second or some higher order of smallness. This is obviously not the case. For example $\phi(x) = x^{7/5}$ tends to zero more rapidly than~$x$ and less rapidly than~$x^{2}$. The reader may not unnaturally think that our scale might be made complete by including in it \emph{fractional} orders of smallness. Thus we might say that $x^{7/5}$~was of the $\frac{7}{5}$th~order of smallness. We shall however see later on that such a scale of orders would still be altogether incomplete. And as a matter of fact the \emph{integral} orders of smallness defined above are so much more important in applications than any others that it is hardly necessary to attempt to make our definitions more precise. \Topic{Orders of greatness.} Similar definitions are at once suggested to meet the case in which $\phi(x)$~is large (positively or negatively) when $x$~is small. 
We shall say that $\phi(x)$~is of the $k$th~order of greatness when $x$~is small if $\phi(x)/x^{-k} = x^{k}\phi(x)$ tends to a limit different from~$0$ as $x$~tends to~$0$. These definitions have reference to the case in which $x \to 0$. There are of course corresponding definitions relating to the cases in which $x \to \infty$ or $x \to a$. Thus if $x^{k}\phi(x)$~tends to a limit other than zero, as $x \to \infty$, then we say that $\phi(x)$~is of the $k$th~order of smallness when $x$~is large: while if $(x - a)^{k}\phi(x)$ tends to a limit other than zero, as $x \to a$, then we say that $\phi(x)$~is of the $k$th~order of greatness when $x$~is nearly equal to~$a$. \Item{7.\footnotemark} $\lim\sqrtp{1 + x} = \lim\sqrtp{1 - x} = 1$. [Put $1 + x = y$ or $1 - x = y$, and use \Ex{xxxv}.~8.] \footnotetext{In the examples which follow it is to be assumed that limits as $x \to 0$ are required, unless (as in Exs.~19,~22) the contrary is explicitly stated.} \Item{8.} $\lim\{\sqrtp{1 + x} - \sqrtp{1 - x}\}/x = 1$. [Multiply numerator and denominator by $\sqrtp{1 + x} + \sqrtp{1 - x}$.] \PageSep{173} \Item{9.} Consider the behaviour of $\{\sqrtp{1 + x^{m}} - \sqrtp{1 - x^{m}}\}/x^{n}$ as $x \to 0$, $m$~and~$n$ being positive integers. \Item{10.} $\lim\{\sqrtp{1 + x + x^{2}} - 1\}/x = \frac{1}{2}$. \Item{11.} $\lim\dfrac{\sqrtp{1 + x} - \sqrtp{1 + x^{2}}}{\sqrtp{1 - x^{2}} - \sqrtp{1 - x}} = 1$. \Item{12.} Draw a graph of the function \[ y = \biggl\{\frac{1}{x - 1} + \frac{1}{x - \tfrac{1}{2}} + \frac{1}{x - \tfrac{1}{3}} + \frac{1}{x - \tfrac{1}{4}}\biggr\} \bigg/ \biggl\{\frac{1}{x - 1} + \frac{1}{x - \tfrac{1}{2}} + \frac{1}{x - \tfrac{1}{3}} + \frac{1}{x - \tfrac{1}{4}}\biggr\}. \] Has it a limit as $x \to 0$? [Here $y = 1$ except for $x = 1$, $\frac{1}{2}$,~$\frac{1}{3}$,~$\frac{1}{4}$, when $y$~is not defined, and $y \to 1$ as $x \to 0$.] \Item{13.} $\lim\dfrac{\sin x}{x} = 1$. 
[It may be deduced from the definitions of the trigonometrical ratios\footnote {The proofs of the inequalities which are used here depend on certain properties of the area of a sector of a circle which are usually taken as geometrically intuitive; for example, that the area of the sector is greater than that of the triangle inscribed in the sector. The justification of these assumptions must be postponed to \Ref{Ch.}{VII}\@.} that if $x$~is positive and less than~$\frac{1}{2}\pi$ then \[ \sin x < x < \tan x \] or \[ \cos x < \frac{\sin x}{x} < 1 \] or \[ 0 < 1 - \frac{\sin x}{x} < 1 - \cos x = 2\sin^{2} \tfrac{1}{2} x. \] But $2\sin^{2} \frac{1}{2} x < 2(\frac{1}{2} x)^{2} \DPtypo{<}{=} \frac{1}{2} x^{2}$\Add{.} Hence $\lim\limits_{x \to +0} \left(1 - \dfrac{\sin x}{x}\right) = 0$, and $\lim\limits_{x \to +0} \dfrac{\sin x}{x} = 1$. As $\dfrac{\sin x}{x}$~is an even function, the result follows.] \Item{14.} $\lim \dfrac{1 - \cos x}{x^{2}} = \frac{1}{2}$. \Item{15.} $\lim \dfrac{\sin \alpha x}{x} = \alpha$. Is this true if $\alpha = 0$? \Item{16.} $\lim \dfrac{\arcsin x}{x} = 1$. [Put $x = \sin y$.] \Item{17.} $\lim \dfrac{\tan \alpha x}{x}= \alpha$,\quad $\lim\dfrac{\arctan \alpha x}{x} = \alpha$. \Item{18.} $\lim \dfrac{\cosec x - \cot x}{x} = \frac{1}{2}$. \Item{19.} $\lim\limits_{x \to 1} \dfrac{1 + \cos \pi x}{\tan^{2}\pi x} = \frac{1}{2}$. \PageSep{174} \Item{20.} {\Loosen How do the functions $\sin(1/x)$, $(1/x)\sin(1/x)$, $x\sin(1/x)$ behave as $x \to 0$? [The first oscillates finitely, the second infinitely, the third tends to the limit~$0$. None is defined when $x = 0$. See \Exs{xv}.\ 6,~7,~8.]} \Item{21.} Does the function \[ y = \biggl(\sin \frac{1}{x}\biggr)\bigg/\biggl(\sin \frac{1}{x}\biggr) \] tend to a limit as $x$~tends to~$0$? [\emph{No}. The function is equal to~$1$ except when $\sin(1/x) = 0$; \ie\ when $x = 1/\pi$, $1/2\pi$,~\dots, $-1/\pi$, $-1/2\pi$,~\dots.
For these values the formula for~$y$ assumes the meaningless form~$0/0$, and $y$~is therefore not defined for an infinity of values of~$x$ near $x = 0$.] \Item{22.} Prove that if $m$ is any integer then $[x] \to m$ and $x - [x] \to 0$ as $x \to m+0$, and $[x] \to m - 1$, $x - [x] \to 1$ as $x \to m-0$. \end{Examples} \Paragraph{98. Continuous functions of a real variable.} The reader has no doubt some idea as to what is meant by a \emph{continuous curve}. Thus he would call the curve~$C$ in \Fig{29} continuous, the curve~$C'$ generally continuous but discontinuous for $x = \xi'$ and $x = \xi''$. %[Illustration: Fig. 29.] \Figure{29}{p174} Either of these curves may be regarded as the graph of a function~$\phi(x)$. It is natural to call a function \emph{continuous} if its graph is a continuous curve, and otherwise discontinuous. Let us take this as a provisional definition and try to distinguish more precisely some of the properties which are involved in it. In the first place it is evident that the property of the function $y = \phi(x)$ of which $C$ is the graph may be analysed into some property possessed by the curve at each of its points. To be able to define continuity \emph{for all values of~$x$} we must first define continuity \emph{for any particular value of~$x$}. Let us therefore fix on some particular value of~$x$, say the value $x = \xi$ \PageSep{175} corresponding to the point~$P$ of the graph. What are the characteristic properties of~$\phi(x)$ associated with this value of~$x$? In the first place \emph{$\phi(x)$~is defined for $x = \xi$}. This is obviously essential. If $\phi(\xi)$~were not defined there would be a point missing from the curve. Secondly \emph{$\phi(x)$~is defined for all values of~$x$ near $x = \xi$}; \ie\ we can find an interval, including $x = \xi$ in its interior, for all points of which $\phi(x)$~is defined. Thirdly \emph{if $x$~approaches the value~$\xi$ from either side then $\phi(x)$~approaches the limit~$\phi(\xi)$}. 
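It may be observed that these conditions can fail separately. Thus the function $x/x$ of \SecNo[§]{97} fails to satisfy the first condition when $\xi = 0$; while $\psi(x) = [1 - x^{2}]$, also considered in \SecNo[§]{97}, satisfies the first and the second, but not the third, since \[ \lim_{x \to 0} \psi(x) = 0,\quad \psi(0) = 1. \]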
The properties thus defined are far from exhausting those which are possessed by the curve as pictured by the eye of common sense. This picture of a curve is a generalisation from particular curves such as straight lines and circles. But they are the simplest and most fundamental properties: and the graph of any function which has these properties would, so far as drawing it is practically possible, satisfy our geometrical feeling of what a continuous curve should be. We therefore select these properties as embodying the mathematical notion of continuity. We are thus led to the following \begin{Definition} The function~$\phi(x)$ is said to be continuous for $x = \xi$ if it tends to a limit as $x$~tends to~$\xi$ from either side, and each of these limits is equal to~$\phi(\xi)$. \end{Definition} We can now define \emph{continuity throughout an interval}. The function~$\phi(x)$ is said to be continuous throughout a certain interval of values of~$x$ if it is continuous for all values of~$x$ in that interval. It is said to be \emph{continuous everywhere} if it is continuous for every value of~$x$. Thus $[x]$~is continuous in the interval $\DPmod{(\EPSILON, 1 - \EPSILON)}{[\EPSILON, 1 - \EPSILON]}$, where $\EPSILON$~is any positive number less than~$\frac{1}{2}$; and $1$ and~$x$ are continuous everywhere.\PageLabel{175} If we recur\DPnote{** [sic]} to the definitions of a limit we see that our definition is equivalent to `\emph{$\phi(x)$~is continuous for $x = \xi$ if, given~$\DELTA$, we can choose~$\EPSILON(\DELTA)$ so that $|\phi(x) - \phi(\xi)| < \DELTA$ if $0 \leq |x - \xi| \leq \EPSILON(\DELTA)$}'. We have often to consider functions defined only in an interval $\DPmod{(a, b)}{[a, b]}$. In this case it is convenient to make a slight and obvious \PageSep{176} change in our definition of continuity in so far as it concerns the particular points $a$~and~$b$. 
We shall then say that $\phi(x)$~is continuous for $x = a$ if $\phi(a + 0)$ exists and is equal to~$\phi(a)$, and for $x = b$ if $\phi(b - 0)$ exists and is equal to~$\phi(b)$. \Paragraph{99.} The definition of continuity given in the last section may be illustrated geometrically as follows. Draw the two horizontal lines $y = \phi(\xi) - \DELTA$ and $y = \phi(\xi) + \DELTA$. Then $|\phi(x) - \phi(\xi)| < \DELTA$ expresses the fact that the point on the curve corresponding to~$x$ lies %[Illustration: Fig. 30.] \ifthenelse{\boolean{Modernize}}{% \Figure[\textwidth]{30}{p176}% }{% \Figure[\textwidth]{30}{p176_orig_notation}% } between these two lines. Similarly $|x - \xi| \leq \EPSILON$ expresses the fact that $x$~lies in the interval $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$. Thus our definition asserts that if we draw two such horizontal lines, no matter how close together, we can always cut off a vertical strip of the plane by two vertical lines in such a way that all that part of the curve which is contained in the strip lies between the two horizontal lines. This is evidently true of the curve~$C$ (\Fig{29}), whatever value $\xi$ may have. We shall now discuss the continuity of some special types of functions. Some of the results which follow were (as we pointed out at the time) tacitly assumed in \Ref{Ch.}{II}\@. \begin{Examples}{XXXVII.} \Item{1.} The sum or product of two functions continuous at a point is continuous at that point. The quotient is also continuous unless the denominator vanishes at the point. [This follows at once from \Ex{xxxv}.~1.] \Item{2.} Any polynomial is continuous for all values of~$x$. Any rational fraction is continuous except for values of~$x$ for which the denominator vanishes. [This follows from \Exs{xxxv}.\ 6,~7.] \PageSep{177} \Item{3.} $\sqrt{x}$~is continuous for all positive values of~$x$ (\Ex{xxxv}.~8). 
It is not defined when $x < 0$, but is continuous for $x = 0$ in virtue of the remark made at the end of \SecNo[§]{98}. The same is true of~$x^{m/n}$, where $m$~and~$n$ are any positive integers of which $n$ is even. \Item{4.} The function~$x^{m/n}$, where $n$~is odd, is continuous for all values of~$x$. \Item{5.} $1/x$~is not continuous for $x = 0$. It has no value for $x = 0$, nor does it tend to a limit as $x \to 0$. In fact $1/x \to +\infty$ or $1/x \to -\infty$ according as $x \to 0$ by positive or negative values. \Item{6.} Discuss the continuity of~$x^{-m/n}$, where $m$~and~$n$ are positive integers, for $x = 0$. \Item{7.} The standard rational function $R(x) = P(x)/Q(x)$ is discontinuous for $x = a$, where $a$~is any root of $Q(x) = 0$. Thus $(x^{2} + 1)/(x^{2} - 3x + 2)$ is discontinuous for $x = 1$. It will be noticed that in the case of rational functions a discontinuity is always associated with (\ia)~a failure of the definition for a particular value of~$x$ and (\ib)~a tending of the function to~$+\infty$ or~$-\infty$ as $x$~approaches this value from either side. Such a particular kind of point of discontinuity is usually described as an \Emph{infinity} of the function. An `infinity' is the kind of discontinuity of most common occurrence in ordinary work. \Item{8.} Discuss the continuity of \[ \sqrtb{(x - a)(b - x)},\quad \sqrtb[3]{(x - a)(b - x)},\quad \sqrtb{(x - a)/(b - x)},\quad \sqrtb[3]{(x - a)/(b - x)}\Add{.} \] \Item{9.} $\sin x$ and $\cos x$ are continuous for all values of~$x$. [We have \[ \sin(x + h) - \sin x = 2\sin \tfrac{1}{2}h \cos(x + \tfrac{1}{2}h), \] which is numerically less than the numerical value of~$h$.] \Item{10.} For what values of~$x$ are $\tan x$, $\cot x$, $\sec x$, and $\cosec x$ continuous or discontinuous? \Item{11.} If $f(y)$~is continuous for $y = \eta$, and $\phi(x)$~is a continuous function of~$x$ which is equal to~$\eta$ when $x = \xi$, then $f\{\phi(x)\}$~is continuous for $x = \xi$. 
\Item{12.} If $\phi(x)$~is continuous for any particular value of~$x$, then any polynomial in~$\phi(x)$, such as $a\{\phi(x)\}^{m} + \dots$, is so too. \Item{13.} Discuss the continuity of \[ 1/(a\cos^{2} x + b\sin^{2} x),\quad \sqrtp{2 + \cos x},\quad \sqrtp{1 + \sin x},\quad 1/\sqrtp{1 + \sin x}. \] \Item{14.} $\sin(1/x)$, $x\sin(1/x)$, and $x^{2}\sin(1/x)$ are continuous except for $x = 0$. \Item{15.} The function which is equal to $x\sin(1/x)$ except when $x = 0$, and to zero when $x = 0$, is continuous for all values of~$x$. \Item{16.} $[x]$ and $x - [x]$ are discontinuous for all integral values of~$x$. \Item{17.} For what (if any) values of~$x$ are the following functions discontinuous: $[x^{2}]$, $[\sqrt{x}\,]$, $\sqrtp{x - [x]}$, $[x] + \sqrtp{x - [x]}$, $[2x]$, $[x] + [-x]$? \PageSep{178} \Item{18.} \Topic{Classification of discontinuities.} Some of the preceding examples suggest a classification of different types of discontinuity. \SubItem{(1)} Suppose that $\phi(x)$~tends to a limit as $x \to a$ either by values less than or by values greater than~$a$. Denote these limits, as in \SecNo[§]{95}, by $\phi(a - 0)$ and $\phi(a + 0)$ respectively. Then, for continuity, it is necessary and sufficient that $\phi(x)$~should be defined for $x = a$, and that $\phi(a - 0) = \phi(a) = \phi(a + 0)$. Discontinuity may arise in a variety of ways. \Item{($\alpha$)} $\phi(a - 0)$ may be equal to $\phi(a + 0)$, but $\phi(a)$~may not be defined, or may differ from $\phi(a - 0)$ and~$\phi(a + 0)$. Thus if $\phi(x) = x \sin(1/x)$ and $a = 0$, $\phi(0 - 0) = \phi(0 + 0) = 0$, but $\phi(x)$~is not defined for $x = 0$. Or if $\phi(x) = [1 - x^{2}]$ and $a = 0$, $\phi(0 - 0) = \phi(0 + 0) = 0$, but $\phi(0) = 1$. \Item{($\beta$)} {\Loosen$\phi(a - 0)$ and $\phi(a + 0)$ may be unequal. In this case $\phi(a)$~may be equal to one or to neither, or be undefined. 
The first case is illustrated by $\phi(x) = [x]$, for which $\phi(0 - 0) = -1$, $\phi(0 + 0) = \phi(0) = 0$; the second by $\phi(x) = [x] - [-x]$, for which $\phi(0 - 0) = -1$, $\phi(0 + 0) = 1$, $\phi(0) = 0$; and the third by $\phi(x) = [x] + x \sin(1/x)$, for which $\phi(0 - 0)= -1$, $\phi(0 + 0) = 0$, and $\phi(0)$~is undefined.} In any of these cases we say that $\phi(x)$~has a \Emph{simple discontinuity} at $x = a$. And to these cases we may add those in which $\phi(x)$~is defined only on one side of $x = a$, and $\phi(a - 0)$ or~$\phi(a + 0)$, as the case may be, exists, but $\phi(x)$~is either not defined when $x = a$ or has when $x = a$ a value different from $\phi(a - 0)$ or~$\phi(a + 0)$. It is plain from \SecNo[§]{95} that \emph{a function which increases or decreases steadily in the neighbourhood of $x = a$ can have at most a simple discontinuity for $x = a$}. \SubItem{(2)} It may be the case that only one (or neither) of $\phi(a - 0)$ and $\phi(a + 0)$ exists, but that, supposing for example $\phi(a + 0)$ not to exist, $\phi(x) \to +\infty$ or $\phi(x) \to -\infty$ as $x \to a+0$, so that $\phi(x)$~tends to a limit or to~$+\infty$ or to~$-\infty$ as $x$~approaches~$a$ from either side. Such is the case, for instance, if $\phi(x) = 1/x$ or $\phi(x) = 1/x^{2}$, and $a = 0$. In such cases we say (cf.\ Ex.~7) that $x = a$ is an \Emph{infinity} of~$\phi(x)$. And again we may add to these cases those in which $\phi(x) \to +\infty$ or $\phi(x) \to -\infty$ as $x \to a$ from one side, but $\phi(x)$~is not defined at all on the other side of $x = a$. \SubItem{(3)} Any point of discontinuity which is not a point of simple discontinuity nor an infinity is called a point of \Emph{oscillatory discontinuity}. Such is the point $x = 0$ for the functions $\sin(1/x)$, $(1/x)\sin(1/x)$. 
\Item{19.} What is the nature of the discontinuities at $x = 0$ of the functions $(\sin x)/x$, $[x] + [-x]$, $\cosec x$, $\sqrtp{1/x}$, $\sqrtp[3]{1/x}$, $\cosec(1/x)$, $\sin(1/x)/\sin(1/x)$? \Item{20.} The function which is equal to~$1$ when $x$~is rational and to~$0$ when $x$~is irrational (\Ref{Ch.}{II}, \Ex{xvi}.~10) is discontinuous for all values of~$x$. So too is any function which is defined only for rational or for irrational values of~$x$. \PageSep{179} \Item{21.} {\Loosen The function which is equal to~$x$ when $x$~is irrational and to $\sqrtb{(1 + p^{2})/(1 + q^{2})}$ when $x$~is a rational fraction~$p/q$ (\Ref{Ch.}{II}, \Ex{xvi}.~11) is discontinuous for all negative and for positive rational values of~$x$, but continuous for positive irrational values.} \Item{22.} For what points are the functions considered in \Ref{Ch.}{IV}, \Exs{xxxi} discontinuous, and what is the nature of their discontinuities? [Consider, \eg, the function $y = \lim x^{n}$ (Ex.~5). Here $y$~is only defined when $-1 < x \leq 1$: it is equal to~$0$ when $-1 < x < 1$ and to~$1$ when $x = 1$. The points $x = 1$ and $x = -1$ are points of simple discontinuity.] \end{Examples} \Paragraph{100. The fundamental property of a continuous function.} It may perhaps be thought that the analysis of the idea of a continuous curve given in \SecNo[§]{98} is not the simplest or most natural possible. Another method of analysing our idea of continuity is the following. Let $A$~and~$B$ be two points on the graph of~$\phi(x)$ whose coordinates are $x_{0}$,~$\phi(x_{0})$ and $x_{1}$,~$\phi(x_{1})$ respectively. Draw any straight line~$\lambda$ which passes between $A$~and~$B$. Then common sense certainly declares that if the graph of~$\phi(x)$ is continuous it must cut~$\lambda$. If we consider this property as an intrinsic geometrical property of continuous curves it is clear that there is no real loss of generality in supposing $\lambda$ to be parallel to the axis of~$x$. 
In this case the ordinates of $A$~and~$B$ cannot be equal: let us suppose, for definiteness, that $\phi(x_{1}) > \phi(x_{0})$. And let $\lambda$ be the line $y = \eta$, where $\phi(x_{0}) < \eta < \phi(x_{1})$. Then to say that the graph of~$\phi(x)$ must cut~$\lambda$ is the same thing as to say that there is a value of~$x$ between $x_{0}$~and~$x_{1}$ for which $\phi(x) = \eta$. We conclude then that a continuous function~$\phi(x)$ must possess the following property: \emph{if \[ \phi(x_{0}) = y_{0},\quad \phi(x_{1}) = y_{1}, \] and $y_{0} < \eta < y_{1}$, then there is a value of~$x$ between $x_{0}$~and~$x_{1}$ for which $\phi(x) = \eta$}. In other words \emph{as $x$~varies from $x_{0}$ to~$x_{1}$, $y$~must assume at least once every value between $y_{0}$~and~$y_{1}$}. We shall now prove that if $\phi(x)$~is a continuous function of~$x$ in the sense defined in \SecNo[§]{98} then it does in fact possess this property. There is a certain range of values of~$x$, to the right of~$x_{0}$, for which $\phi(x) < \eta$. For $\phi(x_{0}) < \eta$, and so $\phi(x)$~is certainly less than~$\eta$ if \PageSep{180} $\phi(x) - \phi(x_{0})$ is numerically less than $\eta - \phi(x_{0})$. But since $\phi(x)$~is continuous for $x = x_{0}$, this condition is certainly satisfied if $x$~is near enough to~$x_{0}$. Similarly there is a certain range of values, to the left of~$x_{1}$, for which $\phi(x) > \eta$. Let us divide the values of~$x$ between $x_{0}$~and~$x_{1}$ into two classes $L$,~$R$ as follows: \Item{(1)} in the class~$L$ we put all values~$\xi$ of~$x$ such that $\phi(x) < \eta$ when $x = \xi$ and for all values of~$x$ between $x_{0}$~and~$\xi$; \Item{(2)} in the class~$R$ we put all the other values of~$x$, \ie\ all numbers~$\xi$ such that either $\phi(\xi) \geq \eta$ or there is a value of~$x$ between $x_{0}$~and~$\xi$ for which $\phi(x) \geq \eta$. 
Then it is evident that these two classes satisfy all the conditions imposed upon the classes $L$,~$R$ of \SecNo[§]{17}, and so constitute a section of the real numbers. Let $\xi_{0}$ be the number corresponding to the section. First suppose $\phi(\xi_{0}) > \eta$, so that $\xi_{0}$~belongs to the upper class: and let $\phi(\xi_{0}) = \eta + k$, say. Then $\phi(\xi') < \eta$ and so \[ \phi(\xi_{0}) - \phi(\xi') > k, \] for all values of~$\xi'$ less than~$\xi_{0}$, which contradicts the condition of continuity for $x = \xi_{0}$. Next suppose $\phi(\xi_{0}) = \eta - k < \eta$. Then, if $\xi'$~is any number greater than~$\xi_{0}$, either $\phi(\xi') \geq \eta$ or we can find a number~$\xi''$ between $\xi_{0}$~and~$\xi'$ such that $\phi(\xi'') \geq \eta$. In either case we can find a number as near to~$\xi_{0}$ as we please and such that the corresponding values of~$\phi(x)$ differ by more than~$k$. And this again contradicts the hypothesis that $\phi(x)$~is continuous for $x = \xi_{0}$. Hence $\phi(\xi_{0}) = \eta$, and the theorem is established. It should be observed that we have proved more than is asserted explicitly in the theorem; we have proved in fact that $\xi_{0}$~is the \emph{least} value of~$x$ for which $\phi(x) = \eta$. It is not obvious, or indeed generally true, that there is a least among the values of~$x$ for which a function assumes a given value, though this is true for continuous functions. \begin{Remark} It is easy to see that the converse of the theorem just proved is not true. Thus such a function as the function~$\phi(x)$ whose graph is represented \PageSep{181} by \Fig{31} obviously assumes at least once every value between $\phi(x_{0})$ and~$\phi(x_{1})$: yet $\phi(x)$~is discontinuous. Indeed it is not even true that $\phi(x)$~must be continuous when it assumes each value \emph{once and once only}. Thus let $\phi(x)$ be defined as follows from $x = 0$ to $x = 1$. 
If $x = 0$ let $\phi(x) = 0$; if $0 < x < 1$ let $\phi(x) = 1 - x$; and if $x = 1$ let $\phi(x) = 1$. The graph of the function is shown in \Fig{32}; it includes the points $O$,~$C$ but \emph{not} the points $A$,~$B$. It is clear that, as $x$~varies from $0$ to~$1$, $\phi(x)$~assumes once and once only every value between $\phi(0) = 0$ and $\phi(1) = 1$; but $\phi(x)$~is discontinuous for $x = 0$ and $x = 1$. %[Illustration: Fig. 31.] %[Illustration: Fig. 32.] \Figures{2.5in}{31}{p181a}{2in}{32}{p181b} As a matter of fact, however, the curves which usually occur in elementary mathematics are composed of \emph{a finite number of pieces along which $y$~always varies in the same direction}. It is easy to show that if $y = \phi(x)$ always varies in the same direction, \ie\ steadily increases or decreases, as $x$~varies from $x_{0}$ to~$x_{1}$, then the two notions of continuity are really equivalent, \ie\ that if $\phi(x)$~takes every value between $\phi(x_{0})$ and~$\phi(x_{1})$ then it must be a continuous function in the sense of \SecNo[§]{98}\Add{.} For let $\xi$ be any value of~$x$ between $x_{0}$ and~$x_{1}$. As $x \to \xi$ through values less than~$\xi$, $\phi(x)$~tends to the limit~$\phi(\xi - 0)$ (\SecNo[§]{95}). Similarly as $x \to \xi$ through values greater than~$\xi$, $\phi(x)$~tends to the limit~$\phi(\xi + 0)$. The function will be continuous for $x = \xi$ if and only if \[ \phi(\xi - 0) = \phi(\xi) = \phi(\xi + 0)\Add{.} \] But if either of these equations is untrue, say the first, then it is evident that $\phi(x)$~never assumes any value which lies between $\phi(\xi - 0)$ and~$\phi(\xi)$, which is contrary to our assumption. Thus $\phi(x)$~must be continuous. The net result of this and the last section is consequently to show that our common-sense notion of what we mean by continuity is substantially accurate, and capable of precise statement in mathematical terms. 
\end{Remark} \Paragraph{101.} In this and the following paragraphs we shall state and prove some general theorems concerning continuous functions. \PageSep{182} \begin{Theorem}[1.] Suppose that $\phi(x)$~is continuous for $x = \xi$, and that $\phi(\xi)$~is positive. Then we can determine a positive number~$\EPSILON$ such that $\DPtypo{\phi(\xi)}{\phi(x)}$~is positive throughout the interval $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$. \end{Theorem} For, taking $\DELTA = \frac{1}{2}\phi(\xi)$ in the fundamental inequality of \PageRef{p.}{175}, we can choose $\EPSILON$ so that \[ \DPtypo{\phi(x) - \phi(\xi)}{|\phi(x) - \phi(\xi)|} < \tfrac{1}{2}\phi(\xi) \] throughout $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$, and then \[ \phi(x) \geq \phi(\xi) - |\phi(x) - \phi(\xi)| > \tfrac{1}{2}\phi(\xi) > 0, \] so that $\phi(x)$~is positive. There is plainly a corresponding theorem referring to negative values of~$\phi(x)$. \begin{Theorem}[2.] If $\phi(x)$~is continuous for $x = \xi$, and $\phi(x)$~vanishes for values of~$x$ as near to~$\xi$ as we please, or assumes, for values of~$x$ as near to~$\xi$ as we please, both positive and negative values, then $\phi(\xi) = 0$. \end{Theorem} This is an obvious corollary of Theorem~1. If $\phi(\xi)$~is not zero, it must be positive or negative; and if it were, for example, positive, it would be positive for all values of~$x$ sufficiently near to~$\xi$, which contradicts the hypotheses of the theorem. \Paragraph{102. The range of values of a continuous function.} Let us consider a function~$\phi(x)$ about which we shall only assume at present that it is defined for every value of~$x$ in an interval $\DPmod{(a, b)}{[a, b]}$. The values assumed by~$\phi(x)$ for values of~$x$ in~$\DPmod{(a, b)}{[a, b]}$ form an aggregate~$S$ to which we can apply the arguments of \SecNo[§]{80}, as we applied them in \SecNo[§]{81} to the aggregate of values of a function of~$n$.
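Theorem 1 of \SecNo[§]{101} above lends itself to a numerical sketch (a modern illustration, not part of Hardy's text; the function $\cos x$, the point $\xi = 1$, and the sample grid are illustrative assumptions): taking $\DELTA = \frac{1}{2}\phi(\xi)$ as in the proof and halving $\EPSILON$ until the fundamental inequality holds, $\phi(x)$ stays positive throughout $[\xi - \EPSILON, \xi + \EPSILON]$.

```python
import math

# A modern numerical sketch (not part of Hardy's text) of Theorem 1 of
# Sec. 101 for an illustrative function: phi(x) = cos x is continuous and
# positive at xi = 1.  Taking delta = phi(xi)/2 as in the proof, we halve
# eps until |phi(x) - phi(xi)| < delta on a sample of [xi - eps, xi + eps].
phi, xi = math.cos, 1.0
delta = phi(xi) / 2

def within_delta(eps, n=200):
    ts = [-1 + 2 * k / n for k in range(n + 1)]   # sample of [-1, 1]
    return all(abs(phi(xi + t * eps) - phi(xi)) < delta for t in ts)

eps = 1.0
while not within_delta(eps):
    eps /= 2

# Then phi(x) >= phi(xi) - |phi(x) - phi(xi)| > phi(xi)/2 > 0 throughout
# the sampled interval [xi - eps, xi + eps].
assert all(phi(xi + (-1 + 2 * k / 200) * eps) > phi(xi) / 2 > 0
           for k in range(201))
```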
If there is a number~$K$ such that $\phi(x) \leq K$, for all values of~$x$ in question, we say that $\phi(x)$ is \emph{bounded above}. In this case $\phi(x)$ possesses an \emph{upper bound}~$M$: no value of~$\phi(x)$ exceeds~$M$, but any number less than~$M$ is exceeded by at least one value of~$\phi(x)$. Similarly we define `\emph{bounded below}', `\emph{lower bound}', `\emph{bounded}', as applied to functions of a continuous variable~$x$. \begin{Theorem}[1.] If $\phi(x)$ is continuous throughout~$\DPmod{(a, b)}{[a, b]}$, then it is bounded in~$\DPmod{(a, b)}{[a, b]}$. \end{Theorem} \PageSep{183} We can certainly determine an interval~$\DPmod{(a, \xi)}{[a, \xi]}$, extending to the right from~$a$, in which $\phi(x)$~is bounded. For since $\phi(x)$~is continuous for $x = a$, we can, given any positive number~$\DELTA$ however small, determine an interval~$\DPmod{(a, \xi)}{[a, \xi]}$ throughout which $\phi(x)$~lies between $\phi(a) - \DELTA$ and $\phi(a) + \DELTA$; and obviously $\phi(x)$~is bounded in this interval. Now divide the points~$\xi$ of the interval~$\DPmod{(a, b)}{[a, b]}$ into two classes $L$,~$R$, putting~$\xi$ in~$L$ if $\DPtypo{\phi(\xi)}{\phi(x)}$~is bounded in~$\DPmod{(a, \xi)}{[a, \xi]}$, and in~$R$ if this is not the case. It follows from what precedes that $L$~certainly exists: what we propose to prove is that $R$~does not. Suppose that $R$~does exist, and let $\beta$ be the number corresponding to the section whose lower and upper classes are $L$~and~$R$. Since $\phi(x)$~is continuous for $x = \beta$, we can, however small $\DELTA$ may be, determine an interval $\DPmod{(\beta - \eta, \beta + \eta)}{[\beta - \eta, \beta + \eta]}$\footnote {If $\beta = b$ we must replace this interval by $\DPmod{(\beta - \eta, \beta)}{[\beta - \eta, \beta]}$, and $\beta + \eta$ by~$\beta$, throughout the argument which follows.} throughout which \[ \phi(\beta) - \DELTA < \phi(x) < \phi(\beta) + \DELTA.
\] Thus $\phi(x)$~is bounded in $\DPmod{(\beta - \eta, \beta + \eta)}{[\beta - \eta, \beta + \eta]}$. Now $\beta - \eta$ belongs to~$L$. Therefore $\phi(x)$~is bounded in~$\DPmod{(a, \beta - \eta)}{[a, \beta - \eta]}$: and therefore it is bounded in the whole interval $\DPmod{(a, \beta + \eta)}{[a, \beta + \eta]}$. But $\beta + \eta$ belongs to~$R$ and so $\phi(x)$~is \emph{not} bounded in~$\DPmod{(a, \beta + \eta)}{[a, \beta + \eta]}$. This contradiction shows that $R$~does not exist. And so $\phi(x)$~is bounded in the whole interval $\DPmod{(a, b)}{[a, b]}$\Add{.} \begin{Theorem}[2.] If $\phi(x)$~is continuous throughout~$\DPmod{(a, b)}{[a, b]}$, and $M$~and~$m$ are its upper and lower bounds, then $\phi(x)$~assumes the values $M$~and~$m$ at least once each in the interval. \end{Theorem} For, given any positive number~$\DELTA$, we can find a value of~$x$ for which $M - \phi(x) < \DELTA$ or $1/\{M - \phi(x)\} > 1/\DELTA$. Hence $1/\{M - \phi(x)\}$ is not bounded, and therefore, by Theorem~1, is not continuous. But $M - \phi(x)$ is a continuous function, and so $1/\{M - \phi(x)\}$ is continuous at any point at which its denominator does not vanish (\Ex{xxxvii}.~1). There must therefore be one point at which the denominator vanishes: at this point $\phi(x) = M$. Similarly it may be shown that there is a point at which $\phi(x) = m$. The proof just given is somewhat subtle and indirect, and it may be well, in view of the great importance of the theorem, to indicate alternative lines of proof. 
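The device used in the proof of Theorem~2 can be illustrated numerically (a modern sketch, not part of Hardy's text; the function $\sin x$ on $[0, \pi]$ and the grids are illustrative assumptions): since $\phi(x)$ comes within any $\DELTA$ of the upper bound~$M$, the quotient $1/\{M - \phi(x)\}$ takes arbitrarily large values.

```python
import math

# A modern illustration (not part of Hardy's text) of the device in the
# proof of Theorem 2: if M is the upper bound of phi in [a, b], then
# 1/(M - phi(x)) is unbounded, since phi(x) comes within any delta of M.
# Here phi(x) = sin x on [0, pi], with upper bound M = 1 (an illustrative
# choice); points where phi attains M are excluded from each sample grid.
phi, a, b, M = math.sin, 0.0, math.pi, 1.0

def largest_reciprocal(n):
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return max(1.0 / (M - phi(x)) for x in xs if phi(x) < M)

# Finer grids yield ever larger values: no K can bound 1/(M - phi(x)).
assert largest_reciprocal(10) < largest_reciprocal(100) < largest_reciprocal(10000)
```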
It will however be convenient to postpone these for a moment.\footnote {See \SecNo[§]{104}.} \PageSep{184} \begin{Examples}{XXXVIII.} \Item{1.} {\Loosen If $\phi(x) = 1/x$ except when $x = 0$, and $\phi(x) = 0$ when $x = 0$, then $\phi(x)$~has neither an upper nor a lower bound in any interval which includes $x = 0$ in its interior, as \eg\ the interval~$\DPmod{(-1, +1)}{[-1, +1]}$.} \Item{2.} If $\phi(x) = 1/x^{2}$ except when $x = 0$, and $\phi(x) = 0$ when $x = 0$, then $\phi(x)$~has the lower bound~$0$, but no upper bound, in the interval~$\DPmod{(-1, +1)}{[-1, +1]}$. \Item{3.} Let $\phi(x) = \sin(1/x)$ except when $x = 0$, and $\phi(x) = 0$ when $x = 0$. Then $\phi(x)$~is discontinuous for $x = 0$. In any interval~$\DPmod{(-\DELTA, +\DELTA)}{[-\DELTA, +\DELTA]}$ the lower bound is~$-1$ and the upper bound~$+1$, and each of these values is assumed by~$\phi(x)$ an infinity of times. \Item{4.} Let $\phi(x) = x - [x]$. This function is discontinuous for all integral values of~$x$. In the interval~$\DPmod{(0, 1)}{[0, 1]}$ its lower bound is~$0$ and its upper bound~$1$. It is equal to~$0$ when $x = 0$ or $x = 1$, but it is never equal to~$1$. Thus $\phi(x)$~never assumes a value equal to its upper bound. \Item{5.} Let $\phi(x) = 0$ when $x$~is irrational, and $\phi(x) = q$ when $x$~is a rational fraction~$p/q$. Then $\phi(x)$~has the lower bound~$0$, but no upper bound, in any interval~$\DPmod{(a, b)}{[a, b]}$. But if $\phi(x) = (-1)^{p}q$ when $x = p/q$, then $\phi(x)$~has neither an upper nor a lower bound in any interval. \end{Examples} \Paragraph{103. The oscillation of a function in an interval.} Let $\phi(x)$ be any function bounded throughout~$\DPmod{(a, b)}{[a, b]}$, and $M$~and~$m$ its upper and lower bounds. We shall now use the notation $M(a, b)$, $m(a, b)$ for $M$,~$m$, in order to exhibit explicitly the dependence of $M$~and~$m$ on $a$~and~$b$, and we shall write \[ O(a, b) = M(a, b) - m(a, b). 
\] This number~$O(a, b)$, the difference between the upper and lower bounds of~$\phi(x)$ in~$\DPmod{(a, b)}{[a, b]}$, we shall call the \Emph{oscillation} \emph{of~$\phi(x)$ in~$\DPmod{(a, b)}{[a, b]}$}. The simplest of the properties of the functions $M(a, b)$, $m(a, b)$, $O(a, b)$ are as follows. \begin{Result} \Item{(1)} If $a \leq c \leq b$ then $M(a, b)$~is equal to the greater of $M(a, c)$ and~$M(c, b)$, and $m(a, b)$ to the lesser of $m(a, c)$ and~$m(c, b)$. \end{Result} \begin{Result} \Item{(2)} $M(a, b)$~is an increasing, $m(a, b)$~a decreasing, and $O(a, b)$ an increasing function of~$b$. \end{Result} \begin{Result} \Item{(3)} $O(a, b) \leq O(a, c) + O(c, b)$. \end{Result} The first two theorems are almost immediate consequences of our definitions. Let $\mu$~be the greater of $M(a, c)$ and~$M(c, b)$, and let $\DELTA$ be any positive number. Then $\phi(x) \leq \mu$ throughout $\DPmod{(a, c)}{[a, c]}$ and~$\DPmod{(c, b)}{[c, b]}$, and therefore throughout~$\DPmod{(a, b)}{[a, b]}$; and $\phi(x) > \mu - \DELTA$ somewhere in~$\DPmod{(a, c)}{[a, c]}$ or in~$\DPmod{(c, b)}{[c, b]}$, and therefore somewhere in~$\DPmod{(a, b)}{[a, b]}$. \PageSep{185} Hence $M(a, b) = \mu$. The proposition concerning~$m$ may be proved similarly. Thus (1)~is proved, and (2)~is an obvious corollary. Suppose now that $M_{1}$~is the greater and $M_{2}$~the less of $M(a, c)$ and~$M(c, b)$, and that $m_{1}$~is the less and $m_{2}$~the greater of $m(a, c)$ and~$m(c, b)$. Then, since $c$~belongs to both intervals, $\phi(c)$~is not greater than~$M_{2}$ nor less than~$m_{2}$. Hence $M_{2} \geq m_{2}$, whether these numbers correspond to the same one of the intervals $\DPmod{(a, c)}{[a, c]}$ and $\DPmod{(c, b)}{[c, b]}$ or not, and \[ O(a, b) = M_{1} - m_{1} \leq M_{1} + M_{2} - m_{1} - m_{2}. \] But \[ O(a, c) + O(c, b) = M_{1} + M_{2} - m_{1} - m_{2}; \] and (3)~follows. \begin{Remark} \Paragraph{104. 
Alternative proofs of Theorem~2 of \SecNo[§]{102}.} The most straightforward proof of Theorem~2 of \SecNo[§]{102} is as follows. Let $\xi$~be any number of the interval~$\DPmod{(a, b)}{[a, b]}$. The function $M(a, \xi)$ increases steadily with~$\xi$ and never exceeds~$M$. We can therefore construct a section of the numbers~$\xi$ by putting~$\xi$ in~$L$ or in~$R$ according as $M(a, \xi) < M$ or $M(a, \xi) = M$. Let $\beta$~be the number corresponding to the section. If $a < \beta < b$, we have \[ M(a, \beta - \eta) < M,\quad M(a, \beta + \eta) = M \] for all positive values of~$\eta$, and so \[ M(\beta - \eta, \beta + \eta) = M, \] by~\Eq{(1)} of \SecNo[§]{103}. Hence $\phi(x)$~assumes, for values of~$x$ as near as we please to~$\beta$, values as near as we please to~$M$, and so, since $\phi(x)$~is continuous, $\phi(\beta)$ must be equal to~$M$. If $\beta = a$ then $M(a, a + \eta) = M$. And if $\beta = b$ then $M(a, b - \eta) < M$, and so $M(b - \eta, b) = M$. In either case the argument may be completed as before. The theorem may also be proved by the method of repeated bisection used in \SecNo[§]{71}. If $M$~is the upper bound of~$\phi(x)$ in an interval~$PQ$, and $PQ$~is divided into two equal parts, then it is possible to find a half~$P_{1}Q_{1}$ in which the upper bound of~$\phi(x)$ is also~$M$. Proceeding as in \SecNo[§]{71}, we construct a sequence of intervals $PQ$, $P_{1}Q_{1}$, $P_{2}Q_{2}$,~\dots\ in each of which the upper bound of~$\phi(x)$ is~$M$. These intervals, as in \SecNo[§]{71}, converge to a point~$T$, and it is easily proved that the value of~$\phi(x)$ at this point is~$M$. \end{Remark} \Paragraph{105. Sets of intervals on a line. The Heine-Borel Theorem.} We shall now proceed to prove some theorems concerning the oscillation of a function which are of a somewhat abstract character but of very great importance, particularly, as we shall see later, in the theory of integration. 
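The repeated-bisection argument just described can be sketched numerically (a modern illustration, not part of Hardy's text; the function $x(2 - x)$ on $[0, 2]$ is an illustrative choice, and a sampled grid stands in for the exact upper bound of each interval).

```python
# A modern sketch (not part of Hardy's text) of the repeated-bisection
# argument of Sec. 104, for the illustrative function phi(x) = x(2 - x)
# on [0, 2], whose upper bound M = 1 is attained at x = 1.  At each step
# we keep a half-interval in which the (sampled) upper bound is still M.

def phi(x):
    return x * (2.0 - x)

def sampled_sup(p, q, n=1000):
    return max(phi(p + (q - p) * k / n) for k in range(n + 1))

p, q = 0.0, 2.0
M = sampled_sup(p, q)
for _ in range(40):
    mid = (p + q) / 2
    if sampled_sup(p, mid) >= sampled_sup(mid, q):
        q = mid          # the left half still has upper bound M
    else:
        p = mid          # otherwise the right half must

# The nested intervals close down on the point T at which phi attains M.
assert abs(p - 1.0) < 1e-6 and abs(phi((p + q) / 2) - M) < 1e-9
```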
These theorems depend upon a general theorem concerning intervals on a line. \PageSep{186} Suppose that we are given a \emph{set of intervals} in a straight line, that is to say an aggregate each of whose members is an interval~$\DPmod{(\alpha, \beta)}{[\alpha, \beta]}$. We make no restriction as to the nature of these intervals; they may be finite or infinite in number; they may or may not overlap;\footnote {The word \emph{overlap} is used in its obvious sense: two intervals overlap if they have points in common which are not end points of either. Thus $\DPmod{(0, \frac{2}{3})}{[0, \frac{2}{3}]}$ and~$\DPmod{(\frac{1}{3}, 1)}{[\frac{1}{3}, 1]}$ overlap. A pair of intervals such as $\DPmod{(0, \frac{1}{2})}{[0, \frac{1}{2}]}$ and~$\DPmod{(\frac{1}{2}, 1)}{[\frac{1}{2}, 1]}$ may be said to \emph{abut}.} and any number of them may be included in others. \begin{Remark} It is worth while in passing to give a few examples of sets of intervals to which we shall have occasion to return later.\PageLabel{186} \Itemp{(i)} If the interval~$\DPmod{(0, 1)}{[0, 1]}$ is divided into $n$~equal parts then the $n$~intervals thus formed define a finite set of non-overlapping intervals which just cover up the line. \Itemp{(ii)} We take every point~$\xi$ of the interval~$\DPmod{(0, 1)}{[0, 1]}$, and associate with~$\xi$ the interval $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$, where $\EPSILON$~is a positive number less than~$1$, except that with~$0$ we associate $\DPmod{(0, \EPSILON)}{[0, \EPSILON]}$ and with~$1$ we associate $\DPmod{(1 - \EPSILON, 1)}{[1 - \EPSILON, 1]}$, and in general we reject any part of any interval which projects outside the interval~$\DPmod{(0, 1)}{[0, 1]}$. We thus define an infinite set of intervals, and it is obvious that many of them overlap with one another. 
\Itemp{(iii)} We take the rational points~$p/q$ of the interval~$\DPmod{(0, 1)}{[0, 1]}$, and associate with~$p/q$ the interval \[ \DPmod{\left(\frac{p}{q} - \frac{\EPSILON}{q^{3}}, \frac{p}{q} + \frac{\EPSILON}{q^{3}}\right)} {\left[\frac{p}{q} - \frac{\EPSILON}{q^{3}}, \frac{p}{q} + \frac{\EPSILON}{q^{3}}\right]}, \] where $\EPSILON$~is positive and less than~$1$. We regard~$0$ as~$0/1$ and~$1$ as~$1/1$: in these two cases we reject the part of the interval which lies outside~$\DPmod{(0, 1)}{[0, 1]}$. We obtain thus an infinite set of intervals, which plainly overlap with one another, since there are an infinity of rational points, other than~$p/q$, in the interval associated with~$p/q$. \end{Remark} \begin{ParTheorem}{The Heine-Borel Theorem.} Suppose that we are given an interval~$\DPmod{(a, b)}{[a, b]}$, and a set of intervals~$I$ each of whose members is included in~$\DPmod{(a, b)}{[a, b]}$. Suppose further that $I$~possesses the following properties: \Itemp{(i)} every point of~$\DPmod{(a, b)}{[a, b]}$, other than $a$~and~$b$, lies inside\footnote {That is to say `in and not at an end of'.} at least one interval of~$I$; \Itemp{(ii)} $a$~is the left-hand end point, and $b$~the right-hand end point, of at least one interval of~$I$. Then it is possible to choose \Emph{a finite number} of intervals from the set~$I$ which form a set of intervals possessing the properties \Inum{(i)}~and~\Inum{(ii)}. \end{ParTheorem} \PageSep{187} We know that $a$~is the left-hand end point of at least one interval of~$I$, say~$\DPmod{(a, a_{1})}{[a, a_{1}]}$. We know also that $a_{1}$~lies inside at least one interval of~$I$, say~$\DPmod{(a_{1}', a_{2})}{[a_{1}', a_{2}]}$. Similarly $a_{2}$~lies inside an interval $\DPmod{(a_{2}', a_{3})}{[a_{2}', a_{3}]}$ of~$I$. It is plain that this argument may be repeated indefinitely, unless after a finite number of steps $a_{n}$~coincides with~$b$. 
If $a_{n}$~does coincide with~$b$ after a finite number of steps then there is nothing further to prove, for we have obtained a finite set of intervals, selected from the intervals of~$I$, and possessing the properties required. If $a_{n}$~never coincides with~$b$, then the points $a_{1}$,~$a_{2}$, $a_{3}$,~\dots\ must (since each lies to the right of its predecessor) tend to a limiting position, but this limiting position may, so far as we can tell, lie anywhere in~$\DPmod{(a, b)}{[a, b]}$. Let us suppose now that the process just indicated, starting from~$a$, is performed in all possible ways, so that we obtain all possible sequences of the type $a_{1}$,~$a_{2}$, $a_{3}$,~\dots. Then we can prove that \emph{there must be at least one such sequence which arrives at~$b$ after a finite number of steps}. %[Illustration: Fig. 33.] \Figure[\textwidth]{33}{p187} There are two possibilities with regard to any point~$\xi$ between $a$~and~$b$. Either (i)~$\xi$~lies to the left of \emph{some} point~$a_{n}$ of \emph{some} sequence or (ii)~it does not. We divide the points~$\xi$ into two classes $L$~and~$R$ according as to whether (i)~or~(ii) is true. The class~$L$ certainly exists, since all points of the interval $\DPmod{(a, a_{1})}{[a, a_{1}]}$ belong to~$L$. We shall now prove that $R$~does not exist, so that every point~$\xi$ belongs to~$L$. If $R$~exists then $L$~lies entirely to the left of~$R$, and the classes $L$,~$R$ form a section of the real numbers between $a$~and~$b$, to which corresponds a number~$\xi_{0}$. The point~$\xi_{0}$ lies inside an interval of~$I$, say~$\DPmod{(\xi', \xi'')}{[\xi', \xi'']}$, and $\xi'$~belongs to~$L$, and so lies to the left of some term~$a_{n}$ of some sequence. 
But then we can take $\DPmod{(\xi', \xi'')}{[\xi', \xi'']}$ as the interval $\DPmod{(a_{n}', a_{n+1})}{[a_{n}', a_{n+1}]}$ associated with~$a_{n}$ in our construction of the sequence $a_{1}$,~$a_{2}$, $a_{3}$,~\dots; and all points to the left of~$\xi''$ lie to the left of~$a_{n+1}$. There are therefore points of~$L$ to the right of~$\xi_{0}$, and this contradicts the definition of~$R$. It is therefore impossible that $R$~should exist. \PageSep{188} Thus every point~$\xi$ belongs to~$L$. Now $b$~is the right-hand end point of an interval of~$I$, say~$\DPmod{(b_{1}, b)}{[b_{1}, b]}$, and $b_{1}$~belongs to~$L$. Hence there is a member~$a_{n}$ of a sequence $a_{1}$,~$a_{2}$, $a_{3}$,~\dots\ such that $a_{n} > b_{1}$. But then we may take the interval $\DPmod{(a_{n}', a_{n+1})}{[a_{n}', a_{n+1}]}$ corresponding to~$a_{n}$ to be~$\DPmod{(b_{1}, b)}{[b_{1}, b]}$, and so we obtain a sequence in which the term after the~$n$th coincides with~$b$, and therefore a finite set of intervals having the properties required. Thus the theorem is proved. \begin{Remark} It is instructive to consider the examples of \PageRef{p.}{186} in the light of this theorem. \Itemp{(i)} Here the conditions of the theorem are not satisfied\Add{:} the points $1/n$,~$2/n$, $3/n$,~\dots\ do not lie inside any interval of~$I$\Add{.} \Itemp{(ii)} Here the conditions of the theorem are satisfied. The set of intervals \[ \DPmod{(0, 2\EPSILON)}{[0, 2\EPSILON]}, \quad \DPmod{(\EPSILON, 3\EPSILON)}{[\EPSILON, 3\EPSILON]}, \quad \DPmod{(2\EPSILON, 4\EPSILON)}{[2\EPSILON, 4\EPSILON]}, \ \dots, \quad \DPmod{(1 - 2\EPSILON, 1)}{[1 - 2\EPSILON, 1]}, \] associated with the points $\EPSILON$,~$2\EPSILON$, $3\EPSILON$,~\dots, $1 - \EPSILON$, possesses the properties required. \Itemp{(iii)} In this case we can prove, by using the theorem, that there are, if $\EPSILON$~is small enough, points of~$\DPmod{(0, 1)}{[0, 1]}$ which do not lie in any interval of~$I$. 
If every point of~$\DPmod{(0, 1)}{[0, 1]}$ lay inside an interval of~$I$ (with the obvious reservation as to the end points), then we could find a finite number of intervals of~$I$ possessing the same property and having therefore a total length greater than~$1$. Now there are two intervals, of total length~$2\EPSILON$, for which $q = 1$, and $q - 1$~intervals, of total length $2\EPSILON(q - 1)/q^{3}$, associated with any other value of~$q$. The sum of any finite number of intervals of~$I$ can therefore not be greater than $2\EPSILON$~times that of the series \[ 1 + \frac{1}{2^{3}} + \frac{2}{3^{3}} + \frac{3}{4^{3}} + \dots, \] which will be shown to be convergent in \Ref{Ch.}{VIII}\@. Hence it follows that, if $\EPSILON$~is small enough, the supposition that every point of~$\DPmod{(0, 1)}{[0, 1]}$ lies inside an interval of~$I$ leads to a contradiction. The reader may be tempted to think that this proof is needlessly elaborate, and that the existence of points of the interval, not in any interval of~$I$, follows at once from the fact that the sum of all these intervals is less than~$1$. But the theorem to which he would be appealing is (when the set of intervals is infinite) far from obvious, and can only be proved rigorously by some such use of the Heine-Borel Theorem as is made in the text. \end{Remark} \Paragraph{106.} We shall now apply the Heine-Borel Theorem to the proof of two important theorems concerning the oscillation of a continuous function. \PageSep{189} \begin{Theorem}[I.] If $\phi(x)$~is continuous throughout the interval $\DPmod{(a, b)}{[a, b]}$, then we can divide $\DPmod{(a, b)}{[a, b]}$ into a finite number of sub-intervals $\DPmod{(a, x_{1})}{[a, x_{1}]}$, $\DPmod{(x_{1}, x_{2})}{[x_{1}, x_{2}]}$,~\dots\Add{,} $\DPmod{(x_{n}, b)}{[x_{n}, b]}$, in each of which the oscillation of~$\phi(x)$ is less than an assigned positive number~$\DELTA$. \end{Theorem} Let $\xi$ be any number between $a$~and~$b$. 
Since $\phi(x)$~is continuous for $x = \xi$, we can determine an interval $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$ such that the oscillation of~$\phi(x)$ in this interval is less than~$\DELTA$. It is indeed obvious that there are an infinity of such intervals corresponding to every~$\xi$ and every~$\DELTA$, for if the condition is satisfied for any particular value of~$\EPSILON$, then it is satisfied \textit{a~fortiori} for any smaller value. What values of~$\EPSILON$ are admissible will naturally depend upon~$\xi$; we have at present no reason for supposing that a value of~$\EPSILON$ admissible for one value of~$\xi$ will be admissible for another. We shall call the intervals thus associated with~$\xi$ \emph{the $\DELTA$-intervals of~$\xi$}. If $\xi = a$ then we can determine an interval $\DPmod{(a, a + \EPSILON)}{[a, a + \EPSILON]}$, and so an infinity of such intervals, having the same property. These we call the $\DELTA$-intervals of~$a$, and we can define in a similar manner the $\DELTA$-intervals of~$b$. Consider now the set~$I$ of intervals formed by taking all the $\DELTA$-intervals of all points of~$\DPmod{(a, b)}{[a, b]}$. It is plain that this set satisfies the conditions of the Heine-Borel Theorem; every point interior to the interval is interior to at least one interval of~$I$, and $a$~and~$b$ are end points of at least one such interval. We can therefore determine a set~$I'$ which is formed by a finite number of intervals of~$I$, and which possesses the same property as $I$~itself. The intervals which compose the set~$I'$ will in general overlap as in \Fig{34}. But their end %[Illustration: Fig. 34.] \Figure[2.5in]{34}{p189} points obviously divide up $\DPmod{(a, b)}{[a, b]}$ into a finite set of intervals~$I''$ each of which is included in an interval of~$I'$, and in each of which the oscillation of~$\phi(x)$ is less than~$\DELTA$. Thus Theorem~I is proved. \begin{Theorem}[II.] 
Given any positive number~$\DELTA$, we can find a number~$\eta$ such that, if the interval $\DPmod{(a, b)}{[a, b]}$ is divided in any manner into sub-intervals of length less than~$\eta$, then the oscillation of~$\phi(x)$ in each of them will be less than~$\DELTA$. \end{Theorem} \PageSep{190} Take $\DELTA_{1} < \frac{1}{2}\DELTA$, and construct, as in Theorem~I, a finite set of sub-intervals~$j$ in each of which the oscillation of~$\phi(x)$ is less than~$\DELTA_{1}$. Let $\eta$ be the length of the least of these sub-intervals~$j$. If now we divide $\DPmod{(a, b)}{[a, b]}$ into parts each of length less than~$\eta$, then any such part must lie entirely within at most two successive sub-intervals~$j$. Hence, in virtue of~(3) of \SecNo[§]{103}, the oscillation of~$\phi(x)$, in one of the parts of length less than~$\eta$, cannot exceed twice the greatest oscillation of~$\phi(x)$ in a sub-interval~$j$, and is therefore less than~$2\DELTA_{1}$, and therefore than~$\DELTA$. This theorem is of fundamental importance in the theory of definite integrals (\Ref{Ch.}{VII}). It is impossible, without the use of this or some similar theorem, to prove that a function continuous throughout an interval necessarily possesses an integral over that interval. \Paragraph{107. Continuous functions of several variables.} The notions of continuity and discontinuity may be extended to functions of several independent variables (\Ref{Ch.}{II}, \SecNo[§§]{31}~\textit{et~seq.}). Their application to such \DPtypo{functions however}{functions, however}, raises questions much more complicated and difficult than those which we have considered in this chapter. It would be impossible for us to discuss these questions in any detail here; but we shall, in the sequel, require to know what is meant by a continuous function of two variables, and we accordingly give the following definition. It is a straightforward generalisation of the last form of the definition of~\SecNo[§]{98}.
\begin{Defn} The function $\phi(x, y)$ of the two variables $x$~and~$y$ is said to be \Emph{continuous} for $x = \xi$, $y = \eta$ if, given any positive number~$\DELTA$, however small, we can choose~$\EPSILON(\DELTA)$ so that \[ |\phi(x, y) - \phi(\xi, \eta) | < \DELTA \] when $0 \leq |x - \xi| \leq \EPSILON(\DELTA)$ and $0 \leq |y - \eta| \leq \EPSILON(\DELTA)$; that is to say if we can draw a square, whose sides are parallel to the axes of coordinates and of length~$2\EPSILON(\DELTA)$, whose centre is the point~$(\xi, \eta)$, and which is such that the value of~$\phi(x, y)$ at any point inside it or on its boundary differs from~$\phi(\xi, \eta)$ by less than~$\DELTA$.\footnote {The reader should draw a figure to illustrate the definition.} \end{Defn} This definition of course presupposes that $\phi(x, y)$~is defined at all points of the square in question, and in particular at the point \PageSep{191} $(\xi, \eta)$. Another method of stating the definition is this: \emph{$\phi(x, y)$~is continuous for $x = \xi$, $y = \eta$ if $\phi(x, y) \to \phi(\xi, \eta)$ when $x \to \xi$, $y \to \eta$ in any manner}. This statement is apparently simpler; but it contains phrases the precise meaning of which has not yet been explained and can only be explained by the help of inequalities like those which occur in our original statement. It is easy to prove that the sums, the products, and in general the quotients of continuous functions of two variables are themselves continuous. A polynomial in two variables is continuous for all values of the variables; and the ordinary functions of~$x$ and~$y$ which occur in every-day analysis are \emph{generally} continuous, \ie\ are continuous except for pairs of values of $x$~and~$y$ connected by special relations. 
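The square-neighbourhood definition above admits a numerical reading (a modern sketch, not part of Hardy's text; the polynomial $xy + y^{2}$, the point $(1, 2)$, and the value of $\DELTA$ are illustrative assumptions): given $\DELTA$, halve $\EPSILON$ until $|\phi(x, y) - \phi(\xi, \eta)| < \DELTA$ throughout a sampled square of side $2\EPSILON$ centred at $(\xi, \eta)$.

```python
# A modern numerical reading (not part of Hardy's text) of the square
# definition of continuity in two variables, for the illustrative
# polynomial phi(x, y) = xy + y^2 at (xi, eta) = (1, 2): given delta,
# halve eps until |phi(x, y) - phi(xi, eta)| < delta throughout a
# sampled square of side 2*eps centred at (xi, eta).

def phi(x, y):
    return x * y + y * y

xi, eta = 1.0, 2.0
delta = 1e-3

def within_delta(eps, n=40):
    offs = [-eps + 2 * eps * k / n for k in range(n + 1)]
    return all(abs(phi(xi + dx, eta + dy) - phi(xi, eta)) < delta
               for dx in offs for dy in offs)

eps = 1.0
while not within_delta(eps):
    eps /= 2

# The eps found works, while 2*eps does not (the loop halved past it).
assert within_delta(eps) and not within_delta(2 * eps)
```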
\begin{Remark} The reader should observe carefully that to assert the continuity of~$\phi(x, y)$ with respect to the two variables $x$~and~$y$ is to assert much more than its continuity with respect to each variable considered separately. It is plain that if $\phi(x, y)$~is continuous with respect to $x$~and~$y$ then it is certainly continuous with respect to~$x$ (or~$y$) when any fixed value is assigned to~$y$ (or~$x$). But the converse is by no means true. Suppose, for example, that \[ \phi(x, y) = \frac{2xy}{x^{2} + y^{2}} \] when neither $x$~nor~$y$ is zero, and $\phi(x, y) = 0$ when either $x$ or~$y$ is zero. Then if $y$~has any fixed value, zero or not, $\phi(x, y)$~is a continuous function of~$x$, and in particular continuous for $x = 0$; for its value when $x = 0$ is zero, and it tends to the limit zero as $x \to 0$. In the same way it may be shown that $\phi(x, y)$~is a continuous function of~$y$. But $\phi(x, y)$~is \emph{not} a continuous function of $x$~\emph{and}~$y$ for $x = 0$, $y = 0$. Its value when $x = 0$, $y = 0$ is zero; but if $x$~and~$y$ tend to zero along the straight line~$y = ax$, then \[ \phi(x, y) = \frac{2a}{1 + a^{2}},\quad \lim\phi(x, y) = \frac{2a}{1 + a^{2}}, \] which may have any value between $-1$~and~$1$. \Paragraph{108. Implicit functions.} We have already, in \Ref{Ch.}{II}, met with the idea of an \emph{implicit function}. Thus, if $x$~and~$y$ are connected by the relation \[ y^{5} - xy - y - x = 0, \Tag{(1)} \] then $y$~is an `implicit function' of~$x$. But it is far from obvious that such an equation as this does really define a function~$y$ of~$x$, or several such functions. In \Ref{Ch.}{II} we were content to take this for granted. We are now in a position to consider whether the assumption we made then was justified. \PageSep{192} We shall find the following terminology useful. Suppose that it is possible to surround a point~$(a, b)$, as in \SecNo[§]{107}, with a square throughout which a certain condition is satisfied. 
We shall call such a square a \emph{neighbourhood} of~$(a, b)$, and say that the condition in question is satisfied \emph{in the neighbourhood of~$(a, b)$}, or \emph{near~$(a, b)$}, meaning by this simply that it is possible to find \emph{some} square throughout which the condition is satisfied. It is obvious that similar language may be used when we are dealing with a single variable, the square being replaced by an interval on a line. \begin{Theorem} If \Itemp{(i)} $f(x, y)$~is a continuous function of $x$~and~$y$ in the neighbourhood of~$(a, b)$, \Itemp{(ii)} $f(a, b) = 0$, \Itemp{(iii)} $f(x, y)$ is, for all values of~$x$ in the neighbourhood of~$a$, a steadily increasing function of~$y$, in the stricter sense of~\SecNo[§]{95}, %[** TN: Paragraph break in the original] then \Inum{(1)}~there is a unique function $y = \phi(x)$ which, when substituted in the equation $f(x, y) = 0$, satisfies it identically for all values of~$x$ in the neighbourhood of~$a$, \Inum{(2)}~$\phi(x)$ is continuous for all values of~$x$ in the neighbourhood of~$a$. \end{Theorem} In the figure the square represents a `neighbourhood' of~$(a, b)$ throughout which the conditions (i)~and~(iii) are satisfied, and $P$~the point~$(a, b)$. If we %[Illustration: Fig. 35.] \Figure[2in]{35}{p192} take $Q$~and~$R$ as in the figure, it follows from~(iii) that $f(x, y)$ is positive at~$Q$ and negative at~$R$. This being so, and $f(x, y)$ being continuous at~$Q$ and at~$R$, we can draw lines $QQ'$ and~$RR'$ parallel to~$OX$, so that $R'Q'$~is parallel to~$OY$ and $f(x, y)$ is positive at all points of~$QQ'$ and negative at all points of~$RR'$. In particular $f(x, y)$ is positive at~$Q'$ and negative at~$R'$, and therefore, in virtue of (iii)~and \SecNo[§]{100}, vanishes once and only once at a point~$P'$ on~$R'Q'$. The same construction gives us a unique point at which $f(x, y) = 0$ on each ordinate\DPnote{** TN: i.e., vertical segment} between $RQ$~and~$R'Q'$. 
It is obvious, moreover, that the same construction can be carried out to the left of~$RQ$. The aggregate of points such as~$P'$ gives us the graph of the required function $y = \phi(x)$. It remains to prove that $\phi(x)$~is continuous. This is most simply effected by using the idea of the `limits of indetermination' of~$\phi(x)$ as $x \to a$ (\SecNo[§]{96}). Suppose that $x \to a$, and let $\lambda$~and~$\Lambda$ be the limits of indetermination of~$\phi(x)$ as $x \to a$. It is evident that the points $(a, \lambda)$ and~$(a, \Lambda)$ lie on~$QR$. Moreover, we can find a sequence of values of~$x$ such that $\phi(x) \to \lambda$ when $x \to a$ through the values of the sequence; and since $f\{x, \phi(x)\} = 0$, and $f(x, y)$~is a continuous function of $x$~and~$y$, we have \[ f(a, \lambda) = 0. \] Hence $\lambda = b$; and similarly $\Lambda = b$. Thus $\phi(x)$~tends to the limit~$b$ as $x \to a$, and so $\phi(x)$~is continuous for $x = a$. It is evident that we can show in \PageSep{193} exactly the same way that $\phi(x)$~is continuous for any value of~$x$ in the neighbourhood of~$a$. It is clear that the truth of the theorem would not be affected if we were to change `increasing' to `decreasing' in condition~(iii). As an example, let us consider the equation~\Eq{(1)}, taking $a = 0$, $b = 0$. It is evident that the conditions (i)~and~(ii) are satisfied. Moreover \[ f(x, y) - f(x, y') = (y - y') (y^{4} + y^{3}y' + y^{2}y'^{2} + yy'^{3} + y'^{4} - x - 1) \] has, when $x$,~$y$, and~$y'$ are sufficiently small, the sign opposite to that of~$y - y'$. Hence condition~(iii) (with `decreasing' for `increasing') is satisfied. It follows that there is one and only one continuous function~$y$ which satisfies the equation~\Eq{(1)} identically and vanishes with~$x$. The same conclusion would follow if the equation were \[ y^{2} - xy - y - x = 0. 
\] The function in question is in this case \[ y = \tfrac{1}{2}\{1 + x - \sqrtp{1 + 6 x + x^{2}}\}, \] where the square root is positive. The second root, in which the sign of the square root is changed, does not satisfy the condition of vanishing with~$x$. There is one point in the proof which the reader should be careful to observe. We supposed that the hypotheses of the theorem were satisfied `in the neighbourhood of~$(a, b)$', that is to say throughout a certain square $a - \EPSILON \leq x \leq a + \EPSILON$, $b - \EPSILON \leq y \leq b + \EPSILON$. The conclusion holds `in the neighbourhood of $x = a$', that is to say throughout a certain interval $a - \EPSILON_{1} \leq x \leq a + \EPSILON_{1}$. There is nothing to show that the~$\EPSILON_{1}$ of the conclusion is the~$\EPSILON$ of the hypotheses, and indeed this is generally untrue. \Paragraph{109. Inverse Functions.} Suppose in particular that $f(x, y)$~is of the form $F(y) - x$. We then obtain the following theorem. \begin{Result} If $F(y)$ is a function of~$y$, continuous and steadily increasing \(or decreasing\), in the stricter sense of \SecNo[§]{95}, in the neighbourhood of $y = b$, and $F(b) = a$, then there is a unique continuous function $y = \phi(x)$ which is equal to~$b$ when $x = a$ and satisfies the equation $F(y) = x$ identically in the neighbourhood of $x = a$. \end{Result} The function thus defined is called the \emph{inverse function of~$F(y)$}. Suppose for example that $y^{3} = x$, $a = 0$, $b = 0$. Then all the conditions of the theorem are satisfied. The inverse function is $y = \sqrt[3]{x}$. If we had supposed that $y^{2} = x$ then the conditions of the theorem would not have been satisfied, for $y^{2}$~is not a steadily increasing function of~$y$ in any interval which includes $y = 0$: it decreases when $y$~is negative and increases when $y$~is positive.
And in this case the conclusion of the theorem does not hold, for $y^{2} = x$ defines \emph{two} functions of~$x$, viz.\ $y = \sqrt{x}$ and $y = -\sqrt{x}$, both of which vanish when $x = 0$, and each of which is defined only for positive values of~$x$, so that the equation has sometimes two solutions and sometimes none. The reader should consider the more general equations \[ y^{2n} = x, \quad y^{2n+1} = x, \] \PageSep{194} in the same way. Another interesting example is given by the equation \[ y^{5} - y - x = 0, \] already considered in \Ex{xiv}.~7. Similarly the equation \[ \sin y = x \] has just one solution which vanishes with~$x$, viz.\ the value of~$\arcsin x$ which vanishes with~$x$. There are of course an infinity of solutions, given by the other values of~$\arcsin x$ (cf.\ \Ex{xv}.~10), which do not satisfy this condition. So far we have considered only what happens in the neighbourhood of a particular value of~$x$. Let us suppose now that $F(y)$~is continuous and steadily increasing (or decreasing) throughout an interval~$\DPmod{(a, b)}{[a, b]}$. Given any point~$\xi$ of~$\DPmod{(a, b)}{[a, b]}$, we can determine an interval~$i$ including~$\xi$, and a unique and continuous inverse function~$\phi_{i} (x)$ defined throughout~$i$. From the set~$I$ of intervals~$i$ we can, in virtue of the Heine-Borel Theorem, pick out a finite sub-set covering up the whole interval~$\DPmod{(a, b)}{[a, b]}$; and it is plain that the finite set of functions~$\phi_{i} (x)$, corresponding to the sub-set of intervals~$i$ thus selected, define together a unique inverse function~$\phi(x)$ continuous throughout~$\DPmod{(a, b)}{[a, b]}$. We thus obtain the theorem: \begin{Result}if $x = F(y)$, where $F(y)$~is continuous and increases steadily and strictly from~$A$ to~$B$ as $y$~increases from~$a$ to~$b$, then there is a unique inverse function $y = \phi(x)$ which is continuous and increases steadily and strictly from~$a$ to~$b$ as $x$~increases from~$A$ to~$B$.
\end{Result} \begin{Remark} It is worth while to show how this theorem can be obtained directly without the help of the more difficult theorem of \SecNo[§]{108}. Suppose that $A < \xi < B$, and consider the class of values of~$y$ such that (i)~$a < y < b$ and (ii)~$F(y) \leq \xi$. This class has an upper bound~$\eta$, and plainly $F(\eta) \leq \xi$. If $F(\eta)$ were less than~$\xi$, we could, since $F(y)$~is continuous, find values of~$y$ greater than~$\eta$ for which $F(y) < \xi$, and $\eta$~would not be the upper bound of the class considered. Hence $F(\eta) = \xi$. The equation $F(y) = \xi$ has therefore a unique solution $y = \eta = \phi(\xi)$, say; and plainly $\eta$~increases steadily and continuously with~$\xi$, which proves the theorem. \end{Remark} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER V.} \begin{Examples}{} \Item{1.} Show that, if neither $a$~nor~$b$ is zero, then \[ ax^{n} + bx^{n-1} + \dots + k = ax^{n} (1 + \epsilon_{x}), \] where $\epsilon_{x}$~is of the first order of smallness when $x$~is large. \Item{2.} If $P(x) = ax^{n} + bx^{n-1} + \dots + k$, and $a$~is not zero, then as $x$~increases $P(x)$~has ultimately the sign of~$a$; and so has $P(x + \lambda) - P(x)$, where $\lambda$~is any constant. \Item{3.} Show that in general \[ (ax^{n} + bx^{n-1} + \dots + k)/(Ax^{n} + Bx^{n-1} + \dots + K) = \alpha + (\beta/x) (1 + \epsilon_{x}), \] where $\alpha = a/A$, $\beta = (bA - aB)/A^{2}$, and $\epsilon_{x}$~is of the first order of smallness when $x$~is large. Indicate any exceptional cases. \PageSep{195} \Item{4.} Express \[ (ax^{2} + bx + c)/(Ax^{2} + Bx + C) \] in the form \[ \alpha + (\beta/x) + (\gamma/x^{2})(1 + \epsilon_{x}), \] where $\epsilon_{x}$~is of the first order of smallness when $x$~is large. \Item{5.} Show that \[ \lim_{x\to\infty}\sqrt{x}\{\sqrtp{x + a} - \sqrt{x}\} = \tfrac{1}{2} a. \] [Use the formula $\sqrtp{x + a} - \sqrt{x} = a/\{\sqrtp{x + a} + \sqrt{x}\}$.] \Item{6.} Show that $\sqrtp{x + a} = \sqrt{x} + \frac{1}{2}(a/\sqrt{x}) (1 + \epsilon_{x})$, where $\epsilon_{x}$~is of the first order of smallness when $x$~is large.
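[In Ex.~6, for instance, the formula of Ex.~5 gives
\[
\sqrtp{x + a} - \sqrt{x} = \frac{a}{\sqrtp{x + a} + \sqrt{x}} = \frac{1}{2}\, \frac{a}{\sqrt{x}} × \frac{2\sqrt{x}}{\sqrtp{x + a} + \sqrt{x}},
\]
and the last factor, which may be taken as the $1 + \epsilon_{x}$ of the result, tends to~$1$ as $x \to \infty$.]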
\Item{7.} Find values of $\alpha$~and~$\beta$ such that $\sqrtp{a x^{2} + 2bx + c} - \alpha x - \beta$ has the limit zero as $x \to \infty$; and prove that $\lim x\{\sqrtp{ax^{2} + 2bx + c} - \alpha x - \beta\} = (ac - b^{2})/(2a\sqrt{a})$. \Item{8.} Evaluate \[ \lim_{x \to\infty} x\left\{\sqrtbr{x^{2} + \sqrtp{x^{4} + 1}} - x\sqrt{2}\right\}. \] \Item{9.} Prove that $(\sec x - \tan x) \to 0$ as $x \to \frac{1}{2}\pi$. \Item{10.} Prove that $\phi(x) = 1 - \cos(1 - \cos x)$ is of the fourth order of smallness when $x$~is small; and find the limit of $\phi(x)/x^{4}$ as $x \to 0$. \Item{11.} Prove that $\phi(x) = x\sin(\sin x) - \sin^{2}x$ is of the sixth order of smallness when $x$~is small; and find the limit of $\phi(x)/x^{6}$ as $x \to 0$. \Item{12.} From a point~$P$ on a radius~$OA$ of a circle, produced beyond the circle, a tangent~$PT$ is drawn to the circle, touching it in~$T$, and $TN$~is drawn perpendicular to~$OA$. Show that $NA/AP \to 1$ as $P$~moves up to~$A$. \Item{13.} Tangents are drawn to a circular arc at its middle point and its extremities; $\Delta$~is the area of the triangle formed by the chord of the arc and the two tangents at the extremities, and $\Delta'$~the area of that formed by the three tangents. Show that $\Delta/\Delta' \to 4$ as the length of the arc tends to zero. \Item{14.} For what values of~$a$ does $\{a + \sin(1/x)\}/x$ tend to (1)~$\infty$, (2)~$-\infty$, as $x \to +0$? [To~$\infty$ if~$a > 1$, to~$-\infty$ if~$a < -1$: the function oscillates if $-1 \leq a \leq 1$.] \Item{15.} If $\phi(x) = 1/q$ when $x = p/q$, where $p/q$~is in its lowest terms, and $\phi(x) = 0$ when $x$~is irrational, then $\phi(x)$~is continuous for all irrational and discontinuous for all rational values of~$x$. \Item{16.} Show that the function whose graph is drawn in \Fig{32} may be represented by either of the formulae \[ 1 - x + [x] - [1 - x],\quad 1 - x - \lim_{n\to\infty} (\cos^{2n+1}\pi x).
\] \Item{17.} Show that the function~$\phi(x)$ which is equal to~$0$ when $x = 0$, to~$\frac{1}{2} - x$ when $0 < x < \frac{1}{2}$, to~$\frac{1}{2}$ when $x = \frac{1}{2}$, to~$\frac{3}{2} - x$ when $\frac{1}{2}< x < 1$, and to~$1$ when $x = 1$, assumes every value between $0$~and~$1$ once and once only as $x$~increases from $0$~to~$1$, but is discontinuous for $x = 0$, $x = \frac{1}{2}$, and $x = 1$. Show also that the function may be represented by the formula \[ \tfrac{1}{2} - x + \tfrac{1}{2}[2x] - \tfrac{1}{2}[1 - 2x]. \] \PageSep{196} \Item{18.} Let $\phi(x) = x$ when $x$~is rational and $\phi(x) = 1 - x$ when $x$~is irrational. Show that $\phi(x)$~assumes every value between $0$ and~$1$ once and once only as $x$~increases from $0$ to~$1$, but is discontinuous for every value of~$x$ except $x = \frac{1}{2}$. \Item{19.} As $x$~increases from~$-\frac{1}{2}\pi$ to~$\frac{1}{2}\pi$, $y = \sin x$ is continuous and steadily increases, in the stricter sense, from~$-1$ to~$1$. Deduce the existence of a function $x = \arcsin y$ which is a continuous and steadily increasing function of~$y$ from $y = -1$ to~$y = 1$. \Item{20.} Show that the numerically least value of~$\arctan y$ is continuous for all values of~$y$ and increases steadily from $-\frac{1}{2}\pi$ to~$\frac{1}{2}\pi$ as $y$~varies through all real values. \Item{21.} Discuss, on the lines of \SecNo[§§]{108}--\SecNo{109}, the solution of the equations \[ y^{2} - y - x = 0,\quad y^{4} - y^{2} - x^{2} = 0,\quad y^{4} - y^{2} + x^{2} = 0 \] in the neighbourhood of $x = 0$, $y = 0$. \Item{22.} If $ax^{2} + 2bxy + cy^{2} + 2dx + 2ey = 0$ and $\Delta = 2bde - ae^{2} - cd^{2}$, then one value of~$y$ is given by $y = \alpha x + \beta x^{2} + (\gamma + \epsilon_{x}) x^{3}$, where \[ \alpha = -d/e,\quad \beta = \Delta/2e^{3},\quad \gamma = (cd - be) \Delta/2e^{5}, \] and $\DPtypo{e_{x}}{\epsilon_{x}}$~is of the first order of smallness when $x$~is small.
[If $y - \alpha x = \eta $ then \[ -2e\eta = ax^{2} + 2bx(\eta + \alpha x) + c(\eta + \alpha x)^{2} = Ax^{2} + 2Bx \eta + C\eta^{2}, \] say. It is evident that $\eta$~is of the second order of smallness, $x\eta$~of the third, and $\eta^{2}$~of the fourth; and $-2e\eta = Ax^{2} - (AB/e) x^{3}$, the error being of the fourth order.] \Item{23.} If $x = ay + by^{2} + cy^{3}$ then one value of~$y$ is given by \[ y = \alpha x + \beta x^{2} + (\gamma + \epsilon_{x}) x^{3}, \] where $\alpha = 1/a$, $\beta = -b/a^{3}$, $\gamma = (2b^{2} - ac)/a^{5}$, and $\epsilon_{x}$~is of the first order of smallness when $x$~is small. \Item{24.} If $x = ay + by^{n}$, where $n$~is an integer greater than unity, then one value of~$y$ is given by $y = \alpha x + \beta x^{n} + (\gamma + \epsilon_{x}) x^{2n-1}$, where $\alpha = 1/a$, $\beta = -b/a^{n+1}$, $\gamma = nb^{2}/a^{2n+1}$, and $\epsilon_{x}$~is of the $(n - 1)$th~order of smallness when $x$~is small. \Item{25.} Show that the least positive root of the equation $xy = \sin x$ is a continuous function of~$y$ throughout the interval $\DPmod{(0, 1)}{[0, 1]}$, and decreases steadily from $\pi$ to~$0$ as $y$~increases from $0$ to~$1$. [The function is the inverse of $(\sin x)/x$: apply~\SecNo[§]{109}.] \Item{26.} The least positive root of $xy = \tan x$ is a continuous function of~$y$ throughout the interval $\DPmod{(1, \infty)}{[1, \infty)}$, and increases steadily from $0$ to~$\frac{1}{2}\pi$ as $y$~increases from $1$ towards~$\infty$. \end{Examples} \PageSep{197} \Chapter{VI}{DERIVATIVES AND INTEGRALS} \Paragraph{110. Derivatives or Differential Coefficients.} Let us return to the consideration of the properties which we naturally associate with the notion of a curve. The first and most obvious property is, as we saw in the last chapter, that which gives a curve its appearance of connectedness, and which we embodied in our definition of a continuous function. 
The ordinary curves which occur in elementary geometry, such as straight lines, circles and conic sections, have of course many other properties of a general character. The simplest and most noteworthy of these is perhaps that they have a definite \emph{direction} at every point, or what is the same thing, that at every point of the curve we can draw a \emph{tangent} to it. The reader will probably remember that in elementary geometry the tangent to a curve at~$P$ is defined to be `the limiting position of the chord~$PQ$, when $Q$~moves up towards coincidence with~$P$'. Let us consider what is implied in the assumption of the existence of such a limiting position. In the figure (\Fig{36}) $P$~is a fixed point on the curve, and $Q$ a variable point; $PM$,~$QN$ are parallel to~$OY$ and $PR$ to~$OX$. We denote the coordinates of~$P$ by $x$,~$y$ and those of~$Q$ by $x + h$,~$y + k$: $h$~will be positive or negative according as $N$~lies to the right or left of~$M$. We have assumed that there is a tangent to the curve at~$P$, or that there is a definite `limiting position' of the chord~$PQ$. Suppose that $PT$, the tangent at~$P$, makes an angle~$\psi$ with~$OX$. Then to say that $PT$~is the limiting position of~$PQ$ is equivalent to saying that the limit of the angle $QPR$ is~$\psi$, when $Q$~approaches~$P$ \PageSep{198} along the curve from either side. We have now to distinguish two cases, a general case and an exceptional one. %[Illustration: Fig. 36.] \Figure[3in]{36}{p198} The general case is that in which $\psi$~is not equal to~$\frac{1}{2}\pi$, so that $PT$~is not parallel to~$OY$. In this case $RPQ$ tends to the limit~$\psi$, and \[ RQ/PR = \tan RPQ \] tends to the limit $\tan \psi$. Now \[ RQ/PR = (NQ - MP)/MN = \{\phi(x + h) - \phi(x)\}/h; \] and so \[ \lim_{h \to 0} \frac{\phi(x + h) - \phi(x)}{h} = \tan\psi. 
\Tag{(1)} \] The reader should be careful to note that in all these equations all lengths are regarded as affected with the proper sign, so that (\eg)\ $RQ$~is negative in the figure when $Q$~lies to the left of~$P$; and that the convergence to the limit is unaffected by the sign of~$h$. Thus the assumption that the curve which is the graph of~$\phi(x)$ has a tangent at~$P$, which is not perpendicular to the axis of~$x$, implies that $\phi(x)$~has, for the particular value of~$x$ corresponding to~$P$, the property that \emph{$\{\phi(x + h) - \phi(x)\}/h$ tends to a limit when $h$~tends to zero}. \begin{Remark} This of course implies that both of \[ \{\phi(x + h) - \phi(x)\}/h,\quad \{\phi(x - h) - \phi(x)\}/(-h) \] tend to limits when $h \to 0$ by positive values only, and that the two limits are equal. If these limits exist but are not equal, then the curve $y = \phi(x)$ has an angle at the particular point considered, as in \Fig{37}. \end{Remark} Now let us suppose that the curve has (like the circle or ellipse) a tangent at every point of its length, or at any rate every \PageSep{199} portion of its length which corresponds to a certain range of variation of~$x$. Further let us suppose this tangent never perpendicular to the axis of~$x$: in the case of a circle this would of course restrict us to considering an arc less than a semicircle. Then an equation such as~\Eq{(1)} holds for all values of~$x$ which fall inside this range. To each such value of~$x$ corresponds a value of $\tan\psi$: $\tan\psi$~is a function of~$x$, which is defined for all values of~$x$ in the range of values under consideration, and which may be calculated or \emph{derived} from the original function~$\phi(x)$. We shall call this function the \Emph{derivative} or \emph{derived function} of~$\phi(x)$, and we shall denote it by \[ \phi'(x). 
\] Another name for the derived function of~$\phi(x)$ is the \Emph{differential coefficient} of~$\phi(x)$; and the operation of calculating $\phi'(x)$ from~$\phi(x)$ is generally known as \Emph{differentiation}. This terminology is firmly established for historical reasons: see~\SecNo[§]{115}. Before we proceed to consider the special case mentioned above, in which $\psi = \frac{1}{2}\pi$, we shall illustrate our definition by some general remarks and particular illustrations. \Paragraph{111. Some general remarks.} \Item{(1)} The existence of a derived function~$\phi'(x)$ for all values of~$x$ in the interval $a \leq x \leq b$ implies that $\phi(x)$~is continuous at every point of this interval. For it is evident that $\{\phi(x + h) - \phi(x)\}/h$ cannot tend to a limit unless $\lim\phi(x + h) = \phi(x)$, and it is this which is the property denoted by continuity. \Item{(2)} It is natural to ask whether the converse is true, \ie\ whether every continuous curve has a definite tangent at every point, and %[Illustration: Fig. 37.] \Figure[2in]{37}{p199} every function a differential coefficient for every value of~$x$ for which it is continuous.\footnote {We leave out of account the exceptional case (which we have still to examine) in which the curve is supposed to have a tangent perpendicular to~$OX$: apart from this possibility the two forms of the question stated above are equivalent.} The answer is obviously \emph{No}: it is sufficient to consider the curve formed by two straight lines meeting to form an angle (\Fig{37}). \PageSep{200} The reader will see at once that in this case $\{\phi(x + h) - \phi(x)\}/h$ has the limit $\tan\beta$ when $h \to 0$ by positive values and the limit $\tan\alpha$ when $h \to 0$ by negative values. \begin{Remark} This is of course a case in which a curve might reasonably be said to have \emph{two} directions at a point. 
But the following example, although a little more difficult, shows conclusively that there are cases in which a continuous curve cannot be said to have either one direction or several directions at one of its points. Draw the graph (\Fig{14}, \PageRef{p.}{53}) of the function $x\sin(1/x)$. The function is not defined for $x = 0$, and so is discontinuous for $x = 0$. On the other hand the function defined by the equations \[ \phi(x) = x\sin(1/x)\quad (x \neq 0),\qquad \phi(x) = 0\quad (x = 0) \] is continuous for $x = 0$ (\Exs{xxxvii}.~14,~15), and the graph of this function is a continuous curve. But $\phi(x)$~has no derivative for $x = 0$. For $\phi'(0)$~would be, by definition, $\lim\{\phi(h) - \phi(0)\}/h$ or $\lim\sin(1/h)$; and no such limit exists. It has even been shown that a function of~$x$ may be continuous and yet have no derivative for \emph{any} value of~$x$, but the proof of this is much more difficult. The reader who is interested in the question may be referred to Bromwich's \textit{Infinite Series}, pp.~490--1, or Hobson's \textit{Theory of Functions of a Real Variable}, pp.~620--5. \end{Remark} \Item{(3)} The notion of a derivative or differential coefficient was suggested to us by geometrical considerations. But there is nothing geometrical in the notion itself. The derivative $\phi'(x)$ of a function $\phi(x)$ may be defined, without any reference to any kind of geometrical representation of~$\phi(x)$, by the equation \[ \phi'(x) = \lim_{h \to 0} \frac{\phi(x + h) - \phi(x)}{h}; \] and $\phi(x)$~has or has not a derivative, for any particular value of~$x$, according as this limit does or does not exist. The geometry of curves is merely one of many departments of mathematics in which the idea of a derivative finds an application. \begin{Remark} Another important application is in dynamics. Suppose that a particle is moving in a straight line in such a way that at time~$t$ its distance from a fixed point on the line is $s = \phi(t)$. 
Then the `velocity of the particle at time~$t$' is by definition the limit of \[ \frac{\phi(t + h) - \phi(t)}{h} \] as $h \to 0$. The notion of `velocity' is in fact merely a special case of that of the derivative of a function. \end{Remark} \PageSep{201} \begin{Examples}{XXXIX.} \Item{1.} If $\phi(x)$~is a constant then $\phi'(x) = 0$. Interpret this result geometrically. \Item{2.} If $\phi(x) = ax + b$ then $\phi'(x) = a$. Prove this (i)~from the formal definition and (ii)~by geometrical considerations. \Item{3.} If $\phi(x) = x^{m}$, where $m$~is a positive integer, then $\phi'(x) = mx^{m-1}$. [For \begin{align*} \phi'(x) &= \lim \frac{(x + h)^{m} - x^{m}}{h}\\ &= \lim \left\{mx^{m-1} + \frac{m(m - 1)}{1·2} x^{m-2} h + \dots + h^{m-1}\right\}. \end{align*} The reader should observe that this method cannot be applied to~$x^{p/q}$, where $p/q$~is a rational fraction, as we have no means of expressing $(x + h)^{p/q}$ as a finite series of powers of~$h$. We shall show later on (\SecNo[§]{118}) that the result of this example holds for all rational values of~$m$. Meanwhile the reader will find it instructive to determine $\phi'(x)$ when $m$~has some special fractional value (\eg~$\frac{1}{2}$), by means of some special device.] \Item{4.} {\Loosen If $\phi(x) = \sin x$, then $\phi'(x) = \cos x$; and if $\phi(x) = \cos x$, then $\phi'(x) = -\sin x$.} [For example, if $\phi(x) = \sin x$, we have \[ \{\phi(x + h) - \phi(x)\}/h = \{2\sin \tfrac{1}{2}h \cos(x + \tfrac{1}{2}h)\}/h, \] the limit of which, when $h \to 0$, is $\cos x$, since $\lim\cos(x + \frac{1}{2}h) = \cos x$ (the cosine being a continuous function) and $\lim\{(\sin \frac{1}{2}h)/\frac{1}{2}h\} = 1$ (\Ex{xxxvi}.~13).] \Item{5.} \Topic{Equations of the tangent and normal to a curve $y = \phi(x)$.} The tangent to the curve at the point $(x_{0}, y_{0})$ is the line through $(x_{0}, y_{0})$ which makes with~$OX$ an angle~$\psi$, where $\tan\psi = \phi'(x_{0})$. 
Its equation is therefore \[ y - y_{0} = (x - x_{0}) \phi'(x_{0}); \] and the equation of the normal (the perpendicular to the tangent at the point of contact) is \[ (y - y_{0}) \phi'(x_{0}) + x - x_{0} = 0. \] We have assumed that the tangent is not parallel to the axis of~$y$. In this special case it is obvious that the tangent and normal are $x = x_{0}$ and $y = y_{0}$ respectively. \Item{6.} Write down the equations of the tangent and normal at any point of the parabola $x^{2} = 4ay$. Show that if $x_{0} = 2a/m$, $y_{0} = a/m^{2}$, then the tangent at $(x_{0}, y_{0})$ is $x = my + (a/m)$. \end{Examples} \Paragraph{112.} We have seen that if $\phi(x)$~is not continuous for a value of~$x$ then it cannot possibly have a derivative for that value of~$x$. Thus such functions as $1/x$ or $\sin(1/x)$, which are not defined for $x = 0$, and so necessarily discontinuous for $x = 0$, cannot have derivatives for $x = 0$. Or again the function~$[x]$, which is discontinuous for every integral value of~$x$, has no derivative for any such value of~$x$. \PageSep{202} \begin{Remark} \Par{Example.} Since $[x]$~is constant between every two integral values of~$x$, its derivative, whenever it exists, has the value zero. Thus the derivative of~$[x]$, which we may represent by~$[x]'$, is a function equal to zero for all values of~$x$ save integral values and undefined for integral values. It is interesting to note that the function $1 - \dfrac{\sin\pi x}{\sin\pi x}$ has exactly the same properties. \end{Remark} We saw also in \Ex{xxxvii}.~7 that the types of discontinuity which occur most commonly, when we are dealing with the very simplest and most obvious kinds of functions, such as polynomials or rational or trigonometrical functions, are associated with a relation of the type \[ \phi(x) \to +\infty \] or $\phi(x) \to -\infty$. In all these cases, as in such cases as those considered above, there is no derivative for certain special values of $x$. %[Illustration: Fig. 38.] 
\Figure{38}{p202} In fact, as was pointed out in \SecNo[§]{111},~\Eq{(1)}, \emph{all discontinuities of~$\phi(x)$ are also discontinuities of~$\phi'(x)$}. But the converse is not true, as we may easily see if we return to the geometrical point of view of \SecNo[§]{110} and consider the special case, hitherto left aside, in which the graph of~$\phi(x)$ has a tangent parallel to~$OY$. This case may be subdivided into a number of cases, of which the most typical are shown in \Fig{38}. In cases (\ic)~and~(\id) the function is two valued on one side of~$P$ and not defined on the other. In such cases we may consider the two sets of values of~$\phi(x)$, which occur on one side of~$P$ or the other, as defining distinct functions $\phi_{1}(x)$ and~$\phi_{2}(x)$, the upper part of the curve corresponding to~$\phi_{1}(x)$. \PageSep{203} The reader will easily convince himself that in~(\ia) \[ \{\phi(x + h) - \phi(x)\}/h \to +\infty, \] as $h \to 0$, and in~(\ib) \[ \{\phi(x + h) - \phi(x)\}/h \to -\infty; \] while in~(\ic) \[ \{\phi_{1}(x + h) - \phi_{1}(x)\}/h \to +\infty,\quad \{\phi_{2}(x + h) - \phi_{2}(x)\}/h \to -\infty, \] and in~(\id) \[ \{\phi_{1}(x + h) - \phi_{1}(x)\}/h \to -\infty,\quad \{\phi_{2}(x + h) - \phi_{2}(x)\}/h \to +\infty, \] though of course in~(\ic) only positive and in~(\id) only negative values of~$h$ can be considered, a fact which by itself would preclude the existence of a derivative. We can obtain examples of these four cases by considering the functions defined by the equations \[ \Item{(\ia)}\ y^{3} = x,\quad \Item{(\ib)}\ y^{3} = -x,\quad \Item{(\ic)}\ y^{2} = x,\quad \Item{(\id)}\ y^{2} = -x, \] the special value of~$x$ under consideration being $x = 0$. \Paragraph{113. Some general rules for differentiation.} Throughout the theorems which follow we assume that the functions $f(x)$~and~$F(x)$ have derivatives $f'(x)$~and~$F'(x)$ for the values of~$x$ considered. 
\begin{Result} \Item{(1)} If $\phi(x) = f(x) + F(x)$, then $\phi(x)$ has a derivative \[ \phi'(x) = f'(x) + F'(x). \] \end{Result} \begin{Result} \Item{(2)} If $\phi(x) = kf(x)$, where $k$~is a constant, then $\phi(x)$~has a derivative \[ \phi'(x) = kf'(x). \] \end{Result} We leave it as an exercise to the reader to deduce these results from the general theorems stated in \Ex{xxxv}.~1. \begin{Result} \Item{(3)} If $\phi(x) = f(x)F(x)$, then $\phi(x)$~has a derivative \[ \phi'(x) = f(x)F'(x) + f'(x)F(x). \] \end{Result} For \begin{align*} %[** TN: Re-aligned] \phi'(x) &= \lim\frac{f(x + h)F(x + h) - f(x)F(x)}{h}\\ &= \lim\left\{f(x + h)\frac{F(x + h) - F(x)}{h} + F(x)\frac{f(x + h) - f(x)}{h}\right\}\\ &=f(x)F'(x) + F(x)f'(x). \end{align*} \PageSep{204} \begin{Result} \Item{(4)} If $\phi(x) = \dfrac{1}{f(x)}$, then $\phi(x)$~has a derivative \[ \phi'(x) = -\frac{f'(x)}{\{f(x)\}^{2}}. \] \end{Result} In this theorem we of course suppose that $f(x)$~is not equal to zero for the particular value of~$x$ under consideration. Then \[ \phi'(x) = \lim \frac{1}{h} \left\{\frac{f(x) - f(x + h)}{f(x + h)f(x)}\right\} = -\frac{f'(x)}{\{f(x)\}^{2}}. \] \begin{Result} \Item{(5)} If $\phi(x) = \dfrac{f(x)}{F(x)}$, then $\phi(x)$~has a derivative \[ \phi'(x) = \frac{f'(x)F(x) - f(x)F'(x)}{\{F(x)\}^{2}}. \] \end{Result} This follows at once from (3)~and~(4). \begin{Result} \Item{(6)} If $\phi(x) = F\{f(x)\}$, then $\phi(x)$~has a derivative \[ \phi'(x) = F'\{f(x)\} f'(x). \] \end{Result} For let \[ f(x) = y,\quad f(x + h) = y + k. \] Then $k \to 0$ as $h \to 0$, and $k/h \to f'(x)$. And \begin{align*} %[** TN: Not strictly correct: k can be zero infinitely often as h -> 0] \phi'(x) & = \lim \frac{F\{f(x + h)\} - F\{f(x)\}}{h}\\ & = \lim \left\{\frac{F(y + k) - F(y)}{k}\right\} × \lim \left(\frac{k}{h}\right)\\ & = F'(y)f'(x). \end{align*} This theorem includes (2)~and~(4) as special cases, as we see on taking $F(x) = kx$ or $F(x) = 1/x$. 
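Thus, for example, if $\phi(x) = \sin(x^{2})$ we may take $f(x) = x^{2}$ and $F(y) = \sin y$; and then, by \Ex{xxxix}.~3~and~4,
\[
\phi'(x) = F'\{f(x)\}\, f'(x) = 2x\cos(x^{2}).
\]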
Another interesting special case is that in which $f(x) = ax + b$: the theorem then shows that the derivative of~$F(ax + b)$ is~$aF'(ax + b)$. Our last theorem requires a few words of preliminary explanation. Suppose that $x = \psi(y)$, where $\psi(y)$~is continuous and steadily increasing (or decreasing), in the stricter sense of \SecNo[§]{95}, in a certain interval of values of~$y$. Then we may write $y = \phi(x)$, where $\phi$~is the `inverse' function (\SecNo[§]{109}) of~$\psi$. \begin{Result} \Item{(7)} If $y = \phi(x)$, where $\phi$~is the inverse function of~$\psi$, so that $x = \psi(y)$, and $\psi(y)$~has a derivative~$\psi'(y)$ which is not equal to zero, then $\phi(x)$~has a derivative \[ \phi'(x) = \frac{1}{\psi'(y)}. \] \end{Result} \PageSep{205} For if $\phi(x + h) = y + k$, then $k \to 0$ as $h \to 0$, and \[ \phi'(x) = \lim_{h \to 0} \frac{\phi(x + h) - \phi(x)}{(x + h) - x} = \lim_{k \to 0} \frac{(y + k) - y}{\psi(y + k) - \psi(y)} = \frac{1}{\psi'(y)}. \] The last function may now be expressed in terms of~$x$ by means of the relation $y = \phi(x)$, so that $\phi'(x)$~is the reciprocal of~$\psi'\{\phi(x)\}$. This theorem enables us to differentiate any function if we know the derivative of the inverse function. \Paragraph{114. Derivatives of complex functions.} So far we have supposed that $y = \phi(x)$ is a purely \emph{real} function of~$x$. If $y$~is a complex function $\phi(x) + i\psi(x)$, then we define the derivative of~$y$ as being $\phi'(x) + i\psi'(x)$. The reader will have no difficulty in seeing that Theorems~(1)--(5) above retain their validity when $\phi(x)$~is complex. Theorems (6)~and~(7) have also analogues for complex functions, but these depend upon the general notion of a `function of a complex variable', a notion which we have encountered at present only in a few particular cases. \Paragraph{115. 
The notation of the differential calculus.} We have already explained that what we call a \emph{derivative} is often called a \emph{differential coefficient}. Not only a different name but a different notation is often used; the derivative of the function $y = \phi(x)$ is often denoted by one or other of the expressions \[ D_{x}y,\quad \frac{dy}{dx}. \] Of these the last is the most usual and convenient: the reader must however be careful to remember that $dy/dx$ does not mean `a certain number~$dy$ divided by another number~$dx$': it means `the result of a certain operation~$D_{x}$ or~$d/dx$ applied to $y = \phi(x)$', the operation being that of forming the quotient $\{\phi(x + h) - \phi(x)\}/h$ and making $h \to 0$. \begin{Remark} Of course a notation at first sight so peculiar would not have been adopted without some reason, and the reason was as follows. The denominator~$h$ of the fraction $\{\phi(x + h) - \phi(x)\}/h$ is the difference of the values $x+h$,~$x$ of the independent variable~$x$; similarly the numerator is the difference of the corresponding values $\phi(x + h)$,~$\phi(x)$ of the dependent variable~$y$. These differences may be called the \emph{increments} of $x$~and~$y$ respectively, and denoted by $\delta x$~and~$\delta y$. Then the fraction is~$\delta y/\delta x$, and it is for many purposes convenient to denote the limit of the fraction, which is the same thing as~$\phi'(x)$, \PageSep{206} by~$dy/dx$. But this notation must for the present be regarded as purely symbolical. The $dy$~and~$dx$ which occur in it cannot be separated, and standing by themselves they would mean nothing: in particular $dy$~and~$dx$ do not mean $\lim\delta y$ and~$\lim\delta x$, these limits being simply equal to zero. 
The reader will have to become familiar with this notation, but so long as it puzzles him he will be wise to avoid it by writing the differential coefficient in the form~$D_{x}y$, or using the notation $\phi(x)$,~$\phi'(x)$, as we have done in the preceding sections of this chapter. In \Ref{Ch.}{VII}, however, we shall show how it is possible to define the symbols $dx$~and~$dy$ in such a way that they have an independent meaning and that the derivative~$dy/dx$ is actually their quotient. \end{Remark} The theorems of \SecNo[§]{113} may of course at once be translated into this notation. They may be stated as follows: %[** TN: The conclusions below are aligned on their equals signs in the orig.] \begin{Result} \Item{(1)} if $y = y_{1} + y_{2}$, then \[ \frac{dy}{dx} = \frac{dy_{1}}{dx} + \frac{dy_{2}}{dx}; \] \Item{(2)} if $y = ky_{1}$, then \[ \frac{dy}{dx} = k\frac{dy_{1}}{dx}; \] \Item{(3)} if $y = y_{1}y_{2}$, then \[ \frac{dy}{dx} = y_{1}\frac{dy_{2}}{dx} + y_{2}\frac{dy_{1}}{dx}; \] \Item{(4)} if $y = \dfrac{1}{y_{1}}$, then \[ \frac{dy}{dx} = -\frac{1}{y_{1}^{2}}\, \frac{dy_{1}}{dx}; \] \Item{(5)} if $y = \dfrac{y_{1}}{y_{2}}$, then \[ \frac{dy}{dx} = \biggl(y_{2}\frac{dy_{1}}{dx} - y_{1}\frac{dy_{2}}{dx}\biggr) \bigg/ y_{2}^{2}; \] \Item{(6)} if $y$~is a function of~$x$, and $z$~a function of~$y$, then \[ \frac{dz}{dx} = \frac{dz}{dy}\, \frac{dy}{dx}; \] \CenterLine{\Item{(7)}}{$\dfrac{dy}{dx} = 1 \bigg/ \biggl(\dfrac{dx}{dy}\biggr)$.} \end{Result} \begin{Examples}{XL.} \Item{1.} If $y = y_{1}y_{2}y_{3}$ then \[ \frac{dy}{dx} = y_{2}y_{3}\, \frac{dy_{1}}{dx} + y_{3}y_{1}\, \frac{dy_{2}}{dx} + y_{1}y_{2}\, \frac{dy_{3}}{dx}, \] and if $y = y_{1}y_{2} \dots y_{n}$ then \[ \frac{dy}{dx} = \sum_{r=1}^{n} y_{1}y_{2} \dots y_{r-1}y_{r+1} \dots y_{n}\, \frac{dy_{r}}{dx}. \] In particular, if $y = z^{n}$, then $dy/dx = nz^{n-1}(dz/dx)$; and if $y = x^{n}$, then $dy/dx = nx^{n-1}$, as was proved otherwise in \Ex{xxxix}.~3. 
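[The first of these formulae, for instance, follows from two applications of \SecNo[§]{113},~(3): writing $y = y_{1}(y_{2}y_{3})$, we have
\[
\frac{dy}{dx} = y_{1}\, \frac{d(y_{2}y_{3})}{dx} + y_{2}y_{3}\, \frac{dy_{1}}{dx} = y_{1}\biggl(y_{2}\, \frac{dy_{3}}{dx} + y_{3}\, \frac{dy_{2}}{dx}\biggr) + y_{2}y_{3}\, \frac{dy_{1}}{dx};
\]
and the general formula may be established in the same way by induction.]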
\PageSep{207}
\Item{2.} If $y = y_{1}y_{2}\dots y_{n}$ then
\[ \frac{1}{y}\, \frac{dy}{dx} = \frac{1}{y_{1}}\, \frac{dy_{1}}{dx} + \frac{1}{y_{2}}\, \frac{dy_{2}}{dx} + \dots + \frac{1}{y_{n}}\, \frac{dy_{n}}{dx}. \]
In particular, if $y = z^{n}$, then $\dfrac{1}{y}\, \dfrac{dy}{dx} = \dfrac{n}{z}\, \dfrac{dz}{dx}$.
\end{Examples}
\Paragraph{116. Standard forms.} We shall now investigate more systematically the forms of the derivatives of a few of the simplest types of functions.
\Topic{\Item{A.} Polynomials.} If $\phi(x) = a_{0}x^{n} + a_{1}x^{n-1} + \dots + a_{n}$, then
\[ \phi'(x) = na_{0}x^{n-1} + (n - 1)a_{1}x^{n-2} + \dots + a_{n-1}. \]
It is sometimes more convenient to use for the standard form of a polynomial of degree~$n$ in~$x$ what is known as the \emph{binomial form}, viz.
\[ a_{0}x^{n} + \binom{n}{1} a_{1}x^{n-1} + \binom{n}{2} a_{2}x^{n-2} + \dots + a_{n}. \]
In this case
\[ \phi'(x) = n \left\{ a_{0}x^{n-1} + \binom{n - 1}{1} a_{1}x^{n-2} + \binom{n - 1}{2} a_{2}x^{n-3} + \dots + a_{n-1} \right\}. \]
The binomial form of~$\phi(x)$ is often written symbolically as
\[ (a_{0}, a_{1}, \dots, a_{n} \btw x, 1)^{n}; \]
and then
\[ \phi'(x) = n(a_{0}, a_{1}, \dots, a_{n-1} \btw x, 1)^{n-1}. \]
We shall see later that $\phi(x)$~can always be expressed as the product of $n$~factors in the form
\[ \phi(x) = a_{0}(x - \alpha_{1})(x - \alpha_{2}) \dots (x - \alpha_{n}), \]
where the~$\alpha$'s are real or complex numbers. Then
\[ \phi'(x) = a_{0}\tsum (x - \alpha_{2})(x - \alpha_{3}) \dots (x - \alpha_{n}), \]
the notation implying that we form all possible products of $n - 1$ factors, and add them all together. This form of the result holds even if several of the numbers~$\alpha$ are equal; but of course then some of the terms on the right-hand side are repeated.
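The case $n = 2$ of the product form may be verified directly (a routine check, added by way of illustration):

```latex
% Editorial verification of the product form of the derivative for n = 2.
\[
  \phi(x) = a_{0}(x - \alpha_{1})(x - \alpha_{2})
          = a_{0}\{x^{2} - (\alpha_{1} + \alpha_{2})x + \alpha_{1}\alpha_{2}\},
\]
so that
\[
  \phi'(x) = a_{0}\{2x - (\alpha_{1} + \alpha_{2})\}
           = a_{0}\{(x - \alpha_{2}) + (x - \alpha_{1})\},
\]
the sum of the two possible products of one factor, as the general formula asserts.
```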
The reader will easily verify that if \[ \phi(x) = a_{0}(x - \alpha_{1})^{m_{1}} (x - \alpha_{2})^{m_{2}}\dots (x - \alpha_{\nu})^{m_{\nu}}, \] then \[ %[** TN: Implicit indexing; notation matches the original.] \phi'(x) = a_{0} \tsum m_{1}(x - \alpha_{1})^{m_{1}-1} (x - \alpha_{2})^{m_{2}}\dots (x - \alpha_{\nu})^{m_{\nu}}. \] \PageSep{208} \begin{Examples}{XLI.} \Item{1.} Show that if $\phi(x)$~is a polynomial then $\phi'(x)$~is the coefficient of~$h$ in the expansion of~$\phi(x + h)$ in powers of~$h$. \Item{2.} If $\phi(x)$~is divisible by~$(x - \alpha)^{2}$, then $\phi'(x)$~is divisible by~$x - \alpha$: and generally, if $\phi(x)$~is divisible by~$(x - \alpha)^{m}$, then $\phi'(x)$~is divisible by~$(x - \alpha)^{m-1}$. \Item{3.} Conversely, if $\phi(x)$ and~$\phi'(x)$ are \emph{both} divisible by~$x - \alpha$, then $\phi(x)$~is divisible by~$(x - \alpha)^{2}$; and if $\phi(x)$~is divisible by~$x - \alpha$ and $\phi'(x)$ by~$(x - \alpha)^{m-1}$, then $\phi(x)$~is divisible by~$(x - \alpha)^{m}$. \Item{4.} Show how to determine as completely as possible the multiple roots of $P(x) = 0$, where $P(x)$~is a polynomial, with their degrees of multiplicity, by means of the elementary algebraical operations. [If $H_{1}$~is the highest common factor of $P$~and~$P'$, $H_{2}$~the highest common factor of $H_{1}$ and~$P''$, $H_{3}$ that of $H_{2}$ and~$P'''$, and so on, then the roots of $H_{1}H_{3}/H_{2}^{2} = 0$ are the \emph{double} roots of $P = 0$, the roots of $H_{2}H_{4}/H_{3}^{2} = 0$ the \emph{treble} roots, and so on. But it may not be possible to complete the solution of $H_{1}H_{3}/H_{2}^{2} = 0$, $H_{2}H_{4}/H_{3}^{2} = 0$,~\dots. Thus if $P(x) = (x - 1)^{3}(x^{5} - x - 7)^{2}$ then $H_{1}H_{3}/H_{2}^{2} = x^{5} - x - 7$ and $H_{2}H_{4}/H_{3}^{2} = x - 1$; and we cannot solve the first equation.] 
\Item{5.} Find all the roots, with their degrees of multiplicity, of \[ x^{4} + 3x^{3} - 3x^{2} - 11x - 6 = 0,\quad x^{6} + 2x^{5} - 8x^{4} - 14x^{3} + 11x^{2} + 28x + 12 = 0. \] \Item{6.} If $ax^{2} + 2bx + c$ has a double root, \ie\ is of the form $a(x - \alpha)^{2}$, then $2(ax + b)$~must be divisible by~$x - \alpha$, so that $\alpha = -b/a$. This value of~$x$ must satisfy $ax^{2} + 2bx + c = 0$. Verify that the condition thus arrived at is $ac - b^{2} = 0$. \Item{7.} The equation $1/(x - a) + 1/(x - b) + 1/(x - c) = 0$ can have a pair of equal roots only if $a = b = c$. \MathTrip{1905.} \Item{8.} Show that \[ ax^{3} + 3bx^{2} + 3cx + d = 0 \] has a double root if $G^{2} + 4H^{3} = 0$, where $H = ac - b^{2}$, $G = a^{2}d - 3abc + 2b^{3}$. [Put $ax + b = y$, when the equation reduces to $y^{3} + 3Hy + G = 0$. This must have a root in common with $y^{2} + H = 0$.] \Item{9.} The reader may verify that if $\alpha$,~$\beta$, $\gamma$,~$\delta$ are the roots of \[ ax^{4} + 4bx^{3} + 6cx^{2} + 4dx + e = 0, \] then the equation whose roots are \[ \tfrac{1}{12}a \{ (\alpha - \beta)(\gamma - \delta) - (\gamma - \alpha)(\beta - \delta) \}, \] and two similar expressions formed by permuting $\alpha$,~$\beta$,~$\gamma$ cyclically, is \[ 4\theta^{3} - g_{2}\theta - g_{3} = 0, \] where \[ g_{2} = ae - 4bd + 3c^{2},\quad g_{3} = ace + 2bcd - ad^{2} - eb^{2} - c^{3}. \] It is clear that if two of $\alpha$,~$\beta$, $\gamma$,~$\delta$ are equal then two of the roots of this cubic will be equal. Using the result of Ex.~8 we deduce that $g_{2}^{3} - 27g_{3}^{2} = 0$. \PageSep{209} \begin{Result} \Item{10.} \Topic{Rolle's Theorem for polynomials.} If $\phi(x)$~is any polynomial, then between any pair of roots of $\phi(x) = 0$ lies a root of $\phi'(x) = 0$. \end{Result} A general proof of this theorem, applying not only to polynomials but to other classes of functions, will be given later. The following is an algebraical proof valid for polynomials only. 
We suppose that $\alpha$,~$\beta$ are two successive roots, repeated respectively $m$~and~$n$ times, so that \[ \phi(x) = (x - \alpha)^{m} (x - \beta)^{n} \theta(x), \] where $\theta(x)$~is a polynomial which has the same sign, say the positive sign, for $\alpha \leq x \leq \beta$. Then {\footnotesize\begin{align*} \phi'(x) &= (x - \alpha)^{m} (x - \beta)^{n} \theta'(x) + \{m(x - \alpha)^{m-1} (x - \beta)^{n} + n(x - \alpha)^{m} (x - \beta)^{n-1}\} \theta(x)\\ &= (x - \alpha)^{m-1} (x - \beta)^{n-1} [(x - \alpha) (x - \beta) \theta'(x) + \{m(x - \beta) + n(x - \alpha)\} \theta(x)]\\ &= (x - \alpha)^{m-1} (x - \beta)^{n-1} F(x), \end{align*}}% say. Now $F(\alpha) = m(\alpha - \beta) \theta(\alpha)$ and $F(\beta) = n(\beta - \alpha) \theta(\beta)$, which have opposite signs. Hence $F(x)$, and so~$\phi'(x)$, vanishes for some value of~$x$ between $\alpha$~and~$\beta$\Add{.} \end{Examples} \Paragraph{117.} \Topic{\Item{B.} Rational Functions.} If \[ R(x) = \frac{P(x)}{Q(x)}, \] where $P$~and~$Q$ are polynomials, it follows at once from \SecNo[§]{113},~(5) that \[ R'(x) = \frac{P'(x)Q(x) - P(x)Q'(x)}{\{Q(x)\}^{2}}, \] and this formula enables us to write down the derivative of any rational function. The form in which we obtain it, however, may or may not be the simplest possible. It will be the simplest possible if $Q(x)$ and~$Q'(x)$ have no common factor, \ie\ if $Q(x)$~has no repeated factor. But if $Q(x)$~has a repeated factor then the expression which we obtain for~$R'(x)$ will be capable of further reduction. It is very often convenient, in differentiating a rational function, to employ the method of partial fractions. We shall suppose that~$Q(x)$, as in \SecNo[§]{116}, is expressed in the form \[ a_{0}(x - \alpha_{1})^{m_{1}} (x - \alpha_{2})^{m_{2}}\dots (x - \alpha_{\nu})^{m_{\nu}}. 
\] Then it is proved in treatises on Algebra\footnote {See, \eg, Chrystal's \textit{Algebra}, vol.~i, pp.~151~\textit{et~seq.}} that $R(x)$~can be expressed in the form \begin{align*} \Pi(x) &+ \frac{A_{1, 1}}{x - \alpha_{1}} + \frac{A_{1, 2}}{(x - \alpha_{1})^{2}} + \dots + \frac{A_{1, m_{1}}}{(x - \alpha_{1})^{m_{1}}}\\ &+ \frac{A_{2, 1}}{x - \alpha_{2}} + \frac{A_{2, 2}}{(x - \alpha_{2})^{2}} + \dots + \frac{A_{2, m_{2}}}{(x - \alpha_{2})^{m_{2}}} + \dots, \end{align*} \PageSep{210} where $\Pi(x)$~is a polynomial; \ie\ as the sum of a polynomial and the sum of a number of terms of the type \[ \frac{A}{(x - \alpha)^{p}}, \] where $\alpha$~is a root of $Q(x) = 0$. We know already how to find the derivative of the polynomial: and it follows at once from Theorem~(4) of \SecNo[§]{113}, or, if $\alpha$~is complex, from its extension indicated in \SecNo[§]{114}, that the derivative of the rational function last written is \[ -\frac{pA(x -\alpha)^{p-1}}{(x - \alpha)^{2p}} = -\frac{pA}{(x - \alpha)^{p+1}}. \] We are now able to write down the derivative of the general rational function~$R(x)$, in the form \[ \Pi'(x) - \frac{A_{1, 1}}{(x - \alpha_{1})^{2}} - \frac{2A_{1, 2}}{(x - \alpha_{1})^{3}} - \dots - \frac{A_{2, 1}}{(x - \alpha_{2})^{2}} - \frac{2A_{2, 2}}{(x - \alpha_{2})^{3}} - \dots. \] Incidentally we have proved that \emph{the derivative of~$x^{m}$ is~$mx^{m-1}$, for all integral values of~$m$ positive or negative}. The method explained in this section is particularly useful when we have to differentiate a rational function several times (see \Exs{xlv}). \begin{Examples}{XLII.} \Item{1.} Prove that \[ \frac{d}{dx}\left(\frac{x}{1 + x^{2}}\right) = \frac{1 - x^{2}}{(1 + x^{2})^{2}},\quad \frac{d}{dx}\left(\frac{1 - x^{2}}{1 + x^{2}}\right) = -\frac{4x}{(1 + x^{2})^{2}}. \] \Item{2.} Prove that \[ \frac{d}{dx}\left(\frac{ax^{2} + 2bx + c}{Ax^{2} + 2Bx + C}\right) = \frac{(ax + b) (Bx + C) - (bx + c) (Ax + B)}{(Ax^{2} + 2Bx + C)^{2}}. 
\] \Item{3.} If $Q$~has a factor $(x - \alpha)^{m}$ then the denominator of~$R'$ (when $R'$~is reduced to its lowest terms) is divisible by~$(x - \alpha)^{m+1}$ but by no higher power of~$x - \alpha$. \Item{4.} In no case can the denominator of~$R'$ have a \emph{simple} factor~$x - \alpha$. Hence no rational function (such as~$1/x$) whose denominator contains any simple factor can be the derivative of another rational function. \end{Examples} \Paragraph{118.} \Topic{\Item{C.} Algebraical Functions.} The results of the preceding sections, together with Theorem~(6) of \SecNo[§]{113}, enable us to obtain the derivative of any explicit algebraical function whatsoever. The most important such function is~$x^{m}$, where $m$~is a rational number. We have seen already (\SecNo[§]{117}) that the derivative of this \PageSep{211} function is~$mx^{m-1}$ when $m$~is an integer positive or negative; and we shall now prove that this result is true for all rational values of~$m$. Suppose that $y = x^{m} = x^{p/q}$, where $p$~and~$q$ are integers and $q$~positive; and let $z = x^{1/q}$, so that $x = z^{q}$ and $y = z^{p}$. Then \[ \frac{dy}{dx} = \biggl(\frac{dy}{dz}\biggr) \bigg/ \biggl(\frac{dx}{dz}\biggr) = \frac{p}{q} z^{p-q} = mx^{m-1}. \] This result may also be deduced as a corollary from \Ex{xxxvi}.~3. For, if $\phi(x) = x^{m}$, we have \begin{align*} \phi'(x) &= \lim_{h \to 0} \frac{(x + h)^{m} - x^{m}}{h}\\ &= \lim_{\xi \to x} \frac{\xi^{m} - x^{m}}{\xi - x} = mx^{m-1}. \end{align*} It is clear that the more general formula \[ \frac{d}{dx} (ax + b)^{m} = ma(ax + b)^{m-1} \] holds also for all rational values of~$m$. The differentiation of \emph{implicit} algebraical functions involves certain theoretical difficulties to which we shall return in \Ref{Ch.}{VII}\@. But there is no practical difficulty in the actual calculation of the derivative of such a function: the method to be adopted will be illustrated sufficiently by an example. 
Suppose that $y$~is given by the equation
\[ x^{3} + y^{3} - 3axy = 0. \]
Differentiating with respect to~$x$ we find
\[ x^{2} + y^{2} \frac{dy}{dx} - a\left(y + x \frac{dy}{dx}\right) = 0 \]
and so
\[ \frac{dy}{dx} = -\frac{x^{2} - ay}{y^{2} - ax}. \]
\begin{Examples}{XLIII.}
\Item{1.} Find the derivatives of
\[ \bigsqrtp{\frac{1 + x}{1 - x}},\quad \bigsqrtp{\frac{ax + b}{cx + d}},\quad \bigsqrtp{\frac{ax^{2} + 2bx + c}{Ax^{2} + 2Bx + C}},\quad (ax + b)^{m} (cx + d)^{n}. \]
\Item{2.} Prove that
\[ \frac{d}{dx}\left\{\frac{x}{\sqrtp{a^{2} + x^{2}}}\right\} = \frac{a^{2}}{(a^{2} + x^{2})^{3/2}},\quad \frac{d}{dx}\left\{\frac{x}{\sqrtp{a^{2} - x^{2}}}\right\} = \frac{a^{2}}{(a^{2} - x^{2})^{3/2}}. \]
\Item{3.} Find the differential coefficient of $y$ when
\[ \Itemp{(i)} ax^{2} + 2hxy + by^{2} + 2gx + 2fy + c = 0,\quad \Itemp{(ii)} x^{5} + y^{5} - 5ax^{2}y^{2} = 0. \]
\end{Examples}
\PageSep{212}
\Paragraph{119.} \Topic{\Item{D.} Transcendental Functions.} We have already proved (\Ex{xxxix}.~4) that
\[ D_{x} \sin x = \cos x, \quad D_{x} \cos x = -\sin x. \]
By means of Theorems (4)~and~(5) of \SecNo[§]{113}, the reader will easily verify that
\begin{alignat*}{2}
D_{x} \tan x &= \sec^{2} x, & D_{x} \cot x &= -\cosec^{2} x,\\
D_{x} \sec x &= \tan x \sec x, \quad & D_{x} \cosec x &= -\cot x\cosec x.
\end{alignat*}
And by means of Theorem~(7) we can determine the derivatives of the ordinary inverse trigonometrical functions. The reader should verify the following formulae:
\begin{alignat*}{2}
D_{x} \arcsin x &= \pm 1/\sqrtp{1 - x^{2}}, & D_{x} \arccos x &= \mp 1/\sqrtp{1 - x^{2}},\\
%
D_{x} \arctan x &= 1/(1 + x^{2}), & D_{x} \arccot x &= -1/(1 + x^{2}),\\
D_{x} \arcsec x &= \pm 1/\{x\sqrtp{x^{2} - 1}\}, \quad & D_{x} \arccosec x &= \mp 1/\{x\sqrtp{x^{2} - 1}\}.
\end{alignat*}
In the case of the inverse sine and cosecant the ambiguous sign is the same as that of~$\cos(\arcsin x)$, in the case of the inverse cosine and secant the same as that of~$\sin(\arccos x)$.
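The first of these formulae may be obtained from Theorem (7) thus (an editorial sketch of the verification left to the reader):

```latex
% Editorial sketch: derivative of arcsin x by means of Theorem (7).
If $y = \arcsin x$ then $x = \sin y$, and
\[
  \frac{dy}{dx} = 1 \bigg/ \biggl(\frac{dx}{dy}\biggr)
                = \frac{1}{\cos y}
                = \pm 1/\sqrtp{1 - x^{2}},
\]
the ambiguous sign being that of $\cos y = \cos(\arcsin x)$, as stated above.
```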
The more general formulae
\[ D_{x} \arcsin(x/a) = \pm 1/\sqrtp{a^{2} - x^{2}},\quad D_{x} \arctan(x/a) = a/(x^{2} + a^{2}), \]
which are also easily derived from Theorem~(7) of \SecNo[§]{113}, are also of considerable importance. In the first of them the ambiguous sign is the same as that of~$a\cos\{\arcsin(x/a)\}$, since
\[ a\sqrtb{1 - (x^{2}/a^{2})} = \pm\sqrtp{a^{2} - x^{2}} \]
according as $a$~is positive or negative.
Finally, by means of Theorem~(6) of \SecNo[§]{113}, we are enabled to differentiate composite functions involving symbols both of algebraical and trigonometrical functionality, and so to write down the derivative of any such function as occurs in the following examples.
\begin{Examples}{XLIV.\protect\footnotemark}
\Item{1.} Find the derivatives of\footnotetext
{In these examples $m$~is a rational number and $a$, $b$,~\dots, $\alpha$, $\beta$~\dots\ have such values that the functions which involve them are real.}
\begin{gather*}
\cos^{m} x, \quad \sin^{m} x, \quad \cos x^{m}, \quad \sin x^{m}, \quad \cos (\sin x), \quad \sin (\cos x),\\
\sqrtp{a^{2}\cos^{2} x + b^{2}\sin^{2} x}, \quad \frac{\cos x\sin x}{\sqrtp{a^{2}\cos^{2} x + b^{2}\sin^{2} x}},\\
x\arcsin x + \sqrtp{1 - x^{2}}, \quad (1 + x)\arctan\sqrt{x} - \sqrt{x}.
\end{gather*}
\PageSep{213}
\Item{2.} Verify by differentiation that $\arcsin x + \arccos x$ is constant for all values of~$x$ between $0$~and~$1$, and $\arctan x + \arccot x$ for all positive values of~$x$.
\Item{3.} Find the derivatives of
\[ \arcsin\sqrtp{1 - x^{2}},\quad \arcsin\{2x\sqrtp{1 - x^{2}}\},\quad \arctan \left(\frac{a + x}{1 - ax}\right). \]
How do you explain the simplicity of the results?
\Item{4.} Differentiate
\[ \frac{1}{\sqrtp{ac - b^{2}}} \arctan \frac{ax + b}{\sqrtp{ac - b^{2}}},\quad -\frac{1}{\sqrtp{-a}} \arcsin\frac{ax + b}{\sqrtp{b^{2} - ac}}.
\] \Item{5.} Show that each of the functions \[ 2\arcsin \bigsqrtp{\frac{x - \beta}{\alpha - \beta}},\quad 2\arctan \bigsqrtp{\frac{x - \beta}{\alpha - x}},\quad \arcsin \frac{2\sqrtb{(\alpha - x)(x - \beta)}}{\alpha - \beta} \] has the derivative \[ \frac{1}{\sqrtb{(\alpha - x)(x - \beta)}}. \] \Item{6.} Prove that \[ \frac{d}{d\theta}\left\{ \arccos \bigsqrtp{\frac{\cos 3\theta}{\cos^{3}\theta}} \right\} = \bigsqrtp{\frac{3}{\cos\theta \cos 3\theta}}. \] \MathTrip{1904.} \Item{7.} Show that \[ \frac{1}{\sqrtp{C(Ac - aC)}}\, \frac{d}{dx} \left[ \arccos \bigsqrtb{\frac{C(ax^{2} + c)}{c(Ax^{2} + C)}} \right] = \frac{1}{(Ax^{2} + C) \sqrtp{ax^{2} + c}}. \] \Item{8.} Each of the functions \[ \frac{1}{\sqrtp{a^{2} - b^{2}}} \arccos \left(\frac{a\cos x + b}{a + b\cos x}\right),\quad \frac{2}{\sqrtp{a^{2} - b^{2}}} \arctan \left\{\bigsqrtp{\frac{a - b}{a + b }} \tan \tfrac{1}{2}x\right\} \] has the derivative~$1/(a + b\cos x)$. \Item{9.} If $X = a + b\cos x + c\sin x$, and \[ y = \frac{1}{\sqrtp{a^{2} - b^{2} -c^{2}}} \arccos \frac{aX - a^{2} + b^{2} + c^{2}}{X \sqrtp{b^{2} + c^{2}}}, \] then $dy/dx = 1/X$. \Item{10.} Prove that the derivative of $F[f\{\phi(x)\}]$ is $F'[f\{\phi(x)\}]\, f'\{\phi(x)\}\phi'(x)$, and extend the result to still more complicated cases. \Item{11.} If $u$~and~$v$ are functions of~$x$, then \[ D_{x} \arctan(u/v) = (vD_{x}u - uD_{x}v)/(u^{2} + v^{2}). \] \Item{12.} The derivative of $y = (\tan x + \sec x)^{m}$ is $my\sec x$. \Item{13.} The derivative of $y = \cos x + i\sin x$ is~$iy$. \Item{14.} Differentiate $x\cos x$, $(\sin x)/x$. Show that the values of~$x$ for which the tangents to the curves $y = x\cos x$, $y = (\sin x)/x$ are parallel to the axis of~$x$ are roots of $\cot x = x$, $\tan x = x$ respectively. 
\PageSep{214} \Item{15.} It is easy to see (cf.\ \Ex{xvii}.~5) that the equation $\sin x = ax$, where $a$~is positive, has no real roots except $x = 0$ if $a \geq 1$, and if $a < 1$ a finite number of roots which increases as $a$~diminishes. Prove that the values of~$a$ for which the number of roots changes are the values of~$\cos\xi$, where $\xi$~is a positive root of the equation $\tan\xi = \xi$. [The values required are the values of~$a$ for which $y = ax$ touches $y = \sin x$.] \Item{16.} If $\phi(x) = x^{2}\sin(1/x)$ when $x \neq 0$, and $\phi(0) = 0$, then \[ \phi'(x) = 2x\sin(1/x) - \cos(1/x) \] when $x\neq 0$, and $\phi'(0) = 0$. And $\phi'(x)$~is discontinuous for $x = 0$ (cf.\ \SecNo[§]{111},~(2)). \Item{17.} Find the equations of the tangent and normal at the point $(x_{0}, y_{0})$ of the circle $x^{2} + y^{2} = a^{2}$. [Here $y = \sqrtp{a^{2} - x^{2}}$, $dy/dx = -x/\sqrtp{a^{2} - x^{2}}$, and the tangent is \[ y - y_{0} = (x - x_{0}) \left\{-x_{0}/\sqrtp{a^{2} - x_{0}^{2}}\right\}, \] which may be reduced to the form $xx_{0} + yy_{0} = a^{2}$. The normal is $xy_{0} - yx_{0} = 0$, which of course passes through the origin.] \Item{18.} Find the equations of the tangent and normal at any point of the ellipse $(x/a)^{2} + (y/b)^{2} = 1$ and the hyperbola $(x/a)^{2} - (y/b)^{2} = 1$. \Item{19.} The equations of the tangent and normal to the curve $x = \phi(t)$, $y = \psi(t)$, at the point whose parameter is~$t$, are \[ \frac{x - \phi(t)}{\phi'(t)} = \frac{y - \psi(t)}{\psi'(t)},\quad \{x - \phi(t)\} \phi'(t) + \{y - \psi(t)\} \psi'(t) = 0. \] \end{Examples} \Paragraph{120. Repeated differentiation.} We may form a new function~$\phi''(x)$ from~$\phi'(x)$ just as we formed~$\phi'(x)$ from~$\phi(x)$. This function is called the \emph{second derivative} or \emph{second differential coefficient} of~$\phi(x)$. 
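Thus, to take the simplest possible illustration (added editorially):

```latex
% Editorial illustration: repeated differentiation of x^3.
If $\phi(x) = x^{3}$ then
\[
  \phi'(x) = 3x^{2},\quad \phi''(x) = 6x,
\]
the second derivative being formed from $\phi'(x)$ by precisely the rule by which $\phi'(x)$ was formed from $\phi(x)$.
```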
The second derivative of $y = \phi(x)$ may also be written in any of the forms \[ D_{x}^{2}y,\quad \left(\frac{d}{dx}\right)^{2}y,\quad \frac{d^{2}y}{dx^{2}}. \] In exactly the same way we may define the \emph{$n$th~derivative or $n$th~differential coefficient of $y = \phi(x)$}, which may be written in any of the forms \[ \phi^{(n)}(x),\quad D_{x}^{n}y,\quad \left(\frac{d}{dx}\right)^{n}y,\quad \frac{d^{n}y}{dx^{n}}. \] But it is only in a few cases that it is easy to write down a general formula for the $n$th~differential coefficient of a given function. Some of these cases will be found in the examples which follow. \PageSep{215} \begin{Examples}{XLV.} \Item{1.} If $\phi(x) = x^{m}$ then \[ \phi^{(n)}(x) = m(m - 1) \dots (m - n + 1)x^{m-n}. \] This result enables us to write down the $n$th~derivative of any polynomial. \Item{2.} If $\phi(x) = (ax + b)^{m}$ then \[ \phi^{(n)}(x) = m(m - 1) \dots (m - n + 1)a^{n}(ax + b)^{m-n}. \] In these two examples $m$~may have any rational value. If $m$~is a positive integer, and $n > m$, then $\phi^{(n)}(x) = 0$. \Item{3.} The formula \[ \left(\frac{d}{dx}\right)^{n} \frac{A}{(x - \alpha)^{p}} = (-1)^{n} \frac{p(p + 1) \dots (p + n - 1)A}{(x - \alpha)^{p+n}} \] enables us to write down the $n$th~derivative of any rational function expressed in the standard form as a sum of partial fractions. \Item{4.} Prove that the $n$th~derivative of $1/(1 - x^{2})$ is \[ \tfrac{1}{2}(n!) \{(1 - x)^{-n-1} + (-1)^{n}(1 + x)^{-n-1}\}. \] \Item{5.} \Topic{Leibniz' Theorem.} If $y$~is a product~$uv$, and we can form the first $n$~derivatives of $u$ and~$v$, then we can form the $n$th~derivative of~$y$ by means of \emph{Leibniz' Theorem}, which gives the rule \[ (uv)_{n} = u_{n}v + \binom{n}{1}u_{n-1}v_{1} + \binom{n}{2}u_{n-2}v_{2} + \dots + \binom{n}{r}u_{n-r}v_{r} + \dots + uv_{n}, \] where suffixes indicate differentiations, so that $u_{n}$, for example, denotes the $n$th~derivative of~$u$. 
To prove the theorem we observe that \begin{align*} (uv)_{1} &= u_{1}v + uv_{1},\\ (uv)_{2} &= u_{2}v + 2u_{1}v_{1} + uv_{2}, \end{align*} and so on. It is obvious that by repeating this process we arrive at a formula of the type \[ (uv)_{n} = u_{n}v + a_{n, 1} u_{n-1} v_{1} + a_{n, 2} u_{n-2} v_{2} + \dots + a_{n, r} u_{n-r} v_{r} + \dots + uv_{n}. \] Let us assume that $a_{n, r} = \dbinom{n}{r}$ for $r = 1$, $2$,~\dots\Add{,} $n - 1$, and show that if this is so then $a_{n+1, r} = \dbinom{n + 1}{r}$ for $r = 1$, $2$,~\dots~$n$. It will then follow by the principle of mathematical induction that $a_{n, r} = \dbinom{n}{r}$ for all values of $n$ and~$r$ in question. When we form $(uv)_{n+1}$ by differentiating $(uv)_{n}$ it is clear that the coefficient of~$u_{n+1-r}v_{r}$ is \[ a_{n, r} + a_{n, r-1} = \binom{n}{r} + \binom{n}{r - 1} = \binom{n + 1}{r}. \] This establishes the theorem. \PageSep{216} \Item{6.} The $n$th~derivative of~$x^{m}f(x)$ is \begin{multline*} \frac{m!}{(m - n)!} x^{m-n} f(x) + n \frac{m!}{(m - n + 1)!} x^{m-n+1} f'(x)\\ + \frac{n(n - 1)}{1·2}\, \frac{m!}{(m - n + 2)!} x^{m-n+2} f''(x) + \dots, \end{multline*} the series being continued for $n + 1$~terms or until it terminates. \Item{7.} Prove that $D_{x}^{n}\cos x = \cos(x + \frac{1}{2}n\pi)$, $D_{x}^{n}\sin x = \sin(x + \frac{1}{2}n\pi)$\Add{.} \Item{8.} If $y = A\cos mx + B\sin mx$ then $D_{x}^{2} y + m^{2} y = 0$. And if \[ y = A\cos mx + B\sin mx + P_{n}(x), \] where $P_{n}(x)$~is a polynomial of degree~$n$, then $D_{x}^{n+3} y + m^{2} D_{x}^{n+1} y = 0$. \Item{9.} If $x^{2} D_{x}^{2}y + x D_{x} y + y = 0$ then \[ x^{2} D_{x}^{n+2} y + (2n + 1)x D_{x}^{n+1} y + (n^{2} + 1) D_{x}^{n} y = 0. \] [Differentiate $n$~times by \DPchg{Leibnitz'}{Leibniz'} Theorem.] \Item{10.} If $U_{n}$~denotes the $n$th~derivative of $(Lx + M)/(x^{2} - 2Bx + C)$, then \[ \frac{x^{2} - 2Bx + C}{(n + 1)(n + 2)} U_{n+2} + \frac{2(x - B)}{n + 1} U_{n+1} + U_{n} = 0. 
\]
\MathTrip{1900.}
[First obtain the equation when $n = 0$; then differentiate $n$~times by \DPchg{Leibnitz'}{Leibniz'} Theorem.]
\Item{11.} \Topic{The $n$th~derivatives of $a/(a^{2} + x^{2})$ and $x/(a^{2} + x^{2})$.} Since
\[ \frac{a}{a^{2} + x^{2}} = \frac{1}{2i} \left(\frac{1}{x - ai} - \frac{1}{x + ai}\right), \quad \frac{x}{a^{2} + x^{2}} = \frac{1}{2} \left(\frac{1}{x - ai} + \frac{1}{x + ai}\right), \]
we have
\[ D_{x}^{n} \left(\frac{a}{a^{2} + x^{2}}\right) = \frac{(-1)^{n} n!}{2i} \left\{ \frac{1}{(x - ai)^{n+1}} - \frac{1}{(x + ai)^{n+1}} \right\}, \]
{\Loosen and a similar formula for $D_{x}^{n}\{x/(a^{2} + x^{2})\}$. If $\rho = \sqrtp{x^{2} + a^{2}}$, and $\theta$~is the numerically smallest angle whose cosine and sine are $x/\rho$~and~$a/\rho$, then $x + ai = \rho\Cis\theta$ and $x - ai = \rho\Cis(-\theta )$, and so}
\begin{align*}
D_{x}^{n} \{a/(a^{2} + x^{2})\} &= \{(-1)^{n} n!/2i\} \rho^{-n-1} [\Cis \{(n + 1)\theta\} - \Cis \{-(n + 1)\theta\}]\\
&= (-1)^{n} n!\, (x^{2} + a^{2})^{-(n+1)/2} \sin \{(n + 1) \arctan(a/x)\}.
\end{align*}
Similarly
\[ D_{x}^{n} \{x/(a^{2} + x^{2})\} = (-1)^{n} n!\, (x^{2} + a^{2})^{-(n+1)/2} \cos \{(n + 1) \arctan (a/x)\}. \]
\Item{12.} Prove that
\begin{align*}
D_{x}^{n} \{(\cos x)/x\} &= \{P_{n} \cos(x + \tfrac{1}{2}n\pi) + Q_{n} \sin(x + \tfrac{1}{2}n\pi)\}/x^{n+1},\\
D_{x}^{n} \{(\sin x)/x\} &= \{P_{n} \sin(x + \tfrac{1}{2}n\pi) - Q_{n} \cos(x + \tfrac{1}{2}n\pi)\}/x^{n+1},
\end{align*}
where $P_{n}$ and~$Q_{n}$ are polynomials in~$x$ of degree $n$~and~$n-1$ respectively.
\Item{13.} Establish the formulae
\begin{gather*}
%[** TN: Set on one line in the original]
\frac{dx}{dy} = 1 \bigg/\biggl(\frac{dy}{dx}\biggr),\quad \frac{d^{2} x}{dy^{2}} = -\frac{d^{2} y}{dx^{2}} \bigg/ \biggl(\frac{dy}{dx}\biggr)^{3},\\
\frac{d^{3} x}{dy^{3}} = -\biggl\{\frac{d^{3} y}{dx^{3}}\, \frac{dy}{dx} - 3\biggl(\frac{d^{2} y}{dx^{2}}\biggr)^{2}\biggr\} \bigg/ \biggl(\frac{dy}{dx}\biggr)^{5}.
\end{gather*} \PageSep{217} \Item{14.} If $yz = 1$ and $y_{r} = (1/r!) D_{x}^{r}y$, $z_{s} = (1/s!) D_{x}^{s}z$, then \[ \frac{1}{z^{3}} \begin{vmatrix} z & z_{1}& z_{2}\\ z_{1}& z_{2}& z_{3}\\ z_{2}& z_{3}& z_{4} \end{vmatrix} = \frac{1}{y^{2}} \begin{vmatrix} y_{2}& y_{3}\\ y_{3}& y_{4} \end{vmatrix}. \] \MathTrip{1905.} \Item{15.} If \[ W(y, z, u) = \begin{vmatrix} y & z & u\\ y' & z' & u'\\ y''& z''& u'' \end{vmatrix}, \] dashes denoting differentiations with respect to~$x$, then \[ W(y, z, u) = y^{3}\, W\left(1, \frac{z}{y}, \frac{u}{y}\right). \] \Item{16.} If \[ ax^{2} + 2hxy + by^{2} + 2gx + 2fy + c = 0, \] then \[ dy/dx = -(ax + hy + g)/(hx + by + f) \] and \[ d^{2}y/dx^{2} = (abc + 2fgh - af^{2} - bg^{2} - ch^{2})/(hx + by + f)^{3}. \] \end{Examples} \Paragraph{121. Some general theorems concerning derived functions.} In all that follows we suppose that $\phi(x)$~is a function of~$x$ which has a derivative~$\phi'(x)$ for all values of~$x$ in question. This assumption of course involves the continuity of~$\phi(x)$. \begin{ParTheorem}{The meaning of the sign of~$\phi'(x)$. \normalfont\textsc{Theorem~A\@.}} If $\phi'(x_{0}) > 0$ then $\phi(x) < \phi(x_{0})$ for all values of~$x$ less than~$x_{0}$ but sufficiently near to~$x_{0}$, and $\phi(x) > \phi(x_{0})$ for all values of~$x$ greater than~$x_{0}$ but sufficiently near to~$x_{0}$. \end{ParTheorem} For $\{\phi(x_{0} + h) - \phi(x_{0})\}/h$ converges to a positive limit~$\phi'(x_{0})$ as $h \to 0$. This can only be the case if $\phi(x_{0} + h) - \phi(x_{0})$ and~$h$ have the same sign for sufficiently small values of~$h$, and this is precisely what the theorem states. Of course from a geometrical point of view the result is intuitive, the inequality $\phi'(x) > 0$ expressing the fact that the tangent to the curve $y = \phi(x)$ makes a positive acute angle with the axis of~$x$. The reader should formulate for himself the corresponding theorem for the case in which $\phi'(x) < 0$. 
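A trivial case shows the theorem at work (an editorial illustration, with $x_{0} = 1$ for definiteness):

```latex
% Editorial illustration of Theorem A.
If $\phi(x) = x^{2}$ then $\phi'(1) = 2 > 0$; and in fact
\[
  x^{2} < 1 \quad (0 < x < 1),\qquad x^{2} > 1 \quad (x > 1),
\]
so that $\phi(x) < \phi(1)$ just to the left of $x = 1$ and $\phi(x) > \phi(1)$ just to the right, as the theorem asserts.
```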
An immediate deduction from Theorem~A is the following important theorem, generally known as Rolle's Theorem. In view of the great importance of this theorem it may be well to repeat that its truth depends on the assumption of the existence of the derivative~$\phi'(x)$ for all values of~$x$ in question. \begin{Theorem}[B\@.] If $\phi(a) = 0$ and $\phi(b) = 0$, then there must be at least one value of~$x$ which lies between $a$ and~$b$ and for which $\phi'(x) = 0$. \end{Theorem} There are two possibilities: the first is that $\phi(x)$~is equal to \PageSep{218} zero throughout the whole interval~$\DPmod{(a, b)}{[a, b]}$. In this case $\phi'(x)$~is also equal to zero throughout the interval. If on the other hand $\phi(x)$~is not always equal to zero, then there must be values of~$x$ for which $\phi(x)$~is positive or negative. Let us suppose, for example, that $\phi(x)$~is sometimes positive. Then, by Theorem~2 of \SecNo[§]{102}, there is a value~$\xi$ of~$x$, not equal to $a$~or~$b$, and such that $\phi(\xi)$~is at least as great as the value of~$\phi(x)$ at any other point in the interval. And $\phi'(\xi)$~must be equal to zero. For if it were positive then $\phi(x)$ would, by Theorem~A, be greater than~$\phi(\xi)$ for values of~$x$ greater than~$\xi$ but sufficiently near to~$\xi$, so that there would certainly be values of~$\phi(x)$ greater than~$\phi(\xi)$. Similarly we can show that $\phi'(\xi)$ cannot be negative. \begin{Cor}[1.] If $\phi(a) = \phi(b) = k$, then there must be a value of~$x$ between $a$~and~$b$ such that $\phi'(x) = 0$. \end{Cor} We have only to put $\phi(x) - k = \psi(x)$ and apply Theorem~B to~$\psi(x)$. \begin{Cor}[2.] If $\phi'(x) > 0$ for all values of~$x$ in a certain interval, then $\phi(x)$~is an increasing function of~$x$, in the stricter sense of \SecNo[§]{95}, throughout that interval. \end{Cor} Let $x_{1}$ and~$x_{2}$ be two values of~$x$ in the interval in question, and $x_{1} < x_{2}$. 
We have to show that $\phi(x_{1}) < \phi(x_{2})$. In the first place $\phi(x_{1})$~cannot be equal to~$\phi(x_{2})$; for, if this were so, there would, by Theorem~B, be a value of~$x$ between $x_{1}$ and~$x_{2}$ for which $\phi'(x) = 0$. Nor can $\phi(x_{1})$~be greater than~$\phi(x_{2})$. For, since $\phi'(x_{1})$~is positive, $\phi(x)$~is, by Theorem~A, greater than~$\phi(x_{1})$ when $x$~is greater than~$x_{1}$ and sufficiently near to~$x_{1}$. It follows that there is a value~$x_{3}$ of~$x$ between $x_{1}$ and~$x_{2}$ such that $\phi(x_{3}) = \phi(x_{1})$; and so, by Theorem~B, that there is a value of~$x$ between $x_{1}$ and~$x_{3}$ for which $\phi'(x) = 0$.
\begin{Cor}[3.] The conclusion of Cor.~\Inum{2} still holds if the interval~$\DPmod{(a, b)}{[a, b]}$ considered includes a finite number of exceptional values of~$x$ for which $\phi'(x)$~does not exist, or is not positive, provided $\phi(x)$~is continuous even for these exceptional values of~$x$. \end{Cor}
It is plainly sufficient to consider the case in which there is one exceptional value of~$x$ only, and that corresponding to an end of the interval, say to~$a$. If $a < x_{1} < x_{2} < b$, we can choose~$a + \EPSILON$ so that $a + \EPSILON < x_{1}$, and $\phi'(x) > 0$ throughout $\DPmod{(a + \EPSILON, b)}{[a + \EPSILON, b]}$, so that $\phi(x_{1}) < \phi(x_{2})$, by Cor.~2. All that remains is to prove that
\PageSep{219}
$\phi(a) < \phi(x_{1})$. Now $\phi(x_{1})$~decreases steadily, and in the stricter sense, as $x_{1}$~decreases towards~$a$, and so
\[ \phi(a) = \phi(a + 0) = \lim_{x_{1}\to a+0} \phi(x_{1}) < \phi(x_{1}). \]
\begin{Cor}[4.] If $\phi'(x) > 0$ throughout the interval~$\DPmod{(a, b)}{[a, b]}$, and $\phi(a) \geq 0$, then $\phi(x)$~is positive throughout the interval~$\DPmod{(a, b)}{[a, b]}$. \end{Cor}
\begin{Remark} The reader should compare the second of these corollaries very carefully with Theorem~A\@.
If, as in Theorem~A, we assume only that $\phi'(x)$~is positive \emph{at a single point $x = x_{0}$}, then we can prove that $\phi(x_{1}) < \phi(x_{2})$ when $x_{1}$~and~$x_{2}$ are sufficiently near to~$x_{0}$ and $x_{1} < x_{0} < x_{2}$. For $\phi(x_{1}) < \phi(x_{0})$ and $\phi(x_{2}) > \phi(x_{0})$, by Theorem~A\@. But this does not prove that there is any interval including~$x_{0}$ throughout which $\phi(x)$~is a steadily increasing function, for the assumption that $x_{1}$~and~$x_{2}$ lie on opposite sides of~$x_{0}$~is essential to our conclusion. We shall return to this point, and illustrate it by an actual example, in a moment~(\SecNo[§]{124}). \end{Remark} \Paragraph{122. Maxima and Minima.} We shall say that the value~$\phi(\xi)$ assumed by~$\phi(x)$ when $x = \xi$ is a \emph{maximum} if $\phi(\xi)$~is greater than any other value assumed by~$\phi(x)$ in the immediate neighbourhood of $x = \xi$, \ie\ if we can find an interval $\DPmod{(\xi - \EPSILON, \xi + \EPSILON)}{[\xi - \EPSILON, \xi + \EPSILON]}$ of values of~$x$ such that $\phi(\xi) > \phi(x)$ when $\xi - \EPSILON < x < \xi$ and when $\xi < x < \xi + \EPSILON$; and we define a \emph{minimum} in a similar manner. Thus in the figure the points~$A$ correspond to maxima, the points~$B$ to minima of %[Illustration: Fig. 39.] \Figure[\textwidth]{39}{p219} the function whose graph is there shown. It is to be observed that the fact that $A_{3}$~corresponds to a maximum and $B_{1}$~to a minimum is in no way inconsistent with the fact that the value of the function is greater at~$B_{1}$ than at~$A_{3}$. \begin{Theorem}[C\@.] A \Emph{necessary} condition for a maximum or minimum value of~$\phi(x)$ at $x = \xi$ is that $\phi'(\xi) = 0$.\footnote {A function which is continuous but has no derivative may have maxima and minima. We are of course assuming the existence of the derivative.} \end{Theorem} \PageSep{220} This follows at once from Theorem~A\@. 
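The necessary condition of Theorem C may be tested on a simple example (an editorial illustration, with $\phi(x) = x^{3} - 3x$ chosen for definiteness):

```latex
% Editorial illustration of Theorem C.
If $\phi(x) = x^{3} - 3x$ then
\[
  \phi'(x) = 3x^{2} - 3 = 3(x - 1)(x + 1),
\]
which vanishes only for $x = -1$ and $x = 1$. These are in fact the maximum
$\phi(-1) = 2$ and the minimum $\phi(1) = -2$ of the function; and no other
value of $x$ could possibly furnish a maximum or minimum, since the derivative
vanishes nowhere else.
```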
That the condition is not \emph{sufficient} is evident from a glance at the point~$C$ in the figure. Thus if $y = x^{3}$ then $\phi'(x) = 3x^{2}$, which vanishes when $x = 0$. But $x = 0$ does not give either a maximum or a minimum of~$x^{3}$, as is obvious from the form of the graph of~$x^{3}$ (\Fig{10}, \PageRef{p.}{45}). But \emph{there will certainly be a maximum at $x = \xi$ if $\phi'(\xi) = 0$, $\phi'(x) > 0$ for all values of~$x$ less than but near to~$\xi$, and $\phi'(x) < 0$ for all values of~$x$ greater than but near to~$\xi$}: and if the signs of these two inequalities are reversed there will certainly be a minimum. For then we can (by Cor.~3 of \SecNo[§]{121}) determine an interval $\DPmod{(\xi - \EPSILON, \xi)}{[\xi - \EPSILON, \xi]}$ throughout which $\phi(x)$~increases with~$x$, and an interval~$\DPmod{(\xi, \xi + \EPSILON)}{[\xi, \xi + \EPSILON]}$ throughout which it decreases as $x$~increases: and obviously this ensures that $\phi(\xi)$~shall be a maximum. This result may also be stated thus. If the sign of~$\phi'(x)$ changes at $x = \xi$ from positive to negative, then $x = \xi$ gives a maximum of~$\phi(x)$: and if the sign of~$\phi'(x)$ changes in the opposite sense, then $x = \xi$ gives a minimum. \Paragraph{123.} There is another way of stating the conditions for a maximum or minimum which is often useful. Let us assume that $\phi(x)$~has a second derivative~$\phi''(x)$: this of course does not follow from the existence of~$\phi'(x)$, any more than the existence of~$\phi'(x)$ follows from that of~$\phi(x)$. But in such cases as we are likely to meet with at present the condition is generally satisfied. \begin{Theorem}[D\@.] If $\phi'(\xi) = 0$ and $\phi''(\xi) \neq 0$, then $\phi(x)$~has a maximum or minimum at $x = \xi$, a maximum if $\phi''(\xi) < 0$, a minimum if $\phi''(\xi) > 0$. \end{Theorem} Suppose, \eg, that $\phi''(\xi) < 0$. 
Then, by Theorem~A, $\phi'(x)$~is negative when $x$~is less than~$\xi$ but sufficiently near to~$\xi$, and positive when $x$~is greater than~$\xi$ but sufficiently near to~$\xi$. Thus $x = \xi$ gives a maximum. \begin{Remark} \Paragraph{124.} In what has preceded (apart from the last paragraph) we have assumed simply that $\phi(x)$~has a derivative for all values of~$x$ in the interval under consideration. If this condition is not fulfilled the theorems cease to be true. Thus Theorem~B fails in the case of the function \[ y = 1 - \sqrtp{x^{2}}, \] \PageSep{221} where the square root is to be taken positive. The graph of this function is shown in \Fig{40}. Here $\phi(-1) = \phi(1) = 0$: but $\phi'(x)$, as is evident from the figure, is equal to~$1$ if $x$~is negative and to~$-1$ if $x$~is positive, and never %[Illustration: Fig. 40.] \Figure[2.75in]{40}{p221} vanishes. There is no derivative for $x = 0$, and no tangent to the graph at~$P$. And in this case $x = 0$ obviously gives a maximum of~$\phi(x)$, but $\phi'(0)$, as it does not exist, cannot be equal to zero, so that the test for a maximum fails. The bare existence of the derivative~$\phi'(x)$, however, is all that we have assumed. And there is one assumption in particular that we have not made, and that is that \emph{$\phi'(x)$~itself is a continuous function}. This raises a rather subtle but still a very interesting point. \emph{Can} a function~$\phi(x)$ have a derivative for all values of~$x$ which is not itself continuous? In other words can a curve have a tangent at every point, and yet the direction of the tangent not vary continuously? The reader, if he considers what the question means and tries to answer it in the light of common sense, will probably incline to the answer \emph{No}. It is, however, not difficult to see that this answer is wrong. Consider the function~$\phi(x)$ defined, when $x \neq 0$, by the equation \[ \phi(x) = x^{2}\sin(1/x); \] and suppose that $\phi(0) = 0$. 
Then $\phi(x)$~is continuous for all values of~$x$. If $x \neq 0$ then \[ \phi'(x) = 2x \sin(1/x) - \cos(1/x); \] while \[ \phi'(0) = \lim_{h \to 0} \frac{h^{2}\sin(1/h)}{h} = 0. \] Thus $\phi'(x)$~exists for all values of~$x$. But $\phi'(x)$~is discontinuous for $x = 0$; for $2x\sin(1/x)$~tends to~$0$ as $x \to 0$, and $\cos(1/x)$~oscillates between the limits of indetermination $-1$~and~$1$, so that $\phi'(x)$~oscillates between the same limits. What is practically the same example enables us also to illustrate the point referred to at the end of \SecNo[§]{121}. Let \[ \phi(x) = x^{2}\sin(1/x) + ax, \] where $0 < a < 1$, when $x \neq 0$, and $\phi(0) = 0$. Then $\phi'(0) = a > 0$. Thus the conditions of Theorem~A of \SecNo[§]{121} are satisfied. But if $x \neq 0$ then \[ \phi'(x) = 2x\sin(1/x) - \cos(1/x) + a, \] which oscillates between the limits of indetermination $a - 1$ and~$a + 1$ as $x \to 0$. As $a - 1 < 0$, we can find values of~$x$, as near to~$0$ as we like, for which $\phi'(x) < 0$; and it is therefore impossible to find any interval, including $x = 0$, throughout which $\phi(x)$~is a steadily increasing function of~$x$. \PageSep{222} It is, however, impossible that $\phi'(x)$~should have what was called in \Ref{Ch.}{V} (\Ex{xxxvii}.~18) a `simple' discontinuity; \eg\ that $\phi'(x) \to a$ when $x \to +0$, $\phi'(x) \to b$ when $x \to -0$, and $\phi'(0) = c$, unless $a = b = c$, in which case $\phi'(x)$~is continuous for $x = 0$. For a proof see \SecNo[§]{125}, \Ex{xlvii}.~3. \end{Remark} \begin{Examples}{XLVI.} \Item{1.} Verify Theorem~B when $\phi(x) = (x - a)^{m} (x - b)^{n}$ or $\phi(x) = (x - a)^{m} (x - b)^{n} (x - c)^{p}$, where $m$,~$n$,~$p$ are positive integers and $a < b < c$. [The first function vanishes for $x = a$ and $x = b$. And \[ \phi'(x) = (x - a)^{m-1} (x - b)^{n-1} \{(m + n)x - mb - na\} \] vanishes for $x = (mb + na)/(m + n)$, which lies between $a$~and~$b$. 
In the second case we have to verify that the quadratic equation \[ (m + n + p)x^{2} - \{m(b + c) + n(c + a) + p(a + b)\}x + mbc + nca + pab = 0 \] has roots between $a$~and~$b$ and between $b$~and~$c$.] \Item{2.} Show that the polynomials \[ 2x^{3} + 3x^{2} - 12x + 7,\quad 3x^{4} + 8x^{3} - 6x^{2} - 24x + 19 \] are positive when $x > 1$. \Item{3.} Show that $x - \sin x$ is an increasing function throughout any interval of values of~$x$, and that $\tan x - x$ increases as $x$~increases from $-\frac{1}{2}\pi$ to~$\frac{1}{2}\pi$. For what values of~$a$ is $ax - \sin x$ a steadily increasing or decreasing function of~$x$? \Item{4.} Show that $\tan x - x$ also increases from $x = \frac{1}{2}\pi$ to $x = \frac{3}{2}\pi$, from $x = \frac{3}{2}\pi$ to $x = \frac{5}{2}\pi$, and so on, and deduce that there is one and only one root of the equation $\tan x = x$ in each of these intervals (cf.\ \Ex{xvii}.~4). \Item{5.} {\Loosen Deduce from Ex.~3 that $\sin x - x < 0$ if $x > 0$, from this that $\cos x - 1 + \frac{1}{2}x^{2} > 0$, and from this that $\sin x - x + \frac{1}{6} x^{3} > 0$. And, generally, prove that if} \begin{align*} C_{2m} & = \cos x - 1 + \frac{x^{2}}{2!} - \dots - (-1)^{m} \frac{x^{2m}}{\DPchg{2m!}{(2m)!}},\\ S_{2m+1}& = \sin x - x + \frac{x^{3}}{3!} - \dots - (-1)^{m} \frac{x^{2m+1}}{(2m+1)!}, \end{align*} and $x> 0$, then $C_{2m}$~and~$S_{2m+1}$ are positive or negative according as $m$~is odd or even. \Item{6.} If $f(x)$~and~$f''(x)$ are continuous and have the same sign at every point of an interval~$\DPmod{(a, b)}{[a, b]}$, then this interval can include at most one root of either of the equations $f(x) = 0$, $f'(x) = 0$. \Item{7.} The functions $u$,~$v$ and their derivatives $u'$,~$v'$ are continuous throughout a certain interval of values of~$x$, and $uv' - u'v$ never vanishes at any point of the interval. Show that between any two roots of $u = 0$ lies one of $v = 0$, and conversely. Verify the theorem when $u = \cos x$, $v = \sin x$. 
[If $v$~does not vanish between two roots of $u = 0$, say $\alpha$~and~$\beta$, then the function~$u/v$ is continuous throughout the interval~$\DPmod{(\alpha, \beta)}{[\alpha, \beta]}$ and vanishes at its extremities. Hence $(u/v)' = (u'v - uv')/v^{2}$ must vanish between $\alpha$~and~$\beta$, which contradicts our hypothesis.] \PageSep{223} \Item{8.} Determine the maxima and minima (if any) of $(x - 1)^{2} (x + 2)$, $x^{3} - 3x$, $2x^{3} - 3x^{2} - 36x + 10$, $4x^{3} - 18x^{2} + 27x - 7$, $3x^{4} - 4x^{3} + 1$, $x^{5} - 15x^{3} + 3$. In each case sketch the form of the graph of the function. [Consider the last function, for example. Here $\phi'(x) = 5x^{2} (x^{2} - 9)$, which vanishes for $x = -3$, $x = 0$, and $x = 3$. It is easy to see that $x = -3$ gives a maximum and $x = 3$ a minimum, while $x = 0$ gives neither, as $\phi'(x)$~is negative on both sides of $x = 0$.] \Item{9.} Discuss the maxima and minima of the function $(x - a)^{m} (x - b)^{n}$, where $m$~and~$n$ are any positive integers, considering the different cases which occur according as $m$~and~$n$ are odd or even. Sketch the graph of the function. \Item{10.} Discuss similarly the function $(x - a) (x - b)^{2} (x - c)^{3}$, distinguishing the different forms of the graph which correspond to different hypotheses as to the relative magnitudes of $a$,~$b$,~$c$. \Item{11.} Show that $(ax + b)/(cx + d)$ has no maxima or minima, whatever values $a$,~$b$, $c$,~$d$ may have. Draw a graph of the function. \Item{12.} Discuss the maxima and minima of the function \[ y = (ax^{2} + 2bx + c)/(Ax^{2} + 2Bx + \DPtypo{c}{C}), \] when the denominator has complex roots. [We may suppose $a$~and~$A$ positive. The derivative vanishes if \[ (ax + b)(Bx + C) - (Ax + B)(bx + c) = 0. \Tag{(1)} \] This equation must have real roots. For if not the derivative would always have the same sign, and this is impossible, since $y$~is continuous for all values of~$x$, and $y \to a/A$ as $x \to +\infty$ or $x \to -\infty$. 
It is easy to verify that the curve cuts the line $y = a/A$ in one and only one point, and that it lies above this line for large positive values of~$x$, and below it for large negative values, or \textit{vice versa}, according as $b/a > B/A$ or $b/a < B/A$. Thus the algebraically greater root of~\Eq{(1)} gives a maximum if $b/a > B/A$, a minimum in the contrary case.] \Item{13.} The maximum and minimum values themselves are the values of~$\lambda$ for which $ax^{2} + 2bx + c - \lambda(Ax^{2} + 2Bx + C)$ is a perfect square. [This is the condition that $y = \lambda$ should touch the curve.] \Item{14.} In general the maxima and minima of $R(x) = P(x)/Q(x)$ are among the values of~$\lambda$ obtained by expressing the condition that $P(x) - \lambda Q(x) = 0$ should have a pair of equal roots. \Item{15.} If $Ax^{2} + 2Bx + C = 0$ has real roots then it is convenient to proceed as follows. We have \[ y - (a/A) = (\lambda x + \mu)/\{A(Ax^{2} + 2Bx + C)\}, \] where $\lambda = 2(bA - aB)$, $\mu = cA - aC$. Writing further $\xi$ for $\lambda x + \mu$ and $\eta$ for $(A/\lambda^{2})(Ay - a)$, we obtain an equation of the form \[ \eta = \xi/\{(\xi - p)(\xi - q)\}. \] \PageSep{224} This transformation from $(x, y)$ to $(\xi, \eta)$ amounts only to a shifting of the origin, keeping the axes parallel to themselves, a change of scale along each axis, and (if $\lambda < 0$) a reversal in direction of the axis of abscissae; and so a minimum of~$y$, considered as a function of~$x$, corresponds to a minimum of~$\eta$ considered as a function of~$\xi$, and \textit{vice versa}, and similarly for a maximum. The derivative of~$\eta$ with respect to~$\xi$ vanishes if \[ (\xi - p)(\xi - q) - \xi(\xi - p) - \xi(\xi - q) = 0, \] or if $\xi^{2} = pq$. Thus there are two roots of the derivative if $p$~and~$q$ have the same sign, none if they have opposite signs. In the latter case the form of % [** TN: Figure labels italicized in the original] the graph of~$\eta$ is as shown in \Fig{41a}.
%[Illustration: Fig. 41a.] %[Illustration: Fig. 41b.] %[Illustration: Fig. 41c.] \begin{figure}[hbt!] \centering \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p224a} \caption{Fig.~41a.} \label{fig:41a} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p224b} \caption{Fig.~41b.} \label{fig:41b} \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \centering \Graphic{1.5in}{p224c} \caption{Fig.~41c.} \label{fig:41c} \end{minipage} \end{figure} When $p$ and $q$ are positive the general form of the graph is as shown in \Fig{41b}, and it is easy to see that $\xi = \sqrtp{pq}$ gives a maximum and $\xi = -\sqrtp{pq}$ a minimum.\footnote {The maximum is $-1/(\sqrt{p} - \sqrt{q})^{2}$, the minimum $-1/(\sqrt{p} + \sqrt{q})^{2}$, of which the latter is the greater.} In the particular case in which $p = q$ the function is \[ \eta = \xi/(\xi - p)^{2}, \] and its graph is of the form shown in \Fig{41c}. The preceding discussion fails if $\lambda = 0$, \ie\ if $a/A = b/B$. But in this case we have \begin{align*} y - (a/A) &= \mu/\{A(Ax^{2} + 2Bx + C)\}\\ &= \mu/\{A^{2}(x - x_{1})(x - x_{2})\}, \end{align*} say, and $dy/dx = 0$ gives the single value $x = \frac{1}{2}(x_{1} + x_{2})$. On drawing a graph it becomes clear that this value gives a maximum or minimum according as $\mu$~is positive or negative. The graph shown in \Fig{42} corresponds to the former case. %[Illustration: Fig. 42.] \Figure[1.5in]{42}{p224d} {\Loosen[A full discussion of the general function $y = (ax^{2} + 2bx + c)/(Ax^{2} + 2Bx + C)$, by purely algebraical methods, will be found in Chrystal's \textit{Algebra}, vol.~i, pp.~464--7.]}
\PageSep{225} \Item{17.} Show that \[ y = \frac{x^{2} + 2x + c}{x^{2} + 4x + 3c} \] can assume any real value if $0 < c < 1$, and draw a graph of the function in this case. \MathTrip{1910.} \Item{18.} Determine the function of the form $(ax^{2} + 2bx + c)/(Ax^{2} + 2Bx + C)$ which has turning values (\ie\ maxima or minima) $2$~and~$3$ when $x = 1$ and $x = -1$ respectively, and has the value~$2.5$ when $x = 0$. \MathTrip{1908.} \Item{19.} The maximum and minimum of $(x + a) (x + b)/(x - a) (x - b)$, where $a$~and~$b$ are positive, are \[ -\left(\frac{\sqrt{a} + \sqrt{b}}{\sqrt{a} - \sqrt{b}}\right)^{2},\quad -\left(\frac{\sqrt{a} - \sqrt{b}}{\sqrt{a} + \sqrt{b}}\right)^{2}. \] \Item{20.} The maximum value of $(x - 1)^{2}/(x + 1)^{3}$ is~$\frac{2}{27}$. \Item{21.} Discuss the maxima and minima of \begin{gather*} x(x - 1)/(x^{2} + 3x + 3),\quad x^{4}/(x - 1)(x - 3)^{3},\\ (x - 1)^{2}(3x^{2} - 2x - 37)/(x + 5)^{2}(3x^{2} - 14x - 1). \end{gather*} \longpage \MathTrip{1898.} [If the last function be denoted by~$P(x)/Q(x)$, it will be found that \[ P'Q - PQ' = 72(x - 7)(x - 3)(x - 1)(x + 1)(x + 2)(x + 5).] \] \Item{22.} Find the maxima and minima of $a\cos x + b\sin x$. Verify the result by expressing the function in the form~$A\cos(x - a)$. \Item{23.} Find the maxima and minima of \[ a^{2}\cos^{2} x + b^{2}\sin^{2} x,\quad A\cos^{2}x + 2H\cos x\sin x + B\sin^{2} x. \] \Item{24.} Show that $\sin(x + a)/\sin(x + b)$ has no maxima or minima. Draw a graph of the function. \Item{25.} Show that the function \[ \frac{\sin^{2}x}{\sin(x + a)\sin(x + b)}\quad (0 < a < b < \pi) \] has an infinity of minima equal to~$0$ and of maxima equal to \[ -4\sin a\sin b/\sin^{2}(a - b). \] \MathTrip{1909.} \Item{26.} The least value of $a^{2}\sec^{2}x + b^{2}\cosec^{2}x$ is $(a + b)^{2}$. \Item{27.} Show that $\tan 3x \cot 2x$ cannot lie between $\frac{1}{9}$~and~$\frac{3}{2}$. 
\Item{28.} Show that, if the sum of the lengths of the hypothenuse\DPnote{** [sic], variant spelling} and another side of a right-angled triangle is given, then the area of the triangle is a maximum when the angle between those sides is~$60°$. \MathTrip{1909.} \Item{29.} A line is drawn through a fixed point~$(a, b)$ to meet the axes $OX$,~$OY$ in $P$~and~$Q$. Show that the minimum values of $PQ$, $OP + OQ$, and $OP·OQ$ are respectively $(a^{2/3} + b^{2/3})^{3/2}$, $(\sqrt{a} + \sqrt{b})^{2}$, and~$4ab$. \PageSep{226} \Item{30.} A tangent to an ellipse meets the axes in $P$~and~$Q$. Show that the least value of~$PQ$ is equal to the sum of the semiaxes of the ellipse. \Item{31.} Find the lengths and directions of the axes of the conic \[ ax^{2} + 2hxy + by^{2} = 1. \] [The length~$r$ of the \DPchg{semidiameter}{semi-diameter} which makes an angle~$\theta$ with the axis of~$x$ is given by \[ 1/r^{2} = a\cos^{2} \theta + 2h\cos\theta \sin\theta + b\sin^{2} \theta. \] The condition for a maximum or minimum value of~$r$ is $\tan 2\theta = 2h/(a - b)$. Eliminating~$\theta$ between these two equations we find \[ \{a - (1/r^{2})\} \{b - (1/r^{2})\} = h^{2}.] \] \Item{32.} The greatest value of~$x^{m}y^{n}$, where $x$~and~$y$ are positive and $x + y = k$, is \[ m^{m} n^{n} k^{m+n}/(m + n)^{m+n}. \] \Item{33.} {\Loosen The greatest value of $ax + by$, where $x$~and~$y$ are positive and $x^{2} + xy + y^{2} = 3\kappa^{2}$, is} \[ 2\kappa \sqrtp{a^{2} - ab + b^{2}}. \] [If $ax + by$ is a maximum then $a + b(dy/dx) = 0$. The relation between $x$~and~$y$ gives $(2x + y) + (x + 2y)(dy/dx) = 0$. Equate the two values of~$dy/dx$.] \Item{34.} If $\theta$ and~$\phi$ are acute angles connected by the relation $a \sec\theta + b \sec\phi = c$, where $a$,~$b$,~$c$ are positive, then $a\cos\theta + b\cos\phi$ is a minimum when $\theta = \phi$. \end{Examples} \Paragraph{125. 
The Mean Value Theorem.} We can proceed now to the proof of another general theorem of extreme importance, a theorem commonly known as `\emph{The Mean Value Theorem}\Add{'} or `\emph{The Theorem of the Mean}'. \begin{Theorem} If $\phi(x)$ has a derivative for all values of~$x$ in the interval~$\DPmod{(a, b)}{[a, b]}$, then there is a value~$\xi$ of~$x$ between $a$~and~$b$, such that \[ \phi(b) - \phi(a) = (b - a)\phi'(\xi). \] \end{Theorem} Before we give a strict proof of this theorem, which is perhaps the most important theorem in the Differential Calculus, it will be well to point out its obvious geometrical meaning. This is simply (see \Fig{43}) that if the curve~$APB$ has a tangent at all points of its length then there %[Illustration: Fig. 43.] \Figure[2in]{43}{p226} \PageSep{227} must be a point, such as~$P$, where the tangent is parallel to~$AB$. For $\phi'(\xi)$~is the tangent of the angle which the tangent at~$P$ makes with~$OX$, and $\{\phi(b) - \phi(a)\}/(b - a)$ the tangent of the angle which $AB$ makes with~$OX$. It is easy to give a strict analytical proof. Consider the function \[ \phi(b) - \phi(x) - \frac{b - x}{b - a}\{\phi(b) - \phi(a)\}, \] which vanishes when $x = a$ and $x = b$. It follows from Theorem~B of \SecNo[§]{121} that there is a value~$\xi$ for which its derivative vanishes. But this derivative is \[ \frac{\phi(b) - \phi(a)}{b - a} - \phi'(x); \] which proves the theorem. It should be observed that it has not been assumed in this proof that $\phi'(x)$~is continuous. It is often convenient to express the Mean Value Theorem in the form \[ \phi(b) = \phi(a) + (b - a) \phi'\{a + \theta(b - a)\}, \] where $\theta$~is a number lying between $0$ and~$1$. Of course $a + \theta(b - a)$ is merely another way of writing `some number~$\xi$ between $a$~and~$b$'. If we put $b = a + h$ we obtain \[ \phi(a + h) = \phi(a) + h\phi'(a + \theta h), \] which is the form in which the theorem is most often quoted. 
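The Mean Value Theorem lends itself to a simple numerical illustration. The following sketch is an editorial addition in modern notation, not part of Hardy's text: for $\phi(x) = x^{3}$ on the interval $[1, 2]$ it locates, by bisection, a point $\xi$ between $a$ and~$b$ at which $\phi'(\xi)$ equals the mean slope $\{\phi(b) - \phi(a)\}/(b - a)$; all function names are our own.

```python
# Editorial illustration of the Mean Value Theorem (not from Hardy's text).
# For phi(x) = x^3 on [1, 2] the mean slope is (8 - 1)/(2 - 1) = 7, and the
# theorem guarantees a xi in (1, 2) with phi'(xi) = 7, namely xi = sqrt(7/3).

def mean_value_point(phi, dphi, a, b, tol=1e-12):
    """Locate xi in (a, b) at which dphi(xi) equals the mean slope.

    Bisection is used on g(x) = dphi(x) - slope; we assume g changes sign
    on (a, b), which holds in this example since dphi is increasing."""
    slope = (phi(b) - phi(a)) / (b - a)

    def g(x):
        return dphi(x) - slope

    lo, hi = a, b
    if g(lo) > 0:          # orient the bracket so that g(lo) <= 0 <= g(hi)
        lo, hi = hi, lo
    while abs(hi - lo) > tol:
        mid = (lo + hi) / 2
        if g(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

phi = lambda x: x ** 3
dphi = lambda x: 3 * x ** 2
xi = mean_value_point(phi, dphi, 1.0, 2.0)   # approximately sqrt(7/3)
```

Here the point can of course be written down directly, since $\xi^{2} = \frac{1}{3}(b^{2} + ab + a^{2})$; the bisection merely exhibits the theorem for a function whose derivative is continuous.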
\begin{Examples}{XLVII.} \Item{1.} Show that \[ \phi(b) - \phi(x) - \frac{b - x}{b - a}\{\phi(b) - \phi(a)\} \] is the difference between the ordinates of a point on the curve and the corresponding point on the chord. \Item{2.} Verify the theorem when $\phi(x) = x^{2}$ and when $\phi(x) = x^{3}$. [In the latter case we have to prove that $(b^{3} - a^{3})/(b - a) = 3\xi^{2}$, where $a < \xi < b$; \ie\ that if $\frac{1}{3}(b^{2} + ab + a^{2}) = \xi^{2}$ then $\xi$~lies between $a$ and~$b$.] \Item{3.} Establish the theorem stated at the end of \SecNo[§]{124} by means of the Mean Value Theorem. {\Loosen[Since $\phi'(0) = c$, we can find a small positive value of~$x$ such that $\{\phi(x) - \phi(0)\}/x$ is nearly equal to~$c$; and therefore, by the theorem, a small positive value of~$\xi$ such that $\phi'(\xi)$~is nearly equal to~$c$, which is inconsistent with $\lim\limits_{x \to +0} \phi'(x) = a$, unless $a = c$. Similarly $b = c$.]} \PageSep{228} \Item{4.} Use the Mean Value Theorem to prove Theorem~(6) of \SecNo[§]{113}, assuming that the derivatives which occur are continuous. [The derivative of~$F\{f(x)\}$ is by definition \[ \lim \frac{F\{f(x + h)\} - F\{f(x)\}}{h}. \] But, by the Mean Value Theorem, $f(x + h) = f(x) + hf'(\xi)$, where $\xi$~is a number lying between $x$ and~$x + h$. And \[ F\{f(x) + hf'(\xi)\} = F\{f(x)\} + hf'(\xi)\, F'(\xi_{1}), \] where $\xi_{1}$~is a number lying between $f(x)$ and~$f(x) + hf'(\xi)$. Hence the derivative of~$F\{f(x)\}$ is \[ \lim f'(\xi)\, F'(\xi_{1}) = f'(x)\, F'\{f(x)\}, \] since $\xi \to x$ and $\xi_{1} \to f(x)$ as $h \to 0$.] \end{Examples} \Paragraph{126.} The Mean Value Theorem furnishes us with a proof of a result which is of great importance in what follows: \begin{Result}if $\phi'(x) = 0$, throughout a certain interval of values of~$x$, then $\phi(x)$~is constant throughout that interval. 
\end{Result} For, if $a$~and~$b$ are any two values of~$x$ in the interval, then \[ \phi(b) - \phi(a) = (b - a) \phi'\{a + \theta(b - a)\} = 0. \] An immediate corollary is that if $\phi'(x) = \psi'(x)$, throughout a certain interval, then the functions $\phi(x)$ and~$\psi(x)$ differ throughout that interval by a constant. \Paragraph{127. Integration.} We have in this chapter seen how we can find the derivative of a given function~$\phi(x)$ in a variety of cases, including all those of the commonest occurrence. It is natural to consider the converse question, that of \emph{determining a function whose derivative is a given function}. Suppose that $\psi(x)$~is the given function. Then we wish to determine a function such that $\phi'(x) = \psi(x)$. A little reflection shows us that this question may really be analysed into three parts. \Item{(1)} In the first place we want to know whether such a function as $\phi(x)$ \emph{actually exists}. This question must be carefully distinguished from the question as to whether (supposing that there is such a function) we can find any simple formula to express it. \Item{(2)} We want to know whether it is possible that more than one such function should exist, \ie\ we want to know whether our \PageSep{229} problem is one which admits of a \emph{unique} solution or not; and if not, we want to know whether there is any simple relation between the different solutions which will enable us to express all of them in terms of any particular one. \Item{(3)} If there is a solution, we want to know \emph{how to find an actual expression for it}. It will throw light on the nature of these three distinct questions if we compare them with the three corresponding questions which arise with regard to the differentiation of functions. \Item{(1)} A function~$\phi(x)$ may have a derivative for all values of~$x$, like~$x^{m}$, where $m$~is a positive integer, or~$\sin x$. 
It may generally, but not always have one, like $\sqrt[3]{x}$ or~$\tan x$ or~$\sec x$. Or again it may never have one: for example, the function considered in \Ex{xxxvii}.~20, which is nowhere continuous, has obviously no derivative for any value of~$x$. Of course during this chapter we have confined ourselves to functions which are continuous except for some special values of~$x$. The example of the function~$\sqrt[3]{x}$, however, shows that a continuous function may not have a derivative for some special value of~$x$, in this case $x = 0$. Whether there are continuous functions which \emph{never} have derivatives, or continuous curves which never have tangents, is a further question which is at present beyond us. Common-sense says \emph{No}: but, as we have already stated in \SecNo[§]{111}, this is one of the cases in which higher mathematics has proved common-sense to be mistaken. But at any rate it is clear enough that the question `has $\phi(x)$ a derivative~$\phi'(x)$?'\ is one which has to be answered differently in different circumstances. And we may expect that the converse question `is there a function~$\phi(x)$ of which $\psi(x)$~is the derivative?'\ will have different answers too. We have already seen that there are cases in which the answer is \emph{No}: thus if $\psi(x)$~is the function which is equal to $a$,~$b$, or~$c$ according as $x$~is less than, equal to, or greater than~$0$, then the answer is \emph{No} (\Ex{xlvii}.~3), unless $a = b = c$. This is a case in which the given function is discontinuous. In what follows, however, we shall always suppose $\psi(x)$~continuous. And then the answer is~\emph{Yes}: \emph{if $\psi(x)$~is continuous then there is always a function~$\phi(x)$ such that $\phi'(x) = \psi(x)$}. The proof of this will be given in \Ref{Ch.}{VII}\@. \PageSep{230} \Item{(2)} The second question presents no difficulties. 
In the case of differentiation we have a direct definition of the derivative which makes it clear from the beginning that there cannot possibly be more than one. In the case of the converse problem the answer is almost equally simple. It is that if $\phi(x)$~is one solution of the problem then $\phi(x) + C$ is another, for any value of the constant~$C$, and that all possible solutions are comprised in the form $\phi(x) + C$. This follows at once from \SecNo[§]{126}. \Item{(3)} The practical problem of actually finding~$\phi'(x)$ is a fairly simple one in the case of any function defined by some finite combination of the ordinary functional symbols. The converse problem is much more difficult. The nature of the difficulties will appear more clearly later on. \begin{Definitions} If $\psi(x)$ is the derivative of~$\phi(x)$, then we call $\phi(x)$ an \Emph{integral} or \Emph{integral function} of~$\psi(x)$. The operation of forming~$\phi(x)$ from~$\psi(x)$ we call \Emph{integration}. \end{Definitions} We shall use the notation \[ \phi(x) = \int \psi(x)\, dx. \] It is hardly necessary to point out that $\int\dots dx$ like $d/dx$ must, at present at any rate, be regarded purely as a symbol of operation: the~$\int$ and the~$dx$ no more mean anything when taken by themselves than do the~$d$ and~$dx$ of the other operative symbol~$d/dx$. \Paragraph{128. The practical problem of integration.} The results of the earlier part of this chapter enable us to write down at once the integrals of some of the commonest functions. Thus \[ \int x^{m}\, dx = \frac{x^{m+1}}{m + 1},\quad \int \cos x\, dx = \sin x,\quad \int \sin x\, dx = -\cos x. \Tag{(1)} \] These formulae must be understood as meaning that the function on the right-hand side is \emph{one} integral of that under the sign of integration. The \emph{most general} integral is of course obtained by adding to the former a constant~$C$, known as the \Emph{arbitrary constant} of integration.
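The statement that all integrals of $\psi(x)$ are comprised in the form $\phi(x) + C$ admits a simple numerical illustration. The sketch below is an editorial addition, not part of Hardy's text: it approximates two different integrals of $\psi(x) = \cos x$ by Riemann sums taken from different starting points, and checks that their difference is (to within the error of the sums) the same constant at every point.

```python
# Editorial illustration (not from Hardy's text): two integrals of the same
# function differ by a constant (section 126 / section 127 (2)).

import math

def antiderivative(psi, x0, x, n=100000):
    """Midpoint Riemann-sum approximation of the integral of psi from x0 to x."""
    h = (x - x0) / n
    return sum(psi(x0 + (k + 0.5) * h) for k in range(n)) * h

psi = math.cos
xs = [0.5, 1.0, 1.5, 2.0]
phi1 = [antiderivative(psi, 0.0, x) for x in xs]    # one integral of psi
phi2 = [antiderivative(psi, -1.0, x) for x in xs]   # another integral of psi
diffs = [b - a for a, b in zip(phi1, phi2)]
# Each difference equals the same constant, the integral of cos from -1 to 0,
# that is sin 0 - sin(-1) = sin 1.
```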
\PageSep{231} There is however one case of exception to the first formula, that in which $m = -1$. In this case the formula becomes meaningless, as is only to be expected, since we have seen already (\Ex{xlii}.~4) that $1/x$ cannot be the derivative of any polynomial or rational fraction. That there really is a function~$F(x)$ such that $D_{x}F(x) = 1/x$ will be proved in the next chapter. For the present we shall be content to assume its existence. This function~$F(x)$ is certainly not a polynomial or rational function; and it can be proved that it is not an algebraical function. It can indeed be proved that $F(x)$~is an essentially new function, independent of any of the classes of functions which we have considered yet, that is to say incapable of expression by means of any finite combination of the functional symbols corresponding to them. The proof of this is unfortunately too detailed and tedious to be inserted in this book; but some further discussion of the subject will be found in \Ref{Ch.}{IX}, where the properties of~$F(x)$ are investigated systematically. Suppose first that $x$~is positive. Then we shall write \[ \int \frac{dx}{x} = \log x, \Tag{(2)} \] and we shall call the function on the right-hand side of this equation \Emph{the logarithmic function}: it is defined so far only for positive values of~$x$. Next suppose $x$~negative. Then $-x$~is positive, and so $\log(-x)$ is defined by what precedes. Also \[ \frac{d}{dx} \log(-x) = \frac{-1}{-x} = \frac{1}{x}, \] so that, when $x$~is negative, \[ \int \frac{dx}{x} = \log(-x). \Tag{(3)} \] The formulae \Eq{(2)}~and~\Eq{(3)} may be united in the formulae \[ \int \frac{dx}{x} = \log(±x) = \log|x|, \Tag{(4)} \] where the ambiguous sign is to be chosen so that $±x$~is positive: these formulae hold for all real values of~$x$ other than $x = 0$. 
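Formula~(3), by which an integral of $1/x$ on the negative axis is $\log(-x)$, is easily checked numerically. The following sketch is an editorial addition, not part of Hardy's text: it compares a Riemann sum for the integral of $1/x$ over $[-3, -1]$ with the value $\log|-1| - \log|-3| = -\log 3$ predicted by~(4).

```python
# Editorial check (not from Hardy's text) of formulae (3) and (4): on the
# negative axis an integral of 1/x is log(-x), so the integral of dx/x
# taken from -3 to -1 should equal log 1 - log 3 = -log 3.

import math

def midpoint_integral(f, a, b, n=200000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

val = midpoint_integral(lambda x: 1.0 / x, -3.0, -1.0)
# val should be close to -log 3 = -1.0986...
```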
\PageSep{232} \begin{Remark} The most fundamental of the properties of~$\log x$ which will be proved in \Ref{Ch.}{IX} are expressed by the equations \[ \log 1 = 0,\quad \log (1/x) = -\log x,\quad \log xy = \log x + \log y, \] of which the second is an obvious deduction from the first and third. It is not really necessary, for the purposes of this chapter, to assume the truth of any of these formulae; but they sometimes enable us to write our formulae in a more compact form than would otherwise be possible. It follows from the last of the formulae that $\log x^{2}$~is equal to~$2\log x$ if $x > 0$ and to~$2\log(-x)$ if $x < 0$, and in either case to~$2\log |x|$. Either of the formulae~\Eq{(4)} is therefore equivalent to the formula \[ \int \frac{dx}{x} = \tfrac{1}{2}\log x^{2}. \Tag{(5)} \] \end{Remark} The five formulae \Eq{(1)}--\Eq{(3)} are the five most fundamental \emph{standard forms} of the Integral Calculus. To them should be added two more, viz. \[ \int \frac{dx}{1 + x^{2}} = \arctan x,\quad \int \frac{dx}{\sqrtp{1 - x^{2}}} = ±\arcsin x.\footnotemark \Tag{(6)} \] \footnotetext{See \SecNo[§]{119} for the rule for determining the ambiguous sign.}% \Paragraph{129. Polynomials.} All the general theorems of \SecNo[§]{113} may of course also be stated as theorems in integration. Thus we have, to begin with, the formulae \begin{gather*} \int \{f(x) + F(x)\}\, dx = \int f(x)\, dx + \int F(x)\, dx, \Tag{(1)}\\ \int kf(x)\, dx = k\int f(x)\, dx. \Tag{(2)} \end{gather*} Here it is assumed, of course, that the arbitrary constants are adjusted properly. Thus the formula~\Eq{(1)} asserts that the sum of \emph{any} integral of~$f(x)$ and \emph{any} integral of~$F(x)$~is \emph{an} integral of $f(x) + F(x)$. These theorems enable us to write down at once the integral of any function of the form $\sum A_{\nu} f_{\nu}(x)$, the sum of a finite number of constant multiples of functions whose integrals are known.
In particular we can write down the integral of any \emph{polynomial}: thus \[ \int (a_{0}x^{n} + a_{1}x^{n-1} + \dots + a_{n})\, dx = \frac{a_{0}x^{n+1}}{n + 1} + \frac{a_{1}x^{n}}{n} + \dots + a_{n}x. \] \PageSep{233} \Paragraph{130. Rational Functions.} After integrating polynomials it is natural to turn our attention next to \emph{rational functions}. Let us suppose $R(x)$ to be any rational function expressed in the standard form of \SecNo[§]{117}, viz.\ as the sum of a polynomial~$\Pi(x)$ and a number of terms of the form~$A/(x - \alpha)^{p}$. We can at once write down the integrals of the polynomial and of all the other terms except those for which $p = 1$, since \[ \int \frac{A}{(x - \alpha)^{p}}\, dx = -\frac{A}{p - 1}\, \frac{1}{(x - \alpha)^{p-1}}, \] whether $\alpha$~be real or complex (\SecNo[§]{117}). The terms for which $p = 1$ present rather more difficulty. It follows immediately from Theorem~(6) of \SecNo[§]{113} that \[ \int F'\{f(x)\}\, f'(x)\, dx = F\{f(x)\}. \Tag{(3)} \] In particular, if we take $f(x) = ax + b$, where $a$~and~$b$ are real, and write $\phi(x)$ for~$F(x)$ and $\psi(x)$ for~$F'(x)$, so that $\phi(x)$~is an integral of~$\psi(x)$, we obtain \[ \int \psi(ax + b)\, dx = \frac{1}{a}\phi(ax + b). \Tag{(4)} \] Thus, for example, \[ \int \frac{dx}{ax + b} = \frac{1}{a} \log|ax + b|, \] and in particular, if $\alpha$~is real, \[ \int \frac{dx}{x - \alpha} = \log|x - \alpha|. \] We can therefore write down the integrals of all the terms in~$R(x)$ for which $p = 1$ and $\alpha$~is real. There remain the terms for which $p = 1$ and $\alpha$~is complex. In order to deal with these we shall introduce a restrictive hypothesis, viz.\ that all the coefficients in~$R(x)$ are real. 
Then if $\alpha = \gamma + \delta i$ is a root of $Q(x) = 0$, of multiplicity~$m$, so is its conjugate $\bar{\alpha} = \gamma - \delta i$; and if a partial fraction $A_{p}/(x - \alpha)^{p}$ occurs in the expression of~$R(x)$, so does $\bar{A}_{p}/(x - \bar{\alpha})^{p}$, where $\bar{A}_{p}$~is conjugate to~$A_{p}$. This follows from the nature of the algebraical processes by means of which the partial fractions can be found, and which are explained at length in treatises on Algebra.\footnote {See, for example, Chrystal's \textit{Algebra}, vol.~i, pp.~151--9.} \PageSep{234} Thus, if a term $(\lambda + \mu i)/(x - \gamma - \delta i)$ occurs in the expression of~$R(x)$ in partial fractions, so will a term $(\lambda - \mu i)/(x - \gamma + \delta i)$; and the sum of these two terms is \[ \frac{2\{\lambda(x - \gamma) - \mu\delta\}}{(x - \gamma)^{2} + \delta^{2}}. \] This fraction is in reality the most general fraction of the form \[ \frac{Ax + B}{ax^{2} + 2bx + c}, \] where $b^{2} < ac$. The reader will easily verify the equivalence of the two forms, the formulae which express $\lambda$,~$\mu$, $\gamma$,~$\delta$ in terms of $A$,~$B$, $a$,~$b$,~$c$ being \[ \lambda = A/2a,\quad \mu = -D/(2a\sqrt{\Delta}),\quad \gamma = -b/a,\quad \delta = \sqrt{\Delta}/a, \] where $\Delta = ac - b^{2}$, and $D = aB - bA$.\PageLabel{234} If in~\Eq{(3)} we suppose $F\{f(x)\}$~to be~$\log |f(x)|$, we obtain \[ \int \frac{f'(x)}{f(x)}\, dx = \log |f(x)|; \Tag{(5)} \] and if we further suppose that $f(x) = (x - \lambda)^{2} + \mu^{2}$, we obtain \[ \int \frac{2(x - \lambda)}{(x - \lambda)^{2} + \mu^{2}}\, dx = \log\{(x - \lambda)^{2} + \mu^{2}\}. \] And, in virtue of the equations~\Eq{(6)} of \SecNo[§]{128} and \Eq{(4)}~above, we have \[ \int \frac{-2\delta\mu}{(x - \lambda)^{2} + \mu^{2}}\, dx = -2\delta \arctan \left(\frac{x - \lambda}{\mu}\right). 
\] These two formulae enable us to integrate the sum of the two terms which we have been considering in the expression of~$R(x)$; and we are thus enabled to write down the integral of any real rational function, if all the factors of its denominator can be determined. The integral of any such function is composed of \begin{Result}the sum of a polynomial, a number of rational functions of the type \[ -\frac{A}{p - 1}\, \frac{1}{(x - \alpha)^{p-1}}, \] a number of logarithmic functions, and a number of inverse tangents. \end{Result} It only remains to add that if $\alpha$~is complex then the rational function just written always occurs in conjunction with another in which $A$ and~$\alpha$ are replaced by the complex numbers conjugate to them, and that the sum of the two functions is a real rational function. \PageSep{235} \begin{Examples}{XLVIII.} \Item{1.} Prove that \[ \int \frac{Ax + B}{ax^{2} + 2bx + c}\, dx = \frac{A}{2a} \log |X| + \frac{D}{2a \sqrtp{-\Delta}} \log \left|\frac{ax + b - \sqrtp{-\Delta}}{ax + b + \sqrtp{-\Delta}}\right| \] (where $X = ax^{2} + 2bx + c$) if $\Delta < 0$, and \[ \int \frac{Ax + B}{ax^{2} + 2bx + c}\, dx = \frac{A}{2a} \log |X| + \frac{D}{2a \sqrt{\Delta}} \arctan \left(\frac{ax + b}{\sqrt{\Delta}}\right) \] if $\Delta > 0$, $\Delta$ and~$D$ having the same meanings as on \PageRef{p.}{234}. \Item{2.} In the particular case in which $ac = b^{2}$ the integral is \[ -\frac{D}{a(ax + b)} + \frac{A}{a} \log |ax + b|. \] \Item{3.} Show that if the roots of $Q(x) = 0$ are all real and distinct, and $P(x)$~is of lower degree than~$Q(x)$, then \[ \int R(x)\, dx = \tsum \frac{P(\alpha)}{Q'(\alpha)} \log |x - \alpha|, \] the summation applying to all the roots~$\alpha$ of $Q(x) = 0$. [The form of the fraction corresponding to~$\alpha$ may be deduced from the facts that \[ \frac{Q(x)}{x - \alpha} \to Q'(\alpha),\quad (x - \alpha) R(x) \to \frac{P(\alpha)}{Q'(\alpha)}, \] as $x \to \alpha$.]
\Item{4.} If all the roots of~$Q(x)$ are real and $\alpha$~is a double root, the other roots being simple roots, and $P(x)$~is of lower degree than~$Q(x)$, then the integral is $A/(x - \alpha) + A'\log |x - \alpha| + \sum B\log |x - \beta|$, where \[ A = -\frac{2P(\alpha)}{Q''(\alpha)},\quad A' = \frac{2\{3P'(\alpha) Q''(\alpha) - P(\alpha) Q'''(\alpha)\}} {3\{Q''(\alpha)\}^{2}},\quad B = \frac{P(\beta)}{Q'(\beta)}, \] and the summation applies to all roots~$\beta$ of $Q(x) = 0$ other than~$\alpha$. \Item{5.} Calculate \[ \int \frac{dx}{\{(x - 1) (x^{2} + 1)\}^{2}}. \] [The expression in partial fractions is \[ \frac{1}{4(x - 1)^{2}} - \frac{1}{2(x - 1)} - \frac{i}{8(x - i)^{2}} + \frac{2 - i}{8(x - i)} + \frac{i}{8(x + i)^{2}} + \frac{2 + i}{8(x + i)}, \] and the integral is \[ -\frac{1}{4(x - 1)} - \frac{1}{4(x^{2} + 1)} - \tfrac{1}{2} \log |x - 1| + \tfrac{1}{4} \log (x^{2} + 1) + \tfrac{1}{4} \arctan x.] \] \Item{6.} Integrate \begin{gather*} \frac{x}{(x - a)(x - b)(x - c)},\quad \frac{x}{(x - a)^{2}(x - b)},\quad \frac{x}{(x - a)^{2} (x - b)^{2}},\quad \frac{x}{(x - a)^{3}},\\
%
\frac{x}{(x^{2} + a^{2}) (x^{2} + b^{2})},\quad \frac{x^{2}}{(x^{2} + a^{2}) (x^{2} + b^{2})},\quad \frac{x^{2} - a^{2}}{x^{2}(x^{2} + a^{2})},\quad \frac{x^{2} - a^{2}}{x(x^{2} + a^{2})^{2}}. \end{gather*} \PageSep{236} \Item{7.} Prove the formulae: \begin{alignat*}{3} \int \frac{dx}{1 + x^{4}} &= \frac{1}{4\sqrt{2}} \biggl\{%
&&\log \biggl(\frac{1 + x\sqrt{2} + x^{2}}{1 - x\sqrt{2} + x^{2}}\biggr) &&+ 2\arctan \biggl(\frac{x\sqrt{2}}{1 - x^{2}}\biggr)\biggr\},\\ %
\int \frac{x^{2}\, dx}{1 + x^{4}} &= \frac{1}{4\sqrt{2}} \biggl\{%
&-&\log \biggl(\frac{1 + x\sqrt{2} + x^{2}}{1 - x\sqrt{2} + x^{2}}\biggr) &&+ 2\arctan \biggl(\frac{x\sqrt{2}}{1 - x^{2}}\biggr)\biggr\},\\ %
\int \frac{dx}{1 + x^{2} + x^{4}} &= \frac{1}{4\sqrt{3}}\biggl\{%
&\sqrt{3}&\log \biggl(\frac{1 + x + x^{2}}{1 - x + x^{2}}\biggr) &&+ 2\arctan \biggl(\frac{x\sqrt{3}}{1 - x^{2}}\biggr)\biggr\}.
\end{alignat*} \end{Examples} \begin{Remark} \Paragraph{131. Note on the practical integration of rational functions.} The analysis of \SecNo[§]{130} gives us a general method by which we can find the integral of any real rational function~$R(x)$, \emph{provided we can solve the equation $Q(x) = 0$}. In simple cases (as in Ex.~5 above) the application of the method is fairly simple. In more complicated cases the labour involved is sometimes prohibitive, and other devices have to be used. It is not part of the purpose of this book to go into practical problems of integration in detail. The reader who desires fuller information may be referred to Goursat's \textit{Cours d'Analyse}, second~ed., vol.~i, pp.~246~\textit{et~seq.}, Bertrand's \textit{Calcul Intégral}, and Dr~Bromwich's tract \textit{Elementary Integrals} (Bowes and Bowes,~1911). If the equation $Q(x) = 0$ cannot be solved algebraically, then the method of partial fractions naturally fails and recourse must be had to other methods.\footnote {See the author's tract ``The integration of functions of a single variable'' (\textit{Cambridge Tracts in Mathematics}, No.~2,\PageLabel{236} second edition, 1915). This does not often happen in practice.} \end{Remark} \Paragraph{132. Algebraical Functions.} We naturally pass on next to the question of the integration of \emph{algebraical} functions. We have to consider the problem of integrating~$y$, where $y$~is an algebraical function of~$x$. It is however convenient to consider an apparently more general integral, viz. \[ \int R(x, y)\, dx, \] where $R(x, y)$~is any rational function of $x$~and~$y$. The greater generality of this form is only apparent, since (\Ex{xiv}.~6) the function~$R(x, y)$ is itself an algebraical function of~$x$. 
The choice of this form is in fact dictated simply by motives of convenience: such a function as \[ \frac{px + q + \sqrtp{ax^{2} + 2bx + c}} {px + q - \sqrtp{ax^{2} + 2bx + c}} \] is far more conveniently regarded as a rational function of $x$ and the simple algebraical function $\sqrtp{ax^{2} + 2bx + c}$, than directly as itself an algebraical function of~$x$. \PageSep{237} \Paragraph{133. Integration by substitution and rationalisation.} It follows from equation~\Eq{(3)} of \SecNo[§]{130} that if $\ds\int \psi(x)\, dx = \phi(x)$ then \[ \int \psi\{f(t)\}\, f'(t)\, dt = \phi\{f(t)\}. \Tag{(1)} \] This equation supplies us with a method for determining the integral of~$\psi(x)$ in a large number of cases in which the form of the integral is not directly obvious. It may be stated as a rule as follows: \emph{put $x = f(t)$, where $f(t)$~is any function of a new variable~$t$ which it may be convenient to choose; multiply by~$f'(t)$, and determine \(if possible\) the integral of $\psi\{f(t)\}\, f'(t)$; express the result in terms of~$x$}. It will often be found that the function of~$t$ to which we are led by the application of this rule is one whose integral can easily be calculated. This is always so, for example, if it is a rational function, and it is very often possible to choose the relation between $x$ and~$t$ so that this shall be the case. Thus the integral of~$R(\sqrt{x})$, where $R$~denotes a rational function, is reduced by the substitution $x = t^{2}$ to the integral of~$2tR(t^{2})$, \ie\ to the integral of a rational function of~$t$. This method of integration is called \Emph{integration by rationalisation}, and is of extremely wide application. Its application to the problem immediately under consideration is obvious. 
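Thus, to take the simplest instance of the rule just stated (a worked example added for illustration), the substitution $x = t^{2}$, where $t = \sqrt{x} \geq 0$ and $dx = 2t\, dt$, reduces the integral of $1/(1 + \sqrt{x})$ to that of a rational function of~$t$:

```latex
\[ \int \frac{dx}{1 + \sqrt{x}}
     = \int \frac{2t\, dt}{1 + t}
     = 2\int \left(1 - \frac{1}{1 + t}\right) dt
     = 2t - 2\log(1 + t)
     = 2\sqrt{x} - 2\log(1 + \sqrt{x}). \]
```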
\begin{Result}If we can find a variable~$t$ such that $x$~and~$y$ are both rational functions of~$t$, say $x = R_{1}(t)$, $y = R_{2}(t)$, then \[ \int R(x, y)\, dx = \int R\{R_{1}(t), R_{2}(t)\}\, R_{1}'(t)\, dt, \] and the latter integral, being that of a rational function of~$t$, can be calculated by the methods of~\SecNo[§]{130}. \end{Result} It would carry us beyond our present range to enter upon any general discussion as to when it is and when it is not possible to find an auxiliary variable~$t$ connected with $x$~and~$y$ in the manner indicated above. We shall consider only a few simple and interesting special cases. \Paragraph{134. Integrals connected with conics.} Let us suppose that $x$~and~$y$ are connected by an equation of the form \[ ax^{2} + 2hxy + by^{2} + 2gx + 2fy + c = 0; \] in other words that the graph of~$y$, considered as a function of~$x$ \PageSep{238} is a conic. Suppose that $(\xi, \eta)$ is any point on the conic, and let $x - \xi = X$, $y - \eta = Y$. If the relation between $x$~and~$y$ is expressed in terms of $X$~and~$Y$, it assumes the form \[ aX^{2} + 2hXY + bY^{2} + 2GX + 2FY = 0, \] where $F = h\xi + b\eta + f$, $G = a\xi + h\eta + g$. In this equation put $Y = tX$. It will then be found that $X$~and~$Y$ can both be expressed as rational functions of~$t$, and therefore $x$~and~$y$ can be so expressed, the actual formulae being \[ x - \xi = -\frac{2 (G + Ft)}{a + 2ht + bt^{2}},\quad y - \eta = -\frac{2t(G + Ft)}{a + 2ht + bt^{2}}. \] Hence the process of rationalisation described in the last section can be carried out. The reader should verify that \[ hx + by + f = -\tfrac{1}{2}(a + 2ht + bt^{2}) \frac{dx}{dt}, \] so that \[ \int \frac{dx}{hx + by + f}= -2\int \frac{dt}{a + 2ht + bt^{2}}. \] When $h^{2} > ab$ it is in some ways advantageous to proceed as follows. 
The conic is a hyperbola whose asymptotes are parallel to the lines \[ ax^{2} + 2hxy + by^{2} = 0, \] or \[ b(y - \mu x) (y - \mu' x) = 0, \] say\Add{.} If we put $y - \mu x = t$, we obtain \[ y - \mu x = t,\quad y - \mu' x = -\frac{2gx + 2fy + c}{bt}, \] and it is clear that $x$~and~$y$ can be calculated from these equations as rational functions of~$t$. We shall illustrate this process by an application to an important special case. \begin{Remark} \Paragraph{135. The integral $\ds\int \frac{dx}{\sqrtp{ax^{2} + 2bx + c}}$.} {\Loosen Suppose in particular that $y^{2} = ax^{2} + 2bx + c$, where $a > 0$. It will be found that, if we put $y + x\sqrt{a} = t$, we obtain} \[ 2\frac{dx}{dt} = \frac{(t^{2} + c)\sqrt{a} + 2bt}{(t\sqrt{a} + b)^{2}},\quad 2y = \frac{(t^{2} + c)\sqrt{a} + 2bt}{t\sqrt{a} + b}, \] and so \[ \int \frac{dx}{y} = \int \frac{dt}{t\sqrt{a} + b} = \frac{1}{\sqrt{a}} \log \left|x\sqrt{a} + y + \frac{b}{\sqrt{a}}\right|. \Tag{(1)} \] \PageSep{239} If in particular $a = 1$, $b = 0$, $c = a^{2}$, or $a = 1$, $b = 0$, $c = -a^{2}$, we obtain \[ \int \frac{dx}{\sqrtp{x^{2} + a^{2}}} = \log \{x + \sqrtp{x^{2} + a^{2}}\},\quad \int \frac{dx}{\sqrtp{x^{2} - a^{2}}} = \log |x + \sqrtp{x^{2} - a^{2}}|, \Tag{(2)} \] equations whose truth may be verified immediately by differentiation. With these formulae should be associated the third formula \[ \int \frac{dx}{\sqrtp{a^{2} - x^{2}}} = \arcsin(x/a), \Tag{(3)} \] which corresponds to a case of the general integral of this section in which $a < 0$. In~\Eq{(3)} it is supposed that $a > 0$; if $a < 0$ then the integral is $\arcsin(x/|a|)$ (cf.\ \SecNo[§]{119}). In practice we should evaluate the general integral by reducing it (as in the next section) to one or other of these standard forms. The formula~\Eq{(3)} appears very different from the formulae~\Eq{(2)}: the reader will hardly be in a position to appreciate the connection between them until he has read \Ref{Ch.}{X}\@. \end{Remark} \Paragraph{136. 
The integral $\ds\int \frac{\lambda x + \mu}{\sqrtp{ax^{2} + 2bx + c}}\, dx$.} This integral can be evaluated in all cases by means of the results of the preceding sections. It is most convenient to proceed as follows. Since \begin{gather*} \lambda x + \mu = (\lambda/a) (ax + b) + \mu - (\lambda b/a),\\ \int \frac{ax + b}{\sqrtp{ax^{2} + 2bx + c}}\, dx = \sqrtp{ax^{2} + 2bx + c}, \end{gather*} we have \[ \int \frac{(\lambda x + \mu)\, dx}{\sqrtp{ax^{2} + 2bx + c}} = \frac{\lambda}{a} \sqrtp{ax^{2} + 2bx + c} + \left(\mu - \frac{\lambda b}{a}\right) \int \frac{dx}{\sqrtp{ax^{2} + 2bx + c}}. \] In the last integral $a$~may be positive or negative. If $a$~is positive we put $x\sqrt{a} + (b/\sqrt{a}) = t$, when we obtain \[ \frac{1}{\sqrt{a}} \int \frac{dt}{\sqrtp{t^{2} + \kappa}}, \] {\Loosen where $\kappa = (ac - b^{2})/a$. If $a$~is negative we write~$A$ for~$-a$ and put $x\sqrt{A} - (b/\sqrt{A}) = t$, when we obtain} \[ \frac{1}{\sqrtp{-a}}\int \frac{dt}{\sqrtp{\kappa - t^{2}}}. \] It thus appears that in any case the calculation of the integral may be made to depend on that of the integral considered in \SecNo[§]{135}, and that this integral may be reduced to one or other of the three forms \[ \int \frac{dt}{\sqrtp{t^{2} + a^{2}}},\quad \int \frac{dt}{\sqrtp{t^{2} - a^{2}}},\quad \int \frac{dt}{\sqrtp{a^{2} - t^{2}}}. \] \PageSep{240} \begin{Remark} \Paragraph{137. The integral $\ds\int (\lambda x + \mu) \sqrtp{ax^{2} + 2bx + c}\, dx$.} In exactly the same way we find {\setlength{\multlinegap}{0pt}%
\begin{multline*}%[** TN: Re-broken]
\int(\lambda x + \mu) \sqrtp{ax^{2} + 2bx + c}\, dx \\ = \left(\frac{\lambda}{3a}\right) (ax^{2} + 2bx + c)^{3/2} + \left(\Add{\mu} - \frac{\lambda b}{a}\right) \int \sqrtp{ax^{2} + 2bx + c}\, dx; \end{multline*}}%
and the last integral may be reduced to one or other of the three forms \[ \int \sqrtp{t^{2} + a^{2}}\, dt,\quad \int \sqrtp{t^{2} - a^{2}}\, dt,\quad \int \sqrtp{a^{2} - t^{2}}\, dt.
\] In order to obtain these integrals it is convenient to introduce at this point another general theorem in integration. \end{Remark} \Paragraph{138. Integration by parts.} The theorem of \emph{integration by parts} is merely another way of stating the rule for the differentiation of a product proved in \SecNo[§]{113}. It follows at once from Theorem~(3) of \SecNo[§]{113} that \[ \int f'(x)F(x)\, dx = f(x)F(x) - \int f(x)F'(x)\, dx. \] It may happen that the function which we wish to integrate is expressible in the form~$f'(x)F(x)$, and that $f(x)F'(x)$ can be integrated. Suppose, for example, that $\phi(x) = x\psi(x)$, where $\psi(x)$~is the second derivative of a known function~$\chi(x)$. Then \[ \int\phi(x)\, dx = \int x\chi''(x)\, dx = x\chi'(x) - \int \chi'(x)\, dx = x\chi'(x) - \chi(x). \] \begin{Remark} We can illustrate the working of this method of integration by applying it to the integrals of the last section. Taking \[ f(x) = ax + b,\quad F(x) = \sqrtp{ax^{2} + 2bx + c} = y, \] we obtain \begin{align*} %[** TN: Set on one line in the original] a\int y\, dx &= (ax + b)y - \int \frac{(ax + b)^{2}}{y}\, dx \\ &= (ax + b)y - a\int y\, dx + (ac - b^{2}) \int \frac{dx}{y}, \end{align*} so that \[ \int y\, dx = \frac{(ax + b)y}{2a} + \frac{ac - b^{2}}{2a} \int \frac{dx}{y}; \] and we have seen already (\SecNo[§]{135}) how to determine the last integral. \end{Remark} \begin{Examples}{XLIX.} \Item{1\Add{.}} Prove that if $a > 0$ then \begin{align*} \int \sqrtp{x^{2} + a^{2}}\, dx &= \tfrac{1}{2}x \sqrtp{x^{2} + a^{2}} + \tfrac{1}{2}a^{2} \log \{x + \sqrtp{x^{2} + a^{2}}\},\\ \int \sqrtp{x^{2} - a^{2}}\, dx &= \tfrac{1}{2}x \sqrtp{x^{2} - a^{2}} - \tfrac{1}{2}a^{2} \log |x + \sqrtp{x^{2} - a^{2}}|,\\ \int \sqrtp{a^{2} - x^{2}}\, dx &= \tfrac{1}{2}x \sqrtp{a^{2} - x^{2}} + \tfrac{1}{2}a^{2} \arcsin(x/a). 
\end{align*} \PageSep{241} \Item{2.} Calculate the integrals $\ds\int \frac{dx}{\sqrtp{a^{2} - x^{2}}}$, $\ds\int \sqrtp{a^{2} - x^{2}}\, dx$ by means of the substitution $x = a\sin\theta$, and verify that the results agree with those obtained in \SecNo[§]{135} and Ex.~1. \Item{3.} Calculate $\ds\int x(x + a)^{m}\, dx$, where $m$~is any rational number, in three ways, viz.\ (i)~by integration by parts, (ii)~by the substitution $(x + a)^{m} = t$, and (iii)~by writing $(x + a) - a$ for~$x$; and verify that the results agree. \Item{4.} Prove, by means of the substitutions $ax + b = 1/t$ and $x = 1/u$, that (in the notation of \SecNo[§§]{130}~and~\SecNo{138}) \[ \int \frac{dx}{y^{3}} = \frac{ax + b}{\Delta y},\quad \int \frac{x\, dx}{y^{3}} = -\frac{bx + c}{\Delta y}. \] \Item{5.} Calculate $\ds\int \frac{dx}{\sqrtb{(x - a) (b - x)}}$, where $b > a$, in three ways, viz.\ (i)~by the methods of the preceding sections, (ii)~by the substitution $(b - x)/(x - a) = t^{2}$, and (iii)~by the substitution $x = a\cos^{2}\theta + b\sin^{2}\theta$; and verify that the results agree. \Item{6.} Integrate $\sqrtb{(x - a) (b - x)}$ and $\sqrtb{(b - x)/(x - a)}$. \Item{7.} Show, by means of the substitution $2x + a + b = \frac{1}{2}(a - b) \{t^{2} + (1/t)^{2}\}$, or by multiplying numerator and denominator by $\sqrtp{x + a} -\sqrtp{x + b}$, that if $a > b$ then \[ \int \frac{dx}{\sqrtp{x + a} + \sqrtp{x + b}} = \tfrac{1}{2}\sqrtp{a - b} \left(t + \frac{1}{3t^{3}}\right). \] \Item{8.} Find a substitution which will reduce $\ds\int \frac{dx}{(x + a)^{3/2} + (x - a)^{3/2}}$ to the integral of a rational function. 
\MathTrip{1899.} \Item{9.} {\Loosen Show that $\ds\int R\{x, \sqrtp[n]{ax + b}\}\, dx$ is reduced, by the substitution $ax + b = y^{n}$, to the integral of a rational function.} \Item{10.} Prove that \[ \int f''(x) F(x)\, dx = f'(x) F(x) - f(x) F'(x) + \int f(x) F''(x)\, dx \] and generally {\setlength{\multlinegap}{\parindent}% \begin{multline*} %[** TN: Set on one line in the original] \int f^{(n)}(x) F(x)\, dx \\ = f^{(n-1)}(x) F(x) - f^{(n-2)}(x) F'(x) + \dots + (-1)^{n} \int f(x) F^{(n)}(x)\, dx. \end{multline*}} \Item{11.} The integral $\ds\int (1 + x)^{p} x^{q}\, dx$, where $p$~and~$q$ are rational, can be found in three cases, viz.\ (i)~if $p$~is an integer, (ii)~if $q$~is an integer, and (iii)~if $p + q$~is an integer. [In case~(i) put $x = u^{s}$, where $s$~is the denominator of~$q$; in case~(ii) put $1 + x = t^{s}$, where $s$~is the denominator of~$p$; and in case~(iii) put $1 + x = xt^{s}$, where $s$~is the denominator of~$p$.] \PageSep{242} \Item{12.} The integral $\ds\int x^{m}(ax^{n} + b)^{q}\, dx$ can be reduced to the preceding integral by the substitution $ax^{n} = bt$. [In practice it is often most convenient to calculate a particular integral of this kind by a `formula of reduction' (cf.\ \MiscEx{VI}~39).] \Item{13.} The integral $\ds\int R\{x, \sqrtp{ax + b}, \sqrtp{cx + d}\}\, dx$ can be reduced to that of a rational function by the substitution \[ 4x = -(b/a) \{t + (1/t)\}^{2} - (d/c)\{t - (1/t)\}^{2}. \] \Item{14.} Reduce $\ds\int R(x, y)\, dx$, where $y^{2}(x - y) = x^{2}$, to the integral of a rational function. [Putting $y = tx$ we obtain $x = 1/\{t^{2}(1 - t)\}$, $y = 1/\{t(1 - t)\}$.] \Item{15.} {\Loosen Reduce the integral in the same way when (\ia)~$y(x - y)^{2} = x$, (\ib)~$(x^{2} + y^{2})^{2} = a^{2}(x^{2} - y^{2})$. 
[In case~(\ia) put $x - y = t$: in case~(\ib) put $x^{2} + y^{2} = t(x - y)$, when we obtain} \[ %[** TN: Set in-line in the original]
x = a^{2}t(t^{2} + a^{2})/(t^{4} + a^{4}),\quad y = a^{2}t(t^{2} - a^{2})/(t^{4} + a^{4}).] \] \Item{16.} If $y(x - y)^{2} = x$ then \[ \int \frac{dx}{x - 3y} = \tfrac{1}{2} \log\{(x - y)^{2} - 1\}. \] \Item{17.} If $(x^{2} + y^{2})^{2} = 2c^{2}(x^{2} - y^{2})$ then \[ \int \frac{dx}{y(x^{2} + y^{2} + c^{2})} = - \frac{1}{c^{2}}\log\left(\frac{x^{2} + y^{2}}{x - y}\right). \] \end{Examples} \begin{Remark} \Paragraph{139. The general integral $\ds\int R(x, y)\, dx$, where $y^{2} = ax^{2} + 2bx + c$.} The most general integral, of the type considered in \SecNo[§]{134}, and associated with the special conic $y^{2} = ax^{2} + 2bx + c$, is \[ \int R(x, \sqrt{X})\, dx, \Tag{(1)} \] where $X = y^{2} = ax^{2} + 2bx + c$. We suppose that $R$~is a \emph{real} function. The subject of integration is of the form~$P/Q$, where $P$~and~$Q$ are polynomials in $x$~and~$\sqrt{X}$. It may therefore be reduced to the form \[ \frac{A + B\sqrt{X}}{C + D\sqrt{X}} = \frac{(A + B\sqrt{X})(C - D\sqrt{X})}{C^{2} - D^{2}X} = E + F\sqrt{X}, \] where $A$, $B$,~\dots\ are rational functions of~$x$. The only new problem which arises is that of the integration of a function of the form~$F\sqrt{X}$, or, what is the same thing,~$G/\sqrt{X}$, where $G$~is a rational function of~$x$. And the integral \[ \int \frac{G}{\sqrt{X}}\, dx \Tag{(2)} \] can always be evaluated by splitting up $G$ into partial fractions. When we do this, integrals of three different types may arise. \Itemp{(i)} In the first place there may be integrals of the type \[ \int \frac{x^{m}}{\sqrt{X}}\, dx, \Tag{(3)} \] \PageSep{243} where $m$~is a positive integer. The cases in which $m = 0$ or $m = 1$ have been disposed of in \SecNo[§]{136}.
In order to calculate the integrals corresponding to larger values of~$m$ we observe that \[ \frac{d}{dx}(x^{m-1}\sqrt{X}) = (m - 1)x^{m-2} \sqrt{X} + \frac{(ax + b) x^{m-1}}{\sqrt{X}} = \frac{\alpha x^{m} + \beta x^{m-1} + \gamma x^{m-2}}{\sqrt{X}}, \] where $\alpha$,~$\beta$,~$\gamma$ are constants whose values may be easily calculated. It is clear that, when we integrate this equation, we obtain a relation between three successive integrals of the type~\Eq{(3)}. As we know the values of the integral for $m = 0$ and $m = 1$, we can calculate in turn its values for all other values of~$m$. \Itemp{(ii)} In the second place there may be integrals of the type \[ \int \frac{dx}{(x - p)^{m}\sqrt{X}}, \Tag{(4)} \] where $p$~is real. If we make the substitution $x - p = 1/t$ then this integral is reduced to an integral in~$t$ of the type~\Eq{(3)}. \Itemp{(iii)} Finally, there may be integrals corresponding to complex roots of the denominator of~$G$. We shall confine ourselves to the simplest case, that in which all such roots are simple roots. In this case (cf.~\SecNo[§]{130}) a pair of conjugate complex roots of~$G$ gives rise to an integral of the type \[ \int \frac{Lx + M}{(Ax^{2} + 2Bx + C) \sqrt{ax^{2} + 2bx + c}}\, dx. \Tag{(5)} \] In order to evaluate this integral we put \[ x = \frac{\mu t + \nu}{t + 1}, \] where $\mu$~and~$\nu$ are so chosen that \[ a\mu\nu + b(\mu + \nu) + c = 0,\quad A\mu\nu + B(\mu + \nu) + C = 0; \] so that $\mu$~and~$\nu$ are the roots of the equation \[ (aB - bA)\xi^{2} - (cA - aC)\xi + (bC - cB) = 0. \] This equation has certainly real roots, for it is the same equation as equation~\Eq{(1)} of \Ex{xlvi}.~12; and it is therefore certainly possible to find real values of $\mu$~and~$\nu$ fulfilling our requirements. 
It will be found, on carrying out the substitution, that the integral~\Eq{(5)} assumes the form \[ H\int \frac{t\, dt}{(\alpha t^{2} + \beta)\sqrtp{\gamma t^{2} + \delta}} + K\int \frac{dt}{(\alpha t^{2} + \beta)\sqrtp{\gamma t^{2} + \delta}}. \Tag{(6)} \] The second of these integrals is rationalised by the substitution \[ \frac{t}{\sqrtp{\gamma t^{2} + \delta}} = u, \] which gives \[ \int \frac{dt}{(\alpha t^{2} + \beta) \sqrtp{\gamma t^{2} + \delta}} = \int \frac{du}{\beta + (\alpha\delta - \beta\gamma) u^{2}}. \] \PageSep{244} Finally, if we put $t = 1/u$ in the first of the integrals~\Eq{(6)}, it is transformed into an integral of the second type, and may therefore be calculated in the manner just explained, viz.\ by putting $u/\sqrtp{\gamma + \delta u^{2}} = v$, \ie\ $1/\sqrtp{\gamma t^{2} + \delta} = v$.\footnote {The method of integration explained here fails if $a/A = b/B$; but then the integral may be reduced by the substitution $ax + b = t$. For further information concerning the integration of algebraical functions see Stolz, \textit{Grundzüge der Differential- und Integralrechnung}, vol.~i, pp.~331~\textit{et~seq.}; Bromwich, \textit{Elementary Integrals} (Bowes and Bowes, 1911). An alternative method of reduction has been given by Sir~G. Greenhill: see his \textit{A Chapter in the Integral Calculus}, pp.~12 \textit{et~seq.}, and the author's tract quoted on \PageRef{p.}{236}.} \end{Remark} \begin{Examples}{L.} \Item{1.} Evaluate \[ \int \frac{dx}{x \sqrtp{x^{2} + 2x + 3}},\quad \int \frac{dx}{(x - 1) \sqrtp{x^{2} + 1}},\quad \int \frac{dx}{(x + 1) \sqrtp{1 + 2x - x^{2}}}. \] \Item{2.} Prove that \[ \int \frac{dx}{(x - p) \sqrtb{(x - p) (x - q)}} = \frac{2}{q - p} \bigsqrtp{\frac{x - q}{x - p}}. \] \Item{3.} If $ag^{2} + ch^{2} = -\nu < 0$ then \[ \int \frac{dx}{(hx + g) \sqrtp{ax^{2} + c}} = -\frac{1}{\sqrt{\nu}} \arctan\left[ \frac{\sqrtb{\nu(ax^{2} + c)}}{ch - agx} \right].
\] \Item{4.} Show that $\ds\int \frac{dx}{(x - x_{0})y}$, where $y^{2} = ax^{2} + 2bx + c$, may be expressed in one or other of the forms \[ -\frac{1}{y_{0}} \log\left| \frac{axx_{0} + b(x + x_{0}) + c + yy_{0}}{x - x_{0}} \right|,\quad \frac{1}{z_{0}} \arctan \left\{ \frac{axx_{0} + b(x + x_{0}) + c}{yz_{0}} \right\}, \] according as $ax_{0}^{2} + 2bx_{0} + c$ is positive and equal to~$y_{0}^{2}$ or negative and equal to~$-z_{0}^{2}$. \Item{5.} Show by means of the substitution $y = \sqrtp{ax^{2} + 2bx + c}/(x - p)$ that \[ \int \frac{dx}{(x - p) \sqrtp{ax^{2} + 2bx + c}} = \int \frac{dy}{\sqrtp{\lambda y^{2} - \mu}}, \] where $\lambda = ap^{2} + 2bp + c$, $\mu = ac - b^{2}$. [This method of reduction is elegant but less straightforward than that explained in \SecNo[§]{139}.] \Item{6.} Show that the integral \[ \int \frac{dx}{x \sqrtp{3x^{2} + 2x + 1}} \] is rationalised by the substitution $x = (1 + y^{2})/(3 - y^{2})$. \MathTrip{1911.} \Item{7.} Calculate \[ \int \frac{(x + 1)\, dx}{(x^{2} + 4) \sqrtp{x^{2} + 9}}. \] \PageSep{245} \Item{8.} Calculate \[ \int \frac{dx}{(5x^{2} + 12x + 8) \sqrtp{5x^{2} + 2x - 7}}. \] {\Loosen[Apply the method of \SecNo[§]{139}. The equation satisfied by $\mu$~and~$\nu$ is $\xi^{2} + 3\xi + 2 = 0$, so that $\mu = -2$, $\nu = -1$, and the appropriate substitution is $x = -(2t + 1)/(t + 1)$. This reduces the integral to} \[ -\int \frac{dt}{(4t^{2} + 1) \sqrtp{9t^{2} - 4}} - \int \frac{t\, dt}{(4t^{2} + 1) \sqrtp{9t^{2} - 4}}. \] The first of these integrals may be rationalised by putting $t/\sqrtp{9t^{2} - 4} = u$ and the second by putting $1/\sqrtp{9t^{2} - 4} = v$.] \Item{9.} Calculate \[ \int \frac{(x + 1)\, dx}{(2x^{2} - 2x + 1) \sqrtp{3x^{2} - 2x + 1}},\quad \int \frac{(x - 1)\, dx}{(2x^{2} - 6x + 5) \sqrtp{7x^{2} - 22x + 19}}. 
\] \MathTrip{1911.} \Item{10.} Show that the integral $\ds\int R(x, y)\, dx$, where $y^{2} = ax^{2} + 2bx + c$, is rationalised by the substitution $t = (x - p)/(y + q)$, where $(p, q)$~is any point on the conic $y^{2} = ax^{2} + 2bx + c$. [The integral is of course also rationalised by the substitution $t = (x - p)/(y - q)$: cf.~\SecNo[§]{134}.] \end{Examples} \Paragraph{140. Transcendental Functions.} Owing to the immense variety of the different classes of transcendental functions, the theory of their integration is a good deal less systematic than that of the integration of rational or algebraical functions. We shall consider in order a few classes of transcendental functions whose integrals can always be found. \Paragraph{141. Polynomials in cosines and sines of multiples of~$x$.} We can always integrate any function which is the sum of a finite number of terms such as \[ A\cos^{m} ax \sin^{m'} ax \cos^{n} bx \sin^{n'} bx\dots, \] where $m$,~$m'$, $n$,~$n'$,~\dots\ are positive integers and $a$,~$b$,~\dots\ any real numbers whatever. For such a term can be expressed as the sum of a finite number of terms of the types \[ \alpha\cos\{(pa + qb + \dots)x\},\quad \beta \sin\{(pa + qb + \dots)x\} \] and the integrals of these terms can be written down at once. \PageSep{246} \begin{Examples}{LI.} \Item{1.} Integrate $\sin^{3} x \cos^{2} 2x$. In this case we use the formulae \[ \sin^{3} x = \tfrac{1}{4}(3\sin x - \sin 3x),\quad \cos^{2} 2x = \tfrac{1}{2}(1 + \cos 4x). \] Multiplying these two expressions and replacing $\sin x\cos 4x$, for example, by $\frac{1}{2}(\sin 5x - \sin 3x)$, we obtain \begin{multline*} \tfrac{1}{16}\int (7\sin x - 5\sin 3x + 3\sin 5x - \sin 7x)\, dx\\ = - \tfrac{7}{16}\cos x + \tfrac{5}{48}\cos 3x - \tfrac{3}{80}\cos 5x + \tfrac{1}{112}\cos 7x. \end{multline*} The integral may of course be obtained in different forms by different methods. 
For example \[ \int \sin^{3}x \cos^{2}2x\, dx = \int(4\cos^{4}x - 4\cos^{2}x + 1) (1 - \cos^{2} x)\sin x\, dx, \] which reduces, on making the substitution $\cos x = t$, to \[ \int(4t^{6} - 8t^{4} + 5t^{2} - 1)\, dt = \tfrac{4}{7}\cos^{7} x - \tfrac{8}{5}\cos^{5}x + \tfrac{5}{3}\cos^{3}x - \cos x. \] It may be verified that this expression and that obtained above differ only by a constant. \Item{2.} Integrate by any method $\cos ax \cos bx$, $\sin ax \sin bx$, $\cos ax \sin bx$, $\cos^{2}x$, $\sin^{3}x$, $\cos^{4}x$, $\cos x \cos 2x \cos 3x$, $\cos^{3}2x \sin^{2}3x$, $\cos^{5}x \sin^{7}x$. [In cases of this kind it is sometimes convenient to use a formula of reduction (\MiscEx{VI}~39).] \end{Examples} \Paragraph{142. The integrals $\ds\int x^{n}\cos x\, dx$, $\ds\int x^{n}\sin x\, dx$ and associated integrals.} The method of integration by parts enables us to generalise the preceding results. For \begin{alignat*}{3} \int x^{n}\cos x\, dx &= & &x^{n}\sin x &&- n\int x^{n-1}\sin x\, dx,\\ \int x^{n}\sin x\, dx &= &-&x^{n}\cos x &&+ n\int x^{n-1}\cos x\, dx, \end{alignat*} and clearly the integrals can be calculated completely by a repetition of this process whenever $n$~is a positive integer. It follows that we can always calculate $\ds\int x^{n}\cos ax\, dx$ and $\ds\int x^{n}\sin ax\, dx$ if $n$~is a positive integer; and so, by a process similar to that of the preceding paragraph, we can calculate \[ \int P(x, \cos ax, \sin ax, \cos bx, \sin bx, \dots)\, dx, \] where $P$~is any polynomial. \PageSep{247} \begin{Examples}{LII.} \Item{1.} Integrate $x\sin x$, $x^{2}\cos x$, $x^{2}\cos^{2}x$, $x^{2}\sin^{2}x \sin^{2} 2x$, $x\sin^{2}x \cos^{4}x$, $x^{3}\sin^{3}\frac{1}{3}x$. \Item{2.} Find polynomials $P$~and~$Q$ such that \[ \int\{(3x - 1)\cos x + (1 - 2x)\sin x\}\, dx = P\cos x + Q\sin x. 
\] \Item{3.} Prove that $\ds\int x^{n}\cos x\, dx = P_{n}\cos x + Q_{n}\sin x$, where \[ P_{n} = nx^{n-1} - n(n - 1)(n - 2) x^{n-3} + \dots,\quad Q_{n} = x^{n} - n(n - 1) x^{n-2} + \dots. \] \end{Examples} \Paragraph{143. Rational Functions of $\cos x$ and $\sin x$.} The integral of any rational function of $\cos x$~and $\sin x$ may be calculated by the substitution $\tan \frac{1}{2}x = t$. For \[ \cos x = \frac{1 - t^{2}}{1 + t^{2}},\quad \sin x = \frac{2t}{1 + t^{2}},\quad \frac{dx}{dt} = \frac{2}{1 + t^{2}}, \] so that the substitution reduces the integral to that of a rational function of~$t$. \begin{Examples}{LIII.} \Item{1.} Prove that \[ \int \sec x\, dx = \log |\sec x + \tan x|,\quad \int \cosec x\, dx = \log |\tan \tfrac{1}{2}x|. \] [Another form of the first integral is $\log |\tan(\frac{1}{4}\pi + \frac{1}{2}x)|$; a third form is $\frac{1}{2}\log |(1 + \sin x)/(1 - \sin x)|$.] \Item{2.} $\ds\int \tan x\, dx = -\log |\cos x|$, $\ds\int \cot x\, dx = \log |\sin x|$, $\ds\int\sec^{2} x\, dx = \tan x$, $\ds\int \cosec^{2} x\, dx = -\cot x$, $\ds\int \tan x\sec x\, dx = \sec x$, $\ds\int \cot x \cosec x\, dx = -\cosec x$. [These integrals are included in the general form, but there is no need to use a substitution, as the results follow at once from \SecNo[§]{119} and equation~\Eq{(5)} of~\SecNo[§]{130}.] \Item{3.} Show that the integral of $1/(a + b\cos x)$, where $a + b$~is positive, may be expressed in one or other of the forms \[ \frac{2}{\sqrtp{a^{2} - b^{2}}} \arctan \left\{t\bigsqrtp{\frac{a - b}{a + b}}\right\},\quad \frac{1}{\sqrtp{b^{2} - a^{2}}} \log \left|\frac{\sqrtp{b + a} + t\sqrtp{b - a}} {\sqrtp{b + a} - t\sqrtp{b - a}}\right|, \] where $t = \tan\frac{1}{2}x$, according as $a^{2} > b^{2}$ or $a^{2} < b^{2}$. If $a^{2} = b^{2}$ then the integral reduces to a constant multiple of that of $\sec^{2}\frac{1}{2}x$ or $\cosec^{2}\frac{1}{2}x$, and its value may at once be written down. Deduce the forms of the integral when $a + b$ is negative. 
\Item{4.} Show that if $y$~is defined in terms of~$x$ by means of the equation \[ (a + b\cos x)(a - b\cos y) = a^{2} - b^{2}, \] where $a$~is positive and $a^{2} > b^{2}$, then as $x$~varies from $0$~to~$\pi$ one value of~$y$ also varies from $0$~to~$\pi$. Show also that \[ \sin x = \frac{\sqrtp{a^{2} - b^{2}} \sin y}{a - b\cos y},\quad \frac{\sin x}{a + b\cos x}\, \frac{dx}{dy} = \frac{\sin y}{a - b\cos y}; \] \PageSep{248} and deduce that if $0 < x < \pi$ then \[ \int \frac{dx}{a + b\cos x} = \frac{1}{\sqrtp{a^{2} - b^{2}}} \arccos \left(\frac{a\cos x + b}{a + b\cos x}\right). \] Show that this result agrees with that of Ex.~3. \Item{5.} Show how to integrate $1/(a + b\cos x + c\sin x)$. [Express $b\cos x + c\sin x$ in the form $\sqrtp{b^{2} + c^{2}} \cos(x - \alpha)$.] \Item{6.} Integrate $(a + b\cos x + c\sin x)/(\alpha + \beta\cos x + \gamma\sin x)$\Add{.} [Determine $\lambda$,~$\mu$,~$\nu$ so that \[ a + b\cos x + c\sin x = \lambda + \mu(\alpha + \beta\cos x + \gamma\sin x) + \nu(-\beta\sin x + \gamma\cos x). \] Then the integral is \[ \mu x + \nu \log |\alpha + \beta\cos x + \gamma\sin x| + \lambda \int \frac{dx}{\alpha + \beta\cos x + \gamma\sin x}.] \] \Item{7.} Integrate $1/(a\cos^{2} x + 2b\cos x\sin x + c\sin^{2} x)$. [The subject of integration may be expressed in the form $1/(A + B\cos 2x + C\sin 2x)$, where $A = \frac{1}{2}(a + c)$, $B = \frac{1}{2}(a - c)$, $C = b$: but the integral may be calculated more simply by putting $\tan x = t$, when we obtain \[ \int \frac{\sec^{2} x\, dx}{a + 2b\tan x + c\tan^{2} x} = \int \frac{dt}{a + 2bt + ct^{2}}.] \] \end{Examples} \Paragraph{144. Integrals involving $\arcsin x$, $\arctan x$, and $\log x$.} The integrals of the inverse sine and tangent and of the logarithm can easily be calculated by integration by parts. 
Thus \begin{align*} \int \arcsin x\, dx &= x\arcsin x - \int \frac{x\, dx}{\sqrtp{1 - x^{2}}} = x\arcsin x + \sqrtp{1 - x^{2}},\\ % \int \arctan x\, dx &= x\arctan x - \int \frac{x\, dx}{1 + x^{2}} = x\arctan x - \tfrac{1}{2} \log(1 + x^{2}),\\ % \int \log x\, dx &= x\log x - \int dx = x(\log x - 1). \end{align*} It is easy to see that if we can find the integral of $y = f(x)$ then we can always find that of $x = \phi(y)$, where $\phi$~is the function inverse to~$f$. For on making the substitution $y = f(x)$ we obtain \[ \int \phi(y)\, dy = \int xf'(x)\, dx = xf(x) - \int f(x)\, dx. \] The reader should evaluate the integrals of $\arcsin y$ and $\arctan y$ in this way. Integrals of the form \[ \int P(x, \arcsin x)\, dx,\quad \int P(x, \log x)\, dx, \] \PageSep{249} where $P$~is a polynomial, can always be calculated. Take the first form, for example. We have to calculate a number of integrals of the type $\ds\int x^{m} (\arcsin x)^{n}\, dx$. Making the substitution $x = \sin y$, we obtain $\ds\int y^{n}\sin^{m}y \cos y\, dy$, which can be found by the method of \SecNo[§]{142}. In the case of the second form we have to calculate a number of integrals of the type $\ds\int x^{m} (\log x)^{n}\, dx$. Integrating by parts we obtain \[ \int x^{m}(\log x)^{n}\, dx = \frac{x^{m+1} (\log x)^{n}}{m + 1} - \frac{n}{m + 1} \int x^{m}(\log x)^{n-1}\, dx, \] and it is evident that by repeating this process often enough we shall always arrive finally at the complete value of the integral. \Paragraph{145. Areas of plane curves.} One of the most important applications of the processes of integration which have been explained in the preceding sections is to the calculation of \emph{areas} of plane curves. Suppose that $P_{0}PP'$ (\Fig{44}) is the graph of a continuous curve $y = \phi(x)$ which lies wholly above the axis of~$x$, $P$~being the point $(x, y)$ and $P'$~the point $(x + h, y + k)$, and $h$~being either positive or negative (positive in the figure). %[Illustration: Fig. 
%[Illustration: Fig. 44a.]
%[Illustration: Fig. 44.]
\Figures{2.25in}{44}{p249a}{1.5in}{44a}{p249b} The reader is of course familiar with the idea of an `area', and in particular with that of an area such as~$ONPP_{0}$. This idea we shall at present take for granted. It is indeed one which needs and has received the most careful mathematical analysis: later on we shall return to it and explain precisely what is meant by \PageSep{250} ascribing an `area' to such a region of space as~$ONPP_{0}$. For the present we shall simply assume that any such region has associated with it a definite positive number $(ONPP_{0})$ which we call its area, and that these areas possess the obvious properties indicated by common sense, \eg\ that \[ (PRP') + (NN'RP) = (NN'P'P),\quad (N_{1}NPP_{1}) < (ONPP_{0}), \] and so on. Taking all this for granted it is obvious that the area $ONPP_{0}$ is a function of~$x$; we denote it by~$\Phi(x)$. Also $\Phi(x)$~is a \emph{continuous} function. For \begin{align*} \Phi(x + h) - \Phi(x) &= (NN'P'P)\\ &= (NN'RP) + (PRP') = h\phi(x) + (PRP'). \end{align*} As the figure is drawn, the area~$PRP'$ is less than~$hk$. This is not however necessarily true in general, because it is not necessarily the case (see for example \Fig{44a}) that the arc~$PP'$ should rise or fall steadily from $P$ to~$P'$. But the area~$PRP'$ is always less than~$|h|\lambda(h)$, where $\lambda(h)$~is the greatest distance of any point of the arc~$PP'$ from~$PR$. Moreover, since $\phi(x)$~is a continuous function, $\lambda(h) \to 0$ as $h \to 0$. Thus we have \[ \Phi(x + h) - \Phi(x) = h\{\phi(x) + \mu(h)\}, \] where $|\mu(h)| < \lambda(h)$ and $\lambda(h) \to 0$ as $h \to 0$. From this it follows at once that $\Phi(x)$~is continuous. Moreover \[ \Phi'(x) = \lim_{h \to 0} \frac{\Phi(x + h) - \Phi(x)}{h} = \lim_{h \to 0} \{\phi(x) + \mu(h)\} = \phi(x). \] Thus \emph{the ordinate of the curve is the derivative of the area, and the area is the integral of the ordinate}.
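The conclusion that $\Phi'(x) = \phi(x)$ lends itself to a quick numerical illustration. The following sketch is a modern aside, not part of the original text: the ordinate $\phi(x) = x^{2} + 1$ is an arbitrary positive choice; the area is approximated by a midpoint sum and then differentiated by a central difference.

```python
import math

def phi(x):
    # a sample continuous, positive ordinate (arbitrary choice)
    return x * x + 1.0

def Phi(x, n=20000):
    # area under the curve from 0 to x, approximated by the midpoint rule
    dx = x / n
    return sum(phi((i + 0.5) * dx) for i in range(n)) * dx

x0, h = 1.3, 1e-4
# central difference of the accumulated area recovers the ordinate
deriv = (Phi(x0 + h) - Phi(x0 - h)) / (2 * h)
assert abs(deriv - phi(x0)) < 1e-3
```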
We are thus able to formulate a rule for determining the area~$ONPP_{0}$. \emph{Calculate $\Phi(x)$, the integral of~$\phi(x)$. This involves an arbitrary constant, which we suppose so chosen that $\Phi(0) = 0$. Then the area required is~$\Phi(x)$.} \begin{Remark} If it were the area~$N_{1}NPP_{1}$ which was wanted, we should of course determine the constant so that $\Phi(x_{1}) = 0$, where $x_{1}$~is the abscissa of~$P_{1}$. If the curve lay below the axis of~$x$, $\Phi(x)$~would be negative, and the area would be the absolute value of~$\Phi(x)$. \end{Remark} \PageSep{251} \Paragraph{146. Lengths of plane curves.} The notion of the length of a curve, other than a straight line, is in reality a more difficult one even than that of an area. In fact the assumption that $P_{0}P$ (\Fig{44}) has a definite length, which we may denote by~$S(x)$, does not suffice for our purposes, as did the corresponding assumption about areas. We cannot even prove that $S(x)$~is continuous, \ie\ that $\lim\{S(P') - S(P)\} = 0$. This looks obvious enough in the larger figure, but less so in such a case as is shown in the smaller figure. Indeed it is not possible to proceed further, with any degree of rigour, without a careful analysis of precisely what is meant by the length of a curve. It is however easy to see what the \emph{formula} must be. Let us suppose that the curve has a tangent whose direction varies continuously, so that $\phi'(x)$~is continuous. Then the assumption that the curve has a length leads to the equation \[ \{S(x + h) - S(x)\}/h = \{PP'\}/h = (PP'/h) × (\{PP'\}/PP'), \] where $\{PP'\}$~is the arc whose chord is~$PP'$. Now \[ PP' = \sqrtp{PR^{2} + RP'^{2}} = h\bigsqrtp{1 + \frac{k^{2}}{h^{2}}}, \] and \[ k = \phi(x + h) - \phi(x) = h\phi'(\xi), \] where $\xi$~lies between $x$ and~$x + h$. Hence \[ \lim (PP'/h) = \lim \sqrtb{1 + [\phi'(\xi)]^{2}} = \sqrtb{1 + [\phi'(x)]^{2}}.
\] If also we assume that \[ \lim \{PP'\}/PP' = 1, \] we obtain the result \[ S'(x) = \lim \{S(x + h) - S(x)\}/h = \sqrtb{1 + [\phi'(x)]^{2}} \] and so \[ S(x) = \int \sqrtb{1 + [\phi'(x)]^{2}}\, dx. \] \begin{Examples}{LIV.} \Item{1.} Calculate the area of the segment cut off from the parabola $y = x^{2}/4a$ by the ordinate $x = \xi$, and the length of the arc which bounds it. \Item{2.} Answer the same questions for the curve $ay^{2} = x^{3}$, showing that the length of the arc is \[ \frac{8a}{27} \left\{\left(1 + \frac{9\xi}{4a}\right)^{3/2} - 1\right\}. \] \Item{3.} Calculate the areas and lengths of the circles $x^{2} + y^{2} = a^{2}$, $x^{2} + y^{2} = 2ax$ by means of the formulae of \SecNo[§§]{145}--\SecNo{146}. \PageSep{252} \Item{4.} Show that the area of the ellipse $(x^{2}/a^{2}) + (y^{2}/b^{2}) = 1$ is~$\pi ab$. \Item{5.} Find the area bounded by the curve $y = \sin x$ and the segment of the axis of~$x$ from $x = 0$ to $x = 2\pi$. [Here $\Phi(x) = -\cos x$, and the difference between the values of $-\cos x$ for $x = 0$ and $x = 2\pi$ is zero. The explanation of this is of course that between $x = \pi$ and $x = 2\pi$ the curve lies below the axis of~$x$, and so the corresponding part of the area is counted negative in applying the method. The area from $x = 0$ to $x = \pi$ is $-\cos \pi + \cos 0 = 2$; and the whole area required, when every part is counted positive, is twice this, \ie\ is~$4$.] \Item{6.} Suppose that the coordinates of any point on a curve are expressed as functions of a parameter~$t$ by equations of the type $x = \phi(t)$, $y = \psi(t)$, $\phi$~and~$\psi$ being functions of~$t$ with continuous derivatives. 
Prove that if $x$~steadily increases as $t$~varies from $t_{0}$ to~$t_{1}$, then the area of the region bounded by the corresponding portion of the curve, the axis of~$x$, and the two ordinates corresponding to $t_{0}$ and~$t_{1}$, is, apart from sign, $A(t_{1}) - A(t_{0})$, where \[ A(t) = \int \psi(t)\phi'(t)\, dt = \int y \frac{dx}{dt}\, dt. \] \Item{7.} Suppose that $C$~is a closed curve formed of a single loop and not met by any parallel to either axis in more than two points. And suppose that the coordinates of any point~$P$ on the curve can be expressed as in Ex.~6 in terms of~$t$, and that, as $t$~varies from $t_{0}$ to~$t_{1}$, $P$~moves in the same direction round the curve and returns after a single circuit to its original position. Show that the area of the loop is equal to the difference of the initial and final values of any one of the integrals \[ -\int y \frac{dx}{dt}\, dt,\quad \int x \frac{dy}{dt}\, dt,\quad \tfrac{1}{2} \int \left(x \frac{dy}{dt} - y \frac{dx}{dt}\right) dt, \] this difference being of course taken positively. \Item{8.} Apply the result of Ex.~7 to determine the areas of the curves given by \[ \Itemp{(i)} \frac{x}{a} = \frac{1 - t^{2}}{1 + t^{2}},\quad \frac{y}{a} = \frac{2t}{1 + t^{2}},\qquad \Itemp{(ii)} x = a\cos^{3} t,\quad y = b\sin^{3} t. \] \Item{9.} Find the area of the loop of the curve $x^{3} + y^{3} = 3axy$. [Putting $y = tx$ we obtain $x = 3at/(1 + t^{3})$, $y = 3at^{2}/(1 + t^{3})$. As $t$~varies from~$0$ towards~$\infty$ the loop is described once. Also \[ \tfrac{1}{2} \int \left(y \frac{dx}{dt} - x \frac{dy}{dt}\right)\, dt = -\tfrac{1}{2} \int x^{2} \frac{d}{dt}\left(\frac{y}{x}\right)\, dt = -\tfrac{1}{2} \int \frac{9a^{2}t^{2}}{(1 + t^{3})^{2}}\, dt = \frac{3a^{2}}{2(1 + t^{3})}, \] which tends to~$0$ as $t \to \infty$. Thus the area of the loop is~$\frac{3}{2}a^{2}$.] \Item{10.} Find the area of the loop of the curve $x^{5} + y^{5} = 5ax^{2}y^{2}$. 
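The loop area of Ex.~9 can be checked numerically against the parametric formula of Ex.~7. This sketch is a modern aside, not part of the original text: it takes $a = 1$, uses the derivatives of $x = 3at/(1 + t^{3})$ and $y = 3at^{2}/(1 + t^{3})$, and evaluates $\frac{1}{2}\int(x\,dy/dt - y\,dx/dt)\,dt$ by a midpoint sum; the cut-off $T = 300$ is an arbitrary large value standing in for $\infty$.

```python
a = 1.0  # arbitrary scale

def x(t): return 3 * a * t / (1 + t**3)
def y(t): return 3 * a * t**2 / (1 + t**3)

# derivatives of the parametrisation, obtained by the quotient rule
def dxdt(t): return 3 * a * (1 - 2 * t**3) / (1 + t**3)**2
def dydt(t): return 3 * a * t * (2 - t**3) / (1 + t**3)**2

# (1/2) * integral of (x dy/dt - y dx/dt) over the loop, midpoint rule
T, n = 300.0, 30000
dt = T / n
area = 0.5 * sum(
    (x(tm) * dydt(tm) - y(tm) * dxdt(tm)) * dt
    for i in range(n)
    for tm in [(i + 0.5) * dt]
)
assert abs(area - 1.5 * a**2) < 1e-3  # Ex. 9 gives the loop area 3a^2/2
```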
\Item{11.} Prove that the area of a loop of the curve $x = a\sin 2t$, $y = a\sin t$ is~$\frac{4}{3}a^{2}$. \MathTrip{1908.} \PageSep{253} \Item{12.} The arc of the ellipse given by $x = a\cos t$, $y = b\sin t$, between the points $t = t_{1}$ and $t = t_{2}$, is $F(t_{2}) - F(t_{1})$, where \[ F(t) = a\int \sqrtp{1 - e^{2}\sin^{2} t}\, dt, \] $e$~being the eccentricity. [This integral cannot however be evaluated in terms of such functions as are at present at our disposal.] \Item{13.} \Topic{Polar coordinates.} Show that the area bounded by the curve $r = f(\theta)$, where $f(\theta)$~is a one-valued function of~$\theta$, and the radii $\theta = \theta_{1}$, $\theta = \theta_{2}$, is $F(\theta_{2}) - F(\theta_{1})$, where $\ds F(\theta) = \tfrac{1}{2} \int r^{2}\, d\theta$. And the length of the corresponding arc of the curve is $\Phi(\theta_{2}) - \Phi(\theta_{1})$, where \[ \Phi(\theta) = \bigint \bigsqrtb{r^{2} + \biggl(\frac{dr}{d\theta}\biggr)^{2}}\, d\theta. \] Hence determine (i)~the area and perimeter of the circle $r = 2a\sin\theta$; (ii)~the area between the parabola $r = \frac{1}{2}l\sec^{2} \frac{1}{2}\theta$ and its latus rectum, and the length of the corresponding arc of the parabola; (iii)~the area of the limaçon $r = a + b\cos\theta$, distinguishing the cases in which $a > b$, $a = b$, and $a < b$; and (iv)~the areas of the ellipses $1/r^{2} = a\cos^{2} \theta + 2h\cos\theta\sin\theta + b\sin^{2} \theta$ and $l/r = 1 + e\cos\theta$. [In the last case we are led to the integral $\ds \int \frac{d\theta}{(1 + e\cos\theta)^{2}}$, which may be calculated (cf.\ \Ex{liii}.~4) by the help of the substitution \[ (1 + e\cos\theta) (1 - e\cos\phi) = 1 - e^{2}.] \] \Item{14.} Trace the curve $2\theta = (a/r) + (r/a)$, and show that the area bounded by the radius vector $\theta = \beta$, and the two branches which touch at the point $r = a$, $\theta = 1$, is $\frac{2}{3} a^{2}(\beta^{2} - 1)^{3/2}$. 
\MathTrip{1900.} \Item{15.} A curve is given by an equation $p = f(r)$, $r$~being the radius vector and $p$~the perpendicular from the origin on to the tangent. Show that the calculation of the area of the region bounded by an arc of the curve and two radii vectores depends upon that of the integral $\frac{1}{2} \ds \int \frac{pr\, dr}{\sqrtp{r^{2} - p^{2}}}$. \end{Examples} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER VI.} \begin{Examples}{} \Item{1.} A function~$f(x)$ is defined as being equal to $1 + x$ when $x \leq 0$, to~$x$ when $0 < x < 1$, to $2 - x$ when $1 \leq x \leq 2$, and to $3x - x^{2}$ when $x > 2$. Discuss the continuity of~$f(x)$ and the existence and continuity of~$f'(x)$ for $x = 0$, $x = 1$, and $x = 2$. \MathTrip{1908.} \Item{2.} Denoting $a$, $ax + b$, $ax^{2} + 2bx + c$,~\dots\ by $u_{0}$,~$u_{1}$, $u_{2}$,~\dots, show that $u_{0}^{2} u_{3} - 3u_{0} u_{1} u_{2} + 2u_{1}^{3}$ and $u_{0} u_{4} - 4u_{1} u_{3} + 3u_{2}^{2}$ are independent of~$x$. \PageSep{254} \Item{3.} If $a_{0}$, $a_{1}$,~\dots, $a_{2n}$ are constants and $U_{r} = (a_{0}, a_{1}, \dots, a_{r} \btw x, 1)^{r}$, then \[ U_{0}U_{2n} - 2nU_{1}U_{2n-1} + \frac{2n(2n - 1)}{1·2} U_{2}U_{2n-2} - \dots + U_{2n}U_{0} \] is independent of~$x$. \MathTrip{1896.} [Differentiate and use the relation $U_{r}' = rU_{r-1}$.] \Item{4.} The first three derivatives of the function $\arcsin(\mu\sin x) - x$, where $\mu > 1$, are positive when $0 \leq x \leq \frac{1}{2} \pi$. \Item{5.} The constituents of a determinant are functions of~$x$. Show that its differential coefficient is the sum of the determinants formed by differentiating the constituents of one row only, leaving the rest unaltered. 
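The row-by-row differentiation rule of Ex.~5 is easy to test numerically in a small case. The sketch below is a modern aside, not part of the original text: the constituents $\sin x$, $x^{2}$, $e^{x}$, $\cos x$ of a $2 \times 2$ determinant are arbitrary choices.

```python
import math

# constituents of a 2x2 determinant and their derivatives (arbitrary sample)
fs = [math.sin, lambda x: x * x, math.exp, math.cos]
dfs = [math.cos, lambda x: 2 * x, math.exp, lambda x: -math.sin(x)]

def det(a, b, c, d):
    return a * d - b * c

def D(x):
    f, g, p, q = (fn(x) for fn in fs)
    return det(f, g, p, q)

x0, h = 0.7, 1e-6
# direct central-difference derivative of the determinant
numeric = (D(x0 + h) - D(x0 - h)) / (2 * h)
f, g, p, q = (fn(x0) for fn in fs)
f1, g1, p1, q1 = (fn(x0) for fn in dfs)
# sum of determinants with one row differentiated, as in Ex. 5
row_rule = det(f1, g1, p, q) + det(f, g, p1, q1)
assert abs(numeric - row_rule) < 1e-6
```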
\Item{6.} If $f_{1}$, $f_{2}$, $f_{3}$, $f_{4}$ are polynomials of degree not greater than~$4$, then \[ \begin{vmatrix} f_{1}& f_{2}& f_{3}& f_{4}\\ f_{1}'& f_{2}'& f_{3}'& f_{4}'\\ f_{1}''& f_{2}''& f_{3}''& f_{4}''\\ f_{1}'''& f_{2}'''& f_{3}'''& f_{4}''' \end{vmatrix} \] is also a polynomial of degree not greater than~$4$. [Differentiate five times, using the result of Ex.~5, and rejecting vanishing determinants.] \Item{7.} If $y^{3} + 3yx + 2x^{3} = 0$ then $x^{2}(1 + x^{3})y'' - \frac{3}{2}xy' + y = 0$. \MathTrip{1903.} \Item{8.} Verify that the differential equation $y = \phi\{\psi(y_{1})\} + \phi\{x - \psi(y_{1})\}$, where $y_{1}$~is the derivative of~$y$, and $\psi$~is the function inverse to~$\phi'$, is satisfied by $y = \phi(c) + \phi(x - c)$ or by $y = 2\phi(\frac{1}{2}x)$. \Item{9.} Verify that the differential equation $y = \{x/\psi(y_{1})\} \phi\{\psi(y_{1})\}$, where the notation is the same as that of Ex.~8, is satisfied by $y = c\phi(x/c)$ or by $y = \beta x$, where $\beta = \phi(\alpha)/\alpha$ and $\alpha$~is any root of the equation $\phi(\alpha) - \alpha\phi'(\alpha) = 0$. \Item{10.} If $ax + by + c = 0$ then $y_{2} = 0$ (suffixes denoting differentiations with respect to~$x$). We may express this by saying that \emph{the general differential equation of all straight lines is $y_{2} = 0$}. Find the general differential equations of (i)~all circles with their centres on the axis of~$x$, (ii)~all parabolas with their axes along the axis of~$x$, (iii)~all parabolas with their axes parallel to the axis of~$y$, (iv)~all circles, (v)~all parabolas, (vi)~all conics. [The equations are (i)~$1 + y_{1}^{2} + yy_{2} = 0$, (ii)~$y_{1}^{2} + yy_{2} = 0$, (iii)~$y_{3} = 0$, (iv)~$(1 + y_{1}^{2}) y_{3} = 3y_{1} y_{2}^{2}$, (v)~$5y_{3}^{2} = 3y_{2} y_{4}$, (vi)~$9y_{2}^{2} y_{5} - 45y_{2} y_{3} y_{4} + 40y_{3}^{3} = 0$. 
In each case we have only to write down the general equation of the curves in question, and differentiate until we have enough equations to eliminate all the arbitrary constants.] \Item{11.} Show that the general differential equations of all parabolas and of all conics are respectively \[ D_{x}^{2} (y_{2}^{-2/3}) = 0,\quad D_{x}^{3} (y_{2}^{-2/3}) = 0. \] \PageSep{255} [The equation of a conic may be put in the form \[ y = ax + b ± \sqrtp{px^{2} + 2qx + r}. \] From this we deduce \[ y_{2} = ±(pr - q^{2})/(px^{2} + 2qx + r)^{3/2}. \] If the conic is a parabola then $p = 0$.] \Item{12.} Denoting $\dfrac{dy}{dx}$, $\dfrac{1}{2!}\, \dfrac{d^{2}y}{dx^{2}}$, $\dfrac{1}{3!}\, \dfrac{d^{3}y}{dx^{3}}$, $\dfrac{1}{4!}\, \dfrac{d^{4}y}{dx^{4}}$,~\dots\ by $t$, $a$, $b$, $c$,~\dots\ and $\dfrac{dx}{dy}$, $\dfrac{1}{2!}\, \dfrac{d^{2}x}{dy^{2}}$, $\dfrac{1}{3!}\, \dfrac{d^{3}x}{dy^{3}}$, $\dfrac{1}{4!}\, \dfrac{d^{4}x}{dy^{4}}$,~\dots\ by $\tau$, $\alpha$, $\beta$, $\gamma$,~\dots, show that \[ 4ac - 5b^{2} = (4\alpha\gamma - 5\beta^{2})/\tau^{8},\quad bt - a^{2} = - (\beta\tau - \alpha^{2})/\tau^{6}. \] Establish similar formulae for the functions $a^{2}d - 3abc - 2b^{3}$, $(1 + t^{2})b - 2a^{2}t$, $2ct - 5ab$. \Item{13.} Prove that, if $y_{k}$~is the $k$th~derivative of $y = \sin(n\arcsin x)$, then \[ (1 - x^{2})y_{k+2} - (2k + 1)xy_{k+1} + (n^{2} - k^{2})y_{k} = 0. \] [Prove first when $k = 0$, and differentiate $k$~times by Leibniz' Theorem.] \Item{14.} Prove the formula \[ vD_{x}^{n}u = D_{x}^{n}(uv) - nD_{x}^{n-1}(uD_{x}v) + \frac{n(n - 1)}{1·2} D_{x}^{n-2}(uD_{x}^{2}v) - \dots \] where $n$~is any positive integer. [Use the method of induction.] \Item{15.} A curve is given by \[ x = a(2\cos t + \cos 2t),\quad y = a(2\sin t - \sin 2t). 
\] Prove (i)~that the equations of the tangent and normal, at the point~$P$ whose parameter is~$t$, are \[ x\sin \tfrac{1}{2} t + y\cos \tfrac{1}{2} t = a\sin \tfrac{3}{2} t,\quad x\cos \tfrac{1}{2} t - y\sin \tfrac{1}{2} t = 3a\cos \tfrac{3}{2} t; \] (ii)~that the tangent at~$P$ meets the curve in the points $Q$,~$R$ whose parameters are $-\frac{1}{2} t$ and $\pi - \frac{1}{2} t$; (iii)~that $QR = 4a$; (iv)~that the tangents at $Q$ and~$R$ are at right angles and intersect on the circle $x^{2} + y^{2} = a^{2}$; (v)~that the normals at $P$,~$Q$, and~$R$ are concurrent and intersect on the circle $x^{2} + y^{2} = 9a^{2}$; (vi)~that the equation of the curve is \[ (x^{2} + y^{2} + 12ax + 9a^{2})^{2} = 4a(2x + 3a)^{3}. \] Sketch the form of the curve. \Item{16.} Show that the equations which define the curve of Ex.~15 may be replaced by $\xi/a = 2u + (1/u^{2})$, $\eta/a = (2/u) + u^{2}$, where $\xi = x + yi$, $\eta = x - yi$, $u = \Cis t$. Show that the tangent and normal, at the point defined by~$u$, are \[ u^{2}\xi - u\eta = a(u^{3} - 1),\quad u^{2}\xi + u\eta = 3a(u^{3} + 1), \] and deduce the properties (ii)--(v) of Ex.~15. \Item{17.} Show that the condition that $x^{4} + 4px^{3} - 4qx - 1 = 0$ should have equal roots may be expressed in the form $(p + q)^{2/3} - (p - q)^{2/3} = 1$. \MathTrip{1898.} \PageSep{256} \Item{18.} The roots of a cubic $f(x) = 0$ are $\alpha$,~$\beta$,~$\gamma$ in ascending order of magnitude. Show that if $\DPmod{(\alpha, \beta)}{[\alpha, \beta]}$ and~$\DPmod{(\beta, \gamma)}{[\beta, \gamma]}$ are each divided into six equal sub-intervals, then a root of $f'(x) = 0$ will fall in the fourth interval from~$\beta$ on each side. What will be the nature of the cubic in the two cases when a root of $f'(x) = 0$ falls at a point of division? 
\MathTrip{1907.} \Item{19.} {\Loosen Investigate the maxima and minima of~$f(x)$, and the real roots of $f(x) = 0$, $f(x)$~being either of the functions} \[ x - \sin x - \tan\alpha (1 - \cos x),\quad x - \sin x - (\alpha - \sin\alpha) - \tan \tfrac{1}{2}\alpha (\cos\alpha - \cos x), \] and $\alpha$~an angle between $0$~and~$\pi$. Show that in the first case the condition for a double root is that $\tan\alpha - \alpha$ should be a multiple of~$\pi$. \Item{20.} {\Loosen Show that by choice of the ratio~$\lambda : \mu$ we can make the roots of $\lambda(ax^{2} + bx + c) + \mu(a'x^{2} + b'x + c') = 0$ real and having a difference of any magnitude, unless the roots of the two quadratics are all real and interlace; and that in the excepted case the roots are always real, but there is a lower limit for the magnitude of their difference. \MathTrip{1895.}} [Consider the form of the graph of the function $(ax^{2} + bx + c)/(a'x^{2} + b'x + c')$: cf.\ \Exs{xlvi}.\ 12~\textit{et~seq.}] \Item{21.} Prove that \[ \pi < \frac{\sin \pi x}{x(1 - x)} \leq 4 \] when $0 < x < 1$, and draw the graph of the function. \Item{22.} Draw the graph of the function \[ \pi \cot\pi x - \frac{1}{x} - \frac{1}{x - 1}. \] \Item{23.} Sketch the general form of the graph of~$y$, given that \[ \frac{dy}{dx} = \frac{(6x^{2} + x - 1) (x - 1)^{2} (x + 1)^{3}}{x^{2}}. \] \MathTrip{1908.} \Item{24.} A sheet of paper is folded over so that one corner just reaches the opposite side. Show how the paper must be folded to make the length of the crease a maximum. \Item{25.} The greatest acute angle at which the ellipse $(x^{2}/a^{2}) + (y^{2}/b^{2}) = 1$ can be cut by a concentric circle is $\arctan\{(a^{2} - b^{2})/2ab\}$. \MathTrip{1900.} \Item{26.} In a triangle the area~$\Delta$ and the semi-perimeter~$s$ are fixed. Show that any maximum or minimum of one of the sides is a root of the equation $s(x - s) x^{2} + 4\Delta^{2} = 0$. 
Discuss the reality of the roots of this equation, and whether they correspond to maxima or minima. [The equations $a + b + c = 2s$, $s(s - a)(s - b)(s - c) = \Delta^{2}$ determine $a$~and~$b$ as functions of~$c$. Differentiate with respect to~$c$, and suppose that $da/dc = 0$. It will be found that $b = c$, $s - b = s - c = \frac{1}{2} a$, from which we deduce that $s(a - s)a^{2} + 4\Delta^{2} = 0$. \PageSep{257} This equation has three real roots if $s^{4} > 27\Delta^{2}$, and one in the contrary case. In an equilateral triangle (the triangle of minimum perimeter for a given area) $s^{4} = 27\Delta^{2}$; thus it is impossible that $s^{4} < 27\Delta^{2}$. Hence the equation in~$a$ has three real roots, and, since their sum is positive and their product negative, two roots are positive and the third negative. Of the two positive roots one corresponds to a maximum and one to a minimum.] \Item{27.} The area of the greatest equilateral triangle which can be drawn with its sides passing through three given points $A$,~$B$,~$C$ is \[ 2\Delta + \frac{a^{2} + b^{2} + c^{2}}{2\sqrt{3}}, \] $a$,~$b$,~$c$ being the sides and $\Delta$~the area of~$ABC$. \MathTrip{1899.} \Item{28.} If $\Delta$,~$\Delta'$ are the areas of the two maximum isosceles triangles which can be described with their vertices at the origin and their base angles on the cardioid $r = a(1 + \cos\theta)$, then $256\Delta\Delta' = 25a^{4}\sqrt{5}$. \MathTrip{1907.} \Item{29.} Find the limiting values which $(x^{2} - 4y + 8)/(y^{2} - 6x + 3)$ approaches as the point~$(x, y)$ on the curve $x^{2}y - 4x^{2} - 4xy + y^{2} + 16x - 2y - 7 = 0$ approaches the position~$(2, 3)$. \MathTrip{1903.} {\Loosen[If we take $(2, 3)$ as a new origin, the equation of the curve becomes $\xi^{2} \eta - \xi^{2} + \eta^{2} = 0$, and the function given becomes $(\xi^{2} + 4\xi - 4\eta)/(\eta^{2} + 6\eta - 6\xi)$. If we put $\eta = t\xi$, we obtain $\xi = (1 - t^{2})/t$, $\eta = 1 - t^{2}$. 
The curve has a loop branching at the origin, which corresponds to the two values $t = -1$ and $t= 1$. Expressing the given function in terms of~$t$, and making $t$~tend to $-1$~or~$1$, we obtain the limiting values $-\frac{3}{2}$,~$-\frac{2}{3}$.]} \Item{30.} If %[** TN: Displayed in the original] $f(x) = \dfrac{1}{\sin x - \sin a} - \dfrac{1}{(x - a)\cos a}$, then \[ \frac{d}{da}\{\lim_{x \to a} f(x)\} - \lim_{x \to a}f'(x) = \tfrac{3}{4} \sec^{3} a - \tfrac{5}{12} \sec a. \] \longpage\MathTrip{1896.} \Item{31.} Show that if $\phi(x) = 1/(1 + x^{2})$ then $\phi^{n} (x) = Q_{n}(x)/(1 + x^{2})^{n+1}$, where $Q_{n}(x)$~is a polynomial of degree~$n$. Show also that \Itemp{(i)} $Q_{n+1} = (1 + x^{2}) Q_{n}' - 2(n + 1) x Q_{n}$, \Itemp{(ii)} $Q_{n+2} + 2(n + 2) x Q_{n+1} + (n + 2)(n + 1)(1 + x^{2})Q_{n} = 0$, \Itemp{(iii)} $(1 + x^{2}) Q_{n}'' - 2nx Q_{n}' + n(n + 1)Q_{n} = 0$, \Itemp{(iv)} $Q_{n} = (-1)^{n} n!\left\{(n + 1)x^{n} - \dfrac{(n + 1)n(n - 1)}{3!} x^{n-2} + \dots\right\}$, \Itemp{(v)} all the roots of $Q_{n} = 0$ are real and separated by those of $Q_{n-1} = 0$. \Item{32.} If $f(x)$, $\phi(x)$, $\psi(x)$ have derivatives when $a \leq x \leq b$, then there is a value of~$\xi$ lying between $a$~and~$b$ and such that \[ \begin{vmatrix} f(a) & \phi(a) & \psi(a)\\ f(b) & \phi(b) & \psi(b)\\ f'(\xi)& \phi'(\xi)& \psi'(\xi) \end{vmatrix} =0. \] \PageSep{258} [Consider the function formed by replacing the constituents of the third row by $f(x)$,~$\phi(x)$,~$\psi(x)$. This theorem reduces to the Mean Value Theorem (\SecNo[§]{125}) when $\phi(x) = x$ and $\psi(x) = 1$.] \Item{33.} Deduce from Ex.~32 the formula \[ \frac{f(b) - f(a)}{\phi(b) - \phi(a)} = \frac{f'(\xi)}{\phi'(\xi)}\Add{.} \] \Item{34.} If $\phi'(x) \to a$ as $x \to \infty$, then $\phi(x)/x \to a$. If $\phi'(x) \to \infty$ then $\phi(x) \to \infty$. [Use the formula $\phi(x) - \phi(x_{0}) = (x - x_{0})\phi'(\xi)$, where $x_{0} < \xi < x$.] 
\Item{35.} If $\phi(x) \to a$ as $x \to \infty$, then $\phi'(x)$~cannot tend to any limit other than zero. \Item{36.} If $\phi(x) + \phi'(x) \to a$ as $x \to \infty$, then $\phi(x) \to a$ and $\phi'(x) \to 0$. [Let $\phi(x) = a + \psi(x)$, so that $\psi(x) + \psi'(x) \to 0$. If $\psi'(x)$~is of constant sign, say positive, for all sufficiently large values of~$x$, then $\psi(x)$~steadily increases and must tend to a limit~$l$ or to~$\infty$. If $\psi(x) \to \infty$ then $\psi'(x) \to -\infty$, which contradicts our hypothesis. If $\psi(x) \to l$ then $\psi'(x) \to -l$, and this is impossible (Ex.~35) unless $l = 0$. Similarly we may dispose of the case in which $\psi'(x)$~is ultimately negative. If $\psi(x)$~changes sign for values of~$x$ which surpass all limit, then these are the maxima and minima of~$\psi(x)$. If $x$~has a large value corresponding to a maximum or minimum of~$\psi(x)$, then $\psi(x) + \psi'(x)$ is small and $\psi'(x) = 0$, so that $\psi(x)$~is small. \textit{A~fortiori} are the other values of~$\psi(x)$ small when $x$~is large. For generalisations of this theorem, and alternative lines of proof, see a paper by the author entitled ``Generalisations of a limit theorem of Mr~Mercer,'' in volume~43 of the \textit{Quarterly Journal of Mathematics}. The simple proof sketched above was suggested by Prof.~E.~W. Hobson.] \Item{37.} Show how to reduce $\ds\int R\left\{x, \bigsqrtp{\frac{ax + b}{mx + n}}, \bigsqrtp{\frac{cx + d}{mx + n}}\right\} dx$ to the integral of a rational function. [Put $mx + n = 1/t$ and use \Ex{xlix}.~13.] 
\Item{38.} Calculate the integrals: \begin{gather*} \int \frac{dx}{(1 + x^{2})^{3}},\quad \int \bigsqrtp{\frac{x - 1}{x + 1}}\, \frac{dx}{x},\quad \int \frac{x\, dx}{\sqrtp{1 + x} - \sqrtp[3]{1 + x}},\displaybreak[1]\\ % \int \bigsqrtb{a^{2} + \bigsqrtp{b^{2} + \frac{c}{x}}}\, dx,\quad \int \cosec^{3}x\, dx,\quad \int \frac{5\cos x + 6}{2\cos x + \sin x + 3}\, dx,\displaybreak[1]\\ % \int \frac{dx}{(2 - \sin^{2}x) (2 + \sin x - \sin^{2} x)},\quad \int \frac{\cos x\sin x \, dx}{\cos^{4}x + \sin^{4}x},\quad \int \cosec x \sqrtp{\sec 2x}\, dx,\displaybreak[1]\\ % %[** TN: Slightly wide, but visually harmless] \int \frac{dx}{\sqrtb{(1 + \sin x) (2 + \sin x)}},\quad \int \frac{x + \sin x}{1 + \cos x}\, dx,\quad \int \arcsec x\, dx,\quad \int (\arcsin x)^{2}\, dx,\displaybreak[1]\\ % \int x\arcsin x\, dx,\quad \int \frac{x\arcsin x}{\sqrtp{1 - x^{2}}}\, dx,\quad \int \frac{\arcsin x}{x^{3}}\, dx,\quad \int \frac{\arcsin x}{(1 + x)^{2}}\, dx,\displaybreak[1]\\ % %[** TN: Slightly wide, but visually harmless] \int \frac{\arctan x}{x^{2}}\, dx,\quad \int \frac{\arctan x}{(1 + x^{2})^{3/2}}\, dx,\quad \int \frac{\log(\alpha^{2} + \beta^{2}x^{2})}{x^{2}}\, dx,\quad \int \frac{\log(\alpha + \beta x)}{(a + bx)^{2}}\, dx. \end{gather*} \PageSep{259} \Item{39.} \Topic{Formulae of reduction.} \Itemp{(i)} Show that \begin{multline*} %[** TN: Re-broken] 2(n - 1)(q - \tfrac{1}{4}p^{2}) \int \frac{dx}{(x^{2} + px + q)^{n}} \\ = \frac{x + \frac{1}{2}p}{(x^{2} + px + q)^{n-1}} + (2n - 3) \int \frac{dx}{(x^{2} + px + q)^{n-1}}. 
\end{multline*} [Put $x + \frac{1}{2}p = t$, $q - \frac{1}{4}p^{2} = \lambda$: then we obtain \begin{align*} \int \frac{dt}{(t^{2} + \lambda)^{n}} &= \frac{1}{\lambda} \int \frac{dt}{(t^{2} + \lambda)^{n-1}} - \frac{1}{\lambda} \int \frac{t^{2}\, dt}{(t^{2} + \lambda)^{n}} \\ % &= \frac{1}{\lambda} \int \frac{dt}{(t^{2} + \lambda)^{n-1}} + \frac{1}{2\lambda(n-1)} \int t \frac{d}{dt} \left\{\frac{1}{(t^{2} + \lambda)^{n-1}}\right\} dt, \end{align*} and the result follows on integrating by parts. A formula such as this is called a \emph{formula of reduction}. It is most useful when $n$~is a positive integer. We can then express $\ds\int \frac{dx}{(x^{2} + px + q)^{n}}$ in terms of $\ds\int \frac{dx}{(x^{2} + px + q)^{n-1}}$, and so evaluate the integral for every value of~$n$ in turn.] \Itemp{(ii)} Show that if $I_{p, q} = \ds\int x^{p}(1 + x)^{q}\, dx$ then \[ (p + 1) I_{p, q} = x^{p+1}(1 + x)^{q} - qI_{p+1, q-1}, \] and obtain a similar formula connecting $I_{p, q}$ with~$I_{p-1, q+1}$. Show also, by means of the substitution $x = -y/(1 + y)$, that \[ I_{p, q} = (-1)^{p+1} \int y^{p} (1 + y)^{-p-q-2}\, dy. \] \Itemp{(iii)} Show that if $X = a + bx$ then \begin{align*} \int xX^{-1/3}\, dx &= -3(3a - 2bx) X^{2/3}/10b^{2}, \\ \int x^{2}X^{-1/3}\, dx &= 3(9a^{2} - 6abx + 5b^{2}x^{2}) X^{2/3}/40b^{3}\DPchg{.}{,}\\ % \int xX^{-1/4}\, dx &= -4(4a - 3bx) X^{3/4}/21b^{2},\\ \int x^{2}X^{-1/4}\, dx &= 4(32a^{2} - 24abx + 21b^{2}x^{2}) X^{3/4}/231b^{3}. \end{align*} \Itemp{(iv)} If $I_{m, n} = \ds\int \frac{x^{m}\, dx}{(1 + x^{2})^{n}}$ then \[ 2(n - 1)I_{m, n} = -x^{m-1} (1 + x^{2})^{-(n-1)} + (m - 1)I_{m-2, n-1}. \] \Itemp{(v)} If $I_{n} = \ds\int x^{n} \cos\beta x\, dx$ and $J_{n} = \ds\int x^{n} \sin\beta x\, dx$ then \[ \beta I_{n} = x^{n} \sin\beta x - nJ_{n-1},\quad \beta J_{n} = -x^{n} \cos\beta x + nI_{n-1}. 
\] \PageSep{260} \Itemp{(vi)} If $I_{n} = \ds\int \cos^{n} x\, dx$ and $J_{n} = \ds\int \sin^{n} x\, dx$ then \[ nI_{n} = \sin x\cos^{n-1} x + (n - 1) I_{n-2},\quad nJ_{n} = -\cos x\sin^{n-1} x + (n - 1) J_{n-2}. \] \Itemp{(vii)} If $I_{n} = \ds\int \tan^{n}x\, dx$ then $(n - 1)(I_{n} + I_{n-2}) = \tan^{n-1}x$. \Itemp{(viii)} If $I_{m, n} = \ds\int \cos^{m}x \sin^{n}x\, dx$ then \begin{alignat*}{2} (m+n)I_{m, n} &= -&&\cos^{m+1}x \sin^{n-1}x + (n - 1) I_{m, n-2}\\ &= &&\cos^{m-1}x \sin^{n+1}x + (m - 1) I_{m-2, n}. \end{alignat*} [We have \begin{align*} (m+1)I_{m, n} &= -\int \sin^{n-1}x \frac{d}{dx} (\cos^{m+1}x)\, dx\\ &= -\cos^{m+1}x \sin^{n-1}x + (n - 1)\int \cos^{m+2}x \sin^{n-2}x\, dx\\ &= -\cos^{m+1}x \sin^{n-1}x + (n - 1)(I_{m, n-2} - I_{m, n}), \end{align*} which leads to the first reduction formula.] \Itemp{(ix)} Connect $I_{m, n} = \ds\int \sin^{m}x \sin nx\, dx$ with~$I_{m-2, n}$. \MathTrip{1897.} \Itemp{(x)} If $I_{m, n} = \ds\int x^{m} \cosec^{n}x\, dx$ then \begin{multline*} (n - 1)(n - 2)I_{m, n} = (n - 2)^{2}I_{m, n-2} + m(m - 1)I_{m-2, n-2}\\ -x^{m-1} \cosec^{n-1}x \{m\sin x + (n - 2) x\cos x\}. \end{multline*} \MathTrip{1896.} \Itemp{(xi)} If $I_{n} = \ds\int (a + b\cos x)^{-n}\, dx$ then \[ (n - 1)(a^{2} - b^{2}) I_{n} = -b\sin x (a + b\cos x)^{-(n-1)} + (2n - 3)aI_{n-1} - (n - 2)I_{n-2}. \] \Itemp{(xii)} If $I_{n} = \ds\int (a\cos^{2} x + 2h\cos x\sin x + b\sin^{2}x)^{-n}\, dx$ then \[ 4n(n + 1)(ab - h^{2})I_{n+2} - 2n(2n + 1)(a + b)I_{n+1} + 4n^{2}I_{n} = -\frac{d^{2} I_{n}}{dx^{2}}. \] \MathTrip{1898.} \Itemp{(xiii)} If $I_{m, n} = \ds\int x^{m}(\log x)^{n}\, dx$ then \[ %[** TN: In-line in the original] (m + 1)I_{m, n} = x^{m+1}(\log x)^{n} - nI_{m, n-1}. \] \Item{40.} If $n$~is a positive integer then the value of $\ds\int x^{m}(\log x)^{n}\, dx$ is \[ x^{m+1} \left\{\frac{(\log x)^{n}}{m + 1} - \frac{n(\log x)^{n-1}}{(m + 1)^{2}} + \frac{n(n - 1)(\log x)^{n-2}}{(m + 1)^{3}} - \dots + \frac{(-1)^{n}n!}{(m + 1)^{n+1}}\right\}. 
\] \Item{41.} Show that the most general function~$\phi(x)$, such that $\phi'' + a^{2}\phi = 0$ for all values of~$x$, may be expressed in either of the forms $A\cos ax + B\sin ax$, $\rho\cos(ax + \epsilon)$, where $A$,~$B$, $\rho$,~$\epsilon$ are constants. [Multiplying by~$2\phi'$ and \PageSep{261} integrating we obtain $\phi'^{2} + a^{2}\phi^{2} = a^{2}b^{2}$, where $b$~is a constant, from which we deduce that $ax = \ds\int \frac{d\phi}{\sqrtp{b^{2} - \phi^{2}}}$.] \Item{42.} Determine the most general functions $y$~and~$z$ such that $y' + \omega z = 0$, and $z' - \omega y = 0$, where $\omega$~is a constant and dashes denote differentiation with respect to~$x$. \Item{43.} The area of the curve given by \[ x = \cos\phi + \frac{\sin\alpha \sin\phi}{1 - \cos^{2}\alpha \sin^{2}\phi},\quad y = \sin\phi - \frac{\sin\alpha \cos\phi}{1 - \cos^{2}\alpha \sin^{2}\phi}, \] where $\alpha$~is a positive acute angle, is $\frac{1}{2}\pi(1 + \sin\alpha)^{2}/\sin\alpha$. \MathTrip{1904.} \Item{44.} The projection of a chord of a circle of radius~$a$ on a diameter is of constant length~$2a\cos\beta$; show that the locus of the middle point of the chord consists of two loops, and that the area of either is $a^{2}(\beta - \cos\beta\sin\beta)$. \MathTrip{1903.} \Item{45.} Show that the length of a quadrant of the curve $(x/a)^{2/3} + (y/b)^{2/3} = 1$ is $(a^{2} + ab + b^{2})/(a + b)$. \MathTrip{1911.} \Item{46.} A point $A$~is inside a circle of radius~$a$, at a distance~$b$ from the centre. Show that the locus of the foot of the perpendicular drawn from $A$ to a tangent to the circle encloses an area $\pi(a^{2} + \frac{1}{2}b^{2})$. 
\MathTrip{1909.} \Item{47.} Prove that if $(a, b, c, f, g, h \btw x, y, 1)^{2} = 0$ is the equation of a conic, then \[ \int \frac{dx}{(lx + my + n)(hx + by + f)} = \alpha\log \frac{PT}{PT'} + \beta, \] where $PT$,~$PT'$ are the perpendiculars from a point~$P$ of the conic on the tangents at the ends of the chord $lx + my + n = 0$, and $\alpha$,~$\beta$ are constants. \MathTrip{1902.} \Item{48.} Show that \[ \int \frac{ax^{2} + 2bx + c}{(Ax^{2} + 2Bx + C)^{2}}\, dx \] will be a rational function of~$x$ if and only if one or other of $AC - B^{2}$ and $aC + cA - 2bB$ is zero.\footnote {See the author's tract quoted on \PageRef{p.}{236}.} \Item{49.} Show that the necessary and sufficient condition that \[ \int \frac{f(x)}{\{F(x)\}^{2}}\, dx, \] where $f$~and~$F$ are polynomials of which the latter has no repeated factor, should be a rational function of~$x$, is that $f'F' - fF''$ should be divisible by~$F$. \MathTrip{1910.} \Item{50.} Show that \[ \int \frac{\alpha\cos x + \beta\sin x + \gamma}{(1 - e\cos x)^{2}}\, dx \] is a rational function of $\cos x$ and~$\sin x$ if and only if $\alpha e + \gamma = 0$; and determine the integral when this condition is satisfied. \MathTrip{1910.} \end{Examples} \PageSep{262} \Chapter[ADDITIONAL THEOREMS IN THE CALCULUS] {VII}{ADDITIONAL THEOREMS IN THE DIFFERENTIAL AND \\ INTEGRAL CALCULUS} \Paragraph{147. Higher Mean Value Theorems.} In the preceding chapter (\SecNo[§]{125}) we proved that if $f(x)$~has a derivative~$f'(x)$ throughout the interval $\DPmod{(a, b)}{[a, b]}$ then \[ f(b) - f(a) = (b - a) f'(\xi), \] where $a < \xi < b$; or that, if $f(x)$~has a derivative throughout $\DPmod{(a, a + h)}{[a, a + h]}$, then \[ f(a + h) - f(a) = hf'(a + \theta_{1} h), \Tag{(1)} \] where $0 < \theta_{1} < 1$. This we proved by considering the function \[ f(b) - f(x) - \frac{b - x}{b - a} \{f(b) - f(a)\} \] which vanishes when $x = a$ and when $x = b$. 
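As a brief modern aside (an editorial addition, not part of Hardy's text), equation (1) can be checked numerically for a concrete function: taking $f(x) = x^{3}$ on $[1, 2]$, a bisection on $f'(x) - \{f(b) - f(a)\}/(b - a)$ locates the promised intermediate value~$\xi$. The names in the sketch are illustrative only.

```python
# Numerical check of the Mean Value Theorem for f(x) = x**3 on [1, 2]:
# locate a xi in (a, b) with f'(xi) = {f(b) - f(a)}/(b - a) by bisection.

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

a, b = 1.0, 2.0
mean_slope = (f(b) - f(a)) / (b - a)   # = 7 for this choice of f, a, b

def g(x):
    # g changes sign on (a, b), so bisection finds a root xi of g,
    # i.e. a point where the tangent slope equals the chord slope.
    return fprime(x) - mean_slope

lo, hi = a, b
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
xi = 0.5 * (lo + hi)

assert a < xi < b
assert abs(fprime(xi) - mean_slope) < 1e-9
```

Here $\xi = \sqrt{7/3}$, which indeed lies strictly between $a$ and~$b$, as the theorem asserts.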
Let us now suppose that $f(x)$~has also a second derivative~$f''(x)$ throughout $\DPmod{(a, b)}{[a, b]}$, an assumption which of course involves the continuity of the first derivative~$f'(x)$, and consider the function \[ f(b) - f(x) - (b - x) f'(x) - \left(\frac{b - x}{b - a}\right)^{2} \{f(b) - f(a) - (b - a)f'(a)\}. \] This function also vanishes when $x = a$ and when $x = b$; and its derivative is \[ \frac{2(b - x)}{(b - a)^{2}} \{f(b) - f(a) - (b - a) f'(a) - \tfrac{1}{2}(b - a)^{2}f''(x)\}, \] and this must vanish (\SecNo[§]{121}) for some value of~$x$ between $a$ and~$b$ (exclusive of $a$~and~$b$). Hence there is a value~$\xi$ of~$x$, between \PageSep{263} $a$ and~$b$, and therefore capable of representation in the form $a + \theta_{2}(b - a)$, where $0 < \theta_{2} < 1$, for which \[ f(b) = f(a) + (b - a)f'(a) + \tfrac{1}{2}(b - a)^{2}f''(\xi). \] If we put $b = a + h$ we obtain the equation \[ f(a + h) = f(a) + hf'(a) + \tfrac{1}{2}h^{2} f''(a + \theta_{2}h), \Tag{(2)} \] which is the standard form of what may be called the \emph{Mean Value Theorem of the second order}. The analogy suggested by \Eq{(1)}~and~\Eq{(2)} at once leads us to formulate the following theorem: \begin{ParTheorem}{Taylor's or the General Mean Value Theorem.} If $f(x)$~is a function of~$x$ which has derivatives of the first $n$ orders throughout the interval $\DPmod{(a, b)}{[a, b]}$, then \begin{multline*} f(b) = f(a) + (b - a)f'(a) + \frac{(b - a)^{2}}{2!} f''(a) + \dots\\ + \frac{(b - a)^{n-1}}{(n - 1)!} f^{(n-1)}(a) + \frac{(b - a)^{n}}{n!}f^{(n)}(\xi), \end{multline*} where $a < \xi < b$; and if $b = a + h$ then \begin{multline*} f(a + h) = f(a) + hf'(a) + \tfrac{1}{2} h^{2}f''(a) + \dots\\ + \frac{h^{n-1}}{(n - 1)!} f^{(n-1)}(a) + \frac{h^{n}}{n!} f^{(n)}(a + \theta_{n}h), \end{multline*} where $0 < \theta_{n} < 1$. \end{ParTheorem} The proof proceeds on precisely the same lines as were adopted before in the special cases in which $n = 1$ and $n = 2$. 
We consider the function \[ F_{n}(x) - \left(\frac{b - x}{b - a}\right)^{n} F_{n}(a), \] where \begin{multline*} F_{n}(x) = f(b) - f(x) - (b - x)f'(x) - \frac{(b - x)^{2}}{2!} f''(x) - \dots\\ - \frac{(b - x)^{n-1}}{(n - 1)!} f^{(n-1)}(x). \end{multline*} This function vanishes for $x = a$ and $x = b$; its derivative is \[ \frac{n(b - x)^{n-1}}{(b - a)^{n}} \left\{F_{n}(a) - \frac{(b - a)^{n}}{n!} f^{(n)}(x)\right\}; \] and there must be some value of~$x$ between $a$ and~$b$ for which the derivative vanishes. This leads at once to the desired result. \PageSep{264} In view of the great importance of this theorem we shall give at the end of this chapter another proof, not essentially distinct from that given above, but different in form and depending on the method of integration by parts. \begin{Examples}{LV.} \Item{1.} Suppose that $f(x)$~is a polynomial of degree~$r$. Then $f^{(n)}(x)$~is identically zero when $n > r$, and the theorem leads to the algebraical identity \[ f(a + h) = f(a) + hf'(a) + \frac{h^{2}}{2!} f''(a) + \dots + \frac{h^{r}}{r!} f^{(r)}(a). \] \Item{2.} By applying the theorem to $f(x) = 1/x$, and supposing $x$ and~$x + h$ positive, obtain the result \[ \frac{1}{x + h} = \frac{1}{x} - \frac{h}{x^{2}} + \frac{h^{2}}{x^{3}} - \dots + \frac{(-1)^{n-1} h^{n-1}}{x^{n}} + \frac{(-1)^{n} h^{n}}{(x + \theta_{n} h)^{n+1}}. \] [Since \[ \frac{1}{x + h} = \frac{1}{x} - \frac{h}{x^{2}} + \frac{h^{2}}{x^{3}} - \dots + \frac{(-1)^{n-1} h^{n-1}}{x^{n}} + \frac{(-1)^{n} h^{n}}{x^{n}(x + h)},\quad%[** TN: Quick spacing hack] \] we can verify the result by showing that $x^{n}(x + h)$ can be put in the form $(x + \theta_{n}h)^{n+1}$, or that $x^{n+1} < x^{n}(x + h) < (x + h)^{n+1}$, as is evidently the case.] 
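The identity underlying Ex.~2 can be verified in exact rational arithmetic; the following sketch (an editorial addition, not in the original) checks both the expansion of $1/(x + h)$ and the inequality $x^{n+1} < x^{n}(x + h) < (x + h)^{n+1}$ used to identify the remainder.

```python
# Exact check of Ex. 2:
#   1/(x+h) = 1/x - h/x^2 + ... + (-1)^(n-1) h^(n-1)/x^n
#             + (-1)^n h^n / {x^n (x+h)},
# for sample positive rational x and h.
from fractions import Fraction

x, h, n = Fraction(2), Fraction(1, 3), 5
partial = sum((-1)**k * h**k / x**(k + 1) for k in range(n))
remainder = (-1)**n * h**n / (x**n * (x + h))
assert partial + remainder == 1 / (x + h)

# The remainder denominator can be written (x + theta_n h)^(n+1) with
# 0 < theta_n < 1, because x^n (x+h) lies between x^(n+1) and (x+h)^(n+1):
assert x**(n + 1) < x**n * (x + h) < (x + h)**(n + 1)
```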
\Item{3.} Obtain the formula \begin{multline*} \sin(x + h) = \sin x + h\cos x - \frac{h^{2}}{2!}\sin x - \frac{h^{3}}{3!}\cos x + \dots\\ + (-1)^{n-1}\frac{h^{2n-1}}{(2n - 1)!}\cos x + (-1)^{n} \frac{h^{2n}}{(2n)!}\sin(x + \theta_{2n} h), \end{multline*} the corresponding formula for $\cos(x + h)$, and similar formulae involving powers of~$h$ extending up to~$h^{2n+1}$. \Item{4.} Show that if $m$~is a positive integer, and $n$~a positive integer not greater than~$m$, then \[ (x + h)^{m} = x^{m} + \binom{m}{1}x^{m-1} h + \dots + \binom{m}{n - 1}x^{m-n+1} h^{n-1} + \binom{m}{n}(x + \theta_{n} h)^{m-n} h^{n}. \] Show also that, if the interval $\DPmod{(x, x + h)}{[x, x + h]}$ does not include $x = 0$, the formula holds for all real values of~$m$ and all positive integral values of~$n$; and that, even if $x < 0 < x + h$ or $x + h < 0 < x$, the formula still holds if $m - n$~is positive. \Item{5.} The formula $f(x + h) = f(x) + hf'(x + \theta_{1}h)$ is not true if $f(x) = 1/x$ and $x < 0 < x + h$. [For $f(x + h) - f(x) > 0$ and $hf'(x + \theta_{1} h) = -h/(x + \theta_{1} h)^{2} < 0$; it is evident that the conditions for the truth of the Mean Value Theorem are not satisfied.] \Item{6.} If $x = -a$, $h = 2a$, $f(x) = x^{1/3}$, then the equation \[ f(x + h) = f(x) + hf'(x + \theta_{1} h) \] is satisfied by $\theta_{1} = \frac{1}{2} \pm \frac{1}{18}\sqrt{3}$. [This example shows that the result of the theorem may hold even if the conditions under which it was proved are not satisfied.] \PageSep{265} \Item{7.} \Topic{Newton's method of approximation to the roots of equations.} Let $\xi$~be an approximation to a root of an algebraical equation $f(x) = 0$, the actual root being~$\xi + h$. Then \[ 0 = f(\xi + h) = f(\xi) + hf'(\xi) + \tfrac{1}{2} h^{2}f''(\xi + \theta_{2}h), \] so that \[ h = -\frac{f(\xi)}{f'(\xi)} - \tfrac{1}{2} h^{2} \frac{f''(\xi + \theta_{2}h)}{f'(\xi)}. \] It follows that in general a better approximation than $x = \xi$ is \[ x = \xi - \frac{f(\xi)}{f'(\xi)}.
\] If the root is a simple root, so that $f'(\xi + h) \neq 0$, we can, when $h$~is small enough, find a positive constant~$K$ such that $|f'(x)| > K$ for all the values of~$x$ which we are considering, and then, if $h$~is regarded as of the first order of smallness, $f(\xi)$~is of the first order of smallness, and the error in taking $\xi - \{f(\xi)/f'(\xi)\}$ as the root is of the second order. \Item{8.} Apply this process to the equation $x^{2} = 2$, taking $\xi = 3/2$ as the first approximation. [We find $h = -1/12$, $\xi + h = 17/12 = 1.417\dots$, which is quite a good approximation, in spite of the roughness of the first. If now we repeat the process, taking $\xi = 17/12$, we obtain $\xi + h = 577/408 = 1.414\MS215\dots$, which is correct to $5$~places of decimals.\Add{]} \Item{9.} By considering in this way the equation $x^{2} - 1 - y = 0$, where $y$~is small, show that $\sqrtp{1 + y} = 1 + \frac{1}{2} y - \{\frac{1}{4}y^{2}/(2 + y)\}$ approximately, the error being of the fourth order. \Item{10.} Show that the error in taking the root to be $\xi - (f/f') - \frac{1}{2}(f^{2}f''/f'^{3})$, where $\xi$~is the argument of every function, is in general of the third order. \Item{11.} The equation $\sin x = \alpha x$, where $\alpha$~is small, has a root nearly equal to~$\pi$. Show that $(1 - \alpha)\pi$~is a better approximation, and $(1 - \alpha + \alpha^{2})\pi$ a better still. [The method of Exs.~7--10 does not depend on $f(x) = 0$ being an algebraical equation, so long as $f'$~and~$f''$ are continuous.] \Item{12.} Show that the limit when $h \to 0$ of the number~$\theta_{n}$ which occurs in the general Mean Value Theorem is~$1/(n + 1)$, provided that $f^{(n+1)}(x)$~is continuous. [For $f(x + h)$~is equal to each of \[ f(x) + \dots + \frac{h^{n}}{n!} f^{(n)}(x + \theta_{n}h),\quad f(x) + \dots + \frac{h^{n}}{n!} f^{(n)}(x) + \frac{h^{n+1}}{(n + 1)!} f^{(n+1)}(x + \theta_{n+1}h), \] where $\theta_{n+1}$ as well as~$\theta_{n}$ lies between $0$~and~$1$. 
Hence \[ f^{(n)}(x + \theta_{n}h) = f^{(n)}(x) + \frac{hf^{(n+1)}(x + \theta_{n+1}h)}{n + 1}\Add{.} \] But if we apply the original Mean Value Theorem to the function~$f^{(n)}(x)$, taking $\theta_{n}h$ in place of~$h$, we find \[ f^{(n)}(x + \theta_{n}h) = f^{(n)}(x) + \theta_{n}hf^{(n+1)}(x + \theta\theta_{n}h), \] \PageSep{266} where $\theta$ also lies between $0$~and~$1$. Hence \[ \theta_{n} f^{(n+1)}(x + \theta\theta_{n} h) = \frac{f^{(n+1)}(x + \theta_{n+1} h)}{n + 1}, \] from which the result follows, since $f^{(n+1)}(x + \theta\theta_{n} h)$ and $f^{(n+1)}(x + \theta_{n+1} h)$ tend to the same limit~$f^{(n+1)}(x)$ as $h \to 0$.] \Item{13.} Prove that $\{f(x + 2h) - 2f(x + h) + f(x)\}/h^{2} \to f''(x)$ as $h \to 0$, provided that $f''(x)$~is continuous. [Use equation~\Eq{(2)} of~\SecNo[§]{147}.] %[** TN: [sic] "the" f^{(n)}(x)] \Item{14.} Show that, if the $f^{(n)}(x)$ is continuous for $x = 0$, then \[ f(x) = a_{0} + a_{1}x + a_{2}x^{2} + \dots + (a_{n} + \epsilon_{x}) x^{n}, \] where $a_{r} = f^{(r)}(0)/r!$ and $\epsilon_{x} \to 0$ as $x \to 0$.\footnote {It is in fact sufficient to suppose that \emph{$f^{(n)}(0)$~exists}. See R.~H. Fowler, ``The elementary differential geometry of plane curves'' (\textit{Cambridge Tracts in Mathematics}, No.~20, p.~104).\PageLabel{266}} \Item{15.} Show that if \[ a_{0} + a_{1}x + a_{2}x^{2} + \dots + (a_{n} + \epsilon_{x}) x^{n} = b_{0} + b_{1}x + b_{2}x^{2} + \dots + (b_{n} + \eta_{x}) x^{n}, \] where $\epsilon_{x}$ and~$\eta_{x}$ tend to zero as $x \to 0$, then $a_{0} = b_{0}$, $a_{1} = b_{1}$,~\dots, $a_{n} = b_{n}$. [Making $x \to 0$ we see that $a_{0} = b_{0}$. Now divide by~$x$ and afterwards make $x \to 0$. We thus obtain $a_{1} = b_{1}$; and this process may be repeated as often as is necessary. It follows that if $f(x) = a_{0} + a_{1}x + a_{2}x^{2} + \dots + (a_{n} + \epsilon_{x}) x^{n}$, and the first~$n$ derivatives of~$f(x)$ are continuous, then $a_{r} = f^{(r)}(0)/r!$.] \end{Examples} \Paragraph{148. 
Taylor's Series.} Suppose that $f(x)$~is a function all of whose differential coefficients are continuous in an interval $\DPmod{(a - \eta, a + \eta)}{[a - \eta, a + \eta]}$ surrounding the point $x = a$. Then, if $h$~is numerically less than~$\eta$, we have \[ f(a + h) = f(a) + hf'(a) + \dots + \frac{h^{n-1}}{(n - 1)!} f^{(n-1)}(a) + \frac{h^{n}}{n!} f^{(n)}(a + \theta_{n} h), \] where $0 < \theta_{n} < 1$, for all values of~$n$. Or, if \[ S_{n} = \sum_{0}^{n-1} \frac{h^{\nu}}{\nu!} f^{(\nu)}(a),\quad R_{n} = \frac{h^{n}}{n!} f^{(n)}(a + \theta_{n} h), \] we have \[ f(a + h) - S_{n} = R_{n}. \] Now let us suppose, in addition, that we can prove that $R_{n} \to 0$ as $n \to \infty$. Then \[ f(a + h) = \lim_{n\to\infty} S_{n} = f(a) + hf'(a) + \frac{h^{2}}{2!} f''(a) + \dots. \] This expansion of~$f(a + h)$ is known as \Emph{Taylor's Series}. When $a = 0$ the formula reduces to \[ f(h) = f(0) + hf'(0) + \frac{h^{2}}{2!} f''(0) + \dots, \] \PageSep{267} which is known as \Emph{Maclaurin's Series}. The function~$R_{n}$ is known as \Emph{Lagrange's form of the remainder}. \begin{Remark} The reader should be careful to guard himself against supposing that the continuity of all the derivatives of~$f(x)$ is a sufficient condition for the validity of Taylor's series. A direct discussion of the behaviour of~$R_{n}$ is always essential. \end{Remark} \begin{Examples}{LVI.} \Item{1.} Let $f(x) = \sin x$. Then all the derivatives of~$f(x)$ are continuous for all values of~$x$. Also $|f^{(n)}(x)| \leq 1$ for all values of $x$~and~$n$. Hence in this case $|R_{n}| \leq h^{n}/n!$, which tends to zero as $n \to \infty$ (\Ex{xxvii}.~12) whatever value $h$ may have. It follows that \[ \sin(x + h) = \sin x + h\cos x - \frac{h^{2}}{2!}\sin x - \frac{h^{3}}{3!}\cos x + \frac{h^{4}}{4!}\sin x + \dots, \] for all values of $x$~and~$h$. In particular \[ \sin h = h - \frac{h^{3}}{3!} + \frac{h^{5}}{5!} - \dots, \] for all values of~$h$.
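As a modern numerical aside (not part of Hardy's text), the bound $|R_{n}| \leq |h|^{n}/n!$ can be observed directly: the partial sums of the sine series approach $\sin h$ at least as fast as the bound predicts, even for a value of $h$ well outside $(-1, 1)$.

```python
# Partial sums of sin h = h - h^3/3! + h^5/5! - ... compared with the
# Lagrange remainder bound |R_n| <= |h|^n / n!  (here n = 2 * terms,
# since `terms` nonzero terms reach degree 2*terms - 1).
import math

def sin_partial(h, terms):
    return sum((-1)**k * h**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

h = 2.0
for terms in (3, 6, 10):
    bound = abs(h)**(2 * terms) / math.factorial(2 * terms)
    assert abs(sin_partial(h, terms) - math.sin(h)) <= bound
```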
Similarly we can prove that \[ \cos(x + h) = \cos x - h\sin x - \frac{h^{2}}{2!}\cos x + \frac{h^{3}}{3!} \sin x + \dots,\quad \cos h = 1 - \frac{h^{2}}{2!} + \frac{h^{4}}{4!} - \dots. \] \Item{2.} \Topic{The Binomial Series.} Let $f(x) = (1 + x)^{m}$, where $m$~is any rational number, positive or negative. Then $f^{(n)}(x) = m(m - 1) \dots (m - n + 1) (1 + x)^{m-n}$ and Maclaurin's Series takes the form \[ (1 + x)^{m} = 1 + \binom{m}{1}x + \binom{m}{2}x^{2} + \dots. \] When $m$~is a positive integer the series terminates, and we obtain the ordinary formula for the Binomial Theorem with a positive integral exponent. In the general case \[ R_{n} = \frac{x^{n}}{n!} f^{(n)}(\theta_{n}x) = \binom{m}{n}x^{n}(1 + \theta_{n}x)^{m-n}, \] and in order to show that Maclaurin's Series really represents $(1 + x)^{m}$ for any range of values of~$x$ when $m$~is not a positive integer, we must show that $R_{n} \to 0$ for every value of~$x$ in that range. This is so in fact if $-1 < x < 1$, and may be proved, when $0\leq x < 1$, by means of the expression given above for~$R_{n}$, since $(1 + \theta_{n}x)^{m-n} < 1$ if $n > m$, and $\dbinom{m}{n} x^{n} \to 0$ as $n \to \infty$ (\Ex{xxvii}.~13). But a difficulty arises if $-1 < x < 0$, since $1 + \theta_{n}x < 1$ and $(1 + \theta_{n}x)^{m-n} > 1$ if $n > m$; knowing only that $0 < \theta_{n} < 1$, we cannot be assured that $1 + \theta_{n}x$~is not quite small and $(1 + \theta _{n}x)^{m-n}$ quite large. In fact, in order to prove the Binomial Theorem by means of Taylor's Theorem, we need some different form for~$R_{n}$, such as will be given later~(\SecNo[§]{162}). \end{Examples} \PageSep{268} \Paragraph{149. Applications of Taylor's Theorem.} \Topic{\Item{A.} Maxima and minima.} Taylor's Theorem may be applied to give greater theoretical completeness to the tests of \Ref{Ch.}{VI}, \SecNo[§§]{122}--\SecNo{123}, though the results are not of much practical importance. 
It will be remembered that, assuming that $\phi(x)$~has derivatives of the first two orders, we stated the following as being sufficient conditions for a maximum or minimum of~$\phi(x)$ at $x = \xi$: \emph{for a maximum}, $\phi'(\xi) = 0$, $\phi''(\xi) < 0$; \emph{for a minimum}, $\phi'(\xi) = 0$, $\phi''(\xi) > 0$. It is evident that these tests fail if $\phi''(\xi)$ as well as $\phi'(\xi)$ is zero. Let us suppose that the first~$n$ derivatives \[ \phi'(x),\quad \phi''(x),\ \dots,\quad \phi^{(n)}(x) \] are continuous, and that all save the last vanish when $x = \xi$. Then, for sufficiently small values of~$h$, \[ \phi(\xi + h) - \phi(\xi) = \frac{h^{n}}{n!} \phi^{(n)} (\xi + \theta_{n} h). \] In order that there should be a maximum or a minimum this expression must be of constant sign for all sufficiently small values of~$h$, positive or negative. This evidently requires that $n$~should be even. And if $n$~is even there will be a maximum or a minimum according as $\phi^{(n)}(\xi)$~is negative or positive. Thus we obtain the test: \begin{Result}if there is to be a maximum or minimum the first derivative which does not vanish must be an even derivative, and there will be a maximum if it is negative, a minimum if it is positive. \end{Result} \begin{Examples}{LVII.} \Item{1.} Verify the result when $\phi(x) = (x - a)^{m}$, $m$~being a positive integer, and $\xi = a$. \Item{2.} Test the function $(x - a)^{m} (x - b)^{n}$, where $m$~and~$n$ are positive integers, for maxima and minima at the points $x = a$, $x = b$. Draw graphs of the different possible forms of the curve $y = (x - a)^{m} (x - b)^{n}$. \Item{3.} Test the functions $\sin x - x$, $\sin x - x + \dfrac{x^{3}}{6}$, $\sin x - x + \dfrac{x^{3}}{6} - \dfrac{x^{5}}{120}$,~\dots, $\cos x - 1$, $\cos x - 1 + \dfrac{x^{2}}{2}$, $\cos x - 1 + \dfrac{x^{2}}{2} - \dfrac{x^{4}}{24}$,~\dots\ for maxima or minima at $x = 0$. 
\end{Examples} \Paragraph{150.} \Topic{\Item{B.} The calculation of certain limits.} Suppose that $f(x)$ and~$\phi(x)$ are two functions of~$x$ whose derivatives $f'(x)$ and~$\phi'(x)$ are continuous for $x = \xi$ and that $f(\xi)$ and~$\phi(\xi)$ are both equal to zero. Then the function \[ \psi(x) = f(x)/\phi(x) \] \PageSep{269} is not defined when $x = \xi$. But of course it may well tend to a limit as $x \to \xi$. Now \[ f(x) = f(x) - f(\xi) = (x - \xi)f'(x_{1}), \] where $x_{1}$~lies between $\xi$ and~$x$; and similarly $\phi(x) = (x - \xi)\phi'(x_{2})$, where $x_{2}$~also lies between $\xi$ and~$x$. Thus \[ \psi(x) = f'(x_{1})/\phi'(x_{2}). \] We must now distinguish four cases. \Item{(1)} If neither $f'(\xi)$ nor $\phi'(\xi)$ is zero, then \[ f(x)/\phi(x) \to f'(\xi)/\phi'(\xi). \] \Item{(2)} If $f'(\xi) = 0$, $\phi'(\xi) \neq 0$, then \[ f(x)/\phi(x) \to 0. \] \Item{(3)} If $f'(\xi) \neq 0$, $\phi'(\xi)= 0$, then $f(x)/\phi(x)$ becomes numerically very large as $x \to \xi$: but whether $f(x)/\phi(x)$ tends to $\infty$~or~$-\infty$, or is sometimes large and positive and sometimes large and negative, we cannot say, without further information as to the way in which $\phi'(x) \to 0$ as $x \to \xi$. \Item{(4)} If $f'(\xi) = 0$, $\phi'(\xi) = 0$, then we can as yet say nothing about the behaviour of~$f(x)/\phi(x)$ as $x \to \DPtypo{0}{\xi}$. But in either of the last two cases it may happen that $f(x)$ and $\phi(x)$ have continuous second derivatives. And then \begin{align*} f(x) &= f(x) - f(\xi) - (x - \xi)f'(\xi) = \tfrac{1}{2}(x - \xi)^{2} f''(x_{1}),\\ \phi(x) &= \phi(x) - \phi(\xi) - (x - \xi)\phi'(\xi) = \tfrac{1}{2}(x - \xi)^{2} \phi''(x_{2}), \end{align*} where again $x_{1}$ and~$x_{2}$ lie between $\xi$ and~$x$; so that \[ \psi(x)= f''(x_{1})/\phi''(x_{2}). \] We can now distinguish a variety of cases similar to those considered above. In particular, if neither second derivative vanishes for $x = \xi$, we have \[ f(x)/\phi(x) \to f''(\xi)/\phi''(\xi).
\] It is obvious that this argument can be repeated indefinitely, and we obtain the following theorem: \begin{Result}suppose that $f(x)$ and $\phi(x)$ and their derivatives, so far as may be wanted, are continuous for $x = \xi$. Suppose further that $f^{(p)}(x)$ and~$\phi^{(q)}(x)$ are the first derivatives of $f(x)$ and $\phi(x)$ which do not vanish when $x = \xi$. Then \Item{(1)} if $p = q$, $f(x)/\phi(x) \to f^{(p)}(\xi)/\phi^{(p)}(\xi)$; \Item{(2)} if $p > q$, $f(x)/\phi(x) \to 0$; \PageSep{270} \Item{(3)} {\Loosen if $p < q$, and $q - p$~is even, either $f(x)/\phi(x) \to +\infty$ or $f(x)/\phi(x) \to -\infty$, the sign being the same as that of~$f^{(p)}(\xi)/\phi^{(q)}(\xi)$;} \Item{(4)} {\Loosen if $p < q$ and $q - p$~is odd, either $f(x)/\phi(x) \to +\infty$ or $f(x)/\phi(x) \to -\infty$, as $x \to \xi+0$, the sign being the same as that of $f^{(p)}(\xi)/\phi^{(q)}(\xi)$, while if $x \to \xi - 0$ the sign must be reversed.} \end{Result} This theorem is in fact an immediate corollary from the equations \[ f(x) = \frac{(x - \xi)^{p}}{p!}f^{(p)}(x_{1}),\quad \phi(x) = \frac{(x - \xi)^{q}}{q!}\phi^{(q)}(x_{2}). \] \begin{Examples}{LVIII.} \Item{1.} Find the limit of \[ \{x - (n + 1)x^{n+1} + nx^{n+2}\}/(1 - x)^{2}, \] as $x \to 1$. [Here the functions and their first derivatives vanish for $x = 1$, and $f''(1) = n(n + 1)$, $\phi''(1) = 2$.] \Item{2.} Find the limits as $x \to 0$ of \[ (\tan x - x)/(x - \sin x),\quad (\tan nx - n\tan x)/(n\sin x - \sin nx). \] \Item{3.} Find the limit of $x\{\sqrtp{x^{2} + a^{2}} - x\}$ as $x \to \infty$. [Put $x = 1/y$.] \Item{4.} Prove that \[ \lim_{x \to n} (x - n)\cosec x\pi = \frac{(-1)^{n}}{\pi},\quad \lim_{x \to n} \frac{1}{x - n} \left\{ \cosec x\pi - \frac{(-1)^{n}}{(x - n)\pi} \right\} = \frac{(-1)^{n}\pi}{6}, \] $n$~being any integer; and evaluate the corresponding limits involving $\cot x\pi$. 
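The first limit of Ex.~4 lends itself to a quick numerical check (an editorial addition, not in the original): the error in $(x - n)\cosec x\pi$ against $(-1)^{n}/\pi$ shrinks as $x \to n$.

```python
# Check that (x - n) cosec(x pi) -> (-1)^n / pi as x -> n (Ex. 4),
# taking n = 3 as a sample integer.
import math

def g(x, n):
    return (x - n) / math.sin(math.pi * x)   # (x - n) cosec(x pi)

n = 3
target = (-1)**n / math.pi
errs = [abs(g(n + eps, n) - target) for eps in (1e-2, 1e-3, 1e-4)]
assert errs[0] > errs[1] > errs[2]   # error decreases as x -> n
```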
\Item{5.} Find the limits as $x \to 0$ of \[ \frac{1}{x^{3}}\left(\cosec x - \frac{1}{x} - \frac{x}{6}\right),\quad \frac{1}{x^{3}}\left(\cot x - \frac{1}{x} + \frac{x}{3}\right). \] \Item{6.} $(\sin x\arcsin x - x^{2})/x^{6} \to \frac{1}{18}$, $(\tan x\arctan x - x^{2})/x^{6} \to \frac{2}{9}$, as $x \to 0$. \end{Examples} \Paragraph{151.} \Topic{\Item{C.} The contact of plane curves.} Two curves are said to \emph{intersect} (or \emph{cut}) at a point if the point lies on each of them. They are said to \emph{touch} at the point if they have the same tangent at the point. Let us suppose now that $f(x)$,~$\phi(x)$ are two functions which possess derivatives of all orders continuous for $x = \xi$, and let us consider the curves $y = f(x)$, $y = \phi(x)$. In general $f(\xi)$ and~$\phi(\xi)$ will not be equal. In this case the abscissa $x = \xi$ does not correspond to a point of intersection of the curves. If however \PageSep{271} $f(\xi) = \phi(\xi)$, the curves intersect in the point $x = \xi$, $y = f(\xi) = \phi(\xi)$. Let us suppose this to be the case. Then in order that the curves should not only cut but touch at this point it is obviously necessary and sufficient that the first derivatives $f'(x)$,~$\phi'(x)$ should also have the same value when $x = \xi$. The contact of the curves in this case may be regarded from a different point of view. In the figure the two %[Illustration: Fig. 45.] \Figure[2.25in]{45}{p271} curves are drawn touching at~$P$, and $QR$~is equal to $\phi(\xi + h) - f(\xi + h)$, or, since $\phi(\xi) = f(\xi)$, $\phi'(\xi) = f'(\xi)$, to \[ \tfrac{1}{2} h^{2}\{\phi''(\xi + \theta h) - f''(\xi + \theta h)\}, \] where $\theta$~lies between $0$ and~$1$. Hence \[ \lim \frac{QR}{h^{2}} = \tfrac{1}{2}\{\phi''(\xi) - f''(\xi)\}, \] when $h \to 0$. 
In other words, when the curves touch at the point whose abscissa is~$\xi$, \emph{the difference of their ordinates at the point whose abscissa is $\xi + h$ is at least of the second order of smallness when $h$~is small}. \begin{Remark} The reader will easily verify that $\lim (QR/h) = \phi'(\xi) - f'(\xi)$ when the curves cut and do not touch, so that $QR$~is then of the first order of smallness only. \end{Remark} It is evident that the degree of smallness of~$QR$ may be taken as a kind of measure of the \emph{closeness of the contact} of the curves. It is at once suggested that if the first $n - 1$ derivatives of $f$ and~$\phi$ have equal values when $x = \xi$, then $QR$~will be of $n$th~order of smallness; and the reader will have no difficulty in proving that this is so and that \[ \lim \frac{QR}{h^{n}} = \frac{1}{n!}\{\phi^{(n)}(\xi) - f^{(n)}(\xi)\}. \] We are therefore led to frame the following definition: \begin{Defn} \Emph{Contact of the $n$th~order.} If $f(\xi) = \phi(\xi)$, $f'(\xi) = \phi'(\xi)$,~\dots, $f^{(n)}(\xi) = \phi^{(n)}(\xi)$, but $f^{(n+1)}(\xi) \neq \phi^{(n+1)}(\xi)$, then the curves $y = f(x)$, $y = \phi(x)$ will be said to have contact of the $n$th~order at the point whose abscissa is~$\xi$. \end{Defn} The preceding discussion makes the notion of contact of the $n$th~order dependent on the choice of axes, and fails entirely \PageSep{272} when the tangent to the curves is parallel to the axis of~$y$. We can deal with this case by taking $y$~as the independent and $x$~as the dependent variable. It is better, however, to consider $x$~and~$y$ as functions of a parameter~$t$. An excellent account of the theory will be found in Mr~Fowler's tract referred to on \PageRef{p.}{266}, or in de~la~Vallée Poussin's \textit{Cours d'Analyse}, vol.~ii, pp.~396~\textit{et~seq.} \begin{Examples}{LIX.} \Item{1.} Let $\phi(x) = ax + b$, so that $y = \phi(x)$ is a straight line. 
The conditions for contact at the point for which $x = \xi$ are $f(\xi) = a\xi + b$, $f'(\xi) = a$. If we determine $a$~and~$b$ so as to satisfy these equations we find $a = f'(\xi)$, $b = f(\xi) - \xi f'(\xi)$, and the equation of the tangent to $y = f(x)$ at the point $x = \xi$ is \[ y = xf'(\xi) + \{f(\xi) - \xi f'(\xi)\}, \] or $y - f(\xi) = (x - \xi)f'(\xi)$. Cf.\ \Ex{xxxix}.~5. \Item{2.} The fact that the line is to have simple contact with the curve completely determines the line. In order that the tangent should have \emph{contact of the second order} with the curve we must have $f''(\xi) = \phi''(\xi)$, \ie\ $f''(\xi) = 0$. A point at which the tangent to a curve has contact of the second order is called a \Emph{point of inflexion}. %[** TN: Differs from the modern definition] \Item{3.} Find the points of inflexion on the graphs of the functions $3x^{4} - 6x^{3} + 1$, $2x/(1 + x^{2})$, $\sin x$, $a\cos^{2}x + b\sin^{2}x$, $\tan x$, $\arctan x$. \Item{4.} Show that the conic $ax^{2} + 2hxy + by^{2} + 2gx + 2fy + c = 0$ cannot have a point of inflexion. [Here $ax + hy + g + (hx + by + f)y_{1} = 0$ and \[ a + 2hy_{1} + by_{1}^{2} + (hx + by + f)y_{2} = 0, \] suffixes denoting differentiations. Thus at a point of inflexion \[ a + 2hy_{1} + by_{1}^{2} = 0, \] or \[ a(hx + by + f)^{2} - 2h(ax + hy + g)(hx + by + f) + b(ax + hy + g)^{2} = 0, \] or \[ (ab - h^{2})\{ax^{2} + 2hxy + by^{2} + 2gx + 2fy\} + af^{2} - 2fgh + bg^{2} = 0. \] But this is inconsistent with the equation of the conic unless \[ af^{2} - 2fgh + bg^{2} = c(ab - h^{2}) \] or $abc + 2fgh - af^{2} - bg^{2} - ch^{2} = 0$; and this is the condition that the conic should degenerate into two straight lines.] \Item{5.} The curve $y = (ax^{2} + 2bx + c)/(\alpha x^{2} + 2\beta x + \gamma)$ has one or three points of inflexion according as the roots of $\alpha x^{2} + 2\beta x + \gamma = 0$ are real or complex. 
[The equation of the curve can, by a change of origin (cf.\ \Ex{xlvi}.~15), be reduced to the form \[ \eta = \xi/(A\xi^{2} + 2B\xi + C) = \xi/\{A(\xi - p)(\xi - q)\}, \] where $p$,~$q$ are real or conjugate. The condition for a point of inflexion will be found to be $\xi^{3} - 3pq\xi + pq(p + q) = 0$, which has one or three real roots according as $\DPtypo{\{pq(p - q)\}}{\{pq(p - q)\}^{2}}$ is positive or negative, \ie\ according as $p$~and~$q$ are real or conjugate.] \PageSep{273} \Item{6.} Discuss in particular the curves $y = (1 - x)/(1 + x^{2})$, $y = (1 - x^{2})/(1 + x^{2})$, $y = (1 + x^{2})/(1 - x^{2})$. \Item{7.} Show that when the curve of Ex.~5 has three points of inflexion, they lie on a straight line. [The equation $\xi^{3} - 3pq\xi + pq(p + q) = 0$ can be put in the form $(\xi - p)(\xi - q)(\xi + p + q) + (p - q)^{2}\xi = 0$, so that the points of inflexion lie on the line $\xi + A(p - q)^{2}\eta + p + q = 0$ or $A\xi - 4(AC - B^{2})\eta = 2B$.] \Item{8.} Show that the curves $y = x\sin x$, $y = (\sin x)/x$ have each infinitely many points of inflexion. \Item{9.} \Topic{Contact of a circle with a curve. Curvature.\footnote {A much fuller discussion of the theory of curvature will be found in Mr~Fowler's %[** TN: Reference on page 272 of orig. points to page 266.] tract referred to on \PageRef{p.}{\DPchg{272}{266}}.}} The general equation of a circle, viz. \[ (x - a)^{2} + (y - b)^{2} = r^{2}, \Tag{(1)} \] contains three arbitrary constants. Let us attempt to determine them so that the circle has contact of as high an order as possible with the curve $y = f(x)$ at the point $(\xi, \eta)$, where $\eta = f(\xi)$. We write $\eta_{1}$,~$\eta_{2}$ for $f'(\xi)$,~$f''(\xi)$. Differentiating the equation of the circle twice we obtain \begin{align} (x - a) + (y - b)y_{1} &= 0, \Tag{(2)}\\ 1 + y_{1}^{2} + (y - b)y_{2} &= 0. 
\Tag{(3)} \end{align} If the circle touches the curve then the equations \Eq{(1)}~and~\Eq{(2)} are satisfied when $x = \xi$, $y = \eta$, $y_{1} = \eta_{1}$. This gives $(\xi - a)/\eta_{1} = -(\eta - b) = r/\sqrtp{1 + \eta_{1}^{2}}$. If the contact is of the second order then the equation~\Eq{(3)} must also be satisfied when $y_{2} = \eta_{2}$. Thus $b = \eta + \{(1 + \eta_{1}^{2})/\eta_{2}\}$; and hence we find \[ a = \xi - \frac{\eta_{1}(1 + \eta_{1}^{2})}{\eta_{2}},\quad b = \eta + \frac{1 + \eta_{1}^{2}}{\eta_{2}},\quad r = \frac{(1 + \eta_{1}^{2})^{3/2}}{\eta_{2}}. \] The circle which has contact of the second order with the curve at the point $(\xi, \eta)$ is called the \Emph{circle of curvature}, and its radius the \Emph{radius of curvature}. The \Emph{measure of curvature} (or simply the \emph{curvature}) is the reciprocal of the radius: thus the measure of curvature is $f''(\xi)/\{1 + [f'(\xi)]^{2}\}^{3/2}$, or \[ \frac{d^{2}\eta}{d\xi^{2}} \bigg/ \biggl\{1 + \biggl(\frac{d\eta}{d\xi}\biggr)^{2}\biggr\}^{3/2}. \] \Item{10.} Verify that the curvature of a circle is constant and equal to the reciprocal of the radius; and show that the circle is the only curve whose curvature is constant. \Item{11.} {\Loosen Find the centre and radius of curvature at any point of the conics $y^{2} = 4ax$, $(x/a)^{2} + (y/b)^{2} = 1$.} \Item{12.} In an ellipse the radius of curvature at~$P$ is~$CD^{3}/ab$, where $CD$~is the semi-diameter conjugate to~$CP$. \PageSep{274} \Item{13.} Show that in general a conic can be drawn to have contact of the fourth order with the curve $y = f(x)$ at a given point~$P$. [Take the general equation of a conic, viz. \[ ax^{2} + 2hxy + by^{2} + 2gx + 2fy + c = 0, \] and differentiate four times with respect to~$x$. 
Using suffixes to denote differentiation we obtain \begin{align*} ax + hy + g + (hx + by + f) y_{1} &= 0,\\ a + 2hy_{1} + by_{1}^{2} + (hx + by + f) y_{2} &= 0,\\ 3(h + by_{1}) y_{2} + (hx + by + f) y_{3} &= 0,\\ 4(h + by_{1}) y_{3} +3by_{2}^{2} + (hx + by + f) y_{4} &= 0. \end{align*} If the conic has contact of the fourth order, then these five equations must be satisfied by writing $\xi$, $\eta$, $\eta_{1}$, $\eta_{2}$, $\eta_{3}$, $\eta_{4}$, for $x$, $y$, $y_{1}$, $y_{2}$, $y_{3}$, $y_{4}$. We have thus just enough equations to determine the ratios $a : b : c : f : g : h$.] \Item{14.} An infinity of conics can be drawn having contact of the third order with the curve at~$P$. Show that their centres all lie on a straight line. [Take the tangent and normal as axes. Then the equation of the conic is of the form $2y = ax^{2} + 2hxy + by^{2}$, and when $x$~is small one value of~$y$ may be expressed (\Ref{Ch.}{V}, \MiscEx{V}~22) in the form \[ y = \tfrac{1}{2}ax^{2} + \left(\tfrac{1}{2}ah + \epsilon_{x}\right) x^{3}, \] where $\epsilon_{x} \to 0$ with~$x$. But this expression must be the same as \[ y = \tfrac{1}{2}f''(0) x^{2} + \{\tfrac{1}{6}f'''(0) + \epsilon'_{x}\} x^{3}, \] where $\epsilon'_{x} \to 0$ with~$x$, and so $a = f''(0)$, $h = f'''(0)/3f''(0)$, in virtue of the result of \Ex{lv}.~15. But the centre lies on the line $ax + hy = 0$.] \Item{15.} Determine a parabola which has contact of the third order with the ellipse $(x/a)^{2} + (y/b)^{2} = 1$ at the extremity of the major axis. \Item{16.} The locus of the centres of conics which have contact of the third order with the ellipse $(x/a)^{2} + (y/b)^{2} = 1$ at the point $(a\cos\alpha, b\sin\alpha)$ is the diameter $x/(a\cos\alpha) = y/(b\sin\alpha)$. [For the ellipse itself is one such conic.] \end{Examples} \Paragraph{152. 
Differentiation of functions of several variables.} So far we have been concerned exclusively with functions of a single variable~$x$, but there is nothing to prevent us applying the notion of differentiation to functions of several variables $x$, $y$,~\dots. Suppose then that $f(x, y)$~is a function of two\footnote {The new points which arise when we consider functions of several variables are illustrated sufficiently when there are two variables only. The generalisations of our theorems for three or more variables are in general of an obvious character.} real variables $x$~and~$y$, and that the limits \[ \lim_{h\to 0}\frac{f(x + h, y) - f(x, y)}{h},\quad \lim_{k\to 0}\frac{f(x, y + k) - f(x, y)}{k} \] \PageSep{275} exist for all values of $x$~and~$y$ in question, that is to say that $f(x, y)$ possesses a derivative~$df/dx$ or~$D_{x}f(x, y)$ with respect to~$x$ and a derivative~$df/dy$ or~$D_{y}f(x, y)$ with respect to~$y$. It is usual to call these derivatives the \emph{partial differential coefficients} of~$f$, and to denote them by \[ \frac{\dd f}{\dd x},\quad \frac{\dd f}{\dd y} \] or \[ f_{x}'(x, y),\quad f_{y}'(x, y) \] or simply $f_{x}'$,~$f_{y}'$ or $f_{x}$,~$f_{y}$. The reader must not suppose, however, that these new notations imply any essential novelty of idea: `partial differentiation' with respect to~$x$ is exactly the same process as ordinary differentiation, the only novelty lying in the presence in~$f$ of a second variable~$y$ independent of~$x$. In what precedes we have supposed $x$~and~$y$ to be two real variables entirely independent of one another. If $x$~and~$y$ were connected by a relation the state of affairs would be very different. In this case our definition of~$f_{x}'$ would fail entirely, as we could not change~$x$ into~$x + h$ without at the same time changing~$y$. But then $f(x, y)$ would not really be a function of two variables at all. 
A function of two variables, as we defined it in \Ref{Ch.}{II}, is essentially a function of two \emph{independent} variables. If $y$~depends on~$x$, $y$~is a function of~$x$, say $y = \phi(x)$; and then \[ f(x, y) = f\{x, \phi(x)\} \] is really a function of the single variable~$x$. Of course we may also represent it as a function of the single variable~$y$. Or, as is often most convenient, we may regard $x$~and~$y$ as functions of a third variable~$t$, and then $f(x, y)$, which is of the form $f\{\phi(t), \psi(t)\}$, is a function of the single variable~$t$. \begin{Examples}{LX.} \Item{1.} {\Loosen Prove that if $x = r\cos\theta$, $y = r\sin\theta$, so that $r = \sqrtp{x^{2} + y^{2}}$, $\theta = \arctan(y/x)$, then} \begin{align*} \frac{\dd r}{\dd x} &= \frac{x}{\sqrtp{x^{2} + y^{2}}}, &\frac{\dd r}{\dd y} &= \frac{y}{\sqrtp{x^{2} + y^{2}}}, &\frac{\dd \theta}{\dd x} &= -\frac{y}{x^{2} + y^{2}}, &\frac{\dd \theta}{\dd y} &= \frac{x}{x^{2} + y^{2}},\\ % \frac{\dd x}{\dd r} &= \cos\theta, &\frac{\dd y}{\dd r} &= \sin\theta, &\frac{\dd x}{\dd \theta} &= -r\sin\theta, &\frac{\dd y}{\dd \theta} &= r\cos\theta. \end{align*} \Item{2.} Account for the fact that $\dfrac{\dd r}{\dd x}\neq 1\bigg/\biggl(\dfrac{\dd x}{\dd r}\biggr)$ and $\dfrac{\dd \theta}{\dd x}\neq 1\bigg/\biggl(\dfrac{\dd x}{\dd \theta}\biggr)$. [When we were considering a function~$y$ of one variable~$x$ it followed from the definitions that $dy/dx$ and~$dx/dy$ were reciprocals. This is no longer the \PageSep{276} case when we are dealing with functions of two variables. Let $P$ (\Fig{46}) be the point $(x, y)$ or $(r, \theta)$. To find $\dd r/\dd x$ we must increase~$x$, say by an increment $MM_{1} = \delta x$, while keeping $y$~constant. This brings~$P$ to~$P_{1}$. If along~$OP_{1}$ we take $OP' = OP$, the increment of~$r$ is $P'P_{1} = \delta r$, say; and $\dd r/\dd x = \lim(\delta r/\delta x)$. If on the other hand we want to calculate $\dd x/\dd r$, $x$~and~$y$ %[Illustration: Fig. 46.] 
\Figure[2.25in]{46}{p276} being now regarded as functions of $r$~and~$\theta$, we must increase~$r$ by~$\Delta r$, say, keeping $\theta$~constant. This brings~$P$ to~$P_{2}$, where $PP_{2} = \Delta r$: the corresponding increment of~$x$ is $MM_{1} = \Delta x$, say; and \[ \dd x/\dd r = \lim(\Delta x/\Delta r). \] Now $\Delta x = \delta x$:\footnote {Of course the fact that $\Delta x = \delta x$ is due merely to the particular value of~$\Delta r$ that we have chosen (viz.~$PP_{2}$). Any other choice would give us values of $\Delta x$,~$\Delta r$ proportional to those used here.} but $\Delta r \neq \delta r$. Indeed it is easy to see from the figure that \[ \lim (\delta r/\delta x) = \lim (P'P_{1}/PP_{1}) = \cos\theta, \] but \[ \lim (\Delta r/\Delta x) = \lim (PP_{2}/PP_{1}) = \sec\theta, \] so that \[ \lim (\delta r/\Delta r) = \cos^{2}\theta. \] The fact is of course that \emph{$\dd x/\dd r$ and $\dd r/\dd x$ are not formed upon the same hypothesis as to the variation of~$P$.}] \Item{3.} Prove that if $z = f(ax + by)$ then $b(\dd z/\dd x) = a(\dd z/\dd y)$. \Item{4.} Find $\dd X/\dd x$, $\dd X/\dd y$,~\dots\ when $X + Y = x$, $Y = xy$. Express $x$,~$y$ as functions of $X$,~$Y$ and find $\dd x/\dd X$, $\dd x/\dd Y$,~\dots. \Item{5.} Find $\dd X/\dd x$,~\dots\ when $X + Y + Z = x$, $Y + Z = xy$, $Z = xyz$; express $x$,~$y$,~$z$ in terms of $X$,~$Y$,~$Z$ and find $\dd x/\dd X$,~\dots. [There is of course no difficulty in extending the ideas of the last section to functions of any number of variables. But the reader must be careful to impress on his mind that the notion of the partial derivative of a function of several variables is only determinate when \emph{all} the independent variables are specified. Thus if $u = x + y + z$, $x$,~$y$, and~$z$ being the independent variables, then $\dd u/\dd x = 1$. But if we regard $u$ as a function of the variables $x$, $x + y = \eta$, and $x + y + z = \zeta$, so that $u = \zeta$, then $\dd u/\dd x = 0$.] 
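The distinction drawn in Ex. 2 can be checked numerically. The Python sketch below is an editorial illustration, not part of Hardy's text: it approximates $\dd r/\dd x$ (increasing $x$ with $y$ constant) and $\dd x/\dd r$ (increasing $r$ with $\theta$ constant) by difference quotients at the point $(3, 4)$, and confirms that both equal $\cos\theta = 3/5$, so that neither is the reciprocal of the other.

```python
import math

x, y = 3.0, 4.0          # the point P, with r = 5 and cos(theta) = 3/5
h = 1e-6                 # small increment for the difference quotients

# dr/dx: increase x by h, keeping y constant.
dr_dx = (math.hypot(x + h, y) - math.hypot(x, y)) / h

# dx/dr: increase r by h, keeping theta constant, with x = r cos(theta).
r, theta = math.hypot(x, y), math.atan2(y, x)
dx_dr = ((r + h) * math.cos(theta) - r * math.cos(theta)) / h

# Both difference quotients approach cos(theta) = 0.6, so their product is
# cos^2(theta) = 0.36, not 1: the two derivatives are formed on different
# hypotheses as to the variation of P.
```

The product $0.36$ is exactly the $\cos^{2}\theta$ of the text's relation $\lim(\delta r/\Delta r) = \cos^{2}\theta$.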
\end{Examples} \Paragraph{153. Differentiation of a function of two functions.} There is a theorem concerning the differentiation of a function of \emph{one} variable, known generally as the \Emph{Theorem of the Total Differential Coefficient}, which is of very great importance and depends on the notions explained in the preceding section regarding functions of \emph{two} variables. This theorem gives us a rule for differentiating \[ f\{\phi(t), \psi(t)\}, \] with respect to~$t$. \PageSep{277} Let us suppose, in the first instance, that $f(x, y)$ is a function of the two variables $x$~and~$y$, and that $f_{x}'$,~$f_{y}'$ are continuous functions of both variables (\SecNo[§]{107}) for all of their values which come in question. And now let us suppose that the variation of $x$~and~$y$ is restricted in that $(x, y)$ lies on a curve \[ x = \phi(t),\quad y = \psi(t), \] where $\phi$ and~$\psi$ are functions of~$t$ with continuous differential coefficients $\phi'(t)$,~$\psi' (t)$. Then $f(x, y)$ reduces to a function of the single variable~$t$, say~$F(t)$. The problem is to determine~$F'(t)$. Suppose that, when $t$~changes to~$t + \tau$, $x$~and~$y$ change to $x + \xi$ and $y + \eta$. Then by definition \begin{align*} %[** TN: Third line not aligned in the original] \frac{dF(t)}{dt} &= \lim_{\tau\to 0} \frac{1}{\tau}[f\{\phi(t + \tau), \psi(t + \tau)\} - f\{\phi(t), \psi(t)\}]\\ &= \lim \frac{1}{\tau}\{f(x + \xi, y + \eta) - f(x, y)\} \\ &= \lim \left[ \frac{f(x + \xi, y + \eta) - f(x, y + \eta)}{\xi}\, \frac{\xi}{\tau} + \frac{f(x, y + \eta) - f(x, y)}{\eta}\, \frac{\eta}{\tau} \right]. \end{align*} But, by the Mean Value Theorem, \begin{align*} \{f(x + \xi, y + \eta) - f (x, y + \eta)\}/\xi &= f_{x}'(x + \theta\xi, y + \eta),\\ \{f(x, y + \eta) - f(x, y)\}/\eta &= f_{y}'(x, y + \theta'\eta), \end{align*} where $\theta$~and~$\theta'$ each lie between $0$ and~$1$. 
As $\tau \to 0$, $\xi \to 0$ and $\eta \to 0$, and $\xi/\tau \to \phi'(t)$, $\eta/\tau \to \psi'(t)$: also \[ f_{x}'(x + \theta\xi, y + \eta) \to f_{x}'(x, y),\quad f_{y}'(x, y + \theta'\eta) \to f_{y}'(x, y). \] Hence \[ F'(t) = D_{t}f \{\phi(t), \psi(t)\} = f_{x}'(x, y)\phi'(t) + f_{y}'(x, y)\psi'(t), \] where we are to put $x = \phi(t)$, $y = \psi(t)$ after carrying out the differentiations with respect to $x$~and~$y$. This result may also be expressed in the form \[ \frac{df}{dt} = \frac{\dd f}{\dd x}\, \frac{dx}{dt} + \frac{\dd f}{\dd y}\, \frac{dy}{dt}\Add{.} \] \begin{Examples}{LXI.} \Item{1.} Suppose $\phi(t) = (1 - t^{2})/(1 + t^{2})$, $\psi(t) = 2t/(1 + t^{2})$, so that the locus of~$(x, y)$ is the circle $x^{2} + y^{2} = 1$. Then \begin{align*} \phi'(t) &= -4t/(1 + t^{2})^{2},\quad \psi'(t) = 2(1 - t^{2})/(1 + t^{2})^{2},\\ F'(t) &= \{-4t/(1 + t^{2})^{2}\}f_{x}' + \{2(1 - t^{2})/(1 + t^{2})^{2}\}f_{y}', \end{align*} where $x$~and~$y$ are to be put equal to $(1 - t^{2})/(1 + t^{2})$ and $2t/(1 + t^{2})$ after carrying out the differentiations. \PageSep{278} {\Loosen We can easily verify this formula in particular cases. Suppose, \eg, that $f(x, y) = x^{2} + y^{2}$. Then $f_{x}' = 2x$, $f_{y}' = 2y$, and it is easily verified that $F'(t) = 2x\phi'(t) + 2y\psi'(t) = 0$, which is obviously correct, since $F(t) = 1$.} \Item{2.} Verify the theorem in the same way when (\ia)~$x = t^{m}$, $y = 1 - t^{m}$, $f(x, y) = x + y$; (\ib)~$x = a\cos t$, $y = a\sin t$, $f(x, y) = x^{2} + y^{2}$. \Item{3.} One of the most important cases is that in which $t$ is $x$~itself. We then obtain \[ D_{x}f\{x, \psi(x)\} = D_{x}f(x, y) + D_{y}f(x, y)\psi'(x), \] where $y$~is to be replaced by~$\psi(x)$ after differentiation. It was this case which led to the introduction of the notation $\dd f/\dd x$, $\dd f/\dd y$.
For it would seem natural to use the notation~$df/dx$ for \emph{either} of the functions $D_{x}f\{x, \psi(x)\}$ and $D_{x}f(x, y)$, in one of which $y$~is put equal to~$\psi(x)$ before and in the other after differentiation. Suppose for example that $y = 1 - x$ and $f(x, y) = x + y$. Then $D_{x}f(x, 1 - x) = D_{x}1 = 0$, but $D_{x}f(x, y) = 1$. The distinction between the two functions is adequately shown by denoting the first by~$df/dx$ and the second by~$\dd f/\dd x$, in which case the theorem takes the form \[ \frac{df}{dx} = \frac{\dd f}{\dd x} + \frac{\dd f}{\dd y}\, \frac{dy}{dx}; \] though this notation is also open to objection, in that it is a little misleading to denote the functions $f\{x, \psi(x)\}$ and $f(x, y)$, whose forms as functions of~$x$ are quite different from one another, by the same letter~$f$ in $df/dx$ and~$\dd f/\dd x$. \Item{4.} If the result of eliminating~$t$ between $x = \phi(t)$, $y = \psi(t)$ is $f(x, y) = 0$, then \[ \frac{\dd f}{\dd x}\, \frac{dx}{dt} + \frac{\dd f}{\dd y}\, \frac{dy}{dt} = 0. \] \Item{5.} If $x$~and~$y$ are functions of~$t$, and $r$~and~$\theta$ are the polar coordinates of $(x, y)$, then $r' = (xx' + yy')/r$, $\theta' = (xy' - yx')/r^{2}$, dashes denoting differentiations with respect to~$t$. \end{Examples} \Paragraph{154. The Mean Value Theorem for functions of two variables.} Many of the results of the last chapter depended upon the Mean Value Theorem, expressed by the equation \[ \phi(x + h) - \phi(x) = hf'(x + \theta h), \] or as it may be written, if $y = \phi(x)$, \[ \delta y = f'(x + \theta\, \delta x)\, \delta x. \] Now suppose that $z = f(x, y)$ is a function of the two independent variables $x$~and~$y$, and that $x$~and~$y$ receive increments $h$,~$k$ or $\delta x$,~$\delta y$ respectively: and let us attempt to express the corresponding increment of~$z$, viz. \[ \delta z = f(x + h, y + k) - f(x, y), \] in terms of $h$,~$k$ and the derivatives of~$z$ with respect to $x$~and~$y$. 
\PageSep{279} Let $f(x + ht, y + kt) = F(t)$. Then \[ f(x + h, y + k) - f(x, y) = F(1) - F(0) = F'(\theta), \] where $0 < \theta < 1$. But, by \SecNo[§]{153}, \begin{align*} F' (t) &= D_{t} f(x + ht, y + kt)\\ &= hf_{x}'(x + ht, y + kt) + kf_{y}'(x + ht, y + kt). \end{align*} Hence finally \[ \delta z = f(x + h, y + k) - f(x, y) = hf_{x}'(x + \theta h, y + \theta k) + kf_{y}'(x + \theta h, y + \theta k), \] which is the formula desired. Since $f_{x}'$,~$f_{y}'$ are supposed to be continuous functions of $x$~and~$y$, we have \begin{align*} f_{x}'(x + \theta h, y + \theta k) &= f_{x}'(x, y) + \epsilon_{h, k},\\ f_{y}'(x + \theta h, y + \theta k) &= f_{y}'(x, y) + \eta_{h, k}, \end{align*} where $\epsilon_{h, k}$ and~$\eta_{h, k}$ tend to zero as $h$~and~$k$ tend to zero. Hence the theorem may be written in the form \[ \delta z = (f_{x}' + \epsilon)\, \delta x + (f_{y}' + \eta)\, \delta y, \Tag{(1)} \] where $\epsilon$~and~$\eta$ are small when $\delta x$~and~$\delta y$ are small. The result embodied in~\Eq{(1)} may be expressed by saying that the equation \[ \delta z = f_{x}'\, \delta x + f_{y}'\, \delta y \] is \emph{approximately} true; \ie\ that the difference between the two sides of the equation is small in comparison with the larger of $\delta x$ and~$\delta y$.\footnote {Or with $|\delta x| + |\delta y|$ or $\sqrtp{\delta x^{2} + \delta y^{2}}$.} We must say `\emph{the larger of $\delta x$~and~$\delta y$}' because one of them might be small in comparison with the other; we might indeed have $\delta x = 0$ or $\delta y = 0$. \begin{Remark} It should be observed that if any equation of the form $\delta z = \lambda\, \delta x + \mu\, \delta y$ is `approximately true' in this sense, we must have $\lambda = f_{x}'$, $\mu = f_{y}'$. 
For we have \[ \delta z - f_{x}'\, \delta x - f_{y}'\, \delta y = \epsilon\, \delta x + \eta\, \delta y,\quad \delta z - \lambda\, \delta x - \mu\, \delta y = \epsilon'\, \delta x + \eta'\, \delta y \] where $\epsilon$,~$\eta$, $\epsilon'$,~$\eta'$ all tend to zero as $\delta x$~and~$\delta y$ tend to zero; and so \[ (\lambda - f_{x}')\, \delta x + (\mu - f_{y}')\, \delta y = \rho\, \delta x + \rho'\, \delta y \] where $\rho$~and~$\rho'$ tend to zero. Hence, if $\zeta$~is any assigned positive number, we can choose~$\sigma$ so that \[ |(\lambda - f_{x}')\, \delta x + (\mu - f_{y}')\, \delta y| < \zeta(|\delta x| + |\delta y|) \] for all values of $\delta x$ and~$\delta y$ numerically less than~$\sigma$. Taking $\delta y = 0$ we obtain $|(\lambda - f_{x}')\, \delta x| < \zeta|\delta x|$, or $|\lambda - f_{x}'| < \zeta$, and, as $\zeta$~may be as small as we please, this can only be the case if $\lambda = f_{x}'$. Similarly $\mu = f_{y}'$. \end{Remark} \PageSep{280} \Paragraph{155. Differentials.} In the applications of the Calculus, especially in geometry, it is usually most convenient to work with equations expressed not, like equation~\Eq{(1)} of \SecNo[§]{154}, in terms of the increments $\delta x$,~$\delta y$,~$\delta z$ of the functions $x$,~$y$,~$z$, but in terms of what are called their \emph{differentials} $dx$,~$dy$,~$dz$. Let us return for a moment to a function $y = f(x)$ of a single variable~$x$. If $f'(x)$~is continuous then \[ \delta y = \{f'(x) + \epsilon\}\, \delta x, \Tag{(1)} \] where $\epsilon \to 0$ as $\delta x \to 0$: in other words the equation \[ \delta y = f'(x)\, \delta x \Tag{(2)} \] is `approximately' true. We have up to the present attributed no meaning of any kind to the symbol~$dy$ standing by itself. We now agree to \emph{define}~$dy$ by the equation \[ dy = f'(x)\, \delta x. \Tag{(3)} \] If we choose for~$y$ the particular function~$x$, we obtain \[ dx = \delta x, \Tag{(4)} \] so that \[ dy = f'(x)\, dx. 
\Tag{(5)} \] If we divide both sides of~\Eq{(5)} by~$dx$ we obtain \[ \frac{dy}{dx} = f'(x), \Tag{(6)} \] where $dy/dx$ denotes not, as heretofore, the differential coefficient of~$y$, but the quotient of the differentials $dy$,~$dx$. The symbol $dy/dx$ thus acquires a double meaning; but there is no inconvenience in this, since \Eq{(6)}~is true whichever meaning we choose. \begin{Remark} The equation~\Eq{(5)} has two apparent advantages over~\Eq{(2)}. It is exact and not merely approximate, and its truth does not depend on any assumption as to the continuity of~$f'(x)$. On the other hand it is precisely the fact that we can, under certain conditions, pass from the exact equation~\Eq{(5)} to the approximate equation~\Eq{(2)}, which gives the former its importance. The advantages of the `differential' notation are in reality of a purely technical character. These technical advantages are however so great, especially when we come to deal with functions of several variables, that the use of the notation is almost inevitable. When $f'(x)$~is continuous, we have \[ \lim \frac{dy}{\delta y} = 1 \] when $\delta x \to 0$. This is sometimes expressed by saying that $dy$~is the \emph{principal part} of~$\delta y$ when $\delta x$~is small, just as we might say that $ax$~is the `principal part' of $ax + bx^{2}$ when $x$~is small. \end{Remark} \PageSep{281} We pass now to the corresponding definitions connected with a function~$z$ of two independent variables $x$~and~$y$. We define the differential~$dz$ by the equation \[ dz = f_{x}'\, \delta x + f_{y}'\, \delta y. \Tag{(7)} \] Putting $z = x$ and $z = y$ in turn, we obtain \begin{align*} dx &= \delta x,\quad dy = \delta y, \Tag{(8)} \intertext{so that} dz &= f_{x}'\, dx + f_{y}'\, dy, \Tag{(9)} \end{align*} which is the exact equation corresponding to the approximate equation~\Eq{(1)} of \SecNo[§]{154}. 
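The sense in which the differential $dz = f_{x}'\, dx + f_{y}'\, dy$ is the `principal part' of the increment $\delta z$ can be seen numerically. The Python sketch below is an editorial illustration, not part of Hardy's text; the function $f(x, y) = xy^{2}$ and the increments are chosen arbitrarily. Each tenfold reduction of the increments cuts the gap $\delta z - dz$ by roughly a hundredfold, as a term of the second order should behave.

```python
def f(x, y):
    return x * y * y          # an arbitrary smooth function; f_x = y^2, f_y = 2xy

x, y = 1.0, 2.0
gaps = []
for scale in (1e-2, 1e-3, 1e-4):
    dx, dy = scale, -2 * scale
    delta_z = f(x + dx, y + dy) - f(x, y)      # the true increment of z
    dz = (y * y) * dx + (2 * x * y) * dy       # the differential f_x dx + f_y dy
    gaps.append(abs(delta_z - dz))

# gaps shrink like the *square* of the increments: the differential
# accounts for all of delta_z except terms of the second order.
```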
Here again it is to be observed that the former is of importance only for reasons of practical convenience in working and because the latter can in certain circumstances be deduced from it. \begin{Remark} One property of the equation~\Eq{(9)} deserves special remark. We saw in \SecNo[§]{153} that if $z = f(x, y)$, $x$~and~$y$ being not independent but functions of a single variable~$t$, so that $z$~is also a function of $t$~alone, then \[ \frac{dz}{dt} = \frac{\dd f}{\dd x}\, \frac{dx}{dt} + \frac{\dd f}{\dd y}\, \frac{dy}{dt}. \] Multiplying this equation by~$dt$ and observing that \[ dx = \frac{dx}{dt}\, dt,\quad dy = \frac{dy}{dt}\, dt,\quad dz = \frac{dz}{dt}\, dt, \] we obtain \[ dz = f_{x}'\, dx + f_{y}'\, dy, \] which is the same in form as~\Eq{(9)}. Thus \emph{the formula which expresses~$dz$ in terms of $dx$~and~$dy$ is the same whether the variables $x$~and~$y$ are independent or not}. This remark is of great importance in applications. It should also be observed that if $z$~is a function of the two independent variables $x$~and~$y$, and \[ dz = \lambda\, dx + \mu\, dy, \] then $\lambda = f_{x}'$, $\mu = f_{y}'$. This follows at once from the last paragraph of~\SecNo[§]{154}. It is obvious that the theorems and definitions of the last three sections are capable of immediate extension to functions of any number of variables. \end{Remark} \begin{Examples}{LXII.} \Item{1.} The area of an ellipse is given by $A = \pi ab$, where $a$,~$b$ are the semiaxes. Prove that \[ \frac{dA}{A} = \frac{da}{a} + \frac{db}{b}, \] and state the corresponding approximate equation connecting the increments of the axes and the area. 
\PageSep{282} \Item{2.} Express $\Delta$, the area of a triangle~$ABC$, as a function of (i)~$a$, $B$,~$C$, (ii)~$A$, $b$,~$c$, and (iii)~$a$, $b$,~$c$, and establish the formulae \begin{gather*} \frac{d\Delta}{\Delta} = 2\frac{da}{a} + \frac{c\, dB}{a\sin B} + \frac{b\, dC}{a\sin C},\quad \frac{d\Delta}{\Delta} = \cot A\, dA + \frac{db}{b} + \frac{dc}{c},\\ d\Delta = R(\cos A\, da + \cos B\, db + \cos C\, dc), \end{gather*} %[** TN: Sole instance of circumcircle, not hyphenated in the original] where $R$~is the radius of the circumcircle. \Item{3.} The sides of a triangle vary in such a way that the area remains constant, so that $a$~may be regarded as a function of $b$~and~$c$. Prove that \[ \frac{\dd a}{\dd b} = -\frac{\cos B}{\cos A},\quad \frac{\dd a}{\dd c} = -\frac{\cos C}{\cos A}. \] [This follows from the equations \[ da = \frac{\dd a}{\dd b}\, db + \frac{\dd a}{\dd c}\, dc,\quad \cos A\, da + \cos B\, db + \cos C\, dc = 0.\Add{]} \] \Item{4.} If $a$,~$b$,~$c$ vary so that $R$~remains constant, then \[ \frac{da}{\cos A} + \frac{db}{\cos B} + \frac{dc}{\cos C} = 0, \] and so \[ \frac{\dd a}{\dd b} = -\frac{\cos A}{\cos B},\quad \frac{\dd a}{\dd c} = -\frac{\cos A}{\cos C}. \] [Use the formulae $a = 2R\sin A$,~\dots, and the facts that $R$ and $A + B + C$ are constant.] \Item{5.} If $z$~is a function of $u$~and~$v$, which are functions of $x$~and~$y$, then \[ \frac{\dd z}{\dd x} = \frac{\dd z}{\dd u}\, \frac{\dd u}{\dd x} + \frac{\dd z}{\dd v}\, \frac{\dd v}{\dd x},\quad \frac{\dd z}{\dd y} = \frac{\dd z}{\dd u}\, \frac{\dd u}{\dd y} + \frac{\dd z}{\dd v}\, \frac{\dd v}{\dd y}. \] [We have \[ dz = \frac{\dd z}{\dd u}\, du + \frac{\dd z}{\dd v}\, dv,\quad du = \frac{\dd u}{\dd x}\, dx + \frac{\dd u}{\dd y}\, dy,\quad dv = \frac{\dd v}{\dd x}\, dx + \frac{\dd v}{\dd y}\, dy. \] Substitute for $du$~and~$dv$ in the first equation and compare the result with the equation \[ dz = \frac{\dd z}{\dd x}\, dx + \frac{\dd z}{\dd y}\, dy.] 
\] \Item{6.} Let $z$~be a function of $x$~and~$y$, and let $X$,~$Y$,~$Z$ be defined by the equations \[ x = a_{1} X + b_{1} Y + c_{1} Z,\quad y = a_{2} X + b_{2} Y + c_{2} Z,\quad z = a_{3} X + b_{3} Y + c_{3} Z. \] Then $Z$~may be expressed as a function of $X$~and~$Y$. Express $\dd Z/\dd X$, $\dd Z/\dd Y$ in terms of $\dd z/\dd x$, $\dd z/\dd y$. [Let these differential coefficients be denoted by $P$,~$Q$ and $p$,~$q$. Then $dz - p\, dx - q\, dy = 0$, or \[ (c_{1} p + c_{2} q - c_{3})\, dZ + (a_{1} p + a_{2} q - a_{3})\, dX + (b_{1} p + b_{2} q - b_{3})\, dY = 0. \] \PageSep{283} Comparing this equation with $dZ - P\, dX - Q\, dY = 0$ we see that \[ P = -\frac{a_{1}p + a_{2}q - a_{3}}{c_{1}p + c_{2}q - c_{3}},\quad Q = -\frac{b_{1}p + b_{2}q - b_{3}}{c_{1}p + c_{2}q - c_{3}}.] \] \Item{7.} If \[ (a_{1} x + b_{1} y + c_{1} z)p + (a_{2} x + b_{2} y + c_{2} z)q = a_{3} x + b_{3} y + c_{3} z, \] then \[ (a_{1} X + b_{1} Y + c_{1} Z) P + (a_{2} X + b_{2} Y + c_{2} Z) Q = a_{3} X + b_{3} Y + c_{3} Z. \] \MathTrip{1899.} \Item{8.} \Topic{Differentiation of implicit functions.} Suppose that $f(x, y)$ and its derivative $f_{y}'(x, y)$ are continuous in the neighbourhood of the point $(a, b)$, and that \[ f(a, b) = 0,\quad f_{b}'(a, b) \neq 0. \] Then we can find a neighbourhood of~$(a, b)$ throughout which $f_{y}'(x, y)$ has always the same sign. Let us suppose, for example, that $f_{y}'(x, y)$~is positive near $(a, b)$. Then $f(x, y)$~is, for any value of~$x$ sufficiently near to~$a$, and for values of~$y$ sufficiently near to~$b$, an increasing function of~$y$ in the stricter sense of \SecNo[§]{95}. It follows, by the theorem of \SecNo[§]{108}, that there is a unique continuous function~$y$ which is equal to~$b$ when $x = a$ and which satisfies the equation $f(x, y) = 0$ for all values of~$x$ sufficiently near to~$a$. Let us now suppose that $f(x, y)$ possesses a derivative $f_{x}'(x, y)$ which is also continuous near $(a, b)$. 
If $f(x, y) = 0$, $x = a + h$, $y = b + k$, we have \[ 0 = f(x, y) - f(a, b) = (f_{a}' + \epsilon) h + (f_{b}' + \eta) k, \] where $\DPtypo{}{\epsilon}$ and~$\eta$ tend to zero with $h$~and~$k$. Thus \[ \frac{k}{h} = -\frac{f_{a}' + \epsilon}{f_{b}' + \eta} \to -\frac{f_{a}'}{f_{b}'}, \] or \[ \frac{dy}{dx} = -\frac{f_{a}'}{f_{b}'}. \] \Item{9.} The equation of the tangent to the curve $f(x, y) = 0$, at the point $x_{0}$,~$y_{0}$, is \[ (x - x_{0}) f_{x_{0}}'(x_{0}, y_{0}) + (y - y_{0}) f_{y_{0}}'(x_{0}, y_{0}) = 0. \] \end{Examples} \Paragraph{156. Definite Integrals and Areas.} It will be remembered that, in \Ref{Ch.}{VI}, \SecNo[§]{145}, we assumed that, if $f(x)$~is a continuous function of~$x$, and $PQ$~is the %[Illustration: Fig. 47.] \Figure[2.5in]{47}{p283} graph of $y = f(x)$, then the region~$PpqQ$ shown in \Fig{47} has associated with it a definite number which we call its \emph{area}. It is clear that, if we denote $Op$~and~$Oq$ by $a$~and~$x$, and allow $x$ to vary, this area is a function of~$x$, which we denote by~$F(x)$. \PageSep{284} Making this assumption, we proved in \SecNo[§]{145} that $F'(x) = f(x)$, and we showed how this result might be used in the calculation of the areas of particular curves. But we have still to justify the fundamental assumption that there is such a number as the area~$F(x)$. We know indeed what is meant by the area of a \emph{rectangle}, and that it is measured by the product of its sides. Also the properties of triangles, parallelograms, and polygons proved by Euclid enable us to attach a definite meaning to the areas of such figures. But nothing which we know so far provides us with a direct definition of the area of a figure bounded by curved lines. 
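The definition about to be developed may be previewed numerically. The Python sketch below is an editorial illustration, not part of Hardy's text: it encloses the region under $y = x^{2}$ between $0$ and~$1$ by included and including sets of rectangles. Since $x^{2}$ increases on the interval, the lower bound on each sub-interval is attained at its left end and the upper bound at its right end, and both sums close down on the common value $\frac{1}{3}$.

```python
def lower_and_upper_sums(f, a, b, n):
    # Divide [a, b] into n equal sub-intervals.  For an increasing f the
    # lower bound m_nu is f at the left end of each sub-interval and the
    # upper bound M_nu is f at the right end.
    w = (b - a) / n
    points = [a + i * w for i in range(n + 1)]
    s = sum(f(points[i]) * w for i in range(n))        # included rectangles
    S = sum(f(points[i + 1]) * w for i in range(n))    # including rectangles
    return s, S

s, S = lower_and_upper_sums(lambda x: x * x, 0.0, 1.0, 1000)
# s <= 1/3 <= S, and S - s = {f(b) - f(a)} * w shrinks as n grows.
```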
We shall now show how to give a definition of~$F(x)$ which will enable us to \emph{prove} its existence.\footnote {The argument which follows is modelled on that given in Goursat's \textit{Cours d'Analyse} (second edition), vol.~i, pp.~171~\textit{et~seq.}; but Goursat's treatment is much more general.} Let us suppose $f(x)$ continuous throughout the interval~$\DPmod{(a, b)}{[a, b]}$, and let us divide up the interval into a number of sub-intervals by means of the points of division $x_{0}$,~$x_{1}$, $x_{2}$,~\dots, $x_{n}$, where \[ a = x_{0} < x_{1} < \dots < x_{n-1} < x_{n} = b. \] Further, let us denote by~$\delta_{\nu}$ the interval $\DPmod{(x_{\nu}, x_{\nu+1})}{[x_{\nu}, x_{\nu+1}]}$, and by~$m_{\nu}$ the lower bound (\SecNo[§]{102}) of~$f(x)$ in~$\delta_{\nu}$, and let us write \[ %[** TN: Hardy now means the *length* of \delta_{\nu}]
s = m_{0}\delta_{0} + m_{1}\delta_{1} + \dots + m_{n-1}\delta_{n-1} = \tsum m_{\nu}\delta_{\nu}, \] say. {\Loosen It is evident that, if $M$~is the upper bound of~$f(x)$ in~$\DPmod{(a, b)}{[a, b]}$, then $s \leq M(b - a)$. The aggregate of values of~$s$ is therefore, in the language of \SecNo[§]{80}, bounded above, and possesses an upper bound which we will denote by~$j$. No value of~$s$ exceeds~$j$, but there are values of~$s$ which exceed any number less than~$j$.} In the same way, if $M_{\nu}$~is the upper bound of~$f(x)$ in~$\delta_{\nu}$, we can define the sum \[ S = \tsum M_{\nu}\delta_{\nu}. \] {\Loosen It is evident that, if $m$~is the lower bound of~$f(x)$ in~$\DPmod{(a, b)}{[a, b]}$, then $S \geq m(b - a)$. The aggregate of values of~$S$ is therefore bounded below, and possesses a lower bound which we will denote by~$J$.
No value of~$S$ is less than~$J$, but there are values of~$S$ less than any number greater than~$J$.} \PageSep{285} \begin{Remark} It will help to make clear the significance of the sums $s$ and~$S$ if we observe that, in the simple case in which $f(x)$~increases steadily from $x = a$ to $x = b$, $m_{\nu}$~is $f(x_{\nu})$ and $M_{\nu}$~is $f(x_{\nu+1})$. In this case $s$~is the total area of the rectangles shaded in \Fig{48}, and $S$~is the %[Illustration: Fig. 48.] \Figure[2.25in]{48}{p285} area bounded by a thick line. In general $s$ and~$S$ will still be areas, composed of rectangles, respectively included in and including the curvilinear region whose area we are trying to define. \end{Remark} We shall now show that \emph{no sum such as~$s$ can exceed any sum such as~$S$.} Let $s$,~$S$ be the sums corresponding to one mode of subdivision, and $s'$,~$S'$ those corresponding to another. We have to show that $s \leq S'$ and $s' \leq S$. We can form a third mode of subdivision by taking as dividing points all points which are such for either $s$,~$S$ or $s'$,~$S'$. Let $\mathbf{s}$,~$\mathbf{S}$ be the sums corresponding to this third mode of subdivision. Then it is easy to see that \[ \mathbf{s} \geq s,\quad \mathbf{s} \geq s',\quad \mathbf{S} \leq S,\quad \mathbf{S} \leq S'. \Tag{(1)} \] For example, $\mathbf{s}$~differs from~$s$ in that at least one interval~$\delta_{\nu}$ which occurs in~$s$ is divided into a number of smaller intervals \[ \delta_{\nu, 1},\ \delta_{\nu, 2},\ \dots,\ \delta_{\nu, p}, \] so that a term $m_{\nu}\delta_{\nu}$ of~$s$ is replaced in~$\mathbf{s}$ by a sum \[ m_{\nu, 1}\delta_{\nu, 1} + m_{\nu, 2}\delta_{\nu, 2} + \dots + m_{\nu, p}\delta_{\nu, p}, \] where $m_{\nu, 1}$, $m_{\nu, 2}$,~\dots\ are the lower bounds of~$f(x)$ in $\delta_{\nu, 1}$, $\delta_{\nu, 2}$,~\dots. But evidently $m_{\nu, 1} \geq m_{\nu}$, $m_{\nu, 2} \geq m_{\nu}$,~\dots, so that the sum just written is not less than~$m_{\nu}\delta_{\nu}$. 
Hence $\mathbf{s} \geq s;$ and the other inequalities~\Eq{(1)} can be established in the same way. But, since $\mathbf{s} \leq \mathbf{S}$, it follows that \[ s \leq \mathbf{s} \leq \mathbf{S} \leq S', \] which is what we wanted to prove. It also follows that $j \leq J$. For we can find an~$s$ as near to~$j$ as we please and an~$S$ as near to~$J$ as we please,\footnote {The $s$ and the~$S$ do not in general correspond to the same mode of subdivision.} and so $j > J$ would involve the existence of an~$s$ and an~$S$ for which $s > S$. \PageSep{286} So far we have made no use of the fact that $f(x)$~is continuous. We shall now show that $j = J$, and that the sums $s$,~$S$ tend to the limit~$J$ when the points of division~$x_{\nu}$ are multiplied indefinitely in such a way that all the intervals~$\delta_{\nu}$ tend to zero. More precisely, we shall show that, \begin{Result}given any positive number~$\epsilon$, it is possible to find~$\delta$ so that \[ 0 \leq J - s < \epsilon,\quad 0 \leq S - J < \epsilon \] whenever $\delta_{\nu} < \delta$ for all values of~$\nu$. \end{Result} There is, by Theorem~II of \SecNo[§]{106}, a number~$\delta$ such that \[ M_{\nu} - m_{\nu} < \epsilon/(b - a), \] whenever every~$\delta_{\nu}$ is less than~$\delta$. Hence \[ S - s = \tsum (M_{\nu} - m_{\nu})\, \delta_{\nu} < \epsilon. \] But \[ S - s = (S - J) + (J - j) + (j - s); \] and all the three terms on the right-hand side are positive, and therefore all less than~$\epsilon$. As $J - j$~is a constant, it must be zero. Hence $j = J$ and $0 \leq j - s < \epsilon$, $0 \leq S - J < \epsilon$, as was to be proved. We define the area of~$PpqQ$ as being \emph{the common limit of $s$~and~$S$, that is to say~$J$}. It is easy to give a more general form to this definition. Consider the sum \[ \sigma = \tsum f_{\nu}\delta_{\nu} \] where $f_{\nu}$~denotes the value of~$f(x)$ at any point in~$\delta_{\nu}$. 
Then $\sigma$ plainly lies between $s$~and~$S$, and so tends to the limit~$J$ when the intervals~$\delta_{\nu}$ tend to zero. We may therefore define the area as the limit of~$\sigma$. \Paragraph{157. The definite integral.} Let us now suppose that $f(x)$~is a continuous function, so that the region bounded by the curve $y = f(x)$, the ordinates $x = a$ and $x = b$, and the axis of~$x$, has a definite area. We proved in \Ref{Ch.}{VI}, \SecNo[§]{145}, that if $F(x)$~is an `integral function' of~$f(x)$, \ie\ if \[ F'(x) = f(x),\quad F(x) = \int f(x)\, dx, \] then the area in question is $F(b) - F(a)$. As it is not always practicable actually to determine the form of~$F(x)$, it is convenient to have a formula which represents the area~$PpqQ$ and contains no explicit reference to~$F(x)$. We shall write \[ (PpqQ) = \int_{a}^{b} f(x)\, dx. \] \PageSep{287} The expression on the right-hand side of this equation may then be regarded as being defined in either of two ways. We may regard it as simply an abbreviation for $F(b) - F(a)$, where $F(x)$~is some integral function of~$f(x)$, whether an actual formula expressing it is known or not; or we may regard it as the value of the area~$PpqQ$, as directly defined in~\SecNo[§]{156}. The number \[ \int_{a}^{b} f(x)\, dx \] is called a \Emph{definite integral}; $a$~and~$b$ are called its \Emph{lower and upper limits}; $f(x)$~is called the \Emph{subject of integration} or \Emph{integrand}; and the interval~$\DPmod{(a, b)}{[a, b]}$ the \Emph{range of integration}. The definite integral depends on $a$~and~$b$ and the form of the function~$f(x)$ only, and is not a function of~$x$. On the other hand the integral function \[ F(x) = \int f(x)\, dx \] is sometimes called the \Emph{indefinite integral} of~$f(x)$. \begin{Remark} The distinction between the definite and the indefinite integral is merely one of point of view. 
The definite integral $\ds\int_{a}^{b} f(x)\, dx = F(b) - F(a)$ is a function of~$b$, and may be regarded as a particular integral function of~$f(b)$. On the other hand the indefinite integral~$F(x)$ can always be expressed by means of a definite integral, since \[ F(x) = F(a) + \int_{a}^{x} f(t)\, dt. \] But when we are considering `indefinite integrals' or `integral functions' we are usually thinking of \emph{a relation between two functions}, in virtue of which one is the derivative of the other. And when we are considering a `definite integral' we are not as a rule concerned with any possible variation of the limits. Usually the limits are constants such as $0$ and~$1$; and \[ \int_{0}^{1} f(x)\, dx = F(1) - F(0) \] is not a function at all, but a mere number. It should be observed that the integral $\ds\int_{a}^{x} f(t)\, dt$, having a differential coefficient~$f(x)$, is \textit{a~fortiori} a continuous function of~$x$. Since $1/x$~is continuous for all positive values of~$x$, the investigations of the preceding paragraphs supply us with a proof of the actual existence of the function~$\log x$, which we agreed to assume provisionally in~\SecNo[§]{128}. \end{Remark} \PageSep{288} \Paragraph{158. Area of a sector of a circle. The circular functions.} The theory of the trigonometrical functions $\cos x$, $\sin x$, etc., as usually presented in text-books of elementary trigonometry, rests on an unproved assumption. An \emph{angle} is the configuration formed by two straight lines $OA$,~$OP$; there is no particular difficulty in translating this `geometrical' definition into purely analytical terms. The assumption comes at the next stage, when it is assumed that \emph{angles are capable of numerical measurement}, that is to say %[Illustration: Fig. 49.] \Figure[2in]{49}{p288} that there is a real number~$x$ associated with the configuration, just as there is a real number associated with the region~$PpqQ$ of \Fig{47}. 
This point once admitted, $\cos x$ and $\sin x$ may be defined in the ordinary way, and there is no further difficulty of principle in the elaboration of the theory. The whole difficulty lies in the question, \emph{what is the~$x$ which occurs in $\cos x$ and $\sin x$}? To answer this question, we must define the measure of an angle, and we are now in a position to do so. The most natural definition would be this: suppose that $AP$~is an arc of a circle whose centre is~$O$ and whose radius is unity, so that $OA = OP = 1$. Then $x$, the measure of the angle, is \emph{the length of the arc~$AP$}. This is, in substance, the definition adopted in the text-books, in the accounts which they give of the theory of `circular measure'. It has however, for our present purpose, a fatal defect; for we have not proved that the arc of a curve, even of a circle, possesses a length. The notion of the length of a curve is capable of precise mathematical analysis just as much as that of an area; but the analysis, although of the same general character as that of the preceding sections, is decidedly more difficult, and it is impossible that we should give any general treatment of the subject here. We must therefore found our definition on the notion not of length but of \emph{area}. We define the measure of the angle~$AOP$ as \emph{twice the area of the sector~$AOP$ of the unit circle}. Suppose, in particular, that $OA$~is $y = 0$ and that $OP$~is $y = mx$, where $m > 0$. The area is a function of~$m$, which we may denote by~$\phi(m)$. If we write~$\mu$ for $(1 + m^{2})^{-\frac{1}{2}}$, $P$~is the point $(\mu, m\mu)$, and \PageSep{289} we have \[ \phi(m) = \tfrac{1}{2} m\mu^{2} + \int_{\mu}^{1} \sqrtp{1 - x^{2}}\, dx. \] Differentiating with respect to~$m$, we find \[ \phi'(m) = \frac{1}{2(1 + m^{2})},\quad \phi(m) = \tfrac{1}{2} \int_{0}^{m} \frac{dt}{1 + t^{2}}. 
\] Thus the analytical equivalent of our definition would be to define $\arctan m$ by the equation \[ \arctan m = \int_{0}^{m} \frac{dt}{1 + t^{2}}; \] and the whole theory of the circular functions could be worked out from this starting point, just as the theory of the logarithm is worked out from a similar definition in \Ref{Ch.}{IX}\@. See \Ref{Appendix}{III}\@. \begin{Examples}{LXIII.} \Topic{Calculation of the definite from the indefinite integral.} \Item{1.} Show that \[ \int_{a}^{b} x^{n}\, dx = \frac{b^{n+1} - a^{n+1}}{n + 1}, \] and in particular that \[ \int_{0}^{1} x^{n}\, dx = \frac{1}{n + 1}. \] \Item{2.} $\ds\int_{a}^{b} \cos mx\, dx = \frac{\sin mb - \sin ma}{m}$, $\ds\int_{a}^{b} \sin mx\, dx = \frac{\cos ma - \cos mb}{m}$. \Item{3.} $\ds\int_{a}^{b}\frac{dx}{1 + x^{2}} = \arctan b - \arctan a$, $\ds\int_{0}^{1}\frac{dx}{1 + x^{2}} = \tfrac{1}{4}\pi$. [There is an apparent difficulty here owing to the fact that $\arctan x$~is a many valued function. The difficulty may be avoided by observing that, in the equation \[ \int_{0}^{x} \frac{dt}{1 + t^{2}} = \arctan x, \] $\arctan x$ must denote an angle lying between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$. For the integral vanishes when $x = 0$ and increases steadily and continuously as $x$~increases. Thus the same is true of~$\arctan x$, which therefore tends to~$\tfrac{1}{2}\pi$ as $x \to \infty$. In the same way we can show that $\arctan x \to -\frac{1}{2}\pi$ as $x \to -\infty$. Similarly, in the equation \[ \int_{0}^{x} \frac{dt}{\sqrtp{1 - t^{2}}} = \arcsin x, \] where $-1 < x < 1$, $\arcsin x$ denotes an angle lying between $-\frac{1}{2}\pi$ and $\frac{1}{2}\pi$. Thus, if $a$~and~$b$ are both numerically less than unity, we have \[ \int_{a}^{b} \frac{dx}{\sqrtp{1 - x^{2}}} = \arcsin b - \arcsin a.] 
\] \Item{4.} $\ds\int_{0}^{1} \frac{dx}{1 - x + x^{2}} = \frac{2\pi}{3\sqrt3}$, $\ds\int_{0}^{1} \frac{dx}{1 + x + x^{2}} = \frac{\pi}{3\sqrt3}$\Add{.} \PageSep{290} \Item{5.} $\ds\int_{0}^{1} \frac{dx}{1 + 2x\cos\alpha + x^{2}} = \frac{\alpha}{2\sin\alpha}$ if $-\pi < \alpha < \pi$, except when $\alpha = 0$, when the value of the integral is~$\frac{1}{2}$, which is the limit of~$\frac{1}{2}\alpha\cosec\alpha$ as $\alpha \to 0$. \Item{6.} $\ds\int_{0}^{\DPtypo{}{1}} \sqrtp{1 - x^{2}}\, dx = \tfrac{1}{4}\pi$, $\ds\int_{0}^{a} \sqrtp{a^{2} - x^{2}}\, dx = \tfrac{1}{4}\pi a^{2}$\quad $(a > 0)$. \Item{7.} $\ds\int_{0}^{\pi} \frac{dx}{a + b\cos x} = \frac{\pi}{\sqrt{a^{2} - b^{2}}}$, if $a > |b|$. [For the form of the indefinite integral see \Exs{liii}.\ 3,~4. If $|a| < |b|$ then the subject of integration has an infinity between $0$ and~$\pi$. What is the value of the integral when $a$~is negative and $-a > |b|$?] \Item{8.} $\ds\int_{0}^{\frac{1}{2}\pi} \frac{dx}{a^{2}\cos^{2}x + b^{2}\sin^{2}x} = \frac{\pi}{2ab}$, if $a$~and~$b$ are positive. What is the value of the integral when $a$~and~$b$ have opposite signs, or when both are negative? \Item{9.} \Topic{Fourier's integrals.} Prove that if $m$~and~$n$ are positive integers then \[ \int_{0}^{2\pi} \cos mx \sin nx\, dx \] is always equal to zero, and \[ \int_{0}^{2\pi} \cos mx \cos nx\, dx,\quad \int_{0}^{2\pi} \sin mx \sin nx\, dx \] are equal to zero unless $m = n$, when each is equal to~$\pi$. \Item{10.} Prove that $\ds\int_{0}^{\pi} \cos mx \cos nx\, dx$ and $\ds\int_{0}^{\pi} \sin mx \sin nx\, dx$ are each equal to zero except when $m = n$, when each is equal to~$\frac{1}{2}\pi$; and that \[ \int_{0}^{\pi} \cos mx \sin nx\, dx = \frac{2n}{n^{2} - m^{2}},\quad \int_{0}^{\pi} \cos mx \sin nx\, dx = 0, \] according as $n - m$~is odd or even. \end{Examples} \Paragraph{159. 
Calculation of the definite integral from its definition as the limit of a sum.} In a few cases we can evaluate a definite integral by direct calculation, starting from the definitions of \SecNo[§§]{156}~and~\SecNo{157}. As a rule it is much simpler to use the indefinite integral, but the reader will find it instructive to work through a few examples. \begin{Examples}{LXIV.} \Item{1.} Evaluate $\ds\int_{a}^{b} x\, dx$ by dividing $\DPmod{(a, b)}{[a, b]}$ into $n$~equal parts by the points of division $a = x_{0}$, $x_{1}$, $x_{2}$,~\dots, $x_{n} = b$, and calculating the limit as $n \to \infty$ of \[ (x_{1} - x_{0})f(x_{0}) + (x_{2} - x_{1})f(x_{1}) + \dots + (x_{n} - x_{n-1})f(x_{n-1}). \] \PageSep{291} [This sum is \begin{gather*} \frac{b - a}{n}\left[ a + \left(a + \frac{b - a}{n}\right) + \left(a + 2\frac{b - a}{n}\right) + \dots + \left\{a + (n - 1)\frac{b - a}{n}\right\} \right]\\ = \frac{b - a}{n}\left[ na + \frac{b - a}{n} \{1 + 2 + \dots + (n - 1)\} \right] = (b - a)\left\{a + (b - a)\frac{n(n - 1)}{2n^{2}}\right\}, \end{gather*} which tends to the limit $\frac{1}{2} (b^{2} - a^{2})$ as $n \to \infty$. Verify the result by graphical reasoning.] \Item{2.} Calculate $\ds\int_{a}^{b} x^{2}\, dx$ in the same way. \Item{3.} Calculate $\ds\int_{a}^{b} x\, dx$, where $0 < a < b$, by dividing $\DPmod{(a, b)}{[a, b]}$ into $n$~parts by the points of division $a$, $ar$, $ar^{2}$,~\dots\Add{,} $ar^{n-1}$, $ar^{n}$, where $r^{n} = b/a$. Apply the same method to the more general integral $\ds\int_{a}^{b} x^{m}\, dx$. \Item{4.} Calculate $\ds\int_{a}^{b}\cos mx\, dx$ and $\ds\int_{a}^{b}\sin mx\, dx$ by the method of Ex.~1. \Item{5.} Prove that $n\sum\limits_{r=0}^{n-1} \dfrac{1}{n^{2} + r^{2}} \to \tfrac{1}{4}\pi$ as $n \to \infty$. 
[This follows from the fact that \[ \frac{n}{n^{2}} + \frac{n}{n^{2} + 1^{2}} + \dots + \frac{n}{n^{2} + (n - 1)^{2}} = \sum_{r=0}^{n-1} \frac{(1/n)}{1 + (r/n)^{2}}, \] which tends to the limit $\ds\int_{0}^{1} \frac{dx}{1 + x^{2}}$ as $n \to \infty$, in virtue of the direct definition of the integral.] \Item{6.} Prove that $\dfrac{1}{n^{2}} \sum\limits_{r=0}^{n-1} \sqrtp{n^{2} - r^{2}} \to \tfrac{1}{4}\pi$. [The limit is $\ds\int_{0}^{1} \sqrtp{1 - x^{2}}\, dx$.] \end{Examples} \Paragraph{160. General properties of the definite integral.} The definite integral possesses the important properties expressed by the following equations.\footnote {All functions mentioned in these equations are of course continuous, as the definite integral has been defined for continuous functions only.} \CenterLine{\Item{(1)}}{$\ds\int_{a}^{b} f(x)\, dx = -\int_{b}^{a} f(x)\, dx$.} This follows at once from the definition of the integral by means of the integral function~$F(x)$, since $F(b) - F(a) = -\{F(a) - F(b)\}$. It should be observed that in the direct definition it was presupposed that the upper limit is greater than the lower; thus this method of definition does not apply to the integral $\ds\int_{b}^{a} f(x)\, dx$ when $a < b$. If we adopt this definition as fundamental we must extend it to such cases by regarding the equation~\Eq{(1)} as a definition of its right-hand side. 
\PageSep{292} \CenterLine{\Item{(2)}}{$\ds\int_{a}^{a} f(x)\, dx = 0$.} \CenterLine{\Item{(3)}} {$\ds\int_{a}^{b}f(x)\, dx + \int_{b}^{c}f(x)\, dx = \int_{a}^{c}f(x)\, dx$.} \CenterLine{\Item{(4)}} {$\ds\int_{a}^{b}kf(x)\, dx = k \int_{a}^{b}f(x)\, dx$.} \CenterLine{\Item{(5)}}{$\ds\int_{a}^{b}\{f(x) + \phi(x)\}\, dx = \int_{a}^{b}f(x)\, dx + \int_{a}^{b}\phi(x)\, dx$.} \begin{Remark} The reader will find it an instructive exercise to write out formal proofs of these properties, in each case giving a proof starting from ($\alpha$)~the definition by means of the integral function and ($\beta$)~the direct definition. \end{Remark} The following theorems are also important. \begin{Result} \Item{(6)} If $f(x) \geq 0$ when $a \leq x \leq b$, then $\ds\int_{a}^{b}f(x)\, dx \geq 0$. \end{Result} \begin{Remark} We have only to observe that the sum~$s$ of \SecNo[§]{156} cannot be negative. It will be shown later (\MiscEx{VII}~41) that the value of the integral cannot be zero unless $f(x)$~is always equal to zero: this may also be deduced from the second corollary of~\SecNo[§]{121}. \end{Remark} \begin{Result} \Item{(7)} If $H \leq f(x) \leq K$ when $a \leq x \leq b$, then \[ H(b - a) \leq \int_{a}^{b}f(x)\, dx \leq K(b - a). \] \end{Result} \begin{Remark} This follows at once if we apply~(6) to $f(x) - H$ and $K - f(x)$. \end{Remark} \begin{Result} \CenterLine{\Item{(8)}}{$\ds\int_{a}^{b}f(x)\, dx = (b-a)f(\xi)$,} where $\xi$ lies between $a$ and~$b$. \end{Result} \begin{Remark} This follows from~(7). For we can take $H$ to be the least and $K$~the greatest value of~$f(x)$ in~$\DPmod{(a, b)}{[a, b]}$. Then the integral is equal to~$\eta(b - a)$, where $\eta$~lies between $H$ and~$K$. But, since $f(x)$~is continuous, there must be a value of~$\xi$ for which $f(\xi) = \eta$~(\SecNo[§]{100}). 
If $F(x)$~is the integral function, we can write the result of~(8) in the form \[ F(b) - F(a) = (b - a)F'(\xi), \] so that (8)~appears now to be only another way of stating the Mean Value Theorem of \SecNo[§]{125}. We may call~(8) the \Emph{First Mean Value Theorem for Integrals}. \end{Remark} \PageSep{293} \begin{Result} \Item{(9)} \Topic{The Generalised Mean Value Theorem for integrals.} If $\phi(x)$~is positive, and $H$ and~$K$ are defined as in~\Eq{(7)}, then \[ H\int_{a}^{b} \phi(x)\, dx \leq \int_{a}^{b} f(x)\phi(x)\, dx \leq K\int_{a}^{b} \phi(x)\, dx; \] and \[ \int_{a}^{b} f(x)\phi(x)\, dx = f(\xi) \int_{a}^{b} \phi(x)\, dx, \] where $\xi$~is defined as in~\Eq{(8)}. \end{Result} \begin{Remark} This follows at once by applying Theorem~\Eq{(6)} to the integrals \[ \int_{a}^{b} \{f(x) - H\}\phi(x)\, dx,\quad \int_{a}^{b} \{K - f(x)\}\phi(x)\, dx. \] The reader should formulate for himself the corresponding result which holds when $\phi(x)$~is always negative. \end{Remark} \begin{Result} \Itemp{(10)} \Topic{The Fundamental Theorem of the Integral Calculus.} The function \[ F(x) = \int_{a}^{x} f(t)\, dt \] has a derivative equal to $f(x)$. \end{Result} This has been proved already in \SecNo[§]{145}, but it is convenient to restate the result here as a formal theorem. It follows as a corollary, as was pointed out in \SecNo[§]{157}, that \emph{$F(x)$~is a continuous function of~$x$}. 
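Properties (8) and (10) above lend themselves to a numerical check; the sketch below (modern Python, with an illustrative function and helper of our own devising, not part of Hardy's text) verifies that the mean value $\eta$ of $f(x) = \sin x$ over $(0, \pi)$ is attained, and that $F(x) = \int_{a}^{x} f(t)\, dt$ has derivative $f(x)$:

```python
# Numerical sketch of properties (8) and (10) of Section 160 for
# f(x) = sin x on [0, pi].  The helper `integral` (our own, illustrative)
# is a midpoint Riemann sum, adequate for this purpose.

import math

def integral(f, a, b, n=100_000):
    d = (b - a) / n
    return sum(f(a + (k + 0.5) * d) for k in range(n)) * d

a, b = 0.0, math.pi
f = math.sin

# Property (8): the mean value eta = integral / (b - a) is about 2/pi,
# a value actually taken by sin x, since 0 <= 2/pi <= 1.
eta = integral(f, a, b) / (b - a)

# Property (10): F'(x) = f(x), checked by a central difference quotient.
h = 1e-5
F = lambda x: integral(f, a, x)
deriv = (F(1 + h) - F(1 - h)) / (2 * h)
print(eta, deriv)   # eta ~ 0.6366..., deriv ~ sin(1) ~ 0.8414...
```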
\begin{Examples}{LXV.} \Item{1.} Show, by means of the direct definition of the definite integral, and equations \Eq{(1)}--\Eq{(5)} above, that \CenterLine{\Itemp{(i)}}{$\ds\int_{-a}^{a} \phi(x^{2})\, dx = 2\int_{0}^{a} \phi(x^{2})\, dx$,\quad $\ds\int_{-a}^{a} x\phi(x^{2})\, dx = 0$;} %
\CenterLine{\Itemp{(ii)}}{$\ds\int_{0}^{\frac{1}{2}\pi} \phi(\cos x)\, dx = \int_{0}^{\frac{1}{2} \pi} \phi(\sin x)\, dx = \tfrac{1}{2} \int_{0}^{\pi} \phi(\sin x)\, dx$;} %
\CenterLine{\Itemp{(iii)}}{$\ds\int_{0}^{m\pi} \phi(\cos^{2} x)\, dx = m\int_{0}^{\pi} \phi(\cos^{2} x)\, dx$,} $m$~being an integer. [The truth of these equations will appear geometrically intuitive, if the graphs of the functions under the sign of integration are sketched.] \Item{2.} Prove that $\ds\int_{0}^{\pi} \frac{\sin nx}{\sin x}\, dx$ is equal to~$\pi$ or to~$0$ according as $n$~is odd or even. [Use the formula $(\sin nx)/(\sin x) = 2\cos \{(n - 1)x\} + 2\cos \{(n - 3)x\} + \dots$, the last term being $1$~or $2\cos x$.] \Item{3.} Prove that $\ds\int_{0}^{\pi} \sin nx \cot x\, dx$ is equal to~$0$ or to~$\pi$ according as $n$~is odd or even. \PageSep{294} \Item{4.} If $\phi(x) = a_{0} + a_{1}\cos x + b_{1}\sin x + a_{2}\cos 2x + \dots + a_{n}\cos nx + b_{n}\sin nx$, and $k$~is a positive integer not greater than~$n$, then \[ \int_{0}^{2\pi} \phi(x)\, dx = 2\pi a_{0},\quad \int_{0}^{2\pi} \cos kx \phi(x)\, dx = \pi a_{k},\quad \int_{0}^{2\pi} \sin kx \phi(x)\, dx = \pi b_{k}. \] If $k > n$ then the value of each of the last two integrals is zero. [Use \Ex{lxiii}.~9.] \Item{5.} If $\phi(x) = a_{0} + a_{1} \cos x + a_{2}\cos 2x + \dots + a_{n}\cos nx$, and $k$~is a positive integer not greater than~$n$, then \[ \int_{0}^{\pi} \phi(x)\, dx = \pi a_{0},\quad \int_{0}^{\pi} \cos kx \phi(x)\, dx = \tfrac{1}{2}\pi a_{k}. \] If $k > n$ then the value of the last integral is zero. [Use \Ex{lxiii}.~10.]
\Item{6.} Prove that if $a$ and~$b$ are positive then \[ %[** TN: In-line in the original] \int_{0}^{2\pi} \frac{dx}{a^{2}\cos^{2} x + b^{2}\sin^{2} x} = \frac{2\pi}{ab}. \] %[** TN: No paragraph break in the original] [Use \Ex{lxiii}.~8 and Ex.~1 above.] \Item{7.} If $f(x) \leq \phi(x)$ when $a \leq x \leq b$, then $\ds\int_{a}^{b} f\, dx \leq \int_{a}^{b}\phi\, dx$. \Item{8.} Prove that \begin{alignat*}{2} %[** TN: Set on one line in the original] 0 &< \int_{0}^{\frac{1}{2}\pi} \sin^{n+1}x\, dx &&< \int_{0}^{\frac{1}{2}\pi} \sin^{n}x\, dx,\\ 0 &< \int_{0}^{\frac{1}{4}\pi} \tan^{n+1}x\, dx &&< \int_{0}^{\frac{1}{4}\pi} \tan^{n}x\, dx. \end{alignat*} \Item{9.\footnotemark} If $n > 1$ then \[ %[** TN: In-line in the original] .5 < \int_{0}^{\frac{1}{2}} \frac{dx}{\sqrtp{1 - x^{2n}}} < .524. \] \footnotetext{Exs.~9--13 are taken from Prof.\ Gibson's \textit{Elementary Treatise on the Calculus}.}% [The first inequality follows from the fact that $\sqrtp{1 - x^{2n}} < 1$, the second from the fact that $\sqrtp{1 - x^{2n}} > \sqrtp{1 - x^{2}}$.] %[** TN: Displayed in the original] \Item{10.} Prove that \[ %[** TN: In-line in the original] \tfrac{1}{2} < \int_{0}^{1} \frac{dx}{\sqrtp{4 - x^{2} + x^{3}}} < \tfrac{1}{6}\pi. \] \Item{11.} Prove that $(3x + 8)/16 < 1/\sqrtp{4 - 3x + x^{3}} < 1/\sqrtp{4 - 3x}$ if $0 < x < 1$, and hence that \[ %[** TN: In-line in the original] \tfrac{19}{32} < \int_{0}^{1} \frac{dx}{\sqrtp{4 - 3x + x^{3}}} < \tfrac{2}{3}. \] \Item{12.} Prove that \[ %[** TN: In-line in the original] .573 < \int_{1}^{2} \frac{dx}{\sqrtp{4 - 3x + x^{3}}} < .595. \] [Put $x = 1 + u$: then replace $2 + 3u^{2} + u^{3}$ by $2 + 4u^{2}$ and by $2 + 3u^{2}$.] \Item{13.} If $\alpha$~and~$\phi$ are positive acute angles then \[ \phi < \int_{0}^{\phi} \frac{dx}{\sqrtp{1 - \sin^{2}\alpha \sin^{2} x}} < \frac{\phi}{\sqrtp{1 - \sin^{2}\alpha \sin^{2}\phi}}. \] If $\alpha = \phi = \frac{1}{6}\pi$, then the integral lies between $.523$ and~$.541$. 
\Item{14.} Prove that \[ %[** TN: In-line in the original] \left|\int_{a}^{b} f(x)\, dx\right| \leq \int_{a}^{b}|f(x)|\, dx. \] [If $\sigma$~is the sum considered at the end of \SecNo[§]{156}, and $\sigma'$~the corresponding sum formed from the function~$|f(x)|$, then $|\sigma| \leq \sigma'$.] %[** TN: Left vertical bar around integral missing in original] \Item{15.} If $|f(x)| \leq M$, then \[ %[** TN: In-line in the original] \left|\int_{a}^{b} f(x)\phi(x)\, dx\right| \leq M\int_{a}^{b}|\phi(x)|\, dx. \] \end{Examples} \PageSep{295} \Paragraph{161. Integration by parts and by substitution.} It follows from \SecNo[§]{138} that \[ \int_{a}^{b} f(x)\phi'(x)\, dx = f(b)\phi(b) - f(a)\phi(a) - \int_{a}^{b} f'(x)\phi(x)\, dx. \] This formula is known as the formula for \Emph{integration of a definite integral by parts}. Again, we know (\SecNo[§]{133}) that if $F(t)$~is the integral function of~$f(t)$, then \[ \int f\{\phi(x)\}\phi'(x)\, dx = F\{\phi(x)\}. \] Hence, if $\phi(a) = c$, $\phi(b) = d$, we have \[ \int_{c}^{d} f(t)\, dt = F(d) - F(c) = F\{\phi(b)\} - F\{\phi(a)\} = \int_{a}^{b} f\{\phi(x)\}\phi'(x)\, dx; \] which is the formula for the transformation of a definite integral by \Emph{substitution}. The formulae for integration by parts and for transformation often enable us to evaluate a definite integral without the labour of actually finding the integral function of the subject of integration, and sometimes even when the integral function cannot be found. Some instances of this will be found in the following examples. That the value of a definite integral may sometimes be found without a knowledge of the integral function is only to be expected, for the fact that we cannot determine the general form of a function~$F(x)$ in no way precludes the possibility that we may be able to determine the difference $F(b) - F(a)$ between two of its particular values. 
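The substitution formula may itself be verified numerically; the following sketch (modern Python, with illustrative choices $f(t) = \sqrt{t}$, $\phi(x) = x^{2}$ of our own, not from the text) compares the two sides:

```python
# Numerical check of the substitution formula of Section 161 with
# f(t) = sqrt(t) and phi(x) = x^2 on [0, 2], so that c = phi(0) = 0,
# d = phi(2) = 4, and both sides equal 16/3.  The helper `integral`
# (our own, illustrative) is a midpoint Riemann sum.

import math

def integral(f, a, b, n=100_000):
    d = (b - a) / n
    return sum(f(a + (k + 0.5) * d) for k in range(n)) * d

lhs = integral(math.sqrt, 0.0, 4.0)                           # int_c^d f(t) dt
rhs = integral(lambda x: math.sqrt(x * x) * 2 * x, 0.0, 2.0)  # int_a^b f(phi(x)) phi'(x) dx
print(lhs, rhs)   # both close to 16/3 = 5.333...
```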
But as a rule this can only be effected by the use of more advanced methods than are at present at our disposal. \begin{Examples}{LXVI.} \Item{1.} Prove that \[ \int_{a}^{b} x f''(x)\, dx = \{bf'(b) - f(b)\} - \{af'(a) - f(a)\}. \] \Item{2.} More generally, \[ \int_{a}^{b} x^{m} f^{(m+1)}(x)\, dx = F(b) - F(a), \] where \begin{multline*} %[** TN: Set on one line in the original] F(x) = x^{m} f^{(m)}(x) - mx^{m-1} f^{(m-1)}\DPtypo{x}{(x)} \\ + m(m - 1)x^{m-2} f^{(m-2)}\DPtypo{x}{(x)} - \dots + (-1)^{m} m!\, f(x). \end{multline*} \Item{3.} Prove that \[ \int_{0}^{1} \arcsin x\, dx = \tfrac{1}{2}\pi - 1,\quad \int_{0}^{1}x\arctan x\, dx = \tfrac{1}{4}\pi - \tfrac{1}{2}. \] \PageSep{296} \Item{4.} Prove that if $a$~and~$b$ are positive then \[ \int_{0}^{\frac{1}{2}\pi} \frac{x\cos x\sin x\, dx}{(a^{2}\cos^{2}x + b^{2}\sin^{2}x)^{2}} = \frac{\pi}{4ab^{2}(a + b)}. \] [Integrate by parts and use \Ex{lxiii}.~8.] \Item{5.} If \[ f_{1}(x) = \int_{0}^{x}f(t)\, dt,\quad f_{2}(x) = \int_{0}^{x}f_{1}(t)\, dt,\ \dots,\quad f_{k}(x) = \int_{0}^{x} f_{k-1}(t)\, dt, \] then \[ f_{k}(x) = \frac{1}{(k - 1)!} \int_{0}^{x} f(t)(x - t)^{k-1}\, dt. \] [Integrate repeatedly by parts.] \Item{6.} Prove by integration by parts that if \[ %[** TN: In-line in the original] u_{m, n} = \int_{0}^{1} x^{m} (1 - x)^{n}\, dx, \] where $m$~and~$n$ are positive integers, then $(m + n + 1) u_{m, n} = nu_{m, n-1}$, and deduce that \[ u_{m, n} = \frac{m!\, n!}{(m + n + 1)!}. \] \Item{7.} Prove that if %[** TN: In-line in the original] \[ u_{n} = \int_{0}^{\frac{1}{4}\pi} \tan^{n}x\, dx \] then $u_{n} + u_{n-2} = 1/(n - 1)$. Hence evaluate the integral for all positive integral values of~$n$. [Put $\tan^{n}x = \tan^{n-2}x(\sec^{2}x - 1)$ and integrate by parts.] \Item{8.} Deduce from the last example that $u_{n}$~lies between $1/\{2(n - 1)\}$ and $1/\{2(n + 1)\}$. 
\Item{9.} Prove that if \[ %[** TN: In-line in the original] u_{n} = \int_{0}^{\frac{1}{2}\pi} \sin^{n} x\, dx \] then $u_{n} = \{(n - 1)/n\} u_{n-2}$. [Write $\sin^{n-1}x\sin x$ for $\sin^{n}x$ and integrate by parts.] \Item{10.} Deduce that $u_{n}$~is equal to \[ \frac{2·4·6 \dots (n - 1)}{3·5·7 \dots n},\quad \tfrac{1}{2}\pi \frac{1·3·5 \dots (n - 1)}{2·4·6 \dots n}, \] according as $n$~is odd or even. \Item{11.} \Topic{The Second Mean Value Theorem.} If $f(x)$~is a function of~$x$ which has a differential coefficient of constant sign for all values of~$x$ from $x = a$ to $x = b$, then there is a number~$\xi$ between $a$~and~$b$ such that \[ \int_{a}^{b} f(x)\phi(x)\, dx = f(a) \int_{a}^{\xi} \phi(x)\, dx + f(b) \int_{\xi}^{b} \phi(x)\, dx. \] [Let $\ds\int_{a}^{x}\phi(t)\, dt = \Phi(x)$. Then \begin{align*} \int_{a}^{b} f(x)\phi(x)\, dx = \int_{a}^{b} f(x)\Phi'(x)\, dx &= f(b)\Phi(b) - \int_{a}^{b} f'(x)\Phi(x)\, dx\\ &= f(b)\Phi(b) - \Phi(\xi) \int_{a}^{b}f'(x)\, dx, \end{align*} by the generalised Mean Value Theorem of \SecNo[§]{160}: \ie \[ \int_{a}^{b} f(x)\phi(x)\, dx = f(b)\Phi(b) + \{f(a) - f(b)\}\Phi(\xi), \] which is equivalent to the result given.] \PageSep{297} \Item{12.} \Topic{Bonnet's form of the Second Mean Value Theorem.} If $f'(x)$~is of constant sign, and $f(b)$ and $f(a) - f(b)$ have the same sign, then \[ \int_{a}^{b} f(x)\phi(x)\, dx = f(a) \int_{a}^{X} \phi(x)\, dx, \] where $X$~lies between $a$ and~$b$. [For $f(b)\Phi(b) + \{f(a) - f(b)\}\Phi(\xi) = \mu f(a)$, where $\mu$~lies between $\Phi(\xi)$ and~$\Phi(b)$, and so is the value of~$\Phi(x)$ for a value of~$x$ such as~$X$. The important case is that in which $0 \leq f(b) \leq f(x) \leq f(a)$.] Prove similarly that if $f(a)$ and $f(b) - f(a)$ have the same sign, then \[ \int_{a}^{b} f(x)\phi(x)\, dx = f(b) \int_{X}^{b} \phi(x)\, dx, \] where $X$~lies between $a$ and~$b$. [Use the function $\Psi(\xi) = \ds\int_{\xi}^{b} \phi(x)\, dx$. 
It will be found that the integral can be expressed in the form \[ f(a)\DPtypo{\psi(a)}{\Psi(a)} + \{f(b) - f(a)\}\Psi(\xi). \] The important case is that in which $0 \leq f(a) \leq f(x) \leq f(b)$.] \Item{13.} Prove that \[ %[** TN: In-line in the original] \left|\int_{X}^{X'} \frac{\sin x}{x}\, dx\right| < \frac{2}{X} \] if $X' > X > 0$. [Apply the first formula of Ex.~12, and note that the integral of $\sin x$ over any interval whatever is numerically less than~$2$.] \Item{14.} Establish the results of \Ex{lxv}.~1 by means of the rule for substitution. [In (i)~divide the range of integration into the two parts $\DPmod{(-a, 0)}{[-a, 0]}$, $\DPmod{(0, a)}{[0, a]}$, and put $x = -y$ in the first. In (ii)~use the substitution $x = \frac{1}{2}\pi - y$ to obtain the first equation: to obtain the second divide the range $\DPmod{(0, \pi)}{[0, \pi]}$ into two equal parts and use the substitution $x = \frac{1}{2}\pi + y$. In (iii)~divide the range into $m$~equal parts and use the substitutions $x = \pi + y$, $x = 2\pi + y$,~\dots.] %[** TN: Integrals in next five examples are set in-line in the original] \Item{15.} Prove that \[ \int_{a}^{b} F(x)\, dx = \int_{a}^{b} F(a + b - x)\, dx. \] \Item{16.} Prove that \[ \int_{0}^{\frac{1}{2}\pi} \cos^{m} x\sin^{m} x\, dx = 2^{-m} \int_{0}^{\frac{1}{2}\pi} \cos^{m} x\, dx. \] \Item{17.} Prove that \[ \int_{0}^{\pi} x\phi(\sin x)\, dx = \tfrac{1}{2}\pi \int_{0}^{\pi} \phi(\sin x)\, dx. \] [Put $x = \pi - y$.] \Item{18.} Prove that \[ \int_{0}^{\pi} \frac{x\sin x}{1 + \cos^{2} x}\, dx = \tfrac{1}{4}\pi^{2}. \] \Item{19.} Show by means of the transformation $x = a\cos^{2}\theta + b\sin^{2}\theta$ that \[ \int_{a}^{b} \sqrtb{(x - a)(b - x)}\, dx = \tfrac{1}{8}\pi (b - a)^{2}. 
\] \Item{20.} Show by means of the substitution $(a + b\cos x) (a - b\cos y) = a^{2} - b^{2}$ that \[ \int_{0}^{\pi} (a + b\cos x)^{-n}\, dx = (a^{2} - b^{2})^{-(n - \frac{1}{2})} \int_{0}^{\pi} (a - b\cos y)^{n-1}\, dy, \] when $n$~is a positive integer and $a > |b|$, and evaluate the integral when $n = 1$, $2$,~$3$. \PageSep{298} \Item{21.} If $m$~and~$n$ are positive integers then \[ \int_{a}^{b} (x - a)^{m} (b - x)^{n}\, dx = (b - a)^{m+n+1} \frac{m!\, n!}{(m + n + 1)!}. \] [Put $x = a + (b - a)y$, and use Ex.~6.] \end{Examples} \Paragraph{162. Proof of Taylor's Theorem by Integration by Parts.} We shall now give the alternative form of the proof of Taylor's Theorem to which we alluded in \SecNo[§]{147}. Let $f(x)$~be a function whose first $n$~derivatives are continuous, and let \[ F_{n}(x) = f(b) - f(x) - (b - x)f'(x) - \dots - \frac{(b - x)^{n-1}}{(n - 1)!} f^{(n-1)}(x). \] Then \[ F_{n}'(x) = -\frac{(b - x)^{n-1}}{(n - 1)!} f^{(n)}(x), \] and so \[ F_{n}(a) = F_{n}(b) - \int_{a}^{b}F_{n}'(x)\, dx = \frac{1}{(n - 1)!} \int_{a}^{b} (b - x)^{n-1} f^{(n)}(x)\, dx. \] If now we write $a + h$ for~$b$, and transform the integral by putting $x = a + th$, we obtain \[ f(a + h) = f(a) + hf'(a) + \dots + \frac{h^{n-1}}{(n - 1)!} f^{(n-1)}(a) + R_{n}, \Tag{(1)} \] where \[ R_{n} = \frac{h^{n}}{(n - 1)!} \int_{0}^{1} (1 - t)^{n-1} f^{(n)}(a + th)\, dt. \Tag{(2)} \] Now, if $p$~is any positive integer not greater than~$n$, we have, by Theorem~(9) of \SecNo[§]{160}, \begin{align*} \int_{0}^{1} (1 - t)^{n-1} f^{(n)}(a + th)\, dt &= \int_{0}^{1}(1 - t)^{n-p} (1 - t)^{p-1} f^{(n)}(a + th)\, dt \\ &= (1 - \theta)^{n-p} f^{(n)}(a + \theta h) \int_{0}^{1} (1 - t)^{p-1}\, dt, \end{align*} where $0 < \theta < 1$. Hence \[ R_{n} = \frac{(1 - \theta)^{n-p} f^{(n)}(a + \theta h)h^{n}}{p(n - 1)!}. \Tag{(3)} \] If we take $p = n$ we obtain Lagrange's form of~$R_{n}$ (\SecNo[§]{148}). If on the other hand we take $p = 1$ we obtain \Emph{Cauchy's form}, viz. 
\[ R_{n} = \frac{(1 - \theta)^{n-1} f^{(n)}(a + \theta h) h^{n}}{(n - 1)!}.\footnotemark \Tag{(4)} \] \footnotetext{The method used in \SecNo[§]{147} can also be modified so as to obtain these alternative forms of the remainder.} \PageSep{299} \begin{Remark} \Paragraph{163. Application of Cauchy's form to the Binomial Series.} If $f(x) = (1 + x)^{m}$, where $m$~is not a positive integer, then Cauchy's form of the remainder is \[ R_{n} = \frac{m(m - 1)\dots (m - n + 1)}{1·2\dots (n - 1)}\, \frac{(1 - \theta )^{n-1} x^{n}}{(1 + \theta x)^{n-m}}. \] Now $(1 - \theta)/(1 + \theta x)$ is less than unity, so long as $-1 < x < 1$, whether $x$~is positive or negative; and $(1 + \theta x)^{m-1}$ is less than a constant~$K$ for all values of~$n$, being in fact less than $(1 + |x|)^{m-1}$ if $m > 1$ and than $(1 - |x|)^{m-1}$ if $m < 1$\Add{.} Hence \[ |R_{n}| < K |m| \left|\binom{m - 1}{n - 1}\right| |x^{n}| = \rho_{n}, \] say\Add{.} But $\rho_{n} \to 0$ as $n \to \infty$, by \Ex{xxvii}.~13, and so $R_{n} \to 0$. The truth of the Binomial Theorem is thus established for all rational values of~$m$ and all values of~$x$ between $-1$ and~$1$. It will be remembered that the difficulty in using Lagrange's form, in \Ex{lvi}.~2, arose in connection with negative values of~$x$. \end{Remark} \Paragraph{164. Integrals of complex functions of a real variable.} So far we have always supposed that the subject of integration in a definite integral is real. We define the integral of a complex function $f(x) = \DPtypo{\psi}{\phi}(x) + i\psi(x)$ of the real variable~$x$, between the limits $a$~and~$b$, by the equations \[ \int_{a}^{b} f(x)\, dx = \int_{a}^{b} \{\phi(x) + i\psi(x)\}\, dx = \int_{a}^{b} \phi(x)\, dx + i \int_{a}^{b} \psi(x)\, dx; \] and it is evident that the properties of such integrals may be deduced from those of the real integrals already considered. There is one of these properties that we shall make use of later on. 
It is expressed by the inequality \[ \left|\int_{a}^{b} f(x)\, dx\right| \leq \int_{a}^{b} |f(x)|\, dx.\footnotemark \Tag{(1)} \] \footnotetext{The corresponding inequality for a real integral was proved in \Ex{lxv}.~14.}%
This inequality may be deduced without difficulty from the definitions of \SecNo[§§]{156}~and~\SecNo{157}. If $\delta_{\nu}$~has the same meaning as in \SecNo[§]{156}, $\phi_{\nu}$~and~$\psi_{\nu}$ are the values of $\phi$~and~$\psi$ at a point of~$\delta_{\nu}$, and $f_{\nu} = \phi_{\nu} + i\psi_{\nu}$, then we have \begin{align*} \int_{a}^{b} f\, dx = \int_{a}^{b} \phi\, dx + i \int_{a}^{b} \psi\, dx &= \lim \tsum \phi_{\nu}\, \delta_{\nu} + i \lim \tsum \psi_{\nu}\, \delta_{\nu} \\ &= \lim \tsum (\phi_{\nu} + i\psi_{\nu})\, \delta_{\nu} = \lim \tsum f_{\nu}\, \delta_{\nu}, \end{align*} and so \[ \left|\int_{a}^{b} f\, dx\right| = |\lim \tsum f_{\nu}\, \delta_{\nu}| = \lim |\tsum f_{\nu}\, \delta_{\nu}|; \] \PageSep{300} while \[ \int_{a}^{b} |f|\, dx = \lim \tsum |f_{\nu}|\, \delta_{\nu}. \] The result now follows at once from the inequality \[ |\tsum f_{\nu}\, \delta_{\nu}| \leq \tsum |f_{\nu}|\, \delta_{\nu}. \] It is evident that the formulae \Eq{(1)}~and~\Eq{(2)} of \SecNo[§]{162} remain true when $f$~is a complex function $\phi + i\psi$. \Section{MISCELLANEOUS EXAMPLES ON CHAPTER VII.} %[** TN: Several displayed integrals are in-line in the original]
\begin{Examples}{} \Item{1.} Verify the terms given of the following Taylor's Series: \begin{alignat*}{2} &\Item{(1)} & \tan x &= x + \tfrac{1}{3} x^{3} + \tfrac{2}{15} x^{5} + \dots, \\ &\Item{(2)} & \sec x &= 1 + \tfrac{1}{2} x^{2} + \tfrac{5}{24} x^{4} + \dots, \\ &\Item{(3)}\quad & x\cosec x &= 1 + \tfrac{1}{6} x^{2} + \tfrac{7}{360} x^{4} + \dots, \\ &\Item{(4)} & x\cot x &= 1 - \tfrac{1}{3} x^{2} - \tfrac{1}{45} x^{4} - \dots.
\end{alignat*} \Item{2.} Show that if $f(x)$ and its first $n + 2$ derivatives are continuous, and $f^{(n+1)}(0) \neq 0$, and $\theta_{n}$~is the value of~$\theta$ which occurs in Lagrange's form of the remainder after $n$~terms of Taylor's Series, then \[ \theta_{n} = \frac{1}{n + 1} + \frac{n}{2(n + 1)^{2}(n + 2)} \left\{\frac{f^{(n+2)}(0)}{f^{(n+1)}(0)} + \epsilon_{x}\right\}x, \] where $\epsilon_{x} \to 0$ as $x \to 0$. [Follow the method of \Ex{lv}.~12.] \Item{3.} Verify the last result when $f(x) = 1/(1+ x)$. [Here $(1 + \theta_{n}x)^{n+1} = 1 + x$.] \Item{4.} Show that if $f(x)$~has derivatives of the first three orders then \[ f(b) = f(a) + \tfrac{1}{2}(b - a) \{f'(a) + f'(b)\} - \tfrac{1}{12}(b - a)^{3} f'''(\alpha), \] where $a < \alpha < b$. [Apply to the function \begin{multline*} f(x) - f(a) - \tfrac{1}{2}(x - a) \{f'(a) + f'(x)\}\\ - \left(\frac{x - a}{b - a}\right)^{3} [f(b) - f(a) - \tfrac{1}{2}(b - a) \{f'(a) + f'(b)\}] \end{multline*} arguments similar to those of \SecNo[§]{147}.] \Item{5.} Show that under the same conditions \[ f(b) = f(a) + (b - a) f'\{\tfrac{1}{2}(a + b)\} + \tfrac{1}{24}(b - a)^{3}f'''(\alpha). \] \Item{6.} Show that if $f(x)$ has derivatives of the first five orders then \[ f(b) = f(a) + \tfrac{1}{6}(b - a) [f'(a) + f'(b) + 4f'\{\tfrac{1}{2}(a + b)\}] - \tfrac{1}{2880}(b - a)^{5} f^{(5)}(\DPtypo{a}{\alpha}). \] \Item{7.} Show that under the same conditions \[ f(b) = f(a) + \tfrac{1}{2}(b - a) \{f'(a) + f'(b)\} - \tfrac{1}{12}(b - a)^{2} \{f''(b) - f''(a)\} + \tfrac{1}{720}(b - a)^{5} f^{(5)}(\alpha). 
\] \PageSep{301} \Item{8.} Establish the formulae \CenterLine{\Itemp{(i)}}{$\ds \begin{vmatrix} f(a) & f(b)\\ g(a) & g(b) \end{vmatrix} = (b - a) \begin{vmatrix} f(a) & f'(\beta)\\ g(a) & g'(\beta) \end{vmatrix}$,} where $\beta$~lies between $a$ and~$b$, and \CenterLine{\Itemp{(ii)}}{$\ds \begin{vmatrix} f(a) & f(b) & f(c)\\ g(a) & g(b) & g(c)\\ h(a) & h(b) & h(c) \end{vmatrix} = \tfrac{1}{2} (b - c)(c - a)(a - b) \begin{vmatrix} f(a) & f'(\beta) & f''(\gamma)\\ g(a) & g'(\beta) & g''(\gamma)\\ h(a) & h'(\beta) & h''(\gamma) \end{vmatrix}$,} where $\beta$ and $\gamma$ lie between the least and greatest of $a$,~$b$,~$c$. [To prove~(ii) consider the function \[ \phi(x) = \begin{vmatrix} f(a) & f(b) & f(x)\\ g(a) & g(b) & g(x)\\ h(a) & h(b) & h(x) \end{vmatrix} - \frac{(x - a)(x - b)}{(c - a)(c - b)} \begin{vmatrix} f(a) & f(b) & f(c)\\ g(a) & g(b) & g(c)\\ h(a) & h(b) & h(c) \end{vmatrix}\Add{,} \] which vanishes when $x = a$, $x = b$, and $x = c$. Its first derivative, by Theorem~B of \SecNo[§]{121}, must vanish for two distinct values of~$x$ lying between the least and greatest of $a$,~$b$,~$c$; and its second derivative must therefore vanish for a value~$\gamma$ of~$x$ satisfying the same condition. We thus obtain the formula \[ \begin{vmatrix} f(a) & f(b) & f(c)\\ g(a) & g(b) & g(c)\\ h(a) & h(b) & h(c) \end{vmatrix} = \tfrac{1}{2}(c - a)(c - b) \begin{vmatrix} f(a) & f(b) & f''(\gamma)\\ g(a) & g(b) & g''(\gamma)\\ h(a) & h(b) & h''(\gamma) \end{vmatrix}. \] The reader will now complete the proof without difficulty.] \Item{9.} If $F(x)$~is a function which has continuous derivatives of the first $n$~orders, of which the first~$n - 1$ vanish when $x = 0$, and $A \leq F^{(n)}(x) \leq B$ when $0 \leq x \leq h$, then $A(x^{n}/n!) \leq F(x) \leq B(x^{n}/n!)$ when $0 \leq x \leq h$. Apply this result to \[ f(x) - f(0) - xf'(0) - \dots - \frac{x^{n-1}}{(n - 1)!} f^{(n-1)}(0), \] and deduce Taylor's Theorem. 
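As a modern aside (not part of Hardy's text), the corrected trapezoidal formula of Ex.~4 is easy to check numerically: for a cubic $f$ the third derivative $f'''$ is constant, so the formula must hold exactly whatever $\alpha$ is. A minimal sketch, with an arbitrarily chosen cubic:

```python
# Ex. 4: f(b) = f(a) + (b-a)/2 {f'(a) + f'(b)} - (b-a)^3/12 f'''(alpha).
# For a cubic, f''' is a constant, so the right-hand side must equal f(b) exactly.
def corrected_trapezoid(f, fp, fppp, a, b):
    """Right-hand side of Ex. 4, with f''' supplied as a constant."""
    return f(a) + 0.5*(b - a)*(fp(a) + fp(b)) - (b - a)**3 / 12.0 * fppp

f  = lambda x: x**3 - 2*x**2 + 5      # arbitrary cubic (illustrative)
fp = lambda x: 3*x**2 - 4*x           # its derivative
a, b = 0.0, 1.0
rhs = corrected_trapezoid(f, fp, 6.0, a, b)   # f'''(x) = 6 identically
print(rhs, f(b))                      # the two values agree
```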
\Item{10.} If $\Delta_{h}\phi(x) = \phi(x) - \phi(x + h)$, $\Delta_{h}^{2}\phi(x) = \Delta_{h}\{\Delta_{h}\phi(x)\}$, and so on, and $\phi(x)$~has derivatives of the first $n$~orders, then \[ \Delta_{h}^{n}\phi(x) = \sum_{r=0}^{n}(-1)^{r} \binom{n}{r} \phi(x + rh) = (-h)^{n} \phi^{(n)}(\xi), \] where $\xi$~lies between $x$ and~$x + nh$. Deduce that if $\phi^{(n)}(x)$~is continuous then $\{\Delta_{h}^{n}\phi(x)\}/h^{n} \to (-1)^{n}\phi^{(n)}(x)$ as $h \to 0$. [This result has been stated already when $n = 2$, in \Ex{lv}.~13.] \Item{11.} Deduce from Ex.~10 that $x^{n-m}\, \Delta_{h}^{n} x^{m} \to m(m - 1) \dots (m - n + 1)h^{n}$ as $x \to \infty$, $m$~being any rational number and $n$~any positive integer. In particular prove that \[ x\sqrt{x} \{\sqrt{x} - 2\sqrtp{x + 1} + \sqrtp{x + 2}\} \to -\tfrac{1}{4}. \] \PageSep{302} \Item{12.} Suppose that $y = \phi(x)$ is a function of~$x$ with continuous derivatives of at least the first four orders, and that $\phi(0) = 0$, $\phi'(0) = 1$, so that \[ y = \phi(x) = x + a_{2}x^{2} + a_{3}x^{3} + (a_{4} + \epsilon_{x})x^{4}, \] where $\epsilon_{x} \to 0$ as $x \to 0$. Establish the formula \[ x = \psi(y) = y - a_{2}y^{2} + (2a_{2}^{2} - a_{3})y^{3} - (5a_{2}^{3} - 5a_{2}a_{3} + a_{4} + \epsilon_{y})y^{4}, \] where $\epsilon_{y} \to 0$ as $y \to 0$, for that value of~$x$ which vanishes with~$y$; and prove that \[ \frac{\phi(x)\psi(x) - x^{2}}{x^{4}} \to a_{2}^{2} \] as $x \to 0$. \Item{13.} The coordinates $(\xi, \eta)$ of the centre of curvature of the curve $x = f(t)$, $y = F(t)$, at the point $(x, y)$, are given by \[ -(\xi - x)/y' = (\eta - y)/x' = (x'^{2} + y'^{2})/(x'y'' - x''y'); \] and the radius of curvature of the curve is \[ (x'^{2} + y'^{2})^{3/2}/(x'y'' - x''y'), \] dashes denoting differentiations with respect to~$t$. 
\Item{14.} The coordinates $(\xi, \eta)$ of the centre of curvature of the curve $27ay^{2} = 4x^{3}$, at the point $(x, y)$, are given by \[ 3a(\xi + x) + 2x^{2} = 0, \quad \eta = 4y + (9ay)/x.\quad \] \MathTrip{1899.} \Item{15.} Prove that the circle of curvature at a point $(x, y)$ will have contact of the third order with the curve if $(1 + y_{1}^{2})y_{3} = 3y_{1}y_{2}^{2}$ at that point. Prove also that the circle is the only curve which possesses this property at every point; and that the only points on a conic which possess the property are the extremities of the axes. [Cf.\ \Ref{Ch.}{VI}, \MiscEx{VI}\ 10~(iv).] \Item{16.} The conic of closest contact with the curve $y = ax^{2} + bx^{3} + cx^{4} + \dots + kx^{n}$, at the origin, is $a^{3}y = a^{4}x^{2} + a^{2}bxy + (ac - b^{2})y^{2}$. Deduce that the conic of closest contact at the point $(\xi, \eta)$ of the curve $y = f(x)$ is \[ 18\eta_{2}^{3}T = 9\eta_{2}^{4}(x - \xi)^{2} + 6\eta_{2}^{2}\eta_{3}(x - \xi)T + (3\eta_{2}\eta_{4} - 4\eta_{3}^{2})T^{2}, \] where $T = (y - \eta) - \eta_{1}(x - \xi)$. \MathTrip{1907.} \Item{17.} \Topic{Homogeneous functions.\footnote {In this and the following examples the reader is to assume the continuity of all the derivatives which occur.}} If $u = x^{n} f(y/x, z/x, \dots)$ then $u$~is unaltered, save for a factor~$\lambda^{n}$, when $x$,~$y$, $z$,~\dots\ are all increased in the ratio $\lambda : 1$. In these circumstances $u$~is called a \emph{homogeneous function of degree~$n$} in the variables $x$,~$y$, $z$,~\dots. Prove that if $u$~is homogeneous and of degree~$n$ then \[ x\frac{\dd u}{\dd x} + y\frac{\dd u}{\dd y} + z\frac{\dd u}{\dd z} + \dots = nu. \] This result is known as \Emph{Euler's Theorem} on homogeneous functions. \Item{18.} If $u$~is homogeneous and of degree~$n$ then $\dd u/\dd x$, $\dd u/\dd y$,~\dots\ are homogeneous and of degree $n - 1$. 
\PageSep{303} \Item{19.} Let $f(x, y) = 0$ be an equation in $x$~and~$y$ (\eg\ $x^{n} + y^{n} - x = 0$), and let $F(x, y, z) = 0$ be the form it assumes when made homogeneous by the introduction of a third variable~$z$ in place of unity (\eg\ $x^{n} + y^{n} - xz^{n-1} = 0$). Show that the equation of the tangent at the point $(\xi, \eta)$ of the curve $f(x, y) = 0$ is \[ xF_{\xi} + yF_{\eta} + zF_{\zeta} = 0, \] where $F_{\xi}$,~$F_{\eta}$,~$F_{\zeta}$ denote the values of $F_{x}$,~$F_{y}$,~$F_{z}$ when $x = \xi$, $y = \eta$, $z = \zeta = 1$. \Item{20.} \Topic{Dependent and independent functions. Jacobians or functional determinants.} Suppose that $u$~and~$v$ are functions of $x$~and~$y$ connected by an identical relation \[ \phi(u, v) = 0. \Tag{(1)} \] Differentiating~\Eq{(1)} with respect to $x$~and~$y$, we obtain \[ \frac{\dd \phi}{\dd u}\, \frac{\dd u}{\dd x} + \frac{\dd \phi}{\dd v}\, \frac{\dd v}{\dd x} = 0,\quad \frac{\dd \phi}{\dd u}\, \frac{\dd u}{\dd y} + \frac{\dd \phi}{\dd v}\, \frac{\dd v}{\dd y} = 0, \Tag{(2)} \] and, eliminating the derivatives of~$\phi$, \[ J = \begin{vmatrix} u_{x} & u_{y}\\ v_{x} & v_{y} \end{vmatrix} = u_{x}v_{y} - u_{y}v_{x} = 0, \Tag{(3)} \] where $u_{x}$,~$u_{y}$, $v_{x}$,~$v_{y}$ are the derivatives of $u$~and~$v$ with respect to $x$~and~$y$. This condition is therefore \emph{necessary} for the existence of a relation such as~\Eq{(1)}. It can be proved that the condition is also \emph{sufficient}; for this we must refer to Goursat's \textit{Cours d' Analyse}, vol.~i, pp.~125~\textit{et~seq.} Two functions $u$~and~$v$ are said to be \emph{dependent} or \emph{independent} according as they are or are not connected by such a relation as~\Eq{(1)}. It is usual to call~$J$ the \emph{Jacobian} or \emph{functional determinant} of $u$~and~$v$ with respect to $x$~and~$y$, and to write \[ J = \frac{\dd(u, v)}{\dd(x, y)}. \] Similar results hold for functions of any number of variables. 
Thus three functions $u$,~$v$,~$w$ of three variables $x$,~$y$,~$z$ are or are not connected by a relation $\phi(u, v, w) = 0$ according as \[ J = \begin{vmatrix} u_{x} & u_{y} & u_{z}\\ v_{x} & v_{y} & v_{z}\\ w_{x} & w_{y} & w_{z} \end{vmatrix} = \frac{\dd(u, v, w)}{\dd(x, y, z)} \] does or does not vanish for all values of $x$,~$y$,~$z$.
\Item{21.} Show that $ax^{2} + 2hxy + b\DPtypo{y_{2}}{y^{2}}$ and $Ax^{2} + 2Hxy + By^{2}$ are independent unless $a/A = h/H = b/B$.
\Item{22.} Show that $ax^{2} + by^{2} + cz^{2} + 2fyz + 2gzx + 2hxy$ can be expressed as a product of two linear functions of $x$,~$y$, and~$z$ if and only if \[ abc + 2fgh - af^{2} - bg^{2} - ch^{2} = 0. \] [Write down the condition that $px + qy + rz$ and $p'x + q'y + r'z$ should be connected with the given function by a functional relation.] \PageSep{304}
\Item{23.} If $u$~and~$v$ are functions of $\xi$~and~$\eta$, which are themselves functions of $x$~and~$y$, then \[ \frac{\dd(u, v)}{\dd(x, y)} = \frac{\dd(u, v)}{\dd(\xi, \eta)}\, \frac{\dd(\xi, \eta)}{\dd(x, y)}. \] Extend the result to any number of variables.
\Item{24.} Let $f(x)$~be a function of~$x$ whose derivative is~$1/x$ and which vanishes when $x = 1$. Show that if $u = f(x) + f(y)$, $v = xy$, then $u_{x}v_{y} - u_{y}v_{x} = 0$, and hence that $u$~and~$v$ are connected by a functional relation. By putting $y = 1$, show that this relation must be $f(x) + f(y) = f(xy)$. Prove in a similar manner that if the derivative of~$f(x)$ is $1/(1 + x^{2})$, and $f(0) = 0$, then $f(x)$~must satisfy the equation \[ f(x) + f(y) = f\left(\frac{x + y}{1 - xy}\right). \]
\Item{25.} Prove that if $\ds f(x) = \int_{0}^{x} \frac{dt}{\sqrtp{1 - t^{4}}}$ then \[ f(x) + f(y) = f\left\{ \frac{x\sqrtp{1 - y^{4}} + y\sqrtp{1 - x^{4}}}{1 + x^{2}y^{2}} \right\}. \]
\Item{26.} Show that if a functional relation exists between \[ u = f(x) + f(y) + f(z),\quad v = f(y)f(z) + f(z)f(x) + f(x)f(y),\quad w = f(x)f(y)f(z), \] then $f$~must be a constant.
[The condition for a functional relation will be found to be \[ f'(x)f'(y)f'(z) \{f(y) - f(z)\} \{f(z) - f(x)\} \{f(x) - f(y)\} = 0.] \] \Item{27.} If $f(y, z)$, $f(z, x)$, and $f(x, y)$ are connected by a functional relation then $f(x, x)$~is independent of~$x$. \MathTrip{1909.} \Item{28.} If $u = 0$, $v = 0$, $w = 0$ are the equations of three circles, rendered homogeneous as in Ex.~19, then the equation \[ \frac{\dd(u, v, w)}{\dd(x, y, z)} = 0 \] represents the circle which cuts them all orthogonally. \MathTrip{1900.} \Item{29.} If $A$,~$B$,~$C$ are three functions of~$x$ such that \[ \begin{vmatrix} A & A' & A''\\ B & B' & B''\\ C & C' & C'' \end{vmatrix} \] vanishes identically, then we can find constants $\lambda$,~$\mu$,~$\nu$ such that $\lambda A + \mu B + \nu C$ vanishes identically; and conversely. [The converse is almost obvious. To prove the direct theorem let $\alpha = BC' - B'C$,~\dots. Then $\alpha' = BC'' - B''C$,~\dots, and it follows from the vanishing of the determinant that $\beta\gamma' - \beta'\gamma = 0$,~\dots; and so that the ratios $\alpha : \beta : \gamma$ are constant. But $\alpha A + \beta B + \gamma C = 0$.] \Item{30.} Suppose that three variables $x$,~$y$,~$z$ are connected by a relation in virtue of which (i)~$z$~is a function of $x$~and~$y$, with derivatives $z_{x}$\Add{,}~$z_{y}$, and (ii)~$x$ is a function of $y$~and~$z$, with derivatives $x_{y}$,~$x_{z}$. Prove that \[ x_{y} = - z_{y}/z_{x},\quad x_{z} = 1/z_{x}. \] \PageSep{305} [We have \[ dz = z_{x}\, dx + z_{y}\, dy,\quad dx = x_{y}\, dy + x_{z}\, dz\Add{.} \] The result of substituting for~$dx$ in the first equation is \[ dz = (z_{x} x_{y} + z_{y})\, dy + z_{x}x_{z}\, dz, \] which can be true only if $z_{x} x_{y} + z_{y} = 0$, $z_{x} x_{z} = 1$.] \Item{31.} Four variables $x$, $y$, $z$, $u$ are connected by two relations in virtue of which any two can be expressed as functions of the others. 
Show that \[ y_{z}^{u}z_{x}^{u}x_{y}^{u} = -y_{z}^{x}z_{x}^{y}x_{y}^{z} = 1,\quad x_{z}^{u}z_{x}^{y} + y_{z}^{u}z_{y}^{x} = 1, \] where $y_{z}^{u}$~denotes the derivative of~$y$, when expressed as a function of $z$~and~$u$, with respect to~$z$. \MathTrip{1897.} \Item{32.} Find $A$, $B$, $C$, $\lambda$ so that the first four derivatives of \[ \int_{a}^{a+x} f(t)\, dt - x[Af(a) + Bf(a + \lambda x) + Cf(a + x)] \] vanish when $x = 0$; and $A$, $B$, $C$, $D$, $\lambda$,~$\mu$ so that the first six derivatives of \[ \int_{a}^{a+x} f(t)\, dt - x[Af(a) + Bf(a + \lambda x) + Cf(a + \mu x) + Df(a + x)] \] vanish when $x = 0$. \Item{33.} If $a > 0$, $ac - b^{2} > 0$, and $x_{1} > x_{0}$, then \[ \int_{x_{0}}^{x_{1}} \frac{dx}{ax^{2} + 2bx + c} = \frac{1}{\sqrtp{ac - b^{2}}} \arctan\left\{ \frac{(x_{1} - x_{0}) \sqrtp{ac - b^{2}}} {ax_{1}x_{0} + b(x_{1} + x_{0}) + c} \right\}, \] the inverse tangent lying between $0$ and~$\pi$.\footnote {In connection with Exs.~33--35, 38, and~40 see a paper by Dr~Bromwich in vol.~xxxv of the \textit{Messenger of Mathematics}.} \Item{34.} Evaluate the integral $\ds\int_{-1}^{1} \frac{\sin\alpha\, dx}{1 - 2x\cos\alpha + x^{2}}$. For what values of~$\alpha$ is the integral a discontinuous function of~$\alpha$? \MathTrip{1904.} {\Loosen[The value of the integral is~$\frac{1}{2}\pi$ if $2n\pi < \alpha < (2n + 1)\pi$, and $-\frac{1}{2}\pi$ if $(2n - 1)\pi < \alpha < 2n\pi$, $n$~being any integer; and $0$~if $\alpha$~is a multiple of~$\pi$.]} \Item{35.} If $ax^{2} + 2bx + c > 0$ when $x_{0} \leq x \leq x_{1}$, $f(x) = \sqrtp{ax^{2} + 2bx + c}$, and \[ y = f(x),\quad y_{0} = f(x_{0}),\quad y_{1} = f(x_{1}),\quad X = (x_{1} - x_{0})/(y_{1} + y_{0}), \] then \[ \int_{x_{0}}^{x_{1}} \frac{dx}{y} = \frac{1}{\sqrt{a}} \log \frac{1 + X\sqrt{a}}{1 - X\sqrt{a}},\quad \frac{-2}{\sqrtp{-a}} \arctan\{X\sqrtp{-a}\}, \] according as $a$~is positive or negative. In the latter case the inverse tangent lies between $0$ and~$\frac{1}{2}\pi$. 
[It will be found that the substitution $t = \dfrac{x - x_{0}}{y + y_{0}}$ reduces the integral to the form $\ds 2\int_{0}^{X} \frac{dt}{1 - at^{2}}$.] \Item{36.} Prove that \[ \int_{0}^{a} \frac{dx}{x + \sqrtp{a^{2} - x^{2}}} = \tfrac{1}{4}\pi. \] \MathTrip{1913.} \Item{37.} If $a > 1$ then \[ \int_{-1}^{1} \frac{\sqrtp{1 - x^{2}}}{a - x}\, dx = \pi\{a - \sqrtp{a^{2} - 1}\}. \] \PageSep{306} \Item{38.} If $p > 1$, $0 < q < 1$, then \[ \int_{0}^{1} \frac{dx}{\sqrtbr{\{1 + (p^{2} - 1)x\}\{1 - (1 - q^{2}) x\}}} = \frac{2\omega}{(p + q)\sin\omega}, \] where $\omega$~is the positive acute angle whose cosine is $(1 + pq)/(p + q)$. \Item{39.} If $a > b > 0$, then \[ %[** TN: In-line in the original] \int_{0}^{2\pi} \frac{\sin^{2}\theta\, d\theta}{a - b\cos\theta} = \frac{2\pi}{b^{2}} \{a - \sqrtp{a^{2} - b^{2}}\}. \] \MathTrip{1904.} \Item{40.} Prove that if $a > \sqrtp{b^{2} + c^{2}}$ then \[ \int_{0}^{\pi} \frac{d\theta}{a + b\cos\theta + c\sin\theta} = \frac{2}{\sqrtp{a^{2} - b^{2} - c^{2}}} \arctan \left\{\frac{\sqrtp{a^{2} - b^{2} - c^{2}}}{c}\right\}, \] the inverse tangent lying between $0$ and~$\pi$. \Item{41.} If $f(x)$~is continuous and never negative, and $\ds\int_{a}^{b} f(x)\, dx = 0$, then $f(x) = 0$ for all values of~$x$ between $a$ and~$b$. [If $f(x)$~were equal to a positive number~$k$ when $x = \xi$, say, then we could, in virtue of the continuity of~$f(x)$, find an interval $\DPmod{(\xi - \delta, \xi + \delta)}{[\xi - \delta, \xi + \delta]}$ throughout which $f(x) > \frac{1}{2}k$; and then the value of the integral would be greater than~$\delta k$.] \Item{42.} \Topic{Schwarz's inequality for integrals.} Prove that \[ \left(\int_{a}^{b} \phi\psi\, dx\right)^{2} \leq \int_{a}^{b} \phi^{2}\, dx \int_{a}^{b} \psi^{2}\, dx. 
\] [Use the definitions of \SecNo[§§]{156}~and~\SecNo{157}, and the inequality \[ \left(\tsum\phi_{\nu}\psi_{\nu}\, \delta_{\nu}\right)^{2} \leq \tsum\phi_{\nu}^{2}\, \delta_{\nu} \tsum\psi_{\nu}^{2}\, \delta_{\nu} \] (\Ref{Ch.}{I}, \MiscEx{I}~10).] \Item{43.} If \[ %[** TN: In-line in the original] P_{n}(x) = \frac{1}{(\beta - \alpha)^{n} n!} \left(\frac{d}{dx}\right)^{n} \{(x - \alpha)(\beta - x)\}^{n}, \] then $P_{n}(x)$~is a polynomial of degree~$n$, which possesses the property that \[ \int_{\alpha}^{\beta} P_{n}(x)\theta(x)\, dx = 0 \] if $\theta(x)$~is any polynomial of degree less than~$n$. [Integrate by parts $m + 1$ times, where $m$~is the degree of~$\theta(x)$, and observe that $\theta^{(m+1)}(x) = 0$.] \Item{44.} Prove that \[ %[** TN: In-line in the original] \int_{\alpha}^{\beta} P_{m}(x) P_{n}(x)\, dx = 0 \] if $m \neq n$, but that if $m = n$ then the value of the integral is $(\beta - \alpha)/(2n + 1)$. \Item{45.} If $Q_{n}(x)$~is a polynomial of degree $n$, which possesses the property that \[ %[** TN: In-line in the original] \int_{\alpha}^{\beta} Q_{n}(x)\theta(x)\, dx = 0 \] if $\theta(x)$~is any polynomial of degree less than~$n$, then $Q_{n}(x)$~is a constant multiple of~$P_{n}(x)$. [We can choose~$\kappa$ so that $Q_{n} - \kappa P_{n}$ is of degree~$n - 1$: then \[ \int_{\alpha}^{\beta} Q_{n}(Q_{n} - \kappa P_{n})\, dx = 0,\quad \int_{\alpha}^{\beta} P_{n}(Q_{n} - \kappa P_{n})\, dx = 0, \] \PageSep{307} and so \[ \int_{\alpha}^{\beta} (Q_{n} - \kappa P_{n})^{2}\, dx = 0. \] Now apply Ex.~41.] \Item{46.} \Topic{Approximate Values of definite integrals.} Show that the error in taking $\tfrac{1}{2}(b - a) \{\phi(a) + \phi(b)\}$ as the value of the integral $\ds\int_{a}^{b} \phi(x)\, dx$ is less than $\tfrac{1}{12}M(b - a)^{3}$, where $M$~is the maximum of~$|\phi''(x)|$ in the interval $\DPmod{(a, b)}{[a, b]}$; and that the error in taking $(b - a)\phi\{\tfrac{1}{2}(a + b)\}$ is less than $\tfrac{1}{24}M(b - a)^{3}$. 
[Write $f'(x)= \phi(x)$ in Exs.\ 4~and~5.] Show that the error in taking \[ \tfrac{1}{6}(b - a)[\phi(a) + \phi(b) + 4\phi\{\tfrac{1}{2}(a + b)\}] \] as the value is less than $\tfrac{1}{2880}M(b - a)^{5}$, where $M$~is the maximum of~$\phi^{(4)}(x)$. [Use Ex.~6. This rule, which gives a very good approximation, is known as \Emph{Simpson's Rule}. It amounts to taking one-third of the first approximation given above and two-thirds of the second.] Show that the approximation assigned by Simpson's Rule is the area bounded by the lines $x = a$, $x = b$, $y = 0$, and a parabola with its axis parallel to~$OY$ and passing through the three points on the curve $y = \phi(x)$ whose abscissae are $a$,~$\tfrac{1}{2}(a + b)$,~$b$. It should be observed that if $\phi(x)$~is any cubic polynomial then $\phi^{(4)}(x) = 0$, and Simpson's Rule is exact. That is to say, given three points whose abscissae are $a$,~$\tfrac{1}{2}(a + b)$,~$b$, we can draw through them an infinity of curves of the type $y = \alpha + \beta x + \gamma x^{2} + \delta x^{3}$; and all such curves give the same area. For one curve $\delta = 0$, and this curve is a parabola. \Item{47.} If $\phi(x)$~is a polynomial of the fifth degree, then \[ \int_{0}^{1} \phi(x)\, dx = \tfrac{1}{18}\{5\phi(\alpha) + 8\phi(\tfrac{1}{2}) + 5\phi(\beta)\}, \] $\alpha$~and~$\beta$ being the roots of the equation $x^{2} - x + \frac{1}{10} = 0$. \MathTrip{1909.} \Item{48.} {\Loosen Apply Simpson's Rule to the calculation of~$\pi$ from the formula $\ds\tfrac{1}{4}\pi = \int_{0}^{1} \frac{dx}{1 + x^{2}}$. [The result is~$.7833\dots$. If we divide the integral into two, from $0$ to~$\tfrac{1}{2}$ and $\tfrac{1}{2}$ to~$1$, and apply Simpson's Rule to the two integrals separately, we obtain $.785\MS391\MS6\dots$. The correct value is~$.785\MS398\MS1\dots$.]} \Item{49.} Show that \[ 8.9 < \int_{3}^{5} \sqrtp{4 + x^{2}}\, dx < 9. 
\] \MathTrip{1903.}
\Item{50.} Calculate the integrals \[ \int_{0}^{1} \frac{dx}{1 + x},\quad \int_{0}^{1} \frac{dx}{\sqrtp{1 + x^{4}}},\quad \int_{0}^{\pi} \sqrtp{\sin x}\, dx,\quad \int_{0}^{\pi} \frac{\sin x}{x}\, dx, \] to two places of decimals. [In the last integral the subject of integration is not defined when $x = 0$: but if we assign to it, when $x = 0$, the value~$1$, it becomes continuous throughout the range of integration.] \end{Examples} \PageSep{308} \Chapter[THE CONVERGENCE OF INFINITE SERIES, ETC.] {VIII}{THE CONVERGENCE OF INFINITE SERIES AND \\ INFINITE INTEGRALS} \Paragraph{165.} \First{In} \Ref{Ch.}{IV} we explained what was meant by saying that an infinite series is \emph{convergent}, \emph{divergent}, or \emph{oscillatory}, and illustrated our definitions by a few simple examples, mainly derived from the geometrical series \[ 1 + x + x^{2} + \dots \] and other series closely connected with it. In this chapter we shall pursue the subject in a more systematic manner, and prove a number of theorems which enable us to determine when the simplest series which commonly occur in analysis are convergent. We shall often use the notation \[ u_{m} + u_{m+1} + \dots + u_{n} = \sum_{m}^{n} \DPtypo{\phi(\nu)}{u_{\nu}}, \] and write $\sum\limits_{0}^{\infty} u_{n}$, or simply $\sum u_{n}$, for the infinite series $u_{0} + u_{1} + u_{2} + \dots$.\footnote {It is of course a matter of indifference whether we denote our series by $u_{1} + u_{2} + \dots$ (as in \Ref{Ch.}{IV}) or by $u_{0} + u_{1} + \dots$ (as here). Later in this chapter we shall be concerned with series of the type $a_{0} + a_{1}x + a_{2}x^{2} + \dots$: for these the latter notation is clearly more convenient. We shall therefore adopt this as our standard notation. But we shall not adhere to it systematically, and we shall suppose that $u_{1}$~is the first term whenever this course is more convenient.
It is more convenient, for example, when dealing with the series $1 + \frac{1}{2} + \frac{1}{3} + \dots$, to suppose that $u_{n} = 1/n$ and that the series begins with~$u_{1}$, than to suppose that $u_{n} = 1/(n + 1)$ and that the series begins with~$u_{0}$. This remark applies, \eg, to \Ex{lxviii}.~4.} \Paragraph{166. Series of Positive Terms.} The theory of the convergence of series is comparatively simple when all the terms of the series considered are positive.\footnote {Here and in what follows `positive' is to be regarded as including zero.} We shall consider such series \PageSep{309} first, not only because they are the easiest to deal with, but also because the discussion of the convergence of a series containing negative or complex terms can often be made to depend upon a similar discussion of a series of positive terms only. When we are discussing the convergence or divergence of a series we may disregard any finite number of terms. Thus, when a series contains a finite number only of negative or complex terms, we may omit them and apply the theorems which follow to the remainder. \Paragraph{167\Add{.}} It will be well to recall the following fundamental theorems established in~\SecNo[§]{77}. \begin{Result} \Item{A.} A series of positive terms must be convergent or diverge to~$\infty$, and cannot oscillate. \end{Result} \begin{Result} \Item{B.} The necessary and sufficient condition that $\sum u_{n}$ should be convergent is that there should be a number~$K$ such that \[ u_{0} + u_{1} + \dots + u_{n} < K \] for all values of~$n$. \end{Result} \begin{Result} \Topic{\Item{C.} The comparison theorem.} If $\sum u_{n}$~is convergent, and $v_{n} \leq u_{n}$ for all values of~$n$, then $\sum v_{n}$~is convergent, and $\sum v_{n} \leq \sum u_{n}$. More generally, if $v_{n} \leq Ku_{n}$, where $K$~is a constant, then $\sum v_{n}$ is convergent and $\sum v_{n} \leq K \sum u_{n}$. 
And if $\sum u_{n}$~is divergent, and $v_{n} \geq Ku_{n}$, then $\sum v_{n}$~is divergent.\footnote {The last part of this theorem was not actually stated in \SecNo[§]{77}, but the reader will have no difficulty in supplying the proof.} \end{Result} Moreover, in inferring the convergence or divergence of~$\sum v_{n}$ by means of one of these tests, it is sufficient to know that the test is satisfied for \emph{sufficiently large} values of~$n$, \ie\ for all values of~$n$ greater than a definite value~$n_{0}$. But of course the conclusion that $\sum v_{n} \leq K \sum u_{n}$ does not necessarily hold in this case. A particularly useful case of this theorem is \begin{Result} \Item{D.} If $\sum u_{n}$~is convergent \(divergent\) and $u_{n}/v_{n}$~tends to a limit other than zero as $n \to \infty$, then $\sum v_{n}$~is convergent \(divergent\). \end{Result} \Paragraph{168. First applications of these tests.} The one important fact which we know at present, as regards the convergence of any \PageSep{310} special class of series, is that $\sum r^{n}$~is convergent if $r < 1$ and divergent if $r \geq 1$.\footnote {We shall use $r$ in this chapter to denote a number which is always positive or zero.} It is therefore natural to try to apply Theorem~C, taking $u_{n} = r^{n}$. We at once find \begin{Result} \Item{1.} The series~$\sum v_{n}$ is convergent if $v_{n} \leq Kr^{n}$, where $r < 1$, for all sufficiently large values of~$n$. \end{Result} When $K = 1$, this condition may be written in the form $v_{n}^{1/n} \leq r$. Hence we obtain what is known as \Emph{Cauchy's test} for the convergence of a series of positive terms; viz. \begin{Result} \Item{2.} The series~$\sum v_{n}$ is convergent if $v_{n}^{1/n} \leq r$, where $r < 1$, for all sufficiently large values of~$n$. \end{Result} There is a corresponding test for divergence, viz. \begin{Result} \Item{2\ia.} The series~$\sum v_{n}$ is divergent if $v_{n}^{1/n} \geq 1$ for an infinity of values of~$n$. 
\end{Result} This hardly requires proof, for $v_{n}^{1/n} \geq 1$ involves $v_{n} \geq 1$. The two theorems 2~and~2\ia\ are of very wide application, but for some purposes it is more convenient to use a different test of convergence, viz. \begin{Result} \Item{3.} The series~$\sum v_{n}$ is convergent if $v_{n+1}/v_{n} \leq r$, $r < 1$, for all sufficiently large values of~$n$. \end{Result} To prove this we observe that if $v_{n+1}/v_{n} \leq r$ when $n \geq n_{0}$ then \[ v_{n} = \frac{v_{n}}{v_{n-1}}\, \frac{v_{n-1}}{v_{n-2}} \dots \frac{v_{n_{0}+1}}{v_{n_{0}}}\, v_{n_{0}} \leq \frac{v_{n_{0}}}{r^{n_{0}}} r^{n}; \] and the result follows by comparison with the convergent series~$\sum r^{n}$. This test is known as \Emph{d'Alembert's test}. We shall see later that it is less general, theoretically, than Cauchy's, in that Cauchy's test can be applied whenever d'Alembert's can, and sometimes when the latter cannot. Moreover the test for divergence which corresponds to d'Alembert's test for convergence is much less general than the test given by Theorem~2\ia. It is true, as the reader will easily prove for himself, that if $v_{n+1}/v_{n} \geq r \geq 1$ for all values of~$n$, or all sufficiently large values, then $\sum v_{n}$~is divergent. But it is not true (see \Ex{lxvii}.~9) that this is so if only $v_{n+1}/v_{n} \geq r \geq 1$ for an \emph{infinity} of values of~$n$, whereas in Theorem~2\ia\ \PageSep{311} our test had only to be satisfied for such an infinity of values. None the less d'Alembert's test is very useful in practice, because when $v_{n}$~is a complicated function $v_{n+1}/v_{n}$~is often much less complicated and so easier to work with. In the simplest cases which occur in analysis it often happens that $v_{n+1}/v_{n}$ or $v_{n}^{1/n}$ tends to a limit as $n \to \infty$.\footnote {It will be proved in \Ref{Ch.}{IX} (\Ex{lxxxvii}.~36) that if $v_{n+1}/v_{n} \to l$ then $v_{n}^{1/n} \to l$. 
That the converse is not true may be seen by supposing that $v_{n} = 1$ when $n$~is odd and $v_{n} = 2$ when $n$~is even.} When this limit is less than~$1$, it is evident that the conditions of Theorems 2~or~3 above are satisfied. Thus \begin{Result} \Item{4.} If $v_{n}^{1/n}$ or $v_{n+1}/v_{n}$ tends to a limit less than unity as $n \to \infty$, then the series~$\sum v_{n}$ is convergent. \end{Result} It is almost obvious that if either function tend\DPnote{** [sic]} to a limit greater than unity, then $\sum v_{n}$~is divergent. We leave the formal proof of this as an exercise to the reader. But when $v_{n}^{1/n}$ or $v_{n+1}/v_{n}$ tends to~$1$ these tests generally fail completely, and they fail also when $v_{n}^{1/n}$ or $v_{n+1}/v_{n}$ oscillates in such a way that, while always less than~$1$, it assumes for an infinity of values of~$n$ values approaching indefinitely near to~$1$. And the tests which involve $v_{n+1}/v_{n}$ fail even when that ratio oscillates so as to be sometimes less than and sometimes greater than~$1$. When $v_{n}^{1/n}$~behaves in this way Theorem~2\ia\ is sufficient to prove the divergence of the series. But it is clear that there is a wide margin of cases in which some more subtle tests will be needed. \begin{Examples}{LXVII.} \Item{1.} Apply Cauchy's and d'Alembert's tests (as\PageLabel{311} specialised in 4~above) to the series $\sum n^{k} r^{n}$, where $k$~is a positive rational number. [Here $v_{n+1}/v_{n} = \{(n + 1)/n\}^{k} r \to r$, so that d'Alembert's test shows at once that the series is convergent if $r < 1$ and divergent if $r > 1$. The test fails if $r = 1$: but the series is then obviously divergent. Since $\lim n^{1/n} = 1$ (\Ex{xxvii}.~11), Cauchy's test leads at once to the same conclusions.] \Item{2.} Consider the series $\sum(An^{k} + Bn^{k-1} + \dots + K) r^{n}$. [We may suppose $A$ positive. 
If the coefficient of~$r^{n}$ is denoted by~$P(n)$, then $P(n)/n^{k} \to A$ and, by D~of \SecNo[§]{167}, the series behaves like $\sum n^{k} r^{n}$.] \Item{3.} Consider \[ \sum \frac{An^{k} + Bn^{k-1} + \dots + K} {\alpha n^{l} + \beta n^{l-1} + \dots + \kappa} r^{n}\quad (A > 0,\ \alpha > 0). \] [The series behaves like $\sum n^{k-l} r^{n}$. The case in which $r = 1$, $k < l$ requires further consideration.] \PageSep{312} \Item{4.} We have seen (\Ref{Ch.}{IV}, \MiscEx{IV}~17) that the series \[ \sum \frac{1}{n(n + 1)},\quad \sum \frac{1}{n(n + 1)\dots (n + p)} \] are convergent. Show that Cauchy's and d'Alembert's tests both fail when applied to them. [For $\lim u_{n}^{1/n} = \lim (u_{n+1}/u_{n}) = 1$.] \Item{5.} Show that the series~$\sum n^{-p}$, where $p$~is an integer not less than~$2$, is convergent. [Since $\lim \{n(n + 1)\dots (n + p - 1)\}/n^{p} = 1$, this follows from the convergence of the series considered in Ex.~4. It has already been shown in \SecNo[§]{77},~\Eq{(7)} that the series is divergent if $p = 1$, and it is obviously divergent if $p \leq 0$.] \Item{6.} Show that the series \[ \sum \frac{An^{k} + Bn^{k-1} + \dots + K} {\alpha n^{l} + \beta n^{l-1} + \dots + \kappa} \] is convergent if $l > k + 1$ and divergent if $l \leq k + 1$. \Item{7.} If $m_{n}$~is a positive integer, and $m_{n+1} > m_{n}$, then the series $\sum r^{m_{n}}$ is convergent if $r < 1$ and divergent if $r \geq 1$. For example the series $1 + r + r^{4} + r^{9} + \dots$ is convergent if $r < 1$ and divergent if $r \geq 1$. \Item{8.} Sum the series $1 + 2r + 2r^{4} + \dots$ to $24$~places of decimals when $r = .1$ and to $2$~places when $r = .9$. [If $r = .1$, then the first $5$~terms give the sum $1.200\MS200\MS002\MS000\MS000\MS2$, and the error is \[ 2r^{25} + 2r^{36} + \dots < 2r^{25} + 2r^{36} + 2r^{47} + \dots = 2r^{25}/(1 - r^{11}) < 3/10^{25}. \] If $r = .9$, then the first $8$~terms give the sum $5.458\dots$, and the error is less than $2r^{64}/(1 - r^{17}) < .003$.] 
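As a modern check (not Hardy's) of the partial sums quoted in Ex.~8, where the exponents run through the squares $n^{2}$:

```python
# Ex. 8: partial sums of 1 + 2r + 2r^4 + 2r^9 + ...  (exponents n^2).
def theta_partial_sum(r, terms):
    """Sum of the first `terms` terms, counting the leading 1."""
    return 1 + 2*sum(r**(n*n) for n in range(1, terms))

s1 = theta_partial_sum(0.1, 5)        # Hardy: 1.200 200 002 000 000 2
s2 = theta_partial_sum(0.9, 8)        # Hardy: 5.458...
print(s1, s2)
```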
\Item{9\Add{.}} If $0 < a < b < 1$, then the series $a + b + a^{2} + b^{2} + a^{3} + \dots$ is convergent. Show that Cauchy's test may be applied to this series, but that d'Alembert's test fails. [For \[ v_{2n+1}/v_{2n} = (b/a)^{n+1} \to \infty,\quad v_{2n+2}/v_{2n+1} = b(a/b)^{n+2} \to 0.] \] \Item{10.} The series $1 + r + \dfrac{r^{2}}{2!} + \dfrac{r^{3}}{3!} + \dots$ and $1 + r + \dfrac{r^{2}}{2^{2}} + \dfrac{r^{3}}{3^{3}} + \dots$ are convergent for all positive values of~$r$. \Item{11.} If $\sum u_{n}$~is convergent then so are $\sum u_{n}^{2}$ and $\sum u_{n}/(1 + u_{n})$. \Item{12.} If $\sum u_{n}^{2}$~is convergent then so is $\sum u_{n}/n$. [For $2u_{n}/n \leq u_{n}^{2} + (1/n^{2})$ and $\sum (1/n^{2})$~is convergent.] \Item{13.} Show that \[ %[** TN: In-line in the original] 1 + \frac{1}{3^{2}} + \frac{1}{5^{2}} + \dots = \frac{3}{4}\left(1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \dots \right) \] and \[ 1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \frac{1}{5^{2}} + \frac{1}{6^{2}} + \frac{1}{7^{2}} + \frac{1}{9^{2}} + \dots = \frac{15}{16} \left(1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \dots\right). \] \PageSep{313} [To prove the first result we note that \begin{align*} 1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \dots &= \left(1 + \frac{1}{2^{2}}\right) + \left(\frac{1}{3^{2}} + \frac{1}{4^{2}}\right) + \dots\\ &= 1 + \frac{1}{3^{2}} + \frac{1}{5^{2}} + \dots + \frac{1}{2^{2}} \left(1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \dots\right), \end{align*} by theorems \Eq{(8)}~and~\Eq{(6)} of~\SecNo[§]{77}.] \Item{14.} Prove by a \textit{reductio ad absurdum} that $\sum (1/n)$~is divergent. 
[If the series were convergent we should have, by the argument used in Ex.~13, \[ 1 + \tfrac{1}{2} + \tfrac{1}{3} + \dots = (1 + \tfrac{1}{3} + \tfrac{1}{5} + \dots) + \tfrac{1}{2} (1 + \tfrac{1}{2} + \tfrac{1}{3}+ \dots), \] or \[ \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{6} + \dots = 1 + \tfrac{1}{3} + \tfrac{1}{5} + \dots \] which is obviously absurd, since every term of the first series is less than the corresponding term of the second.]\PageLabel{313} \end{Examples} \Paragraph{169.} Before proceeding further in the investigation of tests of convergence and divergence, we shall prove an important general theorem concerning series of positive terms. \begin{ParTheorem}{Dirichlet's Theorem.\protect\footnotemark} The sum of a series of positive\footnotetext {This theorem seems to have first been stated explicitly by Dirichlet in 1837. It was no doubt known to earlier writers, and in particular to Cauchy.} terms is the same in whatever order the terms are taken. \end{ParTheorem} This theorem asserts that if we have a convergent series of positive terms, $u_{0} + u_{1} + u_{2} + \dots$ say, and form any other series \[ v_{0} + v_{1} + v_{2} + \dots \] out of the same terms, by taking them in any new order, then the second series is convergent and has the same sum as the first. Of course no terms must be omitted: every~$u$ must come somewhere among the~$v'$s, and \textit{vice versa}. The proof is extremely simple. Let $s$~be the sum of the series of~$u'$s. Then the sum of any number of terms, selected from the~$u'$s, is not greater than~$s$. But every~$v$ is a~$u$, and therefore the sum of any number of terms selected from the~$v'$s is not greater than~$s$. Hence $\sum v_{n}$~is convergent, and its sum~$t$ is not greater than~$s$. But we can show in exactly the same way that $s \leq t$. Thus $s = t$. \Paragraph{170. 
Multiplication of Series of Positive Terms.} An immediate corollary from Dirichlet's Theorem is the following theorem: \begin{Result}if $u_{0} + u_{1} + u_{2} + \dots$ and $v_{0} + v_{1} + v_{2} + \dots$ are two convergent \PageSep{314} series of positive terms, and $s$~and~$t$ are their respective sums, then the series \[ u_{0} v_{0} + (u_{1} v_{0} + u_{0} v_{1}) + (u_{2} v_{0} + u_{1} v_{1} + u_{0} v_{2}) + \dots \] is convergent and has the sum~$st$. \end{Result} Arrange all the possible products of pairs~$u_{m}v_{n}$ in the form of a doubly infinite array \[ \begin{array}{c|c|c|c|cc} u_{0}v_{0}& u_{1}v_{0}& u_{2}v_{0}& u_{3}v_{0}& \dots\Strut \\ \cline{1-1} \TEntry{u_{0}v_{1}}& u_{1}v_{1}& u_{2}v_{1}& u_{3}v_{1}& \dots\Strut \\ \cline{1-2} \TEntry{u_{0}v_{2}}& \TEntry{u_{1}v_{2}}& u_{2}v_{2}& u_{3}v_{2}& \dots\Strut \\ \cline{1-3} \TEntry{u_{0}v_{3}}& \TEntry{u_{1}v_{3}}& \TEntry{u_{2}v_{3}}& u_{3}v_{3}& \dots\Strut \\ \cline{1-4} \TEntry{\dots}& \TEntry{\dots}& \TEntry{\dots}& \TEntry{\dots}& \dots\rlap{\;.}\Strut \end{array} \] We can rearrange these terms in the form of a simply infinite series in a variety of ways. Among these are the following. \Item{(1)} We begin with the single term~$u_{0}v_{0}$ for which $m + n = 0$; then we take the two terms $u_{1}v_{0}$,~$u_{0}v_{1}$ for which $m + n = 1$; then the three terms $u_{2}v_{0}$,~$u_{1}v_{1}$,~$u_{0}v_{2}$ for which $m + n = 2$; and so on. We thus obtain the series \[ u_{0}v_{0} + (u_{1}v_{0} + u_{0}v_{1}) + (u_{2}v_{0} + u_{1}v_{1} + u_{0}v_{2}) + \dots \] of the theorem. \Item{(2)} We begin with the single term~$u_{0}v_{0}$ for which both suffixes are zero; then we take the terms $u_{1}v_{0}$,~$u_{1}v_{1}$,~$u_{0}v_{1}$ which involve a suffix~$1$ but no higher suffix; then the terms $u_{2}v_{0}$, $u_{2}v_{1}$, $u_{2}v_{2}$, $u_{1}v_{2}$,~$u_{0}v_{2}$ which involve a suffix~$2$ but no higher suffix; and so on. 
The sums of these groups of terms are respectively equal to \begin{multline*} u_{0}v_{0},\quad (u_{0} + u_{1})(v_{0} + v_{1}) - u_{0}v_{0},\\ (u_{0} + u_{1} + u_{2})(v_{0} + v_{1} + v_{2}) - (u_{0} + u_{1})(v_{0} + v_{1}),\ \dots \end{multline*} and the sum of the first $n + 1$ groups is \[ (u_{0} + u_{1} + \dots + u_{n})(v_{0} + v_{1} + \dots + v_{n}), \] and tends to~$st$ as $n \to \infty$. When the sum of the series is formed in this manner the sum of the first one, two, three,~\dots\ groups comprises all the terms in the first, second, third,~\dots\ rectangles indicated in the diagram above. The sum of the series formed in the second manner is~$st$. But the first series is (when the brackets are removed) a rearrangement of the second; and therefore, by Dirichlet's Theorem, it converges to the sum~$st$. Thus the theorem is proved. \PageSep{315} \begin{Examples}{LXVIII.} \Item{1\Add{.}} Verify that if $r < 1$ then \[ 1 + r^{2} + r + r^{4} + r^{6} + r^{3} + \dots = 1 + r + r^{3} + r^{2} + r^{5} + r^{7} + \dots = 1/(1 - r). \] \Item{2.\footnote {In Exs.~2--4 the series considered are of course series of positive terms.}} If either of the series $u_{0} + u_{1} + \dots$, $v_{0} + v_{1} + \dots$ is divergent, then so is the series $u_{0}v_{0} + (u_{1}v_{0} + u_{0}v_{1}) + (u_{2}v_{0} + u_{1}v_{1} + u_{0}v_{2}) + \dots$, except in the trivial case in which every term of one series is zero. \Item{3.} If the series $u_{0} + u_{1} + \dots$, $v_{0} + v_{1} + \dots$, $w_{0} + w_{1} + \dots$ converge to sums $r$,~$s$,~$t$, then the series $\sum \lambda_{k}$, where $\lambda_{k} = \sum u_{m}v_{n}w_{p}$, the summation being extended to all sets of values of $m$,~$n$,~$p$ such that $m + n + p = k$, converges to the sum~$rst$. \Item{4.} If $\sum u_{n}$ and~$\sum v_{n}$ converge to sums $s$ and~$t$, then the series~$\sum w_{n}$, where $w_{n} = \sum u_{l} v_{m}$, the summation extending to all pairs $l$,~$m$ for which $lm = n$, converges to the sum~$st$. 
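The theorem can be illustrated numerically; the following sketch (our own, not Hardy's) forms the grouped series $u_{0}v_{0} + (u_{1}v_{0} + u_{0}v_{1}) + \dots$ for two convergent geometric series and compares its partial sum with the product $st$.

```python
# Sum the first N groups w_k = sum_{m+n=k} u_m * v_n of the product series.
def cauchy_product_partial(u, v, N):
    return sum(u(m) * v(k - m) for k in range(N) for m in range(k + 1))

u = lambda n: 0.5 ** n   # geometric series with sum s = 2
v = lambda n: 0.1 ** n   # geometric series with sum t = 10/9

s, t = 2.0, 10.0 / 9.0
approx = cauchy_product_partial(u, v, 60)
print(approx, s * t)  # the partial sums approach st
```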
\end{Examples} \Paragraph{171. Further tests for convergence and divergence.} The examples on pp.~\PgNo[ref]{311}--\PgNo[ref]{313} suffice to show that there are simple and interesting types of series of positive terms which cannot be dealt with by the general tests of \SecNo[§]{168}. In fact, if we consider the simplest type of series, in which $u_{n+1}/u_{n}$~tends to a limit as $n \to \infty$, \emph{the tests of \SecNo[§]{168} generally fail when this limit is~$1$}. Thus in \Ex{lxvii}.~5 these tests failed, and we had to fall back upon a special device, which was in essence that of using the series of \Ex{lxvii}.~4 as our comparison series, instead of the geometric series. \begin{Remark} The fact is that the geometric series, by comparison with which the tests of \SecNo[§]{168} were obtained, is not only convergent but \emph{very rapidly} convergent, far more rapidly than is necessary in order to ensure convergence. The tests derived from comparison with it are therefore naturally very crude, and much more delicate tests are often wanted. We proved in \Ex{xxvii}.~7 that $n^{k}r^{n} \to 0$ as $n \to \infty$, provided $r < 1$, whatever value $k$ may have; and in \Ex{lxvii}.~1 we proved more than this, viz.\ that the series $\sum n^{k}r^{n}$ is convergent. It follows that the sequence $r$,~$r^{2}$, $r^{3}$,~\dots, $r^{n}$,~\dots, where $r < 1$, diminishes more rapidly than the sequence $1^{-k}$,~$2^{-k}$, $3^{-k}$,~\dots, $n^{-k}$,~\dots. This seems at first paradoxical if $r$~is not much less than unity, and $k$~is large. Thus of the two sequences \[ \tfrac{2}{3},\quad \tfrac{4}{9},\quad \tfrac{8}{27},\ \dots;\qquad 1,\quad \tfrac{1}{4096},\quad \tfrac{1}{531\MC441},\ \dots \] whose general terms are $(\frac{2}{3})^{n}$ and~$n^{-12}$, the second seems at first sight to decrease far more rapidly. But this is far from being the case; if only we go far enough into the sequences we shall find the terms of the first sequence very much the smaller. 
For example, \[ (2/3)^{4} = 16/81 < 1/5,\quad (2/3)^{12} < (1/5)^{3} < (1/10)^{2},\quad (2/3)^{1000} < (1/10)^{166}, \] while \[ 1000^{-12} = 10^{-36}; \] \PageSep{316} so that the $1000$th~term of the first sequence is less than the $10^{130}$th~part of the corresponding term of the second sequence. Thus the series $\sum (2/3)^{n}$ is far more rapidly convergent than the series $\sum n^{-12}$, and even this series is very much more rapidly convergent than~$\sum n^{-2}$.\footnote {Five terms suffice to give the sum of~$\sum n^{-12}$ correctly to $7$~places of decimals, whereas some $10,000,000$ are needed to give an equally good approximation to $\sum n^{-2}$. A large number of numerical results of this character will be found in Appendix~III (compiled by Mr~J. Jackson) to the author's tract `Orders of Infinity' (\textit{Cambridge Math.\ Tracts}, No.~12).} \end{Remark} \Paragraph{172.} We shall proceed to establish two further tests for the convergence or divergence of series of positive terms, \Emph{Maclaurin's (or Cauchy's) Integral Test} and \Emph{Cauchy's Condensation Test}, which, though very far from being completely general, are sufficiently general for our needs in this chapter. In applying either of these tests we make a further assumption as to the nature of the function~$u_{n}$, about which we have so far assumed only that it is positive. We assume that \emph{$u_{n}$~decreases steadily with $n$}: \ie\ that $u_{n+1} \leq u_{n}$ for all values of~$n$\DPtypo{.}{,} or at any rate all sufficiently large values. \begin{Remark} This condition is satisfied in all the most important cases. 
From one point of view it may be regarded as no restriction at all, so long as we are dealing with series of positive terms: for in virtue of Dirichlet's theorem above we may rearrange the terms without affecting the question of convergence or divergence; and there is nothing to prevent us rearranging the terms \emph{in descending order of magnitude}, and applying our tests to the series of decreasing terms thus obtained. \end{Remark} But before we proceed to the statement of these two tests, we shall state and prove a simple and important theorem, which we shall call \Emph{Abel's Theorem}.\footnote {This theorem was discovered by Abel but forgotten, and rediscovered by Pringsheim.} This is a \emph{one-sided} theorem in that it gives a sufficient test for divergence only and not for convergence, but it is essentially of a more elementary character than the two theorems mentioned above. \begin{Remark} \Paragraph{173. Abel's (or Pringsheim's) Theorem.} \begin{Result}If $\sum u_{n}$~is a convergent series of positive and decreasing terms, then $\lim nu_{n} = 0$. \end{Result} %[** TN: Keeping notation \delta] Suppose that $nu_{n}$ does not tend to zero. Then it is possible to find a positive number~$\delta$ such that $nu_{n} \geq \delta$ for an infinity of values of~$n$. Let $n_{1}$ be the first such value of~$n$; $n_{2}$~the next such value of~$n$ which is more than \PageSep{317} twice as large as~$n_{1}$; $n_{3}$~the next such value of~$n$ which is more than twice as large as~$n_{2}$; and so on. Then we have a sequence of numbers $n_{1}$,~$n_{2}$, $n_{3}$,~\dots\ such that $n_{2} > 2n_{1}$, $n_{3} > 2n_{2}$,~\dots\ and so $n_{2} - n_{1} > \frac{1}{2}n_{2}$, $n_{3} - \DPtypo{n_{1}}{n_{2}} > \frac{1}{2}n_{3}$,~\dots; and also $n_{1}u_{n_{1}} \geq \delta$, $n_{2}u_{n_{2}} \geq \delta$,~\dots. 
But, since $u_{n}$~decreases as $n$~increases, we have \begin{gather*} u_{0} + u_{1} + \dots + u_{n_{1} - 1} \geq n_{1}u_{n_{1}} \geq \delta,\\ u_{n_{1}} + \dots + u_{n_{2} - 1} \geq (n_{2} - n_{1})u_{n_{2}} > \tfrac{1}{2} n_{2}u_{n_{2}} \geq \tfrac{1}{2} \delta,\\ u_{n_{2}} + \dots + u_{n_{3} - 1} \geq (n_{3} - n_{2})u_{n_{3}} > \tfrac{1}{2} n_{3}u_{n_{3}} \geq \tfrac{1}{2} \delta, \end{gather*} and so on. Thus we can bracket the terms of the series~$\sum u_{n}$ so as to obtain a new series whose terms are severally greater than those of the divergent series \[ \delta + \tfrac{1}{2} \delta + \tfrac{1}{2} \delta + \dots; \] and therefore $\sum u_{n}$~is divergent. \end{Remark} \begin{Examples}{LXIX.} \Item{1.} Use Abel's theorem to show that $\sum (1/n)$ and $\sum \{1/(an + b)\}$ are divergent. [Here $nu_{n} \to 1$ or $nu_{n} \to 1/a$.] \Item{2.} Show that Abel's theorem is not true if we omit the condition that $u_{n}$~decreases as $n$~increases. [The series \[ 1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \frac{1}{4} + \frac{1}{5^{2}} + \frac{1}{6^{2}} + \frac{1}{7^{2}} + \frac{1}{8^{2}} + \frac{1}{9} + \frac{1}{10^{2}} + \dots, \] in which $u_{n} = 1/n$ or $1/n^{2}$, according as $n$~is or is not a perfect square, is convergent, since it may be rearranged in the form \[ \frac{1}{2^{2}} + \frac{1}{3^{2}} + \frac{1}{5^{2}} + \frac{1}{6^{2}} + \frac{1}{7^{2}} + \frac{1}{8^{2}} + \frac{1}{10^{2}} + \dots + \left(1 + \frac{1}{4} + \frac{1}{9} + \dots\right), \] and each of these series is convergent. But, since $nu_{n} = 1$ whenever $\DPtypo{u}{n}$~is a perfect square, it is clearly not true that $nu_{n} \to 0$.] \Item{3.} \emph{The converse of Abel's theorem is not true}, \ie\ it is not true that, if $u_{n}$~decreases with~$n$ and $\lim nu_{n} = 0$, then $\sum u_{n}$~is convergent. 
[Take the series $\sum(1/n)$ and multiply the first term by~$1$, the second by~$\frac{1}{2}$, the next two by~$\frac{1}{3}$, the next four by~$\frac{1}{4}$, the next eight by~$\frac{1}{5}$, and so on. On grouping in brackets the terms of the new series thus formed we obtain
\[
1 + \tfrac{1}{2} \cdot \tfrac{1}{2} + \tfrac{1}{3} \left(\tfrac{1}{3} + \tfrac{1}{4}\right) + \tfrac{1}{4} \left(\tfrac{1}{5} + \tfrac{1}{6} + \tfrac{1}{7} + \tfrac{1}{8}\right) + \dots;
\]
and this series is divergent, since its terms are greater than those of
\[
1 + \tfrac{1}{2} \cdot \tfrac{1}{2} + \tfrac{1}{3} \cdot \tfrac{1}{2} + \tfrac{1}{4} \cdot \tfrac{1}{2} + \dots,
\]
which is divergent. But it is easy to see that the terms of the series
\[
1 + \tfrac{1}{2} \cdot \tfrac{1}{2} + \tfrac{1}{3} \cdot \tfrac{1}{3} + \tfrac{1}{3} \cdot \tfrac{1}{4} + \tfrac{1}{4} \cdot \tfrac{1}{5} + \tfrac{1}{4} \cdot \tfrac{1}{6} + \dots
\]
satisfy the condition that $nu_{n} \to 0$. In fact $nu_{n} = 1/\nu$ if $2^{\nu-2} < n \leq 2^{\nu-1}$, and $\nu \to \infty$ as $n \to \infty$.]
\end{Examples}
\PageSep{318}
\Paragraph{174. Maclaurin's (or Cauchy's) Integral Test.\protect\footnotemark} If $u_{n}$~decreases \footnotetext{The test was discovered by Maclaurin and rediscovered by Cauchy, to whom it is usually attributed.}%
steadily as $n$~increases, we can write $u_{n} = \phi(n)$ and suppose that $\phi(n)$~is the value assumed, when $x = n$, by a continuous and steadily decreasing function~$\phi(x)$ of the continuous variable~$x$. Then, if $\nu$~is any positive integer, we have
\[
\phi(\nu - 1) \geq \phi(x) \geq \phi(\nu)
\]
when $\nu - 1 \leq x \leq \nu$. Let
\[
v_{\nu} = \phi(\nu - 1) - \int_{\nu-1}^{\nu} \phi(x)\, dx
       = \int_{\nu-1}^{\nu} \{\phi(\nu - 1) - \phi(x)\}\, dx,
\]
so that
\[
0 \leq v_{\nu} \leq \phi(\nu - 1) - \phi(\nu).
\]
Then $\sum v_{\nu}$~is a series of positive terms, and
\[
v_{2} + v_{3} + \dots + v_{n} \leq \phi(1) - \phi(n) \leq \phi(1).
\] Hence $\sum v_{\nu}$~is convergent, and so $v_{2} + v_{3} + \dots + v_{n}$ or \[ \sum_{1}^{n-1} \phi(\nu) - \int_{1}^{n} \phi(x)\, dx \] tends to a positive limit as $n \to \infty$. Let us write \[ \Phi(\xi) = \int_{1}^{\xi} \phi(x)\, dx, \] so that $\Phi(\xi)$~is a continuous and steadily increasing function of~$\xi$. Then \[ u_{1} + u_{2} + \dots + u_{n-1} - \Phi(n) \] tends to a positive limit, not greater than~$\phi(1)$, as $n \to \infty$. Hence $\sum u_{\nu}$~is convergent or divergent according as $\Phi(n)$~tends to a limit or to infinity as $n \to \infty$, and therefore, since $\Phi(n)$~increases steadily, according as $\Phi(\xi)$~tends to a limit or to infinity as $\xi \to \infty$. Hence \begin{Result}if $\phi(x)$~is a function of~$x$ which is positive and continuous for all values of~$x$ greater than unity, and decreases steadily as $x$~increases, then the series \[ \phi(1) + \phi(2) + \dots \] does or does not converge according as \[ \Phi(\xi) = \int_{1}^{\xi} \phi(x)\, dx \] does or does not tend to a limit~$l$ as $\xi \to \infty$; and, in the first case, the sum of the series is not greater than $\phi(1) + l$. \end{Result} \PageSep{319} \begin{Remark} The sum must in fact be less than~$\phi(1) + l$. For it follows from \Eq{(6)}~of \SecNo[§]{160}, and \Ref{Ch.}{VII}, \MiscEx{VII}~41, that $v_{\nu} < \phi(\nu - 1) - \phi(\nu)$, unless $\phi(x) = \phi(\nu)$ throughout the interval $\DPmod{(\nu - 1, \nu)}{[\nu - 1, \nu]}$; and this cannot be true for all values of~$\nu$. \end{Remark} \begin{Examples}{LXX.} \Item{1.} Prove that \[ \sum_{1}^{\infty} \frac{1}{n^{2} + 1} < \tfrac{1}{2} + \tfrac{1}{4}\pi\Add{.} \] \Item{2.} Prove that \[ -\tfrac{1}{2} \pi < \sum_{1}^{\infty} \frac{a}{a^{2} + n^{2}} < \tfrac{1}{2} \pi. \] \MathTrip{1909.} \Item{3.} Prove that if $m > 0$ then \[ \frac{1}{m^{2}} + \frac{1}{(m + 1)^{2}} + \frac{1}{(m + 2)^{2}} + \dots < \frac{m + 1}{m}\Add{.} \] \end{Examples} \Paragraph{175. 
The series $\sum n^{-s}$.} By far the most important application of the Integral Test is to the series \[ 1^{-s} + 2^{-s} + 3^{-s} + \dots + n^{-s} + \dots, \] where $s$~is any rational number. We have seen already (\SecNo[§]{77} and \Exs{lxvii}.~14, \Exs[]{lxix}.~1) that the series is divergent when $s = 1$. If $s \leq 0$ then it is obvious that the series is divergent. If $s > 0$ then $u_{n}$ decreases as $n$~increases, and we can apply the test. Here \[ \Phi(\xi) = \int_{1}^{\xi} \frac{dx}{x^{s}} = \frac{\xi^{1-s} - 1}{1 - s}, \] unless $s = 1$. If $s > 1$ then $\xi^{1-s} \to 0$ as $\xi \to \infty$, and \[ \Phi(\xi) \to \frac{1}{(s - 1)} = l, \] say. And if $s < 1$ then $\xi^{1-s} \to \infty$ as $\xi \to \infty$, and so $\Phi(\xi) \to \infty$. Thus \begin{Result}the series $\sum n^{-s}$ is convergent if $s > 1$, divergent if $s \leq 1$, and in the first case its sum is less than $s/(s - 1)$. \end{Result} \begin{Remark} So far as divergence for $s < 1$ is concerned, this result might have been derived at once from comparison with~$\sum (1/n)$, which we already know to be divergent. It is however interesting to see how the Integral Test may be applied to the series~$\sum (1/n)$, when the preceding analysis fails. In this case \[ \Phi(\xi) = \int_{1}^{\xi} \frac{dx}{x}, \] and it is easy to see that $\Phi(\xi) \to \infty$ as $\xi \to \infty$. For if $\xi > 2^{n}$ then \[ \Phi(\xi) > \int_{1}^{2^{n}} \frac{dx}{x} = \int_{1}^{2} \frac{dx}{x} + \int_{2}^{4} \frac{dx}{x} + \dots + \DPtypo{\int_{2^{n}}^{2^{n-1}}}{\int_{2^{n-1}}^{2^{n}}} \frac{dx}{x}. \] \PageSep{320} But by putting $x = 2^{r}u$ we obtain \[ \int_{2^{r}}^{2^{r+1}} \frac{dx}{x} = \int_{1}^{2} \frac{du}{u}, \] and so $\ds\Phi(\xi) > n\int_{1}^{2} \frac{du}{u}$, which shows that $\Phi(\xi) \to \infty$ as $\xi \to \infty$. 
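The two behaviours can be seen side by side in a quick numerical check (ours, not part of the text): for $s > 1$ the partial sums of $\sum n^{-s}$ stay below Hardy's bound $s/(s - 1)$, while for $s = 1$ they grow like $\log \xi$.

```python
import math

def partial_sum(s, N):
    """Partial sum 1^{-s} + 2^{-s} + ... + N^{-s}."""
    return sum(n ** -s for n in range(1, N + 1))

# For s > 1 every partial sum stays below the bound s/(s - 1).
bounds_hold = all(partial_sum(s, 100000) < s / (s - 1) for s in (1.5, 2, 3))

# For s = 1 the partial sums track log N and pass any bound.
H = partial_sum(1, 100000)
print(bounds_hold, H - math.log(100000))
```

The second printed quantity settles near $0.5772$, Euler's constant, reflecting how slowly the harmonic series diverges.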
\end{Remark} \begin{Examples}{LXXI.} \Item{1.} Prove by an argument similar to that used above, and without integration, that $\ds\Phi(\xi) = \int_{1}^{\xi} \frac{dx}{x^{s}}$, where $s < 1$, tends to infinity with~$\xi$. \Item{2.} The series $\sum n^{-2}$, $\sum n^{-3/2}$, $\sum n^{-11/10}$ are convergent, and their sums are not greater than $2$,~$3$,~$11$ respectively. The series $\sum n^{-1/2}$, $\sum n^{-10/11}$ are divergent. \Item{3.} The series $\sum n^{s}/(n^{t} + a)$, where $a > 0$, is convergent or divergent according as $t > 1 + s$ or $t \leq 1 + s$. [Compare with~$\sum n^{s-t}$.] \Item{4.} Discuss the convergence or divergence of the series \[ \tsum(a_{1}n^{s_{1}} + a_{2}n^{s_{2}} + \dots + a_{k}n^{s_{k}})/ (b_{1}n^{t_{1}} + b_{2}n^{t_{2}} + \dots + b_{l}n^{t_{l}}), \] where all the letters denote positive numbers and the $s$'s and~$t$'s are rational and arranged in descending order of magnitude. \Item{5.} Prove that \begin{gather*} 2\sqrt{n} - 2 < \frac{1}{\sqrt{1}} + \frac{1}{\sqrt{2}} + \dots + \frac{1}{\sqrt{n}} < 2\sqrt{n} - 1, \\ \tfrac{1}{2} \pi < \frac{1}{2\sqrt{1}} + \frac{1}{3\sqrt{2}} + \frac{1}{4\sqrt{3}} + \dots < \tfrac{1}{2}(\pi + 1). \end{gather*} \MathTrip{1911.} \Item{6.} If $\phi(n) \to l > 1$ then the series $\sum n^{-\phi(n)}$ is convergent. If $\phi(n) \to l < 1$ then it is divergent. \end{Examples} \Paragraph{176. Cauchy's Condensation Test.} The second of the two tests mentioned in \SecNo[§]{172} is as follows: \begin{Result}if $u_{n} = \phi(n)$ is a decreasing function of~$n$, then the series $\sum \phi(n)$~is convergent or divergent according as $\sum 2^{n}\phi(2^{n})$~is convergent or divergent. \end{Result} We can prove this by an argument which we have used already (\SecNo[§]{77}) in the special case of the series $\sum(1/n)$. 
In the first place \begin{gather*} \phi(3) + \phi(4) \geq 2\phi(4), \\ \phi(5) + \phi(6) + \dots + \phi(8) \geq 4\phi(8), \\ %[** TN: Hard-coded width] \DotRow{2.5in} \\ \phi(2^{n} + 1) + \phi(2^{n} + 2) + \dots + \phi(2^{n+1}) \geq 2^{n}\phi(2^{n+1}). \end{gather*} If $\sum 2^{n}\phi(2^{n})$ diverges then so do $\sum 2^{n+1}\phi(2^{n+1})$ and $\sum 2^{n}\phi(2^{n+1})$, and then the inequalities just obtained show that $\sum\phi(n)$~diverges. \PageSep{321} On the other hand \[ \phi(2) + \phi(3) \leq 2\phi(2),\quad \phi(4) + \phi(5) + \dots + \phi(7) \leq 4\phi(4), \] and so on. And from this set of inequalities it follows that if $\sum 2^{n}\phi(2^{n})$ converges then so does $\sum \phi(n)$. Thus the theorem is established. For our present purposes the field of application of this test is practically the same as that of the Integral Test. It enables us to discuss the series $\sum n^{-s}$ with equal ease. For $\sum n^{-s}$ will converge or diverge according as $\sum 2^{n}2^{-ns}$ converges or diverges, \ie\ according as $s > 1$ or $s \leq 1$. \begin{Examples}{LXXII.} \Item{1.} Show that if $a$~is any positive integer greater than~$1$ then $\sum \phi(n)$~is convergent or divergent according as $\sum a^{n}\phi(a^{n})$ is convergent or divergent. [Use the same arguments as above, taking groups of $a$,~$a^{2}$, $a^{3}$,~\dots\ terms.] \Item{2.} If $\sum 2^{n}\phi(2^{n})$ converges then it is obvious that $\lim 2^{n}\phi(2^{n}) = 0$. Hence deduce Abel's Theorem of~\SecNo[§]{173}. \end{Examples} \Paragraph{177. Infinite Integrals.} The Integral Test of \SecNo[§]{174} shows that, if $\phi(x)$~is a positive and decreasing function of~$x$, then the series $\sum \phi(n)$ is convergent or divergent according as the integral function~$\Phi(x)$ does or does not tend to a limit as $x \to \infty$. Let us suppose that it does tend to a limit, and that \[ \lim_{x \to \infty} \int_{1}^{x} \phi(t)\, dt = l. 
\] Then we shall say that \emph{the integral \[ \int_{1}^{\infty} \phi(t)\, dt \] is \Emph{convergent}, and has the value~$l$}; and we shall call the integral an \Emph{infinite integral}. So far we have supposed $\phi(t)$ positive and decreasing. But it is natural to extend our definition to other cases. Nor is there any special point in supposing the lower limit to be unity. We are accordingly led to formulate the following definition: \begin{Defn} If $\phi(t)$~is a function of~$t$ continuous when $t \geq a$, and \[ \lim_{x \to \infty} \int_{a}^{x} \phi(t)\, dt = l, \] \PageSep{322} then we shall say that the infinite integral \[ \int_{a}^{\infty}\phi(t)\, dt \Tag{(1)} \] is convergent and has the value~$l$. \end{Defn} The ordinary integral between limits $a$~and~$A$, as defined in \Ref{Ch.}{VII}, we shall sometimes call in contrast a \Emph{finite} integral. On the other hand, when \[ \int_{a}^{x}\phi(t)\, dt \to \infty, \] we shall say that the integral \emph{diverges} to~$\infty$, and we can give a similar definition of divergence to~$-\infty$. Finally, when none of these alternatives occur, we shall say that the integral \emph{oscillates}, \emph{finitely} or \emph{infinitely}, as $x \to \infty$. These definitions suggest the following remarks. \begin{Remark} \Itemp{(i)} If we write \[ \int_{a}^{x}\phi(t)\, dt = \Phi(x), \] then the integral converges, diverges, or oscillates according as $\Phi(x)$~tends to a limit, tends to~$\infty$ (or to~$-\infty$), or oscillates, as $x \to \infty$. If $\Phi(x)$ tends to a limit, which we may denote by~$\Phi(\infty)$, then the value of the integral is~$\Phi(\infty)$. More generally, if $\Phi(x)$~is any integral function of~$\phi(x)$, then the value of the integral is $\Phi(\infty) - \Phi(a)$. \Itemp{(ii)} In the special case in which $\phi(t)$~is always positive it is clear that $\Phi(x)$~is an increasing function of~$x$. Hence the only alternatives are convergence and divergence to~$\infty$. 
\Itemp{(iii)} The integral~\Eq{(1)} of course depends on~$a$, but is quite independent of~$t$, and is in no way altered by the substitution of any other letter for~$t$ (cf.~\SecNo[§]{157}). \Itemp{(iv)} Of course the reader will not be puzzled by the use of the term \emph{infinite integral} to denote something which has a definite value such as $2$ or~$\frac{1}{2}\pi$. The distinction between an infinite integral and a finite integral is similar to that between an infinite series and a finite series: no one supposes that an infinite series is necessarily divergent. \Itemp{(v)} The integral $\ds\int_{a}^{x} \phi(t)\, dt$ was defined in \SecNo[§§]{156}~and~\SecNo{157} as a \emph{simple} limit, \ie\ the limit of a certain finite sum. The infinite integral is therefore \emph{the limit of a limit}, or what is known as a \emph{repeated} limit. The notion of the infinite integral is in fact essentially more complex than that of the finite integral, of which it is a development. \PageSep{323} \Itemp{(vi)} The Integral Test of \SecNo[§]{174} may now be stated in the form: \begin{Result}if $\phi(x)$~is positive and steadily decreases as $x$~increases, then the infinite series $\sum\phi(n)$ and the infinite integral $\ds\int_{1}^{\infty} \phi(x)\, dx$ converge or diverge together. \end{Result} \Itemp{(vii)} The reader will find no difficulty in formulating and proving theorems for infinite integrals analogous to those stated in \Eq{(1)}--\Eq{(6)} of \SecNo[§]{77}. Thus the result analogous to~\Eq{(2)} is that \begin{Result}if $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent, and $b > a$, then $\ds\int_{b}^{\infty} \phi(x)\, dx$ is convergent and \[ \int_{a}^{\infty} \phi(x)\, dx = \int_{a}^{b} \phi(x)\, dx + \int_{b}^{\infty}\phi(x)\, dx. \] \end{Result} \end{Remark} \Paragraph{178. 
The case in which $\phi(x)$~is positive.} It is natural to consider what are the general theorems, concerning the convergence or divergence of the infinite integral~\Eq{(1)} of \SecNo[§]{177}, analogous to theorems A--D of~\SecNo[§]{167}. That A~is true of integrals as well as of series we have already seen in \SecNo[§]{177},~\Eq{(ii)}. Corresponding to~B we have the theorem that \begin{Result}the necessary and sufficient condition for the convergence of the integral~\Eq{(1)} is that it should be possible to find a constant~$K$ such that \[ \int_{a}^{x} \phi(t)\, dt < K \] for all values of~$x$ greater than~$a$. \end{Result} Similarly, corresponding to~C, we have the theorem: \begin{Result}if $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent, and $\psi(x) \leq K\phi(x)$ for all values of~$x$ greater than~$a$, then $\ds\int_{a}^{\infty} \psi(x)\, dx$ is convergent and %[** TN: Code hack; place envt. end here to avoid paragraph break below.] \end{Result} \[ \int_{a}^{\infty} \psi(x)\, dx \leq K\int_{a}^{\infty} \phi(x)\, dx. \] We leave it to the reader to formulate the corresponding test for divergence. We may observe that \DPchg{D'Alembert's}{d'Alembert's} test (\SecNo[§]{168}), depending as it does on the notion of successive terms, has no analogue for integrals; and that the analogue of Cauchy's test is not of much importance, and in any case could only be formulated when we have investigated in greater detail the theory of the function \PageSep{324} $\phi(x) = r^{x}$, as we shall do in \Ref{Ch.}{IX}\@. 
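The integral analogue of theorem~C can be illustrated numerically. In the sketch below (our own; the particular $\phi$ and the trapezoidal rule are assumptions, not the text's), $\phi(x) = 1/\{x^{2}(2 + \sin x)\} \leq x^{-2}$, so the increasing function $\Phi(x)$ must stay below $\int_{1}^{\infty} x^{-2}\, dx = 1$.

```python
import math

def trapezoid(f, a, b, n=200000):
    """Crude trapezoidal approximation to the finite integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# phi(x) <= x^{-2} for x >= 1, so Phi(x) is bounded by int_1^oo x^{-2} dx = 1.
phi = lambda x: 1.0 / (x * x * (2.0 + math.sin(x)))

vals = [trapezoid(phi, 1.0, X) for X in (10.0, 100.0, 1000.0)]
print(vals)  # increasing, all below 1
```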
The most important special tests are obtained by comparison with the integral \[ \int_{a}^{\infty} \frac{dx}{x^{s}}\quad (a > 0), \] whose convergence or divergence we have investigated in \SecNo[§]{175}, and are as follows: \begin{Result}if $\phi(x) < Kx^{-s}$, where $s > 1$, when $x \geq a$, then $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent; and if $\phi(x) > Kx^{-s}$, where $s \leq 1$, when $x \geq a$, then the integral is divergent; and in particular, if $\lim x^{s}\phi(x) = l$, where $l > 0$, then the integral is convergent or divergent according as $s > 1$ or $s \leq 1$. \end{Result} \begin{Remark} There is one fundamental property of a convergent infinite series in regard to which the analogy between infinite series and infinite integrals breaks down. If $\sum \phi(n)$~is convergent then $\phi(n) \to 0$; but it is \emph{not} always true, even when $\phi(x)$~is always positive, that if $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent then $\phi(x) \to 0$. Consider for example the function~$\phi(x)$ whose graph is indicated by the thick line in the figure. Here the height of the peaks corresponding to the points $x = 1$, $2$,~$3$,~\dots\ is in each case unity, and the breadth of the peak corresponding %[Illustration: Fig. 50.] \Figure[3in]{50}{p324} to $x = n$ is~$2/(n + 1)^{2}$. The area of the peak is~$1/(n + 1)^{2}$, and it is evident that, for any value of~$\xi$, \[ \int_{0}^{\xi} \phi(x)\, dx < \sum_{0}^{\infty} \frac{1}{(n + 1)^{2}}, \] so that $\ds\int_{0}^{\infty} \phi(x)\, dx$ is convergent; but it is not true that $\phi(x) \to 0$\Add{.} \end{Remark} \begin{Examples}{LXXIII.} \Item{1.} The integral \[ \int_{a}^{\infty} \frac{\alpha x^{r} + \beta x^{r-1} + \dots + \lambda} {Ax^{s} + Bx^{s-1} + \dots + L}\, dx, \] where $\alpha$ and~$A$ are positive and $a$~is greater than the greatest root of the denominator, is convergent if $s > r + 1$ and otherwise divergent. 
\PageSep{325}
\Item{2.} Which of the integrals
%[** TN: All are displayed on one line in the original]
$\ds\int_{a}^{\infty} \frac{dx}{\sqrt{x}}$, $\ds\int_{a}^{\infty} \frac{dx}{x^{4/3}}$,
\[
\int_{a}^{\infty} \frac{dx}{c^{2} + x^{2}},\quad
\int_{a}^{\infty} \frac{x\, dx}{c^{2} + x^{2}},\quad
\int_{a}^{\infty} \frac{x^{2}\, dx}{c^{2} + x^{2}},\quad
\int_{a}^{\infty} \frac{x^{2}\, dx}{\alpha + 2\beta x^{2} + \gamma x^{4}}
\]
are convergent? In the first two integrals it is supposed that $a > 0$, and in the last that $a$~is greater than the greatest root (if any) of the denominator.

\Item{3.} The integrals
\[
\int_{a}^{\xi} \cos x\, dx,\quad
\int_{a}^{\xi} \sin x\, dx,\quad
\int_{a}^{\xi} \cos(\alpha x + \beta)\, dx
\]
oscillate finitely as $\xi \to \infty$.

\Item{4.} The integrals
\[
\int_{a}^{\xi} x\cos x\, dx,\quad
\int_{a}^{\xi} x^{2}\sin x\, dx,\quad
\int_{a}^{\xi} x^{n} \cos(\alpha x + \beta)\, dx,
\]
where $n$~is any positive integer, oscillate infinitely as $\xi \to \infty$.

\Item{5.} \Topic{Integrals to~$-\infty$.} If $\ds\int_{\xi}^{a} \phi(x)\, dx$ tends to a limit~$l$ as $\xi \to -\infty$, then we say that $\ds\int_{-\infty}^{a} \phi(x)\, dx$ is convergent and equal to~$l$. Such integrals possess properties in every respect analogous to those of the integrals discussed in the preceding sections: the reader will find no difficulty in formulating them.

\Item{6.} \Topic{Integrals from~$-\infty$ to~$+\infty$.} If the integrals
\[
\int_{-\infty}^{a} \phi(x)\, dx,\quad
\int_{a}^{\infty} \phi(x)\, dx
\]
are both convergent, and have the values $k$,~$l$ respectively, then we say that
\[
\int_{-\infty}^{\infty} \phi(x)\, dx
\]
is convergent and has the value $k + l$.

\Item{7.} Prove that
\[
\int_{-\infty}^{0} \frac{dx}{1 + x^{2}}
= \int_{0}^{\infty} \frac{dx}{1 + x^{2}}
= \tfrac{1}{2} \int_{-\infty}^{\infty} \frac{dx}{1 + x^{2}}
= \tfrac{1}{2}\pi.
\] \Item{8.} Prove generally that \[ \int_{-\infty}^{\infty} \phi(x^{2})\, dx = 2\int_{0}^{\infty} \phi(x^{2})\, dx, \] provided that the integral $\ds\int_{0}^{\infty} \phi(x^{2})\, dx$ is convergent. \Item{9.} Prove that if $\ds\int_{0}^{\infty} x\phi(x^{2})\, dx$ is convergent then $\ds\int_{-\infty}^{\infty} x\phi(x^{2})\, dx = 0$. \PageSep{326} \Item{10.} \Topic{Analogue of Abel's Theorem of \SecNo[§]{173}.} \emph{If $\phi(x)$~is positive and steadily decreases, and $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent, then $x\phi(x) \to 0$.} Prove this (\ia)~by means of Abel's Theorem and the Integral Test and (\ib)~directly, by arguments analogous to those of~\SecNo[§]{173}. \Item{11.} If $a = x_{0} < x_{1} < x_{2} < \dots$ and $x_{n} \to \infty$, and $\ds u_{n}= \int_{x_{n}}^{x_{n+1}} \phi(x)\, dx$, then the convergence of $\ds\int_{a}^{\infty} \phi(x)\, dx$ involves that of $\sum u_{n}$. If $\phi(x)$~is always positive the converse statement is also true. [That the converse is not true in general is shown by the example in which $\phi(x) = \cos x$, $x_{n} = n\pi$.] \end{Examples} \Paragraph{179. Application to infinite integrals of the rules for substitution and integration by parts.} The rules for the transformation of a definite integral which were discussed in \SecNo[§]{161} may be extended so as to apply to infinite integrals. \Item{(1)} \Topic{Transformation by substitution.} Suppose that \[ \int_{a}^{\infty} \phi(x)\, dx \Tag{(1)} \] is convergent. Further suppose that, for any value of~$\xi$ greater than~$a$, we have, as in~\SecNo[§]{161}, \[ \int_{a}^{\xi} \phi(x)\, dx = \int_{b}^{\tau} \phi\{f(t)\}f'(t)\, dt, \Tag{(2)} \] where $a = f(b)$, $\xi = f(\tau)$. Finally suppose that the functional relation $x = f(t)$ is such that $x \to \infty$ as $t \to \infty$. Then, making $\tau$ and so~$\xi$ tend to~$\infty$ in~\Eq{(2)}, we see that the integral \[ \int_{b}^{\infty} \phi\{f(t)\}f'(t)\, dt \Tag{(3)} \] is convergent and equal to the integral~\Eq{(1)}. 
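The substitution rule can be checked numerically (the cutoffs, parameter values, and trapezoidal rule below are our sketch, not the text's): with $x = t^{\alpha}$ the finite integrals up to $\xi = T^{\alpha}$ and $\tau = T$ must agree, and as $T \to \infty$ both tend to the same infinite integral.

```python
def trapezoid(f, a, b, n=100000):
    """Crude trapezoidal approximation to the finite integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# x = t^alpha turns int_1^{T^alpha} x^{-s} dx into alpha * int_1^T t^{alpha(1-s)-1} dt.
s, alpha, T = 2.0, 3.0, 5.0
lhs = trapezoid(lambda x: x ** -s, 1.0, T ** alpha)
rhs = alpha * trapezoid(lambda t: t ** (alpha * (1 - s) - 1), 1.0, T)
print(lhs, rhs)  # both near 1 - 1/125
```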
On the other hand it may happen that $\xi \to \infty$ as $\tau \to -\infty$ or as $\tau \to c$. In the first case we obtain \begin{alignat*}{2} %[** TN: Unaligned in the original] \int_{a}^{\infty} \phi(x)\, dx &= &&\lim_{\tau\to-\infty} \int_{b}^{\tau} \phi\{f(t)\}f'(t)\, dt\\ &= -&&\lim_{\tau\to-\infty} \int_{\tau}^{b} \phi\{f(t)\}f'(t)\, dt = -\int_{-\infty}^{b} \phi\{f(t)\}f'(t)\, dt. \end{alignat*} In the second case we obtain \[ \int_{a}^{\infty} \phi(x)\, dx = \lim_{\tau\to c} \int_{b}^{\tau} \phi\{f(t)\}f'(t)\, dt. \Tag{(4)} \] We shall return to this equation in~\SecNo[§]{181}. \PageSep{327} There are of course corresponding results for the integrals \[ \int_{-\infty}^{a} \phi(x)\, dx,\quad \int_{-\infty}^{\infty} \phi(x)\, dx, \] which it is not worth while to set out in detail: the reader will be able to formulate them for himself. \begin{Examples}{LXXIV.} \Item{1.} Show, by means of the substitution $x = t^{\alpha}$, that if $s > 1$ and $\alpha >0$ then \[ \int_{1}^{\infty} x^{-s}\, dx = \alpha\int_{1}^{\infty} t^{\alpha(1-s) - 1}\, dt; \] and verify the result by calculating the value of each integral directly. \Item{2.} If $\ds\int_{a}^{\infty} \phi(x)\, dx$ is convergent then it is equal to one or other of \[ \alpha\int_{(a-\beta)/\alpha}^{\infty} \phi(\alpha t + \beta)\, dt,\quad -\alpha\int_{-\infty}^{(a-\beta)/\alpha} \phi(\alpha t + \beta)\, dt, \] according as $\alpha$~is positive or negative. \Item{3.} If $\phi(x)$~is a positive and steadily decreasing function of~$x$, and $\alpha$~and~$\beta$ are any positive numbers, then the convergence of the series $\sum \phi(n)$ implies and is implied by that of the series $\sum \phi(\alpha n + \beta)$. [It follows at once, on making the substitution $x = \alpha t + \beta$, that the integrals \[ \int_{a}^{\infty} \phi(x)\, dx,\quad \int_{(a-\beta)/\alpha}^{\infty} \phi(\alpha t + \beta)\, dt \] converge or diverge together. Now use the Integral Test.] 
\Item{4.} Show that \[ %[** TN: In-line in the original] \int_{1}^{\infty} \frac{dx}{(1 + x)\sqrt{x}} = \tfrac{1}{2} \pi. \] %[** TN: Added paragraph break] [Put $x = t^{2}$.] \Item{5.} Show that \[ \int_{0}^{\infty} \frac{\sqrt{x}}{(1 + x)^{2}}\, dx = \tfrac{1}{2}\pi. \] [Put $x = t^{2}$ and integrate by parts.] \Item{6.} If $\phi(x) \to h$ as $x \to \infty$, and $\phi(x) \to k$ as $x \to -\infty$, then \[ \int_{-\infty}^{\infty} \{\phi(x - a) - \phi(x - b)\}\, dx = -(a - b)(h - k). \] [For \begin{alignat*}{2} %[** TN: Re-broken] \int_{-\xi'}^{\xi} \{\phi(x - a) - \phi(x - b)\}\, dx &= \int_{-\xi'}^{\xi} \phi(x - a)\, dx &&- \int_{-\xi'}^{\xi} \phi(x - b)\, dx\\ &= \int_{-\xi'-a}^{\xi-a} \phi(t)\, dt &&- \int_{-\xi'-b}^{\xi-b} \phi(t)\, dt\\ &= \int_{-\xi'-a}^{-\xi'-b} \phi(t)\, dt &&- \int_{\xi-a}^{\xi-b} \phi(t)\, dt. \end{alignat*} \PageSep{328} The first of these two integrals may be expressed in the form \[ (a - b) k + \int_{-\xi'-a}^{-\xi'-b} \rho\, dt, \] where $\rho \to 0$ as $\xi' \to \infty$, and the modulus of the last integral is less than or equal to $|a - b| \kappa$, where $\kappa$~is the greatest value of $\rho$ throughout the interval $\DPmod{(-\xi' - a, -\xi' - b)}{[-\xi' - a, -\xi' - b]}$. Hence \[ \int_{-\xi'-a}^{-\xi'-b} \phi(t)\, dt \to (a - b) k. \] The second integral may be discussed similarly.] \end{Examples} \Item{(2)} \Topic{Integration by parts.} The formula for integration by parts (\SecNo[§]{161}) is \[ \int_{a}^{\xi} f(x)\phi'(x)\, dx = f(\xi)\phi(\xi) - f(a)\phi(a) - \int_{a}^{\xi} f'(x)\phi(x)\, dx. \] Suppose now that $\xi \to \infty$. Then if any two of the three terms in the above equation which involve~$\xi$ tend to limits, so does the third, and we obtain the result \[ \int_{a}^{\infty} f(x)\phi'(x)\, dx = \lim_{\xi\to\infty} f(\xi)\phi(\xi) - f(a)\phi(a) - \int_{a}^{\infty} f'(x)\phi(x)\, dx. \] There are of course similar results for integrals to~$-\infty$, or from $-\infty$ to~$\infty$. 
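The formula may be illustrated as follows. If we take $f(x) = 1/(1 + x)$ and $\phi(x) = -1/x$, so that
\[
f(x)\phi'(x) = \frac{1}{x^{2}(1 + x)},\quad
f'(x)\phi(x) = \frac{1}{x(1 + x)^{2}},\quad
f(\xi)\phi(\xi) = -\frac{1}{\xi(1 + \xi)} \to 0,
\]
then, since $f(1)\phi(1) = -\tfrac{1}{2}$, we obtain
\[
\int_{1}^{\infty} \frac{dx}{x^{2}(1 + x)} + \int_{1}^{\infty} \frac{dx}{x(1 + x)^{2}} = \tfrac{1}{2}.
\]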
\begin{Examples}{LXXV.} \Item{1.} Show that \[ %[** TN: In-line in the original] \int_{0}^{\infty} \frac{x}{(1 + x)^{3}}\, dx = \tfrac{1}{2} \int_{0}^{\infty} \frac{dx}{(1 + x)^{2}} = \tfrac{1}{2}. \] \Item{2.} $\ds\int_{0}^{\infty} \frac{x^{2}}{(1 + x)^{4}}\, dx = \tfrac{2}{3} \int_{0}^{\infty} \frac{x}{(1 + x)^{3}}\, dx = \tfrac{1}{3}$. \Item{3.} If $m$ and~$n$ are positive integers, and \[ %[** TN: Two equations not displayed in the original] I_{m, n} = \int_{0}^{\infty} \frac{x^{m}\, dx}{(1 + x)^{m+n}}, \] then \[ I_{m, n} = \{m/(m + n - 1)\} I_{m-1, n}. \] Hence prove that $I_{m, n} = m!\, (n - 2)!/(m + n - 1)!$. \Item{4.} Show similarly that if \[ %[** TN: Not displayed in the original] I_{m, n} = \int_{0}^{\infty} \frac{x^{2m+1}\, dx}{(1 + x^{2})^{m+n}} \] then \[ I_{m, n} = \{m/(m + n - 1)\} I_{m-1, n},\quad 2I_{m, n} = m!\, (n - 2)!/(m + n - 1)!. \] Verify the result by applying the substitution $x = t^{2}$ to the result of Ex.~3. \end{Examples} \Paragraph{180. Other types of infinite integrals.} It was assumed, in the definition of the ordinary or finite integral given in \Ref{Ch.}{VII}, that (1)~the range of integration is finite and (2)~the subject of integration is continuous. It is possible, however, to extend the notion of the `definite integral' so as to apply to many cases in which these conditions \PageSep{329} are not satisfied. The `infinite' integrals which we have discussed in the preceding sections, for example, differ from those of \Ref{Ch.}{VII} in that the range of integration is infinite. We shall now suppose that it is the second of the conditions (1),~(2) that is not satisfied. It is natural to try to frame definitions applicable to some such cases at any rate. There is only one such case which we shall consider here. 
We shall suppose that $\phi(x)$~is continuous throughout the range of integration $\DPmod{(a, A)}{[a, A]}$ except for a finite number of values of~$x$, say $x = \xi_{1}$, $\xi_{2}$,~\dots, and that $\phi(x) \to \infty$ or $\phi(x) \to -\infty$ as $x$~tends to any of these exceptional values from either side. It is evident that we need only consider the case in which $\DPmod{(a, A)}{[a, A]}$ contains \emph{one} such point~$\xi$. When there is more than one such point we can divide up~$\DPmod{(a, A)}{[a, A]}$ into a finite number of sub-intervals each of which contains only one; and, if the value of the integral over each of these sub-intervals has been defined, we can then define the integral over the whole interval as being the sum of the integrals over each sub-interval. Further, we can suppose that the one point~$\xi$ in~$\DPmod{(a, A)}{[a, A]}$ comes at one or other of the limits $a$,~$A$. For, if it comes between $a$ and~$A$, we can then define $\ds\int_{a}^{A} \phi(x)\, dx$ as \[ \int_{a}^{\xi} \phi(x)\, dx + \int_{\xi}^{A} \phi(x)\, dx, \] assuming each of these integrals to have been satisfactorily defined. We shall suppose, then, that $\xi = a$; it is evident that the definitions to which we are led will apply, with trifling changes, to the case in which $\xi = A$. Let us then suppose $\phi(x)$ to be continuous throughout $\DPmod{(a, A)}{[a, A]}$ except for $x = a$, while $\phi(x) \to \infty$ as $x \to a$ through values greater than~$a$. A typical example of such a function is given by \[ \phi(x) = (x - a)^{-s}, \] where $s > 0$; or, in particular, if $a = 0$, by $\phi(x) = x^{-s}$. Let us therefore consider how we can define \[ \int_{0}^{A} \frac{dx}{x^{s}}, \Tag{(1)} \] when $s > 0$. \PageSep{330} The integral $\ds\int_{1/A}^{\infty} y^{s-2}\, dy$ is convergent if $s < 1$ (\SecNo[§]{175}) and means $\lim\limits_{\eta\to\infty} \ds\int_{1/A}^{\eta} y^{s-2}\, dy$. 
But if we make the substitution $y = 1/x$, we obtain \[ \int_{1/A}^{\eta} y^{s-2}\, dy = \int_{1/\eta}^{A} x^{-s}\, dx. \] Thus $\lim\limits_{\eta\to\infty} \ds\int_{1/\eta}^{A} x^{-s}\, dx$, or, what is the same thing, \[ % [** TN: Keeping notation \epsilon] \lim_{\epsilon\to +0} \int_{\epsilon}^{A} x^{-s}\, dx, \] exists provided that $s < 1$; and it is natural to define the value of the integral~\Eq{(1)} as being equal to this limit. Similar considerations lead us to define $\ds\int_{a}^{A} (x - a)^{-s}\, dx$ by the equation \[ \int_{a}^{A} (x - a)^{-s}\, dx = \lim_{\epsilon\to +0} \int_{a+\epsilon}^{A} (x - a)^{-s}\, dx. \] We are thus led to the following general definition: \begin{Defn}if the integral \[ \int_{a+\epsilon}^{A} \phi(x)\, dx \] tends to a limit~$l$ as $\epsilon \to +0$, we shall say that the integral \[ \int_{a}^{A} \phi(x)\, dx \] is convergent and has the value~$l$. \end{Defn} Similarly, when $\phi(x) \to \infty$ as $x$~tends to the upper limit~$A$, we define $\ds\int_{a}^{A} \phi(x)\, dx$ as being \[ \lim_{\epsilon \to +0} \int_{a}^{A-\epsilon} \phi(x)\, dx: \] and then, as we explained above, we can extend our definitions to cover the case in which the interval $\DPmod{(a, A)}{[a, A]}$ contains any finite number of infinities of~$\phi(x)$. An integral in which the subject of integration tends to~$\infty$ or to~$-\infty$ as $x$~tends to some value or values included in the range of integration will be called an \emph{infinite integral of the second kind}: the \emph{first kind} of infinite integrals being the class discussed in \SecNo[§§]{177}~\textit{et~seq.} Nearly all the remarks (i)--(vii) made at the end of \SecNo[§]{177} apply to infinite integrals of the second kind as well as to those of the first. \PageSep{331} \begin{Remark} \Paragraph{181.} We may now write the equation~\Eq{(4)} of \SecNo[§]{179} in the form \[ \int_{a}^{\infty} \phi(x)\, dx = \int_{b}^{c} \phi\{f(t)\}f'(t)\, dt. 
\Tag{(1)}
\]
The integral on the right-hand side is defined as the limit, as $\tau \to c$, of the corresponding integral over the range $\DPmod{(b, \tau)}{[b, \tau]}$, \ie\ as an infinite integral of the second kind. And when $\phi\{f(t)\}f'(t)$ has an infinity at $t = c$ the integral is essentially an infinite integral.

Suppose, for example, that $\phi(x) = (1 + x)^{-m}$, where $1 < m < 2$, and $a = 0$, and that $f(t) = t/(1 - t)$. Then $b = 0$, $c = 1$, and \Eq{(1)}~becomes
\[
\int_{0}^{\infty} \frac{dx}{(1 + x)^{m}} = \int_{0}^{1} (1 - t)^{m-2}\, dt;
\Tag{(2)}
\]
and the integral on the right-hand side is an infinite integral of the second kind.

On the other hand it may happen that $\phi\{f(t)\}f'(t)$ is continuous for $t = c$. In this case
\[
\int_{b}^{c} \phi\{f(t)\}f'(t)\, dt
\]
is a finite integral, and
\[
\lim_{\tau \to c} \int_{b}^{\tau} \phi\{f(t)\}f'(t)\, dt = \int_{b}^{c} \phi\{f(t)\}f'(t)\, dt,
\]
in virtue of the corollary to Theorem~\Eq{(10)} of \SecNo[§]{160}. In this case the substitution $x = f(t)$ transforms an infinite into a finite integral. This case arises if $m \geq 2$ in the example considered a moment ago.
\end{Remark}

\begin{Examples}{LXXVI.}
\Item{1.} If $\phi(x)$~is continuous except for $x = a$, while $\phi(x) \to \infty$ as $x \to a$, then the necessary and sufficient condition that $\ds\int_{a}^{A} \phi(x)\, dx$ should be convergent is that we can find a constant~$K$ such that
\[
\int_{a+\epsilon}^{A} \phi(x)\, dx < K
\]
for all values of~$\epsilon$, however small (cf.~\SecNo[§]{178}).

It is clear that we can choose a number~$A'$ between $a$ and~$A$, such that $\phi(x)$~is positive throughout $\DPmod{(a, A')}{[a, A']}$. If $\phi(x)$~is positive throughout the whole interval $\DPmod{(a, A)}{[a, A]}$ then we can of course identify $A'$ and~$A$. Now
\[
\int_{a+\epsilon}^{A} \phi(x)\, dx = \int_{a+\epsilon}^{A'} \phi(x)\, dx + \int_{A'}^{A} \phi(x)\, dx.
\]
The first integral on the right-hand side of the above equation increases as $\epsilon$~decreases, and therefore tends to a limit or to~$\infty$; and the truth of the result stated becomes evident. If the condition is not satisfied then $\ds\int_{a+\epsilon}^{A} \phi(x)\, dx \to \infty$. We shall then say that the integral $\ds\int_{a}^{A} \phi(x)\, dx$ \Emph{diverges} to~$\infty$. It is clear that, if $\phi(x) \to \infty$ as $x \to a + 0$, then convergence and divergence to~$\infty$ are the only alternatives for the integral. We may discuss similarly the case in which $\phi(x) \to -\infty$.
\PageSep{332}
%[** TN: Several displayed integrals are in-line in the original]
\Item{2.} Prove that
\[
\int_{a}^{A} (x - a)^{-s}\, dx = \frac{(A - a)^{1-s}}{1 - s}
\]
if $s < 1$, while the integral is divergent if $s \geq 1$.

\Item{3.} If $\phi(x) \to \infty$ as $x \to a + 0$ and $\phi(x) < K(x - a)^{-s}$, where $s < 1$, then $\ds\int_{a}^{A} \phi(x)\, dx$ is convergent; and if $\phi(x) > K(x - a)^{-s}$, where $s \geq 1$, then the integral is divergent. [This is merely a particular case of a general comparison theorem analogous to that stated in~\SecNo[§]{178}.]

\Item{4.} Are the integrals
\begin{gather*}
\int_{a}^{A} \frac{dx}{\sqrtb{(x - a)(A - x)}},\quad
\int_{a}^{A} \frac{dx}{(A - x)\sqrtp[3]{x - a}},\quad
\int_{a}^{A} \frac{dx}{(A - x)\sqrtp[3]{A - x}},\\
\int_{a}^{A} \frac{dx}{\sqrtp{x^{2} - a^{2}}},\quad
\int_{a}^{A} \frac{dx}{\sqrtp[3]{A^{3} - x^{3}}},\quad
\int_{a}^{A} \frac{dx}{x^{2} - a^{2}},\quad
\int_{a}^{A} \frac{dx}{A^{3} - x^{3}}
\end{gather*}
convergent or divergent?

\Item{5.} The integrals
\[
%[** TN: Integrals in next ten questions in-line in the original]
\int_{-1}^{1}\frac{dx}{\sqrt[3]{x}},\quad
\int_{a-1}^{a+1} \frac{dx}{\sqrtp[3]{x - a}}
\]
are convergent, and the value of each is zero.

\Item{6.} The integral
\[
\int_{0}^{\pi} \frac{dx}{\sqrtp{\sin x}}
\]
is convergent. [The subject of integration tends to~$\infty$ as $x$~tends to either limit.]
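[It may be observed that $\sin x > 2x/\pi$ when $0 < x < \frac{1}{2}\pi$, so that $1/\sqrtp{\sin x} < \sqrt{\pi/(2x)}$ near $x = 0$; and the upper limit may be treated in the same way, since $\sin x = \sin(\pi - x)$. Now use Ex.~3.]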
\Item{7.} The integral \[ \int_{0}^{\pi} \frac{dx}{(\sin x)^{s}} \] is convergent if and only if $s < 1$. \Item{8.} The integral \[ \int_{0}^{\frac{1}{2}\pi} \frac{x^{s}}{(\sin x)^{t}}\, dx \] is convergent if $t < s + 1$. \Item{9.} Show that \[ \int_{0}^{h} \frac{\sin x}{x^{p}}\, dx, \] where $h > 0$, is convergent if $p < 2$. Show also that, if $0 < p < 2$, the integrals \[ \int_{0}^{\pi} \frac{\sin x}{x^{p}} dx,\quad \int_{\pi}^{2\pi} \frac{\sin x}{x^{p}}\, dx,\quad \int_{2\pi}^{3\pi} \frac{\sin x}{x^{p}}\, dx,\ \dots \] alternate in sign and steadily decrease in absolute value. [Transform the integral whose limits are $k\pi$ and~$(k + 1)\pi$ by the substitution $x = k\pi + y$.] \Item{10.} Show that \[ \int_{0}^{h} \frac{\sin x}{x^{p}}\, dx, \] where $0 < p < 2$, attains its greatest value when $h = \pi$. \MathTrip{1911.} \Item{11.} The integral \[ \int_{0}^{\frac{1}{2} \pi}(\cos x)^{l}(\sin x)^{m}\, dx \] is convergent if and only if $l > -1$, $m > -1$. \Item{12.} Such an integral as \[ \int_{0}^{\infty} \frac{x^{s-1}\, dx}{1 + x}, \] where $s < 1$, does not fall directly under any of our previous definitions. For the range of integration is infinite \PageSep{333} and the subject of integration tends to~$\infty$ as $x \to +0$. It is natural to define this integral as being equal to the sum \[ \int_{0}^{1} \frac{x^{s-1}\, dx}{1 + x} + \int_{1}^{\infty} \frac{x^{s-1}\, dx}{1 + x}, \] provided that these two integrals are both convergent. {\Loosen The first integral is a convergent infinite integral of the second kind if $0 < s < 1$. The second is a convergent infinite integral of the first kind if $s < 1$. It should be noted that when $s > 1$ the first integral is an ordinary finite integral; but then the second is divergent. Thus the integral from~$0$ to~$\infty$ is convergent if and only if $0 < s < 1$.} \Item{13.} Prove that \[ \int_{0}^{\infty} \frac{x^{s-1}}{1 + x^{t}}\, dx \] is convergent if and only if $0 < s < t$. 
\Item{14.} The integral \[ \int_{0}^{\infty} \frac{x^{s-1} - x^{t-1}}{1 - x}\, dx \] is convergent if and only if $0 < s < 1$, $0 < t < 1$. [It should be noticed that the subject of integration is undefined when $x = 1$; but $(x^{s-1} - x^{t-1})/(1 - x) \to t - s$ as $x \to 1$ from either side; so that the subject of integration becomes a continuous function of~$x$ if we assign to it the value $t - s$ when $x = 1$. It often happens that the subject of integration has a discontinuity which is due simply to a failure in its definition at a particular point in the range of integration, and can be removed by attaching a particular value to it at that point. In this case it is usual to suppose the definition of the subject of integration completed in this way. Thus the integrals \[ \int_{0}^{\frac{1}{2} \pi} \frac{\sin mx}{x}\, dx,\quad \int_{0}^{\frac{1}{2} \pi} \frac{\sin mx}{\sin x}\, dx \] are ordinary finite integrals, if the subjects of integration are regarded as having the value~$m$ when $x = 0$.] \Item{15.} \Topic{Substitution and integration by parts.} The formulae for transformation by substitution and integration by parts may of course be extended to infinite integrals of the second as well as of the first kind. The reader should formulate the general theorems for himself, on the lines of~\SecNo[§]{179}. \Item{16.} Prove by integration by parts that if $s > 0$, $t > 1$, then \[ \int_{0}^{1} x^{s-1}(1 - x)^{t-1}\, dx = \frac{t - 1}{s} \int_{0}^{1} x^{s} (1 - x)^{t-2}\, dx. \] \Item{17.} If $s > 0$ then \[ \int_{0}^{1} \frac{x^{s-1}\, dx}{1 + x} = \int_{1}^{\infty} \frac{t^{-s}\, dt}{1 + t}. \] %[** TN: Added paragraph break] [Put $x = 1/t$.] \Item{18.} If $0 < s < 1$ then \[ \int_{0}^{1} \frac{x^{s-1} + x^{-s}}{1 + x}\, dx = \int_{0}^{\infty} \frac{t^{-s}\, dt}{1 + t} = \int_{0}^{\infty} \frac{t^{s-1}\, dt}{1 + t}. \] \Item{19.} If $a + b > 0$ then \[ \int_{b}^{\infty} \frac{dx}{(x + a)\sqrtp{x - b}} = \frac{\pi}{\sqrtp{a + b}}. 
\] \MathTrip{1909.} \PageSep{334} \Item{20.} Show, by means of the substitution $x = t/(1 - t)$, that if $l$~and~$m$ are both positive then \[ \int_{0}^{\infty} \frac{x^{l-1}}{(1 + x)^{l+m}}\, dx = \int_{0}^{1} t^{l-1} (1 - t)^{m-1}\, dt. \] \Item{21.} Show, by means of the substitution $x = pt/(p + 1 - t)$, that if $l$,~$m$, and~$p$ are all positive then \[ \int_{0}^{1} x^{l-1} (1 - x)^{m-1}\, \frac{dx}{(x + p)^{l + m}} = \frac{1}{(1 + p)^{l} p^{m}} \int_{0}^{1} t^{l-1} (1 - t)^{m-1}\, dt. \] \Item{22.} Prove that \[ %[** TN: In-line in the original] \int_{a}^{b} \frac{dx}{\sqrtb{(x - a)(b - x)}} = \pi\quad\text{and}\quad \int_{a}^{b} \frac{x\, dx}{\sqrtb{(x - a)(b - x)}} = \tfrac{1}{2} \pi (a + b), \] (i)~by means of the substitution $x = a + (b - a)t^{2}$, (ii)~by means of the substitution $(b - x)/(x - a) = t$, and (iii)~by means of the substitution $x = a\cos^{2} t + b\sin^{2} t$. \Item{23.} If $s > -1$ then \[ \int_{0}^{\frac{1}{2} \pi} (\sin\theta)^{s}\, d\theta = \int_{0}^{1} \frac{x^{s}\, dx}{\sqrtp{1 - x^{2}}} = \tfrac{1}{2} \int_{0}^{1} \frac{x^{\frac{1}{2}(s-1)}\, dx}{\sqrtp{1 - x}} = \tfrac{1}{2} \int_{0}^{1} (1 - x)^{\frac{1}{2}(s-1)} \frac{dx}{\sqrt{x}}. \] \Item{24.} Establish the formulae \begin{align*} &\int_{0}^{1} \frac{f(x)\, dx}{\sqrtp{1 - x^{2}}} = \int_{0}^{\frac{1}{2}\pi} f(\sin\theta)\, d\theta,\\ % &\int_{a}^{b} \frac{f(x)\, dx}{\sqrtb{(x - a)(b - x)}} = 2\int_{0}^{\frac{1}{2}\pi} f(a\cos^{2}\theta + b\sin^{2}\theta)\, d\theta,\\ % &\int_{-a}^{a} f\left\{\bigsqrtp{\frac{a - x}{a + x}}\right\} dx = 4a\int_{0}^{\frac{1}{2}\pi} f(\tan\theta) \cos\theta \sin\theta\, d\theta\Add{.} \end{align*} \Item{25.} Prove that \[ \int_{0}^{1} \frac{dx}{(1 + x)(2 + x) \sqrtb{x(1 - x)}} = \pi\left(\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}\right)\Add{.} \] %[** Added paragraph break] [Put $x = \sin^{2}\theta$ and use \Ex{lxiii}.~8.] 
\MathTrip{1912.}
%[** TN: Dot added after "Math"]
\end{Examples}

\begin{Remark}
\Paragraph{182.} Some care has occasionally to be exercised in applying the rule for transformation by substitution. The following example affords a good illustration of this.

Let
\[
J = \int_{1}^{7} (x^{2} - 6x + 13)\, dx.
\]
We find by direct integration that $J = 48$. Now let us apply the substitution
\[
y = x^{2} - 6x + 13,
\]
which gives $x = 3 \pm \sqrtp{y - 4}$. Since $y = 8$ when $x = 1$ and $y = 20$ when $x = 7$, we appear to be led to the result
\[
J = \int_{8}^{20} y\frac{dx}{dy}\, dy = \pm\tfrac{1}{2}\int_{8}^{20} \frac{y\, dy}{\sqrtp{y - 4}}.
\]
The indefinite integral is
\[
\tfrac{1}{3}(y - 4)^{3/2} + 4(y - 4)^{1/2},
\]
and so we obtain the value~$\pm\frac{80}{3}$, which is certainly wrong whichever sign we choose.
\PageSep{335}

The explanation is to be found in a closer consideration of the relation between $x$~and~$y$. The function $x^{2} - 6x + 13$ has a minimum for $x = 3$, when $y = 4$. As $x$~increases from $1$ to~$3$, $y$~decreases from $8$ to~$4$, and $dx/dy$~is negative, so that
\[
\frac{dx}{dy} = -\frac{1}{2\sqrtp{y - 4}}.
\]
As $x$~increases from $3$ to~$7$, $y$~increases from $4$ to~$20$, and the other sign must be chosen. Thus
\[
J = \int_{1}^{7} y\, dx
= \int_{8}^{4} \left\{-\frac{y}{2\sqrtp{y - 4}}\right\} dy + \int_{4}^{20} \frac{y}{2\sqrtp{y - 4}}\, dy,
\]
a formula which will be found to lead to the correct result.

{\Loosen Similarly, if we transform the integral $\ds\int_{0}^{\pi} dx = \pi$ by the substitution $x = \arcsin y$, we must observe that $dx/dy = 1/\sqrtp{1 - y^{2}}$ or $dx/dy = -1/\sqrtp{1 - y^{2}}$ according as $0 \leq x < \frac{1}{2}\pi$ or $\frac{1}{2}\pi < x \leq \pi$.}

\Par{Example.} Verify the results of transforming the integrals
\[
\int_{0}^{1} (4x^{2} - x + \tfrac{1}{16})\, dx,\quad
\int_{0}^{\pi} \cos^{2}x\, dx
\]
by the substitutions $4x^{2} - x + \frac{1}{16} = y$, $x = \arcsin y$ respectively.
\end{Remark}

\Paragraph{183.
Series of positive and negative terms.} Our definitions of the sum of an infinite series, and the value of an infinite integral, whether of the first or the second kind, apply to series of terms or integrals of functions whose values may be either positive or negative. But the special tests for convergence or divergence which we have established in this chapter, and the examples by which we have illustrated them, have had reference almost entirely to the case in which all these values are positive. Of course the case in which they are all negative is not essentially different, as it can be reduced to the former by changing $u_{n}$ into $-u_{n}$ or $\phi(x)$ into~$-\phi(x)$. In the case of a series it has always been explicitly or tacitly assumed that any conditions imposed upon~$u_{n}$ may be violated for a finite number of terms: all that is necessary is that such a condition (\eg\ that all the terms are positive) should be satisfied \emph{from some definite term onwards}. Similarly in the case of an infinite integral the conditions have been supposed to be satisfied \emph{for all values of~$x$ greater than some definite value}, or for all values of~$x$ within some definite interval $\DPmod{(a, a + \delta)}{[a, a + \delta]}$ which includes the \PageSep{336} value~$a$ near which the subject of integration tends to infinity. Thus our tests apply to such a series as \[ \sum \frac{n^{2} - 10}{n^{4}}, \] since $n^{2} - 10 > 0$ when $n \geq 4$, and to such integrals as \[ \int_{1}^{\infty} \frac{3x - 7}{(x + 1)^{3}}\, dx,\quad \int_{0}^{1} \frac{1 - 2x}{\sqrt{x}}\, dx, \] since $3x - 7 > 0$ when $x > \frac{7}{3}$, and $1 - 2x > 0$ when $0 < x < \frac{1}{2}$. 
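Here, for instance, the first integral is convergent by comparison, since
\[
0 < \frac{3x - 7}{(x + 1)^{3}} < \frac{3}{(x + 1)^{2}}
\]
when $x > \frac{7}{3}$; and the second is convergent, since $0 < (1 - 2x)/\sqrt{x} < 1/\sqrt{x}$ when $0 < x < \frac{1}{2}$, while the integral from $\frac{1}{2}$ to~$1$ is an ordinary finite integral.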
But when the changes of sign of~$u_{n}$ \emph{persist throughout the series}, \ie~when the number of both positive and negative terms is infinite, as in the series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$; or when $\phi(x)$~continually changes sign as $x \to \infty$, as in the integral \[ \int_{1}^{\infty} \frac{\sin x}{x^{s}}\, dx, \] or as $x \to a$, where $a$~is a point of discontinuity of~$\phi(x)$, as in the integral \[ \int_{a}^{A} \sin\left(\frac{1}{x - a}\right) \frac{dx}{x - a}; \] then the problem of discussing convergence or divergence becomes more difficult. For now we have to consider the possibility of oscillation as well as of convergence or divergence. We shall not, in this volume, have to consider the more general problem for integrals. But we shall, in the ensuing chapters, have to consider certain simple examples of series containing an infinite number of both positive and negative terms. \Paragraph{184. Absolutely Convergent Series.} Let us then consider a series $\sum u_{n}$ in which any term may be either positive or negative. Let \[ |u_{n}| = \alpha_{n}, \] so that $\alpha_{n} = u_{n}$ if $u_{n}$~is positive and $\alpha_{n} = -u_{n}$ if $u_{n}$~is negative. Further, let $v_{n} = u_{n}$ or $v_{n} = 0$, according as $u_{n}$~is positive or negative, and $w_{n} = -u_{n}$ or $w_{n} = 0$, according as $u_{n}$~is negative or positive; or, what is the same thing, let $v_{n}$ or~$w_{n}$ be equal to~$\alpha_{n}$ according as $u_{n}$~is positive or negative, the other being in either case equal to zero. Then it is evident that $v_{n}$ and~$w_{n}$ are always positive, and that \[ u_{n} = v_{n} - w_{n},\quad \alpha_{n} = v_{n} + w_{n}. \] \PageSep{337} \begin{Remark} If, for example, our series is $1 - (1/2)^{2} + (1/3)^{2} - \dots$, then $u_{n} = (-1)^{n-1}/n^{2}$ and $\alpha_{n} = 1/n^{2}$, while $v_{n} = 1/n^{2}$ or $v_{n} = 0$ according as $n$~is odd or even and $w_{n} = 1/n^{2}$ or $w_{n} = 0$ according as $n$~is even or odd. 
\end{Remark} We can now distinguish two cases. \Item{A.} Suppose that the series $\sum \alpha_{n}$~is convergent. This is the case, for instance, in the example above, where $\sum \alpha_{n}$ is \[ 1 + (1/2)^{2} + (1/3)^{2} + \dots. \] Then both $\sum v_{n}$ and $\sum w_{n}$ are convergent: for (\Ex{xxx}.~18) any series selected from the terms of a convergent series of positive terms is convergent. And hence, by theorem~\Eq{(6)} of \SecNo[§]{77}, $\sum u_{n}$ or $\sum(v_{n} - w_{n})$ is convergent and equal to $\sum v_{n} - \sum w_{n}$. We are thus led to formulate the following definition. \begin{Definition} When $\sum \alpha_{n}$ or $\sum |u_{n}|$ is convergent, the series $\sum u_{n}$ is said to be \Emph{absolutely convergent}. \end{Definition} And what we have proved above amounts to this: \begin{Result}if $\sum u_{n}$~is absolutely convergent then it is convergent; so are the series formed by its positive and negative terms taken separately; and the sum of the series is equal to the sum of the positive terms plus the sum of the negative terms. \end{Result} \begin{Remark} The reader should carefully guard himself against supposing that the statement `an absolutely convergent series is convergent' is a mere tautology. When we say that $\sum u_{n}$~is `absolutely convergent' we do \emph{not} assert directly that $\sum u_{n}$~is convergent: we assert the convergence of \emph{another} series $\sum |u_{n}|$, and it is by no means evident \textit{a~priori} that this precludes oscillation on the part of~$\sum u_{n}$. \end{Remark} \begin{Examples}{LXXVII.} \Item{1.} Employ the `general principle of convergence' (\SecNo[§]{84}) to prove the theorem that an absolutely convergent series is convergent. [Since $\sum |u_{n}|$ is convergent, we can, when any positive number~$\DELTA$ is assigned, choose~$n_{0}$ so that \[ |u_{n_{1}+1}| + |u_{n_{1}+2}| + \dots + |u_{n_{2}}| < \DELTA \] when $n_{2} > n_{1} \geq n_{0}$. 
\textit{A~fortiori} \[ |u_{n_{1}+1} + u_{n_{1}+2} + \dots + u_{n_{2}}| < \DELTA, \] and therefore $\sum u_{n}$~is convergent.] \Item{2.} If $\sum a_{n}$ is a convergent series of positive terms, and $|b_{n}|\leq Ka_{n}$, then $\sum b_{n}$ is absolutely convergent. \Item{3.} If $\sum a_{n}$ is a convergent series of positive terms, then the series $\sum a_{n}x^{n}$ is absolutely convergent when $-1 \leq x \leq 1$. \PageSep{338} \Item{4.} If $\sum a_{n}$ is a convergent series of positive terms, then the series $\sum a_{n} \cos n\theta$, $\sum a_{n}\sin n\theta$ are absolutely convergent for all values of~$\theta$. [Examples are afforded by the series $\sum r^{n}\cos n\theta$, $\sum r^{n}\sin n\theta$ of~\SecNo[§]{88}.] \Item{5.} Any series selected from the terms of an absolutely convergent series is absolutely convergent. [For the series of the moduli of its terms is a selection from the series of the moduli of the terms of the original series.] \Item{6.} Prove that if $\sum |u_{n}|$~is convergent then \[ |\tsum u_{n}| \leq \tsum |u_{n}|, \] and that the only case to which the sign of equality can apply is that in which every term has the same sign. \end{Examples} \Paragraph{185. Extension of Dirichlet's Theorem to absolutely convergent series.} Dirichlet's Theorem (\SecNo[§]{169}) shows that the terms of a series of positive terms may be rearranged in any way without affecting its sum. It is now easy to see that any absolutely convergent series has the same property. For let $\sum u_{n}$ be so rearranged as to become $\sum u'_{n}$, and let $\alpha'_{n}$,~$v'_{n}$,~$w'_{n}$ be formed from~$u'_{n}$ as $\alpha_{n}$,~$v_{n}$,~$w_{n}$ were formed from~$u_{n}$. Then $\sum \alpha'_{n}$ is convergent, as it is a rearrangement of~$\sum \alpha_{n}$, and so are $\sum v'_{n}$, $\sum w'_{n}$, which are rearrangements of $\sum v_{n}$, $\sum w_{n}$. 
Also, by Dirichlet's Theorem, $\sum v'_{n} = \sum v_{n}$ and $\sum w'_{n} = \sum w_{n}$ and so \[ \tsum u'_{n} = \tsum v'_{n} - \tsum w'_{n} = \tsum v_{n} - \tsum w_{n} = \tsum u_{n}. \] \Paragraph{186. Conditionally convergent series.} \Item{B.} We have now to consider the second case indicated above, viz.\ that in which the series of moduli $\sum \alpha_{n}$ diverges to~$\infty$. \begin{Definition} If $\sum u_{n}$ is convergent, but $\sum |u_{n}|$ divergent, the original series is said to be \Emph{conditionally convergent}. \end{Definition} In the first place we note that, if $\sum u_{n}$ is conditionally convergent, then the series $\sum v_{n}$, $\sum w_{n}$ of \SecNo[§]{184} must both diverge to~$\infty$. For they obviously cannot both converge, as this would involve the convergence of $\sum(v_{n} + w_{n})$ or~$\sum \alpha_{n}$. And if one of them, say $\sum w_{n}$, is convergent, and $\sum v_{n}$ divergent, then \[ \sum_{0}^{N} u_{n} = \sum_{0}^{N} v_{n} - \sum_{0}^{N} w_{n}, \Tag{(1)} \] and therefore tends to~$\infty$ with~$N$, which is contrary to the hypothesis that $\sum u_{n}$ is convergent. Hence $\sum v_{n}$, $\sum w_{n}$ are both divergent. It is clear from equation~\Eq{(1)} above that the sum of a conditionally convergent series \PageSep{339} is the limit of the difference of two functions each of which tends to~$\infty$ with~$n$. It is obvious too that $\sum u_{n}$ no longer possesses the property of convergent series of positive terms (\Ex{xxx}.~18), and all absolutely convergent series (\Ex{lxxvii}.~5), that any selection from the terms itself forms a convergent series. And it seems more than likely that the property prescribed by Dirichlet's Theorem will not be possessed by conditionally convergent series; at any rate the proof of \SecNo[§]{185} fails completely, as it depended essentially on the convergence of $\sum v_{n}$ and $\sum w_{n}$ separately. 
We shall see in a moment that this conjecture is well founded, and that the theorem is not true for series such as we are now considering. \Paragraph{187. Tests of convergence for conditionally convergent series.} It is not to be expected that we should be able to find tests for conditional convergence as simple and general as those of \SecNo[§§]{167}~\textit{et~seq.} It is naturally a much more difficult matter to formulate tests of convergence for series whose convergence, as is shown by equation~\Eq{(1)} above, depends essentially on the cancelling of the positive by the negative terms. In the first instance \emph{there are no comparison tests for convergence of conditionally convergent series}. For suppose we wish to infer the convergence of $\sum v_{n}$ from that of $\sum u_{n}$. We have to compare \[ v_{0} + v_{1} + \dots + v_{n},\quad u_{0} + u_{1} + \dots + u_{n}. \] If every~$u$ and every~$v$ were positive, and every~$v$ less than the corresponding~$u$, we could at once infer that \[ v_{0} + v_{1} + \dots + v_{n} < u_{0} + \dots + u_{n}, \] and so that $\sum v_{n}$ is convergent. If the~$u$'s only were positive and every~$v$ \emph{numerically} less than the corresponding~$u$, we could infer that \[ |v_{0}| + |v_{1}| + \dots + |v_{n}| < u_{0} + \dots + u_{n}, \] and so that $\sum v_{n}$ is absolutely convergent. But in the general case, when the $u$'s and~$v$'s are both unrestricted as to sign, all that we can infer is that \[ |v_{0}| + |v_{1}| + \dots + |v_{n}| < |u_{0}| + \dots + |u_{n}|. \] This would enable us to infer the absolute convergence of $\sum v_{n}$ from the absolute convergence of~$\sum u_{n}$; but if $\sum u_{n}$ is only conditionally convergent we can draw no inference at all. \PageSep{340} \begin{Remark} \Par{Example.} We shall see shortly that the series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$ is convergent. 
But the series $\frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \dots$ is divergent, although each of its terms is numerically less than the corresponding term of the former series. \end{Remark} It is therefore only natural that such tests as we can obtain should be of a much more special character than those given in the early part of this chapter. \Paragraph{188. Alternating Series.} The simplest and most common conditionally convergent series are what is known as \emph{alternating series}, series whose terms are alternately positive and negative. The convergence of the most important series of this type is established by the following theorem. \begin{Result} If $\phi(n)$~is a positive function of~$n$ which tends \Emph{steadily} to zero as $n \to \infty$, then the series \[ \phi(0) - \phi(1) + \phi(2) - \dots \] is convergent, and its sum lies between $\phi(0)$ and $\phi(0) - \phi(1)$. \end{Result} Let us write $\phi_{0}$, $\phi_{1}$,~\dots\ for $\phi(0)$, $\phi(1)$,~\dots; and let \[ s_{n} = \phi_{0} - \phi_{1} + \phi_{2} - \dots + (-1)^{n}\phi_{n}. \] Then \[ s_{2n+1} - s_{2n-1} = \phi_{2n} - \phi_{2n+1}\geq 0,\quad s_{2n} - s_{2n-2} = -(\phi_{2n-1} - \phi_{2n}) \leq 0. \] {\Loosen Hence $s_{0}$, $s_{2}$, $s_{4}$,~\dots, $s_{2n}$,~\dots\ is a decreasing sequence, and therefore tends to a limit or to~$-\infty$, and $s_{1}$, $s_{3}$, $s_{5}$,~\dots, $s_{2n+1}$,~\dots\ is an increasing sequence, and therefore tends to a limit or to~$\infty$. But $\lim (s_{2n+1} - s_{2n}) = \lim (-1)^{2n+1} \phi_{2n+1} = 0$, from which it follows that both sequences must tend to limits, and that the two limits must be the same. That is to say, the sequence $s_{0}$, $s_{1}$,~\dots, $s_{n}$,~\dots\ tends to a limit. 
Since $s_{0} = \phi_{0}$, $s_{1} = \phi_{0} - \phi_{1}$, it is clear that this limit lies between $\phi_{0}$ and~$\phi_{0} - \phi_{1}$.} \begin{Examples}{LXXVIII.} \Item{1.} The series \begin{gather*} 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots,\quad 1 - \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{4}} + \dots,\\ \sum \frac{(-1)^{n}}{(n + a)},\quad \sum \frac{(-1)^{n}}{\sqrtp{n + a}},\quad \sum \frac{(-1)^{n}}{(\sqrt{n} + \sqrt{a})},\quad \sum \frac{(-1)^{n}}{(\sqrt{n} + \sqrt{a})^{2}}, \end{gather*} where $a > 0$, are conditionally convergent. \Item{2.} The series $\sum(-1)^{n}(n + a)^{-s}$, where $a > 0$, is absolutely convergent if $s > 1$, conditionally convergent if $0 < s \leq 1$, and oscillatory if $s \leq 0$. \PageSep{341} \Item{3.} The sum of the series of \SecNo[§]{188} lies between $s_{n}$ and~$s_{n+1}$ for all values of~$n$; and the error committed by taking the sum of the first $n$ terms instead of the sum of the whole series is numerically not greater than the modulus of the $(n + 1)$th~term. \Item{4.} Consider the series \[ \sum \frac{(-1)^{n}}{\sqrt{n} + (-1)^{n}}, \] which we suppose to begin with the term for which $n = 2$, to avoid any difficulty as to the definitions of the first few terms. This series may be written in the form \[ \sum \left[\left\{ \frac{(-1)^{n}}{\sqrt{n} + (-1)^{n}} - \frac{(-1)^{n}}{\sqrt{n}}\right\} + \frac{(-1)^{n}}{\sqrt{n}}\right] \] or \[ \sum \left\{\frac{(-1)^{n}}{\sqrt{n}} - \frac{1}{n + (-1)^{n}\sqrt{n}}\right\} = \sum (\psi_{n} - \chi_{n}), \] say. The series $\sum \psi_{n}$ is convergent; but $\sum \chi_{n}$~is divergent, as all its terms are positive, and $\lim n\chi_{n} = 1$. Hence the original series is divergent, although it is of the form $\phi_{2} - \phi_{3} + \phi_{4} - \dots$, where $\phi_{n} \to 0$. This example shows that the condition that $\phi_{n}$~should tend \emph{steadily} to zero is essential to the truth of the theorem. 
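The contrast in Ex.~4 between the divergent series and its convergent companion $\sum (-1)^{n}/\sqrt{n}$ may be observed numerically. The following sketch (a modern aside in Python; the function names are ours, not Hardy's) tabulates partial sums of both series, beginning at $n = 2$ as above:

```python
import math

# Partial sums of sum_{n=2}^{N} term(n).
def partial_sum(term, N):
    return sum(term(n) for n in range(2, N + 1))

# phi(n) = 1/(sqrt(n) + (-1)^n) tends to zero, but NOT steadily:
# the alternating series built from it diverges (its partial sums
# drift off to -infinity, roughly like -log N), ...
def non_steady(n):
    return (-1) ** n / (math.sqrt(n) + (-1) ** n)

# ... while 1/sqrt(n) does decrease steadily, so its alternating
# series converges by the theorem of Sec. 188.
def steady(n):
    return (-1) ** n / math.sqrt(n)

for N in (10**3, 10**4, 10**5):
    print(N, partial_sum(non_steady, N), partial_sum(steady, N))
```

The first column of sums keeps decreasing without bound, while the second settles down to a limit.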
The reader will easily verify that $\sqrtp{2n + 1} - 1 < \sqrtp{2n} + 1$, so that this condition is not satisfied. \Item{5.} If the conditions of \SecNo[§]{188} are satisfied except that $\phi_{n}$~tends steadily to a positive limit~$l$, then the series $\sum (-1)^{n}\phi_{n}$ oscillates finitely. \Item{6.} \Topic{Alteration of the sum of a conditionally convergent series by rearrangement of the terms.} Let $s$~be the sum of the series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$, and $s_{2n}$~the sum of its first $2n$ terms, so that $\lim s_{2n} = s$. Now consider the series \[ 1 + \tfrac{1}{3} - \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} - \tfrac{1}{4} + \dots \Tag{(1)} \] in which two positive terms are followed by one negative term, and let $t_{3n}$ denote the sum of the first $3n$~terms. Then \begin{align*} t_{3n} &= 1 + \frac{1}{3} + \dots + \frac{1}{4n-1} - \frac{1}{2} - \frac{1}{4} - \dots - \frac{1}{2n}\\ &= s_{2n} + \frac{1}{2n + 1} + \frac{1}{2n + 3} + \dots + \frac{1}{4n - 1}. \end{align*} Now \[ \lim \left[\frac{1}{2n + 1} - \frac{1}{2n + 2} + \frac{1}{2n + 3} - \dots + \frac{1}{4n - 1} - \frac{1}{4n}\right] = 0, \] {\Loosen since the sum of the terms inside the bracket is clearly less than $n/(2n + 1)(2n + 2)$; and} \[ \lim \left(\frac{1}{2n + 2} + \frac{1}{2n + 4} + \dots + \frac{1}{4n}\right) = \tfrac{1}{2} \lim \frac{1}{n} \sum_{r=1}^{n} \frac{1}{1 + (r/n)} = \tfrac{1}{2} \int_{1}^{2} \frac{dx}{x}, \] by \SecNo[§§]{156} and~\SecNo{158}. Hence \[ \lim t_{3n} = s + \tfrac{1}{2} \int_{1}^{2} \frac{dx}{x}, \] \PageSep{342} and it follows that the sum of the series~\Eq{(1)} is not~$s$, but the right-hand side of the last equation. Later on we shall give the actual values of the sums of the two series: see \SecNo[§]{213} and \Ref{Ch.}{IX}, \MiscEx{IX}~19. It can indeed be proved that a conditionally convergent series can always be so rearranged as to converge to any sum whatever, or to diverge to~$\infty$ or to~$-\infty$. 
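The displacement of the sum under this rearrangement can be observed directly. The sketch below (a modern numerical aside, taking for granted that $s = \log 2$ and that $\tfrac{1}{2}\int_{1}^{2} dx/x = \tfrac{1}{2}\log 2$) computes $t_{3n}$ for a large~$n$ and compares it with $\tfrac{3}{2}\log 2$:

```python
import math

# t_{3n}: the first 3n terms of 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...,
# i.e. (1 + 1/3 + ... + 1/(4n-1)) - (1/2 + 1/4 + ... + 1/(2n)).
def t(n):
    positives = sum(1.0 / (2 * k - 1) for k in range(1, 2 * n + 1))
    negatives = sum(1.0 / (2 * k) for k in range(1, n + 1))
    return positives - negatives

# The rearranged series converges not to s = log 2 but to
# s + (1/2) log 2 = (3/2) log 2.
print(t(100000), 1.5 * math.log(2))
```

The two printed values agree closely, while $t_{3n}$ stays well away from $\log 2$ itself.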
For a proof we may refer to Bromwich's \textit{Infinite Series}, p.~68. \Item{7.} The series \[ 1 + \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{5}} + \frac{1}{\sqrt{7}} - \frac{1}{\sqrt{4}} + \dots \] diverges to~$\infty$. [Here \[ t_{3n} = s_{2n} + \frac{1}{\sqrtp{2n + 1}} + \frac{1}{\sqrtp{2n + 3}} + \dots + \frac{1}{\sqrtp{4n - 1}} > s_{2n} + \frac{n}{\sqrtp{4n - 1}}, \] where $s_{2n} = 1 - \dfrac{1}{\sqrt{2}} + \dots - \dfrac{1}{\DPtypo{\sqrt{2n}}{\sqrtp{2n}}}$, which tends to a limit as $n \to \infty$.] \end{Examples} \begin{Remark} \Paragraph{189. Abel's and Dirichlet's Tests of Convergence.} A more general test, which includes the test of \SecNo[§]{188} as a particular case, is the following. \begin{ParTheorem}{Dirichlet's Test.} If $\phi_{n}$~satisfies the same conditions as in \SecNo[§]{188}, and $\sum a_{n}$ is any series which converges or oscillates finitely, then the series \[ a_{0}\phi_{0} + a_{1}\phi_{1} + a_{2}\phi_{2} + \dots \] is convergent. \end{ParTheorem} The reader will easily verify the identity \[ a_{0}\phi_{0} + a_{1}\phi_{1} + \dots + a_{n}\phi_{n} = s_{0}(\phi_{0} - \phi_{1}) + s_{1}(\phi_{1} - \phi_{2}) + \dots + s_{n-1}(\phi_{n-1} - \phi_{n}) + s_{n}\phi_{n}, \] where $s_{n} = a_{0} + a_{1} + \dots + a_{n}$. Now the series $(\phi_{0} - \phi_{1}) + (\phi_{1} - \phi_{2}) + \dots$ is convergent, since the sum to $n$~terms is $\phi_{0} - \phi_{n}$ and $\lim \phi_{n} = 0$; and all its terms are positive. Also since $\sum a_{n}$, if not actually convergent, at any rate oscillates finitely, we can determine a constant~$K$ so that $|s_{\nu}| < K$ for all values of~$\nu$. Hence the series \[ \tsum s_{\nu}(\phi_{\nu} - \phi_{\nu+1}) \] is absolutely convergent, and so \[ s_{0}(\phi_{0} - \phi_{1}) + s_{1}(\phi_{1} - \phi_{2}) + \dots + s_{n-1}(\phi_{n-1} - \phi_{n}) \] tends to a limit as $n \to \infty$.
Also $\phi_{n}$, and therefore $s_{n}\phi_{n}$, tends to zero\Add{.} And therefore \[ a_{0}\phi_{0} + a_{1}\phi_{1} + \dots + a_{n}\phi_{n} \] tends to a limit, \ie\ the series $\sum a_{\nu}\phi_{\nu}$ is convergent. \Topic{Abel's Test.} There is another test, due to Abel, which, though of less frequent application than Dirichlet's, is sometimes useful. Suppose that $\phi_{n}$, as in Dirichlet's Test, is a positive and decreasing function of~$n$, but that its limit as $n \to \infty$ is not necessarily zero. Thus we postulate less about~$\phi_{n}$, but to make up for this we postulate more about $\sum a_{n}$, viz.\ that it is \emph{convergent}. Then we have the theorem: \begin{Result}if $\phi_{n}$~is a positive and decreasing function of~$n$, and $\sum a_{n}$~is convergent, then $\sum a_{n}\phi_{n}$~is convergent. \end{Result} For $\phi_{n}$~has a limit as $n \to \infty$, say~$l$: and $\lim (\phi_{n} - l) = 0$. Hence, by Dirichlet's Test, $\sum a_{n}(\phi_{n} - l)$ is convergent; and as $\sum{a_{n}}$~is convergent it follows that $\sum a_{n}\phi_{n}$ is convergent. \PageSep{343} This theorem may be stated as follows: \begin{Result}a convergent series remains convergent if we multiply its terms by any sequence of positive and decreasing factors. \end{Result} \end{Remark} \begin{Examples}{LXXIX.} \Item{1.} Dirichlet's and Abel's Tests may also be established by means of the general principle of convergence (\SecNo[§]{84}). Let us suppose, for example, that the conditions of Abel's Test are satisfied. We have identically {\setlength{\multlinegap}{0pt} \begin{multline*} a_{m}\phi_{m} + a_{m+1}\phi_{m+1} + \dots + a_{n}\phi_{n} = s_{m, m}(\phi_{m} - \phi_{m+1}) + s_{m, m+1}(\phi_{m+1} - \phi_{m+2})\\ + \dots + s_{m, n-1}(\phi_{n-1} - \phi_{n}) + s_{m, n}\phi_{n}\dots, \Tag{(1)} \end{multline*}}% where \[ s_{m, \nu} = a_{m} + a_{m+1} + \dots + a_{\nu}. 
\] The left-hand side of~\Eq{(1)} therefore lies between $h\phi_{m}$ and~$H\phi_{m}$, where $h$~and~$H$ are the algebraically least and greatest of $s_{m, m}$, $s_{m, m+1}$,~\dots, $s_{m, n}$. But, given any positive number~$\DELTA$, we can choose~$m_{0}$ so that $|s_{m, \nu}| < \DELTA$ when $m \geq m_{0}$, and so \[ |a_{m}\phi_{m} + a_{m+1}\phi_{m+1} + \dots + a_{n}\phi_{n}| < \DELTA \phi_{m} \leq \DELTA \phi_{1} \] when $n > m \geq m_{0}$. Thus the series $\sum a_{n}\phi_{n}$ is convergent. \Item{2.} The series $\sum \cos n\theta$ and $\sum \sin n\theta$ oscillate finitely when $\theta$~is not a multiple of~$\pi$. For, if we denote the sums of the first $n$ terms of the two series by $s_{n}$ and~$t_{n}$, and write $z = \Cis\theta$, so that $|z| = 1$ and $z \neq 1$, we have \[ |s_{n} + it_{n}| = \left|\frac{1 - z^{n}}{1 - z}\right| \leq \frac{1 + |z^{n}|}{|1 - z|} \leq \frac{2}{|1 - z|}; \] and so $|s_{n}|$ and~$|t_{n}|$ are also not greater than~$2/|1 - z|$. That the series are not actually convergent follows from the fact that their $n$th~terms do not tend to zero (\Exs{xxiv}.~7,~8). The sine series converges to zero if $\theta$~is a multiple of~$\pi$. The cosine series oscillates finitely if $\theta$~is an odd multiple of~$\pi$ and diverges if $\theta$~is an even multiple of~$\pi$. It follows that \emph{if $\phi_{n}$~is a positive function of~$n$ which tends steadily to zero as $n \to \infty$, then the series \[ \tsum \phi_{n} \cos n\theta,\quad \tsum \phi_{n} \sin n\theta \] are convergent}, except perhaps the first series when $\theta$~is a multiple of~$2\pi$. In this case the first series reduces to~$\sum \phi_{n}$, which may or may not be convergent: the second series vanishes identically. If $\sum \phi_{n}$~is convergent then both series are absolutely convergent (\Ex{lxxvii}.~4) for all values of~$\theta$, and the whole interest of the result lies in its application to the case in which $\sum \phi_{n}$~is divergent.
And in this case the series above written are conditionally and \emph{not} absolutely convergent, as will be proved in \Ex{lxxix}.~6. If we put $\theta = \pi$ in the cosine series we are led back to the result of \SecNo[§]{188}, since $\cos n\pi = (-1)^{n}$. \Item{3.} The series $\sum n^{-s} \cos n\theta$, $\sum n^{-s} \sin n\theta$ are convergent if $s > 0$, unless (in the case of the first series) $\theta$~is a multiple of~$2\pi$ and $0 < s \leq 1$. \PageSep{344} \Item{4.} The series of Ex.~3 are in general absolutely convergent if $s > 1$, conditionally convergent if $0 < s \leq 1$, and oscillatory if $s \leq 0$ (finitely if $s = 0$ and infinitely if $s < 0$). Mention any exceptional cases. \Item{5.} If $\sum a_{n}n^{-s}$~is convergent or oscillates finitely, then $\sum a_{n}n^{-t}$~is convergent when $t > s$. \Item{6.} If $\phi_{n}$~is a positive function of~$n$ which tends steadily to~$0$ as $n \to \infty$, and $\sum \phi_{n}$~is divergent, then the series $\sum \phi_{n} \cos n\theta$, $\sum \phi_{n} \sin n\theta$ are \emph{not} absolutely convergent, except the sine-series when $\theta$~is a multiple of~$\pi$. [For suppose, \eg, that $\sum \phi_{n} |\cos n\theta|$ is convergent. Since $\cos^{2} n\theta \leq |\cos n\theta|$, it follows that $\sum \phi_{n} \cos^{2} n\theta$ or \[ \tfrac{1}{2} \tsum \phi_{n} (1 + \cos 2n\theta) \] is convergent. But this is impossible, since $\sum \phi_{n}$~is divergent and $\sum \phi_{n} \cos 2n\theta$, by Dirichlet's Test, convergent, unless $\theta$~is a multiple of~$\pi$. And in this case it is obvious that $\sum \phi_{n} |\cos n\theta|$ is divergent. The reader should write out the corresponding argument for the sine-series, noting where it fails when $\theta$~is a multiple of~$\pi$.] \end{Examples} \Paragraph{190. Series of complex terms.} So far we have confined ourselves to series all of whose terms are real. We shall now consider the series \[ \tsum u_{n} = \tsum (v_{n} + iw_{n}), \] where $v_{n}$ and~$w_{n}$ are real. 
The consideration of such series does not, of course, introduce anything really novel. The series is convergent if, and only if, the series \[ \tsum v_{n},\quad \tsum w_{n} \] are separately convergent. There is however one class of such series so important as to require special treatment. Accordingly we give the following definition, which is an obvious extension of that of~\SecNo[§]{184}. \begin{Definition} The series $\sum u_{n}$, where $u_{n} = v_{n} + iw_{n}$, is said to be \Emph{absolutely convergent} if the series $\sum v_{n}$ and $\sum w_{n}$ are absolutely convergent. \end{Definition} \begin{Theorem} The necessary and sufficient condition for the absolute convergence of~$\sum u_{n}$ is the convergence of $\sum |u_{n}|$ or $\sum \sqrtp{v_{n}^{2} + w_{n}^{2}}$. \end{Theorem} For if $\sum u_{n}$~is absolutely convergent, then both of the series $\sum |v_{n}|$, $\sum |w_{n}|$ are convergent, and so $\sum \{|v_{n}| + |w_{n}|\}$ is convergent: but \[ |u_{n}| = \sqrtp{v_{n}^{2} + w_{n}^{2}} \leq |v_{n}| + |w_{n}|, \] \PageSep{345} and therefore $\sum |u_{n}|$~is convergent. On the other hand \[ |v_{n}| \leq \sqrtp{v_{n}^{2} + w_{n}^{2}},\quad |w_{n}| \leq \sqrtp{v_{n}^{2} + w_{n}^{2}}, \] so that $\sum |v_{n}|$ and $\sum |w_{n}|$ are convergent whenever $\sum |u_{n}|$~is convergent. It is obvious that \emph{an absolutely convergent series is convergent}, since its real and imaginary parts converge separately. And Dirichlet's Theorem (\SecNo[§§]{169},~\SecNo{185}) may be extended at once to absolutely convergent complex series by applying it to the separate series $\sum v_{n}$ and~$\sum w_{n}$. \begin{Remark} The convergence of an absolutely convergent series may also be deduced directly from the general principle of convergence (cf.\ \Ex{lxxvii}.~1). We leave this as an exercise to the reader. \end{Remark} \Paragraph{191. 
Power Series.} One of the most important parts of the theory of the ordinary functions which occur in elementary analysis (such as the sine and cosine, and the logarithm and exponential, which will be discussed in the next chapter) is that which is concerned with their expansion in series of the form $\sum a_{n}x^{n}$. Such a series is called a \Emph{power series} in~$x$. We have already come across some cases of expansion in series of this kind in connection with Taylor's and Maclaurin's series (\SecNo[§]{148}). There, however, we were concerned only with a real variable~$x$. We shall now consider a few general properties of power series in~$z$, where $z$~is a complex variable. \begin{Result} \Item{A.} A power series $\sum a_{n}z^{n}$ may be convergent for all values of~$z$, for a certain region of values, or for no values except $z = 0$. \end{Result} It is sufficient to give an example of each possibility. \begin{Remark} \Item{1.} \emph{The series $\sum \dfrac{z^{n}}{n!}$ is convergent for all values of~$\DPtypo{x}{z}$.} For if $u_{n} = \dfrac{z^{n}}{n!}$ then \[ |u_{n+1}|/|u_{n}| = |z|/(n + 1) \to 0 \] as $n \to \infty$, whatever value $z$ may have. Hence, by d'Alembert's Test, $\sum |u_{n}|$~is convergent for all values of~$z$, and the original series is absolutely convergent for all values of~$z$. We shall see later on that a power series, when convergent, is \emph{generally} absolutely convergent. \Item{2.} \emph{The series $\sum n!\, z^{n}$ is not convergent for any value of~$z$ except $z = 0$.} For if $u_{n} = n!\, z^{n}$ then $|u_{n+1}|/|u_{n}| = (n + 1)|z|$, which tends to~$\infty$ with~$n$, unless $z = 0$. Hence (cf.\ \Exs{xxvii}.\ 1,~2,~5) the modulus of the $n$th~term tends to~$\infty$ with~$n$; and so the series cannot converge, except when $z = 0$. It is obvious that any power series converges when $z = 0$. 
\PageSep{346} \Item{3.} \emph{The series $\sum z^{n}$ is always convergent when $|z| < 1$, and never convergent when $|z| \geq 1$.} This was proved in \SecNo[§]{88}. Thus we have an actual example of each of the three possibilities. \end{Remark} \Paragraph{192.} \begin{Result}\Item{B.} If a power series $\sum a_{n}z^{n}$ is convergent for a particular value of~$z$, say $z_{1} = r_{1}(\cos\theta_{1} + i\sin\theta_{1})$, then it is absolutely convergent for all values of~$z$ such that $|z| < r_{1}$. \end{Result} For $\lim a_{n}z_{1}^{n} = 0$, since $\sum a_{n}z_{1}^{n}$~is convergent, and therefore we can certainly find a constant~$K$ such that $|a_{n}z_{1}^{n}| < K$ for all values of~$n$. But, if $|z| = r < r_{1}$, we have \[ |a_{n}z^{n}| = |a_{n}z_{1}^{n}| \left(\frac{r}{r_{1}}\right)^{n} < K \left(\frac{r}{r_{1}}\right)^{n}, \] and the result follows at once by comparison with the convergent geometrical series $\sum (r/r_{1})^{n}$. In other words, if the series converges at~$P$ \emph{then it converges absolutely at all points nearer to the origin than~$P$}. \begin{Remark} \Par{Example.} Show that the result is true even if the series oscillates finitely when $z = z_{1}$. [If $s_{n} = a_{0} + a_{1}z_{1} + \dots + a_{n}z_{1}^{n}$ then we can find~$K$ so that $|s_{n}| < K$ for all values of~$n$. But $|a_{n}z_{1}^{n}| = |s_{n} - s_{n-1}| \leq |s_{n-1}| + |s_{n}| < 2K$, and the argument can be completed as before.] \end{Remark} \Paragraph{193. The region of convergence of a power series. The circle of convergence.} Let $z = r$ be any point on the positive real axis. If the power series converges when $z = r$ then it converges absolutely at all points inside the circle $|z| = r$. In particular it converges for all real values of~$z$ less than~$r$. Now let us divide the points~$r$ of the positive real axis into two classes, the class at which the series converges and the class at which it does not. The first class must contain at least the one point $z = 0$. 
The second class, on the other hand, need not exist, as the series may converge for all values of~$z$. Suppose however that it does exist, and that the first class of points does include points besides $z = 0$. Then it is clear that every point of the first class lies to the left of every point of the second class. Hence there is a point, say the point $z = R$, which divides the two classes, and may itself belong to either one or the other. \emph{Then the series is absolutely convergent at all points inside the circle $|z| = R$.} \PageSep{347} For let $P$~be any such point. We can draw a circle, whose centre is~$O$ and whose radius is %[Illustration: Fig. 51.]
\Figure[2.5in]{51}{p347} less than~$R$, so as to include~$P$ inside it. Let this circle cut~$OA$ in~$Q$. Then the series is convergent at~$Q$, and therefore, by Theorem~B, absolutely convergent at~$P$. On the other hand the series cannot converge at any point~$P'$ \emph{outside} the circle. For if it converged at~$P'$ it would converge absolutely at all points nearer to~$O$ than~$P'$; and this is absurd, as it does not converge at any point between $A$ and~$Q'$ (\Fig{51}). So far we have excepted the cases in which the power series (1)~does not converge at any point on the positive real axis except $z = 0$ or (2)~converges at all points on the positive real axis. It is clear that in case~(1) the power series converges nowhere except when $z = 0$, and that in case~(2) it is absolutely convergent everywhere. Thus we obtain the following result: \begin{Result}a power series either \Item{(1)} converges for $z = 0$ and for no other value of~$z$; or \Item{(2)} converges absolutely for all values of~$z$; or \Item{(3)} \Hang[3.5em] converges absolutely for all values of~$z$ within a certain circle of radius~$R$, and does not converge for any value of~$z$ outside this circle.
\end{Result} In case~(3) the circle is called the \Emph{circle of convergence} and its radius the \Emph{radius of convergence} of the power series. It should be observed that this general result gives absolutely no information about the behaviour of the series \emph{on} the circle of convergence. The examples which follow show that as a matter of fact there are very diverse possibilities as to this. \begin{Examples}{LXXX.} \Item{1.} The series $1 + az + a^{2}z^{2} + \dots$, where $a > 0$, has a radius of convergence equal to~$1/a$. It does not converge anywhere on its circle of convergence, diverging when $z = 1/a$ and oscillating finitely at all other points on the circle. \Item{2.} The series $\dfrac{z}{1^{2}} + \dfrac{z^{2}}{2^{2}} + \dfrac{z^{3}}{3^{2}} + \dots$ has its radius of convergence equal to~$1$; it converges absolutely at all points on its circle of convergence. \PageSep{348} \Item{3.} More generally, if $|a_{n+1}|/|a_{n}| \to \lambda$, or $|a_{n}|^{1/n} \to \lambda$, as $n \to \infty$, then the series $a_{0} + a_{1}z + a_{2}z^{2} + \dots$ has $1/\lambda$ as its radius of convergence. In the first case \[ \lim |a_{n+1}z^{n+1}|/|a_{n}z^{n}| = \lambda |z|, \] which is less or greater than unity according as $|z|$~is less or greater than~$1/\lambda$, so that we can use \DPchg{D'Alembert's}{d'Alembert's} Test (\SecNo[§]{168},~3). In the second case we can use Cauchy's Test (\SecNo[§]{168},~2) similarly. \Item{4.} \Topic{The logarithmic series.} The series \[ z - \tfrac{1}{2} z^{2} + \tfrac{1}{3} z^{3} - \dots \] is called (for reasons which will appear later) the `logarithmic' series. It follows from Ex.~3 that its radius of convergence is unity. When $z$~is on the circle of convergence we may write $z = \cos\theta + i\sin\theta$, and the series assumes the form \[ \cos\theta - \tfrac{1}{2} \cos 2\theta + \tfrac{1}{3} \cos 3\theta - \dots + i(\sin\theta - \tfrac{1}{2} \sin 2\theta + \tfrac{1}{3} \sin 3\theta - \dots). 
\] The real and imaginary parts are both convergent, though not absolutely convergent, unless $\theta$~is an odd multiple of~$\pi$ (\Exs{lxxix}.~3,~4). If $\theta$~is an odd multiple of~$\pi$ then $z = -1$, and the series assumes the form $-1 - \frac{1}{2} - \frac{1}{3} - \dots$, and so diverges to~$-\infty$. Thus the logarithmic series converges at all points of its circle of convergence except the point $z = -1$. \Item{5.} \Topic{The binomial series.} Consider the series \[ 1 + mz + \frac{m(m - 1)}{2!} z^{2} + \frac{m(m - 1)(m - 2)}{3!} z^{3} + \dots\Add{.} \] If $m$~is a positive integer then the series terminates. In general \[ \frac{|a_{n+1}|}{|a_{n}|} = \frac{|m - n|}{n + 1} \to 1, \] so that the radius of convergence is unity. We shall not discuss here the question of its convergence on the circle, which is a little more difficult.\footnote {See Bromwich, \textit{Infinite Series}, pp.~225 \textit{et~seq.}; Hobson, \textit{Plane Trigonometry} (3rd~edition), pp.~268~\textit{et~seq.}} \end{Examples} \begin{Remark} \Paragraph{194. Uniqueness of a power series.} If $\sum a_{n} z^{n}$ is a power series which is convergent for some values of~$z$ at any rate besides $z = 0$, and $f(z)$~is its sum, then it is easy to see that $f(z)$~can be expressed in the form \[ a_{0} + a_{1}z + a_{2}z^{2} + \dots + (a_{n} + \epsilon_{z})z^{n}, \] where $\epsilon_{z} \to 0$ as $|z| \to 0$. For if $\mu$~is any number less than the radius of convergence of the series, and $|z| < \mu$, then $|a_{n}| \mu^{n} < K$, where $K$~is a constant (cf.~\SecNo[§]{192}), and so \begin{align*} \left|f(z) - \sum_{0}^{n} a_{\nu}z^{\nu}\right| &\leq |a_{n+1}| |z^{n+1}| + |a_{n+2}| |z^{n+2}| + \dots\\ &< K \left(\frac{|z|}{\mu}\right)^{n+1} \left(1 + \frac{|z|}{\mu} + \frac{|z|^{2}}{\mu^{2}} + \dots\right) = \frac{K |z|^{n+1}}{\mu^{n} (\mu - |z|)}, \end{align*} \PageSep{349} where $K$~is a number independent of~$z$. 
It follows from \Ex{lv}.~15 that if $\sum a_{n}z^{n} = \sum b_{n}z^{n}$ for all values of~$z$ whose modulus is less than some number~$\mu$, then $a_{n} = b_{n}$ for all values of~$n$. This result is capable of considerable generalisations into which we cannot enter now. It shows that \emph{the same function~$f(z)$ cannot be represented by two different power series}. \end{Remark} \Paragraph{195. Multiplication of Series.} We saw in \SecNo[§]{170} that if $\sum u_{n}$ and $\sum v_{n}$ are two convergent series of positive terms, then $\sum u_{n} × \sum v_{n} = \sum w_{n}$, where \[ w_{n} = u_{0}v_{n} + u_{1}v_{n-1} + \dots + u_{n}v_{0}. \] We can now extend this result to all cases in which $\sum u_{n}$ and $\sum v_{n}$ are \emph{absolutely} convergent; for our proof was merely a simple application of Dirichlet's Theorem, which we have already extended to all absolutely convergent series. \begin{Examples}{LXXXI.} \Item{1.} If $|z|$~is less than the radius of convergence of either of the series $\sum a_{n}z^{n}$, $\sum b_{n}z^{n}$, then the product of the two series is $\sum c_{n}z^{n}$, where $c_{n} = a_{0}b_{n} + a_{1}b_{n-1} + \dots + a_{n}b_{0}$. \Item{2.} {\Loosen If the radius of convergence of $\sum a_{n}z^{n}$ is~$R$, and $f(z)$~is the sum of the series when $|z| < R$, and $|z|$~is less than either $R$ or unity, then $f(z)/(1 - z) = \sum s_{n}z^{n}$, where $s_{n} = a_{0} + a_{1} + \dots + a_{n}$.} \Item{3.} Prove, by squaring the series for $1/(1 - z)$, that $1/(1 - z)^{2} = 1 + 2z + 3z^{2} + \dots$ if $|z| < 1$. \Item{4.} Prove similarly that $1/(1 - z)^{3} = 1 + 3z + 6z^{2} + \dots$, the general term being $\frac{1}{2}(n + 1)(n + 2)z^{n}$. \Item{5.} \Topic{The Binomial Theorem for a negative integral exponent.} If $|z| < 1$, and $m$~is a positive integer, then \[ \frac{1}{(1 - z)^{m}} = 1 + mz + \frac{m(m + 1)}{1·2} z^{2} + \dots + \frac{m(m + 1) \dots (m + n - 1)}{1·2 \dots n} z^{n} + \dots. \] [Assume the truth of the theorem for all indices up to~$m$. 
\] Then, by Ex.~2, $1/(1 - z)^{m+1} = \sum s_{n}z^{n}$, where \begin{align*} %[** TN: Set on a single line in the original]
s_{n} &= 1 + m + \frac{m(m + 1)}{1·2} + \dots + \frac{m(m + 1) \dots (m + n - 1)}{1·2 \dots n} \\ &= \frac{(m + 1)(m + 2) \dots (m + n)}{1·2 \dots n}, \end{align*} as is easily proved by induction.] \Item{6.} Prove by multiplication of series that if \[ f(m, z) = 1 + \binom{m}{1} z + \binom{m}{2} z^{2} + \dots, \] and $|z| < 1$, then $f(m, z)f(m', z) = f(m + m', z)$. [This equation forms the basis of Euler's proof of the Binomial Theorem. The coefficient of~$z^{n}$ in the product series is \[ \binom{m'}{n} + \binom{m}{1} \binom{m'}{n - 1} + \binom{m}{2} \binom{m'}{n - 2} + \dots + \binom{m}{n - 1} \binom{m'}{1} + \binom{m}{n}. \] \PageSep{350} This is a polynomial in $m$~and~$m'$: but when $m$~and~$m'$ are positive integers this polynomial must reduce to $\dbinom{m + m'}{n}$ in virtue of the Binomial Theorem for a positive integral exponent, and if two such polynomials are equal for all positive integral values of $m$~and~$m'$ then they must be equal identically.] \Item{7.} If $f(z) = 1 + z + \dfrac{z^{2}}{2!} + \dots$ then $f(z)f(z') = f(z + z')$. [For the series for~$f(z)$ is absolutely convergent for all values of~$z$: and it is easy to see that if $u_{n} = \dfrac{z^{n}}{n!}$, $v_{n} = \dfrac{z'^{n}}{n!}$, then $w_{n} = \dfrac{(z + z')^{n}}{n!}$.] \Item{8.} If \[ C(z) = 1 - \frac{z^{2}}{2!} + \frac{z^{4}}{4!} - \dots,\quad S(z) = z - \frac{z^{3}}{3!} + \frac{z^{5}}{5!} - \dots, \] then \[ C(z + z') = C(z)C(z') - S(z)S(z'),\quad S(z + z') = S(z)C(z') + C(z)S(z'), \] and \[ \{C(z)\}^{2} + \{S(z)\}^{2} = 1. \] \Item{9.} \Topic{Failure of the Multiplication Theorem.} That the theorem is not always true when $\sum u_{n}$ and $\sum v_{n}$ are not \emph{absolutely} convergent may be seen by considering the case in which \[ u_{n} = v_{n} = \frac{(-1)^{n}}{\sqrtp{n + 1}}. \] Then \[ w_{n} = (-1)^{n} \sum_{r=0}^{n} \frac{1}{\sqrtb{(r + 1)(n + 1 - r)}}.
\] But $\sqrtb{(r + 1)(n + 1 - r)} \leq \frac{1}{2}(n + 2)$, and so $|w_{n}| > (2n + 2)/(n + 2)$, which tends to~$2$; so that $\sum w_{n}$~is certainly not convergent. \end{Examples} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER VIII.} \begin{Examples}{} \Item{1.} Discuss the convergence of the series $\sum n^{k}\{\sqrtp{n + 1} - 2\sqrt{n} + \sqrtp{n - 1}\}$, where $k$~is real. \MathTrip{1890.} \Item{2.} Show that \[ \tsum n^{r} \Delta^{k}(n^{s}), \] where \[ \Delta u_{n} = u_{n} - u_{n+1},\quad \Delta^{2} u_{n} = \Delta(\Delta u_{n}), \] and so on, is convergent if and only if $k > r + s + 1$, except when $s$~is a positive integer less than~$k$, when every term of the series is zero. [The result of \Ref{Ch.}{VII}, \MiscEx{VII}~11, shows that $\Delta^{k}(n^{s})$~is in general of order~$n^{s-k}$.] \Item{3.} Show that \[ \sum_{1}^{\infty} \frac{n^{2} + 9n + 5}{(n + 1)(2n + 3)(2n + 5)(n + 4)} = \frac{5}{36}. \] \MathTrip{1912.} [Resolve the general term into partial fractions.] \PageSep{351} \Item{4.} Show that, if $R(n)$~is any rational function of~$n$, we can determine a polynomial~$P(n)$ and a constant~$A$ such that $\sum \{R(n) - P(n) - (A/n)\}$ is convergent. Consider in particular the cases in which $R(n)$~is one of the functions $1/(an + b)$, $(an^{2} + 2bn + c)/(\alpha n^{2} + 2\beta n + \gamma)$. \Item{5.} Show that the series \[ 1 - \frac{1}{1 + z} + \frac{1}{2} - \frac{1}{2 + z} + \frac{1}{3} - \frac{1}{3 + z} + \dots \] is convergent provided only that $z$~is not a negative integer. \Item{6.} Investigate the convergence or divergence of the series \begin{gather*} %[** TN: Set on one line in the original] \sum \sin\frac{a}{n},\quad \sum \frac{1}{n} \sin\frac{a}{n},\quad \sum (-1)^{n} \sin\frac{a}{n},\\ \sum \left(1 - \cos\frac{a}{n}\right),\quad \sum (-1)^{n} n\left(1 - \cos\frac{a}{n}\right), \end{gather*} where $a$~is real. 
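The first two series of Ex.~6 may be compared numerically. In the sketch below (a modern aside in Python, with $a = 1$) the partial sums of $\sum \sin(a/n)$ grow like those of the harmonic series, while those of $\sum (1/n)\sin(a/n)$ settle down to a limit:

```python
import math

# Partial sums of the first series of Ex. 6: sin(a/n) ~ a/n, so this
# behaves like the (divergent) harmonic series.
def s_div(N, a=1.0):
    return sum(math.sin(a / n) for n in range(1, N + 1))

# Partial sums of the second series: (1/n) sin(a/n) ~ a/n^2, so this
# behaves like the (convergent) series sum 1/n^2.
def s_conv(N, a=1.0):
    return sum(math.sin(a / n) / n for n in range(1, N + 1))

for N in (10**3, 10**4, 10**5):
    print(N, s_div(N), s_conv(N))
```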
\Item{7.} Discuss the convergence of the series \[ \sum_{1}^{\infty} \left(1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}\right) \frac{\sin(n\theta + \alpha)}{n}, \] where $\theta$~and~$\alpha$ are real. \MathTrip{1899.} \Item{8.} Prove that the series \[ 1 - \tfrac{1}{2} - \tfrac{1}{3} + \tfrac{1}{4} + \tfrac{1}{5} + \tfrac{1}{6} - \tfrac{1}{7} - \tfrac{1}{8} - \tfrac{1}{9} - \tfrac{1}{10} + \dots, \] in which successive terms of the same sign form groups of $1$, $2$, $3$, $4$,~\dots\ terms, is convergent; but that the corresponding series in which the groups contain $1$, $2$, $4$, $8$,~\dots\ terms oscillates finitely. \MathTrip{1908.} \Item{9.} If $u_{1}$, $u_{2}$, $u_{3}$,~\dots\ is a decreasing sequence of positive numbers whose limit is zero, then the series \[ u_{1} - \tfrac{1}{2}(u_{1} + u_{2}) + \tfrac{1}{3}(u_{1} + u_{2} + u_{3}) - \dots, \quad u_{1} - \tfrac{1}{3}(u_{1} + u_{3}) + \tfrac{1}{5}(u_{1} + u_{3} + u_{5}) - \dots \] are convergent. [For if $(u_{1} + u_{2} + \dots + u_{n})/n = v_{n}$ then $v_{1}$, $v_{2}$, $v_{3}$,~\dots\ is also a decreasing sequence whose limit is zero (\Ref{Ch.}{IV}, \MiscExs{IV}~8,~27). This shows that the first series is convergent; the second we leave to the reader. In particular the series \[ 1 - \tfrac{1}{2}\left(1 + \tfrac{1}{2}\right) + \tfrac{1}{3}\left(1 + \tfrac{1}{2} + \tfrac{1}{3}\right) - \dots,\quad 1 - \tfrac{1}{3}\left(1 + \tfrac{1}{\DPtypo{5}{3}}\right) + \tfrac{1}{5}\left(1 + \tfrac{1}{3} + \tfrac{1}{5}\right) - \dots \] are convergent.] \Item{10.} If $u_{0} + u_{1} + u_{2} + \dots$ is a divergent series of positive and decreasing terms, then \[ (u_{0} + u_{2} + \dots + u_{2n})/(u_{1} + u_{3} + \dots + u_{2n+1}) \to 1. \] \Item{11.} Prove that if $\alpha > 0$ then $\lim\limits_{p\to\infty} \sum\limits_{n=0}^{\infty} (p + n)^{-1-\alpha} = 0$. \Item{12.} Prove that $\lim\limits_{\alpha\to 0+} \alpha \sum\limits_{1}^{\infty} n^{-1-\alpha} = 1$.
[It follows from \SecNo[§]{174} that \[ 0 < 1^{-1-\alpha} + 2^{-1-\alpha} + \dots + (n - 1)^{-1-\alpha} - \int_{1}^{n} x^{-1-\alpha}\, dx \leq 1, \] and it is easy to deduce that $\sum n^{-1-\alpha}$ lies between $1/\alpha$ and~$(1/\alpha) + 1$.] \PageSep{352} \Item{13.} Find the sum of the series $\sum\limits_{1}^{\infty} u_{n}$, where \[ u_{n} = \frac{x^{n} - x^{-n-1}}{(x^{n} + x^{-n})(x^{n+1} + x^{-n-1}) } = \frac{1}{x - 1} \left(\frac{1}{x^{n} + x^{-n}} - \frac{1}{x^{n+1} + x^{-n-1}}\right), \] for all real values of~$x$ for which the series is convergent. \MathTrip{1901.} [If $|x|$~is not equal to unity then the series has the sum $x/\{(x - 1)(x^{2} + 1)\}$. If $x = 1$ then $u_{n} = 0$ and the sum is~$0$. If $x = -1$ then $u_{n} = \frac{1}{2}(-1)^{n+1}$ and the series oscillates finitely.] \Item{14.} Find the sums of the series \[ \frac{z}{1 + z} + \frac{2z^{2}}{1 + z^{2}} + \frac{4z^{4}}{1 + z^{4}} + \dots,\quad \frac{z}{1 - z^{2}} + \frac{z^{2}}{1 - z^{4}} + \frac{z^{4}}{1 - z^{8}} + \dots \] (in which all the indices are powers of~$2$), whenever they are convergent. [The first series converges only if $|z| < 1$, its sum then being~$z/(1 - z)$; the second series converges to~$z/(1 - z)$ if $|z| < 1$ and to~$1/(1 - z)$ if $|z| > 1$.] \Item{15.} If $|a_{n}| \leq 1$ for all values of~$n$ then the equation \[ 0 = 1 + a_{1}z + a_{2}z^{2} + \dots \] cannot have a root whose modulus is less than~$\frac{1}{2}$, and the only case in which it can have a root whose modulus is equal to~$\frac{1}{2}$ is that in which $a_{n} = -\Cis(n\theta)$, when $z = \frac{1}{2} \Cis(-\theta)$ is a root. \Item{16.} \Topic{Recurring Series.} A power series $\sum a_{n}z^{n}$ is said to be a \emph{recurring series} if its coefficients satisfy a relation of the type \[ a_{n} + p_{1}a_{n-1} + p_{2}a_{n-2} + \dots + p_{k}a_{n-k} = 0, \Tag{(1)} \] where $n \geq k$ and $p_{1}$, $p_{2}$,~\dots, $p_{k}$ are independent of~$n$. Any recurring series is the expansion of a rational function of~$z$. 
To prove this we observe in the first place that the series is certainly convergent for values of~$z$ whose modulus is sufficiently small. For let $G$ be the greater of the two numbers \[ 1,\quad |p_{1}| + |p_{2}| + \dots + |p_{k}|. \] Then it follows from the equation~\Eq{(1)} that $|a_{n}| \leq G\alpha_{n}$, where $\alpha_{n}$~is the modulus of the numerically greatest of the preceding coefficients; and from this that $|a_{n}| < KG^{n}$, where $K$~is independent of~$n$. Thus the recurring series is certainly convergent for values of~$z$ whose modulus is less than~$1/G$. But if we multiply the series $f(z) = \sum a_{n}z^{n}$ by $p_{1}z$, $p_{2}z^{2}$,~\dots\Add{,} $p_{k}z^{k}$, and add the results, we obtain a new series in which all the coefficients after the~$(k - 1)$th vanish in virtue of the relation~\Eq{(1)}, so that \[ (1 + p_{1}z + p_{2}z^{2} + \dots + p_{k}z^{k})f(z) = P_{0} + P_{1}z + \dots + P_{k-1}z^{k-1}, \] where $P_{0}$, $P_{1}$,~\dots, $P_{k-1}$ are constants. The polynomial $1 + p_{1}z + p_{2}z^{2} + \dots + p_{k}z^{k}$ is called the \emph{scale of relation} of the series. Conversely, it follows from the known results as to the expression of any rational function as the sum of a polynomial and certain partial fractions of the type~$A/(z - a)^{p}$, and from the Binomial Theorem for a negative integral \PageSep{353} exponent, that any rational function whose denominator is not divisible by~$z$ can be expanded in a power series convergent for values of~$z$ whose modulus is sufficiently small, in fact if $|z| < \rho$, where $\rho$~is the least of the moduli of the roots of the denominator (cf.\ \Ref{Ch.}{IV}, \MiscExs{IV}\ 18~\textit{et~seq.}). And it is easy to see, by reversing the argument above, that the series is a recurring series. Thus \begin{Result}the necessary and sufficient condition that a power series should be a recurring series is that it should be the expansion of such a rational function of~$z$. 
\end{Result} \Item{17.} \Topic{Solution of Difference-Equations.} A relation of the type of~\Eq{(1)} in Ex.~16 is called a \emph{linear difference-equation in~$a_{n}$ with constant coefficients}. Such equations may be solved by a method which will be sufficiently explained by an example. Suppose that the equation is \[ a_{n} - a_{n-1} - 8a_{n-2} + 12a_{n-3} = 0. \] Consider the recurring power series $\sum a_{n}z^{n}$. We find, as in Ex.~16, that its sum is \[ \frac{a_{0} + (a_{1} - a_{0}) z + (a_{2} - a_{1} - 8a_{0}) z^{2}} {1 - z - 8z^{2} + 12z^{3}} = \frac{A_{1}}{1 - 2z} + \frac{A_{2}}{(1 - 2z)^{2}} + \frac{B}{1 + 3z}, \] where $A_{1}$,~$A_{2}$, and~$B$ are numbers easily expressible in terms of $a_{0}$,~$a_{1}$, and~$a_{2}$. Expanding each fraction separately we see that the coefficient of~$z^{n}$ is \[ a_{n} = 2^{n}\{A_{1} + (n + 1) A_{2}\} + (-3)^{n} B. \] The values of $A_{1}$,~$A_{2}$,~$B$ depend upon the first three coefficients $a_{0}$,~$a_{1}$,~$a_{2}$, which may of course be chosen arbitrarily. \Item{18.} The solution of the difference-equation $u_{n} - 2\cos\theta u_{n-1} + u_{n-2} = 0$ is $u_{n} = A\cos n\theta + B\sin n\theta$, where $A$~and~$B$ are arbitrary constants. \Item{19.} If $u_{n}$~is a polynomial in~$n$ of degree~$k$, then $\sum u_{n} z^{n}$~is a recurring series whose scale of relation is $(1 - z)^{k+1}$. \MathTrip{1904.} \Item{20.} Expand $9/\{(z - 1)(z + 2)^{2}\}$ in ascending powers of~$z$. \MathTrip{1913.} \Item{21.} Prove that if $f(n)$~is the coefficient of~$z^{n}$ in the expansion of~$z/(1 + z + z^{2})$ in powers of~$z$, then \[ \Item{(1)}\ f(n) + f(n - 1) + f(n - 2) = 0,\quad \Item{(2)}\ f(n) = (\omega_{3}^{n} - \omega_{3}^{2n})/(\omega_{3} - \omega_{3}^{2}), \] where $\omega_{3}$~is a complex cube root of unity. Deduce that $f(n)$~is equal to $0$ or $1$ or~$-1$ according as $n$~is of the form $3k$ or $3k + 1$ or~$3k + 2$, and verify this by means of the identity $z/(1 + z + z^{2}) = z(1 - z)/(1 - z^{3})$. 
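%[** A modern numerical aside, not part of Hardy's text: the closed form obtained in Ex.~17, $a_{n} = 2^{n}\{A_{1} + (n + 1)A_{2}\} + (-3)^{n}B$, can be verified to satisfy the difference-equation identically, whatever the constants. The sample values of $A_{1}$, $A_{2}$, $B$ below are arbitrary.]

```python
def a(n, A1, A2, B):
    # closed form from Ex. 17, read off from the partial-fraction expansion
    return 2 ** n * (A1 + (n + 1) * A2) + (-3) ** n * B

# the relation a_n - a_{n-1} - 8 a_{n-2} + 12 a_{n-3} = 0 holds exactly
for A1, A2, B in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (3, -2, 5)]:
    for n in range(3, 25):
        lhs = (a(n, A1, A2, B) - a(n - 1, A1, A2, B)
               - 8 * a(n - 2, A1, A2, B) + 12 * a(n - 3, A1, A2, B))
        assert lhs == 0
print("recurrence satisfied for all sample constants")
```

The arithmetic here is exact (integers throughout), so this checks each of the three basis solutions $2^{n}$, $(n + 1)2^{n}$, $(-3)^{n}$ separately as well as a general combination.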
\Item{22.} A player tossing a coin is to score one point for every head he turns up and two for every tail, and is to play on until his score reaches or passes a total~$n$. Show that his chance of making exactly the total~$n$ is $\frac{1}{3}\{2 + (-\frac{1}{2})^{n}\}$. \MathTrip{1898.} [If $p_{n}$~is the probability then $p_{n} = \frac{1}{2} (p_{n-1} + p_{n-2})$. \DPtypo{also}{Also} $p_{0} = 1$, $p_{1} = \frac{1}{2}$.] \PageSep{354} \Item{23.} Prove that \[ \frac{1}{a + 1} + \frac{1}{a + 2} + \dots + \frac{1}{a + n} = \binom{n}{1}\frac{1}{a + 1} - \binom{n}{2}\frac{1!}{(a + 1)(a + 2)} + \dots \] if $n$~is a positive integer and $a$~is not one of the numbers $-1$, $-2$,~\dots,~$-n$. [This follows from splitting up each term on the right-hand side into partial fractions. When $a > -1$, the result may be deduced very simply from the equation \[ \int_{0}^{1} x^{a}\frac{1 - x^{n}}{1 - x}\, dx = \int_{0}^{1} (1 - x)^{a}\{1 - (1 - x)^{n}\}\frac{dx}{x} \] by expanding $(1 - x^{n})/(1 - x)$ and $1 - (1 - x)^{n}$ in powers of~$x$ and integrating each term separately. The result, being merely an algebraical identity, must be true for all values of~$a$ save $-1$, $-2$,~\dots,~$-n$.] \Item{24.} Prove by multiplication of series that \[ \sum_{0}^{\infty} \frac{z^{n}}{n!} \sum_{1}^{\infty} \frac{(-1)^{n-1}z^{n}}{n·n!} = \sum_{1}^{\infty} \left(1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}\right) \frac{z^{n}}{n!}. \] [The coefficient of~$z^{n}$ will be found to be \[ \frac{1}{n!}\left\{ \binom{n}{1} - \frac{1}{2}\binom{n}{2} + \frac{1}{3}\binom{n}{3} - \dots \right\}. \] Now use Ex.~23, taking $a = 0$.] \Item{25.} If $A_{n} \to A$ and $B_{n} \to B$ as $n \to \infty$, then \[ (A_{1}B_{n} + A_{2}B_{n-1} + \dots + A_{n}B_{1})/n \to AB. \] [Let $A_{n} = A + \epsilon_{n}$. Then the expression given is equal to \[ A \frac{B_{1} + B_{2} + \dots + B_{n}}{n} + \frac{\epsilon_{1}B_{n} + \epsilon_{2}B_{n-1} + \dots + \epsilon_{n}B_{1}}{n}. 
\] The first term tends to~$AB$ (\Ref{Ch.}{IV}, \MiscEx{IV}~27). The modulus of the second is less than $\beta\{|\epsilon_{1}| + |\epsilon_{2}| + \dots + |\epsilon_{n}|\}/n$, where $\beta$~is any number greater than the greatest value of~$|B_{\nu}|$: and this expression tends to zero.] \Item{26.} Prove that if $c_{n} = a_{1}b_{n} + a_{2}b_{n-1} + \dots + a_{n}b_{1}$ and \[ A_{n} = a_{1} + a_{2} + \dots + a_{n},\quad B_{n} = b_{1} + b_{2} + \dots + b_{n},\quad C_{n} = c_{1} + c_{2} + \dots + c_{n}, \] then \[ C_{n} = a_{1}B_{n} + a_{2}B_{n-1} + \dots + a_{n}B_{1} = b_{1}A_{n} + b_{2}A_{n-1} + \dots + b_{n}A_{1} \] and \[ C_{1} + C_{2} + \dots + C_{n} = A_{1}B_{n} + A_{2}B_{n-1} + \dots + A_{n}B_{1}. \] Hence prove that if the series $\sum a_{n}$, $\sum b_{n}$ are convergent and have the sums $A$,~$B$, so that $A_{n} \to A$, $B_{n} \to B$, then \[ (C_{1} + C_{2} + \dots + C_{n})/n \to AB. \] Deduce that \emph{if $\sum c_{n}$ is convergent then its sum is~$AB$}. This result is known as \Emph{Abel's Theorem on the multiplication of Series}. We have already seen that we can multiply the series $\sum a_{n}$, $\sum b_{n}$ in this way if both series are \emph{absolutely} convergent: Abel's Theorem shows that we can do so even if one or both are not absolutely convergent, \emph{provided only that the product series is convergent}. \PageSep{355} \Item{27.} Prove that \begin{align*} \tfrac{1}{2} \left(1 - \tfrac{1}{2} + \tfrac{1}{3} - \dots\right)^{2} &= \tfrac{1}{2} - \tfrac{1}{3} \left(1 + \tfrac{1}{2}\right) + \tfrac{1}{4} \left(1 + \tfrac{1}{2} + \tfrac{1}{3}\right) - \dots,\\ % \tfrac{1}{2} \left(1 - \tfrac{1}{3} + \tfrac{1}{5} - \dots\right)^{2} &= \tfrac{1}{2} - \tfrac{1}{4} \left(1 + \tfrac{1}{3}\right) + \tfrac{1}{6} \left(1 + \tfrac{1}{3} + \tfrac{1}{5}\right) - \dots. \end{align*} [Use Ex.~9 to establish the convergence of the series.] \Item{28.} For what values of $m$~and~$n$ is the integral $\ds\int_{0}^{\pi} \sin^{m} x (1 - \cos x)^{n}\, dx$ convergent? 
[If $m + 1$ and~$m + 2n + 1$ are positive.] \Item{29.} Prove that if $a > 1$ then \[ \int_{-1}^{1} \frac{dx}{(a - x) \sqrtp{1 - x^{2}}} = \frac{\pi}{\sqrtp{a^{2} - 1}}. \] \Item{30.} Establish the formulae \begin{alignat*}{2} \int_{0}^{\infty} F\{\sqrtp{x^{2} + 1} + x\}\, dx &= \tfrac{1}{2}\int_{1}^{\infty} &&\left(1 + \frac{1}{y^{2}}\right) F(y) \, dy,\\ % \int_{0}^{\infty} F\{\sqrtp{x^{2} + 1} - x\}\, dx &= \tfrac{1}{2}\int_{0}^{1} &&\left(1 + \frac{1}{y^{2}}\right) F(y)\, dy. \end{alignat*} In particular, prove that if $n > 1$ then \[ \int_{0}^{\infty} \frac{dx}{\{\sqrtp{x^{2} + 1} + x\}^{n}} = \int_{0}^{\infty} \{\sqrtp{x^{2} + 1} - x\}^{n}\, dx = \frac{n}{n^{2} - 1}. \] [In this and the succeeding examples it is of course supposed that the arbitrary functions which occur are such that the integrals considered have a meaning in accordance with the definitions of \SecNo[§§]{177}~\textit{et~seq.}] \Item{31.} Show that if $2y = ax - (b/x)$, where $a$~and~$b$ are positive, then $y$~increases steadily from $-\infty$ to~$\infty$ as $x$~increases from $0$ to~$\infty$. Hence show that \begin{align*} \int_{0}^{\infty} f\left\{\tfrac{1}{2}\left(ax + \frac{b}{x}\right)\right\} dx &= \frac{1}{a} \int_{-\infty}^{\infty} f\{\sqrtp{y^{2} + ab}\} \left\{1 + \frac{y}{\sqrtp{y^{2} + ab}}\right\} dy\\ &= \frac{2}{a} \int_{0}^{\infty} f\{\sqrtp{y^{2} + ab}\}\, dy. \end{align*} \Item{32.} Show that if $2y = ax + (b/x)$, where $a$~and~$b$ are positive, then two values of~$x$ correspond to any value of~$y$ greater than~$\sqrtp{ab}$. Denoting the greater of these by~$x_{1}$ and the less by~$x_{2}$, show that, as $y$~increases from~$\sqrtp{ab}$ towards~$\infty$, $x_{1}$~increases from~$\sqrtp{b/a}$ towards~$\infty$, and $x_{2}$~decreases from~$\sqrtp{b/a}$ to~$0$. 
Hence show that \begin{align*} \int_{\sqrtp{b/a}}^{\infty} f(y)\, dx_{1} &= \frac{1}{a} \int_{\sqrtp{ab}}^{\infty} f(y) \left\{\frac{y}{\sqrtp{y^{2} - ab}} + 1\right\} dy,\\ % \int_{0}^{\sqrtp{b/a}} f(y)\, dx_{2} &= \frac{1}{a} \int_{\sqrtp{ab}}^{\infty} f(y) \left\{\frac{y}{\sqrtp{y^{2} - ab}} - 1\right\} dy, \end{align*} and that \[ \int_{0}^{\infty} f\left\{\tfrac{1}{2}\left(ax + \frac{b}{x}\right)\right\} dx = \frac{2}{a} \int_{\sqrtp{ab}}^{\infty} \frac{yf(y)}{\sqrtp{y^{2} - ab}}\, dy = \frac{2}{a} \int_{0}^{\infty} f\{\sqrtp{z^{2} + ab}\}\, dz. \] \PageSep{356} \Item{33.} Prove the formula \[ \int_{0}^{\pi} f(\sec\tfrac{1}{2}x + \tan\tfrac{1}{2}x)\frac{dx}{\sqrtp{\sin x}} = \int_{0}^{\pi} f(\cosec x)\frac{dx}{\sqrtp{\sin x}}. \] \Item{34.} If $a$~and~$b$ are positive, then \[ \int_{0}^{\infty} \frac{dx}{(x^{2} + a^{2})(x^{2} + b^{2})} = \frac{\pi}{2ab(a + b)},\quad \int_{0}^{\infty} \frac{x^{2}\, dx}{(x^{2} + a^{2})(x^{2} + b^{2})} = \frac{\pi}{2(a + b)}. \] Deduce that if $\alpha$,~$\beta$, and~$\gamma$ are positive, and $\beta^{2} \geq \alpha\gamma$, then \[ \int_{0}^{\infty} \frac{dx}{\alpha x^{4} + 2\beta x^{2} + \gamma} = \frac{\pi}{2\sqrtp{2\gamma A}}, \quad \int_{0}^{\infty} \frac{x^{2}\, dx}{\alpha x^{4} + 2\beta x^{2} + \gamma} = \frac{\pi}{2\sqrtp{2\alpha A}}, \] where $A = \beta + \sqrtp{\alpha\gamma}$. Also deduce the last result from Ex.~31, by putting $f(y) = 1/(c^{2} + y^{2})$. The last two results remain true when $\beta^{2} < \alpha\gamma$, but their proof is then not quite so simple. \Item{35.} Prove that if $b$~is positive then \[ \int_{0}^{\infty} \frac{x^{2}\, dx}{(x^{2} - a^{2})^{2} + b^{2}x^{2}} = \frac{\pi}{2b},\quad \int_{0}^{\infty} \frac{x^{4}\, dx}{\{(x^{2} - a^{2})^{2} + b^{2}x^{2}\}^{2}} = \frac{\pi}{4b^{3}}. \] \Item{36.} Extend Schwarz's inequality (\Ref{Ch.}{VII}, \MiscEx{VII}~42) to infinite integrals of the first and second kinds. 
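%[** A modern numerical aside, not part of Hardy's text: the first pair of formulae in Ex.~34 can be checked by quadrature. The substitution $x = \tan t$ maps $(0, \infty)$ onto $(0, \frac{1}{2}\pi)$ and makes both integrands smooth and bounded; the midpoint rule and the sample values $a = 1$, $b = 2$ are my own choices.]

```python
import math

def quad_half_line(f, n=200_000):
    """Midpoint rule for the integral of f over (0, infinity) via x = tan t."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        x = math.tan(t)
        total += f(x) * (1.0 + x * x)   # dx = sec^2 t dt = (1 + tan^2 t) dt
    return total * h

a, b = 1.0, 2.0
lhs1 = quad_half_line(lambda x: 1.0 / ((x * x + a * a) * (x * x + b * b)))
lhs2 = quad_half_line(lambda x: x * x / ((x * x + a * a) * (x * x + b * b)))
print(lhs1, math.pi / (2 * a * b * (a + b)))   # both approx pi/12
print(lhs2, math.pi / (2 * (a + b)))           # both approx pi/6
```

With $a = 1$, $b = 2$ the predicted values are $\pi/12$ and $\pi/6$, and the quadrature reproduces them to well beyond six decimal places.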
\Item{37.} Prove that if $\phi(x)$~is the function considered at the end of~\SecNo[§]{178} then \[ \int_{0}^{\infty} \phi(x)\, dx = \sum_{0}^{\infty} \frac{1}{(n + 1)^{2}}. \] \Item{38.} Prove that \begin{alignat*}{2} \int_{1}^{\infty} dx \left(\int_{1}^{\infty} \frac{x - y}{(x + y)^{3}}\, dy\right) &= -1, \quad & \int_{1}^{\infty} dy \left(\int_{1}^{\infty} \frac{x - y}{(x + y)^{3}}\, dx\right) &= 1;\\ % \int_{1}^{\infty} dx \left(\int_{1}^{\infty} \frac{x^{2} - y^{2}}{(x^{2} + y^{2})^{2}}\, dy\right) &= -\tfrac{1}{4}\pi, \quad & \int_{1}^{\infty} dy \left(\int_{1}^{\infty} \frac{x^{2} - y^{2}}{(x^{2} + y^{2})^{2}}\, dx\right) &= \tfrac{1}{4}\pi. \end{alignat*} Establish similar results in which the limits of integration are $0$~and~$1$. \MathTrip{1913.} \end{Examples} \PageSep{357} \Chapter[THE LOGARITHMIC AND EXPONENTIAL FUNCTIONS] {IX}{THE LOGARITHMIC AND EXPONENTIAL FUNCTIONS \\ OF A REAL VARIABLE} \Paragraph{196.} \First{The} number of essentially different types of functions with which we have been concerned in the foregoing chapters is not very large. Among those which have occurred the most important for ordinary purposes are polynomials, rational functions, algebraical functions, explicit or implicit, and trigonometrical functions, direct or inverse. We are however far from having exhausted the list of functions which are important in mathematics. The gradual expansion of the range of mathematical knowledge has been accompanied by the introduction into analysis of one new class of function after another. These new functions have generally been introduced because it appeared that some problem which was occupying the attention of mathematicians was incapable of solution by means of the functions already known. The process may fairly be compared with that by which the irrational and complex numbers were first introduced, when it was found that certain algebraical equations could not be solved by means of the numbers already recognised. 
One of the most fruitful sources of new functions has been the problem of \emph{integration}. Attempts have been made to integrate some function~$f(x)$ in terms of functions already known. These attempts have failed; and after a certain number of failures it has begun to appear probable that the problem is insoluble. Sometimes it has been \emph{proved} that this is so; but as a rule such a strict proof has not been forthcoming until later on. Generally it has happened that mathematicians have taken the impossibility for granted as soon as they have become reasonably convinced of it, and have introduced a new function~$F(x)$ \emph{defined} by its \PageSep{358} possessing the required property, viz.\ that $F'(x) = f(x)$. Starting from this definition, they have investigated the properties of~$F(x)$; and it has then appeared that $F(x)$~has properties which no finite combination of the functions previously known could possibly have; and thus the correctness of the assumption that the original problem could not possibly be solved has been established. One such case occurred in the preceding pages, when in \Ref{Ch.}{VI} we defined the function~$\log x$ by means of the equation \[ \log x = \int \frac{dx}{x}. \] \begin{Remark} Let us consider what grounds we have for supposing $\log x$ to be a really new function. We have seen already (\Ex{xlii}.~4) that it cannot be a rational function, since the derivative of a rational function is a rational function whose denominator contains only repeated factors. The question whether it can be an algebraical or trigonometrical function is more difficult. But it is very easy to become convinced by a few experiments that differentiation will never get rid of algebraical irrationalities. For example, the result of differentiating $\sqrtp{1 + x}$ any number of times is always the product of $\sqrtp{1 + x}$ by a rational function, and so generally. 
The reader should test the correctness of the statement by experimenting with a number of examples. Similarly, if we differentiate a function which involves $\sin x$ or $\cos x$, one or other of these functions persists in the result. We have, therefore, not indeed a strict proof that $\log x$~is a new function---that we do not profess to give\footnotemark---but a reasonable presumption that it is. We shall therefore treat it as such, and we shall find on examination that its properties are quite unlike those of any function which we have as yet encountered. \end{Remark} \footnotetext{For such a proof see the author's tract quoted on \PageRef{p.}{236}.} \Paragraph{197. Definition of $\log x$.} We define $\log x$, the logarithm of~$x$, by the equation \[ \log x = \int_{1}^{x} \frac{dt}{t}. \] We must suppose that $x$~is positive, since (\Ex{lxxvi}.~2) the integral has no meaning if the range of integration includes the point $x = 0$. We might have chosen a lower limit other than~$1$; but $1$~proves to be the most convenient. With this definition $\log 1 = 0$. We shall now consider how $\log x$ behaves as $x$~varies from $0$ towards~$\infty$. It follows at once from the definition that $\log x$~is a \PageSep{359} continuous function of~$x$ which increases steadily with~$x$ and has a derivative \[ D_{x} \log x = 1/x; \] and it follows from \SecNo[§]{175} that $\log x$ tends to~$\infty$ as $x \to \infty$. If $x$~is positive but less than~$1$, then $\log x$~is negative. For \[ \log x = \int_{1}^{x} \frac{dt}{t} = -\int_{x}^{1} \frac{dt}{t} < 0. \] Moreover, if we make the substitution $t = 1/u$ in the integral, we obtain \[ \log x = \int_{1}^{x} \frac{dt}{t} = -\int_{1}^{1/x} \frac{du}{u} = -\log(1/x). \] Thus $\log x$ tends steadily to~$-\infty$ as $x$~decreases from $1$ to~$0$. The general form of the graph of the logarithmic function is shown in \Fig{52}. Since the derivative of~$\log x$ is~$1/x$, the slope of %[Illustration: Fig. 52.] 
\Figure[2.75in]{52}{p359} the curve is very gentle when $x$~is very large, and very steep when $x$~is very small. \begin{Examples}{LXXXII.} \Item{1.} Prove from the definition that if $u > 0$ then \[ u/(1 + u) < \log(1 + u) < u. \] [For $\ds\log(1 + u) = \int_{0}^{u} \frac{dt}{1 + t}$, and the subject of integration lies between $1$ and $1/(1 + u)$.] \Item{2.} Prove that $\log(1 + u)$ lies between $u - \dfrac{u^{2}}{2}$ and $u - \dfrac{u^{2}}{2(1 + u)}$ when $u$~is positive. [Use the fact that $\ds\log(1 + u) = u - \int_{0}^{u} \frac{t\, dt}{1 + t}$.] \Item{3.} If $0 < u < 1$ then $u < -\log(1 - u) < u/(1 - u)$. \Item{4.} Prove that \[ \lim_{x\to 1} \frac{\log x}{x - 1} = \lim_{t\to 0} \frac{\log (1 + t)}{t} = 1. \] %[** TN: Indented for consistency; no indent in the original] [Use Ex.~1.] \end{Examples} \PageSep{360} \Paragraph{198. The functional equation satisfied by $\log x$.} \emph{The function $\log x$ satisfies the functional equation} \[ f(xy) = f(x) + f(y). \Tag{(1)} \] For, making the substitution $t = yu$, we see that \begin{align*} \log xy &= \int_{1}^{xy} \frac{dt}{t} = \int_{1/y}^{x} \frac{du}{u} = \int_{1}^{x} \frac{du}{u} - \int_{1}^{1/y} \frac{du}{u}\\ &= \log x - \log(1/y) = \log x + \log y, \end{align*} which proves the theorem. \begin{Examples}{LXXXIII.} \Item{1.} It can be shown that there is no solution of the equation~\Eq{(1)} which possesses a differential coefficient and is fundamentally distinct from $\log x$. For when we differentiate the functional equation, first with respect to~$x$ and then with respect to~$y$, we obtain the two equations \[ yf'(xy) = f'(x),\quad xf'(xy) = f'(y); \] and so, eliminating $f'(xy)$, $xf'(x) = yf'(y)$. But if this is true for every pair of values of $x$~and~$y$, then we must have $xf'(x) = C$, or $f'(x) = C/x$, where $C$~is a constant. Hence \[ f(x) = \int \frac{C}{x}\, dx + C' = C\log x + C', \] and it is easy to see that $C' = 0$. 
Thus there is no solution fundamentally distinct from~$\log x$, except the trivial solution $f(x) = 0$, obtained by taking $C = 0$. \Item{2.} Show in the same way that there is no solution of the equation \[ f(x) + f(y) = f\left(\frac{x + y}{1 - xy}\right) \] which possesses a differential coefficient and is fundamentally distinct from $\arctan x$. \end{Examples} \Paragraph{199. The manner in which $\log x$ tends to infinity with~$x$.} It will be remembered that in \Ex{xxxvi}.~6 we defined certain different ways in which a function of~$x$ may tend to infinity with~$x$, distinguishing between functions which, when $x$~is large, are of the first, second, third,~\dots\ orders of greatness. A function~$f(x)$ was said to be of the $k$th~order of greatness when $f(x)/x^{k}$ tends to a limit different from zero as $x$~tends to infinity. It is easy to define a whole series of functions which tend to infinity with~$x$, but whose order of greatness is smaller than the first. Thus $\sqrt{x}$, $\sqrt[3]{x}$, $\sqrt[4]{x}$,~\dots\ are such functions. We may say generally that $x^{\alpha}$, where $\alpha$~is any positive rational number, is of the $\alpha$th~order of greatness when $x$~is large. We may suppose $\alpha$ as small \PageSep{361} as we please, \DPtypo{e.g.}{\eg}\ less than~$.000\MS000\MS1$. And it might be thought that by giving $\alpha$ all possible values we should exhaust the possible `orders of infinity' of~$f(x)$. At any rate it might be supposed that if $f(x)$~tends to infinity with~$x$, however slowly, we could always find a value of~$\alpha$ so small that $x^{\alpha}$~would tend to infinity more slowly still; and, conversely, that if $f(x)$~tends to infinity with~$x$, however rapidly, we could always find a value of~$\alpha$ so great that $x^{\alpha}$~would tend to infinity more rapidly still. Perhaps the most interesting feature of the function $\log x$ is its behaviour as $x$~tends to infinity. 
It shows that the presupposition stated above, which seems so natural, is unfounded. \emph{The logarithm of~$x$ tends to infinity with~$x$, but more slowly than \Emph{any} positive power of~$x$, integral or fractional.} In other words $\log x \to \infty$ but \[ \frac{\log x}{x^{\alpha}} \to 0 \] for \emph{all} positive values of~$\alpha$. This fact is sometimes expressed loosely by saying that the `order of infinity of~$\log x$ is infinitely small'; but the reader will hardly require at this stage to be warned against such modes of expression. \Paragraph{200. Proof that $(\log x)/x^{\alpha} \to 0$ as $x \to \infty$.} Let $\beta$~be any positive number. Then $1/t < 1/t^{1-\beta}$ when $t > 1$, and so \[ \log x = \int_{1}^{x} \frac{dt}{t} < \int_{1}^{x} \frac{dt}{t^{1-\beta}}, \] or \[ \log x < (x^{\beta} - 1)/\beta < x^{\beta}/\beta, \] when $x > 1$. Now if $\alpha$~is any positive number we can choose a smaller positive value of~$\beta$. And then \[ 0 < (\log x)/x^{\alpha} < x^{\beta-\alpha}/\beta \quad (x > 1). \] But, since $\alpha > \beta$, $x^{\beta-\alpha}/\beta \to 0$ as $x \to \infty$, and therefore \[ (\log x)/x^{\alpha} \to 0. \] \Paragraph{201. The behaviour of $\log x$ as $x \to +0$.} Since \[ (\log x)/x^{\alpha} = -y^{\alpha} \log y \] if $x = 1/y$, it follows from the theorem proved above that \[ \lim_{y\to +0} y^{\alpha} \log y = -\lim_{x\to +\infty} (\log x)/x^{\alpha} = 0. \] Thus $\log x$ tends to~$-\infty$ and $\log(1/x) = -\log x$ to~$\infty$ as $x$~tends to zero by positive values, but $\log(1/x)$ tends to~$\infty$ more slowly than any positive power of~$1/x$, integral or fractional. \PageSep{362} \begin{Remark} \Paragraph{202. Scales of infinity. 
The logarithmic scale.} Let us consider once more the series of functions \[ x,\quad \sqrt{x},\quad \sqrt[3]{x},\ \dots,\quad \sqrt[n]{x},\ \dots, \] which possesses the property that, if $f(x)$ and~$\phi(x)$ are any two of the functions contained in it, then $f(x)$ and~$\phi(x)$ both tend to~$\infty$ as $x \to \infty$, while $f(x)/\phi(x)$ tends to $0$ or to~$\infty$ according as $f(x)$~occurs to the right or the left of~$\phi(x)$ in the series. We can now continue this series by the insertion of new terms to the right of all those already written down. We can begin with $\log x$, which tends to infinity more slowly than any of the old terms. Then $\sqrtp{\log x}$ tends to~$\infty$ more slowly than~$\log x$, $\sqrtp[3]{\log x}$ than~$\sqrtp{\log x}$, and so on. Thus we obtain a series \[ x,\quad \sqrt{x},\quad \sqrt[3]{x},\ \dots,\quad \sqrt[n]{x},\ \dots\quad \log x,\quad \sqrtp{\log x},\quad \sqrtp[3]{\log x},\ \dots\quad \sqrtp[n]{\log x},\ \dots \] formed of two simply infinite series arranged one after the other. But this is not all. Consider the function $\log\log x$, the logarithm of~$\log x$. Since $(\log x)/x^{\alpha} \to 0$, for all positive values of~$\alpha$, it follows on putting $x = \log y$ that \[ (\log\log y)/(\log y)^{\alpha} = (\log x)/x^{\alpha} \to 0. \] Thus $\log\log y$ tends to~$\infty$ with~$y$, but more slowly than any power of~$\log y$. Hence we may continue our series in the form \begin{gather*} %[** TN: Set on one line in the original] x,\quad \sqrt{x},\quad \sqrt[3]{x},\ \dots\qquad \log x,\quad \sqrtp{\log x},\quad \sqrtp[3]{\log x},\ \dots\\ \log\log x,\quad \sqrtp{\log\log x},\ \dots\quad \sqrtp[n]{\log\log x},\ \dots; \end{gather*} and it will by now be obvious that by introducing the functions $\log\log\log x$, $\log\log\log\log x$,~\dots\ we can prolong the series to any extent we like. 
By putting $x = 1/y$ we obtain a similar scale of infinity for functions of~$y$ which tend to~$\infty$ as $y$~tends to~$0$ by positive values.\footnote {For fuller information as to `scales of infinity' see the author's tract `Orders of Infinity', \textit{Camb.\ Math.\ Tracts}, No.~12.\PageLabel{362}} \end{Remark} \begin{Examples}{LXXXIV.} \Item{1.} Between any two terms $f(x)$,~$F(x)$ of the series we can insert a new term~$\phi(x)$ such that $\phi(x)$~tends to~$\infty$ more slowly than~$f(x)$ and more rapidly than~$F(x)$. [Thus between $\sqrt{x}$ and~$\sqrt[3]{x}$ we could insert~$x^{5/12}$: between $\sqrtp{\log x}$ and~$\sqrtp[3]{\log x}$ we could insert $(\log x)^{5/12}$. And, generally, $\phi(x) = \sqrtb{f(x) F(x)}$ satisfies the conditions stated.] \Item{2.} Find a function which tends to~$\infty$ more slowly than~$\sqrt{x}$, but more rapidly than~$x^{\alpha}$, where $\alpha$~is any rational number less than~$1/2$. [$\sqrt{x}/(\log x)$~is such a function; or $\sqrt{x}/(\log x)^{\beta}$, where $\beta$~is any positive rational number.] \Item{3.} Find a function which tends to~$\infty$ more slowly than~$\sqrt{x}$, but more rapidly than~$\sqrt{x}/(\log x)^{\alpha}$, where $\alpha$~is any rational number. [The function $\sqrt{x}/(\log\log x)$ is such a function. It will be gathered from these examples that \emph{incompleteness} is an inherent characteristic of the logarithmic scale of infinity.] \Item{4.} How does the function \[ f(x) = \{x^{\alpha} (\log x)^{\alpha'} (\log\log x)^{\alpha''}\}/ \{x^{\beta} (\log x)^{\beta'} (\log\log x)^{\beta''}\} \] behave as $x$~tends to~$\infty$? [If $\alpha \neq \beta$ then the behaviour of \[ f(x) = x^{\alpha-\beta} (\log x)^{\alpha'-\beta'} (\log\log x)^{\alpha''-\beta''} \] \PageSep{363} is dominated by that of~$x^{\alpha-\beta}$. 
If $\alpha = \beta$ then the power of~$x$ disappears and the behaviour of~$f(x)$ is dominated by that of $(\log x)^{\alpha'-\beta'}$, unless $\alpha' = \beta'$, when it is dominated by that of $(\log\log x)^{\alpha''-\beta''}$. Thus $f(x) \to \infty$ if $\alpha > \beta$, or $\alpha = \beta$, $\alpha' > \beta'$, or $\alpha = \beta$, $\alpha' = \beta'$, $\alpha'' > \beta''$, and $f(x) \to 0$ if $\alpha < \beta$, or $\alpha = \beta$, $\alpha' < \beta'$, or $\alpha = \beta$, $\alpha' = \beta'$, $\alpha'' < \beta''$.] \Item{5.} Arrange the functions $x/\sqrtp{\log x}$, $x\sqrtp{\log x}/\log\log x$, $x\log\log x/\sqrtp{\log x}$, $(x\log\log\log x)/\sqrtp{\log\log x}$ according to the rapidity with which they tend to infinity as $x \to \infty$. \Item{6.} Arrange \[ \log\log x/(x\log x),\quad (\log x)/x,\quad x\log\log x/\sqrtp{x^{2} + 1},\quad \{\sqrtp{x + 1}\}/x(\log x)^{2} \] according to the rapidity with which they tend to zero as $x \to \infty$. \Item{7.} Arrange \[ x\log\log(1/x),\quad \sqrt{x}/\{\log(1/x)\},\quad \sqrtb{x\sin x\log(1/x)},\quad (1 - \cos x)\log(1/x) \] according to the rapidity with which they tend to zero as $x \to +0$. \Item{8.} Show that \[ D_{x}\log\log x = 1/(x\log x),\quad D_{x}\log\log\log x = 1/(x\log x\log\log x), \] and so on. \Item{9.} Show that \[ D_{x}(\log x)^{\alpha} = \alpha/\{x(\log x)^{1-\alpha}\},\quad D_{x}(\log\log x)^{\alpha} = \alpha/\{x\log x(\log\log x)^{1-\alpha}\}, \] and so on. \end{Examples} \Paragraph{203. The number $e$\Add{.}} We shall now introduce a number, usually denoted by~$e$, which is of immense importance in higher mathematics. It is, like~$\pi$, one of the fundamental constants of analysis. We define~$e$ as \emph{the number whose logarithm is~$1$}. In other words $e$~is defined by the equation \[ 1 = \int_{1}^{e} \frac{dt}{t}. \] Since $\log x$~is an increasing function of~$x$, in the stricter sense of \SecNo[§]{95}, it can only pass once through the value~$1$. 
Hence our definition does in fact define one definite number. Now $\log xy = \log x + \log y$ and so \[ \log x^{2} = 2\log x,\quad \log x^{3} = 3\log x,\ \dots,\quad \log x^{n} = n\log x, \] where $n$~is any positive integer. Hence \[ \log e^{n} = n\log e = n. \] \PageSep{364} Again, if $p$~and~$q$ are any positive integers, and $e^{p/q}$~denotes the positive $q$th~root of~$e^{p}$, we have \[ p = \log e^{p} = \log(e^{p/q})^{q} = q\log e^{p/q}, \] so that $\log e^{p/q} = p/q$. Thus, if $y$~has any positive rational value, and $e^{y}$~denotes the positive $y$th~power of~$e$, we have \[ \log e^{y} = y, \Tag{(1)} \] and $\log e^{-y} = -\log e^{y} = -y$. Hence the equation~\Eq{(1)} is true for all rational values of~$y$, positive or negative. In other words the equations \[ y = \log x,\quad x = e^{y} \Tag{(2)} \] are consequences of one another so long as $y$~is rational and $e^{y}$~has its positive value. At present we have not given any definition of a power such as~$e^{y}$ in which the index is irrational, and the function~$e^{y}$ is defined for rational values of~$y$ only. \Par{Example.} Prove that $2 < e < 3$. [In the first place it is evident that \[ \int_{1}^{2} \frac{dt}{t} < 1, \] and so $2 < e$. Also \[ \int_{1}^{3} \frac{dt}{t} = \int_{1}^{2} \frac{dt}{t} + \int_{2}^{3} \frac{dt}{t} = \int_{0}^{1} \frac{du}{2 - u} + \int_{0}^{1} \frac{du}{2 + u} = 4\int_{0}^{1} \frac{du}{4 - u^{2}} > 1, \] so that $e < 3$.] \Paragraph{204. The exponential function.} We now define the \emph{exponential function}~$e^{y}$ for all real values of~$y$ as the inverse of the logarithmic function. In other words we write \[ x = e^{y} \] if $y = \log x$. We saw that, as $x$~varies from $0$ towards~$\infty$, $y$~increases steadily, in the stricter sense, from $-\infty$ towards~$\infty$. Thus to one value of~$x$ corresponds one value of~$y$, and conversely. Also $y$~is a continuous function of~$x$, and it follows from \SecNo[§]{109} that $x$~is likewise a continuous function of~$y$. 
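%[** A modern numerical aside, not part of Hardy's text: the definition of $e$ in \S\,203 can be realised directly. Define $\log x$ by Simpson's rule applied to $\int_{1}^{x} dt/t$ and bisect for the root of $\log x = 1$ on the interval $(2, 3)$ established in the Example above; the step count and iteration count are arbitrary.]

```python
import math

def log_integral(x, n=10_000):
    """log x defined as the integral of dt/t from 1 to x (composite Simpson)."""
    h = (x - 1.0) / n
    s = 1.0 + 1.0 / x                      # endpoint values of 1/t
    for k in range(1, n):
        t = 1.0 + k * h
        s += (4.0 if k % 2 else 2.0) / t   # Simpson weights 4, 2, 4, ...
    return s * h / 3.0

# bisect log_integral(x) = 1 on [2, 3]
lo, hi = 2.0, 3.0
assert log_integral(lo) < 1 < log_integral(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if log_integral(mid) < 1:
        lo = mid
    else:
        hi = mid
print(lo, math.e)   # both approx 2.718281828...
```

The root agrees with the standard value of $e$ to full double precision, since $1/t$ is smooth on $[1, 3]$ and Simpson's rule converges rapidly there.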
\begin{Remark} It is easy to give a direct proof of the continuity of the exponential function. For if $x = e^{y}$ and $x + \xi = e^{y+\eta}$ then \[ \eta = \int_{x}^{x+\xi} \frac{dt}{t}. \] Thus $|\eta|$~is greater than $\xi/(x + \xi)$ if $\xi > 0$, and than $|\xi|/x$ if $\xi < 0$; and if $\eta$~is very small $\xi$~must also be very small. \end{Remark} \PageSep{365} Thus $e^{y}$~is a positive and continuous function of~$y$ which increases steadily from $0$ towards~$\infty$ as $y$~increases from $-\infty$ towards~$\infty$. Moreover $e^{y}$~is the positive $y$th~power of the number~$e$, in accordance with the elementary definitions, whenever $y$~is a rational number. In particular $e^{y} = 1$ when $y = 0$. The general form of the graph of~$e^{y}$ is as shown in \Fig{53}. %[Illustration: Fig. 53.] \Figure[3in]{53}{p365} \Paragraph{205. The principal properties of the exponential function.} \Item{(1)}~If $x = e^{y}$, so that $y = \log x$, then $dy/dx = 1/x$ and \[ \frac{dx}{dy} = x = e^{y}. \] Thus \emph{the derivative of the exponential function is equal to the function itself}. More generally, if $x = e^{ay}$ then $dx/dy = ae^{ay}$. \Item{(2)} \emph{The exponential function satisfies the functional equation} \[ f(y + z) = f(y)f(z). \] This follows, when $y$ and~$z$ are rational, from the ordinary rules of indices. If $y$ or~$z$, or both, are irrational then we can choose two sequences $y_{1}$, $y_{2}$,~\dots, $y_{n}$,~\dots\ and $z_{1}$, $z_{2}$,~\dots, $z_{n}$,~\dots\ of rational numbers such that $\lim y_{n} = y$, $\lim z_{n} = z$. Then, since the exponential function is continuous, we have \[ e^{y} × e^{z} = \lim e^{y_{n}} × \lim e^{z_{n}} = \lim e^{y_{n}+z_{n}} = e^{y+z}. \] In particular $e^{y} × e^{-y} = e^{0} = 1$, or $e^{-y} = 1/e^{y}$. We may also deduce the functional equation satisfied by~$e^{y}$ from that satisfied by~$\log x$. 
For if $y_{1} = \log x_{1}$, $y_{2} = \log x_{2}$, so that $x_{1} = e^{y_{1}}$, $x_{2} = e^{y_{2}}$, then $y_{1} + y_{2} = \log x_{1} + \log x_{2} = \log x_{1}x_{2}$ and \[ e^{y_{1}+y_{2}} = e^{\log x_{1}x_{2}} = x_{1}x_{2} = e^{y_{1}} × e^{y_{2}}. \] \PageSep{366} \begin{Examples}{LXXXV.} \Item{1.} If $dx/dy = ax$ then $x = Ke^{ay}$, where $K$~is a constant. \Item{2.} There is no solution of the equation $f(y + z) = f(y)f(z)$ fundamentally distinct from the exponential function. [We assume that $f(y)$~has a differential coefficient. Differentiating the equation with respect to $y$~and~$z$ in turn, we obtain \[ f'(y + z) = f'(y)f(z),\quad f'(y + z) = f(y)f'(z) \] and so $f'(y)/f(y) = f'(z)/f(z)$, and therefore each is constant. Thus if $x = f(y)$ then $dx/dy = ax$, where $a$~is a constant, so that $x = Ke^{ay}$ (Ex.~1).] \Item{3.} Prove that $(e^{ay} - 1)/y \to a$ as $y \to 0$. [Applying the Mean Value Theorem, we obtain $e^{ay} - 1 = aye^{a\eta}$, where $0 < |\eta| < |y|$.] \end{Examples} \Paragraph{206.} \begin{Result}\Item{(3)} The function~$e^{y}$ tends to infinity with~$y$ more rapidly than any power of~$y$, or \[ \lim y^{\alpha}/e^{y} = \lim e^{-y}y^{\alpha} = 0 \] as $y \to \infty$, for all values of~$\alpha$ however great. \end{Result} We saw that $(\log x)/x^{\beta} \to 0$ as $x \to \infty$, for any positive value of~$\beta$ however small. Writing $\alpha$ for~$1/\beta$, we see that $(\log x)^{\alpha}/x \to 0$ for any value of~$\alpha$ however large. The result follows on putting $x = e^{y}$. It is clear also that $e^{\gamma y}$~tends to~$\infty$ if $\gamma > 0$, and to~$0$ if $\gamma < 0$, and in each case more rapidly than any power of~$y$. 
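As an illustrative aside (not in the original), the statement of \SecNo[§]{206} can be watched numerically. Since $y^{\alpha}/e^{y}$ overflows ordinary floating point long before the limit is visible, one follows its logarithm $\alpha\log y - y$ instead, which sinks to $-\infty$ however large $\alpha$ is.

```python
import math

# A modern aside (not in the text): y**alpha / e**y overflows floating
# point long before the limit shows itself, so we track its logarithm,
# alpha*log(y) - y, which tends to -infinity however large alpha is.

def log_ratio(y, alpha=100):
    return alpha * math.log(y) - y   # log of (y**alpha / e**y)

values = [log_ratio(y) for y in (1000, 2000, 5000)]
assert values[0] > values[1] > values[2]   # decreasing along the tail
assert values[2] < -4000                   # the ratio itself is < e**(-4000)
```

Even with $\alpha = 100$, by $y = 5000$ the ratio is smaller than $e^{-4000}$.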
\begin{Remark} From this result it follows that we can construct a `scale of infinity' similar to that constructed in \SecNo[§]{202}, but extending in the opposite direction; \ie\ a scale of functions which tend to~$\infty$ more and more rapidly as $x \to \infty$.\footnote {The exponential function was introduced by inverting the equation $y = \log x$ into $x = e^{y}$; and we have accordingly, up to the present, used $y$~as the independent and $x$~as the dependent variable in discussing its properties. We shall now revert to the more natural plan of taking~$x$ as the independent variable, except when it is necessary to consider a pair of equations of the type $y = \log x$, $x = e^{y}$ simultaneously, or when there is some other special reason to the contrary.} The scale is \[ x,\quad x^{2},\quad x^{3},\ \dots\quad e^{x},\quad e^{2x},\ \dots\quad e^{x^{2}},\ \dots,\quad e^{x^{3}},\ \dots,\quad e^{e^{x}},\ \dots, \] where of course $e^{x^{2}}$,~\dots, $e^{e^{x}}$,~\dots\ denote $e^{(x^{2})}$,~\dots, $e^{(e^{x})}$,~\dots. The reader should try to apply the remarks about the logarithmic scale, made in \SecNo[§]{202} and \Exs{lxxxiv}, to this `exponential scale' also. The two scales may of course (if the order of one is reversed) be combined into one scale \[ \dots\ \log\log x,\ \dots\quad \log x,\ \dots\quad x,\ \dots\quad e^{x},\ \dots\quad e^{e^{x}},\ \dots. \] \end{Remark} \Paragraph{207. The general power~$a^{x}$.} The function~$a^{x}$ has been defined only for rational values of~$x$, except in the particular case \PageSep{367} when $a = e$. We shall now consider the case in which $a$~is any positive number. Suppose that $x$~is a positive rational number~$p/q$. Then the positive value~$y$ of the power~$a^{p/q}$ is given by $y^{q} = a^{p}$; from which it follows that \[ q\log y = p\log a,\quad \log y = (p/q)\log a = x\log a, \] and so \[ y = e^{x\log a}. \] {\Loosen We take this as our \emph{definition} of~$a^{x}$ when $x$~is irrational. 
Thus $10^{\sqrt{2}} = e^{\sqrt{2}\log 10}$. It is to be observed that $a^{x}$, when $x$~is irrational, is defined only for positive values of~$a$, and is itself essentially positive; and that $\log a^{x} = x\log a$. The most important properties of the function~$a^{x}$ are as follows.} \Item{(1)} Whatever value $a$ may have, $a^{x} × a^{y} = a^{x+y}$ and $(a^{x})^{y} = a^{xy}$. In other words the laws of indices hold for irrational no less than for rational indices. For, in the first place, \[ a^{x} × a^{y} = e^{x\log a} × e^{y\log a} = e^{(x+y)\log a} = a^{x+y}; \] and in the second \[ (a^{x})^{y} = e^{y\log a^{x}} = e^{xy\log a} = a^{xy}. \] \Item{(2)} If $a > 1$ then $a^{x} = e^{x\log a} = e^{\alpha x}$, where $\alpha$~is positive. The graph of~$a^{x}$ is in this case similar to that of~$e^{x}$, and $a^{x} \to \infty$ as $x \to \infty$, more rapidly than any power of~$x$. If $a < 1$ then $a^{x} = e^{x\log a} = e^{-\beta x}$, where $\beta$~is positive. The graph of~$a^{x}$ is then similar in shape to that of~$e^{x}$, but reversed as regards right and left, and $a^{x} \to 0$ as $x \to \infty$, more rapidly than any power of~$1/x$. \Item{(3)} $a^{x}$~is a continuous function of~$x$, and \[ D_{x} a^{x} = D_{x} e^{x\log a} = e^{x\log a} \log a = a^{x} \log a. \] \Item{(4)} $a^{x}$~is also a continuous function of~$a$, and \[ D_{a} a^{x} = D_{a} e^{x\log a} = e^{x\log a} (x/a) = xa^{x-1}. \] \Item{(5)} $(a^{x} - 1)/x \to \log a$ as $x \to 0$. This of course is a mere corollary from the fact that $D_{x}a^{x} = a^{x}\log a$, but the particular form of the result is often useful; it is of course equivalent to the result (\Ex{lxxxv}.~3) that $(e^{\alpha x} - 1)/x \to \alpha$ as $x \to 0$. \begin{Remark} In the course of the preceding chapters a great many results involving the function~$a^{x}$ have been stated with the limitation that $x$~is rational. The definition and theorems given in this section enable us to remove this restriction. 
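A modern aside, not part of the text: with $a^{x}$ defined as $e^{x\log a}$, the index laws of~(1) can be spot-checked at irrational exponents using Python's standard \texttt{math} module. The helper name \texttt{power} is illustrative only.

```python
import math

# A modern aside (not part of the text): with a**x defined as
# e**(x * log a), the index laws of (1) hold at irrational exponents.

def power(a, x):
    return math.exp(x * math.log(a))   # the definition a**x = e**(x log a)

a, x, y = 10.0, math.sqrt(2), math.pi
assert abs(power(a, x) * power(a, y) - power(a, x + y)) < 1e-9
assert abs(power(power(a, x), y) - power(a, x * y)) < 1e-6
```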
\end{Remark} \PageSep{368} \Paragraph{208. The representation of~$e^{x}$ as a limit.} In \Ref{Ch.}{IV}, \SecNo[§]{73}, we proved that $\{1 + (1/n)\}^{n}$ tends, as $n \to \infty$, to a limit which we denoted provisionally by~$e$. We shall now identify this limit with the number~$e$ of the preceding sections. We can however establish a more general result, viz.\ that expressed by the equations \[ \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} = \lim_{n\to\infty} \left(1 - \frac{x}{n}\right)^{-n} = e^{x}. \Tag{(1)} \] As the result is of very great importance, we shall indicate alternative lines of proof. \Item{(1)} Since \[ \frac{d}{dt} \log(1 + xt) = \frac{x}{1 + xt}, \] it follows that \[ \lim_{h\to 0} \frac{\log(1 + xh)}{h} = x. \] If we put $h = 1/\xi$, we see that \[ \lim \xi \log\left(1 + \frac{x}{\xi}\right) = x \] as $\xi \to \infty$ or $\xi \to -\infty$. Since the exponential function is continuous it follows that \[ \left(1 + \frac{x}{\xi}\right)^{\xi} = e^{\xi\log\{1+(x/\xi)\}} \to e^{x} \] as $\xi \to \infty$ or $\xi \to -\infty$: \ie\ that \[ \lim_{\xi\to\infty} \left(1 + \frac{x}{\xi}\right)^{\xi} = \lim_{\xi\to -\infty} \left(1 + \frac{x}{\xi}\right)^{\xi} = e^{x}. \Tag{(2)} \] If we suppose that $\xi \to \infty$ or $\xi \to -\infty$ through integral values only, we obtain the result expressed by the equations~\Eq{(1)}. \begin{Remark} \Item{(2)} If $n$~is any positive integer, however large, and $x > 1$, we have \[ \int_{1}^{x} \frac{dt}{t^{1+(1/n)}} < \int_{1}^{x} \frac{dt}{t} < \int_{1}^{x} \frac{dt}{t^{1-(1/n)}}, \] or \[ n(1 - x^{-1/n}) < \log x < n(x^{1/n} - 1). \Tag{(3)} \] Writing $y$ for~$\log x$, so that $y$~is positive and $x = e^{y}$, we obtain, after some simple transformations, \[ \left(1 + \frac{y}{n}\right)^{n} < x < \left(1 - \frac{y}{n}\right)^{-n}. \Tag{(4)} \] Now let \[ 1 + \frac{y}{n} = \eta_{1},\quad 1 - \frac{y}{n} = \frac{1}{\eta_{2}}. 
\] \PageSep{369} Then $0 < \eta_{1} < \eta_{2}$, at any rate for sufficiently large values of~$n$; and, by~\Eq{(9)} of \SecNo[§]{74}, \[ \eta_{2}^{n} - \eta_{1}^{n} < n\eta_{2}^{n-1} (\eta_{2} - \eta_{1}) = y^{2}\eta_{2}^{n}/n, \] which evidently tends to $0$ as $n \to \infty$. The result now follows from the inequalities~\Eq{(4)}. The more general result~\Eq{(2)} may be proved in the same way, if we replace~$1/n$ by a continuous variable~$h$. \Paragraph{209. The representation of $\log x$ as a limit.} We can also prove (cf.\ \SecNo[§]{75}) that \[ \lim n(1 - x^{-1/n}) = \lim n(x^{1/n} - 1) = \log x. \] For \[ n(x^{1/n} - 1) - n(1 - x^{-1/n}) = n(x^{1/n} - 1)(1 - x^{-1/n}), \] which tends to zero as $n \to \infty$, since $n(x^{1/n} - 1)$ tends to a limit (\SecNo[§]{75}) and $x^{-1/n}$ to~$1$ (\Ex{xxvii}.~10). The result now follows from the inequalities~\Eq{(3)} of \SecNo[§]{208}. \end{Remark} \begin{Examples}{LXXXVI.} \Item{1.} Prove, by taking $y = 1$ and $n = 6$ in the inequalities~\Eq{(4)} of \SecNo[§]{208}, that $2.5 < e < 2.9$. \Item{2.} Prove that if $t > 1$ then $(t^{1/n} - t^{-1/n})/(t - t^{-1}) < 1/n$, and so that if $x > 1$ then \[ \int_{1}^{x} \frac{dt}{t^{1-(1/n)}} - \int_{1}^{x} \frac{dt}{t^{1+(1/n)}} < \frac{1}{n} \int_{1}^{x} \left(t - \frac{1}{t}\right) \frac{dt}{t} = \frac{1}{n} \left(x + \frac{1}{x} - 2\right). \] Hence deduce the results of \SecNo[§]{209}. \Item{3.} If $\xi_{n}$~is a function of~$n$ such that $n\xi_{n} \to l$ as $n \to \infty$, then $(1 + \xi_{n})^{n} \to e^{l}$. [Writing $n\log(1 + \xi_{n})$ in the form \[ l \left(\frac{n\xi_{n}}{l}\right) \frac{\log(1 + \xi_{n})}{\xi_{n}}, \] and using \Ex{lxxxii}.~4, we see that $n\log(1 + \xi_{n})\to l$.] \Item{4.} If $n\xi_{n} \to \infty$, then $(1 + \xi_{n})^{n} \to \infty$; and if $1 + \xi_{n} > 0$ and $n\xi_{n} \to -\infty$, then \[ (1 + \xi_{n})^{n} \to 0. \] \Item{5.} Deduce from~\Eq{(1)} of \SecNo[§]{208} the theorem that $e^{y}$~tends to infinity more rapidly than any power of~$y$. 
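As a computational footnote to \SecNo[§]{208} (a modern aside, not in the original), the two sides of inequality~(4) can be seen squeezing $e^{x}$ together as $n$ grows, the gap closing roughly like $x^{2}e^{x}/n$.

```python
import math

# A modern aside on §208 (not in the original): for x > 0 inequality (4)
# gives (1 + x/n)**n < e**x < (1 - x/n)**(-n), and the gap closes
# roughly like x**2 * e**x / n as n grows.

x, n = 1.5, 10**6
lower = (1 + x / n) ** n
upper = (1 - x / n) ** (-n)
assert lower < math.exp(x) < upper
assert upper - lower < 1e-4
```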
\end{Examples} \Paragraph{210. Common logarithms.} The reader is probably familiar with the idea of a logarithm and its use in numerical calculation. He will remember that in elementary algebra $\log_{a} x$, the logarithm of~$x$ to the base~$a$, is defined by the equations \[ x = a^{y},\quad y = \log_{a} x. \] This definition is of course applicable only when $y$~is rational, though this point is often passed over in silence. \PageSep{370} Our logarithms are therefore logarithms to the base~$e$. For numerical work logarithms to the base~$10$ are used. If \[ y = \log x = \log_{e} x,\quad z = \log_{10} x, \] then $x = e^{y}$ and also $x = 10^{z} = e^{z\log 10}$, so that \[ \log_{10} x = (\log_{e} x)/(\log_{e} 10). \] Thus it is easy to pass from one system to the other when once $\log_{e} 10$ has been calculated. It is no part of our purpose in this book to go into details concerning the practical uses of logarithms. If the reader is not familiar with them he should consult some text-book on Elementary Algebra or Trigonometry.\footnote {See for example Chrystal's \textit{Algebra}, vol.~i, ch.~\textsc{xxi}. The value of~$\log_{e} 10$ is $2.302\dots$ and that of its reciprocal~$.434\dots$.} \begin{Examples}{LXXXVII.} \Item{1.} Show that \[ D_{x} e^{ax}\cos bx = re^{ax} \cos(bx + \theta),\quad D_{x} e^{ax}\sin bx = re^{ax} \sin(bx + \theta) \] where $r = \sqrtp{a^{2} + b^{2}}$, $\cos\theta = a/r$, $\sin\theta = b/r$. Hence determine the $n$th~derivatives {\Loosen of the functions $e^{ax}\cos bx$, $e^{ax}\sin bx$, and show in particular that $D_{x}^{n} e^{ax} = a^{n} e^{ax}$.} \Item{2.} Trace the curve $y = e^{-ax}\sin bx$, where $a$~and~$b$ are positive. Show that $y$~has an infinity of maxima whose values form a geometrical progression and which lie on the curve \[ y = \frac{b}{\sqrtp{a^{2} + b^{2}}}\, e^{-ax}. 
\] \MathTrip{1912.} \Item{3.} \Topic{Integrals containing the exponential function.} Prove that \begin{align*} %[** TN: Set on one line in the original] \int e^{ax}\cos bx\, dx &= \frac{a\cos bx + b\sin bx}{a^{2} + b^{2}}\, e^{ax}, \\ \int e^{ax}\sin bx\, dx &= \frac{a\sin bx - b\cos bx}{a^{2} + b^{2}}\, e^{ax}. \end{align*} [Denoting the two integrals by $I$,~$J$, and integrating by parts, we obtain \[ aI = e^{ax}\cos bx + bJ,\quad aJ = e^{ax}\sin bx - bI. \] Solve these equations for $I$~and~$J$.] \Item{4.} Prove that the successive areas bounded by the curve of Ex.~2 and the positive half of the axis of~$x$ form a geometrical progression, and that their sum is \[ \frac{b}{a^{2} + b^{2}}\, \frac{1 + e^{-a\pi/b}}{1 - e^{\DPtypo{}{-}a\pi/b}}. \] \Item{5.} Prove that if $a > 0$ then \[ \int_{0}^{\infty} e^{-ax}\cos bx\, dx = \frac{a}{a^{2} + b^{2}},\quad \int_{0}^{\infty} e^{-ax}\sin bx\, dx = \frac{b}{a^{2} + b^{2}}. \] \PageSep{371} \Item{6.} If $I_{n} = \ds\int e^{ax}x^{n}\, dx$ then $aI_{n} = e^{ax}x^{n} - nI_{n-1}$. [Integrate by parts. It follows that $I_{n}$~can be calculated for all positive integral values of~$n$.] \Item{7.} Prove that, if $n$~is a positive integer, then \[ \int_{0}^{\xi} e^{-x}x^{n}\, dx = n!\, e^{-\xi} \left( e^{\xi} - 1 - \xi - \frac{\xi^{2}}{2!} - \dots - \frac{\xi^{n}}{n!} \right) \] and \[ \int_{0}^{\infty} e^{-x}x^{n}\, dx = n!. \] \Item{8.} {\Loosen Show how to find the integral of any rational function of~$e^{x}$. [Put $x = \log u$, when $e^{x} = u$, $dx/du = 1/u$, and the integral is transformed into that of a rational function of~$u$.]} \Item{9.} Integrate \[ \frac{e^{2x}}{(c^{2}e^{x} + a^{2}e^{-x})(c^{2}e^{x} + b^{2}e^{-x})}, \] distinguishing the cases in which $a$~is and is not equal to~$b$. \Item{10.} Prove that we can integrate any function of the form $P(x, e^{ax}, e^{bx}, \dots)$, where $P$~denotes a polynomial. 
[This follows from the fact that $P$~can be expressed as the sum of a number of terms of the type $Ax^{m}e^{kx}$, where $m$~is a positive integer.] \Item{11.} Show how to integrate any function of the form \[ P(x,\ e^{ax},\ e^{bx},\ \dots,\ \cos lx,\ \cos mx,\ \dots,\ \sin lx,\ \sin mx,\ \dots). \] \Item{12.} Prove that $\ds\int_{a}^{\infty} e^{-\lambda x} R(x)\, dx$, where $\lambda > 0$ and $a$~is greater than the greatest root of the denominator of~$R(x)$, is convergent. [This follows from the fact that $e^{\lambda x}$~tends to infinity more rapidly than any power of~$x$.] \Item{13.} Prove that $\ds\int_{-\infty}^{\infty} e^{-\lambda x^{2} + \mu x}\, dx$, where $\lambda > 0$, is convergent for all values of~$\mu$, and that the same is true of $\ds\int_{-\infty}^{\infty} e^{-\lambda x^{2} + \mu x} x^{n}\, dx$, where $n$~is any positive integer. \Item{14.} Draw the graphs of $e^{x^{2}}$, $e^{-x^{2}}$, $xe^{x}$, $xe^{-x}$, $xe^{x^{2}}$, $xe^{-x^{2}}$, and $x\log x$, determining any maxima and minima of the functions and any points of inflexion on their graphs. \Item{15.} Show that the equation $e^{ax} = bx$, where $a$~and~$b$ are positive, has two real roots, one, or none, according as $b > ae$, $b = ae$, or $b < ae$. [The tangent to the curve $y = e^{ax}$ at the point $(\xi, e^{a\xi})$ is \[ y - e^{a\xi} = ae^{a\xi}(x - \xi), \] which passes through the origin if $a\xi = 1$, so that the line $y = aex$ touches the curve at the point $(1/a, e)$. The result now becomes obvious when we draw the line $y = bx$. The reader should discuss the cases in which $a$~or~$b$ or both are negative.] \PageSep{372} \Item{16.} Show that the equation $e^{x} = 1 + x$ has no real root except $x = 0$, and that $e^{x} = 1 + x + \frac{1}{2}x^{2}$ has three real roots. 
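A modern aside on Ex.~15 (not part of the text), taking $a = 1$: counting strict sign changes of $e^{ax} - bx$ on a grid reproduces the root count for $b > ae$ and $b < ae$. The tangent case $b = ae$ is omitted, since a grazing root is fragile in floating point, and the grid bounds and step count are arbitrary illustrative choices.

```python
import math

# A modern aside on Ex. 15 (not part of the text), with a = 1: counting
# strict sign changes of f(x) = e**(a*x) - b*x on a grid shows two roots
# when b > a*e and none when b < a*e.

def sign_changes(b, a=1.0, lo=-1.0, hi=10.0, steps=2000):
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    vals = [math.exp(a * x) - b * x for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

assert sign_changes(2 * math.e) == 2   # b > ae: two real roots
assert sign_changes(0.5) == 0          # b < ae: no real root
```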
\Item{17.} Draw the graphs of the functions \begin{gather*} \log(x + \sqrtp{x^{2} + 1}),\quad \log\left(\frac{1 + x}{1 - x}\right),\quad e^{-ax}\cos^{2}bx,\\ e^{-(1/x)^{2}},\quad e^{-(1/x)^{2}}\sqrtp{1/x},\quad e^{-\cot x},\quad e^{-\cot^{2} x}. \end{gather*} \Item{18.} Determine roughly the positions of the real roots of the equations \[ \log(x + \sqrtp{x^{2} + 1}) = \frac{x}{100},\quad e^{x} - \frac{2 + x}{2 - x} = \frac{1}{10\MC000},\quad e^{x}\sin x = 7,\quad e^{x^{2}}\sin x = 10\MC000. \] \Item{19.} \Topic{The hyperbolic functions.} The hyperbolic functions $\cosh x$,\footnote {`Hyperbolic cosine': for an explanation of this phrase see Hobson's \textit{Trigonometry}, ch.~\textsc{xvi}.} $\sinh x$,~\dots\ are defined by the equations \begin{gather*} \cosh x = \tfrac{1}{2}(e^{x} + e^{-x}),\quad \sinh x = \tfrac{1}{2}(e^{x} - e^{-x}), \displaybreak[1]\\ % \tanh x = (\sinh x)/(\cosh x),\quad \coth x = (\cosh x)/(\sinh x), \displaybreak[1]\\ % \sech x = 1/(\cosh x),\quad \cosech x = 1/(\sinh x). \end{gather*} Draw the graphs of these functions. \Item{20.} Establish the formulae \begin{gather*} \cosh(-x) = \cosh x,\quad \sinh(-x) = -\sinh x,\quad \tanh(-x) = -\tanh x, \displaybreak[1]\\ % \cosh^{2} x - \sinh^{2} x = 1,\quad \sech^{2} x + \tanh^{2} x = 1,\quad \coth^{2} x - \cosech^{2} x = 1, \displaybreak[1]\\ % \cosh 2x = \cosh^{2} x + \sinh^{2} x,\quad \sinh 2x = 2\sinh x\cosh x, \displaybreak[1]\\ % \begin{alignedat}{2} \cosh(x + y) &= \cosh x\cosh y &&+ \sinh x\sinh y,\\ \sinh(x + y) &= \sinh x\cosh y &&+ \cosh x\sinh y. \end{alignedat} \end{gather*} \Item{21.} Verify that these formulae may be deduced from the corresponding formulae in $\cos x$ and $\sin x$, by writing $\cosh x$ for $\cos x$ and $i\sinh x$ for~$\sin x$. [It follows that the same is true of all the formulae involving $\cos nx$ and $\sin nx$ which are deduced from the corresponding elementary properties of $\cos x$ and~$\sin x$. The reason of this analogy will appear in \Ref{Ch.}{X}\@.] 
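As an aside (no part of the original), the formulae of Ex.~20 may be spot-checked numerically, the standard library's \texttt{math.cosh} and \texttt{math.sinh} agreeing with the definitions of Ex.~19.

```python
import math

# Spot-checks (a modern aside, not part of the text) of the formulae of
# Ex. 20, using the standard library's hyperbolic functions, which agree
# with the definitions of Ex. 19.

def check_identities(x, y, tol=1e-12):
    """True when the Pythagorean and addition formulae hold at (x, y)."""
    pythagorean = abs(math.cosh(x) ** 2 - math.sinh(x) ** 2 - 1) < tol
    addition = abs(
        math.cosh(x + y)
        - (math.cosh(x) * math.cosh(y) + math.sinh(x) * math.sinh(y))
    ) < tol
    double = abs(math.sinh(2 * x) - 2 * math.sinh(x) * math.cosh(x)) < tol
    return pythagorean and addition and double

assert all(check_identities(x, y) for x, y in [(0.3, 1.2), (-2.0, 0.7)])
```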
\Item{22.} Express $\cosh x$ and $\sinh x$ in terms (\ia)~of $\cosh 2x$ (\ib)~of $\sinh 2x$. Discuss any ambiguities of sign that may occur. \MathTrip{1908.} \Item{23.} Prove that \begin{gather*} D_{x}\cosh x = \sinh x,\quad D_{x}\sinh x = \cosh x,\\ % D_{x}\tanh x = \sech^{2}x,\quad D_{x}\coth x = -\cosech^{2}x,\\ % D_{x}\sech x = -\sech x\tanh x,\quad D_{x}\cosech x = -\cosech x\coth x,\\ % D_{x}\log \cosh x = \tanh x,\quad D_{x}\log|\sinh x| = \coth x,\\ % D_{x}\arctan e^{x} = \tfrac{1}{2}\sech x,\quad D_{x}\log |\tanh \tfrac{1}{2} x| = \cosech x. \end{gather*} [All these formulae may of course be transformed into formulae in integration.] \PageSep{373} \Item{24.} Prove that $\cosh x > 1$ and $-1 < \tanh x < 1$. \Item{25.} Prove that if $y = \cosh x$ then $x = \log\{y ± \sqrtp{y^{2} - 1}\}$, if $y = \sinh x$ then $x = \log\{y + \sqrtp{y^{2} + 1}\}$, and if $y = \tanh x$ then $x = \frac{1}{2}\log\{(1 + y)/(1 - y)\}$. Account for the ambiguity of sign in the first case. \Item{26.} We shall denote the functions inverse to $\cosh x$, $\sinh x$, $\tanh x$ by $\argcosh x$, $\argsinh x$, $\argtanh x$. Show that $\argcosh x$ is defined only when $x \geq 1$, and is in general two-valued, while $\argsinh x$ is defined for all real values of~$x$, and $\argtanh x$ when $-1 < x < 1$, and both of the two latter functions are one-valued. Sketch the graphs of the functions. \Item{27.} Show that if $-\frac{1}{2}\pi < x < \frac{1}{2}\pi$ and $y$~is positive, and $\cos x\cosh y = 1$, then \[ y = \log(\sec x + \tan x),\quad D_{x} y = \sec x,\quad D_{y} x = \sech y. \] \Item{28.} Prove that if $a > 0$ then $\ds\int \frac{dx}{\sqrtp{x^{2} + a^{2}}} = \argsinh(x/a)$, and $\ds\int \frac{dx}{\sqrtp{x^{2} - a^{2}}}$ is equal to $\argcosh(x/a)$ or to $-\argcosh(-x/a)$, according as $x > 0$ or $x < 0$. \Item{29.} Prove that if $a > 0$ then $\ds\int \frac{dx}{x^{2} - a^{2}}$ is equal to $-(1/a)\argtanh(x/a)$ or to $-(1/a)\argcoth(x/a)$, according as $|x|$~is less than or greater than~$a$. 
[The results of Exs.\ 28~and~29 furnish us with an alternative method of writing a good many of the formulae of \Ref{Ch.}{VI}\@.] \Item{30.} Prove that \begin{alignat*}{3} \int \frac{dx}{\sqrtb{(x - a)(x - b)}} &= &&2\log\{\sqrtp{x - a} + \sqrtp{x - b}\} &&(a < b < x),\\ \int \frac{dx}{\sqrtb{(a - x)(b - x)}} &= -&&2\log\{\sqrtp{a - x} + \sqrtp{b - x}\}\quad &&(x < a < b),\\ \int \frac{dx}{\sqrtb{(x - a)(b - x)}} &= &&2\arctan\bigsqrtp{\frac{x - a}{b - x}} &&(a < x < b). \end{alignat*} \Item{31.} Prove that \[ \int_{0}^{1} x \log(1 + \tfrac{1}{2}x)\, dx = \tfrac{3}{4} - \tfrac{3}{2}\log\tfrac{3}{2} < \tfrac{1}{2} \int_{0}^{1} x^{2}\, dx = \tfrac{1}{6}. \] \MathTrip{1913.} \Item{32.} Solve the equation $a\cosh x + b\sinh x = c$, where $c > 0$, showing that it has no real roots if $b^{2} + c^{2} - a^{2} < 0$, while if $b^{2} + c^{2} - a^{2} > 0$ it has two, one, or no real roots according as $a + b$ and $a - b$ are both positive, of opposite signs, or both negative. Discuss the case in which $b^{2} + c^{2} - a^{2} = 0$. \Item{33.} Solve the simultaneous equations $\cosh x\cosh y = a$, $\sinh x\sinh y = b$. \Item{34.} $x^{1/x} \to 1$ as $x \to \infty$. [For $x^{1/x} = e^{(\log x)/x}$, and $(\log x)/x \to 0$. Cf.\ \Ex{xxvii}.~11.] Show also that the function~$x^{1/x}$ has a maximum when $x = e$, and draw the graph of the function for positive values of~$x$. \Item{35.} $x^{x} \to 1$ as $x \to +0$. \PageSep{374} \Item{36.} If $\{f(n + 1)\}/\{f(n)\} \to l$, where $l > 0$, as $n \to \infty$, then $\sqrtb[n]{f(n)} \to l$. [For $\log f(n + 1) - \log f(n) \to \log l$, and so $(1/n)\log f(n) \to \log l$ (\Ref{Ch.}{IV}, \MiscEx{IV}~27).] \Item{37.} $\sqrt[n]{n!}/n \to 1/e$ as $n \to \infty$. [If $f(n) = n^{-n} n!$ then $\{f(n + 1)\}/\{f(n)\} = \{1 + (1/n)\}^{-n} \to 1/e$. Now use Ex.~36.] \Item{38.} $\sqrt[n]{(2n)!/(n!)^{2}} \to 4$ as $n \to \infty$. \Item{39.} Discuss the approximate solution of the equation $e^{x} = x^{1\MC000\MC000}$. 
[It is easy to see by general graphical considerations that the equation has two positive roots, one a little greater than~$1$ and one very large,\footnote {The phrase `very large' is of course not used here in the technical sense explained in \Ref{Ch.}{IV}\@. It means `a good deal larger than the roots of such equations as usually occur in elementary mathematics'. The phrase `a little greater than' must be interpreted similarly.} and one negative root a little greater than~$-1$. To determine roughly the size of the large positive root we may proceed as follows. If $e^{x} = x^{1\MC000\MC000}$ then \[ x = 10^{6} \log x,\quad \log x = 13.82 + \log\log x,\quad \log\log x = 2.63 + \log \left(1 + \frac{\log\log x}{13.82}\right), \] roughly, since $13.82$ and $2.63$ are approximate values of $\log 10^{6}$ and $\log\log 10^{6}$ respectively. It is easy to see from these equations that the ratios $\log x : 13.82$ and $\log\log x : 2.63$ do not differ greatly from unity, and that \[ x = 10^{6}(13.82 + \log\log x) = 10^{6}(13.82 + 2.63) = 16\MC450\MC000 \] gives a tolerable approximation to the root, the error involved being roughly measured by $10^{6}(\log\log x - 2.63)$ or $(10^{6} \log\log x)/13.82$ or $(10^{6} × 2.63)/13.82$, which is less than~$200,000$. The approximations are of course very rough, but suffice to give us a good idea of the scale of magnitude of the root.] \Item{40.} Discuss similarly the equations \[ %[** TN: In-line in the original] e^{x} = 1\MC000\MC000 x^{1\MC000\MC000},\quad e^{x^{2}} = x^{1\MC000\MC000\MC000}. \] \end{Examples} \Paragraph{211. Logarithmic tests of convergence for series and integrals.} We showed in \Ref{Ch.}{VIII} (\SecNo[§§]{175}~\textit{et~seq.})\ that \[ \sum_{1}^{\infty} \frac{1}{n^{s}},\quad \int_{a}^{\infty} \frac{dx}{x^{s}}\qquad (a > 0) \] are convergent if $s > 1$ and divergent if $s \leq 1$. Thus $\sum (1/n)$~is divergent, but $\sum n^{-1-\alpha}$~is convergent for all positive values of~$\alpha$. 
We saw however in \SecNo[§]{200} that with the aid of logarithms we can construct functions which tend to zero, as $n \to \infty$, more rapidly than~$1/n$, yet less rapidly than~$n^{-1-\alpha}$, however small $\alpha$ may be, provided of course that it is positive. For example $1/(n\log n)$ is such a function, and the question as to whether the series \[ \sum \frac{1}{n\log n} \] \PageSep{375} is convergent or divergent cannot be settled by comparison with any series of the type $\sum n^{-s}$. The same is true of such series as \[ \sum \frac{1}{n(\log n)^{2}},\quad \sum \frac{\log\log n}{n\sqrtp{\log n}}. \] It is a question of some interest to find tests which shall enable us to decide whether series such as these are convergent or divergent; and such tests are easily deduced from the Integral Test of~\SecNo[§]{174}. For since \[ D_{x}(\log x)^{1-s} = \frac{1 - s}{x(\log x)^{s}},\quad D_{x}\log\log x = \frac{1}{x\log x}, \] we have \[ \int_{a}^{\xi} \frac{dx}{x(\log x)^{s}} = \frac{(\log\xi)^{1-s} - (\log a)^{1-s}}{1 - s},\quad \int_{\DPtypo{}{a}}^{\xi} \frac{dx}{x\log x} = \log\log \xi - \log\log a, \] if $a > 1$. The first integral tends to the limit $-(\log a)^{1-s}/(1 - s)$ as $\xi \to \infty$, if $s > 1$, and to~$\infty$ if $s < 1$. The second integral tends to~$\infty$. Hence \begin{Result}the series and integral\PageLabel{375} \[ \sum_{n_{0}}^{\infty} \frac{1}{n(\log n)^{s}},\quad \int_{a}^{\infty} \frac{dx}{x(\log x)^{s}}, \] where $n_{0}$ and $a$ are greater than unity, are convergent if $s > 1$, divergent if $s \leq 1$. \end{Result} It follows, of course, that $\sum \phi(n)$~is convergent if $\phi(n)$~is positive and less than $K/\{n(\log n)^{s}\}$, where $s > 1$, for all values of $n$ greater than some definite value, and divergent if $\phi(n)$~is positive and greater than $K/(n\log n)$ for all values of $n$ greater than some definite value. And there is a corresponding theorem for integrals which we may leave to the reader. 
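The contrast stated in the theorem above can also be watched numerically. In this modern aside (not in the original text) the partial sums of $1/\{n(\log n)^{s}\}$ for $s = 2$ have an almost negligible tail, while for $s = 1$ the tail shrinks only like $\log\log n$, betraying the divergence; the cut-offs $10^{5}$ and twice that are arbitrary.

```python
import math

# A modern aside (not in the original): partial sums of 1/(n*(log n)**s).
# With s = 2 the tail between n = 10**5 and n = 2*10**5 is already tiny;
# with s = 1 it shrinks only like log log n, betraying the divergence.

def partial(s, N):
    return sum(1.0 / (n * math.log(n) ** s) for n in range(2, N + 1))

gap_convergent = partial(2, 200000) - partial(2, 100000)
gap_divergent = partial(1, 200000) - partial(1, 100000)
assert gap_convergent < 0.01
assert gap_divergent > 5 * gap_convergent
```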
\begin{Examples}{LXXXVIII.} \Item{1.} The series \[ \sum \frac{1}{n(\log n)^{2}},\quad \sum \frac{(\log n)^{100}}{n^{101/100}},\quad \sum \frac{n^{2} - 1}{n^{2} + 1}\, \frac{1}{n(\log n)^{7/6}} \] are convergent. [The convergence of the first series is a direct consequence of the theorem of the preceding section. That of the second follows from the fact that $(\log n)^{100}$~is less than~$n^{\beta}$ for sufficiently large values of~$n$, however small $\beta$ may be, provided that it is positive. And so, taking $\beta = 1/200$, $(\log n)^{100} n^{-101/100}$ is less than~$n^{-201/200}$ for sufficiently large values of~$n$. The convergence of the third series follows from the comparison test at the end of the last section.] \PageSep{376} \Item{2.} The series \[ \sum \frac{1}{n(\log n)^{6/7}},\quad \sum \frac{1}{n^{100/101}(\log n)^{100}},\quad \sum \frac{n\log n}{(n\log n)^{2} + 1} \] are divergent. \Item{3.} The series \[ \sum \frac{(\log n)^{p}}{n^{1+s}},\quad \sum \frac{(\log n)^{p} (\log\log n)^{q}}{n^{1+s}},\quad \sum \frac{(\log\log n)^{p}}{n(\log n)^{1+s}}, \] where $s > 0$, are convergent for all values of $p$~and~$q$; similarly the series \[ \sum \frac{1}{n^{1-s}(\log n)^{p}},\quad \sum \frac{1}{n^{1-s}(\log n)^{p}(\log\log n)^{q}},\quad \sum \frac{1}{n(\log n)^{1-s}(\log\log n)^{p}} \] are divergent. \Item{4.} The question of the convergence or divergence of such series as \[ \sum \frac{1}{n\log n\log\log n},\quad \sum \frac{\log\log\log n}{n\log n\sqrtp{\log\log n}} \] cannot be settled by the theorem of \PageRef{p.}{375}, since in each case the function under the sign of summation tends to zero more rapidly than $1/(n\log n)$ yet less rapidly than $n^{-1}(\log n)^{-1-\alpha}$, where $\alpha$~is any positive number however small. For such series we need a still more delicate test. 
The reader should be able, starting from the equations \begin{align*} D_{x}(\log_{k}x)^{1-s} &= \frac{1 - s}{x \log x \log_{2}x \dots \log_{k-1} x (\log_{k}x)^{s}},\\ D_{x}\log_{k+1}x &= \frac{1}{x \log x \log_{2}x \dots \log_{k-1}x \log_{k}x}, \end{align*} where $\log_{2}x = \log\log x$, $\log_{3} x = \log\log\log x$,~\dots, to prove the following theorem: \emph{the series and integral \[ \sum_{n_{0}}^{\infty} \frac{1}{n \log n \log_{2}n \dots \log_{k-1}n (\log_{k}n)^{s}},\quad \int_{a}^{\infty} \frac{dx}{x \log x \log_{2}x \dots \log_{k-1}x (\log_{k}x)^{s}} \] are convergent if $s > 1$ and divergent if $s \leq 1$}, {\Loosen$n_{0}$~and~$a$ being any numbers sufficiently great to ensure that $\log_{k}n$ and $\log_{k}x$ are positive when $n \geq n_{0}$ or $x \geq a$. These values of $n_{0}$ and~$a$ increase very rapidly as $k$~increases: thus $\log x > 0$ requires $x > 1$, $\log_{2}x > 0$ requires $x > e$, $\DPtypo{\log\log x}{\log_{3}x} > 0$ requires $x > e^{e}$, and so on; and it is easy to see that $e^{e} > 10$, $e^{e^{e}} > e^{10} > 20,000$, $e^{e^{e^{e}}} > e^{20,000} > 10^{8000}$.} The reader should observe the extreme rapidity with which the higher exponential functions, such as $e^{e^{x}}$ and~$e^{e^{e^{x}}}$, increase with~$x$. The same remark of course applies to such functions as $a^{a^{x}}$ and~$a^{a^{a^{x}}}$, where $a$~has any value greater than unity\Add{.} It has been computed that $9^{9^{9}}$~has $369,693,100$ figures, while $10^{10^{10}}$ has of course $10,000,000,000$. Conversely, the rate of increase of the higher logarithmic functions is extremely slow. Thus to make $\log\log\log\log x > 1$ we have to suppose $x$~a number with over $8000$~figures.\footnote {See the footnote to \PageRef{p.}{362}.} \PageSep{377} \Item{5.} Prove that the integral $\ds\int_{0}^{a} \frac{1}{x} \left\{\log \left(\frac{1}{x}\right)\right\}^{s} dx$, where $0 < a < 1$, is convergent if $s < -1$, divergent if $s \geq -1$. 
[Consider the behaviour of \[ \int_{\epsilon}^{a} \frac{1}{x} \left\{\log \left(\frac{1}{x}\right)\right\}^{s} dx \] as $\epsilon \to +0$. This result also may be refined upon by the introduction of higher logarithmic factors.] \Item{6.} Prove that $\ds\int_{0}^{1} \frac{1}{x} \left\{\log \left(\frac{1}{x}\right)\right\}^{s} dx$ has no meaning for any value of~$s$. [The last example shows that $s < -1$ is a necessary condition for convergence at the lower limit: but $\{\log(1/x)\}^{s}$ tends to~$\infty$ like $(1 - x)^{s}$, as $x \to 1 - 0$, if $s$~is negative, and so the integral diverges at the upper limit when $s < -1$.] \Item{7.} {\Loosen The necessary and sufficient conditions for the convergence of $\ds\int_{0}^{1} x^{a-1} \left\{\log \left(\frac{1}{x}\right)\right\}^{s} dx$ are $a > 0$, $s > -1$.} \end{Examples} \begin{Examples}{LXXXIX.} \Item{1.} \Topic{Euler's limit.} Show that \[ \phi(n) = 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n - 1} - \log n \] tends to a limit~$\gamma$ as $n \to \infty$, and that $0 < \gamma \leq 1$. [This follows at once from \SecNo[§]{174}. The value of~$\gamma$ is in fact~$.577\dots$, and $\gamma$~is usually called \Emph{Euler's constant}.] \Item{2.} If $a$ and~$b$ are positive then \[ \frac{1}{a} + \frac{1}{a + b} + \frac{1}{a + 2b} + \dots + \frac{1}{a + (n - 1) b} - \frac{1}{b}\log \DPtypo{(a + nb}{(a + nb)} \] tends to a limit as $n \to \infty$. \Item{3.} If $0 < s < 1$ then \[ \phi(n) = 1 + 2^{-s} + 3^{-s} + \dots + (n - 1)^{-s} - \frac{n^{1-s}}{1 - s} \] tends to a limit as $n \to \infty$. \Item{4.} Show that the series \[ \frac{1}{1} + \frac{1}{2(1 + \frac{1}{2})} + \frac{1}{3(1 + \frac{1}{2} + \frac{1}{3})} + \dots \] is divergent. [Compare the general term of the series with $1/(n\log n)$.] Show also that the series derived from $\sum n^{-s}$, in the same way that the above series is derived from~$\sum (1/n)$, is convergent if $s > 1$ and otherwise divergent. 
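A modern aside on Ex.~1 (not part of the text): computing $\phi(n)$ directly shows it climbing towards Euler's constant $\gamma = 0.577\dots$ from below, roughly like $\gamma - 1/(2n)$.

```python
import math

# A modern aside on Ex. 1 (not part of the text): phi(n) climbs towards
# Euler's constant gamma = 0.5772... from below, roughly like
# gamma - 1/(2n).

def phi(n):
    return sum(1.0 / k for k in range(1, n)) - math.log(n)

estimates = [phi(10**k) for k in (2, 4, 6)]
assert estimates[0] < estimates[1] < estimates[2] < 0.5772157
assert abs(estimates[2] - 0.5772156649) < 1e-5
```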
\Item{5.} Prove generally that if $\sum u_{n}$~is a series of positive terms, and \[ s_{n} = u_{1} + u_{2} + \dots + u_{n}, \] then $\sum (u_{n}/s_{n-1})$~is convergent or divergent according as $\sum u_{n}$~is convergent or \PageSep{378} divergent. [If $\sum u_{n}$~is convergent then $s_{n-1}$~tends to a positive limit~$l$, and so $\sum (u_{n}/s_{n-1})$~is convergent. If $\sum u_{n}$~is divergent then $s_{n-1} \to \infty$, and \[ u_{n}/s_{n-1} > \log\{1 + (u_{n}/s_{n-1})\} = \log (s_{n}/s_{n-1}) \] (\Ex{lxxxii}.~1); and it is evident that \[ \log(s_{2}/s_{1}) + \log(s_{3}/s_{2}) + \dots + \log(s_{n}/s_{n-1}) = \log(s_{n}/s_{1}) \] tends to~$\infty$ as $n \to \infty$.] \Item{6.} Prove that the same result holds for the series $\sum (u_{n}/s_{n})$. [The proof is the same in the case of convergence. If $\sum u_{n}$~is divergent, and $u_{n} < s_{n-1}$ from a certain value of~$n$ onwards, then $s_{n} < 2s_{n-1}$, and the divergence of $\sum (u_{n}/s_{n})$ follows from that of $\sum (u_{n}/s_{n-1})$. If on the other hand $u_{n} \geq s_{n-1}$ for an infinity of values of~$n$, as might happen with a rapidly divergent series, then $u_{n}/s_{n} \geq \frac{1}{2}$ for all these values of~$n$.] \Item{7.} Sum the series $1 - \frac{1}{2} + \frac{1}{3} - \dots$. [We have \[ 1 + \frac{1}{2} + \dots + \frac{1}{2n} = \log(2n + 1) + \gamma + \epsilon_{n}, \quad 2\left(\frac{1}{2} + \frac{1}{4} + \dots + \frac{1}{2n}\right) = \log(n + 1) + \gamma + \epsilon_{n}', \] by Ex.~1, $\gamma$~denoting Euler's constant, and $\epsilon_{n}$,~$\epsilon_{n}'$ being numbers which tend to zero as $n \to \infty$. Subtracting and making $n \to \infty$ we see that the sum of the given series is~$\log 2$. See also~\SecNo[§]{213}.] \Item{8.} Prove that the series \[ \sum_{0}^{\infty} (-1)^{n}\left(1 + \frac{1}{2} + \dots + \frac{1}{n + 1} - \log n - C\right) \] oscillates finitely except when $C = \gamma$, when it converges. \end{Examples} \Paragraph{212. 
Series connected with the exponential and logarithmic functions. Expansion of~$e^{x}$ by Taylor's Theorem.} Since all the derivatives of the exponential function are equal to the function itself, we have \[ e^{x} = 1 + x + \frac{x^{2}}{2!} + \dots + \frac{x^{n-1}}{(n - 1)!} + \frac{x^{n}}{n!} e^{\theta x} \] where $0 < \theta < 1$. But $x^{n}/n! \to 0$ as $n \to \infty$, whatever be the value of~$x$ (\Ex{xxvii}.~12); and $e^{\theta x} < e^{x}$. Hence, making $n$~tend to~$\infty$, we have \[ e^{x} = 1 + x + \frac{x^{2}}{2!} + \dots + \frac{x^{n}}{n!} + \dots. \Tag{(1)} \] The series on the right-hand side of this equation is known as the \Emph{exponential series}. In particular we have \[ e = 1 + 1 + \frac{1}{2!} + \dots + \frac{1}{n!} + \dots; \Tag{(2)} \] and so \[ \left(1 + 1 + \frac{1}{2!} + \dots + \frac{1}{n!} + \dots\right)^{x} = 1 + x + \frac{x^{2}}{2!} + \dots + \frac{x^{n}}{n!} + \dots, \Tag{(3)} \] \PageSep{379} a result known as the \Emph{exponential theorem}. Also \[ a^{x} = e^{x\log a} = 1 + (x\log a) + \frac{(x\log a)^{2}}{2!} + \dots \Tag{(4)} \] for all positive values of~$a$. \begin{Remark} The reader will observe that the exponential series has the property of reproducing itself when every term is differentiated, and that no other series of powers of~$x$ would possess this property: for some further remarks in this connection see \Ref{Appendix}{II}\@. The power series for~$e^{x}$ is so important that it is worth while to investigate it by an alternative method which does not depend upon Taylor's Theorem. Let \[ E_{n}(x) = 1 + x + \frac{x^{2}}{2!} + \dots + \frac{x^{n}}{n!}, \] and suppose that $x > 0$. Then \[ \left(1 + \frac{x}{n}\right)^{n} = 1 + n\left(\frac{x}{n}\right) + \frac{n(n - 1)}{1·2} \left(\frac{x}{n}\right)^{2} + \dots + \frac{n(n - 1)\dots 1}{1·2\dots n} \left(\frac{x}{n}\right)^{n}\Add{,} \] which is less than~$E_{n}(x)$. 
And, provided $n > x$, we have also, by the binomial theorem for a negative integral exponent, \[ \left(1 - \frac{x}{n}\right)^{-n} = 1 + n\left(\frac{x}{n}\right) + \frac{n(n + 1)}{1·2} \left(\frac{x}{n}\right)^{2} + \dots > E_{n}(x). \] Thus \[ \left(1 + \frac{x}{n}\right)^{n} < E_{n}(x) < \left(1 - \frac{x}{n}\right)^{-n}. \] {\Loosen But (\SecNo[§]{208}) the first and last functions tend to the limit~$e^{x}$ as $n \to \infty$, and therefore $E_{n}(x)$~must do the same. From this the equation~\Eq{(1)} follows when $x$~is positive; its truth when $x$~is negative follows from the fact that the exponential series, as was shown in \Ex{lxxxi}.~7, satisfies the functional equation $f(x)f(y) = f(x + y)$, so that $f(x)f(-x) = f(0) = 1$.} \end{Remark} \begin{Examples}{XC.} \Item{1.} Show that \[ \cosh x = 1 + \frac{x^{2}}{2!} + \frac{x^{4}}{4!} + \dots,\quad \sinh x = x + \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \dots. \] \Item{2.} If $x$~is positive then the greatest term in the exponential series is the $([x] + 1)$-th, unless $x$~is an integer, when the preceding term is equal to it. \Item{3.} Show that $n! > (n/e)^{n}$. [For $n^{n}/n!$~is one term in the series for~$e^{n}$.] \Item{4.} Prove that $e^{n} = (n^{n}/n!)(2 + S_{1} + S_{2})$, where \[ S_{1} = \frac{1}{1 + \nu} + \frac{1}{(1 + \nu)(1 + 2\nu)} + \dots,\quad S_{2} = (1 - \nu) + (1 - \nu)(1 - 2\nu) + \dots, \] and $\nu = 1/n$; and deduce that $n!$~lies between $2(n/e)^{n}$ and~$2(n + 1)(n/e)^{n}$. \Item{5.} Employ the exponential series to prove that $e^{x}$~tends to infinity more rapidly than any power of~$x$. [Use the inequality $e^{x} > x^{n}/n!$.] \PageSep{380} \Item{6.} Show that $e$~is not a rational number. [If $e = p/q$, where $p$ and~$q$ are integers, we must have \[ \frac{p}{q} = \DPtypo{}{1 + {}} 1 + \frac{1}{2!}+\frac{1}{3!} + \dots + \frac{1}{q!} + \dots \] or, multiplying up by~$q!$, \[ q! 
\left(\frac{p}{q} - 1 - 1 - \frac{1}{2!} - \dots - \frac{1}{q!}\right) = \frac{1}{q + 1} + \frac{1}{(q + 1)(q + 2)} + \dots \] and this is absurd, since the left-hand side is integral, and the right-hand side less than $\{1/(q + 1)\} + \{1/(q + 1)\}^{2} + \dots = 1/q$.] \Item{7.} Sum the series $\sum\limits_{0}^{\infty} P_{r}(n)\dfrac{x^{n}}{n!}$, where $P_{r}(n)$~is a polynomial of degree~$r$ in~$n$. [We can express $P_{r}(n)$ in the form \[ A_{0} + A_{1}n + A_{2}n(n - 1) + \dots + A_{r}n(n - 1) \dots (n - r + 1), \] and \begin{align*} \sum_{0}^{\infty} P_{r}(n) \frac{x^{n}}{n!} &= A_{0}\sum_{0}^{\infty}\frac{x^{n}}{n!} + A_{1}\sum_{1}^{\infty}\frac{x^{n}}{(n - 1)!} + \dots + A_{r}\sum_{r}^{\infty}\frac{x^{n}}{(n - r)!}\\ &= (A_{0} + A_{1}x + A_{2}x^{2} + \dots + A_{r}x^{r})e^{x}.] \end{align*} \Item{8.} Show that \[ \sum_{1}^{\infty} \frac{n^{3}}{n!} x^{n} = (x + 3x^{2} + x^{3})e^{x},\quad \sum_{1}^{\infty} \frac{n^{4}}{n!} x^{n} = (x + 7x^{2} + 6x^{3} + x^{4})e^{x}; \] and that if $S_{n} = 1^{3} + 2^{3} + \dots + n^{3}$ then \[ \sum_{1}^{\infty} S_{n}\frac{x^{n}}{n!} = \tfrac{1}{4}(4x + 14x^{2} + 8x^{3} + x^{4})e^{x}. \] In particular the last series is equal to zero when $x = -2$. \MathTrip{1904.} \Item{9.} Prove that $\sum (n/n!) = e$, $\sum (n^{2}/n!) = 2e$, $\sum (n^{3}/n!) = 5e$, and that $\sum (n^{k}/n!)$, where $k$~is any positive integer, is a positive integral multiple of~$e$. \Item{10.} Prove that $\sum\limits_{1}^{\infty} \dfrac{(n - 1)x^{n}}{(n + 2)n!} = \left\{(x^{2} - 3x + 3)e^{x} + \frac{1}{2}x^{2} - 3\right\}/x^{2}$. [Multiply numerator and denominator by~$n + 1$, and proceed as in Ex.~7.] \Item{11.} Determine $a$,~$b$,~$c$ so that $\{(x + a)e^{x} + (bx + c)\}/x^{3}$ tends to a limit as $x \to 0$, evaluate the limit, and draw the graph of the function $e^{x} + \dfrac{bx + c}{x + a}$. \Item{12.} Draw the graphs of $1 + x$, $1 + x + \frac{1}{2}x^{2}$, $1 + x + \frac{1}{2}x^{2} + \frac{1}{6}x^{3}$, and compare them with that of~$e^{x}$. 
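[** TN: The inequalities of §212, $(1 + x/n)^{n} < E_{n}(x) < (1 - x/n)^{-n}$ for $0 < x < n$, and the comparison with $e^{x}$ asked for in Ex.~12, may be illustrated numerically; this Python sketch is an editorial aside, not part of Hardy's text.]

```python
import math

def E(n, x):
    # E_n(x) = 1 + x + x^2/2! + ... + x^n/n!, the partial sum of Section 212
    term, total = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        total += term
    return total

x, n = 1.0, 10
lower = (1 + x / n) ** n       # (1 + x/n)^n, less than E_n(x)
upper = (1 - x / n) ** (-n)    # (1 - x/n)^(-n), greater than E_n(x) when n > x
```

Both bounds tend to $e^{x}$ as $n \to \infty$, squeezing $E_{n}(x)$ to the same limit.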
\Item{13.} Prove that $e^{-x} - 1 + x - \dfrac{\DPtypo{x^{n}}{x^{2}}}{2!} + \dots - (-1)^{n}\dfrac{x^{n}}{n!}$ is positive or negative according as $n$~is odd or even. Deduce the exponential theorem. \PageSep{381} \Item{14.} If \[ X_{0} = e^{x},\quad X_{1} = e^{x} - 1,\quad X_{2} = e^{x} - 1 - x,\quad X_{3} = e^{x} - 1 - x - (x^{2}/2!),\ \dots, \] then $dX_{\nu}/dx = X_{\nu-1}$. Hence prove that if $t > 0$ then \[ X_{1}(t) = \int_{0}^{t} X_{0}\, dx < te^{t},\quad X_{2}(t) = \int_{0}^{t} X_{1}\, dx < \int_{0}^{t} xe^{x}\, dx < e^{t} \int_{0}^{t} x\, dx = \frac{t^{2}}{2!} e^{t}, \] and generally $X_{\nu}(t) < \dfrac{t^{\nu}}{\nu!} e^{t}$. Deduce the exponential theorem. \Item{15.} Show that the expansion in powers of~$p$ of the positive root of $x^{2+p} = a^{2}$ begins with the terms \[ a\{1 - \tfrac{1}{2} p\log a + \tfrac{1}{8} p^{2}\log a (2 + \log a)\}. \] \MathTrip{1909.} \end{Examples} \Paragraph{213. The logarithmic series.} Another very important expansion in powers of~$x$ is that for~$\log(1 + x)$. Since \[ \log(1 + x) = \int_{0}^{x} \frac{dt}{1 + t}, \] and $1/(1 + t) = 1 - t + t^{2} - \dots$ if $t$~is numerically less than unity, it is natural to expect\footnote {See \Ref{Appendix}{II} for some further remarks on this subject.} that $\log(1 + x)$ will be equal, when $-1 < x < 1$, to the series obtained by integrating each term of the series $1 - t + t^{2} - \dots$ from $t = 0$ to $t = x$, \ie\ to the series $x - \frac{1}{2} x^{2} + \frac{1}{3} x^{3} - \dots$. And this is in fact the case. For \[ 1/(1 + t) = 1 - t + t^{2} - \dots + (-1)^{m-1} t^{m-1} + \frac{(-1)^{m} t^{m}}{1 + t}, \] and so, if $x > -1$, \[ \log(1 + x) = \int_{0}^{x} \frac{dt}{1 + t} = x - \frac{x^{2}}{2} + \dots + (-1)^{m-1} \frac{x^{m}}{m} + (-1)^{m} R_{m}, \] where \[ R_{m} = \int_{0}^{x} \frac{t^{m}\, dt}{1 + t}. \] We require to show that the limit of~$R_{m}$, when $m$~tends to~$\infty$, is zero.
This is almost obvious when $0 < x \leq 1$; for then $R_{m}$~is positive and less than \[ \int_{0}^{x} t^{m}\, dt = \frac{x^{m+1}}{m + 1}, \] and therefore less than $1/(m + 1)$. If on the other hand $-1 < x < 0$, we put $t = -u$ and $x = -\xi$, so that \[ R_{m} = (-1)^{m} \int_{0}^{\xi} \frac{u^{m}\, du}{1 - u}, \] \PageSep{382} which shows that $R_{m}$~has the sign of~$(-1)^{m}$. Also, since the greatest value of~$1/(1 - u)$ in the range of integration is~$1/(1 - \xi)$, we have \[ 0 < |R_{m}| < \frac{1}{1 - \xi} \int_{0}^{\xi} u^{m}\, du = \frac{\xi^{m}}{(m + 1)(1 - \xi)} < \frac{1}{(m + 1)(1 - \xi)}: \] and so $R_{m} \to 0$. Hence \[ \log(1 + x) = x - \tfrac{1}{2} x^{2} + \tfrac{1}{3} x^{3} - \dots, \] provided that $-1 < x \leq 1$. If $x$~lies outside these limits the series is not convergent. If $x = 1$ we obtain \[ \log 2 = 1 - \tfrac{1}{2} + \tfrac{1}{3} - \dots, \] a result already proved otherwise (\Ex{lxxxix}.~7). \Paragraph{214. The series for the inverse tangent.} It is easy to prove in a similar manner that \begin{align*} \arctan x = \int_{0}^{x} \frac{dt}{1 + t^{2}} &= \int_{0}^{x}(1 - t^{2} + t^{4} - \dots)\, dt\\ &= x - \tfrac{1}{3} x^{3} + \tfrac{1}{5} x^{5} - \dots, \end{align*} provided that $-1 \leq x \leq 1$. The only difference is that the proof is a little simpler; for, since $\arctan x$~is an odd function of~$x$, we need only consider positive values of~$x$. And the series is convergent when $x = -1$ as well as when $x = 1$. We leave the discussion to the reader. The value of~$\arctan x$ which is represented by the series is of course that which lies between $-\frac{1}{4}\pi$ and~$\frac{1}{4}\pi$ when $-1 \leq x \leq 1$, and which we saw in \Ref{Ch.}{VII} (\Ex{lxiii}.~3) to be the value represented by the integral. If $x = 1$, we obtain the formula \[ \tfrac{1}{4}\pi = 1 - \tfrac{1}{3} + \tfrac{1}{5} - \dots. 
\] \begin{Examples}{XCI.} \Item{1.} $\log \left(\dfrac{1}{1 - x}\right) = x + \frac{1}{2} x^{2} + \frac{1}{3} x^{3} + \dots$ if $-1 \leq x < 1$. \Item{2.} $\argtanh x = \frac{1}{2} \log\left(\dfrac{1 + x}{1 - x}\right) = x + \frac{1}{3} x^{3} + \frac{1}{5} x^{5} + \dots$ if $-1 < x < 1$. \Item{3.} Prove that if $x$~is positive then \[ \log(1 + x) = \frac{x}{1 + x} + \tfrac{1}{2} \left(\frac{x}{1 + x}\right)^{2} + \tfrac{1}{3} \left(\frac{x}{1 + x}\right)^{3} + \dots. \] \MathTrip{1911.} \Item{4.} Obtain the series for $\log(1 + x)$ and $\arctan x$ by means of Taylor's theorem. [A difficulty presents itself in the discussion of the remainder in the \PageSep{383} first series when $x$~is negative, if Lagrange's form $R_{n} = (-1)^{n-1} x^{n}/\{n(1 + \theta x)^{n}\}$ is used; Cauchy's form, viz. \[ R_{n} = (-1)^{n-1} (1 - \theta)^{n-1} x^{n}/(1 + \theta x)^{n}, \] should be used (cf.\ the corresponding discussion for the Binomial Series, \Ex{lvi}.~2 and~\SecNo[§]{163}). In the case of the second series we have \begin{align*} D_{x}^{n} \arctan x &= D_{x}^{n-1} \{1/(1 + x^{2})\}\\ &= (-1)^{n-1} (n - 1)! (x^{2} + 1)^{-n/2} \sin \{n\arctan(1/x)\} \end{align*} (\Ex{xlv}.~11), and there is no difficulty about the remainder, which is obviously not greater in absolute value than~$1/n$.\footnotemark] \footnotetext{The formula for $D_{x}^{n} \arctan x$ fails when $x = 0$, as $\arctan(1/x)$ is then undefined. It is easy to see (cf.\ \Ex{xlv}.~11) that $\arctan(1/x)$~must then be interpreted as meaning~$\frac{1}{2}\pi$.} \Item{5.} If $y > 0$ then \[ \log y = 2 \left\{\frac{y - 1}{y + 1} + \frac{1}{3} \left(\frac{y - 1}{y + 1}\right)^{3} + \frac{1}{5} \left(\frac{y - 1}{y + 1}\right)^{5} + \dots\right\}. \] [Use the identity $y = \biggl(1 + \dfrac{y - 1}{y + 1}\biggr) \bigg/ \biggl(1 - \dfrac{y - 1}{y + 1}\biggr)$. 
This series may be used to calculate~$\log 2$, a purpose for which the series $1 - \frac{1}{2} + \frac{1}{3} - \dots$, owing to the slowness of its convergence, is practically useless. Put $y = 2$ and find $\log 2$ to $3$~places of decimals.] \Item{6.} Find $\log 10$ to $3$~places of decimals from the formula \[ \log 10 = 3\log 2 + \log(1 + \tfrac{1}{4}). \] \Item{7.} Prove that \[ \log \left(\frac{x + 1}{x}\right) = 2\left\{\frac{1}{2x + 1} + \frac{1}{3(2x + 1)^{3}} + \frac{1}{5(2x + 1)^{5}} + \dots\right\} \] if $x > 0$, and that \[ \log \frac{(x - 1)^{2}(x + 2)}{(x + 1)^{2}(x - 2)} = 2\left\{\frac{2}{x^{3} - 3x} + \frac{1}{3}\left(\frac{2}{x^{3} - 3x}\right)^{3} + \frac{1}{5}\left(\frac{2}{x^{3} - 3x}\right)^{5} + \dots\right\} \] if $x > 2$. Given that $\log 2 = .693\MS147\MS1\dots$ and $\log 3 = 1.098\MS612\MS3\dots$, show, by putting $x = 10$ in the second formula, that $\log 11 = 2.397\MS895\dots$. \MathTrip{1912.} \Item{8.} Show that if $\log 2$, $\log 5$, and $\log 11$ are known, then the formula \[ \log 13 = 3\log 11 + \log 5 - 9\log 2 \] gives $\log 13$ with an error practically equal to~$.000\MS15$. \MathTrip{1910.} \Item{9.} Show that \[ \tfrac{1}{2} \log 2 = 7a + 5b + 3c,\quad \tfrac{1}{2} \log 3 = 11a + 8b + 5c,\quad \tfrac{1}{2} \log 5 = 16a + 12b + 7c, \] where $a = \argtanh(1/31)$, $b = \argtanh(1/49)$, $c = \argtanh(1/161)$. [These formulae enable us to find $\log 2$, $\log 3$, and $\log 5$ rapidly and with any degree of accuracy.] \PageSep{384} \Item{10.} Show that \[ \tfrac{1}{4}\pi = \arctan(1/2) + \arctan(1/3) = 4\arctan(1/5) - \arctan(1/239), \] and calculate~$\pi$ to $6$~places of decimals. \Item{11.} Show that the expansion of $(1 + x)^{1+x}$ in powers of~$x$ begins with the terms $1 + x + x^{2} + 1/2 x^{3}$. \MathTrip{1910.} \Item{12.} Show that \[ \log_{10} e - \sqrtb{x(x + 1)} \log_{10}\left(\frac{1 + x}{x}\right) = \frac{\log_{10} e}{24x^{2}}, \] approximately, for large values of~$x$. 
Apply the formula, when $x = 10$, to obtain an approximate value of~$\log_{10} e$, and estimate the accuracy of the result. \MathTrip{1910.} \Item{13.} Show that \[ \frac{1}{1 - x} \log\left(\frac{1}{1 - x}\right) = x + \left(1 + \tfrac{1}{2}\right)x^{2} + \left(1 + \tfrac{1}{2} + \tfrac{1}{3}\right)x^{3} + \dots, \] if $-1 < x < 1$. [Use \Ex{lxxxi}.~2.] \Item{14.} {\Loosen Using the logarithmic series and the facts that $\log_{10} 2.3758 = .375\MS809\MS9\dots$ and $\log_{10} e = .4343\dots$, show that an approximate solution of the equation $x = 100 \log_{10}x$ is~$237.581\MS21$.} \MathTrip{1910.} \Item{15.} Expand $\log\cos x$ and $\log(\sin x/x)$ in powers of~$x$ as far as~$x^{4}$, and verify that, to this order, \[ \log\sin x = \log x - \tfrac{1}{45} \log\cos x + \tfrac{64}{45}\log\cos \tfrac{1}{2}x. \] \MathTrip{1908.} \Item{16.} Show that \[ %[** TN: In-line in the original] \int_{0}^{x} \frac{dt}{1 + t^{4}} = x - \tfrac{1}{5}x^{5} + \tfrac{1}{9}x^{9} - \dots \] if $-1 \leq x \leq 1$. Deduce that \[ 1 - \tfrac{1}{5} + \tfrac{1}{9} - \dots = \{\pi + 2\log(\sqrt{2} + 1)\}/4\sqrt{2}. \] \MathTrip{1896.} [Proceed as in \SecNo[§]{214} and use the result of \Ex{xlviii}.~7.] \Item{17.} Prove similarly that \[ \tfrac{1}{3} - \tfrac{1}{7} + \tfrac{1}{11} - \dots = \int_{0}^{1} \frac{t^{2}\, dt}{1 + t^{4}} = \{\pi - 2\log(\sqrt{2} + 1)\}/4\sqrt{2}. \] \Item{18.} Prove generally that if $a$ and~$b$ are positive integers then \[ \frac{1}{a} - \frac{1}{a + b} + \frac{1}{a + 2b} - \dots = \int_{0}^{1} \frac{t^{a-1}\, dt}{1 + t^{b}}, \] and so that the sum of the series can be found. Calculate in this way the sums of $1 - \frac{1}{4} + \frac{1}{7} - \dots$ and $\frac{1}{2} - \frac{1}{5} + \frac{1}{8} - \dots$. \end{Examples} \Paragraph{215. The Binomial Series.} We have already (\SecNo[§]{163}) investigated the Binomial Theorem \[ (1 + x)^{m} = 1 + \binom{m}{1}x + \binom{m}{2}x^{2} + \dots, \] \PageSep{385} assuming that $-1 < x < 1$ and that $m$~is rational. 
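[** TN: The binomial series of §215 converges rapidly for $-1 < x < 1$; the following Python sketch, an editorial aside and not part of Hardy's text, checks two special cases ($m = \frac{1}{2}$ and $m = -1$) against the closed forms.]

```python
import math

def binom_series(m, x, terms=60):
    # partial sum of 1 + C(m,1)x + C(m,2)x^2 + ...  (Section 215), -1 < x < 1
    coeff, total = 1.0, 1.0
    for n in range(1, terms):
        coeff *= (m - n + 1) / n   # general binomial coefficient; m need not be an integer
        total += coeff * x ** n
    return total
```

For $m = -1$ the series reduces to the geometric series for $1/(1 + x)$.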
When $m$~is irrational we have \begin{gather*} (1 + x)^{m} = e^{m\log(1+ x)},\\ D_{x}(1 + x)^{m} = \{m/(1 + x)\} e^{m\log(1 + x)} = m(1 + x)^{m-1}, \end{gather*} so that the rule for the differentiation of~$(1 + x)^{m}$ remains the same, and the proof of the theorem given in \SecNo[§]{163} retains its validity. We shall not discuss the question of the convergence of the series when $x = 1$ or $x = -1$.\footnote {See Bromwich, \textit{Infinite Series}, pp.~150~\textit{et~seq.}; Hobson, \textit{Plane Trigonometry} (3rd~edition), p.~271.} \begin{Examples}{XCII.} \Item{1.} Prove that if $-1 < x < 1$ then \[ \frac{1}{\sqrtp{1 + x^{2}}} = 1 - \frac{1}{2}x^{2} + \frac{1·3}{2·4}x^{4} - \dots,\quad \frac{1}{\sqrtp{1 - x^{2}}} = 1 + \frac{1}{2}x^{2} + \frac{1·3}{2·4}x^{4} + \dots. \] \Item{2.} \Topic{Approximation to quadratic and other surds.} {\Loosen Let $\sqrt{M}$ be a quadratic surd whose numerical value is required. Let $N^{2}$ be the square nearest to~$M$; and let $M = N^{2} + x$ or $M = N^{2} - x$, $x$~being positive. Since $x$~cannot be greater than~$N$, $x/N^{2}$~is comparatively small and the surd $\sqrt{M} = N\sqrtb{1 ± (x/N^{2})}$ can be expressed in a series} \[ = N\left\{ 1 ± \frac{1}{2}\left(\frac{x}{N^{2}}\right) - \frac{1·1}{2·4}\left(\frac{x}{N^{2}}\right)^{2} ± \dots \right\}, \] which is at any rate fairly rapidly convergent, and may be very rapidly so. Thus \[ \sqrt{67} = \sqrtp{64 + 3} = 8\left\{ 1 + \frac{1}{2}\left(\frac{3}{64}\right) - \frac{1·1}{2·4}\left(\frac{3}{64}\right)^{2} + \dots \right\}. \] Let us consider the error committed in taking~$8\frac{3}{16}$ (the value given by the first two terms) as an approximate value. After the second term the terms alternate in sign and decrease. Hence the error is one of excess, and is less than~$3^{2}/64^{2}$, which is less than~$.003$. 
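[** TN: The estimate for $\sqrt{67}$ in Ex.~2 may be verified numerically; this Python sketch is an editorial aside, not part of Hardy's text.]

```python
import math

# sqrt(67) = 8 * sqrt(1 + 3/64); the first two binomial terms give 8 + 3/16
approx = 8 * (1 + 0.5 * (3 / 64))
error = approx - math.sqrt(67)   # an error of excess, less than 3^2/64^2
```

The computed error is about $.0021$, confirming that it is one of excess and below the stated bound $3^{2}/64^{2} < .003$.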
\Item{3\Add{.}} If $x$~is small compared with~$N^{2}$ then \[ \sqrtp{N^{2} + x} = N + \frac{x}{4N} + \frac{Nx}{2(2N^{2} + x)}, \] the error being of the order~$x^{4}/N^{7}$. Apply the process to~$\sqrt{907}$. [Expanding by the binomial theorem, we have \[ \sqrtp{N^{2} + x} = N + \frac{x}{2N} - \frac{x^{2}}{8N^{3}} + \frac{x^{3}}{16N^{5}}, \] the error being less than the numerical value of the next term, viz.\ $5x^{4}/128N^{7}$. Also \[ \frac{Nx}{2(2N^{2} + x)} = \frac{x}{4N} \left(1 + \frac{x}{2N^{2}}\right)^{-1} = \frac{x}{4N} - \frac{x^{2}}{8N^{3}} + \frac{x^{3}}{16N^{5}}, \] the error being less than~$x^{4}/32N^{7}$. The result follows. The same method may be applied to surds other than quadratic surds, \eg\ to~$\sqrt[3]{1031}$.] \PageSep{386} \Item{4.} If $M$~differs from~$N^{3}$ by less than $1$~per~cent.\ of either then $\sqrt[3]{M}$~differs from $\frac{2}{3}N + \frac{1}{3}(M/N^{2})$ by less than $N/90\MC000$. \MathTrip{1882.} \Item{5.} If $M = N^{4} + x$, and $x$~is small compared with~$N$, then a good approximation for~$\sqrt[4]{M}$ is \[ \frac{51}{56} N + \frac{5}{56}\, \frac{M}{N^{3}} + \frac{27Nx}{14(7M + 5N^{4})}. \] Show that when $N = 10$, $x = 1$, this approximation is accurate to $16$~places of decimals. \MathTrip{1886.} \Item{6.} Show how to sum the series \[ \sum_{0}^{\infty} P_{r}(n) \binom{m}{n} x^{n}, \] where $P_{r}(n)$~is a polynomial of degree~$r$ in~$n$. [Express $P_{r}(n)$ in the form $A_{0} + A_{1}n + A_{2}n(n - 1) + \dots$ as in \Ex{xc}.~7.] \Item{7.} Sum the series $\sum\limits_{0}^{\infty} n \dbinom{m}{n} x^{n}$, $\sum\limits_{0}^{\infty} n^{2} \dbinom{m}{n} x^{n}$ and prove that \[ \sum_{0}^{\infty} n^{3} \binom{m}{n} x^{n} = \{m^{3}x^{3} + m(3m - 1)x^{2} + mx\}(1 + x)^{m-3}. \] \end{Examples} \begin{Remark} \Paragraph{216. 
An alternative method of development of the theory of the exponential and logarithmic functions.} We shall now give an outline of a method of investigation of the properties of $e^{x}$ and $\log x$ entirely different in logical order from that followed in the preceding pages. This method starts from the exponential series $1 + x + \dfrac{x^{2}}{2!} + \dots$. We know that this series is convergent for all values of~$x$, and we may therefore define the function $\exp x$ by the equation \[ \exp x = 1 + x + \frac{x^{2}}{2!} + \dots. \Tag{(1)} \] We then prove, as in \Ex{lxxxi}.~7, that \[ \exp x × \exp y = \exp(x + y). \Tag{(2)} \] Again \[ \frac{\exp h - 1}{h} = 1 + \frac{h}{2!} + \frac{h^{2}}{3!} + \dots = 1 + \rho(h), \] where $\rho(h)$~is numerically less than \[ |\tfrac{1}{2}h| + |\tfrac{1}{2}h|^{2} + |\tfrac{1}{2}h|^{3} + \dots = |\tfrac{1}{2}h|/(1 - |\tfrac{1}{2}h|), \] so that $\rho(h) \to 0$ as $h \to 0$. And so \[ \frac{\exp(x + h) - \exp x}{h} = \exp x \left(\frac{\exp h - 1}{h}\right) \to \exp x \] as $h \to 0$, or \[ D_{x} \exp x = \exp x. \Tag{(3)} \] Incidentally we have proved that $\exp x$ is a continuous function. We have now a choice of procedure. Writing $y = \exp x$ and observing that $\exp 0 = 1$, we have \[ \frac{dy}{dx} = y,\quad x = \int_{1}^{y} \frac{dt}{t}, \] \PageSep{387} and, if we define the logarithmic function as the function inverse to the exponential function, we are brought back to the point of view adopted earlier in this chapter. But we may proceed differently. From~\Eq{(2)} it follows that if $n$~is a positive integer then \[ (\exp x)^{n} = \exp nx,\quad (\exp 1)^{n} = \exp n. \] If $x$~is a positive rational fraction~$m/n$, then \[ \{\exp(m/n)\}^{n} = \exp m = (\exp 1)^{m}, \] and so $\exp(m/n)$~is equal to the positive value of~$(\exp 1)^{m/n}$. 
This result may be extended to negative rational values of~$x$ by means of the equation \[ \exp x \exp(-x) = 1; \] and so we have \[ \exp x = (\exp 1)^{x} = e^{x}, \] say, where \[ e = \exp 1 = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \dots, \] for all rational values of~$x$. Finally we define $e^{x}$, when $x$~is irrational, as being equal to~$\exp x$. The logarithm is then defined as the function inverse to $\exp x$ or~$e^{x}$. \Par{Example.} Develop the theory of the binomial series \[ 1 + \binom{m}{1} x + \binom{m}{2} x^{2} + \dots = f(m, x), \] where $-1 < x < 1$, in a similar manner, starting from the equation \[ f(m, x) f(m', x) = f(m + m'\Add{,} x) \] (\Ex{lxxxi}.~6). \end{Remark} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER IX\protect\footnotemark} \footnotetext{A considerable number of these examples are taken from Bromwich's \textit{Infinite Series}.} \begin{Examples}{} \Item{1.} Given that $\log_{10} e = .4343$ and that $2^{10}$ and $3^{21}$ are nearly equal to powers of~$10$, calculate $\log_{10}2$ and $\log_{10}3$ to four places of decimals. \MathTrip{1905.} \Item{2.} Determine which of $(\frac{1}{2}e)^{\sqrt{3}}$ and $(\sqrt{2})^{\frac{1}{2}\pi}$ is the greater. [Take logarithms and observe that $\sqrt{3}/(\sqrt{3} + \frac{1}{4}\pi) < \frac{2}{5} \sqrt{3} < .6929 < \log 2$.] \Item{3.} {\Loosen Show that $\log_{10}n$ cannot be a rational number if $n$~is any positive integer not a power of~$10$. [If $n$~is not divisible by~$10$, and $\log_{10}n = p/q$, we have $10^{p} = n^{q}$, which is impossible, since $10^{p}$~ends with~$0$ and $n^{q}$~does not. If $n = 10^{a}N$, where $N$~is not divisible by~$10$, then $\log_{10}N$ and therefore} \[ \log_{10}n = a + \log_{10}N \] cannot be rational.] \PageSep{388} \Item{4.} For what values of~$x$ are the functions $\log x$, $\log\log x$, $\log\log\log x$,~\dots\ (\ia)~equal to~$0$ (\ib)~equal to~$1$ (\ic)~not defined? Consider also the same question for the functions $lx$, $llx$, $lllx$,~\dots, where $lx = \log |x|$. 
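[** TN: The thresholds asked for in Ex.~4, and the inequalities $e^{e} > 10$, etc., quoted earlier in the chapter, may be checked numerically; this Python sketch is an editorial aside, not part of Hardy's text.]

```python
import math

def log_k(x, k):
    # apply log k times: log_2 x = log log x, log_3 x = log log log x, ...
    for _ in range(k):
        x = math.log(x)
    return x

# log_2 x > 0 requires x > e, log_3 x > 0 requires x > e^e, and e^e > 10
```

For arguments below the threshold the iterated logarithm is negative, and one step earlier it is undefined.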
\Item{5.} Show that \[ \log x - \binom{n}{1} \log(x + 1) + \binom{n}{2} \log(x + 2) - \dots + (-1)^{n} \log(x + n) \] is negative and increases steadily towards $0$ as $x$~increases from $0$ towards~$\infty$. [The derivative of the function is \[ \sum_{0}^{n} (-1)^{r} \binom{n}{r} \frac{1}{x + r} = \frac{n!}{x(x + 1) \dots (x + n)}, \] as is easily seen by splitting up the right-hand side into partial fractions. This expression is positive, and the function itself tends to zero as $x \to \infty$, since \[ \log(x + r) = \log x + \epsilon_{x}, \] where $\epsilon_{x} \to 0$, and $1 - \dbinom{n}{1} + \dbinom{n}{2} - \dots = 0$.] \Item{6.} Prove that \[ \left(\frac{d}{dx}\right)^{n} \frac{\log x}{x} = \frac{(-1)^{n} n!}{x^{n+1}} \left(\log x - 1 - \frac{1}{2} - \dots - \frac{1}{n}\right). \] \MathTrip{1909.} \Item{7.} If $x > -1$ then $x^{2} > (1 + x) \{\log(1 + x)\}^{2}$. \MathTrip{1906.} [Put $1 + x = e^{\xi}$, and use the fact that $\sinh \xi > \xi$ when $\xi > 0$.] \Item{8.} Show that $\{\log(1 + x)\}/x$ and $x/\{(1 + x)\log(1 + x)\}$ both decrease steadily as $x$~increases from $0$ towards~$\infty$. \Item{9.} Show that, as $x$~increases from $-1$ towards~$\infty$, the function $(1 + x)^{-1/x}$ assumes once and only once every value between $0$ and~$1$. \MathTrip{1910.} \Item{10.} Show that $\dfrac{1}{\log(1 + x)} - \dfrac{1}{x} \to \dfrac{1}{2}$ as $x \to 0$. \Item{11.} Show that $\dfrac{1}{\log(1 + x)} - \dfrac{1}{x}$ decreases steadily from $1$ to~$0$ as $x$~increases from $-1$ towards~$\infty$. [The function is undefined when $x = 0$, but if we attribute to it the value~$\frac{1}{2}$ when $x = 0$ it becomes continuous for $x = 0$. Use Ex.~7 to show that the derivative is negative.] \Item{12.} Show that the function $(\log \xi - \log x)/(\xi - x)$, where $\xi$~is positive, decreases steadily as $x$~increases from $0$ to~$\xi$, and find its limit as $x \to \xi$. 
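[** TN: The limit of Ex.~10 and the steady decrease of Ex.~11 may be illustrated numerically; this Python sketch is an editorial aside, not part of Hardy's text.]

```python
import math

def f(x):
    # f(x) = 1/log(1 + x) - 1/x  (Misc. Exs. 10 and 11)
    return 1 / math.log(1 + x) - 1 / x

# f decreases steadily as x increases, and f(x) -> 1/2 as x -> 0
samples = [f(-0.9), f(1e-4), f(0.01)]
```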
\Item{13.} Show that $e^{x} > Mx^{N}$, where $M$~and~$N$ are large positive numbers, \DPtypo{f}{if} $x$~is greater than the greater of $2\log M$ and~$16N^{2}$. [It is easy to prove that $\log x < 2\sqrt{x}$; and so the inequality given is certainly satisfied if \[ x > \log M + 2N\sqrt{x}, \] and therefore certainly satisfied if $\frac{1}{2}x > \log M$, $\frac{1}{2}x > 2N\sqrt{x}$.] \PageSep{389} \Item{14.} If $f(x)$ and $\phi(x)$ tend to infinity as $x \to \infty$, and $f'(x)/\phi'(x) \to \infty$, then $f(x)/\phi(x) \to \infty$. [Use the result of \Ref{Ch.}{VI}, \MiscEx{VI}~33.] By taking $f(x) = x^{\alpha}$, $\phi(x) = \log x$, prove that $(\log x)/x^{\alpha} \to 0$ for all positive values of~$\alpha$. \Item{15.} If $p$ and~$q$ are positive integers then \[ \frac{1}{pn + 1} + \frac{1}{pn + 2} + \dots + \frac{1}{qn} \to \log\left(\frac{q}{p}\right) \] as $n \to \infty$. [Cf.\ \Ex{lxxviii}.~6.] \Item{16.} Prove that if $x$~is positive then $n\log\{\frac{1}{2}(1 + x^{1/n})\} \to -\frac{1}{2}\log x$ as $n \to \infty$. [We have \[ n\log\{\tfrac{1}{2}(1 + x^{1/n})\} = n\log\{1 - \tfrac{1}{2}(1 - x^{1/n})\} = \tfrac{1}{2}n(1 - x^{1/n}) \frac{\log(1 - u)}{u} \] where $u = \frac{1}{2}(1 - x^{1/n})$. Now use \SecNo[§]{209} and \Ex{lxxxii}.~4.] \Item{17.} Prove that if $a$ and~$b$ are positive then \[ \{\tfrac{1}{2}(a^{1/n} + b^{1/n})\}^{n} \to \sqrtp{ab}. \] %[** TN: No paragraph break in the original] [Take logarithms and use Ex.~16.] \Item{18.} Show that \[ 1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{2n - 1} = \tfrac{1}{2}\log n + \log 2 + \tfrac{1}{2} \gamma + \epsilon_{n}, \] where $\gamma$~is Euler's constant (\Ex{lxxxix}.~1) and $\epsilon_{n} \to 0$ as $n \to \infty$. \Item{19.} Show that \[ 1 + \tfrac{1}{3} - \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} - \tfrac{1}{4} + \tfrac{1}{9} + \dots = \tfrac{3}{2} \log 2, \] the series being formed from the series $1 - \frac{1}{2} + \frac{1}{3} - \dots$ by taking alternately two positive terms and then one negative. 
[The sum of the first $3n$ terms is \begin{multline*} 1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{4n - 1} - \frac{1}{2} \left(1 + \frac{1}{2} + \dots + \frac{1}{n}\right)\\ = \tfrac{1}{2}\log 2n + \log 2 + \tfrac{1}{2}\gamma + \epsilon_{n} - \tfrac{1}{2}(\log n + \gamma + \epsilon_{n}'), \end{multline*} where $\epsilon_{n}$ and~$\epsilon'_{n}$ tend to~$0$ as $n \to \infty$. (Cf.\ \Ex{lxxviii}.~6).] \Item{20.} Show that $1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \dots = \frac{1}{2}\log 2$. \Item{21.} Prove that \[ \sum_{1}^{n} \frac{1}{\nu(36\nu^{2} - 1)} = -3 + 3\Sigma_{3n+1} - \Sigma_{n} - S_{n} \] where $S_{n} = 1 + \dfrac{1}{2} + \dots + \dfrac{1}{n}$, $\Sigma_{n} = 1 + \dfrac{1}{3} + \dots + \dfrac{1}{2n - 1}$. Hence prove that the sum of the series when continued to infinity is \[ -3 + \tfrac{3}{2}\log 3 + 2\log 2. \] \MathTrip{1905.} \Item{22.} Show that \[ \sum_{1}^{\infty} \frac{1}{n(4n^{2} - 1)} = 2\log 2 - 1, \quad \sum_{1}^{\infty} \frac{1}{n(9n^{2} - 1)} = \tfrac{3}{2}(\log 3 - 1). \] \PageSep{390} \Item{23.} Prove that the sums of the four series \[ \sum_{1}^{\infty} \frac{1}{4n^{2} - 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{4n^{2} - 1},\quad \sum_{1}^{\infty} \frac{1}{(2n + 1)^{2} - 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{(2n + 1)^{2} - 1} \] are $\frac{1}{2}$, $\frac{1}{4}\pi - \frac{1}{2}$, $\frac{1}{4}$, $\frac{1}{2}\log 2 - \frac{1}{4}$ respectively. \Item{24.} Prove that $n!\, (a/n)^{n}$ tends to~$0$ or to~$\infty$ according as $a < e$ or $a > e$. [If $u_{n} = n!\, (a/n)^{n}$ then $u_{n+1}/u_{n} = a\{1 + (1/n)\}^{-n} \to a/e$. 
It can be shown that the function tends to~$\infty$ when $a = e$: for a proof, which is rather beyond the scope of the theorems of this chapter, see Bromwich's \textit{Infinite Series}, pp.~461~\textit{et~seq.}] \Item{25.} Find the limit as $x \to \infty$ of \[ \left(\frac{a_{0} + a_{1} x + \dots + a_{r} x^{r}} {b_{0} + b_{1} x + \dots + b_{r} x^{r}}\right)^{\lambda_{0}+\lambda_{1}x}, \] distinguishing the different cases which may arise. \MathTrip{1886.} \Item{26.} Prove that \[ \sum \log \left(1 + \frac{x}{n}\right)\quad (x > 0) \] diverges to~$\infty$. [Compare with $\sum (x/n)$.] Deduce that if $x$~is positive then \[ (1 + x)(2 + x) \dots (n + x)/n! \to \infty \] as $n \to \infty$. [The logarithm of the function is $\sum\limits_{1}^{n} \log \left(1 + \dfrac{x}{\nu}\right)$.] \Item{27.} Prove that if $x > -1$ then \begin{multline*} \frac{1}{(x + 1)^{2}} = \frac{1}{(x + 1) (x + 2)} + \frac{1!}{(x + 1) (x + 2) (x + 3)}\\ + \frac{2!}{(x + 1) (x + 2) (x + 3) (x + 4)} + \dots. \end{multline*} \MathTrip{1908.} [The difference between $1/(x + 1)^{2}$ and the sum of the first $n$ terms of the series is \[ \frac{1}{(x + 1)^{2}}\, \frac{n!}{(x + 2) (x + 3) \dots (x + n + 1)}.] \] \Item{28.} No equation of the type \[ Ae^{\alpha x} + Be^{\beta x} + \dots = 0, \] where $A$, $B$,~\dots\ are polynomials and $\alpha$, $\beta$,~\dots\ different real numbers, can hold for all values of~$x$. [If $\alpha$~is the algebraically greatest of $\alpha$, $\beta$,~\dots, then the term~$Ae^{\alpha x}$ outweighs all the rest as $x \to \infty$.] \Item{29.} Show that the sequence \[ a_{1} = e,\quad a_{2} = e^{e^{2}},\quad a_{3} = e^{e^{e^{3}}},\ \dots \] tends to infinity more rapidly than any member of the exponential scale. [Let $e_{1}(x) = e^{x}$, $e_{2}(x) = e^{e_{1}(x)}$, and so on. Then, if $e_{k}(x)$~is any member of the exponential scale, $a_{n} > e_{k}(n)$ when $n > k$.] 
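[** TN: The first sum of Ex.~22 may be checked numerically; this Python sketch is an editorial aside, not part of Hardy's text. Since the terms are of order $1/n^{3}$, a few thousand of them suffice.]

```python
import math

# Partial sum of 1/(n(4n^2 - 1))  (Misc. Ex. 22); the tail beyond
# n = 5000 is below 1e-8, so the partial sum is very close to 2 log 2 - 1
s = sum(1.0 / (n * (4 * n * n - 1)) for n in range(1, 5001))
target = 2 * math.log(2) - 1
```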
\PageSep{391} \Item{30.} Prove that \[ \frac{d}{dx} \{\phi(x)\}^{\psi(x)} = \frac{d}{dx} \{\phi(x)\}^{\alpha} + \frac{d}{dx} \{\beta^{\psi(x)}\} \] where $\alpha$~is to be put equal to~$\psi(x)$ and $\beta$ to~$\phi(x)$ after differentiation. Establish a similar rule for the differentiation of $\phi(x)^{[\{\psi(x)\}^{\chi(x)}]}$. \Item{31.} Prove that if $D_{x}^{n} e^{-x^{2}} = e^{-x^{2}} \phi_{n}(x)$ then (i)~$\phi_{n}(x)$ is a polynomial of degree~$n$, (ii)~$\phi_{n+1} = -2x\phi_{n} + \phi_{n}'$, and (iii)~all the roots of $\phi_{n} = 0$ are real and distinct, and separated by those of $\phi_{n-1} = 0$. [To prove~(iii) assume the truth % [** TN: Typo in original; fixed while swapping roles of n and \kappa] of the result for $\DPtypo{n}{\kappa} = 1$, $2$,~\dots\Add{,} $\DPtypo{\kappa}{n}$, and consider the signs of~$\DPtypo{\phi_{\kappa+1}}{\phi_{n+1}}$ for the $n$~values of~$x$ for which $\DPtypo{\phi_{\kappa}}{\phi_{n}} = 0$ and for large (positive or negative) values of~$x$.] \Item{32.} The general solution of $f(xy) = f(x)f(y)$, where $f$~is a differentiable function, is~$x^{a}$, where $a$~is a constant: and that of \[ f(x + y) + f(x - y) = 2f(x)f(y) \] is $\cosh ax$ or $\cos ax$, according as $f''(0)$~is positive or negative. [In proving the second result assume that $f$~has derivatives of the first three orders. Then \[ 2f(x) + y^{2}\{f''(x) + \epsilon_{y}\} = 2f(x)[f(0) + yf'(0) + \tfrac{1}{2} y^{2}\{f''(0) + \epsilon_{y}'\}], \] where $\epsilon_{y}$ and~$\epsilon_{y}'$ tend to zero with~$y$. It follows that $f(0) = 1$, $f'(0) = 0$, $f''(x) = f''(0)f(x)$, so that $a = \sqrtb{f''(0)}$ or $a = \sqrtb{-f''(0)}$.] \Item{33.} How do the functions $x^{\sin(1/x)}$, $x^{\sin^{2}(1/x)}$, $x^{\cosec(1/x)}$ behave as $x \to +0$? \Item{34.} Trace the curves $y = \tan x e^{\tan x}$, $y = \sin x \log \tan \frac{1}{2}x$. \Item{35.} The equation $e^{x} = ax + b$ has one real root if $a < 0$ or $a = 0$, $b > 0$. 
If $a > 0$ then it has two real roots or none, according as $a\log a > b - a$ or $a\log a < b - a$. \Item{36.} Show by graphical considerations that the equation $e^{x} = ax^{2} + 2bx + c$ has one, two, or three real roots if $a > 0$, none, one, or two if $a < 0$; and show how to distinguish between the different cases. \Item{37.} Trace the curve $y = \dfrac{1}{x} \log\left(\dfrac{e^{x} - 1}{x}\right)$, showing that the point $(0, \frac{1}{2})$ is a centre of symmetry, and that as $x$~increases through all real values, $y$~steadily increases from $0$ to~$1$. Deduce that the equation \[ \frac{1}{x} \log\left(\frac{e^{x} - 1}{x}\right) = \alpha \] has no real root unless $0 < \alpha < 1$, and then one, whose sign is the same as that of $\alpha - \frac{1}{2}$. [In the first place \[ y - \tfrac{1}{2} = \frac{1}{x} \left\{\log\left(\frac{e^{x} - 1}{x}\right) - \log e^{\frac{1}{2} x}\right\} = \frac{1}{x} \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right) \] is clearly an odd function of~$x$. Also \[ \frac{dy}{dx} = \frac{1}{x^{2}} \left\{\tfrac{1}{2} x\coth \tfrac{1}{2}x - 1 - \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right)\right\}. \] \PageSep{392} The function inside the large bracket tends to zero as $x \to 0$; and its derivative is \[ \frac{1}{x} \left\{1 - \left(\frac{\frac{1}{2}x}{\sinh \frac{1}{2}x}\right)^2\right\}, \] which has the sign of~$x$. Hence $dy/dx > 0$ for all values of~$x$.] \Item{38.} Trace the curve $y = e^{1/x} \sqrtp{x^{2} + 2x}$, and show that the equation \[ e^{1/x} \sqrtp{x^{2} + 2x} = \alpha \] has no real roots if $\alpha$~is negative, one negative root if \[ %[** TN: In-line in the original] 0 < \alpha < a = e^{1/\sqrt{2}} \sqrtp{2 + 2\sqrt{2}}, \] and two positive roots and one negative if $\alpha > a$. \Item{39.} Show that the equation $f_{n}(x) = 1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!} = 0$ has one real root if $n$~is odd and none if $n$~is even. [Assume this proved for $n = 1$, $2$,~\dots~$2k$. 
Then $f_{2k+1}(x) = 0$ has at least one real root, since its degree is odd, and it cannot have more since, if it had, $f'_{2k+1}(x)$ or~$f_{2k}(x)$ would have to vanish once at least. Hence $f_{2k+1}(x) = 0$ has just one root, and so $f_{2k+2}(x) = 0$ cannot have more than two. If it has two, say $\alpha$~and~$\beta$, then $f'_{2k+2}(x)$ or~$f_{2k+1}(x)$ must vanish once at least between $\alpha$~and~$\beta$, say at~$\gamma$. And \[ f_{2k+2}(\gamma) = f_{2k+1}(\gamma) + \frac{\gamma^{2k+2}}{(2k + 2)!} > 0. \] But $f_{2k+2}(x)$~is also positive when $x$~is large (positively or negatively), and a glance at a figure will show that these results are contradictory. Hence $f_{2k+2}(x) = 0$ has no real roots.] \Item{40.} Prove that if $a$~and~$b$ are positive and nearly equal then \[ \log \frac{a}{b} = \frac{1}{2}(a - b) \left(\frac{1}{a} + \frac{1}{b}\right), \] approximately, the error being about $\frac{1}{6}\{(a - b)/a\}^{3}$. [Use the logarithmic series. This formula is interesting historically as having been employed by Napier for the numerical calculation of logarithms.] \Item{41.} Prove by multiplication of series that if $-1 < x < 1$ then \begin{align*} \tfrac{1}{2}\{\log(1 + x)\}^{2} &= \tfrac{1}{2} x^{2} - \tfrac{1}{3}(1 + \tfrac{1}{2})x^{3} + \tfrac{1}{4}(1 + \tfrac{1}{2} + \tfrac{1}{3})x^{4} - \dots,\\ \tfrac{1}{2}(\arctan x)^{2} &= \tfrac{1}{2} x^{2} - \tfrac{1}{4}(1 + \tfrac{1}{3})x^{4} + \tfrac{1}{6}(1 + \tfrac{1}{3} + \tfrac{1}{5})x^{6} - \dots. \end{align*} \Item{42.} Prove that \[ %[** TN: Unified a to \alpha, matching the left-hand side] (1 + \alpha x)^{1/x} = e^{\alpha}\{1 - \tfrac{1}{2} \DPtypo{a}{\alpha}^{2}x + \tfrac{1}{24}(8 + 3\DPtypo{a}{\alpha})\DPtypo{a}{\alpha}^{3}x^{2}(1 + \epsilon_{x})\}, \] where $\epsilon_{x} \to 0$ with~$x$. \Item{43.} The first $n + 2$ terms in the expansion of $\log\left(1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!}\right)$ in powers of~$x$ are \[ x - \frac{x^{n+1}}{n!} \left\{\frac{1}{n + 1} - \frac{x}{1!\, (n + 2)} + \frac{x^{2}}{2!\, (n + 3)} - \dots + (-1)^{n} \frac{x^{n}}{n!\, (2n + 1)} \right\}.
\] \MathTrip{1899.} \PageSep{393} \Item{44.} Show that the expansion of \[ \exp \left(-x - \frac{x^{2}}{2} - \dots - \frac{x^{n}}{n}\right) \] in powers of~$x$ begins with the terms \[ 1 - x + \frac{x^{n+1}}{n + 1} - \sum_{s=1}^{n} \frac{x^{n+s+1}}{(n + s)(n + s + 1)}. \] \MathTrip{1909.} \Item{45.} Show that if $-1 < x < 1$ then \begin{align*} \frac{1}{3}x + \frac{1·4}{3·6}2^{2}x^{2} + \frac{1·4·7}{3·6·9}3^{2}x^{3} + \dots &= \frac{x(x + 3)}{9(1 - x)^{7/3}},\\ \frac{1}{3}x + \frac{1·4}{3·6}2^{3}x^{2} + \frac{1·4·7}{3·6·9}3^{3}x^{3} + \dots &= \frac{x(x^{2} + 18x + 9)}{27(1 - x)^{10/3}}. \end{align*} [Use the method of \Ex{xcii}.~6. The results are more easily obtained by differentiation; but the problem of the differentiation of an infinite series is beyond our range.] \Item{46.} Prove that \begin{align*} \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)} &= \frac{1}{a - b} \log\left(\frac{a}{b}\right), \\ \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a - b)^{2}b}\left\{a - b - b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a - b)^{2}} \left\{a\log\left(\frac{a}{b}\right) - a + b\right\},\\ \int_{0}^{\infty} \frac{dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{(a^{2} + b^{2})b} \left\{\tfrac{1}{2}\pi a - b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{a^{2} + b^{2}} \left\{\tfrac{1}{2}\pi b + a\log\left(\frac{a}{b}\right)\right\}, \end{align*} provided that $a$~and~$b$ are positive. Deduce, and verify independently, that each of the functions \[ a - 1 - \log a,\quad a\log a - a + 1,\quad \tfrac{1}{2}\pi a - \log a,\quad \tfrac{1}{2}\pi + a\log a \] is positive for all positive values of~$a$. 
\Item{47.} Prove that if $\alpha$,~$\beta$,~$\gamma$ are all positive, and $\beta^{2} > \alpha\gamma$, then \[ \int_{0}^{\infty} \frac{dx}{\alpha x^{2} + 2\beta x + \gamma} = \frac{1}{\sqrtp{\beta^{2} - \alpha\gamma}} \log \left\{\frac{\beta + \sqrtp{\beta^{2} - \alpha\gamma}} {\sqrtp{\alpha\gamma}} \right\}; \] while if $\alpha$~is positive and $\alpha\gamma > \beta^{2}$ the value of the integral is \[ \frac{1}{\sqrtp{\alpha\gamma - \beta^{2}}} \arctan \left\{\frac{\sqrtp{\alpha\gamma - \beta^{2}}}{\beta}\right\}, \] that value of the inverse tangent being chosen which lies between $0$ and~$\pi$. Are there any other really different cases in which the integral is convergent? \Item{48.} Prove that if $a > -1$ then \[ \int_{1}^{\infty} \frac{dx}{(x + a)\sqrtp{x^{2} - 1}} = \int_{0}^{\infty} \frac{dt}{\cosh t + a} = 2\int_{1}^{\infty}\frac{du}{u^{2} + 2au + 1}; \] \PageSep{394} and deduce that the value of the integral is \[ \frac{2}{\sqrtp{1 - a^{2}}} \arctan \bigsqrtp{\frac{1 - a}{1 + a}} \] if $-1 < a < 1$, and \[ \frac{1}{\sqrtp{a^{2} - 1}} \log\frac{\sqrtp{a + 1} + \sqrtp{a - 1}} {\sqrtp{a + 1} - \sqrtp{a - 1}} = \frac{2}{\sqrtp{a^{2} - 1}} \argtanh \bigsqrtp{\frac{a - 1}{a + 1}} \] if $a > 1$. Discuss the case in which $a = 1$. \Item{49.} Transform the integral $\ds\int_{0}^{\infty} \frac{dx}{(x + a) \sqrtp{x^{2} + 1}}$, where $a > 0$, in the same ways, showing that its value is \[ \frac{1}{\sqrtp{a^{2} + 1}} \log\frac{a + 1 + \sqrtp{a^{2} + 1}}{a + 1 - \sqrtp{a^{2} + 1}} = \frac{2}{\sqrtp{a^{2} + 1}} \argtanh \frac{\sqrtp{a^{2} + 1}}{a + 1}\Add{.} \] \Item{50.} Prove that \[ \int_{0}^{1} \arctan x\, dx = \tfrac{1}{4}\pi - \tfrac{1}{2}\log 2. \] \Item{51.} If $0 < \alpha < 1$, $0 < \beta < 1$, then \[ \int_{-1}^{1} \frac{dx}{\sqrtb{(1 - 2\alpha x + \alpha^{2})(1 - 2\beta x + \beta^{2})}} = \frac{1}{\sqrtp{\alpha\beta}} \log \frac{1 + \sqrtp{\alpha\beta}}{1 - \sqrtp{\alpha\beta}}. 
\] \Item{52.} Prove that if $a > b > 0$ then \[ \int_{-\infty}^{\infty} \frac{d\theta}{a\cosh \theta + b\sinh \theta} = \frac{\pi}{\sqrtp{a^{2} - b^{2}}}\Add{.} \] \Item{53.} Prove that \[ \int_{0}^{1} \frac{\log x}{1 + x^{2}}\, dx = -\int_{1}^{\infty} \frac{\log x}{1 + x^{2}}\, dx,\quad \int_{0}^{\infty} \frac{\log x}{1 + x^{2}}\, dx = 0\Add{,} \] and deduce that if $a > 0$ then \[ \int_{0}^{\infty} \frac{\log x}{a^{2} + x^{2}}\, dx = \frac{\pi}{2a}\log a. \] [Use the substitutions $x = 1/t$ and $x = au$.] \Item{54.} Prove that \[ %[** TN: In-line in the original] \int_{0}^{\infty} \log \left(1 + \frac{a^{2}}{x^{2}}\right) dx = \pi a \] if $a > 0$. [Integrate by parts.] \end{Examples} \PageSep{395} \Chapter{X}{THE GENERAL THEORY OF THE LOGARITHMIC, EXPONENTIAL, AND CIRCULAR FUNCTIONS} \Paragraph{217. Functions of a complex variable.} In \Ref{Ch.}{III} we defined the complex variable \[ z = x + iy,\footnotemark \] \footnotetext{In this chapter we shall generally find it convenient to write $x + iy$ rather than $x + yi$.}% and we considered a few simple properties of some classes of expressions involving~$z$, such as the polynomial~$P(z)$. It is natural to describe such expressions as \emph{functions} of~$z$, and in fact we did describe the quotient $P(z)/Q(z)$, where $P(z)$ and~$Q(z)$ are polynomials, as a `rational function'. We have however given no general definition of what is meant by a function of~$z$. It might seem natural to define a function of~$z$ in the same way as that in which we defined a function of the real variable~$x$, \ie\ to say that $Z$~is a function of~$z$ if any relation subsists between $z$ and~$Z$ in virtue of which a value or values of~$Z$ corresponds to some or all values of~$z$. But it will be found, on closer examination, that this definition is not one from which any profit can be derived. 
For if $z$~is given, so are $x$~and~$y$, and conversely: to assign a value of~$z$ is precisely the same thing as to assign a pair of values of $x$~and~$y$. Thus a `function of~$z$', according to the definition suggested, is precisely the same thing as \emph{a complex function \[ f(x, y) + ig(x, y), \] of the two real variables $x$~and~$y$}. For example \[ x - iy,\quad xy,\quad |z| = \sqrtp{x^{2} + y^{2}},\quad \am z = \arctan(y/x) \] are `functions of~$z$'. The definition, although perfectly legitimate, \PageSep{396} is futile because it does not really define a new idea at all. It is therefore more convenient to use the expression `function of the complex variable~$z$' in a more restricted sense, or in other words to pick out, from the general class of complex functions of the two real variables $x$~and~$y$, a special class to which the expression shall be restricted. But if we were to attempt to explain how this selection is made, and what are the characteristic properties of the special class of functions selected, we should be led far beyond the limits of this book. We shall therefore not attempt to give any general definitions, but shall confine ourselves entirely to special functions defined directly. \Paragraph{218.} We have already defined \emph{polynomials} in~$z$ (\SecNo[§]{39}), \emph{rational functions} of~$z$ (\SecNo[§]{46}), and \emph{roots} of~$z$ (\SecNo[§]{47}). There is no difficulty in extending to the complex variable the definitions of \emph{algebraical functions}, explicit and implicit, which we gave (\SecNo[§§]{26}--\SecNo{27}) in the case of the real variable~$x$. In all these cases we shall call the complex number~$z$, the argument (\SecNo[§]{44}) of the point~$z$, the \emph{argument} of the function~$f(z)$ under consideration. The question which will occupy us in this chapter is that of defining and determining the principal properties of the logarithmic, exponential, and trigonometrical or circular functions of~$z$. 
These functions are of course so far defined for real values of~$z$ only, the logarithm indeed for positive values only. We shall begin with the logarithmic function. It is natural to attempt to define it by means of some extension of the definition \[ \log x = \int_{1}^{x} \frac{dt}{t}\quad (x > 0); \] and in order to do this we shall find it necessary to consider briefly some extensions of the notion of an integral. \Paragraph{219. Real and complex curvilinear integrals.} Let $AB$ be an arc~$C$ of a curve defined by the equations \[ x = \phi(t),\quad y = \psi(t), \] where $\phi$ and~$\psi$ are functions of~$t$ with continuous differential coefficients $\phi'$ and~$\psi'$; and suppose that, as $t$~varies from $t_{0}$ to~$t_{1}$, the point~$(x, y)$ moves along the curve, in the same direction, from $A$ to~$B$. \PageSep{397} Then we define the \emph{curvilinear integral} \[ \int_{C} \{g(x, y)\, dx + h(x, y)\, dy\}, \Tag{(1)} \] {\Loosen where $g$ and~$h$ are continuous functions of $x$~and~$y$, as being equivalent to the ordinary integral obtained by effecting the formal substitutions $x = \phi(t)$, $y = \psi(t)$, \ie\ to} \[ \int_{t_{0}}^{t_{1}} \{g(\phi, \psi) \phi' + h(\phi, \psi) \psi'\}\, dt. \] We call $C$ the \emph{path of integration}. Let us suppose now that \[ z = x + iy = \phi(t) + i\psi(t), \] so that $z$~describes the curve~$C$ in Argand's diagram as $t$~varies. Further let us suppose that \[ f(z) = u + iv \] is a polynomial in~$z$ or rational function of~$z$. Then we define \[ \int_{C} f(z)\, dz \Tag{(2)} \] as meaning \[ \int_{C} (u + iv) (dx + i\, dy), \] which is itself defined as meaning \[ \int_{C} (u\, dx - v\, dy) + i\int_{C} (v\, dx + u\, dy). \] \Paragraph{220. The definition of $\Log \zeta$.} Now let $\zeta = \xi + i\eta$ be any complex number. 
We define~$\Log \zeta$, the general logarithm of~$\zeta$, by the equation \[ \Log \zeta = \int_{C} \frac{dz}{z}, \] where $C$~is a curve which starts from~$1$ and ends at~$\zeta$ and does not pass through the origin. Thus (\Fig{54}) the paths (\ia),~(\ib),~(\ic) are paths such as are contemplated in the definition. The value of~$\Log z$ is thus defined when the particular path of integration has been chosen. But at present it is not clear how far the value of~$\Log z$ resulting from the definition depends upon what path is chosen. Suppose for example that $\zeta$~is real and positive, say \PageSep{398} equal to~$\xi$. Then one possible path of integration is the straight line from $1$ to~$\xi$, a path which we may suppose to be defined by %[Illustration: Fig. 54.] \Figure[3in]{54}{p398} the equations $x = t$, $y = 0$. In this case, and with this particular choice of the path of integration, we have \[ \Log \xi = \int_{1}^{\xi} \frac{dt}{t}, \] so that $\Log \xi$~is equal to~$\log \xi$, the logarithm of~$\xi$ according to the definition given in the last chapter. Thus one value at any rate of~$\Log \xi$, when $\xi$~is real and positive, is~$\log \xi$. But in this case, as in the general case, the path of integration can be chosen in an infinite variety of different ways. There is nothing to show that \emph{every} value of~$\Log \xi$ is equal to~$\log \xi$; and in point of fact we shall see that this is not the case. This is why we have adopted the notation $\Log \zeta$,~$\Log \xi$ instead of $\log \zeta$,~$\log \xi$. $\Log \xi$~is (possibly at any rate) a many valued function, and $\log \xi$~is only one of its values. 
And in the general case, so far as we can see at present, three alternatives are equally possible, viz.\ that \Item{(1)} \Hang[4em] we may always get the same value of~$\Log \zeta$, by whatever path we go from $1$ to~$\zeta$; \Item{(2)} \Hang[4em] we may get a different value corresponding to every different path; \Item{(3)} \Hang[4em] we may get a number of different values each of which corresponds to a whole class of paths: \noindent and the truth or falsehood of any one of these alternatives is in no way implied by our definition. \PageSep{399} \Paragraph{221. The values of $\Log \zeta$.} Let us suppose that the polar coordinates of the point $z = \zeta$ are $\rho$~and~$\phi$, so that \[ \zeta = \rho(\cos\phi + i\sin\phi). \] We suppose for the present that $-\pi < \phi < \pi$, while $\rho$~may have any positive value. Thus $\zeta$~may have any value other than zero or a real negative value. The coordinates $(x, y)$ of any point on the path~$C$ are functions of~$t$, and so also are its polar coordinates~$(r, \theta)$. Also \begin{align*} \Log \zeta &= \int_{C} \frac{dz}{z} = \int_{C} \frac{dx + i\, dy}{x + iy} \\ &= \int_{t_{0}}^{t_{1}} \frac{1}{x + iy} \left(\frac{dx}{dt} + i\frac{dy}{dt}\right) dt, \end{align*} in virtue of the definitions of \SecNo[§]{219}. But $x = r\cos\theta$, $y = r\sin\theta$, and \begin{align*} \frac{dx}{dt} + i\frac{dy}{dt} &= \left(\cos\theta\, \frac{dr}{dt} - r\sin\theta\, \frac{d\theta}{dt}\right) + i\left(\sin\theta\, \frac{dr}{dt} + r\cos\theta\, \frac{d\theta}{dt}\right) \\ &= (\cos\theta + i\sin\theta) \left(\frac{dr}{dt} + ir\frac{d\theta}{dt}\right); \end{align*} so that \[ \Log \zeta = \int_{t_{0}}^{t_{1}} \frac{1}{r}\, \frac{dr}{dt}\, dt + i\int_{t_{0}}^{t_{1}} \frac{d\theta}{dt}\, dt = [\log r] + i[\theta], \] where $[\log r]$~denotes the difference between the values of~$\log r$ at the points corresponding to $t = t_{1}$ and $t = t_{0}$, and $[\theta]$~has a similar meaning. 
It is clear that \[ [\log r] = \log \rho - \log 1 = \log \rho; \] but the value of~$[\theta]$ requires a little more consideration. Let us suppose first that the path of integration is the straight line from $1$ to~$\zeta$. The initial value of~$\theta$ is the amplitude of~$1$, or rather %[Illustration: Fig. 55.] \Figure[1.75in]{55}{p399} one of the amplitudes of~$1$, viz.\ $2k\pi$, where $k$~is any integer. Let us suppose that initially $\theta = 2k\pi$. It is evident from the figure that $\theta$~increases from $2k\pi$ to~$2k\pi + \phi$ as $t$~moves along the line. Thus \[ [\theta] = (2k\pi + \phi) - 2k\pi = \phi, \] and, when the path of integration is a straight line, $\Log \zeta = \log \rho + i\phi$. \PageSep{400} We shall call this particular value of~$\Log \zeta$ the \Emph{principal value}. When $\zeta$~is real and positive, $\zeta = \rho$ and $\phi = 0$, so that the principal value of~$\Log \zeta$ is the ordinary logarithm~$\log \zeta$. Hence it will be convenient in general to denote the principal value of~$\Log \zeta$ by~$\log \zeta$. Thus \[ \log \zeta = \log \rho + i\phi, \] and the principal value is characterised by the fact that its imaginary part lies between $-\pi$ and~$\pi$. Next let us consider any path (such as those shown in \Fig{56}) such that the area or areas included %[Illustration: Fig. 56.] \Figure[2.75in]{56}{p400a} between the path and the straight line from~$1$ to~$\zeta$ does not include the origin. It is easy to see that $[\theta]$~is still equal to~$\phi$. Along the curve shown in the figure by a continuous line, for example, $\theta$, initially equal to~$2k\pi$, first decreases to the value \[ 2k\pi - XOP \] and then increases again, being equal to~$2k\pi$ at~$Q$, and finally to~$2k\pi + \phi$. The dotted curve shows a similar but slightly more complicated case in which the straight line and the curve bound two areas, neither of which includes the origin. 
Thus \begin{Result}if the path of integration is such that the closed curve formed by it and the line from~$1$ to~$\zeta$ does not include the origin, then \[ \Log \zeta = \log \zeta = \log \rho + i\phi. \] \end{Result} On the other hand it is easy to construct paths of integration such that $[\theta]$~is not equal to~$\phi$. Consider, for example, the curve indicated by a continuous line in \Fig{57}. If $\theta$~is initially equal to~$2k\pi$, it will have increased %[Illustration: Fig. 57.] \Figure[2.75in]{57}{p400b} by~$2\pi$ when we get to~$P$ and by~$4\pi$ when we get to~$Q$; and its final value will be~$2k\pi + 4\pi + \phi$, so that $[\theta] = 4\pi + \phi$ and \[ \Log \zeta = \log \rho + i(4\pi + \phi). \] \PageSep{401} In this case the path of integration winds twice round the origin in the positive sense. If we had taken a path winding $k$~times round the origin we should have found, in a precisely similar way, that $[\theta] = 2k\pi+ \phi$ and \[ \Log \zeta = \log \rho + i(2k\pi + \phi). \] Here $k$~is positive. By making the path wind round the origin in the opposite direction (as shown in the dotted path in \Fig{57}), we obtain a similar series of values in which $k$~is negative. Since $|\zeta | = \rho$, and the different angles~$2k\pi + \phi$ are the different values of~$\am \zeta$, we conclude that every value of~$\log |\zeta| + i\am \zeta$ is a value of~$\Log \zeta$; and it is clear from the preceding discussion that every value of~$\Log \zeta$ must be of this form. We may summarise our conclusions as follows: \begin{Result}the general value of $\Log \zeta$ is \[ \log |\zeta| + i\am \zeta = \log \rho + i(2k\pi + \phi), \] where $k$~is any positive or negative integer. The value of~$k$ is determined by the path of integration chosen. If this path is a straight line then $k = 0$ and\PageLabel{401} \[ \Log \zeta = \log \zeta = \log \rho + i\phi. 
\] \end{Result} In what precedes we have used~$\zeta$ to denote the argument of the function~$\Log \zeta$, and $(\xi, \eta)$ or $(\rho, \phi)$ to denote the coordinates of~$\zeta$; and $z$, $(x, y)$, $(r, \theta)$ to denote an arbitrary point on the path of integration and its coordinates. There is however no reason now why we should not revert to the natural notation in which $z$~is used as the argument of the function~$\Log z$, and we shall do this in the following examples. \begin{Examples}{XCIII.} \Item{1.} We supposed above that $-\pi < \theta < \pi$, and so excluded the case in which $z$~is \emph{real and negative}. In this case the straight line from~$1$ to~$z$ passes through~$0$, and is therefore not admissible as a path of integration. Both $\pi$ and~$-\pi$ are values of~$\am z$, and $\theta$~is equal to one or other of them: also $r = -z$. The values of~$\Log z$ are still the values of~$\log |z| + i\am z$, viz.\ \[ \log (-z) + (2k + 1)\pi i, \] where $k$~is an integer. The values~$\log (-z) + \pi i$ and~$\log (-z) - \pi i$ correspond to paths from~$1$ to~$z$ lying respectively entirely above and entirely below the real axis. Either of them may be taken as the principal value of~$\Log z$, as convenience dictates. We shall choose the value~$\log (-z) + \pi i$ corresponding to the first path. \PageSep{402} \Item{2.} The real and imaginary parts of any value of~$\Log z$ are both continuous functions of $x$~and~$y$, except for $x = 0$, $y = 0$. \Item{3.} \Topic{The functional equation satisfied by~$\Log z$.} The function~$\Log z$ satisfies the equation \[ \Log z_{1} z_{2} = \Log z_{1} + \Log z_{2}, \Tag{(1)} \] in the sense that \emph{every} value of either side of this equation is \emph{one} of the values of the other side. This follows at once by putting \[ z_{1} = r_{1}(\cos\theta_{1} + i\sin\theta_{1}),\quad z_{2} = r_{2}(\cos\theta_{2} + i\sin\theta_{2}), \] and applying the formula of \PageRef{p.}{401}.
It is however not true that \[ \log z_{1}z_{2} = \log z_{1} + \log z_{2} \Tag{(2)} \] in all circumstances. If, \eg, \[ z_{1} = z_{2} = \tfrac{1}{2}(-1 + i\sqrt{3}) = \cos \tfrac{2}{3}\pi + i \sin \tfrac{2}{3}\pi, \] then $\log z_{1} = \log z_{2} = \frac{2}{3}\pi i$, and $\log z_{1} + \log z_{2} = \frac{4}{3}\pi i$, which is one of the values of $\Log z_{1}z_{2}$, but not the principal value. In fact $\log z_{1}z_{2} = -\frac{2}{3}\pi i$. An equation such as~\Eq{(1)}, in which every value of either side is a value of the other, we shall call a \emph{complete} equation, or an equation which is \emph{completely true}. \Item{4.} The equation $\Log z^{m} = m\Log z$, where $m$~is an integer, is not completely true: every value of the right-hand side is a value of the left-hand side, but the converse is not true. \Item{5.} The equation $\Log (1/z) = -\Log z$ is completely true. It is also true that $\log (1/z) = -\log z$, except when $z$~is real and negative. \Item{6.} The equation \[ \log \left(\frac{z - a}{z - b}\right) = \log (z - a) - \log (z - b) \] is true if $z$~lies outside the region bounded by the line joining the points $z = a$, $z = b$, and lines through these points parallel to~$OX$ and extending to infinity in the negative direction. \Item{7.} The equation \[ \log \left(\frac{a - z}{b - z}\right) = \log \left(1 - \frac{a}{z}\right) - \log \left(1 - \frac{b}{z}\right) \] is true if $z$~lies outside the triangle formed by the three points $O$,~$a$,~$b$. \Item{8.} Draw the graph of the function $\Imag(\Log x)$ of the real variable~$x$. [The graph consists of the positive halves of the lines $y = 2k\pi$ and the negative halves of the lines $y = (2k + 1)\pi$.] \Item{9.} The function~$f(x)$ of the real variable~$x$, defined by \[ \pi f(x) = p\pi + (q - p)\Imag(\log x), \] is equal to~$p$ when $x$~is positive and to~$q$ when $x$~is negative. 
\PageSep{403} \Item{10.} The function~$f(x)$ defined by \[ \pi f(x) = p\pi + (q - p)\Imag\{\log(x - 1)\} + (r - q)\Imag(\log x) \] is equal to~$p$ when $x > 1$, to~$q$ when $0 < x < 1$, and to~$r$ when $x < 0$. \Item{11.} For what values of~$z$ is (i)~$\log z$ (ii)~any value of~$\Log z$ (\ia)~real or (\ib)~purely imaginary? \Item{12.} If $z = x + iy$ then $\Log\Log z = \log R + i(\Theta + 2k'\pi)$, where \[ R^{2} = (\log r)^{2} + (\theta + 2k\pi)^{2} \] and $\Theta$~is the least positive angle determined by the equations \[ \cos\Theta : \sin\Theta : 1 :: \log r : \theta + 2k\pi: \sqrtb{(\log r)^{2} + (\theta + 2k\pi)^{2}}. \] Plot roughly the doubly infinite set of values of $\Log\Log(1 + i\sqrt{3})$, indicating which of them are values of $\log\Log(1 + i \sqrt{3})$ and which of $\Log\log(1 + i\sqrt{3})$. \end{Examples} \Paragraph{222. The exponential function.} In \Ref{Ch.}{IX} we defined a function~$e^{y}$ of the real variable~$y$ as the inverse of the function $y = \log x$. It is naturally suggested that we should define a function of the complex variable~$z$ which is the inverse of the function~$\Log z$. \begin{Definition} If any value of~$\Log z$ is equal to~$\zeta$, we call $z$ the exponential of~$\zeta$ and write \[ z = \exp \zeta. \] \end{Definition} Thus $z = \exp \zeta$ if $\zeta = \Log z$. It is certain that to any given value of~$z$ correspond infinitely many different values of~$\zeta$. It would not be unnatural to suppose that, conversely, to any given value of~$\zeta$ correspond infinitely many values of~$z$, or in other words that $\exp \zeta$~is an infinitely many-valued function of~$\zeta$. This is however not the case, as is proved by the following theorem. \begin{Theorem} The exponential function $\exp \zeta$ is a one-valued function of~$\zeta$. \end{Theorem} For suppose that \[ z_{1} = r_{1}(\cos\theta_{1} + i\sin\theta_{1}),\quad z_{2} = r_{2}(\cos\theta_{2} + i\sin\theta_{2}) \] are both values of~$\exp \zeta$. 
Then \[ \zeta = \Log z_{1} = \Log z_{2}, \] and so \[ \log r_{1} + i(\theta_{1} + 2m\pi) = \log r_{2} + i(\theta_{2} + 2n\pi), \] where $m$~and~$n$ are integers. This involves \[ \log r_{1} = \log r_{2},\quad \theta_{1} + 2m\pi = \theta_{2} + 2n\pi. \] Thus $r_1 = r_2$, and $\theta_{1}$~and~$\theta_{2}$ differ by a multiple of~$2\pi$. Hence $z_{1} = z_{2}$. \PageSep{404} \begin{Corollary} If $\zeta$~is real then $\exp \zeta = e^{\zeta}$, the real exponential function of~$\zeta$ defined in \Ref{Ch.}{IX}\@. \end{Corollary} For if $z = e^{\zeta}$ then $\log z = \zeta$, \ie\ one of the values of~$\Log z$ is~$\zeta$. Hence $z = \exp \zeta$. \Paragraph{223. The value of $\exp \zeta$.} Let $\zeta = \xi + i\eta$ and \[ z = \exp \zeta = r(\cos\theta + i\sin\theta). \] Then \[ \xi + i\eta = \Log z = \log r + i(\theta + 2m\pi), \] where $m$~is an integer. Hence $\xi = \log r$, $\eta = \theta + 2m\pi$, or \[ r = e^{\xi},\quad \theta = \eta - 2m\pi; \] and accordingly \[ \exp (\xi + i\eta) = e^{\xi} (\cos\eta + i\sin\eta). \] If $\eta = 0$ then $\exp \xi = e^{\xi}$, as we have already inferred in \SecNo[§]{222}. It is clear that both the real and the imaginary parts of $\exp (\xi + i\eta)$ are continuous functions of $\xi$~and~$\eta$ for all values of $\xi$~and~$\eta$. {\Loosen\Paragraph{224. The functional equation satisfied by $\exp \zeta$.} Let $\zeta_{1} = \xi_{1} + i\eta_{1}$, $\zeta_{2} = \xi_{2} + i\eta_{2}$. Then} \begin{align*} \exp \zeta_{1} × \exp \zeta_{2} &= e^{\xi_{1}} (\cos\eta_{1} + i\sin\eta_{1}) × e^{\xi_{2}} (\cos\eta_{2} + i\sin\eta_{2}) \\ &= e^{\xi_{1}+\xi_{2}} \{\cos(\eta_{1} + \eta_{2}) + i\sin(\eta_{1} + \eta_{2})\} \\ &= \exp(\zeta_{1} + \zeta_{2}). \end{align*} {\Loosen The exponential function therefore satisfies the functional relation $f(\zeta_{1} + \zeta_{2}) = f(\zeta_{1}) f(\zeta_{2})$, an equation which we have proved already (\SecNo[§]{205}) to be true for real values of $\zeta_{1}$~and~$\zeta_{2}$.} \Paragraph{225. 
The general power~$a^{\zeta}$.} It might seem natural, as $\exp \zeta = e^{\zeta}$ when $\zeta$~is real, to adopt the same notation when $\zeta$~is complex and to drop the notation $\exp \zeta$ altogether. We shall not follow this course because we shall have to give a more general definition of the meaning of the symbol~$e^{\zeta}$: we shall find then that $e^{\zeta}$~represents a function with infinitely many values of which $\exp \zeta$~is only one. We have already defined the meaning of the symbol~$a^{\zeta}$ in a considerable variety of cases. It is defined in elementary Algebra in the case in which $a$~is real and positive and $\zeta$~rational, or $a$~real and negative and $\zeta$~a rational fraction whose denominator is odd. According to the definitions there given $a^{\zeta}$~has at most two values. \PageSep{405} In \Ref{Ch.}{III} we extended our definitions to cover the case in which $a$~is any real or complex number and $\zeta$~any rational number~$p/q$; and in \Ref{Ch.}{IX} we gave a new definition, expressed by the equation \[ a^{\zeta} = e^{\zeta\log a}, \] which applies whenever $\zeta$~is real and $a$~real and positive. Thus we have, in one way or another, attached a meaning to such expressions as \[ 3^{1/2},\quad (-1)^{1/3},\quad (\sqrt{3} + \tfrac{1}{2}i)^{-1/2},\quad (3.5)^{1+\sqrt{2}}; \] but we have as yet given no definitions which enable us to attach any meaning to such expressions as \[ (1 + i)^{\sqrt{2}},\quad 2^{i},\quad (3 + 2i)^{2+3i}. \] We shall now give a general definition of~$a^{\zeta}$ which applies to all values of $a$ and~$\zeta$, real or complex, with the one limitation that $a$~must not be equal to zero. \begin{Definition} The function~$a^{\zeta}$ is defined by the equation \[ a^{\zeta} = \exp (\zeta\Log a) \] where $\Log a$~is any value of the logarithm of~$a$. \end{Definition} We must first satisfy ourselves that this definition is consistent with the previous definitions and includes them all as particular cases. 
\Item{(1)} If $a$~is positive and $\zeta$~real, then one value of~$\zeta\Log a$, viz.\ $\zeta\log a$, is real: and $\exp (\zeta\log a) = e^{\zeta\log a}$, which agrees with the definition adopted in \Ref{Ch.}{IX}\@. The definition of \Ref{Ch.}{IX} is, as we saw then, consistent with the definition given in elementary Algebra; and so our new definition is so too. \Item{(2)} If $a = e^{\tau} (\cos\psi + i\sin\psi)$, then \begin{gather*} \Log a = \tau + i(\psi + 2m\pi), \\ \exp \{(p/q)\Log a\} = e^{p\tau/q} \Cis \{(p/q)(\psi + 2m\pi)\}, \end{gather*} where $m$~may have any integral value. It is easy to see that if $m$~assumes all possible integral values then this expression assumes $q$ and only~$q$ different values, which are precisely the values of~$a^{p/q}$ found in \SecNo[§]{48}. Hence our new definition is also consistent with that of \Ref{Ch.}{III}\@. \PageSep{406} \Paragraph{226. The general value of~$a^{\zeta}$.} Let \[ \zeta = \xi + i\eta,\quad a = \sigma(\cos\psi + i\sin\psi) \] where $-\pi < \psi \leq \pi$, so that, in the notation of \SecNo[§]{225}, $\sigma = e^{\tau}$ or $\tau = \log \sigma$. Then \[ \zeta \Log a = (\xi + i\eta)\{\log \sigma + i(\psi + 2m\pi)\} = L + iM, \] where \[ L = \xi \log \sigma - \eta(\psi + 2m\pi),\quad M = \eta\log \sigma + \xi (\psi + 2m\pi); \] and \[ a^{\zeta} = \exp(\zeta\Log a) = e^{L}(\cos M + i\sin M). \] Thus the general value of~$a^{\zeta}$ is \[ e^{\xi\log \sigma - \eta(\psi+2m\pi)} [\cos\{\eta\log \sigma + \xi(\psi + 2m\pi)\} + i\sin\{\eta\log \sigma + \xi(\psi + 2m\pi)\}]. \] In general $a^{\zeta}$~is an infinitely many-valued function. For \[ |a^{\zeta}| = e^{\xi\log \sigma - \eta(\psi+2m\pi)} \] has a different value for every value of~$m$, unless $\eta = 0$. If on the other hand $\eta = 0$, then the moduli of all the different values of~$a^{\zeta}$ are the same. But any two values differ unless their amplitudes are the same or differ by a multiple of~$2\pi$. 
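This many-valuedness is easy to confirm in floating point. The sketch below (Python's `cmath`; the helper name `general_power` is ours, chosen for this illustration and not part of the text) evaluates $(1 + i)^{\sqrt{2}}$ for several values of~$m$: since $\eta = 0$ the moduli all agree, yet no two of the values coincide, $\sqrt{2}$ being irrational.

```python
import cmath
import math

def general_power(a, zeta, m):
    # One value of a**zeta, namely exp(zeta * Log a), where
    # Log a = log|a| + i(psi + 2*m*pi) and psi is the principal amplitude of a.
    # (The helper name `general_power` is illustrative only.)
    sigma, psi = abs(a), cmath.phase(a)
    xi, eta = zeta.real, zeta.imag
    L = xi * math.log(sigma) - eta * (psi + 2 * m * math.pi)
    M = eta * math.log(sigma) + xi * (psi + 2 * m * math.pi)
    return cmath.exp(complex(L, M))  # e^L (cos M + i sin M)

a = 1 + 1j
zeta = complex(math.sqrt(2), 0)  # a real but irrational exponent
values = [general_power(a, zeta, m) for m in range(-2, 3)]
moduli = [abs(v) for v in values]
```

All five moduli equal $e^{\xi\log\sigma} = e^{\sqrt{2}\log\sqrt{2}}$, while the five complex values are pairwise distinct, in agreement with the formula of §226; the value for $m = 0$ is the principal value $\exp(\zeta\log a)$.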
This requires that $\xi(\psi + 2m\pi)$ and $\xi(\psi + 2n\pi)$, where $m$~and~$n$ are different integers, shall differ, if at all, by a multiple of~$2\pi$. But if \[ \xi(\psi + 2m\pi) - \xi(\psi + 2n\pi) = 2k\pi, \] then $\xi = k/(m - n)$ is rational. We conclude that \emph{$a^{\zeta}$~is infinitely many-valued unless $\zeta$~is real and rational}. On the other hand we have already seen that, when $\zeta$~is real and rational, $a^{\zeta}$~has but a finite number of values. \begin{Remark} The \emph{principal value} of $a^{\zeta} = \exp (\zeta\Log a)$ is obtained by giving $\Log a$ its principal value, \ie\ by supposing $m = 0$ in the general formula. Thus the principal value of~$a^{\zeta}$ is \[ e^{\xi\log \sigma - \eta\psi} \{\cos(\eta\log \sigma + \xi\psi) + i\sin(\eta\log \sigma + \xi\psi)\}. \] Two particular cases are of especial interest. If $a$~is real and positive and $\zeta$~real, then $\sigma = a$, $\psi = 0$, $\xi = \zeta$, $\eta = 0$, and the principal value of~$a^{\zeta}$ is~$e^{\zeta\log a}$, which is the value defined in the last chapter. If $|a| = 1$ and $\zeta$~is real, then $\sigma = 1$, $\xi = \zeta$, $\eta = 0$, and the principal value of $(\cos\psi + i\sin\psi)^{\zeta}$ is $\cos\zeta\psi + i\sin\zeta\psi$. This is a further generalisation of De~Moivre's Theorem (\SecNo[§§]{45},~\SecNo{49}). \end{Remark} \PageSep{407} \begin{Examples}{XCIV.} \Item{1.} Find all the values of~$i^{i}$. [By definition \[ i^{i} = \exp (i\Log i). \] But \[ i = \cos \tfrac{1}{2}\pi + i\sin \tfrac{1}{2}\pi,\quad \Log i = (2k + \tfrac{1}{2})\pi i, \] where $k$~is any integer. Hence \[ i^{i} = \exp\{-(2k + \tfrac{1}{2})\pi\} = e^{-(2k + \frac{1}{2})\pi}. \] All the values of~$i^{i}$ are therefore real and positive.] \Item{2.} Find all the values of $(1 + i)^{i}$, $i^{1+i}$, $(1 + i)^{1+i}$. \Item{3.} The values of~$a^{\zeta}$, when plotted in the Argand diagram, are the vertices of an equiangular polygon inscribed in an equiangular spiral whose angle is independent of~$a$. 
\MathTrip{1899.} [If $a^{\zeta} = r(\cos\theta + i\sin\theta)$ we have \[ r = e^{\xi\log \sigma - \eta(\psi + 2m\pi)},\quad \theta = \eta\log \sigma + \xi(\psi + 2m\pi); \] and all the points lie on the spiral $r = \sigma^{(\xi^{2} + \eta^{2})/\xi} e^{-\eta \theta/\xi}$.] \Item{4.} \Topic{The function~$e^{\zeta}$.} If we write~$e$ for~$a$ in the general formula, so that $\log \sigma = 1$, $\psi = 0$, we obtain \[ e^{\zeta} = e^{\xi-2m\pi\eta} \{\cos(\eta + 2m\pi\xi) + i\sin(\eta + 2m\pi\xi)\}. \] The principal value of~$e^{\zeta}$ is $e^{\xi}(\cos\eta + i\sin\eta)$, which is equal to~$\exp \zeta$ (\SecNo[§]{223}). In particular, if $\zeta$~is real, so that $\eta = 0$, we obtain \[ e^{\zeta} (\cos 2m\pi\zeta + i\sin 2m\pi\zeta) \] as the general and $e^{\zeta}$~as the principal value, $e^{\zeta}$~denoting here the positive value of the exponential defined in \Ref{Ch.}{IX}\@. \Item{5.} Show that $\Log e^{\zeta} = (1 + 2m\pi i)\zeta + 2n\pi i$, where $m$~and~$n$ are any integers, and that in general $\Log a^{\zeta}$~has a double infinity of values. \Item{6.} The equation $1/a^{\zeta} = a^{-\zeta}$ is completely true (\Ex{xciii}.~3): it is also true of the principal values. \Item{7.} The equation $a^{\zeta} × b^{\zeta} = (ab)^{\zeta}$ is completely true but not always true of the principal values. \Item{8.} The equation $a^{\zeta} × a^{\zeta'} = a^{\zeta+\zeta'}$ is not completely true, but is true of the principal values. [Every value of the right-hand side is a value of the left-hand side, but the general value of $a^{\zeta} × a^{\zeta'}$, viz. \[ \exp \{\zeta(\log a + 2m\pi i) + \zeta'(\log a + 2n\pi i)\}, \] is not as a rule a value of~$a^{\zeta+\zeta'}$ unless $m = n$.] \Item{9.} What are the corresponding results as regards the equations \[ \Log a^{\zeta} = \zeta\Log a,\quad (a^{\zeta})^{\zeta'} = (a^{\zeta'})^{\zeta} = a^{\zeta\zeta'}? 
\] \Item{10.} For what values of~$\zeta$ is (\ia)~any value (\ib)~the principal value of~$e^{\zeta}$ (i)~real (ii)~purely imaginary (iii)~of unit modulus? \PageSep{408} \Item{11.} The necessary and sufficient conditions that all the values of~$a^{\zeta}$ should be real are that $2\xi$~and~$\{\eta\log |a| + \xi\am a\}/\pi$, where $\am a$~denotes any value of the amplitude, should both be integral. What are the corresponding conditions that all the values should be of unit modulus? \Item{12.} The general value of~$|x^{i} + x^{-i}|$, where $x > 0$, is \[ e^{-(m-n)\pi} \sqrtbr{2\{\cosh 2(m + n)\pi + \cos(2\log x)\}}. \] \Item{13.} Explain the fallacy in the following argument: since $e^{2m\pi i} = e^{2n\pi i} = 1$, where $m$~and~$n$ are any integers, therefore, raising each side to the power~$i$ we obtain $e^{-2m\pi} = e^{-2n\pi}$. \Item{14.} In what circumstances are any of the values of~$x^{x}$, where $x$~is real, themselves real? [If $x > 0$ then \[ x^{x} = \exp (x\Log x) = \exp (x\log x) \Cis 2m\pi x, \] the first factor being real. The principal value, for which $m = 0$, is always real. If $x$~is a rational fraction~$p/(2q + 1)$, or is irrational, then there is no other real value. But if $x$~is of the form~$p/2q$, then there is one other real value, viz.\ $-\exp (x\log x)$, given by $m = q$. If $x = -\xi < 0$ then \[ x^{x} = \exp \{-\xi\Log (-\xi)\} = \exp (-\xi\log \xi) \Cis\{-(2m + 1)\pi\xi\}. \] The only case in which any value is real is that in which $\xi = p/(2q + 1)$, when $m = q$ gives the real value \[ \exp (-\xi\log \xi) \Cis (-p\pi) = (-1)^{p} \xi^{-\xi}. \] The cases of reality are illustrated by the examples \[ (\tfrac{1}{3})^{1/3} = \sqrt[3]{\tfrac{1}{3}},\quad (\tfrac{1}{2})^{\frac{1}{2}} = ±\sqrt{\tfrac{1}{2}},\quad (-\tfrac{2}{3})^{-\frac{2}{3}} = \sqrt[3]{\tfrac{9}{4}},\quad (-\tfrac{1}{3})^{-\frac{1}{3}} = -\sqrt[3]{3}.] \] \Item{15.} \Topic{Logarithms to any base.} We may define $\zeta = \Log_{a} z$ in two different ways. 
We may say (i)~that $\zeta = \Log_{a} z$ if the \emph{principal} value of~$a^{\zeta}$ is equal to~$z$; or we may say (ii)~that $\zeta = \Log_{a} z$ if \emph{any} value of~$a^{\zeta}$ is equal to~$z$. Thus if $a = e$ then $\zeta = \Log_{e} z$, according to the first definition, if the principal value of~$e^{\zeta}$ is equal to~$z$, or if $\exp \zeta = z$; and so $\Log_{e} z$~is identical with~$\Log z$. But, according to the second definition, $\zeta = \Log_{e} z$ if \[ e^{\zeta} = \exp (\zeta\Log e) = z,\quad \zeta\Log e = \Log z, \] or $\zeta = (\Log z)/(\Log e)$, any values of the logarithms being taken. Thus \[ \zeta = \Log_{e} z = \frac{\log |z| + (\am z + 2m\pi)i}{1 + 2n\pi i}, \] so that $\zeta$~is a doubly infinitely many-valued function of~$z$. And generally, according to this definition, $\Log_{a} z = (\Log z)/(\Log a)$. \Item{16.} $\Log_{e} 1 = 2m\pi i/(1 + 2n\pi i)$, $\Log_{e}(-1) = (2m + 1)\pi i/(1 + 2n\pi i)$, where $m$~and~$n$ are any integers. \end{Examples} \PageSep{409} \Paragraph{227. The exponential values of the sine and cosine.} From the formula \[ \exp (\xi + i\eta) = \exp \xi(\cos\eta + i\sin\eta), \] we can deduce a number of extremely important subsidiary formulae. Taking $\xi = 0$, we obtain $\exp (i\eta) = \cos\eta + i\sin\eta$; and, changing the sign of~$\eta$, $\exp (-i\eta) = \cos\eta - i\sin\eta$. Hence \begin{alignat*}{3} \cos\eta &= &&\tfrac{1}{2} &&\{\exp (i\eta) + \exp (-i\eta)\},\\ \sin\eta &= -&&\tfrac{1}{2}i&&\{\exp (i\eta) - \exp (-i\eta)\}. \end{alignat*} We can of course deduce expressions for any of the trigonometrical ratios of~$\eta$ in terms of~$\exp (i\eta)$. \Paragraph{228. Definition of $\sin\zeta$ and~$\cos\zeta$ for all values of~$\zeta$.} We saw in the last section that, when $\zeta$~is real, \begin{alignat*}{3} \cos\zeta &= &&\tfrac{1}{2} &&\{\exp (i\zeta) + \exp (-i\zeta)\}, \Tag{(1a)}\\ \sin\zeta &= -&&\tfrac{1}{2}i&&\{\exp (i\zeta) - \exp (-i\zeta)\}. 
\Tag{(1b)} \end{alignat*} The left-hand sides of these equations are defined, by the ordinary geometrical definitions adopted in elementary Trigonometry, only for real values of~$\zeta$. The right-hand sides have, on the other hand, been defined for all values of~$\zeta$, real or complex. We are therefore naturally led to adopt the formulae~\Eq{(1)} as the \emph{definitions} of $\cos \zeta$ and~$\sin \zeta$ for all values of~$\zeta$. These definitions agree, in virtue of the results of \SecNo[§]{227}, with the elementary definitions for real values of~$\zeta$. Having defined $\cos \zeta$ and~$\sin \zeta$, we define the other trigonometrical ratios by the equations \[ \tan \zeta = \frac{\sin \zeta}{\cos \zeta},\quad \cot \zeta = \frac{\cos \zeta}{\sin \zeta},\quad \sec \zeta = \frac{1}{\cos \zeta},\quad \cosec \zeta = \frac{1}{\sin \zeta}. \Tag{(2)} \] It is evident that $\cos \zeta$ and~$\sec \zeta$ are even functions of~$\zeta$, and $\sin \zeta$, $\tan \zeta$, $\cot \zeta$, and~$\cosec \zeta$ odd functions. Also, if $\exp (i\zeta) = t$, we have \begin{gather*} \cos \zeta = \tfrac{1}{2} \{t + (1/t)\},\quad \sin \zeta = -\tfrac{1}{2}i \{t - (1/t)\},\\ \cos^{2} \zeta + \sin^{2} \zeta = \tfrac{1}{4}[\{t + (1/t)\}^{2} - \{t - (1/t)\}^{2}] = 1. \Tag{(3)} \end{gather*} We can moreover express the trigonometrical functions of $\zeta + \zeta'$ in terms of those of $\zeta$~and~$\zeta'$ by precisely the same formulae \PageSep{410} as those which hold in elementary trigonometry. 
For if $\exp (i\zeta) = t$, $\exp (i\zeta') = t'$, we have \begin{align*} %[** TN: Set on two lines in the original, not aligned] \cos (\zeta + \zeta') &= \tfrac{1}{2} \left(tt' + \frac{1}{tt'}\right) \\ &= \tfrac{1}{4} \left\{ \left(t + \frac{1}{t}\right) \left(t' + \frac{1}{t'}\right) + \left(t - \frac{1}{t}\right) \left(t' - \frac{1}{t'}\right)\right\}\\ &= \cos\zeta \cos\zeta' - \sin\zeta \sin\zeta'; \Tag{(4)} \end{align*} and similarly we can prove that \[ \sin (\zeta + \zeta') = \sin\zeta \cos\zeta' + \cos\zeta \sin\zeta'. \Tag{(5)} \] In particular \[ \cos(\zeta + \tfrac{1}{2}\pi) = -\sin\zeta,\quad \sin(\zeta + \tfrac{1}{2}\pi) = \cos\zeta. \Tag{(6)} \] All the ordinary formulae of elementary Trigonometry are algebraical corollaries of the equations~\Eq{(2)}--\Eq{(6)}; and so all such relations hold also for the generalised trigonometrical functions defined in this section. \begin{Remark} \Paragraph{229. The generalised hyperbolic functions.} In \Ex{lxxxvii}.~19, we defined $\cosh \zeta$ and~$\sinh \zeta$, for real values of~$\zeta$, by the equations \[ \cosh\zeta = \tfrac{1}{2} \{\exp \zeta + \exp (-\zeta)\},\quad \sinh\zeta = \tfrac{1}{2} \{\exp \zeta - \exp (-\zeta)\}. \Tag{(1)} \] We can now extend this definition to complex values of the variable; \ie\ we can agree that the equations~\Eq{(1)} are to define $\cosh \zeta$ and~$\sinh \zeta$ for all values of~$\zeta$ real or complex. The reader will easily verify the following relations: \[ \cos i\zeta = \cosh \zeta,\quad \sin i\zeta = i\sinh \zeta,\quad \cosh i\zeta = \cos \zeta,\quad \sinh i\zeta = i\sin \zeta. \] We have seen that any elementary trigonometrical formula, such as the formula $\cos 2\zeta = \cos^{2} \zeta - \sin^{2} \zeta$, remains true when $\zeta$~is allowed to assume complex values. 
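The four relations above are also immediate to verify numerically; the following sketch (Python's `cmath`, at an arbitrarily chosen complex test point of our own) checks each of them.

```python
import cmath

zeta = 0.7 - 1.3j  # an arbitrary complex test point

checks = [
    (cmath.cos(1j * zeta), cmath.cosh(zeta)),       # cos(i zeta)  = cosh(zeta)
    (cmath.sin(1j * zeta), 1j * cmath.sinh(zeta)),  # sin(i zeta)  = i sinh(zeta)
    (cmath.cosh(1j * zeta), cmath.cos(zeta)),       # cosh(i zeta) = cos(zeta)
    (cmath.sinh(1j * zeta), 1j * cmath.sin(zeta)),  # sinh(i zeta) = i sin(zeta)
]
errors = [abs(lhs - rhs) for lhs, rhs in checks]
```

Each pair agrees to machine precision, as the definitions~\Eq{(1)} require.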
It remains true therefore if we write $\cos i\zeta$ for~$\cos \zeta$, $\sin i\zeta$ for~$\sin \zeta$ and $\cos 2i\zeta$ for~$\cos 2\zeta$; or, in other words, if we write $\cosh \zeta$ for~$\cos \zeta$, $i\sinh \zeta$ for~$\sin \zeta$, and $\cosh 2\zeta$ for~$\cos 2\zeta$. Hence \[ \cosh 2\zeta = \cosh^{2} \zeta + \sinh^{2} \zeta. \] The same process of transformation may be applied to any trigonometrical identity. It is of course this fact which explains the correspondence noted in \Ex{lxxxvii}.~21 between the formulae for the hyperbolic and those for the ordinary trigonometrical functions. \Paragraph{230. Formulae for $\cos(\xi + i\eta)$, $\sin(\xi + i\eta)$,~etc.} It follows from the addition formulae that \begin{alignat*}{4} \cos (\xi + i\eta) &= \cos\xi \cos i\eta &&- \sin\xi \sin i\eta &&= \cos\xi \cosh \eta &&- i\sin\xi \sinh \eta,\\ \sin (\xi + i\eta) &= \sin\xi \cos i\eta &&+ \cos\xi \sin i\eta &&= \sin\xi \cosh \eta &&+ i\cos\xi \sinh \eta. \end{alignat*} These formulae are true for all values of $\xi$ and~$\eta$. The interesting case is that in which $\xi$~and~$\eta$ are real. They then give expressions for the real and imaginary parts of the cosine and sine of a complex number. \end{Remark} \PageSep{411} \begin{Examples}{XCV.} \Item{1.} Determine the values of~$\zeta$ for which $\cos\zeta$ and~$\sin\zeta$ are (i)~real (ii)~purely imaginary. [For example $\cos\zeta$~is real when $\eta = 0$ or when $\xi$~is any multiple of~$\pi$.] \Item{2.} \begin{alignat*}{2} |\cos (\xi + i\eta)| &= \sqrtp{\cos^{2} \xi + \sinh^{2} \eta} &&= \sqrtb{\tfrac{1}{2} (\cosh 2\eta + \cos 2\xi)}, \\ |\sin (\xi + i\eta)| &= \sqrtp{\sin^{2} \xi + \sinh^{2} \eta} &&= \sqrtb{\tfrac{1}{2} (\cosh 2\eta - \cos 2\xi)}. \end{alignat*} [Use (\eg)\ the equation $|\cos(\xi + i\eta)| = \sqrtb{\cos(\xi + i\eta) \cos(\xi - i\eta)}$.] 
\Item{3.} $\tan (\xi + i \eta) = \dfrac{\sin 2\xi + i\sinh 2\eta}{\cosh 2\eta + \cos 2\xi}$,\quad $\cot (\xi + i \eta) = \dfrac{\sin 2\xi - i\sinh 2\eta}{\cosh 2\eta - \cos 2\xi}$. [For example \[ \tan (\xi + i\eta) = \frac{\sin (\xi + i\eta) \cos (\xi - i\eta)} {\cos (\xi + i\eta) \cos (\xi - i\eta)} = \frac{\sin 2\xi + \sin 2i\eta}{\cos 2\xi + \cos 2i\eta}, \] which leads at once to the result given.] \Item{4.} \begin{align*} \sec (\xi + i \eta) &= \frac{\cos\xi \cosh\eta + i\sin\xi \sinh\eta} {\frac{1}{2} (\cosh 2\eta + \cos 2\xi)}, \\ \cosec (\xi + i \eta) &= \frac{\sin\xi \cosh\eta - i\cos\xi \sinh\eta} {\frac{1}{2} (\cosh 2\eta - \cos 2\xi)}. \end{align*} \Item{5.} If $|\cos (\xi + i\eta)| = 1$ then $\sin^{2} \xi = \sinh^{2} \eta$, and if $|\sin (\xi + i\eta)| = 1$ then $\cos^{2} \xi = \sinh^{2} \eta$. \Item{6.} If $|\cos (\xi + i\eta)| = 1$, then \[ \sin \{\am \cos (\xi + i\eta)\} = ±\sin^{2} \xi = ±\sinh^{2} \eta. \] \Item{7.} Prove that $\Log \cos (\xi + i\eta) = A + iB$, where \[ A = \tfrac{1}{2} \log \{\tfrac{1}{2} (\cosh 2\eta + \cos 2\xi)\} \] and $B$~is any angle such that \[ \frac{\cos B}{\cos\xi \cosh\eta} = -\frac{\sin B}{\sin\xi \sinh\eta} = \frac{1}{\sqrtb{\frac{1}{2} (\cosh 2\eta + \cos 2\xi)}}. \] Find a similar formula for $\Log \sin (\xi + i\eta)$. \Item{8.} \Topic{Solution of the equation $\cos\zeta = a$, where $a$~is real.} Putting $\zeta = \xi + i\eta$, and equating real and imaginary parts, we obtain \[ \cos\xi \cosh\eta = a,\quad \sin\xi \sinh\eta = 0. \] Hence either $\eta = 0$ or $\xi$~is a multiple of~$\pi$. If (i)~$\eta = 0$ then $\cos\xi = a$, which is impossible unless $-1 \leq a \leq 1$. This hypothesis leads to the solution \[ \zeta = 2k\pi ± \arccos a, \] where $\arccos a$ lies between $0$ and~$\frac{1}{2}\pi$. If (ii)~$\xi = m\pi$ then $\cosh\eta = (-1)^{m}a$, so that either $a \geq 1$ and $m$~is even, or $a \leq -1$ and $m$~is odd. If $a = ± 1$ then $\eta = 0$, and we are led back to our first case. 
If $|a| > 1$ then $\cosh\eta = |a|$, and we are led to the solutions \begin{alignat*}{4} \zeta &=& 2k &\pi ± i\log \{ &&a + \sqrt{a^{2} - 1}\}\quad &&(a > 1), \\ \zeta &=&(2k + 1) &\pi ± i\log \{-&&a + \sqrt{a^{2} - 1}\}\quad &&(a < -1). \end{alignat*} For example, the general solution of $\cos\zeta = -\frac{5}{3}$ is $\zeta = (2k + 1)\pi ± i\log 3$. \PageSep{412} \Item{9.} Solve $\sin\zeta = \alpha$, where $\alpha$~is real. \Item{10.} \Topic{Solution of $\cos\zeta = \alpha + i\beta$, where $\beta \neq 0$.} We may suppose $\beta > 0$, since the results when $\beta < 0$ may be deduced by merely changing the sign of~$i$. In this case \[ \cos\xi \cosh\eta = \alpha,\quad \sin\xi \sinh\eta = -\beta, \Tag{(1)} \] and \[ (\alpha/\cosh\eta)^{2} + (\beta/\sinh\eta)^{2} = 1. \] If we put $\cosh^{2} \eta = x$ we find that \[ x^{2} - (1 + \alpha^{2} + \beta^{2})x + \alpha^{2} = 0 \] or $x = (A_{1} ± A_{2})^{2}$, where \[ A_{1} = \tfrac{1}{2}\sqrtb{(\alpha + 1)^{2} + \beta^{2}},\quad A_{2} = \tfrac{1}{2}\sqrtb{(\alpha - 1)^{2} + \beta^{2}}. \] Suppose $\alpha > 0$. Then $A_{1} > A_{2} > 0$ and $\cosh\eta = A_{1} ± A_{2}$. Also \[ \cos\xi = \alpha/(\cosh\eta) = A_{1} \mp A_{2}, \] and since $\cosh\eta > \cos\xi$ we must take \[ \cosh\eta = A_{1} + A_{2},\quad \cos\xi = A_{1} - A_{2}. \] The general solutions of these equations are \[ \xi = 2k\pi ± \arccos M,\quad \eta = ±\log \{L + \sqrtp{L^{2} - 1}\}, \Tag{(2)} \] where $L = A_{1} + A_{2}$, $M = A_{1} - A_{2}$, and $\arccos M$ lies between $0$ and~$\frac{1}{2}\pi$. The values of $\eta$ and~$\xi$ thus found above include, however, the solutions of the equations \[ \cos\xi \cosh\eta = \alpha,\quad \sin\xi \sinh\eta = \beta, \Tag{(3)} \] as well as those of the equations~\Eq{(1)}, since we have only used the second of the latter equations after squaring it. 
To distinguish the two sets of solutions we observe that the sign of~$\sin\xi$ is the same as the ambiguous sign in the first of the equations~\Eq{(2)}, and the sign of~$\sinh\eta$ is the same as the ambiguous sign in the second. Since $\beta > 0$, these two signs must be different. Hence the general solution required is \[ \zeta = 2k\pi ± [\arccos M - i\log \{L + \sqrtp{L^{2} - 1}\}]. \] \Item{11.} Work out the cases in which $\alpha < 0$ and $\alpha = 0$ in the same way. \Item{12.} If $\beta = 0$ then $L = \frac{1}{2}|\alpha + 1| + \frac{1}{2}|\alpha - 1|$ and $M = \frac{1}{2}|\alpha + 1| - \frac{1}{2}|\alpha - 1|$. Verify that the results thus obtained agree with those of Ex.~8. \Item{13.} {\Loosen Show that if $\alpha$~and~$\beta$ are positive then the general solution of $\sin\zeta = \alpha + i\beta$ is} \[ \zeta = k\pi +(-1)^{k} [\arcsin M + i\log \{L + \sqrtp{L^{2} - 1}\}], \] where $\arcsin M$ lies between $0$ and~$\frac{1}{2}\pi$. Obtain the solution in the other possible cases. \Item{14.} Solve $\tan\zeta = \alpha$, where $\alpha$~is real. [All the roots are real.] \PageSep{413} \Item{15.} Show that the general solution of $\tan \zeta = \alpha + i\beta$, where $\beta \neq 0$, is \[ \zeta = k\pi + \tfrac{1}{2}\theta + \tfrac{1}{4} i\log\left\{ \frac{\alpha^{2} + (1 + \beta)^{2}} {\alpha^{2} + (1 - \beta)^{2}} \right\}, \] where $\theta$~is the numerically least angle such that \[ \cos \theta : \sin \theta : 1 :: 1 - \alpha^{2} - \beta^{2} : 2\alpha : \sqrtb{(1 - \alpha^{2} - \beta^{2})^{2} + 4\alpha^{2}}. \] \Item{16.} If $z = \xi\exp(\frac{1}{4}\pi i)$, where $\xi$~is real, and $c$~is also real, then the modulus of $\cos 2\pi z - \cos 2\pi c$ is \[ \begin{aligned}[b] \surd[\tfrac{1}{2}\{1 + \cos 4\pi c + \cos(2\pi\xi\sqrt{2}) &+ \cosh(2\pi\xi\sqrt{2}) \\ &- 4\cos 2\pi c \cos(\pi\xi\sqrt{2}) \cosh(\pi\xi\sqrt{2})\}] \end{aligned}. 
\] \Item{17.} Prove that \begin{gather*} |\exp \exp(\xi + i\eta)| = \exp(\exp\xi \cos\eta), \\ \begin{aligned} \Real \{\cos\cos(\xi + i\eta)\} &= \cos(\cos\xi \cosh\eta) \cosh(\sin\xi \sinh\eta),\\ \Imag \{\sin\sin(\xi + i\eta)\} &= \cos(\sin\xi \cosh\eta) \sinh(\cos\xi \sinh\eta). \end{aligned} \end{gather*} \Item{18.} Prove that $|\exp\zeta|$~tends to~$\infty$ if $\zeta$~moves away towards infinity along any straight line through the origin making an angle less than~$\frac{1}{2}\pi$ with~$OX$, and to~$0$ if $\zeta$~moves away along a similar line making an angle greater than~$\frac{1}{2}\pi$ with~$OX$. \Item{19.} Prove that $|\cos\zeta|$ and $|\sin\zeta|$ tend to~$\infty$ if $\zeta$~moves away towards infinity along any straight line through the origin other than either half of the real axis. \Item{20.} Prove that $\tan\zeta$ tends to~$-i$ or to~$i$ if $\zeta$~moves away to infinity along the straight line of Ex.~19, to $-i$~if the line lies above the real axis and to~$i$ if it lies below. \end{Examples} \begin{Remark} \Paragraph{231. The connection between the logarithmic and the inverse trigonometrical functions.} We found in \Ref{Ch.}{VI} that the integral of a rational or algebraical function $\phi(x, \alpha, \beta, \dots)$, where $\alpha$,~$\beta$,~\dots\ are constants, often assumes different forms according to the values of $\alpha$,~$\beta$,~\dots; sometimes it can be expressed by means of logarithms, and sometimes by means of inverse trigonometrical functions. Thus, for example, \[ \int \frac{dx}{x^{2} + \alpha} = \frac{1}{\sqrt{\alpha}} \arctan \frac{x}{\sqrt{\alpha}} \Tag{(1)} \] if $\alpha > 0$, but \[ \int \frac{dx}{x^{2} + \alpha} = \frac{1}{2\sqrtp{-\alpha}} \log \left|\frac{x - \sqrtp{-\alpha}}{x + \sqrtp{-\alpha}}\right| \Tag{(2)} \] if $\alpha < 0$. These facts suggest the existence of some functional connection between the logarithmic and the inverse circular functions. 
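Both evaluations lend themselves to a quick numerical test. The sketch below (plain Python; the midpoint-rule integrator is a crude helper of our own) compares the integral of $1/(x^{2} + \alpha)$ over $(0, 1)$ with formula~\Eq{(1)} for $\alpha = 2$ and with formula~\Eq{(2)} for $\alpha = -4$, the interval avoiding the singularities at $\pm\sqrt{-\alpha}$ in the second case.

```python
import math

def integrate(f, a, b, n=100000):
    # crude midpoint rule, quite sufficient for a sanity check
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# formula (1), alpha > 0
alpha = 2.0
num1 = integrate(lambda x: 1 / (x * x + alpha), 0.0, 1.0)
closed1 = (math.atan(1 / math.sqrt(alpha)) - math.atan(0.0)) / math.sqrt(alpha)

# formula (2), alpha < 0 (here sqrt(-alpha) = 2 lies outside (0, 1))
alpha2 = -4.0
root = math.sqrt(-alpha2)
antideriv = lambda x: math.log(abs((x - root) / (x + root))) / (2 * root)
num2 = integrate(lambda x: 1 / (x * x + alpha2), 0.0, 1.0)
closed2 = antideriv(1.0) - antideriv(0.0)
```

The numerical integrals reproduce the arctangent form for positive~$\alpha$ and the logarithmic form for negative~$\alpha$, the two forms between which the functional connection of this section is to be sought.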
That there is such a connection may also be inferred from the facts that we have expressed the circular functions of~$\zeta$ in terms of~$\exp i\zeta$, and that the logarithm is the inverse of the exponential function. Let us consider more particularly the equation \[ \int \frac{dx}{x^{2} - \alpha^{2}} = \frac{1}{2\alpha} \log \left(\frac{x - \alpha}{x + \alpha}\right), \] \PageSep{414} which holds when $\alpha$~is real and $(x - \alpha)/(x + \alpha)$ is positive. If we could write $i\alpha$ instead of~$\alpha$ in this equation, we should be led to the formula \[ \arctan \left(\frac{x}{\alpha}\right) = \frac{1}{2i} \log\left(\frac{x - i\alpha}{x + i\alpha}\right) + C, \Tag{(3)} \] where $C$~is a constant, and the question is suggested whether, now that we have defined the logarithm of a complex number, this equation will not be found to be actually true. Now (\SecNo[§]{221}) \[ \Log(x ± i\alpha) = \tfrac{1}{2} \log(x^{2} + \alpha^{2}) ± i(\phi + 2k\pi), \] {\Loosen where $k$~is an integer and $\phi$~is the numerically least angle such that $\cos\phi = x/\sqrtp{x^{2} + \alpha^{2}}$ and $\sin\phi = \alpha/\sqrtp{x^{2} + \alpha^{2}}$. Thus} \[ \frac{1}{2i} \Log\left(\frac{x - i\alpha}{x + i\alpha}\right) = -\phi - l\pi, \] where $l$~is an integer, and this does in fact differ by a constant from any value of~$\arctan(x/\alpha)$. The standard formula connecting the logarithmic and inverse circular functions is \[ \arctan x = \frac{1}{2i} \Log\left(\frac{1 + ix}{1 - ix}\right), \Tag{(4)} \] where $x$~is real. It is most easily verified by putting $x = \tan y$, when the right-hand side reduces to \[ \frac{1}{2i} \Log\left(\frac{\cos y + i\sin y}{\cos y - i\sin y}\right) = \frac{1}{2i} \Log(\exp 2iy) = y + k\pi, \] where $k$~is any integer, so that the equation~\Eq{(4)} is `completely' true (\Ex{xciii}.~3). 
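The verification of~\Eq{(4)} may likewise be carried out in floating point; the brief sketch below (Python's `cmath` and `math`) compares the principal value of $\dfrac{1}{2i}\Log\dfrac{1 + ix}{1 - ix}$ with the ordinary inverse tangent at several real values of~$x$.

```python
import cmath
import math

def arctan_via_log(x):
    # principal value of (1/2i) Log((1 + ix)/(1 - ix)); for real x
    # the quotient has unit modulus, so the result is real
    w = cmath.log((1 + 1j * x) / (1 - 1j * x)) / 2j
    return w.real

samples = [-10.0, -1.0, -0.3, 0.0, 0.5, 2.0, 100.0]
pairs = [(arctan_via_log(x), math.atan(x)) for x in samples]
```

For real~$x$ the amplitude of $(1 + ix)/(1 - ix)$ is $2\arctan x$, which lies between $-\pi$ and~$\pi$, so the principal logarithm always selects the value $k = 0$ here.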
The reader should also verify the formulae \[ \arccos x = -i \Log\{x ± i\sqrtp{1 - x^{2}}\},\quad \arcsin x = -i \Log\{ix ± \sqrtp{1 - x^{2}}\}, \Tag{(5)} \] where $-1 \leq x \leq 1$: each of these formulae also is `completely' true. \Par{Example.} Solving the equation \[ \cos u = x = \tfrac{1}{2}\{y + (1/y)\}, \] where $y = \exp(iu)$, with respect to~$y$, we obtain $y = x ± i\sqrtp{1 - x^{2}}$. Thus: \[ u = -i \Log y = -i \Log\{x ± i\sqrtp{1 - x^{2}}\}, \] which is equivalent to the first of the equations~\Eq{(5)}. Obtain the remaining equations \Eq{(4)}~and~\Eq{(5)} by similar reasoning. \end{Remark} \Paragraph{232. The power series for $\exp z$.\protect\footnotemark} We saw in \SecNo[§]{212}\footnotetext {It will be convenient now to use~$z$ instead of~$\zeta$ as the argument of the exponential function.} that when $z$~is real \[ \exp z = 1 + z +\frac{z^{2}}{2!} + \dots. \Tag{(1)} \] Moreover we saw in \SecNo[§]{191} that the series on the right-hand side \PageSep{415} remains convergent (indeed absolutely convergent) when $z$~is complex. It is naturally suggested that the equation~\Eq{(1)} also remains true, and we shall now prove that this is the case. Let the sum of the series~\Eq{(1)} be denoted by~$F(z)$. The series being absolutely convergent, it follows by direct multiplication (as in \Ex{lxxxi}.~7) that $F(z)$~satisfies the functional equation \[ F(z) F(h) = F(z + h). \Tag{(2)} \] Now let $z = iy$, where $y$~is real, and $F(z) = f(y)$. Then \[ f(y) f(k) = f(y + k); \] and so \[ \frac{f(y + k) - f(y)}{k} = f(y) \left\{\frac{f(k) - 1}{k}\right\}. \] But \[ \frac{f(k) - 1}{k} = i\left\{1 + \frac{ik}{2!} + \frac{(ik)^{2}}{3!} + \dots\right\}; \] and so, if $|k| < 1$, \[ \left|\frac{f(k) - 1}{k} - i\right| < \left(\frac{1}{2!} + \frac{1}{3!} + \dots\right)|k| < (e - 2)|k|. \] Hence $\{f(k) - 1\}/k\to i$ as $k \to 0$, and so \[ f'(y) = \lim_{k \to 0} \frac{f(y + k) - f(y)}{k} = if(y). \Tag{(3)} \] Now \[ % [** TN: 2! 
visually consistent, both mathematically correct] f(y) = F(iy) = 1 + (iy) + \frac{(iy)^{2}}{\DPchg{2}{2!}} + \dots = \phi(y) + i\psi(y), \] where $\phi(y)$~is an even and $\psi(y)$~an odd function of~$y$, and so \begin{align*} |f(y)| &= \sqrtbr{\{\phi(y)\}^{2} + \{\psi(y)\}^{2}}\\ &= \sqrtbr{\{\phi(y) + i\psi(y)\}\{\phi(y) - i\psi(y)\}}\\ &= \sqrtb{F(iy) F(-iy)} = \sqrtb{F(0)} = 1; \end{align*} and therefore \[ f(y) = \cos Y + i \sin Y, \] where $Y$~is a function of~$y$ such that $-\pi < Y \leq \pi$. Since $f(y)$~has a differential coefficient, its real and imaginary parts $\cos Y$ and~$\sin Y$ have differential coefficients, and are \textit{a~fortiori} continuous functions of~$y$. Hence $Y$~is a continuous function of~$y$. Suppose that $Y$ changes to~$Y + K$ when $y$~changes to~$y + k$. Then $K$~tends to zero with~$k$, and \[ \frac{K}{k} = \biggl\{\frac{\cos(Y + K) - \cos Y}{k}\biggr\} \bigg/ \biggl\{\frac{\cos(Y + K) - \cos Y}{K}\biggr\}. \] Of the two quotients on the right-hand side the first tends to a \PageSep{416} limit when $k \to 0$, since $\cos Y$~has a differential coefficient with respect to~$y$, and the second tends to the limit~$-\sin Y$. Hence $K/k$~tends to a limit, so that $Y$~has a differential coefficient with respect to~$y$. Further \[ f'(y) = (-\sin Y + i\cos Y) \frac{dY}{dy}. \] But we have seen already that \[ f'(y) = if(y) = -\sin Y + i\cos Y. \] Hence \[ \frac{dY}{dy} = 1,\quad Y = y + C, \] where $C$~is a constant, and \[ f(y) = \cos(y + C) + i\sin(y + C). \] {\Loosen But $f(0) = 1$ when $y = 0$, so that $C$~is a multiple of~$2\pi$, and $f(y) = \cos y + i\sin y$. Thus $F(iy) = \cos y + i\sin y$ for all real values of~$y$. And, if $x$~also is real, we have} \[ F(x + iy) = F(x) F(iy) = \exp x(\cos y + i\sin y) = \exp(x + iy), \] or \[ \exp z = 1 + z + \frac{z^{2}}{2!} + \dots, \] for all values of~$z$. \Paragraph{233. 
The power series for $\cos z$ and~$\sin z$.} From the result of the last section and the equations~\Eq{(1)} of \SecNo[§]{228} it follows at once that \[ \cos z = 1 - \frac{z^{2}}{2!} + \frac{z^{4}}{4!} - \dots,\quad \sin z = z - \frac{z^{3}}{3!} + \frac{z^{5}}{5!} - \dots \] for all values of~$z$. These results were proved for real values of~$z$ in \Ex{lvi}.~1. \begin{Examples}{XCVI.} \Item{1.} Calculate $\cos i$ and $\sin i$ to two places of decimals by means of the power series for $\cos z$ and~$\sin z$. \Item{2.} Prove that $|\cos z| \leq \cosh|z|$ and $|\sin z| \leq \sinh|z|$. \Item{3.} Prove that if $|z| < 1$ then $|\cos z| < 2$ and $|\sin z| < \frac{6}{5}|z|$. \Item{4.} Since $\sin 2z = 2\sin z \cos z$ we have \[ (2z) - \frac{(2z)^{3}}{3!} + \frac{(2z)^{5}}{5!} - \dots = 2\left(z - \frac{z^{3}}{3!} + \dots\right) \left(1 - \frac{z^{2}}{2!} + \dots\right). \] Prove by multiplying the two series on the right-hand side (\SecNo[§]{195}) and equating coefficients (\SecNo[§]{194}) that \[ \binom{2n + 1}{1} + \binom{2n + 1}{3} + \dots + \binom{2n + 1}{2n + 1} = 2^{2n}. \] Verify the result by means of the binomial theorem. Derive similar identities from the equations \[ \cos^{2}z + \sin^{2}z = 1,\quad \cos2z = 2\cos^{2}z - 1 = 1 - 2\sin^{2}z. \] \PageSep{417} \Item{5.} Show that \[ \exp\{(1 + i)z\} = \sum_{0}^{\infty} 2^{\frac{1}{2}n} \exp(\tfrac{1}{4}n\pi i) \frac{z^{n}}{n!}. \] \Item{6.} Expand $\cos z \cosh z$ in powers of~$z$. [We have \begin{align*} \cos z \cosh z + i\sin z \sinh z &= \cos\{(1 - i)z\} = \tfrac{1}{2} [\exp\{(1 + i)z\} + \exp\{-(1 + i)z\}]\\ &= \tfrac{1}{2} \sum_{0}^{\infty} 2^{\frac{1}{2}n} \{1 + (-1)^{n}\} \exp(\tfrac{1}{4}n\pi i) \frac{z^{n}}{n!}, \end{align*} and similarly \[ \cos z \cosh z - i\sin z \sinh z = \cos (1 + i)z = \tfrac{1}{2} \sum_{0}^{\infty} 2^{\frac{1}{2}n} \{1 + (-1)^{n}\} \exp(-\tfrac{1}{4}n\pi i) \frac{z^{n}}{n!}. 
\] Hence \[ \cos z \cosh z = \tfrac{1}{2} \sum_{0}^{\infty} 2^{\frac{1}{2}n}\{1 + (-1)^{n}\} \cos \tfrac{1}{4}n\pi \frac{z^{n}}{n!} = 1 - \frac{2^{2}z^{4}}{4!} + \frac{2^{4}z^{8}}{8!} - \dots.] \] \Item{7.} Expand $\sin z \sinh z$, $\cos z \sinh z$, and $\sin z \cosh z$ in powers of~$z$. \Item{8.} Expand $\sin^{2} z$ and $\sin^{3} z$ in powers of~$z$. [Use the formulae \[ \sin^{2} z = \tfrac{1}{2} (1 - \cos 2z),\quad \sin^{3} z = \tfrac{1}{4} (3\sin z - \sin 3z),\ \dots. \] It is clear that the same method may be used to expand $\cos^{n} z$ and~$\sin^{n} z$, where $n$~is any integer.] \Item{9.} Sum the series \[ C = 1 + \frac{\cos z}{1!} + \frac{\cos 2z}{2!} + \frac{\cos 3z}{3!} +\dots,\quad S = \frac{\sin z}{1!} + \frac{\sin 2z}{2!} + \frac{\sin 3z}{3!} + \dots. \] [Here \begin{align*} C + iS &= 1 + \dfrac{\exp(iz)}{1!} + \dfrac{\exp(2iz)}{2!} + \dots = \exp\{\exp(iz)\} \\ &= \exp(\cos z) \{\cos(\sin z) + i\sin(\sin z)\}, \end{align*} and similarly \[ C - iS = \exp\{\exp(-iz)\} = \exp(\cos z)\{\cos(\sin z) - i\sin(\sin z)\}. \] Hence \[ C = \exp(\cos z)\cos(\sin z),\quad S = \exp(\cos z)\sin(\sin z).] \] \Item{10.} Sum \[ 1 + \frac{a\cos z}{1!} + \frac{a^{2}\cos 2z}{2!} + \dots,\quad \frac{a\sin z}{1!} + \frac{a^{2}\sin 2z}{2!} + \dots. \] \Item{11.} Sum \[ 1 - \frac{\cos 2z}{2!} + \frac{\cos 4z}{4!} - \dots,\quad \frac{\cos z}{1!} - \frac{\cos 3z}{3!} + \dots \] and the corresponding series involving sines. \Item{12.} Show that \[ 1 + \frac{\cos 4z}{4!} + \frac{\cos 8z}{8!} + \dots = \tfrac{1}{2}\{\cos(\cos z) \cosh(\sin z) + \cos(\sin z) \cosh(\cos z)\}. \] \Item{13.} Show that the expansions of $\cos(x + h)$ and $\sin(x + h)$ in powers of~$h$ (\Ex{lvi}.~1) are valid for all values of $x$~and~$h$, real or complex. \end{Examples} \Paragraph{234. The logarithmic series.} We found in \SecNo[§]{213} that \[ \log(1 + z) = z - \tfrac{1}{2} z^{2} + \tfrac{1}{3} z^{3} - \dots \Tag{(1)} \] when $z$~is real and numerically less than unity. 
The series on the right-hand side is convergent, indeed absolutely convergent, when \PageSep{418} $z$~has any complex value whose modulus is less than unity. It is naturally suggested that the equation~\Eq{(1)} remains true for such complex values of~$z$. That this is true may be proved by a modification of the argument of \SecNo[§]{213}. We shall in fact prove rather more than this, viz.\ that \Eq{(1)}~is true for all values of~$z$ such that $|z| \leq 1$, with the exception of the value~$-1$. It will be remembered that $\log(1 + z)$~is the principal value of $\Log(1 + z)$, and that \[ \log(1 + z) = \int_{C} \frac{du}{u}, \] where $C$~is the straight line joining the points $1$ and~$1 + z$ in the plane of the complex variable~$u$. We may suppose that $z$~is not real, as the formula~\Eq{(1)} has been proved already for real values of~$z$. If we put \[ z = r(\cos\theta + i\sin\theta) = \zeta r, \] so that $|r| \leq 1$, and \[ u = 1 + \zeta t, \] then $u$~will describe~$C$ as $t$~increases from $0$ to~$r$. And \begin{align*} \int_{C} \frac{du}{u} &= \int_{0}^{r} \frac{\zeta\, dt}{1 + \zeta t} \\ &= \int_{0}^{r} \left\{\zeta - \zeta^{2} t + \zeta^{3} t^{2} - \dots + (-1)^{m-1} \zeta^{m} t^{m-1} + \frac{(-1)^{m} \zeta^{m+1} t^{m}}{1 + \zeta t}\right\} dt \\ &= \zeta r - \frac{(\zeta r)^{2}}{2} + \frac{(\zeta r)^{3}}{3} - \dots + (-1)^{m-1} \frac{(\zeta r)^{m}}{m} + R_{m} \\ &= z - \frac{z^{2}}{2} + \frac{z^{3}}{3} - \dots + (-1)^{m-1} \frac{z^{m}}{m} + R_{m}, \Tag{(2)} \end{align*} where \[ R_{m} = (-1)^{m} \zeta^{m+1} \int_{0}^{r} \frac{t^{m}\, dt}{1 + \zeta t}. \Tag{(3)} \] It follows from \Eq{(1)}~of \SecNo[§]{164} that \[ |R_{m}| \leq \int_{0}^{r} \frac{t^{m}\, dt}{|1 + \zeta t|}. \Tag{(4)} \] Now $|1 + \zeta t|$ or $|u|$~is never less than~$\varpi$, the perpendicular from~$O$ on to the line~$C$.\footnote {Since $z$~is not real, $C$~cannot pass through~$O$ when produced. 
The reader is recommended to draw a figure to illustrate the argument.} Hence \[ |R_{m}| \leq \frac{1}{\varpi} \int_{0}^{r} t^{m}\, dt = \frac{r^{m+1}}{(m + 1) \varpi} \leq \frac{1}{(m + 1) \varpi}, \] \PageSep{419} and so $R_{m} \to 0$ as $m \to \infty$. It follows from~\Eq{(2)} that \[ \log(1 + z) = z - \tfrac{1}{2} z^{2} + \tfrac{1}{3} z^{3} - \dots. \Tag{(5)} \] We have of course shown in the course of our proof that the series is convergent: this however has been proved already (\Ex{lxxx}.~4). The series is in fact absolutely convergent when $\DPtypo{z|}{|z|} < 1$ and conditionally convergent when $|z| = 1$. Changing $z$ into~$-z$ we obtain \[ \log \left(\frac{1}{1 - z}\right) = -\log(1 - z) = z + \tfrac{1}{2} z^{2} + \tfrac{1}{3} z^{3} + \dots. \Tag{(6)} \] \Paragraph{235.} Now \begin{align*} \log(1 + z) &= \log\{(1 + r \cos\theta) + ir\sin\theta\} \\ &= \tfrac{1}{2} \log(1 + 2r\cos\theta + r^{2}) + i\arctan \left(\frac{r\sin\theta}{1 + r\cos\theta}\right). \end{align*} That value of the inverse tangent must be taken which lies between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$. For, since $1 + z$~is the vector represented by the line from $-1$ to~$z$, the principal value of~$\am(1 + z)$ always lies between these limits when $z$~lies within the circle $|z| = 1$.\footnote {See the preceding footnote.} Since $z^{m} = r^{m}(\cos m\theta + i\sin m\theta)$, we obtain, on equating the real and imaginary parts in equation~\Eq{(5)} of~\SecNo[§]{234}, \begin{align*} \tfrac{1}{2} \log(1 + 2r\cos\theta + r^{2}) &= r\cos\theta - \tfrac{1}{2}r^{2} \cos 2\theta + \tfrac{1}{3}r^{3} \cos 3\theta - \dots, \\ \arctan \left(\frac{r\sin\theta}{1 + r\cos\theta}\right) &= r\sin\theta - \tfrac{1}{2}r^{2} \sin 2\theta + \tfrac{1}{3}r^{3} \sin 3\theta - \dots. \end{align*} These equations hold when $0 \leq r \leq 1$, and for all values of~$\theta$, except that, when $r = 1$, $\theta$~must not be equal to an odd multiple of~$\pi$. 
It is easy to see that they also hold when $-1 \leq r \leq 0$, except that, when $r = -1$, $\theta$~must not be equal to an even multiple of~$\pi$. A particularly interesting case is that in which $r = 1$. In this case we have \begin{align*} \log(1 + z) = \log(1 + \Cis\theta) &= \tfrac{1}{2} \log(2 + 2\cos\theta) + i\arctan\left(\frac{\sin\theta}{1 + \cos\theta}\right) \\ &= \tfrac{1}{2} \log(4\cos^{2} \tfrac{1}{2}\theta) + \tfrac{1}{2}i\theta, \end{align*} if $-\pi < \theta < \pi$, and so \begin{alignat*}{4} \cos\theta &- \tfrac{1}{2} \cos 2\theta &&+ \tfrac{1}{3} \cos 3\theta &&- \dots &&= \tfrac{1}{2} \log(4\cos^{2} \tfrac{1}{2}\theta), \\ \sin\theta &- \tfrac{1}{2} \sin 2\theta &&+ \tfrac{1}{3} \sin 3\theta &&- \dots &&= \tfrac{1}{2} \theta. \end{alignat*} \PageSep{420} The sums of the series, for other values of~$\theta$, are easily found from the consideration that they are periodic functions of~$\theta$ with the period~$2\pi$. Thus the sum of the cosine series is $\frac{1}{2} \log(4\cos^{2} \frac{1}{2}\theta)$ for all values of~$\theta$ save odd multiples of~$\pi$ (for which values the series is divergent), while the sum of the sine series is $\frac{1}{2} (\theta - 2k\pi)$ if $(2k - 1)\pi < \theta < (2k + 1)\pi$, and zero if $\theta$~is an odd multiple of~$\pi$. The graph of the function represented by the sine series is shown in \Fig{58}. The function is discontinuous for $\theta = (2k + 1)\pi$. %[Illustration: Fig. 58.] \Figure{58}{p420} \begin{Remark} If we write $iz$ and~$-iz$ for~$z$ in~\Eq{(5)}, and subtract, we obtain \[ \frac{1}{2i} \log\left(\frac{1 + iz}{1 - iz}\right) = z - \tfrac{1}{3}z^{3} + \tfrac{1}{5}z^{5} - \dots. \] If $z$~is real and numerically less than unity, we are led, by the results of \SecNo[§]{231}, to the formula \[ \arctan z = z - \tfrac{1}{3}z^{3} + \tfrac{1}{5}z^{5} - \dots, \] already proved in a different manner in~\SecNo[§]{214}. 
\end{Remark} \begin{Examples}{XCVII.} \Item{1.} Prove that, in any triangle in which $a > b$, \[ \log c = \log a - \frac{b}{a} \cos C - \frac{b^{2}}{2a^{2}} \cos 2C - \dots. \] [Use the formula $\log c = \frac{1}{2} \log(a^{2} + b^{2} - 2ab\cos C )$.] \Item{2.} Prove that if $-1 < r < 1$ and $-\frac{1}{2}\pi < \theta < \frac{1}{2}\pi$ then \[ r\sin 2\theta - \tfrac{1}{2}r^{2} \sin 4\theta + \tfrac{1}{3}r^{3} \sin 6\theta - \dots = \theta - \arctan \left\{\left(\frac{1 - r}{1 + r}\right) \tan\theta\right\}, \] the inverse tangent lying between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$. Determine the sum of the series for all other values of~$\theta$. \Item{3.} Prove, by considering the expansions of $\log(1 + iz)$ and $\log(1 - iz)$ in powers of~$z$, that if $-1 < r < 1$ then \begin{gather*} \begin{alignedat}{4} r\sin\theta &+ \tfrac{1}{2}r^{2} \cos 2\theta &&- \tfrac{1}{3}r^{3} \sin 3\theta &&- \tfrac{1}{4}r^{4} \cos 4\theta + \dots &&= \tfrac{1}{2} \log(1 + 2r \sin\theta + r^{2}),\\ r\cos\theta &+ \tfrac{1}{2}r^{2} \sin 2\theta &&- \tfrac{1}{3}r^{3} \cos 3\theta &&- \tfrac{1}{4}r^{4} \sin 4\theta + \dots &&= \arctan \left(\frac{r\cos\theta}{1 - r\sin\theta}\right), \end{alignedat} \displaybreak[1] \\ \begin{alignedat}{2} r\sin\theta &- \tfrac{1}{3}r^{3} \sin 3\theta + \dots &&= \tfrac{1}{4} \log\left(\frac{1 + 2r \sin\theta + r^{2}} {1 - 2r \sin\theta + r^{2}}\right),\\ r\cos\theta &- \tfrac{1}{3}r^{3} \cos 3\theta + \dots &&= \tfrac{1}{2} \arctan \left(\frac{2r\cos\theta}{1 - r^{2}}\right), \end{alignedat} \end{gather*} the inverse tangents lying between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$. 
\PageSep{421} \Item{4.} Prove that \begin{alignat*}{3} \cos\theta \cos\theta &- \tfrac{1}{2} \cos 2\theta \cos^{2}\theta &&+ \tfrac{1}{3} \cos 3\theta \cos^{3} \theta - \dots &&= \tfrac{1}{2} \log(1 + 3\cos^{2} \theta),\\ \sin\theta \sin\theta &- \tfrac{1}{2} \sin 2\theta \sin^{2}\theta &&+ \tfrac{1}{3} \sin 3\theta \sin^{3} \theta - \dots &&= \arccot (1 + \cot\theta + \cot^{2}\theta), \end{alignat*} the inverse cotangent lying between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$; and find similar expressions for the sums of the series \[ \cos\theta \sin\theta - \tfrac{1}{2} \cos 2\theta \sin^{2}\theta + \dots,\quad \sin\theta \cos\theta - \tfrac{1}{2} \sin 2\theta \cos^{2}\theta + \dots. \] \end{Examples} \Paragraph{236. Some applications of the logarithmic series. The exponential limit.} Let $z$~be any complex number, and $h$~a real number small enough to ensure that $|hz| < 1$. Then \[ \log(1 + hz) = hz - \tfrac{1}{2}(hz)^{2} + \tfrac{1}{3}(hz)^{3} - \dots, \] and so \[ \frac{\log(1 + hz)}{h} = z + \phi(h, z), \] where \begin{gather*} \phi(h, z) = -\tfrac{1}{2}hz^{2} + \tfrac{1}{3}h^{2}z^{3} - \tfrac{1}{4}h^{3}z^{4} + \dots,\\ |\phi(h, z)| < |hz^{2}| (1 + |hz| + |h^{2}z^{2}| + \dots) = \frac{|hz^{2}|}{1 - |hz|}, \end{gather*} so that $\phi(h, z) \to 0$ as $h \to 0$. It follows that \[ \lim_{h\to 0} \frac{\log(1 + hz)}{h} = z. \Tag{(1)} \] If in particular we suppose $h = 1/n$, where $n$~is a positive integer, we obtain \[ \lim_{n\to \infty} n\log \left(1 + \frac{z}{n}\right) = z, \] and so \[ \lim_{n\to \infty} \left(1 + \frac{z}{n}\right)^{n} = \lim_{n\to \infty} \exp\left\{n\log\left(1 + \frac{z}{n}\right)\right\} = \exp z. \Tag{(2)} \] This is a generalisation of the result proved in \SecNo[§]{208} for real values of~$z$. From~\Eq{(1)} we can deduce some other results which we shall require in the next section. 
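Before doing so we may note that formula~\Eq{(2)} contains, as a particular case, a familiar limit form of the equation $\exp(i\theta) = \cos\theta + i\sin\theta$: taking $z = i\theta$, with $\theta$~real, we have

```latex
\[
\lim_{n\to\infty} \left(1 + \frac{i\theta}{n}\right)^{n}
= \exp(i\theta) = \cos\theta + i\sin\theta.
\]
```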
If $t$ and~$h$ are real, and $h$~is sufficiently small, we have \[ \frac{\log(1 + tz + hz) - \log(1 + tz)}{h} = \frac{1}{h}\log\left(1 + \frac{hz}{1 + tz}\right) \] which tends to the limit $z/(1 + tz)$ as $h \to 0$. Hence \[ \frac{d}{dt} \{\log(1 + tz)\} = \frac{z}{1 + tz}. \Tag{(3)} \] \PageSep{422} We shall also require a formula for the differentiation of $(1 + tz)^{m}$, where $m$~is any number real or complex, with respect to~$t$. We observe first that, if $\phi(t) = \psi(t) + i\chi(t)$ is a complex function of~$t$, whose real and imaginary parts $\psi(t)$ and~$\chi(t)$ possess derivatives, then \begin{align*} \frac{d}{dt}(\exp\phi) &= \frac{d}{dt}\{(\cos\chi + i\sin\chi) \exp\psi\}\\ &= \{(\cos\chi + i\sin\chi) \psi' + (-\sin\chi + i\cos\chi)\chi'\} \exp\psi\\ &= (\psi' + i\chi')(\cos\chi + i\sin\chi) \exp\psi\\ &= (\psi' + i\chi') \exp(\psi + i\chi) = \phi' \exp\phi, \end{align*} so that the rule for differentiating~$\exp\phi$ is the same as when $\phi$~is real. This being so we have \begin{align*} \frac{d}{dt}(1 + tz)^{m} &= \frac{d}{dt} \exp\{m\log(1 + tz)\}\\ &= \frac{mz}{1 + tz} \exp\{m\log(1 + tz)\}\\ &= mz(1 + tz)^{m-1}. \Tag{(4)} \end{align*} Here both $(1 + tz)^{m}$ and~$(1 + tz)^{m-1}$ have their principal values\Add{.} \Paragraph{237. The general form of the Binomial Theorem.} We have proved already (\SecNo[§]{215}) that the sum of the series \[ 1 + \binom{m}{1} z + \binom{m}{2} z^{2} + \dots \] is $(1 + z)^{m} = \exp\{m\log(1 + z)\}$, for all real values of~$m$ and all real values of~$z$ between $-1$ and~$1$. If $a_{n}$~is the coefficient of~$z^{n}$ then \[ \left|\frac{a_{n+1}}{a_{n}}\right| = \left|\frac{m - n}{n + 1}\right| \to 1, \] whether $m$~is real or complex. Hence (\Ex{lxxx}.~3) the series is always convergent if the modulus of~$z$ is less than unity, and we shall now prove that its sum is still $\exp\{m\log(1 + z)\}$, \ie\ the principal value of~$(1 + z)^{m}$.
It follows from \SecNo[§]{236} that if $t$~is real then \[ \frac{d}{dt}(1 + tz)^{m} = mz(1 + tz)^{m-1}, \] \PageSep{423} $z$ and~$m$ having any real or complex values and each side having its principal value. Hence, if $\phi(t) = (1 + tz)^{m}$, we have \[ \phi^{(n)}(t) = m(m - 1) \dots (m - n + 1)z^{n} (1 + tz)^{m-n}. \] This formula still holds if $t = 0$, so that \[ \frac{\phi^{(n)}(0)}{n!} = \binom{m}{n} z^{n}. \] Now, in virtue of the remark made at the end of \SecNo[§]{164}, we have \[ \phi(1) = \phi(0) + \phi'(0) + \frac{\phi''(0)}{2!} + \dots + \frac{\phi^{(n-1)}(0)}{(n - 1)!} + R_{n}, \] where \[ R_{n} = \frac{1}{(n - 1)!}\int_{0}^{1} (1 - t)^{n-1} \phi^{(n)}(t)\, dt. \] But if $z = r(\cos\theta + i\sin\theta)$ then \[ |1 + tz| = \sqrtp{1 + 2tr\cos\theta + t^{2}r^{2}} \geq 1 - tr, \] and therefore \begin{align*} |R_{n}| &< \frac{|m(m - 1) \dots (m - n + 1)|}{(n - 1)!}\, r^{n} \int_{0}^{1} \frac{(1 - t)^{n-1}}{(1 - tr)^{n-m}}\, dt\\ &< \frac{|m(m - 1) \dots (m - n + 1)|}{(n - 1)!}\, \frac{(1 - \theta)^{n-1} r^{n}}{(1 - \theta r)^{n-m}}, \end{align*} where $0 < \theta < 1$; so that (cf.\ \SecNo[§]{163}) \[ |R_{n}| < K\frac{|m(m - 1) \dots (m - n + 1)|}{(n - 1)!}\, r^{n} = \rho_{n}, \] say. But \[ \frac{\rho_{n+1}}{\rho_{n}} = \frac{|m - n|}{n}r \to r, \] and so (\Ex{xxvii}.~6) $\rho_{n} \to 0$, and therefore $R_{n} \to 0$, as $n \to \infty$. Hence we arrive at the following theorem. \begin{Theorem} The sum of the binomial series\PageLabel{423} \[ 1 + \binom{m}{1} z + \binom{m}{2} z^{2} + \dots \] is $\exp\{m\log(1 + z)\}$, where the logarithm has its principal value, for all values of~$m$, real or complex, and all values of~$z$ such that $\DPtypo{z|}{|z|} < 1$. \end{Theorem} A more complete discussion of the binomial series, taking account of the more difficult case in which $|z| = 1$, will be found on pp.~225~\textit{et~seq.}\ of Bromwich's \textit{Infinite Series}. \PageSep{424} \begin{Examples}{XCVIII.} \Item{1.} Suppose $m$~real.
Then since \[ \log(1 + z) = \tfrac{1}{2} \log(1 + 2r\cos\theta + r^{2}) + i\arctan\left(\frac{r\sin\theta}{1 + r\cos\theta}\right), \] we obtain \begin{align*} \sum_{0}^{\infty} \binom{m}{n} z^{n} &= \exp\{\tfrac{1}{2}m \log(1 + 2r\cos\theta + r^{2})\} \Cis \left\{m\arctan\left(\frac{r\sin\theta}{1 + r\cos\theta}\right)\right\} \\ &= (1 + 2r\cos\theta + r^{2})^{\frac{1}{2}m} \Cis \left\{m\arctan\left(\frac{r\sin\theta}{1 + r\cos\theta}\right)\right\}, \end{align*} all the inverse tangents lying between $-\frac{1}{2}\pi$ and~$\frac{1}{2}\pi$. In particular, if we suppose $\theta = \frac{1}{2}\pi$, $z = ir$, and equate the real and imaginary parts, we obtain \begin{align*} 1 - \binom{m}{2} r^{2} + \binom{m}{4} r^{4} - \dots &= (1 + r^{2})^{\frac{1}{2}m} \cos(m\arctan r), \\ \binom{m}{1} r - \binom{m}{3} r^{3} + \binom{m}{5} r^{5} - \dots &= (1 + r^{2})^{\frac{1}{2}m} \sin(m\arctan r). \end{align*} \Item{2.} Verify the formulae of Ex.~1 when $m = 1$, $2$, $3$. [Of course when $m$~is a positive integer the series is finite.] \Item{3.} Prove that if $0 \leq r < 1$ then \begin{align*} 1 - \frac{1·3}{2·4} r^{2} + \frac{1·3·5·7}{2·4·6·8} r^{4} - \dots &= \bigsqrtb{\frac{\sqrtp{1 + r^{2}} + 1}{2(1 + r^{2})}}, \\ \frac{1}{2} r - \frac{1·3·5}{2·4·6} r^{3} + \frac{1·3·5·7·9}{2·4·6·8·10} r^{5} - \dots &= \bigsqrtb{\frac{\sqrtp{1 + r^{2}} - 1}{2(1 + r^{2})}}. \end{align*} [Take $m = -\frac{1}{2}$ in the last two formulae of Ex.~1.] \Item{4.} Prove that if $-\frac{1}{4}\pi < \theta < \frac{1}{4}\pi$ then \begin{align*} \cos m\theta &= \cos^{m} \theta \left\{1 - \binom{m}{2} \tan^{2} \theta + \binom{m}{4} \tan^{4} \theta - \dots\right\}, \\ \sin m\theta &= \cos^{m} \theta \left\{\binom{m}{1} \tan\theta - \binom{m}{3} \tan^{3} \theta + \dots\right\}, \end{align*} for all real values of~$m$. [These results follow at once from the equations \[ \cos m\theta + i\sin m\theta = (\cos\theta + i\sin\theta )^{m} = \cos^{m} \theta(1 + i\tan\theta)^{m}.] 
\] \Item{5.} We proved (\Ex{lxxxi}.~6), by direct multiplication of series, that $f(m, z) = \sum\dbinom{m}{n} z^{n}$, where $|z| < 1$, satisfies the functional equation \[ f(m, z) f(m', z) = f(m + m', z). \] Deduce, by an argument similar to that of \SecNo[§]{216}, and without assuming the general result of \PageRef{p.}{423}, that if $m$~is real and rational then \[ f(m, z) = \exp\{m\log(1 + z)\}. \] \Item{6.} If $z$~and~$\mu$ are real, and $-1 < z < 1$, then \[ \sum \binom{i\mu}{n} z^{n} = \cos\{\mu\log(1 + z)\} + i\sin\{\mu\log(1 + z)\}. \] \end{Examples} \PageSep{425} \Section{MISCELLANEOUS EXAMPLES ON CHAPTER X.} \begin{Examples}{} \Item{1.} Show that the real part of $i^{\log(1+i)}$ is \[ e^{-(4k+1)\pi^{2}/8} \cos \{\tfrac{1}{4}(4k + 1)\pi\log 2\}, \] where $k$~is any integer. \Item{2.} If $a\cos\theta + b\sin\theta + c = 0$, where $a$,~$b$,~$c$ are real and $c^{2} > a^{2} + b^{2}$, then \[ \theta = m\pi + \alpha ± i\log \frac{|c| + \sqrtp{c^{2} - a^{2} - b^{2}}}{\sqrtp{a^{2} + b^{2}}}, \] where $m$~is any odd or any even integer, according as $c$~is positive or negative, and $\alpha$~is an angle whose cosine and sine are $a/\sqrtp{a^{2} + b^{2}}$ and $b/\sqrtp{a^{2} + b^{2}}$. \Item{3.} Prove that if $\theta$~is real and $\sin\theta \sin\phi = 1$ then \[ \phi = (k + \tfrac{1}{2})\pi ± i\log \cot \tfrac{1}{2}(k\pi + \theta), \] where $k$~is any even or any odd integer, according as $\sin\theta$~is positive or negative. \Item{4.} Show that if $x$~is real then \begin{gather*} \frac{d}{dx} \exp\{(a + ib)x\} = (a + ib) \exp\{(a + ib) x\}, \\ \int \exp \{(a + ib)x\}\, dx = \frac{\exp{(a + ib)x}}{a + ib}. \end{gather*} Deduce the results of \Ex{lxxxvii}.~3. \Item{5.} Show that if $a > 0$ then $\ds\int_{0}^{\infty} \exp\{-(a + ib)x\}\, dx = \frac{1}{a + ib}$, and deduce the results of \Ex{lxxxvii}.~5.
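In Ex.~5, equating real and imaginary parts of $1/(a + ib) = (a - ib)/(a^{2} + b^{2})$ presumably yields the real integrals in question; with $a > 0$ and $b$~real the deduction may be sketched thus:

```latex
\[
\int_{0}^{\infty} e^{-ax}\cos bx\, dx = \frac{a}{a^{2} + b^{2}},\quad
\int_{0}^{\infty} e^{-ax}\sin bx\, dx = \frac{b}{a^{2} + b^{2}},
\]
since $\exp\{-(a + ib)x\} = e^{-ax}(\cos bx - i\sin bx)$.
```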
\Item{6.} Show that if $(x/a)^{2} + (y/b)^{2} = 1$ is the equation of an ellipse, and $f(x, y)$ denotes the terms of highest degree in the equation of any other algebraic curve, then the sum of the eccentric angles of the points of intersection of the ellipse and the curve differs by a multiple of~$2\pi$ from \[ -i\{\log f(a, ib) - \log f(a, -ib)\}. \] [The eccentric angles are given by $f(a\cos\alpha, b\sin\alpha) + \dots = 0$ or by \[ f\left\{\tfrac{1}{2} a \left(u + \frac{1}{u}\right),\ -\tfrac{1}{2} ib \left(u - \frac{1}{u}\right) \right\} + \dots = 0, \] where $u = \exp i\alpha$; and $\sum\alpha$~is equal to one of the values of~$-i\Log P$, where $P$~is the product of the roots of this equation.] \Item{7.} Determine the number and approximate positions of the roots of the equation $\tan z = az$, where $a$~is real. [We know already (\Ex{xvii}.~4) that the equation has infinitely many real roots. Now let $z = x + iy$, and equate real and imaginary parts. We obtain \[ \sin 2x/(\cos 2x + \cosh 2y) = ax,\quad \sinh 2y/(\cos 2x + \cosh 2y) = ay, \] so that, unless $x$ or~$y$ is zero, we have \[ (\sin 2x)/2x = (\sinh 2y)/2y. \] \PageSep{426} This is impossible, the left-hand side being numerically less, and the right-hand side numerically greater than unity. Thus $x = 0$ or $y = 0$. If $y = 0$ we come back to the real roots of the equation. If $x = 0$ then $\tanh y = ay$. It is easy to see that this equation has no real root other than zero if $a \leq 0$ or $a \geq 1$, and two such roots if $0 < a < 1$. Thus there are two purely imaginary roots if $0 < a < 1$; otherwise all the roots are real.] \Item{8.} The equation $\tan z = az + b$, where $a$ and~$b$ are real and $b$~is not equal to zero, has no complex roots if $a \leq 0$. If $a > 0$ then the real parts of all the complex roots are numerically greater than~$|b/2a|$. \Item{9.} The equation $\tan z = a/z$, where $a$~is real, has no complex roots, but has two purely imaginary roots if $a < 0$. 
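For the purely imaginary roots in Ex.~9 the argument of Ex.~7 may be adapted as follows (a sketch): putting $z = iy$, with $y$~real and not zero, we have $\tan iy = i\tanh y$, so that the equation $\tan z = a/z$ becomes

```latex
\[
i\tanh y = \frac{a}{iy},\quad\text{that is}\quad y\tanh y = -a.
\]
```

Since $y\tanh y$ is even, vanishes at $y = 0$, and increases steadily to infinity with~$|y|$, this equation has two real roots~$\pm y_{0}$ when $a < 0$ and none when $a > 0$.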
\Item{10.} The equation $\tan z = a\tanh cz$, where $a$ and~$c$ are real, has an infinity of real and of purely imaginary roots, but no complex roots. \Item{11.} Show that if $x$~is real then \[ e^{ax} \cos bx = \sum_{0}^{\infty} \frac{x^{n}}{n!} \left\{ a^{n} - \binom{n}{2} a^{n-2} b^{2} + \binom{n}{4} a^{n-4} b^{4} - \dots \right\}, \] where there are $\frac{1}{2}(n + 1)$ or~$\frac{1}{2}(n + 2)$ terms inside the large brackets. Find a similar series for~$e^{ax} \sin bx$. \Item{12.} If $n\phi(z, n) \to z$ as $n \to \infty$, then $\{1 + \phi(z, n)\}^{n} \to \exp z$. \Item{13.} If $\phi(t)$~is a complex function of the real variable~$t$, then \[ \frac{d}{dt} \log \phi(t) = \frac{\phi'(t)}{\phi(t)}. \] %[** TN: Paragraph break added] [Use the formulae \[ \phi = \psi + i\chi,\quad \log \phi = \tfrac{1}{2}\log(\psi^{2} + \chi^{2}) + i\arctan(\chi/\psi).] \] \Item{14.} \Topic{Transformations.} In \Ref{Ch.}{III} (\Exs{xxi}.\ 21~\textit{et~seq.}, and \MiscExs{III}\ 22~\textit{et seq.})\ we considered some simple examples of the geometrical relations between figures in the planes of two variables $z$,~$Z$ connected by a relation $z = f(Z)$. We shall now consider some cases in which the relation involves logarithmic, exponential, or circular functions. Suppose firstly that \[ z = \exp(\pi Z/a),\quad Z = (a/\pi) \Log z \] where $a$~is positive. To one value of~$Z$ corresponds one of~$z$, but to one of~$z$ infinitely many of~$Z$. If $x$,~$y$, $r$,~$\theta$ are the coordinates of~$z$ and $X$,~$Y$, $R$,~$\Theta$ those of~$Z$, we have the relations \begin{alignat*}{2} x &= e^{\pi X/a} \cos(\pi Y/a),\qquad & y &= e^{\pi X/a} \sin(\pi Y/a),\\ X &= (a/\pi) \log r, & Y &= (a\theta/\pi) + 2ka, \end{alignat*} where $k$~is any integer. 
If we suppose that $-\pi < \theta \leq \pi$, and that $\Log z$~has its principal value~$\log z$, then $k = 0$, and $Z$~is confined to a strip of its plane parallel to the axis~$OX$ and extending to a distance~$a$ from it on each side, one point \PageSep{427} of this strip corresponding to one of the whole $z$-plane, and conversely. By taking a value of~$\Log z$ other than the principal value we obtain a similar relation between the $z$-plane and another strip of breadth~$2a$ in the $Z$-plane. To the lines in the $Z$-plane for which $X$~and~$Y$ are constant correspond the circles and radii vectores in the $z$-plane for which $r$~and~$\theta$ are constant. To one of the latter lines corresponds the whole of a parallel to~$OX$, but to a circle for which $r$~is constant corresponds only a part, of length~$2a$, of a parallel to~$OY$. To make $Z$~describe the whole of the latter line we must make $z$ move continually round and round the circle. \Item{15.} Show that to a straight line in the $Z$-plane corresponds an equiangular spiral in the $z$-plane. \Item{16.} Discuss similarly the transformation $z = c\cosh(\pi Z/a)$, showing in particular that the whole $z$-plane corresponds to any one of an infinite number of strips in the $Z$-plane, each parallel to the axis $OX$ and of breadth~$2a$. Show also that to the line $X = X_{0}$ corresponds the ellipse \[ \left\{\frac{x}{c\cosh(\pi X_{0}/a)}\right\}^{2} + \left\{\frac{y}{c\sinh(\pi X_{0}/a)}\right\}^{2} = 1, \] and that for different values of~$X_{0}$ these ellipses form a confocal system; and that the lines $Y = Y_{0}$ correspond to the associated system of confocal hyperbolas. Trace the variation of~$z$ as $Z$~describes the whole of a line $X = X_{0}$ or $Y = Y_{0}$. How does $Z$~vary as $z$~describes the degenerate ellipse and hyperbola formed by the segment between the foci of the confocal system and the remaining segments of the axis of~$x$? 
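The ellipse of Ex.~16 may be obtained, for instance, from the addition formula $\cosh(u + iv) = \cosh u\cos v + i\sinh u\sin v$: on the line $X = X_{0}$ we have

```latex
\[
z = c\cosh\frac{\pi(X_{0} + iY)}{a}
= c\cosh\frac{\pi X_{0}}{a}\cos\frac{\pi Y}{a}
+ ic\sinh\frac{\pi X_{0}}{a}\sin\frac{\pi Y}{a},
\]
so that $x = c\cosh(\pi X_{0}/a)\cos(\pi Y/a)$,
$y = c\sinh(\pi X_{0}/a)\sin(\pi Y/a)$.
```

Eliminating~$Y$ gives the ellipse stated; and since $\cosh^{2} u - \sinh^{2} u = 1$ the foci are $z = \pm c$ for every~$X_{0}$, so that the ellipses are confocal.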
\Item{17.} Verify that the results of Ex.~16 are in agreement with those of Ex.~14 and those of \Ref{Ch.}{III}, \MiscEx{III}~25. [The transformation $z = c\cosh(\pi Z/a)$ may be regarded as compounded from the transformations \[ z = cz_{1},\quad z_{1} = \tfrac{1}{2}\{z_{2} + (1/z_{2})\},\quad z_{2} = \exp(\pi Z/a).] \] \Item{18.} Discuss similarly the transformation $z = c\tanh(\pi Z/a)$, showing that to the lines $X = X_{0}$ correspond the coaxal circles \[ \{x - c\coth(2\pi X_{0}/a)\}^{2} + y^{2} = c^{2}\cosech^{2}(2\pi X_{0}/a), \] and to the lines $Y = Y_{0}$ the orthogonal system of coaxal circles. \Item{19.} \Topic{The Stereographic and Mercator's Projections.} The points of a unit sphere whose centre is the origin are projected from the south pole (whose coordinates are $0$,~$0$,~$-1$) on to the tangent plane at the north pole. The coordinates of a point on the sphere are $\xi$,~$\eta$,~$\zeta$, and Cartesian axes $OX$,~$OY$ are taken on the tangent plane, parallel to the axes of $\xi$ and~$\eta$. Show that the coordinates of the projection of the point are \[ x = 2\xi/(1 + \zeta),\quad y = 2\eta/(1 + \zeta), \] and that $x + iy = 2\tan \frac{1}{2}\theta \Cis\phi$, where $\phi$~is the longitude (measured from the plane $\eta = 0$) and $\theta$~the north polar distance of the point on the sphere. \PageSep{428} This projection gives a map of the sphere on the tangent plane, generally known as the \emph{Stereographic Projection}. If now we introduce a new complex variable \[ Z = X + iY = -i\log \tfrac{1}{2}z = -i\log \tfrac{1}{2}(x + iy) \] so that $X = \phi$, $Y = \log \cot \frac{1}{2}\theta$, we obtain another map in the plane of~$Z$, usually called \emph{Mercator's Projection}. In this map parallels of latitude and longitude are represented by straight lines parallel to the axes of $X$ and $Y$ respectively. 
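The formula $x + iy = 2\tan\frac{1}{2}\theta \Cis\phi$ of Ex.~19 may be verified by writing $\xi = \sin\theta\cos\phi$, $\eta = \sin\theta\sin\phi$, $\zeta = \cos\theta$:

```latex
\[
x + iy = \frac{2(\xi + i\eta)}{1 + \zeta}
= \frac{2\sin\theta}{1 + \cos\theta}\, \Cis\phi
= 2\tan\tfrac{1}{2}\theta\, \Cis\phi.
\]
```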
\Item{20.} Discuss the transformation given by the equation \[ z = \Log \left(\frac{Z - a}{Z - b}\right), \] showing that the straight lines for which $x$~and~$y$ are constant correspond to two orthogonal systems of coaxal circles in the $Z$-plane. \Item{21.} Discuss the transformation \[ z = \Log \left\{\frac{\sqrtp{Z - a} + \sqrtp{Z - b}}{\sqrtp{b - a}}\right\}, \] showing that the straight lines for which $x$~and~$y$ are constant correspond to sets of confocal ellipses and hyperbolas whose foci are the points $Z = a$ and $Z = b$. [We have \begin{alignat*}{2} \sqrtp{Z - a} + \sqrtp{Z - b} &= \sqrtp{b - a}\, \exp(& &x + iy), \\ \sqrtp{Z - a} - \sqrtp{Z - b} &= \sqrtp{b - a}\, \exp(&-&x - iy); \end{alignat*} and it will be found that \[ |Z - a| + |Z - b| = |b - a|\cosh 2x,\quad |Z - a| - |Z - b| = |b - a|\cos 2y.] \] \Item{22.} \Topic{The transformation $z = Z^{i}$.} If $z = Z^{i}$, where the imaginary power has its principal value, we have \[ \exp(\log r + i\theta) = z = \exp(i\log Z) = \exp(i\log R - \Theta), \] so that $\log r = -\Theta$, $\theta = \log R + 2k\pi$, where $k$~is an integer. As all values of~$k$ give the same point~$z$, we shall suppose that $k = 0$, so that \[ \log r = -\Theta,\quad \theta = \log R. \Tag{(1)} \] The whole plane of~$Z$ is covered when $R$~varies through all positive values and $\Theta$~from $-\pi$ to~$\pi$: then $r$~has the range $\exp(-\pi)$ to~$\exp\pi$ and $\theta$~ranges through all real values. Thus the $Z$-plane corresponds to the ring bounded by the circles $r = \exp(-\pi)$, $r = \exp\pi$; but this ring is covered infinitely often. If however $\theta$~is allowed to vary only between $-\pi$ and~$\pi$, so that the ring is covered only once, then $R$~can vary only from $\exp(-\pi)$ to~$\exp \pi$, so that the variation of~$Z$ is restricted to a ring similar in all respects to that within which $z$~varies. 
Each ring, moreover, must be regarded as having a barrier along the negative real axis which~$z$ (or~$Z$) must not cross, as its amplitude must not transgress the limits $-\pi$ and~$\pi$. \PageSep{429} We thus obtain a correspondence between two rings, given by the pair of equations \[ z = Z^{i},\quad Z = z^{-i}, \] where each power has its principal value. To circles whose centre is the origin in one plane correspond straight lines through the origin in the other. \Item{23.} Trace the variation of~$z$ when $Z$, starting at the point~$\exp \pi$, moves round the larger circle in the positive direction to the point~$-\exp \pi$, along the barrier, round the smaller circle in the negative direction, back along the barrier, and round the remainder of the larger circle to its original position. \Item{24.} Suppose each plane to be divided up into an infinite series of rings by circles of radii \[ \dots,\quad e^{-(2n+1)\pi},\ \dots,\quad e^{-\pi},\quad e^{\pi},\quad e^{3\pi},\ \dots,\quad e^{(2n+1)\pi},\ \dots. \] Show how to make any ring in one plane correspond to any ring in the other, by taking suitable values of the powers in the equations $z = Z^{i}$, $Z = z^{-i}$. \Item{25.} If $z = Z^{i}$, any value of the power being taken, and $Z$~moves along an equiangular spiral whose pole is the origin in its plane, then $z$~moves along an equiangular spiral whose pole is the origin in its plane. \Item{26.} How does $Z = z^{ai}$, where $a$~is real, behave as $z$~approaches the origin along the real axis\DPtypo{.}{?} [$Z$~moves round and round a circle whose centre is the origin (the unit circle if $z^{ai}$~has its principal value), and the real and imaginary parts of~$Z$ both oscillate finitely.] \Item{27.} Discuss the same question for $Z = z^{a+bi}$, where $a$~and~$b$ are any real numbers. 
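One possible line of discussion for Ex.~27: let $z = x$ approach the origin along the positive real axis, and give $z^{a+bi}$ its principal value. Then

```latex
\[
Z = z^{a+bi} = \exp\{(a + bi)\log x\} = x^{a}\, \Cis(b\log x),
\]
so that $|Z| = x^{a}$ and $\am Z = b\log x$.
```

If $b \neq 0$ then $R = e^{(a/b)\Theta}$, and $Z$~describes an equiangular spiral, tending to the origin if $a > 0$ and to infinity if $a < 0$; if $a = 0$ we recover the behaviour of Ex.~26.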
\Item{28.} Show that the region of convergence of a series of the type $\sum\limits_{-\infty}^{\infty} a_{n}z^{nai}$, where $a$~is real, is an angle, \ie\ a region bounded by inequalities of the type $\theta_{0} < \am z < \theta_{1}$. [The angle may reduce to a line, or cover the whole plane.] \Item{29.} \Topic{Level Curves.} If $f(z)$~is a function of the complex variable~$z$, we call the curves for which $|f(z)|$~is constant the \emph{level curves} of~$f(z)$. Sketch the forms of the level curves of \begin{alignat*}{2} z - a \quad& \text{(\emph{concentric circles})}, \qquad& (z - a)(z - b) \quad& \text{(\emph{Cartesian ovals})}, \\ (z - a)/(z - b) \quad& \text{(\emph{coaxal circles})}, \qquad& \exp z \quad& \text{(\emph{straight lines})}. \end{alignat*} \Item{30.} Sketch the forms of the level curves of $(z - a)(z - b)(z - c)$, $(1 + z\sqrt{3} + z^{2})/z$. [Some of the level curves of the latter function are drawn in \Fig{59}, the curves marked \textsc{i}--\textsc{vii} corresponding to the values \[ .10,\quad 2 - \sqrt{3} = .27,\quad .40,\quad 1.00,\quad 2.00,\quad 2 + \sqrt{3} = 3.73,\quad 4.53 \] of~$|f(z)|$. The reader will probably find but little difficulty in arriving at a general idea of the forms of the level curves of any given rational function; but to enter into details would carry us into the general theory of functions of a complex variable.] \PageSep{430}
%[Illustration: Fig. 59.]
%[Illustration: Fig. 60.]
%[Illustration: Fig. 61.]
\ifthenelse{\boolean{ForPrinting}}{%
\begin{figure}[p!]
\centering \Graphic{0.9\textwidth}{p430a} \caption{Fig.~59.} \label{fig:59} \vfill \begin{minipage}{0.45\textwidth} \centering \Graphic{2in}{p430b} \caption{Fig.~60.} \label{fig:60} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \Graphic{2in}{p430c} \caption{Fig.~61.} \label{fig:61} \end{minipage} \end{figure} }{% Else not ForPrinting \Figure[\textwidth]{59}{p430a} \Figures{2in}{60}{p430b}{2in}{61}{p430c} } \PageSep{431} \Item{31.} Sketch the forms of the level curves of (i)~$z\exp z$, (ii)~$\sin z$. [See \Fig{60}, which represents the level curves of~$\sin z$. The curves marked \textsc{i}--\textsc{viii} correspond to $k = .35$, $.50$, $.71$, $1.00$, $1.41$, $2.00$, $2.83$,~$4.00$.] \Item{32.} Sketch the forms of the level curves of~$\exp z - c$, where $c$~is a real constant. [\Fig{61} shows the level curves of $|\exp z - 1|$, the curves \textsc{i}--\textsc{vii} corresponding to the values of~$k$ given by $\log k = -1.00$, $-.20$, $-.05$, $0.00$, $.05$, $.20$,~$1.00$.] \Item{33.} The level curves of~$\sin z - c$, where $c$~is a positive constant, are sketched in Figs.~62,~63. [The nature of the curves differs according as to whether $c < 1$ or~$c > 1$. In \Fig{62} we have taken $c = .5$, and the curves \textsc{i}--\textsc{viii} correspond to $k = .29$, $.37$, $.50$, $.87$, $1.50$, $2.60$, $4.50$,~$7.79$. In \Fig{63} we have taken $c = 2$, and the curves \textsc{i}--\textsc{vii} correspond to $k = .58$, $1.00$, $1.73$, $3.00$, $5.20$, $9.00$,~$15.59$. If $c = 1$ then the curves are the same as those of \Fig{60}, except that the origin and scale are different.] %[Illustration: Fig. 62.] %[Illustration: Fig. 63.] 
\Figures{2.25in}{62}{p431a}{2.25in}{63}{p431b} \Item{34.} Prove that if $0 < \theta < \pi$ then \begin{alignat*}{3} \cos\theta &+ \tfrac{1}{3} \cos 3\theta &&+ \tfrac{1}{5} \cos 5\theta &&+ \dots = \tfrac{1}{4} \log \cot^{2}\tfrac{1}{2}\theta,\\ \sin\theta &+ \tfrac{1}{3} \sin 3\theta &&+ \tfrac{1}{5} \sin 5\theta &&+ \dots = \tfrac{1}{4}\pi, \end{alignat*} and determine the sums of the series for all other values of~$\theta$ for which they are convergent. [Use the equation \[ z + \tfrac{1}{3}z^{3} + \tfrac{1}{5}z^{5} + \dots = \tfrac{1}{2} \log \left(\frac{1 + z}{1 - z}\right) \] where $z = \cos\theta + i\sin\theta$. When $\theta$~is increased by~$\pi$ the sum of each series simply changes its sign. It follows that the first formula holds for all values of~$\theta$ save multiples of~$\pi$ (for which the series diverges), while the sum of the second series is~$\frac{1}{4}\pi$ if $2k\pi < \theta < (2k + 1)\pi$, $-\frac{1}{4}\pi$ if $(2k + 1)\pi < \theta < (2k + 2)\pi$, and $0$ if $\theta$~is a multiple of~$\pi$.] \PageSep{432} \Item{35.} Prove that if $0 < \theta < \frac{1}{2}\pi$ then \begin{alignat*}{3} \cos\theta &- \tfrac{1}{3} \cos 3\theta &&+ \tfrac{1}{5} \cos 5\theta &&- \dots = \tfrac{1}{4}\pi,\\ \sin\theta &- \tfrac{1}{3} \sin 3\theta &&+ \tfrac{1}{5} \sin 5\theta &&- \dots = \tfrac{1}{4} \log (\sec\theta + \tan\theta)^{2}; \end{alignat*} and determine the sums of the series for all other values of~$\theta$ for which they are convergent. \Item{36.} Prove that \[ \cos\theta \cos\alpha + \tfrac{1}{2} \cos 2\theta \cos 2\alpha + \tfrac{1}{3} \cos 3\theta \cos 3\alpha + \dots = -\tfrac{1}{4} \log \{4(\cos\theta - \cos\alpha)^{2}\}, \] unless $\theta - \alpha$ or $\theta + \alpha$ is a multiple of~$2\pi$. \Item{37.} Prove that if neither $a$ nor~$b$ is real then \[ \int_{0}^{\infty} \frac{dx}{(x - a)(x - b)} = -\frac{\log(-a) - \log(-b)}{a - b}, \] each logarithm having its principal value. Verify the result when $a = ci$, $b = -ci$, where $c$~is positive. 
Discuss also the cases in which $a$ or~$b$ or both are real and negative. \Item{38.} Prove that if $\alpha$ and~$\beta$ are real, and $\beta > 0$, then \[ \int_{0}^{\infty} \frac{dx}{x^{2} - (\alpha + i\beta)^{2}} = \frac{\pi i}{2(\alpha + i\beta)}. \] What is the value of the integral when $\beta < 0$? \Item{39.} Prove that, if the roots of $Ax^{2} + 2Bx + C = 0$ have their imaginary parts of opposite signs, then \[ \int_{-\infty}^{\infty} \frac{dx}{Ax^{2} + 2Bx + C} = \frac{\pi i}{\sqrtp{B^{2} - AC}}, \] the sign of $\sqrtp{B^{2} - AC}$ being so chosen that the real part of $\{\sqrtp{B^{2} - AC}\}/Ai$ is positive. \end{Examples} \PageSep{433} \BackMatter \Appendix{I}{(To Chapters III, IV, V)}{The Proof that every Equation has a Root} \First{Let} \[ Z = P(z) = \alpha_{0} z^{n} + \alpha_{1} z^{n-1} + \dots + \alpha_{n} \] be a polynomial in~$z$, with real or complex coefficients. We can represent the values of $z$ and~$Z$ by points in two planes, which we may call the $z$-plane and the $Z$-plane respectively. It is evident that if $z$~describes a closed path~$\gamma$ in the $z$-plane, then $Z$~describes a corresponding closed path~$\Gamma$ in the $Z$-plane. We shall assume for the present that the path~$\Gamma$ does not pass through the origin. To any value of~$Z$ correspond an infinity of values of~$\am Z$, differing by multiples of~$2\pi$, and each of these values varies continuously as $Z$~describes~$\Gamma$.\footnote {It is here that we assume that $\Gamma$~does not pass through the origin.} We can select a particular value of~$\am Z$ corresponding to each point
%[Illustration: Fig. A.]
%[Illustration: Fig. B.]
\Figures{2.25in}{A}{p433a}{2.25in}{B}{p433b}
of~$\Gamma$, by first selecting a particular value corresponding to the initial value of~$Z$, and then following the continuous variation of this value as $Z$~moves along~$\Gamma$.
We shall, in the argument which follows, use the phrase `the amplitude of~$Z$' and the formula~$\am Z$ to denote the particular value of the amplitude of~$Z$ thus selected. Thus $\am Z$~denotes a one-valued and continuous function of $X$~and~$Y$, the real and imaginary parts of~$Z$. \PageSep{434} When $Z$, after describing~$\Gamma$, returns to its original position, its amplitude may be the same as before, as will certainly be the case if $\Gamma$~does not enclose the origin, like path~(\ia) in \Fig{B}, or it may differ from its original value by any multiple of~$2\pi$. Thus if its path is like~(\ib) in \Fig{B}, winding once round the origin in the positive direction, then its amplitude will have increased by~$2\pi$. These remarks apply, not merely to~$\Gamma$, but to any closed contour in the $Z$-plane which does not pass through the origin. Associated with any such contour there is a number which we may call `the increment of~$\am Z$ when $Z$~describes the contour', a number independent of the initial choice of a particular value of the amplitude of~$Z$. We shall now prove that \begin{Result}if the amplitude of~$Z$ is not the same when $Z$~returns to its original position, then the path of~$z$ must contain inside or on it at least one point at which $Z = 0$. \end{Result} We can divide~$\gamma$ into a number of smaller contours by drawing parallels to the axes at a distance~$\delta_{1}$ from one another, as in \Fig{C}\@.\footnote {There is no difficulty in giving a definite rule for the construction of these parallels: the most obvious course is to draw all the lines $x = k\delta_{1}$, $y = k\delta_{1}$, where $k$~is an integer positive or negative.} If there is, on the boundary of any one of these contours, a point at which $Z = 0$, what we wish to prove is already established. We may therefore suppose %[Illustration: Fig. C.] %[Illustration: Fig. D.] \Figures{2.5in}{C}{p434a}{2in}{D}{p434b} that this is not the case. 
Then the increment of~$\am Z$, when $z$~describes~$\gamma$, is equal to the sum of all the increments of~$\am Z$ obtained by supposing $z$~to describe each of these smaller contours separately in the same sense as~$\gamma$. For if $z$~describes each of the smaller contours in turn, in the same sense, it will ultimately (see \Fig{D}) have described the boundary of~$\gamma$ once, and each part of each of the dividing parallels twice and in opposite directions. Thus $PQ$~will have been described twice, once from $P$ to~$Q$ and once from $Q$ to~$P$. As $z$~moves from $P$ to~$Q$, $\am Z$~varies continuously, since $Z$~does not pass through the origin; and if the increment of~$\am Z$ is in this case~$\theta$, then its increment when $z$~moves from $Q$ to~$P$ is~$-\theta$; so that, when we add up the increments of~$\am Z$ due to the description of the various parts of the smaller contours, all cancel one another, save the increments due to the description of parts of $\gamma$~itself. \PageSep{435} Hence, if $\am Z$~is changed when $z$~describes~$\gamma$, there must be \emph{at least one} of the smaller contours, say~$\gamma_{1}$, such that $\am Z$~is changed when $z$~describes~$\gamma_{1}$. This contour may be a square whose sides are parts of the auxiliary parallels, or may be composed of parts of these parallels and parts of the boundary of~$\gamma$. In any case every point of the contour lies in or on the boundary of a square~$\Delta_{1}$ whose sides are parts of the auxiliary parallels and of length~$\delta_{1}$. We can now further subdivide~$\gamma_{1}$ by the help of parallels to the axes at a smaller distance~$\delta_{2}$ from one another, and we can find a contour~$\gamma_{2}$, entirely included in a square~$\Delta_{2}$, of side~$\delta_{2}$ and itself included in~$\Delta_{1}$ such that $\am Z$~is changed when $z$~describes the contour. 
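The subdivision argument is effective, and can be run as a crude numerical procedure: quarter a square repeatedly, keeping each time a quarter on whose boundary $\am Z$ changes. The sketch below merely illustrates that idea; the test polynomial, the sample counts, and the number of subdivision steps are our own choices, and the root has been given non-dyadic coordinates so that it never falls on a dividing line.

```python
import cmath
import math

def winding(P, lo, hi, samples=200):
    """Increment of am P(z), in whole turns, as z describes the boundary
    of the axis-parallel rectangle with corners lo and hi (complex)."""
    corners = [lo, complex(hi.real, lo.imag), hi, complex(lo.real, hi.imag)]
    pts = []
    for a, b in zip(corners, corners[1:] + corners[:1]):
        pts += [a + (b - a) * k / samples for k in range(samples)]
    pts.append(pts[0])
    total = sum(cmath.phase(P(v) / P(u)) for u, v in zip(pts, pts[1:]))
    return round(total / (2 * math.pi))

def locate_root(P, lo, hi, steps=40):
    """Quarter the square repeatedly, keeping a quarter whose boundary
    winding is nonzero; the squares close down on a point where P = 0."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        quarters = [(lo, mid),
                    (complex(mid.real, lo.imag), complex(hi.real, mid.imag)),
                    (complex(lo.real, mid.imag), complex(mid.real, hi.imag)),
                    (mid, hi)]
        for qlo, qhi in quarters:
            if winding(P, qlo, qhi) != 0:
                lo, hi = qlo, qhi
                break
    return (lo + hi) / 2

root = 0.6 + 0.7j                      # non-dyadic, so never on a grid line
P = lambda z: (z - root) * (z + 2)     # exactly one root inside [0,1] x [0,1]
found = locate_root(P, 0j, 1 + 1j)     # converges on 0.6 + 0.7j
```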
Now let us take an infinite sequence of decreasing numbers $\delta_{1}$, $\delta_{2}$,~\dots, $\delta_{m}$,~\dots, whose limit is zero.\footnote {We may, \eg, take $\delta_{m} = \delta_{1}/2^{m-1}$.} By repeating the argument used above, we can determine a series of squares $\Delta_{1}$, $\Delta_{2}$,~\dots, $\Delta_{m}$,~\dots\ and a series of contours $\gamma_{1}$, $\gamma_{2}$,~\dots, $\gamma_{m}$,~\dots\ such that (i)~$\Delta_{m+1}$~lies entirely inside~$\Delta_{m}$, (ii)~$\gamma_{m}$~lies entirely inside~$\Delta_{m}$, (iii)~$\am Z$~is changed when $z$~describes~$\gamma_{m}$. If $(x_{m}, y_{m})$ and~$(x_{m} + \delta_{m}, y_{m} + \delta_{m})$ are the lower left-hand and upper right-hand corners of~$\Delta_{m}$, it is clear that $x_{1}$, $x_{2}$,~\dots, $x_{m}$,~\dots\ is an increasing and $x_{1} + \delta_{1}$, $x_{2} + \delta_{2}$,~\dots, $x_{m} + \delta_{m}$,~\dots\ a decreasing sequence, and that they have a common limit~$x_{0}$. Similarly $y_{m}$~and~$y_{m} + \delta_{m}$ have a common limit~$y_{0}$, and $(x_{0}, y_{0})$~is the one and only point situated inside every square~$\Delta_{m}$. However small $\delta$ may be, we can draw a square which includes~$(x_{0}, y_{0})$, and whose sides are parallel to the axes and of length~$\delta$, and inside this square a closed contour such that $\am Z$~is changed when $z$~describes the contour. It can now be shown that \[ P(x_{0} + iy_{0}) = 0. \] For suppose that $P(x_{0} + iy_{0}) = a$, where $|a| = \rho > 0$. Since $P(x + iy)$~is a continuous function of $x$~and~$y$, we can draw a square whose centre is~$(x_{0}, y_{0})$ and whose sides are parallel to the axes, and which is such that \[ |P(x + iy) - P(x_{0} + iy_{0})| < \tfrac{1}{2}\rho \] at all points~$x + iy$ inside the square or on its boundary. At all such points \[ P(x + iy) = a + \phi, \] where $|\phi| < \frac{1}{2}\rho$. Now let us take any closed contour lying entirely inside this square. 
As $z$~describes this contour, $Z = a + \phi$ also describes a closed contour. But the latter contour evidently lies inside the circle whose centre is~$a$ and whose radius is~$\frac{1}{2}\rho$, and this circle does not include the origin. Hence the amplitude of~$Z$ is unchanged. But this contradicts what was proved above, viz.\ that inside each square~$\Delta_{m}$ we can find a closed contour the description of which by~$z$ changes~$\am Z$\Add{.} Hence $P(x_{0} + iy_{0}) = 0$. \PageSep{436} All that remains is to show that we can always find \emph{some} contour such that $\am Z$~is changed when $z$~describes~$\gamma$. Now \[ Z = a_{0} z^{n} \left(1 + \frac{a_{1}}{a_{0}z} + \frac{a_{2}}{a_{0} z^{2}} + \dots + \frac{a_{n}}{a_{0} z^{n}}\right). \] We can choose $R$ so that \[ \frac{|a_{1}|}{|a_{0}| R} + \frac{|a_{2}|}{|a_{0}| R^{2}} + \dots + \frac{|a_{n}|}{|a_{0}| R^{n}} < \delta, \] where $\delta$~is any positive number, however small; and then, if $\gamma$~is the circle whose centre is the origin and whose radius is~$R$, we have \[ Z = a_{0} z^{n} (1 + \rho), \] where $|\rho| < \delta$, at all points on~$\gamma$. We can then show, by an argument similar to that used above, that $\am(1 + \rho)$~is unchanged as $z$~describes $\gamma$~in the positive sense, while $\am z^{n}$ on the other hand is increased by~$2n\pi$. Hence $\am Z$~is increased by~$2n\pi$, and the proof that $Z = 0$ has a root is completed. We have assumed throughout the argument that neither~$\Gamma$, nor any of the smaller contours into which it is resolved, passes through the origin. This assumption is obviously legitimate, for to suppose the contrary, at any stage of the argument, is to admit the truth of the theorem. 
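The closing step, that on a circle large enough for the leading term to dominate the increment of $\am Z$ is $2n\pi$, is also easy to see numerically. In this sketch the cubic, the radius, and the sample count are merely illustrative choices.

```python
import cmath
import math

def am_increment_on_circle(P, R, samples=4000):
    """Change of am P(z) as z describes the circle |z| = R once,
    positively, tracked through small principal-argument steps."""
    pts = [R * cmath.exp(2j * math.pi * k / samples) for k in range(samples + 1)]
    return sum(cmath.phase(P(v) / P(u)) for u, v in zip(pts, pts[1:]))

P = lambda z: z**3 - 2*z + 5            # degree n = 3
inc = am_increment_on_circle(P, R=10.0)  # close to 2*pi*3
```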
We leave it as an exercise to the reader to infer, from the discussion which precedes and that of \SecNo[§]{43}, that \begin{Result}when $z$~describes any contour~$\gamma$ in the positive sense the increment of~$\am Z$ is~$2k\pi$, where $k$~is the number of roots of $Z = 0$ inside~$\gamma$, multiple roots being counted multiply. \end{Result} There is another proof, proceeding on different lines, which is often given. It depends, however, on an extension to functions of two or more variables of the results of \SecNo[§§]{102}~\textit{et~seq.} We define, precisely on the lines of \SecNo[§]{102}, the \emph{upper and lower bounds} of a function~$f(x, y)$, for all pairs of values of $x$ and~$y$ corresponding to any point of any region in the plane of~$(x, y)$ bounded by a closed curve. And we can prove, much as in \SecNo[§]{102}, that a continuous function~$f(x, y)$ attains its upper and lower bounds in any such region. Now \[ |Z| = |P(x + iy)| \] is a positive and continuous function of $x$ and~$y$. If $m$~is its lower bound for points on and inside~$\gamma$, then there must be a point~$z_{0}$ for which $|Z| = m$, and this must be the \emph{least} value assumed by~$|Z|$. If $m = 0$, then $P(z_{0}) = 0$, and we have proved what we want. We may therefore suppose that $m > 0$. The point~$z_{0}$ must lie either inside or on the boundary of~$\gamma$: but if $\gamma$~is a circle whose centre is the origin, and whose radius~$R$ is large enough, then the last hypothesis is untenable, since $|P(z)| \to \infty$ as $|z| \to \infty$. We may therefore suppose that $z_{0}$~lies inside~$\gamma$. \PageSep{437} If we put $z = z_{0} + \zeta$, and rearrange $P(z)$ according to powers of~$\zeta$, we obtain \[ P(z) = P(z_{0}) + A_{1}\zeta + A_{2}\zeta^{2} + \dots + A_{n}\zeta^{n}, \] say. Let $A_{k}$ be the first of the coefficients which does not vanish, and let $|A_{k}| = \mu$, $|\zeta| = \rho$. 
We can choose~$\rho$ so small that \[ |A_{k+1}|\rho + |A_{k+2}|\rho^{2} + \dots + |A_{n}|\rho^{n-k} < \tfrac{1}{2}\mu. \] Then \[ |P(z) - P(z_{0}) - A_{k}\zeta^{k}| < \tfrac{1}{2}\mu\rho^{k}, \] and \[ |P(z)| < |P(z_{0}) + A_{k}\zeta^{k}| + \tfrac{1}{2}\mu\rho^{k}. \] Now suppose that $z$~moves round the circle whose centre is~$z_{0}$ and radius~$\rho$. Then \[ P(z_{0}) + A_{k}\zeta^{k} \] moves $k$~times round the circle whose centre is~$P(z_{0})$ and radius $|A_{k}\zeta^{k}| = \mu\rho^{k}$, and passes $k$~times through the point in which this circle is intersected by the line joining~$P(z_{0})$ to the origin. Hence there are $k$~points on the circle described by~$z$ at which $|P(z_{0}) + A_{k}\zeta^{k}| = |P(z_{0})| - \mu\rho^{k}$ and so \[ |P(z)| < |P(z_{0})| - \mu\rho^{k} + \tfrac{1}{2}\mu\rho^{k} = m - \tfrac{1}{2}\mu\rho^{k} < m; \] and this contradicts the hypothesis that $m$~is the lower bound of~$|P(z)|$. It follows that $m$~must be zero and that $P(z_{0}) = 0$. \Section{EXAMPLES ON APPENDIX I} \begin{Examples}{} \Item{1.} Show that the number of roots of $f(z) = 0$ which lie within a closed contour which does not pass through any root is equal to the increment of \[ \{\log f(z)\}/2\pi i \] when $z$~describes the contour. \Item{2.} Show that if $R$~is any number such that \[ \frac{|a_{1}|}{R} + \frac{|a_{2}|}{R^{2}} + \dots + \frac{|a_{n}|}{R^{n}} < 1, \] then all the roots of $z^{n} + a_{1}z^{n-1} + \dots + a_{n} = 0$ are in absolute value less than~$R$. In particular show that all the roots of $z^{5} - 13z - 7 = 0$ are in absolute value less than~$2\frac{1}{67}$. \Item{3.} Determine the numbers of the roots of the equation $z^{2p} + az + b = 0$ where $a$~and~$b$ are real and $p$~odd, which have their real parts positive and negative. Show that if $a > 0$, $b > 0$ then the numbers are $p - 1$ and $p + 1$; if $a < 0$, $b > 0$ they are $p + 1$ and $p - 1$; and if $b < 0$ they are $p$~and~$p$. Discuss the particular cases in which $a = 0$ or $b = 0$.
Verify the results when $p = 1$. [Trace the variation of $\am(z^{2p} + az + b)$ as $z$~describes the contour formed by a large semicircle whose centre is the origin and whose radius is~$R$, and the part of the imaginary axis intercepted by the semicircle.] \Item{4.} Consider similarly the equations \[ z^{4q} + az + b = 0,\quad z^{4q-1} + az + b = 0,\quad z^{4q+1} + az + b = 0. \] \PageSep{438} \Item{5.} Show that if $\alpha$~and~$\beta$ are real then the numbers of the roots of the equation $z^{2n} + \alpha^{2} z^{2n-1} + \beta^{2} = 0$ which have their real parts positive and negative are $n - 1$ and $n + 1$, or $n$~and~$n$, according as $n$~is odd or even. \MathTrip{1891.} \Item{6.} Show that when $z$~moves along the straight line joining the points $z = z_{1}$, $z = z_{2}$, from a point near~$z_{1}$ to a point near~$z_{2}$, the increment of \[ \am \left(\frac{1}{z - z_{1}} + \frac{1}{z - z_{2}}\right) \] is nearly equal to~$\pi$. \Item{7.} A contour enclosing the three points $z = z_{1}$, $z = z_{2}$, $z = z_{3}$ is defined by parts of the sides of the triangle formed by $z_{1}$,~$z_{2}$,~$z_{3}$, and the parts exterior to the triangle of three small circles with their centres at those points. Show that when $z$~describes the contour the increment of \[ \am \left(\frac{1}{z - z_{1}} + \frac{1}{z - z_{2}} + \frac{1}{z - z_{3}}\right) \] is equal to~$-2\pi$. \Item{8.} Prove that a closed oval path which surrounds all the roots of a cubic equation $f(z) = 0$ also surrounds those of the derived equation $f'(z) = 0$. [Use the equation \[ f'(z) = f(z) \left( \frac{1}{z - z_{1}} + \frac{1}{z - z_{2}} + \frac{1}{z - z_{3}} \right), \] where $z_{1}$,~$z_{2}$,~$z_{3}$ are the roots of $f(z) = 0$, and the result of Ex.~7.] \Item{9.} Show that the roots of $f'(z) = 0$ are the foci of the ellipse which touches the sides of the triangle $(z_{1}, z_{2}, z_{3})$ at their middle points. [For a proof see Cesàro's \textit{Elementares Lehrbuch der algebraischen Analysis}, p.~352.] 
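Returning for a moment to Ex.~2: its bound for $z^{5} - 13z - 7 = 0$ can be confirmed directly. The sketch below (our own construction, not the text's) checks that the hypothesis of Ex.~2 holds, just barely, at $R = 2\frac{1}{67}$, and then bisects for the real root near the bound.

```python
def f(x):
    return x**5 - 13*x - 7

# Ex. 2 with n = 5: the only nonzero coefficients give 13/R^4 + 7/R^5,
# and at R = 2 + 1/67 this falls just short of 1.
R = 2 + 1/67
margin = 1 - (13 / R**4 + 7 / R**5)   # small and positive

# Bisection for the real root between 2 and R (f(2) < 0 < f(R)),
# which comes close to attaining the bound.
lo, hi = 2.0, R
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
real_root = (lo + hi) / 2   # about 2.0147, just below 2 + 1/67
```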
\Item{10.} Extend the result of Ex.~8 to equations of any degree. \Item{11.} If $f(z)$ and~$\phi(z)$ are two polynomials in~$z$, and $\gamma$~is a contour which does not pass through any root of~$f(z)$, and $|\phi(z)| < |f(z)|$ at all points on~$\gamma$, then the numbers of the roots of the equations \[ f(z) = 0,\quad f(z) + \phi(z) = 0 \] which lie inside~$\gamma$ are the same. \Item{12.} Show that the equations \[ e^{z} = az,\quad e^{z} = az^{2},\quad e^{z} = az^{3}, \] where $a > e$, have respectively (i)~one positive root (ii)~one positive and one negative root and (iii)~one positive and two complex roots within the circle $|z| = 1$. \MathTrip{1910.} \end{Examples} \PageSep{439} \Appendix{II}{(To Chapters IX, X)}{A Note on Double Limit Problems} \First{In} the course of Chapters IX~and~X we came on several occasions into contact with problems of a kind which invariably puzzle beginners and are indeed, when treated in their most general forms, problems of great difficulty and of the utmost interest and importance in higher mathematics. Let us consider some special instances. In \SecNo[§]{213} we proved that \[ \log(1 + x) = x - \tfrac{1}{2}x^{2} + \tfrac{1}{3}x^{3} - \dots, \] where $-1 < x \leq 1$, by integrating the equation \[ 1/(1 + t) = 1 - t + t^{2} - \dots \] between the limits $0$ and~$x$. What we proved amounted to this, that \[ \int_{0}^{x} \frac{dt}{1 + t} = \int_{0}^{x} dt - \int_{0}^{x} t\, dt + \int_{0}^{x} t^{2}\, dt - \dots; \] {\Loosen or in other words that \emph{the integral of the sum of the infinite series $1 - t + t^{2} - \dots$, taken between the limits $0$ and~$x$, is equal to the sum of the integrals of its terms taken between the same limits}. 
Another way of expressing this fact is to say that the operations of summation from $0$ to~$\infty$, and of integration from $0$ to~$x$, are \emph{commutative} when applied to the function $(-1)^{n}t^{n}$, \ie\ that it does not matter in what order they are performed on the function.} Again, in \SecNo[§]{216}, we proved that the differential coefficient of the exponential function \[ \exp x = 1 + x + \frac{x^{2}}{2!} + \dots \] is itself equal to $\exp x$, or that \[ D_{x} \left(1 + x + \frac{x^{2}}{2!} + \dots\right) = D_{x}1 + D_{x}x + D_{x} \frac{x^{2}}{2!} + \dots; \] \PageSep{440} that is to say that \emph{the differential coefficient of the sum of the series is equal to the sum of the differential coefficients of its terms}, or that the operations of summation from $0$ to~$\infty$ and of differentiation with respect to~$x$ are commutative when applied to~$x^{n}/n!$. Finally we proved incidentally in the same section that the function $\exp x$ is a continuous function of~$x$, or in other words that \[ \lim_{x\to\xi} \left(1 + x + \frac{x^{2}}{2!} + \dots\right) = 1 + \xi + \frac{\xi^{2}}{2!} + \dots = \lim_{x\to\xi} 1 + \lim_{x\to\xi} x + \lim_{x\to\xi} \frac{x^{2}}{2!} + \dots; \] \ie\ that the limit of the sum of the series is equal to the sum of the limits of the terms, or that the sum of the series is continuous for $x = \xi$, or that the operations of summation from $0$ to~$\infty$ and of making $x$~tend to~$\xi$ are commutative when applied to~$x^{n}/n!$. In each of these cases we gave a special proof of the correctness of the result. We have not proved, and in this volume shall not prove, any general theorem from which the truth of any one of them could be inferred immediately. 
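Each of these facts lends itself to a direct numerical check. The sketch below (the value of $x$ and the numbers of terms are illustrative choices of ours) verifies the first two: the term-by-term integrals sum to $\log(1 + x)$, and the term-by-term derivatives of the exponential series sum back to $\exp x$.

```python
import math

def integrated_series(x, terms=60):
    """x - x^2/2 + x^3/3 - ...: the integrals of 1, -t, t^2, ... from 0 to x."""
    return sum((-1) ** (n - 1) * x**n / n for n in range(1, terms + 1))

def differentiated_exp_series(x, terms=40):
    """Term-by-term derivative of 1 + x + x^2/2! + ...; the constant term drops."""
    return sum(n * x ** (n - 1) / math.factorial(n) for n in range(1, terms + 1))

x = 0.5
int_check = integrated_series(x)            # close to log(1.5)
diff_check = differentiated_exp_series(x)   # close to exp(0.5)
```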
In \Ex{xxxvii}.~1 we saw that the sum of a finite number of continuous terms is itself continuous, and in \SecNo[§]{113} that the differential coefficient of the sum of a finite number of terms is equal to the sum of their differential coefficients; and in \SecNo[§]{160} we stated the corresponding theorem for integrals. Thus we have proved that in certain circumstances the operations symbolised by \[ \lim_{x\to\xi} \dots,\quad D_{x} \dots,\quad \int \dots\, dx \] are commutative with respect to the operation of summation of a \emph{finite} number of terms. And it is natural to suppose that, in certain circumstances which it should be possible to define precisely, they should be commutative also with respect to the operation of summation of an \emph{infinite} number. It is natural to suppose so: but that is all that we have a right to say at present. A few further instances of commutative and non-commutative operations may help to elucidate these points. \Item{(1)} Multiplication by~$2$ and multiplication by~$3$ are always commutative, for \[ 2 \times 3 \times x = 3 \times 2 \times x \] for all values of~$x$. \Item{(2)} The operation of taking the real part of~$z$ is never commutative with that of multiplication by~$i$, except when $z = 0$; for \[ i \times \Re(x + iy) = ix,\quad \Re\{i \times (x + iy)\} = -y. \] \Item{(3)} The operations of proceeding to the limit zero with each of two variables $x$~and~$y$ may or may not be commutative when applied to a function~$f(x, y)$. Thus \[ \lim_{x\to 0} \{\lim_{y\to 0} (x + y)\} = \lim_{x\to 0} x = 0,\quad \lim_{y\to 0} \{\lim_{x\to 0} (x + y)\} = \lim_{y\to 0} y = 0; \] \PageSep{441} but on the other hand \begin{alignat*}{2} \lim_{x\to 0} \left(\lim_{y\to 0} \frac{x - y}{x + y}\right) &= \lim_{x\to 0} \frac{x}{x} &&= \lim_{x\to 0} 1 = 1,\\ \lim_{y\to 0} \left(\lim_{x\to 0} \frac{x - y}{x + y}\right) &= \lim_{y\to 0}\frac{-y}{y} &&= \lim_{y\to 0} (-1) = -1.
\end{alignat*} \Item{(4)} The operations $\sum\limits_{1}^{\infty} \dots$, $\lim\limits_{x\to 1} \dots$ may or may not be commutative. Thus if $x \to 1$ through values less than~$1$ then \begin{alignat*}{2} \lim_{x\to 1} \left\{\sum_{1}^{\infty} \frac{(-1)^{n-1}}{n}x^{n}\right\} &= \lim_{x\to 1}\log(1 + x) &&= \log 2,\\ \sum_{1}^{\infty} \left\{\lim_{x\to 1} \frac{(-1)^{n-1}}{n}x^{n}\right\} &= \quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{n} &&= \log 2; \end{alignat*} but on the other hand \begin{align*} \lim_{x\to 1} \left\{\sum_{1}^{\infty} (x^{n} - x^{n+1})\right\} &= \lim_{x\to 1} \{(x - x^{2}) + (x^{2} - x^{3}) + \dots\} = \lim_{x\to 1} x = 1,\\ \sum_{1}^{\infty} \left\{\lim_{x\to 1} (x^{n} - x^{n+1})\right\} &= \sum_{1}^{\infty} (1 - 1) = 0 + 0 + 0 + \dots = 0. \end{align*} The preceding examples suggest that there are three possibilities with respect to the commutation of two given operations, viz.:\ (1)~the operations may \emph{always} be commutative; (2)~they may \emph{never} be commutative, \emph{except in very special circumstances}; (3)~they may be commutative \emph{in most of the ordinary cases which occur practically}. The really important case (as is suggested by the instances which we gave from \Ref{Ch.}{IX}) is that in which each operation is one which involves a passage to the limit, such as a differentiation or the summation of an infinite series: such operations are called \emph{limit operations}. The general question as to the circumstances in which two given limit operations are commutative is one of the most important in all mathematics. But to attempt to deal with questions of this character by means of general theorems would carry us far beyond the scope of this volume. We may however remark that the answer to the general question is on the lines suggested by the examples above. If $L$~and~$L'$ are two limit operations then the numbers $LL'z$ and~$L'Lz$ are not \emph{generally} equal, in the strict theoretical sense of the word `general'.
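The contrast in~(4) is easy to exhibit numerically, since the second series telescopes: for $0 < x < 1$ its partial sums are $x - x^{N+1}$, so its sum is $x$, which tends to~$1$ as $x \to 1$, while each separate term tends to~$0$. A small sketch (the sample points are our own choice):

```python
def telescoping_sum(x, terms=20000):
    """Partial sum of (x - x^2) + (x^2 - x^3) + ...; telescopes to x - x**(terms+1)."""
    return sum(x**n - x ** (n + 1) for n in range(1, terms + 1))

xs = (0.9, 0.99, 0.999)
sums = [telescoping_sum(x) for x in xs]   # approach 1 as x -> 1
terms_at_10 = [x**10 - x**11 for x in xs]  # a single term: approaches 0
```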
We can always, by the exercise of a little ingenuity, find~$z$ so that $LL'z$ and~$L'Lz$ shall differ from one another. But they \emph{are} equal generally, if we use the word in a more practical sense, viz.\ as meaning `in a great majority of such cases as are likely to occur naturally' or in \emph{ordinary} cases. \PageSep{442} Of course, in an exact science like pure mathematics, we cannot be satisfied with an answer of this kind; and in the higher branches of mathematics the detailed investigation of these questions is an absolute necessity. But for the present the reader may be content if he realises the point of the remarks which we have just made. In practice, a result obtained by assuming that two limit-operations are commutative is \emph{probably} true: it at any rate affords a valuable \emph{suggestion} as to the answer to the problem under consideration. But an answer thus obtained must, in default of a further study of the general question or a special investigation of the particular problem, such as we gave in the instances which occurred in \Ref{Ch.}{IX}, be regarded as suggested only and not proved. Detailed investigations of a large number of important double limit problems will be found in Bromwich's \textit{Infinite Series}. \PageSep{443} \Appendix{III}{(To \SecNo[§]{158} and \Ref{Chapter}{IX})}{The circular functions} \First{The} reader will find it an instructive exercise to work out the theory of the circular functions, starting from the definition \CenterDef[\footnotemark]{\Item{(1)}}{$y = y(x) = \arctan x = \ds\int_{0}^{x} \frac{dt}{1 + t^{2}}$.} \footnotetext{These letters at the end of a line indicate that the formulae which it contains are definitions.} The equation~\Eq{(1)} defines a unique value of~$y$ corresponding to every real value of~$x$. As $y$~is continuous and strictly increasing, there is an inverse function $x = x(y)$, also continuous and steadily increasing. 
We write \CenterDef{\Item{(2)}}{$x = x(y) = \tan y$.} If we define~$\pi$ by the equation \CenterDef{\Item{(3)}}{$\frac{1}{2}\pi = \ds\int_{0}^{\infty} \frac{dt}{1 + t^{2}}$,} then this function is defined for $-\frac{1}{2}\pi < y < \frac{1}{2}\pi$. We write further \CenterDef{\Item{(4)}}{$\cos y = \dfrac{1}{\sqrt{1 + x^{2}}},\quad \sin y = \dfrac{x}{\sqrt{1 + x^{2}}}$,} where the square root is positive; and we define $\cos y$ and~$\sin y$, when $y$~is $-\frac{1}{2}\pi$ or~$\frac{1}{2}\pi$, so that the functions shall remain continuous for those values of~$y$. Finally we define $\cos y$ and~$\sin y$, outside the interval $\DPmod{(-\frac{1}{2}\pi, \frac{1}{2}\pi)}{[-\frac{1}{2}\pi, \frac{1}{2}\pi]}$, by %[** TN: Set on one line in the original] \CenterDef{\Item{(5)}}{$\begin{alignedat}{2}\tan(y + \pi) &= &&\tan y,\\ \cos(y + \pi) &= -&&\cos y, \\ \sin(y + \pi) &= -&&\sin y.\end{alignedat}$} We have thus defined $\cos y$ and~$\sin y$ for all values of~$y$, and $\tan y$~for all values of~$y$ other than odd multiples of~$\frac{1}{2}\pi$. The cosine and sine are continuous for all values of~$y$, the tangent except at the points where its definition fails. The further development of the theory depends merely on the addition formulae. Write \[ x = \frac{x_{1} + x_{2}}{1 - x_{1}x_{2}}, \] and transform the equation~\Eq{(1)} by the substitution \[ t = \frac{x_{1} + u}{1 - x_{1}u},\quad u = \frac{t - x_{1}}{1 + x_{1}t}. \] We find \begin{align*} \arctan \frac{x_{1} + x_{2}}{1 - x_{1}x_{2}} &= \int_{-x_{1}}^{x_{2}} \frac{du}{1 + u^{2}} = \int_{0}^{x_{1}} \frac{du}{1 + u^{2}} + \int_{0}^{x_{2}} \frac{du}{1 + u^{2}} \\ &= \arctan x_{1} + \arctan x_{2}. 
\end{align*} \PageSep{444} From this we deduce \CenterLine{\Item{(6)}}{$\tan (y_{1} + y_{2}) = \dfrac{\tan y_{1} + \tan y_{2}}{1 - \tan y_{1}\tan y_{2}}$,} an equation proved in the first instance only when $y_{1}$,~$y_{2}$, and~$y_{1} + y_{2}$ lie in $\DPmod{(-\frac{1}{2}\pi, \frac{1}{2}\pi)}{[-\frac{1}{2}\pi, \frac{1}{2}\pi]}$, but immediately extensible to all values of $y_{1}$~and~$y_{2}$ by means of the equations~\Eq{(5)}. From~\Eq{(4)} and~\Eq{(6)} we deduce \[ \cos(y_{1} + y_{2}) = \pm(\cos y_{1}\cos y_{2} - \sin y_{1}\sin y_{2}). \] {\Loosen To determine the sign put $y_{2} = 0$. The equation reduces to $\cos y_{1} = \pm\cos y_{1}$, which shows that the positive sign must be chosen for at least one value of~$y_{2}$, viz.\ $y_{2} = 0$. It follows from considerations of continuity that the positive sign must be chosen in all cases. The corresponding formula for $\sin(y_{1} + y_{2})$ may be deduced in a similar manner.} The formulae for differentiation of the circular functions may now be deduced in the ordinary way, and the power series derived from Taylor's Theorem. An alternative theory of the circular functions is based on the theory of infinite series. An account of this theory, in which, for example, $\cos x$~is defined by the equation \[ \cos x = 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \dots \] will be found in Whittaker and Watson's \textit{Modern Analysis} (Appendix~A). \PageSep{445} \Appendix{IV}{}{The infinite in analysis and geometry} \First{Some}, though not all, systems of analytical geometry contain `infinite' elements, the line at infinity, the circular points at infinity, and so on. The object of this brief note is to point out that these concepts are in no way dependent upon the analytical doctrine of limits. In what may be called `common Cartesian geometry', a \emph{point} is \emph{a pair of real numbers $(x, y)$}. A \emph{line} is the class of points which satisfy a linear relation $ax + by + c=0$, in which $a$~and~$b$ are not both zero.
There are no infinite elements, and two lines may have no point in common. In a system of real homogeneous geometry a point is \emph{a class of triads of real numbers $(x, y, z)$}, not all zero, triads being classed together when their constituents are proportional. A line is a class of points which satisfy a linear relation $ax + by + cz = 0$, where $a$,~$b$,~$c$ are not all zero. In some systems one point or line is on exactly the same footing as another. In others certain `special' points and lines are regarded as peculiarly distinguished, and it is on the relations of other elements to these special elements that emphasis is laid. Thus, in what may be called `real homogeneous Cartesian geometry', those points are special for which $z = 0$, and there is one special line, viz.\ the line $z = 0$. This special line is called `the line at infinity'. This is not a treatise on geometry, and there is no occasion to develop the matter in detail. The point of importance is this. The infinite of analysis is a `limiting' and not an `actual' infinite. The symbol~`$\infty$' has, throughout this book, been regarded as an `incomplete symbol', a symbol to which no independent meaning has been attached, though one has been attached to certain phrases containing it. But \emph{the infinite of geometry is an actual and not a limiting infinite}. The `line at infinity' is a line in precisely the same sense in which other lines are lines. {\Loosen It is possible to set up a correlation between `homogeneous' and `common' Cartesian geometry in which all elements of the first system, \emph{the special elements excepted}, have correlates in the second. The line $ax + by + cz = 0$, for example, corresponds to the line $ax + by + c = 0$. Every point of the first line has a correlate on the second, except one, viz.\ the point for which $z = 0$. 
When $(x, y, z)$ varies on the first line, in such a manner as to tend in the limit to the special point for which $z = 0$, the corresponding point on the second line varies so that its distance from the origin tends to infinity. This correlation is historically important, for it is from it that the vocabulary of the subject has been derived, and it is often useful for purposes of illustration. It is however no more than an illustration, and no rational account of the geometrical infinite can be based upon it. The confusion about these matters so prevalent among students arises from the fact that, in the commonly used text books of analytical geometry, the illustration is taken for the reality.} \PageSep{446} \clearpage \thispagestyle{empty} \null\vfill \begin{center} \footnotesize CAMBRIDGE: PRINTED BY \\ J.\ B.\ PEACE, M.A., \\ AT THE UNIVERSITY PRESS \end{center} \vfill %%%%%%%%%%%%%%%%%%%%%%%%% GUTENBERG LICENSE %%%%%%%%%%%%%%%%%%%%%%%%%% \FlushRunningHeads \vfill \begin{center} \TranscribersNote[Modification Note]{% \ChangeNote\bigskip \ifthenelse{\boolean{Modernize}}{\ModernizationNote}{} } \end{center} \PGLicense \begin{PGtext} End of the Project Gutenberg EBook of A Course of Pure Mathematics, by G. H. (Godfrey Harold) Hardy *** END OF THIS PROJECT GUTENBERG EBOOK A COURSE OF PURE MATHEMATICS *** ***** This file should be named 38769-pdf.pdf or 38769-pdf.zip ***** This and all associated files of various formats will be found in: http://www.gutenberg.org/3/8/7/6/38769/ Produced by Andrew D. Hwang, Brenda Lewis, and the Online Distributed Proofreading Team at http://www.pgdp.net (This file was produced from images generously made available by The Internet Archive/American Libraries.) Updated editions will replace the previous one--the old editions will be renamed. Creating the works from public domain print editions means that no one owns a United States copyright in these works, so the Foundation (and you!) 
can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth in the General Terms of Use part of this license, apply to copying and distributing Project Gutenberg-tm electronic works to protect the PROJECT GUTENBERG-tm concept and trademark. Project Gutenberg is a registered trademark, and may not be used if you charge for the eBooks, unless you receive specific permission. If you do not charge anything for copies of this eBook, complying with the rules is very easy. You may use this eBook for nearly any purpose such as creation of derivative works, reports, performances and research. They may be modified and printed and given away--you may do practically ANYTHING with public domain eBooks. Redistribution is subject to the trademark license, especially commercial redistribution. *** START: FULL LICENSE *** THE FULL PROJECT GUTENBERG LICENSE PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK To protect the Project Gutenberg-tm mission of promoting the free distribution of electronic works, by using or distributing this work (or any other work associated in any way with the phrase "Project Gutenberg"), you agree to comply with all the terms of the Full Project Gutenberg-tm License (available with this file or online at http://gutenberg.net/license). Section 1. General Terms of Use and Redistributing Project Gutenberg-tm electronic works 1.A. By reading or using any part of this Project Gutenberg-tm electronic work, you indicate that you have read, understand, agree to and accept all the terms of this license and intellectual property (trademark/copyright) agreement. If you do not agree to abide by all the terms of this agreement, you must cease using and return or destroy all copies of Project Gutenberg-tm electronic works in your possession. 
If you paid a fee for obtaining a copy of or access to a Project Gutenberg-tm electronic work and you do not agree to be bound by the terms of this agreement, you may obtain a refund from the person or entity to whom you paid the fee as set forth in paragraph 1.E.8. 1.B. "Project Gutenberg" is a registered trademark. It may only be used on or associated in any way with an electronic work by people who agree to be bound by the terms of this agreement. There are a few things that you can do with most Project Gutenberg-tm electronic works even without complying with the full terms of this agreement. See paragraph 1.C below. There are a lot of things you can do with Project Gutenberg-tm electronic works if you follow the terms of this agreement and help preserve free future access to Project Gutenberg-tm electronic works. See paragraph 1.E below. 1.C. The Project Gutenberg Literary Archive Foundation ("the Foundation" or PGLAF), owns a compilation copyright in the collection of Project Gutenberg-tm electronic works. Nearly all the individual works in the collection are in the public domain in the United States. If an individual work is in the public domain in the United States and you are located in the United States, we do not claim a right to prevent you from copying, distributing, performing, displaying or creating derivative works based on the work as long as all references to Project Gutenberg are removed. Of course, we hope that you will support the Project Gutenberg-tm mission of promoting free access to electronic works by freely sharing Project Gutenberg-tm works in compliance with the terms of this agreement for keeping the Project Gutenberg-tm name associated with the work. You can easily comply with the terms of this agreement by keeping this work in the same format with its attached full Project Gutenberg-tm License when you share it without charge with others. 1.D. 
The copyright laws of the place where you are located also govern what you can do with this work. Copyright laws in most countries are in a constant state of change. If you are outside the United States, check the laws of your country in addition to the terms of this agreement before downloading, copying, displaying, performing, distributing or creating derivative works based on this work or any other Project Gutenberg-tm work. The Foundation makes no representations concerning the copyright status of any work in any country outside the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other immediate access to, the full Project Gutenberg-tm License must appear prominently whenever any copy of a Project Gutenberg-tm work (any work on which the phrase "Project Gutenberg" appears, or with which the phrase "Project Gutenberg" is associated) is accessed, displayed, performed, viewed, copied or distributed:

This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.net

1.E.2. If an individual Project Gutenberg-tm electronic work is derived from the public domain (does not contain a notice indicating that it is posted with permission of the copyright holder), the work can be copied and distributed to anyone in the United States without paying any fees or charges. If you are redistributing or providing access to a work with the phrase "Project Gutenberg" associated with or appearing on the work, you must comply either with the requirements of paragraphs 1.E.1 through 1.E.7 or obtain permission for the use of the work and the Project Gutenberg-tm trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3.
If an individual Project Gutenberg-tm electronic work is posted with the permission of the copyright holder, your use and distribution must comply with both paragraphs 1.E.1 through 1.E.7 and any additional terms imposed by the copyright holder. Additional terms will be linked to the Project Gutenberg-tm License for all works posted with the permission of the copyright holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm License terms from this work, or any files containing a part of this work or any other work associated with Project Gutenberg-tm.

1.E.5. Do not copy, display, perform, distribute or redistribute this electronic work, or any part of this electronic work, without prominently displaying the sentence set forth in paragraph 1.E.1 with active links or immediate access to the full terms of the Project Gutenberg-tm License.

1.E.6. You may convert to and distribute this work in any binary, compressed, marked up, nonproprietary or proprietary form, including any word processing or hypertext form. However, if you provide access to or distribute copies of a Project Gutenberg-tm work in a format other than "Plain Vanilla ASCII" or other format used in the official version posted on the official Project Gutenberg-tm web site (www.gutenberg.net), you must, at no additional cost, fee or expense to the user, provide a copy, a means of exporting a copy, or a means of obtaining a copy upon request, of the work in its original "Plain Vanilla ASCII" or other form. Any alternate format must include the full Project Gutenberg-tm License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying, performing, copying or distributing any Project Gutenberg-tm works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8.
You may charge a reasonable fee for copies of or providing access to or distributing Project Gutenberg-tm electronic works provided that

- You pay a royalty fee of 20% of the gross profits you derive from the use of Project Gutenberg-tm works calculated using the method you already use to calculate your applicable taxes. The fee is owed to the owner of the Project Gutenberg-tm trademark, but he has agreed to donate royalties under this paragraph to the Project Gutenberg Literary Archive Foundation. Royalty payments must be paid within 60 days following each date on which you prepare (or are legally required to prepare) your periodic tax returns. Royalty payments should be clearly marked as such and sent to the Project Gutenberg Literary Archive Foundation at the address specified in Section 4, "Information about donations to the Project Gutenberg Literary Archive Foundation."

- You provide a full refund of any money paid by a user who notifies you in writing (or by e-mail) within 30 days of receipt that s/he does not agree to the terms of the full Project Gutenberg-tm License. You must require such a user to return or destroy all copies of the works possessed in a physical medium and discontinue all use of and all access to other copies of Project Gutenberg-tm works.

- You provide, in accordance with paragraph 1.F.3, a full refund of any money paid for a work or a replacement copy, if a defect in the electronic work is discovered and reported to you within 90 days of receipt of the work.

- You comply with all other terms of this agreement for free distribution of Project Gutenberg-tm works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg-tm electronic work or group of works on different terms than are set forth in this agreement, you must obtain permission in writing from both the Project Gutenberg Literary Archive Foundation and Michael Hart, the owner of the Project Gutenberg-tm trademark.
Contact the Foundation as set forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend considerable effort to identify, do copyright research on, transcribe and proofread public domain works in creating the Project Gutenberg-tm collection. Despite these efforts, Project Gutenberg-tm electronic works, and the medium on which they may be stored, may contain "Defects," such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the "Right of Replacement or Refund" described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg-tm trademark, and any other party distributing a Project Gutenberg-tm electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund.
If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you 'AS-IS' WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTIBILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied warranties or the exclusion or limitation of certain types of damages. If any disclaimer or limitation set forth in this agreement violates the law of the state applicable to this agreement, the agreement shall be interpreted to make the maximum disclaimer or limitation permitted by the applicable state law. The invalidity or unenforceability of any provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the trademark owner, any agent or employee of the Foundation, anyone providing copies of Project Gutenberg-tm electronic works in accordance with this agreement, and any volunteers associated with the production, promotion and distribution of Project Gutenberg-tm electronic works, harmless from all liability, costs and expenses, including legal fees, that arise directly or indirectly from any of the following which you do or cause to occur: (a) distribution of this or any Project Gutenberg-tm work, (b) alteration, modification, or additions or deletions to any Project Gutenberg-tm work, and (c) any Defect you cause.

Section 2.
Information about the Mission of Project Gutenberg-tm

Project Gutenberg-tm is synonymous with the free distribution of electronic works in formats readable by the widest variety of computers including obsolete, old, middle-aged and new computers. It exists because of the efforts of hundreds of volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the assistance they need are critical to reaching Project Gutenberg-tm's goals and ensuring that the Project Gutenberg-tm collection will remain freely available for generations to come. In 2001, the Project Gutenberg Literary Archive Foundation was created to provide a secure and permanent future for Project Gutenberg-tm and future generations. To learn more about the Project Gutenberg Literary Archive Foundation and how your efforts and donations can help, see Sections 3 and 4 and the Foundation web page at http://www.pglaf.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non profit 501(c)(3) educational corporation organized under the laws of the state of Mississippi and granted tax exempt status by the Internal Revenue Service. The Foundation's EIN or federal tax identification number is 64-6221541. Its 501(c)(3) letter is posted at http://pglaf.org/fundraising. Contributions to the Project Gutenberg Literary Archive Foundation are tax deductible to the full extent permitted by U.S. federal laws and your state's laws.

The Foundation's principal office is located at 4557 Melan Dr. S. Fairbanks, AK, 99712., but its volunteers and employees are scattered throughout numerous locations. Its business office is located at 809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887, email business@pglaf.org.
Email contact links and up to date contact information can be found at the Foundation's web site and official page at http://pglaf.org

For additional contact information:
Dr. Gregory B. Newby
Chief Executive and Director
gbnewby@pglaf.org

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg-tm depends upon and cannot survive without wide spread public support and donations to carry out its mission of increasing the number of public domain and licensed works that can be freely distributed in machine readable form accessible by the widest array of equipment including outdated equipment. Many small donations ($1 to $5,000) are particularly important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws regulating charities and charitable donations in all 50 states of the United States. Compliance requirements are not uniform and it takes a considerable effort, much paperwork and many fees to meet and keep up with these requirements. We do not solicit donations in locations where we have not received written confirmation of compliance. To SEND DONATIONS or determine the status of compliance for any particular state visit http://pglaf.org

While we cannot and do not solicit contributions from states where we have not met the solicitation requirements, we know of no prohibition against accepting unsolicited donations from donors in such states who approach us with offers to donate. International donations are gratefully accepted, but we cannot make any statements concerning tax treatment of donations received from outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg Web pages for current donation methods and addresses. Donations are accepted in a number of other ways including checks, online payments and credit card donations. To donate, please visit: http://pglaf.org/donate

Section 5.
General Information About Project Gutenberg-tm electronic works.

Professor Michael S. Hart is the originator of the Project Gutenberg-tm concept of a library of electronic works that could be freely shared with anyone. For thirty years, he produced and distributed Project Gutenberg-tm eBooks with only a loose network of volunteer support.

Project Gutenberg-tm eBooks are often created from several printed editions, all of which are confirmed as Public Domain in the U.S. unless a copyright notice is included. Thus, we do not necessarily keep eBooks in compliance with any particular paper edition.

Most people start at our Web site which has the main PG search facility:

http://www.gutenberg.net

This Web site includes information about Project Gutenberg-tm, including how to make donations to the Project Gutenberg Literary Archive Foundation, how to help produce our new eBooks, and how to subscribe to our email newsletter to hear about new eBooks.
\end{PGtext}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% End of the Project Gutenberg EBook of A Course of Pure Mathematics, by
% G. H. (Godfrey Harold) Hardy
%
% *** END OF THIS PROJECT GUTENBERG EBOOK A COURSE OF PURE MATHEMATICS ***
%
% ***** This file should be named 38769-t.tex or 38769-t.zip *****
%
% This and all associated files of various formats will be found in:
% http://www.gutenberg.org/3/8/7/6/38769/
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
\end{document}
###
@ControlwordReplace = (
    ['\\Contents', 'CONTENTS'], ['\\continued', 'continued'],
    ['\\ia', 'a'], ['\\ib', 'b'], ['\\ic', 'c'], ['\\id', 'd'],
    ['\\ie', 'i.e.'], ['\\Ie', 'I.e.'], ['\\eg', 'e.g.'],
    ['\\\(', '('], ['\\\)', ')'],
    ['\\begin{ToCPar}', ''], ['\\end{ToCPar}', ''],
    ['\\end{Theorem}', ''], ['\\end{Corollary}', ''], ['\\end{Cor}', ''],
    ['\\end{Definition}', ''], ['\\end{Definitions}', ''],
    ['\\end{Construction}', ''], ['\\end{Defn}', ''], ['\\end{Result}', ''],
    ['\\end{ParTheorem}', ''],
    ['\\begin{Remark}', ''], ['\\end{Remark}', ''], ['\\end{Examples}', '']
);
@ControlwordArguments = (
    ['\\ToCChap', 1, 1, '', ' ', 1, 1, '', ' '],
    ['\\ToCSect', 1, 1, '', ' ', 1, 1, '', '', 1, 1, ' ... ', '', 1, 0, '', ''],
    ['\\ToCApp', 1, 1, 'Appendix ', ' ', 1, 1, '', '', 1, 1, ' ... ', ''],
    ['\\PgNo', 0, 0, '', '', 1, 1, '', ''],
    ['\\Preface', 1, 1, '', ''],
    ['\\Chapter', 0, 0, '', '', 1, 1, 'Chapter ', '. ', 1, 1, '', ''],
    ['\\Section', 1, 1, '', ''],
    ['\\Paragraph', 0, 1, '', '', 1, 1, '', ''],
    ['\\Par', 1, 1, '', ''],
    ['\\Appendix', 1, 1, 'Appendix ', '. ', 1, 1, '', ' ', 1, 1, '', ''],
    ['\\Item', 1, 1, '', ''],
    ['\\SubItem', 1, 1, '', ''],
    ['\\begin{Theorem}', 0, 1, 'Theorem ', ''],
    ['\\begin{Corollary}', 0, 1, 'Corollary ', ''],
    ['\\begin{Cor}', 0, 1, 'Cor. ', ''],
    ['\\begin{Definition}', 0, 1, 'Definition ', ''],
    ['\\begin{Definitions}', 0, 1, 'Definitions ', ''],
    ['\\begin{Construction}', 1, 1, '', ''],
    ['\\begin{Defn}', 1, 1, '', ''],
    ['\\begin{Result}', 1, 1, '', ''],
    ['\\begin{ParTheorem}', 1, 1, '', ''],
    ['\\begin{Examples}', 1, 1, 'Examples ', ''],
    ['\\TranscribersNote', 0, 0, '', '', 1, 0, '', ''],
    ['\\Signature', 1, 1, '', ' ', 1, 1, '', ''],
    ['\\CenterLine', 0, 0, '', '', 1, 1, '', ' ', 1, 0, '', ''],
    ['\\CenterDef', 0, 0, '', '', 1, 1, '', ' ', 1, 0, '', ''],
    ['\\SetLine', 1, 1, '', ''],
    ['\\MathTrip', 1, 1, '', ''],
    ['\\Graphic', 0, 0, '', '', 1, 0, '', '', 1, 0, '', ''],
    ['\\Figure', 0, 0, '', '', 1, 1, 'Fig. ', '', 1, 0, '', ''],
    ['\\Figures', 1, 0, '', '', 1, 1, 'Fig. ', ' ', 1, 0, '', '', 1, 0, '', '', 1, 1, 'Fig. ', '', 1, 0, '', ''],
    ['\\First', 1, 1, '', ''],
    ['\\Emph', 1, 1, '', ''],
    ['\\Topic', 1, 1, '', ''],
    ['\\Eq', 1, 1, '', ''],
    ['\\PageLabel', 0, 0, '', '', 1, 0, '', ''],
    ['\\PageRef', 1, 1, '', ' ', 1, 1, '', ''],
    ['\\Fig', 1, 1, 'Fig. ', ''],
    ['\\Ref', 1, 1, '', ' ', 1, 1, '', ''],
    ['\\SecNo', 0, 1, '', '', 1, 1, '', ''],
    ['\\Ex', 1, 1, 'Ex. ', ''],
    ['\\Exs', 1, 1, 'Exs. ', ''],
    ['\\MiscEx', 1, 0, 'Misc. Ex. ', ''],
    ['\\MiscExs', 1, 0, 'Misc. Exs. ', ''],
    ['\\Inum', 1, 1, '', ''],
    ['\\DPchg', 1, 0, '', '', 1, 1, '', ''],
    ['\\DPtypo', 1, 0, '', '', 1, 1, '', ''],
    ['\\DPnote', 1, 0, '', ''],
    ['\\Add', 1, 1, '', ''],
    ['\\Hang', 0, 0, '', ''],
    ['\\First', 1, 1, '', '']
);
$PageSeparator = qr/^\\PageSep/;
$CustomClean = 'print "\\nCustom cleaning in progress...";
my $cline = 0;
while ($cline <= $#file) {
    $file[$cline] =~ s/--------[^\n]*\n//; # strip page separators
    $cline++
}
print "done\\n";';
###
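The `$CustomClean` hook above is a short Perl pass that deletes Project Gutenberg page-separator lines (which begin with a run of eight hyphens) from the processed text. As an illustration only, not part of the posted file, the same cleaning step can be sketched in Python; the function name is ours, and the regex mirrors the Perl substitution `s/--------[^\n]*\n//`:

```python
import re

def strip_page_separators(lines):
    """Return `lines` with PGDP page-separator markers removed.

    Mirrors the Perl pass: any line containing `--------...` up to the
    newline is deleted; all other lines pass through unchanged.
    """
    cleaned = []
    for line in lines:
        # same (unanchored) pattern as the Perl s/--------[^\n]*\n//
        line = re.sub(r"--------[^\n]*\n", "", line)
        if line:
            cleaned.append(line)
    return cleaned
```

For example, `strip_page_separators(["text\n", "--------File: p001.png\n"])` keeps only `"text\n"`, which is the behaviour the post-processor relies on when joining page images back into continuous text.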
**38769-t.tex (./38769-t.tex LaTeX2e <2009/09/24> Babel and hyphenation patterns for english, usenglishmax, dumylang, noh yphenation, farsi, arabic, croatian, bulgarian, ukrainian, russian, czech, slov ak, danish, dutch, finnish, french, basque, ngerman, german, german-x-2009-06-1 9, ngerman-x-2009-06-19, ibycus, monogreek, greek, ancientgreek, hungarian, san skrit, italian, latin, latvian, lithuanian, mongolian2a, mongolian, bokmal, nyn orsk, romanian, irish, coptic, serbian, turkish, welsh, esperanto, uppersorbian , estonian, indonesian, interlingua, icelandic, kurmanji, slovenian, polish, po rtuguese, spanish, galician, catalan, swedish, ukenglish, pinyin, loaded. (/usr/share/texmf-texlive/tex/latex/base/book.cls Document Class: book 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texmf-texlive/tex/latex/base/bk12.clo File: bk12.clo 2007/10/19 v1.4h Standard LaTeX file (size option) ) \c@part=\count79 \c@chapter=\count80 \c@section=\count81 \c@subsection=\count82 \c@subsubsection=\count83 \c@paragraph=\count84 \c@subparagraph=\count85 \c@figure=\count86 \c@table=\count87 \abovecaptionskip=\skip41 \belowcaptionskip=\skip42 \bibindent=\dimen102 ) (/usr/share/texmf-texlive/tex/latex/base/inputenc.sty Package: inputenc 2008/03/30 v1.1d Input encoding file \inpenc@prehook=\toks14 \inpenc@posthook=\toks15 (/usr/share/texmf-texlive/tex/latex/base/latin1.def File: latin1.def 2008/03/30 v1.1d Input encoding file )) (/usr/share/texmf-texlive/tex/latex/base/ifthen.sty Package: ifthen 2001/05/26 v1.1c Standard LaTeX ifthen package (DPC) ) (/usr/share/texmf-texlive/tex/latex/amsmath/amsmath.sty Package: amsmath 2000/07/18 v2.13 AMS math features \@mathmargin=\skip43 For additional information on amsmath, use the `?' option. 
(/usr/share/texmf-texlive/tex/latex/amsmath/amstext.sty Package: amstext 2000/06/29 v2.01 (/usr/share/texmf-texlive/tex/latex/amsmath/amsgen.sty File: amsgen.sty 1999/11/30 v2.0 \@emptytoks=\toks16 \ex@=\dimen103 )) (/usr/share/texmf-texlive/tex/latex/amsmath/amsbsy.sty Package: amsbsy 1999/11/29 v1.2d \pmbraise@=\dimen104 ) (/usr/share/texmf-texlive/tex/latex/amsmath/amsopn.sty Package: amsopn 1999/12/14 v2.01 operator names ) \inf@bad=\count88 LaTeX Info: Redefining \frac on input line 211. \uproot@=\count89 \leftroot@=\count90 LaTeX Info: Redefining \overline on input line 307. \classnum@=\count91 \DOTSCASE@=\count92 LaTeX Info: Redefining \ldots on input line 379. LaTeX Info: Redefining \dots on input line 382. LaTeX Info: Redefining \cdots on input line 467. \Mathstrutbox@=\box26 \strutbox@=\box27 \big@size=\dimen105 LaTeX Font Info: Redeclaring font encoding OML on input line 567. LaTeX Font Info: Redeclaring font encoding OMS on input line 568. \macc@depth=\count93 \c@MaxMatrixCols=\count94 \dotsspace@=\muskip10 \c@parentequation=\count95 \dspbrk@lvl=\count96 \tag@help=\toks17 \row@=\count97 \column@=\count98 \maxfields@=\count99 \andhelp@=\toks18 \eqnshift@=\dimen106 \alignsep@=\dimen107 \tagshift@=\dimen108 \tagwidth@=\dimen109 \totwidth@=\dimen110 \lineht@=\dimen111 \@envbody=\toks19 \multlinegap=\skip44 \multlinetaggap=\skip45 \mathdisplay@stack=\toks20 LaTeX Info: Redefining \[ on input line 2666. LaTeX Info: Redefining \] on input line 2667. ) (/usr/share/texmf-texlive/tex/latex/amsfonts/amssymb.sty Package: amssymb 2009/06/22 v3.00 (/usr/share/texmf-texlive/tex/latex/amsfonts/amsfonts.sty Package: amsfonts 2009/06/22 v3.00 Basic AMSFonts support \symAMSa=\mathgroup4 \symAMSb=\mathgroup5 LaTeX Font Info: Overwriting math alphabet `\mathfrak' in version `bold' (Font) U/euf/m/n --> U/euf/b/n on input line 96. 
)) (/usr/share/texmf-texlive/tex/latex/base/alltt.sty Package: alltt 1997/06/16 v2.0g defines alltt environment ) (/usr/share/texmf-texlive/tex/latex/footmisc/footmisc.sty Package: footmisc 2009/09/15 v5.5a a miscellany of footnote facilities \FN@temptoken=\toks21 \footnotemargin=\dimen112 \c@pp@next@reset=\count100 \c@@fnserial=\count101 Package footmisc Info: Declaring symbol style bringhurst on input line 855. Package footmisc Info: Declaring symbol style chicago on input line 863. Package footmisc Info: Declaring symbol style wiley on input line 872. Package footmisc Info: Declaring symbol style lamport-robust on input line 883. Package footmisc Info: Declaring symbol style lamport* on input line 903. Package footmisc Info: Declaring symbol style lamport*-robust on input line 924 . ) (/usr/share/texmf-texlive/tex/latex/tools/indentfirst.sty Package: indentfirst 1995/11/23 v1.03 Indent first paragraph (DPC) ) (/usr/share/texmf-texlive/tex/latex/was/icomma.sty Package: icomma 2002/03/10 v2.0 (WaS) ) (/usr/share/texmf-texlive/tex/latex/tools/calc.sty Package: calc 2007/08/22 v4.3 Infix arithmetic (KKT,FJ) \calc@Acount=\count102 \calc@Bcount=\count103 \calc@Adimen=\dimen113 \calc@Bdimen=\dimen114 \calc@Askip=\skip46 \calc@Bskip=\skip47 LaTeX Info: Redefining \setlength on input line 76. LaTeX Info: Redefining \addtolength on input line 77. 
\calc@Ccount=\count104 \calc@Cskip=\skip48 ) (/usr/share/texmf-texlive/tex/latex/fancyhdr/fancyhdr.sty \fancy@headwidth=\skip49 \f@ncyO@elh=\skip50 \f@ncyO@erh=\skip51 \f@ncyO@olh=\skip52 \f@ncyO@orh=\skip53 \f@ncyO@elf=\skip54 \f@ncyO@erf=\skip55 \f@ncyO@olf=\skip56 \f@ncyO@orf=\skip57 ) (/usr/share/texmf-texlive/tex/latex/graphics/graphicx.sty Package: graphicx 1999/02/16 v1.0f Enhanced LaTeX Graphics (DPC,SPQR) (/usr/share/texmf-texlive/tex/latex/graphics/keyval.sty Package: keyval 1999/03/16 v1.13 key=value parser (DPC) \KV@toks@=\toks22 ) (/usr/share/texmf-texlive/tex/latex/graphics/graphics.sty Package: graphics 2009/02/05 v1.0o Standard LaTeX Graphics (DPC,SPQR) (/usr/share/texmf-texlive/tex/latex/graphics/trig.sty Package: trig 1999/03/16 v1.09 sin cos tan (DPC) ) (/etc/texmf/tex/latex/config/graphics.cfg File: graphics.cfg 2009/08/28 v1.8 graphics configuration of TeX Live ) Package graphics Info: Driver file: pdftex.def on input line 91. (/usr/share/texmf-texlive/tex/latex/pdftex-def/pdftex.def File: pdftex.def 2009/08/25 v0.04m Graphics/color for pdfTeX \Gread@gobject=\count105 )) \Gin@req@height=\dimen115 \Gin@req@width=\dimen116 ) (/usr/share/texmf-texlive/tex/latex/caption/caption.sty Package: caption 2009/10/09 v3.1k Customizing captions (AR) (/usr/share/texmf-texlive/tex/latex/caption/caption3.sty Package: caption3 2009/10/09 v3.1k caption3 kernel (AR) \captionmargin=\dimen117 \captionmargin@=\dimen118 \captionwidth=\dimen119 \caption@indent=\dimen120 \caption@parindent=\dimen121 \caption@hangindent=\dimen122 ) \c@ContinuedFloat=\count106 ) (/usr/share/texmf-texlive/tex/latex/geometry/geometry.sty Package: geometry 2008/12/21 v4.2 Page Geometry (/usr/share/texmf-texlive/tex/generic/oberdiek/ifpdf.sty Package: ifpdf 2009/04/10 v2.0 Provides the ifpdf switch (HO) Package ifpdf Info: pdfTeX in pdf mode detected. 
) (/usr/share/texmf-texlive/tex/generic/oberdiek/ifvtex.sty Package: ifvtex 2008/11/04 v1.4 Switches for detecting VTeX and its modes (HO) Package ifvtex Info: VTeX not detected. ) \Gm@cnth=\count107 \Gm@cntv=\count108 \c@Gm@tempcnt=\count109 \Gm@bindingoffset=\dimen123 \Gm@wd@mp=\dimen124 \Gm@odd@mp=\dimen125 \Gm@even@mp=\dimen126 \Gm@dimlist=\toks23 (/usr/share/texmf-texlive/tex/xelatex/xetexconfig/geometry.cfg)) (/usr/share/te xmf-texlive/tex/latex/hyperref/hyperref.sty Package: hyperref 2009/10/09 v6.79a Hypertext links for LaTeX (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty Package: ifxetex 2009/01/23 v0.5 Provides ifxetex conditional ) (/usr/share/texmf-texlive/tex/latex/oberdiek/hycolor.sty Package: hycolor 2009/10/02 v1.5 Code for color options of hyperref/bookmark (H O) (/usr/share/texmf-texlive/tex/latex/oberdiek/xcolor-patch.sty Package: xcolor-patch 2009/10/02 xcolor patch )) \@linkdim=\dimen127 \Hy@linkcounter=\count110 \Hy@pagecounter=\count111 (/usr/share/texmf-texlive/tex/latex/hyperref/pd1enc.def File: pd1enc.def 2009/10/09 v6.79a Hyperref: PDFDocEncoding definition (HO) ) (/usr/share/texmf-texlive/tex/generic/oberdiek/etexcmds.sty Package: etexcmds 2007/12/12 v1.2 Prefix for e-TeX command names (HO) (/usr/share/texmf-texlive/tex/generic/oberdiek/infwarerr.sty Package: infwarerr 2007/09/09 v1.2 Providing info/warning/message (HO) ) Package etexcmds Info: Could not find \expanded. (etexcmds) That can mean that you are not using pdfTeX 1.50 or (etexcmds) that some package has redefined \expanded. (etexcmds) In the latter case, load this package earlier. 
) (/etc/texmf/tex/latex/config/hyperref.cfg File: hyperref.cfg 2002/06/06 v1.2 hyperref configuration of TeXLive ) (/usr/share/texmf-texlive/tex/latex/oberdiek/kvoptions.sty Package: kvoptions 2009/08/13 v3.4 Keyval support for LaTeX options (HO) (/usr/share/texmf-texlive/tex/generic/oberdiek/kvsetkeys.sty Package: kvsetkeys 2009/07/30 v1.5 Key value parser with default handler suppor t (HO) )) Package hyperref Info: Option `hyperfootnotes' set `false' on input line 2864. Package hyperref Info: Option `bookmarks' set `true' on input line 2864. Package hyperref Info: Option `linktocpage' set `false' on input line 2864. Package hyperref Info: Option `pdfdisplaydoctitle' set `true' on input line 286 4. Package hyperref Info: Option `pdfpagelabels' set `true' on input line 2864. Package hyperref Info: Option `bookmarksopen' set `true' on input line 2864. Package hyperref Info: Option `colorlinks' set `true' on input line 2864. Package hyperref Info: Hyper figures OFF on input line 2975. Package hyperref Info: Link nesting OFF on input line 2980. Package hyperref Info: Hyper index ON on input line 2983. Package hyperref Info: Plain pages OFF on input line 2990. Package hyperref Info: Backreferencing OFF on input line 2995. Implicit mode ON; LaTeX internals redefined Package hyperref Info: Bookmarks ON on input line 3191. (/usr/share/texmf-texlive/tex/latex/ltxmisc/url.sty \Urlmuskip=\muskip11 Package: url 2006/04/12 ver 3.3 Verb mode for urls, etc. ) LaTeX Info: Redefining \url on input line 3428. 
(/usr/share/texmf-texlive/tex/generic/oberdiek/bitset.sty Package: bitset 2007/09/28 v1.0 Data type bit set (HO) (/usr/share/texmf-texlive/tex/generic/oberdiek/intcalc.sty Package: intcalc 2007/09/27 v1.1 Expandable integer calculations (HO) ) (/usr/share/texmf-texlive/tex/generic/oberdiek/bigintcalc.sty Package: bigintcalc 2007/11/11 v1.1 Expandable big integer calculations (HO) (/usr/share/texmf-texlive/tex/generic/oberdiek/pdftexcmds.sty Package: pdftexcmds 2009/09/23 v0.6 LuaTeX support for pdfTeX utility functions (HO) (/usr/share/texmf-texlive/tex/generic/oberdiek/ifluatex.sty Package: ifluatex 2009/04/17 v1.2 Provides the ifluatex switch (HO) Package ifluatex Info: LuaTeX not detected. ) (/usr/share/texmf-texlive/tex/generic/oberdiek/ltxcmds.sty Package: ltxcmds 2009/08/05 v1.0 Some LaTeX kernel commands for general use (HO ) ) Package pdftexcmds Info: LuaTeX not detected. Package pdftexcmds Info: \pdf@primitive is available. Package pdftexcmds Info: \pdf@ifprimitive is available. ))) \Fld@menulength=\count112 \Field@Width=\dimen128 \Fld@charsize=\dimen129 \Field@toks=\toks24 Package hyperref Info: Hyper figures OFF on input line 4377. Package hyperref Info: Link nesting OFF on input line 4382. Package hyperref Info: Hyper index ON on input line 4385. Package hyperref Info: backreferencing OFF on input line 4392. Package hyperref Info: Link coloring ON on input line 4395. Package hyperref Info: Link coloring with OCG OFF on input line 4402. Package hyperref Info: PDF/A mode OFF on input line 4407. (/usr/share/texmf-texlive/tex/generic/oberdiek/atbegshi.sty Package: atbegshi 2008/07/31 v1.9 At begin shipout hook (HO) ) \Hy@abspage=\count113 \c@Item=\count114 ) *hyperref using driver hpdftex* (/usr/share/texmf-texlive/tex/latex/hyperref/hpdftex.def File: hpdftex.def 2009/10/09 v6.79a Hyperref driver for pdfTeX \Fld@listcount=\count115 ) \TmpLen=\skip58 \c@tocentry=\count116 \c@ParNo=\count117 \c@ExNo=\count118 (./38769-t.aux) \openout1 = `38769-t.aux'. 
LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 821. LaTeX Font Info: ... okay on input line 821. (/usr/share/texmf/tex/context/base/supp-pdf.mkii [Loading MPS to PDF converter (version 2006.09.02).] \scratchcounter=\count119 \scratchdimen=\dimen130 \scratchbox=\box28 \nofMPsegments=\count120 \nofMParguments=\count121 \everyMPshowfont=\toks25 \MPscratchCnt=\count122 \MPscratchDim=\dimen131 \MPnumerator=\count123 \everyMPtoPDFconversion=\toks26 ) Package caption Info: Begin \AtBeginDocument code. Package caption Info: hyperref package is loaded. Package caption Info: End \AtBeginDocument code. 
*geometry auto-detecting driver* *geometry detected driver: pdftex* -------------------- Geometry parameters paper: class default landscape: -- twocolumn: -- twoside: true asymmetric: -- h-parts: 9.03374pt, 379.4175pt, 9.03375pt v-parts: 7.04944pt, 560.53635pt, 10.5742pt hmarginratio: 1:1 vmarginratio: 2:3 lines: -- heightrounded: -- bindingoffset: 0.0pt truedimen: -- includehead: true includefoot: true includemp: -- driver: pdftex -------------------- Page layout dimensions and switches \paperwidth 397.48499pt \paperheight 578.15999pt \textwidth 379.4175pt \textheight 498.66255pt \oddsidemargin -63.23625pt \evensidemargin -63.23624pt \topmargin -65.22055pt \headheight 15.0pt \headsep 19.8738pt \footskip 30.0pt \marginparwidth 98.0pt \marginparsep 7.0pt \columnsep 10.0pt \skip\footins 10.8pt plus 4.0pt minus 2.0pt \hoffset 0.0pt \voffset 0.0pt \mag 1000 \@twosidetrue \@mparswitchtrue (1in=72.27pt, 1cm=28.45pt) ----------------------- (/usr/share/texmf-texlive/tex/latex/graphics/color.sty Package: color 2005/11/14 v1.0j Standard LaTeX Color (DPC) (/etc/texmf/tex/latex/config/color.cfg File: color.cfg 2007/01/18 v1.5 color configuration of teTeX/TeXLive ) Package color Info: Driver file: pdftex.def on input line 130. ) Package hyperref Info: Link coloring ON on input line 821. (/usr/share/texmf-texlive/tex/latex/hyperref/nameref.sty Package: nameref 2007/05/29 v2.31 Cross-referencing by name of section (/usr/share/texmf-texlive/tex/latex/oberdiek/refcount.sty Package: refcount 2008/08/11 v3.1 Data extraction from references (HO) ) \c@section@level=\count124 ) LaTeX Info: Redefining \ref on input line 821. LaTeX Info: Redefining \pageref on input line 821. (./38769-t.out) (./38769-t.out) \@outlinefile=\write3 \openout3 = `38769-t.out'. \AtBeginShipoutBox=\box29 LaTeX Font Info: Try loading font information for U+msa on input line 851. 
(/usr/share/texmf-texlive/tex/latex/amsfonts/umsa.fd
File: umsa.fd 2009/06/22 v3.00 AMS symbols A
)
LaTeX Font Info:    Try loading font information for U+msb on input line 851.
(/usr/share/texmf-texlive/tex/latex/amsfonts/umsb.fd
File: umsb.fd 2009/06/22 v3.00 AMS symbols B
)
[** TN: Page-shipout and graphics-inclusion messages for the front matter and text pages 1-556 omitted here. They record the inclusion of ./images/device.png (the CUP device) and the pdf diagrams ./images/p002.pdf through ./images/p434b.pdf, together with routine OMS/cmr -> OMS/cmsy font-shape substitutions, and the following warnings:]
Underfull \hbox (badness 5787) detected at line 892
 \OT1/cmr/m/n/10.95 NEW YORK : THE MACMILLAN CO.
Underfull \hbox (badness 1502) detected at line 901
 \OT1/cmr/m/n/10.95 TORONTO : THE MACMILLAN CO. OF
Underfull \hbox (badness 1112) in paragraph at lines 1117--1122
 \OT1/cmr/m/n/10 cients, 107[]. Coaxal cir-cles, 110[]. Bi-lin-ear and other trans-forma-
Overfull \hbox (7.30269pt too wide) in paragraph at lines 16172--16172
Underfull \vbox (badness 3260) has occurred while \output is active
Underfull \vbox (badness 6063) has occurred while \output is active
Underfull \vbox (badness 7133) has occurred while \output is active
[557] [558] [559] [560] [561] [562] [563]
[** TN: An Underfull \vbox (badness 10000) occurred while \output was active on each of the index pages [1]-[7].]
[1] [2] [3] [4] [5] [6] [7] [8]
(./38769-t.aux)

 *File List*
 book.cls    2007/10/19 v1.4h Standard LaTeX document class
 bk12.clo    2007/10/19 v1.4h Standard LaTeX file (size option)
 inputenc.sty    2008/03/30 v1.1d Input encoding file
 latin1.def    2008/03/30 v1.1d Input encoding file
 ifthen.sty    2001/05/26 v1.1c Standard LaTeX ifthen package (DPC)
 amsmath.sty    2000/07/18 v2.13 AMS math features
 amstext.sty    2000/06/29 v2.01
 amsgen.sty    1999/11/30 v2.0
 amsbsy.sty    1999/11/29 v1.2d
 amsopn.sty    1999/12/14 v2.01 operator names
 amssymb.sty    2009/06/22 v3.00
 amsfonts.sty    2009/06/22 v3.00 Basic AMSFonts support
 alltt.sty    1997/06/16 v2.0g defines alltt environment
 footmisc.sty    2009/09/15 v5.5a a miscellany of footnote facilities
 indentfirst.sty    1995/11/23 v1.03 Indent first paragraph (DPC)
 icomma.sty    2002/03/10 v2.0 (WaS)
 calc.sty    2007/08/22 v4.3 Infix arithmetic (KKT,FJ)
 fancyhdr.sty
 graphicx.sty    1999/02/16 v1.0f Enhanced LaTeX Graphics (DPC,SPQR)
 keyval.sty    1999/03/16 v1.13 key=value parser (DPC)
 graphics.sty    2009/02/05 v1.0o Standard LaTeX Graphics (DPC,SPQR)
 trig.sty    1999/03/16 v1.09 sin cos tan (DPC)
 graphics.cfg    2009/08/28 v1.8 graphics configuration of TeX Live
 pdftex.def    2009/08/25 v0.04m Graphics/color for pdfTeX
 caption.sty    2009/10/09 v3.1k Customizing captions (AR)
 caption3.sty    2009/10/09 v3.1k caption3 kernel (AR)
 geometry.sty    2008/12/21 v4.2 Page Geometry
 ifpdf.sty    2009/04/10 v2.0 Provides the ifpdf switch (HO)
 ifvtex.sty    2008/11/04 v1.4 Switches for detecting VTeX and its modes (HO)
 geometry.cfg
 hyperref.sty    2009/10/09 v6.79a Hypertext links for LaTeX
 ifxetex.sty    2009/01/23 v0.5 Provides ifxetex conditional
 hycolor.sty    2009/10/02 v1.5 Code for color options of hyperref/bookmark (HO)
 xcolor-patch.sty    2009/10/02 xcolor patch
 pd1enc.def    2009/10/09 v6.79a Hyperref: PDFDocEncoding definition (HO)
 etexcmds.sty    2007/12/12 v1.2 Prefix for e-TeX command names (HO)
 infwarerr.sty    2007/09/09 v1.2 Providing info/warning/message (HO)
 hyperref.cfg    2002/06/06 v1.2 hyperref configuration of TeXLive
 kvoptions.sty    2009/08/13 v3.4 Keyval support for LaTeX options (HO)
 kvsetkeys.sty    2009/07/30 v1.5 Key value parser with default handler support (HO)
 url.sty    2006/04/12 ver 3.3 Verb mode for urls, etc.
 bitset.sty    2007/09/28 v1.0 Data type bit set (HO)
 intcalc.sty    2007/09/27 v1.1 Expandable integer calculations (HO)
 bigintcalc.sty    2007/11/11 v1.1 Expandable big integer calculations (HO)
 pdftexcmds.sty    2009/09/23 v0.6 LuaTeX support for pdfTeX utility functions (HO)
 ifluatex.sty    2009/04/17 v1.2 Provides the ifluatex switch (HO)
 ltxcmds.sty    2009/08/05 v1.0 Some LaTeX kernel commands for general use (HO)
 atbegshi.sty    2008/07/31 v1.9 At begin shipout hook (HO)
 hpdftex.def    2009/10/09 v6.79a Hyperref driver for pdfTeX
 supp-pdf.mkii
 color.sty    2005/11/14 v1.0j Standard LaTeX Color (DPC)
 color.cfg    2007/01/18 v1.5 color configuration of teTeX/TeXLive
 nameref.sty    2007/05/29 v2.31 Cross-referencing by name of section
 refcount.sty    2008/08/11 v3.1 Data extraction from references (HO)
 38769-t.out
 38769-t.out
 umsa.fd    2009/06/22 v3.00 AMS symbols A
 umsb.fd    2009/06/22 v3.00 AMS symbols B
 ./images/device.png
 omscmr.fd    1999/05/25 v2.5h Standard LaTeX font definitions
 ./images/p002.pdf
 ./images/p005.pdf
 ./images/p009.pdf
 ./images/p016.pdf
 ./images/p021.pdf
 ./images/p041.pdf
 ./images/p043.pdf
 ./images/p044.pdf
 ./images/p045a.pdf
 ./images/p045b.pdf
 ./images/p049a.pdf
 ./images/p049b.pdf
 ./images/p053a.pdf
 ./images/p053b.pdf
 ./images/p055a.pdf
 ./images/p055b.pdf
 ./images/p056a.pdf
 ./images/p056b.pdf
 ./images/p057.pdf
 ./images/p063.pdf
 ./images/p064a.pdf
 ./images/p064b.pdf
 ./images/p064c.pdf
 ./images/p070.pdf
 ./images/p072.pdf
 ./images/p074.pdf
 ./images/p076.pdf
 ./images/p080.pdf
 ./images/p085.pdf
 ./images/p087.pdf
 ./images/p093.pdf
 ./images/p117.pdf
 ./images/p134.pdf
 ./images/p174.pdf
 ./images/p176.pdf
 ./images/p181a.pdf
 ./images/p181b.pdf
 ./images/p187.pdf
 ./images/p189.pdf
 ./images/p192.pdf
 ./images/p198.pdf
 ./images/p199.pdf
 ./images/p202.pdf
 ./images/p219.pdf
 ./images/p221.pdf
 ./images/p224a.pdf
 ./images/p224b.pdf
 ./images/p224c.pdf
 ./images/p224d.pdf
 ./images/p226.pdf
 ./images/p249a.pdf
 ./images/p249b.pdf
 ./images/p271.pdf
 ./images/p276.pdf
 ./images/p283.pdf
 ./images/p285.pdf
 ./images/p288.pdf
 ./images/p324.pdf
 ./images/p347.pdf
 ./images/p359.pdf
 ./images/p365.pdf
 ./images/p398.pdf
 ./images/p399.pdf
 ./images/p400a.pdf
 ./images/p400b.pdf
 ./images/p420.pdf
 ./images/p430a.pdf
 ./images/p430b.pdf
 ./images/p430c.pdf
 ./images/p431a.pdf
 ./images/p431b.pdf
 ./images/p433a.pdf
 ./images/p433b.pdf
 ./images/p434a.pdf
 ./images/p434b.pdf
 ***********
)
Here is how much of TeX's memory you used:
 10761 strings out of 493848
 143103 string characters out of 1152824
 255926 words of memory out of 3000000
 11710 multiletter control sequences out of 15000+50000
 22641 words of font info for 84 fonts, out of 3000000 for 9000
 714 hyphenation exceptions out of 8191
 37i,26n,44p,298b,507s stack positions out of 5000i,500n,10000p,200000b,50000s
Output written on 38769-t.pdf (587 pages, 3304029 bytes).
PDF statistics:
 6061 PDF objects out of 6186 (max. 8388607)
 2117 named destinations out of 2487 (max. 500000)
 557 words of extra memory for PDF output out of 10000 (max. 10000000)