Analytic Number Theory/Printable version


Analytic Number Theory

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Analytic_Number_Theory

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

Useful summation formulas

Analytic number theory is intricate enough that we first need a basic toolkit of summation formulas in order to prove even some of the most basic theorems of the theory.

Abel's summation formula


Theorem 1.1 (Abel's summation formula, also called Abel's identity):

Let   be a sequence and let   be a differentiable function such that   is Riemann integrable. If we define

 ,

then we have

 .

Note: We need the Riemann integrability to be able to apply the fundamental theorem of calculus.

Proof 1:

We prove the theorem by induction on  .

1.  :

First, we have in this case

 .

Then, we have

 

by the fundamental theorem of calculus.

2. Induction step:

Define  . We have

 

by the induction hypothesis. Further,

 .

Putting things together, we obtain

 

and thus the desired formula.

The method of proof applied here was to use induction and then to express the terms occurring in the induction hypothesis in terms of those occurring in the desired formula.

Proof 2:

We prove the theorem by direct manipulation of the term on the left.

Define  .

  

Proof 3:

We prove the formula by the means of the Riemann-Stieltjes integral. Indeed, by integration by parts, we have

 . 

Corollary 1.2:

 .

Proof 1:

We deduce the formula from integration by parts for the Riemann-Stieltjes integral.

  

Proof 2:

We directly manipulate the LHS (left hand side).

Define   and  .

  

Two further proofs are given in exercises 1.1.1 and 1.1.5.

We note that induction and direct manipulation give the quickest proofs of theorem 1.1, while corollary 1.2 is proven most quickly from theorem 1.1 or from Riemann–Stieltjes integration.
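Since the displayed statement of theorem 1.1 is omitted in this printable copy, the following sketch assumes the standard form of Abel's identity, namely A(x) = sum_{n <= x} a(n) and sum_{n <= x} a(n) f(n) = A(x) f(x) - int_1^x A(t) f'(t) dt, and checks both sides numerically for a simple choice of the sequence and the function. All names in the snippet are illustrative.

```python
# Numeric sanity check of Abel's identity in its standard form (an assumption,
# since the displayed formula is omitted in this copy):
#   sum_{n <= x} a(n) f(n) = A(x) f(x) - int_1^x A(t) f'(t) dt,
# where A(x) = sum_{n <= x} a(n).
import math

def abel_check(a, f, x):
    N = math.floor(x)
    A = lambda t: sum(a(n) for n in range(1, math.floor(t) + 1))
    lhs = sum(a(n) * f(n) for n in range(1, N + 1))
    # A is constant on [n, n+1), so the integral splits into exact pieces
    # of the form A(n) * (f(n+1) - f(n)).
    integral = sum(A(n) * (f(n + 1) - f(n)) for n in range(1, N))
    integral += A(N) * (f(x) - f(N))
    rhs = A(x) * f(x) - integral
    return lhs, rhs

# Example: a(n) = 1, f(t) = 1/t; both values should equal the harmonic sum H_10.
print(abel_check(lambda n: 1, lambda t: 1 / t, 10.5))
```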

Exercises

  • Exercise 1.1.1: Prove corollary 1.2 from theorem 1.1. Hint:  .
  • Exercise 1.1.2: Compute  . Hint: Use  ,  , apply Abelian summation and split the resulting integral into pieces where   is constant. Then apply a similar process.
  • Exercise 1.1.3: Prove that the limit   exists. This limit is called the Euler–Mascheroni constant. Hint: Use   and  .
  • Exercise 1.1.4: Prove theorem 1.1 from corollary 1.2.
  • Exercise 1.1.5: Prove corollary 1.2 using induction on  .

Euler's summation formula


Definition 1.3:

For  , we define

 .

Theorem 1.4 (Euler's summation formula):

Let   be a differentiable function, such that   is Riemann integrable. Then

 .

Proof:

We prove the theorem from Corollary 1.2, setting   and using integration by parts (integration by parts is proven using the fundamental theorem of calculus).

Indeed,

 ,

where in the last line we used integration by parts on the integral  . 
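Because the displayed statement of theorem 1.4 is omitted in this copy, the sketch below assumes Euler's summation formula in its standard form and verifies it numerically; the fractional part plays the role of the function from definition 1.3.

```python
# Numeric sanity check of Euler's summation formula in the standard form
# (an assumption, since the displayed statement is omitted in this copy):
# for 0 < y < x,
#   sum_{y < n <= x} f(n) = int_y^x f(t) dt + int_y^x {t} f'(t) dt
#                           + f(x) (floor(x) - x) - f(y) (floor(y) - y),
# where {t} denotes the fractional part of t.
import math

def frac(t):
    return t - math.floor(t)

def integrate(g, a, b, steps=200000):
    # plain midpoint rule; crude, but sufficient for a sanity check
    h = (b - a) / steps
    return h * sum(g(a + (k + 0.5) * h) for k in range(steps))

f = lambda t: 1.0 / t
fprime = lambda t: -1.0 / t ** 2
y, x = 1.0, 10.5

lhs = sum(f(n) for n in range(math.floor(y) + 1, math.floor(x) + 1))
rhs = (integrate(f, y, x)
       + integrate(lambda t: frac(t) * fprime(t), y, x)
       + f(x) * (math.floor(x) - x)
       - f(y) * (math.floor(y) - y))
print(lhs, rhs)   # the two values should agree to several decimal places
```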

Corollary 1.5:

Exercises

  1. Prove corollary 1.5.

Euler–Maclaurin formula


Theorem 1.6 (Euler–Maclaurin formula):

Define the functions   and  . Then for any twice continuously differentiable function   such that   is Riemann integrable, we have

 .

Proof 1:

We prove the theorem by direct computation.

Proof 2:

We prove the theorem from Euler's summation formula.


The Chebyshev ψ and ϑ functions

Proposition (the Chebyshev ψ function may be written as a sum of Chebyshev ϑ functions):

We have the identity

 .
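The displayed identity is omitted in this copy; the following sketch therefore assumes the standard relation ψ(x) = sum_{k >= 1} ϑ(x^(1/k)) and checks it numerically for a moderate value of x.

```python
# A small computation illustrating the standard relation between the Chebyshev
# functions (assumed here, since the displayed identity is omitted):
#   psi(x) = sum_{k >= 1} theta(x^(1/k)),
# where theta(x) = sum_{p <= x} log p and psi(x) = sum_{p, k with p^k <= x} log p.
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def theta(x):
    return sum(math.log(p) for p in primes_up_to(int(x)))

def psi(x):
    total = 0.0
    for p in primes_up_to(int(x)):
        pk = p
        while pk <= x:
            total += math.log(p)
            pk *= p
    return total

x = 1234
print(psi(x), sum(theta(x ** (1.0 / k)) for k in range(1, int(math.log2(x)) + 1)))
```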

Proposition (estimate of the distance between the Chebyshev ψ and ϑ functions):

Whenever  , we have

 .

Note: The current proof gives an inferior error term. A subsequent version will remedy this issue. (Assuming the Riemann hypothesis, the error term can be made even smaller.)

Proof: We know that the formula

 

holds. Hence,

 .

By a result of Pierre Dusart (based upon the computational verification of the Riemann hypothesis up to a certain height), we have

 

whenever  . If   is in that range, we hence conclude

 .

By Euler's summation formula, we have

 .

Certainly   and  . Moreover,  . Now differentiation shows that

 

is an anti-derivative of the function

 

of  . By the fundamental theorem of calculus, it follows that

 

for real numbers   such that  . This integral is not precisely the one we want to estimate. Hence, some analytical trickery will be necessary in order to obtain the estimate we want.

We start by noting that if only the bracketed term in the integral were absent, we would have the estimate we desire. In order to proceed, we replace   by the more general expression   (where  ), and obtain

 .

The integrand is non-negative so long as

 .

Moreover, if   is strictly within that range, we obtain

 .

We now introduce a constant   and obtain the integrals

  and  .

The first integral majorises the integral

 ,

whereas the second integral majorises the integral

 .

We obtain that

 .

Now we would like to set  . To do so, we must ensure that   is sufficiently large so that   and   respectively lie strictly within the admissible interval.

The two summands on the left are now estimated using our computation above, where   is replaced by   for the first computation: Indeed,

 

and

 .

Putting the estimates together and setting  , we obtain

 

whenever

  and  .

We now choose the ansatz

  and  

for constants   and  . These equations are readily seen to imply

  and  .

Note though that   and   is needed. The first condition yields

 .

The equations for   and   may be inserted into the above constraints on   and  ; this yields

  and  , that is,   and  .

If all these conditions are true, the ansatz immediately yields

 .

We now amend our ansatz by further postulating

 .

This yields

 

and

 .

From this we deduce that in order to obtain an asymptotically sharp error term, we need to set  . But doing so yields the desired result.  


Arithmetic functions

In this chapter, we shall set up the basic theory of arithmetic functions. This theory will be seen in action in later chapters, in particular in chapter 9.

Definitions


Definition 2.1:

An arithmetical function is a function  .

Definition 2.2 (important arithmetical functions):

  1. The Kronecker delta:  
  2. Euler's totient function:  
  3. Möbius'  -function:  
  4. The von Mangoldt function:  
  5. The monomials:  
  6. The number of distinct prime divisors:  ,  
  7. The number of prime divisors counted with multiplicity:  ,  
  8. The Liouville function:  
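Since the defining formulas are not displayed in this copy, the following reference implementations use the standard definitions of the functions listed above; they are a sketch for experimentation rather than part of the text.

```python
# Reference implementations of the arithmetic functions of Definition 2.2,
# based on their standard definitions (the displayed formulas are omitted here).
import math

def factorise(n):
    """Return the prime factorisation of n as a dict {p: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def delta(n):            # Kronecker delta: 1 at n = 1, else 0
    return 1 if n == 1 else 0

def phi(n):              # Euler's totient function (counting definition)
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def mu(n):               # Moebius function
    f = factorise(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def von_mangoldt(n):     # Lambda(n) = log p if n = p^k, else 0
    f = factorise(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

def monomial(k):         # the monomial n -> n^k
    return lambda n: n ** k

def small_omega(n):      # number of distinct prime divisors
    return len(factorise(n))

def big_omega(n):        # number of prime divisors counted with multiplicity
    return sum(factorise(n).values())

def liouville(n):        # Liouville function: (-1)^Omega(n)
    return (-1) ** big_omega(n)

print([mu(n) for n in range(1, 11)], phi(12), von_mangoldt(8))
```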

Exercises

  • Exercise 2.1.1: Compute  ,   and  .
  • Exercise 2.1.2: Compute  . Hint:  .
  • Exercise 2.1.3: Compute   up to three decimal places. Hint: Use a Taylor expansion.
  • Exercise 2.1.4: Prove that for each   and    .

The convolution and the ring of arithmetic functions


Definition 2.3:

Let   be arithmetical functions. Then the convolution of   and   is defined to be the function

 .
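As an illustration, here is a minimal implementation of the convolution, assuming the standard convention (f * g)(n) = sum_{d | n} f(d) g(n/d), since the displayed formula is omitted in this copy. The check that the function δ of Definition 2.2 acts as an identity anticipates part 3 of theorem 2.4 below.

```python
# A minimal sketch of the Dirichlet convolution, under the standard convention
#   (f * g)(n) = sum_{d | n} f(d) g(n / d)
# (the displayed formula is omitted in this copy).
def convolve(f, g):
    def h(n):
        return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)
    return h

delta = lambda n: 1 if n == 1 else 0   # the Kronecker delta of Definition 2.2
f = lambda n: n                        # an illustrative choice of f
h = convolve(delta, f)
print([h(n) == f(n) for n in range(1, 11)])   # expect all True
```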

In the following theorem, we show that the arithmetical functions form an Abelian monoid, where the monoid operation is given by the convolution. Further, since the sum of two arithmetic functions is again an arithmetic function and convolution distributes over pointwise addition, the arithmetic functions form a commutative ring. In fact, as we shall also see, they form an integral domain.

Theorem 2.4 (Abelian monoid properties of the arithmetical functions):

  1. The convolution is commutative, i. e.  .
  2. The convolution is associative, i. e.  .
  3. The function   from definition 2.2 is an identity for the convolution, i. e.  .

Proof:

1.:

 ,

where   is a bijection from the set of divisors of   to itself.

2.:

 ,

where the last equality follows from the identity function

 

being a bijection. But

 

and hence associativity follows.

3.:

  

Theorem 2.5:

The ring of arithmetic functions is an integral domain.

Proof: Let   be arithmetic functions, and let   be minimal such that  ,  . Then

 . 

We shall now determine the units of the ring of arithmetic functions.

Theorem 2.6:

Let   be an arithmetic function. Then   is invertible (with respect to convolution) if and only if  .

Proof:

Assume first  . Then for any arithmetic function  ,  .

Assume now  . Then  , given by the recursive formula

 ,
 ,  

is an inverse (and thus the inverse) of  , since   and for   inductively

 

 
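The recursion appearing in the proof of theorem 2.6 can be sketched as follows, assuming the usual convention g(1) = 1/f(1) and g(n) = -(1/f(1)) sum_{d | n, d < n} g(d) f(n/d) for n > 1 (an assumption, since the displayed formulas are omitted in this copy). The example output reflects the fact that the constant function 1 and the Möbius function are convolution inverses of each other.

```python
# A sketch of the convolution inverse from the proof of theorem 2.6, assuming
# the standard recursion g(1) = 1/f(1), g(n) = -(1/f(1)) * sum_{d|n, d<n} g(d) f(n/d).
from functools import lru_cache

def convolution_inverse(f):
    @lru_cache(maxsize=None)
    def g(n):
        if n == 1:
            return 1 / f(1)
        proper_divisors = [d for d in range(1, n) if n % d == 0]
        return -sum(g(d) * f(n // d) for d in proper_divisors) / f(1)
    return g

# Example: the inverse of the constant function 1 is the Moebius function.
one = lambda n: 1
mu_candidate = convolution_inverse(one)
print([mu_candidate(n) for n in range(1, 13)])
# expected (Moebius values, printed as floats): 1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0
```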

Exercises

  • Exercise 2.2.1:
  • Exercise 2.2.2:

Multiplicative functions


Definition 2.7:

An arithmetical function   is called multiplicative iff it satisfies

  1.  , and
  2.  .

Theorem 2.8:

Let   be multiplicative arithmetical functions. Then   is multiplicative.

Proof:

Let  . Then

 ,

since the function   is a bijection from the divisors of   to the Cartesian product of the divisors of   and the divisors of  ; this is because multiplication is the inverse:

 ,  .

Proving this rigorously is actually an exercise in itself. But due to the multiplicativity of   and  ,

 .

Furthermore,  . 

Since   is multiplicative, we conclude that the multiplicative functions form an Abelian submonoid of the arithmetic functions under convolution. Unfortunately, we do not obtain a subring, since the sum of two multiplicative functions is never multiplicative (consider  ).

Theorem 2.9:

Let   be a multiplicative function such that   converges absolutely. Then

 .

Proof: Let   be the ordered sequence of all prime numbers. For all   we have

 

due to the multiplicativity of  . For each  , we successively take  , ...,   and then  . It follows from the definitions and the rule   that the right hand side converges to

 .

We claim that

 .

Indeed, choose   such that

 .

Then by the fundamental theorem of arithmetic, there exists an   and   such that

 .

Then we have by the triangle inequality for  ,   and   arbitrary that

 

From this the claim easily follows.

It remains to show that the product on the left is independent of the order of multiplication. But this is clear: if the sequence   is enumerated differently, the argument works in just the same way and the left hand side remains the same. 
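To see theorem 2.9 in action, the following sketch compares a truncated version of the series with the corresponding truncated product for the multiplicative function f(n) = μ(n)/n², for which both sides approach 6/π². The truncation bounds are arbitrary choices made for illustration.

```python
# Numeric illustration of theorem 2.9 for the multiplicative function
# f(n) = mu(n) / n^2: the series sum_n f(n) should be close to the product over
# primes of (1 + f(p) + f(p^2) + ...) = (1 - 1/p^2); both approach 6 / pi^2.
import math

def mu(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # n has a squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

series = sum(mu(n) / n ** 2 for n in range(1, 50001))
product = math.prod(1 - 1 / p ** 2 for p in range(2, 1000) if is_prime(p))
print(series, product, 6 / math.pi ** 2)
```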

Definition 2.10:

An arithmetical function   is called strongly multiplicative (also: completely multiplicative) iff it satisfies

  1.  , and
  2.  .

Equivalently, a strongly multiplicative function is a monoid homomorphism  .

Theorem 2.11:

Let   be a strongly multiplicative function such that   converges absolutely. Then

 .

Proof:

Due to theorem 2.9, we have

 .

Due to strong multiplicativity and the geometric series, the latter expression equals

 . 

Exercises

  • Exercise 2.3.1: Let   be an arithmetic function such that for all    , and let  . Prove that the function   is multiplicative.

Bell series


Definition 2.12:

Let   be an arithmetic function. Then for a prime   the Bell series modulo   is the formal power series

 .

Examples 2.13:

We shall here compute the Bell series for some important arithmetic functions.

We note that in general for a completely multiplicative function  , we have

 .

In particular, in this case the Bell series defines a function.

1. The Kronecker delta:

 

2. Euler's totient function (we use lemma 9.?):

 

3. The Möbius   function:

 

4. The von Mangoldt function:

 

5. The monomials:

 

6. The number of distinct prime divisors:

 

7. The number of prime divisors including multiplicity:

 

8. The Liouville function:

 

Theorem 2.14 (compatibility of Bell series and convolution):

Let   be arithmetic functions, and let   be a prime. Then

 .

Proof:

  

In case of multiplicativity, we have the following theorem:

Theorem 2.15 (Uniqueness theorem):

Let   be multiplicative functions. Then

 .

Proof:   is pretty obvious;  :   as formal power series is equivalent to saying  . If now  , then

 

due to the multiplicativity of   and  . 
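Returning to theorem 2.14: a Bell series can be truncated to a list of coefficients, and compatibility with convolution then becomes a statement about Cauchy products. The following sketch checks this for the illustrative choice f(n) = n and g(n) = 1, so that f * g is the divisor sum function.

```python
# A check of theorem 2.14 on truncated Bell series: the coefficient list of the
# Bell series of f * g at a prime p equals the Cauchy product of the coefficient
# lists of f and g. The functions and the prime below are illustrative choices.
def convolve(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def bell_coefficients(f, p, terms):
    """The coefficients f(1), f(p), f(p^2), ... of the Bell series of f modulo p."""
    return [f(p ** k) for k in range(terms)]

def cauchy_product(a, b):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(len(a))]

p, terms = 3, 6
N = lambda n: n                 # the monomial n
u = lambda n: 1                 # the constant function 1
sigma = convolve(N, u)          # the divisor sum function

lhs = bell_coefficients(sigma, p, terms)
rhs = cauchy_product(bell_coefficients(N, p, terms), bell_coefficients(u, p, terms))
print(lhs, lhs == rhs)          # expect [1, 4, 13, 40, 121, 364] and True
```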

In chapter 9, we will use Bell series to obtain identities for number-theoretic functions.

Derivatives


Definition 2.16:

Let   be an arithmetic function. Then the derivative of   is defined to be the function

 .

Theorem 2.17 (rules for the derivative):

Let   be arithmetic functions. We have the following rules:

  1.  
  2.  
  3.   if   is invertible, i.e. 

Note that   is not the inverse function of   (this wouldn't make much sense anyway, since an arithmetic function   cannot be surjective, as   is uncountable), but rather the convolution inverse.

Proof:

1. is easily checked.

2.:

 

3.

We have   and  . Hence, by 2.

 .

Convolving with   and using   yields the desired formula. 

Note that a chain rule wouldn't make much sense, since an arithmetic function   may map anywhere but to  , and thus   does not make sense in general.
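As a quick empirical check of rule 2 of theorem 2.17, the following sketch assumes the standard definition f'(n) = f(n) log n and verifies (f * g)' = f' * g + f * g' for a simple pair of functions.

```python
# Numeric check of rule 2 of theorem 2.17, assuming the standard definition
# f'(n) = f(n) * log(n) and the product rule (f * g)' = f' * g + f * g'.
import math

def convolve(f, g):
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def derivative(f):
    return lambda n: f(n) * math.log(n)

f = lambda n: 1          # the constant function 1
g = lambda n: n          # the monomial n

lhs = derivative(convolve(f, g))
rhs1 = convolve(derivative(f), g)
rhs2 = convolve(f, derivative(g))
print([abs(lhs(n) - (rhs1(n) + rhs2(n))) < 1e-9 for n in range(1, 20)])  # all True
```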

Further reading



Characters and Dirichlet characters

Definitions, basic properties


Definition 4.1

Let   be a finite group. A character of G is a function   such that

  1.   and
  2.  .

Lemma 4.2:

Let   be a finite group and let   be a character. Then

 .

In particular,  .

Proof:

Since   is finite, each   has finite order  . Furthermore, let   such that  ; then   and thus  . Hence, we are allowed to cancel and

 . 

Lemma 4.3:

Let   be a finite group and let   be characters. Then the function   is also a character.

Proof:

 ,

since   is a field and thus free of zero divisors. 

Lemma 4.4:

Let   be a finite group and let   be a character. Then the function   is also a character.

Proof: Trivial, since   as shown by the previous lemma. 

The previous three lemmas (or only the first, together with a few lemmas from elementary group theory) justify the following definition.

Definition 4.5

Let   be a finite group. Then the group

 

is called the character group of  .
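As a concrete illustration (not taken from the text), the characters of the additive cyclic group Z/nZ can be written down explicitly; the snippet below constructs them and checks numerically both the defining property and the fact that character values have absolute value 1, as in lemma 4.2.

```python
# Characters of the additive cyclic group Z/nZ (an illustrative construction,
# not from the text): chi_j(k) = exp(2*pi*i*j*k / n) for j = 0, ..., n-1.
import cmath

def characters_of_cyclic_group(n):
    return [lambda k, j=j, n=n: cmath.exp(2 * cmath.pi * 1j * j * k / n)
            for j in range(n)]

chars = characters_of_cyclic_group(5)
chi = chars[2]
# the character property (for the additive group operation) and |chi(k)| = 1
print(abs(chi(3) * chi(4) - chi(3 + 4)) < 1e-12,
      abs(abs(chi(3)) - 1) < 1e-12)        # expect True True
```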

Required algebra


We need the following result from group theory:

Lemma 4.6

Let   be a finite Abelian group, let   be a subgroup of order  , and let   such that   is the smallest number such that  . Then the group

 

is a subgroup of   containing   of order  .

Proof:

Since   is the disjoint union of the cosets of  ,   is the disjoint union  , as   and  . Hence, the cardinality of   equals  .

Furthermore, if  , then  , and hence   is a subgroup. 

Theorems about characters


Dirichlet characters



Dirichlet series

For the remainder of this book, we shall use Riemann's convention of denoting complex numbers:

 

Definition


Definition 5.1:

Let   be an arithmetic function. Then the Dirichlet series associated to   is the series

 ,

where   ranges over the complex numbers.
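Since the displayed formula is omitted in this copy, the sketch below assumes the standard form D(f, s) = sum_{n >= 1} f(n) n^(-s) and evaluates a truncation; with f identically 1 this truncates the Riemann zeta function.

```python
# Truncated evaluation of a Dirichlet series, assuming the standard form
# D(f, s) = sum_{n >= 1} f(n) / n^s (the displayed formula is omitted here).
def dirichlet_series(f, s, terms=100000):
    return sum(f(n) / n ** s for n in range(1, terms + 1))

# Example: f = 1 truncates the Riemann zeta function; zeta(2) = pi^2/6 ~ 1.6449.
print(dirichlet_series(lambda n: 1, 2))
```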

Convergence considerations


Theorem 5.2 (abscissa of absolute convergence):

Let   be an arithmetic function such that the series of absolute values of the Dirichlet series associated to  

 

neither diverges for all   nor converges for all  . Then there exists  , called the abscissa of absolute convergence, such that the Dirichlet series associated to   converges absolutely for all  ,  , and its associated series of absolute values diverges for all  ,  .

Proof:

Denote by   the set of all real numbers   such that

 

diverges. Due to the assumption, this set is neither empty nor equal to  . Further, if  , then for all   and all    , since

 

and due to the comparison test. It follows that   has a supremum. Let   be that supremum. By definition, for   we have convergence, and if we had convergence for   we would have found a smaller upper bound by the above argument, contradicting the definition of  . 

Theorem 5.3 (abscissa of conditional convergence):

Formulas


Theorem 8.4 (Euler product):

Let   be a strongly multiplicative function, and let   such that the corresponding Dirichlet series converges absolutely. Then for that series we have the formula

 .

Proof:

This follows directly from theorem 2.11 and the fact that   strongly multiplicative     strongly multiplicative. 
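As a numeric illustration of the Euler product, take the strongly multiplicative Liouville function λ(n) = (-1)^Ω(n) and s = 2: the truncated Dirichlet series and the truncated product over primes should both be close to ζ(4)/ζ(2) = π²/15. The truncation bounds below are arbitrary choices.

```python
# Euler product check for the strongly multiplicative Liouville function
# lambda(n) = (-1)^Omega(n) at s = 2:
#   sum_n lambda(n)/n^2  ~  prod_p 1/(1 - lambda(p) p^(-2)) = prod_p 1/(1 + p^(-2)),
# both close to zeta(4)/zeta(2) = pi^2 / 15.
import math

def big_omega(n):
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

series = sum((-1) ** big_omega(n) / n ** 2 for n in range(1, 50001))
product = math.prod(1 / (1 + p ** -2.0) for p in range(2, 2000) if is_prime(p))
print(series, product, math.pi ** 2 / 15)
```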


Formulas for number-theoretic functions

Formulas for the Möbius μ function


Lemma 2.9:

 .

Proof:

For   a multiindex,   and   a vector define

 ,
 .

Let  . Then

 . 

Lemma 2.10:

 .

Proof 1:

We prove the lemma from lemma 2.14.

We have by lemma 2.14

  

Proof 2:

We prove the lemma from the product formula for Euler's totient function and lemma 2.9. Indeed, for  

 . 

Lemma 2.14:

 .

Proof 1:

We use the Möbius inversion formula.

Indeed,  , and hence  . 

Proof 2:

We use multiplicativity.

Indeed, for a prime  ,   we have

 ,

and thus due to the multiplicativity of   and     if   contains at least one prime factor. Since further   the claim follows. 

Proof 3:

We prove the lemma by direct computation. Indeed, if  , then

 . 

Proof 4:

We prove the lemma from the Binomial theorem and combinatorics.

Let  . From combinatorics we note that for  , there exist   distinct ways to pick a subset   such that  . Define   where  . Then, by the Binomial theorem

 . 

Formulas for Euler's totient function


Lemma 2.11 (Gauß 1801):

 .

Proof 1:

We use the Möbius inversion formula, proven below without using this lemma, and lemma 2.10.

We have   and hence   by the Möbius inversion formula. On the other hand,

 

by lemma 2.10.

Hence, we obtain  , and by cancellation of   (the arithmetic functions form an integral domain) we get the lemma. 

Proof 2:

We use the converse of the Möbius inversion formula, proven below without using this lemma, and lemma 2.10.

Since   by lemma 2.10, we obtain from the converse of the Möbius inversion formula that  . 

Proof 3:

We prove the lemma by double counting.

We first note that there are   many fractions of the form  ,  .

We now prove that there are also   many fractions of this form. Indeed, each fraction  ,   can be reduced to  , where  .   is a divisor of  , since it is obtained by dividing  . Furthermore, for each divisor   of   there exist precisely   many such fractions by definition of  . 

Proof 4:

We prove the lemma by the means of set theory.

Define  . Then  . Since   and   is the disjoint union of the sets  , we thus have

 . 

The next theorem provides one of the most important examples of a multiplicative function.

Theorem 2.12 (Euler 1761):

Euler's totient function is multiplicative.

Proof 1:

We prove the theorem using double counting (due to Kronecker).

By definition of  , there are   sums of the form

 ,

where both summands are reduced. We claim that there is a bijection

 .

From this would follow  .

We claim that such a bijection is given by  .

Well-definedness: Let  ,   be reduced. Then

 

is also reduced, for if  , then without loss of generality  , and from   follows   or  . In both cases we obtain a contradiction, either to   or to the fact that   is reduced.

Surjectivity: Let   be reduced. Using the Euclidean algorithm, we find   such that  . Then  . Define  ,  . Then

 .

Injectivity: Let  . We show  ; the proof for   is the same.

Indeed, from   follows  , and since  ,   is invertible modulo  , which is why we may multiply this inverse on the right to obtain  . Since  , the claim follows. 

Proof 2:

We prove the theorem from the Chinese remainder theorem.

Let  . From the Chinese remainder theorem, we obtain a ring isomorphism

 ,

which induces a group isomorphism

 .

Hence,  , and from   follows the claim. 

Proof 3: We prove the theorem from lemma 2.11 and induction (due to Hensel).

Let   such that  . By lemma 2.11, we have   and   and hence

 .

Furthermore, by lemma 2.11 and the bijection from the proof of theorem 2.8,

 .

By induction on   we thus have

 . 

Proof 4: We prove the theorem from lemma 2.11 and the Möbius inversion formula.

Indeed, from lemma 2.10 and the Möbius inversion formula, we obtain

 ,

which is why   is multiplicative as the convolution of two multiplicative functions. 

Proof 5: We prove the theorem from Euler's product formula.

Indeed, if   and   and  , then   and hence

 . 
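A brute-force empirical check of theorem 2.12, using the counting definition of φ, can be carried out as follows.

```python
# Empirical check of theorem 2.12: phi(m*n) = phi(m)*phi(n) whenever gcd(m, n) = 1.
import math

def phi(n):
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

pairs = [(m, n) for m in range(1, 30) for n in range(1, 30) if math.gcd(m, n) == 1]
print(all(phi(m * n) == phi(m) * phi(n) for m, n in pairs))   # expect True
```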

Theorem 2.15 (Möbius inversion formula):

Let   be an arithmetical function and define

 .

Then

 .

Proof:

By lemma 2.14 and associativity of convolution,

 . 
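Assuming the standard form of the Möbius inversion formula (the displayed formulas are omitted in this copy), the following sketch verifies it for f = φ; along the way it also checks Gauß's identity from lemma 2.11.

```python
# Numeric check of the Möbius inversion formula in its standard form (assumed):
# if F(n) = sum_{d|n} f(d), then f(n) = sum_{d|n} mu(d) F(n/d).
import math

def mu(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def phi(n):
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def divisor_sum(f):
    return lambda n: sum(f(d) for d in range(1, n + 1) if n % d == 0)

F = divisor_sum(phi)   # by lemma 2.11, F(n) = n
recovered = lambda n: sum(mu(d) * F(n // d) for d in range(1, n + 1) if n % d == 0)
print([F(n) == n for n in range(1, 20)])                # Gauss's identity
print([recovered(n) == phi(n) for n in range(1, 20)])   # Möbius inversion
```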

Theorem 2.16 (Product formula for Euler's totient function):

Let  , where   are pairwise distinct prime numbers and   (recall that every number has such a decomposition by the fundamental theorem of arithmetic). Then

 .

Proof 1:

We prove the theorem from lemma 2.10 and the fact that   is multiplicative.

Indeed, let   be a prime number and let  . Then  , since

 

by lemma 2.10. Therefore,

 ,

where the latter equation follows from

 . 

Proof 2:

We prove the identity by the means of probability theory.

Let  ,  . Choose  ,  ,  . For   define the event  . Then we have

 .

On the other hand, for each  , we have

 .

Thus, it follows that   are independent. But since events are independent if and only if their complements are, we obtain

 . 

Proof 3:

We prove the identity from the Möbius inversion formula and lemmas 2.9 and 2.10.

But by the Möbius inversion formula and since by lemma 2.10  ,

 . 

Proof 4:

We prove the identity from the inclusion–exclusion principle.

Indeed, by one of de Morgan's rules and the inclusion–exclusion principle we have for sets  

 ,

where we use the convention that the empty intersection equals the universal set  . Let now  , and define   and   for  . Since

 ,

we then have

 .

But for each  , we have

 .

It follows

 ,

and since

 ,

the theorem is proven. 
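A quick numerical check of the product formula, assuming its standard form φ(n) = n prod_{p | n} (1 - 1/p), can be done as follows.

```python
# Check of the product formula for Euler's totient function in its standard
# form (assumed): phi(n) = n * prod_{p | n} (1 - 1/p).
import math
from fractions import Fraction

def phi(n):
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def phi_by_product(n):
    value, d, m = Fraction(n), 2, n
    while d * d <= m:
        if m % d == 0:
            value *= Fraction(d - 1, d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        value *= Fraction(m - 1, m)
    return int(value)

print(all(phi(n) == phi_by_product(n) for n in range(1, 200)))   # expect True
```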

Exercises


Formulas for the von Mangoldt function


Theorem 8.? (The Selberg identity):


Partial fraction decomposition

Existence theorem


Theorem 2.1 (Existence theorem of the partial fraction decomposition):

Let   be polynomials over a unique factorisation domain, and let  , where the   are irreducible. Then we may write

 ,

where   are polynomials of degree strictly less than   and   is a polynomial. The term on the right hand side is called the partial fraction decomposition of  .

Proof:

We proceed by induction on  . For  , the statement is true since by division with remainder, we may write

 

with   to obtain

 ,

and we have reduced the degree of the denominator by one (the latter summand already satisfies the required condition). By repetition of this process, we eventually obtain a denominator of one and thus a polynomial.

Let now the hypothesis be true for  , and assume that  . Write   and  . By irreducibility,  . Hence, we find polynomials   such that  . Then

 .

Each of the summands of the last term can by the induction hypothesis be written in the desired form. 

Technique


No matter how complicated our fraction of polynomials   may be, we can compute its partial fraction decomposition in finite time using elementary techniques. The method, which for the sake of simplicity differs from the one given in the constructive existence proof above, goes as follows (a small worked example follows theorem 2.2 below):

  1. Split the polynomial   into irreducible factors.
  2. Using division with remainder of   by  , reduce to the case   (the resulting polynomial   is allowed in the formula of theorem 2.1).
  3. Solve the equation given in theorem 2.1 for the   (this is equivalent to solving a system of linear equations; namely multiply by   and then equate coefficients).

Theorem 2.2:

The algorithm given above always terminates and gives the partial fraction decomposition of  .

Proof: Due to theorem 2.1, in step three we do obtain a solvable system of linear equations. Hence termination and correctness follow. 
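In practice, the linear algebra of step 3 can be delegated to a computer algebra system; the sketch below uses the sympy library (an external tool, not part of the text) on a small example.

```python
# One way to carry out the technique above in practice, delegating the work to
# sympy's apart() routine rather than solving the linear system by hand.
from sympy import symbols, apart

x = symbols('x')
# Example: decompose (3*x + 1) / ((x - 1) * (x + 2)).
print(apart((3*x + 1) / ((x - 1) * (x + 2)), x))
# expected (up to the order of the summands): 4/(3*(x - 1)) + 5/(3*(x + 2))
```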

Exercises



Tools from complex analysis

Infinite products


Lemma 5.1 (Convergence of real products):

Let   be such that

 

converges absolutely. Then if  ,

 

converges.

Proof: Without loss of generality, we assume   for all  .

Denote

 .

Then we have

 .

We now apply the Taylor formula of first degree with Lagrange remainder to   at   to obtain for  

 ,  .

Hence, we have for  

 ,  .

Hence,   and thus we obtain the (even absolute) convergence of the  ; thus, by the continuity of the exponential, also the   converge. 

Theorem 5.2 (Comparison test for complex products):

Assume that   is a non-negative real sequence such that

 

converges. Assume further that   is a sequence of complex numbers such that  . Then also

 

converges. Furthermore, for all  

 .

Proof:

We define

 ,  . We note that
 .

Without loss of generality we may assume that all the factors are nonzero; otherwise we have immediate convergence (to zero).

We now prove that   is a Cauchy sequence. Indeed, we have

 

and furthermore

 

and therefore

 .

Since  , it is a Cauchy sequence, and thus, by the above inequality, so is  . The last claim of the theorem follows by taking   in the above inequality. 

Theorem 5.3 (Sum test for complex products):

Let   be a real sequence such that

 

converges absolutely. Then if  ,

 

converges, where   is a complex sequence. Furthermore, for all  

 .

Proof 1:

We prove the theorem using lemma 5.1 and the comparison test.

Indeed, by lemma 5.1 the product

 

converges. Hence by theorem 5.2, we obtain convergence and the desired inequality. 

Proof 2 (without the inequality):

We prove the theorem except the inequality at the end from lemma 5.1 and by using the Taylor formula on  .

We define  . Then since every complex number satisfies  , we need to prove the convergence of the sequences   and  .

For the first sequence, we note that the convergence of   is equivalent to the convergence of  . Now for each  

 

Theorem 5.4 (Holomorphic products):

Let   be a sequence of holomorphic functions in a domain   such that for each   we can find a compact   and a sequence   such that   and

 

converges absolutely. Then

 

defines a holomorphic function.

Proof:

First, we note that   is well-defined for each   due to theorem 5.2. In order to prove that the product is holomorphic, we use the fact from complex analysis that if a sequence of functions converging locally uniformly to another function has infinitely many holomorphic members, then the limit is holomorphic as well. Indeed, by the inequality in theorem 5.3 we have locally uniform convergence. Hence, the theorem follows. 

Exercises


The Weierstraß factorisation


The following lemma is of great importance, since we can deduce three important theorems from it:

  1. The existence of holomorphic functions with prescribed zeroes
  2. The Weierstraß factorisation theorem (a way to write any holomorphic function as a product built from linear factors and exponentials)
  3. The Mittag-Leffler theorem (named after Gösta Mittag-Leffler, which is the name of a single person)

Lemma 5.5:

Let   be a sequence of complex numbers such that

 

and

 .

Then the function

 

has exactly the zeroes   in the correct multiplicity.

Proof:

Define for each  

 .

Our plan is to prove that   converges uniformly in every subcircle of the circle of radius   for every  . Since the function   is holomorphic in a unit ball around zero, it is equal to its Taylor series there, i.e.

 .

Hence, for  

 .

Let now   be given and   be arbitrary. Then we have for  ,   arbitrary

 .

Now summing over  , we obtain

 

for all  . Hence, we have uniform convergence in that circle; thus the sum of the logarithms is holomorphic, and so is the original product if we plug everything into the exponential function (note that we do have   even if   is an arbitrary complex number). 

Note that our method of proof was similar to that of lemma 5.1. In spite of this, it is not possible to prove the above lemma directly from theorem 5.4, since the corresponding series does not converge if the   increase too slowly.
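As a concrete illustration (not from the text) of a product with prescribed zeroes, the product prod_{n >= 1} (1 - z²/n²) converges and equals sin(πz)/(πz), a holomorphic function vanishing exactly at the nonzero integers; the snippet compares a truncation with the closed form.

```python
# Truncated product prod_{n>=1} (1 - z^2/n^2) versus the closed form
# sin(pi*z)/(pi*z); the two values should agree to several decimal places.
import cmath

def truncated_product(z, terms=100000):
    value = 1.0 + 0j
    for n in range(1, terms + 1):
        value *= 1 - z * z / (n * n)
    return value

z = 0.3 + 0.2j
print(truncated_product(z), cmath.sin(cmath.pi * z) / (cmath.pi * z))
```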

Theorem 5.6 (Holomorphic functions with given zeroes):

Let   be a sequence of complex numbers which does not have an accumulation point. Then the function

 

has zeroes   with the right multiplicity, where the sequence   are the nonzero elements of the sequence   ordered ascendingly with respect to their absolute value and   is the number of zeroes within the sequence  .

Proof:

We order   increasingly according to the modulus   and the standard order on the real numbers. We go on to observe that then  , since if the sequence were to remain bounded, it would have an accumulation point by the Heine–Borel theorem. Also, the sequence takes the value zero only finitely often (otherwise zero would be an accumulation point). After eliminating the zeroes from the sequence   we call the remaining sequence  . Let   be the number of zeroes in  . Then due to lemma 5.5, the function

 

has the required properties. 

Theorem 5.7 (Weierstraß factorisation theorem):

Let   be holomorphic and not the constant zero function, with zeroes  ; let   be the nonzero elements of the sequence  , ordered ascendingly with respect to their absolute value, and if  , let   be the order of the zero   of  . Then there exists a holomorphic function   such that

 .

Proof:

First, we note that   does not have an accumulation point, since otherwise   would be the constant zero function by the identity theorem from complex analysis. From theorem 5.6, we obtain that the function   has exactly the zeroes   with the right multiplicity, where the sequence   are the nonzero elements of the sequence   ordered ascendingly with respect to their absolute value and   is the number of zeroes within the sequence  . We have that   has no zeroes and is bounded, and hence holomorphic due to Riemann's theorem on removable singularities. For, if   were unbounded, it would have a singularity at a zero   of  . This singularity cannot be essential, since dividing   by finitely many linear factors would eliminate it. Hence we would have a pole, and this could be removed by multiplying linear factors to  . But then   would have a zero of the order of that pole, which is not possible, since we may eliminate all the zeroes of   by writing  ,   holomorphic and nonzero at  , where   is the order of the zero of   at  .

Hence,   has a holomorphic logarithm on  , which we shall denote by  . This satisfies

 . 

Corollary 5.8 (Mittag-Leffler's theorem):

Let   be a sequence of complex numbers which does not have an accumulation point. Then there exists a meromorphic function   which has exactly the poles  , where the pole   has order  .

Proof:

From theorem 5.7 we obtain a function   with zeroes   in the right multiplicity. Set  . 

Exercises


The Hadamard factorisation


In this subsection, we strive to factor certain holomorphic functions in a way that makes them even easier to deal with than the Weierstraß factorisation. This is the Hadamard factorisation. It only works for functions satisfying a certain growth estimate, but in fact, many important functions occurring in analytic number theory do satisfy this estimate, and thus this factorisation will give us ways to prove certain theorems about those functions.

In order to prove that we may carry out a Hadamard factorisation, we need some estimates for holomorphic functions as well as some preparatory lemmata.

Estimates for holomorphic functions


Theorem 5.9:

Let   be a holomorphic function such that  , and let   be the sequence of zeroes of that function ordered ascendingly by absolute value. Let  . If we denote the number of zeroes of   inside   by  , then

 .

Proof:

Set   and define the function   by

 ,

where the latter limit exists by developing   into a power series at   and observing that the constant coefficient vanishes. By Riemann's theorem on removable singularities,   is holomorphic. We now have

 ,

and if further  , then   and hence we may multiply by that number without changing anything, to obtain for  

 .

Now writing   and  , we obtain on the one hand

 

and on the other hand

 .

Hence,

 ,

which is why both   and   have the same distance to  , since   lies on the real axis.

Hence, due to the maximum principle, we have

 . 

Theorem 5.10:

Let  , let   be holomorphic within   and define  . Then

 .

Proof:

First, we consider the case   and  . We may write   in its power series form

 ,

where  . If we write   and  , we obtain by Euler's formula

 

and thus

 .

Since the latter sum is majorised by the sum

 ,

it converges absolutely and uniformly in  . Hence, by exchanging the order of integration and summation, we obtain

 

due to

 

and further for all  

 

due to

 ,

as can be seen using integration by parts twice and  . By monotonicity of the integral, we now have

 .

This proves the theorem in the case  . For the general case, we define

 .

Then  , hence by the case we already proved

 . 

Theorem 5.11:

Further preparations


Definition 5.12 (exponent of convergence):

Let   be a sequence of complex numbers not containing zero such that

 

converges for a  . Then

 

is called the exponent of convergence of the sequence  .

Definition 5.13 (holomorphic functions of finite order):

Let   be holomorphic, and define for  

 .

If there exists   such that

 

for a suitable  , then   is said to be of finite order. In this case,

 

is called the order of  .

Lemma 5.14:

The theorem


Exercises
