Commutative Algebra/Algebras and integral elements

Algebras

Definition 21.1:

Let $R$ be a ring. An algebra $A$ over $R$ is an $R$-module together with a multiplication $\cdot : A \times A \to A$. This multiplication shall be $R$-bilinear.

An algebra thus carries both an addition and a multiplication, and many of the usual rules of algebra remain valid; hence the name algebra.

Of course, there are some algebras whose multiplication is not commutative or associative. If the underlying ring $R$ is commutative, bilinearity gives a certain commutativity property in the sense of

$(r \cdot a) b = r \cdot (a b) = a (r \cdot b)$ for all $r \in R$ and $a, b \in A$.
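
For example, the $n \times n$ matrices $M_n(R)$ over a commutative ring $R$ form an $R$-algebra: entrywise addition and scaling make $M_n(R)$ an $R$-module, and matrix multiplication is $R$-bilinear and associative, but not commutative for $n \geq 2$.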

Definition 21.2:

Let $A$ be an $R$-algebra, and let $B$ be a subset of $A$. $B$ is called a subalgebra of $A$ iff it is closed with respect to the operations

  • addition
  • multiplication
  • module operation

of $A$.

Note that this means that $B$, together with the operations inherited from $A$, is itself an $R$-algebra; the necessary rules just carry over from $A$.

Example 21.3: Let $R$ be a ring, let $S$ be another ring, and let $\varphi : R \to S$ be a ring homomorphism. Then $S$ is an $R$-algebra, where the module operation is given by

$r \cdot s := \varphi(r) s$,

and multiplication and addition for this algebra are given by the multiplication and addition of $S$, the ring.

Proof:

The required rules for the module operation are verified as follows:

  1. $(r + r') \cdot s = \varphi(r + r') s = (\varphi(r) + \varphi(r')) s = r \cdot s + r' \cdot s$
  2. $r \cdot (s + s') = \varphi(r)(s + s') = \varphi(r) s + \varphi(r) s' = r \cdot s + r \cdot s'$
  3. $(r r') \cdot s = \varphi(r r') s = \varphi(r) \varphi(r') s = r \cdot (r' \cdot s)$
  4. $1 \cdot s = \varphi(1) s = s$

Since in $S$ we have all the rules for a ring, the only thing we need to check for the $R$-bilinearity of the multiplication is compatibility with the module operation.

Indeed,

$r \cdot (s s') = \varphi(r)(s s') = (\varphi(r) s) s' = (r \cdot s) s'$

and analogously for the other argument. $\Box$
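
In particular, every ring $S$ is a $\mathbb{Z}$-algebra, since there is exactly one ring homomorphism $\mathbb{Z} \to S$, namely $n \mapsto n \cdot 1_S$.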

We shall note that if we are given an $R$-algebra $A$, then we can take a polynomial $p \in R[x_1, \ldots, x_n]$ and some elements $a_1, \ldots, a_n$ of $A$ and evaluate $p(a_1, \ldots, a_n)$ as follows:

  1. Using the algebra multiplication, we form the monomials $a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n}$.
  2. Using the module operation, we multiply each monomial with the respective coefficient: $r_{k_1, \ldots, k_n} \cdot a_1^{k_1} \cdots a_n^{k_n}$.
  3. Using the algebra addition (= module addition), we add all these $r_{k_1, \ldots, k_n} \cdot a_1^{k_1} \cdots a_n^{k_n}$ together.

The commutativity of multiplication (1.) and addition (3.) ensures that this procedure does not depend on the choices of order that can be made in regard to addition and multiplication.
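
To make the three steps concrete, here is a minimal Python sketch, taking the algebra to be $\mathbb{C}$ viewed as an $\mathbb{R}$-algebra; the dictionary encoding of polynomials and the name eval_poly are illustrative choices, not notation from the text.

```python
# Minimal sketch: evaluating p in R[x_1, ..., x_n] at elements of an algebra.
# Here the algebra is C as an R-algebra (Python complex/float); a polynomial
# is a dict mapping exponent tuples (k_1, ..., k_n) to coefficients in R.

def eval_poly(p, points):
    total = 0
    for exponents, coeff in p.items():
        # Step 1: form the monomial a_1^{k_1} * ... * a_n^{k_n}
        # using the algebra multiplication.
        monomial = 1
        for a, k in zip(points, exponents):
            monomial *= a ** k
        # Step 2: scale by the coefficient (the module operation), and
        # Step 3: add the scaled monomials up (the module addition).
        total += coeff * monomial
    return total

# p(x, y) = 2 x^2 y - 3 y, evaluated at a_1 = 1 + 2i, a_2 = 3i:
p = {(2, 1): 2.0, (0, 1): -3.0}
print(eval_poly(p, [1 + 2j, 3j]))  # (-24-27j)
```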

Definition 21.4:

Let $A$ be an $R$-algebra, and let $a_1, \ldots, a_n$ be any elements of $A$. We then define a new object, $R[a_1, \ldots, a_n]$, to be the set of all elements of $A$ that arise when applying the algebra operations of $A$ and the module operation (with arbitrary elements $r$ of the underlying ring) to the elements $a_1, \ldots, a_n$ a finite number of times, in an arbitrary fashion (for example the elements $a_1 a_2$, $r \cdot a_1^2$ and $a_1 + r \cdot a_1 a_2$ are all in $R[a_1, \ldots, a_n]$). By multiplying everything out (using the rules we are given for an algebra), we find that this is equal to

$\left\{ p(a_1, \ldots, a_n) \,\middle|\, p \in R[x_1, \ldots, x_n] \text{ with vanishing constant term} \right\}$.

(Constant polynomials have to be excluded here: since the algebra $A$ need not be unital, a constant cannot in general be produced from $a_1, \ldots, a_n$ by the three operations.)

We call $R[a_1, \ldots, a_n]$ the algebra generated by the elements $a_1, \ldots, a_n$.

Theorem 21.5:

Let an $R$-algebra $A$ be given, and let $a_1, \ldots, a_n \in A$. Then

  • $R[a_1, \ldots, a_n]$ is a subalgebra of $A$.

Furthermore,

  • $R[a_1, \ldots, a_n] = \bigcap_{\substack{B \subseteq A \text{ subalgebra} \\ a_1, \ldots, a_n \in B}} B$

and

  • $R[a_1, \ldots, a_n]$ is (with respect to set inclusion) smaller than any other subalgebra of $A$ containing each of the elements $a_1, \ldots, a_n$.

Proof:

The first claim follows from the very definition of subalgebras of $A$: the closedness under the three operations. For, if we are given any elements of $R[a_1, \ldots, a_n]$, applying any operation to them is just one further step of manipulation with the elements $a_1, \ldots, a_n$.

We go on to prove the equation

$R[a_1, \ldots, a_n] = \bigcap_{\substack{B \subseteq A \text{ subalgebra} \\ a_1, \ldots, a_n \in B}} B$.

For "$\subseteq$" we note that $a_1, \ldots, a_n$ are contained within every $B$ occurring on the right-hand side. Thus, by the closedness of these $B$, all finite manipulations by the three algebra operations (addition, multiplication, module operation) are included in each $B$. From this follows "$\subseteq$".

For "$\supseteq$" we note that $R[a_1, \ldots, a_n]$ is itself a subalgebra of $A$ containing $a_1, \ldots, a_n$, and intersecting with more sets can only make the intersection smaller.

Now if any other subalgebra of $A$ is given that contains $a_1, \ldots, a_n$, the intersection on the right-hand side of our equation must be contained within it, since that subalgebra is one of the $B$. $\Box$

Exercises

  • Exercise 21.1.1:

Symmetric polynomials

Definition 21.6:

Let $R$ be a ring. A polynomial $p \in R[x_1, \ldots, x_n]$ is called symmetric if and only if for all $\sigma \in S_n$ ($S_n$ being the symmetric group), we have

$p(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = p(x_1, \ldots, x_n)$.

That means we can permute the variables arbitrarily and still get the same polynomial.

This section shall be devoted to proving a very fundamental fact about these polynomials. That is, there are some so-called elementary symmetric polynomials, and every symmetric polynomial can be written as a polynomial in those elementary symmetric polynomials.

Definition 21.7:

Fix an $n \in \mathbb{N}$. The elementary symmetric polynomials in $n$ variables are the $n$ polynomials

$e_k(x_1, \ldots, x_n) := \sum_{1 \leq j_1 < j_2 < \cdots < j_k \leq n} x_{j_1} x_{j_2} \cdots x_{j_k}, \quad k = 1, \ldots, n$.
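
For instance, for $n = 3$ the three elementary symmetric polynomials are $e_1 = x_1 + x_2 + x_3$, $e_2 = x_1 x_2 + x_1 x_3 + x_2 x_3$ and $e_3 = x_1 x_2 x_3$.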

Without further ado, we shall proceed to the theorem that we promised:

Theorem 21.8:

Let any symmetric polynomial $p \in R[x_1, \ldots, x_n]$ be given. Then we find another polynomial $q \in R[x_1, \ldots, x_n]$ such that

$p = q(e_1, \ldots, e_n)$.

Hence, every symmetric polynomial is a polynomial in the elementary symmetric polynomials.

Proof 1:

We start out by ordering all monomials (remember, those are polynomials of the form $x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$), using the following lexicographic order:

$x_1^{\alpha_1} \cdots x_n^{\alpha_n} > x_1^{\beta_1} \cdots x_n^{\beta_n} \;:\Longleftrightarrow\; \alpha_j > \beta_j \text{ for the smallest } j \text{ with } \alpha_j \neq \beta_j$.

With this order, the largest monomial of $e_k$ is given by $x_1 x_2 \cdots x_k$; this is because for all monomials of $e_k$, the sum of the exponents equals $k$, and the order is optimized by monomials which have the first zero exponent as late as possible.

Furthermore, for any given $k_1, \ldots, k_n \geq 0$, the largest monomial of

$e_1^{k_1} e_2^{k_2} \cdots e_n^{k_n}$

is given by $x_1^{k_1 + k_2 + \cdots + k_n} x_2^{k_2 + \cdots + k_n} \cdots x_n^{k_n}$. This is because the sum of the exponents of every monomial of the product equals $k_1 + 2 k_2 + \cdots + n k_n$, the above monomial does occur (multiply all the maximal monomials from each elementary symmetric factor together), and if one of the factors of a given monomial of $e_1^{k_1} \cdots e_n^{k_n}$ coming from an elementary symmetric polynomial is not the largest monomial of that elementary symmetric polynomial, we may replace it by a larger monomial and obtain a strictly larger monomial of the product $e_1^{k_1} \cdots e_n^{k_n}$, since a part of the exponent sum is moved to the front.

Now, let a symmetric polynomial $p$ be given. We claim that if $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ is the largest monomial of $p$, then we have $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$.

For assume otherwise, say $\alpha_i < \alpha_j$ for some $i < j$. Then since $p$ is symmetric, we may exchange the exponents of the $i$-th and $j$-th variable respectively and still obtain a monomial of $p$, and the resulting monomial will be strictly larger.

Thus, if we define for $1 \leq j \leq n - 1$

$k_j := \alpha_j - \alpha_{j+1}$

and furthermore $k_n := \alpha_n$, we obtain numbers that are non-negative. Hence, we may form the product

$e_1^{k_1} e_2^{k_2} \cdots e_n^{k_n}$,

and if $c$ is the coefficient of the largest monomial of $p$, then the largest monomial of

$p - c \, e_1^{k_1} e_2^{k_2} \cdots e_n^{k_n}$

is strictly smaller than that of $p$; this is because the largest monomial of $e_1^{k_1} \cdots e_n^{k_n}$ is, by our above computation and the telescopic sums $k_j + k_{j+1} + \cdots + k_n = \alpha_j$, equal to the largest monomial of $p$, and the two thus cancel out.

Since the elementary symmetric polynomials are symmetric, and sums, linear combinations and products of symmetric polynomials are again symmetric, we may repeat this procedure until we are left with nothing. All the terms that we subtracted from $p$, collected together, then form the polynomial in the elementary symmetric polynomials that we have been looking for. $\Box$
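
The subtraction procedure of this proof is effectively an algorithm. The following Python sketch implements it for integer coefficients; the dictionary encoding (exponent tuple to coefficient) and the names poly_mul, elementary and symmetric_reduce are our own choices, not notation from the text, and the input is assumed to be genuinely symmetric.

```python
# Sketch of the reduction from Proof 1 over Z. A polynomial in n variables is
# a dict mapping exponent tuples to nonzero integer coefficients.
from itertools import combinations

def poly_mul(p, q):
    """Multiply two polynomials in the dict representation."""
    out = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(a + b for a, b in zip(ea, eb))
            out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c != 0}

def elementary(k, n):
    """The k-th elementary symmetric polynomial e_k in n variables."""
    return {tuple(1 if j in s else 0 for j in range(n)): 1
            for s in combinations(range(n), k)}

def symmetric_reduce(p, n):
    """Express a symmetric p as {(k_1, ..., k_n): c}, meaning
    p = sum over terms of c * e_1^{k_1} * ... * e_n^{k_n}."""
    p, result = dict(p), {}
    while p:
        # Largest monomial; Python's tuple order is the lexicographic order
        # used in the proof. Its exponents decrease since p is symmetric.
        alpha = max(p)
        c = p[alpha]
        ks = tuple(alpha[j] - alpha[j + 1] for j in range(n - 1)) + (alpha[-1],)
        result[ks] = result.get(ks, 0) + c
        # Subtract c * e_1^{k_1} ... e_n^{k_n}; the leading monomials cancel,
        # so the largest monomial of p strictly decreases.
        term = {tuple([0] * n): c}
        for j, k in enumerate(ks, start=1):
            for _ in range(k):
                term = poly_mul(term, elementary(j, n))
        for e, ce in term.items():
            p[e] = p.get(e, 0) - ce
        p = {e: ce for e, ce in p.items() if ce != 0}
    return result

# Example: x^2 + y^2 + z^2 = e_1^2 - 2 e_2.
p = {(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1}
print(symmetric_reduce(p, 3))  # {(2, 0, 0): 1, (0, 1, 0): -2}
```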

Proof 2:

Let $p$ be an arbitrary symmetric polynomial, and let $d$ be the degree of $p$ and $n$ be the number of variables of $p$.

In order to prove the theorem, we use induction on the sum $d + n$ of the degree and number of variables of $p$.

If $d + n = 1$, we must have $n = 1$ and $d = 0$ (since $n = 0$ would imply the absurd $d = 1$ for a polynomial in no variables). But any polynomial in one variable is already a polynomial in the elementary symmetric polynomial $e_1 = x_1$.

Let now $d + n > 1$. We write

$p = q + x_1 x_2 \cdots x_n \cdot r$,

where every monomial occurring within $q$ lacks at least one variable, that is, is not divisible by $x_1 x_2 \cdots x_n$.
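
For instance, for $p = x_1^2 x_2 + x_1 x_2^2 + x_1 + x_2$ in two variables, this decomposition reads $p = (x_1 + x_2) + x_1 x_2 \cdot (x_1 + x_2)$, that is, $q = x_1 + x_2$ and $r = x_1 + x_2$.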

The polynomial $q$ is still symmetric: any permutation of a monomial that lacks at least one variable also lacks at least one variable and hence occurs in $q$ with the same coefficient, since no part of it could have been sorted into the "$x_1 \cdots x_n \cdot r$" part.

The polynomial $r$ has the same number of variables, but the degree of $r$ is smaller than the degree of $p$. Furthermore, $r$ is symmetric because of

$x_1 \cdots x_n \cdot r(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = p(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) - q(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = p - q = x_1 \cdots x_n \cdot r(x_1, \ldots, x_n)$.

Hence, by the induction hypothesis, $r$ can be written as a polynomial in the elementary symmetric polynomials:

$r = g(e_1, \ldots, e_n)$

for a suitable $g$.

If $n = 1$, then $p$ is a polynomial in the elementary symmetric polynomial $e_1 = x_1$ anyway. Hence, it is sufficient to only consider the case $n \geq 2$. In that case, we may define the polynomial

$\hat{q}(x_1, \ldots, x_{n-1}) := q(x_1, \ldots, x_{n-1}, 0)$.

Now $\hat{q}$ has one less variable than $q$ and at most the same degree, which is why, by the induction hypothesis, we find a representation

$\hat{q} = h(e_1', \ldots, e_{n-1}')$

for a suitable $h$, where $e_1', \ldots, e_{n-1}'$ denote the elementary symmetric polynomials in the $n - 1$ variables $x_1, \ldots, x_{n-1}$.

We observe that for all $1 \leq k \leq n - 1$, we have $e_k(x_1, \ldots, x_{n-1}, 0) = e_k'(x_1, \ldots, x_{n-1})$. This is because the monomials containing $x_n$ just vanish. Hence,

$q(x_1, \ldots, x_{n-1}, 0) = h(e_1, \ldots, e_{n-1})(x_1, \ldots, x_{n-1}, 0)$.

We claim that even

$q(x_1, \ldots, x_n) = h(e_1, \ldots, e_{n-1})(x_1, \ldots, x_n)$.

Indeed, by the symmetry of $q$ and $h(e_1, \ldots, e_{n-1})$ and renaming of variables, the above equation holds whenever we set an arbitrary one of the variables equal to zero. But each monomial of $q$ lacks at least one variable. Hence, by successively equating coefficients in $q = h(e_1, \ldots, e_{n-1})$ where one of the variables is set to zero, we obtain that the coefficients on the right and left are equal, and thus the polynomials are equal.

Putting everything together, we obtain

$p = q + e_n \cdot r = h(e_1, \ldots, e_{n-1}) + e_n \cdot g(e_1, \ldots, e_n)$,

which is a polynomial in the elementary symmetric polynomials. $\Box$

Integral dependence

Definition 21.9:

If $S$ is any ring and $R \subseteq S$ a subring, an element $s \in S$ is called integral over $R$ iff

$s^n + r_{n-1} s^{n-1} + \cdots + r_1 s + r_0 = 0$

for suitable $n \in \mathbb{N}$ and $r_0, \ldots, r_{n-1} \in R$.

A polynomial of the form

$x^n + r_{n-1} x^{n-1} + \cdots + r_1 x + r_0$ (leading coefficient equals $1$)

is called a monic polynomial. Thus, $s$ being integral over $R$ means that $s$ is the root of a monic polynomial with coefficients in $R$.
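
For example, $\sqrt{2} \in \mathbb{R}$ is integral over $\mathbb{Z}$, being a root of the monic polynomial $x^2 - 2$, whereas $1/2$ is not integral over $\mathbb{Z}$: an equation $(1/2)^n + r_{n-1} (1/2)^{n-1} + \cdots + r_0 = 0$ with integer $r_j$ would, after multiplication by $2^n$, express $1$ as $2$ times an integer, a contradiction.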

Whenever we have a subring $R$ of a ring $S$, we consider the module structure of $S$ as an $R$-module, where the module operation and summation are given by the ring operations of $S$.

Theorem 21.10 (characterisation of integral dependence):

Let $S$ be a ring, $R \subseteq S$ a subring and $s \in S$. The following are equivalent:

  1. $s$ is integral over $R$.
  2. $R[s]$ is a finitely generated $R$-module.
  3. $R[s]$ is contained in a subring $T$ of $S$ that is finitely generated as an $R$-module.
  4. There exists a faithful, nonzero $R[s]$-module which is finitely generated as an $R$-module.

Proof:

1. $\Rightarrow$ 2.: Let $s$ be integral over $R$, that is, $s^n + r_{n-1} s^{n-1} + \cdots + r_0 = 0$. Let $s^m$ be an arbitrary power of $s$. If $m$ is larger or equal $n$, then we can express $s^m$ in terms of lower powers using the integral relation. Repetition of this process yields that $1, s, \ldots, s^{n-1}$ generate $R[s]$ over $R$.

2. $\Rightarrow$ 3.: Set $T := R[s]$.

3. $\Rightarrow$ 4.: Set $M := T$; $M$ is faithful as an $R[s]$-module because if $a \in R[s]$ annihilates $M$, then in particular $a = a \cdot 1 = 0$.

4. $\Rightarrow$ 1.: Let $M$ be such a module. We define the morphism of modules

$\varphi : M \to M, \; m \mapsto s \cdot m$.

We may restrict the module operation of $M$ to $R$ to obtain an $R$-module; $\varphi$ is then also a morphism of $R$-modules. Further, set $\mathfrak{a} := R$. Then $\varphi(M) \subseteq \mathfrak{a} M$ ($= M$). The Cayley–Hamilton theorem gives an equation

$\varphi^n + r_{n-1} \varphi^{n-1} + \cdots + r_1 \varphi + r_0 = 0, \quad r_0, \ldots, r_{n-1} \in R$,

where $\varphi^k$ is to be read as the multiplication operator by $s^k$ and $0$ as the zero operator, and by the faithfulness of $M$, $s^n + r_{n-1} s^{n-1} + \cdots + r_1 s + r_0 = 0$ in the usual sense. $\Box$
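
To see the determinant trick of 4. $\Rightarrow$ 1. in a concrete case, here is a small Python sketch; the setting $s = \sqrt{2}$, $R = \mathbb{Z}$, $M = \mathbb{Z}[\sqrt{2}]$ with basis $\{1, s\}$ is our choice for illustration, not an example from the text.

```python
# Multiplication by s = sqrt(2) sends 1 -> s and s -> 2 * 1, so over the
# basis {1, s} of M = Z[sqrt(2)] the operator phi has the integer matrix:
A = [[0, 2],
     [1, 0]]
# For a 2x2 matrix, the characteristic polynomial is x^2 - tr(A) x + det(A):
tr = A[0][0] + A[1][1]                        # 0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # -2
print(f"x^2 - ({tr}) x + ({det})")            # i.e. x^2 - 2
# By Cayley-Hamilton, phi^2 - 2 id = 0 on M; the faithfulness of M then turns
# this operator identity into the monic integral equation s^2 - 2 = 0 over Z.
```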

Theorem 21.11:

Let $L$ be a field and $R$ a subring of $L$. If $L$ is integral over $R$, then $R$ is a field.

Proof:

Let $r \in R \setminus \{0\}$. Since $L$ is a field, we find an inverse $r^{-1} \in L$; we don't know yet whether $r^{-1}$ is contained within $R$. Since $L$ is integral over $R$, $r^{-1}$ satisfies an equation of the form

$(r^{-1})^n + r_{n-1} (r^{-1})^{n-1} + \cdots + r_1 r^{-1} + r_0 = 0$

for suitable $r_0, \ldots, r_{n-1} \in R$. Multiplying this equation by $r^{n-1}$ yields

$r^{-1} = -\left( r_{n-1} + r_{n-2} r + \cdots + r_1 r^{n-2} + r_0 r^{n-1} \right) \in R$. $\Box$

Theorem 21.12:

Let $R$ be a subring of $S$. The set of all elements of $S$ which are integral over $R$ constitutes a subring of $S$.

Proof 1 (from the Atiyah–Macdonald book):

If $s, t \in S$ are integral over $R$, then $t$ is integral over $R[s]$. By theorem 21.10, $R[s]$ is finitely generated as an $R$-module and $R[s][t]$ is finitely generated as an $R[s]$-module. Hence, $R[s][t]$ is finitely generated as an $R$-module. Further, $s \pm t \in R[s][t]$ and $s t \in R[s][t]$. Hence, by theorem 21.10, $s \pm t$ and $s t$ are integral over $R$. $\Box$

Proof 2 (Dedekind):

If $s, t \in S$ are integral over $R$, then $R[s]$ and $R[t]$ are finitely generated as $R$-modules. Hence, so is

$R[s, t] = \left\{ \textstyle\sum_j a_j b_j \,\middle|\, a_j \in R[s],\, b_j \in R[t] \right\}$,

since it is generated by the products $s^i t^j$ of the generators of $R[s]$ and $R[t]$. Furthermore, $s \pm t \in R[s, t]$ and $s t \in R[s, t]$. Hence, by theorem 21.10, $s \pm t$ and $s t$ are integral over $R$. $\Box$
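
For example, $\sqrt{2}$ and $\sqrt{3}$ are integral over $\mathbb{Z}$ (roots of $x^2 - 2$ and $x^2 - 3$), so the theorem guarantees that $s = \sqrt{2} + \sqrt{3}$ is integral over $\mathbb{Z}$ as well; indeed, squaring $s^2 = 5 + 2\sqrt{6}$ once more gives the monic equation $s^4 - 10 s^2 + 1 = 0$.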

Definition 21.13:

Let $R$ be a subring of the ring $S$. The integral closure of $R$ within $S$ is the ring consisting of all elements of $S$ which are integral over $R$.

Definition 21.14:

Let $R$ be a subring of the ring $S$. If all elements of $S$ are integral over $R$, $S$ is called an integral ring extension of $R$.