Mathematical Methods of Physics/Vector Spaces

As is no doubt familiar from elementary Physics, the notion of a vector, a quantity that has a "magnitude" and a "direction" (whatever these may be), is very convenient in several parts of Physics. Here, we wish to put this idea on the rigorous foundation of Linear Algebra, to facilitate its further use in Physics. The interested reader is encouraged to look up the Wikibook Linear Algebra for details regarding the intricacies of the topic.

Vector Spaces


Let $F$ be a field and let $V$ be a set. $V$ is said to be a Vector Space over $F$ along with the binary operations of addition and scalar product iff

For all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$:

(i) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ ...(Commutativity)

(ii) $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$ ...(Associativity)

(iii) $\exists\, \mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$ ...(Identity)

(iv) $\forall\, \mathbf{v} \in V\ \exists\, (-\mathbf{v}) \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$ ...(Inverse)

For all $a, b \in F$ and $\mathbf{u}, \mathbf{v} \in V$:

(v) $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$

(vi) $(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}$

(vii) $a(b\mathbf{v}) = (ab)\mathbf{v}$

The elements of $V$ are called vectors while the elements of $F$ are called scalars. In most problems of Physics, the field $F$ of scalars is either the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$.
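
As a quick numerical sanity check, the axioms can be verified for $\mathbb{R}^3$ with concrete vectors. The following is a minimal sketch (the sample vectors and scalars are our own choice), assuming NumPy is available:

    import numpy as np

    # Three arbitrary vectors in R^3 and two scalars in R.
    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.5, 4.0])
    w = np.array([0.0, 7.0, -2.0])
    a, b = 2.0, -3.0
    zero = np.zeros(3)

    assert np.allclose(u + v, v + u)                # (i)   commutativity
    assert np.allclose((u + v) + w, u + (v + w))    # (ii)  associativity
    assert np.allclose(v + zero, v)                 # (iii) identity
    assert np.allclose(v + (-v), zero)              # (iv)  inverse
    assert np.allclose(a * (u + v), a * u + a * v)  # (v)   distributivity over vectors
    assert np.allclose((a + b) * v, a * v + b * v)  # (vi)  distributivity over scalars
    assert np.allclose(a * (b * v), (a * b) * v)    # (vii) compatibility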


Examples of vector spaces:

(i) The set $\mathbb{R}^3$ over $\mathbb{R}$ can be visualised as the space of the ordinary "arrow" vectors of elementary Physics.

(ii) The set of all real polynomials $P(x)$ is a vector space over $\mathbb{R}$.

(iii) Indeed, the set of all functions $f : \mathbb{R} \to \mathbb{R}$ is also a vector space over $\mathbb{R}$, with addition and scalar multiplication defined as usual.

Although the idea of vectors as "arrows" works well in most examples of vector spaces and is useful in solving problems, the latter two examples were deliberately provided as cases where this intuition fails to work.
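
To make the last example concrete, functions themselves can be treated as vectors, with addition and scalar multiplication defined pointwise. A minimal sketch in Python (the helper names add and scale are ours, purely for illustration):

    # Functions f: R -> R form a vector space under pointwise operations.
    def add(f, g):
        return lambda x: f(x) + g(x)

    def scale(a, f):
        return lambda x: a * f(x)

    f = lambda x: x**2
    g = lambda x: 3 * x

    h = add(scale(2.0, f), g)   # the "vector" 2f + g
    print(h(1.0))               # 2*1 + 3*1 = 5.0

There is no meaningful "arrow" attached to $2f + g$; the vector-space structure here is purely algebraic.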

Basis


A set $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\} \subset V$ is said to be linearly independent if and only if

$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n = \mathbf{0}$ implies that $c_1 = c_2 = \ldots = c_n = 0$, for scalars $c_i \in F$.


A set $S \subset V$ is said to span $V$ if for every $\mathbf{v} \in V$ there exist scalars $c_1, c_2, \ldots \in F$ and vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots \in S$ such that $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots$. (We leave the question of finiteness of the number of terms open at this point.)


A set $B \subset V$ is said to be a basis for $V$ if $B$ is linearly independent and $B$ spans $V$.

If a vector space has a finite basis with $n$ elements, the vector space is said to be $n$-dimensional.


As an example, we can consider the vector space $\mathbb{R}^3$ over the reals. The vectors $(1,0,0), (0,1,0), (0,0,1)$ form one of the several possible bases for $\mathbb{R}^3$. These vectors are often denoted as $\hat{i}, \hat{j}, \hat{k}$ or as $\hat{e}_1, \hat{e}_2, \hat{e}_3$.
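
Whether a given finite set of vectors in $\mathbb{R}^n$ forms a basis can also be tested numerically: $n$ vectors form a basis of $\mathbb{R}^n$ exactly when the matrix with those vectors as rows has rank $n$. A minimal sketch (the candidate sets are our own choice), assuming NumPy:

    import numpy as np

    # Rows are candidate basis vectors for R^3.
    standard = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 1]])
    skewed = np.array([[1, 1, 0],
                       [0, 1, 1],
                       [1, 0, 1]])
    degenerate = np.array([[1, 2, 3],
                           [2, 4, 6],   # a multiple of the first row
                           [0, 0, 1]])

    for m in (standard, skewed, degenerate):
        print(np.linalg.matrix_rank(m) == 3)   # True, True, False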

Theorem


Let $V$ be a vector space and let $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ be a basis for $V$. Then any subset of $V$ with $n+1$ elements is linearly dependent.

Proof


Let $S = \{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_{n+1}\} \subset V$.

By definition of basis, there exist scalars $a_{ij} \in F$ such that $\mathbf{u}_i = \sum_{j=1}^{n} a_{ij}\mathbf{e}_j$ for each $i$.

Hence we can write the condition $c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \ldots + c_{n+1}\mathbf{u}_{n+1} = \mathbf{0}$ as $\sum_{i=1}^{n+1} c_i \sum_{j=1}^{n} a_{ij}\mathbf{e}_j = \mathbf{0}$. Since the $\mathbf{e}_j$ are linearly independent, the coefficient of each $\mathbf{e}_j$ must vanish, that is

$a_{11}c_1 + a_{21}c_2 + \ldots + a_{n+1,1}c_{n+1} = 0$
$a_{12}c_1 + a_{22}c_2 + \ldots + a_{n+1,2}c_{n+1} = 0$
$\vdots$
$a_{1n}c_1 + a_{2n}c_2 + \ldots + a_{n+1,n}c_{n+1} = 0$

This is a homogeneous system of $n$ linear equations in the $n+1$ unknowns $c_1, \ldots, c_{n+1}$, which always has a nontrivial solution for the $c_i$. Hence $S$ is linearly dependent. $\blacksquare$

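The counting argument in the proof can be seen concretely: four vectors in $\mathbb{R}^3$ give a homogeneous system of 3 equations in 4 unknowns, and a nontrivial solution can be read off from the null space of the coefficient matrix. A minimal sketch, assuming NumPy (the vectors are our own choice):

    import numpy as np

    # Four vectors in R^3: one more than the dimension.
    u = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 2.0, 3.0]])

    # Solve c_1 u_1 + ... + c_4 u_4 = 0: the coefficient matrix is u
    # transposed (3 equations, 4 unknowns), so its null space is nontrivial.
    _, _, vt = np.linalg.svd(u.T)
    c = vt[-1]       # right singular vector for the zero singular value
    print(c)         # a nontrivial dependence, proportional to (1, 2, 3, -1)
    print(u.T @ c)   # ~ [0, 0, 0]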

Inner Product


An in-depth treatment of inner-product spaces will be provided in the chapter on Hilbert Spaces. Here we wish to provide an introduction to the inner product using a basis.

Let $V$ be a vector space over $F$ and let $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ be a basis for $V$. Thus for every member $\mathbf{v}$ of $V$, we can write $\mathbf{v} = \sum_{i=1}^{n} v_i\mathbf{e}_i$. The scalars $v_i$ are called the components of $\mathbf{v}$ with respect to the basis $\{\mathbf{e}_i\}$.

We define the inner product as a binary operation $\langle \cdot, \cdot \rangle : V \times V \to F$ given by $\langle \mathbf{u}, \mathbf{v} \rangle = \sum_{i=1}^{n} u_i v_i$, where $u_i, v_i$ are the components of $\mathbf{u}$ and $\mathbf{v}$ with respect to $\{\mathbf{e}_i\}$.

Note here that the inner product so defined is intrinsically dependent on the basis. Unless otherwise mentioned, we will assume the basis $\hat{i}, \hat{j}, \hat{k}$ while dealing with inner products of ordinary "vectors".
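
In components the definition is just a sum of products, and its basis dependence is easy to exhibit: the same two vectors have different component inner products in different bases. A minimal sketch, assuming NumPy (the change of basis is our own choice):

    import numpy as np

    u = np.array([1.0, 2.0, 0.0])   # components w.r.t. i-hat, j-hat, k-hat
    v = np.array([3.0, 1.0, 0.0])
    print(np.dot(u, v))             # 1*3 + 2*1 = 5.0

    # Re-express the same vectors in a different basis, e.g.
    # e1' = (1,1,0), e2' = (0,1,0), e3' = (0,0,1); columns of B.
    B = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    up = np.linalg.solve(B, u)      # components w.r.t. the new basis
    vp = np.linalg.solve(B, v)
    print(np.dot(up, vp))           # 1.0, not 5.0

The second value differs from the first, which illustrates why the basis must be fixed before the component formula is used.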

Linear Transformations


Let $U$, $V$ be vector spaces over $F$. A function $T : U \to V$ is said to be a Linear Transformation if for all $\mathbf{u}_1, \mathbf{u}_2 \in U$ and $a \in F$

(i) $T(\mathbf{u}_1 + \mathbf{u}_2) = T(\mathbf{u}_1) + T(\mathbf{u}_2)$

(ii) $T(a\mathbf{u}_1) = aT(\mathbf{u}_1)$

Now let $A = \{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ and $B = \{\mathbf{f}_1, \ldots, \mathbf{f}_m\}$ be bases for $U$ and $V$ respectively.

Let $\mathbf{u} \in U$. As $A$ is a basis, we can write $\mathbf{u} = \sum_{j=1}^{n} u_j\mathbf{e}_j$; and as $B$ is a basis, each image can be expanded as $T(\mathbf{e}_j) = \sum_{i=1}^{m} T_{ij}\mathbf{f}_i$ for some scalars $T_{ij}$.

Thus, by linearity, if $\mathbf{v} = T(\mathbf{u})$, we can write the components $v_i$ of $\mathbf{v}$ in terms of those of $\mathbf{u}$ as $v_i = \sum_{j=1}^{n} T_{ij}u_j$.

The collection of coefficients $T_{ij}$ is called a matrix, written as

$T = \begin{pmatrix} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{m1} & T_{m2} & \cdots & T_{mn} \end{pmatrix}$

and we can say that $T$ can be represented as the matrix $(T_{ij})$ with respect to the bases $A$ and $B$.
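
The recipe for building the matrix is exactly "apply $T$ to each basis vector and record the components of the image". A minimal sketch for a rotation of the plane by 90 degrees, assuming NumPy (the transformation is our own choice):

    import numpy as np

    def T(u):
        """Rotate a vector in R^2 by 90 degrees counterclockwise."""
        x, y = u
        return np.array([-y, x])

    # Columns of the matrix are T applied to the standard basis vectors.
    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    M = np.column_stack([T(e1), T(e2)])
    print(M)                          # [[0, -1], [1, 0]]

    u = np.array([2.0, 3.0])
    print(np.allclose(M @ u, T(u)))   # True: the matrix acts on components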

Eigenvalue Problems


Let $V$ be a vector space over the reals and let $T : V \to V$ be a linear transformation.

Equations of the type $T(\mathbf{v}) = \lambda\mathbf{v}$, to be solved for $\lambda \in \mathbb{R}$ and $\mathbf{v} \in V$, are called eigenvalue problems. The solutions $\lambda$ are called eigenvalues of $T$, while the corresponding $\mathbf{v}$ are called eigenvectors or eigenfunctions. (Here we take $\mathbf{v} \neq \mathbf{0}$.)
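
For a transformation represented by a matrix, the eigenvalue problem can be solved routinely. A minimal sketch, assuming NumPy (the matrix is our own choice):

    import numpy as np

    # A symmetric matrix on R^2; its eigenvalues are real.
    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    lams, vecs = np.linalg.eig(M)
    print(lams)                  # eigenvalues 3.0 and 1.0 (order may vary)
    v = vecs[:, 0]               # eigenvector paired with lams[0]
    print(np.allclose(M @ v, lams[0] * v))   # True: M v = lambda v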