# Abstract Algebra/Linear Algebra

The reader is expected to have some familiarity with linear algebra. For example, statements such as

- Given vector spaces *V* and *W* with bases *B* and *C* and dimensions *n* and *m*, respectively, a linear map *V* → *W* corresponds to a unique *m* × *n* matrix, dependent on the particular choice of bases.

should be familiar. It is impossible to give a summary of the relevant topics of linear algebra in one section, so the reader is advised to take a look at the linear algebra book.

In any case, the core of linear algebra is the study of linear functions, that is, functions with the property f(αx + βy) = αf(x) + βf(y), where Greek letters denote scalars and Roman letters denote vectors.

The core of the theory of finitely generated vector spaces is the following:

Every finite-dimensional vector space *V* is isomorphic to F^{n} for some field *F* and some natural number *n*, called the dimension of *V*. Specifying such an isomorphism is equivalent to choosing a basis for *V*. Thus, any linear map f : *V* → *W* between vector spaces with dimensions *n* and *m* and given bases *B* and *C* induces a unique linear map F^{n} → F^{m}. These maps are precisely the *m* × *n* matrices, and the matrix in question is called the *matrix representation* of *f* relative to the bases *B* and *C*.
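The correspondence above can be sketched in plain Python: the columns of the matrix representation are just the images of the basis vectors. The map `f` and the helper `matrix_representation` below are hypothetical illustrations, not a standard library API.

```python
def matrix_representation(f, basis):
    """Columns of the matrix are f applied to each basis vector,
    expressed in the standard basis of the codomain."""
    columns = [f(b) for b in basis]
    # Transpose: entry (i, j) is the i-th coordinate of f(e_j).
    rows = len(columns[0])
    return [[columns[j][i] for j in range(len(columns))] for i in range(rows)]

# Example: f(x, y) = (x + y, 2y) on R^2 with the standard basis.
f = lambda v: (v[0] + v[1], 2 * v[1])
standard_basis = [(1, 0), (0, 1)]
A = matrix_representation(f, standard_basis)
# A == [[1, 1], [0, 2]]
```

Choosing a different basis for the domain or codomain would change the resulting matrix, which is exactly why the representation is said to be *relative to the bases*.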

**Remark:** The idea of identifying a basis of a vector space with an isomorphism to F^{n} may be new to the reader, but the basic principle is the same. An alternative term for *vector space* is *coordinate space*, since any point in the space may be expressed, in some particular basis, as a sequence of field elements. (All bases are equivalent under some non-singular linear transformation.) The name *vector*, associated with pointy things like arrows, spears, or daggers, is distasteful to peace-loving people who do not imagine taking up such a weapon. The *orientation* or *direction* associated with a point in coordinate space is implicit in the positive orientation of the real line (if that is the field) or in an orientation instituted in a polar expression of the multiplicative group of the field.

A coordinate space *V* with basis {e_{1}, ..., e_{n}} has vectors v = x_{1}e_{1} + x_{2}e_{2} + ... + x_{n}e_{n}, where e_{i} is the tuple that is all zeros except for a 1 at index *i*.

As an algebraic structure, *V* is an amalgam of an abelian group (addition and subtraction of vectors), a scalar field *F* (the source of the x_{i}'s), its multiplicative group *F* *, and a group action *F* * × *V* → *V*, given by

- (λ, v) ↦ λv = (λx_{1})e_{1} + ... + (λx_{n})e_{n}.

The group action is called *scalar-vector multiplication*.

Linear transformations are mappings *T* from one coordinate space *V* to another *W* corresponding to a matrix (a_{i j}). Suppose *W* has basis {f_{1}, ..., f_{m}}, so that y = T(x) = y_{1}f_{1} + ... + y_{m}f_{m}. Then the elements of the matrix (a_{i j}) are given by the rate of change of y_{j} with respect to x_{i}:

- a_{i j} = ∂y_{j} / ∂x_{i} = constant,

where the partial derivative is constant precisely because *T* is linear.
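Because the map is linear, the difference quotient of the j-th output in the i-th input direction is exactly the constant a_{i j}, independent of the base point and the step size. A small check with an assumed example map:

```python
def T(v):
    # Linear map R^2 -> R^2: y_1 = x_1 + 3*x_2, y_2 = 2*x_1 + 4*x_2.
    return (v[0] + 3 * v[1], 2 * v[0] + 4 * v[1])

def partial(f, j, i, x, h=0.5):
    """Difference quotient of the j-th output in the i-th input direction."""
    step = tuple(h if k == i else 0.0 for k in range(len(x)))
    xh = tuple(a + b for a, b in zip(x, step))
    return (f(xh)[j] - f(x)[j]) / h

x = (7.0, -2.0)   # any base point works, since T is linear
a = [[partial(T, j, i, x) for j in range(2)] for i in range(2)]
# a == [[1.0, 2.0], [3.0, 4.0]]  (rows indexed by i, columns by j)
```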

A common case involves *V* = *W* and *n* is a low number, such as *n* = 2. When *F* = {real numbers} = R, the set of matrices is denoted M(2,R). As an algebraic structure, M(2,R) has two binary operations that make it a ring: component-wise addition and matrix multiplication. See the chapter on 2x2 real matrices for a deconstruction of M(2,R) into a pencil of planar algebras.
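The ring structure of M(2,R) can be sketched with plain 2 × 2 lists (example matrices assumed): component-wise addition and matrix multiplication are both closed binary operations, but multiplication is generally non-commutative.

```python
def madd(A, B):
    # Ring addition: component-wise.
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mmul(A, B):
    # Ring multiplication: row-by-column matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
assert mmul(A, B) != mmul(B, A)   # multiplication is non-commutative
```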

More generally, when dim *V* = dim *W* = *n*, (a_{i j}) is a square matrix, an element of M(n, F), which is a ring under the binary operations + and ×. These benchmarks in algebra serve as *representations*. In particular, when the rows or columns of such a matrix are linearly independent, then there is a matrix (b_{i j}) acting as a multiplicative inverse with respect to the identity matrix. The subset of invertible matrices is called the *general linear group*, GL(n, F). This group and its subgroups carry the burden of demonstrating the physical symmetries associated with them.
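For the 2 × 2 case, membership in GL(2, R) is decided by the determinant, and the inverse (b_{i j}) is given by the classical adjugate formula. A sketch with an assumed example matrix:

```python
def inverse_2x2(A):
    # A 2x2 real matrix is invertible iff its determinant is nonzero.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("not in GL(2, R): determinant is zero")
    # Adjugate formula: swap the diagonal, negate the off-diagonal, divide by det.
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

A = [[2.0, 1.0], [1.0, 1.0]]   # det = 1, so A lies in GL(2, R)
B = inverse_2x2(A)
# B is the multiplicative inverse of A with respect to the identity matrix
```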

The pioneers in this field included Sophus Lie, who viewed the continuous groups as evolving out of 1 in all directions according to an "algebra" now named after him. Hermann Weyl, spurred on by Eduard Study, explored and named GL(n, F) and its subgroups, calling them *the classical groups*.