The determinant is the (essentially unique) universal alternating multilinear map.
Let $Vect$ be the category of vector spaces over a field $k$, and assume for the moment that the characteristic of $k$ is not $2$. For each $n \geq 0$, let $sgn_n$ be the 1-dimensional sign representation of the symmetric group $S_n$, taking each transposition to $-1$. We may linearly extend the sign action of $S_n$, so that $sgn_n$ names a (right) $k[S_n]$-module with underlying vector space $k$. At the same time, $S_n$ acts on the $n$-fold tensor power $V^{\otimes n}$ of a vector space $V$ by permuting tensor factors, giving a left $k[S_n]$-module structure on $V^{\otimes n}$. We define the Schur functor

$$\Lambda^n \colon Vect \to Vect$$

by the formula

$$\Lambda^n(V) = sgn_n \otimes_{k[S_n]} V^{\otimes n}.$$

It is called the $n^{th}$ alternating power (of $V$).
Another point of view on the alternating power is via superalgebra. For any cosmos $\mathcal{V}$, let $CMon(\mathcal{V})$ be the category of commutative monoid objects in $\mathcal{V}$. The forgetful functor $CMon(\mathcal{V}) \to \mathcal{V}$ has a left adjoint

$$Sym \colon \mathcal{V} \to CMon(\mathcal{V})$$

whose values are naturally regarded as graded by degree $n$:

$$Sym(v) = \bigoplus_{n \geq 0} Sym^n(v), \qquad Sym^n(v) = v^{\otimes n}/S_n.$$

This applies in particular to the category of supervector spaces; if $v$ is a supervector space concentrated in odd degree, say with component $V$, then each symmetry $v \otimes v \cong v \otimes v$ maps $x \otimes y \mapsto -y \otimes x$ for elements $x, y$ of $V$. It follows that the graded component $Sym^n(v)$ is concentrated in degree $n \bmod 2$, with component $\Lambda^n(V)$.
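For instance (a standard special case, spelled out here for illustration, assuming as above that the characteristic is not $2$): if $v = k^{0|1}$ is the odd line, then the symmetry forces $\theta^2 = -\theta^2$ and hence $\theta^2 = 0$ for the generator $\theta$, so $Sym(v)$ is the exterior algebra on one odd generator:

$$Sym(k^{0|1}) \cong k[\theta]/(\theta^2).$$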
There is a canonical natural isomorphism

$$\Lambda^n(V \oplus W) \cong \bigoplus_{j+k=n} \Lambda^j(V) \otimes \Lambda^k(W).$$
Again take $\mathcal{V}$ to be the category of supervector spaces. Since the left adjoint $Sym$ preserves coproducts, and since the tensor product of $\mathcal{V}$ provides the coproduct for commutative monoid objects, we have a natural isomorphism

$$Sym(v \oplus w) \cong Sym(v) \otimes Sym(w).$$

Examining the grade-$n$ component, this leads to an identification

$$Sym^n(v \oplus w) \cong \bigoplus_{j+k=n} Sym^j(v) \otimes Sym^k(w),$$

and now the result follows by considering the case where $v, w$ are concentrated in odd degree.
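Concretely, in the lowest nontrivial case $n = 2$ the decomposition reads

$$\Lambda^2(V \oplus W) \;\cong\; \Lambda^2(V) \,\oplus\, (V \otimes W) \,\oplus\, \Lambda^2(W),$$

since $\Lambda^1(V) \cong V$ and $\Lambda^0(V) \cong k$.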
If $V$ is $n$-dimensional, then $\Lambda^k(V)$ has dimension $\binom{n}{k}$. In particular, $\Lambda^n(V)$ is 1-dimensional.
By induction on dimension. If $\dim(V) = 1$, we have that $\Lambda^0(V)$ and $\Lambda^1(V)$ are 1-dimensional, and clearly $\Lambda^k(V) = 0$ for $k > 1$, at least when the characteristic is not $2$.
We then infer, applying the proposition above with $W$ 1-dimensional,

$$\Lambda^k(V \oplus W) \cong \Lambda^k(V) \oplus \left(\Lambda^{k-1}(V) \otimes W\right),$$

where the dimensions satisfy the same recurrence relation as for binomial coefficients: $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$.
More concretely: if $e_1, \ldots, e_n$ is a basis for $V$, then expressions of the form $e_{i_1} \otimes \cdots \otimes e_{i_k}$ form a basis for $V^{\otimes k}$. Let $e_{i_1} \wedge \cdots \wedge e_{i_k}$ denote the image of such an element under the quotient map $V^{\otimes k} \to \Lambda^k(V)$. We have

$$e_{i_1} \wedge \cdots \wedge e_{i_j} \wedge \cdots \wedge e_{i_{j'}} \wedge \cdots \wedge e_{i_k} = - \, e_{i_1} \wedge \cdots \wedge e_{i_{j'}} \wedge \cdots \wedge e_{i_j} \wedge \cdots \wedge e_{i_k}$$

(consider the transposition in $S_k$ which swaps $i_j$ and $i_{j'}$), and so we may take only those expressions on the left where $i_1 < i_2 < \cdots < i_k$ as forming a spanning set for $\Lambda^k(V)$, and indeed these form a basis. The number of such expressions is $\binom{n}{k}$.
In the case where the characteristic is $2$, the same development may be carried out by simply decreeing that $e_{i_1} \wedge \cdots \wedge e_{i_k} = 0$ whenever $i_j = i_{j'}$ for some pair of distinct indices $j \neq j'$.
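To make the combinatorics concrete, here is a small Python illustration (not part of the original text) enumerating the basis of increasing wedge products and checking the dimension count:

```python
from itertools import combinations
from math import comb

# Basis of the k-th alternating power of an n-dimensional space:
# wedge products e_{i_1} ∧ ... ∧ e_{i_k} with i_1 < i_2 < ... < i_k.
def alternating_basis(n, k):
    return list(combinations(range(1, n + 1), k))

n, k = 4, 2
basis = alternating_basis(n, k)
print(basis)                      # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
assert len(basis) == comb(n, k)   # dimension of the k-th alternating power is C(n, k)
```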
Now let $V$ be an $n$-dimensional space, and let $f \colon V \to V$ be a linear map. By the corollary, the map

$$\Lambda^n(f) \colon \Lambda^n(V) \to \Lambda^n(V),$$

being an endomorphism on a 1-dimensional space, is given by multiplying by a scalar $\det(f)$. This assignment is manifestly functorial since $\Lambda^n$ is, i.e., $\det(g f) = \det(g)\det(f)$. The quantity $\det(f)$ is called the determinant of $f$.
We see then that if $V$ is of dimension $n$,

$$\det \colon \hom(V, V) \to k$$

is a homomorphism of multiplicative monoids; by commutativity of multiplication in $k$, we infer that

$$\det(f g f^{-1}) = \det(g)$$

for each invertible linear map $f$.
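A quick numerical sanity check of these two facts, using NumPy (an illustrative sketch, not part of the original article):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 3))   # generic, hence (almost surely) invertible
g = rng.standard_normal((3, 3))

# det is a homomorphism of multiplicative monoids: det(f g) = det(f) det(g)
assert np.isclose(np.linalg.det(f @ g), np.linalg.det(f) * np.linalg.det(g))

# hence det is conjugation-invariant: det(f g f^{-1}) = det(g)
assert np.isclose(np.linalg.det(f @ g @ np.linalg.inv(f)), np.linalg.det(g))
```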
If we choose a basis of $V$ so that we have an identification $\hom(V, V) \cong Mat_n(k)$, then the determinant gives a function

$$\det \colon Mat_n(k) \to k,$$

or, by restriction to invertible matrices (which have invertible determinants), a function

$$\det \colon GL_n(k) \to k^*$$

that takes products of matrices to products in $k^*$. The determinant is of course independent of the choice of basis, since any two choices are related by a change-of-basis matrix $P$, where $A$ and its transform $P A P^{-1}$ have the same determinant. The above map is furthermore natural in $k$, hence is a natural transformation from the general linear group $GL_n$ to the group of units $(-)^*$ of a field (or more generally a ring), both regarded as functors from $Field$ (or more generally $Ring$) to $Grp$.
By following the definitions above, we can give an explicit formula: for an $n \times n$ matrix $A = (a_{i j})$,

$$\det(A) = \sum_{\sigma \in S_n} sgn(\sigma)\, a_{1 \sigma(1)}\, a_{2 \sigma(2)} \cdots a_{n \sigma(n)}.$$

This may equivalently be written using the Levi-Civita symbol and the Einstein summation convention as

$$\det(A) = \epsilon^{i_1 i_2 \cdots i_n}\, a_{1 i_1}\, a_{2 i_2} \cdots a_{n i_n},$$

which in turn may be re-written more symmetrically as

$$\det(A) = \frac{1}{n!}\, \epsilon^{i_1 \cdots i_n}\, \epsilon^{j_1 \cdots j_n}\, a_{i_1 j_1} \cdots a_{i_n j_n}.$$
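The permutation-sum formula translates directly into code; the following short Python function (illustrative only, and exponential-time) implements it and checks two small cases:

```python
from itertools import permutations
from math import prod

def sign(perm):
    # Sign of a permutation of (0, ..., n-1), via the parity of its inversions.
    return (-1) ** sum(perm[i] > perm[j]
                       for i in range(len(perm))
                       for j in range(i + 1, len(perm)))

def det(A):
    # Determinant as the sum over sigma in S_n of
    # sgn(sigma) * a_{1 sigma(1)} * ... * a_{n sigma(n)}.
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 5]]) == 30   # product of the diagonal
```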
We work over fields of arbitrary characteristic. The determinant satisfies the following properties, which taken together uniquely characterize the determinant. Write a square matrix $A$ as a row of column vectors $(v_1, \ldots, v_n)$.

1. $\det$ is separately linear in each column vector:
$$\det(v_1, \ldots, a v + b w, \ldots, v_n) = a \det(v_1, \ldots, v, \ldots, v_n) + b \det(v_1, \ldots, w, \ldots, v_n)$$
2. $\det(v_1, \ldots, v_n) = 0$ whenever $v_i = v_j$ for distinct $i$, $j$.
3. $\det(I) = 1$, where $I$ is the identity matrix.
Other properties may be worked out, starting from the explicit formula or otherwise:

* If $A$ is a diagonal matrix, then $\det(A)$ is the product of its diagonal entries.
* More generally, if $A$ is an upper (or lower) triangular matrix, then $\det(A)$ is the product of the diagonal entries.
* If $E \supseteq k$ is a field extension and $f$ is a $k$-linear map $V \to V$, then $\det(f) = \det(E \otimes_k f)$. Using the preceding properties and the Jordan normal form of a matrix, this means that $\det(f)$ is the product of its eigenvalues (counted with multiplicity), as computed in the algebraic closure of $k$.
* If $A^T$ is the transpose of $A$, then $\det(A^T) = \det(A)$.
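Each of these is easy to test numerically; for example (an illustrative NumPy check, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((4, 4)))   # upper triangular
assert np.isclose(np.linalg.det(A), np.prod(np.diag(A)))   # product of diagonal

B = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(B.T), np.linalg.det(B))    # det(B^T) = det(B)
```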
A simple observation which flows from these basic properties is Cramer's rule:

Let $v, v_1, \ldots, v_n$ be column vectors of dimension $n$, and suppose

$$v = \sum_{j=1}^{n} a_j v_j.$$

Then for each $i$ we have

$$a_i \det(v_1, \ldots, v_i, \ldots, v_n) = \det(v_1, \ldots, v, \ldots, v_n),$$

where $v$ occurs as the $i^{th}$ column vector on the right.
This follows straightforwardly from properties 1 and 2 above.
For instance, given a square matrix $A$ such that $\det(A) \neq 0$, and writing $A = (v_1, \ldots, v_n)$, this allows us to solve for a vector $x = (a_1, \ldots, a_n)^T$ in an equation

$$A x = v,$$

and we easily conclude that $A$ is invertible if $\det(A) \neq 0$.
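Read as an algorithm, Cramer's rule gives a (numerically naive) linear solver; here is a minimal Python sketch, with the helper name `cramer_solve` chosen for illustration:

```python
import numpy as np

def cramer_solve(A, v):
    # Solve A x = v by Cramer's rule: x_i = det(A_i) / det(A), where A_i is
    # A with its i-th column replaced by v.  Needs n+1 determinant
    # evaluations -- fine for illustration, not for serious numerical work.
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular (det = 0)")
    x = np.empty(len(v))
    for i in range(len(v)):
        Ai = A.copy()
        Ai[:, i] = v
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
v = np.array([3.0, 5.0])
assert np.allclose(cramer_solve(A, v), np.linalg.solve(A, v))
```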
This holds true even if we replace the field $k$ by an arbitrary commutative ring $R$, and we replace the condition $\det(A) \neq 0$ by the condition that $\det(A)$ is a unit in $R$. (The entire development given above goes through, mutatis mutandis.)
Given a linear endomorphism $f \colon M \to M$ of a finite rank free unital module over a commutative unital ring, one can consider the zeros of the characteristic polynomial $\det(t \cdot \mathrm{id}_M - f)$. The coefficients of this polynomial are the concern of the Cayley-Hamilton theorem.
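As a small illustration of the Cayley-Hamilton statement (a SymPy sketch, not part of the original text), one can evaluate the characteristic polynomial at the matrix itself and check that the result vanishes:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [3, 4]])

p = A.charpoly(t)          # characteristic polynomial det(t*I - A) = t**2 - 5*t - 2
coeffs = p.all_coeffs()    # [1, -5, -2], highest degree first

# Cayley-Hamilton: A satisfies its own characteristic polynomial.
result = sp.zeros(2, 2)
for c in coeffs:
    result = result * A + c * sp.eye(2)   # Horner evaluation at the matrix A
assert result == sp.zeros(2, 2)
```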
A useful intuition to have for determinants of real matrices is that they measure change of volume. That is, an $n \times n$ matrix with real entries will map a standard unit cube in $\mathbb{R}^n$ to a parallelepiped in $\mathbb{R}^n$ (squashed to lie in a hyperplane if the matrix is singular), and the determinant is, up to sign, the volume of the parallelepiped. It is easy to convince oneself of this in the planar case by a simple dissection of a parallelogram, rearranging the dissected pieces in the style of Euclid to form a rectangle. In algebraic terms, the dissection and rearrangement amount to applying shearing or elementary column operations to the matrix which, by the properties discussed earlier, leave the determinant unchanged. These operations transform the matrix into a diagonal matrix whose determinant is the area of the corresponding rectangle. This procedure easily generalizes to $n$ dimensions.
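The "shears do not change volume" step of this dissection argument is also easy to check numerically (an illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# An elementary column operation (adding a multiple of one column to another)
# is a shear; it rearranges the parallelepiped without changing its volume,
# and correspondingly leaves the determinant unchanged.
S = A.copy()
S[:, 1] += 2.5 * S[:, 0]
assert np.isclose(np.linalg.det(S), np.linalg.det(A))
```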
The sign itself is a matter of interest. An invertible transformation $f \colon V \to V$ is said to be orientation-preserving if $\det(f)$ is positive, and orientation-reversing if $\det(f)$ is negative. Orientations play an important role throughout geometry and algebraic topology, for example in the study of orientable manifolds (where the tangent bundle as $GL(n)$-bundle can be lifted to a $GL^+(n)$-bundle structure, $GL^+(n)$ being the subgroup of matrices of positive determinant). See also KO-theory.
Finally, we include one more property of determinants, which pertains to matrices with real coefficients (and works slightly more generally for matrices with coefficients in a local field):
On the relation between determinant and trace:
If $A$ is an $n \times n$ matrix, the determinant of its exponential equals the exponential of its trace:

$$\det(\exp(A)) = \exp(tr(A)).$$

More generally, the determinant of $A$ is a polynomial in the traces of the powers of $A$:

For $2 \times 2$-matrices:

$$\det(A) = \tfrac{1}{2}\left((tr(A))^2 - tr(A^2)\right)$$

For $3 \times 3$-matrices:

$$\det(A) = \tfrac{1}{6}\left((tr(A))^3 - 3\, tr(A)\, tr(A^2) + 2\, tr(A^3)\right)$$

For $4 \times 4$-matrices:

$$\det(A) = \tfrac{1}{24}\left((tr(A))^4 - 6\,(tr(A))^2\, tr(A^2) + 3\,(tr(A^2))^2 + 8\, tr(A)\, tr(A^3) - 6\, tr(A^4)\right)$$

Generally for $n \times n$-matrices (Kondratyuk-Krivoruchenko 92, appendix B):

$$\det(A) \;=\; \sum_{\substack{k_1, \ldots, k_n \geq 0 \\ 1 k_1 + 2 k_2 + \cdots + n k_n = n}} \; \prod_{l=1}^{n} \frac{(-1)^{(l+1)\, k_l}}{l^{k_l}\, k_l!} \left(tr(A^l)\right)^{k_l} \qquad (3)$$
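Before turning to the proof, here is a quick numerical check of the exponential identity and of the $3 \times 3$ trace formula (an illustrative sketch assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# det(exp(A)) = exp(tr(A))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# det(A) = (1/6) * ((tr A)^3 - 3 tr(A) tr(A^2) + 2 tr(A^3))
t1, t2, t3 = np.trace(A), np.trace(A @ A), np.trace(A @ A @ A)
assert np.isclose(np.linalg.det(A), (t1**3 - 3*t1*t2 + 2*t3) / 6)
```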
It is enough to prove this for semisimple matrices $A$ (matrices that are diagonalizable upon passing to the algebraic closure of the ground field), because this subset of matrices is Zariski-dense (using for example the nonvanishing of the discriminant of the characteristic polynomial) and the set of $A$ for which the equation holds is Zariski-closed.
Thus, without loss of generality we may suppose that $A$ is diagonal with $n$ eigenvalues $\lambda_1, \ldots, \lambda_n$ along the diagonal, where the statement can be rewritten as follows. Letting $p_j = tr(A^j) = \lambda_1^j + \cdots + \lambda_n^j$, the following identity holds:

$$\lambda_1 \lambda_2 \cdots \lambda_n \;=\; \sum_{\substack{k_1, \ldots, k_n \geq 0 \\ 1 k_1 + 2 k_2 + \cdots + n k_n = n}} \; \prod_{l=1}^{n} \frac{(-1)^{(l+1)\, k_l}}{l^{k_l}\, k_l!}\, p_l^{k_l}.$$
This of course is just a polynomial identity, one closely related to various of the Newton identities that concern symmetric polynomials in indeterminates $\lambda_1, \ldots, \lambda_n$. Thus we again let $p_j = \lambda_1^j + \cdots + \lambda_n^j$, and define the elementary symmetric polynomials $e_k = e_k(\lambda_1, \ldots, \lambda_n)$ via the generating function identity

$$\prod_{i=1}^{n} (1 + \lambda_i t) = \sum_{k \geq 0} e_k t^k.$$
Then we compute

$$\sum_{k \geq 0} e_k t^k = \prod_{i=1}^{n}(1 + \lambda_i t) = \exp\left(\sum_{i=1}^{n} \log(1 + \lambda_i t)\right) = \exp\left(\sum_{j \geq 1} \frac{(-1)^{j-1} p_j t^j}{j}\right) = \prod_{j \geq 1}\, \sum_{k_j \geq 0} \frac{(-1)^{(j-1) k_j}\, p_j^{k_j}\, t^{j k_j}}{j^{k_j}\, k_j!}$$

and simply match coefficients of $t^n$ in the initial and final series expansions, where we easily compute

$$e_n = \lambda_1 \lambda_2 \cdots \lambda_n \;=\; \sum_{\substack{k_1, \ldots, k_n \geq 0 \\ 1 k_1 + 2 k_2 + \cdots + n k_n = n}} \; \prod_{l=1}^{n} \frac{(-1)^{(l+1)\, k_l}}{l^{k_l}\, k_l!}\, p_l^{k_l}.$$
This completes the proof.
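The polynomial identity matched above can also be verified symbolically for small $n$; the following SymPy sketch (illustrative, with the enumeration of exponent tuples done by brute force) checks the case $n = 4$:

```python
import sympy as sp
from itertools import product
from math import factorial

n = 4
lam = sp.symbols(f"l1:{n + 1}")                              # lambda_1, ..., lambda_n
p = {j: sum(l**j for l in lam) for j in range(1, n + 1)}     # power sums p_j

# Right-hand side: sum over k_1, ..., k_n >= 0 with 1*k_1 + ... + n*k_n = n.
rhs = 0
for ks in product(*(range(n // l + 1) for l in range(1, n + 1))):
    if sum(l * k for l, k in zip(range(1, n + 1), ks)) != n:
        continue
    term = sp.Integer(1)
    for l, k in zip(range(1, n + 1), ks):
        term *= sp.Integer(-1)**((l + 1) * k) * p[l]**k / (l**k * factorial(k))
    rhs += term

lhs = sp.Mul(*lam)            # e_n = lambda_1 * ... * lambda_n
assert sp.expand(lhs - rhs) == 0
```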
For the relation to Pfaffians: see Pfaffian for the moment.
matrix, linear algebra, exterior algebra, characteristic polynomial
quasideterminant, Berezinian, Jacobian, Pfaffian, hafnian, Wronskian
determinant line, determinant line bundle, Pfaffian line bundle
References

One derivation of the formula (3) for the determinant as a polynomial in traces of powers is spelled out in appendix B of:

* L. A. Kondratyuk, M. I. Krivoruchenko, Superconducting quark matter in SU(2) color group, Zeitschrift für Physik A 344 (1992) 99-115