
Sparse matrices

Chapter 1

Anti-diagonal matrix

In mathematics, an anti-diagonal matrix is a matrix where all the entries are zero except those on the diagonal going
from the lower left corner to the upper right corner, known as the anti-diagonal.

1.1 Formal definition

An n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈ {1, . . . , n} with i + j ≠ n + 1.

1.2 Example

An example of an anti-diagonal matrix is


$$\begin{pmatrix}
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 2 & 0 \\
0 & 0 & 5 & 0 & 0 \\
0 & 7 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{pmatrix}.$$

1.3 Properties

All anti-diagonal matrices are also persymmetric.


The product of two anti-diagonal matrices is a diagonal matrix. Furthermore, the product of an anti-diagonal matrix
with a diagonal matrix is anti-diagonal, as is the product of a diagonal matrix with an anti-diagonal matrix.
An anti-diagonal matrix is invertible if and only if the entries on the diagonal from the lower left corner to the upper
right corner are nonzero. The inverse of any invertible anti-diagonal matrix is also anti-diagonal, as can be seen from
the paragraph above. The determinant of an anti-diagonal matrix has absolute value given by the product of the
entries on the diagonal from the lower left corner to the upper right corner. However, the sign of this determinant
will vary because the one nonzero signed elementary product from an anti-diagonal matrix will have a different sign
depending on whether the permutation related to it is odd or even.
More precisely, the sign of the elementary product needed to calculate the determinant of an anti-diagonal matrix is
related to whether the corresponding triangular number is even or odd. This is because the number of inversions in
the permutation for the only nonzero signed elementary product of any n × n anti-diagonal matrix is always equal to
the (n − 1)th triangular number, n(n − 1)/2.
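
A quick numerical check of this sign rule (a sketch using NumPy; np.fliplr(np.diag(...)) is one way to build an anti-diagonal matrix):

```python
import numpy as np

for n in range(2, 8):
    entries = np.arange(1, n + 1)        # positive anti-diagonal entries
    A = np.fliplr(np.diag(entries))      # anti-diagonal matrix
    inversions = n * (n - 1) // 2        # (n-1)th triangular number
    expected = (-1) ** inversions * entries.prod()
    print(n, round(np.linalg.det(A)), expected)   # the two values agree
```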


1.4 See also


Main diagonal

Exchange matrix, an anti-diagonal matrix with 1s along the counter-diagonal.

1.5 External links


Anti-diagonal matrix. PlanetMath.
Chapter 2

Band matrix

In mathematics, particularly matrix theory, a band matrix is a sparse matrix whose non-zero entries are confined to
a diagonal band, comprising the main diagonal and zero or more diagonals on either side.

2.1 Band matrix

2.1.1 Bandwidth
Formally, consider an n × n matrix A = (a_{i,j}). If all matrix elements are zero outside a diagonally bordered band whose
range is determined by constants k1 and k2:

$$a_{i,j} = 0 \quad \text{if } j < i - k_1 \ \text{ or } \ j > i + k_2; \qquad k_1, k_2 \ge 0,$$

then the quantities k1 and k2 are called the lower bandwidth and upper bandwidth, respectively.[1] The bandwidth
of the matrix is the maximum of k1 and k2; in other words, it is the number k such that $a_{i,j} = 0$ if $|i - j| > k$.[2]
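
A minimal sketch (NumPy assumed) that computes the lower and upper bandwidth of a dense matrix directly from this definition:

```python
import numpy as np

def bandwidths(A):
    rows, cols = np.nonzero(A)
    if rows.size == 0:
        return 0, 0                              # zero matrix
    k1 = max(int(np.max(rows - cols)), 0)        # lower bandwidth
    k2 = max(int(np.max(cols - rows)), 0)        # upper bandwidth
    return k1, k2

T = np.diag([1, 2, 3, 4]) + np.diag([5, 6, 7], k=1)   # upper bidiagonal example
print(bandwidths(T))   # (0, 1), so the bandwidth is max(0, 1) = 1
```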

2.1.2 Definition
A matrix is called a band matrix or banded matrix if its bandwidth is reasonably small.

2.2 Examples
A band matrix with k1 = k2 = 0 is a diagonal matrix

A band matrix with k1 = k2 = 1 is a tridiagonal matrix

For k1 = k2 = 2 one has a pentadiagonal matrix and so on.

Triangular matrices

For k1 = 0, k2 = n − 1, one obtains the definition of an upper triangular matrix;


similarly, for k1 = n − 1, k2 = 0 one obtains a lower triangular matrix.

Upper and lower Hessenberg matrices

Toeplitz matrices when bandwidth is limited.

Block diagonal matrices

Shift matrices and shear matrices


Matrices in Jordan normal form


A skyline matrix, also called a variable band matrix, is a generalization of a band matrix.
The inverses of Lehmer matrices are constant tridiagonal matrices, and are thus band matrices.

2.3 Applications
In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can
be viewed as descriptions of the coupling between the problem variables; the bandedness corresponds to the fact that
variables are not coupled over arbitrarily large distances. Such matrices can be further divided; for instance, banded
matrices exist where every element in the band is nonzero. These often arise when discretising one-dimensional
problems.
Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse.
For instance, a partial differential equation on a square domain (using central differences) will yield a matrix with
a bandwidth equal to the square root of the matrix dimension, but inside the band only 5 diagonals are nonzero.
Unfortunately, applying Gaussian elimination (or equivalently an LU decomposition) to such a matrix results in the
band being filled in by many non-zero elements.

2.4 Band storage


Band matrices are usually stored by storing the diagonals in the band; the rest is implicitly zero.
For example, a tridiagonal matrix has bandwidth 1. The 6-by-6 matrix


$$\begin{bmatrix}
B_{11} & B_{12} & 0 & 0 & 0 & 0 \\
B_{21} & B_{22} & B_{23} & 0 & 0 & 0 \\
0 & B_{32} & B_{33} & B_{34} & 0 & 0 \\
0 & 0 & B_{43} & B_{44} & B_{45} & 0 \\
0 & 0 & 0 & B_{54} & B_{55} & B_{56} \\
0 & 0 & 0 & 0 & B_{65} & B_{66}
\end{bmatrix}$$

is stored as the 6-by-3 matrix


$$\begin{bmatrix}
0 & B_{11} & B_{12} \\
B_{21} & B_{22} & B_{23} \\
B_{32} & B_{33} & B_{34} \\
B_{43} & B_{44} & B_{45} \\
B_{54} & B_{55} & B_{56} \\
B_{65} & B_{66} & 0
\end{bmatrix}.$$

A further saving is possible when the matrix is symmetric. For example, consider a symmetric 6-by-6 matrix with an
upper bandwidth of 2:


$$\begin{bmatrix}
A_{11} & A_{12} & A_{13} & 0 & 0 & 0 \\
 & A_{22} & A_{23} & A_{24} & 0 & 0 \\
 & & A_{33} & A_{34} & A_{35} & 0 \\
 & & & A_{44} & A_{45} & A_{46} \\
\text{sym} & & & & A_{55} & A_{56} \\
 & & & & & A_{66}
\end{bmatrix}.$$

This matrix is stored as the 6-by-3 matrix:




$$\begin{bmatrix}
A_{11} & A_{12} & A_{13} \\
A_{22} & A_{23} & A_{24} \\
A_{33} & A_{34} & A_{35} \\
A_{44} & A_{45} & A_{46} \\
A_{55} & A_{56} & 0 \\
A_{66} & 0 & 0
\end{bmatrix}.$$
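
In practice this layout is what banded solvers consume. A hedged sketch using SciPy: scipy.linalg.solve_banded expects the band in "diagonal ordered form" (diagonals stored as rows), essentially the transpose of the 6-by-3 layout shown above; the numbers below are made up for illustration.

```python
import numpy as np
from scipy.linalg import solve_banded

# Dense tridiagonal matrix for reference.
main = np.full(6, 2.0)
upper = np.full(5, -1.0)
lower = np.full(5, -1.0)
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

# Band storage: row 0 holds the superdiagonal (left-padded), row 1 the main
# diagonal, row 2 the subdiagonal (right-padded).
ab = np.zeros((3, 6))
ab[0, 1:] = upper
ab[1, :] = main
ab[2, :-1] = lower

b = np.arange(1.0, 7.0)
x = solve_banded((1, 1), ab, b)     # lower bandwidth 1, upper bandwidth 1
print(np.allclose(A @ x, b))        # True
```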

2.5 Band form of sparse matrices


From a computational point of view, working with band matrices is always preferential to working with similarly
dimensioned square matrices. A band matrix can be likened in complexity to a rectangular matrix whose row
dimension is equal to the bandwidth of the band matrix. Thus the work involved in performing operations such as
multiplication falls significantly, often leading to huge savings in terms of calculation time and complexity.
As sparse matrices lend themselves to more efficient computation than dense matrices, as well as more efficient
utilization of computer storage, there has been much research focused on finding ways to minimise the bandwidth
(or directly minimise the fill-in) by applying permutations to the matrix, or other such equivalence or similarity
transformations.[3]
The Cuthill–McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however,
matrices for which the reverse Cuthill–McKee algorithm performs better. There are many other methods in use.
The problem of finding a representation of a matrix with minimal bandwidth by means of permutations of rows and
columns is NP-hard.[4]

2.6 See also


Graph bandwidth

2.7 Notes
[1] Golub & Van Loan 1996, 1.2.1.

[2] Atkinson 1989, p. 527.

[3] Davis 2006, 7.7.

[4] Feige 2000.

2.8 References
Atkinson, Kendall E. (1989), An Introduction to Numerical Analysis, John Wiley & Sons, ISBN 0-471-62489-6.

Davis, Timothy A. (2006), Direct Methods for Sparse Linear Systems, Society for Industrial and Applied Math-
ematics, ISBN 978-0-898716-13-9.

Feige, Uriel (2000), "Coping with the NP-Hardness of the Graph Bandwidth Problem", Algorithm Theory –
SWAT 2000, Lecture Notes in Computer Science, 1851, pp. 129–145, doi:10.1007/3-540-44985-X_2.

Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins,
ISBN 978-0-8018-5414-9.

Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.4", Numerical Recipes: The Art
of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.

2.9 External links


Information pertaining to LAPACK and band matrices

A tutorial on banded matrices and other sparse matrix formats


Chapter 3

Bidiagonal matrix

In mathematics, a bidiagonal matrix is a banded matrix with non-zero entries along the main diagonal and either
the diagonal above or the diagonal below. This means there are exactly two non-zero diagonals in the matrix.
When the diagonal above the main diagonal has the non-zero entries the matrix is upper bidiagonal. When the
diagonal below the main diagonal has the non-zero entries the matrix is lower bidiagonal.
For example, the following matrix is upper bidiagonal:


$$\begin{pmatrix}
1 & 4 & 0 & 0 \\
0 & 4 & 1 & 0 \\
0 & 0 & 3 & 4 \\
0 & 0 & 0 & 3
\end{pmatrix}$$

and the following matrix is lower bidiagonal:


$$\begin{pmatrix}
1 & 0 & 0 & 0 \\
2 & 4 & 0 & 0 \\
0 & 3 & 3 & 0 \\
0 & 0 & 4 & 3
\end{pmatrix}.$$
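
A small illustration (NumPy assumed) that builds these two examples from their two non-zero diagonals:

```python
import numpy as np

upper_bidiagonal = np.diag([1, 4, 3, 3]) + np.diag([4, 1, 4], k=1)
lower_bidiagonal = np.diag([1, 4, 3, 3]) + np.diag([2, 3, 4], k=-1)
print(upper_bidiagonal)
print(lower_bidiagonal)
```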

3.1 Usage
One variant of the QR algorithm starts with reducing a general matrix into a bidiagonal one,[1] and the Singular value
decomposition uses this method as well.

3.1.1 Bidiagonalization

Main article: Bidiagonalization

3.2 See also


List of matrices

LAPACK

Hessenberg form – the Hessenberg form is similar, but has more than two non-zero diagonals.


3.3 References
Stewart, G. W. (2001) Matrix Algorithms, Volume II: Eigensystems. Society for Industrial and Applied Mathe-
matics. ISBN 0-89871-503-2.

[1] Bochkanov, Sergey Anatolyevich. "ALGLIB User Guide – General Matrix operations – Singular value decomposition".
ALGLIB Project. 2010-12-11. URL: http://www.alglib.net/matrixops/general/svd.php. Accessed: 2010-12-11. (Archived
by WebCite at https://www.webcitation.org/5utO4iSnR)

3.4 External links


High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
Chapter 4

Block matrix

In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into
sections called blocks or submatrices.[1] Intuitively, a matrix interpreted as a block matrix can be visualized as the
original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of
smaller matrices.[2] Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation
dened by how its rows and columns are partitioned.
This notion can be made more precise for an n by m matrix M by partitioning n into a collection rowgroups, and
then partitioning m into a collection colgroups. The original matrix is then considered as the total of these groups,
in the sense that the (i, j) entry of the original matrix corresponds in a 1-to-1 way with some (s, t) offset entry of
some (x, y), where x ∈ rowgroups and y ∈ colgroups.
Block matrix algebra arises in general from biproducts in categories of matrices.[3]

4.1 Example
The matrix


$$P = \begin{bmatrix}
1 & 1 & 2 & 2 \\
1 & 1 & 2 & 2 \\
3 & 3 & 4 & 4 \\
3 & 3 & 4 & 4
\end{bmatrix}$$

can be partitioned into four 2 × 2 blocks

$$P_{11} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad
P_{12} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix}, \quad
P_{21} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix}, \quad
P_{22} = \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix}.$$

The partitioned matrix can then be written as

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.$$
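
A short illustration (NumPy assumed) assembling the partitioned matrix P from its four blocks:

```python
import numpy as np

P11 = np.full((2, 2), 1)
P12 = np.full((2, 2), 2)
P21 = np.full((2, 2), 3)
P22 = np.full((2, 2), 4)

P = np.block([[P11, P12],
              [P21, P22]])
print(P)
```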

4.2 Block matrix multiplication


It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The
partitioning of the factors is not arbitrary, however, and requires conformable partitions[4] between two matrices
A and B such that all submatrix products that will be used are defined.[5] Given an (m × p) matrix A with q row
partitions and s column partitions


Figure: a 168×168 element block matrix with 12×12, 12×24, 24×12, and 24×24 sub-matrices. Non-zero elements are in blue,
zero elements are grayed.


$$A = \begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1s} \\
A_{21} & A_{22} & \cdots & A_{2s} \\
\vdots & \vdots & \ddots & \vdots \\
A_{q1} & A_{q2} & \cdots & A_{qs}
\end{bmatrix}$$

and a (p × n) matrix B with s row partitions and r column partitions


$$B = \begin{bmatrix}
B_{11} & B_{12} & \cdots & B_{1r} \\
B_{21} & B_{22} & \cdots & B_{2r} \\
\vdots & \vdots & \ddots & \vdots \\
B_{s1} & B_{s2} & \cdots & B_{sr}
\end{bmatrix},$$

that are compatible with the partitions of A , the matrix product

C = AB

can be formed blockwise, yielding C as an (m × n) matrix with q row partitions and r column partitions. The matrices
in the resulting matrix C are calculated by multiplying:


$$C_{\alpha\beta} = \sum_{\gamma=1}^{s} A_{\alpha\gamma} B_{\gamma\beta}.$$

Or, using the Einstein notation that implicitly sums over repeated indices:

$$C_{\alpha\beta} = A_{\alpha\gamma} B_{\gamma\beta}.$$
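
A sketch of blockwise multiplication (NumPy assumed; the partition sizes below are made up). Each block of C is the sum over γ of A-blocks times B-blocks, and the result agrees with the ordinary product of the assembled matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
row_parts, mid_parts, col_parts = [2, 3], [1, 2], [2, 2]   # q = 2, s = 2, r = 2

A = [[rng.standard_normal((m, p)) for p in mid_parts] for m in row_parts]
B = [[rng.standard_normal((p, n)) for n in col_parts] for p in mid_parts]

C = [[sum(A[a][g] @ B[g][b] for g in range(len(mid_parts)))
      for b in range(len(col_parts))]
     for a in range(len(row_parts))]

print(np.allclose(np.block(C), np.block(A) @ np.block(B)))   # True
```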

4.3 Block matrix inversion


See also: HelmertWolf blocking

If a matrix is partitioned into four blocks, it can be inverted blockwise as follows:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} =
\begin{bmatrix}
A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\
-(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1}
\end{bmatrix},$$

where A, B, C and D have arbitrary size. (A and D must be square, so that they can be inverted. Furthermore, A
and D − CA^{-1}B must be nonsingular.[6])
Equivalently (when D and A − BD^{-1}C are nonsingular),

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} =
\begin{bmatrix}
(A - BD^{-1}C)^{-1} & -(A - BD^{-1}C)^{-1}BD^{-1} \\
-D^{-1}C(A - BD^{-1}C)^{-1} & D^{-1} + D^{-1}C(A - BD^{-1}C)^{-1}BD^{-1}
\end{bmatrix}.$$
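
A quick numerical check of the first blockwise inversion formula (a sketch with random, well-conditioned blocks; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 3 * np.eye(2)

M = np.block([[A, B], [C, D]])
Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                       # Schur complement of A
Sinv = np.linalg.inv(S)

M_inv_blockwise = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv],
])
print(np.allclose(M_inv_blockwise, np.linalg.inv(M)))   # True
```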

4.4 Block diagonal matrices


A block diagonal matrix is a block matrix that is a square matrix such that the main diagonal blocks are square
matrices and all off-diagonal blocks are zero matrices. A block diagonal matrix A has the form


$$A = \begin{bmatrix}
A_1 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_n
\end{bmatrix}$$

where Ak is a square matrix; in other words, it is the direct sum of A1, ..., An. It can also be indicated as A1 ⊕ A2
⊕ ... ⊕ An or diag(A1, A2, ..., An) (the latter being the same formalism used for a diagonal matrix). Any square
matrix can trivially be considered a block diagonal matrix with only one block.
For the determinant and trace, the following properties hold

det A = det A1 × ... × det An,

tr A = tr A1 + ... + tr An.
The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverse of each block, as
follows:

$$\begin{bmatrix}
A_1 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_n
\end{bmatrix}^{-1}
= \begin{bmatrix}
A_1^{-1} & 0 & \cdots & 0 \\
0 & A_2^{-1} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_n^{-1}
\end{bmatrix}.$$

The eigenvalues and eigenvectors of A are simply those of A1 and A2 and ... and An (combined).
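
A brief illustration (SciPy and NumPy assumed) of the determinant, trace and eigenvalue properties stated above:

```python
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[1.0, 2.0],
               [0.0, 3.0]])
A2 = np.array([[4.0]])
A = block_diag(A1, A2)

print(np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2)))  # True
print(np.isclose(np.trace(A), np.trace(A1) + np.trace(A2)))                 # True
print(np.sort(np.linalg.eigvals(A)))    # eigenvalues of A1 and A2 combined: 1, 3, 4
```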

4.5 Block tridiagonal matrices


A block tridiagonal matrix is another special block matrix, which is, just like the block diagonal matrix, a square
matrix having square matrices (blocks) on the lower diagonal, main diagonal and upper diagonal, with all other blocks
being zero matrices. It is essentially a tridiagonal matrix but has submatrices in places of scalars. A block tridiagonal
matrix A has the form


$$A = \begin{bmatrix}
B_1 & C_1 & & & & & \\
A_2 & B_2 & C_2 & & & & \\
 & \ddots & \ddots & \ddots & & & \\
 & & A_k & B_k & C_k & & \\
 & & & \ddots & \ddots & \ddots & \\
 & & & & A_{n-1} & B_{n-1} & C_{n-1} \\
 & & & & & A_n & B_n
\end{bmatrix}$$

where Ak, Bk and Ck are square sub-matrices of the lower, main and upper diagonal respectively.
Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational
fluid dynamics). Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms
for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for
efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to
block tridiagonal matrices (see also Block LU decomposition).
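
A sketch of the block version of the Thomas algorithm mentioned above (NumPy assumed; the block names follow the A_k, B_k, C_k notation of this section, and the test data are random):

```python
import numpy as np

def block_thomas(A_sub, B_diag, C_sup, d):
    # A_sub: n-1 lower blocks, B_diag: n diagonal blocks,
    # C_sup: n-1 upper blocks, d: n right-hand-side blocks.
    n = len(B_diag)
    Bp, dp = [B_diag[0]], [d[0]]
    for i in range(1, n):                        # forward elimination
        W = A_sub[i - 1] @ np.linalg.inv(Bp[i - 1])
        Bp.append(B_diag[i] - W @ C_sup[i - 1])
        dp.append(d[i] - W @ dp[i - 1])
    x = [None] * n
    x[n - 1] = np.linalg.solve(Bp[n - 1], dp[n - 1])
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C_sup[i] @ x[i + 1])
    return x

# Small test with 2x2 blocks.
rng = np.random.default_rng(2)
n, m = 4, 2
B_diag = [rng.standard_normal((m, m)) + 4 * np.eye(m) for _ in range(n)]
A_sub = [rng.standard_normal((m, m)) for _ in range(n - 1)]
C_sup = [rng.standard_normal((m, m)) for _ in range(n - 1)]
d = [rng.standard_normal(m) for _ in range(n)]

x = block_thomas(A_sub, B_diag, C_sup, d)

# Assemble the dense block tridiagonal matrix and check the residual.
M = np.zeros((n * m, n * m))
for i in range(n):
    M[i*m:(i+1)*m, i*m:(i+1)*m] = B_diag[i]
    if i > 0:
        M[i*m:(i+1)*m, (i-1)*m:i*m] = A_sub[i - 1]
        M[(i-1)*m:i*m, i*m:(i+1)*m] = C_sup[i - 1]
print(np.allclose(M @ np.concatenate(x), np.concatenate(d)))   # True
```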

4.6 Block Toeplitz matrices


A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals
of the matrix, as a Toeplitz matrix has elements repeated down the diagonal. The individual block matrix elements,
Aij, must also be a Toeplitz matrix.
A block Toeplitz matrix A has the form


$$A = \begin{bmatrix}
A_{(1,1)} & A_{(1,2)} & \cdots & A_{(1,n-1)} & A_{(1,n)} \\
A_{(2,1)} & A_{(1,1)} & A_{(1,2)} & \cdots & A_{(1,n-1)} \\
\vdots & A_{(2,1)} & A_{(1,1)} & A_{(1,2)} & \vdots \\
A_{(n-1,1)} & \ddots & A_{(2,1)} & A_{(1,1)} & A_{(1,2)} \\
A_{(n,1)} & A_{(n-1,1)} & \cdots & A_{(2,1)} & A_{(1,1)}
\end{bmatrix}.$$

4.7 Direct sum


For any arbitrary matrices A (of size m × n) and B (of size p × q), we have the direct sum of A and B, denoted by
A ⊕ B and defined as


$$A \oplus B = \begin{bmatrix}
a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} & 0 & \cdots & 0 \\
0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & b_{p1} & \cdots & b_{pq}
\end{bmatrix}.$$

For instance,


$$\begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \end{bmatrix} \oplus
\begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix}
1 & 3 & 2 & 0 & 0 \\
2 & 3 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 6 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.$$

This operation generalizes naturally to arbitrary dimensioned arrays (provided that A and B have the same number
of dimensions).
Note that any element in the direct sum of two vector spaces of matrices could be represented as a direct sum of two
matrices.
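
A minimal sketch of the direct sum in code: scipy.linalg.block_diag produces exactly this block layout (SciPy and NumPy assumed).

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])
print(block_diag(A, B))
# [[1 3 2 0 0]
#  [2 3 1 0 0]
#  [0 0 0 1 6]
#  [0 0 0 0 1]]
```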

4.8 Direct product


Main article: Kronecker product

4.9 Partitioned identity matrix


The identity matrix can be partitioned, and written as a sum of block pieces in more than one way; for example,

$$I = \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}
= \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}
+ \begin{bmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}
= \begin{bmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}
+ \begin{bmatrix} 0&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}.$$

Writing

$$I_a = \begin{bmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}, \qquad
I_b = \begin{bmatrix} 0&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}, \qquad
A = \begin{bmatrix} 1&1&2&2 \\ 1&1&2&2 \\ 3&3&4&4 \\ 3&3&4&4 \end{bmatrix},$$

we have

$$AI_a = \begin{bmatrix} 1&1&2&2 \\ 1&1&2&2 \\ 3&3&4&4 \\ 3&3&4&4 \end{bmatrix}
\begin{bmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{bmatrix}
= \begin{bmatrix} 1&0&0&0 \\ 1&0&0&0 \\ 3&0&0&0 \\ 3&0&0&0 \end{bmatrix}, \qquad
AI_b = \begin{bmatrix} 1&1&2&2 \\ 1&1&2&2 \\ 3&3&4&4 \\ 3&3&4&4 \end{bmatrix}
\begin{bmatrix} 0&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix}
= \begin{bmatrix} 0&1&2&2 \\ 0&1&2&2 \\ 0&3&4&4 \\ 0&3&4&4 \end{bmatrix},$$

so that

$$AI_a + AI_b = A = AI.$$

4.10 Application
In linear algebra terms, the use of a block matrix corresponds to having a linear mapping thought of in terms of
corresponding 'bunches' of basis vectors. That again matches the idea of having distinguished direct sum decompositions
of the domain and range. It is always particularly significant if a block is the zero matrix; that carries the information
that a summand maps into a sub-sum.
Given the interpretation via linear mappings and direct sums, there is a special type of block matrix that occurs
for square matrices (the case m = n). For those we can assume an interpretation as an endomorphism of an n-
dimensional space V; the block structure in which the bunching of rows and columns is the same is of importance
because it corresponds to having a single direct sum decomposition on V (rather than two). In that case, for example,
the diagonal blocks in the obvious sense are all square. This type of structure is required to describe the Jordan
normal form.

This technique is used to cut down calculations of matrices, column-row expansions, and many computer science
applications, including VLSI chip design. An example is the Strassen algorithm for fast matrix multiplication, as well
as the Hamming(7,4) encoding for error detection and recovery in data transmissions.

4.11 Notes
[1] Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved
24 April 2013. "We shall find that it is sometimes convenient to subdivide a matrix into rectangular blocks of elements.
This leads us to consider so-called partitioned, or block, matrices."

[2] Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 30. ISBN 0-471-58742-7. A
matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected
rows and columns.

[3] Macedo, H.D.; Oliveira, J.N. (2013). Typing linear algebra: A biproduct-oriented approach. Science of Computer
Programming. 78 (11): 21602191. doi:10.1016/j.scico.2012.07.012.

[4] Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved
24 April 2013. A partitioning as in Theorem 1.9.4 is called a conformable partition of A and B.

[5] Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 36. ISBN 0-471-58742-7.
...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed.

[6] Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 0-691-11802-7.

4.12 References
Strang, Gilbert (1999). "Lecture 3: Multiplication and inverse matrices". MIT OpenCourseWare. 18:30–21:10.
Chapter 5

Reverse Cuthill-McKee algorithm

Figures: Cuthill–McKee ordering of a matrix; RCM ordering of the same matrix.

In numerical linear algebra, the Cuthill–McKee algorithm (CM), named for Elizabeth Cuthill and James[1] McKee,[2]
is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small
bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George is the same algorithm but with
the resulting index numbers reversed.[3] In practice this generally results in less fill-in than the CM ordering when
Gaussian elimination is applied.[4]
The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It
starts with a peripheral node and then generates levels Ri for i = 1, 2, ... until all nodes are exhausted. The set Ri+1
is created from set Ri by listing all vertices adjacent to all nodes in Ri. These nodes are listed in order of increasing
degree. This last detail is the only difference from the breadth-first search algorithm.

5.1 Algorithm

Given a symmetric n × n matrix we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee
algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix.
The algorithm produces an ordered n-tuple R of vertices which is the new order of the vertices.

First we choose a peripheral vertex (the vertex with the lowest degree) x and set R := ({x}) .
Then for i = 1, 2, . . . we iterate the following steps while |R| < n

Construct the adjacency set Ai of Ri (with Ri the i-th component of R) and exclude the vertices we already
have in R:

Ai := Adj(Ri) \ R

Sort Ai in ascending order of vertex degree.


Append Ai to the result set R.

In other words, number the vertices according to a particular breadth-first traversal where neighboring vertices are
visited in order from lowest to highest vertex degree.
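
A short usage sketch (SciPy assumed): scipy.sparse.csgraph.reverse_cuthill_mckee returns the RCM permutation, and applying it symmetrically to rows and columns typically shrinks the bandwidth considerably. The test matrix below is random.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# A random symmetric sparse matrix with a scattered sparsity pattern.
A = sp.random(50, 50, density=0.05, random_state=3)
A = sp.csr_matrix(A + A.T + sp.eye(50))

def bandwidth(M):
    i, j = M.nonzero()
    return int(np.max(np.abs(i - j)))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm, :][:, perm]
print(bandwidth(A), "->", bandwidth(A_rcm))
```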

5.2 See also


Graph bandwidth

Sparse matrix

5.3 References
[1] Recommendations for ship hull surface representation, page 6

[2] E. Cuthill and J. McKee. "Reducing the bandwidth of sparse symmetric matrices". In Proc. 24th Nat. Conf. ACM, pages
157–172, 1969.

[3] http://ciprian-zavoianu.blogspot.ch/2009/01/project-bandwidth-reduction.html

[4] J. A. George and J. W-H. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, 1981

Cuthill–McKee documentation for the Boost C++ Libraries.

A detailed description of the Cuthill–McKee algorithm.


symrcm MATLAB's implementation of RCM.

reverse_cuthill_mckee RCM routine from SciPy written in Cython.


Chapter 6

Diagonal matrix

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero. The term
usually refers to square matrices. An example of a 2-by-2 diagonal matrix is $\begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$; the following matrix is a 3-by-3
diagonal matrix:

$$\begin{pmatrix} 6 & 0 & 0 \\ 0 & 7 & 0 \\ 0 & 0 & 19 \end{pmatrix}.$$

An identity matrix of any size, or any multiple of it, will be a diagonal matrix.

6.1 Background
As stated above, the off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal
if

$$d_{i,j} = 0 \ \text{ whenever } \ i \neq j, \qquad i, j \in \{1, 2, \ldots, n\}.$$

However, the main diagonal entries need not be zero.

6.2 Rectangular diagonal matrices


The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with
all the entries not of the form d_{i,i} being zero. For example:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix}
\quad \text{or} \quad
\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \end{bmatrix}$$

6.3 Symmetric diagonal matrices


The following matrix is a symmetric diagonal matrix:


$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$

If the entries are real numbers or complex numbers, then it is a normal matrix as well.
In the remainder of this article we will consider only square matrices.


6.3.1 Scalar matrix


A square diagonal matrix with all its main diagonal entries equal is a scalar matrix, that is, a scalar multiple λI of the
identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3 × 3 scalar matrix has the form:

$$\begin{pmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{pmatrix} \equiv \lambda I_3$$

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute
with all other square matrices of the same size.
For an abstract vector space V (rather than the concrete vector space K^n), or more generally a module M over a
ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices,
the analog of scalar matrices are scalar transformations. Formally, scalar multiplication is a linear map, inducing
a map R → End(M) (sending a scalar λ to the corresponding scalar transformation, multiplication by λ), exhibiting
End(M) as an R-algebra. For vector spaces, or more generally free modules M = R^n, for which the endomorphism
algebra is isomorphic to a matrix algebra, the scalar transforms are exactly the center of the endomorphism algebra,
and similarly the invertible transforms are the center of the general linear group GL(V), where they are denoted by Z(V),
following the usual notation for the center.

6.4 Matrix operations


The operations of matrix addition and matrix multiplication are especially simple for symmetric diagonal matrices.
Write diag(a1 , ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1 , ..., an.
Then, for addition, we have

diag(a1 , ..., an) + diag(b1 , ..., bn) = diag(a1 + b1 , ..., an + bn)

and for matrix multiplication,

diag(a1 , ..., an) diag(b1 , ..., bn) = diag(a1 b1 , ..., anbn).

The diagonal matrix diag(a1 , ..., an) is invertible if and only if the entries a1 , ..., an are all non-zero. In this case, we
have

diag(a1, ..., an)^{-1} = diag(a1^{-1}, ..., an^{-1}).

In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.
Multiplying an n-by-n matrix A from the left with diag(a1 , ..., an) amounts to multiplying the ith row of A by ai for
all i; multiplying the matrix A from the right with diag(a1 , ..., an) amounts to multiplying the ith column of A by ai
for all i.
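
A quick illustration (NumPy assumed) of this row/column scaling property:

```python
import numpy as np

D = np.diag([1.0, 10.0, 100.0])
A = np.arange(9.0).reshape(3, 3)

print(D @ A)   # row i of A multiplied by the ith diagonal entry
print(A @ D)   # column j of A multiplied by the jth diagonal entry
```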

6.5 Operator matrix in eigenbasis


Main articles: Finding the matrix of a transformation and Eigenvalues and eigenvectors

As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix
takes the diagonal form. Being diagonal means that all coefficients a_{i,j} other than a_{i,i} are zero in the defining equation
$Ae_j = \sum_i a_{i,j} e_i$, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues
and designated λ_i in the equation, which reduces to $Ae_i = \lambda_i e_i$. The resulting equation is known as the eigenvalue
equation[1] and is used to derive the characteristic polynomial and, further, the eigenvalues and eigenvectors.
In other words, the eigenvalues of diag(λ1, ..., λn) are λ1, ..., λn with associated eigenvectors e1, ..., en.

6.6 Properties
The determinant of diag(a1 , ..., an) is the product a1 ...an.
The adjugate of a diagonal matrix is again diagonal.
A square matrix is diagonal if and only if it is triangular and normal.
Any square diagonal matrix is also a symmetric matrix.
A symmetric diagonal matrix can be defined as a matrix that is both upper- and lower-triangular. The identity matrix
In and any square zero matrix are diagonal. A one-dimensional matrix is always diagonal.

6.7 Applications
Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation
and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a
diagonal matrix.
In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X^{-1}AX is
diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.
Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is
unitarily similar to a diagonal matrix (if AA* = A*A then there exists a unitary matrix U such that UAU* is diagonal).
Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V
such that U*AV is diagonal with positive entries.
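
A small numerical illustration of diagonalization (NumPy assumed): for a diagonalizable A, X^{-1}AX is diagonal when the columns of X are n linearly independent eigenvectors.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric, hence diagonalizable
eigenvalues, X = np.linalg.eig(A)
D = np.linalg.inv(X) @ A @ X             # numerically diagonal
print(np.allclose(D, np.diag(eigenvalues)))   # True
```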

6.8 Operator theory


In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to
solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable
partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates (in
the language of operators, an integral transform) which changes the basis to an eigenbasis of eigenfunctions: this
makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant
coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator,
say, in the heat equation.
Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function;
the values of the function at each point correspond to the diagonal entries of a matrix.

6.9 See also


Anti-diagonal matrix
Banded matrix
Bidiagonal matrix
Diagonally dominant matrix
Jordan normal form
Multiplication operator
Tridiagonal matrix
Toeplitz matrix
Toral Lie algebra
Circulant matrix

6.10 Notes
[1] Nearing, James (2010). Chapter 7.9: Eigenvalues and Eigenvectors (PDF). Mathematical Tools for Physics. ISBN
048648212X. Retrieved January 1, 2012.

6.11 References
Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-
30586-1 (hardback), ISBN 0-521-38632-2 (paperback).
Chapter 7

Generalized permutation matrix

In mathematics, a generalized permutation matrix (or monomial matrix) is a matrix with the same nonzero pattern
as a permutation matrix, i.e. there is exactly one nonzero entry in each row and each column. Unlike a permutation
matrix, where the nonzero entry must be 1, in a generalized permutation matrix the nonzero entry can be any nonzero
value. An example of a generalized permutation matrix is


$$\begin{bmatrix}
0 & 0 & 3 & 0 \\
0 & 2 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.$$

7.1 Structure
An invertible matrix A is a generalized permutation matrix if and only if it can be written as a product of an invertible
diagonal matrix D and an (implicitly invertible) permutation matrix P: i.e.,

A = DP.

7.1.1 Group structure


The set of n × n generalized permutation matrices with entries in a field F forms a subgroup of the general linear group
GL(n, F), in which the group of nonsingular diagonal matrices Δ(n, F) forms a normal subgroup. Indeed, the generalized
permutation matrices are the normalizer of the diagonal matrices, meaning that the generalized permutation
matrices are the largest subgroup of GL in which diagonal matrices are normal.
The abstract group of generalized permutation matrices is the wreath product of F^× and Sn. Concretely, this means
that it is the semidirect product of Δ(n, F) by the symmetric group Sn:

Δ(n, F) ⋊ Sn,

where Sn acts by permuting coordinates and the diagonal matrices Δ(n, F) are isomorphic to the n-fold product (F^×)^n.
To be precise, the generalized permutation matrices are a (faithful) linear representation of this abstract wreath
product: a realization of the abstract group as a subgroup of matrices.

7.1.2 Subgroups
The subgroup where all entries are 1 is exactly the permutation matrices, which is isomorphic to the symmetric
group.


The subgroup where all entries are ±1 is the signed permutation matrices, which is the hyperoctahedral group.

The subgroup where the entries are mth roots of unity, μ_m, is isomorphic to a generalized symmetric group.

The subgroup of diagonal matrices is abelian, normal, and a maximal abelian subgroup. The quotient group is
the symmetric group, and this construction is in fact the Weyl group of the general linear group: the diagonal
matrices are a maximal torus in the general linear group (and are their own centralizer), the generalized per-
mutation matrices are the normalizer of this torus, and the quotient, N (T )/Z(T ) = N (T )/T = Sn is the
Weyl group.

7.2 Properties
If a nonsingular matrix and its inverse are both nonnegative matrices (i.e. matrices with nonnegative entries),
then the matrix is a generalized permutation matrix.

7.3 Generalizations
One can generalize further by allowing the entries to lie in a ring, rather than in a field. In that case if the non-zero
entries are required to be units in the ring (invertible), one again obtains a group. On the other hand, if the non-zero
entries are only required to be non-zero, but not necessarily invertible, this set of matrices forms a semigroup instead.
One may also schematically allow the non-zero entries to lie in a group G, with the understanding that matrix
multiplication will only involve multiplying a single pair of group elements, not adding group elements. This is an abuse of
notation, since the elements of the matrices being multiplied must allow multiplication and addition, but it is a suggestive
notion for the (formally correct) abstract group G ≀ Sn (the wreath product of the group G by the symmetric group).

7.4 Signed permutation group


Further information: Hyperoctahedral group

A signed permutation matrix is a generalized permutation matrix whose nonzero entries are ±1; the signed permutation
matrices are exactly the integer generalized permutation matrices with integer inverse.

7.4.1 Properties
It is the Coxeter group Bn, and has order 2^n n!.

It is the symmetry group of the hypercube and (dually) of the cross-polytope.

Its index 2 subgroup of matrices with determinant 1 is the Coxeter group Dn and is the symmetry group of the
demihypercube.

It is a subgroup of the orthogonal group.

7.5 Applications

7.5.1 Monomial representations


Main article: Monomial representation

Monomial matrices occur in representation theory in the context of monomial representations. A monomial
representation of a group G is a linear representation ρ: G → GL(n, F) of G (here F is the defining field of the representation)
such that the image ρ(G) is a subgroup of the group of monomial matrices.

7.6 References
Joyner, David (2008). Adventures in group theory. Rubik's cube, Merlin's machine, and other mathematical toys
(2nd updated and revised ed.). Baltimore, MD: Johns Hopkins University Press. ISBN 978-0-8018-9012-3.
Zbl 1221.00013.
Chapter 8

Heptadiagonal matrix

In linear algebra, a heptadiagonal matrix is a matrix that is nearly diagonal; to be exact, it is a matrix in which the
only nonzero entries are on the main diagonal, and the first three diagonals above and below it. So it is of the form


$$\begin{pmatrix}
B_{11} & B_{12} & B_{13} & B_{14} & 0 & 0 & 0 & 0 \\
B_{21} & B_{22} & B_{23} & B_{24} & B_{25} & 0 & 0 & 0 \\
B_{31} & B_{32} & B_{33} & B_{34} & B_{35} & B_{36} & 0 & 0 \\
B_{41} & B_{42} & B_{43} & B_{44} & B_{45} & B_{46} & B_{47} & 0 \\
0 & B_{52} & B_{53} & B_{54} & B_{55} & B_{56} & B_{57} & B_{58} \\
0 & 0 & B_{63} & B_{64} & B_{65} & B_{66} & B_{67} & B_{68} \\
0 & 0 & 0 & B_{74} & B_{75} & B_{76} & B_{77} & B_{78} \\
0 & 0 & 0 & 0 & B_{85} & B_{86} & B_{87} & B_{88}
\end{pmatrix}$$

It follows that a heptadiagonal matrix has at most 7n − 12 nonzero entries, where n is the size of the matrix. Hence,
heptadiagonal matrices are sparse. This makes them useful in numerical analysis.
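
A quick check of this count (SciPy assumed): an n-by-n heptadiagonal matrix with a full band has 7n − 12 nonzero entries.

```python
import scipy.sparse as sp

n = 10
offsets = range(-3, 4)                          # main diagonal plus three on each side
diagonals = [[1.0] * (n - abs(k)) for k in offsets]
H = sp.diags(diagonals, offsets, shape=(n, n))
print(H.nnz, 7 * n - 12)                        # 58 58
```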

8.1 See also


tridiagonal matrix

pentadiagonal matrix

Chapter 9

Identity matrix

Not to be confused with matrix of ones or unitary matrix.

In linear algebra, the identity matrix, or sometimes ambiguously called a unit matrix, of size n is the n × n square
matrix with ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial
or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is
denoted by a boldface one, 1; otherwise it is identical to I.) Less frequently, some mathematics books use U or E to
represent the identity matrix, meaning "unit matrix"[1] and the German word Einheitsmatrix,[2] respectively.


$$I_1 = \begin{bmatrix} 1 \end{bmatrix}, \quad
I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad
I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \ldots, \quad
I_n = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix}$$
When A is m × n, it is a property of matrix multiplication that

Im A = AIn = A.
In particular, the identity matrix serves as the unit of the ring of all n × n matrices, and as the identity element of the
general linear group GL(n) consisting of all invertible n × n matrices. (The identity matrix itself is invertible, being its
own inverse.)
Where n × n matrices are used to represent linear transformations from an n-dimensional vector space to itself, In
represents the identity function, regardless of the basis.
The ith column of an identity matrix is the unit vector ei. It follows that the determinant of the identity matrix is 1
and the trace is n.
Using the notation that is sometimes used to concisely describe diagonal matrices, we can write:

In = diag(1, 1, ..., 1).


It can also be written using the Kronecker delta notation:

(I_n)_{ij} = δ_{ij}.
The identity matrix also has the property that, when it is the product of two square matrices, the matrices can be said
to be the inverse of one another.
The identity matrix of a given size is the only idempotent matrix of that size having full rank. That is, it is the only
matrix such that (a) when multiplied by itself the result is itself, and (b) all of its rows, and all of its columns, are
linearly independent.


The principal square root of an identity matrix is itself, and this is its only positive definite square root. However,
every identity matrix with at least two rows and columns has an infinitude of symmetric square roots.[3]

9.1 See also


Binary matrix

Zero matrix

Unitary matrix
Matrix of ones

Square root of a 2 by 2 identity matrix


Zero-One matrix

9.2 Notes
[1] Pipes, Louis Albert (1963). Matrix Methods for Engineering. Prentice-Hall International Series in Applied Mathematics.
Prentice-Hall. p. 91.

[2] Identity Matrix on MathWorld;

[3] Mitchell, Douglas W. "Using Pythagorean triples to generate square roots of I2". The Mathematical Gazette 87, November
2003, 499–500.

9.3 External links


Identity matrix. PlanetMath.
Chapter 10

List of matrices

Several important classes of matrices are subsets of each other.

This page lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural
matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history
of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying
concrete conditions of the entries, including constant matrices. An important example is the identity matrix given by


$$I_n = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}.$$

Further ways of classifying matrices are according to their eigenvalues or by imposing conditions on the product of
the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and
chemistry, have particular matrices that are applied chiefly in these areas.


10.1 Matrices with explicitly constrained entries


The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices
only, that is matrices with the same number of columns and rows. The main diagonal of a square matrix is the
diagonal joining the upper left corner and the lower right one or equivalently the entries ai,i. The other diagonal is
called anti-diagonal (or counter-diagonal).

10.1.1 Constant matrices


The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix
entries will be denoted aij. The table below uses the Kronecker delta δij for two integers i and j, which is 1 if i = j and
0 otherwise.

10.2 Matrices with conditions on eigenvalues or eigenvectors

10.3 Matrices satisfying conditions on products or inverses


A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix
product of a m-by-n matrix A and a n-by-k matrix B is the m-by-k matrix C given by


$$(C)_{i,j} = \sum_{r=1}^{n} A_{i,r} B_{r,j}.$$

This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is
to say AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An
inverse of square matrix A is a matrix B (necessarily of the same dimension as A) such that AB = I. Equivalently, BA
= I. An inverse need not exist. If it exists, B is uniquely determined, and is also called the inverse of A, denoted A^{-1}.

10.4 Matrices with specic applications

10.5 Matrices used in statistics


The following matrices find their main application in statistics and probability theory.

Bernoulli matrix – a square matrix with entries +1, −1, with equal probability of each.
Centering matrix – a matrix which, when multiplied with a vector, has the same effect as subtracting the mean
of the components of the vector from every component.
Correlation matrix – a symmetric n×n matrix, formed by the pairwise correlation coefficients of several
random variables.
Covariance matrix – a symmetric n×n matrix, formed by the pairwise covariances of several random variables.
Sometimes called a dispersion matrix.
Dispersion matrix – another name for a covariance matrix.
Doubly stochastic matrix – a non-negative matrix such that each row and each column sums to 1 (thus the
matrix is both left stochastic and right stochastic).
Fisher information matrix – a matrix representing the variance of the partial derivative, with respect to a
parameter, of the log of the likelihood function of a random variable.
Hat matrix – a square matrix used in statistics to relate fitted values to observed values.

Precision matrix – a symmetric n×n matrix, formed by inverting the covariance matrix. Also called the
information matrix.
Stochastic matrix – a non-negative matrix describing a stochastic process. The sum of entries of any row is
one.
Transition matrix – a matrix representing the probabilities of conditions changing from one state to another
in a Markov chain.

10.6 Matrices used in graph theory


The following matrices find their main application in graph and network theory.

Adjacency matrix – a square matrix representing a graph, with aij non-zero if vertex i and vertex j are adjacent.
Biadjacency matrix – a special class of adjacency matrix that describes adjacency in bipartite graphs.
Degree matrix – a diagonal matrix defining the degree of each vertex in a graph.
Edmonds matrix – a square matrix of a bipartite graph.
Incidence matrix – a matrix representing a relationship between two classes of objects (usually vertices and
edges in the context of graph theory).
Laplacian matrix – a matrix equal to the degree matrix minus the adjacency matrix for a graph, used to find
the number of spanning trees in the graph.
Seidel adjacency matrix – a matrix similar to the usual adjacency matrix but with −1 for adjacency; +1 for
nonadjacency; 0 on the diagonal.
Skew-adjacency matrix – an adjacency matrix in which each non-zero aij is 1 or −1, according as the direction
i → j matches or opposes that of an initially specified orientation.
Tutte matrix – a generalisation of the Edmonds matrix for a balanced bipartite graph.

10.7 Matrices used in science and engineering


Cabibbo–Kobayashi–Maskawa matrix – a unitary matrix used in particle physics to describe the strength of
flavour-changing weak decays.
Density matrix – a matrix describing the statistical state of a quantum system. Hermitian, non-negative and
with trace 1.
Fundamental matrix (computer vision) – a 3 × 3 matrix in computer vision that relates corresponding points
in stereo images.
Fuzzy associative matrix – a matrix in artificial intelligence, used in machine learning processes.
Gamma matrices – 4 × 4 matrices in quantum field theory.
Gell-Mann matrices – a generalisation of the Pauli matrices; these matrices are one notable representation of
the infinitesimal generators of the special unitary group SU(3).
Hamiltonian matrix – a matrix used in a variety of fields, including quantum mechanics and linear-quadratic
regulator (LQR) systems.
Irregular matrix – a matrix used in computer science which has a varying number of elements in each row.
Overlap matrix – a type of Gramian matrix, used in quantum chemistry to describe the inter-relationship of
a set of basis vectors of a quantum system.
S matrix – a matrix in quantum mechanics that connects asymptotic (infinite past and future) particle states.

State transition matrix – exponent of state matrix in control systems.

Substitution matrix – a matrix from bioinformatics, which describes mutation rates of amino acid or DNA
sequences.

Z-matrix – a matrix in chemistry, representing a molecule in terms of its relative atomic geometry.

10.8 Other matrix-related terms and definitions


Jordan canonical form – an 'almost' diagonalised matrix, where the only non-zero elements appear on the lead
and super-diagonals.

Linear independence – two or more vectors are linearly independent if there is no way to construct one from
linear combinations of the others.

Matrix exponential – defined by the exponential series.


Matrix representation of conic sections

Pseudoinverse – a generalization of the inverse matrix.


Quaternionic matrix – a matrix using quaternions as numbers.

Row echelon form – a matrix in this form is the result of applying the forward elimination procedure to a
matrix (as used in Gaussian elimination).

Wronskian – the determinant of a matrix of functions and their derivatives such that row n is the (n−1)th
derivative of row one.

10.9 See also


Perfect matrix

10.10 Notes
[1] Hogben 2006, Ch. 31.3

10.11 References
Hogben, Leslie (2006), Handbook of Linear Algebra (Discrete Mathematics and Its Applications), Boca Raton:
Chapman & Hall/CRC, ISBN 978-1-58488-510-8
Chapter 11

Matrix (mathematics)

For other uses, see Matrix.


Matrix theory redirects here. For the physics topic, see Matrix string theory.
In mathematics, a matrix (plural matrices) is a rectangular array[1] of numbers, symbols, or expressions, arranged
in rows and columns.[2][3]

Figure: an m-by-n matrix with entries ai,j. The m rows are horizontal and the n columns are vertical. Each element of a
matrix is often denoted by a variable with two subscripts; for example, a2,1 represents the element at the second row and
first column of a matrix A.

For example, the dimensions of the matrix below are 2 × 3 (read "two by three"), because there are two rows and
three columns:


$$\begin{bmatrix} 1 & 9 & 13 \\ 20 & 5 & 6 \end{bmatrix}.$$
The individual items in an m × n matrix A, often denoted by ai,j, where max i = m and max j = n, are called its elements
or entries.[4] Provided that they have the same size (each matrix has the same number of rows and the same number
of columns as the other), two matrices can be added or subtracted element by element (see Conformable matrix).
The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns
in the first equals the number of rows in the second (i.e., the inner dimensions are the same, n for A_{m,n} B_{n,p}).
Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices
is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example,
the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation
matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the
product Rv is a column vector describing the position of that point after a rotation. The product of two transformation
matrices is a matrix that represents the composition of two linear transformations. Another application of matrices
is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties
by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero.
Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix's
eigenvalues and eigenvectors.
Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics,
optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phe-
nomena, such as the motion of rigid bodies. In computer graphics, they are used to manipulate 3D models and project
them onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets
of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[5]
Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
Matrices are used in economics to describe systems of economic relationships.
A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations,
a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify
computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as
sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix
representing the derivative operator, which acts on the Taylor series of a function.

11.1 Denition
A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and
multiplication are defined.[6] Most commonly, a matrix over a field F is a rectangular array of scalars each of which
is a member of F.[7][8] Most of this article focuses on real and complex matrices, that is, matrices whose elements are
real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this
is a real matrix:


$$A = \begin{bmatrix} 1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & 6.2 \end{bmatrix}.$$
The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical
lines of entries in a matrix are called rows and columns, respectively.

11.1.1 Size
The size of a matrix is defined by the number of rows and columns that it contains. A matrix with m rows and n
columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, the matrix
A above is a 3 × 2 matrix.
Matrices which have a single row are called row vectors, and those which have a single column are called column
vectors. A matrix which has the same number of rows and columns is called a square matrix. A matrix with an
infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra
programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

11.2 Notation
Matrices are commonly written in box brackets or parentheses:


$$A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix} = (a_{ij}) \in \mathbb{R}^{m \times n}.$$

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized
using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two
subscript indices (for example, a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize
matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish
matrices from other mathematical objects. An alternative notation involves the use of a double-underline with
the variable name, with or without boldface style (for example, A).
The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j, (i,j), or (i,j)th entry of
the matrix, and most commonly denoted as ai,j or aij. Alternative notations for that entry are A[i,j] or Ai,j. For
example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3):


$$A = \begin{bmatrix}
4 & 7 & 5 & 0 \\
2 & 0 & 11 & 8 \\
19 & 1 & 3 & 12
\end{bmatrix}$$

Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries
of the following matrix A is determined by aij = i − j.

$$A = \begin{bmatrix}
0 & -1 & -2 & -3 \\
1 & 0 & -1 & -2 \\
2 & 1 & 0 & -1
\end{bmatrix}$$

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For
example, the matrix above is defined as A = [i − j], or A = ((i − j)). If matrix size is m × n, the above-mentioned formula
f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or using m × n as a
subscript. For instance, the matrix A above is 3 × 4 and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A =
[i − j]3×4.
Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix.
Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n
matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[9] This article follows the more common convention in
mathematical writing where enumeration starts from 1.
An asterisk is occasionally used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the ith row
of A, and a∗,j refers to the jth column of A. The set of all m-by-n matrices is denoted 𝕄(m, n).

11.3 Basic operations


There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multi-
plication, transposition, matrix multiplication, row operations, and submatrix.[11]

11.3.1 Addition, scalar multiplication and transposition

Main articles: Matrix addition, Scalar multiplication, and Transpose

Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, that is,
the matrix sum does not depend on the order of the summands: A + B = B + A.[12] The transpose is compatible with
addition and scalar multiplication, as expressed by (cA)T = c(AT ) and (A + B)T = AT + BT . Finally, (AT )T = A.

11.3.2 Matrix multiplication

Main article: Matrix multiplication


Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the
number of rows of the right matrix. (Figure: schematic depiction of the matrix product AB of two matrices A and B.)
If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries
are given by the dot product of the corresponding row of A and the corresponding column of B:

$$[AB]_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^{n} A_{i,r}B_{r,j},$$

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[13] For example, the underlined entry 2340 in the product is calculated as (2 × 1000)
+ (3 × 100) + (4 × 10) = 2340:


$$\begin{bmatrix} 2 & 3 & 4 \\ 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} 0 & 1000 \\ 1 & 100 \\ 0 & 10 \end{bmatrix}
= \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}.$$
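
The same product can be checked numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [1, 0, 0]])
B = np.array([[0, 1000],
              [1, 100],
              [0, 10]])
print(A @ B)
# [[   3 2340]
#  [   0 1000]]
```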
Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B)
= CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are
defined.[14] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k
matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal, that is, generally

AB ≠ BA,

that is, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose
product is independent of the order of the factors. An example of two matrices not commuting with each other is:

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$$

whereas

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
= \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$$
Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices
that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product.[15] They
arise in solving matrix equations such as the Sylvester equation.

11.3.3 Row operations


Main article: Row operations

There are three types of row operations:

1. row addition, that is, adding a row to another;


2. row multiplication, that is, multiplying all entries of a row by a non-zero constant;
3. row switching, that is, interchanging two rows of a matrix.

These operations are used in a number of ways, including solving linear equations and finding matrix inverses.

11.3.4 Submatrix
A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[16][17][18] For example, from
the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

$$A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8 \\
9 & 10 & 11 & 12
\end{bmatrix} \rightarrow \begin{bmatrix} 1 & 3 & 4 \\ 5 & 7 & 8 \end{bmatrix}.$$

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[18][19]
A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies
from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices
that remain is the same as the set of column indices that remain.[20][21] Other authors define a principal submatrix to
be one in which the first k rows and columns, for some number k, are the ones that remain;[22] this type of submatrix
has also been called a leading principal submatrix.[23]

11.4 Linear equations


Main articles: Linear equation and System of linear equations

Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations.
For example, if A is an m-by-n matrix, x designates a column vector (that is, an n×1 matrix) of n variables x1, x2, ...,
xn, and b is an m×1 column vector, then the matrix equation

Ax = b

is equivalent to the system of linear equations

A1,1 x1 + A1,2 x2 + ... + A1,n xn = b1
...
Am,1 x1 + Am,2 x2 + ... + Am,n xn = bm.[24]

Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately.
If n = m and the equations are independent, this can be done by writing

x = A^{-1}b,

where A^{-1} is the inverse matrix of A. If A has no inverse, solutions if any can be found using its generalized inverse.
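A small sketch, assuming NumPy, of solving a system in the n = m case and of falling back on a generalized (Moore–Penrose) inverse; nothing here is prescribed by the text, it is only one common way to carry out the computation:

import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

x = np.linalg.solve(A, b)        # solves Ax = b without forming A^{-1} explicitly
print(x, np.allclose(A @ x, b))  # [0.8 1.4] True

# If A has no inverse, a least-squares solution via the generalized inverse:
x_ls = np.linalg.pinv(A) @ b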

11.5 Linear transformations


Main articles: Linear transformation and Transformation matrix
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known
as linear maps. A real m-by-n matrix A gives rise to a linear transformation R^n → R^m mapping each vector x in R^n
to the (matrix) product Ax, which is a vector in R^m. Conversely, each linear transformation f: R^n → R^m arises from
a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(e_j), where e_j = (0,...,0,1,0,...,0) is
the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A
is called the transformation matrix of f.
For example, the 2×2 matrix

$\mathbf{A} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$
can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d),
and
(c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors
$\begin{bmatrix} 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ in turn. These vectors define the vertices of the unit square.
The following table shows a number of 2-by-2 matrices with the associated linear maps of R^2. The blue original is
mapped to the green grid and shapes. The origin (0,0) is marked with a black point.
Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition
of maps:[25] if a k-by-m matrix B represents another linear map g: R^m → R^k, then the composition g ∘ f is represented
by BA since

(g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.

The last equality follows from the above-mentioned associativity of matrix multiplication.
The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same
as the maximum number of linearly independent column vectors.[26] Equivalently it is the dimension of the image of
the linear map represented by A.[27] The rank–nullity theorem states that the dimension of the kernel of a matrix plus
the rank equals the number of columns of the matrix.[28]
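The rank and the rank–nullity theorem can be checked numerically; a sketch assuming NumPy and SciPy (neither is mentioned in the text):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])

rank = np.linalg.matrix_rank(A)        # 2 independent rows/columns
kernel_dim = null_space(A).shape[1]    # dimension of the kernel
print(rank, kernel_dim)                # 2 2
print(rank + kernel_dim == A.shape[1]) # True: rank + nullity = number of columns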
The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.

11.6 Square matrix


Main article: Square matrix

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square
matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the
main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom
right corner of the matrix.

11.6.1 Main types



Diagonal and triangular matrix

If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly if all entries of
A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are
zero, A is called a diagonal matrix.

Identity matrix

Main article: Identity matrix

The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and
all other elements are equal to 0, for example,


$I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because
multiplication with it leaves a matrix unchanged:

AI_n = I_m A = A for any m-by-n matrix A.

A nonzero scalar multiple of an identity matrix is called a scalar matrix. If the matrix entries come from a field, the
scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero
elements of the field.

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, that is, A = A^T, is a symmetric matrix. If instead A is equal to the
negative of its transpose, that is, A = −A^T, then A is a skew-symmetric matrix. In complex matrices, symmetry is
often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the
conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every
vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[29] This theorem
can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see
below.

Invertible matrix and its inverse

A square matrix A is called invertible or non-singular if there exists a matrix B such that

AB = BA = In ,[30][31]

where I_n is the n×n identity matrix with 1s on the main diagonal and 0s elsewhere. If B exists, it is unique and is
called the inverse matrix of A, denoted A^{-1}.

Denite matrix

A symmetric n×n-matrix A is called positive-definite if for all nonzero vectors x ∈ R^n the associated quadratic form
given by

f(x) = x^T A x

produces only positive values. If f(x) only yields negative values then A is negative-definite; if f
does produce both negative and positive values then A is indefinite.[32] If the quadratic form f yields only non-negative
values (positive or zero), the symmetric matrix is called positive-semidefinite (or if only non-positive values, then
negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-
semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-
semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:

B_A(x, y) = x^T A y.[34]
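Since definiteness of a symmetric matrix is determined by the signs of its eigenvalues, it can be tested numerically. A rough sketch assuming NumPy:

import numpy as np

def classify(A):
    """Crude classification of a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)        # eigenvalues of a symmetric matrix are real
    if np.all(w > 0):
        return "positive-definite"
    if np.all(w < 0):
        return "negative-definite"
    return "indefinite or semidefinite"

print(classify(np.array([[2., 0.], [0., 3.]])))    # positive-definite
print(classify(np.array([[2., 0.], [0., -3.]])))   # indefinite or semidefinite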

Orthogonal matrix

Main article: Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that
is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

A^T = A^{-1},

which entails

A^T A = AA^T = I_n,

where I is the identity matrix of size n.


An orthogonal matrix A is necessarily invertible (with inverse A^{-1} = A^T), unitary (A^{-1} = A*), and normal (A*A =
AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal
matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure
rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of reflection
and rotation.
The complex analogue of an orthogonal matrix is a unitary matrix.
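The defining properties are easy to verify numerically for a rotation matrix; a sketch assuming NumPy:

import numpy as np

theta = np.pi / 3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a plane rotation, hence orthogonal

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))   # True: the inverse equals the transpose
print(round(np.linalg.det(Q), 10))          # 1.0, so Q is a pure rotation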

11.6.2 Main operations

Trace

The trace, tr(A) of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative
as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication:

$\operatorname{tr}(\mathbf{AB}) = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ji} = \operatorname{tr}(\mathbf{BA}).$

Also, the trace of a matrix is equal to that of its transpose, that is,

tr(A) = tr(A^T).
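Both identities can be spot-checked on random matrices; a sketch assuming NumPy:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# tr(AB) = tr(BA) even though AB and BA have different sizes here (2x2 vs 3x3).
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True

C = rng.standard_normal((4, 4))
print(np.isclose(np.trace(C), np.trace(C.T)))          # True: tr(C) = tr(C^T)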
A linear transformation on R^2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green
parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors
to a clockwise one.

Determinant

Main article: Determinant


The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix
is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2 ) or volume (in R3 ) of
the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map:
the determinant is positive if and only if the orientation is preserved.
The determinant of 2-by-2 matrices is given by

$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$
The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises
these two formulae to all dimensions.[35]
The determinant of a product of square matrices equals the product of their determinants:

det(AB) = det(A) det(B).[36]

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change
the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using
these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the
determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant
of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants
of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case
the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is
1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using
Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of
the system's variables.[39]
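The 2-by-2 formula, the product rule and the effect of a row interchange can all be checked in a few lines; a sketch assuming NumPy:

import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 1.],
              [1., 1.]])

print(np.linalg.det(A))                    # ad - bc = 1*4 - 2*3 = -2
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True: det is multiplicative

A_swapped = A[[1, 0]]                      # interchange the two rows
print(np.linalg.det(A_swapped))            # 2: interchanging rows flips the sign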

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

A number λ and a non-zero vector v satisfying

Av = λv

are called an eigenvalue and an eigenvector of A, respectively.[40][41] The number λ is an eigenvalue of an n×n-matrix
A if and only if A − λI_n is not invertible, which is equivalent to

det(A − λI) = 0.[42]

The polynomial p_A in an indeterminate X given by evaluation of the determinant det(XI_n − A) is called the characteristic
polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation p_A(λ) = 0 has at most
n different solutions, that is, eigenvalues of the matrix.[43] They may be complex even if the entries of A are real.
According to the Cayley–Hamilton theorem, p_A(A) = 0, that is, the result of substituting the matrix itself into its own
characteristic polynomial yields the zero matrix.
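A sketch, assuming NumPy, that computes eigenvalues and eigenvectors of a small symmetric matrix and spot-checks the Cayley–Hamilton theorem in the 2-by-2 case (where the characteristic polynomial is X^2 − tr(A)X + det(A)):

import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

w, v = np.linalg.eig(A)                            # eigenvalues 3 and 1 (order may vary)
print(np.allclose(A @ v[:, 0], w[0] * v[:, 0]))    # True: A v = lambda v

# Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A) A + det(A) I = 0
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, 0))                      # True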

11.7 Computational aspects


Matrix calculations can often be performed with different techniques. Many problems can be solved by direct
algorithms as well as by iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a
sequence of vectors x_n converging to an eigenvector when n tends to infinity.[44]
To be able to choose the most appropriate algorithm for each specific problem, it is important to determine both
the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical
linear algebra.[45] As with other numerical situations, two main aspects are the complexity of algorithms and their
numerical stability.
Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary
operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example,
multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition
given above needs n^3 multiplications, since for any of the n^2 entries of the product, n multiplications are necessary.
The Strassen algorithm outperforms this naive algorithm; it needs only n^2.807 multiplications.[46] A refined approach
also incorporates specific features of the computing devices.
In many practical situations additional information about the matrices involved is known. An important case is that of
sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say,
solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[47]
An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to large de-
viations in the result. For example, calculating the inverse of a matrix via Laplace's formula (Adj(A) denotes the
adjugate matrix of A)

A^{-1} = Adj(A) / det(A)

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be
used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[48]
Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s,
some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices.
Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs
can be used to aid computing with matrices.[49]
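For sparse systems, the matrix is stored in a compressed format and the conjugate gradient method touches only the nonzero entries. A sketch assuming SciPy and NumPy; the tridiagonal test matrix is an illustrative choice, not one taken from the text:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# A sparse, symmetric positive-definite tridiagonal matrix (discrete 1-D Laplacian).
A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                      # conjugate gradient; info == 0 means it converged
print(info, np.linalg.norm(A @ x - b) < 1e-3)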

11.8 Decomposition
Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Montantes method

There are several methods to render matrices into a more easily accessible form. They are generally referred to as
matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve
certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be
calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out
for some types of matrices.

The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[50] Once
this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward
and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian
elimination is a similar algorithm; it transforms any matrix to row echelon form.[51] Both methods proceed by
multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding
multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where
U and V are unitary matrices and D is a diagonal matrix.

An example of a matrix in Jordan normal form. The grey blocks are called Jordan blocks.

The eigendecomposition or diagonalization expresses A as a product VDV^{-1}, where D is a diagonal matrix and V
is a suitable invertible matrix.[52] If A can be written in this form, it is called diagonalizable. More generally, and
applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say
matrices whose only nonzero entries are the eigenvalues λ_1 to λ_n of A, placed on the main diagonal and possibly
entries equal to one directly above the main diagonal, as shown at the right.[53] Given the eigendecomposition, the nth
power of A (that is, n-fold iterated matrix multiplication) can be calculated via

$A^n = (VDV^{-1})^n = VDV^{-1}VDV^{-1}\ldots VDV^{-1} = VD^nV^{-1}$

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which
is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a
need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.[54] To
avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[55]
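A sketch, assuming NumPy, of computing a matrix power through the eigendecomposition A = VDV^{-1} and comparing it with repeated multiplication:

import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])       # symmetric, hence diagonalizable

w, V = np.linalg.eig(A)        # A = V D V^{-1} with D = diag(w)
k = 5
A_k = V @ np.diag(w**k) @ np.linalg.inv(V)     # only the diagonal entries are powered

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True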

11.9 Abstract algebraic aspects and generalizations


Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields
or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to
consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as
higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers,
while matrices are rectangular or two-dimensional arrays of numbers.[56] Matrices, subject to certain requirements,
tend to form groups known as matrix groups. Similarly, under certain conditions matrices form rings known as matrix
rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix
fields.

11.9.1 Matrices with more general entries

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered
with much more general types of entries than real or complex numbers. As a first step of generalization, any field,
that is, a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may
be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of
matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist
only in a larger field than that of the entries of the matrix; for instance they may be complex in case of a matrix with
real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (for example, to view a
real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to
possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed
field, such as C, from the outset.
More generally, abstract algebra makes great use of matrices with entries in a ring R.[57] Rings are a more general
notion than fields in that a division operation need not exist. The very same addition and multiplication operations
of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called the matrix
ring, isomorphic to the endomorphism ring of the left R-module R^n.[58] If the ring R is commutative, that is, its
multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R.
The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such
a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where
every nonzero element is invertible.[59] Matrices over superrings are called supermatrices.[60]
Matrices do not always have all their entries in the same ring or even in any ring at all. One special but common
case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need
not be quadratic matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain
compatibility conditions.

11.9.2 Relationship to linear maps

Linear maps R^n → R^m are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V
→ W between finite-dimensional vector spaces can be described by a matrix A = (a_{ij}), after choosing bases v1, ...,
vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that


$f(v_j) = \sum_{i=1}^{m} a_{i,j} w_i \qquad \text{for } j = 1, \ldots, n.$

In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely
determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of
bases give rise to different, but equivalent matrices.[61] Many of the above concrete notions can be reinterpreted in
this light, for example, the transpose matrix A^T describes the transpose of the linear map given by A, with respect to
the dual bases.[62]
These properties can be restated in a more natural way: the category of all matrices with entries in a field k with
multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over
this field.

More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules R^m
and R^n for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to
the matrix ring of n×n matrices representing the endomorphism ring of R^n.

11.9.3 Matrix groups

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation
combining any two objects to a third, subject to certain requirements.[63] A group in which the objects are matrices
and the group operation is matrix multiplication is called a matrix group.[64][65] Since in a group every element has
to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the
general linear groups.
Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix
groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller
group contained in) their general linear group, called a special linear group.[66] Orthogonal matrices, determined by
the condition

M^T M = I,

form the orthogonal group.[67] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determi-
nant 1 form a subgroup called the special orthogonal group.
Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the
symmetric group.[68] General groups can be studied using matrix groups, which are comparatively well understood,
by means of representation theory.[69]

11.9.4 Innite matrices

It is also possible to consider matrices with infinitely many rows and/or columns[70] even if, being infinite objects, one
cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and
every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of
the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still
be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting
entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of $M = \bigoplus_{i \in I} R$ as a right R-module is isomorphic to
the ring of column finite matrices $\mathrm{CFM}_I(R)$ whose entries are indexed by $I \times I$, and whose columns each contain
only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous
object, the row finite matrices $\mathrm{RFM}_I(R)$ whose rows each only have finitely many nonzero entries.
If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have
but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V → W,
bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be
written uniquely as a (finite) linear combination of basis vectors, so that written as a (column) vector v of coefficients,
only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors
of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There
is no restriction on the rows of A however: in the product Av there are only finitely many nonzero coefficients of
v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many
nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns
of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries,
because each of those columns does. One also sees that products of two matrices of the given type are well defined
(provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to
the composition of linear maps.
If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely
convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely
convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent
series also form a ring.
In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and
continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit
point of view of matrices tends to obfuscate the matter,[71] and the abstract and more powerful tools of functional
analysis can be used instead.

11.9.5 Empty matrices

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[72][73] Empty matrices help
dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix,
then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA
is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow
creating and computing with them. The determinant of the 0-by-0 matrix is 1 as follows from regarding the empty
product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the
identity map from any finite dimensional space to itself has determinant 1, a fact that is often used as a part of the
characterization of determinants.

11.10 Applications

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take
advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics,
the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the
players choose.[74] Text mining and automated thesaurus compilation makes use of document-term matrices such as
tf-idf to track frequencies of certain words in several documents.[75]
Complex numbers can be represented by particular real 2-by-2 matrices via

$a + ib \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$

under which addition and multiplication of complex numbers and matrices correspond to each other. For example,
2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A
similar interpretation is possible for quaternions[76] and Clifford algebras in general.
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices,
these codes are comparatively easy to break.[77] Computer graphics uses matrices both to represent objects and to
calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-
dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[78] Matrices
over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular
bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan
equations to obtain the molecular orbitals of the HartreeFock method.

11.10.1 Graph theory

The adjacency matrix of a finite graph is a basic notion of graph theory.[79] It records which vertices of the graph are
connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no",
respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the
edges.[80] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in
which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few
nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.

An undirected graph with adjacency matrix $\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$
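A sketch assuming NumPy; powers of the adjacency matrix count walks in the graph, which is one reason the matrix form is convenient (the interpretation of the loop at vertex 1 follows the matrix given in the caption):

import numpy as np

# Adjacency matrix from the caption above: vertex 1 carries a loop and is joined
# to vertex 2; vertex 2 is joined to vertex 3.
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Entry (i, j) of A^k counts the walks of length k from vertex i to vertex j.
print(np.linalg.matrix_power(A, 2))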

11.10.2 Analysis and geometry


The Hessian matrix of a differentiable function f: R^n → R consists of the second derivatives of f with respect to the
several coordinate directions, that is,[81]

$H(f) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right].$
It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), that is,
a point where the first partial derivatives ∂f/∂x_i of f vanish, the function has a local minimum if the Hessian matrix
is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions
closely related to the ones attached to matrices (see above).[82]
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: R^n → R^m. If
f_1, ..., f_m denote the components of f, then the Jacobi matrix is defined as[83]

$J(f) = \left[ \frac{\partial f_i}{\partial x_j} \right]_{1 \le i \le m,\, 1 \le j \le n}.$

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the
implicit function theorem.[84]

At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x^2 − y^2, the Hessian matrix $\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$ is indefinite.


Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential
operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive
influence on the set of possible solutions of the equation in question.[85]
The finite element method is an important numerical method to solve partial differential equations, widely applied in
simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear
functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix
equation.[86]

11.10.3 Probability theory and statistics


Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative
and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[87] A row of the
stochastic matrix gives the probability distribution for the next position of some particle currently in the state that
corresponds to the row. Properties of the Markov chain like absorbing states, that is, states that any particle attains
eventually, can be read off the eigenvectors of the transition matrices.[88]
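A sketch, assuming NumPy, that recovers the stationary distribution of a small row-stochastic matrix as a left eigenvector for the eigenvalue 1 (the 2-state chain used here is illustrative, not taken from the text):

import numpy as np

# Row-stochastic transition matrix: entry (i, j) is the probability of moving
# from state i to state j, so each row sums to one.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# A stationary distribution pi satisfies pi P = pi, i.e. it is a left eigenvector
# of P for the eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1)]).ravel()
pi = pi / pi.sum()
print(pi)                        # [0.4 0.6]
print(np.allclose(pi @ P, pi))   # True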
Statistics also makes use of matrices in many different forms.[89] Descriptive statistics is concerned with describing
data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction
techniques. The covariance matrix encodes the mutual variance of several random variables.[90] Another technique
using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN)
by a linear function

y_i ≈ ax_i + b,  i = 1, ..., N,

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[91]

Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state 2. Both limiting values can be
determined from the transition matrices, which are given by $\begin{bmatrix} .7 & 0 \\ .3 & 1 \end{bmatrix}$ (red) and $\begin{bmatrix} .7 & .2 \\ .3 & .8 \end{bmatrix}$ (black).

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such
as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory
to physics.[92][93]

11.10.4 Symmetries and transformations in physics


Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary
particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more
specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more
general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[94] For
the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for
their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also
used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum
chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states
that are important for weak interactions are not the same as, but linearly related to the basic quark states that define
particles with specific and distinct masses.[95]

11.10.5 Linear combinations of quantum states


The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional
matrices acting on quantum states.[96] This is also referred to as matrix mechanics. One particular example is the
density matrix that characterizes the mixed state of a quantum system as a linear combination of elementary, pure
eigenstates.[97]
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimen-
tal particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head
towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result,
can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states.
The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible
interactions between particles.[98]

11.10.6 Normal modes


A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations
of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity
to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The
best way to obtain solutions is to determine the systems eigenvectors, its normal modes, by diagonalizing the matrix
equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibra-
tions of systems consisting of mutually bound component atoms.[99] They are also needed for describing mechanical
vibrations, and oscillations in electrical circuits.[100]

11.10.7 Geometrical optics


Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is
neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by
optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication
of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector's components are the light
ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element.
Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a
translation matrix, describing the translation of the plane of reference to the next refracting surface, where another
refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is
simply described by the matrix resulting from the product of the components' matrices.[101]

11.10.8 Electronics
Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described
with a matrix.
The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector
with the components input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector
with the components output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic
component can be described by B = H A, where H is a 2 x 2 matrix containing one impedance element (h12 ),
one admittance element (h21 ) and two dimensionless elements (h11 and h22 ). Calculating a circuit now reduces to
multiplying matrices.

11.11 History
Matrices have a long history of application in solving linear equations but they were known as arrays until the 1800s.
The Chinese text The Nine Chapters on the Mathematical Art, written in the 10th–2nd century BCE, is the first example
of the use of array methods to solve simultaneous equations,[102] including the concept of determinants. In 1545
the Italian mathematician Girolamo Cardano brought the method to Europe when he published Ars Magna.[103] The
Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[104] The Dutch
mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves.[105]
Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions
and experimented with over 50 different systems of arrays.[103] Cramer presented his rule in 1750.
The term matrix (Latin for "womb", derived from mater, mother[106]) was coined by James Joseph Sylvester in
1850,[107] who understood a matrix as an object giving rise to a number of determinants today called minors, that is
to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851
paper, Sylvester explains:

I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different
systems of determinants may be engendered as from the womb of a common parent.[108]

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the
coefficients being investigated as had previously been done. Instead he defined operations such as addition, subtrac-
tion, multiplication, and division as transformations of those matrices and showed the associative and distributive
properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication
as well as the commutative property of matrix addition.[103] Early matrix theory had limited the use of arrays almost
exclusively to determinants and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental
in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory
of matrices[109][110] in which he proposed and demonstrated the Cayley–Hamilton theorem.[103]
An English mathematician named Cullis was the first to use modern bracket notation for matrices in 1913 and he
simultaneously demonstrated the first significant use of the notation A = [a_{i,j}] to represent a matrix where a_{i,j} refers
to the ith row and the jth column.[103]
The study of determinants sprang from several sources.[111] Number-theoretical problems led Gauss to relate co-
efficients of quadratic forms, that is, expressions such as x^2 + xy − 2y^2, and linear maps in three dimensions to
matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products
are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the
determinant of a matrix A = [a_{i,j}] the following: replace the powers a_j^k by a_{jk} in the polynomial

$a_1 a_2 \cdots a_n \prod_{i<j} (a_j - a_i)$

where Π denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric ma-
trices are real.[112] Jacobi studied "functional determinants", later called Jacobi determinants by Sylvester, which
can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesun-
gen über die Theorie der Determinanten[113] and Weierstrass' Zur Determinantentheorie,[114] both published in 1903,
first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned
formula of Cauchy. At that point, determinants were firmly established.
Many theorems were first established for small matrices only, for example the Cayley–Hamilton theorem was proved
for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working
on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century the Gauss–
Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the
early 20th century, matrices attained a central role in linear algebra,[115] partially due to their use in classification of
the hypercomplex number systems of the previous century.
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many
rows and columns.[116] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by
further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking,
correspond to Euclidean space, but with an infinity of independent directions.

11.11.1 Other historical usages of the word matrix in mathematics


The word has been used in unusual ways by at least two authors of historical importance.
Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (19101913) use the word matrix
in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of
lower type, successively, so that at the bottom (0 order) the function is identical to its extension:

Let us give the name of matrix to any function, of however many variables, which does not involve any
apparent variables. Then any possible function other than a matrix is derived from a matrix by means of
generalization, that is, by considering the proposition which asserts that the function in question is true
with all possible values or with some value of one of the arguments, the other argument or arguments
remaining undetermined.[117]

For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable,
for example, y, by considering the function for all possible values of "individuals" a_i substituted in place of variable
x. And then the resulting collection of functions of the single variable y, that is, ∀a_i: Φ(a_i, y), can be reduced to
a "matrix" of values by considering the function for all possible values of "individuals" b_j substituted in place of
variable y:

∀b_j ∀a_i: Φ(a_i, b_j).

Alfred Tarski in his 1946 Introduction to Logic used the word matrix synonymously with the notion of truth table
as used in mathematical logic.[118]

11.12 See also


Algebraic multiplicity
Geometric multiplicity
GramSchmidt process
List of matrices
Matrix calculus
Matrix function
Periodic matrix set
Tensor

11.13 Notes
[1] Equivalently, table.

[2] Anton (1987, p. 23)

[3] Beauregard & Fraleigh (1973, p. 56)

[4] Young, Cynthia. Precalculus. Laurie Rosatone. p. 727.

[5] K. Bryan and T. Leise. The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48(3):569
581, 2006.

[6] Lang 2002

[7] Fraleigh (1976, p. 209)

[8] Nering (1970, p. 37)

[9] Oualline 2003, Ch. 5

[10] How to organize, add and multiply matrices - Bill Shillito. TED ED. Retrieved April 6, 2013.

[11] Brown 1991, Denition I.2.1 (addition), Denition I.2.4 (scalar multiplication), and Denition I.2.33 (transpose)

[12] Brown 1991, Theorem I.2.6

[13] Brown 1991, Denition I.2.20

[14] Brown 1991, Theorem I.2.24

[15] Horn & Johnson 1985, Ch. 4 and 5

[16] Bronson (1970, p. 16)

[17] Kreyszig (1972, p. 220)



[18] Protter & Morrey (1970, p. 869)

[19] Kreyszig (1972, pp. 241,244)

[20] Schneider, Hans; Barker, George Phillip (2012), Matrices and Linear Algebra, Dover Books on Mathematics, Courier
Dover Corporation, p. 251, ISBN 9780486139302.

[21] Perlis, Sam (1991), Theory of Matrices, Dover books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN
9780486668109.

[22] Anton, Howard (2010), Elementary Linear Algebra (10th ed.), John Wiley & Sons, p. 414, ISBN 9780470458211.

[23] Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (2nd ed.), Cambridge University Press, p. 17, ISBN
9780521839402.

[24] Brown 1991, I.2.21 and 22

[25] Greub 1975, Section III.2

[26] Brown 1991, Denition II.3.3

[27] Greub 1975, Section III.1

[28] Brown 1991, Theorem II.3.22

[29] Horn & Johnson 1985, Theorem 2.5.6

[30] Brown 1991, Denition I.2.28

[31] Brown 1991, Denition I.5.13

[32] Horn & Johnson 1985, Chapter 7

[33] Horn & Johnson 1985, Theorem 7.2.1

[34] Horn & Johnson 1985, Example 4.0.6, p. 169

[35] Brown 1991, Denition III.2.1

[36] Brown 1991, Theorem III.2.12

[37] Brown 1991, Corollary III.2.16

[38] Mirsky 1990, Theorem 1.4.1

[39] Brown 1991, Theorem III.3.18

[40] Eigen means own in German and in Dutch.

[41] Brown 1991, Denition III.4.1

[42] Brown 1991, Denition III.4.9

[43] Brown 1991, Corollary III.4.10

[44] Householder 1975, Ch. 7

[45] Bau III & Trefethen 1997

[46] Golub & Van Loan 1996, Algorithm 1.3.1

[47] Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2

[48] Golub & Van Loan 1996, Chapter 2.3

[49] For example, Mathematica, see Wolfram 2003, Ch. 3.7

[50] Press, Flannery & Teukolsky 1992

[51] Stoer & Bulirsch 2002, Section 4.1

[52] Horn & Johnson 1985, Theorem 2.5.4

[53] Horn & Johnson 1985, Ch. 3.1, 3.2



[54] Arnold & Cooke 1992, Sections 14.5, 7, 8

[55] Bronson 1989, Ch. 15

[56] Coburn 1955, Ch. V

[57] Lang 2002, Chapter XIII

[58] Lang 2002, XVII.1, p. 643

[59] Lang 2002, Proposition XIII.4.16

[60] Reichl 2004, Section L.2

[61] Greub 1975, Section III.3

[62] Greub 1975, Section III.3.13

[63] See any standard reference in group.

[64] Additionally, the group is required to be closed in the general linear group.

[65] Baker 2003, Def. 1.30

[66] Baker 2003, Theorem 1.2

[67] Artin 1991, Chapter 4.5

[68] Rowen 2008, Example 19.2, p. 198

[69] See any reference in representation theory or group representation.

[70] See the item Matrix in It, ed. 1987

[71] Not much of matrix theory carries over to innite-dimensional spaces, and what does is not so useful, but it sometimes
helps. Halmos 1982, p. 23, Chapter 5

[72] Empty Matrix: A matrix is empty if either its row or column dimension is zero, Glossary, O-Matrix v6 User Guide

[73] A matrix having at least one dimension equal to zero is called an empty matrix, MATLAB Data Structures

[74] Fudenberg & Tirole 1983, Section 1.1.1

[75] Manning 1999, Section 15.3.4

[76] Ward 1997, Ch. 2.8

[77] Stinson 2005, Ch. 1.1.5 and 1.2.4

[78] Association for Computing Machinery 1979, Ch. 7

[79] Godsil & Royle 2004, Ch. 8.1

[80] Punnen 2002

[81] Lang 1987a, Ch. XVI.6

[82] Nocedal 2006, Ch. 16

[83] Lang 1987a, Ch. XVI.1

[84] Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2

[85] Gilbarg & Trudinger 2001

[86] olin 2005, Ch. 2.5. See also stiness method.

[87] Latouche & Ramaswami 1999

[88] Mehata & Srinivasan 1978, Ch. 2.8

[89] Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4

[90] Krzanowski 1988, Ch. 2.2., p. 60



[91] Krzanowski 1988, Ch. 4.1

[92] Conrey 2007

[93] Zabrodin, Brezin & Kazakov et al. 2006

[94] Itzykson & Zuber 1980, Ch. 2

[95] see Burgess & Moore 2007, section 1.6.3. (SU(3)), section 2.4.3.2. (KobayashiMaskawa matrix)

[96] Schi 1968, Ch. 6

[97] Bohm 2001, sections II.4 and II.8

[98] Weinberg 1995, Ch. 3

[99] Wherrett 1987, part II

[100] Riley, Hobson & Bence 1997, 7.17

[101] Guenther 1990, Ch. 5

[102] Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1

[103] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001
ISBN 978-0321079121 | p.564-565

[104] Needham, Joseph; Wang Ling (1959). Science and Civilisation in China. III. Cambridge: Cambridge University Press. p.
117. ISBN 9780521058018.

[105] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001
ISBN 978-0321079121 | p.564

[106] MerriamWebster dictionary, MerriamWebster, retrieved April 20, 2009

[107] Although many sources state that J. J. Sylvester coined the mathematical term matrix in 1848, Sylvester published nothing
in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected
Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His
earliest use of the term matrix occurs in 1850 in: J. J. Sylvester (1850) Additions to the articles in the September
number of this journal, On a new class of theorems, and on Pascals theorem, The London, Edinburgh and Dublin
Philosophical Magazine and Journal of Science, 37 : 363-370. From page 369: For this purpose we must commence, not
with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in
itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants "

[108] The Collected Mathematical Papers of James Joseph Sylvester: 18371853, Paper 37, p. 247

[109] Phil.Trans. 1858, vol.148, pp.17-37 Math. Papers II 475-496

[110] Dieudonn, ed. 1978, Vol. 1, Ch. III, p. 96

[111] Knobloch 1994

[112] Hawkins 1975

[113] Kronecker 1897

[114] Weierstrass 1915, pp. 271286

[115] Bcher 2004

[116] Mehra & Rechenberg 1987

[117] Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press,
Cambridge UK (republished 1962) cf page 162.

[118] Tarski, Alfred; (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York
NY, ISBN 0-486-28462-X.

11.14 References
Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0

Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-3-540-54813-3

Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1

Association for Computing Machinery (1979), Computer Graphics, Tata McGrawHill, ISBN 978-0-07-059376-
3

Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-1-85233-470-3

Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial
and Applied Mathematics, ISBN 978-0-89871-361-9

Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduc-
tion to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X

Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall

Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490

Bronson, Richard (1989), Schaums outline of theory and problems of matrix operations, New York: McGraw
Hill, ISBN 978-0-07-007978-6

Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-
8419-5

Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828

Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN
978-0-521-69964-8

Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-
201-01984-1

Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press

Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.),
Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4

Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, 207, Berlin,
DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8

Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-
8018-5414-9

Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY:
Springer-Verlag, ISBN 978-0-387-90110-7

Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, 19 (2nd ed.),
Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 675952

Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-
38632-6

Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publica-
tions, MR 0378371

Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-
8.

Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, 3, The
Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 969370

Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I–IV (2nd ed.), MIT Press, ISBN 978-0-
262-09026-1, MR 901762

Lang, Serge (1969), Analysis II, Addison-Wesley

Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag,
ISBN 978-0-387-96405-8

Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6

Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York: Springer-
Verlag, ISBN 978-0-387-95385-4, MR 1878556

Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic mod-
eling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8

Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT
Press, ISBN 978-0-262-13360-9

Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGrawHill, ISBN 978-0-07-
096612-3

Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-
66434-7

Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646

Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY:
Springer-Verlag, p. 449, ISBN 978-0-387-30303-1

Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4

Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), LU Decomposi-
tion and Its Applications, Numerical Recipes in FORTRAN: The Art of Scientic Computing (PDF) (2nd ed.),
Cambridge University Press, pp. 3442

Protter, Murray H.; Morrey, Jr., Charles B. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading:
Addison-Wesley, LCCN 76087042

Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA:
Kluwer Academic Publishers, ISBN 978-1-4020-0664-7

Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations,
Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0

Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical
Society, ISBN 978-0-8218-4153-2

Šolín, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN
978-0-471-76409-0

Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC,
ISBN 978-1-58488-508-5

Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York,
NY: Springer-Verlag, ISBN 978-0-387-95452-3

Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications, 403, Dordrecht, NL:
Kluwer Academic Publishers Group, ISBN 978-0-7923-4513-8, MR 1458894

Wolfram, Stephen (2003), The Mathematica Book (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-
57955-022-6

11.14.1 Physics references


Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2

Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press, ISBN 0-521-86036-9

Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7

Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw–Hill, ISBN 0-07-032071-3

Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and
engineering, Cambridge University Press, ISBN 0-521-55506-X

Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill

Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press,
ISBN 0-521-55001-7

Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, PrenticeHall International, ISBN
0-13-365461-3

Zabrodin, Anton; Brézin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1

11.14.2 Historical references


A. Cayley A memoir on the theory of matrices. Phil. Trans. 148 1858 17-37; Math. Papers II 475-496

Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-49570-5, reprint of the 1907 original edition

Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, I (18411853), Cambridge Uni-
versity Press, pp. 123126

Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700–1900, Paris, FR: Hermann

Hawkins, Thomas (1975), Cauchy and the spectral theory of matrices, Historia Mathematica, 2: 129, ISSN
0315-0860, MR 0469635, doi:10.1016/0315-0860(75)90032-4

Knobloch, Eberhard (1994), From Gauss to Weierstrass: determinant theory and its historical evaluations, The intersection of history and mathematics, Science Networks Historical Studies, 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079

Kronecker, Leopold (1897), Hensel, Kurt, ed., Leopold Kroneckers Werke, Teubner

Mehra, Jagdish; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin,
DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9

Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical
Art, Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0

Weierstrass, Karl (1915), Collected works, 3

11.15 External links


Encyclopedic articles

Hazewinkel, Michiel, ed. (2001) [1994], Matrix, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

History

MacTutor: Matrices and determinants

Matrices and Linear Algebra on the Earliest Uses Pages


Earliest Uses of Symbols for Matrices and Vectors

Online books

Kaw, Autar K., Introduction to Matrix Algebra, ISBN 978-0-615-25126-4

The Matrix Cookbook (PDF), retrieved 24 March 2014


Brookes, Mike (2005), The Matrix Reference Manual, London: Imperial College, retrieved 10 Dec 2008

Online matrix calculators

matrixcalc (Matrix Calculator)


SimplyMath (Matrix Calculator)

Free C++ Library

Matrix Calculator (DotNumerics)


Xiao, Gang, Matrix calculator, retrieved 10 Dec 2008

Online matrix calculator, retrieved 10 Dec 2008


Online matrix calculator (ZK framework), retrieved 26 Nov 2009

Oehlert, Gary W.; Bingham, Christopher, MacAnova, University of Minnesota, School of Statistics, retrieved
10 Dec 2008, a freeware package for matrix algebra and statistics

Online matrix calculator, retrieved 14 Dec 2009


Operation with matrices in R (determinant, track, inverse, adjoint, transpose)

Matrix operations widget in Wolfram|Alpha


Chapter 12

Nested dissection

In numerical analysis, nested dissection is a divide and conquer heuristic for the solution of sparse symmetric systems of linear equations based on graph partitioning. Nested dissection was introduced by George (1973); the name was suggested by Garrett Birkhoff.[1]
Nested dissection consists of the following steps:

Form an undirected graph in which the vertices represent rows and columns of the system of linear equations,
and an edge represents a nonzero entry in the sparse matrix representing the system.

Recursively partition the graph into subgraphs using separators, small subsets of vertices the removal of which
allows the graph to be partitioned into subgraphs with at most a constant fraction of the number of vertices.

Perform Cholesky decomposition (a variant of Gaussian elimination for symmetric matrices), ordering the elimination of the variables by the recursive structure of the partition: each of the two subgraphs formed by removing the separator is eliminated first, and then the separator vertices are eliminated.

As a consequence of this algorithm, the fill-in (the set of nonzero matrix entries created in the Cholesky decomposition that are not part of the input matrix structure) is limited to at most the square of the separator size at each level of the recursive partition. In particular, for planar graphs (frequently arising in the solution of sparse linear systems derived from two-dimensional finite element method meshes) the resulting matrix has O(n log n) nonzeros, due to the planar separator theorem guaranteeing separators of size O(√n).[2] For arbitrary graphs there is a nested dissection that guarantees fill-in within a O(min{√d log^4 n, m^{1/4} log^{3.5} n}) factor of optimal, where d is the maximum degree and m is the number of non-zeros.[3]
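To make the recursion concrete, the following Python sketch orders the vertices of a small graph by nested dissection. It is only an illustration of the three steps above, not a reference implementation: the separator heuristic (picking the median vertex of a sorted vertex list) is a placeholder standing in for a real graph partitioner, and the graph is given as a plain adjacency dictionary.

def nested_dissection(vertices, adj):
    """Return an elimination order: both halves first, separator last.

    Placeholder separator choice; a real implementation would use a graph partitioner."""
    if len(vertices) <= 2:                      # small base case: order directly
        return sorted(vertices)
    vs = sorted(vertices)
    sep = {vs[len(vs) // 2]}                    # placeholder separator
    remaining = set(vs) - sep
    # split the remaining vertices into connected components of graph - sep
    comps, seen = [], set()
    for v in remaining:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(w for w in adj[u] if w in remaining and w not in seen)
        comps.append(comp)
    order = []
    for comp in comps:                          # recurse on each part first ...
        order += nested_dissection(comp, adj)
    order += sorted(sep)                        # ... then eliminate the separator
    return order

# path graph 0-1-2-3-4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(nested_dissection(set(adj), adj))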

12.1 See also

Cycle rank of a graph, or a symmetric Boolean matrix, measures the minimum parallel time needed to perform
Cholesky decomposition

Vertex separator

12.2 Notes
[1] George (1973).

[2] Lipton, Rose & Tarjan (1979); Gilbert & Tarjan (1986).

[3] Agrawal, Klein & Ravi (1993).


12.3 References
George, J. Alan (1973), Nested dissection of a regular finite element mesh, SIAM Journal on Numerical Analysis, 10 (2): 345–363, JSTOR 2156361, doi:10.1137/0710032.
Gilbert, John R. (1988), Some nested dissection order is nearly optimal, Information Processing Letters, 26
(6): 325328, doi:10.1016/0020-0190(88)90191-3.

Gilbert, John R.; Tarjan, Robert E. (1986), The analysis of a nested dissection algorithm, Numerische Math-
ematik, 50 (4): 377404, doi:10.1007/BF01396660.

Lipton, Richard J.; Rose, Donald J.; Tarjan, Robert E. (1979), Generalized nested dissection, SIAM Journal
on Numerical Analysis, 16 (2): 346358, JSTOR 2156840, doi:10.1137/0716027.

Agrawal, Ajit; Klein, Philip; Ravi, R. (1993), Cutting down on Fill Using Nested Dissection: Provably Good
Elimination Orderings, Graph Theory and Sparse Matrix Computation, Springer New York, pp. 3155, ISBN
978-1-4613-8371-0, doi:10.1007/978-1-4613-8369-7_2.
Chapter 13

Pentadiagonal matrix

In linear algebra, a pentadiagonal matrix is a matrix that is nearly diagonal; to be exact, it is a matrix in which the only nonzero entries are on the main diagonal and the first two diagonals above and below it. So it is of the form


$$\begin{pmatrix}
c_1 & d_1 & e_1 & 0 & \cdots & \cdots & 0 \\
b_1 & c_2 & d_2 & e_2 & \ddots & & \vdots \\
a_1 & b_2 & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & a_2 & \ddots & \ddots & \ddots & e_{n-3} & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & d_{n-2} & e_{n-2} \\
\vdots & & \ddots & a_{n-3} & b_{n-2} & c_{n-1} & d_{n-1} \\
0 & \cdots & \cdots & 0 & a_{n-2} & b_{n-1} & c_n
\end{pmatrix}$$

It follows that a pentadiagonal matrix has at most 5n − 6 nonzero entries, where n is the size of the matrix. Hence, pentadiagonal matrices are sparse. This makes them useful in numerical analysis.

13.1 See also


tridiagonal matrix
heptadiagonal matrix

This article incorporates material from Pentadiagonal matrix on PlanetMath, which is licensed under the Creative
Commons Attribution/Share-Alike License.

Chapter 14

Permutation matrix

In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one
entry of 1 in each row and each column and 0s elsewhere. Each such matrix, say P, represents a permutation of m
elements and, when used to multiply another matrix, say A, results in permuting the rows (when pre-multiplying, i.e.,
PA) or columns (when post-multiplying, AP) of the matrix A.

14.1 Denition

Given a permutation σ of m elements,

σ : {1, …, m} → {1, …, m}

represented in two-line form by

$$\begin{pmatrix} 1 & 2 & \cdots & m \\ \sigma(1) & \sigma(2) & \cdots & \sigma(m) \end{pmatrix},$$

there are two natural ways to associate the permutation with a permutation matrix; namely, starting with the m × m identity matrix, Im, either permute the columns or permute the rows, according to σ. Both methods of defining permutation matrices appear in the literature and the properties expressed in one representation can be easily converted to the other representation. This article will primarily deal with just one of these representations and the other will only be mentioned when there is a difference to be aware of.
The m × m permutation matrix Pσ = (pij) obtained by permuting the columns of the identity matrix Im, that is, for each i, pij = 1 if j = σ(i) and 0 otherwise, will be referred to as the column representation in this article.[1] Since the entries in row i are all 0 except that a 1 appears in column σ(i), we may write


$$P_\sigma = \begin{bmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(m)} \end{bmatrix},$$

where ej , a standard basis vector, denotes a row vector of length m with 1 in the jth position and 0 in every other
position.[2]
For example, the permutation matrix Pπ corresponding to the permutation

$$\pi = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}$$

is

$$P_\pi = \begin{bmatrix} e_{\pi(1)} \\ e_{\pi(2)} \\ e_{\pi(3)} \\ e_{\pi(4)} \\ e_{\pi(5)} \end{bmatrix} = \begin{bmatrix} e_1 \\ e_4 \\ e_2 \\ e_5 \\ e_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}.$$

Observe that the jth column of the I5 identity matrix now appears as the σ(j)th column of Pπ.
The other representation, obtained by permuting the rows of the identity matrix Im, that is, for each j, pij = 1 if i = σ(j) and 0 otherwise, will be referred to as the row representation.
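For illustration, the column representation defined above can be built with a small NumPy sketch; the permutation is supplied zero-based, and the function name is ours, not a standard library routine.

import numpy as np

def permutation_matrix(pi):
    """Column representation: row i of P is the standard basis row e_{pi(i)}.

    pi is given zero-based, e.g. the permutation 1->1, 2->4, 3->2, 4->5, 5->3
    from the text becomes [0, 3, 1, 4, 2]."""
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    for i, s in enumerate(pi):
        P[i, s] = 1                    # p_{i, pi(i)} = 1
    return P

pi = [0, 3, 1, 4, 2]
P = permutation_matrix(pi)
g = np.array([10, 20, 30, 40, 50])
print(P @ g)                           # rows of g permuted: [10 40 20 50 30]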

14.2 Properties
The column representation of a permutation matrix is used throughout this section, except when otherwise indicated.
Multiplying Pσ times a column vector g will permute the rows of the vector:

$$P_\sigma g = \begin{bmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(n)} \end{bmatrix} \begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{bmatrix} = \begin{bmatrix} g_{\sigma(1)} \\ g_{\sigma(2)} \\ \vdots \\ g_{\sigma(n)} \end{bmatrix}.$$

Repeated use of this result shows that if M is an appropriately sized matrix, the product PσM is just a permutation of the rows of M. However, observing that

$$P_\sigma e_k^{\mathsf{T}} = e_{\sigma^{-1}(k)}^{\mathsf{T}}$$

for each k shows that the permutation of the rows is given by σ⁻¹. (Mᵀ is the transpose of matrix M.)
As permutation matrices are orthogonal matrices (i.e., PσPσᵀ = I), the inverse matrix exists and can be written as

$$P_\sigma^{-1} = P_{\sigma^{-1}} = P_\sigma^{\mathsf{T}}.$$

Multiplying a row vector h times Pσ will permute the columns of the vector:

$$h P_\sigma = \begin{bmatrix} h_1 & h_2 & \cdots & h_n \end{bmatrix} \begin{bmatrix} e_{\sigma(1)} \\ e_{\sigma(2)} \\ \vdots \\ e_{\sigma(n)} \end{bmatrix} = \begin{bmatrix} h_{\sigma^{-1}(1)} & h_{\sigma^{-1}(2)} & \cdots & h_{\sigma^{-1}(n)} \end{bmatrix}.$$

Again, repeated application of this result shows that post-multiplying a matrix M by the permutation matrix Pσ, that is, M Pσ, results in permuting the columns of M. Notice also that

$$e_k P_\sigma = e_{\sigma(k)}.$$

Given two permutations σ and π of m elements, the corresponding permutation matrices Pσ and Pπ acting on column vectors are composed with

$$P_\sigma P_\pi g = P_{\pi\sigma}\, g.$$

The same matrices acting on row vectors (that is, post-multiplication) compose according to the same rule,

$$h P_\sigma P_\pi = h P_{\pi\sigma}.$$

To be clear, the above formulas use the prefix notation for permutation composition, that is,

(σπ)(k) = σ(π(k)).

Let Qπ be the permutation matrix corresponding to π in its row representation. The properties of this representation can be determined from those of the column representation since Qπ = Pπᵀ = P_{π⁻¹}. In particular,

$$Q_\pi e_k^{\mathsf{T}} = P_{\pi^{-1}} e_k^{\mathsf{T}} = e_{(\pi^{-1})^{-1}(k)}^{\mathsf{T}} = e_{\pi(k)}^{\mathsf{T}}.$$

From this it follows that

$$Q_\sigma Q_\pi g = Q_{\sigma\pi}\, g.$$

Similarly,

$$h Q_\sigma Q_\pi = h Q_{\sigma\pi}.$$

14.3 Matrix group


If (1) denotes the identity permutation, then P(1) is the identity matrix.
Let Sn denote the symmetric group, or group of permutations, on {1, 2, ..., n}. Since there are n! permutations, there are n! permutation matrices. By the formulas above, the n × n permutation matrices form a group under matrix multiplication with the identity matrix as the identity element.
The map Sn → A ⊂ GL(n, Z2) is a faithful representation. Thus, |A| = n!.

14.4 Doubly stochastic matrices


A permutation matrix is itself a doubly stochastic matrix, but it also plays a special role in the theory of these matrices. The Birkhoff–von Neumann theorem says that every doubly stochastic real matrix is a convex combination of permutation matrices of the same order, and the permutation matrices are precisely the extreme points of the set of doubly stochastic matrices. That is, the Birkhoff polytope, the set of doubly stochastic matrices, is the convex hull of the set of permutation matrices.[3]

14.5 Linear algebraic properties


The trace of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so it can be written in cycle form as π = (a1)(a2)⋯(ak)σ where σ has no fixed points, then e_{a1}, e_{a2}, …, e_{ak} are eigenvectors of the permutation matrix.
To calculate the eigenvalues of a permutation matrix Pσ, write σ as a product of cycles, say, σ = C1 C2 ⋯ Ct. Let the corresponding lengths of these cycles be l1, l2, …, lt, and let Ri (1 ≤ i ≤ t) be the set of complex solutions of x^{l_i} = 1. The union of all the Ri is the set of eigenvalues of the corresponding permutation matrix. The geometric multiplicity of each eigenvalue equals the number of Ri that contain it.[4]
From group theory we know that any permutation may be written as a product of transpositions. Therefore, any permutation matrix P factors as a product of row-interchanging elementary matrices, each having determinant −1. Thus the determinant of a permutation matrix P is just the signature of the corresponding permutation.

14.6 Examples

14.6.1 Permutation of rows and columns

When a permutation matrix P is multiplied from the left with a matrix M (PM) it will permute the rows of M (here the elements of a column vector); when P is multiplied from the right with M (MP) it will permute the columns of M (here the elements of a row vector).
Permutations of rows and columns are for example reflections (see below) and cyclic permutations (see cyclic permutation matrix).

14.6.2 Permutation of rows


The permutation matrix Pπ corresponding to the permutation

$$\pi = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}$$

is

$$P_\pi = \begin{bmatrix} e_{\pi(1)} \\ e_{\pi(2)} \\ e_{\pi(3)} \\ e_{\pi(4)} \\ e_{\pi(5)} \end{bmatrix} = \begin{bmatrix} e_1 \\ e_4 \\ e_2 \\ e_5 \\ e_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}.$$

Given a vector g,

$$P_\pi g = \begin{bmatrix} e_{\pi(1)} \\ e_{\pi(2)} \\ e_{\pi(3)} \\ e_{\pi(4)} \\ e_{\pi(5)} \end{bmatrix} \begin{bmatrix} g_1 \\ g_2 \\ g_3 \\ g_4 \\ g_5 \end{bmatrix} = \begin{bmatrix} g_1 \\ g_4 \\ g_2 \\ g_5 \\ g_3 \end{bmatrix}.$$

14.7 Explanation
A permutation matrix will always be in the form


$$\begin{bmatrix} e_{a_1} \\ e_{a_2} \\ \vdots \\ e_{a_j} \end{bmatrix}$$

where e_{a_i} represents the ith basis vector (as a row) for R^j, and where

$$\begin{bmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{bmatrix}$$

is the permutation form of the permutation matrix.


Now, in performing matrix multiplication, one essentially forms the dot product of each row of the first matrix with each column of the second. In this instance, we will be forming the dot product of each row of this matrix with the vector of elements we want to permute. That is, for example, with v = (g0, …, g5)ᵀ,

e_{a_i} · v = g_{a_i}

So, the product of the permutation matrix with the vector v above will be a vector of the form (g_{a_1}, g_{a_2}, …, g_{a_j}), and this then is a permutation of v since we have said that the permutation form is

$$\begin{pmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{pmatrix}.$$

So, permutation matrices do indeed permute the order of elements in vectors multiplied with them.

14.8 See also


Alternating sign matrix

Generalized permutation matrix

14.9 References
[1] Terminology is not standard. Most authors choose one representation to be consistent with other notation they have intro-
duced, so there is generally no need to supply a name.

[2] Brualdi (2006) p.2

[3] Brualdi (2006) p.19

[4] J Najnudel, A Nikeghbali 2010 p.4

Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications.
108. Cambridge: Cambridge University Press. ISBN 0-521-86565-4. Zbl 1106.05001.

Joseph, Najnudel; Ashkan, Nikeghbali (2010), The Distribution of Eigenvalues of Randomized Permutation
Matrices (PDF)
Chapter 15

Reverse Cuthill-McKee algorithm

Cuthill-McKee ordering of a matrix

In numerical linear algebra, the Cuthill–McKee algorithm (CM), named for Elizabeth Cuthill and James[1] McKee,[2]


RCM ordering of the same matrix

is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George is the same algorithm but with the resulting index numbers reversed.[3] In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied.[4]
The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels Ri for i = 1, 2, … until all nodes are exhausted. The set Ri+1 is created from set Ri by listing all vertices adjacent to all nodes in Ri. These nodes are listed in increasing degree. This last detail is the only difference with the breadth-first search algorithm.

15.1 Algorithm

Given a symmetric n × n matrix we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix.
The algorithm produces an ordered n-tuple R of vertices which is the new order of the vertices.

First we choose a peripheral vertex (the vertex with the lowest degree) x and set R := ({x}) .
Then for i = 1, 2, . . . we iterate the following steps while |R| < n

Construct the adjacency set Ai of Ri (with Ri the i-th component of R ) and exclude the vertices we already
have in R

Ai := Adj(Ri ) \ R

Sort Ai with ascending vertex order (vertex degree).


Append Ai to the Result set R .

In other words, number the vertices according to a particular breadth-first traversal where neighboring vertices are visited in order from lowest to highest vertex degree.
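The following Python sketch illustrates this traversal under the stated rules (start from a minimum-degree vertex as a cheap stand-in for a peripheral vertex, visit neighbours in order of increasing degree, and reverse the result for RCM). It assumes the sparsity pattern is given as an adjacency dictionary; the function name is only for illustration.

from collections import deque

def cuthill_mckee(adj, reverse=True):
    """Breadth-first relabelling; neighbours are visited in order of increasing
    degree.  adj maps each vertex to a set of neighbours (symmetric pattern).
    With reverse=True this gives the reverse Cuthill-McKee (RCM) ordering."""
    degree = {v: len(adj[v]) for v in adj}
    visited, order = set(), []
    # start from a vertex of minimum degree (stand-in for a peripheral vertex)
    for start in sorted(adj, key=lambda v: degree[v]):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in sorted(adj[u] - visited, key=lambda v: degree[v]):
                visited.add(w)
                queue.append(w)
    return order[::-1] if reverse else order

adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2}, 4: {0}}
print(cuthill_mckee(adj))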

15.2 See also


Graph bandwidth

Sparse matrix

15.3 References
[1] Recommendations for ship hull surface representation, page 6

[2] E. Cuthill and J. McKee. Reducing the bandwidth of sparse symmetric matrices In Proc. 24th Nat. Conf. ACM, pages
157172, 1969.

[3] http://ciprian-zavoianu.blogspot.ch/2009/01/project-bandwidth-reduction.html

[4] J. A. George and J. W-H. Liu, Computer Solution of Large Sparse Positive Denite Systems, Prentice-Hall, 1981

Cuthill–McKee documentation for the Boost C++ Libraries.

A detailed description of the Cuthill–McKee algorithm.

symrcm, MATLAB's implementation of RCM.

reverse_cuthill_mckee, RCM routine from SciPy written in Cython.


Chapter 16

Shear matrix

In mathematics, a shear matrix or transvection is an elementary matrix that represents the addition of a multiple
of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of
the zero elements with a non-zero value.
A typical shear matrix is shown below:


$$S = \begin{pmatrix} 1 & 0 & 0 & \lambda & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

The name shear reflects the fact that the matrix represents a shear transformation. Geometrically, such a transformation takes pairs of points in a linear space, that are purely axially separated along the axis whose row in the matrix contains the shear element, and effectively replaces those pairs by pairs whose separation is no longer purely axial but has two vector components. Thus, the shear axis is always an eigenvector of S.
A shear parallel to the x axis results in x′ = x + λy and y′ = y. In matrix form:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.$$

Similarly, a shear parallel to the y axis has x′ = x and y′ = y + λx. In matrix form:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.$$

Clearly the determinant will always be 1, as no matter where the shear element is placed, it will be a member of a skew-diagonal that also contains zero elements (as all skew-diagonals have length at least two), hence its product will remain zero and won't contribute to the determinant. Thus every shear matrix has an inverse, and the inverse is simply a shear matrix with the shear element negated, representing a shear transformation in the opposite direction. In fact, this is part of an easily derived more general result: if S is a shear matrix with shear element λ, then Sⁿ is a shear matrix whose shear element is simply nλ. Hence, raising a shear matrix to a power n multiplies its shear factor by n.
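A small NumPy illustration of the two-dimensional case above, checking the shear action, the unit determinant, and the behaviour of powers (the numbers chosen here are arbitrary):

import numpy as np

lam = 2.0
S = np.array([[1.0, lam],
              [0.0, 1.0]])            # shear parallel to the x axis

p = np.array([3.0, 1.0])              # a point (x, y)
print(S @ p)                          # [5. 1.]  ->  x' = x + lam*y, y' = y
print(np.linalg.det(S))               # 1.0, as stated above
print(np.linalg.matrix_power(S, 3))   # shear element becomes 3*lam = 6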

16.1 Properties
If S is an n × n shear matrix, then:


S has rank n and therefore is invertible

1 is the only eigenvalue of S, so det S = 1 and trace S = n


the eigenspace of S has n-1 dimensions.

S is asymmetric
S may be made into a block matrix by at most 1 column interchange and 1 row interchange operation

the area, volume, or any higher-order interior capacity of a polytope is invariant under the shear transformation of the polytope's vertices.

16.2 Applications
Shear matrices are often used in computer graphics.[1]

16.3 See also


Transformation matrix

16.4 Notes
[1] Foley et al. (1991, pp. 207208,216217)

16.5 References
Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1991), Computer Graphics: Principles
and Practice (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-12110-7
Chapter 17

Shift matrix

In mathematics, a shift matrix is a binary matrix with ones only on the superdiagonal or subdiagonal, and zeroes elsewhere. A shift matrix U with ones on the superdiagonal is an upper shift matrix. The alternative subdiagonal matrix L is unsurprisingly known as a lower shift matrix. The (i, j) components of U and L are

U_{ij} = δ_{i+1,j},   L_{ij} = δ_{i,j+1},

where δ_{ij} is the Kronecker delta symbol.


For example, the 5 × 5 shift matrices are

$$U_5 = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \qquad L_5 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$$

Clearly, the transpose of a lower shift matrix is an upper shift matrix and vice versa.
As a linear transformation, a lower shift matrix shifts the components of a row vector one position to the right, with a zero appearing in the first position. An upper shift matrix shifts the components of a row vector one position to the left, with a zero appearing in the last position.[1]
Premultiplying a matrix A by a lower shift matrix results in the elements of A being shifted downward by one posi-
tion, with zeroes appearing in the top row. Postmultiplication by a lower shift matrix results in a shift left. Similar
operations involving an upper shift matrix result in the opposite shift.
Clearly all shift matrices are nilpotent; an n by n shift matrix S becomes the null matrix when raised to the power of
its dimension n.
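For illustration, the shift matrices and the nilpotency property can be checked with NumPy (np.eye with the k offset places the ones on the super- or subdiagonal):

import numpy as np

n = 5
U = np.eye(n, k=1)          # ones on the superdiagonal: upper shift matrix
L = np.eye(n, k=-1)         # ones on the subdiagonal: lower shift matrix

A = np.arange(1, n * n + 1).reshape(n, n)
print(L @ A)                           # rows of A shifted down, zeros in the top row
print(np.linalg.matrix_power(U, n))    # the n-th power is the zero matrix (nilpotency)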

17.1 Properties
Let L and U be the n by n lower and upper shift matrices, respectively. The following properties hold for both U and
L. Let us therefore only list the properties for U:

det(U) = 0
trace(U) = 0
rank(U) = n − 1
The characteristic polynomial of U is


p_U(λ) = (−1)ⁿ λⁿ.

Uⁿ = 0. This follows from the previous property by the Cayley–Hamilton theorem.

The permanent of U is 0.

The following properties show how U and L are related:

Lᵀ = U; Uᵀ = L

The null spaces of U and L are

N(U) = span{(1, 0, …, 0)ᵀ},
N(L) = span{(0, …, 0, 1)ᵀ}.

The spectrum of U and L is {0}. The algebraic multiplicity of 0 is n, and its geometric multiplicity is 1. From the expressions for the null spaces, it follows that (up to a scaling) the only eigenvector for U is (1, 0, …, 0)ᵀ, and the only eigenvector for L is (0, …, 0, 1)ᵀ.

For LU and UL we have

UL = I − diag(0, …, 0, 1),
LU = I − diag(1, 0, …, 0).
These matrices are both idempotent, symmetric, and have the same rank as U and L.

L^{n−a} U^{n−a} + U^a L^a = U^{n−a} L^{n−a} + L^a U^a = I (the identity matrix), for any integer a between 0 and n inclusive.

If N is any nilpotent matrix, then N is similar to a block diagonal matrix of the form

$$\begin{pmatrix} S_1 & 0 & \ldots & 0 \\ 0 & S_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & S_r \end{pmatrix}$$

where each of the blocks S1, S2, …, Sr is a shift matrix (possibly of different sizes).[2][3]

17.2 Examples


$$S = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}; \qquad A = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 1 \\ 1 & 2 & 3 & 2 & 1 \\ 1 & 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

Then

$$SA = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 1 \\ 1 & 2 & 3 & 2 & 1 \\ 1 & 2 & 2 & 2 & 1 \end{pmatrix}; \qquad AS = \begin{pmatrix} 1 & 1 & 1 & 1 & 0 \\ 2 & 2 & 2 & 1 & 0 \\ 2 & 3 & 2 & 1 & 0 \\ 2 & 2 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 \end{pmatrix}.$$

Clearly there are many possible permutations. For example, SᵀAS is equal to the matrix A shifted up and left along the main diagonal.


$$S^{\mathsf{T}} A S = \begin{pmatrix} 2 & 2 & 2 & 1 & 0 \\ 2 & 3 & 2 & 1 & 0 \\ 2 & 2 & 2 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

17.3 Shift matrix and Null matrix



$$S = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \qquad A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$$

$$AS = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} b & c & 0 \\ e & f & 0 \\ h & i & 0 \end{pmatrix}$$

$$ASS = \begin{pmatrix} b & c & 0 \\ e & f & 0 \\ h & i & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} c & 0 & 0 \\ f & 0 & 0 \\ i & 0 & 0 \end{pmatrix}$$

$$ASSS = \begin{pmatrix} c & 0 & 0 \\ f & 0 & 0 \\ i & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

ASSS = null matrix (zero matrix)

17.4 See also


Nilpotent matrix

17.5 Notes
[1] Beauregard & Fraleigh (1973, p. 312)

[2] Beauregard & Fraleigh (1973, pp. 312,313)

[3] Herstein (1964, p. 250)

17.6 References
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016

17.7 External links


Shift Matrix - entry in the Matrix Reference Manual
Chapter 18

Single-entry matrix

In mathematics a single-entry matrix is a matrix where a single element is one and the rest of the elements are
zero,[1][2] e.g.,


$$\mathbf{J}^{23} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$

It is a specific type of sparse matrix. The single-entry matrix can be regarded as a row-selector when it is multiplied on the left side of a matrix, e.g.:


$$\mathbf{J}^{23} A = \begin{pmatrix} 0 & 0 & 0 \\ a_{31} & a_{32} & a_{33} \\ 0 & 0 & 0 \end{pmatrix}.$$

Alternatively, a column-selector when multiplied on the right side:


$$A\mathbf{J}^{23} = \begin{pmatrix} 0 & 0 & a_{12} \\ 0 & 0 & a_{22} \\ 0 & 0 & a_{32} \end{pmatrix}.$$

The name, single-entry matrix, is not common, but seen in a few works.[3]

18.1 References
[1] Kaare Brandt Petersen & Michael Syskind Pedersen (2008-02-16). The Matrix Cookbook (PDF).

[2] Shohei Shimizu, Patrick O. Hoyer, Aapo Hyvärinen & Antti Kerminen (2006). "A Linear Non-Gaussian Acyclic Model for Causal Discovery" (PDF). Journal of Machine Learning Research. 7: 2003–2030.

[3] Examples:

Distributed Gain Matrix Optimization in Non-Regenerative MIMO Relay Networks (PDF).


Marcel Blattner. B-Rank: A top N Recommendation Algorithm (PDF).

Chapter 19

Skyline matrix

In scientific computing, skyline matrix storage, or SKS, or variable band matrix storage, or envelope storage scheme[1] is a form of sparse matrix storage format that reduces the storage requirement of a matrix more than banded storage. In banded storage, all entries within a fixed distance from the diagonal (called the half-bandwidth) are stored. In column-oriented skyline storage, only the entries from the first nonzero entry to the last nonzero entry in each column are stored. There is also row-oriented skyline storage, and, for symmetric matrices, only one triangle is usually stored.[2]
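As an illustration of column-oriented skyline storage (a sketch, not a reference implementation), the following packs the upper-triangular envelope of a symmetric matrix into a value array plus column pointers; note that explicit zeros lying inside the envelope are stored as well:

import numpy as np

def to_skyline(A):
    """Column-oriented skyline storage of the upper triangle of a symmetric
    matrix: for each column j, keep the entries from the first structurally
    nonzero row down to the diagonal, plus a pointer to where each column
    starts in the packed value array."""
    n = A.shape[0]
    values, col_ptr = [], [0]
    for j in range(n):
        nz = np.nonzero(A[: j + 1, j])[0]
        first = nz[0] if nz.size else j      # empty column: keep the diagonal only
        values.extend(A[first : j + 1, j])
        col_ptr.append(len(values))
    return np.array(values), np.array(col_ptr)

A = np.array([[4., 1., 0., 2.],
              [1., 5., 1., 0.],
              [0., 1., 6., 1.],
              [2., 0., 1., 7.]])
vals, ptr = to_skyline(A)
print(vals)   # packed column envelopes (including the explicit zero in column 4)
print(ptr)    # column start offsets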
Skyline storage has become very popular in finite element codes for structural mechanics, because the skyline is preserved by Cholesky decomposition (a method of solving systems of linear equations with a symmetric, positive-definite matrix; all fill-in falls within the skyline), and systems of equations from finite elements have a relatively small skyline. In addition, the effort of coding skyline Cholesky[3] is about the same as for Cholesky for banded matrices (available for banded matrices, e.g. in LAPACK; for a prototype skyline code, see [3]).
Before storing a matrix in skyline format, the rows and columns are typically renumbered to reduce the size of the skyline (the number of nonzero entries stored) and to decrease the number of operations in the skyline Cholesky algorithm. The same heuristic renumbering algorithms that reduce the bandwidth are also used to reduce the skyline. The basic and one of the earliest algorithms to do that is the reverse Cuthill–McKee algorithm.
However, skyline storage is not as popular for very large systems (many millions of equations) because skyline Cholesky is not so easily adapted for massively parallel computing, and general sparse methods,[4] which store only the nonzero entries of the matrix, become more efficient for very large problems due to much less fill-in.

19.1 See also


Sparse matrix
Band matrix
Frontal solver

19.2 References
[1] Watkins, David S. (2002), Fundamentals of matrix computations (Second ed.), New York: John Wiley & Sons, Inc., p. 60,
ISBN 0-471-21394-2
[2] Barrett, Richard; Berry; Chan; Demmel; Donato; Dongarra; Eijkhout; Pozo; Romine; Van der Vorst (1994), "Skyline Storage (SKS)", Templates for the solution of linear systems, SIAM, ISBN 0-89871-328-5
[3] George, Alan; Liu, Joseph W. H. (1981), Computer solution of large sparse positive definite systems, Prentice-Hall Inc., ISBN 0-13-165274-5. The book also contains the description and source code of simple sparse matrix routines, still useful even if long superseded.
[4] Duff, Iain S.; Erisman, Albert M.; Reid, John K. (1986), Direct methods for sparse matrices, Oxford University Press, ISBN 0-19-853408-6

Chapter 20

Sparse matrix

A sparse matrix obtained when solving a finite element problem in two dimensions. The non-zero elements are shown in black.

In numerical analysis and computer science, a sparse matrix or sparse array is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is called the sparsity of the matrix (which is equal to 1 minus the density of the matrix).
Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls had springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections.
Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations. When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and memory are wasted on the zeroes. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.

20.1 Storing a sparse matrix


A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element ai,j of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to be stored).
In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups:

Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.

Those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column).

20.1.1 Dictionary of keys (DOK)

DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[1]

20.1.2 List of lists (LIL)

LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are
kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[2]

20.1.3 Coordinate list (COO)

COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted (by row index, then column index) to
improve random access times. This is another format which is good for incremental matrix construction.[3]
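The construct-then-convert workflow described for these formats can be illustrated with SciPy's sparse module (referenced in the notes below); the matrix entries here are arbitrary example values:

import numpy as np
from scipy.sparse import coo_matrix

# (row, column, value) triples for the nonzero entries
rows = np.array([0, 1, 1, 2, 3])
cols = np.array([0, 1, 3, 2, 5])
vals = np.array([10., 30., 40., 50., 80.])

M = coo_matrix((vals, (rows, cols)), shape=(4, 6))  # COO is convenient for construction
M_csr = M.tocsr()                                   # convert for fast arithmetic
print(M_csr @ np.ones(6))                           # matrix-vector product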

20.1.4 Compressed sparse row (CSR, CRS or Yale format)


The compressed sparse row (CSR) or compressed row storage (CRS) format represents a matrix M by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.[4]
The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (A, IA, JA). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.)

The array A is of length NNZ and holds all the nonzero entries of M in left-to-right top-to-bottom ("row-major") order.
The array IA is of length m + 1. It is defined by this recursive definition:
IA[0] = 0
IA[i] = IA[i − 1] + (number of nonzero elements on the (i − 1)-th row in the original matrix)
Thus, the first m elements of IA store the index into A of the first nonzero element in each row of M, and the last element IA[m] stores NNZ, the number of elements in A, which can also be thought of as the index in A of the first element of a phantom row just beyond the end of the matrix M. The values of the i-th row of the original matrix are read from the elements A[IA[i]] to A[IA[i + 1] − 1] (inclusive on both ends), i.e. from the start of one row to the last index just before the start of the next.[5]
The third array, JA, contains the column index in M of each element of A and hence is of length NNZ as well.

For example, the matrix


$$\begin{pmatrix} 0 & 0 & 0 & 0 \\ 5 & 8 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 6 & 0 & 0 \end{pmatrix}$$

is a 4 × 4 matrix with 4 nonzero elements, hence

A  = [ 5 8 3 6 ]
IA = [ 0 0 2 3 4 ]
JA = [ 0 1 2 1 ]

So, in array JA, the element 5 from A has column index 0, 8 and 6 have index 1, and element 3 has index 2.
In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m (n − 1) − 1) / 2. Another example, the matrix


$$\begin{pmatrix} 10 & 20 & 0 & 0 & 0 & 0 \\ 0 & 30 & 0 & 40 & 0 & 0 \\ 0 & 0 & 50 & 60 & 70 & 0 \\ 0 & 0 & 0 & 0 & 0 & 80 \end{pmatrix}$$

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

A  = [ 10 20 30 40 50 60 70 80 ]
IA = [ 0 2 4 7 8 ]
JA = [ 0 1 1 3 2 3 4 5 ]

The whole is stored as 21 entries.

IA splits the array A into rows: (10, 20) (30, 40) (50, 60, 70) (80);
JA aligns values in columns: (10, 20, ...) (0, 30, 0, 40, ...)(0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80).

Note that in this format, the first value of IA is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula IA[i + 1] − IA[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.
The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly
as described above, with three arrays; the new format achieves a further compression by combining IA and JA into a
single array.[6]
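To make the three-array layout concrete, the following short sketch expands (A, IA, JA) back into a dense array, using the second example above:

import numpy as np

def csr_to_dense(A, IA, JA, shape):
    """Expand the (A, IA, JA) arrays described above into a dense matrix."""
    M = np.zeros(shape)
    m = shape[0]
    for i in range(m):                       # row i occupies A[IA[i] : IA[i+1]]
        for k in range(IA[i], IA[i + 1]):
            M[i, JA[k]] = A[k]
    return M

A  = [10, 20, 30, 40, 50, 60, 70, 80]
IA = [0, 2, 4, 7, 8]
JA = [0, 1, 1, 3, 2, 3, 4, 5]
print(csr_to_dense(A, IA, JA, (4, 6)))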

20.1.5 Compressed sparse column (CSC or CCS)

CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. I.e. CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. See scipy.sparse.csc_matrix. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).

20.2 Special structure

20.2.1 Banded

Main article: Band matrix

An important special type of sparse matrix is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry ai,j vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that ai,j = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.

$$\begin{pmatrix}
X & X & \cdot & X & \cdot & \cdot & \cdot \\
X & X & X & \cdot & X & \cdot & \cdot \\
\cdot & X & X & \cdot & \cdot & X & \cdot \\
X & \cdot & \cdot & X & \cdot & \cdot & X \\
\cdot & X & \cdot & \cdot & X & X & \cdot \\
\cdot & \cdot & X & \cdot & X & X & X \\
\cdot & \cdot & \cdot & X & \cdot & X & X
\end{pmatrix}$$

Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′ with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.

20.2.2 Diagonal

A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.

20.2.3 Symmetric

A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.

20.3 Reducing ll-in


The fill-in of a matrix consists of those entries which change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
There are other methods than the Cholesky decomposition in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods. And symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst case fill-in.

20.4 Solving sparse matrix equations


Both iterative and direct methods exist for sparse matrix solving.
Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products Axi, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
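As an illustration (assuming SciPy is available), a conjugate gradient solve of a sparse symmetric positive-definite system; the 1-D Poisson matrix used here is just a convenient test case:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite system: 1-D Poisson matrix
n = 1000
A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

x, info = cg(A, b, atol=1e-10)   # conjugate gradient; info == 0 means convergence
print(info, np.linalg.norm(A @ x - b))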

20.5 Software
Several software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are
open-source:

PETSc, a huge C library, contains many different matrix solvers.


Eigen3 is a C++ library that contains several sparse matrix solvers. However, none of them are parallelized.
MUMPS (MUltifrontal Massively Parallel sparse direct Solver), written in Fortran90, is a frontal solver
PaStix
SuperLU

20.6 History
The term sparse matrix was possibly coined by Harry Markowitz, who triggered some pioneering work but then left the field.[7]

20.7 See also


Matrix representation
Pareto principle
Ragged matrix
Skyline matrix
Sparse graph code
Sparse le
Harwell-Boeing file format
Matrix Market exchange formats

20.8 Notes
[1] See scipy.sparse.dok_matrix

[2] See scipy.sparse.lil_matrix

[3] See scipy.sparse.coo_matrix

[4] Buluç, Aydın; Fineman, Jeremy T.; Frigo, Matteo; Gilbert, John R.; Leiserson, Charles E. (2009). "Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks" (PDF). ACM Symp. on Parallelism in Algorithms and Architectures. CiteSeerX 10.1.1.211.5256.

[5] netlib.org

[6] Bank, Randolph E.; Douglas, Craig C. (1993), Sparse Matrix Multiplication Package (SMMP)" (PDF), Advances in
Computational Mathematics, 1

[7] pp. 9,10 in Oral history interview with Harry M. Markowitz

20.9 References
Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins.
ISBN 978-0-8018-5414-9.

Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York:
Springer-Verlag. ISBN 978-0-387-95452-3.

Tewarson, Reginald P. (May 1973). Sparse Matrices (Part of the Mathematics in Science & Engineering series). Academic Press Inc. (This book, by a professor at the State University of New York at Stony Brook, was the first book exclusively dedicated to Sparse Matrices. Graduate courses using it as a textbook were offered at that University in the early 1980s.)

Bank, Randolph E.; Douglas, Craig C. Sparse Matrix Multiplication Package (PDF).
Pissanetzky, Sergio (1984). Sparse Matrix Technology. Academic Press.

Snay, Richard A. (1976). "Reducing the profile of sparse symmetric matrices". Bulletin Géodésique. 50 (4): 341. doi:10.1007/BF02521587. Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD.

20.10 Further reading


Gibbs, Norman E.; Poole, William G.; Stockmeyer, Paul K. (1976). "A comparison of several bandwidth and profile reduction algorithms". ACM Transactions on Mathematical Software. 2 (4): 322–330. doi:10.1145/355705.355707.

Gilbert, John R.; Moler, Cleve; Schreiber, Robert (1992). Sparse matrices in MATLAB: Design and Imple-
mentation. SIAM Journal on Matrix Analysis and Applications. 13 (1): 333356. doi:10.1137/0613024.

Sparse Matrix Algorithms Research at the University of Florida, containing the UF sparse matrix collection.
SMALL project, an EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.
Chapter 21

Sparse matrix-vector multiplication

Sparse matrix-vector multiplication (SpMV) of the form y = Ax is a widely used computational kernel existing in many scientific applications. The input matrix A is sparse. The input vector x and the output vector y are dense. In the case of a repeated y = Ax operation involving the same input matrix A but possibly changing numerical values of its elements, A can be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.[1]
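For illustration, a minimal sequential CSR kernel for y = Ax (a production implementation would be blocked, vectorized or parallelized); the arrays reuse the CSR example from the previous chapter:

import numpy as np

def spmv_csr(vals, col_idx, row_ptr, x):
    """y = A x for a CSR matrix given by (vals, col_idx, row_ptr)."""
    m = len(row_ptr) - 1
    y = np.zeros(m)
    for i in range(m):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

vals    = [10, 20, 30, 40, 50, 60, 70, 80]
col_idx = [0, 1, 1, 3, 2, 3, 4, 5]
row_ptr = [0, 2, 4, 7, 8]
x = np.ones(6)
print(spmv_csr(vals, col_idx, row_ptr, x))   # [30. 70. 180. 80.]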

21.1 References
[1] Hypergraph Partitioning Based Models and Methods for Exploiting Cache Locality in Sparse Matrix-Vector Multiplica-
tion. Retrieved 13 April 2014.

Chapter 22

Tridiagonal matrix

In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the first diagonal below this, and the first diagonal above the main diagonal.
For example, the following matrix is tridiagonal:


$$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}.$$

The determinant of a tridiagonal matrix is given by the continuant of its elements.[1]


An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos
algorithm.

22.1 Properties

A tridiagonal matrix is a matrix that is both an upper and lower Hessenberg matrix.[2] In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + q/2 = n -- the dimension of the tridiagonal. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies a_{k,k+1} a_{k+1,k} > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its eigenvalues are real. If we replace the strict inequality by a_{k,k+1} a_{k+1,k} ≥ 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.[3]
The set of all n × n tridiagonal matrices forms a 3n − 2 dimensional vector space.
Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

22.1.1 Determinant

Main article: continuant (mathematics)

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.[4] Write f1 = |a1| = a1 and

$$f_n = \begin{vmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
 & c_2 & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{n-1} \\
 & & & c_{n-1} & a_n
\end{vmatrix}.$$

The sequence (fi) is called the continuant and satisfies the recurrence relation

f_n = a_n f_{n−1} − c_{n−1} b_{n−1} f_{n−2}

with initial values f0 = 1 and f−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
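The recurrence translates directly into code; the following sketch evaluates the continuant for the example matrix at the start of this chapter (its determinant is −46):

def tridiagonal_determinant(a, b, c):
    """Determinant of a tridiagonal matrix with main diagonal a (length n),
    superdiagonal b and subdiagonal c (length n-1), via the continuant
    recurrence f_n = a_n f_{n-1} - c_{n-1} b_{n-1} f_{n-2}."""
    f_prev, f_curr = 1.0, a[0]        # f_0 = 1, f_1 = a_1
    for k in range(1, len(a)):
        f_prev, f_curr = f_curr, a[k] * f_curr - c[k - 1] * b[k - 1] * f_prev
    return f_curr

# the example matrix from the start of this chapter
a = [1, 4, 3, 3]          # main diagonal
b = [4, 1, 4]             # superdiagonal
c = [3, 2, 1]             # subdiagonal
print(tridiagonal_determinant(a, b, c))   # -46.0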

22.1.2 Inversion
The inverse of a non-singular tridiagonal matrix T,

$$T = \begin{pmatrix}
a_1 & b_1 & & & \\
c_1 & a_2 & b_2 & & \\
 & c_2 & \ddots & \ddots & \\
 & & \ddots & \ddots & b_{n-1} \\
 & & & c_{n-1} & a_n
\end{pmatrix},$$

is given by

$$(T^{-1})_{ij} = \begin{cases}
(-1)^{i+j}\, b_i \cdots b_{j-1}\, \theta_{i-1} \phi_{j+1} / \theta_n & \text{if } i < j \\
\theta_{i-1} \phi_{j+1} / \theta_n & \text{if } i = j \\
(-1)^{i+j}\, c_j \cdots c_{i-1}\, \theta_{j-1} \phi_{i+1} / \theta_n & \text{if } i > j
\end{cases}$$

where the θi satisfy the recurrence relation

θ_i = a_i θ_{i−1} − b_{i−1} c_{i−1} θ_{i−2}   for i = 2, 3, …, n

with initial conditions θ0 = 1, θ1 = a1, and the φi satisfy

φ_i = a_i φ_{i+1} − b_i c_i φ_{i+2}   for i = n − 1, …, 1

with initial conditions φ_{n+1} = 1 and φ_n = a_n.[5][6]


Closed form solutions can be computed for special cases such as symmetric matrices with all off-diagonal elements equal[7] or Toeplitz matrices[8] and for the general case as well.[9][10]
In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.[11]

22.1.3 Solution of linear system


Main article: tridiagonal matrix algorithm

A system of equations Ax = b for b ∈ Rⁿ can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm, requiring O(n) operations.[12]
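A minimal sketch of this algorithm (the Thomas algorithm) without pivoting, so it assumes a well-conditioned, e.g. diagonally dominant, system with n ≥ 2; variable names are ours:

import numpy as np

def thomas_solve(a, b, c, d):
    """Solve T x = d where T has main diagonal a (length n), superdiagonal b
    and subdiagonal c (length n-1).  O(n) forward sweep + back substitution;
    no pivoting is performed."""
    n = len(a)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):
        denom = a[i] - c[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = b[i] / denom
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

a = np.array([4., 4., 4., 4.])
b = np.array([1., 1., 1.])
c = np.array([1., 1., 1.])
d = np.array([5., 5., 5., 5.])
print(thomas_solve(a, b, c, d))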

22.1.4 Eigenvalues

When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues, namely a + 2√(bc) cos(kπ/(n + 1)), for k = 1, …, n.[13][14]
A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero.[15] Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring O(n²) operations for a matrix of size n × n, although fast algorithms exist which require only O(n ln n).[16]

22.2 Computer programming


A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to tridiagonal form as a first step.
A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.

22.3 See also


Pentadiagonal matrix

22.4 Notes
[1] Thomas Muir (1960). A treatise on the theory of determinants. Dover Publications. pp. 516525.

[2] Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 28. ISBN 0521386322.

[3] Horn & Johnson, page 174

[4] El-Mikkawy, M. E. A. (2004). On the inverse of a general tridiagonal matrix. Applied Mathematics and Computation.
150 (3): 669679. doi:10.1016/S0096-3003(03)00298-4.

[5] Da Fonseca, C. M. (2007). On the eigenvalues of some tridiagonal matrices. Journal of Computational and Applied
Mathematics. 200: 283286. doi:10.1016/j.cam.2005.08.047.

[6] Usmani, R. A. (1994). Inversion of a tridiagonal jacobi matrix. Linear Algebra and its Applications. 212-213: 413414.
doi:10.1016/0024-3795(94)90414-6.

[7] Hu, G. Y.; O'Connell, R. F. (1996). Analytical inversion of symmetric tridiagonal matrices. Journal of Physics A:
Mathematical and General. 29 (7): 1511. doi:10.1088/0305-4470/29/7/020.

[8] Huang, Y.; McColl, W. F. (1997). Analytical inversion of general tridiagonal matrices. Journal of Physics A: Mathe-
matical and General. 30 (22): 7919. doi:10.1088/0305-4470/30/22/026.

[9] Mallik, R. K. (2001). The inverse of a tridiagonal matrix. Linear Algebra and its Applications. 325: 109139.
doi:10.1016/S0024-3795(00)00262-7.

[10] Kılıç, E. (2008). "Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions". Applied Mathematics and Computation. 197: 345–357. doi:10.1016/j.amc.2007.07.046.

[11] Raf Vandebril; Marc Van Barel; Nicola Mastronardi (2008). Matrix Computations and Semiseparable Matrices. Volume I:
Linear Systems. JHU Press. Theorem 1.38, p. 41. ISBN 978-0-8018-8714-7.

[12] Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). The Johns Hopkins University Press. ISBN
0-8018-5414-8.

[13] Noschese, S.; Pasquini, L.; Reichel, L. (2013). Tridiagonal Toeplitz matrices: Properties and novel applications. Nu-
merical Linear Algebra with Applications. 20 (2): 302. doi:10.1002/nla.1811.

[14] This can also be written as a − 2√(bc) cos(kπ/(n + 1)), because cos(x) = −cos(π − x), as is done in: Kulkarni, D.; Schmidt, D.; Tsui, S. K. (1999). "Eigenvalues of tridiagonal pseudo-Toeplitz matrices". Linear Algebra and its Applications. 297: 63. doi:10.1016/S0024-3795(99)00114-7.

[15] Parlett, B.N. (1980). The Symmetric Eigenvalue Problem. Prentice Hall, Inc.

[16] Coakley, E.S.; Rokhlin, V. (2012). A fast divide-and-conquer algorithm for computing the spectra of real symmetric
tridiagonal matrices. Applied and Computational Harmonic Analysis. 34 (3): 379414. doi:10.1016/j.acha.2012.06.003.

22.5 External links


Tridiagonal and Bidiagonal Matrices in the LAPACK manual.
Moawwad El-Mikkawy, Abdelrahman Karawia (2006). Inversion of general tridiagonal matrices (PDF).
Applied Mathematics Letters. 19 (8): 712720. doi:10.1016/j.aml.2005.11.012.
High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form

Tridiagonal linear system solver in C++


Chapter 23

Z-matrix (mathematics)

For the chemistry related meaning of this term see Z-matrix (chemistry).

In mathematics, the class of Z-matrices are those matrices whose off-diagonal entries are less than or equal to zero; that is, a Z-matrix Z satisfies

Z = (z_{ij});   z_{ij} ≤ 0 for i ≠ j.

Note that this definition coincides precisely with that of a negated Metzler matrix or quasipositive matrix, thus the term quasinegative matrix appears from time to time in the literature, though this is rare and usually only in contexts where references to quasipositive matrices are made.
The Jacobian of a competitive dynamical system is a Z-matrix by definition. Likewise, if the Jacobian of a cooperative dynamical system is J, then (−J) is a Z-matrix.
Related classes are L-matrices, M-matrices, P-matrices, Hurwitz matrices and Metzler matrices. L-matrices have the additional property that all diagonal entries are greater than zero. M-matrices have several equivalent definitions, one of which is as follows: a Z-matrix is an M-matrix if it is nonsingular and its inverse is nonnegative. All matrices that are both Z-matrices and P-matrices are nonsingular M-matrices.

23.1 See also


P-matrix
M-matrix

Hurwitz matrix
Metzler matrix

23.2 References
Huan T.; Cheng G.; Cheng X. (1 April 2006). "Modified SOR-type iterative method for Z-matrices". Applied Mathematics and Computation. 175 (1): 258–268. doi:10.1016/j.amc.2005.07.050.

Saad, Y. Iterative methods for sparse linear systems (2nd ed.). Philadelphia, PA.: Society for Industrial and
Applied Mathematics. p. 28. ISBN 0-534-94776-X.

Berman, Abraham; Plemmons, Robert J. (2014). Nonnegative Matrices in the Mathematical Sciences. Aca-
demic Press. ISBN 9781483260860.

Chapter 24

Zero matrix

In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero.[1]
Some examples of zero matrices are

$$0_{1,1} = \begin{bmatrix} 0 \end{bmatrix},\quad 0_{2,2} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\quad 0_{2,3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

24.1 Properties
The set of m × n matrices with entries in a ring K forms a ring K_{m,n}. The zero matrix 0_{K_{m,n}} in K_{m,n} is the matrix with all entries equal to 0_K, where 0_K is the additive identity in K.

$$0_{K_{m,n}} = \begin{pmatrix}
0_K & 0_K & \cdots & 0_K \\
0_K & 0_K & \cdots & 0_K \\
\vdots & \vdots & \ddots & \vdots \\
0_K & 0_K & \cdots & 0_K
\end{pmatrix}_{m \times n}$$

The zero matrix is the additive identity in K_{m,n}.[2] That is, for all A ∈ K_{m,n} it satisfies

$$0_{K_{m,n}} + A = A + 0_{K_{m,n}} = A.$$

There is exactly one zero matrix of any given size m × n having entries in a given ring, so when the context is clear one often refers to the zero matrix. In general, the zero element of a ring is unique and typically denoted as 0 without any subscript indicating the parent ring. Hence the examples above represent zero matrices over any ring.
The zero matrix represents the linear transformation sending all vectors to the zero vector.[3]
The zero matrix is idempotent, meaning that when it is multiplied by itself the result is itself.
The zero matrix is the only matrix whose rank is 0.

24.2 Occurrences
The mortal matrix problem is the problem of determining, given a finite set of n × n matrices with integer entries, whether they can be multiplied in some order, possibly with repetition, to yield the zero matrix. This is known to be undecidable for a set of six or more 3 × 3 matrices, or a set of two 15 × 15 matrices.[4]
In ordinary least squares regression, if there is a perfect fit to the data, the annihilator matrix is the zero matrix.


24.3 See also


Identity matrix, the multiplicative identity for matrices

Matrix of ones, a matrix where all elements are one


Single-entry matrix, a matrix where all but one element is zero

24.4 References
[1] Lang, Serge (1987), Linear Algebra, Undergraduate Texts in Mathematics, Springer, p. 25, ISBN 9780387964126, We
have a zero matrix in which aij = 0 for all i, j. ... We shall write it O.

[2] Warner, Seth (1990), Modern Algebra, Courier Dover Publications, p. 291, ISBN 9780486663418, The neutral element
for addition is called the zero matrix, for all of its entries are zero.

[3] Bronson, Richard; Costa, Gabriel B. (2007), Linear Algebra: An Introduction, Academic Press, p. 377, ISBN 9780120887842,
The zero matrix represents the zero transformation 0, having the property 0(v) = 0 for every vector v V.

[4] Cassaigne, Julien; Halava, Vesa; Harju, Tero; Nicolas, Francois (2014). Tighter Undecidability Bounds for Matrix Mor-
tality, Zero-in-the-Corner Problems, and More. arXiv:1404.0644 [cs.DM].

24.5 External links


Weisstein, Eric W. Zero Matrix. MathWorld.
24.6. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 93

