
Business Mathematics Project

Matrices and Determinants


PREPARED BY RENUKA BHAMBHANI, 5111, B.COM(H) 2ND YEAR, SECTION A

KESHAV MAHAVIDYALAYA

Acknowledgement
I would like to thank my lecturer Mr. Rabindra for his continuous support and guidance, without which this project would not have reached its successful completion. I would also like to thank my college classmates for helping me and clearing my doubts each time. Finally, I sincerely thank my family for providing me with all the necessary help and support throughout the making of this project.

Contents
Matrices
History of matrices
Definition
Matrix multiplication
Linear equations
Linear transformations
Square matrices
Determinants
Computational aspects

MATRICES
In mathematics, a matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers, such as

$$A = \begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}.$$

An item in a matrix is called an entry or an element. The example has entries 1, 9, −13, 20, 5, and −6. Entries are often denoted by a variable with two subscripts; thus, in the example above, a_{2,1} = 20. Matrices of the same size can be added and subtracted entrywise, and matrices of compatible sizes can be multiplied. These operations have many of the properties of ordinary arithmetic, except that matrix multiplication is not commutative, that is, AB and BA are not equal in general. Matrices consisting of only one column or row define the components of vectors, while higher-dimensional (e.g., three-dimensional) arrays of numbers define the components of a generalization of a vector called a tensor. Matrices with entries in other fields or rings are also studied.

Matrices are a key tool in linear algebra. One use of matrices is to represent linear transformations, which are higher-dimensional analogs of linear functions of the form f(x) = cx, where c is a constant; matrix multiplication corresponds to composition of linear transformations. Matrices can also keep track of the coefficients in a system of linear equations. For a square matrix, the determinant and inverse matrix (when it exists) govern the behavior of solutions to the corresponding system of linear equations, and eigenvalues and eigenvectors provide insight into the geometry of the associated linear transformation.

Matrices find many applications. Physics makes use of matrices in various domains, for example in geometrical optics and matrix mechanics; the latter led to studying in more detail matrices with an infinite number of rows and columns. Graph theory uses matrices to keep track of distances between pairs of vertices in a graph. Computer graphics uses matrices to project 3-dimensional space onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives of functions or exponentials to matrices; the latter is a recurring need in solving ordinary differential equations. Serialism and dodecaphonism are musical movements of the 20th century that use a square mathematical matrix to determine the pattern of musical intervals. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old but still an active area of research. Matrix decomposition methods simplify computations, both theoretically and practically. For sparse matrices, specifically tailored algorithms can provide speedups; such matrices arise in the finite element method.
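To make the entry indexing and the size rules above concrete, here is a minimal Python sketch using the NumPy library (an assumed tool, not part of the original project); the matrix is the example from the beginning of this section:

```python
import numpy as np

# The example matrix; NumPy numbers rows and columns from 0,
# so the entry the text writes as a_{2,1} is A[1, 0] here.
A = np.array([[1, 9, -13],
              [20, 5, -6]])
print(A[1, 0])            # 20

# Matrices of the same size add and subtract entrywise.
print(A + A)              # every entry doubled

# Compatible sizes multiply: (2 x 3) @ (3 x 2) -> (2 x 2).
print((A @ A.T).shape)    # (2, 2)
```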

History
Matrices have a long history of application in solving linear equations. The Chinese text The Nine Chapters on the Mathematical Art (Jiu Zhang Suan Shu), from between 300 BC and AD 200, is the first example of the use of matrix methods to solve simultaneous equations, including the concept of determinants, over 1000 years before the subject was published by the Japanese mathematician Seki in 1683 and the German mathematician Leibniz in 1693. Cramer presented his rule in 1750. Early matrix theory emphasized determinants more strongly than matrices, and an independent matrix concept akin to the modern notion emerged only in 1858, with Cayley's Memoir on the theory of matrices. The term "matrix" was coined by Sylvester, who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices which derive from the original one by removing columns and rows. Etymologically, matrix derives from the Latin mater (mother).

The study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate coefficients of quadratic forms, i.e., expressions such as x^2 + xy − 2y^2, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as the definition of the determinant of a matrix A = [a_{i,j}] the following: replace the powers a_j^k by a_{jk} in the polynomial

$$a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i),$$

where Π denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real. Jacobi studied "functional determinants", later called Jacobi determinants by Sylvester, which can be used to describe geometric transformations at a local (or infinitesimal) level. Kronecker's Vorlesungen über die Theorie der Determinanten and Weierstrass' Zur Determinantentheorie, both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only; for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in the classification of the hypercomplex number systems of the previous century.

Definition
A matrix is a rectangular arrangement of numbers. For example,

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{bmatrix}.$$

An alternative notation uses large parentheses instead of box brackets:

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 7 \\ 4 & 9 & 2 \\ 6 & 0 & 5 \end{pmatrix}.$$

The horizontal and vertical lines in a matrix are called rows and columns, respectively. The numbers in the matrix are called its entries or its elements. To specify the size of a matrix, a matrix with m rows and n columns is called an m-by-n matrix or m × n matrix, while m and n are called its dimensions. The above is a 4-by-3 matrix.

A matrix with one row (a 1 × n matrix) is called a row vector, and a matrix with one column (an m × 1 matrix) is called a column vector. Any row or column of a matrix determines a row or column vector, obtained by removing all other rows or columns respectively from the matrix. For example, the row vector for the third row of the above matrix A is

$$\begin{bmatrix} 4 & 9 & 2 \end{bmatrix}.$$

When a row or column of a matrix is interpreted as a value, this refers to the corresponding row or column vector. For instance, one may say that two different rows of a matrix are equal, meaning they determine the same row vector. In some cases the value of a row or column should be interpreted just as a sequence of values (an element of R^n if entries are real numbers) rather than as a matrix, for instance when saying that the rows of a matrix are equal to the corresponding columns of its transpose matrix.

Most of this article focuses on real and complex matrices, i.e., matrices whose entries are real or complex numbers. More general types of entries are discussed below. The specifics of matrix notation vary widely, with some prevailing trends. Matrices are usually denoted using upper-case letters, while the corresponding lower-case letters, with two subscript indices, represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other variables. An alternative notation involves the use of a double underline with the variable name, with or without boldface style.

The entry that lies in the i-th row and the j-th column of a matrix is typically referred to as the i,j, (i,j), or (i,j)-th entry of the matrix. For example, the (2,3) entry of the above matrix A is 7. The (i,j)-th entry of a matrix A is most commonly written as a_{i,j}. Alternative notations for that entry are A[i,j] or A_{i,j}. Sometimes a matrix is referred to by giving a formula for its (i,j)-th entry, often with double parentheses around the formula for the entry; for example, if the (i,j)-th entry of A were given by a_{ij}, A would be denoted ((a_{ij})). An asterisk is commonly used to refer to whole rows or columns in a matrix. For example, a_{i,∗} refers to the i-th row of A, and a_{∗,j} refers to the j-th column of A. The set of all m-by-n matrices is denoted M(m, n). A common shorthand is A = [a_{i,j}]_{i=1,...,m; j=1,...,n}, or more briefly A = [a_{i,j}]_{m×n}, to define an m × n matrix A. Usually the entries a_{i,j} are defined separately for all integers 1 ≤ i ≤ m and 1 ≤ j ≤ n. They can, however, sometimes be given by one formula; for example, the 3-by-4 matrix

$$A = \begin{bmatrix} 0 & -1 & -2 & -3 \\ 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 \end{bmatrix}$$

can alternatively be specified by A = [i − j]_{i=1,2,3; j=1,...,4}, or simply A = ((i − j)), where the size of the matrix is understood. Some programming languages start the numbering of rows and columns at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1. This article follows the more common convention in mathematical writing where enumeration starts from 1.
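As an illustration of defining a matrix by a formula for its (i,j)-th entry, the following Python/NumPy sketch (our own, not from the original text) builds the 3-by-4 matrix with a_{i,j} = i − j; note that NumPy itself numbers rows and columns from zero, as just mentioned, so the indices are shifted by one:

```python
import numpy as np

# Entries follow the 1-based formula a_{ij} = i - j;
# np.fromfunction supplies 0-based indices, hence the +1 shifts.
A = np.fromfunction(lambda i, j: (i + 1) - (j + 1), (3, 4), dtype=int)
print(A)
# [[ 0 -1 -2 -3]
#  [ 1  0 -1 -2]
#  [ 2  1  0 -1]]
```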

Matrix multiplication, linear equations and linear transformations


Matrix multiplication

Schematic depiction of the matrix product AB of two matrices A and B. Multiplication of two matrices is defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

$$[AB]_{i,j} = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \cdots + a_{i,n}b_{n,j} = \sum_{k=1}^{n} a_{i,k} b_{k,j},$$

where 1 ≤ i ≤ m and 1 ≤ j ≤ p. For example (the underlined entry 1 in the product is calculated as 1 · 1 + 0 · 1 + 2 · 0 = 1):

$$\begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & 1 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ 2 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 5 & \underline{1} \\ 4 & 2 \end{bmatrix}.$$
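The dot-product rule translates directly into code. Here is a minimal pure-Python sketch (the helper name matmul is our own), applied to the example above:

```python
def matmul(A, B):
    """Multiply matrices by the definition:
    (AB)[i][j] = sum over k of A[i][k] * B[k][j]."""
    m, n, p = len(A), len(A[0]), len(B[0])
    if len(B) != n:
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(p)]
            for i in range(m)]

# A 2-by-3 matrix times a 3-by-2 matrix gives a 2-by-2 product.
A = [[1, 0, 2],
     [-1, 3, 1]]
B = [[3, 1],
     [2, 1],
     [1, 0]]
print(matmul(A, B))   # [[5, 1], [4, 2]]
```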

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity) and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[6] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal: in general AB ≠ BA, i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$$

whereas

$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$$
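These rules are easy to check numerically. A short Python/NumPy sketch (the random matrices and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (3, 4))
C = rng.integers(-5, 5, (4, 2))

# Associativity holds whenever the products are defined.
print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True

# The non-commuting pair from the text.
P = np.array([[1, 2], [3, 4]])
Q = np.array([[0, 1], [0, 0]])
print(P @ Q)   # [[0 1], [0 3]]
print(Q @ P)   # [[3 4], [0 0]] -- different, so PQ != QP
```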

The identity matrix I_n of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

It is called the identity matrix because multiplication with it leaves a matrix unchanged: M I_n = I_m M = M for any m-by-n matrix M. Besides the ordinary matrix multiplication just described, there exist other, less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.
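The identity property and the two alternative products can be illustrated in a few lines of Python/NumPy (a sketch; `*` is NumPy's entrywise product and np.kron its Kronecker product):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])                            # m = 2, n = 3
print(np.array_equal(M @ np.eye(3, dtype=int), M))   # True: M I_n = M
print(np.array_equal(np.eye(2, dtype=int) @ M, M))   # True: I_m M = M

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
print(A * B)          # Hadamard (entrywise) product
print(np.kron(A, B))  # Kronecker product, here 4-by-4
```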

Linear equations
A particular case of matrix multiplication is tightly linked to linear equations: if x designates a column vector (i.e., an n × 1 matrix) of n variables x_1, x_2, ..., x_n, and A is an m-by-n matrix, then the matrix equation Ax = b, where b is some m × 1 column vector, is equivalent to the system of linear equations

$$\begin{aligned} A_{1,1}x_1 + A_{1,2}x_2 + \cdots + A_{1,n}x_n &= b_1 \\ &\;\vdots \\ A_{m,1}x_1 + A_{m,2}x_2 + \cdots + A_{m,n}x_n &= b_m. \end{aligned}$$[8]

This way, matrices can be used to compactly write and deal with multiple linear equations, i.e., systems of linear equations.
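In practice such a system is handed to a linear solver rather than written out. A minimal Python/NumPy sketch, with a small example system of our own choosing:

```python
import numpy as np

# The system  2*x1 + x2 = 5,  x1 + 3*x2 = 10  written as Ax = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)     # solves Ax = b directly
print(x)                      # [1. 3.]
print(np.allclose(A @ x, b))  # True
```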

Linear transformations

The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram. Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation R^n → R^m mapping each vector x in R^n to the (matrix) product Ax, which is a vector in R^m. Conversely, each linear transformation f: R^n → R^m arises from a unique m-by-n matrix A: explicitly, the (i,j)-entry of A is the i-th coordinate of f(e_j), where e_j = (0, ..., 0, 1, 0, ..., 0) is the unit vector with 1 in the j-th position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f. For example, the 2×2 matrix

$$A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$$

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram is obtained by multiplying A with each of the column vectors (0, 0), (1, 0), (1, 1), and (0, 1) in turn; these vectors define the vertices of the unit square. The following table shows a number of 2-by-2 matrices with the associated linear maps of R². The blue original is mapped to the green grid and shapes; the origin (0, 0) is marked with a black point.

Horizontal shear with m = 1.25
Horizontal flip
Squeeze mapping with r = 3/2
Scaling by a factor of 3/2
Rotation by π/6 = 30°
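Two of the table entries can be reproduced numerically. A Python/NumPy sketch applying a shear and a rotation to the corners of the unit square (the matrices below are the standard ones for these maps, supplied here as assumptions matching the captions):

```python
import numpy as np

# Corners of the unit square as the columns of a 2-by-4 matrix.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

# Horizontal shear with m = 1.25.
shear = np.array([[1.0, 1.25],
                  [0.0, 1.00]])

# Rotation by pi/6 = 30 degrees.
t = np.pi / 6
rot = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])

print(shear @ square)             # corners of the sheared square
print(np.round(rot @ square, 3))  # corners of the rotated square
```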

Under the one-to-one correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps: if a k-by-m matrix B represents another linear map g: R^m → R^k, then the composition g ∘ f is represented by BA, since (g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x. The last equality follows from the above-mentioned associativity of matrix multiplication. The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors. Equivalently, it is the dimension of the image of the linear map represented by A. The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.
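Both facts can be seen numerically. A Python/NumPy sketch with example matrices of our own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # represents f: R^3 -> R^2
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])        # represents g: R^2 -> R^2
x = np.array([1.0, -1.0, 2.0])

# g(f(x)) equals (BA)x: composition corresponds to the product BA.
print(np.allclose(B @ (A @ x), (B @ A) @ x))   # True

# Rank of A, and the kernel dimension predicted by rank-nullity.
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1] - rank)                 # 2 1
```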

Square matrices
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = I_n. This is equivalent to BA = I_n. Moreover, if B exists, it is unique and is called the inverse matrix of A, denoted A^−1. The entries A_{i,i} form the main diagonal of a matrix. The trace tr(A) of a square matrix A is the sum of its diagonal entries. While, as mentioned above, matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors: tr(AB) = tr(BA).

Also, the trace of a matrix is equal to that of its transpose, i.e., tr(A) = tr(A^T). If all entries outside the main diagonal are zero, A is called a diagonal matrix. If only all entries above (respectively, below) the main diagonal are zero, A is called a lower triangular matrix (respectively, an upper triangular matrix). For example, if n = 3, they look like

$$\begin{bmatrix} d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33} \end{bmatrix} \text{ (diagonal)}, \quad \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \text{ (lower triangular)}, \quad \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} \text{ (upper triangular)}.$$
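A short Python/NumPy sketch of these notions, checking the trace identities and extracting the diagonal and triangular parts of a matrix:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])

# tr(AB) = tr(BA) although AB != BA; also tr(A) = tr(A^T).
print(np.trace(A @ B), np.trace(B @ A))   # 9 9
print(np.trace(A) == np.trace(A.T))       # True

# Diagonal, lower and upper triangular parts of a 3-by-3 matrix.
M = np.arange(1, 10).reshape(3, 3)
print(np.diag(np.diag(M)))   # diagonal matrix from M's diagonal
print(np.tril(M))            # lower triangular part
print(np.triu(M))            # upper triangular part
```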

DETERMINANT
A linear transformation on R² given by the indicated matrix: the determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors into a clockwise one. The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved. The determinant of 2-by-2 matrices is given by

$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$

When the determinant is equal to one, the matrix represents an equi-areal mapping. The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The lengthier Leibniz formula generalises these two formulae to all dimensions. The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) det(B). Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns multiplies the determinant by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix.
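These rules are easy to verify numerically. A Python/NumPy sketch using the 2-by-2 formula ad − bc alongside the library routine:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
a, b, c, d = A.ravel()
print(a * d - b * c)          # -2.0, from the 2-by-2 formula
print(np.linalg.det(A))       # -2.0 (up to rounding)

# det(AB) = det(A) det(B).
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True

# Swapping the two rows multiplies the determinant by -1.
print(np.linalg.det(A[::-1]))  # 2.0
```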

Computational aspects
In addition to theoretical knowledge of the properties of matrices and their relation to other fields, it is important for practical purposes to perform matrix calculations effectively and precisely. The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Many problems can be solved by either direct algorithms or iterative approaches. For example, finding eigenvectors can be done by finding a sequence of vectors x_n converging to an eigenvector when n tends to infinity.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations, such as additions and multiplications of scalars, are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n^3 multiplications, since for any of the n^2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only about n^2.807 multiplications. A refined approach also incorporates specific features of the computing devices.

In many practical situations additional information about the matrices involved is known. An important case is that of sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.

An algorithm is, roughly speaking, numerically stable if small deviations (such as rounding errors) do not lead to large deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula A^−1 = Adj(A)/det(A), where Adj(A) denotes the adjugate matrix of A, may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.

Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.
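To illustrate the stability remark, here is a small Python/NumPy sketch comparing the adjugate formula on a nearly singular 2-by-2 system with a pivoted direct solver (an illustrative assumption, not a benchmark; the exact outputs depend on the machine arithmetic, and the problem itself is ill-conditioned):

```python
import numpy as np

# A nearly singular system: the exact solution is x = (1, 1).
eps = 1e-12
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps]])
b = np.array([2.0, 2.0 + eps])

# Laplace's formula A^{-1} = Adj(A)/det(A); for a 2-by-2 matrix,
# Adj(A) = [[d, -b], [-c, a]].  With det(A) tiny, rounding errors
# in det are amplified in the computed solution.
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
x_adj = (adj / det) @ b

# A pivoted direct solver applied to the same system.
x_solve = np.linalg.solve(A, b)

print(np.linalg.cond(A))   # enormous: the problem is ill-conditioned
print(x_adj, x_solve)      # compare against the exact solution (1, 1)
```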
