**Geometric interpretation**

Cramer's rule has a geometric interpretation that can also be considered a proof, or that simply gives insight into its geometric nature. These geometric arguments work in general, not only in the case of two equations with two unknowns presented here.

Given the system of equations

`{\begin{matrix}a_{11}x_{1}+a_{12}x_{2}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}&=b_{2}\end{matrix}}`

it can be considered as an equation between vectors:

`x_{1}{\binom {a_{11}}{a_{21}}}+x_{2}{\binom {a_{12}}{a_{22}}}={\binom {b_{1}}{b_{2}}}.`

The area of the parallelogram determined by `{\binom {a_{11}}{a_{21}}}` and `{\binom {a_{12}}{a_{22}}}` is given by the determinant of the system of equations:

`{\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}}.`

In general, when there are more variables and equations, the determinant of n vectors of length n gives the volume of the parallelepiped determined by those vectors in n-dimensional Euclidean space.

Therefore, the area of the parallelogram determined by `x_{1}{\binom {a_{11}}{a_{21}}}` and `{\binom {a_{12}}{a_{22}}}` has to be `x_{1}` times the area of the first one, since one of the sides has been multiplied by this factor. Now, this last parallelogram, by Cavalieri's principle, has the same area as the parallelogram determined by `{\binom {b_{1}}{b_{2}}}=x_{1}{\binom {a_{11}}{a_{21}}}+x_{2}{\binom {a_{12}}{a_{22}}}` and `{\binom {a_{12}}{a_{22}}}.`

Equating the areas of this last parallelogram and the second one gives the equation

`{\begin{vmatrix}b_{1}&a_{12}\\b_{2}&a_{22}\end{vmatrix}}={\begin{vmatrix}a_{11}x_{1}&a_{12}\\a_{21}x_{1}&a_{22}\end{vmatrix}}=x_{1}{\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}}`

from which Cramer's rule follows.
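As a concrete sketch of the two-unknown case, Cramer's rule can be computed directly in plain Python (no external libraries; the function name is ours):

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2 by Cramer's rule."""
    det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("system has no unique solution")
    x1 = (b1 * a22 - a12 * b2) / det     # first column replaced by (b1, b2)
    x2 = (a11 * b2 - b1 * a21) / det     # second column replaced by (b1, b2)
    return x1, x2

# 1*x1 + 2*x2 = 5 and 3*x1 + 4*x2 = 11  ->  x1 = 1, x2 = 2
print(cramer_2x2(1, 2, 3, 4, 5, 11))  # (1.0, 2.0)
```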

**Finding the inverse matrix**

For any square matrix A,

`A\,\operatorname {adj} (A)=\operatorname {adj} (A)\,A=\det(A)\,I`

where adj(A) denotes the adjugate matrix, det(A) is the determinant, and I is the identity matrix. If det(A) is nonzero, then the inverse matrix of A is

`A^{-1}={\frac {1}{\det(A)}}\operatorname {adj} (A).`

This gives a formula for the inverse of A, provided `\det(A)\neq 0`. In fact, this formula works whenever F is a commutative ring, provided that det(A) is a unit. If det(A) is not a unit, then A is not invertible over the ring (it may be invertible over a larger ring in which some non-unit elements of F become invertible).
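For the 2-by-2 case the adjugate formula can be sketched in plain Python (the helper name is ours, not from the text):

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] via A^{-1} = adj(A) / det(A)."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    # adj([[a, b], [c, d]]) = [[d, -b], [-c, a]]
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2(4, 7, 2, 6))  # det = 10, so [[0.6, -0.7], [-0.2, 0.4]]
```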

**Row operations**

There are three types of row operations:

* row addition, that is, adding a row to another;

* row multiplication, that is, multiplying all entries of a row by a non-zero constant;

* row switching, that is, interchanging two rows of a matrix.

These operations are used in several ways, including solving linear equations and finding matrix inverses.
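The three operations can be sketched on a matrix stored as a plain Python list of rows (the function names are ours):

```python
def row_add(m, src, dst, factor=1):
    """Add factor times row src to row dst."""
    m[dst] = [d + factor * s for d, s in zip(m[dst], m[src])]

def row_scale(m, i, c):
    """Multiply every entry of row i by a non-zero constant c."""
    assert c != 0, "scaling by zero is not a row operation"
    m[i] = [c * x for x in m[i]]

def row_swap(m, i, j):
    """Interchange rows i and j."""
    m[i], m[j] = m[j], m[i]

m = [[1, 2], [3, 4]]
row_add(m, 0, 1, factor=-3)   # eliminate the entry below the pivot
print(m)  # [[1, 2], [0, -2]]
```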

**Submatrix**

A submatrix of a matrix is a matrix obtained by deleting any collection of rows and/or columns. For example, from a 3-by-4 matrix we can construct a 2-by-3 submatrix by removing one row and one column, say row 3 and column 2.

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.

A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain. Other authors define a principal submatrix as one in which the first k rows and columns, for some number k, are the ones that remain; this type of submatrix has also been called a leading principal submatrix.
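A minimal sketch of submatrix extraction in plain Python (our own helper; index sets are 0-based):

```python
def submatrix(m, drop_rows, drop_cols):
    """Return the submatrix obtained by deleting the given rows and columns."""
    return [[x for j, x in enumerate(row) if j not in drop_cols]
            for i, row in enumerate(m) if i not in drop_rows]

m = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]]
# delete row 3 and column 2 (1-based), i.e. indices 2 and 1
print(submatrix(m, {2}, {1}))  # [[1, 3, 4], [5, 7, 8]]
```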

**Square matrix**

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries `a_{ii}` form the main diagonal of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix.

**Main types**

Examples with n = 3:

**Diagonal matrix**

`{\begin{bmatrix}a_{11}&0&0\\0&a_{22}&0\\0&0&a_{33}\\\end{bmatrix}}`

**Lower triangular matrix**

`{\begin{bmatrix}a_{11}&0&0\\a_{21}&a_{22}&0\\a_{31}&a_{32}&a_{33}\\\end{bmatrix}}`

**Upper triangular matrix**

`{\begin{bmatrix}a_{11}&a_{12}&a_{13}\\0&a_{22}&a_{23}\\0&0&a_{33}\\\end{bmatrix}}`

**Diagonal and triangular matrix**

If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.
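These definitions translate into simple predicates in plain Python (our own function names; matrices as lists of rows):

```python
def is_upper_triangular(a):
    """True if all entries below the main diagonal are zero."""
    return all(a[i][j] == 0 for i in range(len(a)) for j in range(i))

def is_lower_triangular(a):
    """True if all entries above the main diagonal are zero."""
    return all(a[i][j] == 0
               for i in range(len(a)) for j in range(i + 1, len(a[i])))

def is_diagonal(a):
    """True if all entries outside the main diagonal are zero."""
    return is_upper_triangular(a) and is_lower_triangular(a)

u = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
print(is_upper_triangular(u), is_lower_triangular(u))  # True False
```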

**Identity matrix**

The identity matrix `I_n` of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0; for example,

`\mathbf {I} _{1}={\begin{bmatrix}1\end{bmatrix}},\ \mathbf {I} _{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\ \ldots ,\ \mathbf {I} _{n}={\begin{bmatrix}1&0&\cdots &0\\0&1&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &1\end{bmatrix}}`

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

`AI_n = I_mA = A`

for any m-by-n matrix A.

**Matrix multiplication**

Multiplication of two matrices is defined if and only if the number of columns of the left matrix equals the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

`[\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},`

where `1 \leq i \leq m` and `1 \leq j \leq p`.

For example, the underlined entry 2340 in the product

`{\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}\end{aligned}}`

is calculated as `(2 \times 1000) + (3 \times 100) + (4 \times 10) = 2340`.

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the sizes of the matrices are such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and `m \neq k`. Even if both products are defined, they generally need not be equal; that is, in general

`AB \neq BA.`

In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is

`{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},`

whereas

`{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.`

Besides the ordinary matrix multiplication just described, other, less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.
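The entry formula for the matrix product translates directly into a naive triple loop; here is a plain Python sketch (our own helper name):

```python
def matmul(A, B):
    """Multiply an m-by-n matrix A by an n-by-p matrix B (lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    # C[i][j] = sum over r of A[i][r] * B[r][j]
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(m)]

A = [[2, 3, 4], [1, 0, 0]]
B = [[0, 1000], [1, 100], [0, 10]]
print(matmul(A, B))  # [[3, 2340], [0, 1000]]
```

The same helper also exhibits non-commutativity on the 2-by-2 pair shown in the text.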

**Subtraction**

The subtraction of two `m \times n` matrices is defined by composing matrix addition with scalar multiplication by -1:

`\mathbf {A} -\mathbf {B} =\mathbf {A} +(-1)\cdot \mathbf {B}`

**Transposition**

The transpose of an m-by-n matrix A is the n-by-m matrix `A^T` (also denoted `A^{tr}` or `{}^{t}A`) formed by turning rows into columns and vice versa:

`(A^T)_{i,j} = A_{j,i}.`

For example:

`{\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}`

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative; that is, the matrix sum does not depend on the order of the summands: A + B = B + A. The transpose is compatible with addition and scalar multiplication, as expressed by

`(cA)^T = c(A^T) \text{ and } (A + B)^T = A^T + B^T.`

Finally, `(A^{T})^{T} = A.`
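Transposition is a one-liner in plain Python for the list-of-rows representation (`zip` performs the row/column swap; the helper name is ours):

```python
def transpose(A):
    """Return A^T: rows become columns and vice versa."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [0, -6, 7]]
print(transpose(A))  # [[1, 0], [2, -6], [3, 7]]
assert transpose(transpose(A)) == A  # (A^T)^T = A
```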

**Scalar multiplication**

The product cA of a number c (also called a scalar in this context) and a matrix A is computed by multiplying every entry of A by c:

`(cA)_{i,j} = c \cdot A_{i,j}.`

This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:

`2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot (-3)\\2\cdot 4&2\cdot (-2)&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}`

**Addition**

The sum A + B of two m-by-n matrices A and B is calculated entrywise:

`(A + B)_{i,j} = A_{i,j} + B_{i,j},`

where `1 \leq i \leq m` and `1 \leq j \leq n`.

For example,

`{\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}`
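Entrywise addition and scalar multiplication are each a one-liner in plain Python (our own helper names; the sample values match the examples in this section):

```python
def scalar_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    """Entrywise sum of two matrices of the same size."""
    assert len(A) == len(B) and all(len(r) == len(s) for r, s in zip(A, B))
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

print(scalar_mul(2, [[1, 8, -3], [4, -2, 5]]))
# [[2, 16, -6], [8, -4, 10]]
print(mat_add([[1, 3, 1], [1, 0, 0]], [[0, 0, 5], [7, 5, 0]]))
# [[1, 3, 6], [8, 5, 0]]
```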

**Notation**

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an `m\times n` matrix `\mathbf {A}` is represented as

`\mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}.`

This may be abbreviated by writing only a single generic term, possibly along with indices, as in

`\mathbf {A} =\left(a_{ij}\right),\quad \left[a_{ij}\right],\quad {\text{or}}\quad \left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}`

or

`\mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}`

in the case that `n = m`.

**Overview of matrix sizes**

* **Row vector**: size `1 \times n`, for example `{\begin{bmatrix}3&7&2\end{bmatrix}}`. A matrix with one row, sometimes used to represent a vector.

* **Column vector**: size `n \times 1`, for example `{\begin{bmatrix}4\\1\\8\end{bmatrix}}`. A matrix with one column, sometimes used to represent a vector.

* **Square matrix**: size `n \times n`, for example `{\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}`. A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing.

In mathematics, a matrix (pl.: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.

For example,

`{\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}`

is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "`2\times 3` matrix", or a matrix of dimension `2\times 3`.

Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself defined by a determinant.

In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or through their use in geometry and numerical analysis.

Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

**Definition**

A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field F is a rectangular array of elements of F. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

`\mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.`

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

**Size**

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with m rows and n columns is called an `m\times n` matrix, or m-by-n matrix, where m and n are called its dimensions. For example, the matrix `\mathbf {A}` above is a `3\times 2` matrix.

Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.
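A small sketch in plain Python, representing a matrix as a list of rows (the helper name is ours; the sample values are the real matrix from the Definition section):

```python
def size(A):
    """Return (rows, columns) of a matrix stored as a list of rows."""
    return (len(A), len(A[0]) if A else 0)

A = [[-1.3, 0.6],
     [20.4, 5.5],
     [9.7, -6.2]]
print(size(A))  # (3, 2), i.e. a 3-by-2 matrix
```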

**Third example (with two sums)**

`{\begin{aligned}(a+b)\cdot (a-b)&=a\cdot (a-b)+b\cdot (a-b)=a^{2}-ab+ba-b^{2}=a^{2}-b^{2}\\&=(a+b)\cdot a-(a+b)\cdot b=a^{2}+ba-ab-b^{2}=a^{2}-b^{2}\\\end{aligned}}`

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

**Fourth example**

Here the distributive law is applied the other way around compared to the previous examples. Consider

`12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}\,.`

Since the factor

`6a^{2}b`

occurs in all summands, it can be factored out. That is, due to the distributive law one obtains

`12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}=6a^{2}b\left(2ab-5a^{2}c+3b^{2}c^{2}\right).`
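As a quick numeric spot-check of this factorization, one can compare both sides at random integer points in plain Python (a sanity check on sample values, not a proof):

```python
import random

def lhs(a, b, c):
    return 12*a**3*b**2 - 30*a**4*b*c + 18*a**2*b**3*c**2

def rhs(a, b, c):
    return 6*a**2*b * (2*a*b - 5*a**2*c + 3*b**2*c**2)

for _ in range(5):
    a, b, c = (random.randint(-9, 9) for _ in range(3))
    assert lhs(a, b, c) == rhs(a, b, c)  # integer arithmetic, so exact
print("factorization checks out on sample points")
```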

**Examples**

**Real numbers**

In the following examples, the use of the distributive law on the set of real numbers `\mathbb {R}` is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

**First example (mental and written multiplication)**

During mental arithmetic, distributivity is often used unconsciously:

`6\cdot 16=6\cdot (10+6)=6\cdot 10+6\cdot 6=60+36=96`

Thus, to calculate `6\cdot 16` in one's head, one first multiplies `6\cdot 10` and `6\cdot 6` and adds the intermediate results. Written multiplication is also based on the distributive law.

**Second example (with variables)**

`3a^{2}b\cdot (4a-5b)=3a^{2}b\cdot 4a-3a^{2}b\cdot 5b=12a^{3}b-15a^{2}b^{2}`
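The same kind of numeric spot-check works for this identity with variables; here over a small integer grid in plain Python (a sketch, not a proof):

```python
# Verify 3*a**2*b*(4a - 5b) == 12*a**3*b - 15*a**2*b**2 on a sample grid.
for a in range(-3, 4):
    for b in range(-3, 4):
        assert 3*a**2*b * (4*a - 5*b) == 12*a**3*b - 15*a**2*b**2
print("identity holds on the sample grid")
```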