Fundamental theorem of algebra
The fundamental theorem of algebra, due to Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) a_0, ..., a_n, the equation
{\displaystyle a_{n}z^{n}+\dotsb +a_{1}z+a_{0}=0}
has at least one complex solution z, provided that at least one of the higher coefficients a_1, ..., a_n is nonzero. This property does not hold for the field of rational numbers {\displaystyle \mathbb {Q} } (the polynomial x^2 - 2 does not have a rational root, because \sqrt{2} is not a rational number) nor the real numbers {\displaystyle \mathbb {R} } (the polynomial x^2 + 4 does not have a real root, because the square of x is positive for any real number x).
Because of this fact, {\displaystyle \mathbb {C} } is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below. There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
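As a quick numerical illustration of the theorem, here is a sketch in plain Python (standard cmath module only): the polynomial x^2 + 4, which has no real root, does have the complex roots ±2i.

```python
import cmath

# x^2 + 4 = 0 has no real solution, but over the complex numbers
# the quadratic formula always applies: x = +/- sqrt(-4).
root = cmath.sqrt(-4)      # principal square root of -4, i.e. 2i
roots = [root, -root]      # the two complex roots

# Substituting back confirms both are roots of x^2 + 4.
for z in roots:
    assert abs(z**2 + 4) < 1e-12
```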
Matrix representation of complex numbers
Complex numbers a + bi can also be represented by 2 × 2 matrices that have the form
{\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}.}
Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 matrices.
A simple computation shows that the map
{\displaystyle a+ib\mapsto {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}}
is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field.
The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector (x, y) corresponds to the multiplication of x + iy by a + ib. In particular, if the determinant is 1, there is a real number t such that the matrix has the form
{\displaystyle {\begin{pmatrix}\cos t&-\sin t\\\sin t&\;\;\cos t\end{pmatrix}}.}
In this case, the action of the matrix on vectors and the multiplication by the complex number {\displaystyle \cos t+i\sin t} are both the rotation by the angle t.
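The correspondence can be checked numerically. A minimal sketch in plain Python (no external libraries; `to_matrix` and `matmul` are illustrative helper names, not from the text): the matrix of a product of two complex numbers equals the product of their matrices.

```python
def to_matrix(z):
    """Represent the complex number z = a + ib as the 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul(m, n):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
lhs = to_matrix(z * w)                     # matrix of the complex product
rhs = matmul(to_matrix(z), to_matrix(w))   # product of the two matrices
assert lhs == rhs                          # the map is multiplicative
```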
Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis.
Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.
Convergence
The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, {\displaystyle \mathbb {C} }, endowed with the metric
{\displaystyle \operatorname {d} (z_{1},z_{2})=|z_{1}-z_{2}|,}
is a complete metric space, whose metric notably satisfies the triangle inequality
{\displaystyle |z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|}
for any two complex numbers z_1 and z_2.
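A small numerical sketch of both statements (plain Python; the sequence z_n below is my own example, not from the text): a sequence converges exactly when its real and imaginary parts do, and the complex absolute value obeys the triangle inequality.

```python
# z_n = 1/n + i(1 - 1/n): real parts -> 0, imaginary parts -> 1, so z_n -> i.
def z(n):
    return 1/n + 1j*(1 - 1/n)

# The distance to the limit, measured with the complex absolute value,
# becomes arbitrarily small.
assert abs(z(10**6) - 1j) < 1e-5

# Triangle inequality |z1 + z2| <= |z1| + |z2| for two sample points:
z1, z2 = 3 + 4j, -1 + 2j
assert abs(z1 + z2) <= abs(z1) + abs(z2)
```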
Complex exponential
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp z, also written e^z, is defined as the infinite series, which can be shown to converge for any z:
{\displaystyle \exp z:=1+z+{\frac {z^{2}}{2\cdot 1}}+{\frac {z^{3}}{3\cdot 2\cdot 1}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.}
For example, {\displaystyle \exp(1)} is Euler's number {\displaystyle e\approx 2.718}. Euler's formula states:
{\displaystyle \exp(i\varphi )=\cos \varphi +i\sin \varphi }
for any real number φ; in particular, this implies Euler's identity
{\displaystyle \exp(i\pi )=-1.}
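The defining series and Euler's formula can be verified numerically, a sketch using Python's standard cmath module (`exp_series` is an illustrative name; 40 terms are plenty for small |z|):

```python
import cmath
from math import factorial, pi

def exp_series(z, terms=40):
    """Partial sum of the defining series sum_{n} z^n / n!."""
    return sum(z**n / factorial(n) for n in range(terms))

# The partial sums match the built-in exponential at a sample point:
z = 1 + 2j
assert abs(exp_series(z) - cmath.exp(z)) < 1e-12

# Euler's formula exp(i*phi) = cos(phi) + i*sin(phi), and Euler's identity:
phi = 0.7
assert abs(cmath.exp(1j*phi) - (cmath.cos(phi) + 1j*cmath.sin(phi))) < 1e-12
assert abs(cmath.exp(1j*pi) + 1) < 1e-12
```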
Complex logarithm - I
The exponential function maps complex numbers z differing by a multiple of {\displaystyle 2\pi i} to the same value. For any positive real number t, there is a unique real number x such that {\displaystyle \exp(x)=t}. This leads to the definition of the natural logarithm as the inverse
{\displaystyle \ln \colon \mathbb {R} ^{+}\to \mathbb {R} ;x\mapsto \ln x}
of the exponential function. The situation is different for complex numbers, since
{\displaystyle \exp(z+2\pi i)=\exp z\exp(2\pi i)=\exp z}
by the functional equation and Euler's identity.
In general, given any non-zero complex number w, any number z solving the equation
{\displaystyle \exp z=w}
is called a complex logarithm of w, denoted {\displaystyle \log w}. It can be shown that these numbers satisfy
{\displaystyle z=\log w=\ln |w|+i\arg w,}
where arg is the argument, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2\pi, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (-\pi, \pi]. This leads to the complex logarithm being a bijective function taking values in the strip {\displaystyle \mathbb {R} ^{+}+\;i\,\left(-\pi ,\pi \right]} (denoted {\displaystyle S_{0}}):
{\displaystyle \ln \colon \;\mathbb {C} ^{\times }\;\to \;\;\;\mathbb {R} ^{+}+\;i\,\left(-\pi ,\pi \right].}
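A sketch checking the formula log w = ln|w| + i arg w against Python's standard cmath module, which returns the principal value with imaginary part in (-π, π]:

```python
import cmath
import math

w = -3 + 4j
z = cmath.log(w)  # principal complex logarithm

# Real part is ln|w|, imaginary part is the principal argument arg(w):
assert abs(z.real - math.log(abs(w))) < 1e-12
assert abs(z.imag - cmath.phase(w)) < 1e-12

# exp inverts log, and adding 2*pi*i gives another logarithm of the same w:
assert abs(cmath.exp(z) - w) < 1e-9
assert abs(cmath.exp(z + 2j*math.pi) - w) < 1e-9
```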
Complex logarithm - II
If
{\displaystyle z\in \mathbb {C} \setminus \left(-\mathbb {R} _{\geq 0}\right)}
is not a non-positive real number (a positive or a non-real number), the resulting principal value of the complex logarithm is obtained with -\pi < \varphi < \pi. It is an analytic function outside the negative real numbers, but it cannot be prolongated to a function that is continuous at any negative real number {\displaystyle z\in -\mathbb {R} ^{+}}, where the principal value is
{\displaystyle \ln z=\ln(-z)+i\pi .}
Complex exponentiation z^\omega is defined as
{\displaystyle z^{\omega }=\exp(\omega \ln z),}
and is multi-valued, except when \omega is an integer. For \omega = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If z > 0 is real (and \omega an arbitrary complex number), one has a preferred choice of {\displaystyle \ln x}, the real logarithm, which can be used to define a preferred exponential function.
Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy
{\displaystyle a^{bc}=\left(a^{b}\right)^{c}.}
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
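The failure of a^{bc} = (a^b)^c can be seen concretely with principal values. A sketch using cmath, with a = -1, b = 2, c = 1/2 (my own choice of sample values): the left side is (-1)^1 = -1, while the right side is 1^{1/2} = 1.

```python
import cmath

a, b, c = -1, 2, 0.5

# Principal-value exponentiation: x^y := exp(y * Log x)
def cpow(x, y):
    return cmath.exp(y * cmath.log(x))

lhs = cpow(a, b * c)        # a^(bc) = (-1)^1  = -1
rhs = cpow(cpow(a, b), c)   # (a^b)^c = 1^(1/2) = 1

assert abs(lhs - (-1)) < 1e-12
assert abs(rhs - 1) < 1e-12   # the two sides disagree
```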
Root of Unity
In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform.
Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly n nth roots of unity, except when n is a multiple of the (positive) characteristic of the field.
General definition
[Figure: Geometric representation of the 2nd to 6th roots of a general complex number in polar form. For the nth roots of unity, set r = 1 and \varphi = 0.]
An nth root of unity, where n is a positive integer, is a number z satisfying the equation
{\displaystyle z^{n}=1.}
Unless otherwise specified, the roots of unity may be taken to be complex numbers, and in this case, the nth roots of unity are
{\displaystyle \exp \left({\frac {2k\pi i}{n}}\right)=\cos {\frac {2k\pi }{n}}+i\sin {\frac {2k\pi }{n}},\qquad k=0,1,\dots ,n-1.}
However, the defining equation of roots of unity is meaningful over any field (and even over any ring) F, and this allows considering roots of unity in F. Whichever is the field F, the roots of unity in F are either complex numbers, if the characteristic of F is 0, or, otherwise, belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field.
An nth root of unity is said to be primitive if it is not an mth root of unity for some smaller m, that is if
{\displaystyle z^{n}=1\quad {\text{and}}\quad z^{m}\neq 1{\text{ for }}m=1,2,3,\ldots ,n-1.}
In the above formula in terms of exponential and trigonometric functions, the primitive nth roots of unity are those for which k and n are coprime integers.
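A sketch computing the nth roots of unity from the exponential formula above (standard cmath and math modules; n = 6 is my own sample value), and picking out the primitive ones via the coprimality criterion:

```python
import cmath
from math import gcd, pi

n = 6
roots = [cmath.exp(2j*pi*k/n) for k in range(n)]

# Every one of them satisfies z^n = 1:
assert all(abs(z**n - 1) < 1e-9 for z in roots)

# The primitive nth roots are those with gcd(k, n) = 1; for n = 6
# that is k = 1 and k = 5.
primitive_k = [k for k in range(n) if gcd(k, n) == 1]
assert primitive_k == [1, 5]
```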
Elementary properties
Every nth root of unity z is a primitive ath root of unity for some a \leq n, which is the smallest positive integer such that z^a = 1.
Any integer power of an nth root of unity is also an nth root of unity, as
{\displaystyle (z^{k})^{n}=z^{kn}=(z^{n})^{k}=1^{k}=1.}
This is also true for negative exponents. In particular, the reciprocal of an nth root of unity is its complex conjugate, and is also an nth root of unity:
{\displaystyle {\frac {1}{z}}=z^{-1}=z^{n-1}={\bar {z}}.}
If z is an nth root of unity and a ≡ b (mod n) then z^a = z^b. Indeed, by the definition of congruence modulo n, a = b + kn for some integer k, and hence
{\displaystyle z^{a}=z^{b+kn}=z^{b}z^{kn}=z^{b}(z^{n})^{k}=z^{b}1^{k}=z^{b}.}
Therefore, given a power z^a of z, one has z^a = z^r, where 0 \leq r < n is the remainder of the Euclidean division of a by n.
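These properties are easy to confirm numerically. A sketch with a primitive 7th root of unity (n = 7 and a = 23 are my own sample values):

```python
import cmath
from math import pi

n = 7
z = cmath.exp(2j*pi/n)   # a primitive 7th root of unity

# z^a depends only on a mod n, so z^a = z^r with r the remainder of a by n:
a = 23
r = a % n                # 23 = 3*7 + 2, so r = 2
assert abs(z**a - z**r) < 1e-9

# The reciprocal equals both the conjugate and z^(n-1):
assert abs(1/z - z.conjugate()) < 1e-12
assert abs(1/z - z**(n-1)) < 1e-9
```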
Determinant - I
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries.
The determinant of a 2 \times 2 matrix is
{\displaystyle {\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc,}
and the determinant of a 3 \times 3 matrix is
{\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=aei+bfg+cdh-ceg-bdi-afh.}
The determinant of an n × n matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of {\displaystyle n!} (the factorial of n) signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form.
Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the n \times n matrices that has the four following properties:
1. The determinant of the identity matrix is 1.
2. The exchange of two rows multiplies the determinant by -1.
3. Multiplying a row by a number multiplies the determinant by this number.
4. Adding a multiple of one row to another row does not change the determinant.
The above properties relating to rows (properties 2-4) may be replaced by the corresponding statements with respect to columns.
The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the determinant of a linear endomorphism, which does not depend on the choice of a coordinate system.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the n-dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
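The 2 × 2 and 3 × 3 formulas above translate directly into code. A sketch in plain Python (`det2` and `det3` are illustrative helper names, not from the text):

```python
def det2(m):
    """ad - bc for a 2x2 matrix stored as a list of rows."""
    (a, b), (c, d) = m
    return a*d - b*c

def det3(m):
    """Rule of Sarrus: aei + bfg + cdh - ceg - bdi - afh."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

assert det2([[1, 2], [3, 4]]) == -2
assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1      # identity -> 1
assert det3([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24     # triangular: product of diagonal
```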
Distributive property - I
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality
{\displaystyle x\cdot (y+z)=x\cdot y+x\cdot z}
is always true in elementary algebra. For example, in elementary arithmetic, one has
{\displaystyle 2\cdot (1+3)=(2\cdot 1)+(2\cdot 3).}
Therefore, one would say that multiplication distributes over addition.
This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted {\displaystyle \,\land \,}) and the logical or (denoted {\displaystyle \,\lor \,}) distributes over the other.
Definition
Given a set {\displaystyle S} and two binary operators {\displaystyle \,*\,} and {\displaystyle \,+\,} on {\displaystyle S,}
i) the operation {\displaystyle \,*\,} is left-distributive over {\displaystyle \,+\,} if, given any elements {\displaystyle x,y,{\text{ and }}z} of {\displaystyle S,}
{\displaystyle x*(y+z)=(x*y)+(x*z);}
ii) the operation {\displaystyle \,*\,} is right-distributive over {\displaystyle \,+\,} if, given any elements {\displaystyle x,y,{\text{ and }}z} of {\displaystyle S,}
{\displaystyle (y+z)*x=(y*x)+(z*x);}
iii) the operation {\displaystyle \,*\,} is distributive over {\displaystyle \,+\,} if it is left- and right-distributive.
When {\displaystyle \,*\,} is commutative, the three conditions above are logically equivalent.
Meaning
The operators used for examples in this section are those of the usual addition {\displaystyle \,+\,} and multiplication {\displaystyle \,\cdot .\,}
If the operation denoted {\displaystyle \cdot } is not commutative, there is a distinction between left-distributivity and right-distributivity:
{\displaystyle a\cdot \left(b\pm c\right)=a\cdot b\pm a\cdot c\qquad {\text{ (left-distributive) }}}
{\displaystyle (a\pm b)\cdot c=a\cdot c\pm b\cdot c\qquad {\text{ (right-distributive) }}.}
In either case, the distributive property can be described in words as:
To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted).
If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity.
One example of an operation that is "only" right-distributive is division, which is not commutative:
{\displaystyle (a\pm b)\div c=a\div c\pm b\div c.}
In this case, left-distributivity does not apply:
{\displaystyle a\div (b\pm c)\neq a\div b\pm a\div c.}
The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication.
Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra.
Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products.
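The division example can be checked exactly with rational arithmetic. A sketch using Python's standard fractions module (the sample values 1, 2, 4 are my own):

```python
from fractions import Fraction as F

a, b, c = F(1), F(2), F(4)

# Division is right-distributive over addition...
assert (a + b) / c == a/c + b/c      # (1+2)/4 == 1/4 + 2/4

# ...but not left-distributive:
assert a / (b + c) != a/b + a/c      # 1/6  !=  1/2 + 1/4
```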
Determinant - II
Two by two matrices
The determinant of a 2 \times 2 matrix
{\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}}
is denoted by
{\displaystyle \det {\begin{pmatrix}a&b\\c&d\end{pmatrix}}={\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc.}
For example,
{\displaystyle \det {\begin{pmatrix}3&7\\1&-4\end{pmatrix}}={\begin{vmatrix}3&7\\1&{-4}\end{vmatrix}}=(3\cdot (-4))-(7\cdot 1)=-19.}
First properties
The determinant has several key properties that can be proved by direct evaluation of the definition for {\displaystyle 2\times 2} matrices, and that continue to hold for determinants of larger matrices. First, the determinant of the identity matrix
{\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}}}
is 1. Second, the determinant is zero if two rows are the same:
{\displaystyle {\begin{vmatrix}a&b\\a&b\end{vmatrix}}=ab-ba=0.}
This holds similarly if two columns are the same. Moreover, the determinant is additive in each column separately; for the second column,
{\displaystyle {\begin{vmatrix}a&b+b'\\c&d+d'\end{vmatrix}}=a(d+d')-(b+b')c={\begin{vmatrix}a&b\\c&d\end{vmatrix}}+{\begin{vmatrix}a&b'\\c&d'\end{vmatrix}}.}
Finally, if any column is multiplied by some number r (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:
{\displaystyle {\begin{vmatrix}r\cdot a&b\\r\cdot c&d\end{vmatrix}}=rad-brc=r(ad-bc)=r\cdot {\begin{vmatrix}a&b\\c&d\end{vmatrix}}.}
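A quick numerical check of these 2 × 2 properties in plain Python (`det2` is an illustrative helper name; the sample entries reuse the worked example above plus values of my own):

```python
def det2(m):
    """ad - bc for a 2x2 matrix stored as a list of rows."""
    (a, b), (c, d) = m
    return a*d - b*c

a, b, c, d = 3, 7, 1, -4
b2, d2, r = 2, 5, 10

assert det2([[3, 7], [1, -4]]) == -19                 # the worked example above
assert det2([[a, b], [a, b]]) == 0                    # equal rows -> 0
# additivity in the second column:
assert det2([[a, b + b2], [c, d + d2]]) == det2([[a, b], [c, d]]) + det2([[a, b2], [c, d2]])
# scaling a column scales the determinant:
assert det2([[r*a, b], [r*c, d]]) == r * det2([[a, b], [c, d]])
```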
Determinant - III
If an n \times n real matrix A is written in terms of its column vectors
{\displaystyle A=\left[{\begin{array}{c|c|c|c}\mathbf {a} _{1}&\mathbf {a} _{2}&\cdots &\mathbf {a} _{n}\end{array}}\right],}
then
{\displaystyle A{\begin{pmatrix}1\\0\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{1},\quad A{\begin{pmatrix}0\\1\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{2},\quad \ldots ,\quad A{\begin{pmatrix}0\\0\\\vdots \\1\end{pmatrix}}=\mathbf {a} _{n}.}
That is, A maps the standard basis vectors to its columns.
Determinant - IV
Definition
Let A be a square matrix with n rows and n columns, so that it can be written as
{\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{bmatrix}}.}
gives
The entries {\displaystyle a_{1,1}} etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.
The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
{\displaystyle {\begin{vmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{vmatrix}}.}
There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
Determinant - V
n × n matrices
Generalizing the above to higher dimensions, the determinant of an {\displaystyle n\times n} matrix is an expression involving permutations and their signatures. A permutation of the set {\displaystyle \{1,2,\dots ,n\}} is a bijective function {\displaystyle \sigma } from this set to itself, with values {\displaystyle \sigma (1),\sigma (2),\ldots ,\sigma (n)} exhausting the entire set. The set of all such permutations, called the symmetric group, is denoted {\displaystyle S_{n}}. The signature {\displaystyle \operatorname {sgn}(\sigma )} of a permutation {\displaystyle \sigma } is {\displaystyle +1,} if the permutation can be obtained with an even number of exchanges of two entries; otherwise, it is {\displaystyle -1.}
Given a matrix
{\displaystyle A={\begin{bmatrix}a_{1,1}&\ldots &a_{1,n}\\\vdots &&\vdots \\a_{n,1}&\ldots &a_{n,n}\end{bmatrix}},}
its determinant is given by the Leibniz formula
{\displaystyle \det(A)={\begin{vmatrix}a_{1,1}&\ldots &a_{1,n}\\\vdots &&\vdots \\a_{n,1}&\ldots &a_{n,n}\end{vmatrix}}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )a_{1,\sigma (1)}\cdots a_{n,\sigma (n)},}
which, using pi notation for the product, can be shortened to
{\displaystyle \det(A)=\sum _{\sigma \in S_{n}}\left(\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{i,\sigma (i)}\right).}
Using the Levi-Civita symbol {\displaystyle \varepsilon _{i_{1},\ldots ,i_{n}}}, which is defined for n-tuples of integers in {\displaystyle \{1,\ldots ,n\}} to be 0 if two of the integers are equal, and otherwise to be the signature of the corresponding permutation, the Leibniz formula becomes
{\displaystyle \det(A)=\sum _{i_{1},i_{2},\ldots ,i_{n}}\varepsilon _{i_{1}\cdots i_{n}}a_{1,i_{1}}\!\cdots a_{n,i_{n}},}
where the sum is taken over all n-tuples of integers in {\displaystyle \{1,\ldots ,n\}.}
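The Leibniz formula translates almost literally into code. A sketch in plain Python using itertools (`sign` and `det` are illustrative names; the signature is computed by counting inversions):

```python
from itertools import permutations

def sign(sigma):
    """Signature of a permutation: -1 to the number of inversions."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def det(A):
    """Leibniz formula: sum over all permutations of signed entry products."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = sign(sigma)
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

assert det([[3, 7], [1, -4]]) == -19
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
assert det([[2, 1, 3], [0, 4, 5], [0, 0, 6]]) == 48  # triangular: 2*4*6
```

Note that this sums n! terms, so it is only practical for small n; Gaussian elimination is the usual method for larger matrices.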
Distributive Property - II
Examples
Real numbers
In the following examples, the use of the distributive law on the set of real numbers {\displaystyle \mathbb {R} } is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.
First example (mental and written multiplication)
During mental arithmetic, distributivity is often used unconsciously:
{\displaystyle 6\cdot 16=6\cdot (10+6)=6\cdot 10+6\cdot 6=60+36=96}
Thus, to calculate {\displaystyle 6\cdot 16} in one's head, one first multiplies {\displaystyle 6\cdot 10} and {\displaystyle 6\cdot 6} and adds the intermediate results. Written multiplication is also based on the distributive law.
Second example (with variables)
{\displaystyle 3a^{2}b\cdot (4a-5b)=3a^{2}b\cdot 4a-3a^{2}b\cdot 5b=12a^{3}b-15a^{2}b^{2}}
Distributive Property - III
Third example (with two sums)
{\displaystyle {\begin{aligned}(a+b)\cdot (a-b)&=a\cdot (a-b)+b\cdot (a-b)=a^{2}-ab+ba-b^{2}=a^{2}-b^{2}\\&=(a+b)\cdot a-(a+b)\cdot b=a^{2}+ba-ab-b^{2}=a^{2}-b^{2}\\\end{aligned}}}
Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.
Fourth example
Here the distributive law is applied the other way around compared to the previous examples. Consider
{\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}\,.}
Since the factor {\displaystyle 6a^{2}b} occurs in all summands, it can be factored out. That is, due to the distributive law one obtains
{\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}=6a^{2}b\left(2ab-5a^{2}c+3b^{2}c^{2}\right).}
Matrix - I
In mathematics, a matrix (pl.: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.
For example,
{\displaystyle {\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}}
denotes a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "{\displaystyle 2\times 3} matrix", or a matrix of dimension {\displaystyle 2\times 3}.
Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.
Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is defined by a determinant.
In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or through their use in geometry and numerical analysis.
Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.
Definition
A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field F is a rectangular array of elements of F. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:
{\displaystyle \mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.}
The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.
Size
The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with m rows and n columns is called an {\displaystyle {m\times n}} matrix, or m-by-n matrix, where m and n are called its dimensions. For example, the matrix {\displaystyle {\mathbf {A} }} above is a {\displaystyle {3\times 2}} matrix.
Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.
Matrix - II
Overview of a matrix size
Name : Size : Example : Description
Row vector : 1 \times n : {\displaystyle {\begin{bmatrix}3&7&2\end{bmatrix}}} : A matrix with one row, sometimes used to represent a vector; its entries are denoted {a_{i}}.
Column vector : n \times 1 : {\displaystyle {\begin{bmatrix}4\\1\\8\end{bmatrix}}} : A matrix with one column, sometimes used to represent a vector; its entries are denoted {a_{j}}.
Square matrix : n \times n : {\displaystyle {\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}} : A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing.
Matrix - III
Notation
The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an
{\displaystyle m\times n} matrix {\displaystyle \mathbf {A} } is represented as
{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}.}
Matrix - IV
This may be abbreviated by writing only a single generic term, possibly along with indices, as in
{\displaystyle \mathbf {A} =\left(a_{ij}\right),\quad \left[a_{ij}\right],\quad {\text{or}}\quad \left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}.}
In the case of a square matrix (m = n), one often writes
{\displaystyle \mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}.}
Matrix - V
Addition
The sum A+B of two m-by-n matrices A and B is calculated entrywise:
(A + B)_{i,j} = A_{i,j} + B_{i,j}, where 1 \leq i \leq m and 1 \leq j \leq n.
For example,
{\displaystyle {\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}}
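Entrywise addition is a one-liner for matrices stored as lists of rows. A sketch in plain Python (`mat_add` is an illustrative helper name), reusing the worked example above:

```python
def mat_add(A, B):
    """Entrywise sum of two equally sized matrices stored as lists of rows."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

A = [[1, 3, 1], [1, 0, 0]]
B = [[0, 0, 5], [7, 5, 0]]
assert mat_add(A, B) == [[1, 3, 6], [8, 5, 0]]  # matches the worked example above
```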
Matrix - VI
Scalar Multiplication
The product cA of a number c (also called a scalar in this context) and a matrix A is computed by multiplying every entry of A by c:
(cA)_{i,j} = c \cdot A_{i,j}.
This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:
{\displaystyle 2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot -3\\2\cdot 4&2\cdot -2&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}}
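The same operation as a short sketch in plain Python, mirroring the worked example:

```python
def scalar_mul(c, A):
    """Multiply every entry of the matrix A by the scalar c."""
    return [[c * a for a in row] for row in A]

print(scalar_mul(2, [[1, 8, -3], [4, -2, 5]]))  # [[2, 16, -6], [8, -4, 10]]
```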
Matrix - VII
Subtraction
The subtraction of two m \times n matrices is defined by composing matrix addition with scalar multiplication by -1:
{\displaystyle \mathbf {A} -\mathbf {B} =\mathbf {A} +(-1)\cdot \mathbf {B} .}
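The definition literally composes the two earlier operations; a sketch in plain Python (helper names are illustrative, not from any library) makes that composition explicit:

```python
def mat_add(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * a for a in row] for row in A]

def mat_sub(A, B):
    """A - B defined as A + (-1)*B, exactly as in the text."""
    return mat_add(A, scalar_mul(-1, B))

print(mat_sub([[5, 7], [2, 4]], [[1, 2], [3, 4]]))  # [[4, 5], [-1, 0]]
```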
Transposition
The transpose of an m-by-n matrix A is the n-by-m matrix A^T (also denoted A^{tr} or {}^{t}\!A) formed by turning rows into columns and vice versa: (A^T)_{i,j} = A_{j,i}.
For example:
{\displaystyle {\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}}
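A quick sketch of transposition in plain Python, reproducing the example above:

```python
def transpose(A):
    """(A^T)[i][j] = A[j][i]: rows become columns and vice versa."""
    return [list(col) for col in zip(*A)]

print(transpose([[1, 2, 3], [0, -6, 7]]))  # [[1, 0], [2, -6], [3, 7]]
```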
Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A. The transpose is compatible with addition and scalar multiplication, as expressed by
(cA)^T = c(A^T) \text{ and } (A + B)^T = A^T + B^T.
Moreover, transposition is an involution, that is, applying it twice returns the original matrix: (A^{T})^{T} = A.
Matrix - VIII
Matrix multiplication
Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by dot product of the corresponding row of A and the corresponding column of B:
{\displaystyle [\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},}
where 1 \leq i \leq m and 1 \leq j \leq p.
For example, the underlined entry 2340 in the product is calculated as (2 \times 1000) + (3 \times 100) + (4 \times 10) = 2340:
{\displaystyle {\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}}
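The row-times-column rule translates directly into code. A minimal sketch in plain Python, checked against the worked example above:

```python
def mat_mul(A, B):
    """[AB][i][j] = sum over r of A[i][r] * B[r][j].

    The number of columns of A must equal the number of rows of B.
    """
    n = len(B)        # inner dimension (columns of A = rows of B)
    p = len(B[0])     # columns of the result
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[2, 3, 4], [1, 0, 0]]
B = [[0, 1000], [1, 100], [0, 10]]
print(mat_mul(A, B))  # [[3, 2340], [0, 1000]]
```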
Matrix - IX
Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and
(A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and
m \neq k. Even if both products are defined, they generally need not be equal:
AB \neq BA.
In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:
{\displaystyle {\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},}
whereas
{\displaystyle {\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.}
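The non-commuting pair above can be verified with the same row-times-column rule; here is a small self-contained check in plain Python:

```python
def mat_mul(A, B):
    """Matrix product via the row-times-column rule."""
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [0, 0]]
print(mat_mul(X, Y))  # [[0, 1], [0, 3]]
print(mat_mul(Y, X))  # [[3, 4], [0, 0]]  -- different, so XY != YX
```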
Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.