
**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Fundamental theorem of algebra**

The fundamental theorem of algebra, due to Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) `a_0, ..., a_n`, the equation

`{\displaystyle a_{n}z^{n}+\dotsb +a_{1}z+a_{0}=0}`

has at least one complex solution z, provided that at least one of the higher coefficients `a_1, ..., a_n` is nonzero. This property does not hold for the field of rational numbers `{\displaystyle \mathbb {Q} }` (the polynomial `x^2 - 2` has no rational root, because `\sqrt{2}` is not a rational number), nor for the real numbers `{\displaystyle \mathbb {R} }` (the polynomial `x^2 + 4` has no real root, because `x^2` is nonnegative for any real number x, so `x^2 + 4 \geq 4`).

Because of this fact, `{\displaystyle \mathbb {C} }` is called an algebraically closed field, which is a cornerstone of various applications of complex numbers. There are various proofs of this theorem, by analytic methods such as Liouville's theorem, by topological ones such as the winding number, or by combining Galois theory with the fact that any real polynomial of odd degree has at least one real root.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
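As a quick numerical illustration of the theorem (my own sketch, not part of the post): the Durand-Kerner method iterates approximations to all n roots of a polynomial simultaneously, and finds the complex roots ±2i of x² + 4, a polynomial with no real root.

```python
def durand_kerner(coeffs, iterations=100):
    """Approximate all complex roots of a polynomial with coefficients
    [a_n, ..., a_1, a_0], a_n != 0, by Durand-Kerner iteration."""
    n = len(coeffs) - 1
    monic = [c / coeffs[0] for c in coeffs]  # normalize leading coefficient to 1

    def p(z):
        result = 0
        for c in monic:  # Horner evaluation
            result = result * z + c
        return result

    # Standard starting guesses: powers of a point that is not a root of unity.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        for i in range(n):
            denom = 1
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / denom
    return roots

# x^2 + 4 has no real root; the theorem still guarantees complex ones.
roots = durand_kerner([1, 0, 4])
print(sorted(round(r.imag, 6) for r in roots))  # [-2.0, 2.0]
```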

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Matrix representation of complex numbers**

Complex numbers a + bi can also be represented by 2 × 2 matrices of the form

`{\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}.}`

Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 matrices.

A simple computation shows that the map

`{\displaystyle a+ib\mapsto {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}}`

is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix.

The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector (x, y) corresponds to the multiplication of x + iy by a + ib. In particular, if the determinant is 1, there is a real number t such that the matrix has the form

`{\displaystyle {\begin{pmatrix}\cos t&-\sin t\\\sin t&\;\;\cos t\end{pmatrix}}.}`

In this case, the action of the matrix on vectors and the multiplication by the complex number `{\displaystyle \cos t+i\sin t}` are both the rotation by the angle t.
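A short check of this correspondence (an illustrative sketch of mine, not from the post): the map a + bi ↦ ((a, −b), (b, a)) turns complex multiplication into matrix multiplication, and the determinant of the matrix equals |z|².

```python
def to_matrix(z):
    """Represent a + bi as the 2x2 matrix ((a, -b), (b, a))."""
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def mat_mul(m, n):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

z, w = 3 + 4j, 1 - 2j

# The map is multiplicative: M(z) M(w) == M(z * w).
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)

# det M(z) = a^2 - (-b)b = a^2 + b^2 = |z|^2.
(a, nb), (b, _) = to_matrix(z)
assert a * a - nb * b == abs(z) ** 2  # 9 + 16 = 25
```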

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Complex analysis**

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis.

Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

**Convergence**

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by that of complex numbers. From a more abstract point of view, `{\displaystyle \mathbb {C} }`, endowed with the metric

`{\displaystyle \operatorname {d} (z_{1},z_{2})=|z_{1}-z_{2}|,}`

is a complete metric space; the metric notably satisfies the triangle inequality

`{\displaystyle |z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|}`

for any two complex numbers `z_1` and `z_2`.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
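A small numerical sketch (my own, not from the post) of the two facts above: a complex sequence converges exactly when its real and imaginary parts do, and moduli obey the triangle inequality.

```python
# The sequence z_n = (1 + 1/n) + i/n converges to 1: the real part
# (1 + 1/n -> 1) and the imaginary part (1/n -> 0) both converge.
def z(n):
    return (1 + 1 / n) + 1j / n

limit = 1 + 0j
for n in (10, 100, 1000):
    # componentwise convergence controls the metric d(z1, z2) = |z1 - z2|
    assert abs(z(n) - limit) <= abs(z(n).real - limit.real) + abs(z(n).imag - limit.imag)
assert abs(z(10**6) - limit) < 1e-5

# The triangle inequality |z1 + z2| <= |z1| + |z2|.
z1, z2 = 3 + 4j, -1 + 2j
assert abs(z1 + z2) <= abs(z1) + abs(z2)
```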

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Complex exponential**

Like in real analysis, this notion of convergence is used to construct a number of elementary functions. The exponential function exp z, also written `e^z`, is defined as the infinite series

`{\displaystyle \exp z:=1+z+{\frac {z^{2}}{2\cdot 1}}+{\frac {z^{3}}{3\cdot 2\cdot 1}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}},}`

which can be shown to converge for any z. For example, `{\displaystyle \exp(1)}` is Euler's number `{\displaystyle e\approx 2.718}`. Euler's formula states that

`{\displaystyle \exp(i\varphi )=\cos \varphi +i\sin \varphi }`

for any real number φ. This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, it includes Euler's identity

`{\displaystyle \exp(i\pi )=-1.}`

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
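A sketch (mine, not from the post) comparing partial sums of the defining series with the built-in complex exponential, and checking Euler's formula and identity numerically:

```python
import cmath
from math import factorial, pi

def exp_series(z, terms=30):
    """Partial sum of the defining series sum_{n>=0} z^n / n!."""
    return sum(z ** n / factorial(n) for n in range(terms))

# The series agrees with the built-in complex exponential.
assert abs(exp_series(1 + 1j) - cmath.exp(1 + 1j)) < 1e-12

# Euler's formula: exp(i*phi) = cos(phi) + i*sin(phi).
phi = 0.75
assert abs(exp_series(1j * phi) - (cmath.cos(phi) + 1j * cmath.sin(phi))) < 1e-12

# Euler's identity: exp(i*pi) = -1.
assert abs(exp_series(1j * pi) - (-1)) < 1e-12
```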

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Complex logarithm - I**

The exponential function maps complex numbers z differing by a multiple of `{\displaystyle 2\pi i}` to the same complex number w.

For any positive real number t, there is a unique real number x such that `{\displaystyle \exp(x)=t}`. This leads to the definition of the natural logarithm as the inverse

`{\displaystyle \ln \colon \mathbb {R} ^{+}\to \mathbb {R} ;\;x\mapsto \ln x}`

of the exponential function. The situation is different for complex numbers, since

`{\displaystyle \exp(z+2\pi i)=\exp z\exp(2\pi i)=\exp z}`

by the functional equation and Euler's identity.

In general, given any non-zero complex number w, any number z solving the equation `{\displaystyle \exp z=w}` is called a complex logarithm of w, denoted `{\displaystyle \log w}`. It can be shown that these numbers satisfy

`{\displaystyle z=\log w=\ln |w|+i\arg w,}`

where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of `2\pi`, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval `(-\pi, \pi]`. This leads to the complex logarithm being a bijective function taking values in the strip

`{\displaystyle \mathbb {R} +i\,\left(-\pi ,\pi \right]:}`

`{\displaystyle \ln \colon \;\mathbb {C} ^{\times }\;\to \;\;\mathbb {R} +\;i\,\left(-\pi ,\pi \right].}`
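A concrete sketch (my own) of the principal value and of the multivaluedness: `cmath.log` returns ln|w| + i·arg(w) with arg(w) in (−π, π], and exp sends every z + 2πik back to w.

```python
import cmath
from math import log, pi

w = -1 + 1j  # a non-zero complex number

# Principal value: log w = ln|w| + i*arg(w) with arg(w) in (-pi, pi].
z = cmath.log(w)
assert abs(z.real - log(abs(w))) < 1e-12
assert -pi < z.imag <= pi

# log is multivalued: exp maps z + 2*pi*i*k back to w for every integer k.
for k in (-2, -1, 1, 2):
    assert abs(cmath.exp(z + 2j * pi * k) - w) < 1e-9
```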

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Complex logarithm - II**

If `{\displaystyle z\in \mathbb {C} \setminus \left(-\mathbb {R} _{\geq 0}\right)}` is not a non-positive real number (that is, z is positive or non-real), the principal value of the complex logarithm is obtained with `-\pi < \varphi < \pi`. It is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number `{\displaystyle z\in -\mathbb {R} ^{+}}`, where the principal value is

`{\displaystyle \ln z=\ln(-z)+i\pi .}`

Complex exponentiation `z^{\omega}` is defined as

`{\displaystyle z^{\omega }=\exp(\omega \ln z),}`

and is multivalued, except when `\omega` is an integer. For `\omega = 1 / n`, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If z > 0 is real (and `\omega` an arbitrary complex number), one has a preferred choice of `{\displaystyle \ln z}`, the real logarithm, which can be used to define a preferred exponential function.

Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy

`{\displaystyle a^{bc}=\left(a^{b}\right)^{c}.}`

Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
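A concrete instance of that failure (my own sketch): with principal values, a^(bc) and (a^b)^c disagree already for a = −1, b = 2, c = 1/2.

```python
# With principal values, a^(b*c) and (a^b)^c can differ.
a, b, c = -1 + 0j, 2, 0.5

lhs = a ** (b * c)   # (-1)^1 = -1
rhs = (a ** b) ** c  # ((-1)^2)^(1/2) = 1^(1/2) = 1

assert abs(lhs - (-1)) < 1e-12
assert abs(rhs - 1) < 1e-12
assert lhs != rhs
```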

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Root of Unity**

In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform.

Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly n nth roots of unity, except when n is a multiple of the (positive) characteristic of the field.

**General definition**

[Figure caption: Geometric representation of the 2nd to 6th roots of a general complex number in polar form. For the nth roots of unity, set r = 1 and `\varphi = 0`.]

An nth root of unity, where n is a positive integer, is a number z satisfying the equation

`{\displaystyle z^{n}=1.}`

Unless otherwise specified, the roots of unity may be taken to be complex numbers (including the number 1, and the number -1 if n is even, which are complex with a zero imaginary part), and in this case, the nth roots of unity are

`{\displaystyle \exp \left({\frac {2k\pi i}{n}}\right)=\cos {\frac {2k\pi }{n}}+i\sin {\frac {2k\pi }{n}},\qquad k=0,1,\dots ,n-1.}`

However, the defining equation of roots of unity is meaningful over any field (and even over any ring) F, and this allows considering roots of unity in F. Whichever the field F is, the roots of unity in F are either complex numbers, if the characteristic of F is 0, or, otherwise, belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field.

An nth root of unity is said to be primitive if it is not an mth root of unity for some smaller m, that is, if

`{\displaystyle z^{n}=1\quad {\text{and}}\quad z^{m}\neq 1{\text{ for }}m=1,2,3,\ldots ,n-1.}`

If n is a prime number, then all nth roots of unity, except 1, are primitive.

In the above formula in terms of exponential and trigonometric functions, the primitive nth roots of unity are those for which k and n are coprime integers.
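A sketch of mine making the last paragraph concrete: compute the nth roots of unity as exp(2πik/n), and check that the primitive ones are exactly those with gcd(k, n) = 1.

```python
import cmath
from math import gcd, pi

def roots_of_unity(n):
    """The n-th roots of unity exp(2*pi*i*k/n), k = 0..n-1."""
    return [cmath.exp(2j * pi * k / n) for k in range(n)]

n = 6
roots = roots_of_unity(n)

# Every root satisfies z^n = 1.
assert all(abs(z ** n - 1) < 1e-9 for z in roots)

# The primitive n-th roots are those with gcd(k, n) = 1.
primitive = [k for k in range(n) if gcd(k, n) == 1]
print(primitive)  # for n = 6: [1, 5]

for k in primitive:
    z = roots[k]
    # z^m != 1 for every m = 1..n-1, so the root is primitive.
    assert all(abs(z ** m - 1) > 1e-9 for m in range(1, n))
```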

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Elementary properties**

Every nth root of unity z is a primitive ath root of unity for some `a \leq n`, which is the smallest positive integer such that `z^a = 1`.

Any integer power of an nth root of unity is also an nth root of unity, as

`{\displaystyle (z^{k})^{n}=z^{kn}=(z^{n})^{k}=1^{k}=1.}`

This is also true for negative exponents. In particular, the reciprocal of an nth root of unity is its complex conjugate, and is also an nth root of unity:

`{\displaystyle {\frac {1}{z}}=z^{-1}=z^{n-1}={\bar {z}}.}`

If z is an nth root of unity and a ≡ b (mod n), then `z^a = z^b`. Indeed, by the definition of congruence modulo n, a = b + kn for some integer k, and hence

`{\displaystyle z^{a}=z^{b+kn}=z^{b}z^{kn}=z^{b}(z^{n})^{k}=z^{b}1^{k}=z^{b}.}`

Therefore, given a power `z^a` of z, one has `z^a = z^r`, where `0 \leq r < n` is the remainder of the Euclidean division of a by n.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
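A quick numerical sketch of mine verifying these properties for a primitive 5th root of unity: powers depend only on the exponent mod n, and the reciprocal equals the conjugate.

```python
import cmath
from math import pi

n = 5
z = cmath.exp(2j * pi / n)  # a primitive 5th root of unity

# z^a depends only on a mod n: z^a == z^(a % n).
for a in (7, 12, -3, 101):
    assert abs(z ** a - z ** (a % n)) < 1e-9

# The reciprocal equals the conjugate, and is again an n-th root of unity.
assert abs(1 / z - z.conjugate()) < 1e-12
assert abs(z.conjugate() ** n - 1) < 1e-9
```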

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Determinant - I**

In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.

The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries.

The determinant of a `2 \times 2` matrix is

`{\displaystyle {\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc,}`

and the determinant of a `3 \times 3` matrix is

`{\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=aei+bfg+cdh-ceg-bdi-afh.}`

The determinant of an n × n matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of `{\displaystyle n!}` (the factorial of n) signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form.

Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the `n \times n` matrices that has the four following properties:

1. The determinant of the identity matrix is 1.

2. The exchange of two rows multiplies the determinant by -1.

3. Multiplying a row by a number multiplies the determinant by this number.

4. Adding a multiple of one row to another row does not change the determinant.

The above properties relating to rows (properties 2-4) may be replaced by the corresponding statements with respect to columns.

The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the determinant of a linear endomorphism, which does not depend on the choice of a coordinate system.

Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the n-dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
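A sketch of mine applying the 3 × 3 rule above, including the triangular case, where the determinant reduces to the product of the diagonal entries:

```python
def det3(m):
    """Determinant of a 3x3 matrix by the rule aei + bfg + cdh - ceg - bdi - afh."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

m = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det3(m))  # 25

# For a triangular matrix the determinant is the product of the diagonal.
t = [[2, 5, 7],
     [0, 3, 1],
     [0, 0, 4]]
assert det3(t) == 2 * 3 * 4
```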

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Distributive property - I**

In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality

`{\displaystyle x\cdot (y+z)=x\cdot y+x\cdot z}`

is always true in elementary algebra. For example, in elementary arithmetic, one has

`{\displaystyle 2\cdot (1+3)=(2\cdot 1)+(2\cdot 3).}`

Therefore, one would say that multiplication distributes over addition.

This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted `{\displaystyle \,\land \,}`) and the logical or (denoted `{\displaystyle \,\lor \,}`) distributes over the other.

**Definition**

Given a set `{\displaystyle S}` and two binary operators `{\displaystyle \,*\,}` and `{\displaystyle \,+\,}` on `{\displaystyle S,}`

i) the operation `{\displaystyle \,*\,}` is left-distributive over (or with respect to) `{\displaystyle \,+\,}` if, given any elements `{\displaystyle x,y,{\text{ and }}z}` of `{\displaystyle S,}`

`{\displaystyle x*(y+z)=(x*y)+(x*z);}`

ii) the operation `{\displaystyle \,*\,}` is right-distributive over `{\displaystyle \,+\,}` if, given any elements `{\displaystyle x,y,{\text{ and }}z}` of `{\displaystyle S,}`

`{\displaystyle (y+z)*x=(y*x)+(z*x);}`

iii) the operation `{\displaystyle \,*\,}` is distributive over `{\displaystyle \,+\,}` if it is left- and right-distributive.

When `{\displaystyle \,*\,}` is commutative, the three conditions above are logically equivalent.

**Meaning**

The operators used for examples in this section are those of the usual addition `{\displaystyle \,+\,}` and multiplication `{\displaystyle \,\cdot .\,}`

If the operation denoted `{\displaystyle \cdot }` is not commutative, there is a distinction between left-distributivity and right-distributivity:

`{\displaystyle a\cdot \left(b\pm c\right)=a\cdot b\pm a\cdot c\qquad {\text{ (left-distributive) }}}`

`{\displaystyle (a\pm b)\cdot c=a\cdot c\pm b\cdot c\qquad {\text{ (right-distributive) }}.}`

In either case, the distributive property can be described in words as:

To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted).

If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity.

One example of an operation that is "only" right-distributive is division, which is not commutative:

`{\displaystyle (a\pm b)\div c=a\div c\pm b\div c.}`

In this case, left-distributivity does not apply:

`{\displaystyle a\div (b\pm c)\neq a\div b\pm a\div c.}`

The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication.

Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra.

Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products.
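A sketch of mine checking, with exact rational arithmetic, that division is right-distributive over addition but not left-distributive:

```python
from fractions import Fraction as F

a, b, c = F(3), F(5), F(2)

# Division is right-distributive over addition ...
assert (a + b) / c == a / c + b / c

# ... but not left-distributive: a/(b+c) != a/b + a/c in general.
assert a / (b + c) != a / b + a / c
```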

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Determinant - II**

**Two by two matrices**

The determinant of a `2 \times 2` matrix

`{\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}}`

is denoted either by "det" or by vertical bars around the matrix, and is defined as

`{\displaystyle \det {\begin{pmatrix}a&b\\c&d\end{pmatrix}}={\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc.}`

For example,

`{\displaystyle \det {\begin{pmatrix}3&7\\1&-4\end{pmatrix}}={\begin{vmatrix}3&7\\1&{-4}\end{vmatrix}}=(3\cdot (-4))-(7\cdot 1)=-19.}`

**First properties**

The determinant has several key properties that can be proved by direct evaluation of the definition for `{\displaystyle 2\times 2}` matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix

`{\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}}}`

is 1. Second, the determinant is zero if two rows are the same:

`{\displaystyle {\begin{vmatrix}a&b\\a&b\end{vmatrix}}=ab-ba=0.}`

This holds similarly if the two columns are the same. Moreover,

`{\displaystyle {\begin{vmatrix}a&b+b'\\c&d+d'\end{vmatrix}}=a(d+d')-(b+b')c={\begin{vmatrix}a&b\\c&d\end{vmatrix}}+{\begin{vmatrix}a&b'\\c&d'\end{vmatrix}}.}`

Finally, if any column is multiplied by some number r (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:

`{\displaystyle {\begin{vmatrix}r\cdot a&b\\r\cdot c&d\end{vmatrix}}=rad-brc=r(ad-bc)=r\cdot {\begin{vmatrix}a&b\\c&d\end{vmatrix}}.}`
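A sketch of mine checking these 2 × 2 properties directly from the formula ad − bc:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] as ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

# det of the identity is 1; swapping rows flips the sign.
assert det2([[1, 0], [0, 1]]) == 1
assert det2([[0, 1], [1, 0]]) == -1

# Equal rows give determinant zero.
assert det2([[5, 7], [5, 7]]) == 0

# Scaling a column scales the determinant by the same factor.
r, a, b, c, d = 3, 2, 5, 4, 9
assert det2([[r * a, b], [r * c, d]]) == r * det2([[a, b], [c, d]])
```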

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Determinant - III**

If an `n \times n` real matrix A is written in terms of its column vectors

`{\displaystyle A=\left[{\begin{array}{c|c|c|c}\mathbf {a} _{1}&\mathbf {a} _{2}&\cdots &\mathbf {a} _{n}\end{array}}\right],}`

then

`{\displaystyle A{\begin{pmatrix}1\\0\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{1},\quad A{\begin{pmatrix}0\\1\\\vdots \\0\end{pmatrix}}=\mathbf {a} _{2},\quad \ldots ,\quad A{\begin{pmatrix}0\\0\\\vdots \\1\end{pmatrix}}=\mathbf {a} _{n}.}`

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Determinant - IV**

**Definition**

Let A be a square matrix with n rows and n columns, so that it can be written as

`{\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{bmatrix}}.}`

The entries `{\displaystyle a_{1,1}}` etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.

The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:

`{\displaystyle {\begin{vmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\end{vmatrix}}.}`

There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Determinant - V**

**n × n matrices**

Generalizing the above to higher dimensions, the determinant of an `{\displaystyle n\times n}` matrix is an expression involving permutations and their signatures. A permutation of the set `{\displaystyle \{1,2,\dots ,n\}}` is a bijective function `{\displaystyle \sigma }` from this set to itself, with values `{\displaystyle \sigma (1),\sigma (2),\ldots ,\sigma (n)}` exhausting the entire set. The set of all such permutations, called the symmetric group, is commonly denoted `{\displaystyle S_{n}}`.

The signature `{\displaystyle \operatorname {sgn}(\sigma )}` of a permutation `{\displaystyle \sigma }` is `{\displaystyle +1}` if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is `{\displaystyle -1.}`

Given a matrix

`{\displaystyle A={\begin{bmatrix}a_{1,1}\ldots a_{1,n}\\\vdots \qquad \vdots \\a_{n,1}\ldots a_{n,n}\end{bmatrix}},}`

the Leibniz formula for its determinant is, using sigma notation for the sum,

`{\displaystyle \det(A)={\begin{vmatrix}a_{1,1}\ldots a_{1,n}\\\vdots \qquad \vdots \\a_{n,1}\ldots a_{n,n}\end{vmatrix}}=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )a_{1,\sigma (1)}\cdots a_{n,\sigma (n)}.}`

Using pi notation for the product, this can be shortened to

`{\displaystyle \det(A)=\sum _{\sigma \in S_{n}}\left(\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{i,\sigma (i)}\right).}`

The Levi-Civita symbol `{\displaystyle \varepsilon _{i_{1},\ldots ,i_{n}}}` is defined on the n-tuples of integers in `{\displaystyle \{1,\ldots ,n\}}` as 0 if two of the integers are equal, and otherwise as the signature of the permutation defined by the n-tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes

`{\displaystyle \det(A)=\sum _{i_{1},i_{2},\ldots ,i_{n}}\varepsilon _{i_{1}\cdots i_{n}}a_{1,i_{1}}\!\cdots a_{n,i_{n}},}`

where the sum is taken over all n-tuples of integers in `{\displaystyle \{1,\ldots ,n\}.}`
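The Leibniz formula translates almost verbatim into code. This is my own sketch (0-based indices instead of the 1-based indices in the formula), with the signature computed by counting inversions:

```python
from itertools import permutations

def sign(perm):
    """Signature of a permutation of (0, ..., n-1), via inversion count."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Leibniz formula: sum over all permutations sigma of
    sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)]."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        prod = sign(sigma)
        for i in range(n):
            prod *= a[i][sigma[i]]
        total += prod
    return total

# 3x3 check against the explicit rule aei + bfg + cdh - ceg - bdi - afh.
m = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_leibniz(m))  # 25
```

The n! terms make this exponential in n; Gaussian elimination is the practical method, but the code mirrors the definition exactly.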

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Distributive Property - II**

**Examples**

**Real numbers**

In the following examples, the use of the distributive law on the set of real numbers `{\displaystyle \mathbb {R} }` is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

**First example (mental and written multiplication)**

During mental arithmetic, distributivity is often used unconsciously:

`{\displaystyle 6\cdot 16=6\cdot (10+6)=6\cdot 10+6\cdot 6=60+36=96}`

Thus, to calculate `{\displaystyle 6\cdot 16}` in one's head, one first multiplies `{\displaystyle 6\cdot 10}` and `{\displaystyle 6\cdot 6}` and adds the intermediate results. Written multiplication is also based on the distributive law.

**Second example (with variables)**

`{\displaystyle 3a^{2}b\cdot (4a-5b)=3a^{2}b\cdot 4a-3a^{2}b\cdot 5b=12a^{3}b-15a^{2}b^{2}}`

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Distributive Property - III**

**Third example (with two sums)**

`{\displaystyle {\begin{aligned}(a+b)\cdot (a-b)&=a\cdot (a-b)+b\cdot (a-b)=a^{2}-ab+ba-b^{2}=a^{2}-b^{2}\\&=(a+b)\cdot a-(a+b)\cdot b=a^{2}+ba-ab-b^{2}=a^{2}-b^{2}\\\end{aligned}}}`

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

**Fourth example**

Here the distributive law is applied the other way around compared to the previous examples. Consider

`{\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}\,.}`

Since the factor `{\displaystyle 6a^{2}b}` occurs in all summands, it can be factored out. That is, due to the distributive law one obtains

`{\displaystyle 12a^{3}b^{2}-30a^{4}bc+18a^{2}b^{3}c^{2}=6a^{2}b\left(2ab-5a^{2}c+3b^{2}c^{2}\right).}`

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh**, Administrator

- Registered: 2005-06-28
- Posts: 48,320

**Matrix - I**

In mathematics, a matrix (pl.: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or property of such an object.

For example,

`{\displaystyle {\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}}`

is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "`{\displaystyle 2\times 3}` matrix", or a matrix of dimension `{\displaystyle 2\times 3}`.

Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is defined by a determinant.

In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or through their use in geometry and numerical analysis.

Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

**Definition**

A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field F is a rectangular array of elements of F. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:

`{\displaystyle \mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.}`

gives

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

**Size**

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with

m rows and n columns is called an

`{\displaystyle {m\times n}}`

gives

matrix, or m-by-n matrix, where

m and n are called its dimensions. For example, the matrix

`{\displaystyle {\mathbf {A} }}`

gives

above is a`{\displaystyle {3\times 2}}`

gives

matrix. Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.
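Using a plain nested-list representation (an assumption made for illustration, not a fixed convention), the size of a matrix can be read off directly:

```python
def size(matrix):
    """Return (rows, columns) for a matrix stored as a list of rows."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    return rows, cols

A = [[-1.3,  0.6],
     [20.4,  5.5],
     [ 9.7, -6.2]]                # the real matrix from the example above

row_vector    = [[3, 7, 2]]       # 1 x 3: a single row
column_vector = [[4], [1], [8]]   # 3 x 1: a single column
```

Here `size(A)` returns `(3, 2)`, matching the 3 × 2 matrix above.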

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - II**

**Overview of matrix sizes**

Name : Size : Example : Description : Notation

**Row vector** :

`1 \times n`

gives

`{\displaystyle {\begin{bmatrix}3&7&2\end{bmatrix}}}`

gives

A matrix with one row, sometimes used to represent a vector :

`{a_{i}}`

gives

**Column vector** :

`n \times 1`

gives

`{\displaystyle {\begin{bmatrix}4\\1\\8\end{bmatrix}}}`

gives

A matrix with one column, sometimes used to represent a vector :

`{a_{j}}`

gives

**Square matrix** :

`n \times n`

gives

`{\displaystyle {\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}}`

gives

A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - III**

**Notation**

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an

`{\displaystyle m\times n}`

gives

matrix`{\displaystyle \mathbf {A} }`

gives

is represented as`{\displaystyle \mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}.}`

gives

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - IV**

This may be abbreviated by writing only a single generic term, possibly along with indices, as in

`{\displaystyle \mathbf {A} =\left(a_{ij}\right),\quad \left[a_{ij}\right],\quad {\text{or}}\quad \left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}}`

gives

or

`{\displaystyle \mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}}`

gives

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - V**

**Addition**

The sum A+B of two m-by-n matrices A and B is calculated entrywise:

`(A + B)_{i,j} = A_{i,j} + B_{i,j}`

gives

where

`1 \leq i \leq m \text{ and } 1 \leq j \leq n`

gives

For example,

`{\displaystyle {\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}}`

gives
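The entrywise rule translates into a few lines of Python (nested lists assumed as the representation):

```python
def mat_add(A, B):
    """Entrywise sum of two same-size matrices: (A+B)[i][j] = A[i][j] + B[i][j]."""
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 3, 1],
     [1, 0, 0]]
B = [[0, 0, 5],
     [7, 5, 0]]

# Reproduces the worked example: mat_add(A, B) == [[1, 3, 6], [8, 5, 0]]
```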

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - VI**

**Scalar Multiplication**

The product cA of a number c (also called a scalar in this context) and a matrix A is computed by multiplying every entry of A by c:

`(cA)_{i,j} = c \cdot A_{i,j}`

gives

This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product". For example:

`{\displaystyle 2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot (-3)\\2\cdot 4&2\cdot (-2)&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}}`

gives
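The same rule, sketched with nested lists (an assumed representation, as in the earlier examples):

```python
def scalar_mul(c, A):
    """(cA)[i][j] = c * A[i][j]: multiply every entry of A by the scalar c."""
    return [[c * entry for entry in row] for row in A]

A = [[1,  8, -3],
     [4, -2,  5]]

# Matches the worked example: scalar_mul(2, A) == [[2, 16, -6], [8, -4, 10]]
```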

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - VII**

**Subtraction**

The subtraction of two

`m \times n`

gives

matrices is defined by composing matrix addition with scalar multiplication by -1:`{\displaystyle \mathbf {A} -\mathbf {B} =\mathbf {A} +(-1)\cdot \mathbf {B} }`

gives

**Transposition**

The transpose of an m-by-n matrix A is the n-by-m matrix

`A^T`

gives

(also denoted`A^{tr}`

gives

or`^t{A}`

gives

) formed by turning rows into columns and vice versa:`(A^T)_{i,j} = A_{j,i}`

gives

For example:

`{\displaystyle {\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}}`

gives

Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A. The transpose is compatible with addition and scalar multiplication, as expressed by

`(cA)^T = c(A^T) \text{ and } (A + B)^T = A^T + B^T.`

gives

Finally,

`(A^{T})^{T} = A`

gives
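These transpose identities are easy to spot-check with a short sketch (nested-list representation assumed; `add` and `smul` are small helpers defined here just for the check):

```python
def transpose(A):
    """(A^T)[i][j] = A[j][i]: rows become columns and vice versa."""
    return [list(col) for col in zip(*A)]

def add(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * x for x in row] for row in A]

A = [[1,  2, 3],
     [0, -6, 7]]
B = [[5,  0, 1],
     [2,  2, 2]]

assert transpose(A) == [[1, 0], [2, -6], [3, 7]]               # the example above
assert transpose(add(A, B)) == add(transpose(A), transpose(B))  # (A+B)^T = A^T + B^T
assert transpose(smul(2, A)) == smul(2, transpose(A))           # (cA)^T = c(A^T)
assert transpose(transpose(A)) == A                             # (A^T)^T = A
```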

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - VIII**

**Matrix multiplication**

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

`{\displaystyle [\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},}`

gives

where

`1 \leq i \leq m \text { and } 1 \leq j \leq p`

gives

For example, the underlined entry 2340 in the product is calculated as

`(2 \times 1000) + (3 \times 100) + (4 \times 10) = 2340`

gives

`{\displaystyle {\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}}`

gives
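The summation formula translates directly into code; a minimal sketch (nested lists, no library) that reproduces the underlined entry:

```python
def mat_mul(A, B):
    """Product of m-by-n A and n-by-p B: [AB][i][j] = sum_r A[i][r] * B[r][j]."""
    n = len(B)        # inner dimension: columns of A must equal rows of B
    p = len(B[0])
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[2, 3, 4],
     [1, 0, 0]]
B = [[0, 1000],
     [1,  100],
     [0,   10]]

# mat_mul(A, B) == [[3, 2340], [0, 1000]]; the entry 2340 is the dot product
# of row [2, 3, 4] with column [1000, 100, 10]: 2*1000 + 3*100 + 4*10.
```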

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

**Jai Ganesh****Administrator**- Registered: 2005-06-28
- Posts: 48,320

**Matrix - IX**

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and

(A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and

`m \neq k`

gives

Even if both products are defined, they generally need not be equal; that is:

`AB \neq BA`

gives

In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:

`{\displaystyle {\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},}`

gives

whereas

`{\displaystyle {\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.}`

gives
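A quick sketch confirms that the two products differ (same nested-list multiplication as in the earlier example):

```python
def mat_mul(A, B):
    """[AB][i][j] = sum_r A[i][r] * B[r][j] for m-by-n A and n-by-p B."""
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [0, 0]]

# Both products are defined (both 2x2), yet they differ:
# mat_mul(A, B) == [[0, 1], [0, 3]]  while  mat_mul(B, A) == [[3, 4], [0, 0]]
```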

Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline