Linear algebra
Linear algebra is a branch of mathematics that studies concepts such as vectors, matrices, dual spaces, systems of linear equations and, in its more formal approach, vector spaces and their linear transformations.
In other words, linear algebra is the branch of mathematics that deals with linear equations such as:
- $a_1 x_1 + \cdots + a_n x_n = b,$
and linear maps such as:
- $(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n,$
and their representations in vector spaces and through matrices.
Linear algebra is fundamental to almost all areas of mathematics. For example, linear algebra is essential in modern presentations of geometry, even for defining basic objects such as lines, planes, and rotations. Furthermore, functional analysis, a branch of mathematical analysis, can basically be considered as the application of linear algebra to spaces of functions.
Linear algebra is also used in most sciences and engineering fields, because it allows many natural phenomena to be modeled, and to compute efficiently with those models. For nonlinear systems, which cannot be modeled directly with linear algebra, it is often used to handle first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point. It also appears in functional analysis, differential equations, operations research, computer graphics, engineering, and more.
The history of modern linear algebra dates back to 1843, when William Rowan Hamilton (from whom the term vector comes) created quaternions inspired by complex numbers, and to 1844, when Hermann Grassmann published his book Die lineare Ausdehnungslehre (The Linear Theory of Extension).
History
The procedure for solving simultaneous linear equations now called Gaussian elimination appears in the eighth chapter, Rectangular Arrays, of the ancient Chinese mathematical text The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations.
Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and calculating their intersections is equivalent to solving systems of linear equations.
The first systematic methods for solving linear systems used determinants, first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them to give explicit solutions of linear systems, now called Cramer's rule. Gauss later further described the elimination method, which was initially presented as an advance in geodesy.
In 1844 Hermann Grassmann published his "Theory of Extension", which included new foundational topics of what is now called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb.
Linear algebra grew with ideas noted in the complex plane. For example, two numbers w and z in C have a difference w − z, and the line segments $\overline{wz}$ and $\overline{0(w-z)}$ have the same length and direction; the segments are equipollent. The four-dimensional system H of quaternions was introduced in 1843. The term vector was introduced as v = x i + y j + z k, representing a point in space. The difference of quaternions p − q also produces a segment equipollent to $\overline{pq}$. Other hypercomplex number systems also used the idea of a linear space with a basis.
Arthur Cayley introduced matrix multiplication and the matrix inverse in 1856, making the general linear group possible. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be much to say about this theory of matrices which should, it seems to me, precede the theory of determinants."
Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce later expanded the work.
The telegraph required an explanatory system, and the 1873 publication of A Treatise on Electricity and Magnetism instituted a theory of force fields and required differential geometry for its expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. The electromagnetic symmetries of spacetime are expressed by Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations.
The first modern and most precise definition of a vector space was introduced by Peano in 1888; by 1900 a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the 20th century, when many ideas and methods from earlier centuries became widespread as abstract algebra. The development of computers increased the search for efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations.
General context
In a more formal way, linear algebra studies sets called vector spaces, which consist of a set of vectors and a set of scalars that have a field structure, with a vector addition operation and a product operation between scalars and vectors that satisfy certain properties (for example, that addition is commutative).
It also studies linear transformations, which are functions between vector spaces that satisfy the linearity conditions:
- $T(u+v) = T(u) + T(v), \qquad T(r \cdot u) = r \cdot T(u).$
Unlike the example developed in the previous section, vectors are not necessarily n-tuples of scalars; they can be elements of any set (in fact, a vector space over a fixed field can be constructed from any set).
Finally, linear algebra also studies the properties that appear when additional structure is imposed on vector spaces, one of the most frequent being the existence of an inner product (a kind of product between two vectors) that allows us to introduce notions such as length of vectors and angle between a pair of them.
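For instance (an illustrative NumPy sketch, not part of the original text; the vectors are chosen arbitrarily), the standard inner product on R³ yields lengths of vectors and the angle between them:

```python
import numpy as np

u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 1.0, 0.0])

# Inner (dot) product, lengths, and the angle between u and v
dot = np.dot(u, v)
length_u = np.linalg.norm(u)                    # sqrt(<u, u>) = 5.0
length_v = np.linalg.norm(v)
angle = np.arccos(dot / (length_u * length_v))  # in radians
print(dot, length_u, length_v, angle)
```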
Vector spaces
Background
Until the 19th century, linear algebra was presented through systems of linear equations and matrices. In modern mathematics, the presentation via vector spaces is generally preferred, as it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, though more abstract.
Some basic operations
A vector space over a field F, often the field of real numbers, is a set V endowed with two binary operations that satisfy the following axioms. The elements of V are called vectors, and the elements of F are called scalars.
The first operation, vector addition, takes any two vectors v and w and produces a third vector v + w.
The second operation, scalar multiplication, takes any scalar a and any vector v and produces a new vector av. The axioms that vector addition and scalar multiplication must satisfy are the following; in the list below, u, v and w are arbitrary vectors in V, and a and b are arbitrary scalars in F.
Axiom | Meaning |
Associativity of addition | u + (v + w) = (u + v) + w |
Commutativity of addition | u + v = v + u |
Identity element of addition | There is an element 0 in V, called the zero vector or simply zero, such that v + 0 = v for every v in V. |
Inverse element of addition | For every v in V, there is an element −v in V, called the additive inverse of v, such that v + (−v) = 0. |
Distributivity of scalar multiplication with respect to vector addition | a(u + v) = au + av |
Distributivity of scalar multiplication with respect to field addition | (a + b)v = av + bv |
Compatibility of scalar multiplication with field multiplication | a(bv) = (ab)v |
Identity element of scalar multiplication | 1v = v, where 1 denotes the multiplicative identity of F. |
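As a concrete illustration (a minimal NumPy sketch, not part of the original text; the vectors and scalars are arbitrary example values), the vectors of R³ with componentwise addition and scalar multiplication satisfy these axioms, a few of which can be checked numerically:

```python
import numpy as np

# Vectors in R^3 and scalars in R form a vector space under
# componentwise addition and scalar multiplication.
u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
w = np.array([0.0, 1.0, -1.0])
a, b = 2.0, -3.0

# Associativity and commutativity of addition
assert np.allclose(u + (v + w), (u + v) + w)
assert np.allclose(u + v, v + u)

# Zero vector and additive inverse
zero = np.zeros(3)
assert np.allclose(v + zero, v)
assert np.allclose(v + (-v), zero)

# Distributivity, compatibility, and the identity scalar
assert np.allclose(a * (u + v), a * u + a * v)
assert np.allclose((a + b) * v, a * v + b * v)
assert np.allclose(a * (b * v), (a * b) * v)
assert np.allclose(1.0 * v, v)
```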
Linear maps
Linear maps are mappings between vector spaces that preserve the vector space structure. Given two vector spaces V and W over a field F, a linear map, also called in some contexts a linear transformation or linear mapping, is a map
- $T: V \to W$
which is compatible with addition and scalar multiplication, that is,
- $T(\mathbf{u}+\mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v}), \quad T(a\mathbf{v}) = aT(\mathbf{v})$
for any vectors u, v in V and any scalar a in F.
This implies that for any vector u, v in V and scalars a, b in F, we have
- $T(a\mathbf{u}+b\mathbf{v}) = T(a\mathbf{u}) + T(b\mathbf{v}) = aT(\mathbf{u}) + bT(\mathbf{v})$
When V = W, a linear map $T: V \to V$ is also known as a linear operator on V.
A bijective linear map between two vector spaces (that is, every vector in the second space is associated with exactly one in the first) is an isomorphism. Since an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially equal" from the point of view of linear algebra, in the sense that they cannot be distinguished using vector space properties. An essential question in linear algebra is to decide whether a linear map is an isomorphism or not, and, if it is not an isomorphism, to find its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
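As an illustration (a hedged NumPy sketch with an arbitrarily chosen matrix, not from the original text), the rank and kernel of the linear map represented by a matrix can be computed numerically, here via the singular value decomposition rather than explicit Gaussian elimination:

```python
import numpy as np

# Matrix of a linear map T: R^3 -> R^3 (an arbitrary example)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)        # dimension of the image (here 2)
is_isomorphism = rank == A.shape[1]    # bijective iff full rank (False here)

# A basis of the kernel: right singular vectors for the zero singular values
_, _, vt = np.linalg.svd(A)
kernel_basis = vt[rank:].T             # each column is mapped to the zero vector

print(rank, is_isomorphism)
print(A @ kernel_basis)                # approximately the zero matrix
```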
Subspaces, span and basis
The study of those subsets of vector spaces that are themselves vector spaces under the induced operations is fundamental, as it is for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for all u, v in W and all a in F. These conditions are sufficient to imply that W is a vector space.
For example, given a linear map $T: V \to W$, the image T(V) of V and the preimage T⁻¹(0) of 0, called the kernel, are linear subspaces of W and V, respectively.
Another important way to form a subspace is to consider linear combinations of a set S of vectors: the set of all sums
- $a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_k \mathbf{v}_k,$
where v1, v2, …, vk are in S and a1, a2, …, ak are in F, forms a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S.
A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take every coefficient $a_i$ equal to zero.
A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if w were removed from S. One may continue removing elements of S until reaching a linearly independent spanning set. A linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal spanning sets and maximal independent sets. More precisely, if S is a linearly independent set and T is a spanning set such that $S \subseteq T$, then there is a basis B such that $S \subseteq B \subseteq T$.
Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension.
If some base of V (and thus every base) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite dimensional, equality of dimensions implies that U = V.
If U1 and U2 are subspaces of V then
- $\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2),$
where $U_1 + U_2$ denotes the span of $U_1 \cup U_2$.
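For a concrete check (an illustrative NumPy sketch with subspaces chosen only for this example, not from the original text), take U₁ = span{e₁, e₂} and U₂ = span{e₂, e₃} in R³; the dimension of the intersection can be read off the kernel of the block matrix [B₁ | −B₂]:

```python
import numpy as np

# Bases of two subspaces of R^3, stored as columns (example choice)
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # U1 = span{e1, e2}
B2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # U2 = span{e2, e3}

dim_U1 = np.linalg.matrix_rank(B1)
dim_U2 = np.linalg.matrix_rank(B2)

# dim(U1 + U2) is the rank of the two bases placed side by side
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))

# dim(U1 ∩ U2) equals the kernel dimension of [B1 | -B2]:
# a kernel vector (x, y) satisfies B1 @ x = B2 @ y, a common element
M = np.hstack([B1, -B2])
dim_inter = M.shape[1] - np.linalg.matrix_rank(M)

assert dim_sum == dim_U1 + dim_U2 - dim_inter   # 3 == 2 + 2 - 1
```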
Matrices
A matrix is a rectangular array of numbers, symbols or expressions, whose dimensions are described by the number of rows (usually m) and the number of columns (n). Matrices are particularly studied in linear algebra and are widely used in science and engineering.
Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is therefore an essential part of linear algebra.
Let V be a finite-dimensional vector space over a field F, and let (v1, v2, …, vm) be a basis of V (thus m is the dimension of V). By the definition of a basis, the map
- $(a_1, \ldots, a_m) \mapsto a_1 \mathbf{v}_1 + \cdots + a_m \mathbf{v}_m, \qquad F^m \to V$
is a bijection from $F^m$, the set of sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if $F^m$ is equipped with its standard vector space structure, in which vector addition and scalar multiplication are done component by component.
This isomorphism allows a vector to be represented by its inverse image under the isomorphism, that is, by the coordinate vector $(a_1, \ldots, a_m)$ or by the column matrix
- $\begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix}.$
If W is another finite-dimensional vector space (possibly the same one), with a basis $(\mathbf{w}_1, \ldots, \mathbf{w}_n)$, a linear map f from W to V is completely determined by its values on the basis elements, that is, $(f(\mathbf{w}_1), \ldots, f(\mathbf{w}_n))$. Thus, f is well represented by the list of the corresponding column matrices. That is, if
- $f(\mathbf{w}_j) = a_{1,j} \mathbf{v}_1 + \cdots + a_{m,j} \mathbf{v}_m,$
for j = 1, …, n, then f is represented by the matrix
- $\begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{bmatrix},$
with m rows and n columns.
Matrix multiplication is defined such that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing exactly the same concepts.
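As a quick numerical check (an illustrative NumPy sketch with arbitrary example matrices, not from the original text), multiplying matrices corresponds to composing the linear maps they represent, and applying a matrix to a column vector applies the map to the vector:

```python
import numpy as np

# Matrices of two linear maps g: R^2 -> R^3 and f: R^3 -> R^2 (example values)
F = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])   # matrix of f
G = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, 1.0]])         # matrix of g

x = np.array([1.0, -2.0])          # a vector in the source space of g

# The matrix of the composition f ∘ g is the product F @ G,
# and (F @ G) @ x equals applying g and then f to x.
assert np.allclose((F @ G) @ x, F @ (G @ x))
```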
Two matrices encoding the same linear transformation in different bases are called similar matrices. Two matrices can be shown to be similar if and only if one can be transformed into the other by elementary row and column operations. For a matrix representing a linear map from W to V, row operations correspond to changes of basis in V and column operations correspond to changes of basis in W. Every matrix is similar to an identity matrix bordered by zero rows and columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that part of the basis of W is mapped bijectively onto part of the basis of V, and the remaining elements of the basis of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations and proving these results.
Linear systems
A finite set of linear equations in a finite set of variables, for example $x_1, x_2, \ldots, x_n$ or $x, y, \ldots, z$, is called a system of linear equations or a linear system.
Systems of linear equations are a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed to solve such systems. In the modern presentation of linear algebra using vector spaces and matrices, many problems can be interpreted in terms of linear systems.
For example,
$\begin{aligned} 2x + y - z &= 8 \\ -3x - y + 2z &= -11 \\ -2x + y + 2z &= -3 \end{aligned}$ (S)
is a linear system.
This system can be associated with its matrix
- $M = \begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix}$
and its right-hand vector
- $\mathbf{v} = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}.$
Let T be the linear transformation associated with the matrix M. A solution of the system (S) is a vector
- $\mathbf{X} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$
such that
- $T(\mathbf{X}) = \mathbf{v},$
that is, an element of the preimage of v under T.
Let (S′) be the associated homogeneous system, in which the right-hand sides of the equations are set to zero:
$\begin{aligned} 2x + y - z &= 0 \\ -3x - y + 2z &= 0 \\ -2x + y + 2z &= 0 \end{aligned}$ (S′)
The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, of M.
Gaussian elimination consists of performing elementary row operations on the augmented matrix.
- $[M \mid \mathbf{v}] = \left[\begin{array}{rrr|r} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right]$
to put it in reduced row echelon form. These row operations do not change the solution set of the system of equations. In the example, the reduced echelon form is
- $[M \mid \mathbf{v}] = \left[\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right],$
showing that the system (S) has the unique solution
- $x = 2, \quad y = 3, \quad z = -1.$
From this matrix interpretation of linear systems it follows that the same methods can be applied to solving linear systems and to many operations on matrices and linear transformations, including computing ranks, kernels, and matrix inverses.
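The worked example above can be reproduced numerically (a brief NumPy sketch, not part of the original text):

```python
import numpy as np

# The system (S) in matrix form M X = v
M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

X = np.linalg.solve(M, v)  # uses an LU (Gaussian-elimination style) factorization
print(X)                   # [ 2.  3. -1.]
```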
Endomorphisms and square matrices
A linear endomorphism is a linear map that maps a vector space V to itself. If V has a base of n elements, such an endomorphism is represented by a square matrix of size n.
Compared with general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, used in many parts of mathematics, including geometric transformations, changes of coordinates, quadratic forms, and many others.
Determinants
The determinant of a square matrix A is defined as
- $\sum_{\sigma \in S_n} (-1)^{\sigma} a_{1\sigma(1)} \cdots a_{n\sigma(n)},$
where
- $S_n$ is the group of all permutations of n elements,
- $\sigma$ is a permutation, and
- $(-1)^{\sigma}$ is the parity of the permutation.
A matrix is invertible if and only if its determinant is invertible (i.e., nonzero if the scalars belong to a field).
Cramer's rule is a closed form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but except for n = 2 or 3, it is rarely used to calculate a solution, since Gaussian elimination is a faster algorithm.
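As an illustration (a NumPy sketch reusing the system (S) above, not part of the original text), Cramer's rule expresses each unknown as a ratio of determinants:

```python
import numpy as np

M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

det_M = np.linalg.det(M)
solution = []
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = v                     # replace the i-th column by the right-hand side
    solution.append(np.linalg.det(Mi) / det_M)

print(solution)                      # approximately [2.0, 3.0, -1.0]
```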
The determinant of an endomorphism is the determinant of the matrix that represents the endomorphism in terms of some ordered basis. This definition makes sense, since this determinant is independent of the choice of basis.
Eigenvalues and eigenvectors
If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f.
If the dimension of V is finite and a basis has been chosen, f and v can be represented, respectively, by a square matrix M and a column matrix z; the equation defining the eigenvectors and eigenvalues becomes
- $Mz = az.$
Using the identity matrix I, whose entries are all zero, except those on the main diagonal, which are equal to one, this can be rewritten
- $(M - aI)z = 0.$
Since z is nonzero, this means that M − aI is a singular matrix, and therefore its determinant $\det(M - aI)$ equals zero. The eigenvalues are therefore the roots of the polynomial
- $\det(xI - M).$
If V is of dimension n, it is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or endomorphism), and there are, at most, n eigenvalues.
If there exists a basis consisting only of eigenvectors, the matrix of f in this basis has a very simple structure: it is a diagonal matrix whose entries on the main diagonal are eigenvalues, with all other entries zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable.
A symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being
- $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$
(it cannot be diagonalizable, since its square is the zero matrix and the square of a nonzero diagonal matrix is never zero).
When an endomorphism is not diagonalizable, there are bases in which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not need to extend the field of scalars and makes the characteristic polynomial immediately readable over the matrix. Jordan normal form requires extending the field of scalars to contain all eigenvalues, and differs from diagonal form only by a few entries that are just above the main diagonal and are equal to 1.
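As a numerical illustration (a NumPy sketch with an arbitrarily chosen symmetric matrix, not part of the original text), a symmetric matrix is diagonalized by its eigenvectors, while the nilpotent matrix shown above has the single eigenvalue 0 with only a one-dimensional eigenspace:

```python
import numpy as np

# A symmetric matrix: a full basis of eigenvectors exists
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.diag(eigvals)
assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A is diagonalizable

# The non-diagonalizable matrix from the text
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
vals, vecs = np.linalg.eig(N)
print(vals)                            # [0., 0.]
print(np.linalg.matrix_rank(vecs))     # typically 1: eigenvectors do not span R^2
```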
Duality
A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted $V^*$ (Katznelson and Katznelson 2008, p. 37, §2.1.3) or $V'$ (Halmos 1974, p. 20, §13) (Axler 2015, p. 101, §3.94).
If $\mathbf{v}_1, \ldots, \mathbf{v}_n$ is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, …, n, a linear map $v_i^*$ such that $v_i^*(\mathbf{v}_i) = 1$ and $v_i^*(\mathbf{v}_j) = 0$ if j ≠ i. These linear maps form a basis of $V^*$, called the dual basis of $\mathbf{v}_1, \ldots, \mathbf{v}_n$. If V is not finite-dimensional, the $v_i^*$ can be defined similarly; they are linearly independent, but do not form a basis.
For $\mathbf{v}$ in V, the map
- $f \mapsto f(\mathbf{v})$
is a linear form on $V^*$. This defines the canonical linear map from V into $V^{**}$, the dual of $V^*$, called the bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows V to be identified with its bidual.
There is, then, a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra-ket notation
- $\langle f, \mathbf{x} \rangle$ to denote $f(\mathbf{x})$.
Dual map
Let
- $f: V \to W$
be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map
- $f^*: W^* \to V^*$
between the dual spaces, which is called the dual or the transpose of f.
If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of $f^*$ over the dual bases is the transpose $M^{\mathsf{T}}$ of M, obtained by interchanging rows and columns.
If the elements of vector spaces and their duals are represented by column vectors, this duality can be expressed in bra-ket notation by
- $\langle h^{\mathsf{T}}, M\mathbf{v} \rangle = \langle h^{\mathsf{T}} M, \mathbf{v} \rangle.$
To emphasize this symmetry, the two members of this equality are sometimes written
- $\langle h^{\mathsf{T}} \mid M \mid \mathbf{v} \rangle.$
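Numerically (a NumPy sketch with arbitrary example values, not part of the original text), this identity says that pairing h with Mv gives the same number as pairing Mᵀh, the output of the transposed map, with v:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [0.0, 1.0]])       # matrix of a linear map f: R^2 -> R^3 (example)
v = np.array([1.0, -1.0])         # a vector in the source space
h = np.array([2.0, 0.0, 5.0])     # coefficients of a linear form on the target space

# <h, M v> = <M^T h, v>: applying f and then h equals applying the dual map f* to h first
assert np.isclose(h @ (M @ v), (M.T @ h) @ v)
```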
Commonly used vector spaces
Within finite-dimensional vector spaces, the following two types of vector spaces are in wide use:
Vectors in Rn
This vector space is formed by the set of vectors of n dimensions (that is, with n components). An example is found in the vectors of R², which are commonly used to represent Cartesian coordinates: (2,3), (3,4), ...
Vector space of polynomials in the same variable
An example of a vector space is given by all polynomials whose degree is less than or equal to 2 with real coefficients on a variable x.
Examples of such polynomials are:
$4x^2 - 5x + 1, \quad \frac{2x^2}{7} - 3, \quad 8x + 4, \quad 5$
The sum of two polynomials whose degree does not exceed 2 is another polynomial whose degree does not exceed 2:
$(3x^2 - 5x + 1) + (4x - 8) = 3x^2 - x - 7$
The field of scalars is naturally that of real numbers, and it is possible to multiply a number by a polynomial:
$5 \cdot (2x + 3) = 10x + 15$
where the result is again a polynomial (ie a vector).
An example of a linear transformation is the derivative operator D, which assigns to each polynomial the result of its derivative:
$D(3x^2 - 5x + 7) = 6x - 5.$
The derivative operator satisfies the linearity conditions, and although it is possible to prove it rigorously, we simply illustrate it with an example of the first linearity condition:
$D((4x^2 + 5x - 3) + (x^2 - x - 1)) = D(5x^2 + 4x - 4) = 10x + 4$
and on the other hand:
$D(4x^2 + 5x - 3) + D(x^2 - x - 1) = (8x + 5) + (2x - 1) = 10x + 4.$
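In coordinates (an illustrative NumPy sketch, not part of the original text), a polynomial a + bx + cx² of degree at most 2 corresponds to the coordinate vector (a, b, c) with respect to the basis {1, x, x²}, and the derivative operator D is represented by a 3×3 matrix:

```python
import numpy as np

# Coordinates (a, b, c) represent the polynomial a + b*x + c*x^2
# in the basis {1, x, x^2}.
D = np.array([[0.0, 1.0, 0.0],    # d/dx of x   is 1
              [0.0, 0.0, 2.0],    # d/dx of x^2 is 2x
              [0.0, 0.0, 0.0]])

p = np.array([7.0, -5.0, 3.0])    # 3x^2 - 5x + 7
q = np.array([-1.0, -1.0, 1.0])   # x^2 - x - 1

print(D @ p)                      # [-5.  6.  0.], i.e. 6x - 5
# Linearity: D(p + q) = D(p) + D(q)
assert np.allclose(D @ (p + q), D @ p + D @ q)
```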
Any vector space has a coordinate representation similar to $\mathbb{R}^n$, obtained by choosing a basis (that is, a special set of vectors), and one of the recurring themes in linear algebra is the choice of appropriate bases so that the coordinate vectors and the matrices representing linear transformations have simple forms or specific properties.
Generalization and related topics
Since linear algebra is a very successful theory, its methods have spread to other areas of mathematics: to module theory, which replaces the field of scalars with a ring; to multilinear algebra, which deals with linear maps of several variables, a problem that leads to the concept of tensor; it has even reached the field of programming, since the indexing of web pages nowadays relies on linear algebra methods; and to spectral theory, which handles operators as infinite-dimensional analogues of matrices, applying mathematical analysis in a theory that is not purely algebraic. In all these cases the technical difficulties are much greater.
Relation to geometry
There is a strong relationship between linear algebra and geometry, which began with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at the time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of usual three-dimensional space). The basic objects of geometry, which are lines and planes, are represented by linear equations. Therefore, calculating the intersections of lines and planes is equivalent to solving systems of linear equations. This was one of the main motivations for developing linear algebra.
Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections transform lines into lines. It follows that they can be defined, specified and studied in terms of linear maps. This is also the case for homographies and Möbius transformations, when they are considered as transformations of a projective space.
Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines, and planes (synthetic geometry). Around this date, it became apparent that geometric spaces can also be defined by constructions involving vector spaces (see, for example, Projective space and Affine space). The two approaches have been shown to be essentially equivalent. In classical geometry, the vector spaces involved are vector spaces over the real numbers, but the constructions can be extended to vector spaces over any field, allowing geometry to be considered over arbitrary fields, including finite fields.
Currently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at the elementary level, as a subfield of linear algebra.
Use and applications
Linear algebra is used in almost all areas of mathematics, so it is relevant in almost all scientific fields that use mathematics. These applications can be divided into several broad categories.
Geometry of ambient space
The modeling of ambient space is based on geometry. The sciences concerned with this space make extensive use of geometry. This is the case of mechanics and robotics, for describing the dynamics of rigid bodies; geodesy, for describing the shape of the Earth; perspective, computer vision and computer graphics, for describing the relationship between a scene and its representation on the plane; and many other scientific domains.
In all of these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one has to calculate with coordinates. This requires the intensive use of linear algebra.
Functional analysis
Functional Analysis studies spaces of functions. These are vector spaces with additional structure, like Hilbert spaces. Linear algebra is therefore a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave function).
Study of complex systems
Most physical phenomena are modeled using partial differential equations. To solve them, the space in which the solutions are sought is usually broken down into small cells that interact with each other. For linear systems this interaction implies a linear function. For nonlinear systems, this interaction is often approximated by linear functions. In both cases, very large matrices are often involved. A typical example is weather forecasting, where the entire atmosphere of the Earth is divided into cells that are, say, 100 km wide and 100 m high.
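As a toy illustration (a NumPy sketch under the assumption of a 1D Poisson-type problem with fixed endpoints, not an example from the original text), discretizing a domain into cells that interact only with their neighbours produces a sparse linear system:

```python
import numpy as np

# 1D model problem -u''(x) = 1 on (0, 1), with u(0) = u(1) = 0,
# discretized at n interior points with a finite-difference stencil.
n = 5
h = 1.0 / (n + 1)

# Each cell interacts linearly with its two neighbours: tridiagonal matrix
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)                      # constant source term

u = np.linalg.solve(A, b)           # approximate solution at the grid points
print(u)                            # close to the exact solution x(1 - x)/2
```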
Scientific computing
Almost all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best-known implementations. To improve efficiency, some of them configure the algorithms automatically at runtime to adapt them to the specifics of the computer (cache size, number of available cores, ...).
Some processors, typically graphics processing units (GPUs), are designed with a matrix structure to optimize linear algebra operations.
Main sources
- Anton, Howard (1987), Elementary Linear Algebra (5th edition), New York: Wiley, ISBN 0-471-84819-0.
- Axler, Sheldon (2015), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd edition), Springer Publishing, ISBN 978-319-11079-0.
- Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X, (requires registration).
- Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th edition), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3, (requires registration).
- Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations, Johns Hopkins Studies in Mathematical Sciences (3rd Edition), Baltimore: Johns Hopkins University Press, ISBN 978-0-8018-5414-9.
- Halmos, Paul Richard (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics (1958 2nd edition), Springer Publishing, ISBN 0-387-90093-4.
- Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9.
- Katznelson, Yitzhak; Katznelson, Yonatan R. (2008), A (Terse) Introduction to Linear Algebra, American Mathematical Society, ISBN 978-0-8218-4419-9.
- Roman, Steven (22 March 2005), Advanced Linear Algebra, Graduate Texts in Mathematics (2nd edition), Springer, ISBN 978-0-387-24766-3.