Abstract algebra


Abstract algebra, sometimes called modern algebra or higher algebra, is the part of mathematics that studies algebraic structures such as groups, rings, fields, vector spaces, etc. Many of these structures were formally defined in the 19th century, and, in fact, the study of abstract algebra was motivated by the need for more precise mathematical definitions.

In abstract algebra, the elements combined by various operations are generally not interpretable as numbers, which is why abstract algebra cannot be regarded as a simple extension of arithmetic. The study of abstract algebra has made it possible to see clearly the intrinsic logical statements on which all of mathematics and the natural sciences rest, and it is used today in practically every branch of mathematics. Furthermore, throughout history, algebraists have found that apparently different logical structures can very often be characterized in the same way by a small set of axioms.

The term abstract algebra is used to distinguish the field from elementary algebra or "high school algebra", which teaches the correct rules for manipulating formulas and algebraic expressions involving real and complex numbers. During the first half of the 20th century, abstract algebra was known as modern algebra.

History

Before the 19th century, algebra meant the study of solving polynomial equations. Abstract algebra arose during the 19th century as more complex problems and methods of solution were developed. The concrete problems and examples came from number theory, geometry, analysis, and the solution of algebraic equations. Most of the theories now recognized as parts of abstract algebra began as collections of disparate facts from various branches of mathematics, acquired a common theme that served as a nucleus around which results were grouped, and were eventually unified on the basis of a common set of concepts. This unification occurred in the first decades of the 20th century and led to formal axiomatic definitions of various algebraic structures such as groups, rings, and fields. This historical development is almost the opposite of the treatment found in popular textbooks, such as van der Waerden's Modern Algebra, which begin each chapter with the formal definition of a structure and then follow it with concrete examples.

Elementary Algebra

The study of polynomial equations, or algebraic equations, has a long history. Around 1700 BC, the Babylonians could solve quadratic equations posed as word problems. This word-problem stage is classified as rhetorical algebra and was the dominant approach until the 16th century. Muhammad ibn Mūsā al-Khwārizmī originated the word "algebra" around 830 AD, but his work was entirely rhetorical algebra. Fully symbolic algebra did not appear until François Viète's New Algebra of 1591, and even this had some spelled-out words that were given symbols in Descartes's La Géométrie of 1637. The formal study of solving symbolic equations led Leonhard Euler, in the late 18th century, to accept what were then considered "meaningless" roots, such as negative numbers and imaginary numbers. Nevertheless, most European mathematicians resisted these concepts until the middle of the 19th century.

George Peacock's 1830 Treatise on Algebra was the first attempt to place algebra on a strictly symbolic basis. He distinguished a new symbolic algebra from the old arithmetical algebra. Whereas in arithmetical algebra $a - b$ is restricted to $a \geq b$, in symbolic algebra all rules of operations hold without restriction. Using this, Peacock could show laws such as $(-a)(-b) = ab$ by setting $a = 0$ and $c = 0$ in $(a - b)(c - d) = ac + bd - ad - bc$. Peacock used what he called the principle of the permanence of equivalent forms to justify his argument, but his reasoning suffered from the problem of induction. For example, $\sqrt{a}\,\sqrt{b} = \sqrt{ab}$ holds for non-negative real numbers, but not for general complex numbers.
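To spell out that substitution (a routine check added here for clarity): setting $a = 0$ and $c = 0$ in the expansion gives

$(0 - b)(0 - d) = 0\cdot 0 + bd - 0\cdot d - b\cdot 0 = bd,$

that is, $(-b)(-d) = bd$, which is the stated law with the letters renamed.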

Early Group Theory

Several areas of mathematics led to the study of groups. Lagrange's 1770 study of the solutions of the quintic led to the Galois group of a polynomial. Gauss's 1801 study of Fermat's little theorem led to the ring of integers modulo n, the multiplicative group of integers modulo n, and the more general concepts of cyclic group and abelian group. Klein's 1872 Erlangen program studied geometry and led to symmetry groups such as the Euclidean group and the group of projective transformations. In 1874, Lie introduced the theory of Lie groups, with the aim of developing "the Galois theory of differential equations". In 1876, Poincaré and Klein introduced the group of Möbius transformations and its subgroups, such as the modular group and the Fuchsian groups, based on work on automorphic functions in analysis.

The abstract concept of a group emerged slowly over the middle of the 19th century. Galois in 1832 was the first to use the term "group", signifying a collection of permutations closed under composition. Arthur Cayley's 1854 paper On the theory of groups defined a group as a set with an associative composition operation and an identity 1, today called a monoid. In 1870 Kronecker defined an abstract binary operation that was closed, commutative, associative, and had the left cancellation property $b \neq c \to a \cdot b \neq a \cdot c$, similar to the modern laws for a finite abelian group. Weber's 1882 definition of a group was a closed binary operation that was associative and had left and right cancellation. Walther von Dyck in 1882 was the first to require inverse elements as part of the definition of a group.

Once this abstract notion of a group had emerged, results were reframed in the abstract setting. For example, Sylow's theorem was reproven by Frobenius in 1887 directly from the laws of a finite group, although Frobenius remarked that the theorem followed from Cauchy's theorem on permutation groups and the fact that every finite group is a subgroup of a permutation group. Otto Hölder was particularly prolific in this area, defining quotient groups in 1889 and group automorphisms in 1893, as well as simple groups. He also completed the Jordan–Hölder theorem. Dedekind and Miller independently characterized Hamiltonian groups and introduced the notion of the commutator of two elements. Burnside, Frobenius, and Molien created the representation theory of finite groups at the end of the 19th century. J. A. de Séguier's 1905 monograph Elements of the Theory of Abstract Groups presented many of these results in an abstract and general form, relegating "concrete" groups to an appendix, although it was limited to finite groups. The first monograph on both finite and infinite abstract groups was O. K. Schmidt's 1916 Abstract Theory of Groups.

Early Ring Theory

Noncommutative ring theory began with extensions of the complex numbers to hypercomplex numbers, specifically William Rowan Hamilton's quaternions of 1843. Many other number systems followed shortly afterwards. In 1844, Hamilton introduced biquaternions, Cayley introduced octonions, and Grassmann introduced exterior algebras. James Cockle introduced bicomplex numbers in 1848 and coquaternions in 1849. William Kingdon Clifford introduced split-biquaternions in 1873. Cayley also introduced group algebras over the real and complex numbers in 1854, and square matrices in two papers of 1855 and 1858.

Once there were enough examples, they began to be classified. In an 1870 monograph, Benjamin Peirce classified the more than 150 hypercomplex number systems of dimension less than 6 and gave an explicit definition of an associative algebra. He defined nilpotent and idempotent elements and proved that any algebra contains one or the other. He also defined the Peirce decomposition. Frobenius in 1878 and Charles Sanders Peirce in 1881 independently proved that the only finite-dimensional division algebras over $\mathbb{R}$ are the real numbers, the complex numbers, and the quaternions. In the 1880s, Killing and Cartan showed that semisimple Lie algebras can be decomposed into simple ones, and they classified all simple Lie algebras. Inspired by this, in the 1890s Cartan, Frobenius, and Molien proved (independently) that a finite-dimensional associative algebra over $\mathbb{R}$ or $\mathbb{C}$ decomposes uniquely into the direct sum of a nilpotent algebra and a semisimple algebra that is the product of a certain number of simple algebras, namely square matrices over division algebras. Cartan was the first to define concepts such as direct sum and simple algebra, which proved very influential. In 1907 Wedderburn extended Cartan's results to an arbitrary field, in what are now known as the Wedderburn principal theorem and the Artin–Wedderburn theorem.

In the commutative case, several areas together led to the theory of commutative rings. In two papers of 1828 and 1832, Gauss introduced the Gaussian integers, showed that they form a unique factorization domain (UFD), and proved the law of biquadratic reciprocity. Jacobi and Eisenstein, at about the same time, proved a law of cubic reciprocity for the Eisenstein integers. The study of Fermat's Last Theorem led to the algebraic integers. In 1847, Gabriel Lamé thought he had proven Fermat's Last Theorem, but his proof was flawed, since he assumed that all cyclotomic fields are UFDs; however, as Kummer pointed out, $\mathbb{Q}(\zeta_{23})$ is not a UFD. In 1846 and 1847, Kummer introduced ideal numbers and proved unique factorization into ideal primes for cyclotomic fields. Dedekind extended this in 1871 to show that every nonzero ideal in the ring of integers of an algebraic number field is a unique product of prime ideals, a precursor of the theory of Dedekind domains. Overall, Dedekind's work created the subject of algebraic number theory.

Definition, structures and examples

Historical definition

Birkhoff and Mac Lane tell us:

"The abstract algebra can be defined as the study of the properties of algebraic systems that are preserved in isomorphisms."
See p. 37 of their Modern Algebra (1960), Barcelona

Historically, some topics arose in disciplines other than algebra, as in the case of linear spaces and Boolean algebra. They were subsequently axiomatized and then studied in their own right within this framework. As a result, the subject has numerous fruitful connections with all other branches of mathematics and beyond.

List of algebraic structures (systems)

  • With a single law of composition:
    • Magmas.
    • Semigroups.
    • Quasigroups.
    • Monoids.
    • Groups.
  • With two or more composition laws:
    • Rings and fields.
    • Modules and Vector Spaces.
    • Associative algebras and Lie algebras.
    • Lattices and Boolean algebras.
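
As an illustration of how the axioms separate the structures in the list above, here is a minimal sketch (not from the original text; the function name classify and the example table are illustrative) that classifies a finite binary operation, given as a table, as a magma, semigroup, monoid, or group by checking the defining laws directly:

  from itertools import product

  def classify(elements, op):
      """op is a dict mapping pairs (x, y) to the value of x * y."""
      # Magma: the operation is closed on the set.
      if not all(op[(x, y)] in elements for x, y in product(elements, repeat=2)):
          return "not a magma"
      # Semigroup: associativity, (x*y)*z == x*(y*z).
      assoc = all(op[(op[(x, y)], z)] == op[(x, op[(y, z)])]
                  for x, y, z in product(elements, repeat=3))
      if not assoc:
          return "magma"
      # Monoid: an identity element e with e*x == x*e == x.
      identity = next((e for e in elements
                       if all(op[(e, x)] == x == op[(x, e)] for x in elements)), None)
      if identity is None:
          return "semigroup"
      # Group: every element has a two-sided inverse.
      has_inverses = all(any(op[(x, y)] == identity == op[(y, x)] for y in elements)
                         for x in elements)
      return "group" if has_inverses else "monoid"

  # Example: the integers modulo 3 under addition form a group.
  Z3 = {0, 1, 2}
  add_mod3 = {(x, y): (x + y) % 3 for x, y in product(Z3, repeat=2)}
  print(classify(Z3, add_mod3))  # -> "group"

With the same table format, {0, 1, 2} under max is classified as a monoid: the operation is associative with identity 0, but 1 and 2 have no inverses.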

Universal algebra is a field of mathematics that provides the formalism for comparing different algebraic structures. Beyond the previous structures, other types of algebraic structures can be defined:

  • Homological algebra.
  • Formal languages (conceived as well-formed strings of symbols).
  • Algebras over a field.

An example

The systematic study of abstract algebra has allowed mathematicians to bring seemingly different concepts under a common logical description. For example, consider two quite different operations: the composition of functions, $f(g(x))$, and the product of matrices, $AB$. These two operations are, in fact, of the same nature. We can see this informally as follows: multiplying a column vector $x$ by the product of two square matrices, $(AB)$, defines the same function as first applying $B$ and then $A$; writing $y = Bx$, we have $Ay = A(Bx) = (AB)x$. Functions under composition and matrices under multiplication both form structures called monoids. A monoid is a set with an operation that is associative for all its elements, $(ab)c = a(bc)$, and that contains an element $e$ such that $ae = ea = a$ for every element $a$. Since abstract algebra regards two isomorphic sets as identical, what matters are the operations on those sets and the laws they satisfy.
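
A small numerical check of this equivalence (an illustrative sketch, assuming NumPy; the particular matrices and vector are arbitrary examples, not from the text):

  import numpy as np

  A = np.array([[1, 2],
                [3, 4]])
  B = np.array([[0, 1],
                [1, 1]])
  x = np.array([5, 7])

  composed = A @ (B @ x)    # A(Bx): apply the map B first, then the map A
  single   = (A @ B) @ x    # (AB)x: multiply once by the single matrix AB

  print(composed)                            # [31 69]
  print(single)                              # [31 69]
  print(np.array_equal(composed, single))    # True -- the two operations agree

The identity function and the identity matrix play the role of the element $e$ in the two monoids.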
