Statistics

A normal distribution

Statistics (from the German term Statistik, itself derived from the Italian statista, "statesman") is the science that studies the variability, collection, organization, analysis, interpretation, and presentation of data, as well as the random processes that generate them according to the laws of probability. Statistics is a formal science with its own body of knowledge, dynamic and in continuous development, obtained through the formal scientific method. The factual sciences often need to use statistical techniques during their research process, in order to obtain new knowledge based on experimentation and observation. In these cases, statistics makes it possible to analyze data from a representative sample, seeking to explain the correlations and dependencies of a physical or natural phenomenon of random or conditional occurrence.

Statistics is useful for a wide variety of factual sciences, from physics to the social sciences, from the health sciences to quality control. In addition, it is used in business areas or government institutions with the aim of describing the set of data obtained for decision making, or to make generalizations about the observed characteristics.

Currently, statistics applied to the factual sciences makes it possible to study a given population through the collection of information, the analysis of data, and the interpretation of results. Likewise, it is an essential science for the quantitative study of mass or collective phenomena.

Statistics is divided into two large areas:

  • Descriptive statistics: Dedicated to the description, visualization, and summary of the data originating from the phenomena under study. Data can be summarized numerically or graphically. Its objective is to organize and describe the characteristics of a dataset in order to facilitate its use, usually with the support of graphs, tables, or numerical measures.
    • Basic examples of statistical parameters are the mean and the standard deviation.
    • Graphical examples include the histogram, the population pyramid, and the pie chart, among others.
  • Inferential statistics: Dedicated to the generation of models, inferences, and predictions associated with the phenomena in question, taking into account the randomness of the observations. It is used to model patterns in data and draw inferences about the population under study. These inferences may take the form of answers to yes/no questions (hypothesis testing), estimates of numerical characteristics (estimation), forecasts of future observations, descriptions of association (correlation), or modeling of relationships between variables (regression analysis). Other modeling techniques include analysis of variance, time series analysis, and data mining. Its objective is to obtain useful conclusions in order to make deductions about the whole set of observations, based on numerical information.
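As a minimal sketch of the two branches, the following Python example (with invented height data) computes descriptive summaries and then a rough inferential estimate; the 1.96 factor assumes a normal approximation for a 95% confidence interval:

```python
import math
import statistics

# Hypothetical sample of 10 measured heights in cm (invented data)
sample = [172, 168, 181, 175, 169, 177, 174, 170, 179, 173]

# Descriptive statistics: summarize the data at hand
mean = statistics.mean(sample)    # average
stdev = statistics.stdev(sample)  # sample standard deviation

# Inferential statistics: estimate a population characteristic from the
# sample; here a rough 95% confidence interval for the population mean,
# using the normal approximation mean +/- 1.96 * standard error
se = stdev / math.sqrt(len(sample))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean={mean:.1f}, stdev={stdev:.2f}, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```

The descriptive part only summarizes the sample itself; the interval is the inferential step, generalizing from the sample to the population.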

Both branches (descriptive and inferential) are used in applied statistics. Inferential statistics, in turn, is divided into parametric statistics and non-parametric statistics.

There is also a discipline called mathematical statistics that establishes the theoretical bases of statistical techniques. The word "statistics" refers to the summary of statistical results, usually descriptive, as in economic statistics, criminal statistics, etc.

History

Origin

The German term Statistik, introduced by Gottfried Achenwall in 1749, originally referred to the analysis of data about the State, that is, the "science of the State" (or rather, of the city-state). It was also called political arithmetic, after its literal English translation. It was not until the 19th century that the term statistics acquired the meaning of collecting and classifying data, a usage introduced by the Scottish agronomist and statesman Sir John Sinclair (1754-1835).

Originally, therefore, statistics was associated with states or free cities, being used by governments and (often centralized) administrative bodies. The collection of data about states and localities continues today through national and international statistical services. In particular, censuses began to provide regular information about the population of each country. Statistical data thus originally referred to the demographic data of a given city or state, which is why, in the Melvil Dewey decimal classification used in libraries, works on statistics are shelved next to works on demography.

Graphic representations and other tallies on skins, rocks, wooden sticks, and cave walls were already used to keep track of the number of people, animals, or certain goods. Around the year 3000 BC the Babylonians were already using small molded clay tablets to record data on agricultural production and on goods sold or exchanged. The Egyptians analyzed population and income data long before they built the pyramids. The Biblical books of Numbers and Chronicles include statistical work in some parts: the first contains two censuses of the population of the Land of Israel, and the second describes the material well-being of the various Jewish tribes. Similar numerical records existed in China before the year 2000 BC. The ancient Greeks carried out censuses whose information was used around 594 BC to collect taxes.

Use of statistics in ancient civilizations

In ancient times, statistics consisted of preparing censuses (of population and land). Its objective was to facilitate the management of tax work, obtain data on the number of people who could serve in the army or establish distributions of land or other assets.

  • In the Middle East, under Sumerian rule, Babylon had nearly 6000 inhabitants; clay tablets found there recorded the business and legal affairs of the city.
  • In Egypt: Statistics begins with Dynasty I, around 3050 BC. The pharaohs ordered censuses to obtain data on land and wealth so as to plan the construction of the pyramids.
  • In China: Around 2238 BC, Emperor Yao produced a general census of agricultural, industrial, and commercial activity.
  • The census in the Jewish people served, in addition to military purposes, to calculate the amount of temple revenue.
  • In Ancient Greece: Census was conducted to quantify the distribution and possession of land and other wealth, organize military service and determine the right to vote.
  • In Ancient Rome: During the Roman Empire, birth and death records were established, and studies on citizens, their lands and their wealth were prepared.
  • In Mexico: In 1116, during the second migration of the Chichimec tribes, King Xólotl ordered a census of the population.

In the Middle Ages

During the Middle Ages, statistics did not present great advances, but the work of Isidoro de Sevilla stands out, who compiled and classified data of a diverse nature whose results were published in the work Originum sive Etymologiarum.

In the Modern Age

  • In Spain, notable censuses include that of the Pecheros (1528), that of the Bishops (1587), the Census of the Millions (1591), and the Census of the Count of Aranda (1768).
  • In England, the plague of the 1500s led to a multiplication of accounting records, referring mainly to deaths and births.

Origins in probability

Statistical-mathematical methods emerged from probability theory, which dates back to the correspondence between Blaise Pascal and Pierre de Fermat (1654). Christiaan Huygens (1657) gave the first known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) treated the subject as a branch of mathematics. In the modern era, Kolmogorov's work has been a pillar in the formulation of the fundamental model of probability theory, which underlies statistics.

The theory of errors can be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722) and to the work of Thomas Simpson in 1755 (printed 1756), which first applied the theory to the discussion of errors of observation. The reprint (1757) of this work includes the axiom that positive and negative errors are equally probable and that there are certain assignable limits within which all errors lie; continuous errors are discussed and a probability curve is described.

Pierre-Simon Laplace (1774) made the first attempt to derive a rule for the combination of observations from the principles of probability theory. He represented the law of probability of errors by a curve and deduced a formula for the mean of three observations. In 1781 he also obtained a formula for the law of facility of error (a term introduced by Lagrange in 1774), but one leading to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

Photograph of Ceres by the Hubble Space Telescope. Its position was estimated by Gauss using the method of least squares.

The method of least squares, used to minimize errors in measurements, was published independently by Adrien-Marie Legendre (1805), Robert Adrain (1808), and Carl Friedrich Gauss (1809). Gauss had used the method in his famous 1801 prediction of the location of the dwarf planet Ceres. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), John Herschel (1850), and Morgan Crofton (1870). Other contributors were Ellis (1844), Augustus De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's formula for r, the probable error of a single observation, is well known.
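The method can be illustrated with the closed-form least-squares solution for a straight line; the data points below are invented for illustration and have no connection to Gauss's original Ceres computation:

```python
# Least-squares fit of a line y = a + b*x, using the closed-form solution
# for simple linear regression (illustrative, made-up data)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# The slope b minimizes the sum of squared residuals sum((y - a - b*x)^2)
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x  # intercept follows from the means

print(f"fitted line: y = {a:.3f} + {b:.3f} x")
```

The same criterion (minimizing the sum of squared residuals) generalizes to multiple variables and to the orbital-element fitting Gauss performed.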

The 19th century includes authors such as Laplace, Silvestre Lacroix (1816), Littrow (1833), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, and Didion. Augustus De Morgan and George Boole improved the presentation of the theory. Adolphe Quetelet (1796-1874) was another important founder of statistics; he introduced the notion of the "average man" (l'homme moyen) as a means of understanding complex social phenomena such as crime rates, marriage rates, or suicide rates.

20th century

Karl Pearson, a founder of mathematical statistics.

The modern field of statistics emerged in the early 20th century, led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis not only in science but also in manufacturing and politics. Galton's contributions include the concepts of standard deviation, correlation, and regression analysis, and the application of these methods to the study of a variety of human characteristics such as height and weight. Pearson developed the Pearson correlation coefficient, defined as a product-moment, the method of moments for fitting distributions to samples, and the Pearson distribution, among other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometrics). Pearson also founded the first university department of statistics, at University College London.

During the 20th century, the creation of precise instruments for public health (epidemiology, biostatistics, etc.) and for economic and social purposes (unemployment rate, econometrics, etc.) required substantial advances in statistical practice.

The second wave, in the 1910s and 1920s, began with William Gosset and culminated in the work of Ronald Fisher, who wrote the textbooks that would define the academic discipline at universities around the world. His most important publications were his 1918 paper The Correlation between Relatives on the Supposition of Mendelian Inheritance, the first to use the statistical term variance; his 1925 classic Statistical Methods for Research Workers; and his 1935 The Design of Experiments, in which he developed rigorous models of experimental design. He originated the concepts of sufficiency and Fisher information. In his 1930 book The Genetical Theory of Natural Selection he applied statistics to various concepts in biology, such as Fisher's principle (on the sex ratio) and the Fisherian runaway, a concept in sexual selection about a positive feedback effect found in evolution.

Current state

Today the use of statistics has extended far beyond its origins as a service to the state. People and organizations use statistics to understand data and make decisions in the natural and social sciences, medicine, business, and other areas. Statistics is a sub-area of mathematics whose application to the factual sciences furthers scientific knowledge, and it is considered a formal science "allied" to the factual sciences. Many universities have academic departments of mathematics (with a specialization in statistics) or separate departments of statistics. Statistics is also taught in departments as diverse as psychology, sociology, education, and public health.

Linear regression – scatter plot in statistics.

When applying statistics to a scientific, industrial, or social problem, one begins with a process or population to be studied. This may be the population of a country, of crystallized grains in a rock, or of goods manufactured by a particular factory during a given period. It could also be a process observed at various instants and the data collected in this way constitute a time series.

For practical reasons, instead of collecting data on an entire population, one usually studies a selected subset of the population, called a sample. Data about the sample are collected by observation or experiment. The data are then analyzed statistically, which serves two purposes: description and inference.

The concept of correlation is particularly valuable. Statistical analysis of a dataset can reveal that two variables (that is, two properties of the population under consideration) tend to vary together, as if there were a connection between them. For example, a study of annual income and age at death might find that poor people tend to have shorter life spans than people with higher incomes. The two variables are then said to be correlated. However, a causal relationship between the two variables cannot be immediately inferred: both correlated phenomena could be effects of a third, previously unconsidered variable, called a confounding variable.
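A sketch of how such a correlation is computed from its definition, using made-up income and lifespan figures (the numbers are purely illustrative and imply nothing causal):

```python
import math

# Hypothetical data: annual income (thousands) and age at death (invented)
income = [15, 22, 30, 41, 55, 70]
lifespan = [68, 71, 74, 76, 79, 82]

def pearson_r(xs, ys):
    """Pearson correlation: cov(x, y) / (spread of x * spread of y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(income, lifespan)
# r near +1 indicates a strong positive association, but correlation
# alone does not establish causation: a confounder could drive both
print(f"r = {r:.3f}")
```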

If the sample is representative of the population, inferences and conclusions drawn from the sample can be extended to the entire population. A major problem is determining how representative the drawn sample is. Statistics offers measures to estimate and correct for randomness in the sample and in the data collection process, as well as methods for designing robust experiments in the first place (see experimental design).

The fundamental mathematical concept used to understand randomness is that of probability. Mathematical statistics (also called statistical theory) is the branch of applied mathematics that uses probability theory and mathematical analysis to examine the theoretical foundations of statistics.

The use of any statistical method is valid only when the system or population under consideration satisfies the mathematical assumptions of the method. The misuse of statistics can produce serious errors in description and interpretation, which could affect social policies, medical practice, and the quality of structures such as bridges and nuclear power plants.

Even when statistics are correctly applied, the results can be difficult for the inexperienced to interpret. For example, the statistical significance of a trend in the data, which measures the degree to which the trend could be caused by random variation in the sample, may not agree with one's intuitive sense of it. The set of basic statistical skills (and skepticism) that a person needs to handle information on a day-to-day basis is referred to as "statistical literacy."

Statistical methods

Experimental and Observational Studies

A common goal of a research project is to investigate causality, and in particular to draw conclusions about the effect that changes in the values of predictors, or independent variables, have on a response, or dependent variable. There are two main types of studies for investigating causality: experimental and observational studies. In both types, the effect of one or more independent variables on the behavior of a dependent variable is observed; the difference between the two lies in how the study is conducted. Each can be very effective, and statistics plays a very important role in the analysis of the information.

Levels of measurement

There are four types of measurement scales in statistics: nominal, ordinal, interval, and ratio. They have varying degrees of use in statistical research. Ratio measurements, for which both a true zero and distances between different measurements are defined, allow the greatest flexibility in the statistical methods that can be used to analyze the data. Interval measurements have interpretable distances between measurements but no meaningful zero value (as with IQ or temperature in degrees Celsius). Ordinal measurements have imprecise differences between consecutive values but an interpretable order. Nominal measurements have no meaningful order among their values.

The nominal scale can be considered the lowest-level scale: it consists of grouping objects into classes. The ordinal scale, for its part, uses the "order" property of numbers. The equal-interval scale is characterized by a common and constant unit of measurement; it is important to note that its zero point is arbitrary and does not reflect the absence of the magnitude being measured. In addition to having the characteristics of the ordinal scale, it allows the magnitude of the intervals (distances) between all elements of the scale to be determined. The ratio scale is the highest level of measurement and differs from the equal-interval scale only in having a true zero point as its origin; that is, the zero value of this scale means the absence of the magnitude being measured, and a unit of measure is available for the property. Equal differences between the assigned numbers correspond to equal differences in the degree of the attribute present in the object of study.

Statistical analysis techniques

Some tests and procedures for investigating observations are:

  • Cumulative frequency analysis
  • Regression analysis
  • Variance analysis (ANOVA)
  • Confirmatory factor analysis


  • Correlation: Pearson correlation coefficient and Spearman correlation coefficient
  • Statistical frequency
  • Statistical Graphics
  • Iconography of correlations
  • χ² (chi-squared) test
  • Fisher's least significant difference (LSD) test
  • Student's t-test
  • Mann-Whitney U test
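As an illustration of one of these procedures, the following sketch computes a two-sample Student t statistic directly from its equal-variance formula, on two invented groups of measurements:

```python
import math
import statistics

# Two invented groups of measurements (illustrative data only)
group_a = [5.1, 4.9, 5.4, 5.0, 5.2]
group_b = [4.6, 4.4, 4.8, 4.5, 4.7]

na, nb = len(group_a), len(group_b)
ma, mb = statistics.mean(group_a), statistics.mean(group_b)
va, vb = statistics.variance(group_a), statistics.variance(group_b)

# Pooled variance, then the t statistic with na + nb - 2 degrees of freedom
sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

print(f"t = {t:.2f} with {na + nb - 2} degrees of freedom")
```

In practice the statistic is compared against the Student t distribution (or computed by a library routine) to obtain a p-value; only the statistic itself is shown here.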

Language and symbology

Population and sample

  • Population: The whole set of existing values, whether people, measurements, or objects, that can be expressed by a variable and share a characteristic of statistical interest for a specific study. The complete enumeration of a population is known as a census.
  • Finite population: One whose elements can be exhaustively counted; it comprises a limited number of objects, measurements, or people. Examples: food expenditure over a certain period, a set of grades, or the total number of students enrolled at a university.
  • Infinite population: One that includes so many observations or measurements that they cannot be exhausted by counting; it has an unlimited number of values, for example the future production of a machine or the repeated throws of dice or a coin.
  • Sample: A subset of a population, formed by some of its data, whether objects, people, or measurements. The study of how to select one is known as sampling.
  • Representative sample: A subset that is representative of the population; to be considered representative, certain selection procedures, or a sampling method, must be followed. An adequate sample is one that captures the essential characteristics of the population, making it possible to generalize about the total data without examining every element.
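A simple random sample can be drawn from a finite population with a few lines of Python; the population and sample size here are arbitrary, and the fixed seed only makes the example reproducible:

```python
import random

# A finite population of 1000 numbered units (arbitrary example)
population = list(range(1, 1001))

random.seed(42)  # fixed seed for reproducibility
sample = random.sample(population, 50)  # simple random sampling, no replacement

print(len(sample), len(set(sample)))  # 50 distinct units drawn
```

Simple random sampling gives every unit the same probability of selection, which is one basic way of aiming for a representative sample; real designs often add stratification or clustering.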

Parameter

  • Parameter: A measure of some numerical characteristic of a population, usually denoted by Greek symbols (e.g. μ or σ).

Specialized disciplines

Some fields of research use statistics so extensively that they have specialized terminology. These disciplines include:

  • Process analysis (for data analysis in analytical chemistry and chemical engineering)
  • Spatial analysis
  • Biostatistics (statistics applied to the health sciences)
  • Actuarial science
  • Scientometrics
  • Statistical reliability
  • Quality control
  • Statistical literacy
  • Demography
  • Econometrics (statistics applied to economics)
  • Computational statistics
  • Statistics in education, teaching, and training
  • Business statistics
  • Environmental statistics
  • Statistics in marketing
  • Statistics in epidemiology
  • Engineering statistics
  • Statistics in the health sciences
  • Statistics in medicine
  • Statistics in veterinary medicine and animal husbandry
  • Statistics in nutrition
  • Statistics in agronomy
  • Statistics in planning
  • Statistics in research
  • Statistics in psychology (psychometrics)
  • Statistics in the restoration of works of art
  • Statistics in literature
  • Statistics in astronomy (astrostatistics)
  • Statistics in anthropology (anthropometry)
  • Statistics in history
  • Spatial statistics
  • Industrial statistics
  • Military statistics
  • Statistical consulting
  • Sports statistics
  • Social statistics
  • Sample surveys
  • Statistical physics
  • Geostatistics
  • Geography
  • Operations research
  • Mathematical statistics
  • Data mining
  • Image processing
  • Productivity
  • Chemometrics (for data analysis in analytical chemistry and chemical engineering)
  • Pattern recognition (for knowledge discovery in data)
  • Geographic information systems

Statistics is an essential science for business and production. It is used to understand the variability of measurement systems, for statistical process control (SPC), to compile data, and to make decisions in the face of uncertainty. In all these applications it is a key science.

Statistical computing

The rapid and sustained increase in computing power since the second half of the 20th century has had a substantial impact on the practice of statistical science. Early statistical models were almost always drawn from the class of linear models. Now, powerful computers together with appropriate numerical algorithms have caused a revival of interest in nonlinear models (especially neural networks and decision trees) and the creation of new types, such as generalized linear models and multilevel models.

The increase in computational power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made Bayesian methods more accessible. The computing revolution has implications for the future of statistics, with a new emphasis on "experimental" and "empirical" statistics. A large number of statistical packages are now available to researchers. Dynamic systems and chaos theory began to interest the Hispanic research community a decade ago, whereas in the Anglo-Saxon community of the United States "chaotic behavior in nonlinear dynamic systems" was already well established, with some 350 books published by 1997, and work was beginning in the social sciences and in applications of physics; its use in analytics was also being contemplated.
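One of the resampling methods mentioned above, the bootstrap, can be sketched in a few lines; the data and the number of resamples are arbitrary choices for illustration:

```python
import random
import statistics

# Bootstrap estimate of the standard error of the sample mean: resample
# the data with replacement many times and look at the spread of the
# resampled means (toy data, fixed seed for reproducibility)
data = [12.1, 9.8, 11.4, 10.6, 13.0, 9.2, 10.9, 11.7, 10.1, 12.4]
random.seed(0)

boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]  # draw with replacement
    boot_means.append(statistics.mean(resample))

se_boot = statistics.stdev(boot_means)
print(f"bootstrap standard error of the mean: {se_boot:.3f}")
```

The appeal of the method is that the same recipe works for statistics (medians, ratios, fitted coefficients) whose sampling distributions have no convenient closed form.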

Misuse of statistics

There is sometimes a perception that statistical knowledge is misused, by finding ways to interpret the data that favor the presenter. A famous saying, attributed to Benjamin Disraeli, is: "There are three kinds of lies: lies, damned lies, and statistics." The popular book How to Lie with Statistics, by Darrell Huff, discusses many cases of misuse of statistics, with an emphasis on misleading graphs. By choosing, rejecting, or modifying a sample of observations through non-randomized procedures, the results of an analysis can be biased, for example by selectively removing outliers. This may be the result of outright fraud as well as of unintended bias on the part of researchers with little statistical knowledge. Lawrence Lowell (dean at Harvard University) wrote in 1909 that statistics, "like veal pies, are good if you know the person that made them, and are sure of the ingredients." For this reason, it is essential that the statistical analysis of data be carried out by qualified professionals, such as graduates in mathematics with a specialization in statistics or graduates in statistics, and it is increasingly argued that statistics should be a profession regulated by the state.

When unqualified people use statistics to analyze data from experiments or observational research in the factual sciences, one of the most frequent errors is unawareness of the type I error, which can lead to false conclusions. The probability of finding a spurious or chance association between two variables, and mistakenly believing that a real association has been found, increases when statistical hacking of a database (p-hacking) is performed instead of applying the scientific method. This bad practice consists of playing with a database and relating the dependent variable to every possible independent variable until a statistically significant association is found, without having previously established a conceptual framework and a research hypothesis that justifies why those relationships are to be studied. The publication of such results in scientific journals is one of the causes of the loss of credibility and reproducibility of science, which has led many scientists to issue warning statements. Statistical hacking is one of the reasons why some studies contradict the results obtained in others: one day it is claimed that coffee, cheese, or red wine protect against heart disease, and then another study claims that these same foods cause those diseases. In addition, each study usually uses different procedures and different statistical techniques, which are not always applied correctly, or small samples are used whose results are not confirmed in larger studies. However, many professionals in the factual sciences who read these publications in specialized journals do not notice these bad practices, the media amplify the information around each study, and public mistrust grows.
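The inflation of false positives under p-hacking can be illustrated by simulation: all the series below are pure noise, so any "significant" correlation found is spurious by construction (the 0.361 cutoff is an approximate 5% two-sided critical value for n = 30):

```python
import math
import random

random.seed(1)
n = 30  # observations per variable

def pearson_r(xs, ys):
    """Pearson correlation coefficient computed from its definition."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One "outcome" and 20 "predictors", all independent random noise
outcome = [random.gauss(0, 1) for _ in range(n)]
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(20)]

# Screening all 20 predictors means each 5% false-positive risk is taken
# 20 times, so at least one spurious "discovery" becomes likely
rs = [pearson_r(p, outcome) for p in predictors]
spurious = sum(abs(r) > 0.361 for r in rs)
print(f"{spurious} of 20 unrelated predictors look 'significant'")
```

Pre-registering a single hypothesis, or correcting the threshold for the number of tests performed, avoids exactly this trap.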

Classical inference and Bayesian inference

The widely used hypothesis testing approach requires the establishment of a null hypothesis to subsequently test the agreement of the data with this hypothesis. A misinterpretation of the results may exaggerate the importance of small differences in studies with large samples that may not have any practical relevance.

See also the criticism of, and controversy over, null hypothesis testing.

In the fields of psychology and medicine, especially with regard to the approval of new drugs by the Food and Drug Administration, criticism of hypothesis testing has increased in recent years. One response has been a greater emphasis on reporting the p-value rather than simply whether the hypothesis was rejected at a given significance level α. Again, however, this summarizes the evidence for an effect but not its size. One possibility is to report confidence intervals, since these indicate both the size of the effect and the uncertainty around it. This aids interpretation, as the confidence interval for a given α simultaneously indicates the statistical significance and the effect size. The p-value and confidence intervals are based on the same fundamental calculations as the corresponding hypothesis tests; they present the results in a more detailed format than the yes/no answer of a hypothesis test, with the same statistical methodology.

Another type of approximation is the use of Bayesian methods. This approach, however, has also been criticized.

The strong desire for good drugs to be approved and dangerous or infrequently used drugs to be rejected creates tensions and conflicts (type I and II errors in hypothesis testing parlance).

Teaching statistics in the social sciences

On the teaching of statistics in the social sciences, some research “suggests that students learn more when teachers use concrete examples and problems relevant to them.”

In order to have concrete examples and problems relevant to students, it is possible to propose learning activities that link quantitative methods to qualitative approaches, since the latter are used more frequently in undergraduate Social Science curricula. Regarding this combination of methods, one of its main virtues is that "the collection of the rich descriptive details of the qualitative data can be used to quantify and generalize the results."

Among the advantages of teaching that combines quantitative and qualitative methods is that students develop the ability to triangulate results, which reduces the fallibility inherent in each approach, for example errors attributable to the source data; given that the methods use different types of data, the results will be more reliable if they come from a triangulation of methods.

Moreover, the teaching of quantitative methods can be incorporated into the programs of the different axes of the academic curriculum. At present it is common for the various subjects to practice qualitative research methods but not quantitative ones. This should change, because "introducing quantitative reasoning into substantive courses makes it possible to link training in quantitative methods, especially statistical ones, with the core issues of the social sciences."

Statistics in the field of education

Statistical data analysis is a widely used resource in the educational field. Statistics are used very frequently in the physical, social, medical, and economic sciences, among many others, and in all of them the didactic or applied use of statistics predominates. An important and novel use is the analysis of the number of visits to different Wikipedia articles: this analysis serves to distinguish articles covered in the study plans of different countries from those that are not. In the German Wikipedia, for example, a subsection called Didaktik (didactics) appears in many educational topics, offering suggestions that serve both students and teachers in increasing the benefit derived from reading about the topic. A link on an article's talk page shows the number of visits to the article over a certain period (20, 30, 60, or 90 days; a year; or a span between two dates). A dot below a certain date indicates a Monday, so one can see the low number of page queries on weekends and the large increase on midweek days (generally Wednesdays and Thursdays). Sometimes a subject does not follow this pattern, which may be due to particular features of the subject or of the study plan; for example, exams held on a Friday would produce many consultations of the article after the exam, to see what went right or wrong.

The teaching of statistics should focus on the correct selection of quantitative tests, the interpretation of results, and the use of freely available statistical software. This hands-on approach to teaching statistics allows students to develop the confidence to select and apply the appropriate tests.

Statistical Law

One of the many professions that maintains a continuous interrelationship with other disciplines is the legal profession, which has a specific relationship with statistics, given that statistics is a science characterized by being an ally of different fields, to which it provides the necessary data. This link is established mainly in order to know which aspects of society should be regulated, with reliability and validity as its main concerns.

Torres Manrique states: "Statistical law is the systematized legal discipline that quantitatively and qualitatively studies the trend and frequency of phenomena that occur on a massive scale, in order to contribute to the development of human life in society, regulating it to make it fairer and less manipulable."

It should be emphasized that statistics provides great support as a tool for the administration of justice, revealing present-day demands, but also an infinity of recorded data that can be used wrongly. It must be borne in mind that statistics can be influential in trials, since they can help determine the innocence or guilt of a client; unfortunately, rigorous statistical approaches remain minimal in many Latin American countries.

To make more accurate judgments, disciplines such as statistical criminology must be developed. The link between these two disciplines is necessary because, when it works correctly, intelligently, and progressively, it equips legal professionals with the most important principles and techniques for carrying out their investigations.

