Mathematical expectation

In mathematics, specifically in statistics, the expectation (also called the expected value, the population mean, or simply the mean) of a random variable $X$ is the number $\operatorname{E}[X]$ (also written $\text{E}[X]$) that formalizes the idea of the mean value of a random phenomenon. It is a concept analogous to the arithmetic mean of a set of data.

When the random variable is discrete, the expectation is equal to the sum of the probability of each possible random event multiplied by the value of that event. It therefore represents the average amount one "expects" as the result of a random experiment when the probability of each event is held constant and the experiment is repeated a large number of times. Note that the value of the mathematical expectation may not be "expected" in the everyday sense of the word: it may be an improbable or even impossible outcome.

For example, the expected value when rolling a fair six-sided die is 3.5. The calculation is

$$\operatorname{E}[X] = 1\cdot\frac{1}{6} + 2\cdot\frac{1}{6} + 3\cdot\frac{1}{6} + 4\cdot\frac{1}{6} + 5\cdot\frac{1}{6} + 6\cdot\frac{1}{6} = \frac{1+2+3+4+5+6}{6} = 3.5$$

and note that 3.5 is not a possible value when rolling the die. In this case, in which all events are of equal probability, the expectation is equal to the arithmetic mean.
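As a minimal sketch, the same calculation in Python (the only inputs are the die faces and their uniform probability):

    # Expected value of a fair six-sided die:
    # sum of each face value times its probability.
    faces = [1, 2, 3, 4, 5, 6]
    expected_value = sum(x * (1 / 6) for x in faces)
    print(expected_value)  # 3.5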

A common application of mathematical expectation is in betting and gambling. For example, French roulette has 37 equiprobable pockets. A winning bet on a single number pays 35 to 1 (that is, we win 35 times our stake). Therefore, considering the 37 possible outcomes, the expected profit from betting on a single number is:

$$\left(-1\cdot\frac{36}{37}\right) + \left(35\cdot\frac{1}{37}\right)$$

which is approximately -0.027027. Therefore, one expects, on average, to lose about 2.7 cents for every euro bet, and the expected value of a one-euro bet is 0.972973 euros. In the gambling world, a game whose expected payoff is zero (on average we neither win nor lose) is called a "fair game."

Note: the first term is the expectation of losing the €1 stake, so its value is negative. The second term is the mathematical expectation of winning the €35. The expected profit is the expected gain minus the expected loss.
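A minimal sketch of the same computation in Python, assuming a one-euro stake: a net loss of 1 on 36 of the 37 pockets and a net gain of 35 on one of them:

    # Expected net profit of a 1-euro bet on a single number in French roulette.
    p_win = 1 / 37
    p_lose = 36 / 37
    expected_profit = (-1) * p_lose + 35 * p_win
    print(expected_profit)      # -0.0270... (about -2.7 cents per euro)
    print(1 + expected_profit)  # 0.9729... (expected value of the 1-euro bet)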

Definition

Discrete case

For a discrete random variable $X$ with probability function $\operatorname{P}[X=x_i]$, with $i = 1, 2, \dots, n$, the expectation is defined as

$$\operatorname{E}[X] = \sum_{i=1}^{n} x_i \operatorname{P}[X = x_i]$$
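A short Python sketch of this definition; representing the distribution as a dictionary from values to probabilities is an assumption of the sketch:

    def expectation(pmf):
        """Expected value of a discrete random variable whose distribution
        is given as a mapping x_i -> P[X = x_i] (probabilities sum to 1)."""
        return sum(x * p for x, p in pmf.items())

    # The fair die from the example above as a probability mass function.
    die = {x: 1 / 6 for x in range(1, 7)}
    print(expectation(die))  # 3.5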

Continuous case

For a continuous random variable $X$ with density function $f_X(x)$, the expected value is defined as the Riemann integral

$$\operatorname{E}[X] = \int_{\mathbb{R}} x f_X(x)\,dx$$
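A hedged numerical sketch in Python, approximating this integral with a midpoint Riemann sum; the exponential density (whose expectation is known to be 1) and the truncation of the domain to [0, 50] are assumptions chosen for illustration:

    import math

    def expectation_continuous(density, a, b, n=100_000):
        """Approximate E[X] = integral of x * f(x) dx with a midpoint
        Riemann sum over the truncated interval [a, b]."""
        h = (b - a) / n
        return sum((a + (i + 0.5) * h) * density(a + (i + 0.5) * h) * h
                   for i in range(n))

    def exponential_density(x):
        return math.exp(-x)  # exponential density with rate 1

    print(expectation_continuous(exponential_density, 0.0, 50.0))  # ~1.0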

General case

In general, if $X$ is a random variable defined on the probability space $(\Omega, \mathcal{F}, \operatorname{P})$, then the expected value of $X$, denoted by $\operatorname{E}[X]$, is defined as the Lebesgue integral

$$\operatorname{E}[X] = \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega)$$
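By the law of large numbers, this integral can be estimated by averaging many independent realizations of $X$; a minimal Monte Carlo sketch in Python, with a uniform variable on [0, 1] chosen purely as an example:

    import random

    # The sample mean of many independent draws X(omega) approximates
    # the integral of X over Omega with respect to P.
    random.seed(0)
    samples = [random.uniform(0, 1) for _ in range(100_000)]
    print(sum(samples) / len(samples))  # ~0.5, the expectation of U(0, 1)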

For multidimensional random variables, the expected value is defined component by component, that is,

$$\operatorname{E}[(X_1, \dots, X_n)] = (\operatorname{E}[X_1], \dots, \operatorname{E}[X_n])$$

and, for a random matrix $X$ with elements $X_{i,j}$, $(\operatorname{E}[X])_{i,j} = \operatorname{E}[X_{i,j}]$.

Moments

The expectations

$$\operatorname{E}[X^k]$$

for $k = 1, 2, \dots$ are called the moments of order $k$, or the $k$-th moments, of the random variable $X$. The most important are the central moments $\operatorname{E}[(X - \operatorname{E}[X])^k]$.
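A brief Python sketch of both kinds of moments, reusing the dictionary representation of a discrete distribution assumed earlier:

    def moment(pmf, k):
        """k-th moment E[X^k] of a discrete random variable."""
        return sum(x ** k * p for x, p in pmf.items())

    def central_moment(pmf, k):
        """k-th central moment E[(X - E[X])^k]."""
        mu = moment(pmf, 1)
        return sum((x - mu) ** k * p for x, p in pmf.items())

    die = {x: 1 / 6 for x in range(1, 7)}
    print(moment(die, 2))          # E[X^2] = 91/6 ≈ 15.167
    print(central_moment(die, 2))  # variance = 35/12 ≈ 2.917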

Not all random variables have an expected value. For example, the Cauchy distribution does not have it.

History

The idea of the expected value originated in the mid-17th century from the study of the so-called problem of points, which asks how to divide the stakes fairly between two players who must end their game before it is properly finished. The problem had been debated for centuries, and many conflicting proposals and solutions had been suggested, by the time the French writer and amateur mathematician Chevalier de Méré posed it to Blaise Pascal in 1654. Méré claimed that the problem could not be solved and that it showed how flawed mathematics was when applied to the real world. Pascal, being a mathematician, felt provoked and was determined to solve the problem once and for all.

He began discussing the problem in the famous series of letters to Pierre de Fermat. Soon enough, the two independently came up with a solution. They solved the problem in different computational ways, but their results were identical, because their calculations were based on the same fundamental principle: the value of a future gain should be directly proportional to the chance of obtaining it. This principle seemed to have come naturally to both of them. They were very pleased to have found essentially the same solution, which in turn convinced them that they had solved the problem conclusively; however, they did not publish their findings, informing only a small circle of mutual scientific friends in Paris.

The Dutch mathematician Christiaan Huygens also considered the problem of points, and presented a solution based on the same principle as those of Pascal and Fermat. Huygens published his treatise on probability theory, "De ratiociniis in ludo aleæ" (see Bibliography: Huygens (1657)), in 1657, just after visiting Paris. The book extended the concept of expectation by adding rules for calculating expectations in situations more complicated than the original problem (for example, for three or more players), and it can be seen as the first successful attempt to lay the foundations of probability theory.

In the foreword to his treatise, Huygens wrote:

It must be said, too, that for some time some of the best mathematicians in France have occupied themselves with this kind of calculus, so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put one another to the test by proposing many difficult questions, have hidden their methods. I have therefore had to examine and go deeply into this matter for myself, beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But in the end I have found that my answers in many cases do not differ from theirs.
Edwards (2002)

During his visit to France in 1655, Huygens learned of de Méré's Problem. From his correspondence with Carcavine a year later (in 1656), he realized that his method was essentially the same as Pascal's. Therefore, he knew about Pascal's priority on this subject before his book went to print in 1657.

In the mid-19th century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.

Properties

If $X$ and $Y$ are random variables with finite expectation and $a, b, c \in \mathbb{R}$ are constants, then

  1. $\operatorname{E}[c] = c$.
  2. $\operatorname{E}[cX] = c\operatorname{E}[X]$.
  3. If $X \geq 0$ then $\operatorname{E}[X] \geq 0$.
  4. If $X \leq Y$ then $\operatorname{E}[X] \leq \operatorname{E}[Y]$.
  5. If $X$ is bounded by two real numbers $a$ and $b$, that is, $a < X < b$, then so is its mean: $a < \operatorname{E}[X] < b$.
  6. If $Y = a + bX$, then $\operatorname{E}[Y] = \operatorname{E}[a + bX] = a + b\operatorname{E}[X]$ (see the numerical check after this list).
  7. In general, $\operatorname{E}[XY] \neq \operatorname{E}[X]\operatorname{E}[Y]$; equality holds when the random variables are independent.
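As a quick numerical check of property 6, a sketch in Python assuming the die distribution from the examples above:

    die = {x: 1 / 6 for x in range(1, 7)}

    def expectation(pmf):
        return sum(x * p for x, p in pmf.items())

    a, b = 2.0, 3.0
    # E[a + bX] computed directly on the transformed values...
    lhs = sum((a + b * x) * p for x, p in die.items())
    # ...agrees with a + b * E[X], as property 6 states.
    rhs = a + b * expectation(die)
    print(lhs, rhs)  # 12.5 12.5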

Linearity

The expectation operator $\operatorname{E}[\cdot]$ is a linear operator, in the sense that for any random variables $X$ and $Y$ and any $c \in \mathbb{R}$,

$$\begin{aligned} \operatorname{E}[X+Y] &= \operatorname{E}[X] + \operatorname{E}[Y] \\ \operatorname{E}[cX] &= c\,\operatorname{E}[X] \end{aligned}$$

Proving this result is simple. If we consider $X$ and $Y$ to be discrete random variables, then

$$\begin{aligned}
\operatorname{E}[X+Y] &= \sum_{x,y}(x+y)\operatorname{P}[X=x, Y=y] \\
&= \sum_{x,y} x\,\operatorname{P}[X=x, Y=y] + \sum_{x,y} y\,\operatorname{P}[X=x, Y=y] \\
&= \sum_{x} x \sum_{y}\operatorname{P}[X=x, Y=y] + \sum_{y} y \sum_{x}\operatorname{P}[X=x, Y=y] \\
&= \sum_{x} x\,\operatorname{P}[X=x] + \sum_{y} y\,\operatorname{P}[Y=y] \\
&= \operatorname{E}[X] + \operatorname{E}[Y]
\end{aligned}$$
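The same argument can be checked numerically; a minimal Python sketch, with a hypothetical joint distribution of two dependent variables:

    # Joint distribution P[X = x, Y = y]; the probabilities are a
    # hypothetical example and sum to 1. X and Y are *not* independent.
    joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    E_sum = sum((x + y) * p for (x, y), p in joint.items())
    E_X = sum(x * p for (x, y), p in joint.items())
    E_Y = sum(y * p for (x, y), p in joint.items())
    print(E_sum, E_X + E_Y)  # 1.1 1.1 -- linearity needs no independence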

Independence

If $X$ and $Y$ are independent random variables, then

$$\operatorname{E}[XY] = \operatorname{E}[X]\,\operatorname{E}[Y]$$

The proof of this result is straightforward and only requires the definition of independence. It is given here for the discrete case only (the continuous case is analogous):

$$\begin{aligned}
\operatorname{E}[XY] &= \sum_{x}\sum_{y} xy\,\operatorname{P}[X=x, Y=y] \\
&= \sum_{x}\sum_{y} xy\,\operatorname{P}[X=x]\,\operatorname{P}[Y=y] \\
&= \sum_{x} x\,\operatorname{P}[X=x] \sum_{y} y\,\operatorname{P}[Y=y] \\
&= \operatorname{E}[X]\,\operatorname{E}[Y]
\end{aligned}$$
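A matching Python sketch for the independent case, building the joint distribution as the product of two hypothetical marginals:

    # Marginal distributions of two independent discrete variables
    # (the numbers are a hypothetical example).
    pX = {0: 0.5, 1: 0.5}
    pY = {1: 0.25, 2: 0.75}

    # Independence: the joint probability factorizes into the marginals.
    joint = {(x, y): px * py for x, px in pX.items() for y, py in pY.items()}

    E_XY = sum(x * y * p for (x, y), p in joint.items())
    E_X = sum(x * p for x, p in pX.items())
    E_Y = sum(y * p for y, p in pY.items())
    print(E_XY, E_X * E_Y)  # 0.875 0.875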
