Random variable

In probability and statistics, a random variable is a function that assigns a value, usually numerical, to the outcome of a random experiment. For example, the value may be one of the possible results of rolling a die twice, such as (1, 1) or (1, 2), or a real number (e.g., the maximum temperature measured throughout the day in a specific city).

The possible values of a random variable can represent the possible results of an experiment not yet performed, or the possible values of a quantity whose currently existing value is uncertain (e.g., as a result of an incomplete or imprecise measurement). Intuitively, a random variable can be thought of as a quantity whose value is not fixed but can take on different values; a probability distribution is used to describe the probability that the different values will occur. In formal terms, a random variable is a function defined on a probability space.

Random variables usually take real values, but they can also take Boolean values, functions, or elements of any measurable space. The term random element is used to encompass all such related concepts. A related concept is that of a stochastic process, a set of random variables ordered by an index (usually time).

Definition

Intuitive concept

A random variable can be thought of as a numerical value that is affected by chance. Given a random variable, it is not possible to know with certainty the value it will take when measured or determined, although it is known that there is a probability distribution associated with the set of its possible values. For example, in a cholera epidemic, it is known that any given person may or may not become ill (an event), but it is not known which of the two events will occur; it can only be said that there is a certain probability that the person will get sick.

To work rigorously with random variables in general, and to give the results of a large number of random experiments a statistical treatment, it is necessary to quantify the outcomes by assigning a real number to each possible result of the experiment. In this way, a functional relationship is established between the elements of the sample space associated with the experiment and the real numbers.

Formal definition

A random variable (r.v.) $X$ is a real-valued function defined on the probability space $(\Omega, \mathcal{A}, P)$ associated with a random experiment:

$X : \Omega \to \mathbb{R}$

The formal definition above involves sophisticated mathematical concepts from measure theory, specifically the notion of a σ-algebra and of a probability measure. Given a probability space $(\Omega, \mathcal{A}, P)$ and a measurable space $(S, \Sigma)$, a map $X : \Omega \to S$ is a random variable if it is $(\mathcal{A}, \Sigma)$-measurable. In ordinary use, the points $\omega \in \Omega$ are not directly observable; only the value $X(\omega)$ of the variable at the point is observed, so the probabilistic element resides in not knowing the specific point $\omega$.

In most practical applications the measurable space of arrival is $(S, \Sigma) = (\mathbb{R}, \mathcal{B}(\mathbb{R}))$, so the definition becomes:

Given a probability space $(\Omega, \mathcal{A}, P)$, a real random variable is any $\mathcal{A}/\mathcal{B}(\mathbb{R})$-measurable function, where $\mathcal{B}(\mathbb{R})$ is the Borel σ-algebra on the real line.

Range of a random variable

The range of a random variable $X$, denoted by $R_X$, is the image (or range) of the function $X$, that is, the set of real values the variable can take according to the map $X$. In other words, the range of an r.v. is the image of the function by which it is defined:

$R_X = \{\, x \in \mathbb{R} \mid \exists\, \omega \in \Omega : X(\omega) = x \,\}$

Examples

Example 1

Suppose two coins are tossed. The sample space, that is, the set of possible elementary outcomes associated with the experiment, is:

$\Omega = \{\textrm{cc}, \textrm{cx}, \textrm{xc}, \textrm{xx}\}$

where c represents "heads comes up" and x "tails comes up". We can then assign to each elementary outcome of the experiment the number of heads obtained. This defines the random variable $X$ as the function

$X : \Omega \to \mathbb{R}$

given by

$\textrm{cc} \to 2$
$\textrm{cx}, \textrm{xc} \to 1$
$\textrm{xx} \to 0$

The range of this function, $R_X$, is the set

$R_X = \{0, 1, 2\}$
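
This construction is simple enough to sketch in code. Below is a minimal Python sketch (the names sample_space and X are ours, chosen for illustration) that enumerates the outcomes, applies the variable, and recovers its range and induced probabilities:

```python
from itertools import product
from collections import Counter

# Sample space of two coin tosses: 'c' = heads, 'x' = tails
sample_space = [''.join(p) for p in product('cx', repeat=2)]  # ['cc', 'cx', 'xc', 'xx']

# The random variable X assigns to each outcome the number of heads
def X(omega):
    return omega.count('c')

# Range of X: the set of values the variable can take
R_X = {X(omega) for omega in sample_space}
print(R_X)  # {0, 1, 2}

# With equiprobable outcomes, the induced probabilities P[X = x]
counts = Counter(X(omega) for omega in sample_space)
print({x: n / len(sample_space) for x, n in counts.items()})  # {2: 0.25, 1: 0.5, 0: 0.25}
```
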
Example 2

The level $X$ of precipitation recorded on a specific day of the year in a city, by a specific weather station. The set of its possible values can be represented by the interval $R_X(\Omega) = [0, \infty)$. In this case the sample space itself is more complicated, because specifying it would mean describing the full state of the atmosphere (an approximation would be to describe the set of positions and velocities of all the molecules of the atmosphere, a monumental amount of information, or to use a more or less complex model in terms of macroscopic variables, as current weather models do).

We can review the historical series of precipitation and approximate the probability distribution $F_X(x)$ of $X$ by building an approximation $\bar{F}_X(x)$. Note that in this case the probability distribution is not known; only the sample distribution (the historical series) is available, and it is conjectured that the actual distribution is not far from this approximation, $F_X(x) \approx \bar{F}_X(x)$. If the historical series is long enough, and represents a climate that does not differ significantly from the current one, the two functions will differ very little.
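
As a sketch of this idea, the empirical distribution function can be built from a historical series in a few lines of Python; the array historical_series below is hypothetical illustrative data, not a real record:

```python
import numpy as np

# Hypothetical historical series of precipitation (mm) for the chosen day
historical_series = np.array([0.0, 0.0, 1.2, 3.5, 0.0, 7.8, 2.1, 0.4, 0.0, 5.6])

def empirical_cdf(sample):
    """Return the empirical distribution function built from a sample."""
    sorted_sample = np.sort(sample)
    n = len(sorted_sample)
    def F_bar(x):
        # Fraction of observations less than or equal to x
        return np.searchsorted(sorted_sample, x, side='right') / n
    return F_bar

F_bar = empirical_cdf(historical_series)
print(F_bar(0.0))  # empirical P[X <= 0], here 0.4 (dry days)
print(F_bar(4.0))  # empirical P[X <= 4]
```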

Characterization of random variables

Types of random variables

To understand the types of random variables more fully and rigorously, it is necessary to know the definition of a discrete set. A set is discrete if it consists of a finite number of elements, or if its elements can be enumerated in a sequence with a first element, a second element, a third element, and so on (i.e., a countably infinite set without accumulation points). Random variables with values in $\mathbb{R}$ are usually classified as:

  • Discrete random variable: an r.v. is discrete if its range is a discrete set. The variable in the coin-tossing example above is discrete. Its probabilities are collected in the probability mass function. (See discrete probability distributions.)
  • Continuous random variable: an r.v. is continuous if its range is not countable. Intuitively, this means that the set of possible values of the variable covers an entire interval of the real numbers. For example, the variable that assigns their height to a person drawn from a particular population is continuous since, theoretically, any value between, say, 0 and 2.50 m is possible. (See continuous probability distributions; a code sketch contrasting the two types appears after this list.)
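
The contrast can be illustrated numerically. In the minimal sketch below, the choice of a binomial and a normal distribution is ours and purely illustrative: a discrete variable carries a probability mass function on its countable range, while a continuous one assigns probability to intervals through a density, so any single point has probability zero:

```python
from scipy.stats import binom, norm

# Discrete: number of heads in two fair coin tosses; range {0, 1, 2}
print(binom.pmf([0, 1, 2], n=2, p=0.5))  # [0.25 0.5  0.25]

# Continuous: a height-like variable X ~ N(1.70, 0.10^2), in meters;
# probabilities come from integrating the density over an interval
p = norm.cdf(1.80, loc=1.70, scale=0.10) - norm.cdf(1.60, loc=1.70, scale=0.10)
print(p)  # P[1.60 <= X <= 1.80] is approximately 0.683
```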

The preceding definitions can easily be generalized to random variables with values in $\mathbb{R}^n$ or $\mathbb{C}^n$. This does not exhaust the types of random variables: the value of a random variable can also be a partition, as happens in the Chinese restaurant stochastic process, or the set of values of a random variable can be a set of functions, as in the Dirichlet stochastic process.

Distribution function

Let $(\Omega, \mathcal{A}, \operatorname{P})$ be a probability space and $X : \Omega \to \mathbb{R}$ a random variable. The distribution function of $X$, denoted by $F_X(x)$ or simply by $F(x)$, is the function $F_X : \mathbb{R} \to [0, 1]$ defined by

$F_X(x) = \operatorname{P}[\{\omega \in \Omega : X(\omega) \leq x\}] = \operatorname{P}[X \leq x]$

which satisfies the following three conditions:

  1. $\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to \infty} F(x) = 1$
  2. It is right-continuous.
  3. It is monotone non-decreasing.

The probability distribution of an r.v. describes theoretically how the results of a random experiment vary. Intuitively, it can be seen as a list of the possible outcomes of an experiment together with the probabilities one would expect to see associated with each outcome.
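
The three conditions can be checked numerically for any concrete distribution function; here is a minimal sketch using the standard normal CDF from scipy (our choice, purely illustrative):

```python
import numpy as np
from scipy.stats import norm

F = norm.cdf  # distribution function of a standard normal variable

# Condition 1: limits at -infinity and +infinity
print(F(-1e9), F(1e9))  # 0.0 and 1.0 (to numerical precision)

# Condition 3: monotone non-decreasing on a fine grid
x = np.linspace(-5, 5, 1001)
print(bool(np.all(np.diff(F(x)) >= 0)))  # True
```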

Density function

Let $(\Omega, \mathcal{A}, \operatorname{P})$ be a probability space and $X : \Omega \to \mathbb{R}$ a random variable. The density function of $X$, typically denoted by $f_X(x)$ or simply by $f(x)$, is used to describe how probability is distributed over the possible values of the variable.

The density function is the derivative (ordinary or in the sense of distributions) of the probability distribution function $F_X(x)$; conversely, the distribution function is the integral of the density function:

$F(x) = \int_{-\infty}^{x} f(t)\, dt$

The density function of an r.v. determines the concentration of probability around the values of a continuous random variable.
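
The relation $F(x) = \int_{-\infty}^{x} f(t)\, dt$ can be verified numerically; a minimal sketch, again using the standard normal as an arbitrary concrete case:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Recover F(x) by integrating the density f from -infinity to x
x = 1.5
integral, _ = quad(norm.pdf, -np.inf, x)
print(integral)     # approximately 0.9332
print(norm.cdf(x))  # same value, directly from the distribution function
```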

Functions of random variables

Let $X$ be a random variable defined on $(\Omega, \mathcal{A}, P)$ and let $g : \mathbb{R} \to \mathbb{R}$ be a Borel-measurable function. Then $Y = g(X)$ is also a random variable on $(\Omega, \mathcal{A}, P)$, since the composition of measurable functions is measurable (this is not true, however, if $g$ is merely Lebesgue-measurable). The same procedure that allows one to pass from a probability space $(\Omega, P)$ to $(\mathbb{R}, dF_X)$ can be used to obtain the distribution of $Y$. The cumulative distribution function of $Y$ is

$F_Y(y) = \operatorname{P}[g(X) \leq y].$

If the function $g$ is invertible, that is, $g^{-1}$ exists and is monotonically increasing, then the previous relation can be extended to obtain

$F_Y(y) = \operatorname{P}[g(X) \leq y] = \operatorname{P}[X \leq g^{-1}(y)] = F_X(g^{-1}(y))$

and, working again under the same invertibility hypotheses on $g$ and additionally assuming differentiability, we can find the relation between the probability density functions by differentiating both sides with respect to $y$, obtaining

$f_Y(y) = f_X(g^{-1}(y)) \left| \frac{dg^{-1}(y)}{dy} \right|$

If $g$ is not invertible but each $y$ has a finite number of roots, then the previous relation between the probability density functions can be generalized as

$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{dg_i^{-1}(y)}{dy} \right|$

where $x_i = g_i^{-1}(y)$. The density formulas do not require $g$ to be increasing.
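
Here is a minimal sketch verifying the density formula by simulation, with the illustrative choice $g(x) = e^x$ and $X \sim N(0,1)$, so that $g^{-1}(y) = \ln y$:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Y = g(X) = exp(X) with X ~ N(0,1); g is invertible and increasing
samples_y = np.exp(rng.standard_normal(1_000_000))

# Change-of-variables density: f_Y(y) = f_X(ln y) * |d(ln y)/dy| = f_X(ln y) / y
def f_Y(y):
    return norm.pdf(np.log(y)) / y

# Check: P[a <= Y <= b] estimated by simulation vs. the integral of f_Y
a, b = 0.5, 2.0
monte_carlo = np.mean((samples_y >= a) & (samples_y <= b))
exact, _ = quad(f_Y, a, b)
print(monte_carlo, exact)  # both approximately 0.512
```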

Example 1

Let $X$ be a continuous random variable and $Y = X^2$; then

$F_Y(y) = \operatorname{P}[Y \leq y] = \operatorname{P}[X^2 \leq y]$

If $y < 0$ then $\operatorname{P}[X^2 \leq y] = 0$, so

$F_Y(y) = 0 \quad \text{if} \quad y < 0$

If $y \geq 0$ then

$\operatorname{P}[X^2 \leq y] = \operatorname{P}[|X| \leq \sqrt{y}] = \operatorname{P}[-\sqrt{y} \leq X \leq \sqrt{y}]$

therefore

$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \quad \text{if} \quad y \geq 0$
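
For a concrete numerical check, take $X \sim N(0,1)$ (an illustrative assumption); the identity then reproduces the χ² distribution with one degree of freedom, derived in Example 3 below:

```python
import numpy as np
from scipy.stats import norm, chi2

# F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y)) with X ~ N(0,1)
y = 2.0
lhs = norm.cdf(np.sqrt(y)) - norm.cdf(-np.sqrt(y))
print(lhs)               # 0.8427...
print(chi2.cdf(y, df=1))  # same: X^2 is chi-squared with 1 degree of freedom
```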

Example 2

Let $X$ be a random variable with cumulative distribution function

$F_X(x) = \operatorname{P}[X \leq x] = \frac{1}{(1 + e^{-x})^{\theta}}$

where $\theta > 0$ is a parameter. Consider the random variable $Y = \ln(1 + e^{-X})$; then

$F_Y(y) = \operatorname{P}[Y \leq y] = \operatorname{P}[\ln(1 + e^{-X}) \leq y] = \operatorname{P}[X \geq -\ln(e^{y} - 1)]$

The expression above can be computed in terms of the cumulative distribution function of $X$ as

$\begin{aligned} F_Y(y) &= \operatorname{P}[X \geq -\ln(e^{y} - 1)] \\ &= 1 - \operatorname{P}[X < -\ln(e^{y} - 1)] \\ &= 1 - F_X(-\ln(e^{y} - 1)) \\ &= 1 - \frac{1}{\left(1 + e^{\ln(e^{y} - 1)}\right)^{\theta}} \\ &= 1 - e^{-\theta y} \end{aligned}$

which corresponds to the cumulative distribution function of the exponential distribution (with rate parameter $\theta$).
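
A simulation sketch of this result: we draw $X$ by inverse-transform sampling from $F_X$ (inverting $F_X(x) = u$ gives $x = -\ln(u^{-1/\theta} - 1)$ for $u$ uniform), transform, and compare with an exponential of rate $\theta$; the value $\theta = 2.5$ is arbitrary:

```python
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(1)
theta = 2.5

# Sample X by inverse transform from F_X(x) = (1 + e^{-x})^{-theta}
u = rng.uniform(size=1_000_000)
x = -np.log(u ** (-1 / theta) - 1)

# Transform and compare Y = ln(1 + e^{-X}) with Exponential(rate = theta)
y = np.log1p(np.exp(-x))
print(np.mean(y), 1 / theta)                                # both approximately 0.4
print(np.mean(y <= 0.5), expon.cdf(0.5, scale=1 / theta))   # approximately equal
```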

Example 3

Suppose that $X$ is a random variable with $X \sim N(0,1)$, so that its density function is given by

$f_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$

Consider the random variable $Y = X^2$; we can obtain the density function of $Y$ using the change-of-variables formula:

$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{dg_i^{-1}(y)}{dy} \right|$

In this case the change of variables is not monotonic, because each value of $Y$ has two associated values of $X$ (one positive and one negative). However, by symmetry, both values transform identically, that is,

$f_Y(y) = 2 f_X(g^{-1}(y)) \left| \frac{dg^{-1}(y)}{dy} \right|$

The inverse transformation is

$x = g^{-1}(y) = \sqrt{y}$

its derivative is

$\frac{dg^{-1}(y)}{dy} = \frac{1}{2\sqrt{y}}$

then

$f_Y(y) = 2 \cdot \frac{1}{\sqrt{2\pi}}\, e^{-y/2} \cdot \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}\, e^{-y/2}$

which corresponds to the density function of the χ² distribution with one degree of freedom.
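
A quick Monte Carlo check of the derivation: squaring standard normal draws and comparing with the χ² density with one degree of freedom:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Simulate Y = X^2 with X ~ N(0,1)
y_samples = rng.standard_normal(1_000_000) ** 2

# Density derived above
def f_Y(y):
    return np.exp(-y / 2) / np.sqrt(2 * np.pi * y)

# It coincides with the chi-squared density with one degree of freedom
y_grid = np.array([0.5, 1.0, 2.0, 4.0])
print(f_Y(y_grid))
print(chi2.pdf(y_grid, df=1))  # same values

# The simulation agrees: P[0.4 <= Y <= 0.6]
print(np.mean((y_samples >= 0.4) & (y_samples <= 0.6)))
print(chi2.cdf(0.6, df=1) - chi2.cdf(0.4, df=1))  # approximately equal
```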

Parameters related to a random variable

The density function or probability distribution of a random variable (r.v.) exhaustively contains all the information about the variable. However, it is convenient to summarize its main characteristics with a few numerical values. Among these are the expectation and the variance (although additional statistical parameters are needed to fully characterize the probability distribution).

Expectation

The mathematical expectation (or simply expectation, or expected value) of a random variable is the sum, over all events, of the product of the probability of each event and its value. If all events are equally likely, the expectation is the arithmetic mean. For a discrete random variable with support $x_1, x_2, \ldots, x_n$ and probabilities given by the probability function $p(x_i)$, the expectation is calculated as:

$\operatorname{E}[X] = \sum_{i=1}^{n} x_i\, p(x_i)$

For a continuous random variable, the expectation is calculated via the integral of all values weighted by the density function $f(x)$:

$\operatorname{E}[X] = \int_{-\infty}^{\infty} x f(x)\, dx$

or

$\operatorname{E}[X] = \int_{\Omega} X \,\mathrm{d}P$

The expectation is also often denoted by $\mu = \operatorname{E}[X]$.

In games of chance, the concept of expectation is commonly associated with the average or long-run expected winnings.
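
Both formulas are easy to evaluate directly; a minimal sketch computing the expectation of a fair die (discrete) and of a standard normal variable (continuous, via numerical integration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Discrete case: expectation of a fair six-sided die
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)
print(np.sum(values * probs))  # 3.5, the arithmetic mean of equally likely values

# Continuous case: E[X] = integral of x f(x) dx, with X ~ N(0,1)
expectation, _ = quad(lambda x: x * norm.pdf(x), -np.inf, np.inf)
print(expectation)  # approximately 0.0
```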

Variance

The variance is a measure of the dispersion of a random variable $X$ around its expectation $\operatorname{E}[X]$. It is defined as the expectation of the transformation $\left(X - \operatorname{E}[X]\right)^2$, that is,

$\operatorname{Var}(X) = \operatorname{E}\left[\left(X - \operatorname{E}[X]\right)^2\right]$

Its square root is the standard deviation,

$\sigma = \sqrt{\operatorname{Var}(X)}$

so that

$\sigma^2 = \operatorname{Var}(X)$
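
Continuing the die example, a minimal sketch computing the variance as the expectation of $(X - \operatorname{E}[X])^2$:

```python
import numpy as np

# Variance of a fair die as the expectation of (X - E[X])^2
values = np.arange(1, 7)
probs = np.full(6, 1 / 6)

mu = np.sum(values * probs)                # E[X] = 3.5
var = np.sum((values - mu) ** 2 * probs)   # Var(X) = 35/12, approximately 2.9167
sigma = np.sqrt(var)                       # standard deviation
print(mu, var, sigma)
```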

Higher order moments

Given a continuous probability distribution, the set of its moments completely characterizes the distribution. Two of these moments have already appeared: the expected value coincides with the first-order moment, while the variance can be expressed as a combination of the second-order moment and the square of the first-order moment. In general, the moment of order $n$ of a real random variable with probability density defined almost everywhere is computed as:

$M_X^{(n)} = \operatorname{E}[X^n] = \int_{\mathbb{R}} x^n f_X(x)\, dx$

These moments can be obtained from the $n$-th derivatives of the characteristic function $\varphi_X(x)$ associated with the variable $X$:

$\frac{d^{n}\varphi_X(0)}{dx^{n}} = i^{n}\, \operatorname{E}[X^{n}]$

or, analogously, from the moment generating function:

$M_X^{(n)}(0) = \frac{d^{n}M_X(0)}{dx^{n}}$
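
A minimal sketch computing the first few moments of a standard normal variable (an illustrative choice) by direct integration of $x^n f_X(x)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# n-th moment of X ~ N(0,1) by direct integration of x^n f_X(x)
def moment(n):
    value, _ = quad(lambda x: x ** n * norm.pdf(x), -np.inf, np.inf)
    return value

print([round(moment(n), 4) for n in range(1, 5)])
# [0.0, 1.0, 0.0, 3.0]: odd moments vanish, E[X^2] = 1, E[X^4] = 3
```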
