Geometric distribution
In probability theory and statistics, the geometric distribution is either of the following two discrete probability distributions:
- The distribution of the number $X \in \{1, 2, 3, \dots\}$ of Bernoulli trials needed to obtain the first success.
- The distribution of the number $X \in \{0, 1, 2, \dots\}$ of failures before the first success.
Definition
Notation
If a discrete random variable $X$ follows a geometric distribution with parameter $0 < p \leq 1$, then we write $X \sim \operatorname{Geometric}(p)$ or simply $X \sim \operatorname{Geo}(p)$.
Probability function
If the discrete random variable $X$ models the number of failures before the first success in a sequence of independent Bernoulli trials, each with success probability $p$, then the probability function of $X \sim \operatorname{Geometric}(p)$ is
- $\operatorname{P}[X = x] = p(1-p)^{x}$
for $x = 0, 1, 2, 3, \dots$
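As a quick illustration (a minimal sketch, not part of the formal presentation; the parameter value p = 0.3 and the helper name geometric_pmf are arbitrary choices), the probability function under the failures-before-the-first-success convention can be evaluated directly, and its values over the support sum to 1:

```python
# Sketch: probability function of X ~ Geometric(p), where X counts the
# number of failures before the first success (support 0, 1, 2, ...).
p = 0.3  # arbitrary success probability, 0 < p <= 1

def geometric_pmf(x, p):
    """P[X = x] = p * (1 - p)**x for x = 0, 1, 2, ..."""
    return p * (1 - p) ** x

# A long partial sum over the support should already be very close to 1.
total = sum(geometric_pmf(x, p) for x in range(200))
print(total)  # ~1.0
```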
Distribution function
If $X \sim \operatorname{Geometric}(p)$, then the distribution function is given by
- $\operatorname{P}[X \leq x] = \sum_{k=0}^{x} p(1-p)^{k} = p \sum_{k=0}^{x} (1-p)^{k} = p\left(\frac{1-(1-p)^{x+1}}{1-(1-p)}\right) = 1-(1-p)^{x+1}$
for $x = 0, 1, 2, 3, \dots$
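As a sketch (again under the failures convention; the value p = 0.3 and the helper name geometric_cdf are arbitrary), the closed form of the distribution function can be compared with the partial sums of the probability function:

```python
# Sketch: the closed form 1 - (1-p)**(x+1) should match the partial sums
# of the probability function p * (1-p)**k (failures convention).
p = 0.3  # arbitrary choice

def geometric_cdf(x, p):
    """P[X <= x] = 1 - (1 - p)**(x + 1) for x = 0, 1, 2, ..."""
    return 1 - (1 - p) ** (x + 1)

for x in range(6):
    partial_sum = sum(p * (1 - p) ** k for k in range(x + 1))
    assert abs(partial_sum - geometric_cdf(x, p)) < 1e-12
    print(x, round(geometric_cdf(x, p), 6))
```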
Properties
If $X \sim \operatorname{Geometric}(p)$, where $X$ models the number of failures before the first success, then the random variable $X$ satisfies the following properties:
Mean
The mean of $X$, when $X$ models the number of trials needed to obtain the first success, is given by
- $\operatorname{E}[X] = \frac{1}{p}$
If $X$ instead counts the number of failures before the first success, then $\operatorname{E}[X] = \frac{1-p}{p}$. The value $1/p$ is easily obtained from the definition of expectation:
- $\operatorname{E}[X] = \sum_{x=1}^{\infty} x\,p(1-p)^{x-1} = p \sum_{x=1}^{\infty} x(1-p)^{x-1} = p \sum_{x=1}^{\infty} \frac{d}{dp}\left(-(1-p)^{x}\right) = p\left(-\frac{d}{dp} \sum_{x=1}^{\infty} (1-p)^{x}\right) = p\left(-\frac{d}{dp}\left(\frac{1-p}{p}\right)\right) = p\left(\frac{d}{dp}\left(1-\frac{1}{p}\right)\right) = p\,\frac{1}{p^{2}} = \frac{1}{p}$
where we used the geometric series
- $\sum_{n=0}^{\infty} \alpha^{n} = \frac{1}{1-\alpha}$
for $|\alpha| < 1$.
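A Monte Carlo sketch of this result (the value of p, the sample size, the seed and the helper name are arbitrary choices): simulating Bernoulli trials until the first success, the sample mean of the number of trials should approach $1/p$.

```python
import random

# Sketch: E[X] = 1/p when X counts the number of trials needed to obtain
# the first success in independent Bernoulli(p) trials.
random.seed(0)
p = 0.25
n_samples = 100_000

def trials_until_success(p):
    trials = 1
    while random.random() >= p:  # failure occurs with probability 1 - p
        trials += 1
    return trials

sample_mean = sum(trials_until_success(p) for _ in range(n_samples)) / n_samples
print(sample_mean, 1 / p)  # both close to 4.0
```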
Variance
The variance of $X$ is given by
- $\operatorname{Var}(X) = \frac{1-p}{p^{2}}$
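A numerical sketch of this formula (p and the truncation point are arbitrary): truncating the series for $\operatorname{E}[X]$ and $\operatorname{E}[X^{2}]$ under the failures convention and comparing $\operatorname{E}[X^{2}] - \operatorname{E}[X]^{2}$ with $(1-p)/p^{2}$.

```python
# Sketch: numerical check of Var(X) = (1 - p) / p**2 by truncating the
# series for E[X] and E[X**2] under the failures convention.
p = 0.4
xs = range(500)  # truncation point; the tail is negligible here

pmf = [p * (1 - p) ** x for x in xs]
mean = sum(x * q for x, q in zip(xs, pmf))
second_moment = sum(x * x * q for x, q in zip(xs, pmf))
variance = second_moment - mean ** 2
print(variance, (1 - p) / p ** 2)  # both ~3.75
```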
Probability generating function
The probability generating function (p.g.f.) is given by
- $G_{X}(t) = \frac{p}{1-t(1-p)}$
for $|t| < (1-p)^{-1}$.
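A quick numerical sketch (p and t are arbitrary, with $|t| < (1-p)^{-1}$): the series $\operatorname{E}[t^{X}]$ under the failures convention agrees with the closed form.

```python
# Sketch: compare the series E[t**X] = sum over x of t**x * p * (1-p)**x
# (failures convention) with the closed form p / (1 - t*(1 - p)).
p, t = 0.3, 0.8  # arbitrary, with |t| < 1 / (1 - p)

series = sum(t ** x * p * (1 - p) ** x for x in range(1000))
closed_form = p / (1 - t * (1 - p))
print(series, closed_form)  # both ~0.6818
```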
Moment generating function
When $X$ counts the number of trials needed to obtain the first success, the moment generating function is given by
- $M_{X}(t) = \frac{pe^{t}}{1-(1-p)e^{t}}$
for $t < -\ln(1-p)$.
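A similar numerical sketch (p and t are arbitrary, with $t < -\ln(1-p)$): the series $\operatorname{E}[e^{tX}]$ under the trials convention agrees with the closed form.

```python
import math

# Sketch: compare the series E[exp(t*X)] = sum over x >= 1 of
# exp(t*x) * p * (1-p)**(x-1) (trials convention) with the closed form
# p*exp(t) / (1 - (1 - p)*exp(t)).
p, t = 0.5, 0.2  # arbitrary; -ln(1 - 0.5) ~ 0.693, so t = 0.2 is admissible

series = sum(math.exp(t * x) * p * (1 - p) ** (x - 1) for x in range(1, 2000))
closed_form = p * math.exp(t) / (1 - (1 - p) * math.exp(t))
print(series, closed_form)  # both ~1.569
```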
Memorylessness
The geometric distribution has the memorylessness property, that is, for any $m, n \geq 0$,
- $\operatorname{P}[X > m+n \mid X > m] = \operatorname{P}[X > n]$.
Its continuous analogue, the exponential distribution, also has the memorylessness property. This means that if we repeat the experiment until the first success, then, given that the first success has not yet occurred, the conditional distribution of the number of additional trials needed does not depend on how many failures have already been observed. The die or coin being tossed has no "memory" of those failures.
The geometric distribution is the only discrete distribution with the memorylessness property.
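A Monte Carlo sketch of memorylessness (the values of p, m, n, the seed and the sample size are arbitrary): estimating $\operatorname{P}[X > m+n \mid X > m]$ and $\operatorname{P}[X > n]$ from simulated trial counts, the two estimates should agree.

```python
import random

# Sketch: check P[X > m + n | X > m] = P[X > n], with X the number of
# trials until the first success in independent Bernoulli(p) trials.
random.seed(1)
p, m, n = 0.2, 3, 5

samples = []
for _ in range(200_000):
    trials = 1
    while random.random() >= p:
        trials += 1
    samples.append(trials)

exceed_m = [x for x in samples if x > m]
conditional = sum(x > m + n for x in exceed_m) / len(exceed_m)
unconditional = sum(x > n for x in samples) / len(samples)
print(conditional, unconditional)  # both ~ (1 - p)**n = 0.8**5 ~ 0.328
```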
Related distributions
- A geometric distribution $Y$ is a particular case of the negative binomial distribution, with parameter $k = 1$. More generally, if $Y_{1}, Y_{2}, \dots, Y_{k}$ are independent random variables, each geometrically distributed with parameter $p$, then
- $Z = \sum_{m=1}^{k} Y_{m} \sim \operatorname{BN}(k, p)$
- That is, $Z$ follows a negative binomial distribution with parameters $k$ and $p$ (see the simulation sketch after this list).
- The geometric distribution is a special case of the compound Poisson distribution.
- If $Y_{1}, Y_{2}, \dots, Y_{r}$ are independent geometrically distributed random variables (with possibly different success parameters $p_{m}$), then their minimum
- $W = \min_{m} Y_{m}$
- is also geometrically distributed, with parameter
- $p = 1 - \prod_{m}(1-p_{m})$.
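A simulation sketch of these two relations (trials convention; the parameters, seed and sample size are arbitrary choices): the sample mean of the sum of $k$ independent $\operatorname{Geometric}(p)$ variables should approach $k/p$, the mean of $\operatorname{BN}(k, p)$ in this convention, and the minimum of independent geometrics with parameters $p_{m}$ should behave like a $\operatorname{Geometric}\bigl(1 - \prod_{m}(1-p_{m})\bigr)$ variable.

```python
import random

random.seed(2)

def geometric_trials(p):
    """Number of Bernoulli(p) trials until the first success (support 1, 2, ...)."""
    trials = 1
    while random.random() >= p:
        trials += 1
    return trials

n_samples = 100_000

# 1) Sum of k independent Geometric(p) variables: its mean should be k / p,
#    the mean of a negative binomial BN(k, p) in the trials convention.
k, p = 4, 0.3
sums = [sum(geometric_trials(p) for _ in range(k)) for _ in range(n_samples)]
print(sum(sums) / n_samples, k / p)  # both ~13.3

# 2) Minimum of independent geometrics with parameters p_m: it should be
#    geometric with parameter 1 - prod(1 - p_m).
p_values = [0.1, 0.2, 0.25]
p_min = 1 - (1 - 0.1) * (1 - 0.2) * (1 - 0.25)  # = 0.46
minima = [min(geometric_trials(q) for q in p_values) for _ in range(n_samples)]
print(sum(minima) / n_samples, 1 / p_min)  # both ~2.17
```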