
Behrens–Fisher distribution

From Wikipedia, the free encyclopedia

In statistics, the Behrens–Fisher distribution, named after Ronald Fisher and Walter Behrens, is a parameterized family of probability distributions arising from the solution of the Behrens–Fisher problem proposed first by Behrens and several years later by Fisher. The Behrens–Fisher problem is that of statistical inference concerning the difference between the means of two normally distributed populations when the ratio of their variances is not known (and in particular, it is not known that their variances are equal).[1]

Definition


The Behrens–Fisher distribution is the distribution of a random variable of the form

  T = T_2 \cos\theta - T_1 \sin\theta

where T1 and T2 are independent random variables, each with a Student's t-distribution with respective degrees of freedom ν1 = n1 − 1 and ν2 = n2 − 1, and θ is a constant. Thus the family of Behrens–Fisher distributions is parametrized by ν1, ν2, and θ.
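Because the distribution has no simple closed form, it is often handled numerically. A minimal Monte Carlo sketch of drawing from the random variable defined above (the degrees of freedom and angle are illustrative values, not taken from the text):

```python
import numpy as np

def behrens_fisher_sample(nu1, nu2, theta, size, rng):
    """Draw Monte Carlo samples of T2*cos(theta) - T1*sin(theta),
    where T1 ~ t(nu1) and T2 ~ t(nu2) are independent."""
    t1 = rng.standard_t(nu1, size)
    t2 = rng.standard_t(nu2, size)
    return t2 * np.cos(theta) - t1 * np.sin(theta)

# Illustrative parameters: nu1 = 9, nu2 = 14, theta = pi/4.
rng = np.random.default_rng(0)
draws = behrens_fisher_sample(9, 14, np.pi / 4, size=100_000, rng=rng)
```

Since T1 and T2 are each symmetric about zero, the resulting distribution is symmetric about zero for any θ, which the sample mean of the draws reflects.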

Derivation


Suppose it were known that the two population variances are equal, and samples of sizes n1 and n2 are taken from the two populations:

  X_{1,k} \sim \text{i.i.d.}\ N(\mu_1, \sigma^2), \quad k = 1, \ldots, n_1,
  X_{2,k} \sim \text{i.i.d.}\ N(\mu_2, \sigma^2), \quad k = 1, \ldots, n_2,

where "i.i.d." means independent and identically distributed random variables and N denotes the normal distribution. The two sample means are

  \bar X_1 = \frac{1}{n_1} \sum_{k=1}^{n_1} X_{1,k}, \qquad \bar X_2 = \frac{1}{n_2} \sum_{k=1}^{n_2} X_{2,k}.

The usual "pooled" unbiased estimate of the common variance σ² is then

  S_P^2 = \frac{(n_1 - 1) S_1^2 + (n_2 - 1) S_2^2}{n_1 + n_2 - 2},

where S1² and S2² are the usual unbiased (Bessel-corrected) estimates of the two population variances.

Under these assumptions, the pivotal quantity

  T = \frac{\bar X_2 - \bar X_1 - (\mu_2 - \mu_1)}{S_P \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}

has a t-distribution with n1 + n2 − 2 degrees of freedom. Accordingly, one can find a confidence interval for μ2 − μ1 whose endpoints are

  \bar X_2 - \bar X_1 \pm A \cdot S_P \sqrt{\frac{1}{n_1} + \frac{1}{n_2}},

where A is an appropriate quantile of the t-distribution.
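The equal-variance interval just described can be computed directly. A sketch using SciPy's t quantile, with hypothetical sample data (the helper name and the sample parameters are illustrative):

```python
import numpy as np
from scipy import stats

def pooled_t_interval(x1, x2, conf=0.95):
    """Classical equal-variance confidence interval for mu2 - mu1."""
    n1, n2 = len(x1), len(x2)
    # Pooled unbiased estimate of the common variance.
    sp2 = ((n1 - 1) * np.var(x1, ddof=1)
           + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    # Quantile A of the t-distribution with n1 + n2 - 2 degrees of freedom.
    a = stats.t.ppf((1 + conf) / 2, df=n1 + n2 - 2)
    centre = np.mean(x2) - np.mean(x1)
    half = a * np.sqrt(sp2 * (1 / n1 + 1 / n2))
    return centre - half, centre + half

# Hypothetical samples from two normal populations with equal variance.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, size=30)
x2 = rng.normal(0.5, 1.0, size=40)
lo, hi = pooled_t_interval(x1, x2)
```

The interval is symmetric about the observed difference of sample means, as the ± form of the endpoints requires.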

However, in the Behrens–Fisher problem, the two population variances are not known to be equal, nor is their ratio known. Fisher considered[citation needed] the pivotal quantity

  T = \frac{\bar X_2 - \bar X_1 - (\mu_2 - \mu_1)}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}. \qquad (1)

This can be written as

  T = T_2 \cos\theta - T_1 \sin\theta,

where

  T_1 = \frac{\bar X_1 - \mu_1}{S_1 / \sqrt{n_1}} \quad \text{and} \quad T_2 = \frac{\bar X_2 - \mu_2}{S_2 / \sqrt{n_2}}

are the usual one-sample t-statistics and

  \tan\theta = \frac{S_1 / \sqrt{n_1}}{S_2 / \sqrt{n_2}},

and one takes θ to be in the first quadrant. The algebraic details are as follows:

  T = \frac{(\bar X_2 - \mu_2) - (\bar X_1 - \mu_1)}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}
    = \frac{\bar X_2 - \mu_2}{S_2 / \sqrt{n_2}} \left( \frac{S_2 / \sqrt{n_2}}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}} \right)
      - \frac{\bar X_1 - \mu_1}{S_1 / \sqrt{n_1}} \left( \frac{S_1 / \sqrt{n_1}}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}} \right).

The fact that the sum of the squares of the expressions in parentheses above is 1 implies that they are the cosine and sine of some angle, so that the squares are the squared cosine and squared sine of that angle.

The Behrens–Fisher distribution is actually the conditional distribution of the quantity (1) above, given the values of the quantities labeled cos θ and sin θ. In effect, Fisher conditions on ancillary information.

Fisher then found the "fiducial interval" whose endpoints are

  \bar X_2 - \bar X_1 \pm A \sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}},

where A is the appropriate percentage point of the Behrens–Fisher distribution. Fisher claimed[citation needed] that the probability that μ2 − μ1 is in this interval, given the data (ultimately the Xs), is the probability that a Behrens–Fisher-distributed random variable is between −A and A.
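Because the percentage point A has no simple closed form, it can be approximated by Monte Carlo. The sketch below (helper names are hypothetical, and the quantile is a simulation estimate rather than an exact value) estimates A from simulated Behrens–Fisher draws and builds the interval:

```python
import numpy as np

rng = np.random.default_rng(1)

def bf_quantile(nu1, nu2, theta, p, n=200_000):
    """Monte Carlo estimate of the p-quantile of the
    Behrens-Fisher distribution with parameters nu1, nu2, theta."""
    t1 = rng.standard_t(nu1, n)
    t2 = rng.standard_t(nu2, n)
    return np.quantile(t2 * np.cos(theta) - t1 * np.sin(theta), p)

def fiducial_interval(x1, x2, conf=0.95):
    """Fisher's fiducial interval for mu2 - mu1, with the
    Behrens-Fisher percentage point estimated by simulation."""
    n1, n2 = len(x1), len(x2)
    se1 = np.std(x1, ddof=1) / np.sqrt(n1)   # S1 / sqrt(n1)
    se2 = np.std(x2, ddof=1) / np.sqrt(n2)   # S2 / sqrt(n2)
    theta = np.arctan2(se1, se2)             # tan(theta) = se1/se2, first quadrant
    a = bf_quantile(n1 - 1, n2 - 1, theta, (1 + conf) / 2)
    centre = np.mean(x2) - np.mean(x1)
    half = a * np.hypot(se1, se2)            # sqrt(S1^2/n1 + S2^2/n2)
    return centre - half, centre + half

# Hypothetical samples with unequal variances.
data_rng = np.random.default_rng(2)
x1 = data_rng.normal(0.0, 1.0, size=10)
x2 = data_rng.normal(1.0, 2.0, size=15)
lo, hi = fiducial_interval(x1, x2)
```

As with the pooled interval, the endpoints are symmetric about the observed difference of sample means; only the multiplier A and the standard-error term differ.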

Fiducial intervals versus confidence intervals


Bartlett[citation needed] showed that this "fiducial interval" is not a confidence interval because it does not have a constant coverage rate. Fisher did not consider that a cogent objection to the use of the fiducial interval.[citation needed]


Further reading

  • Kendall, Maurice G.; Stuart, Alan (1973). The Advanced Theory of Statistics, Volume 2: Inference and Relationship (3rd ed.). Griffin. ISBN 0-85264-215-6. (Chapter 21)

References

  1. ^ Kim, Seock-Ho; Cohen, Allan S. (December 1998). "On the Behrens-Fisher Problem: A Review". Journal of Educational and Behavioral Statistics. 23 (4): 356–377. doi:10.3102/10769986023004356. ISSN 1076-9986. S2CID 85462934.