Q-Gaussian distribution: Difference between revisions

From Wikipedia, the free encyclopedia
{{About|the Tsallis q-Gaussian|a different q-analog|Gaussian q-distribution}}
{{lowercase title}}
{{Probability distribution |
name =q-Gaussian|
type =density|
pdf_image =[[File:The PDF of QGaussian.svg|325px|Probability density plots of ''q''-Gaussian distributions]]|
parameters =<math>q < 3 </math> [[shape parameter|shape]] ([[Real number|real]]) <br /> <math> \beta > 0 </math> ([[Real number|real]]) |
support =<math>x \in (-\infty; +\infty)\!</math> for <math>1\le q < 3 </math> <br /> <math>x \in \left[\pm {1 \over \sqrt{\beta(1-q)}}\right] </math> for <math>q < 1 </math>|
pdf =<math>{\sqrt{\beta} \over C_q} e_q({-\beta x^2})</math>|
mean =<math>0</math> for <math>q < 2</math>, otherwise undefined|
median =<math>0</math>|
mode =<math>0</math>|
variance =<math>{1 \over {\beta (5-3q)}}</math> for <math>q < {5 \over 3}</math> <br /> <math>\infty</math> for <math>{5 \over 3} \le q < 2</math> <br /> undefined for <math>2 \le q < 3</math>|
skewness =<math>0</math> for <math>q < {3 \over 2}</math>|
kurtosis =<math>6\,{q-1 \over 7-5q}</math> for <math>q < {7 \over 5}</math>|
}}


The '''''q''-Gaussian''' is a probability distribution arising from the maximization of the [[Tsallis entropy]] under appropriate constraints. It is one example of a [[Tsallis distribution]]. The ''q''-Gaussian is a generalization of the Gaussian in the same way that Tsallis entropy is a generalization of standard [[Entropy (statistical thermodynamics)|Boltzmann–Gibbs entropy]] or [[Entropy (information theory)|Shannon entropy]].<ref>Tsallis, C. Nonadditive entropy and nonextensive statistical mechanics-an overview after 20 years. Braz. J. Phys. 2009, 39, 337–356</ref> The [[normal distribution]] is recovered as ''q''&nbsp;→&nbsp;1.


The ''q''-Gaussian has been applied to problems in the fields of [[statistical mechanics]], [[geology]], [[anatomy]], [[astronomy]], [[economics]], [[finance]], and [[machine learning]]. The distribution is often favored for its [[heavy tails]] in comparison to the Gaussian for 1 < ''q'' < 3. For <math> q < 1 </math> the ''q''-Gaussian distribution is the PDF of a bounded [[random variable]], which makes it more suitable than the Gaussian distribution for modeling the effect of external stochasticity in biology and other domains.<ref>d'Onofrio A. (ed.) Bounded Noises in Physics, Biology, and Engineering. Birkhauser (2013)</ref> A generalized [[q-analog|''q''-analog]] of the classical [[central limit theorem]]<ref name="Umarov2008">{{cite journal |last1=Umarov |first1=Sabir |author2=Tsallis, Constantino |author3=Steinberg, Stanly |year=2008 |title=On a ''q''-Central Limit Theorem Consistent with Nonextensive Statistical Mechanics |journal=Milan j. math. |volume=76 |issue= |pages=307–328 |publisher=Birkhauser Verlag |doi=10.1007/s00032-008-0087-y |url=http://www.cbpf.br/GrupPesq/StatisticalPhys/pdftheo/UmarovTsallisSteinberg2008.pdf |accessdate=2011-07-27}}</ref> was proposed in 2008, in which the independence constraint for the [[Independent and identically distributed random variables|i.i.d. variables]] is relaxed to an extent defined by the ''q'' parameter, with independence being recovered as ''q''&nbsp;→&nbsp;1. However, a proof of such a theorem is still lacking.<ref name="Hilhorst">{{Citation |last1=Hilhorst |first1=H.J.|year=2010 |title=Note on a ''q''-modified central limit theorem |journal=Journal of Statistical Mechanics: Theory and Experiment|volume=2010 |issue=10 |pages= P10023|doi=10.1088/1742-5468/2010/10/P10023|arxiv=1008.4259|postscript=.}}</ref>


In the heavy-tail regions, the distribution is equivalent to the [[Student's t-distribution|Student's ''t''-distribution]] with a direct mapping between ''q'' and the [[degrees of freedom (statistics)|degrees of freedom]]. A practitioner using one of these distributions can therefore parameterize the same distribution in two different ways. The choice of the ''q''-Gaussian form may arise if the system is [[Nonextensive entropy|non-extensive]], or if there is no natural connection to small sample sizes.


==Characterization==


===Probability density function===
The ''q''-Gaussian has the probability density function<ref name="Umarov2008"/>


: <math>f(x) = {\sqrt{\beta} \over C_q} e_q(-\beta x^2) </math>
where
:<math>e_q(x) = [1+(1-q)x]^{1 \over 1-q}</math>


is the [[Tsallis statistics#q-exponential|''q''-exponential]] and the normalization factor <math> C_q</math> is given by


:<math>C_q = {{2 \sqrt{\pi} \Gamma\left({1 \over 1-q}\right)} \over {(3-q) \sqrt{1-q} \Gamma\left({3-q \over 2(1-q)}\right)}} \text{ for } -\infty < q < 1 </math>
and
:<math>C_q = { {\sqrt{\pi} \Gamma\left({3-q \over 2(q-1)}\right)} \over {\sqrt{q-1} \Gamma\left({1 \over q-1}\right)}} \text{ for }1 < q < 3 .</math>


Note that for <math> q <1 </math> the ''q''-Gaussian distribution is the PDF of a bounded [[random variable]].
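For concreteness, the density and its piecewise normalization can be evaluated directly from the formulas above. The following Python sketch is illustrative only; function names such as <code>q_exp</code> and <code>c_q</code> are not from the cited literature:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q)); 0 where the bracket is negative."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # outside the bounded support for q < 1
    return base ** (1.0 / (1.0 - q))

def c_q(q):
    """Normalization factor C_q, piecewise in q (valid for q < 3)."""
    g = math.gamma
    if q < 1.0:
        return (2.0 * math.sqrt(math.pi) * g(1.0 / (1.0 - q))
                / ((3.0 - q) * math.sqrt(1.0 - q) * g((3.0 - q) / (2.0 * (1.0 - q)))))
    if q == 1.0:
        return math.sqrt(math.pi)  # Gaussian limit
    return (math.sqrt(math.pi) * g((3.0 - q) / (2.0 * (q - 1.0)))
            / (math.sqrt(q - 1.0) * g(1.0 / (q - 1.0))))

def q_gaussian_pdf(x, q, beta):
    """f(x) = sqrt(beta)/C_q * e_q(-beta x^2)."""
    return math.sqrt(beta) / c_q(q) * q_exp(-beta * x * x, q)

# q -> 1 with beta = 1/2 recovers the standard normal density
print(q_gaussian_pdf(0.0, 1.0, 0.5))  # about 0.3989 = 1/sqrt(2*pi)
```

Setting ''q''&nbsp;=&nbsp;1 with ''β''&nbsp;=&nbsp;1/2 reproduces the standard normal density, while ''q''&nbsp;=&nbsp;2 reduces to a Cauchy density with scale <math>1/\sqrt{\beta}</math>.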


== Entropy ==
Just as the [[normal distribution]] is the maximum [[information entropy]] distribution for fixed values of the first moment <math>\operatorname{E}(X)</math> and second moment <math>\operatorname{E}(X^2)</math> (with the fixed zeroth moment <math>\operatorname{E}(X^0)=1</math> corresponding to the normalization condition), the ''q''-Gaussian distribution is the maximum [[Tsallis entropy]] distribution for fixed values of these three moments.
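Here the functional being maximized, subject to the three moment constraints, is the continuous [[Tsallis entropy]]

:<math>S_q[p] = \frac{1}{q-1}\left(1 - \int p(x)^q \, dx\right),</math>

which reduces to the Boltzmann–Gibbs–Shannon entropy <math>-\int p(x) \ln p(x) \, dx</math> in the limit ''q''&nbsp;→&nbsp;1.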


==Related distributions==
===Student's ''t''-distribution===
While it can be justified by an interesting alternative form of entropy, statistically it is a scaled reparametrization of the [[Student's t-distribution|Student's ''t''-distribution]] introduced by W. Gosset in 1908 to describe small-sample statistics. In Gosset's original presentation the degrees of freedom parameter ''ν'' was constrained to be a positive integer related to the sample size, but it is readily observed that Gosset's density function is valid for all real values of ''ν''.{{citation needed|date=February 2012}} The scaled reparametrization introduces the alternative parameters ''q'' and ''β'' which are related to ''ν''.


Given a Student's ''t''-distribution with ''ν'' degrees of freedom, the equivalent ''q''-Gaussian has
:<math>q = \frac{\nu+3}{\nu+1}\text{ with }\beta = \frac{1}{3-q}</math>


with inverse

:<math>\nu = \frac{3-q}{q-1}.</math>

Whenever <math>\beta \ne {1 \over {3-q}}</math>, the function is simply a scaled version of Student's ''t''-distribution.

It is sometimes argued that the distribution is a generalization of Student's ''t''-distribution to negative and/or non-integer degrees of freedom. However, the theory of Student's ''t''-distribution extends trivially to all real degrees of freedom, where the support of the distribution is now compact rather than infinite in the case of ''ν''&nbsp;<&nbsp;0.{{citation needed|date=February 2012}}
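The equivalence can be checked pointwise. A minimal Python sketch (function names are illustrative, and the heavy-tail density is restated for self-containedness) comparing the two parameterizations for ''ν''&nbsp;=&nbsp;4:

```python
import math

def student_t_pdf(x, nu):
    """Student's t density with nu degrees of freedom (nu may be any positive real)."""
    c = math.gamma((nu + 1.0) / 2.0) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2.0))
    return c * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)

def q_gaussian_pdf(x, q, beta):
    """q-Gaussian density in the heavy-tail regime 1 < q < 3."""
    c_q = (math.sqrt(math.pi) * math.gamma((3.0 - q) / (2.0 * (q - 1.0)))
           / (math.sqrt(q - 1.0) * math.gamma(1.0 / (q - 1.0))))
    return math.sqrt(beta) / c_q * (1.0 + (q - 1.0) * beta * x * x) ** (-1.0 / (q - 1.0))

nu = 4.0
q = (nu + 3.0) / (nu + 1.0)   # q = 7/5
beta = 1.0 / (3.0 - q)        # beta = 1/(3 - q)
for x in (-2.0, 0.0, 0.5, 3.0):
    # the two parameterizations describe the same density
    assert abs(student_t_pdf(x, nu) - q_gaussian_pdf(x, q, beta)) < 1e-12
```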


===Three-parameter version===
As with many distributions centered on zero, the ''q''-Gaussian can be trivially extended to include a location parameter ''μ''. The density then becomes defined by


:<math>{\sqrt{\beta} \over C_q} e_q({-\beta (x-\mu)^2}) .</math>


==Generating random deviates==
The [[Box–Muller transform]] has been generalized to allow random sampling from ''q''-Gaussians.<ref>W. Thistleton, J.A. Marsh, K. Nelson and C. Tsallis, Generalized Box–Muller method for generating ''q''-Gaussian random deviates, IEEE Transactions on Information Theory 53, 4805 (2007)</ref> The standard Box–Muller technique generates pairs of independent normally distributed variables from equations of the following form.


:<math>Z_1 = \sqrt{-2 \ln(U_1)} \cos(2 \pi U_2) </math>
:<math>Z_2 = \sqrt{-2 \ln(U_1)} \sin(2 \pi U_2) </math>


The generalized Box–Muller technique generates pairs of ''q''-Gaussian deviates that are not independent. In practice, only a single deviate will be generated from a pair of uniformly distributed variables. The following formula generates deviates from a ''q''-Gaussian with specified parameter ''q'' and <math> \beta = {1 \over {3-q}}</math>:
:<math>Z = \sqrt{-2 \text{ ln}_{q'}(U_1)} \text{ cos}(2 \pi U_2) </math>
where <math>\ln_q</math> denotes the [[Tsallis statistics#q-logarithm|''q''-logarithm]] and <math>q' = { {1+q} \over {3-q}}.</math>


These deviates can be transformed to generate deviates from an arbitrary ''q''-Gaussian by
:<math> Z' = \mu + {Z \over \sqrt{\beta (3-q)}}</math>
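The recipe above can be sketched in Python as follows; <code>log_q</code> and <code>q_gaussian_deviate</code> are hypothetical helper names, not from the cited paper:

```python
import math
import random

def log_q(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x^(1-q) - 1)/(1-q), with ln_1(x) = ln(x)."""
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian_deviate(q, beta=None, mu=0.0, rng=random):
    """Draw one q-Gaussian deviate (q < 3) by the generalized Box-Muller method.

    With beta=None the deviate keeps the method's natural beta = 1/(3 - q);
    otherwise it is rescaled to the requested beta and shifted by mu.
    """
    q_prime = (1.0 + q) / (3.0 - q)
    u1 = 1.0 - rng.random()  # uniform on (0, 1], avoids log_q(0)
    u2 = rng.random()
    z = math.sqrt(-2.0 * log_q(u1, q_prime)) * math.cos(2.0 * math.pi * u2)
    if beta is None:
        return mu + z
    # rescale from beta = 1/(3-q) to the requested beta: Z' = mu + Z/sqrt(beta(3-q))
    return mu + z / math.sqrt(beta * (3.0 - q))
```

For ''q''&nbsp;=&nbsp;1 this reduces to the standard Box–Muller transform, and for ''q''&nbsp;<&nbsp;1 the generated deviates respect the bounded support <math>|x| \le 1/\sqrt{\beta(1-q)}</math>.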


==Applications==


=== Physics ===
It has been shown that the momentum distribution of cold atoms in dissipative optical lattices is a ''q''-Gaussian.<ref>{{Cite journal | last1 = Douglas | first1 = P. | last2 = Bergamini | first2 = S. | last3 = Renzoni | first3 = F. | title = Tunable Tsallis Distributions in Dissipative Optical Lattices | doi = 10.1103/PhysRevLett.96.110601 | journal = Physical Review Letters | volume = 96 | issue = 11 | year = 2006 | pmid = 16605807| pmc = |bibcode = 2006PhRvL..96k0601D | page=110601}}</ref>


The ''q''-Gaussian distribution is also obtained as the asymptotic [[probability density function]] of the position of the unidimensional motion of a mass subject to two forces: a deterministic force of the type <math>F_1(x) = - 2 x/(1-x^2)</math> (determining an infinite potential well) and a stochastic white noise force <math>F_2(t)= \sqrt{2(1-q)} \xi(t)</math>, where <math> \xi(t)</math> is a [[white noise]]. Note that in the overdamped/small-mass approximation the above-mentioned convergence fails for <math>q < 0 </math>, as recently shown.<ref>Domingo D., d'Onofrio A., Flandoli F. Boundedness vs unboundedness of a noise linked to Tsallis ''q''-statistics: The role of the overdamped approximation. Journal of Mathematical Physics 58, 033301 (2017); doi:10.1063/1.4977081</ref>


===Finance===
Financial return distributions in the New York Stock Exchange, NASDAQ and elsewhere have been interpreted as ''q''-Gaussians.<ref>L. Borland, Option pricing formulas based on a non-Gaussian stock price model, Phys. Rev. Lett. 89, 098701 (2002)</ref><ref>L. Borland, The pricing of stock options, in Nonextensive Entropy – Interdisciplinary Applications, eds. M. Gell-Mann and C. Tsallis (Oxford University Press, New York, 2004)</ref>


== See also ==
* [[Tsallis entropy]]
* [[Tsallis distribution]]
* [[q-exponential distribution|''q''-exponential distribution]]


== Notes ==
{{Reflist}}

Revision as of 04:12, 19 December 2017
