Talk:Long-range dependence

From Wikipedia, the free encyclopedia

Whether or not you merge these, it's helpful to keep the term "long dependence" searchable & closely associated here as it's one Mandelbrot uses; also helps those trying to connect the distribution to its practical significance.

Mandelbrot more commonly used the term "Joseph Effect" for LRD and "Noah Effect" for Heavy Tails. I would not like to see them merged anyway. LRD is a different but related phenomenon. "The long tail" seems to be a slightly meaningless marketing version of the idea of "heavy tail".--Richard Clegg 23:24, 16 February 2006 (UTC)[reply]
I agree. The terms should not have been merged. They are different, as well as their applications.

The two terms should not be merged. Long-range dependence should follow from a temporal phenomenon (thus the notion of dependence over a long time period), rather than having something to do with independent draws from a heavy-tailed distribution.


Cleanup


Currently all the images in this article seem to be redlinks - can someone knowledgeable about Wikipedia check to see if they have been renamed, deleted or what... --Neo 21:12, 15 November 2006 (UTC)

Citation 4 is a broken link. 193.184.145.2 (talk) 13:37, 23 February 2010 (UTC) Harri 23.2.2010[reply]

Needs a rewrite


This article conflates heavy-tailed distributions with long-range dependency. One can cause the other, but there is no inherent logical need for a heavy-tailed distribution to be caused by long-range dependency effects. -- The Anome 23:43, 23 November 2006 (UTC)[reply]

The definition given here for a heavy-tailed distribution, although it is used by a few authors, is not the commonly used definition, and excludes heavy-tailed distributions such as Weibull, Log-normal, and many more. See the heavy-tailed article for more information. PoochieR (talk) 09:30, 24 January 2008 (UTC)[reply]

Moved large section


I have moved a large slice of what was here to Self-similar process as it did not fit well under this title, which has a rather more general meaning. Melcombe (talk) 17:30, 29 January 2009 (UTC)[reply]

Hurst parameter

Any pure random process has H = 0.5

I am skeptical of this. Is the Hurst parameter the same as the Hurst exponent of self-similar processes? In that case, a "pure random" process (what?) *might* mean a process with stationary independent increments, in which case H can take any value greater than 0.5 - Brownian motion corresponds to 0.5 —Preceding unsigned comment added by 138.38.106.191 (talk) 11:13, 18 January 2011 (UTC)[reply]

In the Brownian case (I think both editors are correct) these "raggedness" ranges occur:

• if H = 1/2 then the process is in fact a Brownian motion or Wiener process;
• if H > 1/2 then the increments of the process are positively correlated;
• if H < 1/2 then the increments of the process are negatively correlated.
Pdecalculus (talk) 20:05, 10 August 2013 (UTC)[reply]

From the article: The Hurst parameter H is a measure of the extent of long-range dependence in a time series (while it has another meaning in the context of self-similar processes).
Are they really that different? I would think they're essentially the same. 2A02:1210:2642:4A00:85A5:3302:64C8:5866 (talk) 09:03, 28 October 2023 (UTC)[reply]

Long memory


Added redirects and disambig corrections for the more current terms, wiki had no mention of LRD in disambigs, and their LM links all related to neurons etc. BTW, as Engineers we often use H=.5 as a rule of thumb, perhaps it comes from Brownian?! Pseudo random cellular automata also use the value, so I think both editors above are correct, and it certainly CAN go higher in calcs by CAS systems and matrices. Pdecalculus (talk) 19:42, 10 August 2013 (UTC)[reply]