Talk:Golden-section search: Difference between revisions

SineBot (talk | contribs): m Signing comment by Nosirrom - "→Problems with algorithm?: new section"
Nosirrom (talk | contribs): Deleted section I just added, as it was foolishly redundant with others' earlier comments



== Problems with algorithm? ==

The recursive algorithm presented asserts that f(x) cannot equal f(b), but surely this isn't correct: even though f is unimodal, f(x) can equal f(b) if the minimum lies between x and b. Or am I missing something?

It seems needlessly confusing that the description and diagram use x[i] for the various points, but the algorithm uses a, b, c. And that's made worse by the diagram using a, b, c not for points, but for intervals. — Preceding unsigned comment added by Nosirrom (talk • contribs) 11:32, 20 November 2013 (UTC)

Revision as of 11:35, 20 November 2013


Motivation, etc

Fibonacci search was actually introduced by J. Kiefer in 1953. Kiefer also introduced the golden section search, i.e., using the limit of F(n-1)/F(n) ~ 0.618 as the constant ratio. — Preceding unsigned comment added by 109.66.185.182 (talk) 17:14, 24 December 2011 (UTC)[reply]

I have changed a few things on the recent addition.

The motivation previously provided might not be as clear as it could be, so I have added a sentence at the beginning. The new motivation was, first of all, wrong, and secondly, it removed the explanation of the intervals a, b, c in the diagram.

The selection of the "probe" point at the golden ratio does not guarantee the fastest possible convergence; the fastest possible convergence ratio is not known a priori. What it does guarantee is a constant narrowing ratio for the search interval, so that at each step no sub-interval is favored over the other in the search (see the sketch below).

I think the other additions are excellent. PAR 21:38, 13 February 2007 (UTC)[reply]
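To illustrate the constant narrowing ratio mentioned above, here is a minimal standalone sketch (the names invphi, lo, hi and the quadratic test function are my own, not the article's notation): whichever sub-interval is discarded, the bracket width shrinks by the same factor 1/phi ≈ 0.618 at every step.

#include <cmath>
#include <cstdio>

int main() {
    const double invphi = (std::sqrt(5.0) - 1.0) / 2.0;       // 1/phi ~ 0.618
    auto f = [](double x) { return (x - 0.7) * (x - 0.7); };  // any unimodal test function

    double lo = 0.0, hi = 4.0;                                 // initial search interval
    for (int i = 0; i < 15; ++i) {
        double before = hi - lo;
        double c = hi - invphi * (hi - lo);                    // lower interior probe
        double d = lo + invphi * (hi - lo);                    // upper interior probe
        if (f(c) < f(d)) hi = d; else lo = c;                  // drop the side that cannot hold the minimum
        std::printf("step %2d: width %.6f, ratio %.6f\n",
                    i, hi - lo, (hi - lo) / before);           // ratio is ~0.618 every time
    }
    std::printf("minimum near x = %.6f\n", (lo + hi) / 2.0);   // ~0.7
    return 0;
}

(For clarity this recomputes both interior points every pass; an actual implementation reuses one of them so that only one new function evaluation is needed per step.)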

Hi David - on the recent additions - yes, that's the part that was missing. It looks a lot better. PAR 22:04, 13 February 2007 (UTC)[reply]

Thanks — I'm glad we seem to have converged to something we both agree is an improvement. —David Eppstein 22:08, 13 February 2007 (UTC)[reply]

Problem with choosing the next d in example code?

d = c + resphi*(b - c)

The above line does not work for both branches the algorithm can take. —Preceding unsigned comment added by 71.237.136.91 (talk) 05:41, 30 May 2010 (UTC)[reply]
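For what it's worth, the code in the article (as translated to C++ further down this page) chooses the probe differently depending on which sub-interval is larger, so a single unconditional formula like the line above cannot cover both branches. A small sketch of that choice (the helper name probePoint is mine; resphi = 2 - phi ≈ 0.382):

#include <cmath>

// Probe point for the bracket a < b < c: always placed inside the larger
// of the two sub-intervals, a resphi-fraction of that sub-interval away from b.
double probePoint(double a, double b, double c) {
    const double resphi = 2.0 - (1.0 + std::sqrt(5.0)) / 2.0;  // ~0.382
    if (c - b > b - a)
        return b + resphi * (c - b);   // larger sub-interval is [b, c]
    else
        return b - resphi * (b - a);   // larger sub-interval is [a, b]
}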

Problem with the Termination Condition section?

I hesitate to make the change myself, but there is clearly at least a notation issue in that section. It starts out by listing , , and as the relevant variables, but in the equation we find and no . In the original text (Numerical Recipes in C, page 402), there is no and the equation is written as:

as opposed to

in this article. The fact that didn't change tells me that this is not just an off-by-1 problem. I think the issue arises from the fact that earlier in the article the notation is 1-based, not 0-based. My proposed change would be to list the formula as:

The biggest reason I didn't change it though is the following line that refers to the precision of and , and I don't know which notation scheme this is referring to (I imagine it's the 1-based one).

(sorry for the deletion in the history - realized I wasn't logged in when I wrote it, wanted to get it put back in my name so I can address comments/questions)

Why?

What benefits, if any, are there over binary search, and when? —Preceding unsigned comment added by 193.184.83.235 (talk) 10:19, 13 May 2011 (UTC)[reply]

They solve different problems. Binary search looks for a particular value in a monotonic function (one that only goes up, or only goes down) while this one looks for the value where a function that goes up and then down takes its maximum value. —David Eppstein (talk) 15:59, 13 May 2011 (UTC)[reply]
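A toy illustration of the distinction (my own example, not from the article): binary search needs a monotonic function and a target value to compare against, while golden-section search needs only a unimodal function and locates its extremum by comparing the function at interior points.

#include <cstdio>

int main() {
    // Binary search: find where the monotonically increasing f crosses a target.
    auto f = [](double x) { return x * x * x; };             // increasing on [0, 2]
    double target = 2.0, lo = 0.0, hi = 2.0;
    while (hi - lo > 1e-9) {
        double mid = (lo + hi) / 2.0;
        if (f(mid) < target) lo = mid; else hi = mid;         // keep the half containing the crossing
    }
    std::printf("f(x) = 2 at x = %.6f\n", (lo + hi) / 2.0);   // ~1.259921, the cube root of 2

    // For a unimodal g there is no target to compare a midpoint against, so
    // halving on g(mid) alone cannot tell which half holds the extremum; that
    // is why golden-section search evaluates g at two interior points instead.
    return 0;
}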

x1, x2, etc vs. a, b, c

It seems odd that the top part of the article talks about a, b, c being segment lengths, while the pseudocode algorithm assumes them to be points on the real line (the comment line notwithstanding). I think it would be much clearer to use x1, x2, etc. in the pseudocode. Am I missing something? Shabbychef (talk) 18:46, 6 February 2012 (UTC)[reply]


Just real quick: in C++ you might want to do the following (based on the provided code).

#include <cmath>

const float resphi = 2 - (1 + std::sqrt(5.0f)) / 2;

// Recursive golden-section search for the minimum of f on the bracket a < b < c,
// following the article's pseudocode; tau is the relative width tolerance.
template <typename T>
float goldenSectionSearch(float a, float b, float c, float tau, T f)
{
    float x;
    if (c - b > b - a)
        x = b + resphi * (c - b);    // probe inside the larger sub-interval [b, c]
    else
        x = b - resphi * (b - a);    // probe inside the larger sub-interval [a, b]

    if (std::fabs(c - a) < tau * (std::fabs(b) + std::fabs(x)))
        return (c + a) / 2;          // bracket narrow enough: return its midpoint

    if (f(x) < f(b)) {               // probe improves on b: keep the bracket around x
        if (c - b > b - a)
            return goldenSectionSearch(b, x, c, tau, f);
        else
            return goldenSectionSearch(a, x, b, tau, f);
    } else {                         // probe is no better: keep the bracket around b
        if (c - b > b - a)
            return goldenSectionSearch(a, b, x, tau, f);
        else
            return goldenSectionSearch(x, b, c, tau, f);
    }
}
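A hypothetical usage example for the function above (the bracket values and the quadratic test function are mine, not from the article):

#include <cstdio>

int main() {
    // Bracket 0 < 1 < 5 around the minimum of (x - 2)^2, which is at x = 2.
    float xmin = goldenSectionSearch(0.0f, 1.0f, 5.0f, 1e-4f,
                                     [](float x) { return (x - 2) * (x - 2); });
    std::printf("minimum near x = %f\n", xmin);   // expect a value close to 2
    return 0;
}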

Strictly monotonic

It seems that the code as presented does not adequately handle the case where f(x) == f(b), yet Wikipedia's definition of a unimodal function only says the function needs to be monotonic, which is defined as non-decreasing (i.e. constant values are okay). As I don't think there's any good way of remedying this, maybe the article should be made more precise instead? I wouldn't know if e.g. “strictly unimodal” would make any sense, though. Sesse (talk) 14:37, 13 August 2012 (UTC)[reply]

What is the "correct" result? f(x)==f(b) has no minimum. PAR (talk) 22:22, 13 August 2012 (UTC)[reply]
In that case, any x would do. With a function like e.g. f(x)==abs(floor(x)), there would certainly be a minimum at f(x)=1 (even if we exclude the discontinuity at x=0), but the algorithm as stated might very well wander off and pick e.g. x=3 instead. Maybe this should be emphasized with an assert(f(x) != f(b)) in the code? -Sesse (talk) 18:58, 14 August 2012 (UTC)[reply]
Can you give a specific example involving f(x)==abs(floor(x)), including lower boundary (x1), center value (x2) and upper boundary (x3) where x1<=x2<=x3 for which it "wanders off" and gives an incorrect answer? PAR (talk) 01:42, 15 August 2012 (UTC)[reply]
It's not exactly the “wandering off” problem I saw in my earlier testing (my function is actually quite a lot more complicated, and reducing it to a simple test case is not all that easy), but for f(x)=abs(ceil(x)), a=-100, b=1, c=16, tau=1e-3, the algorithm (at least when I try it in C) returns x=1.000009, whose f(x) = 2. In testing this, I also found another bug; if b=x=0, the algorithm will never terminate since the right side of the inequality is zero. You can see this with e.g. f(x) = abs(ceil(x)), a=-10, b=-1 and c=10; relatively quickly, it converges on a=b=c=0 and just loops infinitely from there. Just changing < to <= fixes that one easily enough, though. -Sesse (talk) 09:38, 15 August 2012 (UTC)[reply]
I found a slightly better example. f(x)=abs(ceil(x)), a=-1000, b=-989, c=303. At some point, this reduces to a=-3.532451, b=-1.999827, c=0.480010, for which x=-1.052613, and then the algorithm picks the wrong way by setting c=x, taking the correct solution outside the [a,c] interval. It eventually chooses to return x=-2.000174, for which f(x)=2, which obviously is wrong (e.g. x=-0.5 would give f(x)=0). -Sesse (talk) 09:55, 15 August 2012 (UTC)[reply]
As there's been no response for over a month, I'm making the change. -Sesse (talk) 22:58, 19 September 2012 (UTC)[reply]
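A self-contained sketch of how the failure case above might be reproduced (the bracket values are taken from the comments; the recursive function is my own C++ port of the article's pseudocode, so the exact point returned can differ from the figures quoted):

#include <cmath>
#include <cstdio>

static const double resphi = 2.0 - (1.0 + std::sqrt(5.0)) / 2.0;  // ~0.382

// Port of the article's recursive minimisation pseudocode.
template <typename F>
double gss(F f, double a, double b, double c, double tau) {
    double x = (c - b > b - a) ? b + resphi * (c - b)
                               : b - resphi * (b - a);
    if (std::fabs(c - a) < tau * (std::fabs(b) + std::fabs(x)))
        return (c + a) / 2.0;
    if (f(x) < f(b))   // probe improves on b: keep the bracket around x
        return (c - b > b - a) ? gss(f, b, x, c, tau) : gss(f, a, x, b, tau);
    else               // probe is no better: keep the bracket around b
        return (c - b > b - a) ? gss(f, a, b, x, tau) : gss(f, x, b, c, tau);
}

int main() {
    // f is unimodal but not strictly so: it is constant on every interval (n-1, n].
    auto f = [](double x) { return std::fabs(std::ceil(x)); };
    double xmin = gss(f, -1000.0, -989.0, 303.0, 1e-3);
    // The true minimum value 0 is attained on (-1, 0]; the comment above reports
    // that the algorithm instead returns a point near -2 with f(x) = 2.
    std::printf("returned x = %f, f(x) = %f\n", xmin, f(xmin));
    return 0;
}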

There are two algorithms both named Brent's method. The link to the article on Brent's method links to an article on root-finding, not function minimization. — Preceding unsigned comment added by Rwsaustin (talk • contribs) 20:12, 21 October 2013 (UTC)[reply]