
Talk:Golden-section search

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Av life (talk | contribs) at 01:41, 25 September 2015. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Low-priority on the project's priority scale.

Motivation, etc

Fibonacci search was actually introduced by J. Kiefer in 1953. Kiefer also introduced the golden-section search, i.e., using the limit of

F(n-1)/F(n) ~ 0.618 as the constant ratio. — Preceding unsigned comment added by 109.66.185.182 (talk) 17:14, 24 December 2011 (UTC)[reply]
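That limit can be checked numerically. A quick sketch (fib_ratio is an illustrative helper, not from the article):

```python
# Check that F(n-1)/F(n) approaches (sqrt(5) - 1)/2 ~ 0.618,
# the constant ratio used by the golden-section search.
import math

def fib_ratio(n):
    """Return F(n-1)/F(n) for the standard Fibonacci sequence."""
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return a / b

limit = (math.sqrt(5) - 1) / 2  # = 1/phi, about 0.6180339887
print(fib_ratio(30), limit)
```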

I have changed a few things on the recent addition.

The motivation previously provided might not be as clear as it could be. I have added a sentence in the beginning. The new motivation was first of all wrong, and secondly, it removed the explanation of the intervals a,b,c in the diagram.

The selection of the "probe" point in the golden ratio does not guarantee the fastest possible convergence. The fastest possible convergence ratio is not known a priori. The selection of the probe point yields a constant narrowing ratio for the search interval, so that at each step no one sub-interval is favored over another in the search.

I think the other additions are excellent. PAR 21:38, 13 February 2007 (UTC)[reply]

Hi David - on the recent additions - yes, that's the part that was missing. It looks a lot better. PAR 22:04, 13 February 2007 (UTC)[reply]

Thanks — I'm glad we seem to have converged to something we both agree is an improvement. —David Eppstein 22:08, 13 February 2007 (UTC)[reply]

Problem with choosing the next d in example code?

d = c + resphi*(b - c)

The above line does not work for both branches the algorithm can take. —Preceding unsigned comment added by 71.237.136.91 (talk) 05:41, 30 May 2010 (UTC)[reply]
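One common fix is to place the new probe in the larger of the two sub-intervals, mirroring it about the interior point. A sketch, not the article's exact code, using bracket naming a < b < c with b the interior point (which may differ from the naming in the line quoted above):

```python
# Hypothetical helper: choose the next probe point so the step works
# whichever side of the interior point b the larger sub-interval lies on.
# Bracket is a < b < c; resphi = 2 - phi ~ 0.382.
import math

resphi = 2 - (1 + math.sqrt(5)) / 2

def next_probe(a, b, c):
    if c - b > b - a:          # larger gap on the right: probe inside (b, c)
        return b + resphi * (c - b)
    else:                      # larger gap on the left: probe inside (a, b)
        return b - resphi * (b - a)
```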

Problem with the Termination Condition section?

I hesitate to make the change myself, but there is clearly at least a notation issue in that section. It starts out by listing x1, x2, x3, and x4 as the relevant variables, but in the equation we find x0 and no x4. In the original text (Numerical Recipes in C, page 402), there is no x4 and the equation is written as:

|x3 − x0| > τ(|x1| + |x2|)

as opposed to

|x3 − x0| > τ(|x1| + |x2|)

in this article. The fact that the equation didn't change tells me that this is not just an off-by-1 problem. I think the issue arises from the fact that earlier in the article the notation is 1-based, not 0-based. My proposed change would be to list the formula as:

|x3 − x1| > τ(|x2| + |x4|)

The biggest reason I didn't change it, though, is the following line that refers to the precision of x1 and x2, and I don't know which notation scheme this is referring to (I imagine it's the 1-based one).

(sorry for the deletion in the history - realized I wasn't logged in when I wrote it, wanted to get it put back in my name so I can address comments/questions)

Why?

What, if any, are the benefits over binary search, and when do they apply? —Preceding unsigned comment added by 193.184.83.235 (talk) 10:19, 13 May 2011 (UTC)[reply]

They solve different problems. Binary search looks for a particular value in a monotonic function (one that only goes up, or only goes down) while this one looks for the value where a function that goes up and then down takes its maximum value. —David Eppstein (talk) 15:59, 13 May 2011 (UTC)[reply]

x1, x2, etc vs. a, b, c

It seems odd that the top part of the article talks about a, b, c being segment lengths, while the pseudocode algorithm assumes them to be points on the real line (the comment line notwithstanding). I think it would be much clearer to use x1, x2, etc. in the pseudocode. Am I missing something? Shabbychef (talk) 18:46, 6 February 2012 (UTC)[reply]

Absolutely agree; I think it should be fixed. Also, gr = (math.sqrt(5)-1)/2 in the Python program: gr is a bad name for it, as this is not the golden ratio.

Av life (talk) 00:53, 25 September 2015 (UTC)[reply]

Strictly monotonic

It seems that the code as presented does not adequately handle the case where f(x) == f(b), yet Wikipedia's definition of a unimodal function only says the function needs to be monotonic on each side of the mode, which is defined as non-decreasing (i.e. constant stretches are okay). As I don't think there's any good way of remedying this, maybe the article should be made more precise instead? I wouldn't know if e.g. “strictly unimodal” would make any sense, though. Sesse (talk) 14:37, 13 August 2012 (UTC)[reply]

What is the "correct" result? f(x)==f(b) has no minimum. PAR (talk) 22:22, 13 August 2012 (UTC)[reply]
In that case, any x would do. With a function like e.g. f(x)==abs(floor(x)), there would certainly be a minimum at f(x)=1 (even if we exclude the discontinuity at x=0), but the algorithm as stated might very well wander off and pick e.g. x=3 instead. Maybe this should be emphasized with an assert(f(x) != f(b)) in the code? -Sesse (talk) 18:58, 14 August 2012 (UTC)[reply]
Can you give a specific example involving f(x)==abs(floor(x)), including lower boundary (x1), center value (x2) and upper boundary (x3) where x1<=x2<=x3 for which it "wanders off" and gives an incorrect answer? PAR (talk) 01:42, 15 August 2012 (UTC)[reply]
It's not exactly the “wandering off” problem I saw in my earlier testing (my function is actually quite a lot more complicated, and reducing it to a simple test case is not all that easy), but for f(x)=abs(ceil(x)), a=-100, b=1, c=16, tau=1e-3, the algorithm (at least when I try it in C) returns x=1.000009, whose f(x) = 2. In testing this, I also found another bug: if b=x=0, the algorithm will never terminate, since the right side of the inequality is zero. You can see this with e.g. f(x) = abs(ceil(x)), a=-10, b=-1 and c=10; relatively quickly, it converges on a=b=c=0 and just loops infinitely from there. Just changing < to <= fixes that one easily enough, though. -Sesse (talk) 09:38, 15 August 2012 (UTC)[reply]
I found a slightly better example. f(x)=abs(ceil(x)), a=-1000, b=-989, c=303. At some point, this reduces to a=-3.532451, b=-1.999827, c=0.480010, for which x=-1.052613, and then the algorithm picks the wrong way by setting c=x, taking the correct solution outside the [a,c] interval. It eventually chooses to return x=-2.000174, for which f(x)=2, which obviously is wrong (e.g. x=-0.5 would give f(x)=0). -Sesse (talk) 09:55, 15 August 2012 (UTC)[reply]
As there's been no response for over a month, I'm making the change. -Sesse (talk) 22:58, 19 September 2012 (UTC)[reply]
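For concreteness, here is a self-contained sketch of the recursive scheme being discussed (my reconstruction, not the article's exact code), using the strict comparison f(x) < f(b) when recursing toward the minimum. On functions with plateaus (f(x) == f(b)) this choice still only guarantees a correct answer when the function is strictly unimodal:

```python
# A sketch of a recursive golden-section minimum search over a bracket
# a < b < c with f(b) < f(a) and f(b) < f(c). Reconstruction for the
# discussion above, not the article's exact code.
import math

phi = (1 + math.sqrt(5)) / 2
resphi = 2 - phi  # about 0.381966

def golden_section_search(f, a, b, c, tau):
    # Probe the larger of the two sub-intervals.
    if c - b > b - a:
        x = b + resphi * (c - b)
    else:
        x = b - resphi * (b - a)
    # Termination test; note it can loop forever if the minimum is at 0,
    # the other bug mentioned above.
    if abs(c - a) < tau * (abs(b) + abs(x)):
        return (c + a) / 2
    if f(x) < f(b):  # strict comparison, as discussed
        if c - b > b - a:
            return golden_section_search(f, b, x, c, tau)
        else:
            return golden_section_search(f, a, x, b, tau)
    else:
        if c - b > b - a:
            return golden_section_search(f, a, b, x, tau)
        else:
            return golden_section_search(f, x, b, c, tau)
```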

There are two algorithms both named Brent's method. The link to the article on Brent's method links to an article on root-finding, not function minimization. — Preceding unsigned comment added by Rwsaustin (talk | contribs) 20:12, 21 October 2013 (UTC)[reply]

Why this algorithm sometimes doesn't find the global maximum

As you probably already know, the bisection method is guaranteed to always find a zero of the function, even if the function has several zeros.

Sometimes I want to find the maximum of a function that I suspect is not "strictly unimodal". For example, let's say I have a polynomial function p(x) that I suspect has two local maxima. Or let's say I have a continuous piecewise linear function L(x) with a finite number of pieces, none of them horizontal, but I suspect it has several maxima.

For the above functions, once I've found locations x1, x2, x3 such that x1 < x2 < x3 and f(x1) < f(x2) and f(x3) < f(x2), I know that at least one local maximum is in the range x1 to x3.

When given those points x1, x2, x3, is the golden section search guaranteed to converge on a local maximum (which may or may not be the global maximum)?

I think this article would be improved by a few words discussing what happens when this algorithm is applied to a function that is not "a strictly unimodal function". In other words, I think this article would be improved by explaining why this algorithm sometimes doesn't find the global maximum when the function is not a strictly unimodal function. --DavidCary (talk) 15:35, 17 March 2015 (UTC)[reply]
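The bracketing property in the question can be illustrated with a sketch (the two-peak test function and the golden_max helper are mine, not from the article): once x1 < x2 < x3 with f(x2) > f(x1) and f(x2) > f(x3), the search only ever shrinks that bracket, so it converges to the local maximum inside it even when the global maximum lies elsewhere.

```python
import math

invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618

def golden_max(f, lo, hi, tol=1e-6):
    """Golden-section search for a maximum bracketed in [lo, hi] (sketch).
    Function values could be cached; kept simple for illustration."""
    a, b = lo, hi
    c = b - invphi * (b - a)  # left interior point
    d = a + invphi * (b - a)  # right interior point
    while b - a > tol:
        if f(c) > f(d):       # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                 # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Two local maxima: x = 1 (f = 1) and x = 4 (f = 1.5, the global one).
def f(x):
    return max(1 - (x - 1) ** 2, 1.5 - (x - 4) ** 2)

# Bracketing the left peak keeps the search there, away from the global max.
x_star = golden_max(f, 0, 2)
```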

Is the Python program for golden-section search broken?

I may be wrong here, but it seems there are at least three problems:

1. It is inconsistent with the article: a, b are used for points and not for interval lengths, and phi/gr in the program is not the golden ratio.

2. On each step it evaluates the function at two points. Evaluating the function at only one point per step is, IMHO, the main advantage of this method over ternary search. There is commented-out code, but IMHO it creates more confusion than it adds value.

3. It doesn't work: if you try to run it on a [-z, z] interval, on the first step it will produce a bigger interval, ~[0.2z, 2.2z], instead of shrinking it.

Av life (talk) 01:41, 25 September 2015 (UTC)[reply]
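For what it's worth, a corrected sketch addressing point 2 (one new function evaluation per iteration, by reusing the surviving interior point's value) might look like the following. This is a reconstruction using the constant the program calls gr, not the article's actual code:

```python
import math

invphi = (math.sqrt(5) - 1) / 2   # 1/phi; the program's "gr", about 0.618

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b], one f-evaluation per iteration."""
    c = b - invphi * (b - a)      # left interior point
    d = a + invphi * (b - a)      # right interior point
    fc, fd = f(c), f(d)           # only two evaluations up front
    while b - a > tol:
        if fc < fd:               # minimum lies in [a, d]
            b, d, fd = d, c, fc   # old c becomes the new right interior point
            c = b - invphi * (b - a)
            fc = f(c)             # the single new evaluation
        else:                     # minimum lies in [c, b]
            a, c, fc = c, d, fd   # old d becomes the new left interior point
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

The interval strictly shrinks by a factor of 1/phi per iteration, so running it on [-z, z] cannot produce a larger interval (point 3), and renaming gr to invphi addresses the naming complaint in point 1.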