Talk:Bellman equation


This explanation is too specific. We need a general explanation of the Bellman equation. Shuroo (talk) 09:19, 29 May 2009 (UTC)

BPO

I can't find the "principle of optimality" anywhere in Bellman's 1952 paper. Perhaps this should be his 1957 book. —Preceding unsigned comment added by 129.100.144.166 (talk) 17:01, 11 December 2007 (UTC)

Bellman's definition from Dynamic Programming, 1957 (Dover 2003 edition, p. 83):

Principle of Optimality. An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

--Rinconsoleao (talk) 19:57, 25 May 2009 (UTC)
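In equation form, the principle says that an optimal plan from the initial state x_0 can be split into an optimal first decision a_0 and an optimal plan for the remaining problem starting from the resulting state x_1. A minimal sketch, assuming a per-period payoff F, discount factor β, and state transition T (these symbols are illustrative, not Bellman's own):

V(x_0) \;=\; \max_{a_0}\Big\{\, F(x_0,a_0) \;+\; \beta \max_{a_1,a_2,\ldots} \sum_{s=0}^{\infty}\beta^{s} F(x_{s+1},a_{s+1}) \,\Big\} \;=\; \max_{a_0}\Big\{\, F(x_0,a_0) \;+\; \beta\, V(x_1) \,\Big\}, \qquad x_{t+1} = T(x_t,a_t).

The second equality is just the definition of the value function applied to the continuation problem; this is the recursion that the Bellman equation expresses.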

Notation for dynamic problems

The equation given in the article is wrong, or at least not true in general. The right equation is

V(x) \;=\; \max_{y}\Big\{\, F(x,y) \;+\; \beta\, V\big(T(x,y)\big) \,\Big\}

where T(x,y) is the transformation of the state given a state x and an action y.

Shuroo —Preceding unsigned comment added by Shuroo (talk | contribs) 20:27, 27 May 2009 (UTC)

That depends on how we are defining the notation. The way the problem was set up (I didn't write it myself) defines transitions by stating that the feasible future states are $x_{t+1} \in \Gamma(x_t)$. So the 'transformation of the state' you mention is captured by that notation. Of course, I don't mind moving to clearer notation if others prefer. --Rinconsoleao (talk) 11:04, 28 May 2009 (UTC)

By the way, the notation appears to come from Stokey, Lucas, Prescott (1989), Chapter 4. --Rinconsoleao (talk) 11:07, 28 May 2009 (UTC)

The notation I have suggested is easier to understand and much more practical for dynamic programming problems with finite horizon. (It also matches the Wikipedia article on dynamic programming more closely.) —Preceding unsigned comment added by Shuroo (talk | contribs) 14:28, 28 May 2009 (UTC)
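To illustrate why a T(x, y) transition function is convenient for finite-horizon problems, here is a minimal backward-induction sketch; the payoff F, transition T, discount factor beta, and the state/action sets are placeholder assumptions for illustration, not anything taken from the article:

# Finite-horizon dynamic programming with the T(x, y) convention:
#   V_t(x) = max_y { F(x, y) + beta * V_{t+1}(T(x, y)) },  with V_H(x) = 0.
def solve_finite_horizon(states, actions, F, T, beta, horizon):
    """Backward induction; returns value tables V[t][x] and policy tables pi[t][x]."""
    V = [{x: 0.0 for x in states} for _ in range(horizon + 1)]   # V[horizon] is the terminal value
    pi = [{x: None for x in states} for _ in range(horizon)]
    for t in reversed(range(horizon)):
        for x in states:
            best_val, best_y = float("-inf"), None
            for y in actions(x):                                 # feasible actions in state x
                val = F(x, y) + beta * V[t + 1][T(x, y)]         # payoff now + discounted value of next state
                if val > best_val:
                    best_val, best_y = val, y
            V[t][x], pi[t][x] = best_val, best_y
    return V, pi

Because T(x, y) names the next state directly, the recursion and the code line up one-to-one, which is the practical advantage being claimed here.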

Notation for stochastic problems

Yet another argument for my notation is that it can be easily extended to stochastic problems (e.g., inventory management). To obtain the stochastic form of the equation we simply add E (expectation) before the second term, since the new state is a random variable:

V(x) \;=\; \max_{y}\Big\{\, F(x,y) \;+\; \beta\, \mathrm{E}\big[V\big(T(x,y)\big)\big] \,\Big\}

where T(x,y) is the transformation of the state given a state x and an action y. Shuroo (talk) 09:21, 29 May 2009 (UTC)
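For a shock with finitely many values, that expectation is just a probability-weighted average over the possible next states. A sketch in which the shock ε and its probabilities p(ε) are written out explicitly for illustration (they are implicit in the random T(x,y) above):

\mathrm{E}\big[V(T(x,y))\big] \;=\; \sum_{\varepsilon} p(\varepsilon)\, V\big(T(x,y,\varepsilon)\big),
\qquad
V(x) \;=\; \max_{y}\Big\{\, F(x,y) \;+\; \beta \sum_{\varepsilon} p(\varepsilon)\, V\big(T(x,y,\varepsilon)\big) \,\Big\}.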

I agree that the form you are proposing is probably easier to understand. Go ahead and change the notation if you want. (Notice that you will need to change the notation on the original sequence problem too, for consistency with the form of the Bellman equation you are proposing.) I'll keep an eye open for errors. --Rinconsoleao (talk) 10:21, 29 May 2009 (UTC)

Done Shuroo (talk) 08:51, 30 May 2009 (UTC)

I'm sorry, I thought your notation for the stochastic case would show more explicitly how the next state is determined. The notation T(x,y) strikes me as misleading, because it looks like a function, which suggests that it should take a unique value. A rigorous way to define it would be to write a c.d.f. of the form $G(y \mid x, a)$ representing the probability distribution of possible states $y$ conditional on the current state $x$ and action $a$. Alternatively, we could write something like $y = T(x,a) + \varepsilon$, where $\varepsilon$ is a mean-zero shock, but that is more restrictive. --Rinconsoleao (talk) 13:47, 1 June 2009 (UTC)
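Written out with such a conditional c.d.f., the stochastic Bellman equation would take a form like the following (a sketch; the payoff F and discount factor β are assumed symbols, consistent with the rest of this discussion):

V(x) \;=\; \max_{a}\Big\{\, F(x,a) \;+\; \beta \int V(y)\, \mathrm{d}G(y \mid x, a) \,\Big\}

with the integral reducing to a probability-weighted sum when the state space is discrete.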

What I wrote was a simple but valid formula that could explain the example in the next section. (I used T for the needed random variable but indicated that it was a random variable, not a function.) It seems to me that adding too much detail would make the article much less attractive, because in order to introduce conditional distributions properly you need to add some heavy machinery.

Pity that you erased it. Shuroo (talk) 18:47, 1 June 2009 (UTC)

By the way, the current article uses the version that you erased, in the example, but in an implicit way. Surely it is better to write it explicitly. Shuroo (talk) 19:32, 1 June 2009 (UTC)

Now I've added a lot more discussion, and a stochastic example with (I think) rigorous notation. Let me know if you find errors. --Rinconsoleao (talk) 18:16, 4 June 2009 (UTC)

If you want to do the stochastic part properly, it would be better to write it in terms of a Markov decision process. Shuroo (talk) 10:13, 5 June 2009 (UTC)

I wrote it this way because the G(y|x,a) notation is actually very close to your y=T(x,a) notation, except that it makes explicit the fact that we are talking about a random variable. And it allowed me to write a very similar Bellman equation for the stochastic and deterministic cases (so the stochastic subsection ends up very short and simple without being incorrect). But we could certainly include a link to MDP too.--Rinconsoleao (talk) 11:03, 6 June 2009 (UTC)
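One way to see how close the two notations are (a sketch, assuming the additive mean-zero shock mentioned above, with Φ_ε denoting the shock's c.d.f.):

G(y \mid x, a) \;=\; \Pr\big(T(x,a) + \varepsilon \le y\big) \;=\; \Phi_{\varepsilon}\big(y - T(x,a)\big)

so the deterministic case is recovered when ε is degenerate at zero, and G collapses to a point mass at T(x,a).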

In any case, if you want the article to be rigorous, at least to some extent, you certainly need to state the Markovian property of the system. Shuroo (talk) 18:57, 6 June 2009 (UTC)

Wording

"It breaks a dynamic optimization problem into simpler subproblems, writing the value of a decision problem at a certain point in time in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those initial choices, as Bellman's Principle of Optimality prescribes." this sentance is VERY long and obfuscated ; it may deserve a reformulation dont you think —Preceding unsigned comment added by 124.180.239.182 (talk) 09:35, 20 July 2010 (UTC)[reply]

Greedy?

As I understand it, the mathematical expression given under the Bellman's Principle of Optimality section is incorrect. The claim is made that it is the same as (the RHS of, presumably) the previous equation, but I don't think that's true -- as it stands, it is actually a description of a greedy solution, not a solution involving the principle of optimal substructure. The reason is that the first term greedily selects the action at time 0 giving the maximum payoff, when in fact a suboptimal action at this time may maximise the value overall by bringing the system into a different state at time 1 whose value (the maximum possible discounted sum of payoffs), when added to this (suboptimal) time-0 payoff, is greater. I'm not sure what the correct expression would be, I'm afraid. 130.123.96.22 (talk) 03:18, 15 February 2011 (UTC)

No, the expression does not refer to a 'greedy' algorithm-- in other words, it is not maximizing the first-period payoff in isolation. The control a0 is chosen so that it maximizes the first-period payoff PLUS the continuation value of the rest of the problem (which need not imply that it maximizes the first-period payoff by itself, as a greedy algorithm would). I will add another set of parentheses to make this clearer. Rinconsoleao (talk) 09:06, 15 February 2011 (UTC)
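The distinction can be made concrete with a small sketch; the payoff F, transition T, continuation value V_next, discount factor beta, and the action set are illustrative placeholders, not the article's notation:

# A greedy rule maximizes only the current payoff F(x, a).
def greedy_action(x, actions, F):
    return max(actions(x), key=lambda a: F(x, a))

# The Bellman rule maximizes current payoff PLUS the discounted continuation value,
# so it may deliberately pick an action with a lower immediate payoff.
def bellman_action(x, actions, F, T, V_next, beta):
    return max(actions(x), key=lambda a: F(x, a) + beta * V_next(T(x, a)))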

Value function

I think the term "value function" deserves an article of its own! Luizabpr (talk) 19:07, 21 May 2011 (UTC)

Broken stochastic eqn

There is something wrong with the equation defining the conditional probability, but I am not quite sure of the fix. Currently it says:

But this can't be right, since is a state, and so in general, the idea of is impossible to define. Perhaps the following was intended:

??? I dunno, please fix. linas (talk) 17:00, 28 June 2012 (UTC)

I don't understand what you mean when you say that it is impossible to define. Perhaps you are assuming that the next state is chosen at t, and is therefore deterministic at t. But the Bellman equation, as written, shows that the decision-maker chooses an action a at time t, which does not exactly determine the next state. Instead, it alters the probability distribution of the next state. That's what the conditional distribution shows.
So I see no problem with the equation. Removing the discussion flag. Rinconsoleao (talk) 18:56, 19 September 2012 (UTC)