Questions tagged [dynamic-programming]
68 questions
22 votes, 3 answers
When Optimal Control fails (?)
In order to "ask my question", I have to solve a model first. I will omit some steps, but this post will still unavoidably be very long, so it is also a test of whether this community likes this kind of question.
Before starting, I…
Alecos Papadopoulos (31,608)
15 votes, 1 answer
Solving the Hamilton-Jacobi-Bellman equation; necessary and sufficient for optimality?
Consider the following differential equation
\begin{align}
\dot x(t)=f(x(t),u(t))
\end{align}
where $x$ is the state and $u$ the control variable. The solution is given by
\begin{align}
x(t)=x_0 + \int^t_0f(x(s),u(s))ds.
\end{align}
where…
clueless (1,509)
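For reference, the HJB equation mentioned in the title can be stated for a standard discounted infinite-horizon problem; the objective $F$ and discount rate $\rho$ below are generic placeholders I am assuming, not taken from the (truncated) question:
\begin{align}
\rho V(x) = \max_{u}\left\{ F(x,u) + V'(x)f(x,u) \right\}
\end{align}
A sufficiently smooth solution $V$, combined with a verification theorem, gives sufficiency; necessity in general requires weaker (viscosity) solution concepts.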
12 votes, 6 answers
References to learn continuous-time dynamic programming
Does anyone know of good references to learn continuous-time dynamic programming? The references don't have to be books. They could be links to online resources as well. Links to clear, concise discussions of even just the basics would be helpful.
jmbejara (9,087)
8 votes, 1 answer
Guess and Verify
In dynamic programming, the method of undetermined coefficients is sometimes known as "guess and verify." I've periodically heard there are canonical guesses one might make.
In particular, I've seen
$V(k) = A + B\ln(k)$
$V(k) =…
Pat W. (395)
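The log guess in the excerpt can be verified in the canonical log-utility growth model; the functional forms below ($u(c)=\ln c$, capital accumulation $k' = zk^{\alpha} - c$) are the standard textbook illustration, not taken from the question itself:
\begin{align}
V(k) = \max_{c}\ \ln c + \beta V(zk^{\alpha} - c)
\end{align}
Guessing $V(k) = A + B\ln k$, the first-order condition $1/c = \beta B/(zk^{\alpha}-c)$ gives $c = zk^{\alpha}/(1+\beta B)$; substituting back and matching the coefficient on $\ln k$ yields $B = \alpha/(1-\alpha\beta)$, which verifies the guess.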
7 votes, 1 answer
How to verify the value function in a nonzero-sum two-player differential game?
There are two agents $i=1,2$. The state $k$ is governed by $\tau_i\in[0,1]$ where
\begin{align}
\dot{k} = f(k,\tau_1,\tau_2).
\end{align}
Define the value function of player $i$ by
\begin{align}
v_i(k) := \sup_{\tau_i}\int^\infty_0{e^{-\rho…
clueless (1,509)
7 votes, 3 answers
Solution Method for Infinite-Horizon Maximization Problem
Full disclosure: this problem was part of a final exam that none of our class could really solve definitively. Below the general form is a specific utility function we worked with that I'll try to replicate my work for. Any help on the solution…
Kitsune Cavalry (6,389)
6 votes, 2 answers
Multiple equilibria: which one to select?
There are two agents $i=1,2$. Consider the following program
\begin{align}
&V_1(x_0) := \max_u \int^\infty_0 e^{-\rho t}F_1(x(t),u(t),v(t))dt\\
&V_2(x_0) := \max_v \int^\infty_0 e^{-\rho t}F_2(x(t),u(t),v(t))dt\\
s.t.~&\dot…
clueless (1,509)
6 votes, 1 answer
How do I begin to approach this dynamic discrete choice model?
I'm working through an old problem set (that sadly I don't have solutions for) and I got stuck. It is a dynamic model of entrepreneurship and invention. I'm looking for guidance on this model as well as references or papers that discuss it. Here's…
Michael A (161)
6 votes, 1 answer
Time costs and the St. Petersburg paradox
In the St. Petersburg paradox, we end up with the problem that a rational agent should be willing to play the game for any wager, if we look at expected income or utility of expected income. The standard "solution" to this is to instead look at…
user169
5 votes, 0 answers
Optimization in discrete time
I have solved continuous-time optimization problems from control theory, for example:
$\max(\min)\;V[u(t)]=\int_0^T f(t,x(t),u(t))\,dt$
subject to: $\dot x=g(t,x(t),u(t))$
Where:
$x(t)$: state variable.
$u(t)$: control variable.
And…
manifold (715)
5 votes, 2 answers
One-shot deviation principle for infinite repeated games and dynamic programming
When future returns are discounted by a constant factor, the one-shot deviation principle holds for both repeated games and dynamic programming.
This is because, in repeated games, a one-shot deviation refers to a single history, so on equilibrium…
Metta World Peace (1,416)
5 votes, 0 answers
Solution to Dynamic Programming (Bellman Equation) Problem
Could someone please provide pointers on how to solve the below? If any theoretical approximations are possible, that would be very helpful. If numerical solutions are the right approach, could you suggest how we can do this in R (restricted to free…
user249613 (151)
4 votes, 1 answer
Conjecture Steady State from limit properties
The question is related to this thread. I'd like to derive a unique steady state for an optimal control problem.
Consider the following program
\begin{align}
&V(x_0) := \max_u \int^\infty_0 e^{-\rho t}F(x(t),u(t))dt\\
s.t.~&\dot…
clueless (1,509)
4 votes, 1 answer
Dynamic programming in infinite horizon model
Using an infinite-horizon model, a dynamic programming approach solves the model via a fixed point: $V = \Gamma(V)$.
How do I interpret the meaning of $V$? For example, when we choose an investment level for the next period $k_{t+1}$ given the existing…
hrkshr (195)
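One way to build intuition for the fixed point $V=\Gamma(V)$ is to compute it by iterating the Bellman operator on a grid. A minimal sketch in Python, assuming a log-utility growth model (the functional forms, parameters, and grid are illustrative assumptions, not from the question):

```python
import numpy as np

# Value function iteration for the fixed point V = Gamma(V).
# Illustrative model (an assumption, not from the question):
# u(c) = log(c), next-period capital k' = A*k**alpha - c, discount beta.
A, alpha, beta = 1.0, 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)        # grid of capital levels k
V = np.zeros_like(grid)                   # initial guess for V

def bellman(V):
    """One application of the Bellman operator Gamma on the grid."""
    c = A * grid[:, None] ** alpha - grid[None, :]   # c implied by each (k, k')
    util = np.where(c > 0, np.log(np.maximum(c, 1e-300)), -np.inf)
    return (util + beta * V[None, :]).max(axis=1)    # maximize over k'

for _ in range(1000):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-8:  # sup-norm convergence check
        break
    V = V_new
```

Because $\Gamma$ is a contraction with modulus $\beta$, the iterates converge geometrically to the unique fixed point; the resulting $V(k)$ is the discounted lifetime value of behaving optimally starting from state $k$.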
4 votes, 0 answers
Value function iteration with habit
I would like to know how I could write a value function when there are habits in preferences. I have the following equations:
$$
u\left(C_{t}, H_{t}, L_{t}\right)=\frac{\left(C_{t} / H_{t}^{\kappa}\right)^{1-\gamma}}{1-\gamma}-\theta \frac{L_{t}^{1+1…
BAL (377)