Concave utility reinforcement learning with zero-constraint violations

M Agarwal, Q Bai, V Aggarwal - arXiv preprint arXiv:2109.05439, 2021 - arxiv.org
We consider the problem of tabular infinite-horizon concave utility reinforcement learning (CURL) with convex constraints. For this, we propose a model-based learning algorithm that also achieves zero constraint violations. Assuming that the concave objective and the convex constraints have a solution interior to the set of feasible occupation measures, we solve a tighter optimization problem to ensure that the constraints are never violated despite the imprecise model knowledge and model stochasticity. We use a Bellman error-based analysis for tabular infinite-horizon setups, which allows analyzing stochastic policies. Combining the Bellman error-based analysis with the tighter optimization problem, for $T$ interactions with the environment, we obtain a high-probability regret guarantee for the objective which grows as $\tilde{O}(1/\sqrt{T})$, excluding other factors. The proposed method can be applied to optimistic algorithms to obtain high-probability regret bounds, and can also be used with posterior sampling algorithms to obtain looser Bayesian regret bounds but with a significant improvement in computational complexity.
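To make the tightening idea concrete, below is a minimal sketch (not the authors' implementation) of the kind of occupation-measure program the abstract describes: maximize a concave utility of the occupation measure subject to a convex constraint that is tightened by a slack $\epsilon$, so that under imprecise model knowledge the true constraint still holds. The MDP data (P_hat, r, c), the cost threshold C_max, the slack epsilon, and the specific concave utility are all illustrative assumptions, not taken from the paper.

```python
# Sketch of a tightened occupation-measure program for constrained CURL.
# Assumptions (not from the paper): a small tabular MDP with an estimated
# transition model P_hat, a concave utility log(average reward), one convex
# (here linear) cost constraint tightened by a slack epsilon.
import cvxpy as cp
import numpy as np

S, A = 4, 2                                      # states, actions
rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(S), size=(S, A))   # P_hat[s, a, s'], rows sum to 1
r = rng.random((S, A))                           # rewards feeding the concave utility
c = rng.random((S, A))                           # per-step costs
C_max = 0.8                                      # constraint threshold (assumed)
epsilon = 0.05                                   # tightening slack; requires an interior solution

lam = cp.Variable((S, A), nonneg=True)           # occupation measure lambda(s, a)

# Stationarity (flow conservation) under the estimated model:
# sum_a lambda(s', a) == sum_{s,a} P_hat[s, a, s'] * lambda(s, a) for every s'.
inflow = cp.hstack([cp.sum(cp.multiply(P_hat[:, :, sp], lam)) for sp in range(S)])
constraints = [cp.sum(lam) == 1, cp.sum(lam, axis=1) == inflow]

# Tightened convex constraint: average cost at most C_max - epsilon, not C_max,
# so model error of size up to epsilon cannot push the true cost above C_max.
constraints.append(cp.sum(cp.multiply(c, lam)) <= C_max - epsilon)

# Concave utility of the occupation measure (illustrative choice).
objective = cp.Maximize(cp.log(cp.sum(cp.multiply(r, lam))))

prob = cp.Problem(objective, constraints)
prob.solve()
print("utility:", prob.value)
print("cost slack:", C_max - float(np.sum(c * lam.value)))
```

The interior-solution assumption in the abstract is what guarantees this tightened program stays feasible: if the optimum sits strictly inside the constraint set, a small enough $\epsilon$ leaves a nonempty feasible region while absorbing the estimation error.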