Ciara Pike-Burke - A unifying view of optimism in episodic reinforcement learning

Date: September 2, 2021

Author: Hrvoje Stojic

Abstract

The principle of optimism in the face of uncertainty underpins many theoretically successful reinforcement learning algorithms. In this paper we provide a general framework for designing, analyzing and implementing such algorithms in the episodic reinforcement learning problem. This framework is built upon Lagrangian duality, and demonstrates that every model-optimistic algorithm that constructs an optimistic MDP has an equivalent representation as a value-optimistic dynamic programming algorithm. These two classes of algorithms have typically been thought of as distinct, with model-optimistic algorithms admitting a cleaner probabilistic analysis and value-optimistic algorithms being easier to implement and thus more practical. With the framework developed in this paper, we show that it is possible to get the best of both worlds by providing a class of algorithms which have a computationally efficient dynamic-programming implementation and also a simple probabilistic analysis. Besides capturing many existing algorithms in the tabular setting, our framework can also address large-scale problems under realizable function approximation, where it enables a simple model-based analysis of some recently proposed methods.
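For readers unfamiliar with the two families contrasted in the abstract, below is a minimal sketch of the value-optimistic side: bonus-based optimistic backward induction in a tabular episodic MDP, in the spirit of algorithms such as UCBVI. The Hoeffding-style bonus, the clipping to the horizon, and the constant c are illustrative assumptions for this sketch, not the paper's specific construction; the paper's contribution is to show, via Lagrangian duality, that this kind of bonus-based dynamic programming is equivalent to planning in an optimistic model.

import numpy as np

def optimistic_backward_induction(P_hat, R_hat, counts, H, c=1.0):
    """Value-optimistic dynamic programming (UCBVI-style sketch).

    P_hat  : empirical transition probabilities, shape (S, A, S)
    R_hat  : empirical mean rewards in [0, 1], shape (S, A)
    counts : visit counts N(s, a), shape (S, A)
    H      : episode horizon
    c      : bonus scale; in theory set from a confidence level (assumed here)
    """
    S, A, _ = P_hat.shape
    Q = np.zeros((H, S, A))
    V = np.zeros(S)  # V_H = 0 at the end of the episode
    # Exploration bonus shrinks as (s, a) is visited more often.
    bonus = c * H * np.sqrt(1.0 / np.maximum(counts, 1))
    for h in range(H - 1, -1, -1):
        Q[h] = R_hat + bonus + P_hat @ V         # optimistic Bellman backup
        V = np.minimum(Q[h].max(axis=1), H - h)  # clip to the trivial upper bound
    return Q

The greedy policy pi_h(s) = argmax_a Q[h, s, a] is then executed for one episode, the counts and empirical estimates are updated, and the procedure repeats. A model-optimistic algorithm would instead maximize the value over a confidence set of transition kernels; the equivalence established in the paper says the two views compute the same optimistic values.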


Notes


  • An arXiv pre-print is available here.

  • Dr Ciara Pike-Burke is a Lecturer in Statistics at Imperial College London. Her website can be found here.


Related Seminars

Mickael Binois - Leveraging replication in active learning

We were recently joined by Mickael Binois, to talk about 'Leveraging replication in active learning'.

Jun 24, 2024

Ilija Bogunovic - From Data to Confident Decisions

We were recently joined by Ilija Bogunovic, to talk about 'Robust and Efficient Algorithmic Decision Making'.

Jun 13, 2024

Dario Azzimonti - Preference learning with Gaussian processes

We were recently joined by Dario Azzimonti, to talk about 'Preference learning with Gaussian processes'.

May 23, 2024

Mojmír Mutný - Optimal Experiment Design in Markov Chains

We were recently joined by Mojmír Mutný (ETH Zurich), to talk about 'Optimal Experiment Design in Markov Chains'.

Mar 28, 2024
