[FAN] Efficiency vs. Expressivity in Offline Reinforcement Learning
Online vs. Offline RL
Reinforcement learning (RL) is about learning through trial and error. An agent takes an action, observes the result, and receives a reward signal. Over time, a policy is learned to maximize reward. This setting is known as *online RL*, where the agent continuously interacts with the environment while learning. Online RL is flexible, but it can be expensive, slow, or unsafe in real-world settings, e.g., deep-sea exploration or space environments.
In *offline RL*, the agent does not interact with the environment during training. Instead, it learns from a fixed, pre-collected dataset. This setup is generally safer and more cost-effective, especially when real-world interaction is risky. However, it is also more challenging: the agent cannot gather new data to correct its errors and must generalize solely based on the dataset's support.
Therefore, the key challenge in offline RL is optimizing the learning policy while constraining it to the available dataset. Actions outside the dataset may appear good during offline training, but can actually perform poorly online. To tackle this challenge,
We need effective "policy", "constraint", and "value" estimators.
Expressive Offline RL
How does prior work make these three components effective? Given the use of neural networks, these function approximators should be “expressive” enough, just as in other fields of machine learning (ML).
(1) Expressive Policy
Let’s start with a simple yet expressive formulation of the policy. The agent’s action can be sampled using the deterministic function $a = \mu_\theta(s, z)$, where $s$ is the state and $z$ is any vector modeling the stochasticity of the policy. One standard approach is to sample $z$ from a normal distribution, i.e., $z \sim \mathcal{N}(0, I_d)$, where $d$ is the dimension of the action space. Samples from $\mu_\theta$ form the learning policy distribution $\pi_\theta$, i.e., $a \sim \pi_\theta(\cdot \mid s)$. Here, the push-forward function $\mu_\theta(s, \cdot)$ maps the distribution of $z$ to the state-conditional distribution of actions, i.e., $\pi_\theta(\cdot \mid s) = \mu_\theta(s, \cdot)_{\#}\, \mathcal{N}(0, I_d)$. Even with deterministic outputs, our policy becomes stochastic thanks to the noise input $z$.
This formulation has been widely used since DPG and DDPG. Here, $\pi_\theta$ becomes more expressive than standard Gaussian policies, since it can in principle model arbitrary action distributions. Gaussian policies, i.e., $\pi_\theta(\cdot \mid s) = \mathcal{N}(\mu_\theta(s), \sigma_\theta(s)^2)$, are limited to unimodal distributions.
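To make this concrete, here is a minimal PyTorch sketch of such a noise-conditioned policy (the architecture, sizes, and names are illustrative assumptions, not taken from any specific paper):

```python
import torch
import torch.nn as nn

class NoiseConditionedPolicy(nn.Module):
    """Deterministic map mu_theta(s, z): feeding Gaussian noise z alongside
    the state s makes the induced action distribution stochastic and, in
    principle, multimodal (a push-forward of N(0, I_d))."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, state: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        # One forward pass per sampled action.
        return self.net(torch.cat([state, noise], dim=-1))

policy = NoiseConditionedPolicy(state_dim=17, action_dim=6)
s = torch.randn(32, 17)          # batch of states
z = torch.randn(32, 6)           # z ~ N(0, I_d), d = action dimension
a = policy(s, z)                 # a = mu_theta(s, z), shape (32, 6)
```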
(2) Expressive Constraint
Now, we need $\pi_\theta$ not to deviate too much from the behavior in the dataset. Among various approaches, diffusion and flow models have proven highly effective for setting up such constraints. With these methods, we train a behavior policy $\pi_\beta$, which represents the policy that originally generated the offline dataset. This is then used as the policy constraint (e.g., a penalty on a divergence $D(\pi_\theta, \pi_\beta)$), forcing $\pi_\theta$ to output actions similar to the dataset. Learning $\pi_\beta$ is equivalent to a standard generative modeling problem, which explains why diffusion and flow matching are so successful here. Traditional Gaussian policies often assume a single mode and struggle to model $\pi_\beta$, whereas flow-based policies can easily capture complex, multimodal behaviors.
FQL is one good example: its constraint is $W_2(\pi_\theta, \pi_\beta)$, the Wasserstein-2 distance between the learning policy and the behavior policy trained with flow matching.
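For intuition, here is a minimal sketch of the standard flow-matching objective for training such a behavior policy on dataset actions; the linear interpolation path is the common rectified-flow choice, and `velocity_net` is an assumed placeholder module, so this is not necessarily FQL's exact setup:

```python
import torch
import torch.nn as nn

def flow_matching_bc_loss(velocity_net: nn.Module,
                          states: torch.Tensor,
                          actions: torch.Tensor) -> torch.Tensor:
    """Behavior cloning via flow matching: regress the velocity field
    v_beta(x_t, t | s) onto the straight-line direction (a - z) between
    prior noise z and the dataset action a."""
    z = torch.randn_like(actions)              # prior sample, z ~ N(0, I)
    t = torch.rand(actions.shape[0], 1)        # random time in [0, 1]
    x_t = (1 - t) * z + t * actions            # linear interpolation path
    target = actions - z                       # ground-truth velocity
    pred = velocity_net(torch.cat([states, x_t, t], dim=-1))
    return ((pred - target) ** 2).mean()

# Usage sketch: velocity_net maps [s, x_t, t] to a velocity in action space.
state_dim, action_dim = 17, 6
velocity_net = nn.Sequential(
    nn.Linear(state_dim + action_dim + 1, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)
loss = flow_matching_bc_loss(velocity_net,
                             torch.randn(32, state_dim),
                             torch.randn(32, action_dim))
```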
(3) Expressive Value
Next, is there a better way to estimate future returns other than with expectations? Yes, indeed. Distributional RL is one line of work that models more expressive values. Instead of simply estimating the expected sum of rewards, distributional critics estimate the full distribution of returns. Consider two scenarios: a guaranteed return of 0, versus a 50% chance of +100 with a 50% chance of -100. A standard critic evaluates these as equivalent ($0 = 0.5 \cdot 100 + 0.5 \cdot (-100)$), but a distributional critic captures the higher potential maximum return in the second scenario.
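A quick numerical check of this example (plain NumPy, not tied to any particular algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

returns_a = np.zeros(10_000)                          # guaranteed return of 0
returns_b = rng.choice([100.0, -100.0], size=10_000)  # 50/50 chance of +/-100

# A standard (expected-value) critic sees the two scenarios as identical...
print(returns_a.mean(), returns_b.mean())             # 0.0 and roughly 0.0

# ...while a distributional view (here, empirical quantiles) separates them.
print(np.quantile(returns_a, [0.1, 0.9]))             # [0. 0.]
print(np.quantile(returns_b, [0.1, 0.9]))             # [-100. 100.]
```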
A common way to represent these return distributions is with finite discrete bins or quantiles that correspond to different levels of the cumulative distribution function (CDF). But, yeah… that is quite hard to parse at first… Let’s walk through a simple example to build some intuition. Consider the following environment:
Here, starting from the initial state $s_0$, the number on each edge represents the reward that the agent gets for each action (left or right). Let’s first estimate the expected return, given a fixed policy that goes left and right with equal probability:
In this case, the high sparse reward of +30 is not captured effectively. However, for the distributional critics:
The highest possible return (+24) is captured at high quantile levels by the distributional quantile critic. Here, $\tau$ determines the level of the return CDF to which each quantile estimate corresponds.
Inefficiencies
While sampling actions from the noise-conditioned policy $\pi_\theta$ is highly effective and efficient, relying on the behavior flow policy $\pi_\beta$ for constraints and using distributional critics is compute-heavy.
- Policy: Great efficiency; a single forward pass of $\mu_\theta$ yields an action.
- Constraint: Behavior flow policies typically require multiple forward iterations to generate a single dataset action. Consequently, the cost of setting up the constraint scales proportionally with the number of flow steps (see the sampling sketch below).
- Value: Distributional critics process multiple return samples (such as multiple bins or quantiles), which causes the critic’s computational cost to scale linearly with the number of these samples.
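To see where the constraint cost comes from, here is a minimal sketch of Euler-style sampling from a behavior flow policy; every one of the `num_steps` iterations is a full forward pass of the velocity network (`velocity_net` is the assumed placeholder from the earlier sketch):

```python
import torch

@torch.no_grad()
def sample_behavior_action(velocity_net, state: torch.Tensor,
                           action_dim: int, num_steps: int = 10) -> torch.Tensor:
    """Generate a behavior action by Euler-integrating the learned ODE
    dx/dt = v_beta(x_t, t | s) from noise x_0 ~ N(0, I) at t = 0 to an
    action x_1 ~ pi_beta(.|s) at t = 1. Cost scales linearly in num_steps."""
    x = torch.randn(state.shape[0], action_dim)
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = torch.full((state.shape[0], 1), k * dt)
        x = x + dt * velocity_net(torch.cat([state, x, t], dim=-1))
    return x
```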
Key Observations
To summarize, higher performance in expressive offline RL comes with more computation:
An Efficiency vs. Expressivity trade-off exists in offline RL.
However, can these expressive mechanisms be made more efficient? Since the expressive policy is already modeled very efficiently, let’s focus on improving the efficiency of (1) the flow-based constraint and (2) the distributional value.
(Constraint) Is Flow Iteration Necessary?
Flow-based policy constraints require solving ordinary differential equations (ODEs).
For instance, FQL directly compares action outcomes $a^\pi$ and $a^\beta$, which are sampled from the policy distributions $\pi_\theta$ and $\pi_\beta$, respectively. Therefore, during training, FQL must iterate the behavior flow multiple times to obtain $a^\beta$, leading to more computation. But,
What if we compare directions within the flow, rather than the action outputs directly?
Our intuition is that accurate sampling of $a^\beta$ is actually not a strict requirement for successful learning. $\pi_\beta$ is merely used for “regularizing” $\pi_\theta$ to the dataset.
(Value) Should We Model CDFs?
There is actually an alternative way to model return distributions other than with CDFs.
Consider the problem of generative modeling. The goal is to find a function that maps a prior distribution to the target distribution. This function can be viewed as the push-forward map $f_{\#}$, forwarding the prior distribution (e.g., $\mathcal{N}(0, I)$) to our target distribution (e.g., the data distribution $p_{\text{data}}$). The target distribution is modeled through this push-forward function $f$, which is distinct from CDF-based modeling. Then, similarly,
What if we model return distributions with push-forwards?
Our intuition is that if we use push-forwards, all distributional critic samples have a similar meaning (i.e., each is a possible return outcome sampled randomly). Consequently, using a single critic sample may be sufficient. In contrast, different samples from the distributional quantile critic have different meanings, since each sample models a different part of the return CDF.
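As a toy illustration of the push-forward idea (purely illustrative, not the paper's construction): a single deterministic map turns Gaussian noise into the risky ±100 "return" distribution from earlier, and every output is simply one random return sample, with no fixed CDF level attached to it.

```python
import numpy as np

rng = np.random.default_rng(0)

def push_forward(z: np.ndarray) -> np.ndarray:
    """Toy push-forward map: sends N(0, 1) noise to the bimodal
    50/50 distribution over {-100, +100}."""
    return np.where(z > 0, 100.0, -100.0)

samples = push_forward(rng.standard_normal(10_000))
print(samples.mean())   # roughly 0: the expected return
print(samples.max())    # 100: the best-case return is retained
```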
Actually, Value Flows is one work that does exactly this. However, they solve ODEs for sampling the critic values, making it hard to say that efficiency has improved.
Proposed Method
Based on these insights, we propose:
Flow Anchored Noise-conditioned Q-Learning (FAN)
We use “Flow Anchoring” for expressive policy constraints, and apply “Noise-conditioned Q-Learning” for expressive values.
Constraint Using a Single Flow Iteration
For the behavior regularization part, we modify the prior objective as follows:

$$\underbrace{\mathbb{E}\left[\,\|\, a^\pi - a^\beta \,\|^2\right]}_{\text{prior (iterative ODE)}} \;\longrightarrow\; \underbrace{\mathbb{E}_{s,\, z,\, t}\left[\,\|\, (a^\pi - z) - v_\beta(x_t, t \mid s) \,\|^2\right]}_{\text{ours (single step)}}$$

Here is the variable breakdown:
- $s$: Offline dataset state
- $z$: Prior noise vector
- $t$: Random time step
- $a^\pi = \mu_\theta(s, z)$: Learning policy action
- $a^\beta$: Dataset action (iterative ODE)
- $x_t = (1 - t)\, z + t\, a^\pi$: Interpolated point at time $t$
- $a^\pi - z$: Policy direction
- $v_\beta(x_t, t \mid s)$: Behavior flow velocity field (single step)
Our work only requires a single flow step, and is guaranteed to minimize the Wasserstein-2 distance between $\pi_\theta$ and $\pi_\beta$!
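Here is a minimal sketch of this single-step constraint in PyTorch, following the notation above; the module names (`policy`, `velocity_net`) are the assumed placeholders from the earlier sketches, and the paper's exact objective may differ in details:

```python
import torch

def flow_anchoring_loss(policy, velocity_net, states: torch.Tensor,
                        action_dim: int) -> torch.Tensor:
    """Single-step behavior constraint: match the policy direction
    (a_pi - z) to the frozen behavior velocity field v_beta evaluated at
    one random interpolation point x_t. No ODE solving is needed."""
    z = torch.randn(states.shape[0], action_dim)   # prior noise vector
    a_pi = policy(states, z)                       # learning policy action
    t = torch.rand(states.shape[0], 1)             # random time step
    x_t = (1 - t) * z + t * a_pi                   # interpolated point
    with torch.no_grad():                          # behavior flow is the anchor
        v_beta = velocity_net(torch.cat([states, x_t, t], dim=-1))
    return (((a_pi - z) - v_beta) ** 2).mean()     # single flow step
```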
Distributional Critic with Push-Forwards
For the distributional value estimates, we propose the following novel Bellman operator:

$$(\mathcal{T} Q)(s, a, z) = r(s, a) + \gamma \operatorname*{ess\,sup}_{z'}\, Q\big(s', \mu_\theta(s', z), z'\big)$$
Don’t be scared by the $\operatorname{ess\,sup}$! It stands for “essential supremum”, which is just taking the maximum value while ignoring rare, zero-probability edge cases. With the simple environment example above, you can verify how the critic values are calculated.
Understand this as the distributional extension of Q-Learning!! i.e., $Q(s, a) \leftarrow r(s, a) + \gamma \max_{a'} Q(s', a')$. In Q-Learning, the maximum operation is taken over the “next” action, but we are doing it for $z'$, which is highly related to the “next-next” action. In our case, the next action is tied to our value function input $z$.
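Here is a sketch of how such a backup could be computed in practice, approximating the essential supremum with a max over a finite set of sampled noise candidates; `critic` and `policy` are assumed placeholder modules, and the paper's exact estimator may differ:

```python
import torch

@torch.no_grad()
def noise_conditioned_target(critic, policy, rewards: torch.Tensor,
                             next_states: torch.Tensor, z: torch.Tensor,
                             num_noise: int = 8, gamma: float = 0.99) -> torch.Tensor:
    """Distributional backup for a push-forward critic Q(s, a, z): the next
    action is tied to the critic's noise input z, and the ess-sup over the
    next noise z' is approximated by a max over num_noise candidates."""
    next_action = policy(next_states, z)              # a' = mu_theta(s', z)
    best = None
    for _ in range(num_noise):
        z_next = torch.randn_like(z)                  # candidate next noise z'
        q = critic(next_states, next_action, z_next)  # one return sample
        best = q if best is None else torch.maximum(best, q)
    return rewards + gamma * best                     # r + gamma * max_z' Q(...)
```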
Results
We tested FAN on numerous offline RL benchmarks such as D4RL and OGBench. Please check out our paper, FAN, for more details.
To summarize, FAN achieves the best success rates on average with the highest computational efficiency. Specifically, its computational efficiency is similar to that of highly efficient non-distributional offline RL algorithms (e.g., ReBRAC, FQL), while outperforming relatively inefficient distributional approaches (e.g., CODAC, Value Flows).
Takeaways
Just remember these about FAN:
- Expressivity through:
  - Flow policy constraint
  - Distributional critic
- Efficiency through:
  - Flow Anchoring for the offline dataset constraint.
  - Noise-conditioned Q-Learning for return distributions.
I hope you enjoyed this blog post!! 🤩
Acknowledgments
Huge thanks to Professor Pingali and Eshan for organizing the content together for easier understanding. 🙏