Online Reinforcement Learning in Non-Stationary Context-Driven Environments


Pouya Hamadanian1      Arash Nasr-Esfahany1     
Malte Schwarzkopf2      Siddhartha Sen3      Mohammad Alizadeh1     

1Computer Science and Artificial Intelligence Laboratory (MIT CSAIL)
2Computer Science Department, Brown University
3Microsoft Research, AI Frontiers


Abstract


We study online reinforcement learning (RL) in non-stationary environments, where a time-varying exogenous context process affects the environment dynamics. Online RL is challenging in such environments due to "catastrophic forgetting" (CF): the agent tends to forget prior knowledge as it trains on new experiences. Prior approaches to mitigating this issue assume task labels (which are often not available in practice), employ brittle regularization heuristics, or use off-policy methods that suffer from instability and poor performance.
We present Locally Constrained Policy Optimization (LCPO), an online RL approach that combats CF by anchoring policy outputs on old experiences while optimizing the return on current experiences. To perform this anchoring, LCPO locally constrains policy optimization using samples from experiences that lie outside the current context distribution. We evaluate LCPO in MuJoCo, classic control, and computer systems environments with a variety of synthetic and real context traces, and find that it outperforms a range of baselines in the non-stationary setting, while achieving results on par with a "prescient" agent trained offline across all context traces.
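For intuition, below is a minimal PyTorch-style sketch of the anchoring idea described above: a policy-gradient update on experiences from the current context, plus a KL term that keeps the policy's outputs on out-of-distribution (old-context) samples close to their pre-update values. The names (`lcpo_style_update`, `anchor_obs`, `kl_coef`) are hypothetical, and the fixed-coefficient penalty here merely illustrates the anchoring idea; it is not the paper's exact constrained-optimization procedure. See the paper and code release for the actual method.

```python
import torch
import torch.nn.functional as F

def lcpo_style_update(policy, optimizer, cur_obs, cur_act, cur_adv,
                      anchor_obs, anchor_logits_old, kl_coef=1.0):
    """One illustrative update: improve the return on current-context data
    while anchoring the policy's outputs on out-of-distribution samples.

    Assumes a discrete-action policy network mapping observations to logits.
    """
    # Policy-gradient term on experiences from the current context.
    logits = policy(cur_obs)                          # (B, num_actions)
    logp = F.log_softmax(logits, dim=-1)
    logp_act = logp.gather(1, cur_act.unsqueeze(1)).squeeze(1)
    pg_loss = -(logp_act * cur_adv).mean()

    # Anchor term: keep the policy close to its pre-update outputs on
    # samples that lie outside the current context distribution.
    anchor_logp = F.log_softmax(policy(anchor_obs), dim=-1)
    old_logp = F.log_softmax(anchor_logits_old, dim=-1)
    anchor_kl = F.kl_div(anchor_logp, old_logp, log_target=True,
                         reduction="batchmean")       # KL(old || new)

    # Penalty form of the local constraint (illustrative only).
    loss = pg_loss + kl_coef * anchor_kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return pg_loss.item(), anchor_kl.item()
```

In this sketch, `anchor_obs` and `anchor_logits_old` would come from a buffer of past experiences detected to be outside the current context distribution, and `kl_coef` trades off plasticity on the current context against stability on old ones.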


Paper


Online Reinforcement Learning in Non-Stationary Context-Driven Environments
Pouya Hamadanian, Arash Nasr-Esfahany, Malte Schwarzkopf, Siddhartha Sen, Mohammad Alizadeh
The Thirteenth International Conference on Learning Representations (ICLR '25)
ICLR Spotlight Paper!
[PDF]


Code


[GitHub]


Supporters


This project is supported by NSF and a CSAIL-MSR Trustworthy AI collaboration.