Motivation
Current RL methods fail to generalize due to two issues:
- test generalization: data is scarce, especially real-world data, so RL models often overfit to the training scenarios.
- simulation transfer: the environment and the physics of the real world differ from those of the simulated experiments, so the learned policy does not necessarily transfer to the real-world setting.
Can we model the gap between simulation and the real world? If so, how?
- list all possible factors that may vary between the two cases: impractical, because the space of factors is infinite.
- view the real world as the simulation with disturbances applied to it.
The goal of this work is to learn a policy that is robust to modeling errors in simulation and mismatch between training and testing scenarios.
The basic idea is to resort to a jointly-learned adversary that applies disturbances to the system, in two ways (see the sketch after this list):
- the adversary creates tough situations in which the protagonist easily fails to gain high rewards, i.e., it samples hard examples, e.g., driving a car that has two driver seats.
- the adversary can be endowed with domain knowledge, e.g., knowledge of exactly where one would want to attack the protagonist, or even control over the environment.
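To make the alternating protagonist/adversary idea concrete, below is a minimal sketch (not the authors' code). The toy 1-D point-mass environment, the linear policies, and the random-search update are all illustrative assumptions; only the structure matters: the transition depends on both players' actions, the reward is zero-sum, and each policy is improved while the other is frozen.

```python
import numpy as np

class ToyEnv:
    """1-D point mass; the transition depends on BOTH players' actions (hypothetical example)."""
    def __init__(self, horizon=50):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.state = np.array([np.random.uniform(-1.0, 1.0), 0.0])  # position, velocity
        return self.state

    def step(self, a_pro, a_adv):
        pos, vel = self.state
        # protagonist force plus a bounded adversarial disturbance
        force = np.clip(a_pro, -1.0, 1.0) + 0.5 * np.clip(a_adv, -1.0, 1.0)
        vel = vel + 0.1 * force
        pos = pos + 0.1 * vel
        self.state = np.array([pos, vel])
        self.t += 1
        reward = -pos ** 2                      # protagonist wants the mass near the origin
        return self.state, reward, self.t >= self.horizon

def rollout(env, w_pro, w_adv):
    """Cumulative reward of one episode; the adversary receives its negative."""
    s, total, done = env.reset(), 0.0, False
    while not done:
        s, r, done = env.step(float(w_pro @ s), float(w_adv @ s))
        total += r
    return total

def improve(env, w_train, w_fixed, train_protagonist, iters=20, sigma=0.1):
    """Hill-climb one player's linear policy while the other player's policy is frozen."""
    def score(w):
        rets = [rollout(env, w, w_fixed) if train_protagonist else rollout(env, w_fixed, w)
                for _ in range(5)]
        ret = float(np.mean(rets))
        return ret if train_protagonist else -ret   # zero-sum: adversary maximizes -reward
    best, best_score = w_train, score(w_train)
    for _ in range(iters):
        cand = best + sigma * np.random.randn(*best.shape)
        cand_score = score(cand)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best

if __name__ == "__main__":
    env = ToyEnv()
    w_pro, w_adv = np.zeros(2), np.zeros(2)
    for epoch in range(10):                          # alternate the two players
        w_pro = improve(env, w_pro, w_adv, train_protagonist=True)
        w_adv = improve(env, w_adv, w_pro, train_protagonist=False)
        print(epoch, round(rollout(env, w_pro, w_adv), 3))
```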
Model formulation
This work is set in an MDP framework with some variations: 1) the transition function depends on the actions of both the adversary and the protagonist, 2) the reward function depends on both actions, and 3) the setting is a two-player zero-sum game, where the reward is shared by the two players but with opposite signs.
When defining the objective, the expected cumulative reward, the authors stress that it is a conditional expectation, conditioned in particular on the transition function.
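In symbols (my own notation, not copied from the paper), the conditional objective and the resulting zero-sum game can be written as:

```latex
% Two-player zero-sum MDP objective (notation assumed, not taken verbatim from the paper).
% \mu: protagonist policy, \nu: adversary policy, \mathcal{P}: transition function,
% r: shared reward, which the protagonist maximizes and the adversary minimizes.
\[
R(\mu, \nu \mid \mathcal{P})
  = \mathbb{E}\!\left[\, \sum_{t=0}^{T-1} r\bigl(s_t, a^{1}_t, a^{2}_t\bigr)
    \;\middle|\; s_0 \sim \rho,\; a^{1}_t \sim \mu(\cdot \mid s_t),\;
    a^{2}_t \sim \nu(\cdot \mid s_t),\; \mathcal{P} \right],
\qquad
R^{\ast} = \max_{\mu} \min_{\nu} R(\mu, \nu \mid \mathcal{P}).
\]
```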