Most existing imitation methods often require many demonstrations, even in simple games.
→
Most existing imitation methods require many demonstrations even in simple games.
The output is a personalized policy for each player.
→
The output is a personalized policy for the given player.
meta-learning is adapted to construct an adaptive framework
→
meta-learning is adopted to construct an adaptive framework
From the data viewpoint, the feature extractor we proposed preprocess the state of the game board. which enhances the learning efficiency of players with limited records.
→
From the data viewpoint, the feature extractor preprocesses the state of the game board.
With this technique, the learning effect is enhanced with limited game records of the given player.
The player behavior models are learning from each player’s behavior by adversarial imitation learning.
→
The player behavior models are trained using each player’s records by adversarial imitation learning.
Polices are used as generators to produce a series of behavior records.
→
Policies are employed as generators to produce a series of behavior records.
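The adversarial imitation setup described above (policies acting as generators of behavior records, judged against the player's real records) can be sketched in a toy form. This is an illustrative sketch, not the paper's implementation: the one-dimensional "action", the logistic discriminator, and all names (`discriminator`, `sample_policy`, `mu`) are assumptions for demonstration.

```python
import math, random

random.seed(0)

# Toy adversarial imitation sketch (assumed setup, not the paper's):
# a one-step "game" where an action is a single scalar.

def discriminator(w, b, a):
    # Logistic discriminator: probability that action a came from
    # the expert player's records rather than the generator policy.
    return 1.0 / (1.0 + math.exp(-(w * a + b)))

def sample_policy(mu):
    # Generator policy: Gaussian actions around mean mu.
    return random.gauss(mu, 0.3)

# Stand-in for the given player's behavior records (mean ~1.0).
expert_actions = [random.gauss(1.0, 0.3) for _ in range(200)]

w, b, mu = 0.0, 0.0, -1.0
for step in range(2000):
    a_exp = random.choice(expert_actions)
    a_gen = sample_policy(mu)
    # Discriminator step: push expert samples toward 1, generated toward 0.
    for a, label in ((a_exp, 1.0), (a_gen, 0.0)):
        p = discriminator(w, b, a)
        w += 0.05 * (label - p) * a
        b += 0.05 * (label - p)
    # Policy step: move mu toward actions the discriminator scores as
    # expert-like (surrogate reward log D), via a finite difference.
    eps = 0.1
    r_plus = math.log(discriminator(w, b, mu + eps) + 1e-8)
    r_minus = math.log(discriminator(w, b, mu - eps) + 1e-8)
    mu += 0.05 * (r_plus - r_minus) / (2 * eps)

# mu drifts toward the expert mean as generated records become
# indistinguishable from the player's records.
```

At equilibrium the discriminator can no longer separate generated records from the player's records, which is the sense in which the policy imitates that player.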
It was established through optimizationbased meta-learning.
→
\to
→
It is established through optimization-based meta-learning.
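The optimization-based meta-learning mentioned above can be sketched with a first-order MAML-style loop: an inner step adapts a shared initialization to one task (one player), and the outer loop updates that initialization so a single inner step works well across tasks. This is a minimal sketch on a toy 1-D regression problem; the function names and the first-order approximation are assumptions for illustration, not the paper's method.

```python
# Minimal first-order MAML-style sketch on a toy 1-D task,
# where each "task" (player) is a scalar target value.

def loss(theta, task):
    # Squared error of a constant predictor theta against the target.
    return (theta - task) ** 2

def grad(theta, task):
    return 2.0 * (theta - task)

def inner_update(theta, task, alpha=0.1):
    # One gradient step of task-specific adaptation.
    return theta - alpha * grad(theta, task)

def meta_train(tasks, theta=0.0, beta=0.05, steps=200):
    # Outer loop: update the shared initialization so that one
    # inner step performs well on every task (first-order MAML:
    # the meta-gradient is the post-adaptation gradient).
    for _ in range(steps):
        meta_grad = 0.0
        for t in tasks:
            adapted = inner_update(theta, t)
            meta_grad += grad(adapted, t)
        theta -= beta * meta_grad / len(tasks)
    return theta

tasks = [1.0, 2.0, 3.0]
theta0 = meta_train(tasks)

# A single inner step now adapts quickly to an unseen task (player).
new_task = 2.5
adapted = inner_update(theta0, new_task)
```

The point of the sketch is the two nested loops: adaptation happens per player with very little data, while the outer loop amortizes learning across players.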
the feature extractor does need to
→
the feature extractor does not need to
quickly applicable to new players
→
quickly adapted to new players
this way can learn more diverse behaviors.
→
our approach can learn more diverse behaviors.
We combine meta-learning and imitation learning to handle the problem of individual player imitation with limited records.
→
We combine meta-learning and imitation learning for individual player imitation with limited records.
- Preliminaries
This section first , then .