AlphaStar: Mastering the Real-Time Strategy Game StarCraft II (blog reading notes)

Original post: https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii

 

SL = supervised learning, RL = reinforcement learning

 

  • how AlphaStar is trained

input: list of units and their properties -> deep neural network -> output: instructions (actions)

DNN: transformer torso (relational deep RL), deep LSTM core, auto-regressive policy head with pointer network, centralised value baseline
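A minimal sketch of how these pieces could fit together, assuming PyTorch; the layer sizes, the mean-pooling step, and the single-unit pointer target are illustrative simplifications, not DeepMind's actual network:

```python
# Illustrative architectural sketch only -- not DeepMind's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlphaStarSketch(nn.Module):
    def __init__(self, unit_feat=32, d_model=64, n_action_types=10):
        super().__init__()
        # Transformer torso: relational processing over the set of observed units.
        self.unit_embed = nn.Linear(unit_feat, d_model)
        self.torso = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # Deep LSTM core: memory across game steps (the game is partially observed).
        self.core = nn.LSTM(d_model, d_model, num_layers=3, batch_first=True)
        # Auto-regressive policy head: first choose an action type...
        self.action_type_head = nn.Linear(d_model, n_action_types)
        # ...then a pointer network scores the observed units to pick the target.
        self.pointer_query = nn.Linear(d_model + n_action_types, d_model)
        # Centralised value baseline head.
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, units, state=None):
        # units: (batch, n_units, unit_feat) -- the list of units and their properties
        x = self.torso(self.unit_embed(units))       # unit-to-unit attention
        pooled = x.mean(dim=1, keepdim=True)         # summarise the scene (simplified)
        core_out, state = self.core(pooled, state)
        h = core_out[:, -1]                          # (batch, d_model)
        action_logits = self.action_type_head(h)
        # Auto-regressive step: condition the unit pointer on the sampled action type.
        action = torch.distributions.Categorical(logits=action_logits).sample()
        onehot = F.one_hot(action, action_logits.shape[-1]).float()
        query = self.pointer_query(torch.cat([h, onehot], dim=-1))
        pointer_logits = torch.einsum("bd,bnd->bn", query, x)   # one score per unit
        value = self.value_head(h).squeeze(-1)
        return action_logits, pointer_logits, value, state

net = AlphaStarSketch()
obs = torch.randn(2, 20, 32)   # 2 observations, 20 units, 32 features per unit
action_logits, pointer_logits, value, state = net(obs)
```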

train: SL (imitating human games) -> micro/macro strategies

        agents compete in a league -> network weights updated by RL -> Nash distribution of the league -> final agent

multi-agent RL: agents play against each other (population-based, multi-agent RL) -> explores a huge strategic space -> must defeat the strongest current agents as well as earlier ones

 

explore new build orders, unit compositions, micro-management plans

personal objective per agent: beat a specific competitor / beat a distribution of competitors / build more of a specific unit (toy sketch below)
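A toy sketch of the league idea in the notes above: a population of agents plays against both current and frozen earlier members, and each agent carries its own objective. Every class and function here is an illustrative stand-in, not AlphaStar code:

```python
import random

class ToyAgent:
    """Illustrative stand-in for a league member; `skill` replaces the real policy net."""
    def __init__(self, name, objective):
        self.name = name
        self.objective = objective      # e.g. "beat_rival", "beat_league", "mass_unit_X"
        self.skill = random.random()

    def clone(self):
        frozen = ToyAgent(self.name + "_past", self.objective)
        frozen.skill = self.skill
        return frozen

def play_match(a, b):
    # Stand-in for a full StarCraft II game: the higher-skill agent wins more often.
    return a if random.random() < a.skill / (a.skill + b.skill + 1e-9) else b

def league_iteration(current, league):
    """One round of population-based multi-agent RL (very schematic)."""
    for agent in current:
        # The personal objective decides the opponent: a specific competitor, or a
        # sample from the whole league distribution (unit-composition objectives
        # would instead shape the reward; omitted here).
        opponent = random.choice(league) if agent.objective == "beat_league" else current[0]
        winner = play_match(agent, opponent)
        # Stand-in for the RL weight update driven by the match outcome.
        agent.skill = max(0.01, agent.skill + (0.01 if winner is agent else -0.01))
    # Freeze snapshots so future agents must still defeat earlier strategies.
    league.extend(a.clone() for a in current)

current = [ToyAgent("A", "beat_league"), ToyAgent("B", "beat_rival")]
league = [a.clone() for a in current]
for _ in range(10):
    league_iteration(current, league)
```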

NN weights: off-policy actor-critic RL with experience replay, self-imitation learning, policy distillation
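A minimal sketch of a one-step off-policy actor-critic update drawing from a replay buffer, assuming PyTorch; the importance weight `rho` is a simple clipped ratio (V-trace-style), and self-imitation learning and policy distillation are omitted for brevity:

```python
# Illustrative off-policy actor-critic with experience replay -- not AlphaStar's exact algorithm.
import math
import random
from collections import deque

import torch

replay = deque(maxlen=10_000)   # (state, action, reward, next_state, behaviour_log_prob)

def actor_critic_update(policy, value_fn, optimizer, batch_size=32, gamma=0.99):
    batch = random.sample(list(replay), batch_size)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])
    behaviour_logp = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    dist = torch.distributions.Categorical(logits=policy(states))
    logp = dist.log_prob(actions)
    values = value_fn(states).squeeze(-1)
    with torch.no_grad():
        targets = rewards + gamma * value_fn(next_states).squeeze(-1)

    # Off-policy correction: clipped importance ratio between the current policy
    # and the (older) behaviour policy that generated the replayed experience.
    rho = torch.clamp(torch.exp(logp.detach() - behaviour_logp), max=1.0)
    advantage = targets - values.detach()

    policy_loss = -(rho * advantage * logp).mean()
    value_loss = (values - targets).pow(2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage with linear networks and random experience, just to show the shapes:
policy = torch.nn.Linear(8, 4)          # 8-dim state -> 4 discrete actions
value_fn = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(
    list(policy.parameters()) + list(value_fn.parameters()), lr=1e-3)
for _ in range(64):
    replay.append((torch.randn(8), random.randrange(4), random.random(),
                   torch.randn(8), math.log(0.25)))
actor_critic_update(policy, value_fn, optimizer)
```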

 

training runs on TPUs; final agent: the Nash distribution of the league, i.e. the most effective mixture of the discovered strategies (sketch below)
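One way to extract such a Nash mixture, assuming a pairwise win-rate (payoff) matrix over the league is available, is fictitious play on that matrix; this is just an illustrative method, the notes do not say how DeepMind computed it:

```python
import numpy as np

def nash_mixture(payoff, iters=20_000):
    """payoff[i, j] = expected payoff of agent i against agent j (zero-sum, antisymmetric)."""
    n = payoff.shape[0]
    counts = np.ones(n)                         # how often each agent was a best response
    for _ in range(iters):
        mix = counts / counts.sum()
        counts[np.argmax(payoff @ mix)] += 1    # best response to the current mixture
    return counts / counts.sum()

# Toy league of 3 agents whose strategies counter each other (rock-paper-scissors style):
payoff = np.array([[ 0.0,  0.6, -0.6],
                   [-0.6,  0.0,  0.6],
                   [ 0.6, -0.6,  0.0]])
print(nash_mixture(payoff))   # roughly uniform: no single strategy dominates, so mix all three
```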

 

  • how AlphaStar plays and how it is evaluated

TLO/MaNa  ~ 100 APM

earlier bots/agents: ~1,000-10,000 APM

AlphaStar in the matches vs. TLO/MaNa: ~280 APM on average (it observes through the raw interface rather than reading screen frames)

AlphaStar acting: observation -> action with ~350 ms average delay; it processes every game frame

results: 5:0

 

Reposted from: https://www.cnblogs.com/yaoyaohust/p/10815039.html
