
AlphaGOZero (python tensorflow implementation)

This is a trial implementation of DeepMind's 19 October 2017 publication: Mastering the Game of Go without Human Knowledge.

From Paper

Pure RL has outperformed the supervised learning + RL agent

SL evaluation

Download trained model

Set up

Install requirement

python 3.6, tensorflow/tensorflow-gpu version 1.4 (versions >= 1.5 cannot load the trained models)

pip install -r requirement.txt
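Since the trained checkpoints only load under the 1.4 line, a quick sanity check of the installed TensorFlow version can save a confusing restore error later. This is a minimal sketch (the version-gate logic is mine, not part of the repo); in practice you would pass it `tf.__version__`:

```python
# Sketch: verify the installed TensorFlow is on the 1.4.x line,
# since checkpoints saved here fail to load under >= 1.5.
def compatible(version: str) -> bool:
    """Return True for a 1.4.x TensorFlow version string."""
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) == (1, 4)

print(compatible("1.4.1"))  # True
print(compatible("1.5.0"))  # False
```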

Download Dataset (kgs 4dan)

From the repo's root dir:

cd data/download

chmod +x download.sh

./download.sh

Preprocess Data

The path below is only an example; feel free to point it at your local dataset directory.

python preprocess.py preprocess ./data/SGFs/kgs-*
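Preprocessing boils down to turning SGF game records into board positions. As a rough illustration of the kind of parsing `preprocess.py` performs (the helper name here is hypothetical, not the repo's API): SGF encodes each move as two lowercase letters, e.g. `;B[pd]` is Black at column `p`, row `d`.

```python
def sgf_to_coord(move: str) -> tuple:
    """Convert a two-letter SGF point like 'pd' to a 0-indexed (col, row).

    SGF columns and rows both run 'a'..'s' on a 19x19 board, so each
    letter maps to its offset from 'a'.
    """
    return (ord(move[0]) - ord("a"), ord(move[1]) - ord("a"))

print(sgf_to_coord("pd"))  # (15, 3)
print(sgf_to_coord("aa"))  # (0, 0), the top-left corner
```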

Train A Model

python main.py --mode=train
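Training optimizes the loss from the AlphaGo Zero paper: squared error on the value head against the game outcome z, cross-entropy between the policy head and the MCTS visit distribution π, plus L2 regularization. A NumPy sketch of that objective for a single position (function and argument names are illustrative, not the repo's internals):

```python
import numpy as np

def alphago_zero_loss(z, v, pi, p_logits, theta_l2, c=1e-4):
    """Paper's loss: (z - v)^2 - pi^T log p + c * ||theta||^2.

    z: game outcome in {-1, +1}; v: value-head output;
    pi: MCTS visit-count distribution over moves;
    p_logits: raw policy-head logits; theta_l2: sum of squared weights.
    """
    log_p = p_logits - np.log(np.sum(np.exp(p_logits)))  # log-softmax
    return float((z - v) ** 2 - pi @ log_p + c * theta_l2)
```

With a perfect value prediction and uniform logits over two moves, the loss reduces to the policy cross-entropy, log 2.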

Play Against An A.I.

python main.py --mode=gtp --gtp_policy=greedypolicy --model_path='./savedmodels/your_model.ckpt'
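A greedy policy here plausibly means: take the network's move probabilities, mask out illegal points, and play the argmax. A minimal sketch under that assumption (the function is illustrative, not the repo's implementation):

```python
import numpy as np

def greedy_move(policy, legal_mask):
    """Pick the highest-probability legal move.

    policy: probabilities over board points; legal_mask: boolean array
    marking which points are legal. Illegal points are masked to -inf
    so argmax can never select them.
    """
    masked = np.where(legal_mask, policy, -np.inf)
    return int(np.argmax(masked))

# The best raw move (index 1) is illegal, so the greedy pick falls to index 2.
print(greedy_move(np.array([0.1, 0.5, 0.4]), np.array([True, False, True])))
```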

Play in Sabaki

In console:

which python

Add the result to the first line of main.py with a #! prefix (a shebang), so Sabaki can execute the script directly.

Add the path of main.py to Sabaki's Manage Engines dialog with the argument --mode=gtp
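Sabaki talks to the engine over the Go Text Protocol: it writes commands like `protocol_version` and `name` to stdin and expects `=`-prefixed replies. A hedged sketch of that handshake (the real handler lives behind `main.py --mode=gtp`; this dispatch is illustrative only):

```python
def handle_gtp(command: str) -> str:
    """Answer a few basic GTP commands in the standard '=' success form."""
    cmd, *args = command.split()
    if cmd == "protocol_version":
        return "= 2"               # GTP version 2
    if cmd == "name":
        return "= AlphaGOZero"
    if cmd == "boardsize":
        return "="                 # accept the requested size
    return "? unknown command"     # '?' prefix signals failure

print(handle_gtp("protocol_version"))  # = 2
```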

TODO:

AlphaGo Zero Architecture

Supervised Training

Self Play pipeline

Go Text Protocol

Sabaki Engine enabled

Tabula rasa (failed)

Distributed learning

Credit (orderless):

*Brian Lee *Ritchie Ng *Samuel Graván *森下 健 *yuanfengpang
