Question: How to implement a custom environment in Keras-RL / OpenAI Gym?
Background:
I'm a complete newbie to Reinforcement Learning and have been searching for a framework/module to easily navigate this treacherous terrain. In my search I've come across two modules keras-rl & OpenAI GYM.
I can get both of them to work on the examples shared in their wikis, but those examples use predefined environments and provide little or no information on how to set up my own custom environment.
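For reference, a custom environment is typically written by subclassing `gym.Env` and implementing `reset` and `step`, plus declaring `action_space` and `observation_space` so that agent libraries like keras-rl can query them. Below is a minimal sketch under the classic (pre-Gymnasium) Gym API; the environment itself (a toy 1-D walk toward a goal, with the class name `CustomEnv` and its reward values) is a made-up illustration, not part of the original question:

```python
import gym
import numpy as np
from gym import spaces


class CustomEnv(gym.Env):
    """Hypothetical toy environment: the agent walks along a 1-D line
    and is rewarded for reaching position +10."""

    def __init__(self):
        super().__init__()
        # Two discrete actions: 0 = step left, 1 = step right.
        self.action_space = spaces.Discrete(2)
        # Observation is the agent's position, a single float in [-10, 10].
        self.observation_space = spaces.Box(
            low=-10.0, high=10.0, shape=(1,), dtype=np.float32
        )
        self.state = None

    def reset(self):
        # Start every episode at the origin and return the first observation.
        self.state = np.zeros(1, dtype=np.float32)
        return self.state

    def step(self, action):
        # Move one unit left or right, clipped to the observation bounds.
        self.state = self.state + (1.0 if action == 1 else -1.0)
        self.state = np.clip(self.state, -10.0, 10.0).astype(np.float32)
        done = bool(self.state[0] >= 10.0)
        # Small step penalty, bonus on reaching the goal (arbitrary values).
        reward = 1.0 if done else -0.1
        return self.state, reward, done, {}
```

Once defined, the class can be instantiated and driven directly (`env = CustomEnv(); obs = env.reset(); obs, r, done, info = env.step(1)`), or handed to a keras-rl agent in place of a built-in environment.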
I would be really thankful if