2 Reinforcement Learning: Multi-armed Bandits

The most important feature distinguishing reinforcement learning from other types of learning is that it uses training information that evaluates the actions taken rather than instructs by giving correct actions.

In other words, the most important feature distinguishing reinforcement learning from other kinds of learning is that the training information is used to evaluate how good the actions taken are, rather than to instruct by directly giving the correct or optimal action.

Take a maze as an example: you want to get from the entrance to the exit, and suppose only one path leads through. A conventional algorithm such as a genetic algorithm or ant colony optimization outputs that path, i.e., at each point it tells you which direction to go (instruction). Reinforcement learning instead tells you, at each point, how good each direction is: say at some point you can choose among east, west, north, and south, then on a 100-point scale reinforcement learning might score east 60, west 40, north 80, and south 30. At that step you can follow an exploit policy and pick the highest-scoring action, or you can explore and pick one of the four directions at random, as in the sketch below.
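A minimal sketch of that exploit/explore choice, assuming the illustrative scores above and an arbitrarily chosen exploration probability epsilon (names and values are my own, not from the book):

```python
import random

# Hypothetical scores for the four directions at one maze cell,
# matching the example above (out of 100).
action_values = {"east": 60, "west": 40, "north": 80, "south": 30}

def choose_direction(values, epsilon=0.1):
    """Epsilon-greedy choice: exploit the best-scoring direction most of the
    time, explore a uniformly random direction with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(values))   # explore: any direction
    return max(values, key=values.get)       # exploit: highest score

print(choose_direction(action_values))  # prints "north" most of the time
```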

This is what creates the need for active exploration, for an explicit search for good behavior.

This is why, when searching for good behavior, the agent needs to explore actively.

Purely evaluative feedback indicates how good the action taken was, but not whether it was the best or the worst action possible. Purely instructive feedback, on the other hand, indicates the correct action to take, independently of the action actually taken. This kind of feedback is the basis of supervised learning, which includes large parts of pattern classification, artificial neural networks, and system identification.

In other words, purely evaluative feedback tells you how good the chosen action was, but not whether it was the best or worst action possible. Purely instructive feedback is different: it names the correct action to take, regardless of the action actually taken. Instructive feedback is the basis of supervised learning, which covers much of pattern classification, artificial neural networks, and system identification.

In their pure forms, these two kinds of feedback are quite distinct: evaluative feedback depends entirely on the action taken, whereas instructive feedback is independent of the action taken.

In their pure forms (that is, when not mixed with other methods), the two kinds of feedback are quite distinct: evaluative feedback depends entirely on the action actually taken, whereas instructive feedback is independent of it. The sketch below contrasts the two.
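A small sketch of the contrast, under the same hypothetical maze scores as above (the function names and noise model are my own illustration, not the book's):

```python
import random

ACTIONS = ["east", "west", "north", "south"]
TRUE_BEST = "north"  # hypothetical correct action, known only to the teacher

def instructive_feedback(action_taken):
    # Supervised-style feedback: the correct action is named,
    # regardless of which action was actually taken.
    return TRUE_BEST

def evaluative_feedback(action_taken):
    # Reinforcement-style feedback: a (noisy) score for the action that was
    # actually taken; it says how good it was, not whether it was the best.
    scores = {"east": 60, "west": 40, "north": 80, "south": 30}
    return scores[action_taken] + random.gauss(0, 5)

action = random.choice(ACTIONS)
print(action, instructive_feedback(action), evaluative_feedback(action))
```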

In this chapter we study the evaluative aspect of reinforcement learning in a simplified setting, one that does not involve learning to act in more than one situation. This nonassociative setting is the one in which most prior work involving evaluative feedback has been done, and it avoids much of the complexity of the full reinforcement learning problem. Studying this case enables us to see most clearly how evaluative feedback differs from, and yet can be combined with, instructive feedback.

In other words, this chapter studies the evaluative part of reinforcement learning in a simplified setting: the agent learns to act in only a single situation. This nonassociative setting is where most earlier work on evaluative feedback was done, and it avoids much of the difficulty of the full reinforcement learning problem. Studying it makes the differences and connections between evaluative and instructive feedback clearest.

This particular nonassociative, evaluative feedback problem that we explore is a simple version of the k-armed bandit problem. We use this problem to introduce a number of basic learning methods which we extend in later chapters to apply to the full reinforcement learning problem. At the end of this chapter, we take a step closer to the full reinforcement learning problem by discussing what happens when the bandit problem becomes associative, that is, when actions are taken in more than one situation.

The nonassociative, evaluative-feedback problem explored in this chapter is a simple version of the k-armed bandit problem (a slot machine with k levers). We use it to introduce a number of basic learning methods, which later chapters extend to the full reinforcement learning problem. At the end of the chapter, we take a step closer to the full problem by discussing what happens when the bandit problem becomes associative, that is, when actions are taken in more than one situation.
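A minimal sketch of the k-armed bandit setting this chapter develops, with an epsilon-greedy agent keeping sample-average value estimates; the class, parameter values, and reward distribution here are my own assumptions, a rough stand-in rather than the book's exact testbed:

```python
import random

class KArmedBandit:
    """Toy stationary k-armed bandit: each arm pays a Gaussian reward
    around a hidden true value."""
    def __init__(self, k=10):
        self.true_values = [random.gauss(0, 1) for _ in range(k)]

    def pull(self, arm):
        return random.gauss(self.true_values[arm], 1)

def run_epsilon_greedy(bandit, steps=1000, epsilon=0.1):
    k = len(bandit.true_values)
    estimates = [0.0] * k   # sample-average action-value estimates Q(a)
    counts = [0] * k
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(k)                        # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = bandit.pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental average
        total_reward += reward
    return total_reward / steps

print(run_epsilon_greedy(KArmedBandit()))  # average reward per step
```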
