Online Learning 5: Exp3 algorithm, adversarial setting
1 Adversarial setting
1.1 Stochastic setting
We previously discussed the stochastic setting: every time you play an arm, you receive a reward drawn at random from some distribution, and every successive play yields a fresh sample from that same distribution, independent of all past samples from that arm. This is the so-called IID (independent and identically distributed) setting.
1.2 Adversarial setting
- At each time $t$, the adversary chooses a reward for each arm $j$.
  - It knows the distribution with which the player is going to play the arms.
  - It also knows all of the player's previous actions and the rewards the player received.
  - Based on all of this information, it is allowed to set a reward for every arm.
- The player only knows its own past actions and the rewards it received. Using its policy (a distribution over arms), the player essentially tosses a coin and, depending on how the coin lands, plays a particular arm $j$, incurring the reward $x_t(j)$ that the adversary set for that arm at time $t$.
- Unstructured (bandit) feedback: the player only observes the reward of the arm it actually plays; the other arms' rewards are not revealed. (One round of this interaction is sketched in code after this list.)
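Putting the protocol together, here is a minimal sketch of one round; `adversary`, `policy`, and `history` are illustrative names, not from the lecture:

```python
import numpy as np

def play_round(t, k, adversary, policy, history, rng):
    """One round of the adversarial bandit protocol (illustrative sketch).

    adversary(t, history) -> length-k reward vector; may depend on the
        player's policy and on all past actions/rewards.
    policy(history) -> probability distribution over the k arms.
    """
    x = adversary(t, history)       # adversary sets a reward for each arm
    p = policy(history)             # player's distribution over arms
    arm = rng.choice(k, p=p)        # player "tosses the coin" and picks an arm
    reward = x[arm]                 # bandit feedback: only this reward is seen
    history.append((arm, reward))
    return reward
```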
1.3 Regret
In the adversarial setting, regret is with respect to the best fixed arm policy in hindsight.
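In symbols (a standard way to write it): with $x_t(j)$ the adversary's reward for arm $j$ at time $t$ and $A_t$ the arm the player plays,

$$R_n = \max_{j \in \{1,\dots,k\}} \sum_{t=1}^{n} x_t(j) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{n} x_t(A_t)\right].$$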
- Why the best fixed arm in hindsight?
  - One motivation comes from classification, where the goal is to use $n$ labeled samples and return a weight vector characterizing a classifier.
2 Exp3 (Exponential-weight algorithm for Exploration and Exploitation)
2.1 Intuition
- Idea 1: Suppose the player had access to the rewards of all arms. Then, exponentially boost the probability of the arm that has been best in hindsight up to time $t$.
- Idea 2: We cannot access the rewards of all arms, so we use an unbiased estimator (the importance-sampling estimator) for the unobserved rewards.
  - Importance-sampling estimator: when an arm is played, amplify its observed reward by a factor inversely proportional to the probability with which that arm was played; arms that were not played get an estimate of zero (see the check after this list).
  - Unbiased.
  - High variance: this is manageable as long as the rewards (losses) are small.
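To make the estimator concrete (notation introduced here): let $p_t(j)$ be the probability of playing arm $j$ at time $t$ and $A_t$ the arm actually played. The importance-sampling estimate of arm $j$'s reward, and a check that it is unbiased, are

$$\hat{x}_t(j) = \frac{x_t(j)\,\mathbf{1}\{A_t = j\}}{p_t(j)}, \qquad \mathbb{E}\big[\hat{x}_t(j)\big] = p_t(j)\cdot\frac{x_t(j)}{p_t(j)} + \big(1 - p_t(j)\big)\cdot 0 = x_t(j).$$

The $1/p_t(j)$ amplification is also why the variance can be large when $p_t(j)$ is small.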
2.2 Algorithm
(See notebook: p. 65, "Algorithm"; p. 72, "The Exp3 algorithm".)
- Compute the exponential-weights probability distribution from each arm's cumulative loss/reward estimate.
- Sample an arm from this distribution.
- Update the cumulative loss/reward estimates (a code sketch of these steps follows this list).
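A minimal Python sketch of the three steps above, assuming rewards in $[0,1]$ and the reward-based importance-sampling estimate; `reward_fn` and the other names are illustrative, not from the notebook:

```python
import numpy as np

def exp3(n_rounds, k, eta, reward_fn, seed=0):
    """Exp3 sketch: exponential weights over importance-weighted reward estimates.

    reward_fn(t, arm) -> reward in [0, 1] that the adversary assigned to `arm`
    at time t (only the played arm's reward is ever observed).
    """
    rng = np.random.default_rng(seed)
    S = np.zeros(k)                          # cumulative reward estimates per arm
    total_reward = 0.0
    for t in range(n_rounds):
        # Step 1: exponential-weights distribution from cumulative estimates.
        logits = eta * S - np.max(eta * S)   # shift for numerical stability
        p = np.exp(logits)
        p /= p.sum()
        # Step 2: sample an arm from the distribution.
        arm = rng.choice(k, p=p)
        x = reward_fn(t, arm)
        total_reward += x
        # Step 3: importance-sampling update; unplayed arms implicitly get 0.
        S[arm] += x / p[arm]
    return total_reward
```

With $\eta=\sqrt{\ln k/(nk)}$ this matches the setting used in the regret bound below.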
2.3 Regret
For $\eta=\sqrt{\frac{\ln k}{nk}}$ ($\eta$ determines how aggressively you learn and update the distribution),

$$R_n \leq 2\sqrt{nk\ln k}.$$
Instead of bandit feedback, suppose we have full feedback: the player observes the rewards of all arms at every round. Then
$$R_n \leq \sqrt{\frac{n\ln k}{2}}.$$
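For comparison, a minimal sketch of the full-feedback variant (exponential weights without importance sampling, since every arm's reward is observed); `rewards_fn` is an illustrative name:

```python
import numpy as np

def exponential_weights_full_feedback(n_rounds, k, eta, rewards_fn, seed=0):
    """Full-feedback sketch: the whole reward vector is revealed each round."""
    rng = np.random.default_rng(seed)
    S = np.zeros(k)                          # cumulative true rewards per arm
    total_reward = 0.0
    for t in range(n_rounds):
        logits = eta * S - np.max(eta * S)   # shift for numerical stability
        p = np.exp(logits)
        p /= p.sum()
        arm = rng.choice(k, p=p)
        x = rewards_fn(t)                    # full reward vector of length k
        total_reward += x[arm]
        S += x                               # no importance weighting needed
    return total_reward
```

Comparing the two bounds, removing the bandit restriction saves roughly a $\sqrt{k}$ factor in the regret.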