DQN + Active Learning

For details on editing formulas in Markdown, refer to the blog post.



Initialize replay memory M to capacity N
Initialize action-value function Q with random weights θ
for episode = 1, 2, ..., N do
    D_l ← ∅ and shuffle D
    φ ← Random
    for i = 1, ..., |D| do
        Construct the state s_i using x_i
        With probability ε select a random action a_i,
        otherwise select a_i = argmax_a Q^π(s_i, a; θ)
        if a_i = 1 then
            Obtain the annotation y_i
            D_l ← D_l + (x_i, y_i)
            Update model φ based on D_l
        end if
        Receive a reward r_i from the held-out test data
        if |D_l| = B then
            Store transition (s_i, a_i, r_i, Terminate) in M
            break
        end if
        Construct the new state s_{i+1}
        Store transition (s_i, a_i, r_i, s_{i+1}) in M
        Sample a random minibatch of transitions from M and perform a gradient descent step on L(θ)
        Update the policy with θ
    end for
end for
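
To make the loop concrete, below is a minimal runnable sketch in Python. It uses PyTorch for the Q-network and scikit-learn's LogisticRegression standing in for the task model φ. The synthetic pool D, the state construction (the features of x_i concatenated with the labeled fraction |D_l|/B), and the reward defined as the change in held-out test accuracy are all illustrative assumptions, since the pseudocode leaves these details open.

```python
# Hedged sketch of the DQN + active learning loop above; the state design,
# dataset, and reward shaping are assumptions, not prescribed by the pseudocode.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

STATE_DIM, N_ACTIONS = 11, 2   # state: 10-d features of x_i + |D_l|/B; action 1 = annotate
B, CAPACITY, GAMMA, EPSILON, BATCH = 20, 1000, 0.99, 0.1, 32

q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
memory = deque(maxlen=CAPACITY)  # replay memory M with capacity N

rng = np.random.default_rng(0)   # synthetic pool D and held-out test set (assumption)
X = rng.normal(size=(200, 10)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_test = rng.normal(size=(100, 10)); y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

def make_state(x, n_labeled):
    """Assumed state s_i: features of x_i plus the labeled-budget fraction."""
    return torch.tensor(np.append(x, n_labeled / B), dtype=torch.float32)

def train_step():
    """Sample a random minibatch from M and take one gradient step on L(theta)."""
    if len(memory) < BATCH:
        return
    s, a, r, s_next, done = zip(*random.sample(memory, BATCH))
    s = torch.stack(s); a = torch.tensor(a); r = torch.tensor(r, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrapped target; zero for terminal transitions
        nq = torch.stack([torch.zeros(()) if d else q_net(sn).max()
                          for sn, d in zip(s_next, done)])
    loss = nn.functional.mse_loss(q, r + GAMMA * nq)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

for episode in range(10):
    order = rng.permutation(len(X))              # shuffle D
    Xl, yl, model, prev_acc = [], [], None, 0.0  # D_l <- empty, phi reset
    for t, i in enumerate(order):
        s = make_state(X[i], len(Xl))
        if random.random() < EPSILON:            # epsilon-greedy action selection
            a = random.randrange(N_ACTIONS)
        else:
            a = q_net(s).argmax().item()
        if a == 1:                               # annotate: D_l <- D_l + (x_i, y_i)
            Xl.append(X[i]); yl.append(y[i])
            if len(set(yl)) > 1:                 # update phi once both classes are seen
                model = LogisticRegression().fit(np.array(Xl), yl)
        acc = model.score(X_test, y_test) if model is not None else 0.0
        r, prev_acc = acc - prev_acc, acc        # reward: change in test accuracy
        if len(Xl) == B:                         # budget exhausted: terminal transition
            memory.append((s, a, r, s, True))
            break
        s_next = make_state(X[order[(t + 1) % len(order)]], len(Xl))
        memory.append((s, a, r, s_next, False))
        train_step()
    print(f"episode {episode}: final test accuracy {prev_acc:.3f}")
```

As in the pseudocode, the policy is updated inside the episode after every stored transition; a separate target network, as used in the original DQN, is omitted here for brevity.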
