The Soft Actor-Critic (SAC) Algorithm
Kullback-Leibler Divergence (KL divergence)
Definition
Let $\xi$ be a random variable with two probability distributions $P$ and $Q$. If $\xi$ is a discrete random variable, the KL divergence from $P$ to $Q$ is defined as:
$$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)=\sum_{i}P(i)\ln\frac{P(i)}{Q(i)}$$
If $\xi$ is a continuous random variable, the KL divergence from $P$ to $Q$ is defined as:
$$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)=\int_{-\infty}^{\infty}p(x)\ln\frac{p(x)}{q(x)}\,dx$$
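As a quick numeric illustration of the discrete definition (the code and the example distributions below are my own sketch, not from the original notes):

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D_KL(P || Q) = sum_i P(i) * ln(P(i) / Q(i))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # Terms with P(i) = 0 contribute 0 by the convention 0 * ln 0 = 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

P = [0.9, 0.1]
Q = [0.5, 0.5]
print(kl_divergence(P, Q))  # ~0.368
print(kl_divergence(Q, P))  # ~0.511: the two directions differ (see "Asymmetry" below)
```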
Basic Properties of KL Divergence
KL divergence measures how much two probability distributions differ: the larger the value, the greater the difference. Its minimum is 0, attained precisely when the two distributions are identical.
- Non-negativity
$$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)\geq 0$$
$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)=0$ if and only if $P=Q$.
We prove this only for the discrete case; for continuous random variables, the same argument applies once the integral is viewed as a limit of sums.
It suffices to show that $\sum_{i}P(i)\ln\frac{Q(i)}{P(i)}\leq 0$. Using the inequality $\ln x\leq x-1$ for all $x>0$:
$$\sum_{i}P(i)\ln\frac{Q(i)}{P(i)}\leq \sum_{i}P(i)\left(\frac{Q(i)}{P(i)}-1\right)=\sum_{i}Q(i)-\sum_{i}P(i)=0$$
Equality holds if and only if $\frac{Q(i)}{P(i)}=1$ for every $i$, in which case $P=Q$.
- Invariance under affine transformations
Suppose $\mathbf{y}=a\mathbf{x}+b$ with $a\neq 0$; then:
$$\mathbb{D}_{\text{KL}}(P(\mathbf{x})\,\Vert\, Q(\mathbf{x}))=\mathbb{D}_{\text{KL}}(P(\mathbf{y})\,\Vert\, Q(\mathbf{y}))$$
Proof:
Using the change-of-variables formula for random variables, $p(\mathbf{y})\,d\mathbf{y}=p(\mathbf{x})\,d\mathbf{x}$ (the Jacobian factors cancel inside the logarithm because $P$ and $Q$ transform identically), we have:
$$\mathbb{D}_{\text{KL}}(P(\mathbf{y})\,\Vert\, Q(\mathbf{y}))=\int P(\mathbf{y})\ln\frac{P(\mathbf{y})}{Q(\mathbf{y})}\,d\mathbf{y}=\int P(\mathbf{x})\ln\frac{P(\mathbf{x})}{Q(\mathbf{x})}\,d\mathbf{x}=\mathbb{D}_{\text{KL}}(P(\mathbf{x})\,\Vert\, Q(\mathbf{x}))$$
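To see the invariance concretely, here is a small numeric check (again my own sketch, not from the original notes) using the closed-form KL divergence between two univariate Gaussians; under $y=ax+b$, $\mathcal{N}(\mu,\sigma^2)$ becomes $\mathcal{N}(a\mu+b,\,a^2\sigma^2)$:

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form D_KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2)
            - 0.5)

mu1, s1, mu2, s2 = 0.0, 1.0, 2.0, 3.0
a, b = 5.0, -1.0  # the affine map y = a*x + b

before = kl_gaussian(mu1, s1, mu2, s2)
after  = kl_gaussian(a * mu1 + b, abs(a) * s1, a * mu2 + b, abs(a) * s2)
print(before, after)  # identical up to floating-point error
```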
- Asymmetry
$$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)\neq \mathbb{D}_{\text{KL}}(Q \,\Vert\, P) \quad \text{in general,}$$
as the numeric example after the definition illustrates ($\approx 0.368$ versus $\approx 0.511$).
- Range
$\mathbb{D}_{\text{KL}}(P \,\Vert\, Q)$ can be infinite under certain conditions: for example, if $P(i)>0$ for some outcome $i$ where $Q(i)=0$, the term $P(i)\ln\frac{P(i)}{Q(i)}$ diverges, as the snippet below shows.
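Reusing the `kl_divergence` sketch from above:

```python
# When Q assigns zero probability to an outcome that P can produce,
# the divergence blows up to infinity.
print(kl_divergence([0.5, 0.5], [1.0, 0.0]))  # inf (with a divide-by-zero warning)
```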
See also: Notes on KL Divergence (Kullback-Leibler Divergence)
The SAC Algorithm
SAC is a reinforcement-learning algorithm for MDPs with stochastic policies; we now walk through how it works.
The MDP studied by SAC is not quite the same as the one introduced above: it modifies the definition of the Q value function:
$$\textcolor{red}{Q^{\pi}(s,a)} \triangleq Q^{\pi}(s,a)-\ln \pi(a|s)$$
Assuming that V and Q still satisfy the Bellman equation, the definition of the V value function must change accordingly:
$$\textcolor{red}{V^{\pi}(s)} \triangleq V^{\pi}(s)+\mathscr{H}(\pi(\cdot|s))$$
where $\mathscr{H}$ denotes entropy, $\mathscr{H}(\pi(\cdot|s))=\mathbb{E}_{a\sim\pi(\cdot|s)}[-\ln\pi(a|s)]$.
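A short consistency check of the two red definitions above: writing $Q$ and $V$ for the original (pre-redefinition) functions and taking the expectation of the redefined $Q$ under $a\sim\pi(\cdot|s)$,

$$\mathbb{E}_{a\sim\pi(\cdot|s)}\big[Q^{\pi}(s,a)-\ln\pi(a|s)\big]=\mathbb{E}_{a\sim\pi(\cdot|s)}\big[Q^{\pi}(s,a)\big]+\mathscr{H}(\pi(\cdot|s))=V^{\pi}(s)+\mathscr{H}(\pi(\cdot|s)),$$

which is exactly the redefined $V$.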
SAC's training procedure borrows from DDPG, but because the definitions of the V and Q functions have changed, the corresponding loss functions must be rewritten with the new definitions.
V’s loss:
$$\mathop{\mathbb{E}}\limits_{(s,a,r,s')\sim \mathcal{D}}\mathrm{se}\Big(V(s),\ \mathop{\mathbb{E}}\limits_{a'\sim \pi(\cdot|s)}\big[Q(s,a')-\ln \pi(a'|s)\big]\Big)$$
where $\mathcal{D}$ is the replay buffer and $\mathrm{se}(x,y)=(x-y)^2$ is the squared error.
Q’s loss:
$$\mathop{\mathbb{E}}\limits_{(s,a,r,s')\sim \mathcal{D}}\mathrm{se}\Big(Q(s,a),\ r+\gamma \mathop{\mathbb{E}}\limits_{s'\sim p(\cdot|s,a)}\big[V'(s')\big]\Big)$$
where $V'$ denotes the target V network (as in DDPG).
policy’s loss:
$$\mathop{\mathbb{E}}\limits_{(s,a,r,s')\sim \mathcal{D}}\ \mathbb{D}_{\text{KL}}\Big(\pi(\cdot|s)\ \Big\Vert\ \frac{\exp Q(s,\cdot)}{Z(s)}\Big)$$
where $Z(s)$ is the partition function that normalizes $\exp Q(s,\cdot)$ into a probability distribution; it does not depend on the policy and therefore does not affect the gradient.
This pushes the policy to assign each action a probability proportional to the exponential of its Q value; see the code sketch after this paragraph.
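Below is a minimal PyTorch sketch of how these three losses could be computed on one sampled batch. It is an illustration under assumptions, not a reference implementation: the network shapes, the Gaussian policy, and every name (`v_net`, `q_net`, `target_v_net`, `policy_net`, `sample_action`) are hypothetical, and practical details of SAC (tanh action squashing, twin Q networks, target-network updates) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim = 4, 2

v_net        = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
target_v_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q_net        = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
policy_net   = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 2 * action_dim))

def sample_action(s):
    """Reparameterized Gaussian policy: returns an action and its log-probability."""
    mean, log_std = policy_net(s).chunk(2, dim=-1)
    std = log_std.clamp(-20, 2).exp()
    dist = torch.distributions.Normal(mean, std)
    a = dist.rsample()  # rsample keeps gradients flowing to the policy parameters
    log_prob = dist.log_prob(a).sum(-1, keepdim=True)
    return a, log_prob

# A fake replay-buffer batch (s, a, r, s'), just for illustration.
batch = 32
s  = torch.randn(batch, state_dim)
a  = torch.randn(batch, action_dim)
r  = torch.randn(batch, 1)
s2 = torch.randn(batch, state_dim)
gamma = 0.99

# V loss: regress V(s) onto E_{a'~pi}[Q(s,a') - ln pi(a'|s)].
a_new, log_pi = sample_action(s)
v_target = (q_net(torch.cat([s, a_new], -1)) - log_pi).detach()
v_loss = F.mse_loss(v_net(s), v_target)

# Q loss: regress Q(s,a) onto r + gamma * V'(s'), with V' the target network.
q_target = (r + gamma * target_v_net(s2)).detach()
q_loss = F.mse_loss(q_net(torch.cat([s, a], -1)), q_target)

# Policy loss: minimizing KL(pi(.|s) || exp(Q(s,.))/Z) reduces to E[ln pi - Q],
# since Z(s) does not depend on the policy parameters.
policy_loss = (log_pi - q_net(torch.cat([s, a_new], -1))).mean()
```

In practice each loss would be minimized by its own optimizer (stepping only the relevant network), and `target_v_net` would track `v_net` via Polyak averaging.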
Proof of policy improvement (see Lemma 2 of the SAC paper):
From the update rule, $\pi_{new}$ minimizes the KL objective above, so its objective value is no larger than that of $\pi_{old}$:
$$\mathop{\mathbb{E}}\limits_{a\sim \pi_{new}(\cdot|s)}\big[\ln \pi_{new}(a|s)-Q_{old}(s,a)\big] \leq \mathop{\mathbb{E}}\limits_{a\sim \pi_{old}(\cdot|s)}\big[\ln \pi_{old}(a|s)-Q_{old}(s,a)\big] = -V_{old}(s)$$
or equivalently, $V_{old}(s)\leq \mathbb{E}_{a\sim\pi_{new}(\cdot|s)}\big[Q_{old}(s,a)-\ln\pi_{new}(a|s)\big]$.
Proof:
$$\begin{aligned} Q_{old}(s_t,a_t) &= r_t + \gamma \mathop{\mathbb{E}}\limits_{s_{t+1}\sim p(\cdot|s_t,a_t)}\big[V_{old}(s_{t+1})\big] \\ &\leq r_t + \gamma \mathop{\mathbb{E}}\limits_{s_{t+1}\sim p(\cdot|s_t,a_t)}\Big\{\mathop{\mathbb{E}}\limits_{a_{t+1}\sim \pi_{new}(\cdot|s_{t+1})}\big[\textcolor{red}{Q_{old}(s_{t+1},a_{t+1})} - \ln \pi_{new}(a_{t+1}|s_{t+1})\big]\Big\} \\ &\leq \dots \leq \mathop{\mathbb{E}}\limits_{(s_t,a_t,\dots)\sim \pi_{new}}\big[r_{t}+\gamma r_{t+1}+\dots \mid s_t, a_t\big]=Q_{new}(s_t,a_t) \end{aligned}$$
The first inequality applies the bound on $V_{old}$ from the update rule; repeatedly expanding the red $Q_{old}$ term in the same way yields the last line (with the entropy terms folded into the rewards, per the redefinitions above), which is exactly $Q_{new}(s_t,a_t)$.
Last revised: 2023/08/21