Soft Actor-Critic
Background
(Previously: Background for TD3)
Soft Actor Critic (SAC) is an algorithm which optimizes a stochastic policy in an off-policy way, forming a bridge between stochastic policy optimization and DDPG-style approaches. It isn’t a direct successor to TD3 (having been published roughly concurrently), but it incorporates the clipped double-Q trick, and due to the inherent stochasticity of the policy in SAC, it also winds up benefiting from something like target policy smoothing.
A central feature of SAC is entropy regularization. The policy is trained to maximize a trade-off between expected return and entropy, a measure of randomness in the policy. This has a close connection to the exploration-exploitation trade-off: increasing entropy results in more exploration, which can accelerate learning later on. It can also prevent the policy from prematurely converging to a bad local optimum.
Quick Facts
- SAC is an off-policy algorithm.
- The version of SAC implemented here can only be used for environments with continuous action spaces.
- An alternate version of SAC, which slightly changes the policy update rule, can be implemented to handle discrete action spaces.
- The Spinning Up implementation of SAC does not support parallelization.
Key Equations
To explain Soft Actor Critic, we first have to introduce the entropy-regularized reinforcement learning setting. In entropy-regularized RL, there are slightly-different equations for value functions.
Entropy-Regularized Reinforcement Learning
Entropy is a quantity which, roughly speaking, says how random a random variable is. If a coin is weighted so that it almost always comes up heads, it has low entropy; if it’s evenly weighted and has a half chance of either outcome, it has high entropy.
Let $x$ be a random variable with probability mass or density function $P$. The entropy $H$ of $x$ is computed from its distribution $P$ according to

$$H(P) = \underset{x \sim P}{\mathbb{E}}\left[-\log P(x)\right].$$
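As a quick numerical check of this definition, here is a small NumPy snippet (a hypothetical helper, not part of Spinning Up) computing the entropy of a fair coin versus a heavily weighted one:

```python
import numpy as np

def entropy(probs):
    """Entropy H(P) = E_{x~P}[-log P(x)] for a discrete distribution."""
    probs = np.asarray(probs)
    return -np.sum(probs * np.log(probs))

print(entropy([0.5, 0.5]))    # fair coin: log(2) ~= 0.693 nats (high entropy)
print(entropy([0.99, 0.01]))  # heavily weighted coin: ~= 0.056 nats (low entropy)
```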
In entropy-regularized reinforcement learning, the agent gets a bonus reward at each time step proportional to the entropy of the policy at that timestep. This changes the RL problem to:
$$\pi^* = \arg \max_{\pi} \; \underset{\tau \sim \pi}{\mathbb{E}}\left[ \sum_{t=0}^{\infty} \gamma^t \Big( R(s_t, a_t, s_{t+1}) + \alpha H\big(\pi(\cdot|s_t)\big) \Big)\right],$$

where $\alpha > 0$ is the trade-off coefficient. (Note: we're assuming an infinite-horizon discounted setting here, and we'll do the same for the rest of this page.) We can now define the slightly-different value functions in this setting.
$V^{\pi}$ is changed to include the entropy bonuses from every timestep:

$$V^{\pi}(s) = \underset{\tau \sim \pi}{\mathbb{E}}\left[ \left. \sum_{t=0}^{\infty} \gamma^t \Big( R(s_t, a_t, s_{t+1}) + \alpha H\big(\pi(\cdot|s_t)\big) \Big) \right| s_0 = s \right]$$

$Q^{\pi}$ is changed to include the entropy bonuses from every timestep except the first:

$$Q^{\pi}(s,a) = \underset{\tau \sim \pi}{\mathbb{E}}\left[ \left. \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) + \alpha \sum_{t=1}^{\infty} \gamma^t H\big(\pi(\cdot|s_t)\big) \right| s_0 = s, a_0 = a \right]$$

With these definitions, $V^{\pi}$ and $Q^{\pi}$ are connected by:

$$V^{\pi}(s) = \underset{a \sim \pi}{\mathbb{E}}\big[Q^{\pi}(s,a)\big] + \alpha H\big(\pi(\cdot|s)\big)$$

and the Bellman equation for $Q^{\pi}$ is

$$\begin{aligned}
Q^{\pi}(s,a) &= \underset{\substack{s' \sim P \\ a' \sim \pi}}{\mathbb{E}}\left[ R(s,a,s') + \gamma\Big(Q^{\pi}(s',a') + \alpha H\big(\pi(\cdot|s')\big) \Big)\right] \\
&= \underset{s' \sim P}{\mathbb{E}}\left[ R(s,a,s') + \gamma V^{\pi}(s')\right].
\end{aligned}$$
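To make the first form of this backup concrete, here is a hedged PyTorch-style sketch of a single-sample estimate (the callables `q_net` and `policy` are illustrative, not the Spinning Up Tensorflow code; the entropy term is estimated by $-\log \pi(a'|s')$):

```python
import torch

def soft_bellman_backup(r, s2, q_net, policy, gamma=0.99, alpha=0.2):
    """One sample of the entropy-regularized Bellman backup for Q^pi:
    r + gamma * (Q(s', a') + alpha * H(pi(.|s'))), with a' ~ pi(.|s')
    and the entropy bonus estimated as -log pi(a'|s')."""
    a2, logp2 = policy(s2)                     # a' ~ pi(.|s'), plus log pi(a'|s')
    return r + gamma * (q_net(s2, a2) - alpha * logp2)
```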
You Should Know
The way we've set up the value functions in the entropy-regularized setting is a little bit arbitrary, and actually we could have done it differently (e.g. make $Q^{\pi}$ include the entropy bonus at the first timestep). The choice of definition may vary slightly across papers on the subject.
Soft Actor-Critic
SAC concurrently learns a policy $\pi_{\theta}$, two Q-functions $Q_{\phi_1}, Q_{\phi_2}$, and a value function $V_{\psi}$.
**Learning Q.** The Q-functions are learned by MSBE minimization, using a target value network to form the Bellman backups. They both use the same target, like in TD3, and have loss functions:

$$L(\phi_i, {\mathcal D}) = \underset{(s,a,r,s',d) \sim {\mathcal D}}{\mathbb{E}}\left[ \Big( Q_{\phi_i}(s,a) - \big(r + \gamma (1 - d) V_{\psi_{\text{targ}}}(s') \big) \Big)^2 \right].$$

The target value network $V_{\psi_{\text{targ}}}$, like the target networks in DDPG and TD3, is obtained by polyak averaging the value network parameters over the course of training.
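A minimal PyTorch-style sketch of this loss (not the Spinning Up Tensorflow implementation; the network names and the batch dictionary keys are illustrative):

```python
import torch

def q_loss(q_net, v_targ_net, batch, gamma=0.99):
    """MSBE loss for one Q-function; the Bellman backup uses the target value network."""
    s, a, r, s2, d = batch['obs'], batch['act'], batch['rew'], batch['obs2'], batch['done']
    with torch.no_grad():
        backup = r + gamma * (1 - d) * v_targ_net(s2)   # r + gamma * (1 - d) * V_targ(s')
    return ((q_net(s, a) - backup) ** 2).mean()
```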
**Learning V.** The value function is learned by exploiting (a sample-based approximation of) the connection between $V^{\pi}$ and $Q^{\pi}$. Before we go into the learning rule, let's first rewrite the connection equation by using the definition of entropy to obtain:

$$V^{\pi}(s) = \underset{a \sim \pi}{\mathbb{E}}\big[Q^{\pi}(s,a)\big] + \alpha H\big(\pi(\cdot|s)\big) = \underset{a \sim \pi}{\mathbb{E}}\big[Q^{\pi}(s,a) - \alpha \log \pi(a|s)\big].$$
The RHS is an expectation over actions, so we can approximate it by sampling from the policy:

$$V^{\pi}(s) \approx Q^{\pi}(s,\tilde{a}) - \alpha \log \pi(\tilde{a}|s), \qquad \tilde{a} \sim \pi(\cdot|s).$$
SAC sets up a mean-squared-error loss for $V_{\psi}$ based on this approximation. But what Q-value do we use? SAC uses clipped double-Q like TD3 for learning the value function, and takes the minimum Q-value between the two approximators. So the SAC loss for value function parameters is:

$$L(\psi, {\mathcal D}) = \underset{s \sim \mathcal{D},\; \tilde{a} \sim \pi_{\theta}}{\mathbb{E}}\left[ \Big(V_{\psi}(s) - \big(\min_{i=1,2} Q_{\phi_i}(s,\tilde{a}) - \alpha \log \pi_{\theta}(\tilde{a}|s) \big)\Big)^2 \right].$$
Importantly, we do not use actions from the replay buffer here: these actions are sampled fresh from the current version of the policy.
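A hedged PyTorch-style sketch of this loss; `policy(obs)` is assumed to return a freshly sampled (reparameterized) action and its log-probability, as in the policy sketch further below:

```python
import torch

def v_loss(v_net, q1_net, q2_net, policy, obs, alpha=0.2):
    """Value loss: regress V(s) toward min_i Q_i(s, a~) - alpha * log pi(a~|s),
    where a~ is sampled fresh from the current policy (not taken from the replay buffer)."""
    a_tilde, logp = policy(obs)               # fresh actions + log-probs from the current policy
    with torch.no_grad():
        target = torch.min(q1_net(obs, a_tilde), q2_net(obs, a_tilde)) - alpha * logp
    return ((v_net(obs) - target) ** 2).mean()
```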
**Learning the Policy.** The policy should, in each state, act to maximize the expected future return plus expected future entropy. That is, it should maximize $V^{\pi}(s)$, which we expand out (as before) into

$$\underset{a \sim \pi}{\mathbb{E}}\big[Q^{\pi}(s,a)\big] + \alpha H\big(\pi(\cdot|s)\big) = \underset{a \sim \pi}{\mathbb{E}}\big[Q^{\pi}(s,a) - \alpha \log \pi(a|s)\big].$$
The way we optimize the policy makes use of the reparameterization trick, in which a sample from $\pi_{\theta}(\cdot|s)$ is drawn by computing a deterministic function of state, policy parameters, and independent noise. To illustrate: following the authors of the SAC paper, we use a squashed Gaussian policy, which means that samples are obtained according to

$$\tilde{a}_{\theta}(s, \xi) = \tanh\big( \mu_{\theta}(s) + \sigma_{\theta}(s) \odot \xi \big), \qquad \xi \sim \mathcal{N}(0, I).$$
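A minimal PyTorch-style sketch of such a policy (not the Spinning Up Tensorflow implementation; the layer sizes and the log-std clamp range are common but illustrative choices):

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class SquashedGaussianPolicy(nn.Module):
    """Sketch of a SAC-style policy: the network outputs both the mean and a
    state-dependent log std, samples are reparameterized, and log-probs are
    corrected for the tanh squashing."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu_layer = nn.Linear(hidden, act_dim)
        self.log_std_layer = nn.Linear(hidden, act_dim)   # log std depends on the state

    def forward(self, obs):
        h = self.body(obs)
        mu = self.mu_layer(h)
        log_std = torch.clamp(self.log_std_layer(h), -20, 2)  # clamp for numerical stability
        dist = Normal(mu, torch.exp(log_std))
        u = dist.rsample()                                # reparameterized pre-squash sample
        a = torch.tanh(u)                                 # squash to (-1, 1)
        # Change-of-variables correction for the tanh (see the SAC paper appendix):
        logp = dist.log_prob(u).sum(-1) - torch.log(1 - a.pow(2) + 1e-6).sum(-1)
        return a, logp
```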
You Should Know
This policy has two key differences from the policies we use in the other policy optimization algorithms:
1. The squashing function. The $\tanh$ in the SAC policy ensures that actions are bounded to a finite range. This is absent in the VPG, TRPO, and PPO policies. It also changes the distribution: before the $\tanh$ the SAC policy is a factored Gaussian like the other algorithms' policies, but after the $\tanh$ it is not. (You can still compute the log-probabilities of actions in closed form, though: see the paper appendix for details.)
2. The way standard deviations are parameterized. In VPG, TRPO, and PPO, we represent the log std devs with state-independent parameter vectors. In SAC, we represent the log std devs as outputs from the neural network, meaning that they depend on state in a complex way. SAC with state-independent log std devs, in our experience, did not work. (Can you think of why? Or better yet: run an experiment to verify?)
The reparameterization trick allows us to rewrite the expectation over actions (which contains a pain point: the distribution depends on the policy parameters) into an expectation over noise (which removes the pain point: the distribution now has no dependence on parameters):

$$\underset{a \sim \pi_{\theta}}{\mathbb{E}}\big[Q^{\pi_{\theta}}(s,a) - \alpha \log \pi_{\theta}(a|s)\big] = \underset{\xi \sim \mathcal{N}}{\mathbb{E}}\big[Q^{\pi_{\theta}}(s,\tilde{a}_{\theta}(s,\xi)) - \alpha \log \pi_{\theta}(\tilde{a}_{\theta}(s,\xi)|s)\big].$$
To get the policy loss, the final step is that we need to substitute $Q^{\pi_{\theta}}$ with one of our function approximators. The same as in TD3, we use $Q_{\phi_1}$. The policy is thus optimized according to

$$\max_{\theta} \; \underset{s \sim \mathcal{D},\; \xi \sim \mathcal{N}}{\mathbb{E}}\big[Q_{\phi_1}(s,\tilde{a}_{\theta}(s,\xi)) - \alpha \log \pi_{\theta}(\tilde{a}_{\theta}(s,\xi)|s)\big],$$

which is almost the same as the DDPG and TD3 policy optimization, except for the stochasticity and entropy term.
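A hedged PyTorch-style sketch of this objective as a loss to minimize (the names are illustrative; `policy` is assumed to return a reparameterized action and its log-probability, keeping gradients with respect to the policy parameters):

```python
import torch

def policy_loss(policy, q1_net, obs, alpha=0.2):
    """Policy update: maximize Q1(s, a~) - alpha * log pi(a~|s) over reparameterized
    actions; the negation is returned so it can be minimized with a standard optimizer."""
    a_tilde, logp = policy(obs)               # reparameterized sample + log-prob
    return (alpha * logp - q1_net(obs, a_tilde)).mean()
```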
Exploration vs. Exploitation
SAC trains a stochastic policy with entropy regularization, and explores in an on-policy way. The entropy regularization coefficient $\alpha$ explicitly controls the explore-exploit tradeoff, with higher $\alpha$ corresponding to more exploration, and lower $\alpha$ corresponding to more exploitation. The right coefficient (the one which leads to the stablest / highest-reward learning) may vary from environment to environment, and could require careful tuning.
At test time, to see how well the policy exploits what it has learned, we remove stochasticity and use the mean action instead of a sample from the distribution. This tends to improve performance over the original stochastic policy.
You Should Know
Our SAC implementation uses a trick to improve exploration at the start of training. For a fixed number of steps at the beginning (set with the start_steps
keyword argument), the agent takes actions which are sampled from a uniform random distribution over valid actions. After that, it returns to normal SAC exploration.
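A hedged sketch of how these two behaviors fit together (uniform-random exploration for the first start_steps steps of training, and the mean action at test time); `sample_action` and `mean_action` are hypothetical callables wrapping the policy outputs:

```python
def get_action(env, obs, t, sample_action, mean_action,
               start_steps=10000, deterministic=False):
    """Action selection sketch: uniform-random actions for the first start_steps steps
    of training, stochastic policy samples afterwards, and the mean action at test time."""
    if deterministic:
        return mean_action(obs)              # test time: exploit with the mean action
    if t < start_steps:
        return env.action_space.sample()     # early training: uniform-random exploration
    return sample_action(obs)                # normal SAC exploration
```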
Documentation
spinup.sac(env_fn, actor_critic=<function mlp_actor_critic>, ac_kwargs={}, seed=0, steps_per_epoch=5000, epochs=100, replay_size=1000000, gamma=0.99, polyak=0.995, lr=0.001, alpha=0.2, batch_size=100, start_steps=10000, max_ep_len=1000, logger_kwargs={}, save_freq=1) [source]

Parameters:

- env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
- actor_critic – A function which takes in placeholder symbols for state, `x_ph`, and action, `a_ph`, and returns the main outputs from the agent's Tensorflow computation graph:

  | Symbol | Shape | Description |
  |---|---|---|
  | `mu` | (batch, act_dim) | Computes mean actions from policy given states. |
  | `pi` | (batch, act_dim) | Samples actions from policy given states. |
  | `logp_pi` | (batch,) | Gives log probability, according to the policy, of the action sampled by `pi`. Critical: must be differentiable with respect to policy parameters all the way through action sampling. |
  | `q1` | (batch,) | Gives one estimate of Q* for states in `x_ph` and actions in `a_ph`. |
  | `q2` | (batch,) | Gives another estimate of Q* for states in `x_ph` and actions in `a_ph`. |
  | `q1_pi` | (batch,) | Gives the composition of `q1` and `pi` for states in `x_ph`: q1(x, pi(x)). |
  | `q2_pi` | (batch,) | Gives the composition of `q2` and `pi` for states in `x_ph`: q2(x, pi(x)). |
  | `v` | (batch,) | Gives the value estimate for states in `x_ph`. |
. - ac_kwargs (dict) – Any kwargs appropriate for the actor_critic function you provided to SAC.
- seed (int) – Seed for random number generators.
- steps_per_epoch (int) – Number of steps of interaction (state-action pairs) for the agent and the environment in each epoch.
- epochs (int) – Number of epochs to run and train agent.
- replay_size (int) – Maximum length of replay buffer.
- gamma (float) – Discount factor. (Always between 0 and 1.)
- polyak (float) – Interpolation factor in polyak averaging for target networks. Target networks are updated towards main networks according to

  $$\theta_{\text{targ}} \leftarrow \rho \theta_{\text{targ}} + (1-\rho) \theta,$$

  where $\rho$ is polyak. (Always between 0 and 1, usually close to 1.)
- lr (float) – Learning rate (used for both policy and value learning).
- alpha (float) – Entropy regularization coefficient. (Equivalent to inverse of reward scale in the original SAC paper.)
- batch_size (int) – Minibatch size for SGD.
- start_steps (int) – Number of steps for uniform-random action selection, before running real policy. Helps exploration.
- max_ep_len (int) – Maximum length of trajectory / episode / rollout.
- logger_kwargs (dict) – Keyword args for EpochLogger.
- save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
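A minimal usage sketch of the call above (the environment choice, network sizes, and logger paths are illustrative; `hidden_sizes` is the standard kwarg for the default mlp_actor_critic):

```python
import gym
from spinup import sac

# Train SAC on a continuous-action Gym environment (environment choice is illustrative).
env_fn = lambda: gym.make('Pendulum-v0')

sac(env_fn,
    ac_kwargs=dict(hidden_sizes=(256, 256)),    # passed through to the actor_critic function
    gamma=0.99,
    alpha=0.2,                                  # entropy regularization coefficient
    steps_per_epoch=5000,
    epochs=50,
    logger_kwargs=dict(output_dir='/tmp/sac_test', exp_name='sac_test'))
```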
Saved Model Contents
The computation graph saved by the logger includes:
| Key | Value |
|---|---|
| `x` | Tensorflow placeholder for state input. |
| `a` | Tensorflow placeholder for action input. |
| `mu` | Deterministically computes mean action from the agent, given states in `x`. |
| `pi` | Samples an action from the agent, conditioned on states in `x`. |
| `q1` | Gives one action-value estimate for states in `x` and actions in `a`. |
| `q2` | Gives the other action-value estimate for states in `x` and actions in `a`. |
| `v` | Gives the value estimate for states in `x`. |
This saved model can be accessed either by

- running the trained policy with the test_policy.py tool,
- or loading the whole saved graph into a program with restore_tf_graph.

Note: for SAC, the correct evaluation policy is given by `mu` and not by `pi`. The policy `pi` may be thought of as the exploration policy, while `mu` is the exploitation policy.
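For the second option, a hedged sketch of loading the saved graph and acting with `mu` at evaluation time (this assumes the TF1-era Spinning Up; the save path is illustrative and follows the simple_save directory convention used by the logger):

```python
import tensorflow as tf
from spinup.utils.logx import restore_tf_graph

# Load the saved graph and evaluate with the deterministic policy `mu`.
# Point the path at your experiment's saved model directory.
sess = tf.Session()
model = restore_tf_graph(sess, '/tmp/sac_test/simple_save')
x, mu = model['x'], model['mu']

def exploit_action(obs):
    """Evaluation-time action: run `mu` (not `pi`) on a single observation."""
    return sess.run(mu, feed_dict={x: obs.reshape(1, -1)})[0]
```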