MPI Tools

Core MPI Tools

spinup.utils.mpi_tools.mpi_avg(x)[source]

Average a scalar or vector over MPI processes.
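The behavior can be sketched as a sum-allreduce followed by division by the process count. The following is a minimal illustration assuming mpi4py; `mpi_avg_sketch` is a hypothetical name, not the library's actual implementation:

```python
import numpy as np

try:
    from mpi4py import MPI
    _comm = MPI.COMM_WORLD
except ImportError:
    _comm = None  # no MPI available: behave like a single process

def mpi_avg_sketch(x):
    """Average a scalar or vector over all MPI processes."""
    x = np.asarray(x, dtype=np.float32)
    if _comm is None or _comm.Get_size() == 1:
        return x  # single process: the average is just x
    buf = np.zeros_like(x)
    _comm.Allreduce(x, buf)  # default reduction op is SUM
    return buf / _comm.Get_size()
```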

spinup.utils.mpi_tools.mpi_fork(n, bind_to_core=False)[source]

Re-launches the current script with workers linked by MPI.

Also, terminates the original process that launched it.

Taken almost without modification from the Baselines function of the same name.

Parameters:
  • n (int) – Number of processes to split into.
  • bind_to_core (bool) – Bind each MPI process to a core.
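The relaunch-and-exit pattern can be sketched as follows. This is an assumption-laden simplification: `mpi_fork_sketch` is a hypothetical name, the `IN_MPI` guard variable and Open-MPI-style `mpirun` flags are illustrative, and the real function also does extra environment setup not shown here:

```python
import os
import subprocess
import sys

def mpi_fork_sketch(n, bind_to_core=False):
    """Relaunch the current script under mpirun with n workers, then exit."""
    if n <= 1:
        return  # single process requested: nothing to fork
    if os.getenv("IN_MPI") is None:  # guard so children don't fork again
        env = os.environ.copy()
        env["IN_MPI"] = "1"
        args = ["mpirun", "-np", str(n)]
        if bind_to_core:
            args += ["-bind-to", "core"]
        args += [sys.executable] + sys.argv
        subprocess.check_call(args, env=env)
        sys.exit()  # terminate the original process that launched it
```

Typical use is to call this once at the top of a training script, before any MPI communication happens.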
spinup.utils.mpi_tools.mpi_statistics_scalar(x, with_min_and_max=False)[source]

Get mean/std and optional min/max of scalar x across MPI processes.

Parameters:
  • x – An array containing samples of the scalar to produce statistics for.
  • with_min_and_max (bool) – If true, return min and max of x in addition to mean and std.
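The pooled statistics can be illustrated in a single-process sketch (`mpi_statistics_scalar_sketch` is a hypothetical name): under MPI, the sums and counts below would be global reductions across all processes rather than local operations.

```python
import numpy as np

def mpi_statistics_scalar_sketch(x, with_min_and_max=False):
    """Single-process sketch of the pooled mean/std computation."""
    x = np.asarray(x, dtype=np.float32)
    # Under MPI these would be reductions: sum the values and counts
    # across processes, then sum the squared deviations from the mean.
    mean = x.sum() / len(x)
    std = np.sqrt(np.sum((x - mean) ** 2) / len(x))
    if with_min_and_max:
        return mean, std, x.min(), x.max()
    return mean, std
```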
spinup.utils.mpi_tools.num_procs()[source]

Count active MPI processes.

spinup.utils.mpi_tools.proc_id()[source]

Get rank of calling process.
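A sketch of both helpers, plus a common usage pattern (distinct per-worker random seeds). The `_sketch` names are hypothetical stand-ins assuming mpi4py:

```python
try:
    from mpi4py import MPI
    _comm = MPI.COMM_WORLD
except ImportError:
    _comm = None

def proc_id_sketch():
    """Rank of the calling process (0 when not run under MPI)."""
    return 0 if _comm is None else _comm.Get_rank()

def num_procs_sketch():
    """Number of active MPI processes (1 when not run under MPI)."""
    return 1 if _comm is None else _comm.Get_size()

# Typical use: derive a distinct random seed for each worker
seed = 1000 * proc_id_sketch()
```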

MPI + Tensorflow Utilities

spinup.utils.mpi_tf contains a few tools to make it easy to use the AdamOptimizer across many MPI processes. This is a bit hacky; if you're looking for something more sophisticated and general-purpose, consider horovod.

class spinup.utils.mpi_tf.MpiAdamOptimizer(**kwargs)[source]

Adam optimizer that averages gradients across MPI processes.

The compute_gradients method is taken from Baselines MpiAdamOptimizer. For documentation on method arguments, see the Tensorflow docs page for the base AdamOptimizer.

apply_gradients(grads_and_vars, global_step=None, name=None)[source]

Same as normal apply_gradients, except sync params after update.

compute_gradients(loss, var_list, **kwargs)[source]

Same as normal compute_gradients, except average grads over processes.
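The core idea behind averaging gradients over processes can be sketched without TensorFlow: flatten all gradients into one buffer, allreduce-sum it across workers, divide by the process count, and unflatten. This is a conceptual sketch, not the optimizer's actual code; `average_gradients_sketch` is a hypothetical name.

```python
import numpy as np

def average_gradients_sketch(local_grads, comm=None):
    """Average a list of gradient arrays across MPI processes.

    Flattening into one buffer means a single allreduce call covers
    every variable, instead of one communication round per gradient.
    """
    flat = np.concatenate([g.ravel() for g in local_grads])
    if comm is not None and comm.Get_size() > 1:
        buf = np.zeros_like(flat)
        comm.Allreduce(flat, buf)  # default reduction op is SUM
        flat = buf / comm.Get_size()
    # Unflatten back into the original shapes
    out, i = [], 0
    for g in local_grads:
        out.append(flat[i:i + g.size].reshape(g.shape))
        i += g.size
    return out
```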

spinup.utils.mpi_tf.sync_all_params()[source]

Sync all tf variables across MPI processes.
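Synchronization amounts to broadcasting rank 0's parameter values to every process, so all workers start (and stay) identical after an update. A framework-free sketch, assuming mpi4py (`sync_params_sketch` is a hypothetical name):

```python
import numpy as np

try:
    from mpi4py import MPI
    _comm = MPI.COMM_WORLD
except ImportError:
    _comm = None

def sync_params_sketch(params):
    """Broadcast rank 0's parameters so every process agrees on them."""
    flat = np.concatenate([p.ravel() for p in params])
    if _comm is not None and _comm.Get_size() > 1:
        _comm.Bcast(flat, root=0)  # everyone adopts rank 0's values
    # Unflatten back into the original shapes
    out, i = [], 0
    for p in params:
        out.append(flat[i:i + p.size].reshape(p.shape))
        i += p.size
    return out
```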