MPIParallelizer
Parallelizes operations across MPI processes using mpi4py
MPICommunicator: MPICommunicator
__init__(self, root=0, comm=None, contract=None, logger=None):
get_nprocs(self):
Returns the total number of available MPI processes
get_id(self):
Returns the MPI rank (ID) of the current process
initialize(self):
Sets up the parallelizer before use
finalize(self, exc_type, exc_val, exc_tb):
Tears down the parallelizer after use; the signature mirrors the context-manager __exit__ protocol
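Since finalize takes the (exc_type, exc_val, exc_tb) triple of the context-manager protocol, a minimal usage sketch might look like this (the import path and the mpirun launch command are assumptions):

```python
# Launch under MPI, e.g.: mpirun -n 4 python script.py  (assumed invocation)
from McUtils.Parallelizers import MPIParallelizer  # assumed import path

par = MPIParallelizer(root=0)       # rank 0 acts as the main process
par.initialize()                    # set up the parallelizer's MPI state
try:
    print("process", par.get_id(), "of", par.get_nprocs())
finally:
    par.finalize(None, None, None)  # same signature as __exit__
```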
@property
comm(self):
Returns the communicator used by the parallelizer
:returns
:MPIParallelizer.MPICommunicator
@property
on_main(self):
Returns whether this process is the main (root) process
broadcast(self, data, **kwargs):
Sends the same data to all processes
data
:Any
kwargs
:Any
:returns
:_
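As a sketch of intended use (assuming par is the initialized parallelizer from the example above), broadcast lets the root process build an object once and share it with every rank:

```python
# Every rank calls broadcast; the root's value ends up everywhere.
if par.on_main:
    params = {"step": 0.1, "n": 100}  # built only on the root process
else:
    params = None
params = par.broadcast(params)        # now identical on every rank
```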
scatter(self, data, shape=None, **kwargs):
Performs a scatter of data to the different available parallelizer processes.
NOTE: unlike a raw MPI scatter, data does not need to be evenly divisible by the number of available processes
data
:Any
shape
:Any
kwargs
:Any
:returns
:_
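A sketch of the uneven case (the exact return handling is an assumption): ten items scattered over four processes, with no padding needed:

```python
# 10 items over 4 ranks: chunk sizes like 3/3/2/2 are handled for you.
data = list(range(10)) if par.on_main else None
chunk = par.scatter(data)   # each rank receives its own sublist
print("rank", par.get_id(), "holds", chunk)
```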
gather(self, data, shape=None, **kwargs):
Performs a gather of data from the different available parallelizer processes
data
:Any
shape
:Any
kwargs
:Any
:returns
:_
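Continuing the scatter sketch above (again assuming how the return value is handled), gather reassembles the per-rank pieces on the root:

```python
local = [x * x for x in chunk]  # per-rank work on the scattered chunk
total = par.gather(local)       # inverse of scatter
if par.on_main:
    print(total)                # full result, assembled on the root
```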
map(self, func, data, input_shape=None, output_shape=None, **kwargs):
Performs a parallel map of func over the held data across the different processes
func
:Any
data
:Any
input_shape
:Any
output_shape
:Any
kwargs
:Any
:returns
:_
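Conceptually, map composes scatter, per-rank application of func, and gather; a sketch under the same assumptions:

```python
def square(x):
    return x * x

# Scatter the inputs, apply square on each rank, gather the outputs.
results = par.map(square, list(range(10)))
if par.on_main:
    print(results)  # the full list of squares, collected on the root
```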
apply(self, func, *args, **kwargs):
Applies func to args in parallel on all processes. Since MPI jobs are always started with mpirun, for MPI this is just a regular function application on each process.
func
:Any
args
:Any
kwargs
:Any
:returns
:_
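Because every rank is already live under mpirun, apply reduces to calling func on each process; a sketch (the precise call semantics are an assumption):

```python
def report(tag):
    return "%s: rank %d of %d" % (tag, par.get_id(), par.get_nprocs())

# Under MPI this is simply func(*args, **kwargs) run on every rank.
print(par.apply(report, "hello"))
```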
from_config(**kw):
Constructs an MPIParallelizer from a set of configuration keyword arguments