In the previous post we covered several usage tips for mpi4py and gave a brief introduction to caput and its mpiutil module. Below we introduce a number of convenient, easy-to-use functions provided by mpiutil. These functions make parallel programming in Python easier and allow our programs to remain compatible with non-MPI environments with little effort.
Function interfaces
All of the functions described below also work in non-MPI environments (in which case _comm is None). When mpi4py is available, _comm is MPI.COMM_WORLD; a different communicator can also be passed in, in which case the corresponding operation is performed on that communicator.
mpilist(full_list, method='con', comm=_comm)
Splits the sequence full_list into n parts according to method (n is the size of comm, which is 1 when comm is None) and distributes them among the processes. Each process gets the same number of elements (if the list divides evenly) or numbers differing by 1 (if not, processes with smaller rank get 1 more element). The function returns the sub-sequence assigned to the calling process. When full_list has fewer elements than there are processes, the trailing processes receive no elements and return an empty sequence. method can take the following values:
- 'con': contiguous split, i.e. processes with smaller rank get the earlier elements of full_list; this is the default;
- 'alt': alternating split, i.e. the rank-0 process gets the elements at positions [0, n, 2n, ...], the rank-1 process gets those at positions [1, n+1, 2n+1, ...], and so on;
- 'rand': random split, equivalent to randomly shuffling the original sequence and then distributing it to the processes in a deterministic way.
comm can be None or a communicator object. Its default value _comm is None (when mpi4py is not available) or MPI.COMM_WORLD (when mpi4py is available); a different communicator object can also be passed in, in which case full_list is distributed among the processes contained in that communicator.
mpirange(*args, **kargs)
An MPI version of the range function. It takes the same arguments as range, plus the optional arguments method (default 'con') and comm (default _comm), whose meanings are the same as in mpilist above. The effect of this function is mpilist(range(*args), method, comm), i.e. the sequence generated by range is passed to mpilist as full_list, and each process gets back its sub-sequence.
barrier(comm=_comm)
Barrier synchronization. The parameter comm has the same meaning as in mpilist.
bcast(data, root=0, comm=_comm)
Broadcast operation: data is broadcast from the root process to all other processes. The parameter comm has the same meaning as in mpilist.
reduce(sendobj, root=0, op=None, comm=_comm)
Reduction operation: data is reduced to the root process using the operation op (the default None performs MPI.SUM). The parameter comm has the same meaning as in mpilist.
allreduce(sendobj, op=None, comm=_comm)
All-reduce operation: data is reduced across all processes using the operation op. The parameter comm has the same meaning as in mpilist.
gather_list(lst, root=None, comm=_comm)
Gathers the list lst from each process to the root process and merges them into a new list. If root is an integer, only the process with that rank receives the data and returns the merged list, while all other processes return None. If root is None, all processes gather the data (an all-gather operation) and every process returns the merged list. The parameter comm has the same meaning as in mpilist.
parallel_map(func, glist, root=None, method='con', comm=_comm)
Splits the sequence glist among the processes according to method, then applies the function func to every element of each process's sub-sequence. The return values are gathered to root and returned after being merged. If root is an integer, only the process with that rank receives the data and returns the merged list, while all other processes return None. If root is None, all processes gather the data (an all-gather operation) and every process returns the merged list. The parameters method and comm have the same meanings as in mpilist.
split_all(n, comm=_comm)
Splits a sequence of length n contiguously among the processes and returns a 3-tuple (num, start, end), where num, start and end are numpy arrays of length equal to the size of comm (1 if comm is None), giving the number of elements assigned to each process and the start and end positions of those elements in the original sequence. The parameter comm has the same meaning as in mpilist.
split_local(n, comm=_comm)
Splits a sequence of length n contiguously among the processes and returns a 3-tuple (num, start, end), where num, start and end are integers giving the number of elements assigned to the calling process and the start and end positions of those elements in the original sequence. The parameter comm has the same meaning as in mpilist.
gather_local(global_array, local_array, local_start, root=0, comm=_comm)
Gathers the numpy array local_array from each process into the root process's global_array. local_start specifies where local_array is placed within global_array: it is a tuple whose length equals the number of dimensions of global_array, each element giving the starting position along that dimension. If root is an integer, local_array is only gathered to the process with that rank, and the other processes may set global_array to None. If root is None, local_array is gathered to all processes. The parameter comm has the same meaning as in mpilist.
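gather_local is not used in the example program below, so here is a minimal sketch combining it with split_local, under the same assumptions (4 processes; the file name gather_local_demo.py is only illustrative):

# gather_local_demo.py (illustrative file name)
# Minimal sketch: each process fills its own contiguous block of a length-10
# axis, then the blocks are gathered into global_ary on rank 0.
import numpy as np
from caput import mpiutil

rank = mpiutil.rank
n = 10

num, start, end = mpiutil.split_local(n)             # this rank's block
local_ary = np.arange(start, end, dtype=np.float64)  # this rank's piece of data

# only the root needs to allocate the full array; as described above,
# the other ranks may pass None for global_array
global_ary = np.zeros(n, dtype=np.float64) if rank == 0 else None
mpiutil.gather_local(global_ary, local_ary, (start,), root=0)
if rank == 0:
    print('global_ary = %s' % global_ary)             # expected: [0. 1. ... 9.]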
gather_array(local_array, axis=0, root=0, comm=_comm)
Gathers the numpy array local_array from each process to the root process along the axis axis, merging them into one large numpy array that is returned. If root is an integer, local_array is only gathered to the process with that rank, and the other processes return None. If root is None, local_array is gathered to all processes. The parameter comm has the same meaning as in mpilist.
scatter_local(global_array, local_array, local_start, root=None, comm=_comm)
Scatters the numpy array global_array into each process's local_array. local_start specifies the starting position in global_array from which data is taken: it is a tuple whose length equals the number of dimensions of global_array, each element giving the starting position along that dimension. If root is an integer, data is scattered from the global_array of the process with that rank to all other processes, so the other processes' global_array may be None. If root is None, each process takes the corresponding data from its own global_array and places it in local_array, so global_array is usually expected to be the same on every process (though it need not be). The parameter comm has the same meaning as in mpilist.
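Likewise, scatter_local does not appear in the example program below; here is a minimal sketch of the root=None case, under the same assumptions (the file name scatter_local_demo.py is only illustrative):

# scatter_local_demo.py (illustrative file name)
# Minimal sketch: every process holds the same global_ary (the root=None case)
# and copies its own contiguous block into a preallocated local_ary.
import numpy as np
from caput import mpiutil

rank = mpiutil.rank
n = 10

global_ary = np.arange(n, dtype=np.float64)   # identical on all ranks

num, start, end = mpiutil.split_local(n)      # this rank's block
local_ary = np.zeros(num, dtype=np.float64)
mpiutil.scatter_local(global_ary, local_ary, (start,), root=None)
print('rank %d has local_ary = %s' % (rank, local_ary))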
scatter_array(global_array, axis=0, root=None, comm=_comm)
Scatters the numpy array global_array to the processes along the axis axis; each process returns the sub-array it receives. The axis axis of global_array is divided as evenly as possible; if it cannot be divided evenly, processes with smaller rank get 1 more element, and if there are not enough elements to go around, the processes with the largest ranks return empty arrays. If root is an integer, data is scattered from the global_array of the process with that rank to all other processes, so the other processes' global_array may be None. If root is None, each process takes the corresponding data from its own global_array, so global_array is usually expected to be the same on every process (though it need not be). The parameter comm has the same meaning as in mpilist.
Example program
The following example program demonstrates several of the functions introduced above.
# mpiutil_funcs.py
"""
Demonstrates the usage of mpilist, mpirange, bcast, gather_list, parallel_map,
split_all, split_local, gather_array, scatter_array.

Run this with 4 processes like:
$ mpiexec -n 4 python mpiutil_funcs.py
"""

import sys
import time
import numpy as np
from caput import mpiutil


rank = mpiutil.rank
size = mpiutil.size

sec = 5  # seconds to wait


def separator(sec, tag):
    # sleep, sync, and flush to avoid output of different parts being mixed
    time.sleep(sec)
    mpiutil.barrier()
    sys.stdout.flush()
    if rank == 0:
        print()
        print('-' * 35 + ' ' + tag + ' ' + '-' * 35)


# mpilist
separator(sec, 'mpilist')
full_list = [1, 2.5, 'a', True, (3, 4), {'x': 1}]
local_list = mpiutil.mpilist(full_list)
print("rank %d has %s with method = 'con'" % (rank, local_list))
local_list = mpiutil.mpilist(full_list, method='alt')
print("rank %d has %s with method = 'alt'" % (rank, local_list))
local_list = mpiutil.mpilist(full_list, method='rand')
print("rank %d has %s with method = 'rand'" % (rank, local_list))

# mpirange
separator(sec, 'mpirange')
local_ary = mpiutil.mpirange(1, 7)
print("rank %d has %s with method = 'con'" % (rank, local_ary))
local_ary = mpiutil.mpirange(1, 7, method='alt')
print("rank %d has %s with method = 'alt'" % (rank, local_ary))
local_ary = mpiutil.mpirange(1, 7, method='rand')
print("rank %d has %s with method = 'rand'" % (rank, local_ary))

# bcast
separator(sec, 'bcast')
if rank == 0:
    sendobj = 'obj'
else:
    sendobj = None
sendobj = mpiutil.bcast(sendobj, root=0)
print('rank %d has sendobj = %s after bcast' % (rank, sendobj))

# gather_list
separator(sec, 'gather_list')
if rank == 0:
    lst = [0.5, 2]
elif rank == 1:
    lst = ['a', False, 'xy']
elif rank == 2:
    lst = [{'x': 1}]
else:
    lst = []
lst = mpiutil.gather_list(lst, root=None)
print('rank %d has %s after gather_list' % (rank, lst))

# parallel_map
separator(sec, 'parallel_map')
glist = list(range(6))
result = mpiutil.parallel_map(lambda x: x*x, glist, root=0)
if rank == 0:
    print('result = %s' % result)

# split_all
separator(sec, 'split_all')
print('rank %d has: %s' % (rank, mpiutil.split_all(6)))

# split_local
separator(sec, 'split_local')
print('rank %d has: %s' % (rank, mpiutil.split_local(6)))

# gather_array
separator(sec, 'gather_array')
if rank == 0:
    local_ary = np.array([[0, 1], [6, 7]])
elif rank == 1:
    local_ary = np.array([[2], [8]])
elif rank == 2:
    local_ary = np.array([[3], [9]])
elif rank == 3:
    local_ary = np.array([[4, 5], [10, 11]])
global_ary = mpiutil.gather_array(local_ary, axis=1, root=0)
if rank == 0:
    print('global_ary = %s' % global_ary)

# scatter_array
separator(sec, 'scatter_array')
local_ary = mpiutil.scatter_array(global_ary, axis=1, root=0)
print('rank %d has local_ary = %s' % (rank, local_ary))
The output of the run is as follows:
$ mpiexec -n 4 python mpiutil_funcs.py
Starting MPI rank=3 [size=4]
Starting MPI rank=2 [size=4]
Starting MPI rank=0 [size=4]
Starting MPI rank=1 [size=4]
----------------------------------- mpilist -----------------------------------
rank 1 has ['a', True] with method = 'con'
rank 1 has [2.5, {'x': 1}] with method = 'alt'
rank 2 has [(3, 4)] with method = 'con'
rank 2 has ['a'] with method = 'alt'
rank 0 has [1, 2.5] with method = 'con'
rank 0 has [1, (3, 4)] with method = 'alt'
rank 3 has [{'x': 1}] with method = 'con'
rank 3 has [True] with method = 'alt'
rank 0 has [{'x': 1}, 'a'] with method = 'rand'
rank 3 has [True] with method = 'rand'
rank 1 has [2.5, (3, 4)] with method = 'rand'
rank 2 has [1] with method = 'rand'
----------------------------------- mpirange -----------------------------------
rank 1 has [3, 4] with method = 'con'
rank 1 has [2, 6] with method = 'alt'
rank 0 has [1, 2] with method = 'con'
rank 0 has [1, 5] with method = 'alt'
rank 2 has [5] with method = 'con'
rank 2 has [3] with method = 'alt'
rank 3 has [6] with method = 'con'
rank 3 has [4] with method = 'alt'
rank 3 has [3] with method = 'rand'
rank 2 has [2] with method = 'rand'
rank 0 has [5, 1] with method = 'rand'
rank 1 has [4, 6] with method = 'rand'
----------------------------------- bcast -----------------------------------
rank 1 has sendobj = obj after bcast
rank 3 has sendobj = obj after bcast
rank 2 has sendobj = obj after bcast
rank 0 has sendobj = obj after bcast
----------------------------------- gather_list -----------------------------------
rank 1 has [0.5, 2, 'a', False, 'xy', {'x': 1}] after gather_list
rank 3 has [0.5, 2, 'a', False, 'xy', {'x': 1}] after gather_list
rank 2 has [0.5, 2, 'a', False, 'xy', {'x': 1}] after gather_list
rank 0 has [0.5, 2, 'a', False, 'xy', {'x': 1}] after gather_list
----------------------------------- parallel_map -----------------------------------
result = [0, 1, 4, 9, 16, 25]
----------------------------------- split_all -----------------------------------
rank 3 has: [[2 2 1 1]
 [0 2 4 5]
 [2 4 5 6]]
rank 0 has: [[2 2 1 1]
 [0 2 4 5]
 [2 4 5 6]]
rank 2 has: [[2 2 1 1]
 [0 2 4 5]
 [2 4 5 6]]
rank 1 has: [[2 2 1 1]
 [0 2 4 5]
 [2 4 5 6]]
----------------------------------- split_local -----------------------------------
rank 1 has: [2 2 4]
rank 0 has: [2 0 2]
rank 2 has: [1 4 5]
rank 3 has: [1 5 6]
----------------------------------- gather_array -----------------------------------
global_ary = [[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]]
----------------------------------- scatter_array -----------------------------------
rank 0 has local_ary = [[0 1]
 [6 7]]
rank 1 has local_ary = [[2 3]
 [8 9]]
rank 3 has local_ary = [[ 5]
 [11]]
rank 2 has local_ary = [[ 4]
 [10]]
Above we have introduced a number of convenient, easy-to-use functions provided by mpiutil. In the next post we will introduce MPIArray, a parallel distributed array built on top of the numpy array.