ACSE6 L2 Non-blocking Communication

Non-blocking Communication

Introduction

The MPI_Send calls discussed earlier use the blocking communication mode. In this mode, once a blocking communication returns correctly, the following is guaranteed:

  • The communication operation has completed correctly, i.e. the message has been successfully sent or received
  • The buffer used by the communication is available again: for a send, the buffer may be updated by other operations; for a receive, the data in the buffer is complete and can be used safely
    Below is a diagram of blocking message send and receive:
    [Figure: blocking send and receive]
    In blocking communication, the receiving process must receive messages in the order in which they were sent. For example, if process 0 sends two messages to process 1, message 0 first and then message 1, then even if message 1 arrives at process 1 first, process 1 cannot receive it; message 1 can only be received after message 0 has been received.
    Unlike blocking communication, a non-blocking communication does not have to wait for the operation to complete before returning; the corresponding communication is handed over to dedicated communication hardware, and while that hardware carries out the transfer the processor can continue computing. Overlapping communication with computation in this way can greatly improve the efficiency of a program (a minimal sketch of the pattern follows the figure below). Below is a diagram of non-blocking message send and receive:
    [Figure: non-blocking send and receive]
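
To make the overlap concrete, here is a minimal sketch, not taken from the original notes, assuming the program is run with at least two ranks: the communication is started, unrelated local work is done, and MPI_Wait is called only when the buffer is actually needed.

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	int id;
	MPI_Comm_rank(MPI_COMM_WORLD, &id);

	double data = 3.14;
	MPI_Request request;

	// Start the communication: rank 0 sends to rank 1, rank 1 receives from rank 0
	if (id == 0)
		MPI_Isend(&data, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &request);
	else if (id == 1)
		MPI_Irecv(&data, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &request);

	// Computation that does not touch 'data' can overlap with the transfer
	double local = 0.0;
	for (int i = 0; i < 1000; i++) local += i * 0.5;

	// Only wait once the buffer (or the received value) is actually needed
	if (id == 0 || id == 1)
		MPI_Wait(&request, MPI_STATUS_IGNORE);

	if (id == 1)
		cout << "received " << data << " (local work gave " << local << ")" << endl;

	MPI_Finalize();
}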

Non-blocking Communication Modes

From blocking communication we already know there are four basic communication modes:

  • Standard mode
  • Buffered mode
  • Synchronous mode
  • Ready mode
    Combining non-blocking communication with these four modes gives four corresponding non-blocking variants. In addition, for communications that are executed repeatedly inside a loop, MPI provides persistent (repeated) non-blocking communication to improve efficiency further; combined with the four modes, this gives another four concrete forms (a sketch of the persistent pattern follows the figures below). The specific communication modes are shown below:
    [Figure: non-blocking communication modes]
    [Figure: persistent non-blocking communication modes]
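
As a minimal sketch of the persistent pattern, assuming ranks 0 and 1 and a standard-mode transfer of a single int: the request is built once with MPI_Send_init / MPI_Recv_init, re-activated in each iteration with MPI_Start, and released with MPI_Request_free.

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	int id;
	MPI_Comm_rank(MPI_COMM_WORLD, &id);

	int data = 0;
	MPI_Request request;

	// Build the persistent request once, outside the loop
	if (id == 0)
		MPI_Send_init(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
	else if (id == 1)
		MPI_Recv_init(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

	if (id == 0 || id == 1)
	{
		for (int iter = 0; iter < 10; iter++)
		{
			if (id == 0) data = iter;            // refresh the send buffer before starting
			MPI_Start(&request);                 // activate the persistent request
			MPI_Wait(&request, MPI_STATUS_IGNORE);
			if (id == 1) cout << "received " << data << endl;
		}
		MPI_Request_free(&request);              // release the persistent request
	}

	MPI_Finalize();
}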

Non-blocking Communication Functions

1. MPI_Isend

// Send
MPI_Isend(
    void * buf,             // starting address of the send buffer
    int count,              // number of elements to send
    MPI_Datatype datatype,  // data type of the elements being sent
    int dest,               // rank of the destination process
    int tag,                // message tag
    MPI_Comm comm,          // communicator
    MPI_Request * request   // returned non-blocking communication object (request)
)

2. MPI_Irecv

// Receive
MPI_Irecv(
    void * buf,             // starting address of the receive buffer
    int count,              // maximum number of elements to receive
    MPI_Datatype datatype,  // data type
    int source,             // rank of the source process
    int tag,                // message tag
    MPI_Comm comm,          // communicator
    MPI_Request * request   // non-blocking communication object (request)
)

The remaining three communication modes have the same form as their blocking counterparts; only the function names change, so they are not described in detail here (a sketch of the buffered-mode variant follows the prototypes below).

// Synchronous mode
MPI_Issend(void * buf, int count, MPI_Datatype datatype, int dest, int tag,
    MPI_Comm comm, MPI_Request * request)

// Buffered mode
MPI_Ibsend(void * buf, int count, MPI_Datatype datatype, int dest, int tag,
    MPI_Comm comm, MPI_Request * request)

// Ready mode
MPI_Irsend(void * buf, int count, MPI_Datatype datatype, int dest, int tag,
    MPI_Comm comm, MPI_Request * request)

3. MPI_Wait / MPI_Waitall

Unlike the blocking MPI_Send, MPI_Isend is non-blocking: when a process sends data with this function, the call returns immediately and does not wait for the data transfer to finish.
Because the return of a non-blocking call does not mean the communication has completed, MPI provides a request object, MPI_Request, to query the state of the communication. By combining MPI_Request with the functions below, we can wait for or test non-blocking communications.
Note, however, that a non-blocking call may return before the data transfer has finished, so the buffer passed to MPI_Isend must not be reused until the send has completed. For example, the following code is unsafe:

int i = 123;
MPI_Request myRequest;
MPI_Isend(&i, 1, MPI_INT, 1, MY_LITTLE_TAG, MPI_COMM_WORLD, &myRequest);
i = 234;

In the code above, the address of the variable i is passed to MPI_Isend as the send buffer address, and a single integer is sent. Immediately after the MPI_Isend call, i is modified. This is unsafe: when we change the value of i, the transfer may not yet have started or finished and the MPI library may still read the variable, so the value actually sent could be 234 instead of 123. This is a race condition, and it is very hard to debug.
Avoiding the race condition:
If a process sends data with MPI_Isend from a buffer and, at some later point in the execution, needs to reuse that buffer, it must wait until the asynchronous send has finished. MPI provides a function to check exactly that: MPI_Wait.
(1) MPI_Wait blocks the current process until the corresponding non-blocking communication has completed and only then returns. MPI_Waitall is used to wait for a list of communications to complete.

int MPI_Wait(MPI_Request *request, MPI_Status *status)
int MPI_Waitall(int count, MPI_Request *request_list, MPI_Status *status_list)
  • request is a pointer to a single request object. This must match a single MPI_Isend or MPI_Irecv
  • status is a pointer to an object which will receive the communication status. This can be MPI_STATUS_IGNORE if you do not need the status information
  • count is the number of communications that MPI_Waitall will be processing. These can be a combination of MPI_Isends and MPI_Irecvs
  • request_list is an array that contains count request objects, one for each communication
  • status_list is an array of status objects in which the status of each communication will be stored. This can be MPI_STATUSES_IGNORE if the statuses are not needed

(2) Note that MPI_Wait and MPI_Waitall block only for the specific communications involved. They will continue once their specific communications are finished, irrespective of communications involving other processes.

int i = 123;
MPI_Request myRequest;
MPI_Isend(&i, 1, MPI_INT, 1, MY_LITTLE_TAG, MPI_COMM_WORLD, &myRequest);
 
MPI_Status myStatus;
MPI_Wait(&myRequest, &myStatus);
i = 234;

With this pattern a process can still do other computation while the communication is in progress: non-blocking calls allow computation and communication to overlap, which greatly improves the efficiency of a parallel program.
(3) Below is a simple example:

#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>
using namespace std;
int id, p;

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &id);  // process rank
	MPI_Comm_size(MPI_COMM_WORLD, &p);   // number of processes
	srand(time(NULL) + id * 10);
	int tag_num = 1;
	if (id == 0)
	{
		// Array of requests, one per destination process
		MPI_Request* request = new MPI_Request[p - 1];
		int* send_data = new int[p - 1];
		// Send a single random number to each of the other processes (to send an array, pass send_data itself with a larger count)
		for (int i = 1; i < p; i++)
		{
			send_data[i - 1] = rand();
			MPI_Isend(&send_data[i - 1], 1, MPI_INT, i, tag_num, MPI_COMM_WORLD, &request[i - 1]);
			cout << send_data[i - 1] << " sent to processor " << i << endl;
			cout.flush();
		}
		// int MPI_Waitall(int count, MPI_Request *request_list, MPI_Status *status_list)
		MPI_Waitall(p - 1, request, MPI_STATUSES_IGNORE);
		// send_data keeps every value alive until MPI_Waitall confirms that all the sends have completed
		delete[] send_data;
		delete[] request;
	}
	else
	{
		int recv_data;
		MPI_Request request;
		MPI_Irecv(&recv_data, 1, MPI_INT, 0, tag_num, MPI_COMM_WORLD, &request);
		// Work that does not require the incoming data could be done here
		MPI_Wait(&request, MPI_STATUS_IGNORE);
		cout << recv_data << " received on processor " << id << endl;
		cout.flush();
	}
	MPI_Finalize();
}

/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 5 MPI.exe
15019 sent to processor 1
15019 received on processor 1
7173 sent to processor 2
21672 sent to processor 3
7173 received on processor 2
18390 sent to processor 4
21672 received on processor 3
18390 received on processor 4
*/

4. MPI_Test and MPI_Testall

(1) MPI_Test only checks whether a communication has completed; it returns immediately and does not block the current process. If the communication has completed it sets flag to true, otherwise it sets flag to false. MPI_Testall checks whether all of the given non-blocking communications have completed.

MPI_Test(
    MPI_Request * request,      // non-blocking communication object (request)
    int * flag,                 // whether the operation has completed: true if finished, false otherwise
    MPI_Status * status         // returned status
);

MPI_Testall(
    int count,                          // number of requests
    MPI_Request * array_of_requests,    // array of requests
    int * flag,                         // whether all of the requests have completed
    MPI_Status * array_of_statuses      // array of statuses
);

The flag parameter is a pointer to an integer that will be set to 1 or 0 (true or false) depending on whether the communications associated with request or request_list have completed.
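
A minimal sketch of polling a single request with MPI_Test, assuming two ranks (the poll counter is only for illustration):

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	int id;
	MPI_Comm_rank(MPI_COMM_WORLD, &id);

	int data = 7;
	if (id == 0)
	{
		MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
	}
	else if (id == 1)
	{
		MPI_Request request;
		MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

		int flag = 0, polls = 0;
		while (!flag)
		{
			// ... useful work could go here ...
			polls++;
			MPI_Test(&request, &flag, MPI_STATUS_IGNORE);
		}
		cout << "received " << data << " after " << polls << " polls" << endl;
	}

	MPI_Finalize();
}
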
(2) Exercise 2: Doing work while waiting for communications
Rewrite the earlier example in which every process communicates with every other process (i.e. each process sends data to every other process), but this time send arrays of 10,000 doubles, so that both setting up the data and the communications take longer.
Write a function that does some calculations (you can decide what this important extra work is!) and repeatedly call this function until all the communications for that process have been completed. Write to the screen how many cycles of this task were completed by each process.

#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>

using namespace std;

int id, p;
int data_size = 10000;

// Dummy work performed while waiting for the communications to complete
void Do_work(void) {
	int sum = 0;
	for (int i = 0; i < 100; i++) sum = sum + 10;
}



int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);  // process rank
	MPI_Comm_size(MPI_COMM_WORLD, &p);   // number of processes
	srand(time(NULL) + id * 10);

	int tag_num = 1;
	// Array of requests, length 2*(p-1): one receive and one send per other process
	MPI_Request* request = new MPI_Request[(p - 1) * 2];
	// 2D arrays: one send buffer and one receive buffer per process
	double** send_data = new double* [p];
	double** recv_data = new double* [p];

	// Each row holds data_size doubles
	for (int i = 0; i < p; i++) {
		send_data[i] = new double[data_size];
		recv_data[i] = new double[data_size];
	}

	int cnt = 0;

	// Post a receive from every other process
	for (int i = 0; i < p; i++)
		if (i != id)
		{
			MPI_Irecv(recv_data[i], data_size, MPI_DOUBLE, i, tag_num, MPI_COMM_WORLD, &request[cnt]);
			cnt++;
		}

	// Send data to every other process
	for (int i = 0; i < p; i++)
		if (i != id)
		{
			for (int j = 0; j < data_size; j++) {
				send_data[i][j] = (double)(i * j) / (double)p;
			}
			// Non-blocking send: send_data[i] must stay valid until the test/wait succeeds
			MPI_Isend(send_data[i], data_size, MPI_DOUBLE, i, tag_num, MPI_COMM_WORLD, &request[cnt]);
			cnt++;
		}

	int cnt_work = 0;
	int flag;
	MPI_Testall(cnt, request, &flag, MPI_STATUSES_IGNORE);
	// Keep doing useful work until all the communications have completed
	while (flag == 0) {
		Do_work();
		cnt_work++;
		MPI_Testall(cnt, request, &flag, MPI_STATUSES_IGNORE);
	}

	cout << "Process " << id << " did " << cnt_work << " cycles of work while waiting" << endl;


	delete[] request;
	for (int i = 0; i < p; i++) {
		delete[] recv_data[i];
		delete[] send_data[i];
	}

	delete[] recv_data;
	delete[] send_data;

	MPI_Finalize();
}


/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe
Process 3 did 123324 cycles of work while waiting
Process 0 did 107067 cycles of work while waiting
Process 1 did 126860 cycles of work while waiting
Process 9 did 127102 cycles of work while waiting
Process 8 did 126751 cycles of work while waiting
Process 5 did 122397 cycles of work while waiting
Process 6 did 91568 cycles of work while waiting
Process 4 did 25 cycles of work while waiting
Process 7 did 97338 cycles of work while waiting
Process 2 did 65928 cycles of work while waiting
*/

Sending from everyone to everyone

  1. Non-blocking communications like this are typically used where each process communicates with a subset of the other processes and these subsets overlap. Domain decomposition is an example of this type of problem.
#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>

using namespace std;

int id, p;

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 10);

	int tag_num = 1;

	MPI_Request* request = new MPI_Request[(p - 1) * 2];
	double* send_data = new double[p];
	double* recv_data = new double[p];

	int cnt = 0;

	for (int i = 0; i < p; i++)
		if (i != id)
		{
			MPI_Irecv(&recv_data[i], 1, MPI_DOUBLE, i, tag_num, MPI_COMM_WORLD, &request[cnt]);
			cnt++;
		}
		else recv_data[i] = 0;

	for (int i = 0; i < p; i++)
		if (i != id)
		{
			send_data[i] = (double)id / (double)p;
			MPI_Isend(&send_data[i], 1, MPI_DOUBLE, i, tag_num, MPI_COMM_WORLD, &request[cnt]);
			cnt++;
		}
		else send_data[i] = 0;

	MPI_Waitall(cnt, request, MPI_STATUSES_IGNORE);

	for (int i = 0; i < p; i++)
		cout << "Processor " << id << " recieved " << recv_data[i] << " from processor " << i << endl;

	delete[] request;
	delete[] recv_data;
	delete[] send_data;

	MPI_Finalize();
}
/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 3 MPI.exe
Processor 0 received 0 from processor 0
Processor 1 received 0 from processor 0
Processor 2 received 0 from processor 0
Processor 0 received 0.333333 from processor 1
Processor 1 received 0 from processor 1
Processor 2 received 0.333333 from processor 1
Processor 0 received 0.666667 from processor 2
Processor 2 received 0 from processor 2
Processor 1 received 0.666667 from processor 2
*/

(1) With MPI_Isend I had to store each of the data values as a separate element in an array.

  • This is because the data may be sent at any time up until MPI_Waitall is called
  • This means the data must not be changed/overwritten, or go out of scope, during that interval
  • This differs from MPI_Send: by the time MPI_Send returns, the data to be sent has already been buffered and/or actually sent
  • The same applies to MPI_Irecv, although there it is more obvious, since that is where the data is being written
    (2) You will notice that I set up the receives before the sends. This is because data can be sent as soon as there is a matching pair of a send and a receive
  • It is therefore slightly more efficient to have the receives posted and waiting for the sends
  • Remember that all the non-blocking communications are occurring in the background on a separate communications thread
  1. Exercise 1: Non-blocking communications
    Write a program in which every process must send data to two other processes, and receive data from however many processes are assigned to send to it. Each process should randomly decide which two processes it will communicate with (remember to make sure that these are different from one another and not the current process).
    Next, have a set of communications between every process that lets each process tell the others whether it will be communicating with them (you can send a single boolean variable from every process to every other process).
    Once the processes know who they will be talking to, do 10 rounds of communication between these processes (each process should be sending data to 2 processes, but each will be receiving data from a different number of processes, determined in the previous step). You can decide what data to send, but write something to stdout to show the communications taking place. Remember to increment the tag between communication rounds to prevent any race conditions.
#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>

using namespace std;

int id, p;

bool* send_status, * recv_status;

int tagnum = 0;
int send_procs[2];
int* recv_procs;
int num_recv_procs = 0;

void Setup_Comms(void) {
	MPI_Request* request_list = new MPI_Request[(p - 1) * 2];

	// Choose send_procs[0] and send_procs[1]: two distinct random processes, neither equal to this process
	while ((send_procs[0] = rand() % p) == id);
	while ((send_procs[1] = rand() % p) == id || send_procs[0] == send_procs[1]);

	send_status = new bool[p];
	for (int i = 0; i < p; i++) {
		send_status[i] = false;
	}

	send_status[send_procs[0]] = true;
	send_status[send_procs[1]] = true;

	recv_status = new bool[p];
	int cnt = 0;

	for (int i = 0; i < p; i++) {
		if (i != id) {
			MPI_Irecv(&recv_status[i], 1, MPI_C_BOOL, i, tagnum, MPI_COMM_WORLD, &request_list[cnt]);
			cnt++;
			MPI_Isend(&send_status[i], 1, MPI_C_BOOL, i, tagnum, MPI_COMM_WORLD, &request_list[cnt]);
			cnt++;
		}
		else {
			recv_status[i] = false;
		}
	}
	MPI_Waitall(cnt, request_list, MPI_STATUSES_IGNORE);
	tagnum++;



	cout << "Process " << id << " sending: ";
	for (int i = 0; i < p; i++) {
		if (send_status[i]) {
			cout << i << "\t";
		}
	}
	cout << "\t receivinng: ";

	for (int i = 0; i < p; i++) {
		if (recv_status[i]) {
			cout << i << "\t";
			num_recv_procs++;
		}
	}
	cout << endl;
	cout.flush();

	cnt = 0;
	recv_procs = new int[num_recv_procs];
	for (int i = 0; i < p; i++) {
		if (recv_status[i]) {
			recv_procs[cnt] = i;
			cnt++;
		}
	}
	delete[] request_list;
	
}
/*
Output of just this function:
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe
Process 3 sending: 2    4                receiving:
Process 5 sending: 2    6                receiving: 0  2       8
Process 7 sending: 8    9                receiving: 1  6
Process 2 sending: 5    8                receiving: 0  3       5       6
Process 4 sending: 8    9                receiving: 3
Process 8 sending: 5    6                receiving: 2  4       7
Process 6 sending: 2    7                receiving: 5  8
Process 1 sending: 7    9                receiving: 9
Process 9 sending: 0    1                receiving: 1  4       7
Process 0 sending: 2    5                receiving: 9
*/



void do_comms(void) {
	MPI_Request* request_list;
	request_list = new MPI_Request[num_recv_procs + 2];
	double send_data[2];
	double* recv_data = new double[num_recv_procs];

	int cnt = 0;
	for (int i = 0; i < num_recv_procs; i++) {
		MPI_Irecv(&recv_data[i], 1, MPI_DOUBLE, recv_procs[i], tagnum, MPI_COMM_WORLD, &request_list[cnt]);
		cnt++;
	}

	for (int i = 0; i < 2; i++) {
		send_data[i] = id * (i + 1) / 100;
		MPI_Isend(&send_data[i], 1, MPI_DOUBLE, send_procs[i], tagnum, MPI_COMM_WORLD, &request_list[cnt]);
		cnt++;
	}

	MPI_Waitall(cnt, request_list, MPI_STATUSES_IGNORE);
	tagnum++;

	delete[] recv_data;
	delete[] request_list;
}



int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 100);

	Setup_Comms();


	for (int i = 0; i < 10; i++) {
		do_comms();
	}

	delete[] send_status;
	delete[] recv_status;
	delete[] recv_procs;
	MPI_Finalize();
}
  1. Exercise 3
    Divide your p processes into a grid of size m x n (try to make m and n integers that are as close to each other as possible; for example, if p is 9 then m and n should both be 3, while if p is 12 one should be 3 and the other 4). On this grid calculate an i and j index for each process such that id = i + m*j.
    Every process should communicate with the processes next to it vertically, horizontally and diagonally (i.e. a process in the middle of the grid communicates with 8 neighbours, a process on an edge with 5 neighbours, and a process in a corner with 3 neighbours). Send the neighbours the source's id as well as its i and j coordinates and display these.
    Communication patterns like this are very common; you are likely to encounter them when doing peer-to-peer communication for a domain decomposition problem (more on this later).
#include <mpi.h>
#include <iostream>
#include <cmath>    // sqrt
#include <cstdlib>  // abs

using namespace std;

int id, p, tag_num = 1;

// Split the p processes into a grid of rows x columns
void find_dimensions(int p, int& rows, int& columns)		//A bit brute force - this can definitely be made more efficient!
{
	int min_gap = p;
	int top = sqrt(p) + 1;
	for (int i = 1; i <= top; i++)
	{
		if (p % i == 0)
		{
			int gap = abs(p / i - i);

			if (gap < min_gap)
			{
				min_gap = gap;
				rows = i;
				columns = p / i;
			}
		}
	}

	if (id == 0)
		cout << "Divide " << p << " into " << rows << " by " << columns << " grid" << endl;
}

int rows, columns;
int id_row, id_column;

// Convert a process id into its (row, column) coordinates on the grid
void id_to_index(int id, int& id_row, int& id_column)
{
	id_column = id % columns;
	id_row = id / columns;
}

// Convert (row, column) coordinates back to a process id (returns -1 if outside the grid)
int id_from_index(int id_row, int id_column)
{
	if (id_row >= rows || id_row < 0)
		return -1;
	if (id_column >= columns || id_column < 0)
		return -1;

	return id_row * columns + id_column;
}


int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);  // process rank
	MPI_Comm_size(MPI_COMM_WORLD, &p);   // number of processes

	// Arrange the p processes into a rows x columns grid
	find_dimensions(p, rows, columns);

	// Convert this process's id to its index in the grid
	id_to_index(id, id_row, id_column);

	// Flat arrays: up to 8 neighbours, 3 integers each (id, row, column)
	int* data_to_send = new int[8 * 3];
	int* data_to_recv = new int[8 * 3];

	int cnt = 0;

	// Twice 8 requests: one MPI_Isend and one MPI_Irecv per neighbour
	MPI_Request* request = new MPI_Request[8 * 2];

	// Loop over the eight surrounding neighbours: row/column offsets of -1, 0 and 1
	for (int i = -1; i <= 1; i++)
		for (int j = -1; j <= 1; j++)
		{
			int com_i = id_row + i;
			int com_j = id_column + j;

			// Get the id corresponding to this grid position (-1 if it is off the grid)
			int com_id = id_from_index(com_i, com_j);

			if (com_id != id && com_id >= 0 && com_id < p)
			{
				data_to_send[cnt * 3] = id;
				data_to_send[cnt * 3 + 1] = id_row;
				data_to_send[cnt * 3 + 2] = id_column;

				// Non-blocking send and receive; execution continues immediately
				// &data_to_send[cnt * 3] points at three consecutive ints in the flat array
				MPI_Isend(&data_to_send[cnt * 3], 3, MPI_INT, com_id, tag_num, MPI_COMM_WORLD, &request[cnt * 2]);
				MPI_Irecv(&data_to_recv[cnt * 3], 3, MPI_INT, com_id, tag_num, MPI_COMM_WORLD, &request[cnt * 2 + 1]);
				cnt++;
			}
		}
	// Wait until all of the sends and receives have completed before continuing
	MPI_Waitall(cnt * 2, request, MPI_STATUSES_IGNORE);

	// Print: my id, then the neighbour's id and the neighbour's grid coordinates
	for (int i = 0; i < cnt; i++)
	{
		cout << id << " from " << data_to_recv[i * 3] << " ( " << data_to_recv[i * 3 + 1] << ", " << data_to_recv[i * 3 + 2] << ")" << endl;
	}

	MPI_Finalize();

	delete[] data_to_send;
	delete[] data_to_recv;
	delete[] request;
}
/*
PS D:\桌面\C++ Assi\AMPI\x64\Debug> mpiexec -n 9 AMPI.exe
Divide 9 into 3 by 3 grid
0 from 1 ( 0, 1)
0 from 3 ( 1, 0)
2 from 1 ( 0, 1)
0 from 4 ( 1, 1)
2 from 4 ( 1, 1)
2 from 5 ( 1, 2)
3 from 0 ( 0, 0)
1 from 0 ( 0, 0)
5 from 1 ( 0, 1)
3 from 1 ( 0, 1)
1 from 2 ( 0, 2)
5 from 2 ( 0, 2)
3 from 4 ( 1, 1)
1 from 3 ( 1, 0)
3 from 6 ( 2, 0)
5 from 4 ( 1, 1)
1 from 4 ( 1, 1)
3 from 7 ( 2, 1)
5 from 7 ( 2, 1)
7 from 3 ( 1, 0)
1 from 5 ( 1, 2)
5 from 8 ( 2, 2)
7 from 4 ( 1, 1)
6 from 3 ( 1, 0)
7 from 5 ( 1, 2)
7 from 6 ( 2, 0)
7 from 8 ( 2, 2)
8 from 4 ( 1, 1)
6 from 4 ( 1, 1)
4 from 0 ( 0, 0)
8 from 5 ( 1, 2)
4 from 1 ( 0, 1)
8 from 7 ( 2, 1)
4 from 2 ( 0, 2)
6 from 7 ( 2, 1)
4 from 3 ( 1, 0)
4 from 5 ( 1, 2)
4 from 6 ( 2, 0)
4 from 7 ( 2, 1)
4 from 8 ( 2, 2)

The 3x3 grid:
0 1 2
3 4 5
6 7 8
We can see that process 0 receives data from processes 1, 3 and 4,
along with each neighbour's coordinates in the grid.
*/