ACSE6 L4 Datatypes

Derived Datatypes

Why use derived datatypes
(1) When using MPI, it is best to perform as little communication as possible. Try to send all the required data in a single large communication rather than in several smaller ones.
(2) This is easy if the data is contiguous and all of the same type (e.g. an array). Often, however, you want to send non-contiguous information and/or variables of mixed type (e.g. only some of the member variables of an object). We can achieve this by building our own MPI datatype that contains many simple MPI types (or even other types we have created ourselves).

1. Functions

1.1 MPI_Type_commit/MPI_Type_free

  1. Before a newly defined datatype can be used, it must first be committed to the MPI system with MPI_Type_commit. The prototype is:
int MPI_Type_commit(MPI_Datatype * datatype)
  2. To release a datatype that has already been committed, use MPI_Type_free. The prototype is:
int MPI_Type_free(MPI_Datatype * datatype)

1.2 MPI_Get_address

  1. If the function that sends data knows the types of the data items and their relative locations in memory, it can gather the items together before they are sent. Similarly, the function that receives data can distribute the items to their correct destination addresses in memory once they have been received.
    Formally, a derived datatype consists of a sequence of basic MPI datatypes together with a displacement for each one. For example, given:

Variable   Address
a          24
b          40
c          48

the following derived datatype describes these data items: {(MPI_DOUBLE, 0), (MPI_DOUBLE, 16), (MPI_INT, 24)}. The first element of each pair is the datatype; the second is that item's displacement from the starting position. Taking a as the start, a's displacement is 0, b's is 40 - 24 = 16, and c's is 48 - 24 = 24. All the displacements we compute below are measured from the starting position.

  2. The third argument of MPI_Type_create_struct, array_of_displacements, specifies the displacement of each item from the start of the message, in bytes. So here array_of_displacements[] = {0, 16, 24}. To find these values we can use MPI_Get_address; the prototype is:
int MPI_Get_address(
    void * location_p,    // in
    MPI_Aint *address   // out
)

It returns the address of the memory location that location_p points to. The special type MPI_Aint is an integer type long enough to hold an address on the system. To obtain the values of array_of_displacements, we can therefore use the following code:

    MPI_Aint a_addr, b_addr, n_addr;
    MPI_Aint array_of_displacements[3];
    MPI_Get_address(&a, &a_addr);
    array_of_displacements[0] = 0;
    MPI_Get_address(&b, &b_addr);
    array_of_displacements[1] = b_addr - a_addr;
    MPI_Get_address(&n, &n_addr);
    array_of_displacements[2] = n_addr - a_addr;

1.3 MPI_Type_create_struct

Creates a derived datatype made up of elements of different basic datatypes.

int MPI_Type_create_struct(
    int           count,                     // in: number of blocks in the datatype (e.g. double, double, int -> 3)
    int           array_of_blocklengths[],   // in: number of elements in each block (e.g. {1, 1, 1})
    MPI_Aint      array_of_displacements[],  // in: displacement of each block from the start of the message (e.g. {0, 16, 24})
    MPI_Datatype  array_of_types[],          // in: type of each block (e.g. {MPI_DOUBLE, MPI_DOUBLE, MPI_INT})
    MPI_Datatype* new_type_p                 // out: the new datatype (e.g. MPI_Datatype input_mpi_t)
)
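
Putting the pieces together, here is a minimal sketch of the full lifecycle (get the addresses, compute the displacements, create, commit, use, free), assuming double a, b and int n as in the example above:

double a, b;
int n;

MPI_Aint a_addr, b_addr, n_addr;
int array_of_blocklengths[3] = { 1, 1, 1 };
MPI_Aint array_of_displacements[3];
MPI_Datatype array_of_types[3] = { MPI_DOUBLE, MPI_DOUBLE, MPI_INT };
MPI_Datatype input_mpi_t;

MPI_Get_address(&a, &a_addr);
MPI_Get_address(&b, &b_addr);
MPI_Get_address(&n, &n_addr);
array_of_displacements[0] = 0;
array_of_displacements[1] = b_addr - a_addr;
array_of_displacements[2] = n_addr - a_addr;

MPI_Type_create_struct(3, array_of_blocklengths, array_of_displacements, array_of_types, &input_mpi_t);
MPI_Type_commit(&input_mpi_t);

// the buffer argument is the address of the first item, a
MPI_Bcast(&a, 1, input_mpi_t, 0, MPI_COMM_WORLD);

MPI_Type_free(&input_mpi_t);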

1.4 MPI_Type_extent/MPI_Type_size and MPI_Get_count/MPI_Get_elements

  1. MPI_Type_extent returns the extent of a datatype, in bytes. (MPI_Type_extent is deprecated; MPI_Type_get_extent is the modern replacement.) The prototype is:
int MPI_Type_extent(
    MPI_Datatype datatype,  // datatype
    MPI_Aint * extent       // extent of the datatype
)

MPI_Type_size returns, in bytes, the space occupied by the useful part of a given datatype, i.e. the extent minus the gaps in the type. Unlike MPI_Type_extent, MPI_Type_size does not include the space taken up by gaps in the datatype caused by alignment and the like. The prototype of MPI_Type_size is:

int MPI_Type_size(
    MPI_Datatype datatype,  // datatype
    int * size              // size of the useful data, in bytes
)
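
To see the difference between size and extent, here is a small sketch; pair_t and pair_mpi_t are illustrative names, and the comments assume a typical 64-bit layout where the struct is padded. MPI_Type_get_extent is used in place of the deprecated MPI_Type_extent:

struct pair_t { double d; int i; };  // sizeof(pair_t) is typically 16: 12 bytes of data plus padding

pair_t temp;
int block_lengths[2] = { 1, 1 };
MPI_Datatype typelist[2] = { MPI_DOUBLE, MPI_INT };
MPI_Aint displacements[2], addresses[2], add_start;

MPI_Get_address(&temp, &add_start);
MPI_Get_address(&temp.d, &addresses[0]);
MPI_Get_address(&temp.i, &addresses[1]);
for (int i = 0; i < 2; i++)
	displacements[i] = addresses[i] - add_start;

MPI_Datatype pair_mpi_t;
MPI_Type_create_struct(2, block_lengths, displacements, typelist, &pair_mpi_t);
MPI_Type_commit(&pair_mpi_t);

int size;
MPI_Aint lb, extent;
MPI_Type_size(pair_mpi_t, &size);               // 12: only the useful bytes (8 + 4)
MPI_Type_get_extent(pair_mpi_t, &lb, &extent);  // the span of the type, which may include alignment gaps

MPI_Type_free(&pair_mpi_t);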
  2. MPI_Get_count and MPI_Get_elements return the number of items received. Their prototypes are:
int MPI_Get_elements(
    MPI_Status * status,    // status returned by the receive operation
    MPI_Datatype datatype,  // datatype used by the receive operation
    int *count              // number of basic-type elements received
)

int MPI_Get_count(
    MPI_Status * status,    // status returned by the receive operation
    MPI_Datatype datatype,  // datatype used by the receive operation
    int *count              // number of items of the specified datatype received
)
  3. The difference between MPI_Get_elements and MPI_Get_count is that the former counts in units of the basic-type elements, while the latter counts in units of the specified datatype. Suppose we receive a structure defined as follows:
typedef struct _my_struct {
    double d;
    double d2;
    int i;
    char c;
} my_struct;

Then MPI_Get_elements returns 4 (two doubles, one int and one char), whereas MPI_Get_count returns 1.
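
Below is a short receive-side sketch of the two calls; my_struct_mpi_t is an assumed name for a committed MPI type matching my_struct (built as in 1.3):

my_struct s;
MPI_Status status;
MPI_Recv(&s, 1, my_struct_mpi_t, 0, 0, MPI_COMM_WORLD, &status);

int num_types, num_elements;
MPI_Get_count(&status, my_struct_mpi_t, &num_types);       // 1: one whole my_struct was received
MPI_Get_elements(&status, my_struct_mpi_t, &num_elements); // 4: two doubles, one int and one char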

2. Examples

2.1 In-class example

Below is a usage example:

#include <mpi.h>
#include <iostream>
#include <locale>
using namespace std;


int id, p;
class my_class
{
public:
	int I1, I2;
	int var_not_to_send;
	double D1;
	char S1[50];
	static void buildMPIType();  // method that builds our own MPI type
	static MPI_Datatype MPI_type;  // the MPI type we create
};

MPI_Datatype my_class::MPI_type;

void my_class::buildMPIType()
{
	// length of each block
	int block_lengths[4]; // number of elements in each block
	// MPI_Aint holds addresses
	// displacements is the array of displacement values
	MPI_Aint displacements[4];  // displacement of each block from the start of the message
	MPI_Aint addresses[4], add_start;  // address of each member
	MPI_Datatype typelist[4];   // type of each block

	my_class temp;

	typelist[0] = MPI_INT;
	block_lengths[0] = 1;
	// Get the address of each member; note the use of pointers
	MPI_Get_address(&temp.I1, &addresses[0]);  // store the address of temp.I1 in addresses[0]
	cout << "temp.I1's address " << &temp.I1 <<
		" addresses[0]'s address " << &addresses[0] << " the address stored in addresses[0] (temp.I1's address) " << addresses[0] << endl;


	typelist[1] = MPI_INT;
	block_lengths[1] = 1;
	MPI_Get_address(&temp.I2, &addresses[1]);
	cout << "temp.I2's address " << &temp.I2 <<
		" addresses[1]'s address " << &addresses[1] << endl;

	typelist[2] = MPI_DOUBLE;
	block_lengths[2] = 1;
	MPI_Get_address(&temp.D1, &addresses[2]);
	cout << "temp.D1's address " << &temp.D1 <<
		" addresses[2]'s address " << &addresses[2] << endl;

	typelist[3] = MPI_CHAR;
	block_lengths[3] = 50;
	MPI_Get_address(temp.S1, &addresses[3]);  
	cout << "temp.S1's address " << &temp.S1 <<
		" addresses[3]'s address " << &addresses[3] << endl;

	//find the pointer to the beginning of the object and then subtract it
	//from the pointers to each of the items you want to send
	MPI_Get_address(&temp, &add_start);
	cout << "temp's address    " << &temp << " add_start's address    " << &add_start << endl;
	// compute the displacement array
	for (int i = 0; i < 4; i++) {
		// the offset/displacement of the data we want to send from the beginning of temp
		displacements[i] = addresses[i] - add_start;
	}
	}
	for (int i = 0; i < 4; i++) {
		cout << i << "th displacement " << displacements[i] << endl;
	}
	// use all information to make MPI type
	MPI_Type_create_struct(4, block_lengths, displacements, typelist, &MPI_type);
	MPI_Type_commit(&MPI_type);
}


int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &id); // process id
	MPI_Comm_size(MPI_COMM_WORLD, &p);  // number of processes
	my_class::buildMPIType();  // build our own datatype
	my_class data;
	if (id == 0)
	{
		data.I1 = 6;
		data.I2 = 3;
		data.D1 = 10.0;
		data.var_not_to_send = 25;
		strncpy_s(data.S1, "My test string", 50);  // copy "My test string" into data.S1
	}
	MPI_Bcast(&data, 1, my_class::MPI_type, 0, MPI_COMM_WORLD);
	cout << "On process " << id << " I1=" << data.I1 <<
		" I2 = " << data.I2 << " D1 = " << data.D1 << " S1 = " << data.S1 <<
		". The unsent variable is " << data.var_not_to_send << endl;
	MPI_Type_free(&my_class::MPI_type);
	MPI_Finalize();
}


/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe                                                                
On process 0 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is 25
On process 1 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 8 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 2 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 9 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 4 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 3 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 6 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 5 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
On process 7 I1=6 I2=3 D1= 10 S1=My test string. The unsent variable is -858993460
*/


/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 1 MPI.exe
temp.I1's address 0000005BBE5BF570 addresses[0]'s address 0000005BBE5BF4D8
temp.I2's address 0000005BBE5BF574 addresses[1]'s address 0000005BBE5BF4E0
temp.D1's address 0000005BBE5BF580 addresses[2]'s address 0000005BBE5BF4E8
temp.S1's address 0000005BBE5BF588 addresses[3]'s address 0000005BBE5BF4F0
temp's address    0000005BBE5BF570 add_start's address    0000005BBE5BF518
0th displacement 0
1th displacement 4
2th displacement 16
3th displacement 24
On process 0 I1=6 I2 = 3 D1 = 10 S1 = My test string. The unsent variable is 25

*/

(1) When we have sent data previously, we have sent a pointer to that data.
(2) We still want to send a pointer to indicate the data to be sent, but since the data may be non-contiguous and/or of different types, we need to create a list of the data types and sizes to be sent, together with the offset of each item's memory location from the pointer.
(3) The set of offsets must be the same for every object of that type. We cannot, for example, send pointers stored within an object, as the offset of the data pointed to will be different for each object. The same applies to STL containers, because their data is dynamically allocated and therefore has no consistent relative position in memory.
(4) To see this, consider:

class my_class
{
public:
	int I1, I2;
	int var_not_to_send;
	double D1;
	char S1[50];
	/* Bad examples -- these cannot be part of the MPI type:
    int* data;
    vector<int> v1;
   */
	static void buildMPIType();
	static MPI_Datatype MPI_type;
};

my_class data, data2;

addresses 0..7:    I1  I2  var_not_to_send  D1  S1   <- data
offsets:           0   1   2                3   4

addresses 20..27:  I1  I2  var_not_to_send  D1  S1   <- data2
offsets:           0   1   2                3   4

(The two objects sit at different addresses, but the offset of each member from the start of its own object is identical.)
  • We create a temporary object of the class, my_class temp. The offsets of the member variables will be the same for all objects of the same class.
  • We then store the information about each variable that makes up the MPI datatype, e.g. typelist[3] = MPI_CHAR, with length block_lengths[3] = 50.
  • We obtain a pointer to each variable: MPI_Get_address(&temp.D1, &addresses[2]). Note that this is not yet the offset we actually need, but the raw address.
  • Once we have gathered this data for all the member variables that we wish to send using this MPI datatype, we can calculate the offsets in memory:
    First, get the memory location of the beginning of the object: MPI_Get_address(&temp, &add_start).
    Then subtract this value from all the addresses to obtain their offsets: displacements[i] = addresses[i] - add_start.
  • Once we have all this information, we can use it to create the structure of the MPI datatype. In this example: MPI_Type_create_struct(4, block_lengths, displacements, typelist, &my_class::MPI_type).
  • Once the structure of the datatype has been created, it must be committed before it can be used in a communication: MPI_Type_commit(&my_class::MPI_type).
  • Once the type is no longer needed, it should be freed: MPI_Type_free(&my_class::MPI_type). Note that an MPI type can be freed as soon as a communication has been initiated; there is no need to wait for the communication to actually complete.

2.2 Exercise 1

Exercise 1: Creating an MPI type for a class
When carrying out Lagrangian simulations it is useful to have a class that stores the position of a point/particle (as well as all its other properties). Create a class that stores a position and a velocity. Choose maximum vertical and horizontal extents for your domain and create 10000 randomly positioned particles within these extents on processor 0. Divide the domain into vertical strips of equal width so that each process can be assigned one strip of the domain.
Create an MPI type that can send all the information in an object of this class together. Send the particles individually from the root to the appropriate processor according to its horizontal position. This can be done using either blocking or non-blocking sends and receives. You can send an empty communication to indicate that all the particles have been sent.

#include <mpi.h>
#include <iostream>
#include <locale>
#include <vector>
#include <cstdlib>  // rand, srand
#include <ctime>    // time, used to seed rand
using namespace std;


int id, p;

class particle {
public:
	double x[2];
	double v[2];


	static MPI_Datatype MPI_Type;
	static void create_type(void);
};

MPI_Datatype particle::MPI_Type;

void particle::create_type(void) {
	vector<int> block_lengths;
	vector<MPI_Aint> addresses;
	MPI_Aint add_start, temp_add;
	vector <MPI_Datatype> typelist;

	particle temp;

	typelist.push_back(MPI_DOUBLE);
	block_lengths.push_back(2);
	MPI_Get_address(&temp.x, &temp_add);
	addresses.push_back(temp_add);


	typelist.push_back(MPI_DOUBLE);
	block_lengths.push_back(2);
	MPI_Get_address(&temp.v, &temp_add);
	addresses.push_back(temp_add);
  
	MPI_Get_address(&temp, &add_start);
	for (int i = 0; i < addresses.size(); i++) {
		addresses[i] = addresses[i] - add_start;
	}

	MPI_Type_create_struct(addresses.size(), block_lengths.data(), addresses.data(), typelist.data(), &MPI_Type);
	MPI_Type_commit(&MPI_Type);
}

const double x_max[2] = { 1,1 };

double random() {
	return ((double)rand()) / (((double)RAND_MAX)+1);
}

int proc_from_x(double* x) {
	return (int)((x[0] * p) / x_max[0]);
}

vector<particle> particle_list;
const int max_particles = 10000;

void create_and_send_particles(void) {
	for (int i = 0; i < max_particles; i++) {
		particle temp_particle;

		for (int cnt = 0; cnt < 2; cnt++) {
			temp_particle.x[cnt] = random() * x_max[cnt];
		}

		int destination = proc_from_x(temp_particle.x);

		if (destination == 0) {
			particle_list.push_back(temp_particle);
		}
		else {
			MPI_Send(&temp_particle, 1, particle::MPI_Type, destination, 0, MPI_COMM_WORLD);
		}
	}

	for (int i = 1; i < p; i++) {
		MPI_Send(nullptr, 0, particle::MPI_Type, i, 0, MPI_COMM_WORLD);
	}
}

void receive_particle(void) {
	particle temp_particle;
	MPI_Status status;
	do {
		MPI_Recv(&temp_particle, 1, particle::MPI_Type, 0, 0, MPI_COMM_WORLD, &status);
		int count;
		MPI_Get_count(&status, particle::MPI_Type, &count);

		if (count == 1) {
			particle_list.push_back(temp_particle);
		}
		else if(count==0) {
			break;
		}
		else {
			cout << "Unexpected number of particles received" << endl;
			break;
		}
	} while (true);

}

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 1000);
	particle::create_type();

	if (id == 0) {
		create_and_send_particles();
	}
	else {
		receive_particle();
	}

	cout << id << ": is responsible for " << particle_list.size() << " particles" << endl;
	MPI_Type_free(&particle::MPI_Type);
	MPI_Finalize();
}
/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe
3: is responsible for 1010 particles
4: is responsible for 1003 particles
7: is responsible for 1041 particles
2: is responsible for 995 particles
1: is responsible for 1046 particles
6: is responsible for 980 particles
0: is responsible for 1021 particles
8: is responsible for 964 particles
9: is responsible for 987 particles
5: is responsible for 953 particles
*/

The problem sheet writes it as follows:

#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>
#include <vector>

using namespace std;

int id, p, tag_num = 1;

MPI_Datatype MPI_Particle;  // the MPI datatype we will create, named MPI_Particle

class CParticle
{
public:
	double x[2];  // position
	double v[2];   // velocity

	static void buildMPIType(); // builds the MPI_Particle type
};

double Random()
{
	// dividing by RAND_MAX + 1 ensures the return value is strictly less than 1
	return (double)rand() / ((double)RAND_MAX + 1.0);
}

void CParticle::buildMPIType()
{
	int block_lengths[2]; // length of each block
	MPI_Aint offsets[2];  // displacement of each block
	MPI_Aint addresses[2], add_start;  // address of each member
	MPI_Datatype typelist[2];   // type of each block

	CParticle temp;

	typelist[0] = MPI_DOUBLE;
	block_lengths[0] = 2;
	MPI_Get_address(temp.x, &addresses[0]);

	typelist[1] = MPI_DOUBLE;
	block_lengths[1] = 2;
	MPI_Get_address(temp.v, &addresses[1]);

	MPI_Get_address(&temp, &add_start);
	for (int i = 0; i < 2; i++) {
		offsets[i] = addresses[i] - add_start;
	}

	MPI_Type_create_struct(2, block_lengths, offsets, typelist, &MPI_Particle); // create the new type
	MPI_Type_commit(&MPI_Particle);  // commit it
}

CParticle* full_particle_list = nullptr; // list storing all the particles
vector<CParticle> proc_particle_list;  // vector storing the particles that belong to this process
int num_particle_total = 10000;

double domain_size[2] = { 1.0, 1.0 }, max_vel = 1.0;   // domain_size is the range of positions

// Send the particles individually from the root to the appropriate processor
// according to its horizontal position.
// This function gives the processor a particle should be sent to.
int proc_from_x(double x)
{
	return (int)((p * x) / domain_size[0]);
}

void distribute_particles(void)
{
	if (id == 0)
	{
		for (int i = 0; i < num_particle_total; i++)
		{
			int send_proc = proc_from_x(full_particle_list[i].x[0]); // send_proc is the process to send to

			if (send_proc == id)  // here id = 0; if the destination is process 0 we would be sending to ourselves, so don't send
				proc_particle_list.push_back(full_particle_list[i]);
			else
				MPI_Send(&full_particle_list[i], 1, MPI_Particle, send_proc, tag_num, MPI_COMM_WORLD);
		}

		for (int i = 1; i < p; i++)
		{
			MPI_Send(nullptr, 0, MPI_Particle, i, tag_num, MPI_COMM_WORLD);
		}
	}
	else
	{
		MPI_Status status;
		do
		{
			int count;

			CParticle temp;
			MPI_Recv(&temp, 1, MPI_Particle, 0, tag_num, MPI_COMM_WORLD, &status);
			MPI_Get_count(&status, MPI_Particle, &count);  // MPI_Get_count gives the number of items of the specified datatype received
			// cout << "count" << count << endl;  count is always 1 here, except for the empty (null) sends, for which count is 0
			if (count == 0)
				break;
			else proc_particle_list.push_back(temp);

		} while (true);
	}
}

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 1000);

	CParticle::buildMPIType();

	if (id == 0)
	{
		full_particle_list = new CParticle[num_particle_total];

		// Assign the initial values; domain_size and max_vel are the position and velocity ranges respectively
		for (int i = 0; i < num_particle_total; i++)
		{
			for (int j = 0; j < 2; j++)
			{
				full_particle_list[i].x[j] = Random() * domain_size[j];
				full_particle_list[i].v[j] = Random() * max_vel;
			}
		}
	}

	distribute_particles();

	cout << "Process " << id << " received " << proc_particle_list.size() << " particles" << endl;

	MPI_Type_free(&MPI_Particle);
	MPI_Finalize();

	delete[] full_particle_list;
}


/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe                                                                               
Process 7 received 993 particles
Process 9 received 1029 particles
Process 3 received 1048 particles
Process 1 received 1027 particles
Process 2 received 977 particles
Process 4 received 993 particles
Process 0 received 957 particles
Process 5 received 985 particles
Process 6 received 1000 particles
Process 8 received 991 particles
*/

2.3 Exercise 2

Exercise 2: Sending rows and columns of a matrix
Create a 2D matrix of the same size on every process. On each process also create 4 MPI_Datatypes, one each for the top, bottom, left and right hand boundaries of the 2D matrix. Put data into the matrix on process zero and transfer the edges of this data to all the other processes.

#include <mpi.h>
#include <iostream>
#include <vector>

using namespace std;

int id, p;

MPI_Datatype Datatype_left, Datatype_right, Datatype_top, Datatype_bottom;

void createdatatypes(double** data, int m, int n) {
	vector<int> block_lengths;
	vector<MPI_Datatype> typelist;
	vector<MPI_Aint> addresses;
	MPI_Aint add_start;

	//left
	for (int i = 0; i < m; i++) {
		block_lengths.push_back(1);
		typelist.push_back(MPI_DOUBLE);
		MPI_Aint temp_address;
		MPI_Get_address(&data[i][0], &temp_address);
		addresses.push_back(temp_address);
	}
	MPI_Get_address(data, &add_start);
	for (int i = 0; i < m; i++)
		addresses[i] = addresses[i] - add_start;
	// The displacements were measured from data (the pointer to the first element of the
	// array of row pointers), so data must also be passed as the buffer in the communication
	MPI_Type_create_struct(m, block_lengths.data(), addresses.data(), typelist.data(), &Datatype_left);
	MPI_Type_commit(&Datatype_left);
	
	// right
	block_lengths.resize(0);
	typelist.resize(0);
	addresses.resize(0);
	for (int i = 0; i < m; i++) {
		block_lengths.push_back(1);
		typelist.push_back(MPI_DOUBLE);
		MPI_Aint temp_address;
		MPI_Get_address(&data[i][n - 1], &temp_address);
		addresses.push_back(temp_address);
	}
	for (int i = 0; i < m; i++)
		addresses[i] = addresses[i] - add_start;
	MPI_Type_create_struct(m, block_lengths.data(), addresses.data(), typelist.data(), &Datatype_right);
	MPI_Type_commit(&Datatype_right);


	// top - a row is contiguous, so only one block is needed
	int block_length = n;
	MPI_Datatype typevalue = MPI_DOUBLE;
	MPI_Aint address;
	MPI_Get_address(data[0], &address);
	address = address - add_start;
	MPI_Type_create_struct(1, &block_length,&address,&typevalue,&Datatype_top);
	MPI_Type_commit(&Datatype_top);

	// bottom
	MPI_Get_address(data[m - 1], &address);
	address = address - add_start;
	MPI_Type_create_struct(1, &block_length, &address, &typevalue, &Datatype_bottom);
	MPI_Type_commit(&Datatype_bottom);
}


double** data_array;
const int i_max = 200;
const int j_max = 200;


int main(int argc, char* argv[]) {
	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);

	data_array = new double* [i_max];
	for (int i = 0; i < i_max; i++) {
		data_array[i] = new double[j_max];
	}
	createdatatypes(data_array, i_max, j_max);

	if (id == 0) {
		for (int i = 0; i < i_max; i++) 
			for (int j = 0; j < j_max; j++) 
				data_array[i][j] = (double)(i * j_max + j);
	}
    // Broadcast using the whole structure as the buffer; only the edge values are transferred.
    // Each process built its datatypes from its own data_array addresses, so the received
    // values land in the right places locally.
	MPI_Bcast(data_array, 1, Datatype_left, 0, MPI_COMM_WORLD);
	MPI_Bcast(data_array, 1, Datatype_right, 0, MPI_COMM_WORLD);
	MPI_Bcast(data_array, 1, Datatype_top, 0, MPI_COMM_WORLD);
	MPI_Bcast(data_array, 1, Datatype_bottom, 0, MPI_COMM_WORLD);
	cout << *data_array << endl;


	MPI_Type_free(&Datatype_left);
	MPI_Type_free(&Datatype_right);
	MPI_Type_free(&Datatype_top);
	MPI_Type_free(&Datatype_bottom);

	MPI_Finalize();
}
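
When the matrix is stored as one contiguous block (a single new double[m * n] in row-major order) rather than as an array of row pointers, a more idiomatic alternative is MPI_Type_contiguous for the rows and MPI_Type_vector for the columns. A sketch under that assumption (data, m and n are assumed names):

MPI_Datatype row_t, col_t;

// a row is n contiguous doubles
MPI_Type_contiguous(n, MPI_DOUBLE, &row_t);
MPI_Type_commit(&row_t);

// a column is m doubles, each one separated from the next by a stride of n doubles
MPI_Type_vector(m, 1, n, MPI_DOUBLE, &col_t);
MPI_Type_commit(&col_t);

// the top row starts at &data[0] and the bottom row at &data[(m - 1) * n];
// the left column starts at &data[0] and the right column at &data[n - 1]
MPI_Bcast(&data[0], 1, row_t, 0, MPI_COMM_WORLD);            // top
MPI_Bcast(&data[(m - 1) * n], 1, row_t, 0, MPI_COMM_WORLD);  // bottom
MPI_Bcast(&data[0], 1, col_t, 0, MPI_COMM_WORLD);            // left
MPI_Bcast(&data[n - 1], 1, col_t, 0, MPI_COMM_WORLD);        // right

MPI_Type_free(&row_t);
MPI_Type_free(&col_t);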

2.4 Exercise 3

Exercise 3: Creating a temporary MPI type
In Exercise 1, carrying out a large number of communications to send all the particles is very inefficient. Modify the above code to do the transfer as a single communication for each process. On the zero process you will need to create temporary datatypes to transfer the data to each process. This is because the data for each process will be randomly scattered within the list of data created on processor zero. On the other processes you will need to use a probe to determine how many particles to receive. You do not need to create temporary types on those processes, because you can store the particles as contiguous memory, and you can use a blocking receive since you only receive one piece of information.

A clever trick that took me a while to understand: the temporary type's displacements are the absolute addresses obtained from MPI_Get_address, so the matching send buffer is MPI_BOTTOM (the start of the address space). MPI then gathers the scattered particles straight out of full_particle_list into a single message, without first copying them into a contiguous buffer.

#include <mpi.h>
#include <iostream>
#include <cstdlib>
#include <time.h>
#include <vector>

using namespace std;

int id, p, tag_num = 1;

MPI_Datatype MPI_Particle;  // the MPI datatype we create ourselves

class CParticle
{
public:
	double x[2];   // position
	double v[2];   // velocity

	static void buildMPIType();
};

double random()
{
	// dividing by RAND_MAX + 1 ensures the return value is strictly less than 1
	return (double)rand() / (RAND_MAX + 1.0);
}

// This function builds the MPI type
void CParticle::buildMPIType()
{
	int block_lengths[2];  // length of each block
	MPI_Aint offsets[2];   // displacement of each block
	MPI_Aint addresses[2], add_start;    // address of each member
	MPI_Datatype typelist[2];  // type of each block

	CParticle temp;

	typelist[0] = MPI_DOUBLE;
	block_lengths[0] = 2;
	MPI_Get_address(temp.x, &addresses[0]);

	typelist[1] = MPI_DOUBLE;
	block_lengths[1] = 2;
	MPI_Get_address(temp.v, &addresses[1]);

	MPI_Get_address(&temp, &add_start);
	for (int i = 0; i < 2; i++) 
		offsets[i] = addresses[i] - add_start;
	// create the new type
	MPI_Type_create_struct(2, block_lengths, offsets, typelist, &MPI_Particle);
	// commit it
	MPI_Type_commit(&MPI_Particle);
}

CParticle* full_particle_list = nullptr;  // the full set of particles
vector<CParticle> proc_particle_list;   // the particles that belong to this process
int num_particle_total = 10000;

// ranges for position and velocity
double domain_size[2] = { 1.0, 1.0 }, max_vel = 1.0;


// Send the particles individually from the root to the appropriate processor
// according to its horizontal position.
// This function gives the processor a particle should be sent to.
int proc_from_x(double x)
{
	return (int)((p * x) / domain_size[0]);
}


void distribute_particles(void)
{
	if (id == 0)
	{
		// Three ragged arrays, one entry per process other than 0, storing the
		// information needed to build each process's temporary type
		vector<vector<int>> block_length(p - 1);
		vector<vector<MPI_Aint>> addresses(p - 1);
		vector<vector<MPI_Datatype>> typelist(p - 1);


		for (int i = 0; i < num_particle_total; i++)
		{
			// send_proc is the process this particle should be sent to
			int send_proc = proc_from_x(full_particle_list[i].x[0]);

			if (send_proc == id) // here id is 0, so these are the particles process 0 keeps
				proc_particle_list.push_back(full_particle_list[i]); // add the particle to the vector
			else
			{
				block_length[send_proc - 1].push_back(1);
				MPI_Aint temp;
				MPI_Get_address(&full_particle_list[i], &temp);
				addresses[send_proc - 1].push_back(temp);
				typelist[send_proc - 1].push_back(MPI_Particle);
			}
		}

		MPI_Request* request = new MPI_Request[p - 1];

		for (int i = 1; i < p; i++)
		{
			MPI_Datatype temp_type;  // create a temporary new type
			MPI_Type_create_struct(block_length[i - 1].size(), &block_length[i - 1][0], &addresses[i - 1][0], &typelist[i - 1][0], &temp_type);
			MPI_Type_commit(&temp_type);

			MPI_Isend(MPI_BOTTOM, 1, temp_type, i, tag_num, MPI_COMM_WORLD, &request[i - 1]);
			MPI_Type_free(&temp_type);
		}

		MPI_Waitall(p - 1, request, MPI_STATUSES_IGNORE);

	}
	else
	{
		MPI_Status status;
		int count;

		MPI_Probe(0, tag_num, MPI_COMM_WORLD, &status);

		MPI_Get_count(&status, MPI_Particle, &count);
		proc_particle_list.resize(count);

		MPI_Recv(proc_particle_list.data(), count, MPI_Particle, 0, tag_num, MPI_COMM_WORLD, &status);
	}
}

int main(int argc, char* argv[])
{
	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 1000);

	CParticle::buildMPIType();

	if (id == 0)
	{
		full_particle_list = new CParticle[num_particle_total];

		for (int i = 0; i < num_particle_total; i++)
		{
			for (int j = 0; j < 2; j++)
			{
				full_particle_list[i].x[j] = random() * domain_size[j];
				full_particle_list[i].v[j] = random() * max_vel;
			}
		}
	}

	distribute_particles();

	cout << "Process " << id << " received " << proc_particle_list.size() << " particles" << endl;

	MPI_Type_free(&MPI_Particle);
	MPI_Finalize();

	delete[] full_particle_list;
}
/*
PS D:\桌面\C++ Assi\MPI\x64\Debug> mpiexec -n 10 MPI.exe                                                                               
Process 1 received 999 particles
Process 2 received 974 particles
Process 3 received 989 particles
Process 4 received 1005 particles
Process 5 received 978 particles
Process 6 received 991 particles
Process 7 received 995 particles
Process 8 received 1021 particles
Process 9 received 1043 particles
Process 0 received 1005 particles
*/

Another example (sending a strip of a matrix from process 0 to process 1):

#include <mpi.h>
#include <iomanip>
#include <iostream>
#include <cstdlib>
#include <time.h>
#include <chrono>
#include <vector>

using namespace std;

int id, p;

MPI_Datatype strip_top, strip_bottom;
void built_stripDatatypes(double* str_value, int m, int n) {
	MPI_Aint add_start;

	int block_length = n;
	MPI_Datatype typevalue = MPI_DOUBLE;
	MPI_Aint address;

	MPI_Get_address(str_value, &add_start);
	// strip top: row 0 is a ghost row added later, so the top data to send starts at row 1
	MPI_Get_address(&str_value[1 * n], &address);
	address = address - add_start;
	MPI_Type_create_struct(1, &block_length, &address, &typevalue, &strip_top);
	MPI_Type_commit(&strip_top);

	// strip bottom
	MPI_Get_address(&str_value[(m - 2) * n], &address);
	address = address - add_start;
	MPI_Type_create_struct(1, &block_length, &address, &typevalue, &strip_bottom);
	MPI_Type_commit(&strip_bottom);
}


int main(int argc, char* argv[]) {
	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &id);
	MPI_Comm_size(MPI_COMM_WORLD, &p);
	srand(time(NULL) + id * 10);

	//cout << id << endl;
	double* A = new double[15];    // (5,3)
	double* B = new double[30];   // (10,3)
	built_stripDatatypes(A, 5, 3);
	MPI_Request request;
	if (id == 0) {
		for (int i = 0; i < 5; i++) {
			for (int j = 0; j < 3; j++) {
				A[i * 3 + j] = i;
			}
		}
	}
	if (id == 1) {
		for (int i = 0; i < 30; i++) B[i] = 0.0;  // initialise B so the printout below is meaningful
	}
	if (id == 0) {
		MPI_Isend(A, 1, strip_bottom, 1, 1, MPI_COMM_WORLD, &request);
		MPI_Wait(&request, MPI_STATUS_IGNORE);
	}
	if (id == 1) {
		MPI_Irecv(B, 1, strip_bottom, 0, 1, MPI_COMM_WORLD, &request);
		// a barrier does NOT complete a non-blocking receive; we must wait
		// on the request before reading B
		MPI_Wait(&request, MPI_STATUS_IGNORE);
		for (int i = 0; i < 10; i++) {
			for (int j = 0; j < 3; j++) {
				cout << B[i * 3 + j] << " ";
			}
			cout << endl;
		}
	}
	MPI_Type_free(&strip_top);
	MPI_Type_free(&strip_bottom);
	MPI_Finalize();
	delete[] A;
	delete[] B;
}
}