Background
The Slurm workload manager supports creating federations of clusters and scheduling jobs between them in a peer-to-peer fashion. A job submitted to a federation receives a job ID that is unique across every cluster in the federation. The job is submitted to the local cluster (the cluster defined in its slurm.conf) and then replicated to the other clusters in the federation. Each cluster then independently tries to schedule the job according to its own scheduling policy, coordinating with the "origin" cluster (the cluster the job was submitted to).
Federated job scheduling makes it possible to build an on-premises/cloud hybrid HPC setup that adds elasticity and scale to an existing on-premises Slurm cluster. The local Slurm cluster and a cloud Slurm cluster form a federation; users submit jobs to the local cluster as usual, the jobs are replicated to the cloud cluster, and every cluster tries to schedule each job and allocate resources for it. Whichever cluster succeeds notifies the origin cluster (the cluster the job was submitted to) that it has started the job, and the origin cluster then tells the other clusters to cancel and remove their copies, which are placed in the revoked state.
Basic workflow
1. The user logs in to the local (on-prem) cluster
2. The user submits a job to the local cluster
3. Slurm replicates the job to the Slurm cluster running on AWS (the cloud cluster)
4. If the local cluster can run the job, it tells the cloud (AWS) cluster to cancel its copy
5. If the local cluster cannot schedule the job but the cloud cluster can, the cloud (AWS) cluster starts the job and tells the local (on-prem) cluster to revoke its copy
6. Use the sinfo --federation, squeue --federation, and sacct --federation commands to view job status across all clusters
Validation setup
1. Use ParallelCluster to build a Slurm cluster in one AWS Region, with the minimum and maximum node counts set equal to simulate a fixed on-premises cluster (on-prem);
2. Enable the slurmdbd daemon and accounting on the local cluster; Slurm multi-cluster operation depends on the accounting service;
3. Use ParallelCluster to build a Slurm cluster in another Region to serve as the cloud-burst cluster (aws);
4. Connect the two clusters with VPC peering to simulate a hybrid cloud; DNS hostname resolution must be configured;
5. Configure multi-cluster operation;
6. Configure the federation;
7. Submit test jobs to verify federated scheduling.
Step-by-step configuration
1. Install ParallelCluster in a virtual environment, so that two separate ParallelCluster setups can coexist
Install/upgrade pip and virtualenv
$ python3 -m pip install --upgrade pip
$ python3 -m pip install --user --upgrade virtualenv
Create the virtual environment
$ python3 -m virtualenv ~/.pcluster
Activate the virtual environment
$ source ~/.pcluster/bin/activate
Install ParallelCluster into the virtual environment
(.pcluster) a483e778a9b5:~ xinxx$ python3 -m pip install --upgrade aws-parallelcluster
Verify the ParallelCluster installation
(.pcluster) a483e778a9b5:~ xinxx$ pcluster version
2.10.0
2. Use ParallelCluster to build the simulated on-premises Slurm cluster (named on-perm in this walkthrough). Set initial_queue_size equal to max_queue_size to mimic a fixed-size local cluster. Pass a dedicated configuration file to pcluster so the local (on-perm) and cloud (aws) clusters are kept separate.
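For reference, the relevant settings in the generated configuration file look roughly like the excerpt below. This is a sketch assuming the ParallelCluster 2.x INI format; the section name and values will match whatever the interactive wizard writes for you. maintain_initial_size is an optional extra that keeps the fleet from shrinking below the initial size:

[cluster default]
scheduler = slurm
initial_queue_size = 2
max_queue_size = 2
maintain_initial_size = true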
Configure the on-perm cluster
(.pcluster) a483e778a9b5:~ xinxx$ pcluster configure -c ~/.parallelcluster/pcluster-config-on-perm
INFO: Configuration file /Users/xinxx/.parallelcluster/pcluster-config-on-perm will be written.
Press CTRL-C to interrupt the procedure.

Allowed values for AWS Region ID:
1. cn-north-1
2. cn-northwest-1
AWS Region ID [cn-northwest-1]:
Allowed values for EC2 Key Pair Name:
1. xinxx-key-nx
EC2 Key Pair Name [xinxx-key-nx]:
Allowed values for Scheduler:
1. sge
2. torque
3. slurm
4. awsbatch
Scheduler [slurm]:
Allowed values for Operating System:
1. alinux
2. alinux2
3. centos7
4. centos8
5. ubuntu1604
6. ubuntu1804
Operating System [alinux2]:
Minimum cluster size (instances) [0]: 2
Maximum cluster size (instances) [10]: 2
Master instance type [t2.micro]:
Compute instance type [t2.micro]:
Automate VPC creation? (y/n) [n]: y
Allowed values for Network Configuration:
1. Master in a public subnet and compute fleet in a private subnet
2. Master and compute fleet in the same public subnet
Network Configuration [Master in a public subnet and compute fleet in a private subnet]: 2
Beginning VPC creation. Please do not leave the terminal until the creation is finalized
Creating CloudFormation stack...
Do not leave the terminal until the process has finished
Stack Name: parallelclusternetworking-pub-20201211120145
Status: parallelclusternetworking-pub-20201211120145 - CREATE_COMPLETE
The stack has been created
Configuration file written to /Users/xinxx/.parallelcluster/pcluster-config-on-perm
You can edit your configuration file or simply run 'pcluster create -c /Users/xinxx/.parallelcluster/pcluster-config-on-perm cluster-name' to create your cluster
Create the on-perm cluster
(.pcluster) a483e778a9b5:~ xinxx$ pcluster create on-perm -c /Users/xinxx/.parallelcluster/pcluster-config-on-perm
Beginning cluster creation for cluster: on-perm
Creating stack named: parallelcluster-on-perm
Status: parallelcluster-on-perm - CREATE_COMPLETE
MasterPublicIP: 52.82.115.178
ClusterUser: ec2-user
MasterPrivateIP: 10.0.3.94
3. Modify the local cluster
Change the cluster name to on-perm
Edit /opt/slurm/etc/slurm.conf and set the ClusterName parameter to on-perm
ClusterName=on-perm
Stop the Slurm controller
[root@ip-10-0-3-94 ]# systemctl stop slurmctld
Delete all files under /var/spool/slurm.state/
[root@ip-10-0-3-94 ]# rm -rf /var/spool/slurm.state/*
Start Slurm again
[root@ip-10-0-3-94 ]# systemctl start slurmctld
Check that the Slurm cluster is running
[root@ip-10-0-3-94 ]# lsid
Slurm 20.02.4, Feb 1 2020
Copyright SchedMD LLC, 2010-2017.

My cluster name is on-perm
My master name is ip-10-0-3-94

[root@ip-10-0-3-94 ]# sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
compute*     up   infinite      2  idle~ compute-st-t2micro-[1-2]
Install SlurmDBD on the head node of the on-perm cluster to record accounting information; this service is mandatory for multi-cluster operation
Install MariaDB. In this example the head node runs Amazon Linux 2; execute the following commands as root
[root@ip-10-0-3-94 ~]# yum install -y mariadb mariadb-server
Start MariaDB
[root@ip-10-0-3-94 ]# systemctl start mariadb
[root@ip-10-0-3-94 ]# systemctl enable mariadb
Set the MariaDB root password
[root@ip-10-0-3-94 ]# mysqladmin -u root password <yourpassword>
Log in
[root@ip-10-0-3-94 ]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Create the database that Slurm accounting needs
MariaDB [(none)]> create user 'slurm'@'localhost' identified by '<password>';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all on slurm_acct_db.* TO 'slurm'@'localhost';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant all on slurm_acct_db.* TO 'slurm'@'system0';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> create database slurm_acct_db;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> quit
Bye
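Before moving on, it is worth confirming that the slurm database user can actually see the new database. This quick check is not part of the original walkthrough, just a sanity test:

[root@ip-10-0-3-94 ~]# mysql -u slurm -p -e "show databases;"
# slurm_acct_db should appear in the output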
Add the accounting parameters to the configuration file /opt/slurm/etc/slurm.conf. JobCompHost is where job-completion records are stored; job completion currently only supports writing directly to MySQL, so JobCompHost is the MySQL host (localhost here) and JobCompPass is the password of the MySQL user 'slurm'. AccountingStorageHost is the hostname of the machine running the slurmdbd daemon. Note: the default configuration already contains JobCompType=jobcomp/none in the LOGGING section; comment that line out.
# LOGGING
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurmd.log
#JobCompType=jobcomp/none

...

# JobComp
JobCompType=jobcomp/mysql
JobCompHost=localhost
JobCompPort=3306
JobCompPass=<your_mariadb_slurm_password>
JobCompUser=slurm
JobCompLoc=slurm_acct_db
#JobCompLoc=
#
# ACCOUNTING
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
#
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=ip-10-0-3-94
AccountingStoragePort=6819
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStorageUser=
#
DebugFlags=NO_CONF_HASH
Create the configuration file /opt/slurm/etc/slurmdbd.conf
# slurmDBD info
DbdHost=localhost
DbdPort=6819
SlurmUser=slurm
#MessageTimeout=60
DebugLevel=6
#DefaultQOS=normal
LogFile=/var/log/slurmdbd.log
PidFile=/var/run/slurmdbd.pid
PluginDir=/opt/slurm/lib/slurm
#PrivateData=accounts,users,usage,jobs
#TrackWCKey=yes
# Database info
StorageType=accounting_storage/mysql
StorageHost=localhost
StoragePort=3306
StoragePass=Letmein123
StorageUser=slurm
StorageLoc=slurm_acct_db
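Because slurmdbd.conf contains the database password, slurmdbd expects the file to be owned by SlurmUser and readable only by that user. Tighten the permissions before starting the daemon (assuming SlurmUser=slurm, as configured above):

[root@ip-10-0-3-94 etc]# chown slurm:slurm /opt/slurm/etc/slurmdbd.conf
[root@ip-10-0-3-94 etc]# chmod 600 /opt/slurm/etc/slurmdbd.conf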
Restart slurmctld and start slurmdbd
[root@ip-10-0-3-94 etc]# systemctl stop slurmctld
[root@ip-10-0-3-94 etc]# systemctl start slurmctld
[root@ip-10-0-3-94 etc]# /opt/slurm/sbin/slurmdbd
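Starting slurmdbd by hand as above will not survive a reboot. If you prefer to have systemd manage it, a minimal unit file is sketched below; this is an assumption rather than something ParallelCluster ships, so adjust paths to your installation. Save it as /etc/systemd/system/slurmdbd.service:

[Unit]
Description=Slurm DBD accounting daemon
After=network.target mariadb.service

[Service]
Type=forking
ExecStart=/opt/slurm/sbin/slurmdbd
PIDFile=/var/run/slurmdbd.pid

[Install]
WantedBy=multi-user.target

Then enable it with systemctl daemon-reload followed by systemctl enable --now slurmdbd.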
Check the accounting status (for example, with sacctmgr show stats)
Internal DBD rollup last ran Sun Dec 13 03:25:33 2020 (1607829933)
	Last cycle:   44
	Max cycle:    44
	Total time:   44
	Total cycles: 1
	Mean cycle:   44

Remote Procedure Call statistics by message type
	SLURM_PERSIST_INIT ( 6500) count:9 ave_time:380 total_time:3423
	DBD_FINI           ( 1401) count:9 ave_time:172 total_time:1552
	DBD_CLUSTER_TRES   ( 1407) count:1 ave_time:640 total_time:640
	DBD_GET_JOBS_COND  ( 1444) count:1 ave_time:526 total_time:526
	DBD_GET_ACCOUNTS   ( 1409) count:1 ave_time:488 total_time:488
	DBD_GET_CLUSTERS   ( 1412) count:1 ave_time:479 total_time:479

Remote Procedure Call statistics by user
	root  (   0) count:20 ave_time:302 total_time:6058
	slurm ( 990) count:2  ave_time:525 total_time:1050
4. Create the cloud cluster
Configure pcluster with a new configuration file
Manually create a VPC. Because the two clusters will form a multi-cluster setup, the CIDR blocks must not overlap; this example uses 10.100.0.0/16. Be sure to enable DNS hostnames and DNS resolution on the VPC.
Run pcluster configure -c ~/.parallelcluster/pcluster-config-aws and select the VPC created in the previous step
(.pcluster) a483e778a9b5:~ xinxx$ pcluster configure -c .parallelcluster/pcluster-config-aws
INFO: Configuration file .parallelcluster/pcluster-config-aws will be written.
Press CTRL-C to interrupt the procedure.

Allowed values for AWS Region ID:
1. cn-north-1
2. cn-northwest-1
AWS Region ID [cn-northwest-1]: 1
Allowed values for EC2 Key Pair Name:
1. xin-key-bj
EC2 Key Pair Name [xin-key-bj]:
Allowed values for Scheduler:
1. sge
2. torque
3. slurm
4. awsbatch
Scheduler [slurm]:
Allowed values for Operating System:
1. alinux
2. alinux2
3. centos7
4. centos8
5. ubuntu1604
6. ubuntu1804
Operating System [alinux2]:
Minimum cluster size (instances) [0]:
Maximum cluster size (instances) [10]:
Master instance type [t2.micro]:
Compute instance type [t2.micro]:
Automate VPC creation? (y/n) [n]: n
Allowed values for VPC ID:
  #  id                     name                  number_of_subnets
---  ---------------------  --------------------  -------------------
  1  vpc-022aa918fe6dbe46f  ParalleCluster-cloud  0
  2  vpc-6e....a                                  2

VPC ID [vpc-022aa918fe6dbe46f]: 1
There are no qualified subnets. Starting automatic creation of subnets...
Allowed values for Network Configuration:
1. Master in a public subnet and compute fleet in a private subnet
2. Master and compute fleet in the same public subnet
Network Configuration [Master in a public subnet and compute fleet in a private subnet]:
Creating CloudFormation stack...
Do not leave the terminal until the process has finished
Stack Name: parallelclusternetworking-pubpriv-20201213061955
Status: parallelclusternetworking-pubpriv-20201213061955 - CREATE_COMPLETE
The stack has been created
Configuration file written to .parallelcluster/pcluster-config-aws
You can edit your configuration file or simply run 'pcluster create -c .parallelcluster/pcluster-config-aws cluster-name' to create your cluster
Create the cluster
(.pcluster) a483e778a9b5:~ xinxx$ pcluster create -c .parallelcluster/pcluster-config-aws aws
Beginning cluster creation for cluster: aws
Creating stack named: parallelcluster-aws
Status: parallelcluster-aws - CREATE_COMPLETE
ClusterUser: ec2-user
MasterPrivateIP: 10.100.0.216
Check the cluster
[ec2-user@ip-10-100-0-216 ~]$ lsid
Slurm 20.02.4, Feb 1 2020
Copyright SchedMD LLC, 2010-2017.

My cluster name is parallelcluster
My master name is ip-10-100-0-216
[ec2-user@ip-10-100-0-216 ~]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
compute*     up   infinite     10  idle~ compute-dy-t2micro-[1-10]
Log in to the head node, edit /opt/slurm/etc/slurm.conf, and set ClusterName=aws
[root@ip-10-100-0-216 etc]# cat slurm.conf
#
# Example slurm.conf file. Please run configurator.html
# (in doc/html) to build a configuration file customized
# for your environment.
#
#
# slurm.conf file generated by configurator.html.
#
# See the slurm.conf man page for more information.
#
# CLUSTER SETTINGS
ClusterName=aws
...
Delete /var/spool/slurm.state/* and restart the cluster
[root@ip-10-100-0-216 etc]# systemctl stop slurmctld
[root@ip-10-100-0-216 etc]# rm -rf /var/spool/slurm.state/*
[root@ip-10-100-0-216 etc]# ls /var/spool/slurm.state/
[root@ip-10-100-0-216 etc]# systemctl start slurmctld
[root@ip-10-100-0-216 etc]# lsid
Slurm 20.02.4, Feb 1 2020
Copyright SchedMD LLC, 2010-2017.

My cluster name is aws
My master name is ip-10-100-0-216
5. Configure VPC peering: enable DNS resolution on both VPCs, update the route tables and security groups, and establish connectivity between the two sides
Create the VPC peering connection
Enable DNS resolution on both sides of the peering connection
Update the route tables on both sides so that hosts can be reached by DNS hostname
Update the inbound rules of the security groups of the local (on-perm) and cloud (aws) clusters so they can reach each other:
Local (on-perm) head node
Cloud (aws) head node
Update the VPC route tables and confirm that the two sides can communicate; a quick connectivity check is sketched below
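As a sanity check (not part of the original walkthrough), verify from the cloud head node that the on-perm head node resolves by name and that the Slurm ports are reachable. 6819 is the slurmdbd port and 6820 the slurmctld port used in this setup; nc is assumed to be installed:

[root@ip-10-100-0-216 ~]# ping -c 3 ip-10-0-3-94     # DNS hostname resolution across the peering
[root@ip-10-100-0-216 ~]# nc -zv 10.0.3.94 6819      # slurmdbd (accounting)
[root@ip-10-100-0-216 ~]# nc -zv 10.0.3.94 6820      # slurmctld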
6. Configure accounting on the cloud (aws) cluster
Edit /opt/slurm/etc/slurm.conf and add the following
#
# ACCOUNTING
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
#
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=ip-10-0-3-94
AccountingStoragePort=6819
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStorageUser=
Restart the controller
[root@ip-10-100-0-216 etc]# systemctl restart slurmctld
7. Log in to the head node of the local (on-perm) cluster and register the clusters
Register the clusters; if the command reports that a cluster is already registered, ignore the message
[root@ip-10-0-3-94 log]# sacctmgr --immediate add cluster Name=on-perm
 Adding Cluster(s)
  Name = on-perm
[root@ip-10-0-3-94 log]# sacctmgr --immediate add cluster Name=aws
Check the multi-cluster status
[root@ip-10-0-3-94 etc]# sacctmgr show cluster format=cluster,controlhost,controlport
   Cluster     ControlHost  ControlPort
---------- --------------- ------------
       aws    10.100.0.216         6820
   on-perm       10.0.3.94         6820
Test multi-cluster job submission; switch to the regular user ec2-user
Create a test script and make it executable
[ec2-user@ip-10-0-3-94 ~]$ vi host_batch
[ec2-user@ip-10-0-3-94 ~]$ chmod a+x host_batch
[ec2-user@ip-10-0-3-94 ~]$ cat host_batch
#!/bin/bash
#
#SBATCH --job-name=hostname_sleep_sample
#SBATCH --output=out_%j.txt
#
#SBATCH --nodes=1

srun hostname
sleep 60
Submit jobs to a specific cluster
Submit to the local (on-perm) cluster
[ec2-user@ip-10-0-3-94 ~]$ sbatch host_batch
Submitted batch job 2
[ec2-user@ip-10-0-3-94 ~]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                 2   compute hostname ec2-user  R       0:05      1 compute-st-t2micro-1
[ec2-user@ip-10-0-3-94 ~]$ sacct
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
2            hostname_+    compute                     1    RUNNING      0:0
2.batch           batch                                1    RUNNING      0:0
2.0            hostname                                1  COMPLETED      0:0
Submit to the cloud (aws) cluster, using -M to select the cluster. Note that squeue and sacct also need the -M flag to show the job's status on the cloud (aws) cluster
[ec2-user@ip-10-0-3-94 ~]$ sbatch -M aws host_batch
Submitted batch job 2 on cluster aws
[ec2-user@ip-10-0-3-94 ~]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
[ec2-user@ip-10-0-3-94 ~]$ squeue -M aws
CLUSTER: aws
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                 2   compute hostname ec2-user CF       0:10      1 compute-dy-t2micro-1
[ec2-user@ip-10-0-3-94 ~]$ sacct -M aws
       JobID    JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
2            hostname_+    compute                     1    RUNNING      0:0
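Instead of naming a single cluster, -M also accepts the special value all, which queries every cluster registered in the accounting database; sacct additionally has -L (--allclusters) for the same purpose. For example:

[ec2-user@ip-10-0-3-94 ~]$ squeue -M all
[ec2-user@ip-10-0-3-94 ~]$ sacct -L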
8. Create the cluster federation
As root, run the following commands
[root@ip-10-0-3-94 ~]# sacctmgr add federation cloudburst clusters=on-perm,aws
 Adding Federation(s)
  cloudburst
 Settings
  Cluster       = on-perm
  Cluster       = aws
Would you like to commit changes? (You have 30 seconds to decide)
(N/y): y
[root@ip-10-0-3-94 ~]# sacctmgr show federation
Federation    Cluster ID             Features     FedState
---------- ---------- -- -------------------- ------------
cloudburst        aws  2                            ACTIVE
cloudburst    on-perm  1                            ACTIVE
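Each slurmctld also keeps a live view of the federation. scontrol show federation prints the federation name along with each sibling cluster's ID and state, which is useful when checking that the two controllers can actually see each other:

[root@ip-10-0-3-94 ~]# scontrol show federation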
9. Submit test jobs and check how they run across the federation
Submit a batch of jobs
[ec2-user@ip-10-0-3-94 ~]$ sbatch host_batch
Submitted batch job 67108870

...
Submitted batch job 67108871
Submitted batch job 67108872
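The large job IDs are expected. Per the Slurm federation documentation, a federated job ID packs the origin cluster's ID into the bits above bit 26 and a local job ID into the lower 26 bits, so jobs originating from the cluster with ID 1 (on-perm, per the sacctmgr show federation output above) start at 2^26 = 67108864. The split can be checked with plain shell arithmetic:

[ec2-user@ip-10-0-3-94 ~]$ echo $(( 67108870 >> 26 ))          # origin cluster ID
1
[ec2-user@ip-10-0-3-94 ~]$ echo $(( 67108870 & ((1<<26)-1) ))  # local job ID
6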
Check the job status; you can see that jobs have already started running on the local cluster
[ec2-user@ip-10-0-3-94 ~]$ squeue --federation
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
          67108872   compute hostname ec2-user CF       0:31      1 compute-dy-t2micro-1
          67108873   compute hostname ec2-user CF       0:23      1 compute-dy-t2micro-3
          67108874   compute hostname ec2-user CF       0:23      1 compute-dy-t2micro-4
          67108875   compute hostname ec2-user CF       0:23      1 compute-dy-t2micro-5
          67108876   compute hostname ec2-user CF       0:17      1 compute-dy-t2micro-6
          67108877   compute hostname ec2-user CF       0:17      1 compute-dy-t2micro-7
          67108878   compute hostname ec2-user CF       0:16      1 compute-dy-t2micro-8
          67108869   compute hostname ec2-user  R       0:05      1 compute-dy-t2micro-2
          67108870   compute hostname ec2-user  R       0:32      1 compute-st-t2micro-1
          67108871   compute hostname ec2-user  R       0:32      1 compute-st-t2micro-2
Check the cluster status
[ec2-user@ip-10-0-3-94 ~]$ sinfo --federation
PARTITION CLUSTER  AVAIL  TIMELIMIT  NODES   STATE  NODELIST
compute*  aws         up   infinite      7  alloc#  compute-dy-t2micro-[1,3-8]
compute*  aws         up   infinite      2   idle~  compute-dy-t2micro-[9-10]
compute*  on-perm     up   infinite      2   alloc  compute-st-t2micro-[1-2]
compute*  aws         up   infinite      1   alloc  compute-dy-t2micro-2
Check the EC2 console for the cloud cluster: instances are launching and joining the cluster as compute nodes
Check the final results: when local resources are insufficient, jobs are allocated compute nodes on the cloud (aws) cluster and run there
[ec2-user@ip-10-0-3-94 ~]$ sacct --federation -o JobID,JobName,State,Cluster,NodeList
       JobID    JobName      State    Cluster        NodeList
------------ ---------- ---------- ---------- ---------------
2            hostname_+  COMPLETED    on-perm compute-st-t2m+
2.batch           batch  COMPLETED    on-perm compute-st-t2m+
2.0            hostname  COMPLETED    on-perm compute-st-t2m+
67108867     hostname_+  COMPLETED    on-perm compute-st-t2m+
67108867.ba+      batch  COMPLETED    on-perm compute-st-t2m+
67108867.0     hostname  COMPLETED    on-perm compute-st-t2m+
67108868     hostname_+  COMPLETED    on-perm compute-st-t2m+
67108868.ba+      batch  COMPLETED    on-perm compute-st-t2m+
67108868.0     hostname  COMPLETED    on-perm compute-st-t2m+
67108869     hostname_+  COMPLETED        aws compute-dy-t2m+
67108869.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108869.0     hostname  COMPLETED        aws compute-dy-t2m+
67108870     hostname_+  COMPLETED    on-perm compute-st-t2m+
67108870.ba+      batch  COMPLETED    on-perm compute-st-t2m+
67108870.0     hostname  COMPLETED    on-perm compute-st-t2m+
67108871     hostname_+  COMPLETED    on-perm compute-st-t2m+
67108871.ba+      batch  COMPLETED    on-perm compute-st-t2m+
67108871.0     hostname  COMPLETED    on-perm compute-st-t2m+
67108872     hostname_+  COMPLETED        aws compute-dy-t2m+
67108872.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108872.0     hostname  COMPLETED        aws compute-dy-t2m+
67108873     hostname_+  COMPLETED        aws compute-dy-t2m+
67108873.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108873.0     hostname  COMPLETED        aws compute-dy-t2m+
67108874     hostname_+  COMPLETED        aws compute-dy-t2m+
67108874.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108874.0     hostname  COMPLETED        aws compute-dy-t2m+
67108875     hostname_+  COMPLETED        aws compute-dy-t2m+
67108875.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108875.0     hostname  COMPLETED        aws compute-dy-t2m+
67108876     hostname_+  COMPLETED        aws compute-dy-t2m+
67108876.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108876.0     hostname  COMPLETED        aws compute-dy-t2m+
67108877     hostname_+  COMPLETED        aws compute-dy-t2m+
67108877.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108877.0     hostname  COMPLETED        aws compute-dy-t2m+
67108878     hostname_+  COMPLETED        aws compute-dy-t2m+
67108878.ba+      batch  COMPLETED        aws compute-dy-t2m+
67108878.0     hostname  COMPLETED        aws compute-dy-t2m+
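When you are finished testing, the federation can be taken apart from the accounting database. The sacctmgr federation syntax below follows the Slurm federation documentation: the first command removes a single cluster from the federation, the second deletes the federation entirely:

[root@ip-10-0-3-94 ~]# sacctmgr modify federation cloudburst set clusters-=aws
[root@ip-10-0-3-94 ~]# sacctmgr delete federation cloudburst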
References
SchedMD Homepage:
https://www.schedmd.com/
Slurm on GCP ReadMe:
https://github.com/SchedMD/slurm/blob/master/contribs/gcp/README.md
Slurm Quickstart Guide:
https://slurm.schedmd.com/quickstart.html
Slurm MAN Pages:
https://slurm.schedmd.com/man_index.html
Slurm Command Summary (PDF):
https://slurm.schedmd.com/pdfs/summary.pdf
Slurm Accounting Guide:
https://slurm.schedmd.com/accounting.html
About the authors
Mu Li
Principal Scientist, Amazon Web Services
Xin Xin
Senior Solutions Architect, Amazon Web Services
Xin is currently responsible for consulting on and designing cloud solution architectures on Amazon Web Services. Before joining AWS he worked at IBM, and he has more than ten years of experience in HPC product development and architecture design.