Basic Slurm Usage (A Beginner's Guide)

This article describes basic Slurm usage on a Linux cluster.

1. A simple Slurm script

$ cat slurm-job.sh
#!/usr/bin/env bash
#SBATCH -o slurm.sh.out
#SBATCH -p defq
echo "In the directory: `pwd`"
echo "As the user: `whoami`"
echo "write this to a file" > analysis.output
sleep 60
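Real jobs usually need a few more directives than the minimal script above. A slightly fuller header, as a sketch (the job name, resource values, and time limit below are illustrative choices, not defaults of any particular cluster):

```shell
#!/usr/bin/env bash
#SBATCH -J myjob          # job name shown by squeue
#SBATCH -o myjob.%j.out   # stdout file; %j expands to the job ID
#SBATCH -p defq           # partition to submit to
#SBATCH -n 1              # number of tasks
#SBATCH -c 2              # CPU cores per task
#SBATCH -t 00:10:00       # wall-clock limit, HH:MM:SS
#SBATCH --mem=1G          # memory per node

echo "Running on $(hostname)"
```

Lines beginning with `#SBATCH` are read by sbatch; to the shell they are ordinary comments, so the same file also runs unchanged outside Slurm.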

2. Submit the job

$ module load slurm
$ sbatch slurm-job.sh
Submitted batch job 106
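When submitting from another script, the full "Submitted batch job" message is awkward to parse; sbatch's `--parsable` option prints just the job ID (plus the cluster name, if applicable). A minimal sketch, with a stub `sbatch` function so it runs without a cluster (remove the stub on a real system; the ID 106 is just the example from above):

```shell
# Stub standing in for the real sbatch; delete this on a real cluster.
sbatch() { echo "106"; }

# --parsable makes sbatch print only the job ID
jobid=$(sbatch --parsable slurm-job.sh)
echo "Submitted job $jobid"
```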

3. List jobs

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
106 defq slurm-jo rstober R 0:04 1 atom01
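squeue's columns are whitespace-separated, so its output is easy to post-process with standard text tools. A sketch using a copy of the output above (the `$5 == "R"` test selects on the ST column):

```shell
# squeue output captured above; the first line is the header
out='JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
106 defq slurm-jo rstober R 0:04 1 atom01'

# Skip the header and print the IDs of running (state R) jobs
running=$(echo "$out" | awk 'NR > 1 && $5 == "R" { print $1 }')
echo "$running"
```

On a live cluster you can pipe squeue straight into the same awk command.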

4. Get job details

$ scontrol show job 106
JobId=106 Name=slurm-job.sh
UserId=rstober(1001) GroupId=rstober(1001)
Priority=4294901717 Account=(null) QOS=normal
JobState=RUNNING Reason=None Dependency=(null)
Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0
RunTime=00:00:07 TimeLimit=UNLIMITED TimeMin=N/A
SubmitTime=2013-01-26T12:55:02 EligibleTime=2013-01-26T12:55:02
StartTime=2013-01-26T12:55:02 EndTime=Unknown
PreemptTime=None SuspendTime=None SecsPreSuspend=0
Partition=defq AllocNode:Sid=atom-head1:3526
ReqNodeList=(null) ExcNodeList=(null)
NodeList=atom01
BatchHost=atom01
NumNodes=1 NumCPUs=2 CPUs/Task=1 ReqS:C:T=*:*:*
MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0
Features=(null) Gres=(null) Reservation=(null)
Shared=0 Contiguous=0 Licenses=(null) Network=(null)
Command=/home/rstober/slurm/local/slurm-job.sh
WorkDir=/home/rstober/slurm/local
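Because scontrol prints key=value pairs, individual fields are easy to pull out. A sketch operating on a line copied from the output above, so it runs anywhere:

```shell
# One line of the "scontrol show job" output above
line='JobState=RUNNING Reason=None Dependency=(null)'

# Extract the value of the JobState field
state=$(echo "$line" | grep -o 'JobState=[A-Z]*' | cut -d= -f2)
echo "$state"
```

On a live cluster the same pattern works directly: `scontrol show job 106 | grep -o 'JobState=[A-Z]*'`.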

5. Suspend a job (root only)

# scontrol suspend 135
# squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
135 defq simple.s rstober S 0:10 1 atom01

6. Resume a job (root only)

# scontrol resume 135
# squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
135 defq simple.s rstober R 0:13 1 atom01

7. Kill a job

A user can kill their own jobs; root can kill any job.

$ scancel 135
$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
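scancel can also select jobs by name, user, or state instead of by ID. These only make sense against a live cluster, so treat the lines below as a sketch (`-n`, `-u`, and `-t` are scancel's name, user, and state filters):

```shell
scancel -n simple              # cancel all of your jobs named "simple"
scancel -u rstober             # cancel every job owned by rstober
scancel -t PENDING -u rstober  # cancel only rstober's pending jobs
```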

8. Hold a job

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
139 defq simple rstober PD 0:00 1 (Dependency)
138 defq simple rstober R 0:16 1 atom01

$ scontrol hold 139

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
139 defq simple rstober PD 0:00 1 (JobHeldUser)
138 defq simple rstober R 0:32 1 atom01

9. Release a job

$ scontrol release 139

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
139 defq simple rstober PD 0:00 1 (Dependency)
138 defq simple rstober R 0:46 1 atom01
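hold and release also compose with squeue for bulk operations. The sketch below builds a "hold every pending job" command; to keep it runnable without a cluster it works on a captured ID list (139 from above plus a hypothetical 140) and only echoes the command. On a real system, generate the list with `squeue -h -t PD -o %i` and drop the inner `echo`:

```shell
# Job IDs as "squeue -h -t PD -o %i" would print them, one per line;
# captured here so the sketch runs without a cluster.
pending='139
140'

# xargs packs the IDs onto one scontrol hold command line.
# Dry run: the leading "echo" prints the command instead of running it.
cmd=$(echo "$pending" | xargs echo scontrol hold)
echo "$cmd"
```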

10. List partitions

$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
defq* up infinite 1 down* atom04
defq* up infinite 3 idle atom[01-03]
cloud up infinite 2 down* cnode1,cnodegpu1
cloudtran up infinite 1 idle atom-head1
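sinfo's view is configurable as well. These need a live cluster, so as a sketch (`-N` switches to one line per node; `-o` takes a format string in which %P is the partition, %a availability, %l time limit, %D node count, %t state, and %N the node list):

```shell
sinfo -N -l                    # one line per node, long format
sinfo -o "%P %a %l %D %t %N"   # same columns as the default summary view
```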

11. Job dependencies

First, submit a simple job:

#!/usr/bin/env bash
#SBATCH -p defq
#SBATCH -J simple
sleep 60

Submit the job:

$ sbatch simple.sh
Submitted batch job 149

Now we submit a second job that depends on the previous one. There are many ways to specify dependency conditions, but "singleton" is the simplest: the `-d singleton` option tells Slurm not to schedule this job until all previously submitted jobs with the same name have completed.

$ sbatch -d singleton simple.sh

Submitted batch job 150

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
150 defq simple rstober PD 0:00 1 (Dependency)
149 defq simple rstober R 0:17 1 atom01

Once the prerequisite job completes, the dependent job is scheduled.

$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
150 defq simple rstober R 0:31 1 atom01
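Besides singleton, a dependency can name a specific job: `--dependency=afterok:<jobid>` starts a job only after the named job has exited successfully. A sketch combining this with `--parsable`; a stub `sbatch` is defined so the fragment runs without a cluster (remove it on a real system; 149 is the example job ID from above):

```shell
# Stub standing in for the real sbatch; delete this on a real cluster.
sbatch() { echo "149"; }

first=$(sbatch --parsable simple.sh)
dep="--dependency=afterok:$first"
echo "$dep"
# On a real cluster, submit the dependent job with:
#   sbatch "$dep" simple.sh
```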
