Problem: Slurm nodes are stuck in the drain state with Reason=Low socket*core*thread count, Low CPUs [slurm@2021-09-15T15:18:53]


Submit a job:

# srun hostname

srun: Required node not available (down, drained or reserved)
srun: job 58 queued and waiting for resources

Check the job status:

squeue

58   compute hostname     root PD       0:00      1 (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions)

The nodes required for the job are down, drained, or reserved for jobs in a higher-priority partition.

Show detailed job information:
scontrol show jobs
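
To look at just the queued job, scontrol also accepts a single job ID (58 is the job from the squeue output above):

scontrol show job 58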

Show partition and node status:

sinfo

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
control      up   infinite      1 drain* m1
compute*     up   infinite      1  drain c1
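
sinfo can also list the reason each unavailable node is down or drained, which is usually the quickest way to see the same Reason string; -R (--list-reasons) is a standard sinfo option:

sinfo -R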

Cancel a job step:
Use the scancel command with the job ID to cancel it:

scancel 58
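
If several test jobs have piled up while debugging, scancel also accepts user and partition filters (standard options; substitute your own user and partition):

scancel -u root -p compute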

Update the node state:

scontrol update NodeName=m1 State=idle
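
The drained compute node may need the same treatment; State=resume is the usual way to clear a drain and return a node to service (c1 is the compute node in this cluster):

scontrol update NodeName=c1 State=resume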

Check the controller log:

/var/log/slurmctld.log

error: Nodes m1 not responding
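
It also helps to confirm that slurmd is actually running on the node itself and to read its log there; the path below is only the common default, since the real location is set by SlurmdLogFile in slurm.conf (assuming a systemd-managed install):

systemctl status slurmd
tail -n 50 /var/log/slurmd.log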

Check the node status:

scontrol show node

The compute node's state shows Reason=Low socket*core*thread count, Low CPUs [slurm@2021-09-15T15:18:53]. This reason is set when a node registers with fewer CPUs, sockets, cores, threads, or less memory than its definition in slurm.conf claims.
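
Limiting the output to the affected compute node keeps it readable (c1 is the compute node in this cluster):

scontrol show node c1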

After some digging, check the configuration:

vim /etc/slurm/slurm.conf

Lowering CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 so that the values do not exceed what the node actually provides fixes it; set them according to your own server's resources.

NodeName=m1 NodeAddr=192.168.8.150  CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 State=UNKNOWN
NodeName=c1 NodeAddr=192.168.8.145 CPUs=1 CoresPerSocket=1 ThreadsPerCore=1 RealMemory=900 Procs=1 State=UNKNOWN
PartitionName=control Nodes=m1 Default=NO MaxTime=INFINITE State=UP
PartitionName=compute Nodes=c1 Default=YES MaxTime=INFINITE State=UP
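
To find values that match the real hardware, slurmd can print the configuration it detects on each node (slurmd -C), and the same slurm.conf must be present on every node before the daemons are restarted. This is only a sketch assuming a systemd-managed install; the detected values will differ on your machines:

slurmd -C                      # prints a NodeName=... line with the detected CPUs, sockets, cores, threads and RealMemory
systemctl restart slurmctld    # on the controller (m1)
systemctl restart slurmd       # on every compute node
scontrol update NodeName=m1 State=resume
scontrol update NodeName=c1 State=resume

After the restart both nodes should come back as idle in sinfo, and the queued job can be resubmitted.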


