Reference article: 超算平臺安裝 CentOS7下安裝slurm20.11 (installing Slurm 20.11 on CentOS 7 for an HPC platform)
Note: all commands come from the reference article above; they have only been reorganized and adjusted for version differences.
0. Deployment background
- Initial environment: CentOS 7.9 minimal install
  - 192.168.2.130 mgr (in practice, node1 was used directly as the management node)
  - 192.168.2.131 node1
  - 192.168.2.132 node2
- Upload the slurm-21.08.0 package (slurm-21.08.0-0rc2.tar.bz2, used in section 4) to /home/share/ on the management node.
1. Install MySQL (management node)
yum -y install mariadb-server
systemctl start mariadb
systemctl enable mariadb
mysql
Run the following statements in the mysql shell:
set password=password('123456');
create database slurm_acct_db;
quit
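The article lets slurmdbd access MariaDB as root with the password set above. A dedicated database account is a common alternative; the sketch below is only a suggestion, not part of the reference article (the slurm user name and password are illustrative, and StorageUser/StoragePass in slurmdbd.conf would have to match):
mysql -p
create user 'slurm'@'localhost' identified by '123456';
grant all on slurm_acct_db.* to 'slurm'@'localhost';
flush privileges;
quit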
To change the password later, run mysql -p, enter the current password to get into the mysql shell, and execute set password again.
2. Install Samba for file transfer; optional (management node)
yum -y install samba
mkdir /home/share
chmod 777 /home/share/
echo "[global]
log file = /var/log/samba/log.%m
max log size = 50
security = user
map to guest = Bad User
[share]
path=/home/share
readonly=yes
browseable=yes
writable = yes
guest ok=yes" > /etc/samba/smb.conf
systemctl start smb
systemctl enable smb
systemctl disable firewalld
systemctl stop firewalld
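To confirm the share is defined correctly, testparm (shipped with Samba) validates smb.conf and smbclient can list the shares anonymously; smbclient may need to be installed separately:
yum -y install samba-client
testparm -s
smbclient -L localhost -N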
3. Initial setup (run on every node, management and compute alike)
export MUNGEUSER=991 && groupadd -g $MUNGEUSER munge
useradd -m -c "MUNGE Uid 'N' Gid Emporium" -d /var/lib/munge -u $MUNGEUSER -g munge -s /sbin/nologin munge
export SLURMUSER=992 && groupadd -g $SLURMUSER slurm
useradd -m -c "SLURM workload manager" -d /var/lib/slurm -u $SLURMUSER -g slurm -s /bin/bash slurm
If the users were created incorrectly and need to be reset, remove everything first:
yum remove munge munge-libs munge-devel -y
userdel -r munge
# if userdel reports: userdel: user munge is currently used by process xxxxx
kill xxxxx
userdel -r munge
Make sure the node has network access before installing the libraries below, and install epel first:
yum -y install epel-release
yum -y install openssh-clients munge munge-libs munge-devel rng-tools openssl openssl-devel pam-devel numactl numactl-devel hwloc hwloc-devel lua lua-devel readline-devel rrdtool-devel ncurses-devel man2html libibmad libibumad python3-pip perl-ExtUtils-MakeMaker gcc rpm-build mysql-devel json-c json-c-devel http-parser http-parser-devel
yum -y install ucx* hdf5 hdf5-devel freeipmi
yum -y install gcc make libffi-devel openssl-devel
yum -y install gcc gcc-c++ make autoconf m4 automake libtool
yum -y install libgpg-error libgcrypt
rngd -r /dev/urandom
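rngd keeps the kernel entropy pool filled so that key generation later on does not block; whether it is working can be checked with (a larger number means more entropy available):
cat /proc/sys/kernel/random/entropy_avail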
echo '192.168.2.130 mgr
192.168.2.131 node1
192.168.2.132 node2' >> /etc/hosts
4. Installation and configuration (run on the management node)
If a service fails to start, use journalctl -xe to find the specific reason:
echo mgr > /etc/hostname
ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh node1 "echo node1 > /etc/hostname"
ssh node2 "echo node2 > /etc/hostname"
ssh node1 reboot
ssh node2 reboot
reboot
Note: rebooting at this point may leave tools such as XTerm unable to connect to the server remotely (possibly an ssh port conflict).
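As an aside, hostnamectl applies a hostname immediately and survives reboots, so it is an alternative to editing /etc/hostname and rebooting (a suggestion, not what the reference article does):
hostnamectl set-hostname mgr
ssh node1 hostnamectl set-hostname node1
ssh node2 hostnamectl set-hostname node2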
/usr/sbin/create-munge-key -r
dd if=/dev/urandom bs=1 count=1024 > /etc/munge/munge.key  # overwrites the key generated above with 1024 random bytes; either command alone is enough
chown munge: /etc/munge/munge.key && chmod 400 /etc/munge/munge.key
scp /etc/munge/munge.key node1:/etc/munge/
scp /etc/munge/munge.key node2:/etc/munge/
chown -R munge: /etc/munge/ /var/log/munge/ && chmod 0700 /etc/munge/ /var/log/munge/
systemctl enable munge
systemctl start munge
systemctl status munge
The compute nodes are set up much like the management node; if one machine serves as both, part of these steps can be skipped:
ssh node1 "chown -R munge: /etc/munge/ /var/log/munge/ && chmod 0700 /etc/munge/ /var/log/munge/"
ssh node1 systemctl enable munge
ssh node1 systemctl start munge
ssh node1 systemctl status munge
ssh node2 "chown -R munge: /etc/munge/ /var/log/munge/ && chmod 0700 /etc/munge/ /var/log/munge/"
ssh node2 systemctl enable munge
ssh node2 systemctl start munge
ssh node2 systemctl status munge
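With munge running on all three machines, credential round-trips can be verified with munge's own tools; the local and remote decodes should all report STATUS: Success:
munge -n | unmunge
munge -n | ssh node1 unmunge
munge -n | ssh node2 unmunge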
systemctl stop firewalld
systemctl disable firewalld
ssh node1 systemctl stop firewalld
ssh node1 systemctl disable firewalld
ssh node2 systemctl stop firewalld
ssh node2 systemctl disable firewalld
Confirm these packages are already installed (same list as in section 3):
yum -y install ucx* hdf5 hdf5-devel freeipmi
yum -y install gcc make libffi-devel openssl-devel
yum -y install gcc gcc-c++ make autoconf m4 automake libtool
yum -y install libgpg-error libgcrypt
cd /home/share/
rpmbuild -ta --with mysql slurm-21.08.0-0rc2.tar.bz2
cd
cp -rf rpmbuild/RPMS/x86_64 ./
yum localinstall x86_64/slurm-*.rpm -y
scp -r x86_64/ node1:
scp -r x86_64/ node2:
ssh node1 "yum localinstall x86_64/slurm-*.rpm -y"
ssh node2 "yum localinstall x86_64/slurm-*.rpm -y"
cp /etc/slurm/slurm.conf.example /etc/slurm/slurm.conf
cp /etc/slurm/slurmdbd.conf.example /etc/slurm/slurmdbd.conf
cp /etc/slurm/cgroup.conf.example /etc/slurm/cgroup.conf
Reference slurm.conf and slurmdbd.conf files are provided at the end of this article.
cat /home/share/slurm.conf > /etc/slurm/slurm.conf
cat /home/share/slurmdbd.conf > /etc/slurm/slurmdbd.conf
#cat /home/share/cgroup.conf > /etc/slurm/cgroup.conf # the default cgroup.conf is fine
scp /etc/slurm/slurm.conf node1:/etc/slurm/slurm.conf
scp /etc/slurm/slurm.conf node2:/etc/slurm/slurm.conf
scp /etc/slurm/slurmdbd.conf node1:/etc/slurm/slurmdbd.conf
scp /etc/slurm/slurmdbd.conf node2:/etc/slurm/slurmdbd.conf
scp /etc/slurm/cgroup.conf node1:/etc/slurm/cgroup.conf
scp /etc/slurm/cgroup.conf node2:/etc/slurm/cgroup.conf
mkdir /var/spool/slurmctld && chown slurm: /var/spool/slurmctld && chmod 755 /var/spool/slurmctld
mkdir /var/log/slurm && touch /var/log/slurm/slurmctld.log && chown slurm: /var/log/slurm/slurmctld.log
touch /var/log/slurm/slurm_jobacct.log /var/log/slurm/slurm_jobcomp.log && chown slurm: /var/log/slurm/slurm_jobacct.log /var/log/slurm/slurm_jobcomp.log
ssh node1 "mkdir /var/spool/slurmd && chown slurm: /var/spool/slurmd && chmod 755 /var/spool/slurmd"
ssh node1 "mkdir /var/log/slurm && touch /var/log/slurm/slurmd.log && chown slurm: /var/log/slurm/slurmd.log"
ssh node2 "mkdir /var/spool/slurmd && chown slurm: /var/spool/slurmd && chmod 755 /var/spool/slurmd"
ssh node2 "mkdir /var/log/slurm && touch /var/log/slurm/slurmd.log && chown slurm: /var/log/slurm/slurmd.log"
systemctl enable slurmdbd
systemctl start slurmdbd
systemctl status slurmdbd
systemctl enable slurmctld
systemctl start slurmctld
systemctl status slurmctld
ssh node1 systemctl enable slurmd
ssh node1 systemctl restart slurmd
ssh node1 systemctl status slurmd
ssh node2 systemctl enable slurmd
ssh node2 systemctl restart slurmd
ssh node2 systemctl status slurmd
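A quick smoke test once everything is up: sinfo should list node[1-2] as idle in the debug partition, and a trivial two-node job should print both hostnames:
sinfo
srun -N2 hostname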
At this point the Slurm installation is finished. If any service reports errors during startup, launch it in the foreground in debug mode to see exactly what goes wrong:
$ slurmctld -Dvvvvv
$ slurmdbd -Dvvvvv
$ slurmd -Dvvvvv
slurm.conf
Changed to fix: fatal: The AccountingStoreJobComment option has been removed, please use AccountingStoreFlags
Changed to fix: error: Bad TaskPluginParam: Sched
SlurmctldHost=mgr
#
SlurmctldDebug=info
SlurmdDebug=debug3
GresTypes=gpu
MpiDefault=none
ProctrackType=proctrack/cgroup
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=root
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/affinity,task/cgroup
# Fix Mentioned Error
# TaskPluginParam=Sched
TaskPluginParam=verbose
# TIMERS
InactiveLimit=0
KillWait=15
ResumeTimeout=600
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=12
SlurmdTimeout=300
Waittime=0
# SCHEDULING
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
# LOGGING AND ACCOUNTING
AccountingStorageEnforce=associations
AccountingStorageHost=mgr
AccountingStoragePort=6819
AccountingStorageType=accounting_storage/slurmdbd
# Fix Mentioned Error
# AccountingStoreJobComment=YES
AccountingStoreFlags=job_comment
ClusterName=slurm20_cluster
JobCompHost=localhost
JobCompPass=123456
JobCompPort=3306
JobCompType=jobcomp/mysql
JobCompUser=root
JobAcctGatherFrequency=1
JobAcctGatherType=jobacct_gather/linux
SlurmctldLogFile=/var/log/slurm/slurmctld.log
SlurmdLogFile=/var/log/slurm/slurmd.log
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
SuspendTime=70
NodeName=node[1-2] Procs=1 State=UNKNOWN
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP
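Note that the NodeName line above declares only one CPU per node. On real hardware, running slurmd -C on a compute node prints a NodeName definition derived from the detected sockets, cores, and threads, which can be pasted into slurm.conf instead:
slurmd -C    # prints a NodeName=... line matching the actual hardware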
slurmdbd.conf
# Authentication info
AuthType=auth/munge
AuthInfo=/var/run/munge/munge.socket.2
#DebugLevel=info
# slurmDBD info
DbdAddr=192.168.2.130
DbdHost=localhost
DbdPort=6819
SlurmUser=root
DebugLevel=verbose
LogFile=/var/log/slurm/slurmdbd.log
PidFile=/var/run/slurmdbd.pid
# Database info
StorageType=accounting_storage/mysql
StorageHost=localhost
StoragePort=3306
StoragePass=123456
StorageUser=root
StorageLoc=slurm_acct_db
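One caveat: since Slurm 20.11, slurmdbd refuses to start if slurmdbd.conf is readable by other users, so make sure the file is owned by SlurmUser (root in this setup) with mode 600:
chown root: /etc/slurm/slurmdbd.conf && chmod 600 /etc/slurm/slurmdbd.conf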
5. Troubleshooting log
> 1. error: This host (node1/node1) not a valid controller
Symptom: systemctl status slurmctld on the management node shows failed; the error turns up in /var/log/slurm/slurmctld.log.
Cause: the same machine is used as both the management node and a compute node.
Fix: in slurm.conf, comment out SlurmctldHost and replace it with ControlMachine=node1 and ControlAddr=192.168.2.131.
> 2. slurm_recv_timeout at 0 of 4, recv zero bytes
Symptom: found in the systemctl status slurmctld output on the compute node; sinfo shows the two nodes as isolated from each other.
Cause: the clocks of the two machines are not synchronized.
Fix: re-select the time zone with tzselect and update the time over the network.
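A more durable fix than a one-off adjustment (a suggestion beyond what the article does) is to keep all nodes synchronized with chrony, which CentOS 7 ships by default:
yum -y install chrony
systemctl enable chronyd && systemctl start chronyd
chronyc tracking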
> 3. Low socket*core*thread count, Low CPUs
Symptom: sinfo shows one node in the drained state; scontrol show node reveals the reason above.
Cause: unknown (nothing abnormal found in the config files).
Fix: the state clears after refreshing the node with scontrol update nodename=THE_NODE_NAME state=resume.