Teardown Node(s)
Teardown allows the user to remove components and clean up whatever was set up during provisioning. This can be done either for the entire system or for each component individually.
Because the CORTX cluster services have many interdependencies, a single teardown script does not always work; the cluster has to be torn down differently depending on the situation or scenario.
Before tearing down, stop the CORTX cluster:
cortx cluster stop --all
This might take some time.
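Since the stop can take a while, it can help to poll the cluster status before moving on. A minimal sketch, assuming the cluster resources are managed by Pacemaker and the pcs tool is available on the node (adjust to whatever status tool your deployment uses):
# Poll until no resources are reported as Started (assumes Pacemaker/pcs)
while pcs status resources 2>/dev/null | grep -q "Started"; do
    echo "Waiting for cluster resources to stop..."
    sleep 10
done
echo "Cluster resources are stopped."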
To tear down the entire setup in one step:
provisioner destroy
To tear down components individually, run the destroy states in the order shown below:
provisioner destroy --states ha
provisioner destroy --states controlpath
provisioner destroy --states iopath
provisioner destroy --states utils
provisioner destroy --states prereq
provisioner destroy --states system
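If you prefer to script the component-wise teardown, the same sequence can be run in a loop. A minimal sketch, assuming the provisioner CLI is on PATH and the state names above are valid for your release:
# Run the destroy states in order, stopping at the first failure
for state in ha controlpath iopath utils prereq system; do
    echo "Destroying state: ${state}"
    provisioner destroy --states ${state} || { echo "Destroy failed for state: ${state}"; break; }
done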
To remove the provisioner bootstrap and manually clean up the nodes, perform the following steps:
- Using provisioner CLI
provisioner destroy --states bootstrap
- Cleanup SSH (execute on each node)
rm -rf /root/.ssh
- Unmount gluster volumes
MOUNT_ENDPOINT=$(mount -l | grep gluster | cut -d ' ' -f3)
[[ -n ${MOUNT_ENDPOINT} ]] && umount ${MOUNT_ENDPOINT}
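To confirm the unmount succeeded before reclaiming storage, a quick informational check such as the following can be used:
# Report whether any gluster mounts remain (informational only)
mount -l | grep gluster && echo "Gluster volumes are still mounted" || echo "No gluster mounts remain"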
- Reclaim metadata and data storage
# Remove the LVM volume groups created for the metadata volumes
for vggroup in $(vgdisplay | egrep "vg_srvnode-" | tr -s ' ' | cut -d' ' -f 4); do
  echo "Removing volume group ${vggroup}"
  vgremove --force ${vggroup}
done
partprobe
# Wipe filesystem/partition signatures from the remaining data devices
# (space-separated list so the loop iterates over each device)
device_list=$(lsblk -nd -o NAME -e 11 | grep -v sda | sed 's|sd|/dev/sd|g' | paste -s -d ' ' -)
for device in ${device_list}
do
  wipefs --all ${device}
done
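After wiping, you can verify that the devices are clean. A minimal check, assuming the same volume-group naming as above:
# Confirm the srvnode volume groups are gone and no signatures remain
vgdisplay | grep "vg_srvnode-" || echo "No srvnode volume groups remain"
lsblk -f   # FSTYPE column should be empty for the wiped devices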
- Stop services
systemctl status glustersharedstorage >/dev/null 2>&1 && systemctl stop glustersharedstorage
systemctl status glusterfsd >/dev/null 2>&1 && systemctl stop glusterfsd
systemctl status glusterd >/dev/null 2>&1 && systemctl stop glusterd
systemctl status salt-minion >/dev/null 2>&1 && systemctl stop salt-minion
systemctl status salt-master >/dev/null 2>&1 && systemctl stop salt-master
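The same services can optionally be disabled so they do not restart on reboot. A sketch, with the service list assumed from the step above (not part of the original steps):
# Disable the stopped services to prevent them starting again on reboot
for svc in glustersharedstorage glusterfsd glusterd salt-minion salt-master; do
    systemctl is-enabled ${svc} >/dev/null 2>&1 && systemctl disable ${svc}
done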
- Uninstall the rpms
yum erase -y cortx-prvsnr cortx-prvsnr-cli          # Cortx Provisioner packages
yum erase -y glusterfs-fuse glusterfs-server glusterfs   # Gluster FS packages
yum erase -y salt-minion salt-master salt-api       # Salt packages
yum erase -y python36-m2crypto                      # Salt dependency
yum erase -y python36-cortx-prvsnr                  # Cortx Provisioner API packages
yum erase -y *cortx*                                # Brute force cleanup for any remnants
yum autoremove -y
yum clean all
# Remove cortx-py-utils
pip3 uninstall -y cortx-py-utils
# Cleanup pip packages
pip3 freeze | xargs pip3 uninstall -y
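To confirm the packages are gone, a quick informational check such as the following can be run:
# List any remaining CORTX, Gluster, or Salt packages (informational only)
rpm -qa | egrep -i "cortx|gluster|salt" || echo "No matching packages remain"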
- Cleanup bricks and other directories
unalias rm
# Cleanup yum
test -e /var/cache/yum && rm -rf /var/cache/yum
# Cleanup pip config
test -e /etc/pip.conf && rm -f /etc/pip.conf
test -e ~/.cache/pip && rm -rf ~/.cache/pip
# Cortx software dirs
test -e /opt/seagate/cortx_configs && rm -rf /opt/seagate/cortx_configs
test -e /opt/seagate/cortx && rm -rf /opt/seagate/cortx
test -e /opt/seagate && rm -rf /opt/seagate
test -e /etc/csm && rm -rf /etc/csm
test -e /etc/cortx/ha && rm -rf /etc/cortx
# Bricks cleanup
test -e /var/lib/seagate && rm -rf /var/lib/seagate
test -e /srv/glusterfs && rm -rf /srv/glusterfs
test -e /var/lib/glusterd && rm -rf /var/lib/glusterd || true
# Cleanup Salt
test -e /var/cache/salt && rm -rf /var/cache/salt
test -e /etc/salt && rm -rf /etc/salt
test -e /etc/yum.repos.d/RELEASE_FACTORY.INFO && rm -f /etc/yum.repos.d/RELEASE_FACTORY.INFO
# Cleanup Provisioner profile directory
test -e /opt/isos && rm -rf /opt/isos || true
test -e /root/.provisioner && rm -rf /root/.provisioner || true
test -e /etc/yum.repos.d/RELEASE_FACTORY.INFO && rm -f /etc/yum.repos.d/RELEASE_FACTORY.INFO || true
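A quick way to confirm the cleanup, checking the same paths as above (informational only):
# Report any leftover directories from the cleanup list
for dir in /opt/seagate /etc/csm /etc/cortx /var/lib/seagate /srv/glusterfs /var/lib/glusterd /var/cache/salt /etc/salt /opt/isos /root/.provisioner; do
    test -e ${dir} && echo "Still present: ${dir}"
done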
- Cleanup SSH
rm -rf /root/.ssh