Sysadmin Garden of Eden Docs

Version 1337.42.0

OSDs


OSD Maintenance

Gracefully remove OSD

The first step is to set the OSD's CRUSH weight to zero, either instantly to 0.0 or gradually*. (*The gradual approach should always be used when the cluster is in use, though note that any OSD weight change will cause data redistribution.)

ceph osd crush reweight osd.<ID> 0.0

or gradually:

for i in {9..1}; do
    ceph osd crush reweight osd.<ID> 0.$i
    # Wait five minutes per step, or longer depending on your cluster's recovery speed
    sleep 300
done
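
Instead of sleeping a fixed five minutes, the loop can wait for recovery to finish before lowering the weight further. The sketch below is a variant of the loop above and assumes the cluster otherwise reports HEALTH_OK; if there are unrelated health warnings the check never passes, so adjust accordingly. Progress can also be followed manually with ceph -s and ceph osd df.

for i in {9..1}; do
    ceph osd crush reweight osd.<ID> 0.$i
    # Block until backfill/recovery has finished before lowering the weight again.
    # Note: this never succeeds if the cluster has unrelated health warnings.
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
done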

After the reweight, mark the OSD out and remove it (along with its authentication key):

ceph osd out <ID>
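
Before removing the OSD, it is worth confirming that all placement groups have migrated off it. On Luminous and newer releases, ceph osd safe-to-destroy reports whether an OSD can be removed without risking data; a simple wait loop could look like this:

# Exits successfully only once osd.<ID> can be removed without risking data availability
until ceph osd safe-to-destroy osd.<ID>; do
    sleep 60
done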

NOTE: For Rook Ceph clusters, scale the OSD's Deployment down to 0 so the daemon stops running:

kubectl scale -n rook-ceph deployment rook-ceph-osd-<ID> --replicas=0

Then remove the OSD from the CRUSH map, delete its authentication key and remove it from the cluster:

ceph osd crush remove osd.<ID>
ceph auth del osd.<ID>
ceph osd rm <ID>
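
As a final sanity check, the removed OSD should no longer appear in the CRUSH tree or the OSD list, and the cluster should return to HEALTH_OK once rebalancing finishes:

ceph osd tree      # osd.<ID> should be gone from the CRUSH hierarchy
ceph osd ls        # <ID> should no longer be listed
ceph -s            # overall cluster status and recovery progress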

Last updated on 30 Sep 2019
Published on 30 Sep 2019