It wasn’t readily obvious to me how to do this until I realised it is handled by cass-operator, via the rollingRestartRequested field on the CassandraDatacenter spec, so I thought I’d document it here. The field’s inline documentation describes the behaviour:
# Setting rollingRestartRequested to true will have Cass Operator do a rolling
# restart on this CassDC at the next opportunity. The operator will set this
# back to false once the restart is in progress.
rollingRestartRequested: false
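Incidentally, that comment comes from the CassandraDatacenter CRD itself, so assuming the cass-operator CRDs on your cluster include the schema descriptions, you should be able to pull up the same field documentation straight from the API server:
$ kubectl explain cassandradatacenter.spec.rollingRestartRequested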
In my case, I had a vanilla installation of K8ssandra with 3 Cassandra nodes/pods:
$ kubectl get pods
NAME                                                  READY   STATUS      RESTARTS   AGE
k8ssandra-cass-operator-7c4dc64969-9b428              1/1     Running     0          10d
k8ssandra-dc1-default-sts-0                           2/2     Running     0          29m
k8ssandra-dc1-default-sts-1                           2/2     Running     0          23m
k8ssandra-dc1-default-sts-2                           2/2     Running     0          26m
k8ssandra-dc1-stargate-68d5574cc7-dzdrr               1/1     Running     0          10d
k8ssandra-grafana-b6f7978c4-7lxz8                     2/2     Running     0          10d
k8ssandra-kube-prometheus-operator-5556885bd6-clvd9   1/1     Running     0          10d
k8ssandra-reaper-6bcf89ddb7-jl5vf                     1/1     Running     13         10d
k8ssandra-reaper-operator-f6bc9b77b-jrkrm             1/1     Running     0          10d
k8ssandra-reaper-schema-m4g76                         0/1     Completed   0          10d
prometheus-k8ssandra-kube-prometheus-prometheus-0     2/2     Running     1          10d
Here are the steps I took to trigger a rolling restart.
STEP 1 - Get the YAML for the Cassandra DC resource:
$ kubectl get cassdc dc1 -o yaml > cassdc-dc1.yaml
STEP 2 - Modify cassdc-dc1.yaml and add rollingRestartRequested: true in the spec:
...
spec:
  allowMultipleNodesPerWorker: true
  clusterName: k8ssandra
  config:
    ...
  configBuilderResources: {}
  dockerImageRunsAsCassandra: true
  managementApiAuth:
    insecure: {}
  rollingRestartRequested: true    # <----- INSERTED HERE
  podTemplateSpec:
    ...
STEP 3 - Apply the update:
$ kubectl apply -f cassdc-dc1.yaml
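As an aside, if you’d rather not round-trip the YAML through a file, the same spec change can be made in one step with kubectl patch. This is a sketch of the equivalent one-liner, not part of the documented procedure:
$ kubectl patch cassdc dc1 --type merge -p '{"spec": {"rollingRestartRequested": true}}'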
The Cassandra pods should now be restarted one at a time. You can monitor the progress with:
$ watch kubectl get pods
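The CassandraDatacenter status should also reflect the restart: assuming your cass-operator version exposes the cassandraOperatorProgress field, it reads Updating while the restart is underway and goes back to Ready once it completes:
$ kubectl get cassdc dc1 -o jsonpath='{.status.cassandraOperatorProgress}'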
FWIW I’ve logged K8SSAND-568 to have the steps added to the Docs > Tasks section of the K8ssandra.io website. Cheers!