Following this message on GitHub, I managed to successfully stop an automatically launched cleanup process by:
- deleting the relevant `k8ssandratask` object
- stopping the cleanup process manually via `nodetool`
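For reference, the commands were roughly the following (a sketch; the pod name is a placeholder, and I'm assuming the Cassandra container in the pod is named `cassandra`):

```
# delete the task object that drives the cleanup
$ kubectl delete k8ssandratask cleanup-1724866034 -n ns1

# stop the running cleanup compaction on each Cassandra node
$ kubectl exec -it <cassandra-pod> -n ns1 -c cassandra -- nodetool stop CLEANUP
```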
I reached my goal (not going to cover it here), but the problem is that now both my `k8ssandracluster` and `cassandradatacenter` objects are apparently stuck in the “Updating” state:
`k8ssandracluster` k8s status:
```
$ kubectl get k8ssandraclusters.k8ssandra.io dc1 -o yaml
...
status:
  ...
  datacenters:
    data:
      cassandra:
        cassandraOperatorProgress: Updating
        conditions:
        - ...
          status: "True"
          type: Healthy
        - ...
          status: "False"
          type: Stopped
        - ...
          status: "False"
          type: ReplacingNodes
        - ...
          status: "True"
          type: Updating
        - ...
          status: "False"
          type: RollingRestart
        - ...
          status: "False"
          type: Resuming
        - ...
          status: "False"
          type: ScalingDown
        - ...
          status: "True"
          type: Valid
        - ...
          status: "True"
          type: Initialized
        - ...
          status: "True"
          type: Ready
        - ...
          status: "True"
          type: ScalingUp
        ...
        trackedTasks:
        - name: cleanup-1724866034
          namespace: ns1
  error: None
```
`cassandradatacenter` k8s status:
```
$ kubectl get cassandradatacenters.cassandra.datastax.com dc1 -o yaml
...
status:
  cassandraOperatorProgress: Updating
  conditions:
  - ...
    status: "True"
    type: Healthy
  - ...
    status: "False"
    type: Stopped
  - ...
    status: "False"
    type: ReplacingNodes
  - ...
    status: "True"
    type: Updating
  - ...
    status: "False"
    type: RollingRestart
  - ...
    status: "False"
    type: Resuming
  - ...
    status: "False"
    type: ScalingDown
  - ...
    status: "True"
    type: Valid
  - ...
    status: "True"
    type: Initialized
  - ...
    status: "True"
    type: Ready
  - ...
    status: "True"
    type: ScalingUp
  ...
  trackedTasks:
  - name: cleanup-1724866034
    namespace: ns1
```
As you can see, the following lines are present in both objects:
```
status:
  cassandraOperatorProgress: Updating
  conditions:
  ...
  - ...
    status: "True"
    type: Updating
  ...
  - ...
    status: "True"
    type: ScalingUp
  ...
  trackedTasks:
  - name: cleanup-1724866034
    namespace: ns1
```
This is despite the fact that no new nodes are being added anymore, and the `k8ssandratask` mentioned there (`cleanup-1724866034`) doesn’t exist anymore, since I removed it manually via `kubectl delete`.
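To double-check that nothing is left over, listing the task objects (the K8ssandraTask CRD lives in the `control.k8ssandra.io` API group) comes back empty:

```
# confirm no k8ssandratask objects remain in the namespace
$ kubectl get k8ssandratasks.control.k8ssandra.io -n ns1
```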
Edit: also, if I add new nodes to the cluster in question, auto-cleanup doesn’t launch anymore.
The question: how can I restore the cluster status to normal?
The only idea I have for now is to edit the statuses of both the `k8ssandracluster` and `cassandradatacenter` objects via kubectl (I’m not even sure whether that’s technically possible) and remove the “trackedTasks” block (and pray it actually helps and doesn’t break anything else).
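In kubectl terms, I imagine something like the following (a sketch I haven’t actually tried; it assumes both objects live in `ns1` and a kubectl new enough to support `--subresource`, i.e. v1.24+, and the nested path mirrors the `k8ssandracluster` status shown above):

```
# remove trackedTasks from the cassandradatacenter status subresource
$ kubectl patch cassandradatacenter dc1 -n ns1 --subresource=status --type=json \
    -p='[{"op": "remove", "path": "/status/trackedTasks"}]'

# same for the k8ssandracluster status, where the field is nested per-datacenter
$ kubectl patch k8ssandracluster dc1 -n ns1 --subresource=status --type=json \
    -p='[{"op": "remove", "path": "/status/datacenters/data/cassandra/trackedTasks"}]'
```

Even if the patches go through, I’d expect the operators to be free to overwrite the status again on their next reconcile, which is part of why I’m asking.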
Are there any other [more proper] ways to fix this?