How to reuse the same PV / PVC when we restart the cluster or Cassandra datacenter

This is about how to work around the issue reported in K8SSAND-1486 (awaiting a solution). When we restart the datacenter / cluster (to fix the reported issue), we lose the PVC and PV.

We need a way to reuse / re-assign the old PV to a new PVC, or to retain the PVC (by adding a volume claim template to the StatefulSet), so that it will be re-assigned when we restart the cluster.

A little background: we are using Helm charts to spin up clusters, and these are the versions (v2) we are using:

  • Cass-operator - 1.10.4
  • K8ssandra operator - 1.0.1
  • K8ssandra - 1.5.1

I had a similar issue today, so I hope someone on the team responds. I was reconfiguring our GitOps tool (Flux) and it forced a deletion and re-install of the cluster. Since the PVCs got deleted, I had to start over - fortunately in a dev environment. Similar operator products allow this to be configured; see Run Elasticsearch on ECK | Elastic Cloud on Kubernetes [2.11] | Elastic.

They have an option to retain the PVCs:
volumeClaimDeletePolicy: DeleteOnScaledownOnly

This would be nice for recovery. Is there any way to do something similar for a K8ssandra cluster's PVCs?

Strimzi Kafka also has a similar setting.

Example:
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      class: managed-premium
      size: 8Gi
      deleteClaim: false

The process is a bit different in our solution right now. If you set stopped: true, the cluster is scaled down to 0 nodes, but all the PVCs are kept. Deleting the K8ssandraCluster / CassandraDatacenter object, however, will delete the PVCs.
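
As an illustration, here is a minimal sketch of the stop/resume approach using cass-operator's CassandraDatacenter resource; the datacenter name, cluster name, version, size, and storage class below are placeholders, not values from this thread.

apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1                        # hypothetical datacenter name
spec:
  clusterName: demo                # hypothetical cluster name
  serverType: cassandra
  serverVersion: "4.0.3"
  size: 3
  stopped: true                    # scales the datacenter down to 0 pods; PVCs are kept
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: standard   # placeholder storage class
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi

Setting stopped back to false brings the pods up again and they re-bind to the retained PVCs.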

That said, you wouldn't lose data if you have your StorageClass reclaimPolicy set to Retain. In that case deleting the PVCs would not delete the PVs (which actually hold the data), and you could then remount the existing PVs to new PVCs. The same setting is available on PVs as well, and we do not delete those either.
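
For reference, here is a minimal StorageClass sketch with reclaimPolicy: Retain; the name and provisioner are assumptions, so substitute whatever your cluster actually uses.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cassandra-retain              # hypothetical name
provisioner: kubernetes.io/gce-pd     # replace with your cloud/CSI provisioner
reclaimPolicy: Retain                 # released PVs (and their data) are kept
volumeBindingMode: WaitForFirstConsumer

With Retain, a PV whose PVC is deleted moves to the Released phase instead of being removed, which is why the claimRef clean-up described later in this thread is needed before a new PVC can bind to it.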

Thanks - I was thinking along the same lines. I am going to experiment with this for a recovery scenario and will post back the results.

Yes, we are following the same approach - we changed the StorageClass to Retain, and whenever we want to restart the cluster (disable and re-enable it) we have to patch the PV to set its claimRef to null so that the same PV gets re-attached.
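
To make the claimRef step concrete, here is an abridged sketch of what a released PV might look like after its PVC has been deleted; the PV name, namespace, and PVC name are hypothetical, and the binding behaviour is standard Kubernetes rather than anything K8ssandra-specific.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0b1c2d3e                           # hypothetical, provisioner-generated name
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: cassandra-retain           # hypothetical Retain StorageClass
  claimRef:                                    # still points at the deleted PVC;
    kind: PersistentVolumeClaim                # clearing it returns the PV to Available
    namespace: k8ssandra                       # hypothetical namespace
    name: server-data-demo-dc1-default-sts-0   # hypothetical PVC name
status:
  phase: Released

Once the claimRef is cleared (as in the kubectl patch command further down), the PV reports Available again and a freshly created PVC with a matching size, access mode, and storage class can bind to it.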

Thanks
Kumar

All good news.

  • Created a custom StorageClass with "reclaimPolicy: Retain"
  • Created a new K8ssandra cluster using the new StorageClass (see the sketch after this list)
  • Seeded Cassandra with data
  • Deleted the new cluster, which deleted the PVCs but left the associated PVs
  • Set the old PVs' claimRef to null with the following command:
  • kubectl patch pv $PV_NAME_i -n $PV_NAMESPACE -p '{"spec":{"claimRef": null}}'
  • Recreated the old cluster using the same name and config - the new PVCs picked up the old PVs
  • All data was retained
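
For completeness, here is a rough sketch of what the recreated cluster's storage settings could look like, assuming the k8ssandra-operator K8ssandraCluster API and a hypothetical Retain StorageClass named cassandra-retain like the one sketched earlier; all names and sizes are illustrative, not taken from this thread.

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo                                 # must match the old cluster name
spec:
  cassandra:
    serverVersion: "4.0.3"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        storageClassName: cassandra-retain   # the Retain StorageClass
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi                     # must match the old PV size
    datacenters:
      - metadata:
          name: dc1                          # must match the old DC name
        size: 3

Because the regenerated PVC names, size, and storage class match the old ones, the new claims can bind to the released PVs once their claimRef has been cleared.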

Thanks for all the input