This is regarding how to overcome the issue reported in K8SSAND-1486 (awaiting solution). When we restart the Datacenter / Cluster (to fix the reported issue), we lose the PVC and PV.
We need a way to reuse/re-assign the old PV to the new PVC, or to retain the PVC (by adding a volume claim template to the StatefulSet) so that it gets re-assigned when we restart the cluster.
A little background: we are using Helm charts to spin up the clusters, and the following are the versions (v2) we are using.
I had a similar issue today, so I hope someone on the team responds. I was reconfiguring our GitOps tool (Flux) and it forced a deletion and re-install of the cluster. Since the PVCs got deleted, I had to start over. Fortunately it was a dev environment. Similar operator products allow this to be configured - see Run Elasticsearch on ECK | Elastic Cloud on Kubernetes [2.11] | Elastic
They have an option to retain the PVCs:
volumeClaimDeletePolicy: DeleteOnScaledownOnly
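For reference, here is a minimal sketch of where that policy sits in an ECK Elasticsearch manifest; the cluster name, version, and node set below are placeholder values:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart              # placeholder name
spec:
  version: 8.12.0               # placeholder version
  # Keep PVCs when the whole cluster object is deleted;
  # only remove them when individual nodes are scaled down.
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
  nodeSets:
  - name: default
    count: 3
```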
This would be nice for recovery. Is there any way to do something similar for a K8ssandra cluster's PVCs?
The process is a bit different in our solution right now. If you set Stopped = True, the cluster is scaled down to 0 nodes, but all the PVCs are kept. Deleting the K8ssandraCluster / CassandraDatacenter object, however, will delete the PVCs.
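If you manage the CassandraDatacenter directly, stopping it (rather than deleting it) looks roughly like the sketch below. This is abbreviated - the names and versions are placeholders and required fields such as storageConfig are omitted:

```yaml
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1                    # placeholder datacenter name
spec:
  clusterName: demo-cluster    # placeholder cluster name
  serverType: cassandra
  serverVersion: "4.0.1"
  size: 3
  # Scales the pods down to 0 but leaves the PVCs in place,
  # so the data comes back when stopped is flipped to false again.
  stopped: true
```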
That said, you wouldn't lose data if you have your StorageClass reclaimPolicy set to "Retain": in that case deleting the PVCs does not delete the PVs (which actually hold the data), and you can then remount the existing PVs back to PVCs. The same setting is also available on the PVs themselves, which we likewise do not delete.
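A minimal sketch of such a StorageClass (the name is a placeholder and the provisioner is just an example; use whatever your cluster provides):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cassandra-retain              # placeholder name
provisioner: kubernetes.io/aws-ebs    # example provisioner; substitute your own
# Retain keeps the underlying PV (and the data on it) when the bound PVC is deleted.
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```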
Yes, we are following the same approach - we changed the StorageClass reclaimPolicy to Retain, and whenever we want to restart the cluster (disable and re-enable it) we have to patch the released PVs (clearing the old claim reference) so that the same PVs get re-attached.
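In case it helps others, a sketch of the two pieces involved is below; all names and sizes are placeholders. The first part is the patch body that clears the stale claim reference on a Released PV (applied with something like `kubectl patch pv <pv-name> --type merge --patch-file <file>`), which turns it Available again. If the PVCs are then re-created by the StatefulSet, a matching Available PV can be bound automatically; if you need to pin a specific PV, a PVC with an explicit volumeName like the second part does that:

```yaml
# 1. Patch body for the retained PV: clearing claimRef makes a Released PV
#    Available again so it can be re-bound.
spec:
  claimRef: null
---
# 2. Optional: a PVC pinned to the retained PV by name, so the restarted pod
#    gets its old data back (names/sizes are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: server-data-demo-cluster-dc1-default-sts-0   # placeholder PVC name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cassandra-retain                 # placeholder storage class
  volumeName: pvc-1234-retained                      # placeholder PV name
  resources:
    requests:
      storage: 100Gi                                 # must match the PV size
```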