Cassandra fails to bootstrap

I’ve been fighting a strange issue where Cassandra fails to bootstrap when a K8ssandraCluster.yaml is deployed to a k8ssandra-operator control-plane deployment in a development GKE cluster. The pods get stuck in 1/2 Ready status: server-system-logger is up and running, but the liveness probe repeatedly kills the cassandra container, eventually landing the pods in CrashLoopBackOff. I thought it might be a namespace issue, but the behavior is the same even when deploying into the k8ssandra-operator namespace (where k8ssandra-operator and cass-operator reside). The logs (see below) indicate that some memory-related sysctls and memory locking need to take place, and that appears to be possible by running Cassandra privileged; but since I’m deploying a K8ssandraCluster and the k8ssandra-operator generates the CassandraDatacenter configs, I don’t see a way to pass those settings through to Cassandra (and I’m not 100% certain they would resolve the bootstrap issue anyway). The seed service also seems to be broken, though I’m not sure whether that’s because Cassandra isn’t running on any of the pods or due to some other issue. Pods are configured with 4 cores and 16 GiB RAM each.
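For reference, since vm.max_map_count is a node-level (non-namespaced) sysctl and can’t be set via a pod securityContext, the one workaround I’m considering is a privileged DaemonSet that raises it on every node. This is only a sketch (name, namespace, and image are placeholders, and I haven’t verified it on GKE):

```yaml
# Sketch: raise vm.max_map_count on all nodes via a privileged DaemonSet.
# Name, namespace, and image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-tuner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: sysctl-tuner
  template:
    metadata:
      labels:
        app: sysctl-tuner
    spec:
      containers:
        - name: sysctl
          image: busybox:1.36
          securityContext:
            privileged: true   # required to write host-level sysctls
          command:
            - sh
            - -c
            - sysctl -w vm.max_map_count=1048575 && sleep infinity
```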

Any insight into troubleshooting is greatly appreciated!

$ kubectl logs sre-test-cassandra-us-central1-us-central1-b-sts-0 -n k8ssandra-operator -c per-node-config
merged /per-node-config/sre-test-cassandra-us-central1-us-central1-b-sts-0_cassandra.yaml into /config/cassandra.yaml
done merging per-node config for pod sre-test-cassandra-us-central1-us-central1-b-sts-0

$ kubectl logs sre-test-cassandra-us-central1-us-central1-b-sts-0 -n k8ssandra-operator -c server-config-init
$  # no output here?  Surely server-config-init should do *something*?

$ kubectl logs sre-test-cassandra-us-central1-us-central1-b-sts-0 -n k8ssandra-operator -c cassandra | grep WARN
WARN  [main] 2023-02-24 16:06:31,168 - Small commitlog volume detected at /opt/cassandra/data/commitlog; setting commitlog_total_space_in_mb to 2494.  You can override this in cassandra.yaml
WARN  [main] 2023-02-24 16:06:31,169 - Small cdc volume detected at /opt/cassandra/data/cdc_raw; setting cdc_total_space_in_mb to 1247.  You can override this in cassandra.yaml
WARN  [main] 2023-02-24 16:06:31,320 - Only 9.745GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
WARN  [main] 2023-02-24 16:06:31,531 - Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
WARN  [main] 2023-02-24 16:06:31,547 - Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low, recommended value: 1048575, you can change it with sysctl.
WARN  [main] 2023-02-24 16:06:31,561 - Directory /opt/cassandra/data/data doesn't exist
WARN  [main] 2023-02-24 16:06:31,567 - Directory /opt/cassandra/data/commitlog doesn't exist
WARN  [main] 2023-02-24 16:06:31,569 - Directory /opt/cassandra/data/saved_caches doesn't exist
WARN  [main] 2023-02-24 16:06:31,570 - Directory /opt/cassandra/data/hints doesn't exist
WARN  [main] 2023-02-24 16:06:34,290 - No host ID found, created 211322b8-4f9a-4c42-86b3-0a14650c7a1e (Note: This should happen exactly once per node).
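For completeness, this is how I looked at the seed service. The service name is my guess based on the cluster name, since cass-operator normally creates a <clusterName>-seed-service; adjust names and namespace to your deployment:

```shell
# Placeholder names: adjust to your cluster/namespace.
kubectl get svc -n k8ssandra-operator | grep seed
kubectl get endpoints sre-test-cassandra-seed-service -n k8ssandra-operator -o wide
```

The endpoints list will be empty until at least one Cassandra pod passes its readiness probe, so an empty seed service may be a symptom rather than the cause.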

Disregard, apologies for the noise. I accidentally broke the cassandra-mgmt-api images when importing them into our private registry.
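For posterity, a registry-to-registry copy that keeps the image manifest intact avoids this kind of breakage. A sketch using skopeo (the source tag and private registry host are placeholders; the image name follows how I referred to it above, not necessarily the upstream repository name):

```shell
# Sketch only: copy an image between registries without re-encoding it.
# --all preserves the full multi-arch manifest list; hosts/tags are placeholders.
skopeo copy --all \
  docker://docker.io/k8ssandra/cassandra-mgmt-api:latest \
  docker://registry.example.com/k8ssandra/cassandra-mgmt-api:latest
```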

thanks for the update