Failed k8ssandra installation on AWS EKS, pods keep restarting

Reposted from https://community.datastax.com/questions/11522/:

I tried to deploy k8ssandra on AWS following the instructions in: Amazon Elastic Kubernetes Service | K8ssandra, Apache Cassandra on Kubernetes

I skipped the step of creating resources with Terraform because we already have these resources; they just weren't created by Terraform. Then I ran:

helm install prod-k8ssandra k8ssandra/k8ssandra -f eks.values.yaml
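
For context, the overrides in eks.values.yaml follow the shape shown in that guide, roughly like the sketch below. The version, storage class, and sizes here are illustrative placeholders, not our exact values:

cassandra:
  version: "4.0.0"
  cassandraLibDirVolume:
    storageClass: gp2        # placeholder; our pre-existing EBS-backed storage class has a different name
    size: 100Gi
  datacenters:
    - name: dc1
      size: 3
      racks:
        - name: us-east-1a
          affinityLabels:
            topology.kubernetes.io/zone: us-east-1a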

But the pods prod-k8ssandra-dc1-us-east-1a-sts-x failed to start and kept restarting.

I’m not sure whether I should add some special configuration to values.yaml, since the resources Cassandra uses were not created by Terraform and may have specific names. Also, how can I print logs when I run the above command?

I’m new to Kubernetes and don’t know what the problem is. Can anyone help me out?

There isn’t enough information here for us to troubleshoot the problem you’re experiencing, but the fact that the pod keeps restarting typically indicates a configuration or resource issue.

As a start, please run the following command, which will generate a text file:

$ kubectl describe pod prod-k8ssandra-dc1-us-east-1a-sts-x > describe_pod.txt

The contents will be too big to paste here, so please upload the file to a sharing site like https://gist.github.com/ and post the URL here so we can review it. Cheers!
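
In the meantime, a couple of standard kubectl commands will let you dig in yourself. A quick sketch, assuming the main container in the pod is named cassandra (the k8ssandra default; adjust if yours differs), with x standing in for the actual pod index:

$ kubectl get pods                                            # pod status and restart counts
$ kubectl logs prod-k8ssandra-dc1-us-east-1a-sts-x -c cassandra --previous
$ kubectl get events --sort-by=.metadata.creationTimestamp    # scheduling failures, unbound PVCs, etc.

The --previous flag prints the logs of the last terminated container instance, which is what you want when a pod is crash-looping. If the events show failed scheduling or an unbound PersistentVolumeClaim, that points back at the storage class or node resources referenced in your values file.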