Hello!
I am building a multi-cluster K8ssandra deployment with the spec below:
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
  namespace: k8ssandra-operator
spec:
  auth: true
  cassandra:
    serverVersion: "4.0.8"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
        storageClassName: gp3
    config:
      jvmOptions:
        heapSize: 512Mi
    mgmtAPIHeap: 64Mi
    datacenters:
      - metadata:
          name: dc1
        k8sContext: on107-infrastructure-3
        size: 2
        stopped: false
      - metadata:
          name: dc2
        k8sContext: on107-infrastructure-7
        size: 2
        stopped: false
  stargate:
    size: 2
    heapSize: 256Mi
On the data plane cluster hosting dc1, the Stargate pods are not coming up; the scheduler reports:
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available
The pod anti-affinity rule set on the Stargate pods is shown below. I am not sure where this anti-affinity rule comes from; I expected the Stargate pods to come up on the hosts that are already running the dc1 pods.
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: cassandra.datastax.com/cluster
            operator: In
            values:
              - demo
          - key: cassandra.datastax.com/datacenter
            operator: In
            values:
              - dc1
          - key: cassandra.datastax.com/rack
            operator: In
            values:
              - default
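From what I can tell, this looks like the default behavior where Stargate avoids nodes that host Cassandra data pods. I am assuming the relevant knob is the allowStargateOnDataNodes flag on the Stargate spec (which I believe defaults to false); would setting it roughly as sketched below be the right way to let the Stargate pods schedule onto the dc1 nodes?

spec:
  stargate:
    size: 2
    heapSize: 256Mi
    # Assumption: allowStargateOnDataNodes relaxes the required anti-affinity
    # so Stargate pods may land on nodes already running Cassandra pods.
    allowStargateOnDataNodes: true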