Hi Team,
Is it possible to have the control plane and data plane in two different namespaces of the same cluster?
Could you please help me confirm whether this is possible?
Thanks & Regards,
Ajith
Hi @Ajith_Palani, you can create a single control plane installation of K8ssandra-operator (which will be deployed in the k8ssandra-operator namespace, for example) and then a multi-DC cluster that will place each DC in a different namespace (k8ssandra-one and k8ssandra-two here):
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test-cluster
  namespace: k8ssandra-operator
spec:
  auth: true
  cassandra:
    datacenters:
      - metadata:
          name: dc1
          namespace: k8ssandra-one
        size: 3
      - metadata:
          name: dc2
          namespace: k8ssandra-two
        size: 3
You’ll just need to create the K8ssandraCluster object in the k8ssandra-operator namespace so that the operator can pick it up.
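For completeness, here’s a rough sketch of the full sequence with Helm — the release name, the namespace names, and the test-cluster.yaml filename are just examples, so adjust them to your environment:

# Install the operator into its own namespace (cert-manager must already be present)
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update
helm install k8ssandra-operator k8ssandra/k8ssandra-operator \
  --namespace k8ssandra-operator --create-namespace

# Create the namespaces that will host the datacenters
kubectl create namespace k8ssandra-one
kubectl create namespace k8ssandra-two

# Create the K8ssandraCluster in the operator's namespace so it gets picked up
kubectl apply -n k8ssandra-operator -f test-cluster.yaml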
Thanks @alexander … Appreciate your reply!!
Hi @alexander,
Is it possible to install the k8ssandra operator without the cert-manager operator?
Does the k8ssandra operator have a strong dependency on the cert-manager operator?
Just curious whether there is an option to run a Cassandra cluster without TLS encryption.
cert-manager is indeed required, but it’s not used for TLS encryption — it’s used for generating the operator’s webhook certificates.
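For anyone else hitting this, a minimal sketch of installing cert-manager with Helm before the operator (the release and namespace names below are the common defaults, not something the operator mandates):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true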
Thanks @alexander …now it makes sense…
Hi @alexander
How should networking be set up for pod-to-pod communication across multiple Kubernetes clusters?
Do you have any suggestions for that which would fit production?
Thanks,
Ajith
Hi @Ajith_Palani / @alexander
Can you send me the steps you followed to deploy the control plane and data plane on the same Kubernetes cluster? I am trying to deploy on an existing on-prem cluster.
I was able to deploy both in different namespaces but have an issue with them communicating.
Thanks in advance.
@Ajith_Palani @alexander I am also trying to implement k8ssandra on AWS.
I have two EKS clusters: one contains both the control plane and a data plane namespace, and the other has only a data plane namespace.
When I create the K8ssandraCluster object it creates the StatefulSet but not the Stargate deployment pod, and kg k8cs shows this error:
-bash-4.2$ kg k8cs
NAME   ERROR
demo   CALL list keyspaces system_traces failed on all datacenter dc1 pods
Please suggest
Hi,
we’d need the K8ssandraCluster manifest you’re using.
Note that port 8080 should be open between your EKS clusters (and between the nodes in each cluster) so that the operator can communicate with the StatefulSet pods through the management API.
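If AWS security groups are what’s blocking that port, a hedged example of opening it between the two node groups (the sg-… IDs are placeholders for your actual worker-node security groups; you can use --cidr with the peer VPC’s CIDR instead of --source-group if you prefer):

# Allow cluster B's nodes to reach port 8080 on cluster A's nodes; repeat with the roles swapped
aws ec2 authorize-security-group-ingress \
  --group-id sg-CLUSTER-A-NODES \
  --protocol tcp --port 8080 \
  --source-group sg-CLUSTER-B-NODES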
You should check the logs of cass-operator and k8ssandra-operator to get the full error traces, and check the logs of the cassandra container of the sts pods for the corresponding call failures.
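For example (assuming the operators were installed with the default Helm release name in the k8ssandra-operator namespace — check kubectl get deploy for the actual names — and a Cassandra pod such as demo-dc1-default-sts-0):

# Operator logs
kubectl logs -n k8ssandra-operator deploy/k8ssandra-operator
kubectl logs -n k8ssandra-operator deploy/k8ssandra-operator-cass-operator
# Cassandra container logs on the failing pod
kubectl logs -n k8ssandra-operator demo-dc1-default-sts-0 -c cassandra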
Hi @alexander
cass-operator issue:
2024-06-18T06:35:46.991Z INFO controllers.CassandraDatacenter CassandraDatacenter resource not found. Ignoring since object must be deleted. {"cassandradatacenter": {"name":"dc2","namespace":"k8ssandra-operator"}, "requestNamespace": "k8ssandra-operator", "requestName": "dc2", "loopID": "0ba60547-722c-4761-8ff2-2ab423a6ff5a"}
2024-06-18T06:35:46.991Z INFO controllers.CassandraDatacenter Reconcile loop completed {"cassandradatacenter": {"name":"dc2","namespace":"k8ssandra-operator"}, "requestNamespace": "k8ssandra-operator", "requestName": "dc2", "loopID": "0ba60547-722c-4761-8ff2-2ab423a6ff5a", "duration": 0.000178253}
2024-06-18T06:35:46.991Z ERROR Reconciler error {"controller": "cassandradatacenter_controller", "controllerGroup": "cassandra.datastax.com", "controllerKind": "CassandraDatacenter", "CassandraDatacenter": {"name":"dc2","namespace":"k8ssandra-operator"}, "namespace": "k8ssandra-operator", "name": "dc2", "reconcileID": "a817ee5f-2669-4c5a-ad24-386fdcc0d42d", "error": "terminal error: CassandraDatacenter.cassandra.datastax.com \"dc2\" not found"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/internal/controller/controller.go:227
k8ssandra-operator issue:
2024-06-18T12:36:50.525Z ERROR Failed to CALL list keyspaces system_traces on pod demo-dc1-default-sts-0 {"controller": "k8ssandracluster", "controllerGroup": "k8ssandra.io", "controllerKind": "K8ssandraCluster", "K8ssandraCluster": {"name":"demo","namespace":"k8ssandra-operator"}, "namespace": "k8ssandra-operator", "name": "demo", "reconcileID": "4925ed5d-8e5f-43e6-8133-2173d4e4fa11", "K8ssandraCluster": "k8ssandra-operator/demo", "CassandraDatacenter": "k8ssandra-operator/dc1", "K8SContext": "eks_demofresh-uat-eks", "error": "Get \"http://100.127.0.166:8080/api/v0/ops/keyspace?keyspaceName=system_traces\": context deadline exceeded"}
github.com/k8ssandra/k8ssandra-operator/pkg/cassandra.(*defaultManagementApiFacade).ListKeyspaces
/workspace/pkg/cassandra/management.go:197
github.com/k8ssandra/k8ssandra-operator/pkg/cassandra.(*defaultManagementApiFacade).EnsureKeyspaceReplication
/workspace/pkg/cassandra/management.go:291
github.co
k8s.yaml:
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        storageClassName: gp2
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
    config:
      jvmOptions:
        heapSize: 512M
    networking:
      hostNetwork: true
    datacenters:
      - metadata:
          name: dc1
        k8sContext: eks_df-uat-eks # other cluster
        size: 3
      - metadata:
          name: dc2
        k8sContext: eks_ip-uat-eks # current cluster
        size: 3
  stargate:
    size: 1
    heapSize: 512M
@alexander
Can you please suggest how to resolve this?
Open port 8080 between the EKS clusters.
You also need to have non-conflicting CIDRs between your Kubernetes clusters, and proper VPC peering with appropriate routes.
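As a rough sketch of the VPC side (every ID and CIDR below is a placeholder; this assumes the two EKS clusters live in separate, same-region VPCs):

# Peer the two VPCs and accept the peering request
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-CLUSTER-A --peer-vpc-id vpc-CLUSTER-B
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-EXAMPLE

# Route each cluster's traffic for the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-CLUSTER-A \
  --destination-cidr-block 10.2.0.0/16 \
  --vpc-peering-connection-id pcx-EXAMPLE
aws ec2 create-route --route-table-id rtb-CLUSTER-B \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-EXAMPLE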