Deploying to a kubeadm cluster with a non-default cluster name and domain breaks Stargate

Hello,
I figured I would ask this question here too. I have a bug ticket open with more in-depth details.

This is a new deployment of K8ssandra via the operator. The Stargate pod cannot start because it cannot find the seed service; the name it tries to resolve is incorrect.
Stargate logs:

Using environment for config
Running java -server -XX:+CrashOnOutOfMemoryError -Xms268435456 -Xmx268435456 -Dstargate.libdir=./stargate-lib -Djava.awt.headless=true -jar ./stargate-lib/stargate-starter-1.0.67.jar --cluster-name demo --cluster-version 4.0 --cluster-seed demo-seed-service.k8ssandra-operator.svc.cluster.local --listen 192.168.43.202 --dc test-dc1 --rack rack1 --enable-auth --disable-bundles-watch
Unable to resolve seed node address demo-seed-service.k8ssandra-operator.svc.cluster.local

Service name should be:
demo-seed-service.k8ssandra-operator.svc.k8s-clst01.k8s-domain01.local
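
For what it's worth, a throwaway pod can confirm which name actually resolves from inside the cluster. A minimal sketch (the pod name and busybox image are placeholders I picked, nothing from the operator):

apiVersion: v1
kind: Pod
metadata:
  name: dnscheck
  namespace: k8ssandra-operator
spec:
  restartPolicy: Never
  containers:
    - name: dnscheck
      image: busybox:1.36
      # Look up the seed service under the cluster's actual DNS domain; swapping in
      # the cluster.local name Stargate uses reproduces the resolution failure above.
      command:
        - nslookup
        - demo-seed-service.k8ssandra-operator.svc.k8s-clst01.k8s-domain01.local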

I cannot find anywhere in the CRDs where I can change or set this, which is why I opened a bug. Hopefully I just missed something and someone has a quick answer.

Environment:

  • Kubeadm
    • Kubernetes v1.24.2
      • CNI Calico v3.24.0
      • CRI Containerd v1.5.9

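For reference, the non-default cluster name and DNS domain are set at kubeadm init time in the ClusterConfiguration; mine is roughly along these lines (values reconstructed from the service name above, not copied from the actual file):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Non-default cluster name and DNS domain: service FQDNs end in
# .svc.k8s-clst01.k8s-domain01.local instead of .svc.cluster.local
clusterName: k8s-clst01
networking:
  dnsDomain: k8s-clst01.k8s-domain01.local
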
K8ssandra:

  • Helm install
    • k8ssandra-operator v1.4.0 via chart v0.38.2
    • K8ssandraCluster manifest
---
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  auth: true
  cassandra:
    serverVersion: "4.0.1"
    softPodAntiAffinity: true
    datacenters:
      - metadata:
          name: test-dc1
        racks:
          - name: rack1
        size: 4
        resources:
          limits:
            cpu: "500m"
            memory: 4Gi
          requests:
            cpu: "500m"
            memory: 4Gi
        config:
          jvmOptions:
            heap_initial_size: 1G
            heap_max_size: 2G
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: rook-ceph-block-hdd7k
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
        stargate:
          size: 1
          resources:
            limits:
              cpu: "250m"
              memory: 512Mi
            requests:
              cpu: "250m"
              memory: 512Mi
          heapSize: 256Mi
          allowStargateOnDataNodes: true
          affinity:
            podAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
...

This is a legitimate bug.
Solutions will be tracked via the ticket in Git.