Background
Both cass-operator (starting in v1.8.0) and k8ssandra-operator support installation/configuration via kustomize. There has been ongoing discussion about kustomize in https://github.com/k8ssandra/k8ssandra-operator/pull/144 but I think the discussion is a bit outside of the scope of the PR and is better suited for the forum.
I am going to reference the example from the operator-sdk docs. If you create an operator project with:
operator-sdk init --domain example.com --repo github.com/example/memcached-operator
and then generate a controller with:
operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
a config directory is generated with several kustomization directories. It looks like this:
config
├── crd
│   ├── kustomization.yaml
│   ├── kustomizeconfig.yaml
│   └── patches
│       ├── cainjection_in_memcacheds.yaml
│       └── webhook_in_memcacheds.yaml
├── default
│   ├── kustomization.yaml
│   ├── manager_auth_proxy_patch.yaml
│   └── manager_config_patch.yaml
├── manager
│   ├── controller_manager_config.yaml
│   ├── kustomization.yaml
│   └── manager.yaml
├── manifests
│   └── kustomization.yaml
├── prometheus
│   ├── kustomization.yaml
│   └── monitor.yaml
├── rbac
│   ├── auth_proxy_client_clusterrole.yaml
│   ├── auth_proxy_role.yaml
│   ├── auth_proxy_role_binding.yaml
│   ├── auth_proxy_service.yaml
│   ├── kustomization.yaml
│   ├── leader_election_role.yaml
│   ├── leader_election_role_binding.yaml
│   ├── memcached_editor_role.yaml
│   ├── memcached_viewer_role.yaml
│   ├── role_binding.yaml
│   └── service_account.yaml
├── samples
│   ├── cache_v1alpha1_memcached.yaml
│   └── kustomization.yaml
└── scorecard
    ├── bases
    │   └── config.yaml
    ├── kustomization.yaml
    └── patches
        ├── basic.config.yaml
        └── olm.config.yaml
After running make manifests you will also wind up with config/rbac/role.yaml. This file is created or updated any time you modify a kubebuilder RBAC annotation such as this one and rerun make manifests:
// +kubebuilder:rbac:groups=core,namespace="k8ssandra",resources=pods;secrets,verbs=get;list;watch
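For reference, that annotation produces a rule roughly like the following in role.yaml. This is a sketch of what controller-gen emits, not a verbatim copy; the Role name and surrounding metadata depend on the project settings:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: k8ssandra
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - secrets
  verbs:
  - get
  - list
  - watch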
You will find this same directory structure in the cass-operator and k8ssandra-operator projects. For the purposes of this discussion we are primarily focused on the following directories:
- crd
- default
- manager
- rbac
We often refer to bases and overlays with kustomize. A base is a directory with a kustomization.yaml that includes resources, e.g., Deployment, to be configured and created.
An overlay is a directory with a kustomization.yaml that refers to other kustomization directories as its bases.
The crd, manager, and rbac directories are bases. I consider the default directory an overlay.
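To make the distinction concrete, here is a minimal sketch of a base and of an overlay that composes it (contents are simplified for illustration rather than copied from either project):

# config/manager/kustomization.yaml -- a base declaring the resources it manages
resources:
- manager.yaml

# config/default/kustomization.yaml -- an overlay composing other kustomization directories
resources:
- ../crd
- ../rbac
- ../manager

Depending on the kustomize version in use, the overlay may list the referenced directories under a bases field instead of resources.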
Questions
With that background covered, I want to raise some questions around the discussion in https://github.com/k8ssandra/k8ssandra-operator/pull/144.
Where should non-generated kustomization directories live?
There was some discussion in the PR about whether additional kustomization directories should live under config or some other directory. I am in favor of keeping just about everything under the config directory. There was a concern raised that this would complicate operator-sdk upgrades. I have not found that to be the case in my experience. The scaffolding of the config directory is a one-time operation, with the exception of files that are modified by controller-gen. We don’t regenerate the scaffolding when upgrading operator-sdk.
More importantly, the config directory is familiar to contributors, as it is the convention used by kubebuilder/operator-sdk.
What bases should exist?
crd, manager, rbac, and webhook (not listed above, but used in cass-operator and eventually to be used in k8ssandra-operator) are simply defaults generated by the kubebuilder scaffolding. There is no hard and fast rule that a project has to have these and only these. That said, I think these are sensible defaults and it makes sense to keep them.
As the projects evolve and as we refactor the kustomize-related code, we may introduce additional bases.
Where should bases be defined?
I am leaning towards creating a config/bases directory and moving all of the base directories under it. I think this would provide some clarity. This looks to be the approach taken in the istio-operator project. See here.
Where should overlays be defined?
Similar to bases, I propose that overlays live under config/overlays.
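Putting the two together, the layout might look something like this (directory names are purely illustrative):

config
├── bases
│   ├── crd
│   ├── manager
│   └── rbac
└── overlays
    ├── cluster-scope
    └── namespace-scope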
What overlays should exist?
Both operator projects should have overlays for creating namespace-scoped and cluster-scoped deployments.
cass-operator should have a webhook overlay that includes cert-manager. We don’t need this yet in k8ssandra-operator since we haven’t implemented any webhooks for it.
k8ssandra-operator should have additional overlays for control-plane and data-plane deployments.
Then we need some combinations (a sketch of one such overlay follows the list):
- namespace-scoped, control-plane
- namespace-scoped, data-plane
- cluster-scoped, control-plane
- cluster-scoped, data-plane
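As an example, the kustomization for one of these combinations might look roughly like the following. The paths, the patch file, and the idea of toggling a setting via a patch are hypothetical placeholders rather than the actual project configuration:

# config/overlays/cluster-scope/control-plane/kustomization.yaml (hypothetical)
resources:
- ../../../bases/crd
- ../../../bases/rbac
- ../../../bases/manager

patchesStrategicMerge:
# hypothetical patch applying whatever setting distinguishes a
# control-plane deployment, e.g., an env var on the manager Deployment
- control_plane_patch.yaml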
Should bases create namespaces?
In short, no. The scaffolding generates `config/manager/manager.yaml`, which includes:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
This creates problems, as I have described in K8SSAND-877 / k8ssandra/cass-operator#167 (Default kustomization should not create namespace).
I think that separate, dedicated bases should be used to create namespaces. This would of course be done in conjunction with overlays.
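Here is a rough sketch of what a dedicated namespace base could look like (names and paths are hypothetical):

# config/bases/namespace/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: k8ssandra-operator

# config/bases/namespace/kustomization.yaml
resources:
- namespace.yaml

Overlays that should create the namespace would include this base; overlays that deploy into a pre-existing namespace would simply leave it out.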
This article has some interesting things to say on the topic.