kubara: Bootstrapping Guide - ControlPlane Setup
Introduction
This guide provides a step-by-step process for bootstrapping your platform running on Kubernetes, including the necessary prerequisites, architecture setup, and deployment instructions. Try to follow the instructions first. If you have any questions or issues, please reach out directly via Teams. If you're interested in the setup details, explore the Wiki pages.
1. Getting Started
Whether you're running on STACKIT Cloud or STACKIT Edge, we recommend using the Terraform modules introduced in Step 2. If you already have a Kubernetes cluster without DNS, secrets management, etc., simply disable those services in the config.yaml file, which will be generated in the next steps.
1.1 Environment Configuration
Refer to the Prerequisites guide and ensure all non-optional tasks in that guide are completed.
Don't forget to create a new Git repository - all following steps should be executed from within that newly created repository.
The easiest way is to run kubara inside the repository (but do not add the binary to Git).
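As a sketch of that layout (repository name and binary location are illustrative), the repository can be prepared like this, keeping the binary itself untracked:

```shell
# Illustrative layout: the kubara binary lives in the repo root but stays out of Git.
mkdir -p my-platform && cd my-platform
git init -q
# copy the kubara binary into this directory (see the Prerequisites guide)
echo "kubara" >> .gitignore   # ignore the binary so it is never committed
git add .gitignore
```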
1.2 Generate preparation files
- Run this command to scaffold essential setup files:
  kubara init --prep
  This will generate:
  - A .gitignore file to help prevent accidental commits of sensitive or unnecessary files
  - An .env file that serves as a template for your environment configuration. Fill all placeholders (<...>) before running kubara init.
- Update the values inside .env
  ⚠️ Handling .env Files
  .env files contain sensitive credentials and must be treated as secrets. Never commit a plain .env file directly into Git. If you really need it in the repository, make sure it is stored in encrypted form only. Always add .env to .gitignore to avoid accidental commits. For team collaboration, proven approaches include encrypted .env files in the repository, centralized secret management, or helper tools like dotenv. Important: A plain .env file in Git exposes all secrets and must be avoided.
- Check your values
  ⚠️ Keep in mind that weak passwords such as 123456 for ARGOCD_WIZARD_ACCOUNT_PASSWORD are a bad idea, since your platform will be publicly available by default via your DNS zone.
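One possible way (among others) to generate a strong random value for ARGOCD_WIZARD_ACCOUNT_PASSWORD before pasting it into .env:

```shell
# Prints a 32-character random password; copy it into .env manually.
openssl rand -base64 24
```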
1.3 Generate Base Configuration
Initialize your configuration:
kubara init
This command creates a config.yaml file based on the values from your .env.
If you make changes to .env later, you can re-run the command with --overwrite to update the configuration.
When using --overwrite, only values from .env are replaced.
Additional settings in your existing config.yaml are preserved and merged.
This currently applies only to the first cluster entry.
1.4 Validate your config.yaml against schema (optional, recommended)
Generate a JSON schema file:
kubara schema
By default this creates config.schema.json in your current directory.
You can set a custom output file:
kubara schema --output custom-config.schema.json
For editor integration (e.g. VS Code with YAML language server), reference the schema in your config.yaml:
# yaml-language-server: $schema=./config.schema.json
1.5 Update and Prepare Templates
💡 What is "type:" in config.yaml? controlplane is your hub cluster, worker is your spoke cluster (see Hub and Spoke Cluster).
💡 Not using STACKIT Edge? Just remove the load balancer IPs from your config.yaml.
Example:
clusters:
  - name: project-name-from-env-file
    stage: project-stage-something-like-dev
    type: <controlplane or worker>
    dnsName: <cp.demo-42.stackit.run>
    ingressClassName: traefik
    privateLoadBalancerIP: 0.0.0.0
    publicLoadBalancerIP: 0.0.0.0
    ssoOrg: <oidc-org>
    ssoTeam: <org-team>
    terraform:
      projectId: <project-id>
      kubernetesType: <ske or edge>
      kubernetesVersion: 1.34
dns:
  name: <dns-name>
  email: <email>
...
kubara generates resources in two stages:
- Terraform modules and overlays to provision infrastructure and the Kubernetes cluster
- Helm templates to deploy Argo CD and platform services
If you are not using Terraform, you can skip directly to Step 3.
2. Terraform: Provisioning Kubernetes and Infrastructure (Optional, Recommended)
⚠️ In kubara version 0.2.0, this step does not merge user-customized Terraform values and will overwrite existing Terraform files.
Generate Terraform modules:
kubara generate --terraform
Commit and push the generated files to your Git repository.
📘 You will need access to the STACKIT API. Setup instructions are available in the Terraform provider documentation & STACKIT Docs. Make sure your created Service Account has Project Owner permissions.
2.1 Terraform Bootstrap
Before the first terraform init, prepare and load your environment variables:
cd customer-service-catalog/terraform/<cluster-name>
cp set-env-changeme.sh set-env.sh
source set-env.sh
# or for PowerShell
# cp set-env-changeme.ps1 set-env.ps1
# . .\set-env.ps1
Set at least STACKIT_SERVICE_ACCOUNT_KEY_PATH in set-env.sh / set-env.ps1 before sourcing.
Then navigate to:
cd bootstrap-tfstate-backend
Run:
terraform init
terraform plan
terraform apply
Use the output to configure Terraform backend credentials:
terraform output debug
You can set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in set-env.sh / set-env.ps1 and source the file again, or export them directly:
export AWS_ACCESS_KEY_ID="<from terraform output>"
export AWS_SECRET_ACCESS_KEY="<from terraform output>"
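The first alternative (persisting them in set-env.sh) can be sketched like this, with placeholders rather than real values:

```shell
# Append the backend credentials to set-env.sh (replace the placeholders first),
# so a fresh shell only needs to source the file again.
cat >> set-env.sh <<'EOF'
export AWS_ACCESS_KEY_ID="<from terraform output>"
export AWS_SECRET_ACCESS_KEY="<from terraform output>"
EOF
source set-env.sh
```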
2.2 Provisioning Infrastructure
Then proceed to:
cd ../infrastructure
Run:
terraform init
terraform plan
The variable values generated by kubara are stored in env.auto.tfvars, which is automatically applied in your Terraform deployment.
terraform apply
This creates the Kubernetes cluster and all required infrastructure.
Export your kubeconfig:
# adjust this command to your needs, e.g. change the kubeconfig file name so you do not overwrite existing files
terraform output -raw kubeconfig_raw > $HOME/.kube/kubara.yaml
Keep this kubara.yaml local and do not commit it to Git.
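To use this kubeconfig without replacing your default ~/.kube/config, you can, for example, point the KUBECONFIG environment variable at it for the current shell:

```shell
# kubectl, helm, and kubara will then talk to this cluster in the current shell only.
export KUBECONFIG="$HOME/.kube/kubara.yaml"
```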
Review Terraform outputs:
terraform output
Use Terraform outputs to update values in config.yaml where needed (for example, on Edge: privateLoadBalancerIP and publicLoadBalancerIP).
Do not export Secrets Manager credentials into .env; these provider-specific .env variables were removed.
Sensitive output example:
terraform output vault_user_ro_password_b64
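Outputs named with a _b64 suffix are base64-encoded. As a sketch with a placeholder value (not a real secret), they can be decoded locally:

```shell
# Placeholder value for illustration — substitute the actual terraform output.
encoded="c3VwZXItc2VjcmV0"
printf '%s' "$encoded" | base64 -d   # → super-secret
```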
If you use OAuth2, create a GitHub application as shown here.
If you want Terraform to create OAuth2-related Vault entries:
- Use set-env.sh / set-env.ps1 for TF_VAR_* in customer-service-catalog/terraform/<cluster-name>/
- In customer-service-catalog/terraform/<cluster-name>/infrastructure, copy secrets.tf-example to secrets-2.tf and adjust values if needed
Load the variables and apply:
cp secrets.tf-example secrets-2.tf
source ../set-env.sh
# or for PowerShell
# Copy-Item secrets.tf-example secrets-2.tf
. ..\set-env.ps1
terraform apply
⚠️ You need to set these environment variables again before re-applying Terraform if they are not persisted in your shell/session setup.
To clean up:
terraform state rm \
vault_kv_secret_v2.image_pull_secret \
vault_kv_secret_v2.oauth2_creds \
vault_kv_secret_v2.argo_oauth2_creds \
vault_kv_secret_v2.grafana_oauth2_creds \
random_password.oauth2_cookie_secret
2.3 STACKIT Edge-Specific Notes
The provisioning steps remain the same. The only difference lies in the Terraform output:
- You'll retrieve additional values like privateLoadBalancerIP and publicLoadBalancerIP
- These need to be added to config.yaml
You must manually create the Kubernetes cluster via the cloud portal. This will be automated in the future.
Now continue with Step 3.
3. Helm
This step extends the service catalog:
- Generates an umbrella Helm chart in managed-service-catalog/
- Creates cluster-specific overlays in customer-service-catalog/
kubara generate --helm
There are several Helm chart values.yaml files with dummy change-me values that need to be adjusted.
Example (customer-service-catalog/helm/<cluster>/<chart>/values.yaml):
# ... previous content of yaml file
admin: change-me
# ... rest of yaml
Source templates are embedded in the binary under go-binary/templates/embedded/..., but you should only edit generated files in your repository.
The chart directories where values usually need review are:
- argo-cd
- cert-manager
- external-dns
- external-secrets
- homer-dashboard
- traefik
- kube-prometheus-stack
- kyverno-policy-reporter
- kyverno
- loki
- longhorn
- metallb
- metrics-server
- oauth2-proxy
3.1 Additional value files and CI value files
Every generated app supports:
- values.yaml as the main customer overlay file
- optional additional-values.yaml for overrides/extra values
kubara's generated ApplicationSet already references both files and ignores missing files, so you can add additional-values.yaml only when needed.
Merge behavior reminder:
- maps/dictionaries are merged recursively
- lists/arrays replace previous values completely
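As a hypothetical illustration of these rules, with both overlay files setting the same keys:

```yaml
# values.yaml (hypothetical)
image:
  tag: v1.2.3
  pullPolicy: IfNotPresent
extraArgs:
  - --log-level=info

# additional-values.yaml (hypothetical)
image:
  tag: v1.2.4
extraArgs:
  - --log-level=debug

# effective result:
#   image.tag: v1.2.4                 (map keys merged, later file wins)
#   image.pullPolicy: IfNotPresent    (preserved from values.yaml)
#   extraArgs: [--log-level=debug]    (list replaced completely, not appended)
```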
CI-specific values can be stored in chart-local CI files (for example ci/ci-values.yaml) to keep pipeline-only settings out of runtime overlays.
⚠️ Don't forget to commit and push your changes to Git!
4. Deploying Argo CD
4.1 Bootstrap the Control Plane
⚠️ This command requires access to a Kubernetes cluster and, by default, uses ~/.kube/config. To target a specific cluster, provide your own config with --kubeconfig your-kubeconfig.
For external-secrets, create provider credential secret(s) first (for example via kubectl create secret ...), then:
A) recommended for first bootstrap: pass a ClusterSecretStore manifest to bootstrap with --with-es-css-file together with --with-es-crds,
or B) apply your ClusterSecretStore manually (only if external-secrets CRDs are already installed on the cluster).
If the namespace does not exist yet, create it once before creating the provider credential secret(s):
kubectl create namespace external-secrets
Example provider credential secret(s) for the ClusterSecretStore:
## Bitwarden
kubectl -n external-secrets create secret generic bitwarden-access-token \
--from-literal=token="<BITWARDEN_MACHINE_ACCOUNT_TOKEN>"
## STACKIT Secrets Manager
kubectl -n external-secrets create secret generic stackit-secrets-manager-cred \
--from-literal=username="<USERNAME>" \
--from-literal=password="<PASSWORD>"
Example clustersecretstore.yaml for --with-es-css-file (templating with {{ .cluster.name }} / {{ .cluster.stage }} is supported):
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  labels:
    argocd.argoproj.io/instance: {{ .cluster.name }}-external-secrets
  name: "{{ .cluster.name }}-{{ .cluster.stage }}"
spec:
  provider:
    vault:
      auth:
        userPass:
          path: userpass
          secretRef:
            key: password
            name: stackit-secrets-manager-cred
            namespace: external-secrets
          username: "<USERNAME>"
      path: "<VAULT_PATH>"
      server: "https://<your-secrets-manager-endpoint>"
      version: v2
Basic bootstrap (option B, if you apply your ClusterSecretStore manually):
kubara bootstrap <cluster-name-from-config-yaml> --with-es-crds --with-prometheus-crds
Recommended for the first bootstrap with external-secrets: let kubara apply a templated ClusterSecretStore manifest during bootstrap:
kubara bootstrap <cluster-name-from-config-yaml> \
--kubeconfig k8s.yaml \
--with-es-css-file clustersecretstore.yaml \
--with-es-crds --with-prometheus-crds
After a successful bootstrap run, your platform should be operational.
5. Access the Argo CD Dashboard
Username: wizard
Password: from .env (ARGOCD_WIZARD_ACCOUNT_PASSWORD)
- Start port-forwarding:
  kubectl port-forward svc/argocd-server -n argocd 8080:443
- Open your browser at: http://localhost:8080/argocd
- Log in with the credentials above.
Enjoy your new platform!
What's also possible?
This section will be extended in the future to describe not just technical changes, but also other supported possibilities when bootstrapping.
Bootstrapping Multiple ControlPlanes
You can bootstrap multiple ControlPlanes.
We recommend not reusing the same config.yaml file for multiple ControlPlanes.
Why?
During the bootstrap process, the .env file is used to provide credentials.
If you reuse the same .env file, you would have to constantly adjust it for each ControlPlane — which is error-prone.
Since version 0.2.0, this is much easier. You can simply provide a different env file:
./kubara init --prep --env-file .env2
Fill .env2 with the required values, then generate a new config file from it:
./kubara --config-file config2.yaml --env-file .env2 init
This will use the values from .env2 to generate config2.yaml.
Render Terraform modules and Helm charts for the new ControlPlane:
./kubara --config-file config2.yaml generate --terraform
./kubara --config-file config2.yaml generate --helm
Finally, bootstrap your additional ControlPlane:
./kubara --env-file .env2 bootstrap <cluster-name-from-config2-yaml> --with-es-crds --with-prometheus-crds
What's Next?
After bootstrapping your platform, you can: