Configuration Management
- 1: Introduction
- 2: Apply
- 3: Secrets and ConfigMaps
- 4: Container Images
- 5: Namespaces and Names
- 6: Labels and Annotations
- 7: Bespoke Application
- 8: Off The Shelf Application
- 9: Kustomize Components
1 - Introduction
TL;DR
- Apply manages Applications through files defining Kubernetes Resources (i.e. Resource Config)
- Kustomize is used to author Resource Config
Declarative Application Management
This section covers how to declaratively manage Workloads and Applications.
Workloads in a cluster may be configured through files called Resource Config. These files are typically checked into source control, allowing cluster state changes to be reviewed and audited before they are applied.
There are 2 components to Application Management.
Client Component
The client component consists of authoring Resource Config which defines the desired state
of an Application. This may be done as a collection of raw Resource Config files, or by
composing and overlaying Resource Config authored by separate teams
(using the -k flag with a kustomization.yaml).
Kustomize offers low-level tooling for simplifying the authoring of Resource Config. It provides:
- Generating Resource Config from other canonical sources - e.g. ConfigMaps, Secrets
- Reusing and Composing one or more collections of Resource Config
- Customizing Resource Config
- Setting cross-cutting fields - e.g. namespace, labels, annotations, name-prefixes, etc
Example: One user may define a Base for an application, while another user may customize a specific instance of the Base.
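A minimal sketch of this split (paths and field values are illustrative): one team authors the Base, another customizes a specific instance of it from an overlay.
# base/kustomization.yaml (authored by one team)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml

# overlay/kustomization.yaml (customizes a specific instance of the Base)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
namePrefix: staging-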
Server Component
The server component consists of a human applying the authored Resource Config to the cluster to create or update Resources. Once Applied, the Kubernetes cluster will set additional desired state on the Resource - e.g. defaulting unspecified fields, filling in IP addresses, autoscaling replica count, etc.
Note that the process of Application Management is a collaborative one between users and the Kubernetes system itself - where each may contribute to defining the desired state.
Example: An Autoscaler Controller in the cluster may set the scale field on a Deployment managed by a user.
2 - Apply
TL;DR
- Apply Creates and Updates Resources in a cluster by running kubectl apply on Resource Config.
- Apply manages complexity such as ordering of operations and merging user defined and cluster defined state.
Apply
Motivation
Apply is a command that will update a Kubernetes cluster to match state defined locally in files.
kubectl apply
- Fully declarative - don't need to specify create or update - just manage files
- Merges user owned state (e.g. the Service selector) with state owned by the cluster (e.g. the Service clusterIP)
Definitions
- Resources: Objects in a cluster - e.g. Deployments, Services, etc.
- Resource Config: Files declaring the desired state for Resources - e.g. deployment.yaml. Resources are created and updated using Apply with these files.
kubectl apply Creates and Updates Resources through local or remote files. This may be through either raw Resource Config or kustomization.yaml.
Usage
Though Apply can be run directly against Resource Config files or directories using -f, it is recommended
to run Apply against a kustomization.yaml using -k. The kustomization.yaml allows users to define
configuration that cuts across many Resources (e.g. namespace).
Command / Examples
Check out the reference for commands and examples. Users run Apply on directories containing kustomization.yaml files using -k, or on raw Resource Config files using -f.
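For example (paths illustrative):
# Apply a directory containing a kustomization.yaml
kubectl apply -k ./app/
# Apply a raw Resource Config file
kubectl apply -f ./app/deployment.yaml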
Multi-Resource Configs
A single Resource Config file may declare multiple Resources, separated by --- on its own line.
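For example, a single file (name illustrative) declaring both a ConfigMap and a Service:
# app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  FOO: bar
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - port: 80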
CRUD Operations
Creating Resources
Any Resources that do not exist and are declared in Resource Config when Apply is run will be Created.
Updating Resources
Any Resources that already exist and are declared in Resource Config when Apply is run may be Updated.
Added Fields
Any fields that have been added to the Resource Config will be set on the Resource.
Updated Fields
Any fields whose locally specified values in the Resource Config differ from the values on the live Resource will be updated by merging the Resource Config into the live Resource. See merging for more details.
Deleted Fields
Fields that were present in the Resource Config the last time Apply was run, but have since been removed from it, will be deleted from the Resource and will return to their default values.
Unmanaged Fields
Fields that were not specified in the Resource Config but are set on the Resource will be left unmodified.
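As a hypothetical illustration, consider a live Deployment whose replica count was set by an autoscaler. Applying the following Resource Config updates the image (a locally specified field) while leaving the replica count (an unmanaged field) untouched:
# deployment.yaml (local Resource Config; names illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        image: app:v2 # updated field, merged into the live Resource
# spec.replicas is not specified here, so the value on the live
# Resource (e.g. set by an autoscaler) is left unmodified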
Deleting Resources
Declarative deletion of Resources does not yet exist in a usable form, but is under development.
Continuously Applying The Hard Way
In some cases, it may be useful to automatically Apply changes whenever the Resource Config is changed.
This example uses the unix watch command to periodically invoke Apply against a target.
watch -n 60 kubectl apply -k https://github.com/myorg/myrepo
Resource Creation Ordering
Certain Resource Types may be dependent on other Resource Types being created first - e.g. Namespaced Resources depend on their Namespaces, RoleBindings on their Roles, CustomResources on their CRDs, etc.
When used with a kustomization.yaml, Apply sorts the Resources by Resource type to ensure Resources
with these dependencies are created in the correct order.
3 - Secrets and ConfigMaps
TL;DR
- Generate Secrets from files and literals with secretGenerator
- Generate ConfigMaps from files and literals with configMapGenerator
- Roll out changes to Secrets and ConfigMaps
Motivation
The source of truth for Secret and ConfigMap Resources typically resides
somewhere else, such as a .properties file. Apply offers native support
for generating both Secrets and ConfigMaps from other sources such as files and
literals.
Additionally, Secrets and ConfigMaps require rollouts to be performed differently than for most other Resources in order for the changes to be rolled out to Pods consuming them.
Generators
Secret and ConfigMap Resources can be generated by adding secretGenerator
or configMapGenerator entries to the kustomization.yaml file.
The generated Resources' names will have suffixes that change when their data changes. See Rollouts for more on this.
ConfigMapGenerator
Suppose we want to generate a ConfigMap from values stored in a .properties file. We can use the following kustomization.yaml file to do so.
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: my-application-properties
  files:
  - application.properties
# application.properties
FOO=Bar
We get a generated ConfigMap YAML as output:
apiVersion: v1
data:
  application.properties: |-
    FOO=Bar
kind: ConfigMap
metadata:
  name: my-application-properties-f7mm6mhf59
SecretGenerator
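A minimal sketch, analogous to the ConfigMap example above (the Secret name and file are illustrative):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: my-secret
  files:
  - password.txt
The generated Secret's name will likewise carry a content-hash suffix.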
Rollouts
ConfigMap values are consumed by Pods as: environment variables, command line arguments and files.
This is important because Updating a ConfigMap will:
- immediately update the files mounted by all Pods consuming them
- not update the environment variables or command line arguments until the Pod is restarted
Typically users want to perform a rolling update of the ConfigMap changes to Pods as soon as the ConfigMap changes are pushed.
Apply facilitates rolling updates for ConfigMaps by creating a new ConfigMap for each change to the data. Workloads (e.g. Deployments, StatefulSets, etc) are updated to point to the new ConfigMap instead of the old one. This allows the change to be gradually rolled out the same way other Pod Template changes are rolled out.
Each generated Resource's name has a suffix appended by hashing its contents. This approach ensures a new ConfigMap is generated each time the contents are modified.
Note: Because the Resource names will contain a suffix, when looking for them with kubectl get,
their names will not match exactly what is specified in the kustomization.yaml file.
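For example, a Workload that references the generated ConfigMap by its base name has the reference rewritten to the suffixed name at build time (fragment and hash illustrative):
# deployment.yaml (fragment, as authored)
      volumes:
      - name: config
        configMap:
          name: my-application-properties
# after building, the reference becomes the generated name, e.g.
#         name: my-application-properties-f7mm6mhf59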
4 - Container Images
TL;DR
- Override or set the Name and Tag for Container Images
Container Images
Motivation
It may be useful to define the tags or digests of container images which are used across many Workloads.
Container image tags and digests are used to refer to a specific version or instance of a container
image - e.g. for the nginx container image you might use the tag 1.15.9 or 1.14.2.
- Update the container image name or tag for multiple Workloads at once
- Increase visibility of the versions of container images being used within the project
- Set the image tag from external sources - such as environment variables
- Copy or Fork an existing Project and change the Image Tag for a container
- Change the registry used for an image
Consider the following deployment.yaml file,
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  template:
    spec:
      containers:
      - name: mypostgresdb
        image: postgres:8
      - name: nginxapp
        image: nginx:1.7.9
      - name: myapp
        image: my-demo-app:latest
      - name: alpine-app
        image: alpine:3.7
The image field under containers specifies the image to be pulled from the container registry.
Some of the things that can be done with images:
- Setting a Name
- Setting a Tag
- Setting a Digest
- Setting a Tag from the latest commit SHA
- Setting a Tag from an Environment Variable
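For instance, a minimal kustomization.yaml sketch that overrides the name and tag of the nginx image above (the new name and tag are illustrative):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
images:
- name: nginx # match containers whose image name is nginx
  newName: my-registry/nginx
  newTag: "1.15.9"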
5 - Namespaces and Names
TL;DR
- Set the Namespace for all Resources within a Project with namespace
- Prefix the Names of all Resources within a Project with namePrefix
- Suffix the Names of all Resources within a Project with nameSuffix
Setting Namespaces and Names
Motivation
It may be useful to enforce consistency across the namespace and names of all Resources within a Project.
- Ensure all Resources are in the correct Namespace
- Ensure all Resources share a common naming convention
- Copy or Fork an existing Project and change the Namespace / Names
Setting Namespace
The Namespace for all namespaced Resources declared in the Resource Config may be set with namespace.
This sets the namespace for both generated Resources (e.g. ConfigMaps and Secrets) and non-generated
Resources.
Example: Set the namespace specified in the kustomization.yaml on the namespaced Resources.
Input: The kustomization.yaml and deployment.yaml files
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Applied: The Resource that is Applied to the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  # The namespace has been added
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
Overriding Namespaces: Setting the namespace will override the namespace on Resources if it is already set.
Command / Examples
Check out the namePrefix / nameSuffix reference for commands and examples of setting name prefixes and suffixes on Kubernetes Resources.
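A minimal sketch (prefix and suffix values illustrative):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: alices-
nameSuffix: -v2
resources:
- deployment.yaml
# nginx-deployment would be emitted as alices-nginx-deployment-v2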
Propagation of the Name to Object References
Resources such as Deployments and StatefulSets may reference other Resources such as ConfigMaps and Secrets in the Pod Spec.
This sets a name prefix or suffix for both generated Resources (e.g. ConfigMaps and Secrets) and non-generated Resources.
The namePrefix or nameSuffix that is applied is propagated to references to updated resources
- e.g. references to Secrets and ConfigMaps are updated with the namePrefix and nameSuffix.
References
Apply will propagate the namePrefix to any place Resources within the project are referenced by other Resources
including:
- Service references from StatefulSets
- ConfigMap references from PodSpecs
- Secret references from PodSpecs
6 - Labels and Annotations
TL;DR
- Set Labels for all Resources declared within a Project with commonLabels
- Set Annotations for all Resources declared within a Project with commonAnnotations
Setting Labels and Annotations
Motivation
Users may want to define a common set of labels or annotations for all the Resources in a project.
- Identify the Resources within a project by querying their labels.
- Set metadata for all Resources within a project (e.g. environment=test).
- Copy or Fork an existing Project and add or change labels and annotations.
Setting Labels
Example: Add the labels declared in commonLabels to all Resources in the project.
Important: Once set, commonLabels should not be changed so as not to change the Selectors for Services or Workloads.
Input: The kustomization.yaml and deployment.yaml files
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: foo
  environment: test
resources:
- deployment.yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    bar: baz
spec:
  selector:
    matchLabels:
      app: nginx
      bar: baz
  template:
    metadata:
      labels:
        app: nginx
        bar: baz
    spec:
      containers:
      - name: nginx
        image: nginx
Applied: The Resource that is Applied to the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: foo # Label was changed
    environment: test # Label was added
    bar: baz # Label was ignored
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: foo # Selector was changed
      environment: test # Selector was added
      bar: baz # Selector was ignored
  template:
    metadata:
      labels:
        app: foo # Label was changed
        environment: test # Label was added
        bar: baz # Label was ignored
    spec:
      containers:
      - image: nginx
        name: nginx
Propagating Labels to Selectors
In addition to updating the labels for each Resource, any selectors will also be updated to target the labels. e.g. the selectors for Services in the project will be updated to include the commonLabels in addition to the other labels.
Note: Once set, commonLabels should not be changed so as not to change the Selectors for Services or Workloads.
Common Labels
The k8s.io documentation defines a set of Common Labeling Conventions that may be applied to Applications.
Note: commonLabels should only be set for immutable labels, since they will be applied to Selectors.
Labeling Workload Resources makes it simpler to query Pods - e.g. for the purpose of getting their logs.
Setting Annotations
Setting Annotations is very similar to setting labels as seen above. Check out the reference for commands and examples.
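A minimal sketch (annotation key and value illustrative):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
  oncall: alice@example.com
resources:
- deployment.yaml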
Propagating Annotations
In addition to updating the annotations for each Resource, any fields that contain ObjectMeta (e.g. PodTemplate) will also have the annotations added.
7 - Bespoke Application
In this workflow, all configuration (resource YAML) files are owned by the user. No content is incorporated from version control repositories owned by others.

Following are the steps involved:
- Create a directory in version control

Suppose there is some overall cluster application called ldap; we want to keep its configuration in its own repo.

git init ~/ldap

- Create a base

mkdir -p ~/ldap/base

In this directory, create and commit a kustomization file and a set of resources (see the sketch after these steps).

- Create overlays

mkdir -p ~/ldap/overlays/staging
mkdir -p ~/ldap/overlays/production

Each of these directories needs a kustomization file and one or more patches.

The staging directory might get a patch that turns on an experiment flag in a configmap.

The production directory might get a patch that increases the replica count in a deployment specified in the base.

- Bring up variants

Run kustomize, and pipe the output to apply.

kustomize build ~/ldap/overlays/staging | kubectl apply -f -
kustomize build ~/ldap/overlays/production | kubectl apply -f -

You can also use kubectl v1.14.0 or later to apply your variants directly:

kubectl apply -k ~/ldap/overlays/staging
kubectl apply -k ~/ldap/overlays/production
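A minimal sketch of what the base and one overlay might contain (file names and patch contents illustrative):
# ~/ldap/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml

# ~/ldap/overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- configmap-patch.yaml # e.g. turns on the experiment flag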
8 - Off The Shelf Application
In this workflow, all files are owned by the user and maintained in a repository under their control, but they are based on an off-the-shelf configuration that is periodically consulted for updates.

Following are the steps involved:
- Clone it as your base

The base directory is maintained in a repo whose upstream is an OTS configuration, in this case some user's ldap repo:

mkdir ~/ldap
git clone https://github.com/$USER/ldap ~/ldap/base
cd ~/ldap/base
git remote add upstream git@github.com:$USER/ldap

- Create overlays

As in the bespoke case above, create and populate an overlays directory.

The overlays are siblings to each other and to the base they depend on.

mkdir -p ~/ldap/overlays/staging
mkdir -p ~/ldap/overlays/production

The user can maintain the overlays directory in a distinct repository.

- Bring up variants

kustomize build ~/ldap/overlays/staging | kubectl apply -f -
kustomize build ~/ldap/overlays/production | kubectl apply -f -

You can also use kubectl v1.14.0 or later to apply your variants directly:

kubectl apply -k ~/ldap/overlays/staging
kubectl apply -k ~/ldap/overlays/production

- (Optionally) Capture changes from upstream

The user can periodically rebase their base to capture changes made in the upstream repository.

cd ~/ldap/base
git fetch upstream
git rebase upstream/master
9 - Kustomize Components
As of v3.7.0 Kustomize supports a special type of kustomization that allows
one to define reusable pieces of configuration logic that can be included from
multiple overlays.
Components come in handy when dealing with applications that support multiple optional features and you wish to enable only a subset of them in different overlays, i.e., different features for different environments or audiences.
For more details regarding this feature you can read the Kustomize Components KEP.
Use case
Suppose you've written a very simple Web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: example:1.0
You want to deploy a community edition of this application as SaaS, so you add support for persistence (e.g. an external database), and bot detection (e.g. Google reCAPTCHA).
You've now attracted enterprise customers who want to deploy it on-premises, so you add LDAP support, and disable Google reCAPTCHA. At the same time, the devs need to be able to test parts of the application, so they want to deploy it with some features enabled and others not.
Here's a matrix with the deployments of this application and the features enabled for each one:
| | External DB | LDAP | reCAPTCHA |
|---|---|---|---|
| Community | ✔️ | | ✔️ |
| Enterprise | ✔️ | ✔️ | |
| Dev | ✅ | ✅ | ✅ |
(✔️: enabled, ✅: optional)
So, you want to make it easy to deploy your application in any of the above three environments. Here's how you can do this with Kustomize components: each opt-in feature gets packaged as a component, so that it can be referred to from multiple higher-level overlays.
First, define a place to work:
DEMO_HOME=$(mktemp -d)
Define a common base that has a Deployment and a simple ConfigMap that is mounted on the application's container.
BASE=$DEMO_HOME/base
mkdir $BASE
# $BASE/kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: conf
  literals:
    - main.conf=|
        color=cornflower_blue
        log_level=info
# $BASE/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: example:1.0
        volumeMounts:
        - name: conf
          mountPath: /etc/config
      volumes:
      - name: conf
        configMap:
          name: conf
Define an external_db component, using kind: Component, that creates a
Secret for the DB password and a new entry in the ConfigMap:
EXT_DB=$DEMO_HOME/components/external_db
mkdir -p $EXT_DB
# $EXT_DB/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1 # <-- Component notation
kind: Component
secretGenerator:
- name: dbpass
  files:
  - dbpass.txt
patchesStrategicMerge:
- configmap.yaml
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
# $EXT_DB/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: dbpass
    secret:
      secretName: dbpass
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/db/
    name: dbpass
# $EXT_DB/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  db.conf: |
    endpoint=127.0.0.1:1234
    name=app
    user=admin
    pass=/var/run/secrets/db/dbpass.txt
Define an ldap component that creates a Secret for the LDAP password and a new entry in the ConfigMap:
LDAP=$DEMO_HOME/components/ldap
mkdir -p $LDAP
# $LDAP/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
secretGenerator:
- name: ldappass
  files:
  - ldappass.txt
patchesStrategicMerge:
- configmap.yaml
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
# $LDAP/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: ldappass
    secret:
      secretName: ldappass
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/ldap/
    name: ldappass
# $LDAP/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  ldap.conf: |
    endpoint=ldap://ldap.example.com
    bindDN=cn=admin,dc=example,dc=com
    pass=/var/run/secrets/ldap/ldappass.txt
Define a recaptcha component that creates a Secret for the reCAPTCHA site/secret keys and a new entry in the ConfigMap:
RECAPTCHA=$DEMO_HOME/components/recaptcha
mkdir -p $RECAPTCHA
# $RECAPTCHA/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
secretGenerator:
- name: recaptcha
  files:
  - site_key.txt
  - secret_key.txt
# Updating the ConfigMap works with generators as well.
configMapGenerator:
- name: conf
  behavior: merge
  literals:
    - recaptcha.conf=|
        enabled=true
        site_key=/var/run/secrets/recaptcha/site_key.txt
        secret_key=/var/run/secrets/recaptcha/secret_key.txt
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
# $RECAPTCHA/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: recaptcha
    secret:
      secretName: recaptcha
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/recaptcha/
    name: recaptcha
Define a community variant that bundles the external DB and reCAPTCHA components:
COMMUNITY=$DEMO_HOME/overlays/community
mkdir -p $COMMUNITY
# $COMMUNITY/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/external_db
- ../../components/recaptcha
Define an enterprise overlay that bundles the external DB and LDAP components:
ENTERPRISE=$DEMO_HOME/overlays/enterprise
mkdir -p $ENTERPRISE
# $ENTERPRISE/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/external_db
- ../../components/ldap
Define a dev overlay that points to all the components and has LDAP disabled:
DEV=$DEMO_HOME/overlays/dev
mkdir -p $DEV
# $DEV/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../components/external_db
#- ../../components/ldap
- ../../components/recaptcha
Now, the workspace has the following directories:
├── base
│ ├── deployment.yaml
│ └── kustomization.yaml
├── components
│ ├── external_db
│ │ ├── configmap.yaml
│ │ ├── dbpass.txt
│ │ ├── deployment.yaml
│ │ └── kustomization.yaml
│ ├── ldap
│ │ ├── configmap.yaml
│ │ ├── deployment.yaml
│ │ ├── kustomization.yaml
│ │ └── ldappass.txt
│ └── recaptcha
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ ├── secret_key.txt
│ └── site_key.txt
└── overlays
├── community
│ └── kustomization.yaml
├── dev
│ └── kustomization.yaml
└── enterprise
└── kustomization.yaml
With this structure, you can generate the YAML manifests for each deployment
using kustomize build:
kustomize build overlays/community
kustomize build overlays/enterprise
kustomize build overlays/dev
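Each build's output can then be piped to Apply, as in the earlier workflows:
kustomize build overlays/community | kubectl apply -f -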