Secrets and ConfigMaps

So what happened until now? We have deployed our guestbook application and a load balancer for it. Furthermore, we have deployed a MongoDB and a ClusterIP Service for it. But the two applications don't talk to each other yet; we still have to configure them correctly. In this chapter we will therefore add a configuration file to our guestbook service and configure the MongoDB credentials (as well as the connection details for our guestbook application).

If something went wrong in your deployments during the last task, you can copy the solutions from 2 + 3 - Deployment and Resourcelimits and apply them.

1. Add Secret

Now we need to create a Secret that holds our MongoDB username and password so the application can write to the database.

You may choose your own username and password. Create the Secret called mongo-secret in the mongo namespace, either using imperative commands or by writing your own definition file, following the Kubernetes documentation.

name: mongo-secret
namespace: mongo
type: Opaque (the type created by kubectl create secret generic)
data:
  mongo-root-username: [your-encoded-username]
  mongo-root-password: [your-encoded-password]
HINT

1. Get help for the kubectl create secret command:

Solution

$ kubectl create secret --help


2. Use the imperative command:
Solution

$ kubectl create secret generic mongo-secret --namespace mongo --from-literal mongo-root-username=[your-username] --from-literal mongo-root-password=[your-password]


3. You can take the definition file 6 - Secrets and Configmaps/1-mongo-secret.yaml as a reference
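For orientation, a complete manifest matching the attributes above could look like the following sketch. The base64 values encode the illustrative credentials admin and password; they are not values from the lab, so substitute your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
  namespace: mongo
type: Opaque
data:
  # base64-encoded sample values: "admin" / "password" (choose your own)
  mongo-root-username: YWRtaW4=
  mongo-root-password: cGFzc3dvcmQ=
```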

If you wrote the definition file, you can create the secret now within the mongo namespace. When it’s done, inspect the secret object.

Decode the encoded data from username and password to double check their values.

HINT

1. Copy the content of the secret and decode it in the command line:

Solution

$ kubectl get secret mongo-secret -n mongo -o yaml
$ echo [your-encoded-username/password] | base64 --decode
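For example, a full encode/decode round trip with an illustrative value (the -n flag keeps a trailing newline out of the encoded data):

```shell
# encode a sample username ("admin" is an illustrative value, not from the lab)
echo -n 'admin' | base64
# YWRtaW4=

# decode it back to plain text
echo 'YWRtaW4=' | base64 --decode
# admin
```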


Next, we need to provide the Secret's data to the mongo-deployment.

Edit the YAML file and add the username and password from mongo-secret as environment variables. Name the environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD.

HINT

1. Add the environment variables under spec.template.spec.containers[0].env
2. One entry should look like this:

Solution

env:
    - name: MONGO_INITDB_ROOT_USERNAME
      valueFrom:
        secretKeyRef:
          name: mongo-secret
          key: mongo-root-username

Example Solution: You can take the definition file 6 - Secrets and Configmaps/2-mongo-deploy.yaml as a reference

Save the Deployment definition file and apply it if wanted.
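With both variables wired up, the complete env section of the mongo container could look like the following sketch (key names as created for mongo-secret above):

```yaml
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mongo-secret
        key: mongo-root-username
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-secret
        key: mongo-root-password
```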

Do the same in the Deployment manifest file 4 - Namespace Mongo/setup/app-deploy.yaml. This time, add the environment variables under the keys the example-app expects: TP_CONFIGURATION_MONGO_USER and TP_CONFIGURATION_MONGO_PASSWORD.

Solution

Take the manifest 6 - Secrets and Configmaps/3-example-app-secret-deploy.yaml as a reference.

Save the definition file and apply it if wanted.
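Analogously, the env entries in the example-app Deployment could look like this (assuming the Secret keys mongo-root-username and mongo-root-password from above):

```yaml
env:
  - name: TP_CONFIGURATION_MONGO_USER
    valueFrom:
      secretKeyRef:
        name: mongo-secret
        key: mongo-root-username
  - name: TP_CONFIGURATION_MONGO_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-secret
        key: mongo-root-password
```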



2. Create ConfigMap Volume

In this step we add some more configuration to our application Deployment. The approach is to feed it into the Deployment as a ConfigMap volume.

Therefore, inspect the configuration in the file 6 - Secrets and Configmaps/app.yaml. This file will be mounted into our container and read by our example-app.

Create a Config Map named settings-cm from the file.

HINT

1. Use the imperative command with the --from-file option:

Solution

$ kubectl create configmap settings-cm --from-file [path-to-file]
e.g.
$ k create cm settings-cm --from-file exercise/kubernetes/6\ -\ Secrets\ and\ Configmaps/app.yaml


Inspect the configmap and make sure all data is present. Compare the content with the environment variables in the Deployment from the last Lab.

To set our backend to mongo, feed the ConfigMap as a volume into the Pod template of the example-app:

volumes:
  name: config-volume
  configMap: settings-cm
volumeMounts:
  name: config-volume
  mountPath: /app/k8s-example-app/app.yaml
HINT

1. Volumes section should look like this:

Solution

volumes:
  - name: config-volume
    configMap:
      name: settings-cm


2. Volume Mounts section should look like this:
Solution

volumeMounts:
  - name: config-volume
    mountPath: /app/k8s-example-app/app.yaml
    subPath: app.yaml


3. Take the Deployment definition file from 6 - Secrets and Configmaps/5-example-app-cm-deploy.yaml as a reference
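Put together, the relevant parts of the Pod template could look like the following sketch (container name and image are placeholders, not taken from the lab files):

```yaml
spec:
  containers:
    - name: example-app          # placeholder container name
      image: example-app:latest  # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /app/k8s-example-app/app.yaml
          subPath: app.yaml      # mount only app.yaml, not a whole directory
  volumes:
    - name: config-volume
      configMap:
        name: settings-cm
```

Without subPath, the volume would shadow the entire mount directory with the ConfigMap's contents; the subPath key mounts just the single file.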

Now everything should be configured correctly for our deployments. Go ahead and create the objects you prepared so far, if not already deployed:

  • Secret
  • ConfigMap
  • Application Deployment
  • Application Load Balancer Service
  • Mongo Deployment


If you run into problems deploying all manifests in one command, make sure to deploy the Secret and ConfigMap before all the other objects!

Test the results by inspecting all objects in the cluster and accessing the application in your browser.

Discuss what’s still missing and if the setup matches our needs. Take your experiences from the previous Labs into account. Some aspects to think about are:

  • Scaling of the MongoDB
  • Cluster networking - how to access all our Deployments
  • Volumes and Storage Classes