Installing Kubernetes Deployments with Ansible

I use Ansible for most Kubernetes installations, and there are a few common patterns I rely on that other people might find useful. These techniques mostly make sure a deployment or other piece of setup is running before moving on to the next one, which is particularly helpful when one service depends on another and will fail if it is brought up first.

Readiness probes are critical for this to work

Without accurate readiness probes, most of the Ansible tasks below will not actually wait for deployments to come up properly. I will demonstrate the probes I use, but they will depend on the services being set up.

Setting up and waiting for a MongoDB deployment

I am not going to go through the whole configuration, but I will show the important parts. After transferring and then installing the deployment and service, I use a shell command to check the deployment's available-replica count, and loop it with until/retries/delay until at least one replica is available.
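
The transfer step itself is not shown here; a minimal sketch of it with the template module, assuming the manifests live in a local templates/ directory (that path is my assumption):

  - name: Transfer mongodb manifest templates to the host
    template:
      src: "templates/{{ item }}"   # hypothetical local path
      dest: "{{ destdir }}/{{ item }}"
    with_items:
      - deployment_mongodb.yml
      - service_mongodb.yml

Using template rather than copy lets the manifests carry variables such as image tags.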

  - name: Install mongodb deployment
    k8s:
      state: present
      src: "{{ destdir }}/deployment_mongodb.yml"

  - name: Install mongodb service
    k8s:
      state: present
      src: "{{ destdir }}/service_mongodb.yml"

  - name: wait for mongodb deployment to be ready for connections
    shell: kubectl get deployments -n psapp mongodb-ex | awk 'NR>1 { printf "%s\n",$4}'
    register: mongo_deploy_status
    until: mongo_deploy_status.stdout|int > 0
    retries: 7
    delay: 10

If this times out, there is probably a problem with the deployment, and I need to check the configuration and the cluster.
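
An alternative that avoids shelling out to kubectl is to query the Deployment status directly with the k8s_info module (called k8s_facts before Ansible 2.9); a sketch using the same namespace and names:

  - name: wait for mongodb deployment to be ready for connections
    k8s_info:
      kind: Deployment
      namespace: psapp
      name: mongodb-ex
    register: mongo_deploy
    # availableReplicas is absent from the status until at least one pod is ready
    until: mongo_deploy.resources and (mongo_deploy.resources[0].status.availableReplicas | default(0)) > 0
    retries: 7
    delay: 10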

Here is the readiness probe I use for MongoDB

    readinessProbe:
      exec:
        command:
        - mongo
        - --port
        - "27017"
        - --eval
        - "printjson(db.serverStatus())"
      initialDelaySeconds: 10
      periodSeconds: 20

Same thing for a Redis deployment

You will notice that the pattern has not changed at all; the only differences are the readiness probe for Redis, the shorter wait (Redis tends to come up faster), and the ignore_errors setting, which lets the play continue even if the check times out.

  - name: Install redis deployment
    k8s:
      state: present
      src: "{{ destdir }}/deployment_redis.yml"

  - name: Install redis service
    k8s:
      state: present
      src: "{{ destdir }}/service_redis.yml"

  - name: wait for redis deployment to be ready for connections
    shell: kubectl get deployments -n psapp redis-ex | awk 'NR>1 { printf "%s\n",$4}'
    register: redis_deploy_status
    until: redis_deploy_status.stdout|int > 0
    retries: 4
    delay: 10
    ignore_errors: yes
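
Because the failure is ignored here, the play continues even if Redis never comes up; a small follow-up task can at least surface that:

  - name: report if redis never became available
    debug:
      msg: "redis deployment did not become available in time; check the cluster"
    when: redis_deploy_status is failed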

And the readiness probe I use for Redis

    readinessProbe:
      exec:
        command:
        - redis-cli
        - ping
      initialDelaySeconds: 10
      periodSeconds: 20

A few other readiness probes I have used

I am not sure these are the best way to do it, but they have at least worked for the Ansible installs. Here are the readiness probes for Elasticsearch, Kibana, and Express.

Elasticsearch

    readinessProbe:
      httpGet:
        path: /
        port: 9200
      initialDelaySeconds: 7
      periodSeconds: 10
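
A stricter variant is to probe the cluster health endpoint; Elasticsearch returns a non-2xx response when the requested status is not reached before the timeout, which fails the probe. A sketch:

    readinessProbe:
      httpGet:
        # waits up to 5s for at least yellow status, otherwise returns 408
        path: /_cluster/health?wait_for_status=yellow&timeout=5s
        port: 9200
      initialDelaySeconds: 7
      periodSeconds: 10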

Kibana

    readinessProbe:
      httpGet:
        path: /
        port: 5601
      initialDelaySeconds: 7
      periodSeconds: 10

Express

    readinessProbe:
      httpGet:
        path: /health-check
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 20
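
The Express probe assumes the application exposes a /health-check route that returns 200 when it is ready. If an app has no such route, a tcpSocket probe is a weaker fallback that only checks the port is accepting connections:

    readinessProbe:
      tcpSocket:
        port: 3000   # only verifies the socket opens, not that the app is healthy
      initialDelaySeconds: 10
      periodSeconds: 20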

A more involved install: Filebeat

Here is an example of a complete Filebeat install. Most of it comes from the Filebeat documentation or from dissecting Helm installs I had previously used, so credit goes to various sources. I found it instructive for setting up a role, binding, and service account, as well as a DaemonSet with more involved configuration, so I thought I would share it.

Filebeat needs to use the API server

Because of this, it needs a ClusterRole, a ClusterRoleBinding, and a ServiceAccount. This is how the install looks when it is all put together (the items are applied in order, so the RBAC objects exist before the DaemonSet that depends on them):

  - name: Install the filebeat daemonset, serviceaccount, role, rolebinding
    k8s:
      state: present
      src: "{{ dl_dir }}/{{ item }}"
    with_items:
      - filebeat_serviceaccount.yml
      - filebeat_role.yml
      - filebeat_rolebinding.yml
      - filebeat_DaemonSet.yml

The ServiceAccount for Filebeat:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    psapp: filebeat

The ClusterRole for Filebeat:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    psapp: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

The ClusterRoleBinding for Filebeat:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
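
To sanity-check that the binding took effect, kubectl can impersonate the service account; a sketch of an optional verification task (kubectl auth can-i exits non-zero when the answer is no, so the task fails on a broken binding):

  - name: verify the filebeat serviceaccount can list pods
    shell: kubectl auth can-i list pods --as=system:serviceaccount:logging:filebeat
    register: filebeat_rbac_check
    changed_when: false   # read-only check, never reports a change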

And finally, the DaemonSet:

Most of this should work on a variety of clusters. If you are not using Docker, you will need to change the volumes and mounts to match your container runtime. The Filebeat configuration file itself would also likely be very different, which is why I am not showing its contents, just how the ConfigMap is wired in.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    psapp: filebeat
spec:
  selector:            # required by apps/v1
    matchLabels:
      psapp: filebeat
  template:
    metadata:
      labels:
        psapp: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: elastic/filebeat:7.2.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          readOnly: true
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: dockersock
          mountPath: "/var/run/docker.sock"
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
          items:
            - key: filebeat_k8s.yml
              path: filebeat.yml
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: varlog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
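
The same wait pattern from the deployments works for the DaemonSet as well; here $4 is the READY column of kubectl get daemonset (column position assumed from kubectl's default output):

  - name: wait for the filebeat daemonset pods to be ready
    shell: kubectl get daemonset -n logging filebeat | awk 'NR>1 { printf "%s\n",$4}'
    register: filebeat_ds_status
    until: filebeat_ds_status.stdout|int > 0
    retries: 7
    delay: 10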

Summary: Ansible works well for deploying services

Except for a few services I have set up with Helm, because I just don't know enough to install them by hand, I use Ansible with all of the configuration files to get a cluster set up. With the proper configuration in place, you can wait for deployments and pods to be active before moving on, which makes sure the install runs smoothly. I also have delete/uninstall procedures, but those can be a topic for another time.