Deploying a Simple Crafter CMS installation in Kubernetes

This tutorial shows you how to deploy a simple Crafter CMS installation in a Kubernetes cluster. The installation consists of one Authoring Pod, one Delivery Pod, and one Elasticsearch Pod, and it’s intended mainly for development and testing, not for production.

Prerequisites

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube: https://github.com/kubernetes/minikube.

The nodes in your cluster should have at least 4 CPUs and 8 GB of memory, to avoid performance issues and out-of-memory errors. In Minikube, to start a node with these characteristics, you can run a command similar to the following: minikube start --cpus 4 --memory 8192.

Create the SSH Keys Secret

The Delivery Pod will need SSH access to the Authoring Pod to pull the site content. For this, you need to generate an SSH public/private key pair for authentication and provide the key pair as a Kubernetes Secret to the Pods:

  1. Run ssh-keygen -b 4096 -t rsa -C "your_email@example.com" to generate the key pair. When asked for the filename of the key, enter id_rsa (so that the keys are saved in the current folder). Do not provide a passphrase.

    Note

    Crafter requires the key to be RSA and does not support keys generated with any other algorithm. The JSch library that JGit uses only supports RSA keys and cannot read other formats, such as the OpenSSH private key format. Make sure to specify the type as rsa when you generate the key:

    ssh-keygen -b 4096 -t rsa -C "your_email@example.com"
    

    On macOS 10.14 (Mojave) and later, or with OpenSSH 7.8 and later, ssh-keygen writes private keys in the OpenSSH format (RFC 4716) by default instead of OpenSSL’s PEM format.

    To generate keys in PEM format, add the -m PEM option to your ssh-keygen command. For example, you can run the command below to force ssh-keygen to export in PEM format:

    ssh-keygen -m PEM -t rsa -b 4096 -C "your_email@example.com"
    

    Also, check that the private key file starts with the header -----BEGIN RSA PRIVATE KEY----- to verify that it is a PEM-format RSA key. Crafter also currently doesn’t support SSH keys with a passphrase, so remember NOT to use a passphrase when creating your keys.

  2. Create a copy of the public key and rename it to authorized_keys: cp id_rsa.pub authorized_keys.

  3. In the same folder, create a config file with the following content, which disables StrictHostKeyChecking so that connections to the Authoring SSH server are established automatically:

    config

    Host authoring-ssh-service
        StrictHostKeyChecking no
    
  4. Create a Secret named ssh-keys from the files you just generated:

    kubectl create secret generic ssh-keys --from-file=authorized_keys --from-file=id_rsa --from-file=id_rsa.pub --from-file=config
    
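The four steps above can be sketched as a single shell session (a sketch that runs in a scratch directory; the e-mail address is the placeholder from step 1, and the final kubectl command needs a live cluster, so it is left commented out):

```shell
#!/bin/sh
set -e

# Work in a scratch directory so the generated files don't clutter anything
workdir=$(mktemp -d)
cd "$workdir"

# Step 1: passphrase-less RSA key pair; -m PEM forces the PEM private key
# format that JGit/JSch can read
ssh-keygen -q -m PEM -t rsa -b 4096 -C "your_email@example.com" -N "" -f id_rsa

# Step 2: the Authoring SSH server authorizes the same public key
cp id_rsa.pub authorized_keys

# Step 3: disable host key checking for the Authoring SSH service
cat > config <<'EOF'
Host authoring-ssh-service
    StrictHostKeyChecking no
EOF

# Sanity check: a PEM RSA private key starts with this header
head -n 1 id_rsa    # -----BEGIN RSA PRIVATE KEY-----

# Step 4 (requires a cluster, shown for reference only):
# kubectl create secret generic ssh-keys --from-file=authorized_keys \
#     --from-file=id_rsa --from-file=id_rsa.pub --from-file=config
```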

Create the Deployment files

Copy the following Kubernetes deployment configuration files somewhere on your machine:

elasticsearch-deployment.yaml
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-service
spec:
  type: ClusterIP
  selector:
    component: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
---
# Elasticsearch PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-pv-claim
spec:
  accessModes:
    - ReadWriteOnce 
  resources:
    requests:
      storage: 5Gi
---
# Elasticsearch Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: elasticsearch-pv-claim
        - name: logs
          emptyDir: {}
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
          ports:
            - containerPort: 9200
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
            - name: logs
              mountPath: /usr/share/elasticsearch/logs
          env:
            - name: discovery.type
              value: single-node
            - name: bootstrap.memory_lock
              value: 'true'
            - name: ES_JAVA_OPTS
              value: '-server -Xss1024K -Xmx2G'
            - name: TAKE_FILE_OWNERSHIP
              value: 'true'
authoring-deployment.yaml
# Authoring LB
apiVersion: v1
kind: Service
metadata:
  name: authoring-service
spec:
  type: LoadBalancer
  selector:
    component: authoring
  ports:
    - port: 8080
      targetPort: 8080
---
# Authoring SSH Service
apiVersion: v1
kind: Service
metadata:
  name: authoring-ssh-service
spec:
  type: ClusterIP
  selector:
    component: authoring
  ports:
  - port: 22
    targetPort: 22
---
# Authoring PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authoring-data-pv-claim
spec:
  accessModes:
    - ReadWriteOnce 
  resources:
    requests:
      storage: 5Gi
---
# Authoring Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authoring-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: authoring
  template:
    metadata:
      labels:
        component: authoring
    spec:
      volumes:
        - name: ssh-keys
          secret:
            secretName: ssh-keys
        - name: data
          persistentVolumeClaim:
            claimName: authoring-data-pv-claim
        - name: logs
          emptyDir: {}
        - name: temp
          emptyDir: {}
      containers:
        - name: tomcat
          image: craftercms/authoring_tomcat:3.1.3
          imagePullPolicy: 'Always'      
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh            
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp              
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: deployer
          image: craftercms/deployer:3.1.3
          imagePullPolicy: 'Always'           
          ports:
            - containerPort: 9191
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh            
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp              
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: git-ssh-server
          image: craftercms/git_ssh_server:3.1.3
          imagePullPolicy: 'Always'     
          ports:
            - containerPort: 22
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh            
            - name: data
              mountPath: /opt/crafter/data
delivery-deployment.yaml
# Delivery LB
apiVersion: v1
kind: Service
metadata:
  name: delivery-service
spec:
  type: LoadBalancer
  selector:
    component: delivery
  ports:
    - port: 9080
      targetPort: 8080
---
# Delivery PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: delivery-data-pv-claim
spec:
  accessModes:
    - ReadWriteOnce 
  resources:
    requests:
      storage: 5Gi
---
# Delivery Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: delivery-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: delivery
  template:
    metadata:
      labels:
        component: delivery
    spec:
      volumes:
        - name: ssh-keys
          secret:
            secretName: ssh-keys      
        - name: data
          persistentVolumeClaim:
            claimName: delivery-data-pv-claim
        - name: logs
          emptyDir: {}
        - name: temp
          emptyDir: {}
      containers:
        - name: tomcat
          image: craftercms/delivery_tomcat:3.1.3
          imagePullPolicy: 'Always'      
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp              
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: deployer
          image: craftercms/deployer:3.1.3
          imagePullPolicy: 'Always'           
          ports:
            - containerPort: 9191
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp              
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'

Apply the Deployment Files

To create the three deployments, assuming your current directory is where you copied the deployment files, just run:

kubectl apply -f .

Check the status of the deployments by running kubectl get deployments, and the status of the Pods by running kubectl get pods. You can also tail the logs of the tomcat and deployer containers of both the Authoring and Delivery Pods with the command:

kubectl logs -f -c CONTAINER_NAME POD_NAME

For example: kubectl logs -f -c tomcat authoring-deployment-5df746c4d8-lv9gd
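The status checks above can be bundled into a couple of small helper functions (a sketch; wait_for_rollouts and tail_log are made-up names, and running them requires a live cluster, so only the definitions are executed here):

```shell
#!/bin/sh

# Wait until each of the three Deployments has finished rolling out
wait_for_rollouts() {
  for d in elasticsearch-deployment authoring-deployment delivery-deployment; do
    kubectl rollout status "deployment/$d" --timeout=300s || return 1
  done
}

# Tail a container's log by component label, e.g.: tail_log authoring tomcat
# (the label selectors match the `component` labels in the manifests above)
tail_log() {
  pod=$(kubectl get pods -l "component=$1" \
        -o jsonpath='{.items[0].metadata.name}')
  kubectl logs -f -c "$2" "$pod"
}

# Example usage, against a running cluster:
#   wait_for_rollouts
#   tail_log authoring tomcat
#   tail_log delivery deployer
```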

Create a Site in Authoring

To access Studio, you need the URL endpoint of the authoring-service. If you’re using Minikube, you can get it with the command:

minikube service authoring-service --url

The response should look similar to this:

http://192.168.39.5:31242

From your browser, enter the URL with /studio appended, log in to Studio, and create a site from any of the available blueprints, or pull an existing site from a remote repository.

Bootstrap the Site in Delivery

Now you need to set up the site in Delivery. If you don’t know the name of the Delivery Pod yet, run kubectl get pods and look for the one with a name like delivery-deployment-XXXXX. Then run the following command (remember to replace the pod name and the site name with the actual values):

kubectl exec -it DELIVERY_POD_NAME --container deployer -- gosu crafter ./bin/init-site.sh SITE_NAME ssh://authoring-ssh-service/opt/crafter/data/repos/sites/SITE_NAME/published

This command creates the Deployer site target and the index in Elasticsearch. After a minute or two, the Deployer should have pulled the site content from Authoring (you can check by getting the Delivery Deployer log: kubectl logs -c deployer DELIVERY_POD_NAME).
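If you bootstrap more than one site, the long command above can be wrapped in a small shell function (a sketch; init_delivery_site is a hypothetical name, and calling it requires a live cluster, so only the definition is executed here):

```shell
#!/bin/sh

# Hypothetical wrapper around the init-site.sh call shown above;
# $1 = site name, $2 = Delivery pod name
init_delivery_site() {
  site=$1
  pod=$2
  kubectl exec -it "$pod" --container deployer -- gosu crafter \
    ./bin/init-site.sh "$site" \
    "ssh://authoring-ssh-service/opt/crafter/data/repos/sites/$site/published"
}

# Example usage (replace with your actual site and pod names):
#   init_delivery_site mysite delivery-deployment-5df746c4d8-lv9gd
```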

Now you can access the site in Delivery:

  1. Get the delivery endpoint URL (in Minikube: minikube service delivery-service --url).

  2. From your browser, enter the URL with ?crafterSite=SITE_NAME at the end. You should see your site. Also, when you make a change in Authoring and publish it, the change will be reflected in Delivery after about a minute.