
Deploying a Simple CrafterCMS installation in Kubernetes

This tutorial shows you how to deploy a simple CrafterCMS installation in a Kubernetes cluster. The installation consists of one Authoring Pod, one Delivery Pod, and one Elasticsearch Pod, and is intended mainly for development and testing, not for production.

Prerequisites

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube: https://github.com/kubernetes/minikube.

The nodes in your cluster should have at least 4 CPUs and 8 GB of memory, to avoid performance issues and out-of-memory errors. In Minikube, to start a node with these characteristics, you can run a command similar to the following: minikube start --cpus 4 --memory 8192.

Create the SSH Keys Secret

The Delivery Pod will need SSH access to the Authoring Pod to pull the site content. For this, you need to generate an SSH public/private key pair for authentication and provide the key pair as a Kubernetes Secret to the Pods:

  1. Run ssh-keygen -m PEM -b 4096 -t rsa -C "your_email@example.com" to generate the key pair. When asked for the filename of the key, enter id_rsa (so that the keys are saved in the current folder). Do not provide a passphrase.

    Note

    Crafter requires an RSA key and does not support keys generated with any other algorithm. The JSch library that JGit uses supports only RSA; keys in other formats, such as the newer OpenSSH format, will not work. Make sure to specify the type as rsa when you generate the key:

    ssh-keygen -m PEM -b 4096 -t rsa -C "your_email@example.com"
    

    To verify that the key uses RSA, check that the private key file starts with the header -----BEGIN RSA PRIVATE KEY-----. Crafter also does not currently support SSH keys with a passphrase, so remember to NOT use a passphrase when creating your keys.

  2. Create a copy of the public key and rename it to authorized_keys: cp id_rsa.pub authorized_keys.

  3. In the same folder, create a file named config with the following content, which disables StrictHostKeyChecking so the Delivery Pod can connect to the Authoring SSH server automatically:

    config
    Host authoring-ssh-service
        StrictHostKeyChecking no
    
  4. Create a Secret named ssh-keys from the files you just generated:

    kubectl create secret generic ssh-keys --from-file=authorized_keys --from-file=id_rsa --from-file=id_rsa.pub --from-file=config
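
    The four steps above can be scripted. The sketch below works in a scratch folder so it won't overwrite existing keys, and it prints the kubectl command instead of running it; the email address is a placeholder.

    ```shell
    #!/bin/sh
    set -e

    # Work in a scratch folder so existing keys aren't overwritten.
    workdir=$(mktemp -d)
    cd "$workdir"

    # 1. Generate a 4096-bit RSA key pair in PEM format, no passphrase.
    ssh-keygen -m PEM -b 4096 -t rsa -C "your_email@example.com" -f id_rsa -N "" -q

    # 2. The public key doubles as the authorized_keys file.
    cp id_rsa.pub authorized_keys

    # 3. Disable strict host key checking for the Authoring SSH service.
    cat > config <<'EOF'
    Host authoring-ssh-service
        StrictHostKeyChecking no
    EOF

    # 4. Print the Secret-creation command; run it against your cluster.
    echo "kubectl create secret generic ssh-keys" \
         "--from-file=authorized_keys --from-file=id_rsa" \
         "--from-file=id_rsa.pub --from-file=config"
    ```

    Since the key must be RSA in PEM format, you can sanity-check the result with head -1 id_rsa, which should print -----BEGIN RSA PRIVATE KEY-----.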
    

Create the Deployment files

Copy the following Kubernetes deployment configuration files somewhere on your machine:

elasticsearch-deployment.yaml
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-service
spec:
  type: ClusterIP
  selector:
    component: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
---
# Elasticsearch PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Elasticsearch Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: elasticsearch-pv-claim
        - name: logs
          emptyDir: {}
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
          ports:
            - containerPort: 9200
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
            - name: logs
              mountPath: /usr/share/elasticsearch/logs
          env:
            - name: discovery.type
              value: single-node
            - name: bootstrap.memory_lock
              value: 'true'
            - name: ES_JAVA_OPTS
              value: '-server -Xss1024K -Xmx2G'
            - name: TAKE_FILE_OWNERSHIP
              value: 'true'
authoring-deployment.yaml
# Authoring LB
apiVersion: v1
kind: Service
metadata:
  name: authoring-service
spec:
  type: LoadBalancer
  selector:
    component: authoring
  ports:
    - port: 8080
      targetPort: 8080
---
# Authoring SSH Service
apiVersion: v1
kind: Service
metadata:
  name: authoring-ssh-service
spec:
  type: ClusterIP
  selector:
    component: authoring
  ports:
  - port: 22
    targetPort: 22
---
# Authoring PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authoring-data-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Authoring Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: authoring-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: authoring
  template:
    metadata:
      labels:
        component: authoring
    spec:
      volumes:
        - name: ssh-keys
          secret:
            secretName: ssh-keys
        - name: data
          persistentVolumeClaim:
            claimName: authoring-data-pv-claim
        - name: logs
          emptyDir: {}
        - name: temp
          emptyDir: {}
      containers:
        - name: tomcat
          image: craftercms/authoring_tomcat:3.1.3
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: deployer
          image: craftercms/deployer:3.1.3
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 9191
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: git-ssh-server
          image: craftercms/git_ssh_server:3.1.3
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 22
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
delivery-deployment.yaml
# Delivery LB
apiVersion: v1
kind: Service
metadata:
  name: delivery-service
spec:
  type: LoadBalancer
  selector:
    component: delivery
  ports:
    - port: 9080
      targetPort: 8080
---
# Delivery PV Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: delivery-data-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# Delivery Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: delivery-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: delivery
  template:
    metadata:
      labels:
        component: delivery
    spec:
      volumes:
        - name: ssh-keys
          secret:
            secretName: ssh-keys
        - name: data
          persistentVolumeClaim:
            claimName: delivery-data-pv-claim
        - name: logs
          emptyDir: {}
        - name: temp
          emptyDir: {}
      containers:
        - name: tomcat
          image: craftercms/delivery_tomcat:3.1.3
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'
        - name: deployer
          image: craftercms/deployer:3.1.3
          imagePullPolicy: 'Always'
          ports:
            - containerPort: 9191
          volumeMounts:
            - name: ssh-keys
              mountPath: /opt/crafter/.ssh
            - name: data
              mountPath: /opt/crafter/data
            - name: temp
              mountPath: /opt/crafter/temp
            - name: logs
              mountPath: /opt/crafter/logs
          env:
            - name: ES_HOST
              value: elasticsearch-service
            - name: ES_PORT
              value: '9200'

Apply the Deployment Files

To create the three deployments, run the following from the directory where you copied the deployment files:

kubectl apply -f .

Check the status of the deployments by running kubectl get deployments, and the status of the Pods by running kubectl get pods. You can also tail the logs of the tomcat and deployer containers, for both Authoring and Delivery Pods, with the command:

kubectl logs -f -c CONTAINER_NAME POD_NAME

For example: kubectl logs -f -c tomcat authoring-deployment-5df746c4d8-lv9gd
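
Since the random suffix in the Pod name changes on every rollout, a small helper can resolve it from the component label used in the deployment files. The function name logs_for below is just an illustrative choice:

```shell
# Tail the logs of a container inside the Pod matching a component label.
# Usage: logs_for authoring tomcat
#        logs_for delivery deployer
logs_for() {
  component="$1"
  container="$2"
  # Resolve the first Pod carrying the label, e.g. authoring-deployment-xxxxx.
  pod=$(kubectl get pods -l "component=${component}" \
        -o jsonpath='{.items[0].metadata.name}')
  kubectl logs -f -c "${container}" "${pod}"
}
```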

Create a Site in Authoring

To access Studio, you need the URL endpoint of the authoring-service. If you’re using Minikube, you can get it with the command:

minikube service authoring-service --url

The response should look like this:

http://192.168.39.5:31242

Then, from your browser, enter the URL with /studio appended, log in to Studio, and create a site from any of the available blueprints or pull an existing site from a remote repository.

Bootstrap the Site in Delivery

Now you need to set up the site in Delivery. If you don’t know the name of the Delivery Pod yet, run kubectl get pods and look for the one with a name like delivery-deployment-XXXXX. Then, run the following command (remember to replace the pod name and the site name with the actual values):

kubectl exec -it DELIVERY_POD_NAME --container deployer -- gosu crafter ./bin/init-site.sh SITE_NAME ssh://authoring-ssh-service/opt/crafter/data/repos/sites/SITE_NAME/published

This command creates the Deployer site target and the corresponding index in Elasticsearch. After a minute or two, the Deployer should have pulled the site content from Authoring (you can verify this by getting the Delivery Deployer log: kubectl logs -c deployer DELIVERY_POD_NAME).
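
To avoid typos in the long repository URL, you can assemble the init-site command from variables. The site and pod values below are placeholders you must replace with your actual site and Pod names; the script prints the command for review instead of executing it:

```shell
site="mysite"                        # replace with your site name
pod="delivery-deployment-xxxxx"      # replace with your Delivery Pod name

# The published repository lives at this fixed path inside the Authoring Pod.
repo="ssh://authoring-ssh-service/opt/crafter/data/repos/sites/${site}/published"

# Print the init-site invocation; remove the echo to run it for real.
echo kubectl exec -it "${pod}" --container deployer -- \
  gosu crafter ./bin/init-site.sh "${site}" "${repo}"
```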

Now you can access the site in Delivery:

  1. Get the delivery endpoint URL (in Minikube: minikube service delivery-service --url).

  2. From your browser, enter the URL with ?crafterSite=SITE_NAME at the end. You should see your site. Also, when you make a change in Authoring and publish it, the change will be reflected in Delivery after about a minute.