This guide walks through the cluster creation process using the Azure portal. If you want to create a cluster through the Azure CLI instead, please refer to the Azure CLI article.
From the Azure portal, go to Kubernetes services, then select Create a Kubernetes cluster.
On the Basics page, choose the Subscription and Resource group, and enter the Kubernetes cluster name. Adjust the other configuration settings as desired, or leave them at their defaults.
In the Node pools tab, you can add node pools to the cluster. Read more on multiple node pools in AKS. For this guide, we will use a single-node configuration.
For the other tabs (Access, Networking, Integrations, Advanced and Tags), you can keep the default options or make adjustments as necessary. After that, click Review + create to deploy the Kubernetes cluster.
When the resource has finished deploying, you can connect to the cluster (read here) using the Azure CLI or Azure Cloud Shell.
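For example, assuming the cluster is named jogetakscluster (as in this guide) in a resource group named myResourceGroup (an illustrative name), you can fetch the cluster credentials and verify connectivity:

az aks get-credentials --resource-group myResourceGroup --name jogetakscluster
kubectl get nodes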
Once the cluster is running, you will need to deploy a database to be used by the Joget platform. You can follow the same method of deploying a MySQL database as in the Joget Kubernetes page.
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
kubectl describe deployment mysql
kubectl get pods -l app=mysql
kubectl describe pvc mysql-pv-claim
You need to modify the original yaml files for production usage (e.g. using a different version of the MySQL image, and setting up a Secret instead of a plain-text password in the yaml, as sketched below).
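As a minimal sketch of the Secret approach (the secret name mysql-pass and key password are illustrative), you could store the password in a Kubernetes Secret:

kubectl create secret generic mysql-pass --from-literal=password='YOUR_PASSWORD'

Then, in the MySQL deployment yaml, reference the Secret instead of the plain-text value:

env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-pass   # illustrative secret name
        key: password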
If you are running a multi-node Kubernetes cluster, you will need to allocate shared persistent storage with read-write access from multiple nodes. In Azure, you can set up an NFS volume to be used in the Azure Kubernetes cluster. Refer to the official documentation here for detailed info and steps. You can also read more on other storage options for Azure Kubernetes here.
From the link, you can use this script to set up the NFS server (edit the variables as necessary, especially AKS_SUBNET).
#!/bin/bash

# This script should be executed on a Linux Ubuntu Virtual Machine

EXPORT_DIRECTORY=${1:-/export/data}
DATA_DIRECTORY=${2:-/data}
AKS_SUBNET=${3:-*}

echo "Updating packages"
apt-get -y update

echo "Installing NFS kernel server"
apt-get -y install nfs-kernel-server

echo "Making data directory ${DATA_DIRECTORY}"
mkdir -p ${DATA_DIRECTORY}

echo "Making new directory to be exported and linked to data directory: ${EXPORT_DIRECTORY}"
mkdir -p ${EXPORT_DIRECTORY}

echo "Mount binding ${DATA_DIRECTORY} to ${EXPORT_DIRECTORY}"
mount --bind ${DATA_DIRECTORY} ${EXPORT_DIRECTORY}

echo "Giving 777 permissions to ${EXPORT_DIRECTORY} directory"
chmod 777 ${EXPORT_DIRECTORY}

parentdir="$(dirname "$EXPORT_DIRECTORY")"
echo "Giving 777 permissions to parent: ${parentdir} directory"
chmod 777 $parentdir

echo "Appending bound directories into fstab"
echo "${DATA_DIRECTORY}    ${EXPORT_DIRECTORY}    none    bind    0    0" >> /etc/fstab

echo "Appending localhost and Kubernetes subnet address ${AKS_SUBNET} to exports configuration file"
echo "/export    ${AKS_SUBNET}(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports
echo "/export    localhost(rw,async,insecure,fsid=0,crossmnt,no_subtree_check)" >> /etc/exports

nohup service nfs-kernel-server restart
After the NFS server has been set up, you can then create the PersistentVolume and PersistentVolumeClaim.
Example azurenfsstorage.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: aks-nfs
  labels:
    type: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: NFS_INTERNAL_IP
    path: NFS_EXPORT_FILE_PATH
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aks-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: nfs
Replace the NFS_INTERNAL_IP and NFS_EXPORT_FILE_PATH values with the actual settings from your NFS server, then apply the yaml:
kubectl apply -f azurenfsstorage.yaml
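To verify that the volume has been provisioned correctly, check that the PersistentVolumeClaim reports a Bound status:

kubectl get pv aks-nfs
kubectl get pvc aks-nfs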
With the prerequisite database and persistent storage available, you can now deploy Joget by applying the example joget-dx7-tomcat9-aks.yaml file below.
Example joget-dx7-tomcat9-aks.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: joget-dx7-tomcat9
  labels:
    app: joget-dx7-tomcat9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: joget-dx7-tomcat9
  template:
    metadata:
      labels:
        app: joget-dx7-tomcat9
    spec:
      initContainers:
        - name: init-volume
          image: busybox:1.28
          command: ['sh', '-c', 'chmod -f -R g+w /opt/joget/wflow; exit 0']
          volumeMounts:
            - name: joget-dx7-tomcat9-volume
              mountPath: "/opt/joget/wflow"
      volumes:
        - name: joget-dx7-tomcat9-volume
          persistentVolumeClaim:
            claimName: aks-nfs
      securityContext:
        runAsUser: 1000
        fsGroup: 0
      containers:
        - name: joget-dx7-tomcat9
          image: jogetworkflow/joget-dx7-tomcat9:latest
          ports:
            - containerPort: 8080
            - containerPort: 9080
          volumeMounts:
            - name: joget-dx7-tomcat9-volume
              mountPath: /opt/joget/wflow
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  name: joget-dx7-tomcat9
  labels:
    app: joget-dx7-tomcat9
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: https
      port: 9080
      targetPort: 9080
  selector:
    app: joget-dx7-tomcat9
  type: ClusterIP
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: joget-dx7-tomcat9-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
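Save the yaml above as joget-dx7-tomcat9-aks.yaml and apply it to the cluster:

kubectl apply -f joget-dx7-tomcat9-aks.yaml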
You can then check the deployment progress from the Azure portal, or use kubectl commands (e.g. kubectl get deployment joget-dx7-tomcat9).
You can then expose the application for external access through an Ingress. You can read more regarding Ingress in Kubernetes here. In this guide, we will use the Nginx Ingress Controller as an example to access Joget.
Deploy Nginx Ingress Controller to AKS cluster
You can refer to the AKS documentation on creating an ingress-nginx controller, as well as the nginx-ingress documentation.
There are two methods of deploying the Nginx Ingress Controller to the AKS cluster:
Install using Helm
Using the Azure CLI/Cloud Shell, set up the Helm repository for Nginx Ingress and install the chart:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace nginx-ingress
Install using yaml file
You can use the kubectl apply command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
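Whichever method you choose, you can confirm that the controller pods are running and that the controller service has been assigned an external IP before proceeding (the ingress-nginx namespace below matches the yaml install; use nginx-ingress if you installed through Helm as above):

kubectl get pods --namespace ingress-nginx
kubectl get service --namespace ingress-nginx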
After the Ingress Controller has been deployed, we can apply the Ingress yaml so that the Joget application can be accessed externally.
Example joget-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 8080
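Save the yaml above as joget-ingress.yaml and apply it:

kubectl apply -f joget-ingress.yaml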
After the Ingress deployment is completed, you can get the public IP from the Kubernetes resources > Services and Ingresses pane in the Azure portal, and then access Joget at http://<external-ip>/jw.
Setup Database
To complete the Joget deployment, you need to perform a one-time Database Setup. Enter the MySQL service name as the database host, along with the database username and password, then click Save.
Once the setup is complete, click on Done and you will be brought to the Joget App Center.
Before starting the TLS setup, you need to set 'enable-underscores-in-headers' to true for the Ingress controller using a ConfigMap.
Example ingress-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: "true"
  allow-snippet-annotations: "true"
Update the Ingress configuration with kubectl apply -f ingress-configmap.yaml
Install cert-manager into the cluster
Similar to installing the ingress controller, you can install cert-manager either through Helm or through a yaml file. Refer to the official cert-manager documentation here for details. For this guide, we will use the yaml file method.
Note: Before going further with these steps, make sure that you have set up a DNS record pointing your domain to the public IP of the ingress generated by AKS earlier.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml
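You can verify that the cert-manager pods are running before continuing:

kubectl get pods --namespace cert-manager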
Configure Let’s Encrypt issuer
Example stagingissuer.yaml file:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [update email here]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
kubectl apply -f stagingissuer.yaml
You can check the status of the issuer resource after you have deployed it:

kubectl describe clusterissuer letsencrypt-staging
Deploy/Update the Ingress with TLS configuration
As we previously deployed the Ingress without TLS, we can now update the Ingress yaml file to include the TLS configuration.
Example Ingress yaml with TLS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - exampledomain.com
      secretName: aks-jogetworkflow
  rules:
    - host: exampledomain.com
      http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 9080
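Save and apply the updated joget-ingress.yaml so that cert-manager can request the staging certificate:

kubectl apply -f joget-ingress.yaml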
This staging procedure ensures that the certificate is generated correctly before we set up the Issuer against the Let's Encrypt production environment. You can then check the certificate status:
[ ~/jogetaks ]$ kubectl get certificate
NAME                READY   SECRET              AGE
aks-jogetworkflow   True    aks-jogetworkflow   30s
kubectl describe certificate aks-jogetworkflow
If the certificate is generated correctly, we can then set up the production Issuer.
Example productionissuer.yaml file:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [update email here]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
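Apply the production issuer:

kubectl apply -f productionissuer.yaml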
Update the ingress yaml file with the production cluster issuer annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joget-dx7-tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - exampledomain.com
      secretName: aks-jogetworkflow
  rules:
    - host: exampledomain.com
      http:
        paths:
          - path: /jw
            pathType: Prefix
            backend:
              service:
                name: joget-dx7-tomcat9
                port:
                  number: 9080
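Apply the updated ingress yaml:

kubectl apply -f joget-ingress.yaml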
After applying the updated ingress yaml, you need to delete the previous secret so that a new certificate can be generated for production.
kubectl delete secret aks-jogetworkflow
Then re-run the describe command to check the certificate status:
kubectl describe certificate aks-jogetworkflow
After the new certificate has been issued, you can access the Joget domain over https to verify that everything is working properly.
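For example, a quick check with curl (replacing exampledomain.com with your actual domain) should complete the TLS handshake without certificate warnings:

curl -v https://exampledomain.com/jw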
While you can configure nodes or pods to autoscale in AKS (read here), you can also scale the number of nodes or pods manually. To scale the number of pods running Joget, use the kubectl scale command:
kubectl scale --replicas=3 deployment/joget-dx7-tomcat9
Adjust the replica number as desired, and the corresponding number of pods will initialize and start up.
As for the nodes, you can scale the node count of the node pool from the Azure portal. Go to the cluster in the Kubernetes service (in this guide, jogetakscluster) > Settings > Node pools. Select the node pool and then click Scale node pool. Choose Manual as the scale method and input the desired node count (the maximum available resources depend on the VM size that you have chosen).
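Alternatively, the same manual node scaling can be done through the Azure CLI (the resource group name below is illustrative):

az aks scale --resource-group myResourceGroup --name jogetakscluster --node-count 3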