This guide shows the steps to deploy Joget on EKS using Terraform.
Prerequisites
Ensure that you have the necessary CLI tools installed, and that:
- The AWS CLI is configured with Access Keys, or you have assumed a role with sufficient permissions
- You have downloaded the Terraform IaC from here
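Before starting, it can help to confirm that the tools used throughout this guide (the AWS CLI, Terraform, and kubectl) are on your PATH. A quick sanity check:

```shell
# Print the version of each CLI tool used in this guide.
# A "command not found" error here means the tool still needs to be installed.
aws --version
terraform version
kubectl version --client
```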
Configuring Terraform Remote Backend
Disclaimer: The Terraform code provisions the minimum required infrastructure. You may have to modify some of the parameters to make it work in your environment. Refer to the official AWS and HashiCorp documentation for more details.
- Create a terraform.tfvars file in the backend directory and ensure the following variables are included:

```
"<your name>"
```
- Run terraform init
- Run terraform plan to preview the resources that will be deployed (optional)
- Once verified, run terraform apply -auto-approve
- Once the backend has been deployed, go to the infrastructure directory and open main.tf
- Find the following block (Terraform Backend Settings):

```
backend "s3" {
  bucket         = "xxx"
  key            = "terraform.infrastructure.tfstate"
  region         = "xxx"
  dynamodb_table = "xxx"
}
```
- Replace each xxx with the details of the backend services that you created in steps 1 - 5
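For illustration only, a filled-in backend block might look like the following (the bucket name, region, and table name here are hypothetical placeholders, not values produced by the Terraform code):

```
backend "s3" {
  bucket         = "my-terraform-state-bucket"
  key            = "terraform.infrastructure.tfstate"
  region         = "us-east-1"
  dynamodb_table = "my-terraform-lock-table"
}
```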
...
Note: This process will create a local Terraform state. The remote state will only apply to the infrastructure.
Deploying AWS Infrastructure
- Create a terraform.tfvars file and ensure the following variables are included:

```
app_name="<your-app-name>"
cluster_name="<your-eks-cluster-name>"
rds_username="<your-rds-username>"
rds_password="<your-rds-password>"
```
- Run terraform init
- Run terraform plan to preview the resources that will be deployed (optional)
- Once verified, run terraform apply -auto-approve
Note: This step will take some time, around 20-30 minutes.
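After the apply completes, kubectl still needs to be pointed at the new cluster. Assuming the cluster name from your terraform.tfvars and your deployment region (both placeholders below), this is typically done with:

```shell
# Update ~/.kube/config with credentials for the new EKS cluster.
# Replace the region and cluster name with your own values.
aws eks update-kubeconfig --region <your-region> --name <your-eks-cluster-name>

# Verify connectivity to the cluster.
kubectl get nodes
```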
Core Services and Resources Deployed
The following is a non-exhaustive list of the core services and resources deployed by Terraform:
- Virtual Private Cloud (VPC)
- Elastic Kubernetes Service (EKS)
- Elastic File System (EFS)
- Relational Database Service (RDS) - Serverless
- Helm Charts:
- AWS Load Balancer Controller
- AWS EBS CSI Driver
- AWS EFS CSI Driver
- EC2 instances - created through EKS provisioning
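To see the full list of resources Terraform actually created in your environment, you can inspect the state from the infrastructure directory:

```shell
# Enumerate every resource tracked in the Terraform state for this workspace.
terraform state list
```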
Deploying Joget DX 8
- Download the Kubernetes manifest here.
- Modify the fileSystemId in the StorageClass to the EFS file system ID that was deployed through Terraform in the prior steps.
- Run kubectl apply -f joget-dx8-tomcat9-deployment.yaml
- Wait for the containers to initialize. Run kubectl get pods -A to obtain the status of the pods.
Note: The manifest file creates a StorageClass with the EFS CSI driver as the provisioner, which will dynamically create a Persistent Volume.
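Instead of repeatedly polling kubectl get pods, you can block until the Joget pod reports Ready (the label below assumes the app: joget-dx8-tomcat9 label used by the deployment manifest):

```shell
# Wait up to 5 minutes for the Joget pod to become Ready.
kubectl wait --for=condition=Ready pod -l app=joget-dx8-tomcat9 --timeout=300s
```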
Accessing Joget through Load Balancer
- Run kubectl get ingress -A. You should see the DNS name under the Address column, as follows:

```
k8s-namespace-RANDOM-STRING.REGION.elb.amazonaws.com
```
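If you prefer to script this, the hostname can be extracted with jsonpath. This assumes a single ingress in the cluster; adjust the index or namespace otherwise:

```shell
# Print the Joget URL from the first ingress found in any namespace.
echo "http://$(kubectl get ingress -A -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')/jw"
```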
- Use the Address and go to /jw. It will redirect you to the database setup.
- Enter your database information on the setup page
Note: The Terraform IaC includes RDS Aurora Serverless in the infrastructure, so it is deployed alongside the EKS cluster. You may use this RDS instance to better align with the VPC configuration. Ensure that you use the writer endpoint when setting up the Joget database.
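To find the writer endpoint, one option is the AWS CLI (the cluster identifier below is a placeholder for whatever the Terraform code named your Aurora cluster):

```shell
# List the Aurora cluster endpoints; use the entry whose EndpointType is "WRITER".
aws rds describe-db-cluster-endpoints --db-cluster-identifier <your-cluster-id>
```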
- Click Save. Wait for the database to be set up
- Once the setup is complete, click Done. It will redirect you to the Joget main page
Using EFS with ReadWriteMany access mode
By default, the EKS cluster's nodes use EBS through the EBS CSI Driver, which only supports the ReadWriteOnce access mode. The Terraform IaC already contains the script to deploy the EFS CSI Driver. To use EFS:
- Create a StorageClass manifest as follows:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <your-efs-sc-name>
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: <efs-file-id>
  directoryPerms: "775"
reclaimPolicy: Retain
```
- Apply the StorageClass by running kubectl apply -f <storageclass>.yaml
- Modify the PVC in the joget-dx8-tomcat9-deployment.yaml file to this:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <your-efs-sc-name>
  resources:
    requests:
      storage: 5Gi
```

- Delete the current PVC with kubectl delete pvc efs-claim, then recreate it using kubectl apply -f joget-dx8-tomcat9-deployment.yaml
- Wait for the Persistent Volume to be created
- Once the Persistent Volume is created, modify the Joget Deployment to the following (note that claimName must match the efs-claim PVC created above):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: joget-dx8-tomcat9
  labels:
    app: joget-dx8-tomcat9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: joget-dx8-tomcat9
  template:
    metadata:
      labels:
        app: joget-dx8-tomcat9
    spec:
      volumes:
        - name: <efs-pv-name>
          persistentVolumeClaim:
            claimName: efs-claim
      securityContext:
        runAsUser: 1000
        fsGroup: 0
      containers:
        - name: joget-dx8-tomcat9
          image: jogetworkflow/joget-dx8-tomcat9:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: <efs-pv-name>
              mountPath: /opt/joget/wflow
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
```
- Wait for the new pods to spawn. The new pods will now use the EFS storage instead of EBS
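You can confirm the claim is bound to an EFS-backed volume before relying on it (the names below assume the manifests above):

```shell
# The PVC should show STATUS "Bound" and the EFS StorageClass name.
kubectl get pvc efs-claim

# The backing PersistentVolume should reference the EFS StorageClass.
kubectl get pv
```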
Common Errors
Terraform
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
- You may not have set up your AWS credentials yet, or, if you are assuming a role, your session may have expired.
- Solution: Run aws configure and input the Access Keys, export the Access Keys into your terminal environment, or assume the previous role once again to get new session credentials.
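A quick way to check which identity (if any) the AWS CLI is currently using, and a sketch of exporting credentials into the environment (the values below are placeholders, not real keys):

```shell
# Show the account, ARN, and user ID of the active credentials.
# This fails immediately if no valid credentials are available.
aws sts get-caller-identity

# Alternatively, export credentials directly into the terminal environment:
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<your-session-token>   # only needed for temporary credentials
```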
Kubernetes/EKS
- You may not have set up your AWS credentials yet, or, if you are assuming a role, your session may have expired.
- Solution: Run aws configure and input the Access Keys, export the Access Keys into your terminal environment, or assume the previous role once again to get new session credentials.
You must be logged in to the server (Unauthorized)
- This happens when you are using different credentials (a different user or role) to access the cluster. If you are the cluster creator, you should be able to access the cluster
- Solution:
- In the Terraform IaC, go to infrastructure/compute/eks/eks.tf
- Under the module "eks", add the following
- If you are using user credentials:

```
aws_auth_users = [
  {
    userarn  = "arn:aws:iam::<account-id>:user/<username>"
    username = "<username>"
    groups   = ["system:masters"]
  }
]
```
If you are using roles, you may append the aws_auth_roles block like so:

```
{
  rolearn  = "arn:aws:iam::<account-id>:role/<role-name>"
  username = "<role-name>"
  groups   = ["system:masters"]
}
```
...
AWS Marketplace
- There are numerous reasons why the stack deployment can fail.
- The most common and important one is that the Helm chart failed to deploy.
- Check the reason for the failure in the CloudFormation console > Quick-launch stack > Helm stack, under the Reason column.