This document describes the steps required to deploy Joget Workflow v5 Large Enterprise Edition (LEE) in a clustered environment for scalability and redundancy.
In order for clustering to work, the Large Enterprise Edition is required. The standard Enterprise Edition will not work due to licensing restrictions. Clustering requires several layers to be prepared and configured:
...
There are many ways to design the clustering architecture, but the core concepts will be similar. In this document, the architecture used is as follows:
This guide describes the steps required to set up Joget Workflow LEE clustering. The exact steps will depend on the actual products used in each layer.
Warning
IMPORTANT: There is minimal configuration required in Joget Workflow LEE itself; almost all the work is done on the separate layers, so it is vital to ensure that you have sufficient expertise in your chosen products.
...
Before the clustering installation can be done, the following prerequisites are needed:
Common directory to be accessed by the application servers with read/write permissions. This directory is used to store shared configuration files, system generated files, and uploaded files. Verify that the shared directory is mounted on the application servers and that files can be accessed with read and write permissions.
...
Common database to be accessed by the application servers with permission to select, update, delete, create and alter tables. Verify that the application servers can connect and query the shared database.
Java web application server to be installed and running on each server in the cluster. Verify that each application server has been installed correctly and can be accessed with a web browser.
...
Session replication to be configured on the application servers and network. Verify that session replication has been configured for each application server and the network.
...
Load balancer (hardware or software) to be installed and configured to direct traffic for requests beginning with /jw to the application servers. Verify that the load balancer has been installed and configured correctly so that web traffic is directed to the individual application servers.
...
It is important to ensure that the pre-deployment requirements have been verified. Once verified, the Joget Workflow specific steps are as follows:
Configure the datasource properties files in the shared directory
...
Code Block
workflowDriver=com.mysql.jdbc.Driver
workflowUrl=jdbc\:mysql\://host\:port/database_name?characterEncoding\=UTF-8
workflowUser=username
profileName=
workflowPassword=password
Deploy Joget WAR files to the application servers and configure the startup properties to point to the shared directory.
...
Code Block
export JAVA_OPTS="-XX:MaxPermSize=128m -Xmx1024M -Dwflow.home=/shared_directory_path"
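One common place to set this variable (an assumption; the original does not state where the export should go) is a bin/setenv.sh script inside the Tomcat installation, which catalina.sh sources on startup:
Code Block
# <tomcat_dir>/bin/setenv.sh (hypothetical file; create it if it does not exist)
# Point wflow.home at the shared directory so every node reads the same configuration
export JAVA_OPTS="-XX:MaxPermSize=128m -Xmx1024M -Dwflow.home=/shared_directory_path"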
Activate license for each server. Each server has a unique system key and requires a separate license activation.
...
Once the pre-deployment and clustering configuration has been done, the testing is a matter of using a web browser to access the load balancer.
...
Warning
IMPORTANT: This is not a comprehensive guide and does not cover production-level requirements, e.g. user permissions, network and database security, etc. Please ensure that these are covered by your system, network and database administrators.
...
...
On the file server, install the NFS server
Code Block
sudo apt-get install portmap nfs-kernel-server
Create the shared directory and set its permissions
Code Block
sudo mkdir -p /export/wflow
sudo chown nobody:nogroup /export/wflow
Configure NFS to export the shared directory: using your favourite editor, edit /etc/exports to export the directory to the local 192.168.1.0 subnetwork
Code Block
sudo vim /etc/exports
...
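As a reference, a minimal export entry might look like the following (an assumption based on the /export/wflow directory and 192.168.1.0 subnetwork used above; adjust the options to your environment):
Code Block
/export/wflow 192.168.1.0/24(rw,sync,no_subtree_check)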
Export the shares and restart NFS service
Code Block
sudo exportfs -ra
sudo service nfs-kernel-server restart
...
...
On the application servers, install the NFS client
Code Block
sudo apt-get install nfs-common
Create a new directory /opt/joget/shared/wflow to mount the shared directory and set the directory permissions
Code Block
sudo mkdir -p /opt/joget/shared/wflow
sudo chmod 777 /opt/joget/shared/wflow
Mount the shared directory.
Code Block
sudo mount -t nfs wflow:/export/wflow /opt/joget/shared/wflow
Test read-write permissions to confirm that the directory sharing works.
Code Block
echo test123 > /opt/joget/shared/wflow/test.txt
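To confirm that the share is actually visible to all nodes, the same file can then be read from another application server (a suggested check, not part of the original steps):
Code Block
cat /opt/joget/shared/wflow/test.txt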
...
Install MySQL (https://help.ubuntu.com/14.04/serverguide/mysql.html)
Code Block
sudo apt-get install mysql-server
Create a database called jwedb accessible to the application servers.
Code Block
mysql -u root
Run the following MySQL commands to create a blank database
...
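As a reference, a minimal example of the SQL (an assumption based on the jwedb database name above; the original commands are not shown):
Code Block
CREATE DATABASE jwedb CHARACTER SET utf8;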
Populate the newly created database with the Joget database schema
Code Block
mysql -uroot jwedb < /path/to/jwdb-mysql.sql
Configure database permissions
Code Block
mysql -u root
Run the following MySQL commands to grant permissions to user joget and password joget
...
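As a reference, a minimal example of the grant statements (an assumption; this allows the joget user to connect from any host, so restrict the host pattern to the application servers where appropriate):
Code Block
GRANT ALL PRIVILEGES ON jwedb.* TO 'joget'@'%' IDENTIFIED BY 'joget';
FLUSH PRIVILEGES;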
Configure MySQL to listen for database connections from remote hosts. Edit the my.cnf file with your favourite editor
Code Block
sudo vim /etc/mysql/my.cnf
...
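The usual change (an assumption, as the original does not show the edited lines) is the bind-address setting under the [mysqld] section, so that MySQL listens on a network-reachable interface rather than only on localhost:
Code Block
# /etc/mysql/my.cnf, under [mysqld]
# Listen on all interfaces (or use the database server's own IP address)
bind-address = 0.0.0.0
Restart the MySQL service to apply the change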
Code Block
sudo service mysql restart
...
On each application server, test a remote connection to the database server database_host
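For example (a suggested check; the original does not show the command), using the mysql client with the joget account created earlier:
Code Block
mysql -h database_host -u joget -p jwedb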
...
...
Install Apache Tomcat on each of the application servers. On each application server, run the following to extract Tomcat into /opt/joget:
Code Block
sudo mkdir -p /opt/joget/
sudo tar xvfz apache-tomcat-8.0.20.tar.gz -C /opt/joget/
Start each application server
Code Block
cd /opt/joget/apache-tomcat-8.0.20
sudo ./bin/catalina.sh start
Open a web browser and access each server to confirm that http://server:8080/jw is accessible.
Configure Tomcat for clustering by editing apache-tomcat-8.0.20/conf/server.xml. Add jvmRoute="node01" to the Engine tag and uncomment the Cluster tag.
Code Block
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
Configure local domain IP. Verify that the local server name resolves to the IP and not 127.0.1.1. Assuming the server name is server1 and the IP is 192.168.1.10, edit /etc/hosts and set:
Code Block
192.168.1.10 server1
Verify multicast is enabled between the application servers by running ifconfig and looking for MULTICAST. Try http://blogs.agilefaqs.com/2009/11/08/enabling-multicast-on-your-macos-unix/ if there are issues.
Restart the Tomcat servers.
Code Block
cd /opt/joget/apache-tomcat-8.0.20
sudo ./bin/catalina.sh stop
sudo ./bin/catalina.sh start
Verify that session replication is working between the application servers. The catalina.out log file in apache-tomcat-8.0.20/logs should show something similar to:
Code Block
INFO: Starting clustering manager at localhost#/jw
Jan 17, 2016 11:21:32 AM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
INFO: Manager [localhost#/jw], requesting session state from org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4001,{127, 0, 0, 1},4001, alive=55733886, securePort=-1, UDP Port=-1, id={-57 118 -98 -98 110 -38 64 -68 -74 -25 -29 101 46 103 5 -48 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
Jan 17, 2016 11:21:32 AM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [localhost#/jw]; session state send at 1/17/16 11:21 AM received in 104 ms.
More information on Tomcat clustering is at http://tomcat.apache.org/tomcat-8.0-doc/cluster-howto.html
On the load balancer server, install Apache HTTP Server
Code Block
sudo apt-get install apache2
Install proxy and balancer modules
Code Block
sudo a2enmod headers proxy proxy_balancer proxy_http
Configure a new site with the proxy and balancer modules. Create a new file in /etc/apache2/sites-available named jwsite.conf
Code Block
sudo vim /etc/apache2/sites-available/jwsite.conf
Add the following contents
Code Block
NameVirtualHost *
<VirtualHost *>
    DocumentRoot "/var/www/jwsite"
    ServerName localhost
    ServerAdmin support@mycompany.com
    DirectoryIndex index.html index.htm
    <Proxy balancer://cluster>
        BalancerMember http://server1:8080 route=node01
        BalancerMember http://server2:8080 route=node02
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPreserveHost On
    ProxyPass /jw balancer://cluster/jw stickysession=JSESSIONID
    ProxyPassReverse /jw balancer://cluster/jw
</VirtualHost>
Enable the new site and restart Apache
Code Block
sudo a2ensite jwsite
sudo service apache2 reload
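Once the site is enabled, the cluster can be exercised through the load balancer as described earlier, for example with a quick check from the command line (a suggested verification; the load balancer host name here is an assumption):
Code Block
curl -I http://load_balancer_host/jw
A browser request to the same URL should be routed to one of the application servers, and the route suffix on the JSESSIONID cookie (node01 or node02) indicates which node served the request.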
...