Stability is the parameter that is taken for granted in the current era: every application is assumed to be stable enough to run 24×7, 365 days a year, so it tends to be treated as an implicit measurement parameter rather than an explicit one.
1). Scope
To ascertain that the parameters of speed, concurrency and stability are handled when an application is being developed, developers/designers have to work on two levels:
- Application level
- Server level
Application Level
This is the level at which the architecture of the application is designed, the most suitable software for writing the business logic is chosen, and the design is refined to deliver the best results for the three parameters mentioned above, among many others.
Server Level
This is the level at which critical decisions are taken about the server on which the application is deployed. Server-level decisions play a major part in obtaining optimum results for the above parameters: whatever is done at the application level, an incorrect decision at the server level can render it futile. Hence, consultants and designers have to weigh various aspects to arrive at the optimum choice of server and configuration to obtain the best speed, concurrency and stability for the web application.
In this paper, I will focus only on the server-level decisions that have to be made to achieve the best results for speed, concurrency and stability. I will compare the pros and cons of various web servers across several measurement parameters and will present the implementation details for integrating the Apache and Tomcat servers to achieve the maximum throughput on these parameters for certain specific application requirements.
2). Design Intention
2.1 Design level server parameters
The main design parameters considered when taking server-related design decisions are:
Page Load Time
This refers to the Average Page Duration measured with each server under increasing load. When the load on the server is low, Tomcat outperforms WebSphere and WebLogic and is considered very fast, but its performance degrades as the load increases: as the number of concurrent users grows, Tomcat's throughput comes down. Application servers such as WebSphere and WebLogic are found to be more stable than Tomcat. Tomcat typically supports around 150 concurrent users without much problem, while WebSphere and WebLogic support around 400 concurrent users. This is one major reason architects choose WebSphere or WebLogic over Tomcat, which tends to become unstable as the load increases. The Apache server is considered as stable as WebSphere and WebLogic and is fast for static content.
Error Count
This refers to the errors that appear as the load on the server increases. As shown by many reviews and benchmarks, WebSphere and WebLogic produce fewer errors than Tomcat as the load increases. Apache also produces relatively few errors.
Cost and Support
WebSphere is priced at around $12,000 per CPU and WebLogic at around $10,000 per CPU, while Tomcat and Apache are free. As expected, WebSphere and WebLogic come with vendor support, whereas Tomcat and Apache do not.
Other Features
WebSphere and WebLogic are fully integrated J2EE application servers, while Tomcat is not. They support EJBs and have built-in JMS queue implementations. The commercial servers also come with a host of other useful features that reduce application development effort, whereas Tomcat provides very few ease-of-development features.
2.2 Architectural Decision
As architects, designers or consultants, we have to decide on which server the application will be deployed, taking all the parameters discussed above into account. In the current outsourcing revolution, customers pay minimal attention to the ease of developing an application, since development is no longer their headache. As most development is outsourced, customers are primarily concerned with the fixed or variable cost they incur to build an application. From the customer's cost perspective, therefore, they will be more inclined towards Tomcat than towards any of the commercially available servers. Customers increasingly assume that the applications being developed will not require any server-level support in the future, and that the applications will be bug free and stable. The parameters that matter most to a customer when selecting a server are:
- Low/no cost
- Low/no support cost
- High stability
Thus an architect working in the software services industry has to decide on the server with the above parameters in mind. As cost is a factor beyond the architect's control, he/she will be more inclined towards Tomcat. But Tomcat has a dismal stability record under load, even though it provides the best throughput at low load. In a nutshell, Tomcat scores on cost but not on stability.
The Apache HTTP server (httpd) is another free server, from the Apache Software Foundation. It scores well on stability and on speed for static content, but it cannot serve dynamic Java content on its own: it does not support JSPs and servlets.
The next few sections of this paper will focus on clustering Tomcat and Apache servers to leverage the low cost advantage of Tomcat with the high stability and speed advantage of Apache.
3). Technical – Apache-Tomcat Clustering
3.1 Tomcat Worker
A Tomcat worker is a single Tomcat instance that runs as a slave server and is controlled by the master Apache server. Clustering uses multiple Tomcat workers to achieve the dual advantage of speed and concurrency. By limiting the number of requests sent to a Tomcat worker to its optimum level, the required throughput for that share of requests can be achieved. A Tomcat worker listens on its own port, which is hidden from the requesting user, and acts as a standalone Tomcat server even though it is controlled and monitored by the master Apache server.
3.2 Apache Master Server
To support many concurrent users, we combine the Tomcat worker concept with an Apache master server. More than one Tomcat worker is spawned, based on the optimum throughput obtained for a single Tomcat worker. For example, suppose a single standalone Tomcat server delivers maximum throughput with no errors for up to X concurrent users, and the application is required to support Y concurrent users. The number of Tomcat workers to run is then Y/X (rounded up).
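As a purely illustrative calculation (these numbers are assumptions, not benchmark results): if a single worker comfortably handles X = 150 concurrent users, the figure quoted for Tomcat earlier, and the application must support Y = 450 concurrent users, then Y/X = 450/150 = 3 workers are needed. If the division is not exact (say Y = 500), round up, giving 4 workers.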
Once the number of Tomcat workers is decided, we have a count that will support the required concurrent users with optimum throughput. The catch is that each Tomcat worker listens on its own port, which would result in as many URLs as there are workers. We therefore need a master server that receives all requests on a single URL and delegates them to the available Tomcat workers, presenting a single URL to the end user. This configuration lets the application leverage the speed of Tomcat and the stability of Apache at no cost at all.
3.3 High Level Clustering Diagram
4). Implementation Details
4.1 Compile, Install and Configure Apache
4.1.1 Install Apache and Tomcat
Download and install Apache on the system. You can download Apache from http://www.apache.org/dist/httpd/ by selecting the file corresponding to your operating system.
I have downloaded Apache into /usr/local/apache2.
Similarly, download Tomcat. You can download Tomcat from http://tomcat.apache.org/download-41.cgi by selecting the file corresponding to your operating system.
I have downloaded Tomcat into /usr/local/tomcat.
NOTE: In the rest of this section, we will be dealing with Apache 2 and Tomcat 4.x versions.
4.1.2 Configure the JK Module in httpd.conf
Edit the Apache server's configuration file httpd.conf, which is located in the /usr/local/apache2/conf directory.
Below “# LoadModule foo_module modules/mod_foo.so”, insert the following lines:
#
# Load mod_jk
#
LoadModule jk_module modules/mod_jk.so
#
# Configure mod_jk
#
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
NOTE: You will need to change mod_jk.so to mod_jk.dll for Windows.
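Section 4.3 maps URLs to workers inside virtual hosts via mod_jk.conf. If you are not using virtual hosts, a minimal mapping can instead be added directly below the lines above in httpd.conf, so that JSP and servlet requests reach the load-balancing worker defined in the next section (the URL patterns here are illustrative; adjust them to your application):
# Route JSP and servlet requests through the load-balancing worker
JkMount /*.jsp loadbalancer
JkMount /servlet/* loadbalancer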
4.1.3 Create the workers.properties file
Now we will create a file called workers.properties. Note that the JkWorkersFile directive added above points to conf/workers.properties relative to the Apache ServerRoot, so place the file at /usr/local/apache2/conf/workers.properties (or adjust the directive to the path you prefer). The workers.properties file tells Apache which Tomcat servers are running and on which ports they are listening.
In my setup, I installed the two Tomcat workers in different directories, on the same machine as Apache. Feel free to put your Tomcat workers on different machines.
I made the first Tomcat worker’s AJP13 connector listen on port 11009 instead of the default port that is 8009, and the second one listens on port 12009.
I have decided to name my tomcat workers tomcat1 and tomcat2.
Create the file exactly like this:
#
# workers.properties
#
# In Unix, we use forward slashes:
ps=/
# list the workers by name
worker.list=tomcat1, tomcat2, loadbalancer
# ------------------------
# First tomcat server
# ------------------------
worker.tomcat1.port=11009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
# Specify the size of the open connection cache.
#worker.tomcat1.cachesize
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
# ----> lbfactor must be > 0
# ----> Low lbfactor means less work done by the worker.
worker.tomcat1.lbfactor=100
# ------------------------
# Second tomcat server
# ------------------------
worker.tomcat2.port=12009
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
# Specify the size of the open connection cache.
#worker.tomcat2.cachesize
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
# ----> lbfactor must be > 0
# ----> Low lbfactor means less work done by the worker.
worker.tomcat2.lbfactor=100
# ------------------------
# Load Balancer worker
# ------------------------
#
# The loadbalancer (type lb) worker performs weighted round-robin
# load balancing with sticky sessions.
# Note:
# ----> If a worker dies, the load balancer will check its state
#       once in a while. Until then all work is redirected to the
#       peer worker.
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1, tomcat2
#
# END workers.properties
#
4.2 Install and Configure the Tomcat Workers
Now let’s suppose that Java 1.4.x is installed under /usr/local/jdk1.4.x/. Create two Tomcat 4.x workers under /usr/local/:
For this, copy the directories conf, logs, temp and webapps from the original Tomcat installation directory (/usr/local/tomcat) into two new directories, say /usr/local/tomcat1 and /usr/local/tomcat2.
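One possible command sequence for this step, assuming the paths used in this article:
# Create the two worker directories and copy the instance-specific directories into each
mkdir /usr/local/tomcat1 /usr/local/tomcat2
cd /usr/local/tomcat
cp -r conf logs temp webapps /usr/local/tomcat1
cp -r conf logs temp webapps /usr/local/tomcat2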
The same files will be modified in both /usr/local/tomcat1 and /usr/local/tomcat2. The modifications below are shown for the files under /usr/local/tomcat1; apply the same changes to the corresponding files under /usr/local/tomcat2.
4.2.1 Modify conf/server.xml
Change the control port
At line 13, replace:
<Server port="8005"
with:
<Server port="11005"
For the tomcat2 server, replace port 8005 with 12005. This will prevent the two workers from conflicting.
Change the AJP13 port
At line 75, in the AJP 13 connector definition, replace:
port="8009"
with:
port="11009"
For the tomcat2 worker, replace port 8009 with 12009.
Disable the standalone HTTP port
We don't want or need our Tomcat servers to respond to HTTP requests directly, so we comment out the HttpConnector section (the block ending around line 58) in the server.xml file.
Example:
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<!--
<Connector className="org.apache.catalina.connector.http.HttpConnector"
port="8080" minProcessors="5" maxProcessors="75"
enableLookups="true" redirectPort="8443"
acceptCount="10" debug="0" connectionTimeout="60000"/>
-->
NOTE: If you don’t comment this out, you will need to change the port numbers so they don’t conflict between tomcat instances.
Disable the WARP connector
At line 314, comment out the <Connector…WarpConnector…> tag.
Example:
<Service name="Tomcat-Apache">
<!--
<Connector className="org.apache.catalina.connector.warp.WarpConnector"
port="8008" minProcessors="5" maxProcessors="75"
enableLookups="true" appBase="webapps"
acceptCount="10" debug="0"/>
-->
Do not forget to do the same thing to tomcat2's server.xml file.
NOTE: You might want to comment out the entire <Service name="Tomcat-Apache"> element. If so, make sure to remove the comments within it, as XML does not allow nested comments.
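One further server.xml change is worth making for the sticky-session test in section 4.2.4: each worker should advertise its name in the session ID so that the load balancer can route follow-up requests to the same instance. In Tomcat 4.x this is done with the jvmRoute attribute on the <Engine> element; the snippet below assumes the default Tomcat 4.x engine named Standalone and uses the worker names from workers.properties.
In tomcat1's server.xml:
<Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="tomcat1">
In tomcat2's server.xml, set jvmRoute="tomcat2" instead.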
4.2.2 Create test JSP pages (index.jsp)
Create a file named index.jsp and put it in the /usr/local/tomcat1/webapps/ROOT directory:
<html>
<body bgcolor="red">
<center>
<%= request.getSession().getId() %>
<h1>Tomcat 1</h1>
</center>
</body>
</html>
Create a file named index.jsp and put it in the /usr/local/tomcat2/webapps/ROOT directory:
<html>
<body bgcolor="blue">
<center>
<%= request.getSession().getId() %>
<h1>Tomcat 2</h1>
</center>
</body>
</html>
4.2.3 Start Tomcat1, Tomcat2 and Apache
Set the JAVA_HOME environment variable to the location where JDK 1.4 is installed:
export JAVA_HOME=/usr/local/jdk1.4.x
To start the tomcat1 worker, point the instance variable CATALINA_BASE at the tomcat1 worker directory (CATALINA_HOME stays at the original installation, /usr/local/tomcat, which contains the Tomcat binaries):
export CATALINA_BASE=/usr/local/tomcat1
Run the following command:
/usr/local/tomcat/bin/startup.sh
This starts the first Tomcat worker, tomcat1.
To start the tomcat2 worker, set CATALINA_BASE to the tomcat2 worker directory:
export CATALINA_BASE=/usr/local/tomcat2
Run the following command:
/usr/local/tomcat/bin/startup.sh
This starts the second Tomcat worker, tomcat2.
To start Apache, run the following command:
/usr/local/apache2/bin/apachectl start
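A quick sanity check that Apache has picked up the JK configuration (paths as assumed above; the mod_jk log location follows the JkLogFile directive, which is relative to the Apache ServerRoot):
# Verify httpd.conf syntax, including the mod_jk lines
/usr/local/apache2/bin/apachectl configtest
# mod_jk writes its startup messages here
tail /usr/local/apache2/logs/mod_jk.log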
4.2.4 Test your Installation
Now is the time to test your setup. First, verify that Apache serves static content.
Click on: http://localhost/. You should see the default Apache index.html page.
Now test that tomcat (either Tomcat 1 or Tomcat 2) is serving Java Server Pages.
Click on: http://localhost/index.jsp
If you get a red page, the page was served by the tomcat1 server, and if you get a blue page, it was served by the tomcat2 server.
Now test that session affinity – also known as sticky sessions – works within the load balancer. Hit the reload button of your web browser several times and verify that the index.jsp page you get is always received from the same tomcat server.
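The same check can be scripted. A minimal sketch using curl, assuming Apache listens on port 80, JSP requests are mounted on the loadbalancer worker, and jvmRoute is set as described in section 4.2.1:
# First request: establishes a session and saves the JSESSIONID cookie
curl -s -c /tmp/cookies.txt http://localhost/index.jsp | grep -o 'Tomcat [12]'
# Repeated requests with the same cookie should always print the same worker name
curl -s -b /tmp/cookies.txt http://localhost/index.jsp | grep -o 'Tomcat [12]'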
4.3 Configuring Private JVMs
For configuring Apache/Tomcat with private Tomcat instances, you can add one of the following configurations to the file mod_jk.conf under /usr/local/tomcat/jk/apache2 (make sure this file is actually read by Apache, for example through an Include directive in httpd.conf).
4.3.1 Name-based (1 IP address or NIC)
NameVirtualHost *
<VirtualHost *>
ServerName localhost1
JkMount /*.jsp tomcat1
JkMount /servlet/* tomcat1
JkMount / loadbalancer
JkMount /* loadbalancer
</VirtualHost>
<VirtualHost *>
ServerName localhost2
JkMount /*.jsp tomcat2
JkMount /servlet/* tomcat2
JkMount / loadbalancer
JkMount /* loadbalancer
</VirtualHost>
4.3.2 IP-based (different IP for each site)
# First Virtual Host.
#
Listen 192.168.0.1:80
<VirtualHost 192.168.0.1:80>
ServerName localhost
JkMount /*.jsp tomcat1
JkMount /servlet/* tomcat1
JkMount / loadbalancer
JkMount /* loadbalancer
</VirtualHost>
# Second Virtual Host.
#
Listen 192.168.0.2:80
<VirtualHost 192.168.0.2:80>
ServerName localhost2
JkMount /*.jsp tomcat2
JkMount /servlet/* tomcat2
JkMount / loadbalancer
JkMount /* loadbalancer
</VirtualHost>
Here the ServerName values should be fully-qualified host names registered in a DNS server.
NOTE: When using SSL with multiple virtual hosts, you must use an IP-based configuration. This is because SSL requires a specific port (443) to be configured, whereas the name-based configuration applies to all ports (*). You might get the following error if you try to mix name-based virtual hosts with SSL:
[error] VirtualHost _default_: 443 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results.
4.4 Workflow Description
The workflow for the above configuration is as follows:
- Tomcat workers are configured with AJP13 connectors on different ports.
- The JK module of the Apache server knows about, and connects to, each of these AJP13 connectors.
- When the Apache server starts, all the Tomcat workers and the load-balancing configuration are registered.
- The Apache server receives every request whose URI matches a pattern specified in the mod_jk.conf file.
- The request is forwarded over the AJP13 connector to a worker that hosts the matching application in its webapps directory and is free according to the load-balancing configuration.
- The response takes the same route.
5). Benefits
- Scalability is very easy to achieve: copy a worker into a new directory and start it by pointing CATALINA_BASE at that directory. Thus, if an application cannot support the required number of concurrent users, capacity can be increased by deploying the same application in as many Tomcat workers as needed (see the sketch after this list).
- Load balancing through the Apache JK module. If the machines hosting the Tomcat workers have different CPU and memory configurations, the lbfactor settings can be used to make the workers on high-end systems process more requests than the others.
- Automatic failover when a Tomcat worker dies. The JK module of Apache monitors each Tomcat worker; if a worker instance crashes, Apache redirects its requests to the remaining workers and periodically re-checks the failed worker, thereby maximising availability at any point in time.
- Low cost advantage. Both Apache and Tomcat are free, open-source software.
- Low-cost vendor support can be obtained. Vendors such as HP provide an integrated version of this cluster and support it at minimal cost.
- Use of the speed of Apache server for processing static content.
- Use of the speed of Tomcat for processing JSPs and servlets, as long as the load on each worker stays within its optimum range.
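As a sketch of the scalability point above (the worker name, port numbers and directory are hypothetical), adding a third worker only needs a new instance directory and a few extra lines in workers.properties:
# Additional entries in workers.properties for a hypothetical third worker
worker.list=tomcat1, tomcat2, tomcat3, loadbalancer
worker.tomcat3.port=13009
worker.tomcat3.host=localhost
worker.tomcat3.type=ajp13
worker.tomcat3.lbfactor=100
worker.loadbalancer.balanced_workers=tomcat1, tomcat2, tomcat3
Create /usr/local/tomcat3 by copying conf, logs, temp and webapps as in section 4.2, change its control and AJP13 ports in server.xml (for example 13005 and 13009), start it with CATALINA_BASE=/usr/local/tomcat3, and restart Apache so the new worker is registered.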
6). Finally
In this era of outsourcing, where customers want the same level of throughput, stability and scalability at a low cost, the clustering of the Apache and Tomcat servers proves to be a winning combination. By leveraging the advantages of both servers through the JK integration module, an architect's dream product seems to be in the making. Though this combination does not provide the ease of development offered by the commercial servers, from the customer's point of view it still beats them on the critical parameters of cost, stability and scalability. Thus the clustering of Apache with Tomcat gives us a product that is cost-free, fast, stable and easily scalable when compared to many other commercially available servers.