2. Install the package (version 1.0.90-0.15.20110314svn4359.el6 at the time of writing):
# yum install -y tigervnc-server.x86_64
3. Edit vncservers file:
# vim /etc/sysconfig/vncservers
Uncomment the two lines below and edit them with your own username:
VNCSERVERS="2:myusername"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"
Example:
VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 1024x768"
4. Configure the desktop environment if needed:
# vim /root/.vnc/xstartup
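A minimal xstartup sketch for reference (the file is created under ~/.vnc the first time the server starts for that user; exec'ing the system xinitrc is one common RHEL 6 approach, adjust to your desktop):

```shell
#!/bin/sh
# ~/.vnc/xstartup - sketch; picks up the system default desktop session
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc
```

Make sure the file is executable: chmod +x /root/.vnc/xstartup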
5. Set the password for vncserver user:
# vncpasswd
Password: myvncpassword
Verify: myvncpassword
Start the VNC server.
Note: You must set a password for VNC; otherwise the service may not start.
6. Configure other services:
# service vncserver start
# chkconfig vncserver on
Configure the firewall for the VNC users, or disable it.
Also stop NetworkManager and disable SELinux:
# /etc/init.d/NetworkManager stop
# chkconfig NetworkManager off
Edit this file to disable SELinux:
# vim /etc/sysconfig/selinux
SELINUX=disabled
Save and quit.
Stop iptables:
# /etc/init.d/iptables save
# /etc/init.d/iptables stop
# chkconfig iptables off
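A less drastic alternative sketch: rather than stopping iptables outright, you could open only the VNC ports. Display numbers 1-5 are an assumption here; adjust them to your VNCSERVERS entries. This loop only prints the rules so you can review them first:

```shell
# Print iptables rules for VNC displays 1-5 (ports 5901-5905); review them,
# run them as root, then persist with "service iptables save".
for display in 1 2 3 4 5; do
  echo "iptables -I INPUT -p tcp --dport $((5900 + display)) -j ACCEPT"
done
```
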
B. Client configuration:
Now, to access the VNC console from a client, install a VNC client on the system; the TigerVNC client works on both Linux and Windows and can be downloaded from the SourceForge website.
Now, run the TigerVNC client:
Give IP as: 172.16.XX.XX
The default port number is 5900 plus the display number for each user:
The second user will be on port 5902 (VNCSERVERS="2:root").
You can access the server machine now.
Note: The VNC server and client machines must be able to reach each other on the network.
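As a quick check of the port arithmetic above, a display number maps to its TCP port like this (shell sketch):

```shell
# VNC display N listens on TCP port 5900 + N
display=2                        # matches VNCSERVERS="2:root"
port=$((5900 + display))
echo "point your viewer at <server-ip>:${port}"   # display 2 -> port 5902
```
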
You can restart all the servers in a cluster through the WAS Admin Console by selecting the cluster and clicking RippleStart.

Alternatively, you can execute the following wsadmin command:
AdminControl.invoke('WebSphere:name=<clustername>,process=dmgr,platform=common,node=dmgrCellManager01,version=6.1.0.25,type=Cluster,mbeanIdentifier=cluster1,cell=dmgrCell01,spec=1.0', 'rippleStart')

We will quickly review some of the basic concepts of cell, node, server, and so on.
An Application Server in this context is a single WebSphere application server.
A server is a runtime environment. Servers are Java processes responsible for serving J2EE requests (for example, serving JSP pages, serving EJB calls, consuming JMS queues, and so on).
The Admin Console is a browser-based application, pre-installed in your WebSphere environment, that enables you to manage your application servers and applications.
A cell is a grouping of nodes into a single administrative domain. For WebSphere, this means that if you group several servers within a cell, you can use one admin console to administer them all.
The Network Deployment Manager is an application server running an instance of the Admin Console. It gives you administrative control over all other app servers in the same cell.
The Deployment Manager is a process (a special WebSphere instance) responsible for managing the installation and maintenance of applications and other resources related to a J2EE environment. It also maintains user repositories for authentication and authorization for WebSphere and other applications running in the environment. The Deployment Manager communicates with the nodes through another special WebSphere process, the Node Agent.
A node is a grouping of servers that share common configuration on a physical machine. It is comprised of a Node Agent and one or more Server instances. Multiple WebSphere nodes can be configured on the same physical computer system.
The Node Agent is the administrative process responsible for spawning and killing server processes and also is responsible for synchronizing configuration between the Deployment Manager and the Node. Note that multiple WebSphere nodes can be configured on the same physical computer system. A single Node Agent supports all application servers running on the same node.
Clusters are virtual units that group servers. They can contain multiple instances of the same application server and can span multiple nodes. Resources added to the cluster are propagated to every server that makes up the cluster; this usually affects the nodes in the server grouping.
Horizontal clustering
Horizontal clustering, sometimes referred to as scaling out, is adding physical machines to increase the performance or capacity of a cluster pool. Typically, horizontal scaling increases the availability of the clustered application at the cost of increased maintenance. Horizontal clustering can add capacity and increased throughput to a clustered application; use this type of clustering in most instances.
Vertical clustering
Vertical clustering, sometimes referred to as scaling up, is adding WebSphere Application Server instances to the same machine. Vertical scaling is useful for taking advantage of unused resources in large SMP servers. You can use vertical clustering to create multiple JVM processes that, together, can use all of the available processing power.
Hybrid horizontal and vertical clustering
Hybrid clustering is a combination of horizontal and vertical clustering. In this configuration, disparate hardware configurations are members of the same cluster. Larger, more capable machines might contain multiple WebSphere Application Server instances; smaller machines might be horizontally clustered and only contain one WebSphere Application Server instance.
When you use vertical clustering, be cautious. The only way to determine what is correct for your environment and application is to tune a single instance of an application server for throughput and performance, and then add it to a cluster and incrementally add additional cluster members. Test performance and throughput as each member is added to the cluster. When you configure a vertical scaling topology, always monitor memory usage carefully; do not exceed the amount of addressable user space or the amount of available physical memory on a machine.
IBM HTTP Server
The first tier is the HTTP server, which handles requests from Web clients and relieves the application server from serving static content. It provides a logical URL that encompasses ancillary applications, such as the IBM Rational® Asset Manager application, the Rational Asset Manager Help application, and the Rational Asset Manager Asset Based Development application. Note that in a large configuration, a cache server is deployed in front of the HTTP server.
Load Balancer
A load balancer distributes load across a number of systems. If you have more than one HTTP server, you must use a load balancer. For moderately sized deployments, use a software-based load balancer, such as Edge Component. For larger deployments, which support a large number of concurrent users, use a hardware-based load balancer.
Cache Proxy
A forward-caching proxy system stores application data for clients in a cache and relieves load from other server systems. If your Rational Asset Manager server supports a moderate number of concurrent users, you need only one forward proxy system. If your Rational Asset Manager server supports a large number of concurrent users, you might need multiple proxy systems.
Scalability
Scalability is how easily a site can expand. The number of users, assets, and communities for a given Rational Asset Manager installation must be able to expand to support an increasing load. The increasing load can come from many sources, such as adding additional teams or departments to the set of Rational Asset Manager users or importing large sets of historical assets into Rational Asset Manager.
Scalability is an architectural consideration that drives the design of your architecture. While you might improve scalability by adding additional hardware to your system, it might not improve performance and throughput.
The choice between scaling up (vertical clustering) and scaling out (horizontal clustering) is usually a decision of preference, cost, and the nature of your environment. However, application resiliency issues can change your preferences.
Scaling up implements vertical scaling on a small number of machines with many processors and large amounts of addressable user space memory. This can present significant single points of failure (SPOF) because your environment is composed of fewer, larger machines.
Scaling out uses a larger number of smaller machines. In this scenario, it is unlikely that the failure of one small server will create a complete application outage. However, scaling out creates more maintenance needs.
Availability
Also referred to as fault-tolerance or resiliency, availability is the ability of a system to provide operational continuity in spite of failed components and systems. Architectural decisions, such as horizontal versus vertical scaling and using backup load balancers (that is, dispatchers), can impact the availability of your Rational Asset Manager application. Consider availability for all shared resources, networks, and disk storage systems that compose your Rational Asset Manager environment. In a fault-tolerant design, if an application or server fails, other members of the cluster can continue to service clients.
There are two categories of failover: server failover and session failover. When server failover occurs, sessions on the failed cluster member are lost (a user will have to log in again), but services remain available to the clients. In session failover, the existing sessions are resumed by other members of the cluster as if the cluster member had not failed (although the last transaction may have been lost). If a redundant infrastructure is configured to support server failover, Rational Asset Manager will support it.
Restricting unused HTTP methods
The HTTP method is supplied in the request line and specifies the operation that the client has requested. Browsers generally use just two methods to access and interact with web sites: GET for queries that can be safely repeated, and POST for operations that may have side effects. This means we should disable unused HTTP methods, such as PUT, DELETE, TRACE, TRACK, COPY, MOVE, LOCK, UNLOCK, PROPFIND, PROPPATCH, SEARCH, and MKCOL. Check with the application teams whether they need any of these methods for the application to work before disabling them.
Testing before limiting HTTP methods:
telnet rmohan.com 80
Trying xx.xx.xx.xx…
Connected to rmohan.com.
Escape character is ‘^]’.
OPTIONS / HTTP/1.1
Host: rmohan.com
HTTP/1.1 200 OK
Date: Thu, 14 Sep 2010 00:11:57 GMT
Server: Apache Web Server
Content-Length: 0
Allow: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, PATCH, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK, TRACE
Connection closed by foreign host.
Your IBM HTTP Server configuration file (httpd.conf) has two relevant scopes: the main (global) section and the VirtualHost sections. You need to add the following directives in both places.
I am implementing this with the mod_rewrite module, so first make sure that mod_rewrite is enabled. Then add the following lines to the main and VirtualHost sections of your httpd.conf:
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^(PUT|DELETE|TRACE|TRACK|COPY|MOVE|LOCK|UNLOCK|PROPFIND|PROPPATCH|SEARCH|MKCOL)
RewriteRule .* - [F]
Restart the web server after adding the above lines.
Now, when someone tries to use one of these HTTP methods, they will get a 403 Forbidden response, since we specified [F] in the rewrite rule.
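An alternative to blacklisting methods is to whitelist the ones you do allow. A sketch using Apache 2.2-style access control (standard Apache directives; verify behavior against your IHS level):

```apache
<Location />
    <LimitExcept GET POST HEAD OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Location>
```

Note that TRACE cannot be blocked through Limit/LimitExcept; use the TraceEnable off directive (Apache 2.0.55 and later) for that.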
Testing after adding and restarting web server
telnet rmohan.com 80
Trying xx.xx.xx.xx…
Connected to rmohan.com.
Escape character is ‘^]’.
OPTIONS / HTTP/1.1
Host: rmohan.com
HTTP/1.1 200 OK
Date: Thu, 14 Sep 2010 00:15:44 GMT
Server: Apache Web Server
Content-Length: 0
Allow: GET, POST
Connection closed by foreign host.
Testing TRACE methods
telnet rmohan.com 80
Trying xx.xx.xx.xx…
Connected to rmohan.com.
Escape character is ‘^]’.
TRACE / HTTP/1.0
Host: rmohan.com
testing… <- press ENTER twice

HTTP/1.1 403 Forbidden
Date: Thu, 14 Sep 2010 00:18:31 GMT
Server: Apache Web Server
Content-Length: 320
Connection: close
Content-Type: text/html; charset=iso-8859-1
Forbidden
You don’t have permission to access / on this server.
Connection closed by foreign host.
Disable verbose HTTP headers:
You might have seen this: when the web server (Apache or IBM HTTP Server) throws an error page, it can reveal information about its version, build, modules, and so on. This is a security issue, since you are giving away details about your web server. For example, take a look at this:
Server: Apache/2.0.53 (Ubuntu) PHP/4.3.10-10ubuntu4 Server at xx.xx.xx.xx Port 80
The Server header exposes version and variant information about the operating system and Apache software running on the machine. This indirectly exposes possible security holes to attackers, or at least makes it easier for them to identify your system and its available attack points.
To ensure that the Apache HTTP web server does not broadcast this information publicly, modify the two directives ServerTokens and ServerSignature in the httpd.conf configuration file.
ServerTokens
This directive configures what you return as the Server HTTP response Header. The built-in default is ‘Full’ which sends information about the OS-type and compiled in modules. The recommended value is ‘Prod’ which sends the least information.
Options: Full | OS | Minor | Minimal | Major | Prod
“ServerTokens Prod”
This configures Apache to return only "Apache" as the product in the Server response header on every page request, suppressing OS and major/minor version info.
ServerSignature
This directive lets you add a line containing the server version and virtual host name to server-generated pages. It is recommended to set it to Off; set it to EMail to also include a mailto: link to the ServerAdmin.
Options: On | Off | EMail
“ServerSignature Off”
This instructs Apache not to display a trailing footer line under server-generated documents, which would otherwise show the server version number, the ServerName of the serving virtual host, the email setting, and so on.
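Putting the directives together, a minimal httpd.conf hardening sketch (TraceEnable is available in Apache 2.0.55 and later, including IHS builds based on it):

```apache
# Suppress version details and disable TRACE
ServerTokens Prod
ServerSignature Off
TraceEnable off
```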
Changes in v8.5
Administration changes
- Some new required ports
- A number of minor default setting changes
- Information provided in the v8.5 InfoCenter
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.nd.doc/ae/welc_transition.html
Development changes
- Development tool changes
- Java 7 upgrade: Java 6 is the default
  Breaking changes: AWT, Internationalization, IO, JAXP, Language, Networking, Text and Utilities
- JPA 2
  Custom settings are provided for compatibility
Conversion of existing applications to Liberty
Changes in v8.0
Administration changes
- Installation changes
- Centralized Installation Manager
- Install Factory alternative
- Web server plug-in installation and configuration
- Java garbage collection and dump format changes
- Security default changes
- Other miscellaneous changes
Development changes
- Development tool changes
- JEE 1.6
- WebSphere API changes
Changes in v7.0
Administration changes
- Session Initiation Protocol (SIP) migration considerations
- z/OS migration tool
- Administration script required changes
- Port usage
- Security migration considerations
- Mixed version considerations
Development changes
- Development tool changes
- JRE 6 impacts
- JEE 5 impacts
- WebSphere removed features
- Increased usage of Open Source implementations included in WAS
Changes in v6.1
Administration changes
- Administration script required changes
- z/OS migration tool
- Install response file format changes
- Port usage
- Profile directory structure
- New administrative tool IDE
- Migration and Feature Packs
Development changes
- Development tool changes
- JRE 5 impacts
- WebSphere changes and removed features
Changes in v6.0
Administration changes
- Administration script required changes
- Port usage
- Profiles
- JMS engine redesign
- Core group considerations
Development changes
- Development tool changes
- J2EE 1.4 impacts
WebSphere API migration details

On a regular basis, IBM releases WebSphere fix packs and individual interim fixes for public download. Fix packs contain multiple corrections and improvements in the areas of function, security, stability, and performance. It is recommended that you run WebSphere at current fix pack levels.
PROCEDURE
1. Download the Update Installer for WebSphere Software
2. Install the Update Installer.
3. Extract the Update Installer package to a temporary directory.
4. Navigate to the UpdateInstaller subdirectory and start the installation wizard with the following command:
a. Windows: install.exe
b. UNIX/Linux: ./install
Follow the instructions in the wizard. When the Installation summary panel is displayed, review the summary which provides links to information resources for the Update Installer. Click Next to begin the installation or click Back to make changes to previous panels. After the installation process completes, verify the success of the installer program.
5. Download packages to the maintenance directory:
a. Windows: was_root_dir\UpdateInstaller\maintenance
b. UNIX/Linux: was_root_dir/UpdateInstaller/maintenance
6. Stop all WebSphere Application Server and IBM HTTP Server processes.
7. Back up the configuration using backupConfig:
a. C:\WebSphere\AppServer\bin>backupConfig -nostop
8. Install the maintenance packages.
Navigate to the UpdateInstaller directory under was_root_dir and start the application with the following command:
a. Windows: update.bat
b. UNIX/Linux: ./update.sh
On the Product Selection page, select the installation location from the dropdown list, type it into the edit box, or use Browse to browse to select the location.
On the Maintenance Operation Selection page, select Install maintenance package.
On the Maintenance Package Directory Selection page, specify the location of the downloaded fix packs (was_root_dir\UpdateInstaller\maintenance).
On the Available Maintenance Package to Install page, accept the selection for recommended updates. Then, click Next.
After the installation finishes, verify that the results say that the installation completed successfully.
9. Restore a configuration using restoreConfig
C:\WebSphere\AppServer\bin>restoreConfig WebSphereConfig_[date].zip
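The backup step above can be scripted with a date-stamped archive name. This sketch only prints the command; WAS_HOME is an assumption, so adjust it to your install root and remove the echo to actually take the backup:

```shell
# Sketch: date-stamped backupConfig before applying a fix pack
WAS_HOME=/opt/IBM/WebSphere/AppServer      # assumption: adjust to your install root
STAMP=$(date +%Y%m%d-%H%M%S)
cmd="${WAS_HOME}/bin/backupConfig.sh WebSphereConfig_${STAMP}.zip -nostop"
echo "$cmd"    # review, then run the command to take the backup for real
```

Restore later with restoreConfig.sh and the same date-stamped archive name.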
We all know that the performance of your e-business hosting environment is key to the overall success of your organization's e-business, so there is always a major focus on tuning the application hosting environment. WebSphere Application Server provides tunable settings for its major components, enabling you to adjust the runtime environment to match the characteristics of your application. For many applications, the default settings are sufficient for optimal performance, whereas some applications may need tuning, such as a larger heap size or more connections. Performance tuning can yield significant gains even if an application is not optimized for performance. But remember, it is not only WebSphere Application Server tuning that matters: other factors, like application design and hardware, also affect overall performance.
In this article, I'll explain how to tune the application hosting environment for better performance. The article focuses on the tunable parameters of the major WebSphere Application Server components and provides insight into how these parameters affect performance.
Here, we will discuss the majority of WebSphere's tuning parameters. Let's take them in three categories.
1. JVM and DB connectivity
2. Messaging/JMS
3. Others (like Caching, transport channels etc)
1. JVM and DB Connectivity:
In section 1, we discuss the JVM and DB connectivity related tuning parameters, namely:
a. JVM heap size
b. Thread pool size
c. Connection pool size
d. Data source statement cache size
e. ORB pass by reference
1A. JVM Heap size:
Heap size is the most important tuning parameter related to JVM, as it directly influences the performance.
- A small heap makes garbage collection (GC) occur more frequently, and fewer objects can be created in the JVM, so you may see application failures.
- Increasing the heap size allows more objects to be created before a GC is triggered, which lets the application run longer between GC cycles. But a larger heap also means longer GC pauses, so during those periods your application may not respond.
Another important parameter in JVM tuning is the garbage collection policy.
Three main GC policies are available:
- optthruput: performs the mark and sweep operations during garbage collection while the application is paused, to maximize application throughput. This is the default setting.
- optavgpause: performs the mark and sweep concurrently while the application is running, to minimize pause times. This setting provides the best application response times.
- Gencon: Treats short-lived and long-lived objects differently to provide a combination of lower pause times and high application throughput.
Tuning:
Tuning the JVM heap size is about striking a balance between the time between two GCs and the time needed for a GC to complete. The first step in tuning heap size is to enable verbose GC. Enabling verbose GC prints useful JVM information, such as the amounts of free and used heap and the interval between GCs. All this information is logged to native_stderr.log. You can use various tools to visualize the heap usage.
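As a rough illustration of what you can do with the verbose GC output, this sketch counts allocation-failure GC events. The <af> tag comes from the older IBM J9 verbosegc format, so check your own native_stderr.log for the exact tags your JVM level emits:

```shell
# Create a two-line sample log (real data comes from native_stderr.log)
cat > /tmp/sample_gc.log <<'EOF'
<af type="tenured" id="1" timestamp="Jan 01 00:00:01 2012">
<af type="tenured" id="2" timestamp="Jan 01 00:00:09 2012">
EOF
grep -c '<af ' /tmp/sample_gc.log    # -> 2 allocation-failure GCs
```
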
Defaults:
WebSphere Application Server's default heap settings are 50 MB initial and 256 MB maximum.
Note: What happens if we set initial and max heap sizes same?
This prevents the JVM from dynamically resizing the heap and avoids the overhead of allocating and de-allocating memory. But JVM startup will be slower, as it has to allocate the full heap up front.
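For example, the generic JVM arguments (set in the console under Application servers > server_name > Process definition > Java Virtual Machine) might look like this; the sizes and policy are illustrative, not recommendations:

```
-Xms1024m -Xmx1024m -verbose:gc -Xgcpolicy:gencon
```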
Tools to analyze verbose GC output: IBM Monitoring and Diagnostic Tools for Java, Garbage Collection and Memory Visualizer (integrated into IBM Support Assistant).
1B. Thread Pools:
A thread pool enables components of the server to reuse threads, eliminating the need to create new threads at runtime to service each new request.
Most commonly used thread pools in application server are:
1. Default: used when requests come in for a message driven bean (MDB) or if a particular transport chain has not been defined to a specific thread pool.
2. ORB: used when remote requests come over RMI/IIOP for an enterprise bean from an EJB application client, remote EJB interface or another application server.
3. Web container: used when the requests come over http.
Tuning parameters for Thread pools:
– Minimum size: The minimum number of threads permitted in the pool. When an application server starts, no threads are initially assigned to the thread pool. Threads are added to the thread pool as the workload assigned to the application server requires them, until the number of threads in the pool equals the number specified in the minimum size field. After this point in time, additional threads are added and removed as the workload changes. However, the number of threads in the pool never decreases below the number specified in the minimum size field, even if some of the threads are idle.
– Maximum size: Specifies the maximum number of threads to maintain in the default thread pool.
– Thread inactivity timeout: Specifies the amount of inactivity (in milliseconds) that should elapse before a thread is reclaimed. A value of 0 indicates not to wait, and a negative value (less than 0) means to wait forever.
Defaults:
Thread pool    | Minimum | Maximum | Inactivity timeout
Default        | 20      | 20      | 5000 ms
ORB            | 10      | 50      | 3500 ms
Web Container  | 50      | 50      | 60000 ms
Tuning
WebSphere Application Server's integrated Tivoli Performance Viewer lets you view the Performance Monitoring Infrastructure (PMI) data associated with thread pools, if you have enabled PMI.
In the Tivoli Performance Viewer, select the server and expand the parameters list. Go to Performance Modules > Thread Pools and select the web container. You can see the pool size, which is the average number of threads in the pool, and the active count, which is the number of concurrently active threads. Using this information, you can decide how many threads a pool requires. You can also use the performance advisors to get recommendations.
1C. Connection pool
When an application uses a database resource, a connection must be established, maintained and then released when the operation is complete. These processes consume time and resources. The complexity of accessing data from web applications imposes a strain on the system.
An application server enables you to establish a pool of back-end connections that applications can share on the application server. Connection pooling spreads the connection overhead across several user requests, thereby conserving application resources for future requests.
Connection pooling is the process of creating predefined number of database connections to a single data source. This process allows multiple users to share connections without requiring each user to incur the overhead of connecting and disconnecting from the database.
Tuning Options:
- Minimum Connections: The minimum number of physical connections to maintain. If the size of the connection pool is at or below the minimum connection pool size, an unused timeout thread will not discard physical connections. However, the pool does not create connections solely to ensure that the minimum connection pool size is maintained.
- Maximum Connections: The maximum number of physical connections that can be created in this pool. These are the physical connections to the back-end data store. When this number is reached, no new physical connections are created; requestors must wait until a physical connection that is currently in use is returned to the pool, or until a ConnectionWaitTimeoutException is thrown
- Thread Inactivity timeout: Specifies the amount of inactivity (in milliseconds) that should elapse before a thread is reclaimed. A value of 0 indicates not to wait and a negative value means to wait forever.
Tuning:
The goal of tuning connection pool is to ensure that each thread that needs a connection to the database has one, and the requests are not queued up waiting to access the database. Since each thread performs a task, each concurrent thread needs a database connection.
- Generally, the maximum connection pool size should be at least as large as the maximum size of the web container thread pool.
- Use the same method to both obtain and close connections.
- Minimize the number of JNDI lookups.
- Do not declare connections as static objects.
- Do not close connections in the finalize method.
- If you open a connection, close the connection.
- Do not manage data access in container managed persistence (CMP) beans.
1D. Data source statement cache size
The data source statement cache size specifies the number of prepared JDBC statements that can be cached per connection. A callable statement removes the need for the SQL compilation process entirely by making a stored procedure call. A plain statement is a class that can execute an arbitrary SQL string that is passed to it; the SQL is compiled prior to each execution, which is a slow process. Applications that repeatedly execute the same SQL statement can decrease processing time by using a prepared statement instead.
WebSphere Application Server data sources optimize the processing of prepared statements and callable statements by caching statements that are in use on an active connection.
Tuning
One method is to review the application code for all unique prepared statements and ensure the cache size is larger than that number.
The second option is to iteratively increase the cache size and run the application under peak steady-state load until the PMI metrics report no more cache discards.
1E. ORB pass by reference
The ORB [object request broker] pass by reference option determines if pass by reference or pass by value semantics should be used when handling parameter objects involved in an EJB request. The ORB pass by reference option treats the invoked EJB method as a local call and avoids the requisite object copy.
The ORB pass by reference option will only provide a benefit when the EJB client and the invoked EJB module are located within the same classloader. This means both the EJB client and the EJB module must be deployed in the same EAR file and run on the same application server instance. If the EJB client and EJB module are mapped to different application server instances, the EJB module must be invoked remotely using pass by value semantics.
By default, this option is disabled and a copy of each parameter object is made and passed to the invoked EJB method.
2. Messaging/JMS components tuning
There are two configurations that can affect the performance of the messaging components in WebSphere:
1. Message store type
2. Message reliability
2A. Message store type:
Message stores play an essential part in the operation of messaging engines. Each messaging engine has one and only one message store, which can be either a file store or a data store. A message store enables a messaging engine to preserve operating information and to retain those objects that messaging engines need for recovery in the event of a failure.
- Local Derby database: This is a local, in-process Derby database used to store the operational information and messages associated with the messaging engine. It is best suited to development environments. This configuration uses memory within the application server to manage the stored messages.
- File based: This is the default option. If this is used, operating information and messages are persisted to the file system. If we are using faster disks or RAID etc, this can perform better than the derby database option.
- Remote Database: In this, a database hosted on a different machine acts as a data store. This enables the application server JVM to free up the memory it used in case of Derby or file store configurations. This is the best option for Production environments.
Tuning considerations:
1. Better performance: To achieve the best performance using a data store, you often need to use a separate remote database server. A file store can exceed the performance of a data store that uses a remote database server, without needing a separate server.
2. Low administration requirements: The file store combines high throughput with little or no administration. This makes it suitable for those who do not want to worry about where the messaging engine is storing its recoverable data. A file store improves on the throughput, scalability, and resilience of Derby.
3. Lower deployment costs: Use of a data store might require database administration to configure and manage your messaging engines. A file store can be used in environments without a database server.
2B. Message reliability
WebSphere provides five options for message reliability:
- Best effort non-persistent: Messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable, or as a result of constrained system resources.
- Express non-persistent: Messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable.
- Reliable non-persistent: Messages are discarded when a messaging engine stops or fails.
- Reliable persistent: Messages might be discarded when a messaging engine fails.
- Assured persistent: Messages are not discarded.
Persistent messages are always stored in some form of persistent store; non-persistent messages are generally stored in volatile memory. Message reliability and message delivery speed are inversely proportional: non-persistent messages are delivered fast but will not survive a messaging engine stop or crash, whereas persistent messages survive but are delivered more slowly.
To learn more about message reliability, refer to: http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.doc/tasks/tjm0003_.html
3. Others
3A. Caching
Caching is always important in any performance tuning exercise. WebSphere servers provide several options for caching as well.
- DynaCache provides an in-memory caching service for objects and page fragments generated by the server. The DistributedMap and DistributedObjectCache interfaces can be used within an application to cache and share Java objects by storing references to these objects in the cache.
- Servlet caching enables servlet and JSP responses to be stored and managed by a set of cache rules.
For more information on this topic, refer to 'dynamic caching' posted earlier.
3B. Disable unused services
Again, this is generic advice for any performance tuning: always turn off features you do not require. This ensures WebSphere uses less memory. One such example is PMI; if you use a third-party application for monitoring and do not need the built-in PMI features, turn it off.
3C. Web Server
Try to keep the web server on a different machine, so that WebSphere and the web server do not have to share operating system resources such as CPU and memory.
3D. Http transport connections
A persistent connection means that an outgoing HTTP response uses a keep-alive connection instead of one that closes after a single request/response exchange. By increasing the maximum number of persistent requests per connection, you can see some performance gain; the number of requests handled per connection can also be tuned. Note that keeping a connection open can sometimes be a security concern.
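On the web server side, the same behavior is controlled by the standard Apache/IBM HTTP Server keep-alive directives in httpd.conf (the values below are illustrative examples, not tuning recommendations):

```apache
# Allow clients to reuse a connection for multiple requests
KeepAlive On
# Maximum requests served per persistent connection (0 = unlimited)
MaxKeepAliveRequests 100
# Seconds to wait for the next request on the same connection
KeepAliveTimeout 10
```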
httpd.conf
CustomLog "|/opt/IBM/HTTPServer/bin/rotatelogs -l /opt/IBM/HTTPServer/logs/access_log.%Y-%m-%d-%H_%M_%S 86400" common
ErrorLog "|/opt/IBM/HTTPServer/bin/rotatelogs -l /opt/IBM/HTTPServer/logs/error_log.%Y-%m-%d-%H_%M_%S 86400"
ErrorLog /usr/IBMHttpServer/logs/error_log
ErrorLog "|/usr/IBMHttpServer/bin/rotatelogs /data/httparch/error_log.%d-%b-%Y-%H-%M 86400 -360"
#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel debug
#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
CustomLog /usr/IBMHttpServer/logs/access_log common
CustomLog "|/usr/IBMHttpServer/bin/rotatelogs /data/httparch/access_log.%d-%b-%Y-%H-%M 86400 -360" common
The access_log setup works correctly: entries append to the regular access_log and are also piped to a dated access_log in a different location, rotating daily at midnight, so there is a daily access_log file plus one large cumulative access_log. The error_log, on the other hand, only produces the daily log and no longer appends to the regular error_log; since this change, the large error_log has not been written to, although the daily logs are created. This is expected behavior: CustomLog may appear multiple times and each directive logs independently, whereas ErrorLog is a single-valued directive, so the later piped ErrorLog overrides the earlier static one.
WebSphere® Application Server V8.5.5 extends the capabilities provided in version 8.5, including significant enhancements to the Liberty profile, a highly composable, fast to start, and ultra lightweight profile of the application server that is optimized for developer productivity and web application deployment.
Enhancements to the Liberty profile are as follows:
• Certification to the Java™ EE 6 Web Profile, providing the assurance that applications leverage standards-compliant programming models
• Additional programming models such as web services that enable the expansion of Liberty profile applications beyond web applications
• New messaging capabilities, including support for Java Message Service (JMS) and message-driven beans, and a new single server message provider
• Ability to add Liberty features through a new system programming interface, enabling the customization of Liberty profile capabilities to meet your business needs through insertion of custom Liberty features
• Liberty support for the NoSQL database MongoDB, a scalable, well-performing, and easy-to-use document-style NoSQL database
• Enhancement to security support, such as federated repositories, custom user registry, trust association interceptor, password hashing, and encryption of passwords in server configurations, which improves security for Liberty application deployments
• High Performance Extensible Logging (HPEL) for Liberty servers, which enables better administration and serviceability
• New Liberty administration features
• Clustering of server instances
• Distributed caching with WebSphere eXtreme Scale
• Ability to install the entitled WebSphere Application Server edition on developer machines for development and unit testing purposes
• Support for WebSphere Web Cache (DynaCache)
• WebSphere Application Server V8.5.5 tooling bundles updated with Rational® Application Developer (RAD) V9 and the WebSphere Application Server Developer Tools (WDT) V8.5.5
WebSphere Application Server V8.5.5 also introduces a new Liberty profile-only solution. The WebSphere Application Server Liberty Core edition is built to leverage the lightweight and dynamic aspects of the Liberty profile. Scoped to the capabilities of Web Profile applications, the new edition is ideal for lightweight production servers.
IBM® WebSphere Application Server is the leading open standards-based application foundation, offering accelerated delivery of innovative applications and unmatched operational efficiency, reliability, administration, security, and control.
1. List the files in current directory sorted by size ?
ls -l | grep '^-' | sort -k5 -nr
Here sort orders the long listing numerically on the fifth column (the file size), largest first.
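A quick self-contained check of that pipeline (the directory and file names here are invented for the demo):

```shell
# Build two files of known, different sizes:
mkdir -p sizedemo && cd sizedemo
printf '12345' > big      # 5 bytes
printf '1' > small        # 1 byte
# Sort the long listing numerically on column 5 (the size), largest first,
# and print just the file names: big comes before small.
ls -l | grep '^-' | sort -k5 -nr | awk '{print $NF}'
```

Note that awk '{print $NF}' assumes file names contain no spaces.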
2. List the hidden files in current directory ?
ls -a1 | grep "^\."
3. Delete blank lines in a file ?
cat sample.txt | grep -v '^$' > new_sample.txt
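The blank-line removal can be verified with a short self-contained run (the file names match the example above but are arbitrary):

```shell
# Build a sample file containing blank lines:
printf 'alpha\n\nbeta\n\n\ngamma\n' > sample.txt
# grep -v '^$' drops every line that is empty:
cat sample.txt | grep -v '^$' > new_sample.txt
cat new_sample.txt   # alpha, beta, gamma on three lines
```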
4. Search for a sample string in particular files ?
grep "Debug" *.conf
Here grep searches for the string "Debug" in all files with the extension ".conf" under the current directory.
5. Display the newly appended lines of a file while data is being appended to it by some process ?
tail -f Debug.log
Here tail shows the data newly appended to Debug.log by some process/user.
6. Display the disk usage of file sizes under each directory in the current directory ?
du -k * | sort -nr (or) du -k . | sort -nr
7. Change to a directory which has a very long name ?
cd CDMA_3X_GEN*
Here the original directory name is "CDMA_3X_GENERATION_DATA".
8. Display the all files recursively with path under current directory ?
find . -depth -print
9. Set the DISPLAY automatically for the current new user ?
export DISPLAY=`eval 'who am i | cut -d"(" -f2 | cut -d")" -f1'`
Here the command mixes single quotes, double quotes, and grave accents (backticks); observe carefully.
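The quoting is easier to follow if you run the inner cut pipeline against a sample `who am i` line (the line below is made up for illustration):

```shell
# A typical `who am i` line ends with the remote host in parentheses:
line='mohan pts/1 2024-01-01 10:00 (192.168.1.5)'
# Take everything after '(' and then everything before ')':
echo "$line" | cut -d'(' -f2 | cut -d')' -f1
# prints: 192.168.1.5
```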
10. Display the processes which are running under your username ?
ps -aef | grep mohan
Here "mohan" is the username.
11. List some Hot Keys for bash shell ?
Ctrl+l – Clears the screen.
Ctrl+r – Searches through previously entered commands.
Ctrl+u – Clears the typed text before the cursor.
Ctrl+a – Places the cursor at the beginning of the command line.
Ctrl+e – Places the cursor at the end of the command line.
Ctrl+d – Exits the shell (when the line is empty).
Ctrl+z – Suspends the currently running foreground process.
12. Display the files in the directory by file size ?
ls -ltr | sort -nr -k 5
13. How to save man pages to a file ?
man <command> | col -b > <file>
Example: man top | col -b > top_help.txt
14. How to know the date & time when a script is executed ?
Add the following line to the shell script:
eval echo "Script is executed at `date`" >> timeinfo.inf
Here "timeinfo.inf" records the date & time of each execution, building up a history across runs.
15. How do you find out drive statistics ?
iostat -E
16. Display disk usage in Kilobytes ?
du -k
17. Display top ten largest files/directories ?
du -sk * | sort -nr | head
18. How much space is used for users in kilobytes ?
quot -af
19. How to create null file ?
cat /dev/null > filename1
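A quick check that the redirection really produces a zero-byte file (the file name is arbitrary):

```shell
# Creates the file if absent, or truncates it to zero bytes if present:
cat /dev/null > filename1
# Byte count of the file; tr strips the padding some wc builds emit.
wc -c < filename1 | tr -d ' '   # prints 0
```

An equivalent, slightly shorter idiom is the bare redirection `> filename1`.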
20. Access common commands quicker ?
ps -ef | grep -i $@
Here $@ is the search term; the line is intended to be used inside an alias or shell function, so the command name is passed as an argument.
21. Display the page size of memory ?
pagesize -a
22. Display Ethernet Address arp table ?
arp -a
23. Display the no.of active established connections to localhost ?
netstat -a | grep EST
24. Display the state of interfaces used for TCP/IP traffice ?
netstat -i
25. Display the parent/child tree of a process ?
ptree <pid>
Example: ptree 1267
26. Show the working directory of a process ?
pwdx <pid>
Example: pwdx 1267
27. Display the processes current open files ?
pfiles <pid>
Example: pfiles 1267
28. Display the inter-process communication facility status ?
ipcs
29. Display the top most process utilizing most CPU ?
top -b 1
30. Alternative for top command ?
prstat -a
31. How can we find the RAM size on a Solaris server ?
prtconf | grep Memory (or) prtdiag | grep Memory
32. How to find whether the OS instance is 32-bit or 64-bit ?
isainfo -v (or) isainfo -kv
All of the standard performance analysis tools are available (for example, vmstat, iostat, mpstat, pstack, pfiles, gcore, libumem), and a number of additional tools are also provided, including lockstat/plockstat, dtrace, prstat, cpustat, and trapstat. These tools can help identify potential problems or bottlenecks that might be responsible for unnecessarily low performance.
1. vmstat – Report virtual memory statistics
vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.
The first report produced gives averages since the last reboot. Additional reports give information on a sampling
period of length delay. The process and memory reports are instantaneous in either case.
2. iostat – report I/O statistics
The iostat utility iteratively reports terminal, disk, and tape I/O activity, as well as CPU utilization. The first line of output is for all time since boot; each subsequent line is for the prior interval only.
iostat 2 4
3. mpstat – report per-processor or per-processor-set statistics
The mpstat command reports processor statistics in tabular form. Each row of the table represents the activity of one processor. The first table summarizes all activity since boot. Each subsequent table summarizes activity for the preceding interval.
All values are rates listed as events per second unless otherwise noted.
mpstat -ap 2 4
4. prstat – report active process statistics
The prstat utility iteratively examines all active processes on the system and reports statistics based on the selected output mode and sort order. prstat provides options to examine only processes matching specified PIDs, UIDs, zone IDs, CPU IDs, and processor set IDs.
-a
Report information about processes and users. In this mode prstat displays separate reports about processes and users at the same time.
-p pidlist
Report only processes whose process ID is in the given list.
-t
Report total usage summary for each user. The summary includes the total number of processes or LWPs owned by the user, total size of process images, total resident set size, total cpu time, and percentages of recent cpu time and system memory.
-u euidlist
Report only processes whose effective user ID is in the given list. Each user ID may be specified as either a login name or a numerical user ID.
Proc tools:
The proc tools are utilities that exercise features of /proc (see proc(4)). Most of them take a list of process-ids (pid). The tools that do take process-ids also accept /proc/nnn as a process-id, so the shell expansion /proc/* can be used to specify all processes in the system.
pflags Print the /proc tracing flags, the pending and held signals, and other /proc status information for each lwp in each process.
pcred Print or set the credentials (effective, real, saved UIDs and GIDs) of each process.
pldd List the dynamic libraries linked into each process, including shared objects explicitly attached using dlopen(3C).
psig List the signal actions and handlers of each process. See signal.h(3HEAD).
pstack Print a hex+symbolic stack trace for each lwp in each process.
pfiles Report fstat(2) and fcntl(2) information for all open files in each process. In addition, a path to the file is reported if the information is available from /proc/pid/path. This is not necessarily the same name used to open the file. See proc(4) for more information.
pwdx Print the current working directory of each process.
pstop Stop each process (PR_REQUESTED stop).
prun Set each process running (inverse of pstop).
pwait Wait for all of the specified processes to terminate.
ptime Time the command, like time(1), but using microstate accounting for reproducible precision. Unlike time(1), children of the command are not timed.
OPTIONS
The following options are supported:
-F Force. Grabs the target process even if another process has control.
-n (psig and pfiles only) Sets non-verbose mode. psig displays signal handler addresses rather than names. pfiles does not display verbose information for each file descriptor; instead, it limits its output to the information that would be retrieved if the process applied fstat(2) to each of its file descriptors.
-r (pflags only) If the process is stopped, displays its machine registers.
-v (pwait only) Verbose. Reports terminations to standard output.