We will quickly review some of the basic concepts of cell, node, server, and so on.
An Application Server, in this context, is a single WebSphere application server.
A server is a runtime environment: a Java process responsible for serving J2EE requests (for example, serving JSP pages, serving EJB calls, consuming JMS queues, and so on).
The Admin Console is a browser-based application, pre-installed in your WebSphere environment, that enables you to manage your application servers and applications.
A cell is a grouping of nodes into a single administrative domain. For WebSphere, this means that if you group several servers within a cell, you can use one admin console to administer them all.
The Network Deployment Manager is an application server running an instance of the Admin Console; it gives you administrative control over all other application servers in the same cell.
The Deployment Manager is a process (a special WebSphere instance) responsible for managing the installation and maintenance of applications and other resources related to a J2EE environment. It also maintains user repositories for authentication and authorization for WebSphere and other applications running in the environment. The Deployment Manager communicates with the nodes through another special WebSphere process, the Node Agent.
A node is a grouping of servers that share a common configuration on a physical machine. It consists of a Node Agent and one or more server instances. Multiple WebSphere nodes can be configured on the same physical computer system.
The Node Agent is the administrative process responsible for spawning and killing server processes and for synchronizing configuration between the Deployment Manager and the node. A single Node Agent supports all application servers running on the same node.
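To make the cell/node/server hierarchy concrete, here is a minimal wsadmin (Jython) sketch, assuming you are connected to the Deployment Manager; it simply walks the configuration and prints the cell, each node, and the servers defined on that node (the names printed are whatever your environment contains):

# Print the cell, its nodes, and the servers defined on each node.
cellName = AdminControl.getCell()
print 'Cell: ' + cellName
for node in AdminConfig.list('Node').splitlines():
    nodeName = AdminConfig.showAttribute(node, 'name')
    print '  Node: ' + nodeName
    # Servers on this node include the node agent and any application servers.
    for server in AdminConfig.list('Server', node).splitlines():
        print '    Server: ' + AdminConfig.showAttribute(server, 'name')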
Clusters are virtual units that group servers. They can contain multiple instances of the same application server and can span multiple nodes. Resources added to the cluster are propagated to every server that makes up the cluster; this usually affects the nodes in the server grouping.
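As a hedged illustration (the names MyCluster and Node01 below are placeholders, not anything defined by the product), a cluster and its first member can be created from wsadmin roughly like this:

# Create an empty cluster, add its first member on a chosen node,
# then save the change to the master configuration.
AdminTask.createCluster('[-clusterConfig [-clusterName MyCluster -preferLocal true]]')
AdminTask.createClusterMember('[-clusterName MyCluster '
    '-memberConfig [-memberNode Node01 -memberName MyCluster_member1] '
    '-firstMember [-templateName default]]')
AdminConfig.save()

The exact options (for example, the template used for the first member) vary by WebSphere version, so treat this as a sketch rather than a recipe.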
Horizontal clustering
Horizontal clustering, sometimes referred to as scaling out, is adding physical machines to increase the performance or capacity of a cluster pool. Typically, horizontal scaling increases the availability of the clustered application at the cost of increased maintenance. Horizontal clustering can add capacity and increased throughput to a clustered application; use this type of clustering in most instances.
Vertical clustering
Vertical clustering, sometimes referred to as scaling up, is adding WebSphere Application Server instances to the same machine. Vertical scaling is useful for taking advantage of unused resources in large SMP servers. You can use vertical clustering to create multiple JVM processes that, together, can use all of the available processing power.
Hybrid horizontal and vertical clustering
Hybrid clustering is a combination of horizontal and vertical clustering. In this configuration, disparate hardware configurations are members of the same cluster. Larger, more capable machines might contain multiple WebSphere Application Server instances; smaller machines might be horizontally clustered and only contain one WebSphere Application Server instance.
When you use vertical clustering, be cautious. The only way to determine what is correct for your environment and application is to tune a single instance of an application server for throughput and performance, and then add it to a cluster and incrementally add additional cluster members. Test performance and throughput as each member is added to the cluster. When you configure a vertical scaling topology, always monitor memory usage carefully; do not exceed the amount of addressable user space or the amount of available physical memory on a machine.
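To make the distinction between horizontal and vertical clustering concrete, here is a hedged wsadmin (Jython) sketch that continues the MyCluster example above; Node01 and Node02 are placeholder node names. Horizontal scaling adds a member on another node (another machine), while vertical scaling adds another member on a node that already hosts one:

# Horizontal scaling: add a member on a second node (a different machine).
AdminTask.createClusterMember('[-clusterName MyCluster '
    '-memberConfig [-memberNode Node02 -memberName MyCluster_member2]]')

# Vertical scaling: add another member on a node that already has one,
# creating a second JVM on the same machine.
AdminTask.createClusterMember('[-clusterName MyCluster '
    '-memberConfig [-memberNode Node01 -memberName MyCluster_member3]]')

AdminConfig.save()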
IBM HTTP Server
The first tier is the HTTP server, which handles requests from Web clients and relieves the application server from serving static content. It provides a logical URL that encompasses ancillary applications, such as the IBM Rational® Asset Manager application, the Rational Asset Manager Help application, and the Rational Asset Manager Asset Based Development application. Note that in a large configuration, a cache server is deployed in front of the HTTP server.
Load Balancer
A load balancer distributes load across a number of systems. If you have more than one HTTP server, you must use a load balancer. For moderately sized deployments, use a software-based load balancer, such as Edge Component. For larger deployments, which support a large number of concurrent users, use a hardware-based load balancer.
Cache Proxy
A forward-caching proxy system stores application data for clients in a cache and relieves load from other server systems. If your Rational Asset Manager server supports a moderate number of concurrent users, you need only one forward proxy system. If your Rational Asset Manager server supports a large number of concurrent users, you might need multiple proxy systems.
Scalability
Scalability is how easily a site can expand. The number of users, assets, and communities for a given Rational Asset Manager installation must be able to expand to support an increasing load. The increasing load can come from many sources, such as adding additional teams or departments to the set of Rational Asset Manager users or importing large sets of historical assets into Rational Asset Manager.
Scalability is a consideration that drives the design of your architecture. Adding hardware to your system might improve scalability, but it does not necessarily improve performance and throughput.
The choice between scaling up (vertical clustering) and scaling out (horizontal clustering) is usually a decision of preference, cost, and the nature of your environment. However, application resiliency issues can change your preferences.
Scaling up implements vertical scaling on a small number of machines with many processors and large amounts of addressable user space memory. This can present significant single points of failure (SPOF) because your environment is composed of fewer, larger machines.
Scaling out uses a larger number of smaller machines. In this scenario, it is unlikely that the failure of one small server will create a complete application outage. However, scaling out creates more maintenance needs.
Availability
Also referred to as fault-tolerance or resiliency, availability is the ability of a system to provide operational continuity in spite of failed components and systems. Architectural decisions, such as horizontal versus vertical scaling and using backup load balancers (that is, dispatchers), can impact the availability of your Rational Asset Manager application. Consider availability for all shared resources, networks, and disk storage systems that compose your Rational Asset Manager environment. In a fault-tolerant design, if an application or server fails, other members of the cluster can continue to service clients.
There are two categories of failover: server failover and session failover. When server failover occurs, sessions on the failed cluster member are lost (a user has to log in again), but services remain available to clients. In session failover, existing sessions are resumed by other members of the cluster as if the cluster member had not failed (although the last transaction may have been lost). If a redundant infrastructure is configured to support server failover, Rational Asset Manager will support it.
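One practical consequence of having redundant cluster members is that you can restart a cluster one member at a time while the remaining members keep serving clients. As a hedged sketch (MyCluster is again a placeholder), the Cluster MBean exposed to wsadmin provides a rippleStart operation for exactly this:

# Restart the cluster one member at a time so the application stays available.
cluster = AdminControl.completeObjectName('type=Cluster,name=MyCluster,*')
print 'Cluster state: ' + AdminControl.getAttribute(cluster, 'state')
AdminControl.invoke(cluster, 'rippleStart')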