RED HAT
APPLICATION STACK
Introduction
The JBoss application server exists to provide services to the applications that run on it. By providing common services such as security, transaction management, messaging, and database connectivity, an application server allows a developer to concentrate on developing the business logic unique to each application. The second section of this guide covers the concepts of tuning, including the aims of tuning, the problems typically encountered, and some of the techniques used to overcome them.

The ability of the server to host a wide range of applications makes it a very useful piece of software. Thanks to the modular design of JBoss, you can add or remove services as required, which further increases its usability. The third section of this guide explains how this is done and shows how to create a custom configuration.

Whichever configuration you choose, it's important to consider the performance of the server in a production environment. The ability to quickly and reliably process growing numbers of concurrent requests from users or other systems is critical to the success of a business. That's why it's critical to ensure that the hardware and software in your system are performing to the best of their abilities. It's unlikely that the out-of-the-box settings of the JBoss application server will provide the best application performance: JBoss is configured for developers by default to help speed up application development. Configuration changes are nearly always required to gain performance, because each application has its own unique requirements. Further, running multiple applications on the same server will inevitably use more resources, so applications must be configured correctly to ensure efficient usage. The fourth section of this guide provides step-by-step procedures to configure JBoss for maximum performance.

The fifth section of this guide covers tuning of the Java Virtual Machine (JVM). Specifically, it examines resizing the heap and adjusting the garbage-collection algorithms to maximize application throughput. Faster JVM choices, as well as the benefits of 64-bit versus 32-bit processing, are also discussed.
Tuning Concepts
Before you begin tuning, you should understand that although performance is important, you should not sacrifice correctness or stability to achieve it. The aim of tuning is to make an application perform in the most efficient manner. This typically means understanding where the bottlenecks are located. The often-quoted "don't optimize too early" maxim is normally true: the design should avoid obvious potential problems, but only performance testing and careful measurement will show where the true bottlenecks are located.
Tuning Aims
Guaranteed response time
J2EE application servers aren't designed as real-time systems, and neither are most operating systems, so guaranteed small response times are not possible.
Increasing throughput
Another option is to process as many transactions as possible within a given period. This involves using resources in the most efficient manner without worrying too much about long response times. However, care must be taken that long-running processes do not hold locks; this can reduce concurrency and decrease throughput.
Bottlenecks
CPU
A computer has only so much CPU power. Once you reach 100% CPU utilization, no more processing can be performed. A saturated CPU can lead to longer response times and a decrease in throughput. In these situations, adding more CPUs can alleviate the problem and increase performance.
Memory
Physical memory is also a limited resource. To counter this limitation, modern operating systems use a technique called virtual memory paging, which saves unused regions of data, called pages, from memory to disk when memory becomes full so that the memory can be reused by another application. The pages of data are loaded back into memory when they are next needed by the original application. However, excessive paging can negatively affect performance. For example, collecting garbage from paged memory is many times slower than collecting garbage from data held in memory, because the disk must be accessed repeatedly. Using more memory to avoid these kinds of problems is often a good strategy, because memory is a relatively cheap resource. The amount you can add depends on the address space of the machine; for example, 32-bit machines can only address a maximum of 4 GB. The scalability of the garbage collector may also be a factor, especially for systems with a great deal of memory (see Section 5, The JVM, for more information about garbage collector options).
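As a concrete illustration of the point above, the JVM heap can be sized so it stays resident in physical memory rather than being paged to disk; the flag values and the application jar name below are illustrative placeholders, not recommendations:

```shell
# Fix the initial and maximum heap at the same 1 GB value so the heap
# never grows at runtime and stays within physical memory; a heap larger
# than RAM would be paged to disk, making garbage collection far slower.
java -Xms1024m -Xmx1024m -jar myapp.jar   # myapp.jar is a placeholder
```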
Threads
Thread management is critically important for an application server, because threads allow multiple requests to be processed concurrently. The operating system scheduler is responsible for this concurrency: it allocates processing time to each thread in turn using a scheduling routine. Some modern operating systems have very efficient scheduling routines, but they may still perform slowly when there are too many threads. Additionally, each thread has a stack that holds parameters, local variables, and return values for any methods that it calls. These stacks consume memory and, together with the overhead from the scheduler, put a limit on the total number of threads that can be created before performance starts to suffer. Adding more CPUs can help, because the threads can be shared between them and run in parallel to increase performance.
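The stack and scheduler costs just described are the usual reason application servers process requests with a bounded pool of worker threads rather than one thread per request. A minimal sketch using the standard java.util.concurrent API follows; the class name, pool size, and request count are illustrative, not taken from JBoss:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolDemo {
    // Process a batch of trivial "requests" with a bounded worker pool,
    // so 100 requests share 4 thread stacks instead of creating 100.
    static int processAll(int requests, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            // Each submitted task stands in for real request handling.
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + processAll(100, 4));
    }
}
```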
Communication/serialization
Communication between machines on a network always involves network latency, which affects response times. Network latency becomes more significant when there are many small requests rather than a few big requests, because more network calls are made for a given amount of data. Other latencies occur when the communication mechanism performs buffering; in this case, a request or response may be complete inside an operating system buffer, but its delivery is delayed until the buffer is filled. Serialization is the conversion of Java objects into a byte format and back again (communication is done in bytes, whether as a raw byte stream or in some other format such as XML). Serialization consumes a great deal of CPU power, and serialization times often far outweigh network latencies.
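The object-to-bytes round trip described above can be sketched with standard Java serialization; the Person class and field values here are made-up examples, not part of JBoss:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A hypothetical payload class; any object sent over the wire must
    // be converted to bytes and back, which costs CPU time.
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Serialize an object graph into a byte array.
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize the byte array back into an object.
    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = toBytes(new Person("Alice", 30));
        Person copy = (Person) fromBytes(bytes);
        System.out.println("payload bytes: " + bytes.length + ", name: " + copy.name);
    }
}
```

Note that even this tiny object produces a payload much larger than its raw fields, because the stream also encodes class metadata; that overhead, repeated per call, is why many small requests cost more than a few large ones.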
Locking
In any concurrent system, access to shared resources must be controlled. This is usually achieved with locks, which cause threads to wait for a resource to become available while another thread uses it. A shared resource can thus become a global point of contention, which can essentially turn a multithreaded application server into a single-threaded machine.
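A minimal sketch of the contention pattern described above: every worker thread funnels through a single global lock, so the critical section executes serially no matter how many threads or CPUs are available. The class and method names are illustrative, not from JBoss:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LockContentionDemo {
    private static final Object LOCK = new Object();
    private static long counter = 0;

    // Every caller funnels through one global lock; under load this
    // serializes all worker threads on the shared resource.
    static void increment() {
        synchronized (LOCK) {
            counter++;
        }
    }

    static long runWorkers(int threads, int perThread) throws InterruptedException {
        synchronized (LOCK) { counter = 0; }
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        synchronized (LOCK) { return counter; } // read under the same lock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("count: " + runWorkers(8, 10000));
    }
}
```

The result is correct precisely because the lock serializes the updates; the tuning trade-off is to keep such critical sections as short as possible, or to lock per resource rather than globally.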
JBoss Configuration
A JBoss configuration is the set of available J2EE-compliant services. The following sections describe the three JBoss configurations available out of the box, along with instructions for creating custom configurations.
Tuning JBoss
The following sections examine the various steps required to tune JBoss 4.0.4 with embedded Tomcat 5.5.17. The out-of-the-box default server configuration is used as the starting point for each step. Consequently, any paths given are relative to the jboss-4.0.4.GA/server/default directory.
Connectors
Enabling/disabling HTTP and Apache Tomcat connectors
The JBoss default settings enable both direct connections to Tomcat via HTTP and indirect connections via Apache/mod_jk. If both connection styles are used, you can leave the default settings in place. However, if only one connection style is used, you can remove the other style and thereby reduce the footprint and start-up time of JBoss.

If your users connect directly to Tomcat via HTTP and do not pass through Apache/mod_jk:

1. Open deploy/jbossweb-tomcat55.sar/server.xml in a text editor.
2. Remove/comment the following XML fragment:

   <!-- A AJP 1.3 Connector on port 8009 -->
   <Connector port="8009" address="${jboss.bind.address}"
       emptySessionPath="true" enableLookups="false" redirectPort="8443"
       protocol="AJP/1.3"/>

See the following section for specific instructions on tuning the Tomcat HTTP connector.

If your users always pass through Apache/mod_jk and do not connect directly to Tomcat via HTTP:

1. Open deploy/jbossweb-tomcat55.sar/server.xml in a text editor.
2. Remove/comment the following XML fragment:

   <!-- A HTTP/1.1 Connector on port 8080 -->
   <Connector port="8080" address="${jboss.bind.address}"
       maxThreads="250" strategy="ms" maxHttpHeaderSize="8192"
       emptySessionPath="true" enableLookups="false" redirectPort="8443"
       acceptCount="100" connectionTimeout="20000"
       disableUploadTimeout="true"/>
A complete list of the available attributes for this connector, together with their meanings, can be found at: http://jakarta.apache.org/tomcat/tomcat-5.5-doc/config/http.html
The JVM
The JVM provides a runtime execution environment for Java bytecode and acts as a layer of abstraction over the operating system. JVM performance, specifically in the area of garbage collection, can have a significant impact on overall system and application performance. However, JVM tuning is a very complex topic and requires a fairly deep understanding of JVM mechanisms and operations. Given the effort needed to tune a JVM, the first order of business is to determine whether JVM performance is a problem at all. The only way to determine this is by testing your application under realistic loads. Under these conditions, you'll be able to observe the performance characteristics of the garbage collectors and determine whether JVM changes are required. The following section describes how to determine the characteristics of garbage collection in your system/application.
Option                     Description
-verbose:gc                Turns on the logging of GC information.
-Xloggc:<file>             Specifies the name of a log file where the verbose GC information is logged (instead of standard output).
-XX:+PrintGCTimeStamps     Prints the times at which the GCs happen, relative to the start of the application.
-XX:+PrintGCDetails        Gives detailed information about the GCs, such as the size of the young and old generations before and after each GC, the size of the total heap, and the time a GC takes in the young and old generation.
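The descriptions above correspond to the standard Sun JVM flags -verbose:gc, -Xloggc, -XX:+PrintGCTimeStamps, and -XX:+PrintGCDetails. A typical invocation that enables all of them might look like the following; the application jar name is a placeholder:

```shell
# Log verbose GC activity, with timestamps and per-generation detail,
# to gc.log instead of standard output.
java -verbose:gc -Xloggc:gc.log -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -jar myapp.jar
```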
If you need more details to back up any of these points, please visit www.redhat.com for whitepapers, positioning information, and other resources, or contact your Red Hat representative for one-on-one answers to your questions.
Confidential and proprietary to Red Hat, Inc. Copyright 2006 Red Hat, Inc. All rights reserved. Red Hat, Red Hat Linux, and the Red Hat Shadowman logo are registered trademarks of Red Hat, Inc. in the US and other countries. Linux is a registered trademark of Linus Torvalds.