
Java Performance Tuning

Chandrashekhar

1
First word

Best practices and performance practices are different.

Best practices emphasize simplicity, readability, cleanliness,
flexibility, extensibility, maintainability and a longer life for the
application.

The OOP paradigm incurs a significant cost of 'object churn'.

But that must not become a reason for compromising on it.
Instead, apply other measures to improve
performance.

2
Why Java?

 Platform independent.
 Memory management
 Powerful exception checking
 Built-in multi-threading
 Dynamic resource check
 Security Checks

3
Why is Java slow at run time?

 The JVM is an interpreter.
 The JVM itself is platform dependent in order to make Java platform
independent (WORA).
 Many features work at run time.
 A great deal of run-time dynamism: late binding.
 Garbage collection and extra run-time checks.
 Drawbacks of the OOP paradigm
 Object churning
 Memory leakage

Should these be reasons for not choosing the Java platform?


4
Other factors limiting performance

 CPU speed and availability: typically consumed by inefficient algorithms, too
many short-lived objects and the resulting heavy GC activity.
 System memory: applications with a large memory footprint.
 Disk and network I/O: synchronous I/O and network
operations seriously affect execution.
 Perceived performance: a poorly designed UI makes the application
appear slow or hung. Counter this by:
 Helping the user anticipate delays.
 Showing some movement to the user while processing goes on.
 Interacting with the user while doing background processing.
 Showing streamed content in parts.

5
Tuning Strategies

 Select the top 5 bottlenecks.
 Address the quickest and easiest one first.
 Assess the improvement in performance.
 Repeat these steps until the expected performance level is achieved.
Sometimes an applied fix changes the characteristics of the application,
so the remaining top bottlenecks may no longer need to be addressed.

 In a distributed application, address the topmost bottleneck first,
because each bottleneck may lie in a completely different
component of the system.

6
What to measure?
 The main measurement is always wall clock time
 CPU time: Time allotted on CPU for procedure
 CPU contention: number of runnable processes waiting for the CPU.
 Paging of processes
 Memory sizes
 Disk throughput
 Disk scanning time
 Network traffic, throughput and latency
 Transaction rate.

For distributed applications, you need to break down measurements into
time spent in each component, time spent preparing data for transfer
and recovering it from transfer (e.g., marshalling and un-marshalling objects and
writing to and reading from a buffer), and time spent in network
transfer.

7
JVM Internals
Java platform version   Description
1.3.1                   No parallel garbage collection. The impact of GC on multi-processor
                        machines is significant.
1.4.2                   The serial garbage collector is the default choice. The throughput collector
                        may be chosen explicitly (for thread-intensive applications needing large
                        memory that run on a system alongside many other processes).
1.5                     The most suitable garbage collector is chosen based on the class of the machine.
1.6                     Ergonomics; dynamic memory management.

8
Infant Mortality

A typical object-lifetime (mortality) graph shows:

 The majority of objects die
young (the infant-mortality
peak).
 Some objects do live
longer (mostly shared
objects).
 A bump in the middle shows
that some objects are involved
in intermediate
computation.

9
Generations

Different memory pools hold objects of different ages.

Minor collection: occurs when the young generation pool fills up.
Quickly collects dead objects and moves surviving objects to
the tenured generation.
Major collection: collects the tenured generation and is usually slower,
since it has to examine a much larger set of live objects.
10
Generations

• Young Generation:
•Eden space: objects are initially allocated in this space. Objects created
within a method usually go here and are collected by the end of the method.
•Survivor spaces: one survivor space is always empty. Live objects from Eden and
from the other survivor space are copied here.
•Tune this space using -XX:SurvivorRatio=6: for 6 units of Eden, each
survivor space is of size 1.

•Size of the Young Generation: up to JDK 1.4 its size was set by -XX:MaxNewSize.
Now its size is set by a ratio, e.g. -XX:NewRatio=3, meaning the ratio of
old generation to young generation is 3:1.

•Virtual space-1: the difference between the initial and maximum size of the Young
Generation.
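As a sketch, the sizing flags above might be combined on one command line (the heap
sizes and the class name MyApp are made-up values, not recommendations):

java -Xms512m -Xmx512m -XX:NewRatio=3 -XX:SurvivorRatio=6 -verbose:gc MyApp

With -XX:NewRatio=3 the old generation gets 3 parts of the heap and the young generation 1 part;
with -XX:SurvivorRatio=6 Eden gets 6 parts of the young generation and each survivor space 1 part
(so each survivor is 1/8 of the young generation).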

11
Java Heap Area

• Java Heap Area:

• The area in which a Java process holds objects of the young and old generations. Its minimum
and maximum sizes can be set.
• On 32-bit Windows machines the theoretical limit is 4 GB.
• On 64-bit machines the address space is far larger; in practice about 32 GB is the
useful limit when compressed object pointers are used.

• Virtual space-2: the difference between the maximum heap size and the actual heap size.
12
The Perm and Native Memory.

• Permanent Generation:
• A non-heap space in which the JVM keeps classes, metadata, methods and
reference objects. Use -XX:MaxPermSize and -XX:PermSize to resize
it. Class garbage collection covers this area too and can
be disabled using -Xnoclassgc.
• Native memory: used for the JVM's internal purposes (code optimization
and intermediate code generation) and for JNI code execution.
• Native memory ≈ total process size – MaxHeapSize – MaxPermSize
13
Performance Considerations for Heap

Throughput    The percentage of total time not spent on garbage collection,
              measured over a long period. It includes time spent in allocation.
Pauses        The time for which the application becomes unresponsive
              because GC is running.
Footprint     The working set of a process, measured in pages and
              cache lines. It may dictate scalability of the application on a
              system with limited physical memory.
Promptness    The time between an object becoming dead and its memory being
              made available again.

Users have different requirements in different situations...

•Some consider the right metric for a web server to be throughput, since pauses during garbage
collection are hidden by network latency.
•However, in an interactive graphics program, pauses may negatively affect the user's
experience.
•Promptness becomes an important consideration in distributed systems.
14
Tuning the Heap

Larger heap vs smaller heap size…

Larger heap size                               Smaller heap size
Holds more objects, so GC is called less       GC is invoked more often, so in total more time
frequently, but each collection takes longer   is spent completing GC work, leading to more
to complete, giving longer pauses.             frequent (though shorter) pauses. With a
A very large heap may swamp physical           non-concurrent GC this incurs an undesirable
memory, leading to paging to virtual           cost with each filling of the heap.
memory.

It is at least a two-step process…

 Gross tuning: the big picture - optimizing the overall heap size.
 Fine tuning: minimizing pauses, enlarging the new space, using concurrent GC,
optimizing the size of the permanent generation, etc.
15
Gross tuning

 Gross tuning…
 Choose optimum minimum and maximum sizes for the heap. Test the
improvement in performance with every combination of these
parameters (e.g., with GC logging enabled, as sketched below).

Suggestions…
 Set the starting heap size the same as the maximum heap size.
 Set the starting heap size to the size needed by the maximum
number of live objects (estimated or measured in trials), and set
the maximum heap size to about four times this amount.
 Set the starting heap size to half the maximum heap size.
 Set the starting heap size between 1/10 and 1/4 the maximum
heap size.
 Use the default initial heap size (1 megabyte).
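One way to assess each starting/maximum combination is to run the application with GC
logging switched on and compare pause times and collection frequency. A minimal sketch
(the heap values and MyApp are placeholders; the Print* flags are the classic HotSpot
options, replaced by unified logging in JDK 9+):

java -Xms256m -Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp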

16
Fine Tuning

1. Delay heap expansion as much as possible.


2. Minimize pauses
3. Disable System.gc() calls
4. Tuning RMI garbage collection
5. Loading huge number of classes
6. Limiting per-thread stack size
7. Eliminating finalizers

17
Fine Tuning

1. Delay heap expansion as much as possible.

 Free space by reclaiming and compacting objects,
defragmenting the heap, etc.
 The free heap ratio can be set with -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio.
The smaller the required free space, the lower the probability of heap expansion.

18
Fine Tuning

2. Minimize pauses
 A larger heap creates longer pauses, creating a bad perception of
performance.
 Identify and eliminate/minimize objects that are churning.
 Reduce the heap size so GC runs more often but for shorter periods.
 Use the incremental garbage collection algorithm (-Xincgc). It clusters
objects referencing each other and collects clusters individually. It gives
shorter pauses but costs more CPU time and more total GC time.
 Use concurrent GC (-Xconcgc). It tries to avoid stopping the
application threads by doing GC work asynchronously, and so
minimizes pausing of application threads while they access memory.
Separate collectors can be enabled for the young space (-XX:+UseParNewGC) and
the old space (-XX:+UseConcMarkSweepGC).
 Enlarge the new space (young space).
19
GC Collectors

 Serial Collector (-XX:+UseSerialGC)


 Throughput Collectors
 Parallel Scavenging Collector for the Young Gen
 -XX:+UseParallelGC

 Parallel Compacting Collector for Old Gen


 -XX:+UseParallelOldGC (on by default with ParallelGC in JDK 6)
 Concurrent Collector
 Concurrent Mark-Sweep (CMS) Collector
 -XX:+UseConcMarkSweepGC

 Concurrent (Old Gen) and Parallel (Young Gen) Collectors


 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
 The new G1 Collector as of Java SE 6 Update 14 (-XX:+UseG1GC)
20
GC Collectors

 Performance Goals and Exhibits


A) High Throughput (e.g. batch jobs, long transactions)
B) Low Pause and High Throughput (e.g. portal app)
 • JDK 6
A) -server -Xms2048m -Xmx2048m -XX:+UseParallelGC -XX:ParallelGCThreads=16
B) -server -Xms2048m -Xmx2048m
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC

21
GC Collectors

Serial vs Parallel Collectors

[Diagram: application-thread and GC-thread timelines compared for the serial and parallel collectors]

22
GC Collectors

Concurrent Mark-Sweep collector

[Diagram: application-thread and GC-thread activity during concurrent collection]

23
GC Collectors

Concurrent Mark-Sweep collector (continued)

[Diagram: application-thread and GC-thread activity during concurrent collection]

24
Object Creation

Objects are expensive to create, so they should be created and
used judiciously.
1. Constructor chaining: keep object initialization light.
2. Object reuse: Color, Point, Date, nodes of a linked list,
Exception.
3. Pool management (see the pooling sketch below)
4. Object canonicalization
5. String canonicalization
6. Changeable objects
7. Weak references
8. Enumerating constants: true/false or Integer instead of Strings.
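A minimal sketch of pool management for reusable objects (the generic SimplePool class is
illustrative; production pools also need bounds, validation and stronger concurrency guarantees):

import java.util.ArrayDeque;
import java.util.Deque;

// A very small object pool: callers borrow an instance and return it
// instead of creating a new object (and garbage) for every use.
class SimplePool<T> {
    private final Deque<T> free = new ArrayDeque<>();

    synchronized T acquire() {
        return free.isEmpty() ? null : free.pop();   // null means "create one yourself"
    }

    synchronized void release(T obj) {
        free.push(obj);                              // make the object available for reuse
    }
}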

25
Avoiding Garbage Collection

 Canonicalization of Objects
 Pooling of objects
 Appropriate handling of strings
 Using primitives over wrappers
 Appropriate conversion methods

26
Eager and Lazy instantiation

 Eager/pre-allocation: instantiate when there is ample
spare CPU power available. Can also be applied to…
 Class loading
 Distributed objects
 Reading external data files

 Lazy: delay instantiation until the last possible moment
(see the holder-idiom sketch below).
Treat it as a performance-tuning technique, not a
coding/design practice.
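A common sketch of lazy instantiation is the initialization-on-demand holder idiom
(the class names here are illustrative):

// The expensive object is created only when getInstance() is first called,
// because the nested Holder class is not initialized until then.
class ExpensiveService {
    private ExpensiveService() { /* costly setup */ }

    private static class Holder {
        static final ExpensiveService INSTANCE = new ExpensiveService();
    }

    static ExpensiveService getInstance() {
        return Holder.INSTANCE;
    }
}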

27
String

 A character array in Java with a start offset and character
count.
 Advantages…
 Resolved as far as possible at compile time.
 Strong support for internationalization.
 Support for the '+' and '+=' operators, simplifying syntax handling.
 Close coupling between String and StringBuffer.
 Disadvantages…
 A 'final' class, so alterations are not possible.
 A char[] may give more efficient and flexible processing.
 The tight coupling with StringBuffer gives surprises in temporary
object creation.
28
Points about String

 Compile time vs. run time
 The concatenation operator vs. StringBuffer (see the sketch below).
 Conversion to String
 Prefer a left-to-right algorithm over right-to-left.
 String vs. character array
 Using custom wrapper classes
 String comparison
 Prefer identity comparison over equality comparison
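A sketch of the concatenation point: '+' inside a loop creates a fresh intermediate
String per iteration, whereas one explicit builder reuses a single buffer (the data is illustrative):

java.util.List<String> items = java.util.Arrays.asList("a", "b", "c");

// Slow: each iteration allocates a new intermediate String.
String csv = "";
for (String item : items) {
    csv = csv + item + ",";
}

// Faster: one mutable buffer for the whole loop.
StringBuilder sb = new StringBuilder();
for (String item : items) {
    sb.append(item).append(',');
}
String csv2 = sb.toString();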

29
Few more coding practices- 1

Severity: Critical

Problem:
String result = new String(str);

Solution:
String result = str;

Also refer to the intern() method (see the sketch below).

String uses an internal string pool to reduce string instantiations.
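A small sketch of intern(): it returns the canonical pooled instance, after which identity
comparison succeeds (the literal values are illustrative):

String a = new String("AppPerfect");   // forces a new heap instance
String b = "AppPerfect";               // compile-time constant from the pool

System.out.println(a == b);            // false: different instances
System.out.println(a.intern() == b);   // true: intern() returns the pooled instance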

30
Few more coding practices- 2

Severity: Critical

Problem:
StringTokenizer strtok = new StringTokenizer(str);
while (strtok.hasMoreTokens()) {
    SOP(strtok.nextToken());
}

Solution:
String[] tokens = str.split(",");

StringTokenizer is to be used for complex tokenizing situations.

31
Few more coding practices- 3

Severity: Medium

Problem 1:
String str = "APPPERFECT";
String str1 = "appperfect";
if (str1.toUpperCase().equals(str)) {
    SOP("Strings are equals");
}

Solution 1:
String str = "APPPERFECT";
String str1 = "appperfect";
if (str1.equalsIgnoreCase(str)) {
    SOP("Strings are equals");
}

Problem 2:
String str = "AppPerfect";
System.out.println(str.toString());

Solution 2:
String str = "AppPerfect";
System.out.println(str);

Choice of the appropriate method on String leads to good performance.

32
Few more coding practices- 4

Severity: Medium

Problem 1:
String sTemp = "Data";
if (sTemp.startsWith("D")) {
    sTemp = "data";
}

Solution 1:
final int ZERO = 0;
final char D = 'D';
String sTemp = "Data";
if (sTemp.charAt(ZERO) == D) {
    sTemp = "data";
}

Problem 2:
for (int i = 0; i < str.length(); i++) {
    SOP(str.charAt(i));
}

Solution 2:
char[] carr = str.toCharArray();
for (int i = 0; i < carr.length; i++) {
    SOP(carr[i]);
}

Choice of the appropriate method on String leads to good performance.

33
Few more coding practices- 5

Severity: Medium

Problem:
for (int i = 0; i < str.length(); i++)
    if (str.charAt(i) == 'x')
        count++;

Solution:
char[] arr = str.toCharArray();
for (int i = 0; i < arr.length; i++)
    if (arr[i] == 'x')
        count++;

Choice of the appropriate method on String leads to good performance.

34
Exceptions

 Cost of try-catch when no exception is thrown: may or may
not carry any penalty.
 Cost of try-catch when an exception is thrown: remarkable cost
of taking a snapshot of the stack trace.
 Reusing an Exception object: prevents repeated object
creation but carries the wrong stack-trace state.
 Use fillInStackTrace() when reusing an Exception (see the sketch below).
 Conditional error checking: for validating pre-conditions
on passed parameters.
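A sketch of reusing an exception instance: the shared object avoids allocation, and
fillInStackTrace() refreshes its otherwise stale stack trace before each throw
(the class and message are illustrative):

class Validator {
    // One pre-built exception reused for every failure.
    private static final IllegalArgumentException BAD_INPUT =
            new IllegalArgumentException("bad input");

    static void check(Object arg) {
        if (arg == null) {
            BAD_INPUT.fillInStackTrace();   // refresh the captured stack trace
            throw BAD_INPUT;
        }
    }
}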

35
Assertions

Using assertions generally improves the quality of code and


assists in diagnosing problems in an application.
The difference between an assert statement and a normal
statement is that the assert statement can be disabled at
runtime.
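A minimal usage sketch (the method and message are illustrative); assertions are disabled
by default and enabled with the -ea (-enableassertions) JVM switch:

static int divide(int a, int b) {
    assert b != 0 : "divisor must be non-zero, was " + b;   // checked only when -ea is passed
    return a / b;
}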

36
Assertion Overhead

// Roughly what javac generates for: assert boolean_expression : string_expression;
if (!$assertionsDisabled)
    if (!boolean_expression)
        throw new AssertionError(string_expression);

37
Casting

 Casts may cost.

 Some casts are resolved at compile time, but casts of
object types are mostly resolved at run time.
 Casts of primitive types execute quickly and in
near-constant time compared with casts of object types.
 The cost of casting an object type depends on the depth of the hierarchy
and on whether the target type is an interface or a class.
 Interfaces are more expensive to cast to. The
order in which a class implements them also matters.

38
Cast coding practices-1

Severity: High

Problem:
public void method(Component comp) {
    ((TextField) comp).setText("");
    ((TextField) comp).setEditable(false);
}

Solution:
public void method(Component comp) {
    TextField tf = (TextField) comp;
    tf.setText("");
    tf.setEditable(false);
}

Avoid repeated casting. Instead, cast once and
use the result as many times as needed.

39
Cast coding practices-2

Severity: High

Problem:
List list = new ArrayList();
list.add("aaa");
list.add("bbb");

Solution:
ArrayList list = new ArrayList();
list.add("aaa");
list.add("bbb");

A StringList, as offered by some implementations, can also be preferred.

Avoid casting as much as you can. Use a specific
or type-specific collection.

40
Variables

 Local variables and method arguments are the fastest to
access and update, as they live on the stack and are
operated on directly.
 Static and non-static fields are accessed through JVM-
assigned code and are thus a little slower to access.
 Methods having fewer than 4 parameters and local
variables run slightly faster than equivalent methods with
a larger number of parameters and local variables.
 Accessing instance fields directly is always cheaper
than accessing them through getters/setters, but this breaks
OOP encapsulation, so it is better avoided.
41
Variables

 Prefer manipulating local variables before
assigning the results to heap variables.

 The int is the fastest type to operate on, including
for arithmetic.

 short, byte and char are widened to int for any
arithmetic operation, so the cost of a cast is also
involved.

 Floating-point arithmetic tends to be the slowest.


42
Collections

General rules
 Use the appropriate collection for the scenario.
 Java 8 provides the Stream API to handle collections more
elegantly and allows parallel processing of a collection (fork
and join) - see the sketch below.
 Concurrent collections are better in many respects in multi-
threaded environments.
 Prefer generic references, as they couple a reference only loosely
to the specific collection implementation.
 Fail-fast and fail-safe iterators.
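A minimal Stream API sketch of the parallel-processing point above (the data is made up):

import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);

        // Parallel reduction: the work is split across the common fork/join pool.
        int sumOfSquares = nums.parallelStream()
                               .mapToInt(n -> n * n)
                               .sum();

        System.out.println(sumOfSquares);   // 204
    }
}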

43
Iterator

 Fail-fast iterator:
 Throws ConcurrentModificationException on structural changes.
 Does not guarantee detection in unsynchronized use.
 Use for debugging.
 Uses a modification count captured when the iterator is created and checked
against any structural change.
 The iterators of the normal collections.
 Fail-safe iterator:
 Works on a snapshot of the internal data structure.
 Overhead of maintaining double the memory.
 May return stale data.
 Prefer when traversals greatly outnumber mutations.
 The iterators of the concurrent collections (see the sketch below).
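A small sketch of the two behaviours (the contents are illustrative; the last loop
deliberately triggers the fail-fast exception):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class IteratorDemo {
    public static void main(String[] args) {
        List<String> failFast = new ArrayList<>(Arrays.asList("a", "b"));
        List<String> failSafe = new CopyOnWriteArrayList<>(failFast);

        for (String s : failSafe) {
            failSafe.add("c");      // allowed: the iterator works on a snapshot and never sees "c"
        }

        for (String s : failFast) {
            failFast.add("c");      // throws ConcurrentModificationException
        }
    }
}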

44
Prefer toArray(array) over toArray()

Severity: High

Problem:
public void print() {
    Collection c = new ArrayList();
    c.add("AppPerfect");
    c.add("TestStudio");
    Object[] obj = c.toArray();
    for (Object entity : obj) {
        String s = (String) entity;
        // ...
    }
}

Solution:
public void print() {
    Collection c = new ArrayList();
    c.add("AppPerfect");
    c.add("TestStudio");
    String[] x = (String[]) c.toArray(new String[2]);
}

Use of toArray() involves explicit casting overhead for each element.

45
Define capacity of ArrayList

Severity: High

Problem:
private ArrayList al = new ArrayList();

public void method() {
    for (int i = 0; i < 100; i++) {
        al.add(new String());
    }
}

Solution:
private final int SIZE = 100;
private ArrayList al = new ArrayList(SIZE);

public void method() {
    for (int i = 0; i < SIZE; i++) {
        al.add(new String());
    }
}

Stretching an ArrayList to a new size involves reallocating memory,
copying data from the old backing array into the new one and leaving the old array for
the garbage collector to collect. Avoiding such stretching as much as we
can benefits performance.

46
Choose appropriate collection.

Severity: High
Consecutive arrangement of elements in memory…
• Array, ArrayList and Vector keep their elements consecutively in
memory.
• This arrangement gives quick random access to elements.
• It also requires reallocation of memory to achieve a dynamic array.
• It also gives the worst performance for insertion and deletion.

• An array is static by nature; it cannot stretch itself dynamically.

• Vector is synchronized by default, so it imposes the synchronization cost even when not
needed.
• ArrayList is available in both flavors - unsynchronized and synchronized
(via Collections.synchronizedList) - so use it as per need.

47
Choose appropriate collection.

Severity: High

SN  ArrayList                                  LinkedList
01  Stores elements consecutively. Random      Stores nodes linked linearly. Can
    access, but limited by a maximum size.     accommodate more elements than ArrayList.
02  Worst efficiency for heavy insertion       Good efficiency for heavy insertion and deletion.
    and deletion.
03  Good efficiency for quick, random          Allows only linear search, so searching efficiency
    searching.                                 is worst.

48
Choose appropriate collection.

Severity: High

SN  Hashtable                                   HashMap
01  Synchronized by default; carries the        Unsynchronized by default, but a synchronized
    synchronization cost even when not needed.  flavor is available.
02  No guarantee of order.                      No guarantee of order.
03  Does not permit null.                       Permits null.

49
Choose appropriate collection.

Severity: High

SN  HashSet                                   TreeSet
01  Good efficiency of insertion and          Good efficiency of insertion and deletion.
    deletion.
02  Good efficiency of searching.             Good efficiency of searching.
03  Efficiency does not depend on N           Efficiency goes down as N becomes large.
    (the number of elements).

50
Avoid keySet to access Map

Severity: Medium

Problem:
public void method() {
    Map m = new HashMap();
    Iterator it = m.keySet().iterator();
    Object key = it.next();
    Object v = m.get(key);
}

Solution:
public void method() {
    Map m = new HashMap();
    Set set = m.entrySet();
    Iterator it = set.iterator();
    Object keyValuePair = it.next();
}

Accessing the values of a Map through keySet() incurs the cost of calling get()
for each element. Prefer entrySet() instead.

51
Remove element on Iterator.

Severity: Medium

Problem:
public void method() {
    Iterator iter = collection.iterator();
    while (iter.hasNext()) {
        Object element = iter.next();
        collection.remove(element);
    }
}

Solution:
public void method() {
    Iterator iter = collection.iterator();
    while (iter.hasNext()) {
        Object element = iter.next();
        iter.remove();
    }
}

Recent Java versions give fail-fast iterators. To remove an element while iterating a
collection, do it through the Iterator.

52
Threads
 Always keep the thread safety of services in mind: while
designing a core layer, we are not aware of the type of client.
 Synchronization incurs two costs:
 The operational cost of managing monitors
 The serialization of synchronized statements
 Thread safety can be achieved in different ways; use the
most fitting one:
 Blocking algorithms
 Non-blocking algorithms
 A set of thread-specific objects

 The concurrency API of Java is for applications using threads
extensively. Prefer it for synchronizers, executors and many
more things (see the sketch below).
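A minimal sketch of preferring the executor framework over hand-rolled threads
(the pool size and task body are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorDemo {
    public static void main(String[] args) {
        // Reuse a fixed pool of worker threads instead of creating a thread per task.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId
                    + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();   // stop accepting work; queued tasks are allowed to finish
    }
}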
53
Avoid synchronizing methods

Severity: Critical

Problem:
public synchronized void method() {
    // ...
}

Solution:
public void method() {
    // Alternative: synchronize only the critical section
    synchronized (this) {
        // ...
    }
}

// Where synchronization is unnecessary, call the method without any lock:
method();

Synchronization always adds overhead to the runtime system. When a method
is declared synchronized, this overhead becomes unavoidable even
if the synchronized behaviour of the method is not needed. Synchronized
methods may also lead to deadlock if not handled carefully.

54
Avoid nested synchronized blocks

Severity: Critical
public synchronized void method() {
synchronized (getClass()) {
//.....
synchronized (this) {
//.....
}
//.....
}
}

Nested synchronized blocks are prone to deadlock and should be
avoided.

55
Avoid calling synchronized method in loop

Severity: High

Problem:
public synchronized Object remove() {
    Object obj = null;
    // ...
    return obj;
}
public void removeAll() {
    for (;;) {
        remove();
    }
}

Solution:
public Object remove() {
    Object obj = null;
    // ...
    return obj;
}
public synchronized void removeAll() {
    for (;;) {
        remove();
    }
}

Synchronization adds overhead to the runtime system. When a method is
declared synchronized and is called repeatedly, this overhead is
aggravated; acquiring the lock once around the loop is cheaper.

56
Prefer Semaphore over Synchronization block.

Severity: High

Problem:
public Object remove() {
    // ...
}
public void run() {
    synchronized (this) {
        for (;;) {
            remove();
        }
    }
}

Solution (sketch using java.util.concurrent.Semaphore):
private final Semaphore sa = new Semaphore(1);

public void run() {
    try {
        sa.acquire();          // obtain a permit instead of a monitor lock
        try {
            for (;;) {
                remove();
            }
        } finally {
            sa.release();      // always return the permit
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

The Semaphore synchronizer introduced in the concurrency API is
a good option for making code thread safe, avoiding deadlock and using non-
blocking designs.

57
Prefer non-locking and non-blocking algorithms.

 Use concurrent collections.

 Use concurrent data types.
 Fork/Join is the need of the day.

58
Avoiding overuse of synchronization

 Read-only objects need not be synchronized.

 Stateless objects need not be synchronized unless they
alter the state of a shared unsynchronized object.
 If synchronization is handled by an outer object, the inner
stateful object may not need to be synchronized.
 Thread-specific objects may not need to be synchronized.

59
Performance practices

 Write an unsynchronized version of a class with a wrapper providing the
synchronized version.
 Initially use the wrappers everywhere.
 Remove the wrappers where performance analysis shows that
synchronization is not needed.
 If synchronization is a bottleneck, use an object-per-thread
policy.
 For short-lived thread tasks, use a thread pool. The
executor framework is a tested and reliable alternative.
 For I/O and UI work, prefer separate threads.

60
Performance practices: Load balancing.

A great performance-improvement technique when many
activities are processed concurrently.

Load-balancing components…

 A single entry point for all requests (a request queue)
 One or more request-processing objects behind the queue
 A load-balancing policy

61
JDBC Performance area

An area highly prone to performance problems.


Performance-affecting areas…
 The database interaction queries
 The caching and paging techniques used
 The vendor's implementation of the JDBC API
 Factors affecting remote access to the database
 The performance of the data-processing code

 Performance-measuring approaches
 Define a separate DAO layer to measure DB integration in isolation.
 Use custom wrappers or JDBC API wrappers.
 Use proxies as wrappers.
 Use AOP, where the measuring code goes into advices.
62
Improving JDBC performance

 Use the appropriate driver
 Type 1: slow, JDBC-ODBC bridge, dropped from newer JDKs.
 Type 2: faster, partly Java, requires native client installation.
 Type 3: talks to DB middleware, faster.
 Type 4: pure Java driver, nothing to install, fast enough.

 Connection pooling
 A pool of open, live connections.
 Built-in support from JDBC 2.0 onwards; third-party products are also
available.
 The pool needs tuning for size and timeout.
 Multiple pools can be used, with read-only and read/write privileges.
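A hedged sketch of using a pooled connection: javax.sql.DataSource is the standard
abstraction, while the concrete pool implementation behind it and the table name used
here are assumptions that vary per product and schema:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {
    private final DataSource ds;   // injected; backed by a connection pool in practice

    public OrderDao(DataSource ds) { this.ds = ds; }

    public int countOrders() throws SQLException {
        // try-with-resources returns the connection to the pool on close()
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}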

63
Improving JDBC performance

 Optimize SQL
 Nature of database interaction
 The work database needs to do
 The data transfer via JDBC.

 Reduce number of trips to Database: Set based


processing
 Query targeting multiple records at a time
 Batch processing
 Stored Procedures

64
Improving JDBC performance

 Database server-side processing

 Reduce the database's work by simplifying queries that use functions such as upper().
 Avoid insert/delete if records can be updated instead.
 Use a read-only connection for read-only access.
 Avoid complex joins if you can.
 Use indexes on large tables.

65
Improving JDBC performance

 Minimizing transferred data

 Tune queries to fetch only the required snapshot of data.

 Caching
 Cache small, infrequently updated data sets.
 Check the in-memory database option.
 Check journaling.

66
Improving JDBC performance

 Minimizing transfer data

 Minimum data conversion


 Prepared Statement for interactive query
 Batching
 Using stored procedure
 Transaction optimization

67
Improving JDBC performance

 Prepared Statement for interactive queries

Statement                                   PreparedStatement
The query plan is created and used for     The query plan is created once and reused for
a single execution.                        multiple executions.
Takes fewer resources.                     Needs more resources to hold the query for later
                                           executions.
Every creation takes time.                 The first creation takes more time, while subsequent
                                           executions do not need that time.

 For a prepared statement to be reused, the SQL must be identical.
 A pool can be kept for each PreparedStatement.
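A sketch combining a prepared statement with batching (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {
    // One query plan, many parameter sets, one round trip per batch.
    static void insertAll(Connection con, String[] names) throws SQLException {
        String sql = "INSERT INTO customers(name) VALUES (?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();
            }
            ps.executeBatch();   // sends the whole batch to the database
        }
    }
}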

68
Improving JDBC performance

 Batching
 Gives optimum performance by reducing round trips.
 Batching applies to both retrieval (fetch size) and updates.
 Choose an appropriate batch size.
 The fetch size can be set on the Statement or ResultSet (some drivers also
allow a connection-level default).

69
JVM Monitoring
S.N. Tools Description
01 jconsole A JMX-compliant graphical monitoring tool. It provides information about
the performance and resource consumption of applications running on the
Java platform.
02 jmap The jmap prints shared object memory maps or heap memory details of a
given process or core file or a remote debug server.
03 jinfo The jinfo prints Java configuration information for a given Java process or
core file or a remote debug server. Configuration information includes Java
System properties and Java virtual machine command line flags.
04 jstack The jstack prints Java stack traces of Java threads for a given Java process or
core file or a remote debug server.
05 jps The jps tool lists the instrumented HotSpot Java Virtual Machines (JVMs) on
the target system. The tool is limited to reporting information on JVMs for
which it has the access permissions.
06 jvisualvm VisualVM's graphical user interface enables you to quickly and easily
see information about multiple Java applications.
70
Java Visual VM

71
72
