
Performance Testing, a Practical Guide and Approach

Albert Witteveen, Owner/Founder, Pluton IT


EuroSTAR

Software Testing

Conference

EuroSTAR

Software Testing

Community

Performance testing, a practical guide and approach

Abstract

This eBook contains a number of chapters from Albert Witteveen's book entitled Performance Testing, a Practical Guide and Approach. This book was written to give people a start in performance testing. It should help people understand the basics of how to do performance testing and, hopefully, how to do a good job. Another target audience is anyone dealing with performance demands or problems. How can you tell if the testers are going to deliver a result that gives tangible insight into the performance? The book aims to give you a handle with which you can judge the approach and see if you get your money's worth.

Some words we use often are defined in many different ways. Which definition is right is not the point, but it is important that we define here which definitions we use.

2.2.1 Performance testing


A definition of performance testing is that it is about testing whether a system accomplishes its designated functions within given constraints regarding processing time and throughput rate. Performance testing is a superset containing other tests such as load testing, stress testing, endurance testing, etc.

2.2.2 Load testing



2. An Overview of the Basic Activities

2.1 Introduction
What should you learn first: to navigate or to sail? If you learned how to navigate perfectly and then discover that you don't last on a sailing boat for longer than 10 minutes, those navigation skills are a bit useless. That's why in this book we first get into the actual performance testing itself before discussing planning, modeling and so forth. The basics will teach you how to simulate many users with tools and what you need to know before you can judge if you created a good test script.

When we use the term load we are talking about the workload put on a system. Typically this is the load provided by the multiple users that we expect when a system is in production. When you have a web application that you expect to be used by 200 simultaneous users under normal circumstances, those users provide the load. What we test for is the performance of the software in terms of response times, throughput, etc. at the given load. So when we discuss load testing, we discuss testing the performance of the application in terms of response times, throughput, etc. when we apply a load that is the same as what we expect during production.
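To make response time and throughput at a given load concrete, here is a small illustrative calculation. The numbers and the helper name `summarize` are invented for this sketch, not taken from the book:

```python
def summarize(timings):
    """Summarize a load test run.

    timings: list of (start, end) tuples in seconds, one per completed request.
    Returns (average response time, throughput in requests per second).
    """
    response_times = [end - start for start, end in timings]
    avg_response = sum(response_times) / len(response_times)
    first_start = min(start for start, _ in timings)
    last_end = max(end for _, end in timings)
    throughput = len(timings) / (last_end - first_start)
    return avg_response, throughput

# Four requests observed over a 2-second window.
timings = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]
avg, tput = summarize(timings)
print(avg, tput)  # 0.5 seconds average, 2.0 requests per second
```

A real load test tool records exactly this kind of raw timing data for every virtual user and derives the same two figures from it.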


2.2 A few definitions in performance testing


First it is time to set some definitions.

2.2.3 Stress testing


Stress testing has many similarities with load testing. It is about testing with high loads. The big difference is that with stress testing we go way beyond the expected load and apply load until the system can't handle it anymore. This is done for a few reasons: we want to know when a system breaks. But most of all it is to determine what happens at that moment. Is there a full system outage? Do we lose transactions? Are there data synchronization issues? Additionally we can monitor to see which parts of the system and which resources are the bottleneck. That will give a clear indication of what to improve or expand if we have to. Stress testing is therefore about finding out when it breaks and what happens when it breaks.
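The "apply load until it breaks" idea can be sketched in a few lines. Everything here is hypothetical: `handle_load` is a stand-in for the system under test, and the capacity of 250 users is an invented number:

```python
def handle_load(load):
    """Stand-in for the system under test: copes up to a (hypothetical)
    capacity of 250 concurrent users, then starts failing."""
    return load <= 250

def find_breaking_point(step=50, max_load=1000):
    """Ramp the load up step by step until the system can no longer cope,
    and report the last load level that still worked."""
    last_good = 0
    for load in range(step, max_load + step, step):
        if handle_load(load):
            last_good = load
        else:
            return last_good, load  # (last passing load, first failing load)
    return last_good, None

print(find_breaking_point())  # (250, 300)
```

A real stress test does the same ramp with virtual users while monitoring what exactly happens at the breaking point.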

2.3.1 Simple example of a web application


To explain what a load generation tool does we will use the example of testing a web application. For this example we will use a fairly simple setup. There are two servers: a web server and a database server. The web server runs the code that provides the web application. The data is stored in the database server, and the web application queries this database when it requires data. When this application is in production, we expect that roughly 100 users will be using the application at the same time. That means that the web browsers of 100 users will request pages from the web server. In production, that would mean 100 computers and 100 people in many different places.

2.2.4 Endurance testing


A little less known, but certainly related and important, is endurance testing. This is about testing with a given load, usually not all that high but significant nonetheless, over a prolonged period of time. The reason for performing these tests is to find the problems and defects that occur over time. A well-known example of such a problem is the memory leak. Due to defects, over time, with the load not increasing, the memory consumption does increase. That is a clear indication that there is a memory leak. If this happens in production, the systems need to be restarted to remedy it.

PAGE

2.3.2 Simulating the users


What we do to simulate this is use a tool to simulate the HTTP traffic between the browsers and the web server. The web server cannot tell the difference between the requests of real users and those of our tool. The requests are multiplied by the number of users we aim to simulate. Usually these are referred to as virtual users.
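A minimal sketch of the virtual-user idea, assuming a pluggable `send_request` function rather than any real load test tool (the function names and request string are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def virtual_user(user_id, send_request):
    """One simulated browser: performs the scripted request as a real user would."""
    return send_request(f"GET /home user={user_id}")

def run_load(num_users, send_request):
    """Fire the same scripted request from many virtual users in parallel."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(lambda uid: virtual_user(uid, send_request),
                             range(num_users)))

# Stub transport, so the sketch is runnable without a real web server.
responses = run_load(5, lambda req: f"200 OK ({req})")
print(len(responses))  # 5
```

A real tool does the same multiplication but speaks actual HTTP to the server, which cannot tell the virtual users from real ones.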

2.3 Load generation


What all these tests have in common is that the aim is to provide load similar to what we expect in production. It is not really feasible to do this by hand. You could in theory, but it would require many people to do the work. The common approach is to generate the load using a load and stress test tool.

2.3.3 Record and playback


Creating this traffic by hand is a daunting task that would not only require a lot of knowledge of the HTTP protocol; it would also be nearly impossible to imitate exactly what the browsers and users would do. For that, the load generating tools provide record and playback functionality. When the tool is in recording mode, you use your browser and perform the actions as the real user would. All the traffic between the browser and the server is recorded. The tool then turns the recorded traffic into a test script that it can rerun to play back the exact same requests as were performed during the recording.

Even when the tool tells you that the script passed, we still don't know if it really passes. We know that we send the same requests, but we don't know if the answers we get are the correct ones. Take the following example of a web application. You have created a test script that tests the login procedure of the application. The script sends the requests. But when the login is requested, the server, due to a parametrization omission, doesn't answer with a page saying you're logged in; it answers instead: "login denied". On the server side this probably means it does a lot less than when the login is successful, giving you different results than what you should get. Therefore in any test script you need to build in content checks. For instance, if the screen of a successful login shows the text "welcome user Username1", build in a check that tests that the response of the server contains the words "welcome user". It can also be necessary to check, in systems that should be updated by your test, that the updates are actually done.
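Such a content check can be as simple as a substring test. The sketch below uses the "welcome user" text from the example above; the function name is invented:

```python
def check_response(body, must_contain="welcome user"):
    """Content check: playback only really passed if the server's answer
    contains the text a successful login would show."""
    return must_contain in body

print(check_response("<p>welcome user Username1</p>"))  # True
print(check_response("<p>login denied</p>"))            # False
```

Most load test tools offer exactly this kind of assertion on the response body; without it, a "login denied" page counts as a pass.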

2.3.4 Parametrization
Now what would happen if you recorded the creation of a new user account? During recording you would for instance create the user testuser1. When you rerun the exact same request, you would try to create the user testuser1 once again. This time, however, the server would reply something like "user already exists". All the coming requests would subsequently receive error messages, effectively making sure that the load on the server is not what you wanted to generate. And this is for the simple test of making a new user. For changing data, for most tests, data needs to be used that is available and does not conflict with certain business rules. There is more. Many of the requests that a browser makes are based on returning information the browser received from that server, such as cookies and session ids. Many applications even have their own information going back and forth to keep track of requests. For this a script nearly always needs parametrization before it can be rerun, especially before it can be rerun under load. Parametrization is one of the most difficult and time-consuming parts of load and stress testing.
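A rough sketch of what parametrization amounts to, with invented names (`TEST_USERS`, the captured session id); real tools do this with data files and correlation rules rather than code like this:

```python
import itertools

# Contents of a (hypothetical) test data file: one row per virtual user.
TEST_USERS = ["testuser1", "testuser2", "testuser3"]

user_pool = itertools.cycle(TEST_USERS)

def parametrized_request(captured_session_id):
    """Build the 'create account' request with a unique user name and the
    session id captured from the previous server response, instead of the
    values that were burned into the recording."""
    return {
        "action": "create_user",
        "username": next(user_pool),     # unique data per virtual user
        "session": captured_session_id,  # resend what the server gave us
    }

first = parametrized_request("sess-abc")
second = parametrized_request("sess-def")
print(first["username"], second["username"])  # testuser1 testuser2
```

The two essentials shown here are exactly the ones the text names: unique test data per virtual user, and resending server-generated values such as session ids.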


2.3.6 Test the water


When this is all ready and the script works for one virtual user, it makes sense to try it with just a few virtual users, for instance three. Then run the test with the three virtual users, and validate in the system under test that everything was updated as expected. This is done to find errors in your scripts, such as forgetting to ensure that the script is set to use unique values from the test data. All too often testers parametrized everything correctly and filled all the test data files, only to discover later that the script used the first entry in the test data file for all virtual users. Often this requires setting the properties of the parameters correctly. The reason not to jump in right away with a full-blown test is that detecting and troubleshooting issues is much easier with just a few virtual users.

2.3.5 Content checks in the script


So when we've done all that, the script runs perfectly: the load test tool reports that the playback passed.


It also prevents you from burning your test data.

2.4.1 Monitoring tools


Monitoring tools will continuously monitor the systems and record the resource usage that they were made for. Common things to monitor are:

- CPU usage
- Memory usage
- Swap space used
- IO usage
- Database activities
- Network activity
- Web server connections

Some load test tools have monitoring built in. Other tools don't provide this, and then you have to rely on monitoring outside of the test tool. Even for the tools that have monitoring built in, it often makes sense to also set up some monitoring outside of the test tool. For instance, if you want monitoring on the database, most database products feature their own monitoring tools that provide more in-depth information and, most of all, in a format that is well known to the database experts you will rely on to help you find and solve issues.
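Conceptually, a monitor is just a loop that samples a metric at a fixed interval and keeps timestamped readings. The sketch below uses a dummy counter in place of a real CPU or memory probe; the names are invented:

```python
import time

def monitor(sample_fn, interval, samples):
    """Poll a resource metric at a fixed interval and keep timestamped
    readings, the way an external monitoring tool would."""
    readings = []
    for _ in range(samples):
        readings.append((time.time(), sample_fn()))
        time.sleep(interval)
    return readings

# Stand-in metric; a real setup would read CPU or memory figures here.
counter = iter(range(100))
data = monitor(lambda: next(counter), interval=0.01, samples=5)
print([value for _, value in data])  # [0, 1, 2, 3, 4]
```

The timestamps are what later let you line the resource readings up with the response times measured during the same test run.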

2.3.7 Summarizing the steps of creating a test script


So these are the steps in creating a valid test script in a load generating tool for testing the load of multiple users:

- Record the activities by setting the tool to record and performing the actions within the system under test.
- Parametrize the script so that when we run the load test every virtual user has different data, and so that the script captures server responses that need to be resent.


- Collect test data for the parametrized items that need to be filled with data.
- Build in checks to see if the server gave the correct responses.
- Verify the parametrization by running a test with a few virtual users to detect parameter property errors.

There is much more to tell about creating test scripts. But these are the basic steps.

2.4.2 Selection of monitoring items


Some tools make it really easy to monitor each and everything. It then becomes very tempting to turn them all on, especially if there are analysis tools available that allow you to select and deselect every monitoring item during the analysis phase. Nonetheless, too much is often too much. It is not so much that there is not enough space for all the data or that it becomes unusable. Rather, with so much information, people stop looking carefully and seeing patterns. If you have to deselect all the less relevant items for every test, it doesn't take long before you stop looking at all and just go to the summary page. In the rare cases that a test is very difficult to rerun you may want to enable a bit too much just to be certain.

2.4 Setting up monitoring


Recording beginning and end times is a basic function of the load test tools on the market. But when the time comes to analyze the results we will want more data. For that we need to utilize monitoring software.


In all other cases, keep it limited to what you would look at before detecting an issue in the first place. When you do detect an issue that needs further investigation, rerun the test with added monitors.

All these items will now be explained in more detail.

2.5 Reporting
When everything worked out and we finished a test, it needs to be reported. As always with reporting, what you report depends greatly on who you report to. You may have to report to an accepting party; sometimes a software development project needs a report to show it has met its targets; often you are involved in testing to reproduce performance issues seen in production and to verify that a solution provides improvement. All of these different reasons for testing require a different focus in your report. Obviously there are also overlaps and similarities, but reporting is important and it is equally important to report the right thing. Reporting issues also needs to be done right: although it is in many ways similar to reporting regular issues or findings, performance testing puts some strong demands on what you include in the issue report. We will explain much more in the chapter on reporting.


2.6 Conclusion
The basics were explained here to provide an overview of the activities. Performance testing can mean load testing, stress testing or endurance testing. To perform a test, a tester needs to perform quite a few steps. Recording is just one step; the scripts need to be customized to handle different data and server responses, and to check for real results. To provide insightful reports we need proper monitoring, which needs to be set up. When a test is done, the results need to be reported, and reporting on performance depends on the purpose of the test and on to whom you report.


5. Get in Line

5.3 A simple queue at the supermarket


We all are familiar with queues in supermarkets. In a very simple model there is one cashier and a waiting line. You have customers arriving and customers leaving.

5.1 Introduction
Computers behave in a very simple way: they stand still or they run at full speed. Yet we often consider the systems not to be running at full capacity. The systems are quite often waiting for something. To understand performance and to describe the items affecting it, queuing theory is a good approach. At minimum a performance tester and/or engineer should understand queuing in our systems. Even better is if the tester applies queuing models and monitors all components. That way the behavior gets documented much better and allows us to optimize performance. This chapter explains the basics, but we encourage you to learn more about queuing theory. The book Analyzing Computer Systems Performance with Perl::PDQ by Neil J. Gunther does this very well.

The customer that is being served is spending time on unloading the cart, paying the cashier and packing up: the service time. The other customers in the line are waiting. The total time it takes you from arrival to departure is called the residence time.
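Residence time for a single cashier with one FIFO line can be computed directly from arrival and service times (residence time = total waiting time + service time). The numbers below are invented:

```python
def residence_times(arrivals, service_times):
    """Single cashier, one FIFO waiting line. For each customer compute
    residence time = waiting time + service time."""
    results = []
    free_at = 0.0  # moment the cashier becomes free
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, free_at)      # wait if the cashier is busy
        free_at = start + service
        results.append(free_at - arrive)  # departure minus arrival
    return results

# Three customers arrive together; each needs 2 minutes of service.
print(residence_times([0.0, 0.0, 0.0], [2.0, 2.0, 2.0]))  # [2.0, 4.0, 6.0]
```

The third customer's residence time is 6 minutes even though their own service takes only 2: the other 4 minutes are pure waiting, which is exactly the quantity queuing models let us reason about.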



5.3.1 Small shop with one checkout lane


If we draw the queuing model for a small shop with just one checkout lane we get:

Algorithm 5.1 Residence time = total waiting time + service time


5.2 Queuing theory for computer systems


In our day to day lives much of our time is spent waiting: waiting for the elevator, for the checkout line and so on. The queuing concept is also very applicable to computer systems. When we request a page, queuing happens throughout the systems. It can happen at the nanosecond level at CPUs, but it can also be seen at as high a level as components such as a web server or a database server. The use of queuing theory started as early as 1917, for understanding the availability of telephone networks. Later it started being applied to computer systems.

Figure 5.1 Simple queue

5.3.2 Multiple lanes

In any larger sized supermarket there will be more checkout lines. When you arrive you pick your lane. You do this based on the number of other customers in the lines and how full their carts are. You cannot influence your service time (to keep it simple we will ignore that some cashiers are notably slower or faster than the others), but you can influence your waiting time. And this depends on the number of customers ahead of you and their service time. This model turns out like this:

Figure 5.2 Multiple checkout lanes

Other processes will have to wait. Anyone that has encoded a video file on an older PC will understand the effect.


5.3.3 One lane, multiple cashiers

And in the larger supermarkets the cheese sections often use a system where you get a number and wait until your number is up. The good thing is that you do not need to assess for yourself which line will be the fastest for you. Rather than having three queues there is one queue that is handled by multiple people. The model would look like this:


As soon as the encoding process starts, anything else will hardly get through, although operating systems do have some methods to allow processes to get some CPU time. Now if the service time of the processes is small you may hardly notice. The wait time gets to be so small that the residence time ends up being small as well.



Figure 5.3 Drawing numbers


If we look at a higher level at a system with multiple web servers serving pages to clients, the model will resemble the supermarket with multiple checkout lanes. If there are multiple web servers, some load balancing service is in place that will receive the request and give it to one of the servers. Like the customer deciding based on the number of preceding customers and the contents of their shopping carts, the load balancing service will also make a decision on which server to hand the request to, and that basically determines in which queue it ends up.
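The load balancer's decision can be sketched as picking the server with the shortest queue. This is just one possible policy (real balancers may use round robin, least connections, or other strategies), and the function name is invented:

```python
def pick_server(queue_lengths):
    """Least-busy dispatch: like a customer choosing the checkout lane with
    the fewest people in it, hand the request to the server whose queue is
    currently the shortest (ties go to the first server)."""
    return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

# Three web servers with 4, 1 and 3 requests queued: the request goes to
# server 1, the emptiest lane.
print(pick_server([4, 1, 3]))  # 1
```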

5.4.2 Web server with multiple instances


5.4 Queues in a server environment


The queues in the supermarket can be compared to queues in our computer systems. Let's compare a few common examples with the models from the supermarket.

Figure 5.4 Three webservers

Interesting to see above is that in this very simple model you can already see that there are multiple locations where queues can occur that cause wait time.


5.4.1 One CPU system


On a computer system with one (single core) CPU, at the CPU level the queuing model resembles the model of 5.1. As soon as a processor gets a task it will do that task, finish, and take on the next task.

5.4.3 One queue, multiple processes

And the last model can be seen as well. If there is one service that can handle multiple requests and makes sure that such requests are dealt with


by multiple worker processes, such as most web serving software (for example Apache and Microsoft's IIS), you will see something similar. If the software has many worker processes available the queue may be small, but there is a central queue where the requests arrive. It should be noted though that if a single server is set up to have many worker processes, on the system level there will be queuing at for instance the CPUs or for memory.
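The central-queue-with-workers model can be sketched with several threads pulling from one shared queue (threads stand in for worker processes here; all names are invented):

```python
import queue
import threading

def serve(requests, num_workers):
    """One central queue of incoming requests handled by several workers,
    like a web server's worker pool behind a single accept queue."""
    inbox = queue.Queue()
    handled = []
    lock = threading.Lock()

    def worker():
        while True:
            req = inbox.get()
            if req is None:  # sentinel: no more work for this worker
                break
            with lock:
                handled.append(f"done:{req}")

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for req in requests:
        inbox.put(req)        # all requests arrive in the one central queue
    for _ in threads:
        inbox.put(None)       # one sentinel per worker
    for t in threads:
        t.join()
    return handled

print(len(serve(["r1", "r2", "r3", "r4"], num_workers=2)))  # 4
```

With enough workers the central queue stays short, but as the text notes, the queuing then simply moves down to the CPUs and memory.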

However, as the waiting line gets larger, the residence time for a request becomes large, even though the processor, when it got around to it, calculated just as fast as when it was less busy. The service time stays the same. As a result, when the CPU starts to get more requests, the chance that requests of other processes are already there increases, so processes will find queues at the CPU. There will however still be moments that the CPU is idle, until the load is so high that it is continuously occupied. At that time the waiting time is usually large and the overall residence time at the CPU is large as well, hurting response times. Overall it feels as if the CPU is doing its calculations slower, whilst really we just have to wait longer.
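This behavior matches a standard result from queuing theory (the M/M/1 model, covered in depth in Gunther's book but not derived in this excerpt): residence time R = S / (1 - U), where the service time S stays constant and only the waiting grows as utilization U approaches 1. A small illustration:

```python
def residence_time(service_time, utilization):
    """Classic M/M/1 queuing result: R = S / (1 - U). The service time S
    stays constant; only the waiting grows as the CPU gets busier."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# The same 10 ms of CPU work feels ever slower as utilization climbs.
for u in (0.25, 0.50, 0.90):
    print(u, round(residence_time(0.010, u), 4))
# 0.25 -> 0.0133, 0.5 -> 0.02, 0.9 -> 0.1
```

At 90% utilization the request spends ten times its service time in the system: the CPU never got slower, the queue just got longer.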

5.5 Understanding queuing at the CPU level


I once had a colleague report that a certain process was taking almost 100% CPU time. The administrator dryly responded: "good, then we're using the CPU efficiently." He was right; yet for good reasons, if a server reports 100% CPU time we often view this as a problem.

Like we said before, a computer component basically runs at full speed or does nothing. If your CPU monitoring tool tells you the CPU is running at 25%, that does not mean it runs at 25% of its possible speed. It means that for 25% of the measured time it was running at 100% and for 75% it was doing nothing, or as this is usually called: idle. If we draw a queuing center for a CPU it looks like this:

Figure 5.5 Queueing center for a single core CPU

5.5.1 Multicore or multiprocessors

Server systems usually have more than one CPU. And modern CPUs often have multiple cores. Multiple cores effectively work as multiple CPUs. At the CPU or CPU core level the queuing center does not change; we just have multiple centers, like in the supermarket where we open more checkout lanes: the service time itself does not get shorter, but the wait time does. In a system where we add equally fast CPUs, the residence time at the CPU will decrease if there are enough processes running.



For most processes this waiting line hardly has any impact. The service time for most requests is so small that the waiting line is hardly noticeable. Even CPU intensive processes do not send one large request, but many small ones. As a result, even if the CPU is busy, processes can get their requests in.

There is a catch though. Not every process will divide its workload over the extra CPUs. Suppose you have a fully loaded shopping cart in the supermarket and there are six open checkout lanes. Your residence time would not be shorter if you were the only one in the shop; in that case six open lanes are just as fast as one lane, unless you find some way to divide your cart into six smaller carts and check out in six lanes. In computer systems this is the same. A process will be handled by one processor or core unless the application was built in such a way that the workload is divided over multiple processes or threads. For most processes you will not end up in the situation that one CPU is very busy whereas the others are idle. But on the process level the queuing center for single threaded applications looks like this:

Figure 5.6 Single thread or process

If a process ends up in a lane it will be in that queue and handled fully by that core. Multithreaded programs get this center:

Figure 5.7 Multithreaded

The process will end up using both CPUs. For situations where a server has many small processes, there is not much difference. There is however a big difference if we have large CPU intensive processes. Multithreaded processes will use as much of all of the CPUs as they can. You can compare this with unloading your cart divided over all the checkout lanes. If you are the only one in the shop that is the most effective, and the overall residence time will be the smallest. However, if you are in a supermarket other customers will not like this. And sometimes in computer systems we don't like this either. When we for instance have a maintenance batch job running on the server that is CPU intensive, this will lead to a degradation of the other services. While making all CPUs chip in will shorten the time of the batch job, the degradation of performance may be a problem. If you make sure the batch job is not multithreaded or divided over multiple processes, you will make sure it uses only one core of a CPU. It will then run much slower if you have many cores, but you will have little impact on the other processes.

5.6 Queuing on higher levels

The principles described here work at higher levels in the same way. You can look at a system on the infrastructure level, describing the servers and the network, and the same principle applies. If you for instance have a couple of web servers, database servers and a load balancer, and you apply load, you should be able to detect queuing somewhere in the system. Even though possibly none of the systems are stressed in any way, if you analyze the processes as they go through the entire system it must be possible to find out the residence times and service times. In fact there is queuing occurring in at least one location, even if it is hard to detect. If there was no queuing anywhere, each process would have finished in the blink of an eye.

For situations where you find a queuing center at a higher level, it makes sense to go down a level in the queue to find out more. In the example used we could for instance see queuing at the web servers. That would mean we would have to look into the monitoring of the web servers. There we would have to find queuing at for instance the CPU or memory. This would tell us much about where the most time is used and possibly where it can be improved.

5.7 Is there always a queue?

We once tested a batch process that was taking a long time to complete. We started to run this process in parallel on one server and we noticed that if we ran three processes instead of one, it did indeed finish nearly three times as fast. This is not weird on a multi core CPU or indeed a multi CPU system, but we were not seeing high CPU utilization. Now if there had been another queuing center than the CPU, we would not have expected to see almost the same increase in performance

PAGE

Performance testing, a practical guide and approach

as the amount of processes we ran. So we did a deeper monitoring of the CPU. What we then noticed is that the process had many locks. The program had internally a loop with a sleep in it. As this program was called through the batch process in sequence, although it required a small service time, the sleep time meant that it waited until completing. As a result, we got a throttled process which took a long time to complete but was not continuously stressing the CPU. The solution to increasing the performance was laying in getting the sleep time out of the program. What is important to remember is that you should always find a queuing center. If there doesnt seem to be one, continue investigating until you do. Once found, you may find an option to optimize the software. At that moment it is running at full speed. So if you can, especially reporting to technical teams, report where the queuing center is or where you found out that the software is not performing at its maximum performance. You may enable the team to greatly improve the performance. Of course in the case of the batch process, the throttling may be the desired behavior. Considering that it was doing its job without putting much load on the CPU, it also didnt hinder other functionality. If you do use maximum performance, the batch process would end earlier but would have led to other users getting a much slower system.
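The anecdote can be reproduced in miniature. The sketch below (Python, with invented timings; `throttled_task` is a made-up stand-in for the batch program) shows why parallel runs of a sleep-throttled process scale almost linearly while the CPU stays idle: the time is spent waiting in the sleep, not being serviced.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def throttled_task(calls: int, delay: float = 0.02) -> None:
    """Stand-in for the batch program: each call has a tiny service
    time, but a sleep hidden in its loop dominates the residence time."""
    for _ in range(calls):
        time.sleep(delay)  # the hidden sleep; the real work would sit here

# Three runs back to back: wall-clock time is about 3 * calls * delay.
start = time.perf_counter()
for _ in range(3):
    throttled_task(10)
serial = time.perf_counter() - start

# Three runs in parallel: they overlap during the sleeps, so wall-clock
# time drops to roughly that of a single run -- yet the CPU stays nearly
# idle throughout, because the tasks are waiting, not computing.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(throttled_task, 10) for _ in range(3)]
    for f in futures:
        f.result()
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s")
```

The near-linear speedup combined with low CPU utilization is exactly the symptom that pointed us away from the CPU as the queuing center.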

5.8 Conclusion

Similar to queues in day-to-day life, computer systems behave in a way that mimics these queues. Computer components don't perform at a certain percentage of their capacity; they perform for a certain percentage of the time. The total time it takes a process to complete is called the residence time; the service time may be much smaller if the process has to wait to be serviced. Waiting is done in queues. When load and stress testing, using at the very least the concept of queuing helps us find where the most time is spent. When reporting to a technical team, it makes sense to make a model and show where the queuing takes place. If you can't find any component that has its resources fully utilized, you should investigate further until the cause is found, especially if you do see decreasing performance with added load.
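The book stays qualitative, but the relation between service time, waiting and residence time can be made concrete. The sketch below assumes the simplest textbook queuing model (M/M/1, a single queuing center with random arrivals); the numbers are invented for illustration.

```python
def residence_time(service_time: float, utilization: float) -> float:
    """Residence time R = S / (1 - U) for an M/M/1 queuing center.

    At low utilization the residence time is close to the service time;
    as the queuing center saturates (U -> 1), waiting dominates and the
    residence time grows without bound."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# A 100 ms service time is barely noticeable at half load...
print(round(residence_time(0.1, 0.50), 3))  # 0.2
# ...but at 95% utilization the same request spends most of its
# residence time waiting in the queue:
print(round(residence_time(0.1, 0.95), 3))  # 2.0
```

This is why "the CPU is only at 70%" can still coincide with painful response times: it is the time spent waiting, not the raw capacity, that users experience.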


10. For Managers

10.1 Introduction
If you are a typical (test) manager, you probably skipped the rest of the book and went straight for this chapter. That is OK; this chapter is here to outline what you should look for in a performance test team. We want to give you some handles for judging whether you are getting good work from your team, without getting in their way.

10.2 Intake
If you are involved in the selection of the team, the vision and technical experience of the team are very important. There are some questions you can ask to help you evaluate this, even if you are not an expert yourself:
1. Ask for their vision. A team or tester that keeps repeating that they had the expensive training with the expensive tool, and years of experience using it, shows no vision. They only show that they can show up for class and for work.
2. If they present an approach: how they usually derive test cases, their approach to monitoring and reporting, what input they need from other teams, what expertise they would like available, and so on, they show a real understanding of performance testing and not just of their tool of choice.
3. Another interesting question is which load testing tool they prefer. There is no best tool for every situation. It makes sense that they prefer a tool with which they have a lot of experience, but for some applications specialized tools may exist. Sometimes a competing tool supports the protocol you need much better than the one they are familiar with. Hire a performance test team, not a tool operating team.
4. Experienced performance testers have lessons learned and interesting anecdotes. Eager and enthusiastic testers will often talk about them even without being asked.

10.3 The Plan
Check for the following important items:
1. Planning: the planning should take into account time to prepare, time to learn how to record your application, time for analysis, time for extra tests based on the results (exploratory testing) and time for reporting.
2. The test environment: expect the test team to want a test environment that is as production-like as possible. If they do not, check whether they explain properly why they can do without one.
3. Cooperation with other teams: if the performance team locks itself up in a room and has little interaction with other teams, there is a good chance they will not get a good grip on the business functionality. Check whether they address that the team will require time and input from others in the organization, in particular the other test teams.
4. Monitoring: check for a strategy that clearly explains how and what they wish to monitor.
5. Support: unless the development team performs the test, the performance test team should perform the test, not fix issues. The team and all their tools and hardware are usually expensive. They should be able to rely on support when the systems are not working, since that prevents them from testing. Also remember that specialists, such as DBAs, may need to be called in to help. Check whether they have taken this into account and arranged for this support.

10.4 The progress
So the plan is right and the team seems to know what they are doing. During this phase it is important to monitor progress.
1. Deliverables to the team: the biggest problems for the team are usually the deliverables of other teams to the performance test team. Test environments are not available, releases don't work, systems appear to rely on other systems that everyone forgot to mention, and so on. If you want to get your money's worth, help the team get what they need.
2. Finish the full first test as soon as possible: when starting the recording and creating the test scripts, they will encounter many hurdles, because every application has its own peculiarities. But as soon as the first script is fully working and the first load or stress test has been done, things get easier. See if you can get to that stage as quickly as possible, especially if not everything has been delivered yet. Getting to the first finished test deals with the hardest parts for the team and makes sure that when the real deliveries arrive, they can move at a better speed. Focus on a full test: not just a working script, but a script that ran with multiple virtual users.
3. Check if they analyze as they go: a good test team will analyze the results as they go. Even if a test seems to perform well on items such as response times, analyzing may trigger good testers to investigate something important they hadn't thought of. And you also don't want to find out at the end that the performance is not good enough when even the first test already showed issues.
4. Checks: the results can be very deceiving; the tools may report excellent behavior, only for you to find out later that nothing really happened. See if they checked that the tests really performed the full business process they set out to. Do not accept the line: "just wait until the report".

10.5 The test report
A performance test should produce a test report. Here are some tips to assess the report.
1. Short and understandable: performance testers can very easily obfuscate sub-par work with a thick report filled with techno babble, graphs and huge tables. Don't accept that. Tell them you expect a legible and short report. Graphs are great for illustrating a point, not for hiding it. Appendices exist to add literal weight to a report (and to provide information for anyone who wants to draw their own conclusions from the data).
2. Resource utilization: it is important to realize that not every good performance tester uses the terminology and approach of this book, so they may not know or use the term queuing center. But you may ask them to report on what exactly in the application was the bottleneck that limited the performance. This can be as simple as a report that says the application was CPU bound or memory bound. If they have no idea, then their monitoring was not good enough for a technical report. Lack of these items is often also a good indication that their conclusions on response times in relation to the actual production situation are not that solid.
3. Modesty: performance testing cannot give exact results for how the application will behave in production. The report should make the limitations clear. A good indicator of a serious team is modesty in their conclusions and statements.
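To make the resource utilization item concrete: the evidence behind a statement like "the application was CPU bound" can be as simple as a summary of monitored samples. The sketch below is hypothetical; the sample values and the thresholds are invented, not from the book, but it shows the level of detail you can reasonably ask a test team for.

```python
# Hypothetical samples collected during a load test, one per interval:
# (CPU utilization in percent, free memory in MB)
samples = [(96, 5200), (94, 5100), (97, 5150), (95, 5050)]

avg_cpu = sum(cpu for cpu, _ in samples) / len(samples)
min_free_mb = min(mem for _, mem in samples)

# Crude, invented thresholds -- a real report would justify its own.
if avg_cpu > 85:
    verdict = "CPU bound"
elif min_free_mb < 200:
    verdict = "memory bound"
else:
    verdict = "no saturated queuing center found; investigate further"

print(verdict)  # CPU bound
```

A team that cannot produce even this much backing data probably did not monitor well enough to support its conclusions.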

Biography


Albert Witteveen has been working both as an operations manager and as a professional tester for nearly two decades. The combination of setting up complex server environments and professional testing almost automatically led to a specialisation in load and stress testing. He wrote a practical guide to load and stress testing, which is available on Amazon. Lately his focus has been on the performance of applications hosted with Cloud technology, which has given him insight into the changing needs of performance testing for Cloud-based applications.


Join the EuroSTAR Community


Access the latest testing news! Our Community strives to provide test professionals with resources that prove beneficial to their day-to-day roles. These resources include catalogues of free on-demand webinars, ebooks, videos, presentations from past conferences and much more...

Follow us on Twitter @esconfs


Remember to use our hash tag #esconfs when tweeting about EuroSTAR 2013!

Become a fan of EuroSTAR on Facebook
Join our LinkedIn Group
Add us to your circles
Contribute to the Blog
Check out our free Webinar Archive
Download our latest eBook

www.eurostarconferences.com