
INTRODUCTION AND OVERVIEW

Introduction

VSTS is a tool mainly intended for developers who unit test, web test, and load test their own applications to measure and improve their performance.
In VSTS, to perform a load test we first need to record the script as a web test or a unit test. With a web test, we can record applications that a user interacts with through the browser, i.e. over the HTTP/HTML protocol. With a unit test, we can record stand-alone applications, provided they are coded in any of the .NET-supported languages.
Compared with LoadRunner, a well-known tool for load and stress testing, the work we do in a web test is similar to what we do in VuGen: all the enhancements are made to the script so that it can run multiple times. The work we do in a load test is similar to the Controller and Analysis parts of LoadRunner.

TO CREATE A PROJECT

• Click on the Start button and open "Microsoft Visual Studio 2008" as shown in the snapshot.
• Go to File (on the menu bar) -> New Project

• On the "New Project" window that pops up, select one of the project types. The type can be one of the following:
 Test Projects
 Visual Basic .NET
 Visual C#
 Visual C++

The user can choose any of the above depending on whichever language he is comfortable with. If he selects "Test Projects", the default language in which the advanced scripting will be done is Visual Basic .NET.

• Select the “Test Project” Template.

• Browse to the location where you want to place your scripts and specify the name for the
project in the “Location” and “Name” textboxes in the “New Project” window.

• Click on OK button and a new Project is thus created.


TO OPEN AN EXISTING PROJECT

Open Visual Studio 2008. Go to File (menu bar) -> Open -> Project. Browse to the required project location and open the <Solution_name>.sln file.

To view all the items under a project

Click on the "View" option on the menu bar and select "Solution Explorer". The "Solution Explorer" lists all the items (e.g. web tests, load tests, classes, etc.) included in the project.

SCRIPTING IN WEB TEST

Recording Web Test

To record a script one first needs to create a project.

• Open an existing project in which you want to record the script. On the menu bar, click on the "Test" option.
• In the "Test" menu select the "New Test…" option.

• In the "Add New Test" window that pops up, select the "Web Test" option.

Then add a new test by giving an appropriate test name in the Test Name field, select a test project from the "Add to Test Project" dropdown list and click OK; or create a new test project by selecting "Create a new Visual Basic test project", which pops up a message asking you to enter the new project name.

Then enter the test project name in the New Test Project field and click on Create.

Then start recording the web application by giving the URL. The picture below shows the start recording, stop recording, and add comment options available during recording. [Screenshot callouts: enter URL here; recording options; web browser]

• Enter the URL you want to record in the navigation bar and then click on the Go button.

• Navigate through the application that you want to record. The URLs are recorded by the tool
along with their Query String and Form Post parameters.

• When you are done with the recording of the application, click on the Stop button.

You can pause the recording and resume it by clicking on the Pause and Record buttons respectively. Comments can be entered during the recording by clicking on the Comment button. Similarly, the recorded URLs can be cleared by clicking on the Clear button.

Once the recording is stopped the browser closes automatically. The recorded URLs can be seen
in the Web Test. Editing of the Web Test Script will be taken up later in the document.

Playing back the Recorded Script

Once the script has been recorded, the tester plays it back to check the validity of the recorded script.
Playback is initiated by clicking on the Playback button.

The following window (called the Web Test Player) is displayed during playback. [Screenshot callouts: check field for extraction and validation rules applied; contents of the various parameters passed and extracted]

To check whether the extraction or validation rules applied by the tool, as well as by the user, have passed or failed, click on the "Details" tab. The status and result of the extraction and validation rules can be seen in the "Validation and Extraction Rules" grid and the "Context" grid respectively.

To see the request details, click on the Request tab. Similarly, to see the response to the submitted request, click on the Response tab. To see the page obtained on submission of the request, click on the Web Browser tab.

The following are the playback settings provided by VSTS. If you want to change the run-time settings, click on the settings button.

Changes made in this window affect only the run for which the changes are made. [Screenshot callouts: number of iterations set here; number of iterations equal to number of rows in the database; browser option; network option; think time simulation option]
The following options are available


• Number of Playback runs:
This option allows you to state the number of times you would like to play back your
recorded script
• One row per data source row:
This option, when selected, runs a number of iterations equal to the number of data rows in the data source bound to the script.
• Browser Type
This option allows you to select the browser type. The browser options are as follows
Internet Explorer 5.5
Internet Explorer 6.0
Netscape 6.0
Pocket IE 3.02
SmartPhone
• Network Type
This option allows you to select the Network type on which you want to run the tests.
The options are as follows
LAN
T3 6.0Mbps
T1
Cable/DSL 1.5Mbps
Cable/DSL 768k
Cable/DSL 384k
Dial-up 56k
Dial-up 33.6k
Dial-up 28.8k
• Think time simulation
The think time simulation can be switched on or off during playback via the checkbox, or by clicking the think time simulation button.

Once the settings are checked as per requirements, the user can start the playback by clicking on the Playback button.

To run the test step by step, click on the Step button.

The test can be stopped by clicking on the Stop button.

In case you want to run the web test iterations on agent machines rather than the controller machine, you can either:
• Double click on "localtestrun.testrunconfig" in "Solution Explorer" under "Solution Items",
or
• Click on the "Test" option in the menu bar, select the "Edit Test Run Configuration" option and then click on "Local Test Run".

• In the "localtestrun.testrunconfig" window that pops up, select the "Controller configuration" option and then select the "Run test(s) remotely" option.

• In the dropdown below this option, select the controller name.

Before running tests remotely, check that the agents are in the ready state.

Editing Web Test Scripts

Creating Transactions:

• Right click on the web test script name and then click on the "Insert Transaction" option. The "Add Transaction" window will pop up.

• Enter the Transaction name in the “Transaction name” text box.


• Select the first item and last item of the Transaction.
• Click on the OK button. The items chosen will now move into the transaction.
(Note: if you delete a transaction, then all the steps under it are also deleted. So, to delete a transaction that is not required, first drag all the required steps out of the transaction and only then delete it.)

The transaction can also be created by right clicking on any one of the steps to be included in the
transaction and selecting the Insert transaction option.

Parameterization

• First create a CSV file containing the data that we need to parameterize during replay.
• Then right click on the appropriate link and go to the Properties box; there, select the value that you want to parameterize.

• Then select "Add data source", which takes you to the New Test Data Source wizard.
• Here, select the CSV file that you created previously.
• After adding the CSV file, it shows the number of columns and number of rows available in the CSV file, as below. Then click Finish.
• After clicking Finish, the CSV file is added to the script, as below.

• Then bind this data pool file to the value that we want to parameterize.
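Conceptually, once a CSV data source is bound, playback runs one iteration per data row, substituting the bound column value into the recorded request. A minimal sketch of that behaviour in Python (outside VSTS; the data, column name, and URL template here are illustrative assumptions, not the tool's actual binding syntax):

```python
import csv
import io

# Hypothetical data pool, standing in for the CSV file added as a data source.
CSV_DATA = "username,password\nalice,secret1\nbob,secret2\n"

def iterate_requests(url_template, csv_text, column):
    """Yield one request URL per data row, as a data-bound playback would."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        # The bound value replaces the recorded literal in each iteration.
        yield url_template.replace("{{" + column + "}}", row[column])

urls = list(iterate_requests("http://example.com/login?user={{username}}",
                             CSV_DATA, "username"))
```

With two data rows, playback performs two iterations, one per user in the file.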
Correlation:

• To do correlation, right click on the URL link above the request for which the server generates the dynamic value to be correlated.
• Select "Add Extraction Rule". The Add Extraction Rule wizard opens.

• Then, under "Select a Rule", choose the "Extract Text" option, which displays the properties of the selected rule as shown below.

• Under the options, give the context parameter a name (e.g. SessionID in this case), then specify the left and right boundaries of the string in the "Starts with" and "Ends with" text boxes respectively. Then click OK.

• The extraction rule is then added to the script, as shown below.
• Then right click on the session_id value, which displays the Properties box.
• In the Properties box, map the session_id value to the extracted value created in the step above.
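The "Extract Text" rule effectively takes the substring between the configured left ("Starts with") and right ("Ends with") boundaries of the server response and stores it in the named context parameter. A small sketch of that logic in Python (the response body and boundary strings are illustrative assumptions):

```python
def extract_text(body, starts_with, ends_with):
    """Return the text between the first occurrence of the two boundaries,
    or None if either boundary is missing - roughly what an Extract Text
    rule stores into its context parameter (e.g. SessionID)."""
    start = body.find(starts_with)
    if start == -1:
        return None
    start += len(starts_with)
    end = body.find(ends_with, start)
    if end == -1:
        return None
    return body[start:end]

# Hypothetical server response containing a dynamic session value.
response = '<input name="session_id" value="A1B2C3">'
session_id = extract_text(response, 'value="', '">')
```

Choosing boundaries that are unique in the response is what makes the extraction reliable across playbacks.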
LOAD TEST DESIGN

Requirements for designing a load test:

 Scripts included in the scenario
 Think time for each page or per transaction
 User load
 Think time between iterations by a single user
 Load test controller and load generators (agents)
 Result storage location (result repository connection string)
 Run duration
 Counters to be monitored at the respective servers within the architecture
 Browser and network mix to be tested across

1.1 Steps to Designing a Load Test in VSTS (Using the Wizard)

1.> Create a new project as explained earlier in this document.


2.> The screen will be displayed as follows. [Screenshot callouts: solution name; project name; Solution Explorer; Properties window; Test Results tab]

Figure 1.1: Project screen before the load test is designed.

3.> Import the solutions/projects which contain the scripts required for running the load test. To do so:

 Right click on the solution name of the current project (in this case LoadTestProject1).

 Hover over "Add", then select "Existing Project" as shown below. A browse window will pop up asking you to locate where the script lies (you need to locate the <project_name>.*proj file; e.g., if the tests were created in C Sharp, the file extension is 'csproj').

Figure 1.2: Adding a project to current Solution


Figure 1.3: Selecting Project to be added

 The Project gets added in the solution explorer window.

4.> Click on the current project folder, select Test from the menu bar and select New Test as shown.
5.> A wizard window opens up giving you a choice of which kind of test you would like to design.

Figure 1.4: Creating New Test

Select the Load Test template, give a suitable name to the test with a "*.loadtest" extension and make sure it is added to the right test project.
Click OK.

Figure 1.5: Selecting type of test

Introduction Screen:
The first screen of the wizard gives you a general introduction to load testing and informs you of the various steps you will have to follow. The left pane lists the various steps in the order in which they are to be set.

Load Test Design Steps

Figure 1.6: Introduction Screen of Load test Wizard

SCENARIO Screen:
Select a name for the load test scenario – give a suitable name for the scenario. This name is different from the name of the load test file; it appears in the results of a load test as the scenario that was tested.

Think time profile – think time is the inactive time of a user on a page, thinking or reading the contents, before performing an operation such as clicking on a button.

Figure 1.7: Scenario screen of the wizard

There are three options available for think time profiling


• Use recorded think time
• Use normal distribution centered on recorded think times
• Don’t use think time

Think time between test iterations – each user of the specified user load will run iterations of the web test. This is the time gap to be kept between successive iterations by a single user.
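The "normal distribution centered on recorded think times" option means each playback samples a pause from a normal distribution whose mean is the recorded value, so virtual users do not all pause identically. A rough sketch (the relative standard deviation here is an illustrative assumption, not VSTS's documented value):

```python
import random

def simulated_think_time(recorded_seconds, rel_stddev=0.2):
    """Sample a think time from a normal distribution centered on the
    recorded value; clamp at zero so a pause is never negative."""
    sampled = random.gauss(recorded_seconds, recorded_seconds * rel_stddev)
    return max(0.0, sampled)

random.seed(42)  # deterministic for demonstration
samples = [simulated_think_time(5.0) for _ in range(1000)]
mean = sum(samples) / len(samples)
```

Over many iterations the average pause stays close to the recorded 5 seconds, while individual pauses vary realistically.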

LOAD PROFILE Screen:

This screen gives you the option to run a load test with a constant user load, or the load can be ramped up in steps as desired. [Screenshot callouts: constant load; ramp-up user load]

Figure 1.8: Screen to set the desired user load
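A stepped ramp-up is defined by an initial user count, a step size, a step duration, and a maximum user count: the load starts at the initial count and grows by the step size after each step duration until the maximum is reached. A small sketch of that schedule (all numbers illustrative):

```python
def user_load_at(t, initial=10, step_users=10, step_seconds=60, max_users=50):
    """Return the simulated user count at elapsed time t (seconds) for a
    stepped ramp-up profile; a constant profile is the special case where
    initial == max_users."""
    steps_completed = t // step_seconds
    return min(initial + steps_completed * step_users, max_users)

# Load at 0 s, 59 s, 60 s, 120 s and 600 s into the run.
schedule = [user_load_at(t) for t in (0, 59, 60, 120, 600)]
```

Here the load holds at 10 users for the first minute, then climbs by 10 users per minute and plateaus at 50.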

TEST MIX Screen:


In this screen the user can decide which scripts are to be added to the load test. You can distribute the user load, in percentages, among the included scripts by entering the load percentage in the second column or by moving the marker that appears in the third column when a script is added. To add a script to the test mix, click on the Add button. A pop-up appears as shown in the next figure.

Figure 1.9: Test mix screen

Add Test Window:

Select the desired test by clicking on the test name in the Available Tests pane, then click the add button; to confirm that the test was added, check the Selected Tests pane. Once all the scripts are added, click OK and distribute the user load accordingly.

Figure 1.10: Select Test scripts screen

BROWSER MIX Screen:


As any general web application is required to work across browsers (Internet Explorer, Firefox, Netscape, etc.), this screen helps you decide which browser types to test on and what percentage of the market scenario the mix should mirror. To choose a particular browser, select it from the dropdown box in column 1; to add another kind of browser, click on the Add button and you will get the dropdown option again. The percentage for these browsers can be set in column 2.

Figure 1.11: Select browser mix screen
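In effect, the browser mix percentages weight which browser template each virtual user gets. A sketch of that weighted assignment (weighted random choice is an illustrative model here; this guide does not document VSTS's internal allocation):

```python
import random

# Illustrative mix, mirroring the percentages entered on the screen.
BROWSER_MIX = {"Internet Explorer 6.0": 60, "Netscape 6.0": 30, "Pocket IE 3.02": 10}

def pick_browser(rng, mix=BROWSER_MIX):
    """Pick a browser template for one virtual user, weighted by the
    mix percentages."""
    names = list(mix)
    return rng.choices(names, weights=[mix[n] for n in names], k=1)[0]

rng = random.Random(0)  # seeded for a reproducible demonstration
assignments = [pick_browser(rng) for _ in range(1000)]
ie_share = assignments.count("Internet Explorer 6.0") / 1000
```

Over many users the observed shares converge on the configured 60/30/10 split.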


NETWORK MIX Screen:

The network mix can be set on this screen, where, as per customer requirements, you can set the percentage of load to be run on LAN, cable internet, etc. Click the Add button and select the required network from the dropdown box.

Figure 1.12: Select Network mix Screen

COUNTER SETS Screen:


On this screen you can add the machines that you will be monitoring during the test and decide which counters need to be monitored closely. Clicking the add button will help add multiple machines; to add counters, check the required counter boxes. Using the remove button we can remove any unnecessary machine.

Figure 1.13: Counter select screen

RUN SETTINGS Screen:

This screen helps to set the run duration and warm-up time (a trial period just before the load test where we check whether the machines we intend to hit are up and fully functional). The run duration and the sampling interval can be set on this page by clicking the edit button.

Figure 1.14: Run Duration Setting screen

With this we have now finished designing the load test template, which will look as shown below. [Screenshot callouts: scenario name; test mix node; browser mix node; network mix node; load profile node; counter collection sets; run settings node]

Figure 1.15: Load Test Template

EDITING A LOADTEST

Once a load test has been designed, you can alter any of the parameters as and when required. This helps avoid the need to create a new load test every time a specification changes.
First open the Properties window from the View menu.

When you click on the Scenario Name node, the properties of this node are visible in the Properties window; here you can change:

1.> Name of the scenario
2.> Seconds between iterations
3.> Think profile
4.> Browser mix
5.> Network mix
6.> Test mix

Figure 2.1: Properties window for Scenario editing


Changes to properties can be made by clicking on the respective attribute name and entering a value, which is then reflected in the second column of the Properties window. The explanation box gives a brief definition of the attribute selected.

Similarly, when you right click on the Test Mix node you can edit the scripts included in the load test or change the percentage of user load on each. On right clicking the Browser Mix or Network Mix node we can alter the previously set choices.
The user load can be altered by left clicking on the Load Profile node. Here you can change the user load from Constant to Ramped up, with an interval as desired, by changing these attributes in the Properties window.

Counters can be added, or other instances of counters already present can be included, by right clicking on the counter and selecting the Add Counters option. If the counter is not visible in the screen provided, first go to perfmon and add the counter there, wait for a few minutes, and then try to add the counter again. The threshold values of the counters can also be set by right clicking on a counter once it is added and selecting the Add Threshold Rule option. A popup as shown here will appear with 2 choices:
# Compare with constant Value
# Compare with another Counter
The required option can be selected and it gets added to that particular counter.

Figure 2.2: Adding of threshold limit to selected counters

The value can be set by clicking on the rule and inserting a value in the properties window.

To change the run settings, click on the Run Settings node and observe the Properties window. Here you can change the following:
1.> Run settings name
2.> Result storage type (file or database, etc.)
3.> Run duration
4.> Sampling rate
5.> Warm-up duration
6.> Web test connection (we generally keep it at 100)
We always preferred running the trace from SQL Profiler in our project.

*NOTE: all the readings obtained during a load test are averaged over the sampling rate set. Also, the warm-up duration should always be an integral multiple of the sample rate.

Figure 2.3: Properties window to change Run settings
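Because readings are averaged per sampling interval, a warm-up period that is not a whole number of intervals would leave a partial sample at its boundary. A small helper sketch (Python, illustrative; not a VSTS API) that rounds a desired warm-up up to the nearest integral multiple of the sample rate:

```python
import math

def aligned_warmup(warmup_seconds, sample_rate_seconds):
    """Round the warm-up duration up to an integral multiple of the
    sampling rate, so every sample covers a full interval."""
    if sample_rate_seconds <= 0:
        raise ValueError("sample rate must be positive")
    return math.ceil(warmup_seconds / sample_rate_seconds) * sample_rate_seconds

warmup = aligned_warmup(70, 15)  # a 70 s warm-up with a 15 s sampling rate
```

A 70-second warm-up with a 15-second sampling rate would be rounded up to 75 seconds; a 60-second warm-up already aligns and is left unchanged.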

When using agent machines to execute the test we need to ensure that the test runs in remote controller mode. This can be done by opening the "*.testrunconfig" file visible in the Solution Explorer pane under Solution Items; the screen shown will pop up.
Here select the Controller Configuration tab in the left pane of the window. The right-hand pane shows the option to select the desired controller. The default option selected should be changed if and only if we need to use the agent–controller setup; otherwise it is advised to run tests locally. Save the testrunconfig file and exit.

Figure 2.4: Editing local test run configuration

Once the controller has been selected we need to identify the agent machines (the actual binding between agent and controller is done during installation itself). To identify the agent machines and set their properties, click on the Test tab in the menu and select the Administer Test Controller option; the Test Controller window pops up.
First set the result connection string by clicking on the Browse button.
Figure 2.5: Rig Administrator

A new Connection Properties pane pops up. Here select the type of data source to be used for saving results (we made use of a SQL database). Then select the machine or server where these results will be stored. In the Connect to Database window, select the name of the database where the results are going to be stored (this table should already exist in the database). To set the properties of the result repository, click on the advanced button for the advanced properties window. Here you can make changes to the result repository database connection string properties, such as connection timeout, etc.

Figure 2.6: Setting Loadtest Result repository connection string

On saving the changes we come back to the previous screen.


Click on the add button to add agents to the controller; the window shown alongside pops up. Here give the agent name (previously bound to the controller being used) in the Name text box and set the weighting (the share of the total load to be borne by that agent) as desired. The weighting value is always a percentage of the total load to be generated by the controller.
Once the agents have been added for the controller, we should restart all agents just to check the connection. For this, select the agent machine and click on the restart button. The status of the agent will change to Not
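Since each agent's weighting is a percentage of the total load the controller generates, the users assigned to an agent are simply total × weight / 100. A sketch of that split (agent names, weights, and the rounding choice are illustrative assumptions):

```python
def users_per_agent(total_users, weights):
    """Given agent weightings (percent of total load per agent), return how
    many virtual users each agent generates; the remainder from integer
    rounding goes to the first agent."""
    if sum(weights.values()) != 100:
        raise ValueError("agent weightings must sum to 100")
    counts = {agent: total_users * w // 100 for agent, w in weights.items()}
    first = next(iter(counts))
    counts[first] += total_users - sum(counts.values())
    return counts

# Two hypothetical agents sharing a 75-user load 50/50.
load = users_per_agent(75, {"AGENT01": 50, "AGENT02": 50})
```

With an odd total the split cannot be exact, so one agent carries the extra user.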

RUNNING A LOADTEST AND MONITORING

Now that the load test has been designed and is ready, we need to start it and monitor it. To start the load test, click on the Run button. Once the load test starts we can monitor the results on the screen shown.
Figure 3.1: Loadtest Start Screen

To add counters you need to expand the machines tab in the top left pane named COUNTERS, select the desired machine, and then add the counter either by right clicking it and selecting Show Item On Graph or by double clicking on the counter. When this selection is made the counter is added to the visible graph and also to the legend provided below. If one or more counters are found to be cluttering the graph, they can be made invisible by un-checking the respective counter's check box in the legend. As counters are added to the legend, the details of each counter instance also appear there, and their values keep getting updated at every sample.

To ensure that the test is running remotely, verify what is displayed for the controller name in the Summary pane found at the bottom left of the screen. If it displays the controller name set during load test design, the test is running remotely; if it shows "Locally", the test is being carried out using the local machine as both controller and agent. All the values shown in the Summary pane pertain to the system being used as the controller. The tab provided above gives a detailed view of the operations taking place. Once on the details page you can select to view:
# Number of transactions
# Number of requests (failed/passed)
# Threshold violations that have occurred
# Number of requests executed by agents
# Number of errors occurred, with details
# Number of tests executed, and
# Pages being accessed and their response times.
To return to the graph screen, click on the button in the menu.

Once the loadtest is completed a dialog box pops up as shown:


Figure 3.2: Prompt to retrieve loadtest results from database repository

Select the Yes option so that all the results are fetched to the local machine; the results can then be analyzed. Once all the results are collected the screen will look something like this.

Figure 3.3: Loadtest Screen after results have been retrieved on completion

Once the test is over, the first thing to do is to save the identification file somewhere we can access it later if required. Each time this file is opened, the results are fetched from the result repository; the file is only needed to identify which run's data is required.
To save this "*.trx" file, click on the flap visible at the bottom left of the screen; the test results window opens as shown.
Figure 3.4: Exporting loadtest result

Select the completed load test run, then click on the export run button; a browser window opens up giving you the choice to store it at any location necessary. If the export test run button is not available, then the run can be saved by going into the following folder:
C:\Documents and Settings\<current-user>\Local Settings\Application Data\VSEqtDeploymentRoot

Here the test run strings are stored with encoded names.
Figure 3.5: Snapshot of the folder where the result connection string can be found

Here you need to identify which run is the one you require: take the test start time, which can be read from the top left corner of your test result screen, and identify which folder has the same modified time. Select the folder and the "*.trx" file with the same name and copy them to the desired location. It is normally better to clear this folder before each load test so that we don't have to search for the required file.
*NOTE: both the folder and the file should be copied; one without the other is of no use.

OPENING LOADTEST RESULT: (USING “*.TRX” FILE)

To review the results at a later stage, you will need to follow these steps:
1.> Open Microsoft Visual Studio.
2.> Click on the Test Results tab at the bottom of the screen.

3.> Click the import test run button.

4.> A browser window will pop up; browse to the desired location and double click on the "*.trx" file only.
5.> This file will be visible in the test run window like this:
Figure 4.1: Importing the Trx file

6.> Right click on the test and select Show Details.

Figure 4.2: Opening retrieved results
