Added by Tobias Hauk, last edited by Bret Halford on Jan 31, 2014 (view change)
Prerequisite: As described in the configuration guide, the transaction type you want to adapt should first be copied to the customer-specific namespace (e.g. ZMHF). This applies to the transaction type and all the profiles it includes (status, action, text, date, partner, etc.). Once you have copied the transaction type into the customer namespace, the changes you are about to make are update-safe: they will not be overwritten when you implement a service pack or upgrade in which SAP delivers updated or changed standard customizing.
1.1 Step 1: Status
The first customizing step is to create a custom status for a specific transaction type. In this example we insert an additional approval step called UAT Test, which stands for an additional user acceptance test.
This approval step should be included after the status Successfully Tested and before the status Authorized for Production.
To insert this new status into the status profile, call transaction CRMBS02.
Note: Please make sure you have also copied the status profile, e.g. from SMHFHEAD to ZMHFHEAD.
To insert the new status, we recommend copying an existing status rather than creating a new one from scratch. This has the advantage that the business transactions included in each status are copied automatically as well.
The values Lowest and Highest define where you can go once you have reached a specific status. In our example, we can go back to 20 (In Development) but not to 10 (Created), and we can go forward as far as 80 (Completed), but not to 90 (Withdrawn).
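Conceptually, the Lowest/Highest values act as a simple range check on the status numbers. A minimal sketch of that rule (the function is illustrative, not SAP code):

```python
def transition_allowed(target_status_no: int, lowest: int, highest: int) -> bool:
    # From the current status, only statuses whose number lies within
    # the [lowest, highest] range defined for that status are reachable.
    return lowest <= target_status_no <= highest

# Our example status has Lowest = 20 and Highest = 80:
assert transition_allowed(20, 20, 80)       # back to 20 (In Development): allowed
assert not transition_allowed(10, 20, 80)   # back to 10 (Created): blocked
assert transition_allowed(80, 20, 80)       # forward to 80 (Completed): allowed
assert not transition_allowed(90, 20, 80)   # forward to 90 (Withdrawn): blocked
```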
In table TJ30 (transaction SE16) it is possible to look at the corresponding technical status value.
1.2 Step 2: Actions
The next step is to assign the corresponding ChaRM Actions & Conditions.
The Post Processing Framework (PPF) provides SAP applications with a uniform interface for the condition-dependent generation of actions. The actions are generated when specific conditions occur for an application document; they are then processed either directly or later.
We will start with the action definition; you can reach it via the following path in transaction SPRO:
IMG -> SAP Solution Manager -> Capabilities (Optional) -> Change Management -> Standard Configuration -> Transaction Types -> Action Profile -> Define Action Profiles and Actions
Go to the subnode Action Definition inside the dialog structure; there you can create a new PPF action for the transaction type ZMHF. As for the status, we recommend copying an already existing action that also sets a status (e.g. ZMHF_IN_PROCESS).
The best approach is to check whether the source action to be copied also contains the method HF_SET_STATUS. This can be checked by selecting the subnode Processing Type for the selected action.
As described, the method HF_SET_STATUS is always used to change the status of a transaction in Change Request Management; therefore we now need to configure the input parameters (container values) for this method.
Select the new action in the Action Definition and open the subnode Processing Types; you will see a screen similar to the one below:
Click the edit button of the processing parameters to be able to change the container values. As a processing parameter, the expression USER_STATUS with initial value E0011 is used; this is the value of our newly created status (see table TJ30 with the corresponding status profile).
This enables the action to set the status to UAT Test (technical status name: E0011).
1.3 Step 3: Conditions
In the next step we need to define a condition for our new action. This is also done in customizing; starting from transaction SPRO, follow this path:
IMG -> SAP Solution Manager -> Capabilities (Optional) -> Change Management -> Standard Configuration -> Transaction Types -> Action Profile -> Define Conditions
Note: Another way to reach the configuration screen for PPF actions and definitions is to call transaction SPPFCADM and select the application CRM_ORDER.
In this step, the action templates created in the activity Define Actions are processed. To be able to use the new action, we need to define a planning condition for each action definition. This is required to schedule the action automatically, so that it is available in the action menu in the correct status.
As a result of the previous steps, the new PPF action UAT Authorization Test (ZMHF_UAT_TEST) is available inside the action profile ZMHF_ACTIONS by using the Create button.
In the next step, the condition Urgent Correction has been successfully tested: CRM Web UI is added to the new action ZMHF_UAT_TEST as a schedule condition. We can reuse this standard condition because our new action UAT Test replaces the standard action Authorize for Production. Therefore we will later need to create a new condition for Authorize for Production, based on our new status.
In the tab Schedule Condition you can assign a condition with the F4 value help.
Choose the condition Urgent correction has been successfully tested: CRM Web UI.
This condition means that the PPF action ZMHF_UAT_TEST is selectable while the status Successfully Tested (user status E0005) is set for the change document ZMHF.
In the details of the condition, you can see the condition definition:
As mentioned above, the next status after UAT Test in the process should be the existing standard status Authorized for Production. To enable the action within our new status, we need to create a new condition and assign it to the PPF action ZMHF_GO_LIVE as a schedule condition.
1.4
Select the action for which you want to create a new condition (e.g. Authorize for Production). You will notice that the same condition we already assigned to our custom action is assigned here as well.
Before we can define a new condition, we need to uncouple the old one by clicking the No Condition button.
After that, click Edit Condition to create a new condition; this leads you to the following screen:
Enter a description text for the new condition and click on Click here to create a new condition. In the following user interface you define the technical details of the condition:
Open the Container folder to get a list of all container variables. Double-click User Status to add the user status as an expression factor, because we want our new condition to depend on a user status (schedule condition).
Use the Contains Pattern button as the operator and enter the technical user status name in the Constant field (e.g. E0011ZMHFHEAD).
Note: In the screen you may notice we used the constant E0011+MHFHEAD. The reason is that + is used as a single-character wildcard, which means this condition is also true for status E0011 of profile SMHFHEAD or YMHFHEAD. This enables a much more flexible setup, because you can reuse the condition for other action profiles.
As a next step, we refine our condition to make sure the action can only be performed if the document contains no errors. To do this, we add another parameter to the condition that checks that the transaction is error-free: ErrorFreeFlag.
First we add a logical link to combine both parameters in a logical expression. Since both checks must be fulfilled, we click the And button in the Logic area.
After that, double-click ErrorFreeFlag to add it as a parameter. We also need to define an operator and a constant for this parameter: we use = and the constant X, which means the ErrorFreeFlag must equal X.
To finish the condition definition, perform a syntax check and click the green OK button if everything is all right.
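Putting the pieces together, the schedule condition evaluates two checks joined by a logical AND: the user status must match the pattern E0011+MHFHEAD (with + as a single-character wildcard) and the ErrorFreeFlag must equal X. A minimal Python sketch of that logic, with illustrative function names (this is not SAP's PPF implementation):

```python
import re

def status_matches(pattern: str, user_status: str) -> bool:
    # '+' in the PPF condition acts as a single-character wildcard,
    # comparable to '.' in a regular expression.
    regex = re.escape(pattern).replace(r"\+", ".")
    return re.fullmatch(regex, user_status) is not None

def schedule_condition(user_status: str, error_free_flag: str) -> bool:
    # Both checks are combined with a logical AND, as in the condition editor.
    return status_matches("E0011+MHFHEAD", user_status) and error_free_flag == "X"

assert schedule_condition("E0011ZMHFHEAD", "X")      # our profile, error free
assert schedule_condition("E0011SMHFHEAD", "X")      # wildcard also covers SMHFHEAD
assert not schedule_condition("E0011ZMHFHEAD", "")   # document contains errors
assert not schedule_condition("E0005ZMHFHEAD", "X")  # wrong user status
```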
This will lead you to the overview screen again:
Now we are able to assign this new condition to the Authorize for Production action.
The created condition Urgent correction has been UAT tested: CRM Web UI means that the PPF action ZMHF_GO_LIVE is selectable while the status UAT Test (user status E0011) is set for the change document ZMHF.
Because the PPF action Reset Status to In Development should also be selectable while the status UAT Test is set for the change document, we need to adapt the schedule condition of this action as well.
Choose the action ZMHF_TESTED_AND_NOT_OK and adapt the condition Urgent Correction implemented but not completed by adding the status E0011 as a parameter (similar to above).
The adapted condition Urgent correction implemented but not completed: CRM Web UI means that the PPF action ZMHF_TESTED_AND_NOT_OK is selectable while the status UAT Test (user status E0011) is set for the change document ZMHF.
1.5
After we have created our status and the corresponding actions and conditions, we also need to make sure the new status is recognized by the Change Request Management framework. This is required for the system logon and for the text log that is written when processing the transaction.
The customizing can be done with the activity Define Status Attributes in the IMG of SAP Solution Manager:
IMG -> SAP Solution Manager -> Capabilities (Optional) -> Change Management -> Standard Configuration -> Change Request Management Framework -> Define Status Attributes
The field Sequence specifies the order in which the status values are processed in the straightforward process. If you use the report CRM_SOCM_SERVICE_REPORT to trigger the next status value of a transaction type, the report needs the Sequence field to determine the correct status value to set (e.g. you schedule the report on a daily basis to close all confirmed Urgent Changes automatically).
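To illustrate how a Sequence field can drive the report's next-status logic, here is a hedged sketch; the status table and function are hypothetical, and only E0005 and E0011 are taken from this example:

```python
# Hypothetical status/sequence entries, modeled after Define Status Attributes.
STATUS_SEQUENCE = [
    ("E0002", 10),  # In Development (illustrative key)
    ("E0005", 20),  # Successfully Tested
    ("E0011", 30),  # UAT Test (our new status)
    ("E0006", 40),  # Authorized for Production (illustrative key)
]

def next_status(current):
    # The report walks the sequence numbers upward to find the status
    # that follows the current one in the straightforward process.
    ordered = sorted(STATUS_SEQUENCE, key=lambda entry: entry[1])
    for i, (status, _) in enumerate(ordered[:-1]):
        if status == current:
            return ordered[i + 1][0]
    return None  # last status in the sequence, or unknown

assert next_status("E0005") == "E0011"  # UAT Test now follows Successfully Tested
assert next_status("E0006") is None     # nothing follows the final status
```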
1.6
Besides the actions and conditions of the Post Processing Framework, there are also specific ChaRM actions and conditions that can be assigned to a specific status value (e.g. to trigger activities regarding the transport management system).
You can customize them via the activity Make Settings for Change Transaction Types:
IMG -> SAP Solution Manager -> Capabilities (Optional) -> Change Management -> Standard Configuration -> Change Request Management Framework -> Make Settings for Change Transaction Types
The node Assign Actions within the dialog structure allows you to assign specific ChaRM actions to the new user status E0011.
With the node Assign Conditions within the dialog structure it is possible to assign specific ChaRM conditions to the new user status E0011.
For a list of existing ChaRM actions and conditions, please check the tables in the Upgrade section of this guide; more details about each action/condition and its functionality can be found in its short description in the system.
This applies when viewing a note from SNOTE. For a note that can be implemented, this indicates whether it is completely implemented or not.
Implementation Status
This is critical info for any note. It tells whether the note contains code corrections that need to be implemented in the SAP system, or only information and/or instructions to circumvent the issue when it is observed. A note with the status Can Be Implemented contains code corrections, and applying it will resolve the issue. How to apply the corrections is discussed in the sections below.
A note with the status Cannot Be Implemented implies that there are no code-level corrections that can fix the issue. Either SAP advises certain practices or steps to ensure the identified error does not occur, or it simply explains the system design and why the error occurs. Either way, there are no code-level changes and hence no action is required per the note. Notes of this category are not discussed further in this document.
Short Text
A summary of the issue addressed by the note.
Component
This specifies the SAP component the note applies to. It is useful for ruling out a note for certain issues: if the note is for a specific area and the observed issue is not in that area, this info rules out any help from the note. For example, a note fixing the IS-Oil component will not be of use if the system is not IS-Oil.
Long Text
This section details the symptoms, often accompanied by an example scenario, followed by the details of the error.
Reason and Pre-requisites
This tells why the error occurs and what conditions cause it. When analyzing whether the note fits a given issue, this helps to determine whether the conditions that cause the error are part of the issue, making it a valid scenario for the note.
Solution
This outlines and then details the solution approach. For a note that can be applied, this is a list of code objects that need to be updated, followed by the code that has to be added/deleted/changed.
Valid Releases
This is very critical. SAP clearly states the component versions for which the note is applicable. SAP fixes known issues in future releases, hence the issue described will only be observed in the specified or earlier versions. Quite often, consultants end up digging up notes that are irrelevant because the notes are for older versions. This takes the focus off the issue, as we tend to assume it is an SAP code issue rather than look for the real cause. Hence this has to be checked carefully before working on the note details.
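In essence, checking Valid Releases is a range comparison of the installed component version against the note's validity range. A tiny sketch (the function and the support-package numbers are invented for illustration):

```python
def note_relevant(installed_sp, valid_from_sp, valid_to_sp):
    # A note's correction only applies if the installed support-package
    # level falls inside the validity range stated in the note header.
    return valid_from_sp <= installed_sp <= valid_to_sp

# Example: a note valid for SP 8 through SP 12
assert note_relevant(10, 8, 12)       # installed SP 10: the note applies
assert not note_relevant(15, 8, 12)   # SP 15 already contains the fix: skip it
assert not note_relevant(5, 8, 12)    # SP 5 is below the note's validity range
```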
Implementing Notes
In simple terms, SAP notes are correction instructions from SAP for known issues in the SAP system. The corrections can be divided into two categories: Implementable and Non-Implementable. A note's header section clearly states whether the note can be implemented or not.
Click the search button on top and enter the note number. If the note has already been downloaded, the system will point to it; otherwise there is one additional step to download it.
If the note is already available, ensure it is the latest version by choosing Download Latest Version from the menu as shown below.
It is easy to notice the various icons next to the note numbers. They visually indicate the implementation status of each note: a grey diamond indicates that the note cannot be implemented, while a play button indicates that it can.
For notes that can be implemented, the best way is through SNOTE.
Through SNOTE we can check the implementation status of each SAP note. If the note says Can be implemented, then after ensuring that the note is required and can fix the issue, click the execute button on top. The system applies the note and updates the status to show whether it was implemented successfully.
The technical details of the code correction can be found in the info tree on the left. Select the folder Corrections and expand it. It lists the code corrections that will be made to fix the issue. Each correction names a code section in the system, followed by the code changes: inserted code, commented-out (deleted) code, or both.
Some notes need manual intervention, and the note describes step by step how this has to be done. For example, if the system requires a new data element or some entry in the data dictionary followed by code changes, the note explains this.
Note: When code changes are to be made manually, an access key is required to modify SAP code. This needs help from the BASIS team, as the object ID of the code being altered (e.g. the include name) must be registered in the SAP Service Marketplace before the key is generated. This key has to be entered in order to edit SAP code.
The purpose of doing this: each SAP installation is registered with SAP, so SAP keeps track of which SAP code was altered. This comes in handy during an upgrade or during a support request to SAP.
IMPORTANT: Since these are SAP code corrections, a Transport Request will be requested while the system applies the code changes. Be prepared to create/provide TRs, mostly workbench requests.
Uninstalling a Note
When a note has been applied through SNOTE by clicking execute, the system also allows you to uninstall it. Technically, this reverts the code to its original state. To uninstall a note, select Reset SAP Note Implementation from the menu as shown below.
This does not apply if the code was changed manually by a developer; it is therefore advised to keep a copy of the code being altered, either in the system or as a document in a safe place. Reverting the note changes in this case means editing the code again and putting back the old version, preferably taken from a system where the code is not altered or from a backup system.
Troubleshooting
Things can go wrong when implementing an SAP note. There are many ways to troubleshoot. Note that the best way is to uninstall the note, if allowed, and apply it again.
Another way is to perform a code comparison and see which sections of the code were changed. Then pull up the advised code changes from the note's Corrections folder, see what went wrong, and apply the changes manually.
When a code correction is applied per a note, the affected code is considered altered.
Applying an SAP note is an easy task, but certain best practices can keep you safe and let you revert the system in case the note does not work or issues occur:
Never attempt to alter code, or write code in your own naming convention, beyond what is mentioned in the note.
When applying a note automatically, ensure that the TRs (Transport Requests) are ready, or that the ID through which the note is being applied has sufficient rights to create TRs.
Notes fix issues but are still CODE CHANGES. Hence the functionality needs to be thoroughly tested after applying a note, no matter how trivial the change is.
You want your users to access the SAP server from outside the LAN without a VPN.
You want to get support from SAP.
You are planning to implement SAP Solution Manager.
You want to download SAP notes and corrections via the SNOTE assistant.
CONFIGURATION:
1. Generating a new certificate request.
a. Go to SAProuter Certificates --> click Apply Now, copy your distinguished name and click Next.
b. Open cmd as administrator, navigate to <path_saprouter>\nt-x86_x64\ and execute:
sapgenpse get_pse -v -r certreq -p local.pse "<Distinguished Name>"
example: sapgenpse get_pse -v -r certreq -p local.pse "CN=example, OU=00123456, OU=SAProuter, O=SAP, C=DE"
c. It will ask you to enter and re-enter a PIN. This PIN is used to access local.pse, so note it down.
d. A file "local.pse" will be created in the saprouter directory (e.g. D:\usr\sap\saprouter\local.pse).
e. A file "certreq" will be created under <dir_saprouter>\nt-x86_x64 (e.g. D:\usr\sap\saprouter\certreq).
2. Acquiring a certificate signed by the CA.
a. Open the "certreq" file with Notepad and copy the text (including the BEGIN and END lines).
b. Paste it on the certificate page opened above and click Next.
c. You will get a certificate (a series of jumbled characters); copy it (including BEGIN and END).
d. Create a new file "routcert.txt" under <dir_saprouter>\nt-x86_x64 and paste the certificate text into it.
3. Importing the router certificate.
a. Open cmd as administrator, navigate to <dir_saprouter>\nt-x86_x64\ and execute:
sapgenpse import_own_cert -c routcert.txt -p local.pse
Running the above command will ask you to enter the PIN you chose in step 1c.
4. Authorizing the Windows user to access SAProuter.
Execute the following command as the saprouter user (sncadm):
sapgenpse seclogin -p local.pse -O <exclusive_user_SAProuter>
example: sapgenpse seclogin -p local.pse -O hostname\sncadm
Check whether a file "cred_v2" has been created in the saprouter directory.
5. Verifying the authorization of the saprouter user (sncadm).
Log on as the saprouter user, open cmd, navigate to <dir_saprouter>\nt-x86_x64\ and execute:
sapgenpse get_my_name -v -n Issuer
You should get an output like this: CN=SAProuter CA, OU=SAProuter, O=SAP, C=DE
Voilà! You have configured your SAProuter successfully.
But wait: we still have to check whether the router works.
Start your SAProuter using the command <dir_saprouter>\saprouter.exe -r
You should get the output "trcfile dev_rout no logging active", which shows that the router started successfully. However, if you close the cmd prompt, your SAProuter will shut down.
We can avoid this by registering SAProuter as a Windows service, so that it runs in the background.
2. Execute the following command as is, replacing <path> with your saprouter directory path and <distinguished name> with your distinguished name:
sc.exe create SAPRouter binPath= "<path>\saprouter.exe service -r -S 3299 -W 60000 -R <path>\saprouttab -K ^p:<distinguished name>^"
example: sc.exe create SAPRouter binPath= "D:\usr\sap\saprouter\saprouter.exe service -r -S 3299 -W 60000 -R D:\usr\sap\saprouter\saprouttab -K ^p:CN=example, OU=00123456, OU=SAProuter, O=SAP, C=DE^"
4. Open "regedit.exe" and edit the string "ImagePath" under the following location:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\saprouter
5. Replace ^ with " and click OK. The updated value should look like this:
<path>\saprouter.exe service -r -S 3299 -W 60000 -R <path>\saprouttab -K "p:CN=example, OU=00123456, OU=SAProuter, O=SAP, C=DE"
6. Now open "services", right-click "SAPRouter" and choose Properties. Click the "Log On" tab and choose "This account". Type the user ID created for configuring saprouter (sncadm), type the password and click Apply.
7. Now start the SAPRouter service and you're done.
Runtime Analysis
Performance Analysis
Dump Analysis
Memory Inspector
Runtime Analysis
One tool, the ABAP Runtime Analysis (SE30, also known as ABAP Trace), solves two analysis problems: tracing a program to analyze the program flow, and performance analysis of your ABAP application. ABAP Trace is the only tool able to trace the flow logic of ABAP programs at statement level. You can use ABAP Trace, for example, to find the location of a statement you are interested in, to compare the control flow of an ABAP application in different systems, or even to trace memory consumption. This weblog explains how to use ABAP Trace to analyze the execution flow of an ABAP program. After you have read the blog, you will be ready not only to find the exact source code line of an ABAP statement, but also to analyze long-running batch jobs and even trace HTTP/RFC requests of other users.
ABAP Runtime Analysis (SE30) - How to analyze ABAP program flow
ABAP Trace can also be used to measure the performance of your ABAP application.
Performance Analysis
The various trace functions of an SAP system are grouped together in the test tool Performance Trace. You can use it to monitor and analyze system behavior during database calls, lock management calls, remote calls of reports and transactions, and calls of the table buffer administration. This weblog gives you a quick introduction to the SQL Trace; in particular, it shows how to execute an SQL trace and interpret its results.
The SQL Trace (ST05) - Quick and Easy
Dump Analysis
Should the ABAP AS no longer be able to execute a program - because of an unhandled exception, a resource or system problem, or an error in the coding - the ABAP runtime environment triggers an ABAP runtime error. The execution of the program is terminated and a detailed error log (short dump) is created and saved. This pair of weblogs explores the diagnostic aids and information resources that an ABAP short dump offers, and how to get the most help out of a dump.
Analyzing Problems Using ABAP Short Dumps: Part I
Analyzing Problems Using ABAP Short Dumps: Part II
The ABAP Runtime Analysis (transaction SE30) is the best starting point if you
want to execute performance or flow analysis of your ABAP program.
Unfortunately many people use ABAP Runtime Analysis only to look for
performance bottlenecks and don't know that ABAP Trace is the only tool with
which you can trace the execution flow of an ABAP program at the statement
level. This blog will show you how to use ABAP Trace of ABAP Runtime Analysis
(SE30) to follow the flow logic of your ABAP program.
You could of course start the ABAP Debugger and try to debug in single step. And then after hours or
weeks of intensive debugging you might be lucky enough to find the source code line of the ABAP
statement. But why waste time? Here is how to use the ABAP Runtime Analysis to find this error
message in a couple of minutes.
If you press the "?" button or click the status bar near the error message, you will see the F1 help for the message in the Performance Assistant. This tells you that the number of the error message is DS017. Therefore you have to look for "message DS017":
To find the message, first start the ABAP Runtime Analysis and create a measurement variant:
1. Start the ABAP Runtime Analysis (transaction SE30) via System -> Utilities -> Runtime Analysis -> Execute, or call the transaction directly with "/nse30".
2. Type "SE38" into the "Transaction" field.
3. In the measurement variant, set Aggregation to "None" to trace the complete program logic (what we are doing here). Aggregation summarizes the trace data for a particular type of event in a single trace record and therefore reduces the number of entries in the trace file. But to see the complete program flow you need all trace data.
Try to use "Particular units" where possible in order to reduce the trace file size and trace only the code you really need to see. The option "Particular units" allows you to switch the ABAP trace on and off during the running transaction. The trace starts as soon as you enter "/ron" (trace on) in the OK field of your transaction; with "/roff" it is stopped. Alternatively you can use the menu path System -> Utilities -> Runtime Analysis -> Switch On / Switch Off.
Step back to the Runtime Analysis and analyze the trace results:
1. Press the "Evaluate" button.
2. Press the "Call Hierarchy" button; you get a list which represents the complete path through your program.
3. Search for "message DS017" in the Call Hierarchy list.
4. Double-click the entry in the Call Hierarchy list to jump to the source code line which initiated the error message.
You can find this out very easily with the ABAP Runtime Analysis: you can use it (SE30) to trace programs which are running in a parallel session.
1. Ensure that you run SE30 on the same server as the running process!
2. Create or adjust a trace variant for tracing the parallel process. Set aggregation to "None" again to get the Call Hierarchy.
3. Press the "Switch On/Off" button to trace processes running in a parallel session. The Runtime Analysis displays a list of the running processes, similar to the Process Overview (transaction SM50).
4. Use the "Start measurement/End measurement" buttons to activate and deactivate the trace.
Caution: Deactivate the trace again after short tracing time so that you do not reach the
trace file quota! Before deactivating the trace, refresh the work process display. The dialog
step that was active in the work process with the activated trace may have changed, and
that deactivates the trace automatically.
External session (choose "Any" if you are not sure in which session the application
will run!)
Process Category (dialog, batch, RFC, HTTP, ITS, etc.)
Object Type (transaction, report, function module, any, etc.)
Object (e.g. only transaction se38)
Max. No. of sched. Measurements (specify the maximum number of traces)
Expiration Date and Time (specify the time frame when the trace shall be active)
When the trace is scheduled, the ABAP Runtime Analysis automatically starts the trace as soon as a session that meets your criteria is started on the system. The user you have specified logs on to the system and executes his task, and the ABAP Runtime Analysis starts to write the trace. The trace results can be analyzed - as usual - in the ABAP Runtime Analysis (using the "Evaluate" button on the initial screen).
The SQL Trace, which is part of the Performance Trace (transaction ST05), is the
most important tool to test the performance of the database. Unfortunately,
information on how to use the SQL Trace and especially how to interpret its
results is not part of the standard ABAP courses. This weblog tries to give you a
quick introduction to the SQL Trace. It shows you how to execute a trace, which
is very straightforward. And it tells how you can get a very condensed overview
of the results--the SQL statements summary--a feature that many are not so
familiar with. The usefulness of this list becomes obvious when the results are
interpreted. A short discussion of the database explain concludes this
introduction to the SQL Trace.
Switch off the trace. Note that only one SQL trace can be active on an application server, so always switch your trace off immediately after you are finished.
When the trace result is displayed, the extended trace list comes up. This list shows all executed statements in the order of execution (as an extended list it also includes the timestamp). One execution of a statement can result in several lines: one REOPEN and one or several FETCHes. Note that there are also PREPARE and OPEN lines, but you should not see them, because you should only analyze traces of repeated executions. So if you see a PREPARE line, it is better to repeat the measurement, because an initial execution has other side effects which make the analysis difficult.
If you want to take the quick and easy approach, the extended trace list is much too detailed. To get a
good overview you want to see all executions of the same statement aggregated into one line. Such a
list is available, and can be called by the menu Trace List -> Summary by SQL Statements.
=> The extended trace list is the default result of the SQL Trace. It shows a lot of very detailed information. For an overview it is much more convenient to view an aggregated list of the trace results. This is the Summarized SQL Statements list explained in the next section.
The keys of the list are Obj Name (col. 12), i.e. the table name, and SQL Statement (col. 13). When using the summarized list, keep the following points in mind:
The statement shown can differ from its Open SQL formulation in ABAP.
The displayed length of the field Statement is restricted, so different statements can sometimes appear identical.
were selected or changed. For these three columns the totals are also interesting; they are displayed in the last line. The other totals are actually averages, which makes them less interesting.
Three columns are direct problem indicators. These are Identical (col. 2), BfTp (col. 10), i.e. buffer
type, and MinTime/R. (col. 8), the minimal time record.
Additional, but less important information is given in the columns, Time/exec (col. 5), Rec/exec (col. 6),
AvgTime/R. (col. 7), Length (col. 9) and TabType (col. 11).
For each line four functions are possible:
The magnifying glass shows the statement details; these are the actual values that
were used in the execution. In the summary the values of the last execution are displayed
as an example.
The DDIC information provides some useful information about the table and has
links to further table details and technical settings.
The Explain shows how the statement was processed by the database,
particularly which index was used. More information about Explain can be found in the
last section.
The link to the source code shows where the statement comes from and how it
looks in OPEN SQL.
=> The Statement summary, which was introduced here, will turn out to be a
powerful tool for the performance analysis. It contains all information we need in
a very condensed form. The next section explains what checks should be done.
For each line the following five columns should be checked, as tuning potential can be deduced from the information they contain. Select statements and changing database statements (inserts, deletes and updates) can behave differently, so the conclusions also differ.
For select statements please check the following:
Entry in BfTy = Why is the buffer not used?
Tables that are buffered, i.e. with entries ful for fully buffered, gen for buffered by
generic region and sgl for single-record buffer, should not appear in the SQL Trace,
because they should be read from the table buffer. Therefore, you must check why the buffer was
not used. Possible reasons are that the statement bypasses the buffer or that the table was not yet in the
buffer during the execution of the program. For tables that are not buffered but could
be, i.e. with entries starting with de for deactivated (deful, degen, desgl or
deact) or the entry cust for a customizing table, check whether buffering could be
switched on.
Entry in Identical = Superfluous identical executions
The column shows the identical overhead as a percentage. Identical means that not only
the statement but also the values are identical. Overhead expresses that of two identical
executions one is necessary and the other is superfluous and could be saved.
Entry in MinTime/R larger than 10,000 = Slow processing of statement
An index-supported read from the database should need around 1,000 microseconds or
even less per record. A value of 10,000 microseconds or more is a good indication
that there is a problem with the execution of that statement. Such statements should be
analyzed in detail using the database explain, which is covered in the last section.
Entry in Records equals zero = No record found
Although this problem is usually completely ignored, 'no record found' should be examined.
First, check whether the table should actually contain the record and whether the
customizing and set-up of the system are correct. Sometimes 'no record found' is
expected and used to determine program logic or to check whether keys are still available,
etc. In these cases only a few calls should be necessary, and identical executions should
definitely not appear.
High entries in Executions or Records = Really necessary?
High numbers should be checked. Especially in the case of records, a high number here can
mean that too many records are read.
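To make the Identical check concrete, here is a minimal sketch (in Python, with invented trace data) of how such an identical-overhead percentage can be computed: the statement text and the bind values together form the key, and of each group of identical executions only one is counted as necessary.

```python
from collections import Counter

def identical_overhead(executions):
    """Percentage of executions that are exact repeats (same statement
    text AND same bind values) and could in principle be saved."""
    counts = Counter(executions)
    total = len(executions)
    superfluous = total - len(counts)  # one of each identical group is needed
    return 100.0 * superfluous / total if total else 0.0

# Hypothetical trace: four executions, two of them superfluous repeats.
calls = [
    ("SELECT * FROM T100 WHERE MSGNR = ?", ("001",)),
    ("SELECT * FROM T100 WHERE MSGNR = ?", ("001",)),  # identical repeat
    ("SELECT * FROM T100 WHERE MSGNR = ?", ("002",)),  # same statement, new values
    ("SELECT * FROM T100 WHERE MSGNR = ?", ("001",)),  # identical repeat
]
print(identical_overhead(calls))  # 50.0
```

Of the four executions, two are repeats of an already-executed statement-plus-values pair, so half of the database work could be saved, e.g. by buffering the result in the program.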
For changing statements, errors are fortunately much rarer. However, if they occur then they are often
more serious:
Entry in BfTy = Why is a buffered table changed?
Same argument as above; only the limit is higher for changing statements.
Entry in Records equals zero = A change with no effect
Changes should have an effect on the database, so this is usually a real error which
should be checked. However, the ABAP MODIFY statement is realized on the database as an
update, followed by an insert if the record was not found. In this case one statement out of
the pair should have an effect.
High entries in Executions and Records = Really necessary?
Same problems as discussed above, but in this case even more serious.
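The update-then-insert behavior of the ABAP MODIFY statement described above can be sketched in plain SQL terms. This is a Python sketch using an in-memory SQLite table as a stand-in for the database; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ztab (k INTEGER PRIMARY KEY, v TEXT)")

def modify(conn, key, value):
    """How an ABAP MODIFY reaches the database: an UPDATE, followed by
    an INSERT only when the UPDATE affected no row."""
    cur = conn.execute("UPDATE ztab SET v = ? WHERE k = ?", (value, key))
    if cur.rowcount == 0:  # the expected 'change with no effect'
        conn.execute("INSERT INTO ztab (k, v) VALUES (?, ?)", (key, value))

modify(conn, 1, "a")  # UPDATE hits no row -> INSERT runs
modify(conn, 1, "b")  # UPDATE hits one row -> no INSERT
print(conn.execute("SELECT v FROM ztab WHERE k = 1").fetchone()[0])  # b
```

Seen through an SQL trace, the first call produces exactly the update-with-zero-records followed by an insert, which is why a zero-record update from a MODIFY is not by itself an error.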
=> In this section we explained detailed checks on the statements of the SQL
Statement Summary. The checks are slightly different for selecting and changing
statements. They address questions such as why a statement does not use the
table buffer, why statements are executed identically, whether the processing is
slow, why a statement was executed but no record was selected or changed, and
whether a statement is executed too often or selects too many records.
In this section we show as an example the Explain for a rather simple index-supported table access,
which is one of the most common table accesses:
1. The database starts with step 1, index unique scan DD02L~0, where the three
fields of the where-condition are used to find a record on the index DD02L~0 (~0 always
denotes the primary key).
2. In step 2, table access by index rowid DD02L, the rowid is taken from the index to
access the record in the table directly.
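The two-step plan can be mimicked with ordinary data structures (a Python sketch with invented key values): the index is a map from the key fields to a rowid, and the rowid then addresses the table row directly, without any scan of the table itself.

```python
# Step 1 uses the index, step 2 uses the rowid -- never a full table scan.
table = ["row 0", "row 1", "row 2"]            # rows addressed by rowid
index_0 = {("DD02L", "A", "0001"): 2}          # primary key -> rowid (invented values)

key = ("DD02L", "A", "0001")   # the three fields of the where-condition
rowid = index_0[key]           # step 1: index unique scan
record = table[rowid]          # step 2: table access by index rowid
print(record)  # row 2
```

The cost of such an access is essentially two lookups, which is why an index-supported read should finish in the microsecond range regardless of table size.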
Some databases display the execution plan in a graphical layout, where a double-click on the table
gives additional information, as shown on the right side. There, the date of the last statistics update and
the number of records in the table are displayed. All indexes are also listed with their fields and the
number of distinct values for each field; with this information it is possible to calculate the selectivity of
an index.
From this example you should understand the principle of the Explain, so that you can also understand
more complicated execution plans. Some database platforms do not use graphical layouts and are a bit
harder to read, but still show all the relevant information.
=> In this last section we showed an example of a database explain, which is the
only way to find out whether a statement uses an index, and if so, which index.
Especially in the case of a join, it is the proper index support that determines
whether a statement needs fractions of seconds or even minutes to be finished.
Very often, a short dump contains not only the exact diagnosis of the problem that occurred but also the
solution, or at least important pointers toward the solution of the problem. But experience has shown
that developers often don't even read dumps - not even the highly useful Error analysis - much less
make use of the diagnostic resources that short dumps offer.
The lack of attention to short dumps is understandable but regrettable.
Understandable because the report that your program is dumping in a customer system is about the
worst news you can get. It's time to drop everything and switch to emergency mode.
Regrettable, because often developers who don't take a good look at the short dump waste a lot of time
thrashing around in the debugger, trying to understand what went wrong. Taking a good look at the short
dump is usually a better use of your time. And then there are the situations in which the dump is the only
diagnostic resource that you have - when the dump occurred in a production system and is too sensitive
to repeat, when the dump occurred several hours after the background job started, and so on.
In this pair of weblogs, we will take a quick tour through the ABAP short dump as of NetWeaver Release
7.0 EHP1, pointing out important analytic aids that it offers and how to make the best use of them.
In the first weblog, we will just get ready to analyze a dump. The weblog looks at the ABAP dump lists
and how to get the most out of them, as well as at a couple of related sources of information.
If the component is not shown, or you think that the component shown is misleading (because, for
example, the dump was finally triggered in infrastructure code), then you can find the component for an
OSS message by following the path from the suspect program (for example, from the Active
Calls/Events section) to the package of the program to the component. In SE80, go from the attributes
of the program to the package. In the package, the component is displayed.
As you look at the List of Runtime Errors above, you may have the feeling that you can't see the
forest for the trees. There are so many dumps. They are clustered by type, but none of the sets of
dumps seem likely to share causal explanations or to fit the journalistic questions posed above. What's
going on?
In cases like this, the more orderly view of dump traffic offered by the Overview function may help.
Choose Goto -> Overview from the ST22 start screen. Skip over the following selection screen
with Execute. The system shows you what has been dumping in the system, sorted by dump
category.
The foci of dump activity make it clearer what is going on in the system. First, the
LOAD_PROGRAM_LOST short dumps tell us that we are in a development system, in which
infrastructure source code (in this case) is being changed on the fly.
The UNCAUGHT_EXCEPTIONs and OBJECTS_OBJREF_NOT_ASSIGNED_NO may indicate that the
developers have not cleanly implemented some programs as yet. Or perhaps some particular program
or component is not working quite right. The scattering of dump activity among unrelated programs is in
this case also explained by the fact that the system is apparently a development system. For a closer
look, a double-click on one of the entries in the list selects the relevant short dumps for display.
The ABAP AS defines more than 1600 short dumps, all documented in loving detail by the kernel
developers who are responsible for them. Some of these - 'the usual suspects', in the parlance of the
film 'Casablanca' - already indicate to the savvy investigator that something other than an ABAP error is
at play in the system. A list of the usual suspects might include these short dump IDs:
Short Dump ID

SYSTEM_CORE_DUMPED

INCL_NOT_ENOUGH_PAGING, INCL_NOT_ENOUGH_PXA, INCL_NOT_ENOUGH_ROLL, MEMORY_
In many cases, overuse of configured ABAP AS memory resources through one or more work processes
has occurred, or the memory parameters in the instance profile are simply not adequately dimensioned
for the size and workload of the instance.
The System Environment section of the dump shows you the memory consumption of the dumped
program; you can check whether it in fact was the culprit (for example, it makes major use of Heap
storage).
Also: Use SM50 to look for processes in PRIV mode. Use ST02 to check the ABAP buffers and memory.
Use the Memory Analyzer in the new ABAP Debugger to check suspect programs.

DB_ERR_<DBS>
In many cases, a problem with the database (not necessarily provoked by misbehavior in an ABAP
program) has occurred. Use ST04 to check for database problems.

NI_HOST_TO_ADDR, NI_MY_HOSTNAME
The system cannot identify a remote server by name or cannot even identify itself by name.

TSV_TNEW_PAGE_ALLOC_FAILED
The first part of this weblog did not quite manage to open a short dump as of Release NW04s 7.0 EHP1
for display. Instead it reviewed ways to extract contextual information from the short dump lists and
elsewhere.
In this second part of the weblog we, in the words of W. C. Fields, grab the bull by the tail and face the
issue. In a short dump, you want to answer these primary questions:
Maybe the Short text, What happened, Error Analysis, and Source Code Extract will be enough
to let you diagnose and correct the problem. That's often the case when a dump was caused by a
relatively stupid programming error. But let's take it from the top and see what diagnostic help the short
dump offers, just in case.
The System Environment: Context Information and Where Did That RFC Actually Come
From?
You probably skip over the context information presented under System Environment. But there are
some worthwhile nuggets of information in there.
If you plan to search for OSS notes and messages, then you will need the system release and SP
levels, kernel patch level, and other facts on 'the scene of the crime' to see whether notes or messages
fit your problem. If you plan to open an OSS message for SAP, then you can simply save and attach the
entire short dump. (From the dump display in ST22, choose System -> List -> Save -> Local file.) That
should help Support to respond quickly to the problem.
If you are analyzing the problem yourself, then here are three important bits of information:
At the bottom of the System environment list, you'll find a compact overview of the
memory usage of the program at the time that it dumped. If you see that the program has
allocated heap memory, then check the section Information on where terminated to
see if the program was started in a background job. If the program was not running as a
background job, then you might want to take a look at the memory consumption of the
program in the Debugger with the Memory Analyzer or with the ABAP Runtime
Analysis (transaction SAT). A dialog program - one running interactively in a dialog process
- gets heap memory - private, process-local memory - only if the memory resources of the
Web AS have been exhausted. (Just to confuse things, background jobs manage memory
differently, and get heap memory before getting ABAP Extended Memory.) If you see a
dialog program with heap memory, then something is wrong with the program. Or the
memory resources configured in the Web AS are inadequate. Or possibly other processes
are memory hogs and have forced this process into heap memory.
Check the User and Transaction section to see if the dump occurred while
processing a dynpro screen. The User and Transaction list specifies the screen and 'Screen
Line' at which the dump occurred. Screen Line is actually the line in the flow logic of the
dynpro at which the faulty module was called. You can also get this information out
of the Source Code Extract, but here you won't have to piece together which
module in which dynpro in which program failed.
If you are dealing with an RFC problem in the RFC server, then the Server-Side
Connection Information tells you where the RFC call came from. You can then find the short
dump on the caller side, which may help you to understand the server-side dump.
And of course, the opposite is true. From a dump on the client side of an RFC interaction, you can find
out where the call went.
In the screen shot above, the identification of the faulty program is quite simple, since I was too lazy to
write a faulty method that perhaps resided in a separate include. But you may see more complicated
explanations of the location of the error like this:
The termination occurred in the ABAP program "SAPLSVIM" in
For example, we learned from the short text that my program dumped because it tried to overwrite a
constant. I don't see any constant in the bad code below. I just wanted to complete the fully-qualified
domain names of a list of hosts. Do you see the error in the code?
If you don't see where I try to overwrite a read-only field, then see the seventh point in the discussion
in Error analysis, the one that begins "Accesses using field symbols..." Experience has shown that a
lot of people just skip over the explanations in What happened and Error analysis. This may end up
costing them more time than it saves.
If you can reproduce the problem, then you can set a breakpoint right from the short dump in order to
stop just before the short dump occurs. You can then use all of the tools of the new ABAP debugger to
investigate the cause of the dump.
If the code line shown by the pointer doesn't seem to make any sense in the context of the dump, then
take a look at the previous line of code. Occasionally, the instruction counter may still advance even
after a dump has been triggered, so that the >>>>>> pointer points at the line following the bad line of
code.
The Active Calls/Events section shows the modules, functions, methods and form routines through
which the path of execution has come. You can jump into the ABAP Editor at any level in the call stack.
This means that you can set breakpoints all along the way to the dump if you think that a problem at a
higher level resulted in the dump at the end of the stack.
There are two things to remember about the ABAP call stack:
It's a call stack and not a complete history of calls. If the flow of execution returns
from the last callee in the stack, that return from the callee is not shown in the stack. If the
short dump occurs in the caller, then you might wonder why the stack shows a different
program as the end point of execution than the What happened section.
Low-level as it is, the CONT display does not care whether statements are in a macro or not - and it
shows the short dump pointer that you know from the Source Code Extract. Unfortunately, a double-click
on the CCB at the dump pointer still takes you only to the point in the source code at which the bad
macro was called. But the halfway intelligible CCB names may be enough to show you at which line of
code in the macro the problem occurred.
First of all, if the macro is not too long, then clicking on the CCBs to jump into the ABAP Editor shows
you where the macro started. Then, with a little jumping back and forth between the CONT table and
the ABAP Editor, you can start to equate the CCBs and the statements in the faulty code.
In our case, the SQLS and PAR1 CCBs turn out to reference an SQL SELECT well before the macro
call. CCB 68, BRAF, represents the start of an IF control structure in which the macro is called. The
COND and PAR1 CCBs depict the macro statement that actually failed: CONCATENATE &1 '.sap.corp'
INTO &1.
the short dump. The dump processing starts where you find activity on DB table SNAP, so search for
the problem area before that point.
See help.sap.com for help with using SAT and ST05.
System Variables
As an ABAP program executes, it is accompanied by an entire swarm of system variables, like Jupiter
with its cloud of little moons. Some of these variables are well-known, like SY-SUBRC, the return code
set by many ABAP instructions, or SY-TABIX, the counter in LOOP AT and READ TABLE internal table
instructions.
When a short dump occurs, ABAP preserves the state of the system variables at the time of the crash.
You can see the contents of these variables in the Contents of system fields section. Here are some
of the system variables that are most likely to be useful:
SY-SUBRC usually shows the last return code setting before the program crashed.
A non-zero SY-SUBRC from a method or function preceding an instruction that dumped may
illuminate for you what went wrong.
SY-TABIX. In a short dump raised from within a LOOP AT table or after a READ TABLE
instruction, SY-TABIX tells you what record from the internal table was being processed
when the program failed.
SY-INDEX provides the same iteration-count information for DO and WHILE loops.
SY-MSGID and SY-MSGNO, if set, let you look up the last message issued by the
failed program in transaction SE91. SY-MSGV1 - 4 show any message variables that
previously were set (not necessarily for use in the most recent message).
SY-DATUM and SY-UZEIT may show a more accurate and earlier time stamp for the
initial program abort than the date and time associated with the short dump itself. If you
are sifting through the System Log or Developer Traces (see Part I of this weblog), then
the few seconds' difference that you may see can be important in establishing the
chronology of events in a failure.
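The diagnostic value of a preserved loop counter like SY-TABIX can be illustrated outside ABAP as well. In this Python sketch (with invented data; Python has no system fields, so an ordinary variable plays the role), the counter survives the crash and pinpoints the offending record:

```python
records = ["10", "20", "2X", "40"]   # the third record carries bad data

tabix = 0                            # stands in for SY-TABIX
try:
    for tabix, rec in enumerate(records, start=1):
        int(rec)                     # fails on the corrupt record
except ValueError:
    # Like SY-TABIX preserved in a dump, the counter tells us which
    # record was being processed when the program failed.
    print(tabix)  # 3
```

Without the counter you would know only that some record in the table was bad; with it, you can go straight to the data that caused the failure.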
Program Variables
For the Chosen variables section, the short dump infrastructure takes a quick run through the
collapsing program context, grabbing any program and infrastructure variables it finds that are currently
in scope. The situation is a bit like a belated shopper running through a grocery just at closing time:
there's no guarantee that the shopper will bring home everything that he or she was supposed to buy.
Even though the dump infrastructure may not capture everything, much more often than not you will find
the variables and values that you want to see.
Since SAP_BASIS Release 6.20, the short dump infrastructure has captured a separate set
of Chosen variables for each level in the Active Events/Calls ABAP call stack.
If you are analyzing a data-related problem, then a careful look at the Chosen variables may clarify the
problem. In one recent example, an OSS message reported a short dump because ABAP could not
convert the character value 229812 to an integer (dump ID CONVT_NO_NUMBER). Since this is one of
ABAP's easiest tricks, the dump is at first glance pretty mystifying. A quick look at the character field
in Chosen variables showed, however, that the character field held not '229812' but rather '229812##
####p###'. The fact that the field was either not correctly initialized or was filled with non-character
data explains the conversion failure, at the very least.
Chosen variables shows the size (here, one record with a length of 3440 bytes) of an internal table, as
well as useful information such as the type of organization of the table (here, a sorted table). The table
display can be useful in analyzing the popular dump of type TSV_TNEW_PAGE_ALLOC_FAILED (no
more memory available for an internal table), since you can see how much memory has been allocated
to hold the rows of each internal table. (The amount of storage allocated for the rows may not,
however, be the amount of storage used by the rows of the table. If, for example, a table holds only data
references to objects, then storage for references may not be all the memory actually consumed by the
table and its contents. The references are relatively short. The objects may occupy much larger
amounts of memory.)
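The reference-versus-contents distinction can be demonstrated directly. In this Python sketch (sizes are illustrative), measuring the container shows only the cost of the references, not of the objects behind them:

```python
import sys

class Payload:
    def __init__(self):
        self.blob = bytearray(10_000)   # roughly 10 KB behind each reference

refs = [Payload() for _ in range(100)]

size_of_refs = sys.getsizeof(refs)                        # the list of references only
size_of_blobs = sum(sys.getsizeof(p.blob) for p in refs)  # the actual contents

print(size_of_refs < size_of_blobs)  # True: the references are the small part
```

The same caveat applies when reading the internal-table sizes in a dump: a table of references can look tiny while the objects it points to dominate the memory consumption.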
In an upcoming release, the table display will contain at least the start of the contents of each of the first
five records of each internal table that is captured.
Finally, object references that have not been initialized (a favorite cause of
OBJECTS_OBJREF_NOT_ASSIGNED_NO, and others...) are easy to pick out in Chosen variables.
Just use Ctrl-F to search for ':initial}'.
Note that a random mouse click in the Chosen variables display switches the display from the
relatively attractive formatted view to an unformatted view. Don't be alarmed. Just click on F3 / Back to
return to the formatted display.
An Ounce of Prevention...
Is worth a pound of cure, as the old saying goes.
Don't forget that ABAP offers logging and checkpoints that can be activated when needed (see
help.sap.com). With these, you can turn on switchable logging, breakpoints, and assertions to help you
with diagnosis and trouble-shooting, should something go wrong in your program after it has reached
your users.
And don't forget the suite of tools that the ABAP Workbench offers to help you find errors before your
users do, starting with tools for static checking like the Code Inspector (transaction SCI) and continuing
with the ABAP Unit test facility, with which you can even go so far as to practice test-driven
development. The best ABAP short dump is the one that you never have to analyze.