Scenario

The BDLS run takes more than 20 hours in our quality systems after a
system refresh.

Database size: 2 TB
SAP version:
Database version: Oracle 11.2.0.2.0
OS version: RHEL 5.6
The objective of this blog is to analyze whether there are faster ways
to perform the BDLS operation, even if we cross the boundaries of SAP
standard methods.
Step - 1
We assumed that the program RBDLS2LS itself has some built-in
capability to run in parallel.
We expected it to use RFC groups such as parallel_generators and to
submit the conversion for each table in a separate task/process, like a
client copy.
We found out that this is not the case: RBDLS2LS does nothing in
parallel. It neither uses any RFC groups nor more than one BGD/DIA
work process.
Again SE38 > RBDLS450 > filled the details as below > executed in
background.
We continued like this for B*, C*, D* and so on up to Z*, and finally
with a run excluding A* to Z*.
Achievement
Observations
Step - 2
PART-2
BDLS runs in phases:
1. It first determines all tables that use a logical system in one of
their fields.
2. It then finds out which tables need conversion by running a
SELECT SINGLE command on each table. For example:
SELECT /*+ FIRST_ROWS(1) */ LOGSYSP FROM COEP WHERE
MANDT=500 AND LOGSYSP = 'S1Q500' AND ROWNUM <= 1;
This means it only has to find one record to mark the given table as
requiring an update (i.e. BDLS conversion required).
3. An UPDATE command is run to modify the logical system value, e.g.
from S1P500 to S1Q500:
UPDATE "MKPF" SET "AWSYS" = 'S1Q500' WHERE MANDT =
500 AND "AWSYS" = 'S1P500' AND ROWNUM <= 100000;
Two outcomes are possible: a) a row is found and updated, or b) no
row is found. Unfortunately, most of the tables under BDLS fall under
b), because the LOGSYS fields contain nothing (SPACE).
Step - 3
Reading through the code of report RBDLS2LS, we could derive the list
of tables BDLS would run through.
The following thoughts guided our selection of tables for indexing:
We chose to go with "all tables with more than 10,000 rows in total",
because BDLS runs a SELECT SINGLE command, most of the tables fall
under category b) as explained above, and each such SELECT performs a
FULL TABLE SCAN. Therefore most of the run time depends on the total
number of rows in the table.
We then had to count the number of rows in each table to finalize the
tables on which to build indexes.
We created 80 indexes on the client field (like MANDT) and the logical
system field (like LOGSYS).
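The shortlisting and index creation described above can be scripted in SQL*Plus. A sketch using the COEP example from earlier; the index name is hypothetical:

```sql
-- Shortlist: only tables with more than 10,000 rows are worth indexing.
SELECT COUNT(*) FROM COEP;

-- B-tree index on client + logical system field, built in parallel and
-- without redo logging to speed up creation.
CREATE INDEX "COEP~Z9" ON COEP (MANDT, LOGSYSP) NOLOGGING PARALLEL;
```

These helper indexes should be dropped again after the BDLS run.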
Achievement
Observations
Step - 4
Step - 5
We tried building a REVERSE key index, with no significant benefit. The
run times remained more or less the same.
Step - 6
With further study we found that Oracle bitmap indexes work better
than B-tree indexes for columns with only a few distinct values (just
MALE and FEMALE, for example). They are compressed, very compact and
fast to scan.
MANDT contains very few distinct values, and the LOGSYS-related fields
usually contain SPACE or very few distinct values. This characteristic
makes them ideal candidates for bitmap indexes.
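A bitmap index on the same columns could be created as sketched below (the index name is hypothetical; bitmap indexes are not supported through SAP tools such as SE14, so this has to be done directly on the database):

```sql
-- Bitmap index: suited to low-cardinality columns such as MANDT and
-- LOGSYS-related fields that are mostly SPACE.
CREATE BITMAP INDEX "COEP~ZB" ON COEP (MANDT, LOGSYSP) NOLOGGING PARALLEL;
```

Bitmap indexes behave poorly under concurrent DML, so they should exist only for the duration of the conversion.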
Activity on COEP | Regular Index | Bitmap Index
Create index in parallel, nologging | 2 Min 25 Sec | 1 Min 32 Sec
Size of index | 3425 MB | 53 MB
 | 16 Seconds | 1.25 Seconds

only with automation using SQL scripts
PART-3
A possible solution was to drop or hide the indexes containing these
fields before performing the mass updates.
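One way to "hide" an index during a mass update is to mark it UNUSABLE, so that Oracle stops maintaining it, and rebuild it afterwards. A sketch with a hypothetical index name:

```sql
-- Stop index maintenance during the mass update
-- (requires skip_unusable_indexes = TRUE, the default in 11g).
ALTER INDEX "MKPF~Z01" UNUSABLE;

-- ... run the mass update ...

-- Rebuild once the update is done.
ALTER INDEX "MKPF~Z01" REBUILD NOLOGGING PARALLEL;
```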
Step - 6
Achievement
BDLS completed in 8 hours. Not a great benefit though.
Observations
Table | Run time in hours | Table size in MB
MKPF | 7.78 | 3253.19
SWW_CONTOB | 6.28 | 4837.54
EDIDC | 6.11 | 6766.91
MLHD | 5.52 | 2786.75
RBKP | 5.42 | 2972.29
VBRK | 2.55 | 1029.07
VBAP | 2.25 | 3333.66
ETXDCH | 1.57 | 266.32
SMMAIN | 1.44 | 96.08
SMPARAM | 1.42 | 55.27
SMSELKRIT | 1.41 | 234.27
We observed that these tables are not very big; the largest one is
EDIDC at 6.6 GB, yet it surprisingly takes more than 6 hours to
update.
Step - 7
After further googling and reading several articles, we realized that
CTAS (Create Table As Select) is the best method to perform mass
updates.
You may notice the highlighted lines in the command: Oracle's DECODE
function performs the conversion (the UPDATE).
LOGSYS
END LOGSYS,
RCVSAD, RCVSMN, RCVSNA, RCVSCA, RCVSDF, RCVSLF, RCVLAD, STD,
STDVRS, STDMES, MESCOD, MESFCT, OUTMOD, TEST, SNDPOR, SNDPRT,
decode(MANDT||SNDPRN,'500S1P500','S1Q500',SNDPRN) SNDPRN,
SNDSAD, SNDSMN, SNDSNA, SNDSCA, SNDSDF, SNDSLF, SNDLAD, REFINT,
REFGRP, REFMES, ARCKEY, CREDAT, CRETIM, MESTYP, IDOCTP, CIMTYP,
RCVPFC, SNDPFC, SERIAL, EXPRSS, UPDDAT, UPDTIM, MAXSEGNUM
FROM EDIDC
);
Rebuild indexes from SE14 > EDIDC > Indexes > Create in Background
The total operation took less than 5 minutes for EDIDC, whereas the
BDLS conversion took 6 hours.
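The full CTAS-based swap follows the usual pattern. A condensed sketch with hypothetical staging/backup table names; the real statement must list every column of the table, as in the fragment above:

```sql
-- 1. Build the converted copy in one pass, in parallel, without redo.
CREATE TABLE "EDIDC_NEW" NOLOGGING PARALLEL AS
SELECT MANDT,
       DOCNUM,
       DECODE(MANDT||SNDPRN, '500S1P500', 'S1Q500', SNDPRN) SNDPRN
       -- ... all remaining columns of EDIDC ...
  FROM EDIDC;

-- 2. Swap the tables, keeping the original as a fallback.
RENAME EDIDC TO "EDIDC_OLD";
RENAME EDIDC_NEW TO "EDIDC";
```

The indexes are then rebuilt on the new table, and EDIDC_OLD is dropped only after validation.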
Achievement
Observations
Step - 8
Before dropping any index, you can extract the DDL (SQL) command to
rebuild it later, as below:
set timi on
set echo off
set head off
set long 5000
set pagesize 0
set linesize 150
column DDL format A150
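With those SQL*Plus settings in place, the DDL itself can be pulled with DBMS_METADATA; a sketch for the EDIDC indexes:

```sql
-- Dump the CREATE INDEX statement for every index on EDIDC.
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner) DDL
  FROM dba_indexes
 WHERE table_name = 'EDIDC';
```

Save the output before dropping the indexes so they can be recreated verbatim afterwards.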
NOT NULL constraints had to be added to table EDIDC for fields RCVPRN
and SNDPRN.
ALTER TABLE "EDIDC" MODIFY RCVPRN NOT NULL;
ALTER TABLE "EDIDC" MODIFY SNDPRN NOT NULL;
Step - 9
PART-4
Putting together all possible optimizations and automation, these are
the brief steps we arrived at:
Activity | Time
 | 5 minutes
Bitmap indexes | 30 minutes
CTAS for 15 tables | 15 minutes
 | 30 minutes
 | 5 minutes
TOTAL | 1:25 hours
You may also plan to create the bitmap indexes on the source system so
that they are already available on the target system after the
refresh. This will save 30 minutes.