Você está na página 1de 5

1. What settings are available in a DSO? Explain each of them.

The following settings are available in a Standard DSO:

Type of DataStore Object: By default, the DSO is created as a standard type. This can be changed by clicking the Change icon.

SID Generation upon Activation: When this box is checked (the default), the system generates an integer number known as a Surrogate ID (SID) for each master data value. These SIDs are stored in separate tables called SID tables. For each characteristic InfoObject, SAP NetWeaver BW checks whether an SID already exists in the SID table for each value of the InfoObject, and generates a new SID if none is found. The SID is used internally by SAP NetWeaver BW when a query is based on a DSO. If the Standard DSO is not used for reporting and serves only staging purposes, it is recommended to uncheck this checkbox.

Unique Data Records: This setting is used when there is no chance that the data being loaded into a standard DSO will create duplicate records. It improves performance by eliminating some internal processes. If this box is checked and duplicate records do occur, you will receive an error message, so select it only when you are sure your data contains no duplicates.

Set Quality Status to OK Automatically: This flag causes the quality status of the data to be set to OK after it has been loaded without any technical errors. The status must be OK before newly loaded data in the standard DSO can be activated, and only activated data can be passed on to further data targets.

Activate Data Automatically: Data loaded into a standard DSO is first stored in the Activation Queue table and is then moved to the active data table by the activation process. Check this flag to make that process automatic.

Update Data Automatically: Activated data in a standard DSO can be passed on to other data targets, such as another DSO or an InfoCube. Set this flag to automate this process.

2. Explain how you used start routines in your project.

Start routines are used for mass processing of records. In a start routine, all the records of the data package are available, so they can all be processed together. In one scenario, we wanted to apply size percentages to forecast data. For example, if material M1 has a forecast of, say, 100 for May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single record coming in through the InfoPackage. This is achieved in the start routine, as sketched below.
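Here is a minimal, hypothetical sketch of that size-split logic in a start routine. The SIZE and FORECAST_QTY fields of the source structure _ty_s_SC_1, and the 20/40/20/20 split table, are assumed for illustration only; they are not taken from a real DataSource.

  * Hypothetical sketch: split each forecast record into four size records.
  * Assumes _ty_s_SC_1 carries a SIZE characteristic and a FORECAST_QTY
  * key figure; field names and percentages are illustrative only.
      TYPES: BEGIN OF lty_split,
               size TYPE c LENGTH 2,
               pct  TYPE p LENGTH 5 DECIMALS 2,
             END OF lty_split.

      DATA: lt_split  TYPE STANDARD TABLE OF lty_split,
            ls_split  TYPE lty_split,
            lt_result TYPE STANDARD TABLE OF _ty_s_SC_1,
            ls_source TYPE _ty_s_SC_1,
            ls_target TYPE _ty_s_SC_1.

  *   Size distribution: Small 20%, Medium 40%, Large 20%, Extra Large 20%
      ls_split-size = 'S'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
      ls_split-size = 'M'.  ls_split-pct = '0.40'. APPEND ls_split TO lt_split.
      ls_split-size = 'L'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
      ls_split-size = 'XL'. ls_split-pct = '0.20'. APPEND ls_split TO lt_split.

  *   Replace each incoming forecast record with one record per size
      LOOP AT SOURCE_PACKAGE INTO ls_source.
        LOOP AT lt_split INTO ls_split.
          ls_target = ls_source.
          ls_target-size         = ls_split-size.
          ls_target-forecast_qty = ls_source-forecast_qty * ls_split-pct.
          APPEND ls_target TO lt_result.
        ENDLOOP.
      ENDLOOP.
      SOURCE_PACKAGE[] = lt_result[].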

Basically, we use start routines for deletion criteria and to fetch global data. In the SD scenario:

1. Deletion criteria: we do not want to load data belonging to a particular material type (MTART). For example, among material types such as FERT, ABS, and RAW, we do not want to load data belonging to FERT. The code is:

      DELETE SOURCE_PACKAGE WHERE mtart = 'FERT'.

The system will then not update any data belonging to FERT.

2. Global data fetching: I have three DSOs, named DSO1, DSO2, and DSO3, with a mapping between DSO1 and DSO3. One field in DSO3 is not filled from DSO1, so I have to fill it from a DSO2 field; in other words, it is a lookup on DSO2. If I wrote the SELECT statement in a characteristic routine, it would be triggered record by record, which leads to bad performance. So I fetch the DSO2 data in the start routine and assign it in the characteristic routine. The start routine is triggered package by package, which reduces the number of database hits and so gives better loading performance. Basically, the start routine is used to filter the data before it enters the data target; what to filter depends on the requirement.

Also for the SD scenario: looking up the sales document information from the DSO 0CDS_DS02 and populating the fields in a transformation. The lookup is done globally in the start routine so that all the other fields in the transformation can be assigned from the global structure gs_sa_item in the individual rules as RESULT = gs_sa_item-fieldname.

This is the code to be placed in the start routine:

  *$*$ begin of global - insert your declaration only below this line *-*
  DATA: gs_sa_item    TYPE /bi0/acds_ds0200.                   " buffered item for field rules
  DATA: gt_orders     TYPE STANDARD TABLE OF /bi0/acds_ds0200. " package-level lookup buffer
  DATA: gs_orders     TYPE /bi0/acds_ds0200.
  DATA: gv_matnr      TYPE /bi0/oimaterial.
  DATA: gv_doc_type   TYPE /bi0/oidoc_type.
  DATA: gv_ord_reason TYPE /bi0/oiord_reason.
  DATA: gv_comp_code  TYPE /bi0/oicomp_code.
  DATA: gv_sold_to    TYPE /bi0/oisold_to.
  DATA: gv_cust_grp1  TYPE /bi0/oicust_grp1.
  DATA: gv_cust_grp2  TYPE /bi0/oicust_grp2.
  DATA: gv_cust_grp3  TYPE /bi0/oicust_grp3.
  *$*$ end of global - insert your declaration only before this line *-*

  METHOD start_routine.
  *=== Segments ===
    FIELD-SYMBOLS: <SOURCE_FIELDS> TYPE _ty_s_SC_1.
    DATA: MONITOR_REC TYPE rstmonitor.
  *$*$ begin of routine - insert your code only below this line *-*
    DATA: ls_buffer TYPE _ty_s_SC_1.
    DATA: lt_buffer TYPE STANDARD TABLE OF _ty_s_SC_1.
    DATA: lv_idx    TYPE sy-tabix.
    DATA: ls_orders TYPE /bi0/acds_ds0200.
    DATA: ls_sa_item TYPE /bi0/acds_ds0200,
          ls_co_scl  TYPE /bi0/acds_ds0400,
          ls_monitor TYPE rsmonitor.
    CONSTANTS: gc_message_id TYPE sy-msgid VALUE 'RS_BCT_APO_CDS'.

    CLEAR ls_orders.
    lt_buffer[] = SOURCE_PACKAGE[].

  * Keep only document categories 'C' and 'I'; map reversal flag 'R' to 'X'
    LOOP AT lt_buffer INTO ls_buffer.
      lv_idx = sy-tabix.
      IF ls_buffer-vbtyp NE 'C' AND ls_buffer-vbtyp NE 'I'.
        DELETE lt_buffer.                       " drop the current record
      ELSE.
        IF ls_buffer-rocancel EQ 'R'.
          ls_buffer-rocancel = 'X'.
          MODIFY lt_buffer FROM ls_buffer INDEX lv_idx TRANSPORTING rocancel.
        ENDIF.
      ENDIF.
    ENDLOOP.
    SOURCE_PACKAGE[] = lt_buffer[].

  * One SELECT per package: buffer the order items for all source records
    IF SOURCE_PACKAGE[] IS NOT INITIAL.
      SELECT * FROM /bi0/acds_ds0200
        INTO TABLE gt_orders
        FOR ALL ENTRIES IN SOURCE_PACKAGE
        WHERE doc_number = SOURCE_PACKAGE-vbeln
          AND s_ord_item = SOURCE_PACKAGE-posnr.
      IF gt_orders[] IS NOT INITIAL.
  *     Sort so the field routines can use READ ... BINARY SEARCH
        SORT gt_orders BY doc_number s_ord_item.
      ENDIF.
    ENDIF.

Place the code below in whichever field mapping routine comes first:

    DATA: ls_orders TYPE /bi0/acds_ds0200.

    CLEAR ls_orders.
    CLEAR gs_sa_item.
  * Fetch the buffered order item for the current source record
    READ TABLE gt_orders INTO ls_orders
      WITH KEY doc_number = SOURCE_FIELDS-vbeln
               s_ord_item = SOURCE_FIELDS-posnr
      BINARY SEARCH.
    IF sy-subrc = 0.
      gs_sa_item = ls_orders.
    ENDIF.
    RESULT = gs_sa_item-lowr_bnd.

For all other fields you can then simply assign:

    RESULT = gs_sa_item-sold_to.

A start routine is concerned entirely with the source data: you write your logic against SOURCE_PACKAGE, and it executes before the transformation, package by package (LOOP AT SOURCE_PACKAGE INTO source_fields). An end routine works on the target structure, so you write your logic against RESULT_PACKAGE (LOOP AT RESULT_PACKAGE INTO result_fields); generally, DSO lookups are done in end routines. A minimal end routine sketch of this pattern follows.
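For comparison, here is a minimal, hypothetical end routine sketch of the same lookup pattern. The target structure _ty_s_TG_1 and its doc_number, s_ord_item, and sold_to fields are assumed for illustration; only the pattern (one SELECT per package, then a sorted binary-search read per record) is the point.

  METHOD end_routine.
  * Minimal sketch: fill sold_to in the target by a lookup on the DSO
  * active table. Field and structure names are illustrative only.
    DATA: ls_result TYPE _ty_s_TG_1,
          lt_lookup TYPE STANDARD TABLE OF /bi0/acds_ds0200,
          ls_lookup TYPE /bi0/acds_ds0200.

    IF RESULT_PACKAGE[] IS NOT INITIAL.
  *   One database hit per package instead of one per record
      SELECT * FROM /bi0/acds_ds0200
        INTO TABLE lt_lookup
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE doc_number = RESULT_PACKAGE-doc_number
          AND s_ord_item = RESULT_PACKAGE-s_ord_item.
      SORT lt_lookup BY doc_number s_ord_item.
    ENDIF.

    LOOP AT RESULT_PACKAGE INTO ls_result.
      READ TABLE lt_lookup INTO ls_lookup
        WITH KEY doc_number = ls_result-doc_number
                 s_ord_item = ls_result-s_ord_item
        BINARY SEARCH.
      IF sy-subrc = 0.
        ls_result-sold_to = ls_lookup-sold_to.
        MODIFY RESULT_PACKAGE FROM ls_result.
      ENDIF.
    ENDLOOP.
  ENDMETHOD.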

Finance reports used in real time: GL accounts

1. Actual comparison of financial results by period with previous periods

Filter by:
1. GL Account
2. Financial statement version

This report gives you a financial statement for the selected period with comparisons. It can also be viewed on a half-yearly, quarterly, or periodic basis. For a 10-year comparison, use transaction S_ALR_87012257.
2. Balance sheet / P&L statement

Filter by:
1. Financial statement version
2. Company code

SAP BI/BW Interview Questions asked in Face to Face (F2F) Interviews and Phone Interviews
Company: Wipro, Bengaluru (Bangalore)

1. Tell us about yourself.
2. What is the extended star schema?
3. What objects or fields are stored in a SID table?
4. What do you call the fields in the fact table other than key figures?
5. What is the difference between a time-dependent characteristic and a time-independent characteristic?
6. What are the variable types?
7. What is the difference between a formula variable and a characteristic value variable?
8. What are the P, X, and Y tables?

CapGemini, Bangalore

1. What is the process key in extraction?

ITC Infotech India

1. Data is loaded from another ODS into an InfoCube through a lookup routine for three fields. The loading time into the InfoCube is high, so this is a performance issue. How do you reduce the loading time of the lookup routine (transfer routine)?
2. The data is correct up to the InfoCube, but in a MultiProvider one field A shows up as # and the key figure values come out doubled. Where would the issue be? How can you resolve it?
3. Have you worked on variables? Give an example of where you have used them. (Text variable with replacement path.)

BristleCone

1. What is an attribute change run?
2. What is the RSRT transaction code?
3. What do you do if a delta load fails?

More interview questions will be posted later.

HP Client

1. In reporting, what is the difference between the key date and a date in a time characteristic? When do you use the key date, and what is its purpose?
2. How do you do performance tuning with BW statistics? How do you find where the performance issue is? What do you do to fix performance with aggregates?
3. When a sales order is cancelled, how do you update this in BI?
4. What do you do in your daily work? In which areas of BI do you have good experience? What are the modules in your project?
