
Testing: BULK DATA LOAD on an Exadata Database Machine. NOTE: This is for a testing environment only.

For bulk data loading, Oracle recommends creating a separate dedicated database instance for DBFS on the Database Machine.

Step 1: Configure a database file system (DBFS) and use it to stage a CSV-formatted file.
Step 2: Create an external table to reference the CSV file.
Step 3: Use CREATE TABLE AS SELECT to copy the CSV file data into a table in your database.

Step 1: Configure a database file system (DBFS) and use it to stage a CSV-formatted file.

i. Create a tablespace for the database file system (DBFS) and a new database user to support DBFS.

create bigfile tablespace dbfs datafile '+DBFS_DG' size 130M;
create user dbfs identified by dbfs quota unlimited on dbfs;
grant create session, create table, create procedure, dbfs_role to dbfs;
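A quick sanity check (not in the original steps; a suggested verification using standard dictionary views) confirms the quota and grants from a DBA session:

-- Verify the DBFS user's tablespace quota and granted roles
select tablespace_name, max_bytes from dba_ts_quotas where username = 'DBFS';
select granted_role from dba_role_privs where grantee = 'DBFS';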
ii. Create a new directory named DBFS, which will act as the mount point for your database file system.

[oracle@XXXXXX ~]$ mkdir DBFS
iii. Create the database objects for your DBFS store using the dbfs_create_filesystem_advanced.sql script (under $ORACLE_HOME/rdbms/admin).

sqlplus dbfs/dbfs @?/rdbms/admin/dbfs_create_filesystem_advanced.sql dbfs mydbfs nocompress nodeduplicate noencrypt non-partition


dbfs --> specifies the tablespace where the DBFS store is created.
mydbfs --> specifies the name of the DBFS store.

The final four parameters specify whether or not to enable various features inside the DBFS store. It is typically recommended to leave the advanced features disabled for a DBFS store that is used to stage data files for bulk data loading.

Output:
-------CREATE STORE:
begin
dbms_dbfs_sfs.createFilesystem(store_name => 'FS_MYDBFS', tbl_name => 'T_MYDBFS',
tbl_tbs => 'dbfs', lob_tbs => 'dbfs', do_partition => false, partition_key => 1,
do_compress => false, compression => '', do_dedup => false, do_encrypt => false);
end;
-------REGISTER STORE:
begin
dbms_dbfs_content.registerStore(store_name => 'FS_MYDBFS', provider_name => 'sample1',
provider_package => 'dbms_dbfs_sfs');
end;
-------MOUNT STORE:
begin
dbms_dbfs_content.mountStore(store_name => 'FS_MYDBFS', store_mount => 'mydbfs');
end;
-------CHMOD STORE:
declare m integer;
begin
m := dbms_fuse.fs_chmod('/mydbfs', 16895);
end;
No errors.
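As an optional check (a sketch, assuming the store mounted at /mydbfs as shown above), the DBFS_CONTENT view in the dbfs schema lists the paths stored in DBFS:

-- Connected as dbfs: list the paths currently stored in the file system
select pathname, pathtype from dbfs_content;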
iv. For the DBFS database user, create a file named passwd.txt, which contains the password.

[oracle@XXXXXX ~]$ echo dbfs > passwd.txt
[oracle@XXXXXX ~]$ ls
datagenerator  DBFS  labs  oradiag_oracle  passwd.txt  setup  sql1.sh
[oracle@XXXXXX ~]$ vi passwd.txt
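Because passwd.txt holds a clear-text password, a sensible precaution (not part of the original steps) is to restrict its permissions to the oracle user:

[oracle@XXXXXX ~]$ chmod 600 passwd.txt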
v. Mount the database file system by running the DBFS client (dbfs_client)
-----------------------------------------------------------------------------------------------

[oracle@XXXXXX ~]$ nohup $ORACLE_HOME/bin/dbfs_client \
> dbfs@tstxdb -o allow_other,direct_io /home/oracle/DBFS < passwd.txt &
[1] 28161
[oracle@XXXXXX ~]$ nohup: appending output to `nohup.out'
vi. Check the dbfs_client process and database file system ---------------------------------------------------------------------------------

[oracle@XXXXXX ~]$ ps -ef | grep dbfs_client
oracle   28161 25684  0 02:30 pts/0  00:00:00 /u01/app/oracle/product/11.2.0/dbhome_1/bin/dbfs_client dbfs@tstxdb -o allow_other,direct_io /home/oracle/DBFS
oracle   28238 25684  0 02:31 pts/0  00:00:00 grep dbfs_client
[oracle@XXXXXX ~]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             8022104   4008292   3599740  53% /
/dev/xvda1              101086     13048     82819  14% /boot
tmpfs                  1331200    797316    533884  60% /dev/shm
/dev/xvdb1            15480800  11955632   2738788  82% /u01
dbfs-dbfs@tstxdb:/      101080       152    100928   1% /home/oracle/DBFS

vii. Transfer the CSV file to stage it inside DBFS
------------------------------------------------------------
[oracle@XXXXXX ~]$ cp /Test/CSV/customers_tsdbfs.csv DBFS/mydbfs/
[oracle@XXXXXX ~]$ cd DBFS/mydbfs/
[oracle@XXXXXX mydbfs]$ ls -l *.csv
-rw-r--r-- 1 oracle oinstall 7552705 Sep 30 02:33 customers_tsdbfs.csv

The CSV data file is now staged inside DBFS.

viii. Create a database directory object over the DBFS mount and grant access to the loading schema.

SQL> create directory staging as '/home/oracle/DBFS/mydbfs';

Directory created.

SQL> grant read, write on directory staging to sh;

Grant succeeded.
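Optionally (a suggested check, not in the original transcript), confirm the directory object points at the DBFS mount:

SQL> select directory_name, directory_path from all_directories where directory_name = 'STAGING';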
Step 2: Create an external table to reference the CSV file.

i. Create an external table that references the data in your DBFS-staged CSV data file.

SQL> create table ext_customers
  (
    customer_id      number(12),
    cust_first_name  varchar2(30),
    cust_last_name   varchar2(30),
    nls_language     varchar2(3),
    nls_territory    varchar2(30),
    credit_limit     number(9,2),
    cust_email       varchar2(100),
    account_mgr_id   number(6)
  )
  organization external
  (
    type oracle_loader
    default directory staging
    access parameters
    (
      records delimited by newline
      badfile staging:'custxt%a_%p.bad'
      logfile staging:'custxt%a_%p.log'
      fields terminated by ',' optionally enclosed by '"'
      missing field values are null
      (
        customer_id, cust_first_name, cust_last_name, nls_language,
        nls_territory, credit_limit, cust_email, account_mgr_id
      )
    )
    location ('customers_tsdbfs.csv')
  )
  parallel
  reject limit unlimited;

Table created.

ii. Check query execution plans
-----------------------------
SQL> set autotrace on explain
SQL> select count(*) from ext_customers;

  COUNT(*)
----------
    100000
Execution Plan
----------------------------------------------------------
Plan hash value: 3054877561

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name          | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |               |     1 |    16   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE                  |               |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR                 |               |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10000      |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE               |               |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR           |               |  8168 |    16   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       EXTERNAL TABLE ACCESS FULL | EXT_CUSTOMERS |  8168 |    16   (0)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------

Note: The full table scan of the external table is executed in parallel.
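If an explicit degree of parallelism is preferred over the default, it can be set on the external table; the degree of 8 below is an arbitrary example, not from the original test:

-- Optional: fix the degree of parallelism instead of relying on the default
SQL> alter table ext_customers parallel 8;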

SQL> set autotrace off;
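Before loading, a quick spot check of the data can be useful (a hypothetical query, not part of the original transcript); any access-parameter problems surface as rejected rows in the bad and log files:

-- Sample a few rows to confirm the fields parse as expected
SQL> select customer_id, cust_first_name, cust_email from ext_customers where rownum <= 5;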

STEP 3: Load the external table data from the CSV file into a new table

SQL> create table loaded_customers as select * from ext_customers;

Table created.

SQL> select count(*) from loaded_customers;

  COUNT(*)
----------
    100000

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

[oracle@XXXXXX mydbfs]$ ls -l
total 7382
-rw-r--r-- 1 oracle oinstall 7552705 Sep 30 02:33 customers.csv
-rw-r--r-- 1 oracle dba         2752 Sep 30 02:43 custxt000_28630.log
-rw-r--r-- 1 oracle dba         2752 Sep 30 02:43 custxt000_3259.log

A parallel direct-path alternative to this CTAS is sketched at the end of this section.

Unmount the database file system
------------------------------------
[oracle@XXXXXX mydbfs]$ cd
[oracle@XXXXXX ~]$ fusermount -u /home/oracle/DBFS/
[oracle@XXXXXX ~]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             8022104   4008292   3599740  53% /
/dev/xvda1              101086     13048     82819  14% /boot
tmpfs                  1331200    797316    533884  60% /dev/shm
/dev/xvdb1            15480800  11957216   2737204  82% /u01
[1]+  Done    nohup $ORACLE_HOME/bin/dbfs_client dbfs@tstxdb -o allow_other,direct_io /home/oracle/DBFS < passwd.txt
[oracle@XXXXXX ~]$ ps -ef | grep dbfs_client
oracle   29077 25684  0 02:47 pts/0  00:00:00 grep dbfs_client
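As mentioned above, a parallel direct-path insert is an alternative to the CTAS for larger loads. This is a sketch, not part of the original run: the target table is assumed to already exist with the same column layout, and the degree of 8 is an arbitrary example.

-- Direct-path (append) insert, run in parallel into an existing table
SQL> alter session enable parallel dml;
SQL> insert /*+ append parallel(loaded_customers, 8) */ into loaded_customers
     select /*+ parallel(ext_customers, 8) */ * from ext_customers;
SQL> commit;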
