Oracle Query
Too often we become impatient when an Oracle query we have executed does not seem to return any result. But Oracle (10g onwards) gives us an option to check how long a query will run, that is, to find out the expected time of completion.
The option is to use v$session_longops. Below is a sample query that will give you the percentage of work completed for a long-running operation.
Script
SELECT
opname,
target,
ROUND((sofar/totalwork),4)*100 Percentage_Complete,
start_time,
CEIL(time_remaining/60) Max_Time_Remaining_In_Min,
FLOOR(elapsed_seconds/60) Time_Spent_In_Min
FROM v$session_longops
WHERE totalwork > 0
AND sofar != totalwork;
If you have access to v$sqlarea table, then you can use another version of the above query that will also
show you the exact SQL running. Here is how to get it,
SELECT
opname,
target,
ROUND((sofar/totalwork),4)*100 Percentage_Complete,
start_time,
CEIL(TIME_REMAINING /60) MAX_TIME_REMAINING_IN_MIN,
FLOOR(ELAPSED_SECONDS/60) TIME_SPENT_IN_MIN,
AR.SQL_FULLTEXT,
AR.PARSING_SCHEMA_NAME,
AR.MODULE client_tool
FROM V$SESSION_LONGOPS L, V$SQLAREA AR
WHERE L.SQL_ID = AR.SQL_ID
AND TOTALWORK > 0
AND ar.users_executing > 0
AND sofar != totalwork;
NOTE
This query will give you a correct result only if a FULL TABLE SCAN or an INDEX FAST FULL SCAN is being performed by the database for your query. If there is no full table scan or index fast full scan, you can force one by using the /*+ FULL */ optimizer hint in your query.
What follows is the first of a two-part article that will teach you exactly the things you must know about the Query Plan.
When you fire an SQL query at Oracle, the database internally creates a query execution plan in order to fetch the desired data from the physical tables. The query execution plan is nothing but a set of methods describing how the database will access the data in the tables. This plan is crucial, as different execution plans incur different costs and execution times.
How the execution plan is created depends on what type of query optimizer is being used in your Oracle database. There are two optimizer options – Rule Based (RBO) and Cost Based (CBO). For Oracle 10g, CBO is the default optimizer. The Cost Based Optimizer makes Oracle generate the plan by taking all the related table statistics into consideration. The RBO, on the other hand, uses a fixed set of pre-defined rules to generate the plan. Obviously, such a fixed set of rules may not always produce the most efficient plan, because an efficient plan depends heavily on the nature and volume of the tables' data. For this reason, CBO is preferred over RBO.
Understanding Oracle Query Execution Plan
But this article is not for comparing RBO and CBO (in fact, there is not much point in comparing them, since RBO is deprecated from Oracle 10g onwards); it is about understanding the execution plan itself.
So let’s begin. I will be using Oracle 10g server and SQL *Plus client to demonstrate all the details.
Let’s start by creating a simple product table with the following structure,
ID number(10)
NAME varchar2(100)
DESCRIPTION varchar2(255)
SERVICE varchar2(30)
PART_NUM varchar2(50)
LOAD_DATE date
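A sketch of the DDL for this table (the column sizes follow the structure above; the table name product is the one that appears in the plans later):

```sql
-- Sketch of the PRODUCT table used in the examples below
CREATE TABLE product (
    id          NUMBER(10),
    name        VARCHAR2(100),
    description VARCHAR2(255),
    service     VARCHAR2(30),
    part_num    VARCHAR2(50),
    load_date   DATE
);
```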
Next, I will insert 15,000 records into this newly created table (data taken from one of my existing product databases).
So we start our journey by writing a simple select statement on this table as below,
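The plan output that follows is what EXPLAIN PLAN plus DBMS_XPLAN typically produces; a sketch of the commands (they are not preserved in the original text):

```sql
EXPLAIN PLAN FOR
SELECT * FROM product;

SELECT * FROM TABLE(dbms_xplan.display);
```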
PLAN_TABLE_OUTPUT
----------------------------------------------------------
Plan hash value: 3917577207
-------------------------------------
| Id | Operation | Name |
-------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL | PRODUCT|
-------------------------------------
Note
-----
- rule based optimizer used (consider using cbo)
Notice that optimizer has decided to use RBO instead of CBO as Oracle does not have any statistics for this
table. Let’s now build some statistics for this table by issuing the following command,
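The command in question is presumably the classic statistics-gathering statement (ANALYZE matches the listings later in this article; DBMS_STATS is the recommended modern equivalent):

```sql
ANALYZE TABLE product COMPUTE STATISTICS;

-- or, preferred from 10g onwards:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PRODUCT');
```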
PLAN_TABLE_OUTPUT
-----------------------------------------------------
Plan hash value: 3917577207
-----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
-----------------------------------------------------
| 0 | SELECT STATEMENT | | 15856 | 1254K|
| 1 | TABLE ACCESS FULL | PRODUCT | 15856 | 1254K|
-----------------------------------------------------
You can easily see that this time the optimizer has used the Cost Based Optimizer (CBO) and has also detailed the estimated Rows and Bytes for each operation.
The point to note here is that Oracle is reading the whole table (denoted by TABLE ACCESS FULL), which is very obvious because the select * statement being fired is trying to read everything. So there's nothing wrong with this plan.
Now let’s add a WHERE clause in the query and also create some additional indexes on the table.
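A sketch of those steps (the unique index name IDX_PROD_ID is taken from the plan output below; the exact DDL is an assumption):

```sql
-- Unique index on the lookup column
CREATE UNIQUE INDEX idx_prod_id ON product (id);

-- Explain a query with an equality predicate on the indexed column
EXPLAIN PLAN FOR
SELECT id FROM product WHERE id = 100;
```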
Index created.
Explained.
PLAN_TABLE_OUTPUT
---------------------------------------------------------
Plan hash value: 2424962071
---------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
---------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 |
|* 1 | INDEX UNIQUE SCAN | IDX_PROD_ID | 1 | 4 |
---------------------------------------------------------
So the above plan indicates that CBO is performing an Index Unique Scan. This means that, in order to fetch the id value as requested, Oracle is actually reading the index only and not the whole table. Of course this will be faster than the FULL TABLE ACCESS operation shown earlier.
Table Access by Index RowID
Searching the index is a fast and efficient operation for Oracle, and when Oracle finds the desired value it is looking for (in this case id=100), it can also find the rowid of the corresponding record in the product table. Oracle can then use this rowid to fetch further information if requested in the query. See below,
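A sketch of the statement being explained (the command itself is missing from the original):

```sql
EXPLAIN PLAN FOR
SELECT * FROM product WHERE id = 100;

SELECT * FROM TABLE(dbms_xplan.display);
```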
Explained.
PLAN_TABLE_OUTPUT
----------------------------------------------------------
Plan hash value: 3995597785
----------------------------------------------------------
| Id | Operation | Name |Rows | Bytes|
----------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 81 |
| 1 | TABLE ACCESS BY INDEX ROWID| PRODUCT| 1 | 81 |
|* 2 | INDEX UNIQUE SCAN | IDX_PROD_ID | 1 | |
----------------------------------------------------------
TABLE ACCESS BY INDEX ROWID is the interesting part to check here. Since we have now specified select * for id=100, Oracle first uses the index to obtain the rowid of the record, and then it selects all the columns of that row from the table.
But what if we specify a >, <, or BETWEEN criterion in the WHERE clause instead of an equality condition? Like below,
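For instance (an assumed predicate; the plan's estimate of 7 rows suggests a narrow range like this):

```sql
EXPLAIN PLAN FOR
SELECT id FROM product WHERE id < 10;
```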
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
---------------------------------------------
Plan hash value: 1288034875
-------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
-------------------------------------------------------
| 0 | SELECT STATEMENT | | 7 | 28 |
|* 1 | INDEX RANGE SCAN| IDX_PROD_ID | 7 | 28 |
-------------------------------------------------------
So this time CBO goes for an INDEX RANGE SCAN instead of an INDEX UNIQUE SCAN. The same thing will generally happen for BETWEEN and other range predicates.
Now, let's see another interesting aspect of index scans by reversing the condition to "> 10". Before we see the outcome, remind yourself that there are 15,000-odd products with ids starting from 1. So if we write "> 10" we are likely to get almost 14,990+ records in return. So does Oracle still go for an INDEX RANGE SCAN?
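The reversed-range statement would be explained like this (a sketch):

```sql
EXPLAIN PLAN FOR
SELECT id FROM product WHERE id > 10;
```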
Explained.
PLAN_TABLE_OUTPUT
------------------------------------------------
Plan hash value: 2179322443
--------------------------------------------------------
| Id | Operation | Name | Rows |Bytes |
--------------------------------------------------------
| 0 | SELECT STATEMENT | | 15849|63396 |
|* 1 | INDEX FAST FULL SCAN| IDX_PROD_ID| 15849|63396 |
---------------------------------------------------------
So Oracle is actually using an INDEX FAST FULL SCAN to "quickly" scan through the index and return the records. This scan is "quick" because, unlike an index full scan or an index unique scan, an INDEX FAST FULL SCAN uses multi-block I/O, whereas the former two use single-block I/O.
Joins.
This time we will explore and try to understand query plan for joins. Let’s take on joining of two tables
and let’s find out how Oracle query plan changes. We will start with two tables as following,
Product Table
- The 15,000-row product table created earlier, with a unique id column.
Buyer Table
- Stores roughly 150,000 buyers who buy the above products. This table has a unique id field as well as a prodid field that references the product table.
Before we start, please note that we do not have any indexes or table statistics present for these tables.
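The join being explained is presumably of this shape (a sketch; the exact statement is missing from the original):

```sql
EXPLAIN PLAN FOR
SELECT *
FROM buyer, product
WHERE buyer.prodid = product.id;
```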
Explained.
PLAN_TABLE_OUTPUT
---------------------------------------------
---------------------------------------
| Id | Operation | Name |
---------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | MERGE JOIN | |
| 2 | SORT JOIN | |
| 3 | TABLE ACCESS FULL| BUYER |
|* 4 | SORT JOIN | |
| 5 | TABLE ACCESS FULL| PRODUCT |
---------------------------------------
The plan above tells us that CBO is opting for a Sort Merge Join. In this type of join, both tables are read individually, each is sorted on the join predicate, and the sorted results are then merged together (joined).
Joins are always a serial operation, even though the individual table accesses can be parallel.
Now let’s create some statistics for these tables and let’s check if CBO does something else than SORT
MERGE join.
HASH JOIN
SQL> analyze table product compute statistics;
Table analyzed.
SQL> analyze table buyer compute statistics;
Table analyzed.
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------------------
Plan hash value: 2830850455
------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
------------------------------------------------------
| 0 | SELECT STATEMENT | | 25369 | 2279K|
|* 1 | HASH JOIN | | 25369 | 2279K|
| 2 | TABLE ACCESS FULL| PRODUCT | 15856 | 1254K|
| 3 | TABLE ACCESS FULL| BUYER | 159K| 1718K|
------------------------------------------------------
CBO chooses a Hash Join instead of a Sort Merge Join once the tables are analyzed and CBO has enough statistics. Hash join is a comparatively new join algorithm which is, in theory, more efficient than the other join types. In a hash join, Oracle picks the smaller table to build an in-memory hash table and a bitmap. The second row source is then hashed and checked against the hash table for matches. The bitmap is used as a quick pre-check for whether a row is present in the hash table, which is especially handy if the hash table is very large. Remember that only the cost based optimizer uses hash joins.
Also notice the full table scan (FTS) operations in the above example. These may be avoided if we create indexes on the join columns of both tables.
Index created.
Index created.
PLAN_TABLE_OUTPUT
------------------------------------------------------------------
------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25369 | 198K|
|* 1 | HASH JOIN | | 25369 | 198K|
| 2 | INDEX FAST FULL SCAN| IDX_PROD_ID | 15856 | 63424 |
| 3 | INDEX FAST FULL SCAN| IDX_BUYER_PRODID | 159K| 624K|
------------------------------------------------------------------
There is yet another kind of join, called a Nested Loop Join. In this kind of join, each record from one source is probed against all the records of the other source. The performance of a nested loop join depends heavily on the number of records returned from the first source: if the first source returns more records, there will be more probing on the second table; if it returns fewer records, the probing is cheaper and the nested loop can perform very well.
To show a nested loop, let's introduce one more table. We will just copy the product table into a new table, product_new, and then join all three:
select *
from buyer, product, product_new
where buyer.prodid=product.id
and buyer.prodid = product_new.id;
And then I checked the plan. But the plan shows a HASH JOIN condition and not a NESTED LOOP. This is, in
fact, expected because as discussed earlier hash-join is more efficient compared to other joins. But
remember hash join is only used for cost based optimizer. So if I force Oracle to use rule based optimizer,
I might be able to see nested joins. I can do that by using a query hint. Watch this,
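Reconstructed from the three-table query above and the /*+ RULE */ hint discussed below, the hinted statement would look like:

```sql
EXPLAIN PLAN FOR
SELECT /*+ RULE */ *
FROM buyer, product, product_new
WHERE buyer.prodid = product.id
AND buyer.prodid = product_new.id;
```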
Explained.
PLAN_TABLE_OUTPUT
-----------------------------------------------------------
Plan hash value: 3711554028
-----------------------------------------------------------
| Id | Operation | Name |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID | PRODUCT |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS FULL | PRODUCT_NEW |
| 5 | TABLE ACCESS BY INDEX ROWID| BUYER |
|* 6 | INDEX RANGE SCAN | IDX_BUYER_PRODID |
|* 7 | INDEX RANGE SCAN | IDX_PROD_ID |
-----------------------------------------------------------
Voila! I got nested loops! As you can see, this time I have forced Oracle to use the rule based optimizer by providing the /*+ RULE */ hint, so Oracle now has no option but to use nested loops. As is apparent from the plan, Oracle performs a full scan of product_new and index scans on the other tables: it first joins product_new with buyer by feeding each row of product_new to the index on buyer, and then probes the result set against product.
Ok, with this I will conclude this article. The main purpose of this article and the earlier one was to make
you familiar on Oracle query execution plans. Please keep all these ideas in mind because in my next
article I will show how we can use this knowledge to better tune our SQL Queries. Stay tuned.
Oracle analytic functions give us a new way of looking at the data. This article explains how we can unleash their full potential.
Analytic functions differ from aggregate functions in that they can return multiple rows for each group. The group of rows is called a window and is defined by the analytic clause. For each row, a sliding window of rows is defined; the window determines the range of rows used to perform the calculations for the current row.
AVG, CORR, COVAR_POP, COVAR_SAMP, COUNT, CUME_DIST, DENSE_RANK, FIRST, FIRST_VALUE, LAG, LAST, LAST_VALUE, LEAD, MAX, MIN, NTILE, PERCENT_RANK, PERCENTILE_CONT, PERCENTILE_DISC, RANK, RATIO_TO_REPORT, ROW_NUMBER, STDDEV, SUM and VARIANCE are among the available analytic functions.
An Example:
SELECT deptno, ename, sal,
       SUM(sal)
       OVER (PARTITION BY deptno
             ORDER BY ename) AS Running_Total,
       ROW_NUMBER()
       OVER (PARTITION BY deptno
             ORDER BY ename) AS Sequence_No
FROM emp
ORDER BY deptno, ename;
The PARTITION BY clause makes the SUM(sal) be computed within each department, independent of the other groups; the SUM(sal) is 'reset' as the department changes. The ORDER BY ename clause sorts the data within each partition and turns the SUM into a running total.
1. Query-Partition-Clause
The PARTITION BY clause logically breaks a single result set into N groups, according to the criteria set by the partition expressions. The analytic function is applied to each group independently.
2. Order-By-Clause
The ORDER BY clause specifies how the data is sorted within each group (partition). This determines the order in which rows are presented to the analytic function.
3. Windowing-Clause
The windowing clause gives us a way to define a sliding or anchored window of data, within a group, on which the analytic function will operate. This clause can be used to have the analytic function compute its value based on any arbitrary sliding or anchored window within a group. The default window is an anchored window that simply starts at the first row of the group and continues to the current row.
Let's look at an example with a sliding window within a group, computing the sum of the current row's salary plus the previous 2 rows' salaries in that group, i.e. a ROWS window clause:
SELECT deptno, ename, sal,
       SUM(sal)
       OVER (PARTITION BY deptno
             ORDER BY ename
             ROWS 2 PRECEDING) AS Sliding_Total
FROM emp
ORDER BY deptno, ename;
Now if we look at the Sliding_Total value of SMITH, it is simply SMITH's salary plus the salaries of the two preceding rows in his group.
We can set up windows based on two criteria: RANGEs of data values, or ROWS offset from the current row. It can be said that the existence of an ORDER BY in an analytic function adds a default window clause of RANGE UNBOUNDED PRECEDING, which says to use all rows in our partition that came before the current row, as ordered by the ORDER BY.
Suppose we want to find out the top 3 salaried employees of each department. The approach is to rank the employees by salary, in descending order, within each department (the partition/group), and then use a WHERE clause on the rank to get just the first three rows in each partition.
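A sketch of the two steps (RANK is assumed here; the tie-handling discussion that follows implies it):

```sql
-- Step 1: rank employees by salary within each department
SELECT deptno, ename, sal,
       RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) rnk
FROM emp;

-- Step 2: keep only the top 3 ranks per department
SELECT * FROM (
    SELECT deptno, ename, sal,
           RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) rnk
    FROM emp
)
WHERE rnk <= 3;
```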
If we look carefully at the above output, we will observe that the salaries of SCOTT and FORD of dept 20 are the same, so with RANK we are indeed missing the 3rd highest salaried employee of dept 20. Here we use the DENSE_RANK function instead to compute the rank of a row in an ordered group of rows. The ranks are consecutive integers beginning with 1: DENSE_RANK does not skip numbers and will assign the same rank to rows with the same value.
SELECT * FROM (
SELECT deptno, ename, sal, DENSE_RANK()
OVER (
PARTITION BY deptno ORDER BY sal DESC
) Rnk FROM emp
)
WHERE Rnk <= 3;
Oracle lets us move data in as well as out of the database with the help of SQL*Loader and Data Pump functionality. In the previous article on Oracle External Tables we saw the default driver, ORACLE_LOADER, used to read from external files. Now, in this article, we will learn how to push data to flat files using the access driver ORACLE_DATAPUMP.
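A typical unload with this driver looks like the following (a sketch; the directory object name ext_dir is an assumption):

```sql
-- Write the contents of EMP to emp.dmp through the ORACLE_DATAPUMP driver
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY ext_dir
    LOCATION ('emp.dmp')
)
AS SELECT * FROM emp;
```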
Here the external file emp.dmp will contain all the data of the table EMP. We can use the same file
generated here as source and then create an oracle external table to retrieve data into some other Oracle
system.
Using UTL_FILE
Another method to read and write external files from oracle is to use the Oracle supplied UTL_FILE
package.
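A minimal write through UTL_FILE might look like this (a sketch; the directory object EXT_DIR and the file name are assumptions):

```sql
DECLARE
    f UTL_FILE.FILE_TYPE;
BEGIN
    -- Open emp.txt in the EXT_DIR directory object for writing
    f := UTL_FILE.FOPEN('EXT_DIR', 'emp.txt', 'w');
    UTL_FILE.PUT_LINE(f, 'Hello from UTL_FILE');
    UTL_FILE.FCLOSE(f);
END;
/
```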
Using the SPOOL sqlplus command we can generate output files in the client machine.
For example:
We create a file in C:\External_Tables named emp_query.sql with the following saved query.
SELECT EMPNO ||',' || ENAME || ',' || SAL || ',' || COMM FROM EMP;
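Spooling that saved query to a file could then be done like this (a sketch; the file names follow the example above):

```sql
SET HEADING OFF
SET FEEDBACK OFF
SPOOL C:\External_Tables\emp.csv
@C:\External_Tables\emp_query.sql
SPOOL OFF
```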
Now let us see how we can output the table data in XML format. The following code will display the table data as XML; the easy way to save it to an .xml file is again to use the SQL*Plus SPOOL command.
Example 1:
SELECT DBMS_XMLGEN.GETXML(
'SELECT empno, ename, deptno FROM emp WHERE deptno = 10'
, 0
) FROM DUAL;
Example 2:
SELECT DBMS_XMLGEN.GETXML(
'SELECT dept.*'
||' ,CURSOR('
||' SELECT emp.*'
||' FROM emp'
||' WHERE emp.deptno = dept.deptno'
||' ) emp_list'
||' FROM dept') xmldata
FROM dual;
Database Performance Tuning
This article tries to comprehensively list down many things one needs to know for Oracle Database
Performance Tuning. The ultimate goal of this document is to provide a generic and comprehensive
guideline to Tune Oracle Databases from both programmer and administrator's standpoint.
Oracle Parser
It performs syntax analysis as well as semantic analysis of SQL statements for execution, expands views referenced in the query into separate query blocks, optimizes the statement, and builds (or locates) an executable form of it.
Hard Parse
A hard parse occurs when a SQL statement is executed, and the SQL statement is either not in the shared
pool, or it is in the shared pool but it cannot be shared. A SQL statement is not shared if the metadata for
the two SQL statements is different i.e. a SQL statement textually identical to a preexisting SQL statement,
but the tables referenced in the two statements are different, or if the optimizer environment is different.
Soft Parse
A soft parse occurs when a session attempts to execute a SQL statement and the statement is already in the shared pool and can be used (that is, shared). For a statement to be shared, all data (including metadata, such as the optimizer execution plan) of the existing SQL statement must be identical to that of the statement being issued.
Cost Based Optimizer (CBO)
It generates a set of potential execution plans for SQL statements, estimates the cost of each plan, calls
the plan generator to generate the plan, compares the costs, and then chooses the plan with the lowest
cost. This approach is used when the data dictionary has statistics for at least one of the tables accessed
by the SQL statements. The CBO is made up of the query transformer, the estimator and the plan
generator.
EXPLAIN PLAN
A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN PLAN makes the optimizer choose an execution plan and then put data
describing the plan into a database table. The combination of the steps Oracle uses to execute a DML
statement is called an execution plan. An execution plan includes an access path for each table that the
statement accesses and an ordering of the tables i.e. the join order with the appropriate join method.
Oracle Trace
Oracle utility used by Oracle Server to collect performance and resource utilization data, such as SQL
parse, execute, and fetch statistics, and wait statistics. Oracle Trace provides several SQL scripts that can be used to access the server event tables; it collects server event data, stores it in memory, and allows the data to be formatted into reports.
SQL Trace
It is a basic performance diagnostic tool to monitor and tune applications running against the Oracle
server. SQL Trace helps to understand the efficiency of the SQL statements an application runs and
generates statistics for each statement. The trace files produced by this tool are used as input for TKPROF.
TKPROF
It is also a diagnostic tool to monitor and tune applications running against the Oracle Server. TKPROF
primarily processes SQL trace output files and translates them into readable output files, providing a
summary of user-level statements and recursive SQL calls for the trace files. It can also show the efficiency of SQL statements, generate execution plans, and create SQL scripts to store the statistics in the database.
Oracle External Tables
The Oracle external tables feature allows us to access data in external sources as if it is a table in the
database. This is a very convenient and fast method to retrieve data from flat files outside Oracle
database.
External tables are read-only; no data manipulation language (DML) operations are allowed on an external table. An external table does not describe any data that is stored inside the database itself.
To create an external table in Oracle we use the same CREATE TABLE DDL, but we specify the type of the
table as external by an additional clause - ORGANIZATION EXTERNAL. Also we need to define a set of
other parameters called ACCESS PARAMETERS in order to tell Oracle the location and structure of the
source data. To understand the syntax of all this, let's start by creating an external table right away. First we will connect to the database and create a directory object for the external table.
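The directory setup might look like this (a sketch; the path and the grantee scott are assumptions):

```sql
-- Map a directory object to the OS folder holding the flat files
CREATE OR REPLACE DIRECTORY ext_dir AS 'C:\External_Tables';
GRANT READ, WRITE ON DIRECTORY ext_dir TO scott;
```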
We will start by trying to load a flat file as an external table. Suppose the flat file is named employee1.dat and contains the following:
empno,first_name,last_name,dob
1234,John,Lee,"31/12/1978"
7777,Sam,vichi,"19/03/1975"
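An external table over this file could be defined along these lines (a sketch; the directory object name is an assumption, while SKIP and LRTRIM are the clauses discussed below):

```sql
CREATE TABLE emp_ext (
    empno      NUMBER(4),
    first_name VARCHAR2(30),
    last_name  VARCHAR2(30),
    dob        DATE
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY ext_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1                        -- skip the header row
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LRTRIM
        (empno,
         first_name,
         last_name,
         dob CHAR(10) DATE_FORMAT DATE MASK "dd/mm/yyyy")
    )
    LOCATION ('employee1.dat')
)
REJECT LIMIT UNLIMITED;
```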
Now we can insert this temporary read only data to our oracle table say employee.
INSERT INTO employee (empno, first_name, last_name, dob)
SELECT empno, first_name, last_name, dob FROM emp_ext;
The SKIP no_rows clause allows you to eliminate the header of the file by skipping the first row.
The LRTRIM clause is used to trim leading and trailing blanks from fields.
The SKIP clause skips the specified number of records in the datafile before loading. SKIP can be used only when the external table is accessed serially, not in parallel.
The READSIZE clause puts an upper limit on the size of the largest record the access driver can handle. The size is specified as an integer number of bytes. The default value is 512KB (524288 bytes). You must specify a larger value if any of the records in the datafile are larger than 512KB.
The LOGFILE clause names the file that contains messages generated by the external tables utility
while it was accessing data in the datafile. If a log file already exists by the same name, the access
driver reopens that log file and appends new log information to the end. This is different from bad
files and discard files, which overwrite any existing file. NOLOGFILE is used to prevent creation of a
log file. If you specify LOGFILE, you must specify a filename or you will receive an error. If neither LOGFILE nor NOLOGFILE is specified, the default is to create a log file, with a name derived from the table name.
The BADFILE clause names the file to which records are written when they cannot be loaded
because of errors. For example, a record was written to the bad file because a field in the datafile
could not be converted to the datatype of a column in the external table. Records that fail the
LOAD WHEN clause are not written to the bad file but are written to the discard file instead. The
purpose of the bad file is to have one file where all rejected data can be examined and fixed so that
it can be loaded. If you do not intend to fix the data, then you can use the NOBADFILE option to
prevent creation of a bad file, even if there are bad records. If you specify BADFILE, you must
specify a filename or you will receive an error. If neither BADFILE nor NOBADFILE is specified, the default is to create a bad file if at least one record is rejected, with a name derived from the table name.
With external tables, if the SEQUENCE parameter is used, rejected rows do not update the sequence
number value. For example, suppose we have to load 5 rows with sequence numbers beginning
with 1 and incrementing by 1. If rows 2 and 4 are rejected, the successfully loaded rows are
assigned the sequence numbers 1, 2, and 3.
An external table describes how the external table layer must present the data to the server. The access
driver and the external table layer transform the data in the datafile to match the external table definition.
The access driver runs inside of the database server hence the server must have access to any files to be
loaded by the access driver. The server will write the log file, bad file, and discard file created by the
access driver. The access driver does not allow arbitrary file paths to be specified; instead, we have to specify directory objects as the locations from which it will read the datafiles and to which it will write the logfiles. A directory object maps a name to a directory path on the file system.
Directory objects can be created by DBAs or by any user with the CREATE ANY DIRECTORY privilege. After a directory is created, the creating user needs to grant READ or WRITE permission on it to the users who will use the external table.
Notes
1. If we do not specify the type for the external table, then the ORACLE_LOADER type is used as a
default.
2. Using the PARALLEL clause while creating the external table enables parallel processing on the
datafiles. The access driver then attempts to divide large datafiles into chunks that can be
processed separately and parallely. With external table loads, there is only one bad file and one
discard file for all input datafiles. If parallel access drivers are used for the external table load, each
access driver has its own bad file and discard file.
3. We can change the target datafile name with an ALTER TABLE ... LOCATION DDL command.
4. The SYS views for Oracle External Tables are dba_external_tables, all_external_tables and user_external_tables.
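The statement referenced in note 3 can be sketched as (table and file names are illustrative):

```sql
-- Point the external table at a different datafile
ALTER TABLE emp_ext LOCATION ('employee2.dat');
```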
We need to touch two major areas here: first the server architecture, where we will learn about the memory and process structures, and then the storage architecture.
Let’s first understand the difference between Oracle database and Oracle Instance.
An Oracle database is a group of files that reside on disk and store the data, whereas an Oracle instance is a piece of shared memory and a number of processes that allow the information in the database to be accessed.
Database           Instance
Control File       Shared Memory (SGA)
Online Redo Log    Processes
Data File
Temp File
Now let's learn some details of both Database and Oracle Instance.
Oracle Database
Control File    Contains information that defines the rest of the database, like the names, locations and types of the other files.
Redo Log file   Keeps track of the changes made to the database.
Data file       All user data and meta data are stored in data files.
Temp file       Stores the temporary information that is often generated when sorts are performed.
Each file has a header block that contains metadata about the file, like the SCN (System Change Number), which records when the data stored in the buffer cache was flushed down to disk. This SCN information is important for recovery.
Oracle Instance
This is comprised of a shared memory segment (SGA) and a few processes, including the following components.
Shared Pool              Contains various structures for running SQL and dependency tracking, such as the Shared SQL Area.
Database Buffer Cache    Contains the data blocks that are read from the database for transactions.
LGWR (Log Writer)        Writes redo log entries to disk.
Here we will learn about both physical and logical storage structure. Physical storage is how Oracle stores
the data physically in the system. Whereas logical storage talks about how an end user actually accesses
that data.
Physically Oracle stores everything in file, called data files. Whereas an end user accesses that data in
terms of accessing the RDBMS tables, which is the logical part. Let's see the details of these structures.
Physical storage space is comprised of different datafiles, which contain data segments. Each segment can contain multiple extents, and each extent contains blocks, the most granular storage structure. The relationship among segments, extents and blocks is shown below.
Data Files contain Segments, which contain Extents, which in turn contain Blocks.