
DATA STRUCTURE

Definitions
A data structure is a way of organizing data items that considers not only the elements stored but also their relationships to each other.
A logical and mathematical model of a particular organization of data items is called a data structure.

Types of Data Structure


1) Primitive data structure: These are the basic structures and are directly operated upon by machine instructions.

Integer

Float

Character

Pointer

Types of Data Structure


2) Non-primitive data structure: These are derived from the primitive data structures and emphasize the structuring of a group of homogeneous or heterogeneous data items.

Array

Structure/Union

List

Types of Data Structure


A list can be further divided into two kinds:
1) Linear list: Every item is related to its previous and next item. Data is arranged in a linear sequence, and the data items can be traversed in a single run, e.g. array, stack, linked list, queue. Its implementation is easy.
2) Non-linear list: Every item is attached to many other items. Data is not arranged in sequence and cannot be traversed in a single run, e.g. tree, graph. Its implementation is difficult.

Array
An array can be defined as a finite set of homogeneous elements or data items; that is, an array can contain only one type of data.
Array elements are stored at consecutive memory locations, and arrays are static by nature: when the user declares an array, it occupies that memory whether or not the user makes use of all of it.
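
As a small illustration (not from the original slides; the size and values are arbitrary), here is a minimal C sketch of a statically allocated array:

#include <stdio.h>

int main(void) {
    /* Static allocation: the array occupies 5 consecutive
       int-sized memory locations, whether or not all are used. */
    int a[5] = {10, 20, 30, 40, 50};

    /* Any element can be accessed directly by its index. */
    for (int i = 0; i < 5; i++)
        printf("a[%d] = %d\n", i, a[i]);

    return 0;
}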

Linked List
A linked list can be defined as a collection of a variable number of data items. An element of a list must contain at least two fields: one for storing data or information, and the other for storing the address of the next element. Each such element is referred to as a node, so a list can be defined as a collection of nodes.

A linked list a1, a2, ..., an is shown in the figure: the header node contains the address of the first node; each node has two parts, in which the data and the address of the next node are stored separately. By traversing these links the user can reach the other nodes. The last node contains NULL in its address part.
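
A minimal C sketch of such a node and a traversal follows (our own illustration; the names node and traverse are not from the slides):

#include <stdio.h>

/* Each node holds data plus the address of the next node. */
struct node {
    int data;
    struct node *next;   /* NULL in the last node */
};

/* Follow the address fields from the first node to the last. */
void traverse(struct node *head) {
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
}

int main(void) {
    struct node a3 = {3, NULL};   /* last node: NULL address part */
    struct node a2 = {2, &a3};
    struct node a1 = {1, &a2};
    traverse(&a1);                /* prints: 1 2 3 */
    return 0;
}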

Stack
A Stack is an ordered collection of objects that are
inserted and removed according to the last-in first-out
(LIFO) principle.
Insertion of an element into the stack is called Push, and deletion of an element from the stack is called Pop. Insertion and deletion of elements can be done only from one end, called TOS (top of stack).

Stack
Example sequence: Push(2), then Push(10); a Pop then removes 10, and the next Pop removes 2 (last in, first out).
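
A C sketch of a fixed-size array stack reproducing the sequence above (the capacity and names are our own choices):

#include <stdio.h>

#define MAX 100

int stack[MAX];
int top = -1;              /* TOS: index of the top element; -1 means empty */

void push(int x) {
    if (top == MAX - 1) { printf("Overflow\n"); return; }
    stack[++top] = x;      /* insert at the top end only */
}

int pop(void) {
    if (top == -1) { printf("Underflow\n"); return -1; }
    return stack[top--];   /* delete from the same end: LIFO */
}

int main(void) {
    push(2);
    push(10);
    printf("%d\n", pop());  /* 10: last in, first out */
    printf("%d\n", pop());  /* 2 */
    return 0;
}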

Queue
A queue is an ordered collection of elements which works on the FIFO (first-in first-out) principle.
Elements can be inserted into a queue at one end, called REAR, and deleted from the other end, called FRONT.
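
A simple (non-circular) array queue in C, as one possible sketch of the idea; the names and fixed capacity are illustrative only:

#include <stdio.h>

#define MAX 100

int queue[MAX];
int front = 0, rear = 0;    /* rear: next free slot; front: oldest element */

void enqueue(int x) {
    if (rear == MAX) { printf("Queue full\n"); return; }
    queue[rear++] = x;      /* insert at REAR */
}

int dequeue(void) {
    if (front == rear) { printf("Queue empty\n"); return -1; }
    return queue[front++];  /* delete from FRONT: FIFO */
}

int main(void) {
    enqueue(1);
    enqueue(2);
    printf("%d\n", dequeue());  /* 1: first in, first out */
    printf("%d\n", dequeue());  /* 2 */
    return 0;
}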

Tree
A tree can be defined as a finite set of data items. A tree is a non-linear type of data structure in which data items are arranged or stored in a hierarchical relationship.
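
As an illustration of the hierarchical relationship (not from the slides), a binary tree node in C can hold data plus links down to its children:

#include <stdio.h>

/* Each node links down to child nodes; NULL means no child. */
struct tnode {
    int data;
    struct tnode *left, *right;
};

int main(void) {
    struct tnode leaf1 = {2, NULL, NULL};
    struct tnode leaf2 = {3, NULL, NULL};
    struct tnode root  = {1, &leaf1, &leaf2};   /* top of the hierarchy */
    printf("root %d -> children %d, %d\n",
           root.data, root.left->data, root.right->data);
    return 0;
}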

Graph
A graph G(V, E) is a set of vertices V and a set of edges E. An edge connects a pair of vertices and may have a weight such as a length, cost, or some other measure associated with it.
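
One common way to store such a graph (our own sketch, not prescribed by the slides) is a weighted adjacency matrix, where entry w[i][j] records the weight of the edge between vertices i and j:

#include <stdio.h>

#define V 3   /* number of vertices */

int main(void) {
    /* w[i][j] holds the weight of edge (i, j); 0 means no edge. */
    int w[V][V] = {
        {0, 5, 2},
        {5, 0, 0},
        {2, 0, 0},
    };
    for (int i = 0; i < V; i++)
        for (int j = i + 1; j < V; j++)
            if (w[i][j] != 0)
                printf("edge (%d,%d) has weight %d\n", i, j, w[i][j]);
    return 0;
}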

Memory Allocation
There are two types of memory allocation:
1) Compile-time or static allocation, e.g.
int x, y;
2) Run-time or dynamic allocation:
1) malloc() - malloc(no. of elements * size of each element)
2) calloc() - calloc(no. of elements, size of each element)
3) realloc() - realloc(ptr_var, new_size)
4) free() - free(ptr_var)
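
The standard C library calls listed above can be exercised as in the following sketch (the sizes are arbitrary and error handling is kept minimal):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 5;

    /* malloc: no. of elements * size of each element, uninitialized */
    int *a = malloc(n * sizeof(int));

    /* calloc: (no. of elements, size of each element), zero-initialized */
    int *b = calloc(n, sizeof(int));

    if (a == NULL || b == NULL) return 1;   /* allocation can fail */

    /* realloc: resize a previously allocated block */
    int *tmp = realloc(a, 2 * n * sizeof(int));
    if (tmp == NULL) { free(a); free(b); return 1; }
    a = tmp;

    free(a);   /* free: release memory back to the system */
    free(b);
    return 0;
}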

Data Structure Operations


Traversing - accessing each record to process that item.
Searching - finding the location of a record with a given key value, or of records that satisfy a condition.
Inserting - adding a new record to the structure.
Deleting - removing a record from the structure.
Sorting - arranging records in logical order.
Merging - combining records from two sorted files into one file.

Algorithm
An algorithm is a well-defined computational procedure of finite steps which takes some values as input and produces some value as output.
In other words, an algorithm is a finite sequence of computational steps that transforms the input into the output.

Complexity of Algorithms
Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. We define complexity as a numerical function T(n): time versus the input size n. We want to describe the time taken by an algorithm without depending on implementation details, so we estimate the efficiency of each algorithm asymptotically. We measure the time T(n) as the number of elementary "steps" (defined in any convenient way), provided each such step takes constant time.

- Time Complexity: the running time of the program as a function of the size of the input.

- Space Complexity: the amount of computer memory required during the program's execution, as a function of the input size.

Asymptotic Notation
When we look at input sizes, large enough to
make only the order of growth of the running
time relevant, we are studying the asymptotic
efficiency of algorithms.
Three notations:
Big-Oh (O) notation
Big-Omega (Ω) notation
Big-Theta (Θ) notation

Asymptotic notations

O notation (upper bound): This notation gives the upper bound for a function to within a constant factor. It is written as
f(n) = O(g(n))
and means that as n (the input size) gets larger, f (the running time of the algorithm) grows no faster than g: there exist positive constants C and n0 such that to the right of n0, the value of f(n) always lies on or below C·g(n). Big-Oh is the most useful notation because it represents the worst-case behavior of an algorithm.

Θ notation (tight bound): This notation bounds a function to within constant factors. It is written as
f(n) = Θ(g(n))
and means there exist positive constants n0, C1, and C2 such that to the right of n0, the value of f(n) always lies between C1·g(n) and C2·g(n) inclusive.

Ω notation (lower bound): This notation gives a lower bound for a function to within a constant factor. It is written as
f(n) = Ω(g(n))
and means there exist positive constants n0 and C such that to the right of n0, the value of f(n) always lies on or above C·g(n).

Categories of algorithm
Seven functions that often appear in
algorithm analysis:

Constant      1
Logarithmic   log n
Linear        n
Log-Linear    n log n
Quadratic     n^2
Cubic         n^3
Exponential   2^n

The Constant Function

Constant Time: O(1)
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods

The Linear Function

Linear Time: O(n)
An algorithm is said to run in linear time if its execution time is directly proportional to the input size, i.e. time grows linearly as the input size increases.
Examples:
array: linear search, traversing, find min
array list: contains method
queue: contains method

Logarithmic Time: O(log n)

An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size.
Example:
binary search (a C sketch follows below)
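
Here is one way binary search might look in C (a sketch that assumes a sorted array; the function name bsearch_int is our own):

#include <stdio.h>

/* Each comparison halves the remaining search range,
   so the running time is O(log n). Requires a sorted array. */
int bsearch_int(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;       /* discard the lower half */
        else
            hi = mid - 1;       /* discard the upper half */
    }
    return -1;                  /* key not present */
}

int main(void) {
    int a[] = {2, 5, 8, 12, 16, 23};
    printf("%d\n", bsearch_int(a, 6, 12));   /* prints 3 (index of 12) */
    return 0;
}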
Quadratic Time: O(n^2)
An algorithm is said to run in quadratic time if its execution time is proportional to the square of the input size.
Examples:
bubble sort, selection sort, insertion sort

The Cubic Function and Other Polynomials

Cubic function:
f(n) = n^3

Polynomials:
f(n) = a0 + a1 n + a2 n^2 + ... + ad n^d
where d is the degree of the polynomial and a0, a1, ..., ad are called the coefficients.

The N-Log-N Function

f(n) = n log n
This function grows a little faster than the linear function and a lot slower than the quadratic function.

Example
Algorithm Mystery(n)             # operations
  sum ← 0                        1
  for i ← 0 to n - 1 do          n + 1
    for j ← 0 to n - 1 do        n(n + 1)
      sum ← sum + 1              n · n

Total number of steps: 2n^2 + 2n + 2
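
A direct C translation (illustrative) makes the quadratic count visible: the innermost statement runs n * n times.

#include <stdio.h>

/* Nested loops: the inner statement executes n * n times,
   which is why the total step count is on the order of n^2. */
int mystery(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum = sum + 1;
    return sum;    /* equals n * n */
}

int main(void) {
    printf("%d\n", mystery(4));   /* prints 16 */
    return 0;
}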

Example: Find the sum of array elements

Algorithm arraySum(A, n)         # operations
  Input: array A of n integers
  Output: sum of the elements of A
  sum ← 0                        1
  for i ← 0 to n - 1 do          n + 1
    sum ← sum + A[i]             n
  return sum                     1

Input size: n (number of array elements)
Total number of steps: 2n + 3
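
The same algorithm in C (a minimal sketch; the driver values are arbitrary):

#include <stdio.h>

/* One pass over the array: roughly 2n + 3 elementary steps,
   i.e. the running time grows linearly with n. */
int arraySum(const int A[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

int main(void) {
    int A[] = {4, 7, 1, 3};
    printf("%d\n", arraySum(A, 4));   /* prints 15 */
    return 0;
}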

Example: Find the max element of an array

Algorithm arrayMax(A, n)         # operations
  Input: array A of n integers
  Output: maximum element of A
  currentMax ← A[0]              1
  for i ← 1 to n - 1 do          n
    if A[i] > currentMax then    n - 1
      currentMax ← A[i]          n - 1
  return currentMax              1

Input size: n (number of array elements)
Total number of steps: 3n
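
And a C version of arrayMax (again an illustrative sketch):

#include <stdio.h>

/* A single linear scan, about 3n elementary steps in total. */
int arrayMax(const int A[], int n) {
    int currentMax = A[0];
    for (int i = 1; i < n; i++)
        if (A[i] > currentMax)
            currentMax = A[i];
    return currentMax;
}

int main(void) {
    int A[] = {4, 9, 1, 7};
    printf("%d\n", arrayMax(A, 4));   /* prints 9 */
    return 0;
}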

Space complexity
Space complexity is a function describing the amount of
memory (space) an algorithm takes in terms of the amount of
input to the algorithm. We often speak of "extra" memory
needed, not counting the memory needed to store the input
itself. Again, we use natural (but fixed-length) units to
measure this. We can use bytes, but it's easier to use, say,
number of integers used, number of fixed-sized structures,
etc. In the end, the function we come up with will be
independent of the actual number of bytes needed to
represent the unit. Space complexity is sometimes ignored
because the space used is minimal and/or obvious, but
sometimes it becomes as important an issue as time.

Space complexity
For example, we might say "this algorithm takes n^2 time,"
where n is the number of items in the input. Or we might say
"this algorithm takes constant extra space," because the
amount of extra memory needed doesn't vary with the
number of items processed. For both time and space, we are
interested in the asymptotic complexity of the algorithm:
When n (the number of items of input) goes to infinity, what
happens to the performance of the algorithm?
