
DISTRIBUTED SYSTEM MODELS AND ENABLING TECHNOLOGIES
By:
Megha. V
M15MC31

OUTLINE

Scalable Computing over the Internet
The Age of Internet Computing
Scalable Computing Trends and New Paradigms

SCALABLE COMPUTING OVER THE INTERNET

Cloud computing refers to the delivery of scalable IT resources over the Internet, as opposed to hosting and operating those resources locally. Cloud computing enables a company to react faster to the needs of its business while driving greater operational efficiency.

THE AGE OF INTERNET COMPUTING


Identity in the Age of Cloud Computing: The next-generation Internet's impact on business, governance and social interaction examines the migration of information, software, and identity into the cloud, and explores the transformative possibilities of this new computing paradigm for culture, commerce, and personal communication.

ADVANCED NETWORK-BASED COMPUTING AND WEB SERVICES WITH EMERGING NEW TECHNOLOGIES
1. The Platform Evolution
2. High-Performance Computing
3. High-Throughput Computing
4. Three New Computing Paradigms
5. Computing Paradigm Distinctions
6. Distributed System Families

THE PLATFORM EVOLUTION

HIGH-PERFORMANCE COMPUTING

High-performance computing (HPC) is the use of parallel processing to run advanced application programs efficiently, reliably, and quickly. The term applies especially to systems that operate above a teraflop, or 10^12 floating-point operations per second.
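As a rough illustration of the teraflop scale, peak performance can be estimated as cores × clock rate × floating-point operations per cycle. All the figures below are hypothetical, not from the slides:

```python
# Hypothetical machine: 1,000 cores at 2.5 GHz, 4 FLOPs per cycle per core.
# These numbers are illustrative only.
cores = 1_000
clock_hz = 2.5e9
flops_per_cycle = 4

peak_flops = cores * clock_hz * flops_per_cycle
print(peak_flops / 1e12, "teraflops")  # 10.0 teraflops
```

By this estimate, such a machine would sit comfortably above the teraflop threshold mentioned above.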

HIGH-THROUGHPUT COMPUTING
High-throughput computing (HTC) is not primarily concerned with speeding up individual programs; rather, it allows many copies of the same program to run at the same time, in parallel or concurrently. Running multiple copies of exactly the same program on the same input would be a fairly pointless exercise; the power of HTC lies in its ability to feed different data to each program copy.
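The HTC pattern can be sketched as the same function run over many independent inputs via a worker pool. The pool size and the `run_copy` function below are illustrative stand-ins; a real HTC system (HTCondor, for example) schedules whole jobs across many machines rather than threads in one process:

```python
from concurrent.futures import ThreadPoolExecutor

def run_copy(data):
    # Stand-in for one copy of the "same program", given its own input data.
    return data * data

inputs = list(range(8))  # a different input for each copy
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_copy, inputs))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Each copy is independent, so throughput scales with the number of workers rather than with the speed of any single copy.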

THREE NEW COMPUTING PARADIGMS


The rationalist paradigm, common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their "correctness" by means of deductive reasoning.
The technocratic paradigm, promulgated mainly by software engineers and now dominant in much of the discipline, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically, using test suites.

CONT.
The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science and takes programs to be entities on a par with mental processes.

COMPUTING PARADIGM DISTINCTIONS
o Centralized computing
o Parallel computing
o Distributed computing
o Cloud computing

DISTRIBUTED SYSTEM FAMILIES


Efficiency measures the utilization rate of resources in an
execution model by exploiting massive parallelism in HPC. For
HTC, efficiency is more closely related to job throughput, data
access, storage, and power efficiency.
Flexibility in application deployment measures the ability of
distributed systems to run well in both HPC (science and
engineering) and HTC (business) applications.
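The HPC efficiency notion above is commonly quantified as speedup divided by processor count. A worked example with hypothetical timings (not from the slides):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Efficiency = speedup / processors; 1.0 means perfect scaling."""
    speedup = t_serial / t_parallel
    return speedup / n_procs

# Hypothetical job: 100 s serially, 8 s on 16 processors.
eff = parallel_efficiency(100.0, 8.0, 16)
print(eff)  # 0.78125
```

An efficiency below 1.0 reflects the resources lost to communication, load imbalance, and serial sections; the HTC analogue would instead track jobs completed per unit time.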

CONT.
Dependability measures the reliability and self-management from the chip to the system and application levels. The purpose is to provide high-throughput service with Quality of Service (QoS) assurance, even under failure conditions.
Adaptation in the programming model measures the ability to support billions of job requests over massive data sets and virtualized cloud resources under various workload and service models.

SCALABLE COMPUTING TRENDS AND NEW PARADIGMS
1) Degrees of Parallelism
2) Innovative Applications
3) The Trend Toward Utility Computing
4) The Hype Cycle of New Technologies

DEGREES OF PARALLELISM

Data-level parallelism (DLP) was made popular through SIMD (single instruction, multiple data) and vector machines, which use vector or array types of instructions. DLP requires even more hardware support and compiler assistance to work properly. Since the introduction of multicore processors and chip multiprocessors (CMPs), the field has also been exploiting task-level parallelism (TLP).
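A toy contrast between the scalar and data-parallel styles, in pure Python. Python itself does not emit SIMD instructions; this only sketches the programming model, in which one logical operation applies across a whole array:

```python
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# Scalar style: one element per step.
c_scalar = []
for i in range(len(a)):
    c_scalar.append(a[i] + b[i])

# Data-parallel style: one whole-array expression, the form that
# SIMD/vector hardware (or libraries such as NumPy) executes in lockstep.
c_vector = [x + y for x, y in zip(a, b)]

assert c_scalar == c_vector == [11.0, 22.0, 33.0, 44.0]
```

On real vector hardware the second form maps to a single instruction applied to all lanes at once, which is why DLP needs the hardware and compiler support mentioned above.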


THE TREND TOWARD UTILITY COMPUTING

THE HYPE CYCLE OF NEW TECHNOLOGIES
