
Can you compare and contrast two-tier and three-tier distributed systems as they relate to information security?
In a two-tier application, a thick client communicates directly with the data store -- the
application logic runs within the thick client. Think Lotus Notes or old PowerBuilder
applications. This is the original architecture that drove "client-server" back in the early '90s.

Three-tier systems add a middle tier to provide much of that application logic. So you are, in
effect, separating the application logic from the presentation, which can now run within a thin
client, like a Web browser. This is the dominant application type nowadays.
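
To make that contrast concrete, here is a minimal sketch in Python of where the logic lives in each model; the order_lines table and the 10% discount rule are hypothetical, invented for illustration. In the two-tier version the fat client owns both the SQL and the business rule; in the three-tier version the thin client only calls the middle tier.

    import sqlite3

    # Two-tier: the fat client owns both the SQL and the business rule.
    def two_tier_order_total(db_path, order_id):
        conn = sqlite3.connect(db_path)   # client talks straight to the data store
        rows = conn.execute(
            "SELECT qty, unit_price FROM order_lines WHERE order_id = ?",
            (order_id,)).fetchall()
        conn.close()
        total = sum(qty * price for qty, price in rows)
        return total * 0.9 if total > 1000 else total  # rule copied to every desktop

    # Three-tier: the rule lives once, in the middle tier; a real thin client
    # would reach it over HTTP/RPC rather than this in-process call.
    class OrderService:
        def __init__(self, db_path):
            self.db_path = db_path

        def order_total(self, order_id):
            conn = sqlite3.connect(self.db_path)
            rows = conn.execute(
                "SELECT qty, unit_price FROM order_lines WHERE order_id = ?",
                (order_id,)).fetchall()
            conn.close()
            total = sum(qty * price for qty, price in rows)
            return total * 0.9 if total > 1000 else total  # one place to patch

    def thin_client_total(service, order_id):
        return service.order_total(order_id)  # presentation only: no SQL, no rules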

Of course, the pendulum always swings back and forth and now we are seeing hybrid models,
which include technologies like AJAX, to add more functionality within the browser to mimic
the capabilities achieved with fat-client applications. Is that muddled enough?

Relative to information security, a three-tier environment tends to be easier to control because
the application servers (the middle tier) are centralized and can be more easily managed. To put
some numbers behind that statement, let's say a vulnerability is discovered in an application. In
a three-tier model, maybe 100 application servers will need to be patched. If you have fat clients all over
the place, maybe 10,000 patches will be needed to apply the fix.
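
As a back-of-envelope illustration of those numbers (the per-machine effort figure is an assumption, purely for scale):

    app_servers = 100          # three-tier: logic is centralized
    fat_clients = 10_000       # two-tier: logic duplicated on every desktop
    minutes_per_patch = 15     # assumed effort per machine (illustrative)

    three_tier_hours = app_servers * minutes_per_patch / 60   # 25 hours
    two_tier_hours = fat_clients * minutes_per_patch / 60     # 2,500 hours
    print(f"{two_tier_hours / three_tier_hours:.0f}x the patching work")  # 100x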

The blocking and tackling of securing applications is similar under both architectures. The application
and the data need to be protected, so making sure there aren't vulnerabilities in your application
code is important. Also make sure only authorized parties are accessing the data in the database.

Given the overarching regulatory environment, it's important not only to monitor what's
happening within applications, but also to store log data and make sure you can recover from
an attack.
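
A minimal sketch of what that can look like at the application tier -- structured, append-only audit records that can be shipped to central storage for retention and post-incident review. The field names are illustrative, not from any particular standard:

    import json, time

    def audit(log_path, user, action, resource, allowed):
        record = {
            "ts": time.time(),     # when it happened
            "user": user,          # who did it
            "action": action,      # what they attempted
            "resource": resource,  # what they touched
            "allowed": allowed,    # whether access control permitted it
        }
        with open(log_path, "a") as f:           # append-only by convention
            f.write(json.dumps(record) + "\n")   # one JSON object per line

    audit("audit.log", "alice", "SELECT", "customers", allowed=True)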

The bottom line is that there are lots of reasons why three-tier architecture is prevalent now.
Security is not really one of them, but security does benefit from this trend.

SearchSecurity.techtarget.com, "Two-tier distributed systems vs. three-tier distributed systems", 26.04.2011

Introduction to 2-Tier Architecture

The term 2-tier architecture describes client/server systems in which the client requests resources
and the server responds directly to the request, using its own resources. This means that the
server does not call on another application in order to provide part of the service.

Introduction to 3-Tier Architecture

In 3-tier architecture, there is an intermediary level, meaning the architecture is generally split up
between the following (a minimal request-flow sketch follows the list):

1. The client, i.e. the computer that requests the resources, equipped with a user interface (usually
a web browser) for presentation purposes
2. The application server (also called middleware), whose task is to provide the requested
resources by calling on another server
3. The data server, which provides the application server with the data it requires
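
Here is that request flow in miniature, using in-process Python functions as stand-ins for what would be separate machines; the account data and field names are hypothetical:

    ACCOUNTS = {"42": {"name": "Ada", "balance": 120.0}}   # hypothetical data

    def data_server(account_id):
        """Level 3: only knows how to serve data."""
        return ACCOUNTS.get(account_id)

    def application_server(account_id):
        """Level 2 (middleware): fulfils the request by calling the data server."""
        account = data_server(account_id)
        if account is None:
            return "account not found"
        status = "in credit" if account["balance"] >= 0 else "overdrawn"
        return f"{account['name']}: {status}"   # business logic lives here

    def client(account_id):
        """Level 1: presentation only; renders what the middleware returns."""
        print(application_server(account_id))

    client("42")   # -> Ada: in credit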
The widespread use of the term 3-tier architecture also denotes the following architectures:

 Application sharing between a client, middleware and enterprise server
 Application sharing between a client, application server and enterprise database server.

Comparing both types of architecture

2-tier architecture is therefore a client-server architecture where the server is versatile, i.e. it is
capable of directly responding to all of the client's resource requests.

In 3-tier architecture, however, the server-level applications are remote from one another, i.e.
each server is specialised for a certain task (for example: web server/database server). 3-tier
architecture provides:

 A greater degree of flexibility
 Increased security, as security can be defined for each service and at each level
 Increased performance, as tasks are shared between servers

Multi-Tiered Architecture

In 3-tier architecture, each server (tiers 2 and 3) performs a specialised task (a service). A server
can therefore use services from other servers in order to provide its own service. As a result, 3-
tier architecture is potentially an n-tier architecture.
en.kioskea.net
While Two-tier Client/Server (C/S) architectures are the most widely implemented today, their
drawbacks have been widely publicized as more organizations grapple with the difference
between expectations and reality.

Today, the two-tiered system is seen as increasingly obsolete, or even viewed as an impediment
to a more open, efficient and reliable computing environment.

Cross-Platform Integration

In the typical two-tier arrangement, a Windows/GUI-based PC handles presentation and
application activities at the client level, while the server provides access to the database. This
arrangement seemed ideal for a time.

However, as enterprises added new platforms, operating systems and languages, and as the
number and complexity of stored procedures multiplied, the inherent limitations of two-tiered
architectures were revealed.

The Problems

Perhaps the most obvious problem with the two-tier arrangement is the need to store and manage
increasingly massive volumes of application code and logic at each client location.

This means that in a large retail organization, for example, when a decision is made to change a
"compute cost" algorithm, the code must be rewritten and this new logic dispersed to affected
client locations throughout the entire enterprise.

With many businesses now using hundreds or thousands of client workstations to process
business data, the cost implications are enormous.

By placing the application workload at the client level, the two-tier architecture requires a major
and ongoing investment in technology, software, and data updates. That's the downside of just
distributing computers.
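
To illustrate with the "compute cost" example above (the 20% markup below is a made-up rule): in the two-tier model the function is compiled into every client, so changing the markup means redistributing code to thousands of desktops; with the logic server-side, the same change is a single deployment.

    # Two-tier: duplicated into every client binary.
    def compute_cost_fat_client(wholesale):
        return wholesale * 1.20        # 20% markup, copied everywhere

    # Server-side: the rule lives once, on the application server.
    class PricingService:
        MARKUP = 1.20                  # change here; every client sees it

        def compute_cost(self, wholesale):
            return wholesale * self.MARKUP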

Standards... What Standards?


Language is another serious drawback when attempting to serve a varied client base from a two-
tiered system. Most client-based applications are written in Visual Basic, Delphi, C++,
PowerBuilder or GUPTA -- languages which cannot be utilized by mainframe, UNIX or many
other client stations.

As a result, critical application logic is embedded at far-flung client locations and is written in
languages that make it all but useless to others in the enterprise.

Companies which operate mixed mainframe-and-UNIX systems confront additional problems
under a two-tier architecture. Performance and security requirements often prevent two-tier
access to mainframe data sources.

In addition, many mainframe data sources are non-relational and require expensive gateway
products to access the data in a relational manner. As a result, most mainframe shops keep a copy
of the required data on the UNIX systems.

To keep the UNIX box in sync with the mainframe, data managers keep two copies of the data
on the mainframe: a "before" image and an "after" image.

Each day the data managers perform a sort/merge operation on the two data sources and write
out a log of add, change, and delete records. These records are then transferred to the UNIX box,
where yet another program applies the changes.
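
A minimal sketch of that nightly cycle: diff the "before" and "after" images (keyed by record id) into add/change/delete records, then apply the log to the remote copy. The toy records below are purely illustrative.

    def diff_images(before, after):
        """Produce the add/change/delete log from two snapshots."""
        log = []
        for key, row in after.items():
            if key not in before:
                log.append(("add", key, row))
            elif before[key] != row:
                log.append(("change", key, row))
        log += [("delete", key, None) for key in before if key not in after]
        return log

    def apply_log(target, log):
        """What the program on the UNIX box does with the transferred log."""
        for op, key, row in log:
            if op == "delete":
                target.pop(key, None)
            else:                      # add or change
                target[key] = row

    before = {1: "acct A", 2: "acct B"}
    after = {1: "acct A", 2: "acct B v2", 3: "acct C"}
    unix_copy = dict(before)
    apply_log(unix_copy, diff_images(before, after))
    assert unix_copy == after          # remote copy now matches the mainframe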

As IS directors know all too well, these nightly batch jobs can consume staggering volumes of
system capacity. In data-intense industries such as banking and retail operations, these updates
can easily involve from 20,000 to 30,000 changes per night per table -- and can total upwards of
200,000 items per crunch.

In fact, it is estimated that in many transaction-oriented companies, fully 25% of all mainframe
CPUs are dedicated exclusively to sort/merge operations. Of course, this process also
monopolizes vast amounts of system memory and it takes a considerable communications punch
to push these monster files across the network.

These operations are vulnerable to failure at several points -- but those risks must be run every
time these massive files are captured, processed and transferred.

Old architecture and old technology are the underlying causes of both a legacy system held
together by baling wire and an inadequate two-tier client/server implementation.

If an organization wants to make fundamental changes in its computing systems, it must change
the architecture. The right architecture can accommodate business changes, whether that means
more users or new business rules.

Similarly, the right architecture is key to today's client/server computing systems. For most
computing environments, the right choice is an N-Tier client/server architecture.

Misconceptions of Two-Tier Client/Server Architectures.


Organizations and systems integrators alike have preconceived notions about a two-tier
client/server architecture. Once a system is in place and its performance does not match
expectations or needs, the organization realizes that these preconceived ideas simply do not
match the reality of the situation.

Most organizations hold several misconceptions about the performance of two-tier client/server
architectures.

The first misconception is that because clients are easy to use, a Client/Server system is easy to
design and implement.

In actuality, the easier a client is to use, the more difficult the client/server architecture is to
develop.

A second common misconception is to assume that a fast network will not experience
bottlenecks.

In reality, as more and more clients access the server, their aggregate demand eventually
saturates the server and creates contention that degrades response times. This happens
regardless of the bandwidth of the network.
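
A stylized single-queue (M/M/1) model makes that point numerically: as client demand approaches the server's service capacity, mean response time explodes, whatever the link speed. All the numbers below are illustrative assumptions.

    service_rate = 200.0                    # requests/sec the server can absorb

    for clients in (50, 100, 150, 190, 199):
        arrival_rate = clients * 1.0        # assume 1 request/sec per client
        utilization = arrival_rate / service_rate
        response_ms = 1000.0 / (service_rate - arrival_rate)  # M/M/1: 1/(mu - lambda)
        print(f"{clients:>4} clients: {utilization:.1%} utilized, "
              f"~{response_ms:.0f} ms mean response")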

Stored Procedures

A third misconception about two-tier Client/Server architectures concerns utilizing stored
procedures to solve the "Fat Client" dilemma.

Stored procedures are instruction sets provided by relational database vendors. They are
intended to help clients handle business logic and data integrity functions.

Many vendors' stored procedures are not robust enough for large applications, and even the best
of them is not a full-featured development environment.

While stored procedures work well in limited applications, they are not designed to deal with
large or complex programs.

By using stored procedures to solve fat-client problems, the organization forgoes modern
development tools such as object-oriented programming, RAD environments and interactive
full-screen debuggers.

In addition, stored procedures are platform dependent. A stored-procedure-based application is
not portable to another database; it is tied to that particular database vendor.

Such database dependency is not compatible with efforts to have a flexible computing
environment based on open systems.
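
A minimal sketch of that portability point, using the Python DB-API (PEP 249). Here 'conn' stands for any vendor's connection object, the procedure and table names are hypothetical, and the parameter placeholder syntax varies by driver.

    def total_via_stored_procedure(conn, order_id):
        # The discount logic lives inside the database, written in the
        # vendor's procedural dialect (PL/SQL, T-SQL, ...); switching
        # databases means rewriting it.
        cur = conn.cursor()
        cur.callproc("calc_order_total", [order_id])
        return cur.fetchone()[0]

    def total_in_application_tier(conn, order_id):
        # Plain SQL plus logic in the application language: portable across
        # vendors and debuggable with ordinary tools.
        cur = conn.cursor()
        cur.execute("SELECT qty, unit_price FROM order_lines WHERE order_id = ?",
                    (order_id,))
        total = sum(qty * price for qty, price in cur.fetchall())
        return total * 0.9 if total > 1000 else total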

Other Problems with Stored Procedures

 Stored procedures were (and still are) very difficult to write and debug.
 The lack of server managers makes it difficult to load balance stored procedures across a distributed environment.
 Stored procedures are inadequate when the application requires access to non-database services.

In short, while stored procedures are well suited to maintaining business rule data integrity, they
are generally not a suitable choice to implement two-tier or N-Tier client/server systems.

Given these sizable drawbacks, it is no surprise that an increasing number of companies are
moving quickly to the safer, less costly and more practical distributed computing environment.

The N-Tier Client/Server Architecture

N-Tier architectures sidestep most of the problems described above with two-tier architectures.

The primary feature of a more distributed, or multi-tier client/server architecture is that it moves
the application burden from the client to the server side of the equation.

In the N-Tier model, you redistribute the logic-intensive application activities away from the client
side (which in many cases consists of thousands of client workstations) to the more centralized
and far more cost-efficient server domain.

An N-Tier architecture features two separate server levels: a business logic or application layer
and the underlying data layer.

By moving to N-Tier enterprise-wide computing, we eliminate the expensive and cumbersome
necessity of maintaining redundant databases while gaining the substantial benefits of reduced
hardware and software costs.

Thus, we distribute computing while providing powerful access to enterprise data.

Clients are insulated from database and network operations and are no longer burdened with the
need to know where or how to find and get data.

The client does what it does best: it handles the user interface.

The client becomes the presentation layer, handling just the user interface, and is freed of
application-layer tasks and the associated need for powerful and expensive hardware/software
technologies.

In an N-Tier client/server environment, where the application layer functions between the
presentation and data layers, the client does not have to be a powerful, Pentium-based PC.

Because client-side terminals are now freed from these technology-intensive application tasks,
numerous client seats can be hosted on lower-end Intel-based systems, Macintosh, X-Terminal or
NC devices.

http://n-tier.com/articles/cslimits.html
References

Tier 1 / Tier 2 / Tier 3 / Tier 4 Data Center -- http://www.cyberciti.biz/faq/data-center-standard-overview/
"Two-tier distributed systems vs. three-tier distributed systems" -- SearchSecurity.techtarget.com
Two Tier Security -- www.scribd.com
Building Secure N-Tier Environments -- www.sun.com
3-Tier Architecture -- fdsfsdf.googlecode.com/files/3Tier.ppt
Networking - 3-Tier Client/Server Architecture -- en.kioskea.net
Two-tiered Client/Server Limitations -- http://n-tier.com/articles/cslimits.html
