
Chapter 2

CACHING: ACCELERATION OF HARD DISK ACCESS


Caches are used to speed up read and write (R/W) operations by serving them from the cache rather than from the slower physical medium. Specifically, in the field of disk subsystems, caches are designed to accelerate write and read accesses to physical hard disks.
Types of cache:
1. Cache on the hard disk and
2. Cache in the RAID controller.
i. Write cache and
ii. Read cache.
1. Cache on the hard disk
Each individual hard disk comes with a very small cache. This is necessary because the transfer rate of the I/O channel to the disk controller is significantly higher than the speed at which the disk controller can write to or read from the physical hard disk.
2. When a RAID controller writes a block to a physical hard disk, the disk controller first stores the block in its cache. The disk controller can then write the block to the physical hard disk in its own time, whilst the I/O channel can be used for data traffic to the other hard disks. Many RAID levels use precisely this technique to increase the performance (speed) of the virtual hard disk.
3. Read access is accelerated in a similar manner. If a server or an intermediate RAID controller wishes to read a block, it sends the address of the requested block to the hard disk controller. The I/O channel can be used for other data traffic while the hard disk controller copies the complete block from the physical hard disk into its cache at the slower data rate. The hard disk controller then transfers the block from its cache to the RAID controller or to the server at the higher data rate of the I/O channel.
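To make the interplay between the slow platter and the fast I/O channel concrete, the following Python fragment sketches a disk controller with a small on-board cache. It is only an illustration: the class name, the block size and the two transfer rates are invented values, not a model of any real drive.

```python
# Minimal sketch: a hard disk whose controller owns a small block cache, so the
# fast I/O channel is decoupled from the slow platter. All figures are made up.

PLATTER_MBPS = 150     # assumed sustained platter transfer rate
CHANNEL_MBPS = 600     # assumed I/O channel transfer rate
BLOCK_MB = 0.5         # assumed block size

class DiskWithCache:
    def __init__(self):
        self.platter = {}          # block number -> data on the physical disk
        self.cache = {}            # the disk controller's small on-board cache

    def write(self, block_no, data):
        # The controller accepts the block into its cache at channel speed and
        # destages it to the platter "in its own time".
        self.cache[block_no] = data
        return BLOCK_MB / CHANNEL_MBPS         # time the I/O channel is occupied

    def destage(self):
        # Background step: move cached writes to the platter at platter speed.
        busy = len(self.cache) * BLOCK_MB / PLATTER_MBPS
        self.platter.update(self.cache)
        self.cache.clear()
        return busy                             # time spent; the channel stays free

    def read(self, block_no):
        if block_no not in self.cache:          # slow copy: platter -> cache
            self.cache[block_no] = self.platter[block_no]
        return self.cache[block_no]             # then shipped at channel speed
```

In this picture a write occupies the I/O channel only for the time it takes to hand the block to the cache; the slower destaging to the platter happens later without blocking the channel.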
4. Write cache in the disk subsystem controller
In addition to the cache of the individual hard drives, many disk subsystems come with their own cache, which in some models is gigabytes in size. As a result, it can buffer much greater data quantities than the cache on the hard disk.
The write cache should have a battery backup and ideally be mirrored. The battery backup is necessary to allow the data in the write cache to survive a power cut. A write cache with battery backup can significantly reduce the write penalty of RAID 4 and RAID 5.
Many applications do not write data at a continuous rate, but in batches. If a server sends several data blocks to the disk subsystem, the controller initially buffers all the blocks in the battery-backed write cache and immediately reports back to the server that all data has been securely written to the drive. The disk subsystem then copies the data from the write cache to the slower physical hard disks in order to make space for the next write peak.
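The behaviour just described, acknowledge as soon as the block is buffered and destage later, can be sketched in a few lines. The names (WriteCache, destage_one) and the simple queue are assumptions made for this illustration; they do not correspond to the interface of any real disk subsystem.

```python
# Illustrative sketch only: a battery-backed write cache that acknowledges the
# server as soon as a block is buffered and destages to disk in the background.
from collections import deque

class WriteCache:
    def __init__(self, backend_write):
        self.pending = deque()          # blocks buffered in the battery-backed cache
        self.backend_write = backend_write

    def write(self, block_no, data):
        self.pending.append((block_no, data))
        return "ack"                    # reported to the server immediately

    def destage_one(self):
        # Runs "in the disk subsystem's own time" between write peaks.
        if self.pending:
            self.backend_write(*self.pending.popleft())

disk = {}
cache = WriteCache(lambda no, data: disk.__setitem__(no, data))
for i in range(3):
    assert cache.write(i, b"payload") == "ack"   # server sees fast acknowledgements
while cache.pending:
    cache.destage_one()                          # slower physical writes happen later
```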
Read cache in the disk subsystem controller
Accelerating read operations with a cache is more difficult than accelerating write operations. To speed up read access by the server, the disk subsystem's controller must copy the relevant data blocks from the slower physical hard disk into the fast cache before the server requests the data in question.
The pro("e* #ith thi i that it i +er) di..ic'"t .or the di! '()te*-
contro""er to #or! o't in ad+ance #hat data the er+er #i"" a! .or ne0t. The
controller in the disk subsystem knows neither the structure of the information stored
in the data blocks nor the access pattern that an application will follow when accessing the
data. Conse)uently, the controller can only analy,e past data access and use this to
e/trapolate which data blocks the ser&er will access ne/t.
In sequential read processes this prediction is comparatively simple; in the case of random access it is almost impossible. As a rule of thumb, good RAID controllers manage to provide around 40% of the requested blocks from the read cache in mixed read profiles. The disk subsystem's controller cannot further increase the ratio of read accesses served from the cache (pre-fetch hit rate), because it does not have the necessary application knowledge. Therefore, it is often worthwhile realizing a further cache within applications. For example, after opening a file, file systems can load all blocks of the file into main memory (RAM); the file system knows the structures in which the files are stored. File systems can thus achieve a pre-fetch hit rate of 100%. However, it is impossible to know in advance whether the expense of storing the blocks is worthwhile in an individual case, since the application may not actually request further blocks of the file.
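A very simple pre-fetch heuristic of the kind hinted at here can be sketched as follows. The window size and the hit-rate bookkeeping are invented for the example; real RAID controllers use considerably more sophisticated prediction.

```python
# Sketch of a sequential pre-fetch heuristic: extrapolate only from past
# accesses and speculatively load the next few blocks when the pattern looks
# sequential. Window size and bookkeeping are illustration values.

class ReadCache:
    def __init__(self, prefetch_window=4):
        self.cache = set()
        self.last_block = None
        self.window = prefetch_window
        self.hits = self.requests = 0

    def read(self, block_no):
        self.requests += 1
        if block_no in self.cache:
            self.hits += 1
        if self.last_block is not None and block_no == self.last_block + 1:
            # looks sequential: pre-fetch the next blocks into the cache
            self.cache.update(range(block_no + 1, block_no + 1 + self.window))
        self.last_block = block_no

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0

rc = ReadCache()
for b in range(100):        # sequential scan: prediction is easy
    rc.read(b)
print(round(rc.hit_rate(), 2))   # close to 1.0 for this access pattern
```

For the sequential scan in the example the hit rate approaches 100%, whereas for randomly chosen block numbers it would stay close to zero, which mirrors the rule of thumb above.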
INTELLIGENT DISK SUBSYSTEMS
Intelligent disk subsystems represent the third level of complexity for controllers, after JBODs and RAID arrays. The controllers of intelligent disk subsystems offer additional functions, such as instant copies, over and above those offered by RAID.
Instant copies can virtually copy data sets of several terabytes within a disk subsystem in a few seconds.
Virtual copying means that the disk subsystem fools the attached servers into believing that it is capable of copying such large data quantities in a short space of time. The actual copying process takes significantly longer. However, the same server, or a second server, can access the virtually copied data after a few seconds (Figure 2.18).
Instant copies are used, for example, for the generation of test data, for the backup of data and for the generation of data copies for data mining. When copying data using instant copies, attention should be paid to the consistency of the copied data. There are numerous alternative implementations of instant copy. One thing that all implementations have in common is that the pretence of being able to copy data in a matter of seconds costs resources.
Figure 2.18: Instant copies can virtually copy several terabytes of data within a disk subsystem in a few seconds: Server 1 works on the original data (1). The original data is virtually copied in a few seconds (2). Then server 2 can work with the data copy, whilst server 1 continues to operate with the original data (3).

Cot in+o"+ed
"ll reali,ations of instant copies re)uire controller computing time and cache and place
a load on internal #/' channels and hard disks. The different implementations of instant
copy force the performance down at different times. 6owe&er, it is not possible to choose
the most fa&orable implementation alternati&e depending upon the application used
because real disk subsystems only e&er reali,e one implementation alternati&e of instant
copy.
T#o i*p"e*entation a"ternati+e that .'nction di..erent")
"t one e/treme, the data is permanently mirrored *R"#$ or R"#$ 0+. ;pon the copy
command both mirrors are separated: the separated mirrors can then be used
independently of the original.
"fter the separation of the mirror, the production data is no longer protected against
the failure of a hard disk. Therefore, to increase data protection, three mirrors are often
kept prior to the separation of the mirror *three3way mirror+, so that the production data
is always mirrored after the separation of the copy.
"t the other e/treme, no data at all is copied prior to the copy command, only after the
instant copy has been re)uested. To achie&e this, the controller administers two data areas,
one for the original data and one for the data copy generated by means of instant copy.
The controller must ensure that during write and read access operations to original data
or data copies the blocks in )uestion are written to or read from the data areas in
)uestion.
8artia" cop) and .'"" cop)
#n some implementations, it is permissible to write to the copy, in some it is not. Some
implementations copy <ust the blocks that ha&e actually changed *partial copy+, others copy
all blocks as a background process until a complete copy of the original data has been
generated *full copy+.
Consider access by server 1 to the original data (Figure 2.18). Read operations are completely unproblematic; they are always served from the area of the original data. Handling write operations is trickier. If a block is changed for the first time since the generation of the instant copy, the controller must first copy the old block to the data copy area, so that server 2 can continue to access the old data set. Only then may it write the changed block to the original data area.
If a block that has already been changed has to be written again, it is written directly to the original data area. In this case the controller must not back up the previous version of the block to the data copy area, because otherwise it would overwrite the correct version of the block belonging to the copy.
The case differentiation for access by server 2 to the data copy generated by means of instant copy is somewhat simpler. In this case, write operations are unproblematic: the controller always writes all blocks to the data copy area. For read operations, on the other hand, it has to distinguish whether the block in question has already been copied or not. This determines whether it has to read the block from the original data area or from the data copy area before forwarding it to the server.
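The case differentiation just described can be written down compactly. The following sketch assumes the variant in which nothing is copied up front and the old block is preserved only when the original is overwritten for the first time (copy-on-write); class and method names are invented for the illustration.

```python
# Minimal copy-on-write sketch of the case differentiation for instant copy.

class InstantCopy:
    def __init__(self, original):
        self.original = original          # block number -> data (live data area)
        self.copy_area = {}               # blocks preserved or written for the copy
        self.copied = set()               # blocks already present in the copy area

    # --- access by server 1 to the original data -------------------------
    def write_original(self, block_no, data):
        if block_no not in self.copied:
            # first change since the instant copy: preserve the old version
            self.copy_area[block_no] = self.original.get(block_no)
            self.copied.add(block_no)
        self.original[block_no] = data    # later writes go straight to the original

    def read_original(self, block_no):
        return self.original.get(block_no)    # always served from the original area

    # --- access by server 2 to the data copy -----------------------------
    def write_copy(self, block_no, data):
        self.copy_area[block_no] = data       # writes always go to the copy area
        self.copied.add(block_no)

    def read_copy(self, block_no):
        if block_no in self.copied:           # block has diverged: read the copy area
            return self.copy_area[block_no]
        return self.original.get(block_no)    # otherwise still shared with the original
```

A background full copy would simply walk over all blocks not yet in `copied` and move them into the copy area as well.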
Space-efficient instant copy
The subsequent copying of the blocks provides the basis for important variants of instant copy. Space-efficient instant copy copies only the blocks that are changed (Figure 2.19). These normally require considerably less physical storage space than a complete copy. Yet the exported virtual hard disks of the original and of the copy created through space-efficient instant copy are of the same size.
Figure 2.19: Space-efficient instant copy manages with less storage capacity than the basic form of instant copy (Figure 2.18). Space-efficient instant copy only copies the changed blocks into the separate area before they are overwritten (2). From the view of server 2 the hard disk copied in this way is just as large as the source disk (3). The link between source and copy remains, as the changed blocks are worthless on their own.
From the view of the server, both virtual disks continue to have the same size. Nevertheless, considerably less physical storage space is needed overall, so the cost of using instant copy can be reduced.
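The space argument can be illustrated with a small, self-contained sketch. The block counts are invented example figures; the point is only that the copy is exported with the full virtual size while physically holding just the diverged blocks.

```python
# Illustration of space-efficient instant copy: the copy stores only blocks that
# have diverged, yet it is exported with the full virtual size of the source.

TOTAL_BLOCKS = 1_000_000          # virtual size of source and copy, in blocks
changed_blocks = {n: b"old contents" for n in range(1_000)}   # 0.1% were overwritten

def read_from_copy(block_no, source_read):
    # Diverged blocks come from the sparse copy area; everything else is still
    # read from the source disk (hence the link between source and copy must
    # remain intact).
    if block_no in changed_blocks:
        return changed_blocks[block_no]
    return source_read(block_no)

print(read_from_copy(0, lambda n: b"live data"))       # preserved old contents
print("exported size of copy:", TOTAL_BLOCKS, "blocks")
print("physically allocated :", len(changed_blocks), "blocks")
```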
Incre*enta" intant cop) is another important &ariant of instant copy. #n some
situations such a hea&y burden is placed on the original data and the copy that the
performance within the disk subsystem suffers unless the data has been copied
completely onto the copy. "n e/ample of this is back3up when data is completely backed up
through instant copy. 'n the other side, the background process for copying all the data of
an instant copy re)uires many hours when &ery large data &olumes are in&ol&ed, making
this not a &iable alternati&e.
One remedy is incremental instant copy, where the data is copied in its entirety only the first time around. Afterwards the instant copy is repeated, for example daily, whereby only the changes since the previous instant copy are copied.
A re+era" o. intant cop) is yet another important &ariant. #f data is backed up
through instant copy, then if a failure occurs, the operation should be continued with
the copy. " simple approach is to shut down the application, copy back the data on the
producti&e hard disks as a second instant copy from the copy onto the producti&e hard
disks and restart the application. #n this case, the disk subsystem must enable a re&ersal of
the instant copy. #f this function is not a&ailable, if a failure occurs, the data either has to
be copied back to the producti&e disks by different means or the operation continues
directly with the copy. 8oth of these approaches are coupled with ma<or copying
operations or ma<or configuration changes, and, conse)uently the reco&ery takes
considerably longer than a re&ersal of the instant copy.
Remote mirroring
Instant copies are ideally suited for copying data sets within a disk subsystem. However, they can only be used to a limited degree for data protection. Although data copies generated using instant copy protect against application errors (accidental deletion of a file system) and logical errors (errors in the database program), they do not protect against the failure of the disk subsystem itself.
Something as simple as a power failure can prevent access to production data and data copies for several hours. A fire in the disk subsystem would destroy original data and data copies alike. For data protection, therefore, the proximity of production data and data copies is fatal.
Remote mirroring offers protection against such catastrophes. Modern disk subsystems can mirror their data, or part of their data, independently to a second disk subsystem at a distant location. The entire remote mirroring operation is handled by the two participating disk subsystems. Remote mirroring is invisible to application servers and does not consume their resources. However, remote mirroring requires resources in the two disk subsystems and in the I/O channel that connects them, which means that reductions in performance can sometimes make their way through to the application.
Hi6h a+ai"a(i"it) 'in6 re*ote *irrorin6
4igure !.!0 shows an application that is designed to achie&e high a&ailability using remote
mirroring. The application ser&er and the disk subsystem, plus the associated data, are
installed in the primary data centre. The disk subsystem independently mirrors the
application data onto the second disk subsystem that is installed .0 kilometers away in the
backup data centre by means of remote mirroring. Remote mirroring ensures that the
application data in the backup data centre is always kept up3to3date with the time inter&al
for updating the second disk subsystem being configurable. #f the disk subsystem in
the primary data centre fails, the backup application ser&er in the backup data centre
can be started up using the data of the second disk subsystem and the operation of the
application can be continued.
Figure 2.20: High availability with remote mirroring: (1) The application server stores its data on a local disk subsystem. (2) The disk subsystem saves the data to several physical drives by means of RAID. (3) The local disk subsystem uses remote mirroring to mirror the data onto a second disk subsystem located in the backup data centre. (4) Users use the application via the LAN. (5) The stand-by server in the backup data centre is used as a test system. The test data is located on a further disk subsystem. (6) If the first disk subsystem fails, the application is started up on the stand-by server using the data of the second disk subsystem. (7) Users now use the application via the WAN.
Synchronous and asynchronous remote mirroring
In synchronous remote mirroring the first disk subsystem sends the data to the second disk subsystem before it acknowledges a server's write command. By contrast, asynchronous remote mirroring acknowledges a write command immediately; only then does it send the copy of the block to the second disk subsystem.
Figure 2.21 illustrates the data flow of synchronous remote mirroring.
Figure 2.21: The data flow of synchronous remote mirroring.
In asynchronous remote mirroring, by contrast, a disk subsystem acknowledges a write operation as soon as it has saved the block itself. The price of the rapid response time achieved using asynchronous remote mirroring is obvious: unlike synchronous remote mirroring, it cannot guarantee that the data on the second disk subsystem is up to date. This is precisely the case if the first disk subsystem has sent the write acknowledgement to the server but the block has not yet been saved to the second disk subsystem.
In synchronous remote mirroring (Figure 2.21) the server writes block A to the first disk subsystem. This stores the block in its write cache and immediately sends it on to the second disk subsystem, which also initially stores the block in its write cache. The first disk subsystem waits until the second reports that it has written the block. Whether the block is still held in the write cache of the second disk subsystem or has already been written to the hard disk is irrelevant to the first disk subsystem. It does not acknowledge to the server that the block has been written until it has received confirmation from the second disk subsystem that the block has been written there.
Synchronous remote mirroring has the advantage that the copy of the data held by the second disk subsystem is always up to date. This means that if the first disk subsystem fails, the application can continue working with the most recent data set by utilizing the data on the second disk subsystem.
The disadvantage is that copying the data from the first disk subsystem to the second and sending the write acknowledgement back from the second to the first increase the response time of the first disk subsystem to the server. However, it is precisely this response time that determines the throughput of applications such as databases and file systems.
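The difference between the two modes lies purely in when the acknowledgement is sent, which the following sketch tries to capture. The class name and the callable standing in for the link to the second disk subsystem are assumptions made for this illustration, not a real mirroring protocol.

```python
# Sketch of synchronous vs asynchronous acknowledgement order in remote mirroring.

class FirstDiskSubsystem:
    def __init__(self, send_to_second, synchronous=True):
        self.cache = {}
        self.send_to_second = send_to_second    # returns once the peer holds the block
        self.synchronous = synchronous
        self.backlog = []                       # blocks not yet shipped (async mode)

    def write(self, block_no, data):
        self.cache[block_no] = data
        if self.synchronous:
            self.send_to_second(block_no, data)   # wait for the second subsystem...
            return "ack"                          # ...then acknowledge the server
        self.backlog.append((block_no, data))     # acknowledge first, ship later
        return "ack"

    def drain_backlog(self):
        while self.backlog:
            self.send_to_second(*self.backlog.pop(0))

second_copy = {}
sync = FirstDiskSubsystem(lambda n, d: second_copy.__setitem__(n, d), synchronous=True)
sync.write(1, b"A")
assert 1 in second_copy     # with synchronous mirroring the copy is already current
```

With `synchronous=False` the same write would be acknowledged immediately and shipped later by `drain_backlog`, at the price that the second copy may briefly lag behind.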
"n important factor for the repone ti*e is the i6na" tranit ti*e between
the two disk subsystems. "fter all, their communication is encoded in the form of
physical signals, which propagate at a certain speed. The propagation of the signals
from one disk subsystem to another simply costs time. "s a rule of thumb, it is worth
using synchronous remote mirroring if the cable lengths from the ser&er to the
second disk subsystem &ia the first are a ma/imum of B ? 0 kilometers. 6owe&er,
many applications can deal with noticeably longer distances. "lthough performance
may then not be optimal, it is still good enough.
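A rough calculation shows where this rule of thumb comes from. It uses the common approximation of about 200,000 km/s for the signal speed in optical fibre and ignores switch and controller latencies; the distance is an example value.

```python
# Back-of-the-envelope estimate of the extra latency per synchronous write.

SPEED_KM_PER_S = 200_000           # approx. signal speed in optical fibre
distance_km = 10                   # example distance between the two disk subsystems

one_way_us = distance_km / SPEED_KM_PER_S * 1e6          # ~50 microseconds
round_trip_us = 2 * one_way_us                           # block out + acknowledgement back
print(f"extra latency per synchronous write: ~{round_trip_us:.0f} microseconds")
```

Roughly 100 microseconds of additional latency per write is negligible for many workloads, but over hundreds of kilometers the same calculation quickly reaches the millisecond range, which is why long distances push towards asynchronous mirroring.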
If we wish to mirror data over long distances but do not want to rely on asynchronous remote mirroring alone, it is necessary to use three disk subsystems (Figure 2.22). The first two may be located just a few kilometers apart, so that synchronous remote mirroring can be used between them. In addition, the data of the second disk subsystem is mirrored onto a third by means of asynchronous remote mirroring. This combination of synchronous and asynchronous remote mirroring means that rapid response times can be achieved together with mirroring over long distances. However, the solution comes at a price: for most applications the cost of this level of data protection would exceed the costs that would be incurred after data loss in the event of a catastrophe. The approach is therefore only considered for very important applications.
Figure 2.22: The combination of synchronous and asynchronous remote mirroring.
"n important aspect of remote mirroring is the duration of the initial copying of the data.
With large )uantities of data it can take se&eral hours until all data is copied from the first
disk subsystem to the second one. This is completely acceptable the first time remote
mirroring is established. 6owe&er, sometimes the connection between both disk subsystems
is interrupted later during operations ? for e/ample, due to a fault in the network between
both systems or during maintenance work on the second disk subsystem
The combination of synchronous and asynchronous remote mirroring means that rapid
response times can be achie&ed in combination with mirroring o&er long distances. "fter the
appropriate configuration the application continues operation on the first disk
subsystem without the changes ha&ing been transferred to the second disk subsystem.
Small )uantities of data can be transmitted in their entirety again after a fault has been
resol&ed. 6owe&er, with large )uantities of data a mechanism should e/ist that allows only
those blocks that were changed during the fault to be transmitted. This is also referred to as
'pendin6 ;or .ree<in6= o. re*ote *irrorin6 and resuming it later on. Sometimes
there is a deliberate reason for suspending a remote mirroring relationship. #n some cases it
may be necessary to suspend remote mirroring relationships at certain points in time for the
purposes of creating consistent copies in backup data centers.
Re+era" o. re*ote *irrorin6
Sometimes there is a need for a re&ersal of remote mirroring. #n this case, if the first
disk subsystem fails, the entire operation is completely switched o&er to the second disk
subsystem and afterwards the data is only changed on that second system. The second disk
subsystem logs all changed blocks so that only those blocks that were changed during
the failure are transmitted to the first disk subsystem once it is operational again. This
ensures that the data on both disk subsystems is synchroni,ed once again.
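The change log that makes this possible can be sketched as a simple set of block numbers. The class and method names are invented for the illustration; real disk subsystems typically keep such information as bitmaps in the controller.

```python
# Sketch of the change log used for resynchronisation after a reversal of
# remote mirroring: only blocks written during the outage are sent back.

class SecondDiskSubsystem:
    def __init__(self):
        self.blocks = {}
        self.changed_since_failover = set()   # log of blocks written during the outage

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.changed_since_failover.add(block_no)

    def resynchronise(self, first_subsystem_blocks):
        # Once the first disk subsystem is operational again, transmit only the
        # logged blocks instead of copying the whole data set.
        for block_no in sorted(self.changed_since_failover):
            first_subsystem_blocks[block_no] = self.blocks[block_no]
        self.changed_since_failover.clear()
```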
Consistency groups
Applications such as databases normally stripe their data over multiple virtual hard disks. Depending on data quantities and performance requirements, the data is sometimes even distributed over multiple disk subsystems. Sometimes, as with the web architecture, multiple applications running on different operating systems manage common, related data sets. The copies created through instant copy and remote mirroring must also be consistent for these types of distributed data sets, so that they can be used if necessary to restart operation.
The problem in this case is that, unless further measures are taken, the copying of multiple virtual hard disks through instant copy and remote mirroring will not be consistent. If, for example, a database with multiple virtual hard disks is copied using instant copy, the copies are created at almost the same time, but not exactly at the same time. However, databases continuously write time stamps to their virtual hard disks. At a restart, the database checks the time stamps of the virtual hard disks and aborts the start if the time stamps of all the disks do not match 100%. This means that the operation cannot be restarted with the copied data, and the use of instant copy has been worthless.
Consistency groups provide help in this situation. A consistency group for instant copy combines multiple instant copy pairs into one unit. If an instant copy is then requested for a consistency group, the disk subsystem makes sure that all virtual hard disks of the consistency group are copied at exactly the same point in time. Due to this simultaneous copying of all the virtual hard disks of a consistency group, the copies are given a consistent set of time stamps. An application can therefore restart with hard disks that have been copied in this way. When a consistency group is copied via instant copy, attention of course still has to be paid to the consistency of the data on each individual virtual hard disk, just as when an individual hard disk is copied. It is also important that the instant copy pairs of a consistency group can span multiple disk subsystems, as is required for large databases and large file systems.
The need to combine multiple remote mirroring pairs into a consistency group also exists with remote mirroring (Figure 2.24). Here too the consistency group should be able to span multiple disk subsystems. If the data of an application is distributed over multiple virtual hard disks, or even over multiple disk subsystems, and the remote mirroring of this application has been deliberately suspended so that a consistent copy of the data can be created in the backup data centre, then all remote mirroring pairs must be suspended at exactly the same point in time. This ensures that the time stamps on the copies are consistent and the application can restart smoothly from the copy.
Write-order consistency for asynchronous remote mirroring pairs is another important feature of consistency groups for remote mirroring. A prerequisite for functions such as the journaling of file systems and the log mechanisms of databases is that updates of the data are executed in a very specific sequence. If the data is located on multiple virtual hard disks that are mirrored asynchronously, individual changes can overtake one another during the mirroring, so that the data in the backup data centre is updated in a different sequence than in the primary data centre (Figure 2.24). The consistency of the data in the backup data centre is then at risk. Write-order consistency ensures that, despite asynchronous mirroring, the data on the primary hard disks and on the target hard disks is updated in the same sequence, even when it spans multiple virtual hard disks or multiple disk subsystems.
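The essential idea of a consistency group, that no write may fall between the copies of the individual volumes, can be sketched with a single lock. The threading-based design and all names are assumptions made for this illustration, not how a disk subsystem is implemented internally.

```python
# Minimal sketch of a consistency group for instant copy: all member volumes are
# copied under one lock, so the copies reflect a single point in time.
import threading

class ConsistencyGroup:
    def __init__(self, volumes):
        self.volumes = volumes            # name -> dict of block number -> data
        self.lock = threading.Lock()      # serialises writes against the group copy

    def write(self, volume, block_no, data):
        with self.lock:                   # application writes go through the group
            self.volumes[volume][block_no] = data

    def instant_copy(self):
        with self.lock:                   # all volumes are copied at one point in time
            return {name: dict(blocks) for name, blocks in self.volumes.items()}

group = ConsistencyGroup({"db-data": {}, "db-log": {}})
group.write("db-log", 0, b"txn-1")
group.write("db-data", 0, b"row-1")
snapshot = group.instant_copy()           # db-data and db-log are mutually consistent
```

Because both volumes are copied under the same lock, the snapshot reflects a single point in time across db-data and db-log, which is exactly what the database needs at restart.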
LUN masking
Logical Unit Number (LUN) masking is an authorization process that makes a Logical Unit Number available to some hosts and unavailable to other hosts.
So-called LUN masking brings us to the third important function, after instant copy and remote mirroring, that intelligent disk subsystems offer over and above RAID. LUN masking limits the access to the hard disks that the disk subsystem exports to the connected servers. A disk subsystem makes the storage capacity of its internal physical hard disks available to servers by permitting access to individual physical hard disks, or to virtual hard disks created using RAID, via its connection ports. Based upon the SCSI protocol, all hard disks, physical and virtual, that are visible outside the disk subsystem are also known as LUNs. Without LUN masking every server would see all hard disks that the disk subsystem provides. Figure 2.23 shows a disk subsystem without LUN masking to which three servers are connected. Each server sees all hard disks that the disk subsystem exports. As a result, considerably more hard disks are visible to each server than is necessary. In particular, each server also sees the hard disks that are required by applications running on a different server. This means that the individual servers must be configured very carefully. In Figure 2.23, an erroneous formatting of disk LUN 3 by server 1 would destroy the data of the application that runs on server 3. In addition, some operating systems are very greedy: when booting up they try to claim every hard disk that carries the signature (label) of a foreign operating system.
Figure 2.23: Chaos:
Each server works to its own virtual hard disk. Without LUN masking each server sees all hard disks. A configuration error on server 1 can destroy the data on the other two servers. The data is thus poorly protected.
Without LUN masking, therefore, the use of the hard disks must be configured very carefully in the operating systems of the participating servers.
LUN masking brings order to this chaos by assigning the externally visible hard disks to individual servers. As a result, it limits the visibility of exported disks within the disk subsystem. Figure 2.24 shows how LUN masking brings order to the chaos of Figure 2.23.
Figure 2.24: Order:
Each server works to its own virtual hard disk. With LUN masking, each server sees only its own hard disks. A configuration error on server 1 can no longer destroy the data of the two other servers. The data is now protected.
Each server now sees only the hard disks that it actually requires. LUN masking thus acts as a filter between the exported hard disks and the accessing servers. It is no longer possible to destroy data that belongs to applications running on another server. Configuration errors are still possible, but their consequences are no longer so devastating. Furthermore, configuration errors can now be traced more quickly, since the relevant information is bundled within the disk subsystem instead of being distributed over all the servers.
We differentiate between port-based LUN masking and server-based LUN masking. Port-based LUN masking is found primarily in low-end disk subsystems. In port-based LUN masking the filter works only at the granularity of a port. This means that all servers connected to the disk subsystem via the same port see the same disks.
Server-based LUN masking offers more flexibility. In this approach every server sees only the hard disks assigned to it, regardless of which port it is connected via or which other servers are connected via the same port.
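Conceptually, server-based LUN masking is nothing more than a lookup table that filters the exported LUNs per server, as the following sketch shows. The server names, LUN numbers and table contents are invented; in practice servers are identified by attributes such as their WWNs.

```python
# Sketch of server-based LUN masking as a simple per-server filter.

EXPORTED_LUNS = {1, 2, 3, 4, 5}                   # all LUNs the disk subsystem exports

MASKING_TABLE = {                                  # example assignments
    "server1": {1, 2},
    "server2": {3},
    "server3": {4, 5},
}

def visible_luns(server):
    # Without LUN masking every server would see EXPORTED_LUNS in full.
    return EXPORTED_LUNS & MASKING_TABLE.get(server, set())

print(visible_luns("server1"))   # {1, 2}: server1 cannot even see, let alone format, LUN 3
```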
AVAILABILITY OF DISK SUBSYSTEMS
Disk subsystems are assembled from standard components which have only a limited fault-tolerance. These standard components are combined in such a way that the fault-tolerance of the entire disk subsystem lies significantly above the fault-tolerance of the individual components. Today, disk subsystems can be constructed so that they can withstand the failure of any component without data being lost or becoming inaccessible. We can also say that such disk subsystems have no 'single point of failure'.
The indi+id'a" *ea're to be taken to increase the a+ai"a(i"it) o. data:
i. The data is distributed o&er se&eral hard disks using R"#$ processes and
supplemented by further data for error correction. "fter the failure of a physical
hard disk, the data of the defecti&e hard disk can be reconstructed from the
remaining data and the additional data.
ii. #ndi&idual hard disks store the data using the so3called 6amming code. The 6amming
code allows data to be correctly restored e&en if indi&idual bits are changed on the hard
disk.
iii. Self3 diagnosis functions in the disk controller continuously monitor the rate of bit errors
and the physical &ariables *e.g., temperature, spindle &ibration+.
i&. #n the e&ent of an increase in the error rate, hard disks can be replaced before data is lost.
&. %ach internal physical hard disk can be connected to the controller &ia two internal
#/' channels. #f one of the two channels fails, the other can still be used.
&i. The controller in the disk subsystem can be reali,ed by se&eral controller instances. #f one
of the controller instances fails, one of the remaining instances takes o&er the tasks
of the defecti&e instance.
&ii. 'ther au/iliary components such as power supplies, batteries and fans can often be
duplicated so that the failure of one of the components is unimportant. When connecting
the power supply it should be ensured that the &arious power cables are at least
connected through &arious fuses. #deally, the indi&idual power cables would be
supplied &ia different e/ternal power networks5 howe&er, in practice this is seldom
reali,able.
&iii. Ser&er and disk subsystem are connected together &ia se&eral #/' channels. #f one of
the channels fails, the remaining ones can still be used.
i/. #nstant copies can be used to protect against logical errors. 4or e/ample, it would be
possible to create an instant copy of a database e&ery hour. #f a table is Eaccidentally2
deleted, then the database could re&ert to the last instant copy in which the database is
still complete.
/. Remote mirroring protects against physical damage. #f, for whate&er reason, the original
data can no longer be accessed, operation can continue using the data copy that was
generated using remote mirroring.
/i. Consistency groups and write3order consistency synchroni,e the copying of multiple
&irtual hard disks. This means that instant copy and remote mirroring can e&en guarantee
the consistency of the copies if the data spans multiple &irtual hard disks or e&en
multiple disk subsystems.
/ii. @;A masking limits the &isibility of &irtual hard disks. This pre&ents data being changed
or deleted unintentionally by other ser&ers.
This list shows that disk subsystems can guarantee the availability of data to a very high degree. Despite everything, in practice it is sometimes necessary to shut down and switch off a disk subsystem. In such cases, it can be very tiresome to coordinate all project groups to a common maintenance window, especially if they are distributed over different time zones. Further important factors for the availability of an entire IT system are the availability of the applications or the application servers themselves and the availability of the connection between application servers and disk subsystems.
