
Developing a Storage Strategy for the Future
An Internet.com Storage eBook
Contents

The Data Pileup: Save Money or Save Data?
Judy Mottl

Three Acronyms That Could Change the Storage World
Henry Newman

Sorting Out Your Storage Options
Jennifer Schiff

Choosing the Right High-Performance File System
Drew Robb

Managing Storage in a Virtual World
Drew Robb

The Trouble with Virtual Disaster Recovery
Richard Adhikari

This content was adapted from Internet.com's Enterprise Storage Forum, Enterprise IT Planet, and InternetNews Web sites. Contributors: Richard Adhikari, Judy Mottl, Henry Newman, Drew Robb, Jennifer Schiff and Paul Shread.

Developing a Storage Strategy for the Future, an Internet.com Storage eBook. © 2008, Jupitermedia Corp.
The Data Pileup: Save Money or Save Data?
By Judy Mottl

Given how cheap storage has become, it's understandable that enterprises are expanding arrays to house growing data. But stocking up on hardware and software to hold more and more information is a costly misstep, according to a Gartner report.

New regulations and legal concerns are likely prompting IT to keep every bit and byte of data just in case some litigation issue arises, and since storage costs are decreasing, the urge to push another box into play can be tempting.

The problem is that data growth will very quickly outpace the savings in storage, according to Whit Andrews, an analyst at Gartner.

"It's time for companies to modernize storage strategies and understand how information access technology can be a good tool for making sure they need what they're keeping," Andrews told InternetNews.com.

According to Gartner, the highest reported price in the first quarter of 2008 for managed storage was $12.50 per gigabyte per month, and the lowest was $0.29 per gigabyte per month for archive storage needs.

A survey of Gartner clients reported that none expected its storage budget to decrease in 2008, and that 67 percent expected the budget for storage hardware to increase. Of those polled, 64 percent also expected storage software costs to increase as well.

Just a quick look at backup storage provides a clear view of how storage costs are decreasing. Prices dropped by about 30 percent from 2006 to 2008, according to Gartner.

But then cheap storage isn't really cheap when additional management costs and increased power and cooling costs are factored in.

Enterprises that choose to retain everything run the risk of significant future costs, Gartner reported. Also, the longer information is saved, the harder it is to discern value, according to the survey.


Information-Access Tools

Companies need to create a clear distinction on which data should be saved on primary storage and what data should be housed on cheaper secondary storage, as the costs vary greatly in terms of hardware and software.

Gartner provided a scenario, using a rough estimate of $5 per gigabyte for backup storage and a generation rate of 10 gigabytes per employee per year. A 5,000-worker company faces costs of $1.25 million for five years of storage with those financials. Cutting the amount of data by 80 percent could save about $1 million over five years and lower the organization's liability, noted the report.
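Gartner's arithmetic is easy to check. The short Python sketch below reproduces the scenario's numbers; the 5,000-employee head count, the $5 per gigabyte figure, and the 10 GB per employee per year rate come from the report as described above, and the cumulative reading of the five-year cost is our interpretation.

```python
# Back-of-the-envelope check of the Gartner scenario described above.
COST_PER_GB = 5             # dollars per gigabyte of backup storage
GB_PER_EMPLOYEE_YEAR = 10   # data generated per employee per year
EMPLOYEES = 5_000
YEARS = 5

# After five years, the company is holding five years' worth of data.
total_gb = GB_PER_EMPLOYEE_YEAR * EMPLOYEES * YEARS   # 250,000 GB
total_cost = total_gb * COST_PER_GB                   # $1,250,000

# Trimming the retained data by 80 percent, per the report.
savings = int(total_cost * 0.80)                      # $1,000,000

print(f"Five years of data:  {total_gb:,} GB")
print(f"Storage cost:        ${total_cost:,}")
print(f"Saved by an 80% cut: ${savings:,}")
```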
Information-access technologies include a wide range of tools such as enterprise search, content analytics, and social search. Integration and deployment typically require some expertise, such as an information architect, to get the tools in place and working well, Andrews explained. And it's not cheap, either, as products can range from $10,000 to millions of dollars for advanced applications.

Still, companies can offset the costs through storage savings, as well as benefits from improved business processes.

The first step, according to Gartner, is to initiate and develop a content valuation process. "This is determining what's important to keep and how a company decides what to keep," explained Andrews.

"It means establishing criteria on what data is to be stored, where, and why," he added. "Cheap storage is expensive when it's storing data that doesn't need to be stored."

A good best practice is establishing a content-valuation policy on legacy data and making sure what's stored requires that storage investment.

While some enterprises are using information-access technologies, the majority is not at this point, according to the research firm. But sooner or later companies will realize the waste taking place in storage and the costs of retaining data that has no value, Andrews said. He adds, "Is what you're storing on tape valuable at this point? Because if it's not, then you don't need it."
Three Acronyms That Could Change the Storage World
By Henry Newman

A lot of claims have been made lately of disruptive storage technologies, but saying a particular company is disruptive is a long way from Clayton Christensen's original definition. Very few individual companies have changed the industry, and one big reason is that everyone wants a standards-based product, and standards require multiple companies to create them. Once a product is created that might be a disruptive technology, lots of other players jump into the mix.

Clearly, disruptive technologies are not an everyday event, nor are they easy to predict. Let's examine some technologies that might significantly change enterprise storage and disrupt the market. I won't adhere to the strict definition, but I am going to suggest some technologies that if adopted could change the enterprise storage market. As I said, I think very few companies are going to be able to create a new technology market from a technology without a standard that others can use. Even Microsoft, for example, supports all types of standards, from SATA (T13) and FC/SCSI (T11) to IETF standards. No company can be an island today.

So without further ado, here are three things that I think will be truly disruptive to the enterprise storage market.

FCoE

Fibre Channel over Ethernet (FCoE) is my No. 1 pick for a technology that could change enterprise storage in dramatic ways.

Today, any higher performance or higher reliability storage data moves over Fibre Channel. Fibre Channel has been around for 10 years or so as the de facto storage medium in the enterprise. iSCSI, in my opinion, has never taken reasonable market share because of the overhead both for CPU and packetization (the TCP/IP encapsulation uses a significant part of the packet for small I/O requests).
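Newman's packetization point can be made concrete with a little arithmetic. The sketch below compares approximate per-frame header costs for iSCSI (Ethernet + IP + TCP + iSCSI header) against FCoE framing at two I/O sizes. The header sizes are round textbook figures, not measurements of any particular stack, and the sketch ignores MTU limits and fragmentation.

```python
# Rough per-frame encapsulation overhead for iSCSI vs. FCoE.
# Header sizes are approximate round figures, not from any particular stack,
# and MTU limits / fragmentation are ignored to keep the comparison simple.
ETH, IPV4, TCP, ISCSI_BHS = 14, 20, 20, 48    # bytes
ISCSI_OVERHEAD = ETH + IPV4 + TCP + ISCSI_BHS  # ~102 bytes per frame
FCOE_OVERHEAD = ETH + 14 + 24 + 8              # FCoE + FC framing, ~60 bytes

def overhead_pct(payload: int, overhead: int) -> float:
    """Share of the wire consumed by headers for one frame."""
    return 100.0 * overhead / (payload + overhead)

for payload in (512, 8192):
    print(f"{payload:>5}-byte I/O: "
          f"iSCSI ~{overhead_pct(payload, ISCSI_OVERHEAD):.1f}% header overhead, "
          f"FCoE ~{overhead_pct(payload, FCOE_OVERHEAD):.1f}%")
```

At 512 bytes, TCP/IP encapsulation eats roughly a sixth of the wire; at large transfer sizes the difference all but disappears, which is exactly why the overhead matters most for small I/O requests.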
If FCoE happens, Fibre Channel connectivity to storage will be a thing of the past and we will have one network fabric for communications and storage. Even this year, as FC interface-based disk drives are being replaced by SAS, FC chipsets shipped are declining. FC chipsets never achieved the cost factor that Ethernet chipsets achieved because FC was never considered a commodity technology — it was always a higher-priced storage interconnect. Every computer from your laptop to a large SMP server has Ethernet built in. That is not true and has never been true for Fibre Channel.
FCoE will reduce costs in a number of ways:

• Cost per port: Although 10GbE is likely a bit higher than 4Gbit FC in the cost per Gbyte/sec, that trend will not last long. I suspect this will be changing, and so does most of the industry.

• Personnel: Today you have a storage networking group and an IP networking group in most large organizations. They are separated, as the people must deal with different technologies, training, patches, pricing, and so on. Having a single group of people that can do the same things will save money.

• In my opinion, much of the Fibre Channel community sees the writing on the wall; otherwise they would not have such broad participation in the FCoE community and standards.

FCoE, when deployed, will change the storage networking world. The first steps will be the host side connections and switches and RAID controllers, and then will come the other peripheral devices such as tape drives. FCoE means that Ethernet gets a larger market and, in my opinion, it will likely mean the end of the line for InfiniBand, as the combined FC and Ethernet market is just far too large a commodity market.

OSD

I have been writing about object-based storage for several years now, and I am a big proponent of T10 OSD, given the problems I see regularly with fragmentation.

OSD has a long way to go before it could be disruptive. There is not as much momentum behind OSD as there is for FCoE. I think part of the problem is that the problems OSD solves are not as easily understood as the problems that FCoE solves, and because OSD is solving bigger, more complex problems, it requires larger infrastructure changes to file systems, drivers, storage controllers, and disk drives. I still believe that OSD solves many of the bigger problems that most sites face for the management of the life of data from creation, to backup/archiving, restoration, deletion, and everything in between, including data protection and security. I believe OSD is coming to a system near you, but it is going to take some time.

pNFS

I am a big proponent of this technology, and it has some broad implications.

In today's world we have SAN storage and NAS storage. Everyone knows that SAN-based storage is faster than NAS for lots of good reasons, not the least of which is that the NFS protocol was not really designed to deliver high-performance streaming or I/O. NFS was designed to solve a different problem.

When NFSv4.1 is implemented and released, the ability to have SAN performance on NAS equipment could become a reality. Of course, the NAS equipment would need to be redesigned to deliver SAN performance, and most NAS equipment is not designed that way, as NFS is the bottleneck, but this would allow a merging of the technologies. In addition, many environments are going to shared file systems for clusters of systems. NFSv4.1, if it lives up to its billing, would allow high performance access from many nodes to a file system.
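To see what pNFS changes, consider a toy model of the idea: a metadata server hands the client a stripe layout, and the client then moves data to and from the storage nodes directly, in parallel, rather than funneling every byte through one filer head. This is an illustration of the concept only, not the NFSv4.1 wire protocol; the server names and stripe size are invented.

```python
# Toy model of the pNFS concept: a metadata server hands out a layout,
# then the client reads stripes directly from the data servers in parallel.
# This illustrates the idea only -- it is not the NFSv4.1 wire protocol.
from concurrent.futures import ThreadPoolExecutor

DATA_SERVERS = ["ds0", "ds1", "ds2", "ds3"]   # hypothetical storage nodes
STRIPE_SIZE = 64 * 1024                       # 64 KB stripe unit

def get_layout(file_size: int) -> list[tuple[str, int, int]]:
    """Metadata server: map each stripe to (server, offset, length)."""
    layout = []
    for i, offset in enumerate(range(0, file_size, STRIPE_SIZE)):
        length = min(STRIPE_SIZE, file_size - offset)
        layout.append((DATA_SERVERS[i % len(DATA_SERVERS)], offset, length))
    return layout

def read_stripe(server: str, offset: int, length: int) -> bytes:
    # Stand-in for a direct read from one storage node.
    return bytes(length)

def parallel_read(file_size: int) -> bytes:
    layout = get_layout(file_size)
    with ThreadPoolExecutor(len(DATA_SERVERS)) as pool:
        stripes = pool.map(lambda s: read_stripe(*s), layout)
    return b"".join(stripes)

print(len(parallel_read(1_000_000)))  # 1000000 -- no single filer head in the path
```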
Of course, you will need a high performance file system to support the high-speed access, and that could be a problem for some vendors, but the tools are there. I believe NFSv4.1 will be disruptive, as it will merge the SAN and NAS worlds together over time (yet another argument in favor of an IP-based storage world). NAS vendors are going to have to build faster hardware and better file systems, and SAN vendors are going to have to team with file system vendors to develop joint products. This will all be very interesting, and I believe it could also help OSD, as larger, higher performance file systems likely will have more of the issues that are solved by OSD.

I am very skeptical of claims by vendors that their technology is disruptive, as I have seen far too many such claims never pan out, but we've covered a few technologies here that could turn out to be genuinely disruptive, and the implications for storage networks are very interesting.
Sorting Out Your Storage Options
By Jennifer Schiff

DAS, SAN, NAS... RAID, MAID, solid state technologies, grid storage, hard disk drive storage, tiered storage, tape storage... active storage, archival storage, remote storage, disaster recovery... self-healing disk drives, virtualization, de-duplication, thin provisioning...

It would take pages just to list all the types, makes, and models of enterprise storage options currently on the market. Then add a list of the features and benefits of each one and it's almost enough to make a storage administrator in search of a new, additional, or supplemental storage system long for the days when a storage solution was whatever came with your server. Almost.

To make it easier for you to cut through at least some of the storage-decision-making clutter and make an informed purchasing decision, EnterpriseStorageForum.com spoke with a few storage analysts to gather advice to narrow down the number of choices and help you find a storage solution that's right for your enterprise.

What's Your Problem?

Before you even talk to a vendor, "you have to determine what problem it is that your enterprise is trying to solve... and define your pain points and requirements," said Ashish Nadkarni, principal consultant at GlassHouse Technologies.

You also need to know what it is you are actually storing, that is, how much and what kind of data (e.g., file-based, block-based, structured or unstructured), said Mark Peters, an analyst with Enterprise Strategy Group.

Other good questions to ask yourself and the people who will be using the storage, he said, include: How do you plan on utilizing this storage system? Is it for active storage or backup or archiving — or remote storage or disaster recovery? What applications are you running? Do you want the system to be automated? Do you need it to be scalable? How important are speed and performance?
"You need to start from what you want rather than what a vendor or group of vendors is trying to tell you [you need]," said Peters, who added that "the challenge for the user is to actually know and define what it is they want."

To aid in that process, Greg Schulz, founder of and senior analyst at Storage IO, highly recommended drawing up a list divided into three columns or categories. In the first column should be those features and functionality you must have; in the second, those things you want or need to have; and in the third, the features that would be nice to have.
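As an illustration, Schulz's three-column exercise can be captured as simply as this; the entries are examples only (several are drawn from his must-have list quoted below), not a recommendation.

```python
# Illustrative sketch of the three-column requirements list described above.
# The entries are examples, not recommendations.
requirements = {
    "must_have":    ["RAID 6", "failover", "redundant controllers",
                     "ease of management", "tape support"],
    "want_to_have": ["thin provisioning", "snapshots"],
    "nice_to_have": ["de-duplication"],
}

def qualifies(product_features: set[str]) -> bool:
    """A candidate system is only worth pursuing if it covers every must-have."""
    return all(f in product_features for f in requirements["must_have"])

vendor_x = {"RAID 6", "failover", "redundant controllers",
            "ease of management", "tape support", "snapshots"}
print(qualifies(vendor_x))  # True -- now weigh the want/nice columns and price
```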
For Schulz, must-haves include "availability, reliability, some level of performance, some level of capacity and scalability," specifically "RAID 1, RAID 5, RAID 6, failover, redundant controllers, ease of management, tiered storage, different types of drives (fast drives and slow drives) and tape." Yes, even tape, which all three analysts said isn't going away any time soon — and is actually a good, economical, "green" storage solution.

Things that fall into Schulz's want-to-have or nice-to-have bucket include de-duplication, thin provisioning, and snapshots, features that have generated a lot of buzz and may be very helpful but aren't absolutely essential to storing data.

Above all, said all three analysts, stay focused on the essentials. If you happen to find a solution that meets all of your must-have requirements and can also provide you with some of your want-to-have or nice-to-have features — at the right price — then go for it.

Remember, "it's what you want out of a solution, not what a company wants to sell you," stressed Nadkarni. For example, if a certain amount of capacity is a must-have requirement, focus on that. If compliance is your main issue, make sure the solution you choose has a good track record when it comes to compliance. If you are looking for a disaster recovery solution, stay focused on that. And be sure to validate vendor claims by checking with customers and reviewing test results (for things like performance) if a company is new.

Watch for the Warning Signs

While the analysts we spoke with believed good enterprise solutions far outweigh the bad, it's still possible to make a bad choice, particularly if you ignore your must-have list, base your decision on marketing hype, are too emotionally involved with a vendor or brand, act too quickly, or go with the low-ball quote without taking into account the total cost of ownership and whether the system actually addresses most of your needs.
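Total cost of ownership is worth making concrete before comparing quotes. The sketch below weighs a hypothetical low-ball quote against a pricier but cheaper-to-run system over five years; every figure in it is invented for illustration.

```python
# Hypothetical five-year TCO comparison -- all figures invented for illustration.
def five_year_tco(purchase, annual_power_cooling, annual_admin, annual_maintenance):
    return purchase + 5 * (annual_power_cooling + annual_admin + annual_maintenance)

low_ball = five_year_tco(purchase=80_000, annual_power_cooling=12_000,
                         annual_admin=30_000, annual_maintenance=15_000)
pricier  = five_year_tco(purchase=120_000, annual_power_cooling=7_000,
                         annual_admin=18_000, annual_maintenance=10_000)

print(f"Low-ball quote: ${low_ball:,}")   # $365,000
print(f"Pricier system: ${pricier:,}")    # $295,000
```

In this made-up case, the cheaper sticker price loses by $70,000 once running costs are counted, which is exactly the trap the analysts warn about.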
So what are some easy ways to avoid making a bad storage decision?

"When you hear the words revolutionary, the only, the first, or the fastest or the most reliable, the alarm bells should be going off," said Schulz. If vendors make claims about having the best or the fastest performance, "have them back it up by showing you [using test results from organizations like SPEC, Microsoft ESRP, FPC, or TPC] and by comparing their performance to others.... And make sure it's an apples-to-apples comparison, not apples to oranges."

Another safeguard is to test out all the systems you are considering — or at least see one in action at a customer site — and to speak with customers who've been using that solution for at least a few months.

"Don't settle for a WebEx demo," said Schulz. "Get your hands on a system if you can. Ask questions. Ask for references... but ask to hear about a story that didn't quite go well, though the customer still ended up buying."

The truth is, he said, that "every vendor out there will have problems at some point or another. And any vendor that tells you they've never had a problem, they've never had an interruption, that's an alarm bell. All vendors have issues. All technologies have issues at some point in time. What separates the vendors is how they respond to those issues. How do they prevent them from recurring? How do they manage them? And then also have they improved their technology?"

Speaking of which, because technologies get updated or replaced all the time, before you buy anything, "grill the vendors on what their product road map is," said Nadkarni. "For example, if they just released a product a year ago, then there's a very good chance that in the next year they're not going to have anything very drastic coming out that will replace that product. [But] if the product has been in the market for a few years, the vendor may be [coming out] with a brand new, completely redesigned product that's going to replace any and all products," which could pose a problem, he said. That's why it pays to see a product road map, to help you determine if the system you are considering is going to need to be upgraded or replaced sooner rather than later.
As an example, Nadkarni points to modular storage arrays. Vendors, he said, have been moving "away from the old loop-based backend drives to a point-to-point system. [But] there are a lot of arrays out there in the market that have loop-based drives — and they're all being replaced, slowly, by point-to-point-based drives. So if you're buying a modular storage array, definitely check if that storage array is due for a refresh, because once that point-to-point drive comes out, [your] loop-based one is going to be obsolete, and you're going to have a disruptive upgrade to go from one to another."

That's why establishing a good rapport with a vendor is important — and why you should talk to customers, to see if that vendor will be there for you when you need help, not just during installation.

"We tend to get so embroiled in needs and feeds and speeds that the down-to-earth relationships can get missed," said Peters. "Reputation, references, and (where relevant) experience are crucial... to storage system choice."

And if you do not feel comfortable making an important storage decision on your own, get help, in the form of an independent consultant.

Take Your Time

Above all, be patient when choosing a critical storage system. "Patience is a virtue," said Nadkarni, who thinks that phrase should be an operational guideline. "Never hurry into a large decision. If you are proactive about how you manage your [storage] environment, you will know ahead of time what you need to do — and to purchase — to keep it running.

"If you are under the gun to make a decision quickly, chances are you're going to make a mistake," he said. "But when you have time on your side, you can make sure all your i's are dotted and your t's are crossed — and be more assured that you're making the right decision."

The Cloud Offers Promise for Storage Users
By Marty Foltyn

"Cloud computing" has been ill-defined and over-hyped, yet storage vendors have been quick to trot out their own "cloud storage" offerings, and end users are wondering whether there's significant cost savings in these services for them, particularly in tough economic times.

"Cloud-speak" can be downright confusing. A recent Storage Networking World conference track proclaimed that clouds "are an evolving approach to providing users transparent IT services through a shared infrastructure of pools of systems and services." Clouds "provide a vision of a frictionless economy enabled by lowering the barrier for entry and reducing the penalty for failure." And clouds "are a vehicle to deliver infinite resources, a commitment proportional to need, and cost-effective economies of scale," albeit with a few caveats on existing infrastructure, manageability, and security.

Surprisingly, Gartner considers the amorphous nature of the term to be good news: "The very confusion and contradiction that surrounds the term 'cloud computing' signifies its potential to change the status quo in the IT market," the IT research firm said earlier this year. Gartner perhaps didn't help matters any by defining cloud computing as a "style of computing where massively scalable IT-related capabilities are provided 'as a service' using Internet technologies to multiple external customers."

John Webster, principal IT advisor at Illuminata, simplifies matters by advising users to "think of the cloud as the Internet," delivering services and computing resources.

Oddly enough, storage vendors developed some of the earliest cloud services, although it took another decade for the economics of Internet-based storage to make sense.

"The application's storage is whatever sits up there on that 'ethereal thing,'" said Webster. There are different ways to access that storage, whether as an external service or setting up your own "cloud" inside your enterprise firewall.

While cloud storage is not applicable for tier-one, mission-critical data due to the nature of the information (e-mail, databases, transactional data), private clouds can serve as community storage pools for enterprise backup and archival data. Cloud storage can also be a viable option for static data resulting from applications such as digital content and distribution, video surveillance, or streaming media.

Internal clouds can also offer advantages to users concerned with security issues who are more comfortable with their staff managing data within a corporate intranet than over the Internet.

Especially with new regulations such as the Federal Rules of Civil Procedure (FRCP) on depositions and discovery, using cloud storage in a highly redundant way helps make sure enough copies are lying around based on a policy set. It can also free up expensive human resources now targeted to sorting and cataloging data.

Security experts note, however, that for regulated data moving into any kind of a cloud, security best practices in the areas of encryption, key management and general storage security apply. As outlined in the SNIA Storage Security Best Current Practices Technical Proposal, organizations should "ensure appropriate service-level objectives for virtual storage: 1) match the availability objective for the 'storage cloud' to the application requirements; and 2) match the confidentiality and privacy requirements for the 'storage cloud' to the types of information stored."

Companies might also be wise to examine their storage and retention procedures in light of tracking down data relevant to an e-discovery request. If organizations are already having trouble finding data, cloud computing could potentially create more places one has to look. And storage on external clouds or third-party facilities could also be included in any FRCP requests for backups and disaster recovery copies.
Choosing the Right High-Performance File System
By Drew Robb

There are a lot of high-performance file systems out there: Sun QFS, IBM GPFS, Quantum StorNext, Red Hat GFS and Panasas, to name a few. So which is best? It depends on who you ask and what your needs are.

"We typically compete with NetApp OnTap or OnTap GX, EMC, IBM GPFS, HP Polyserve or Sun's open source research project called Lustre," said Len Rosenthal, chief marketing officer of Panasas Inc. "Although we have replaced systems running Sun's QFS, we have never really competed with them in sales situations."

Rosenthal claims that Quantum StorNext and HP Polyserve can only deal with a maximum of 16 clustered NFS servers, so they don't tend to compete in scale-out NAS bids. Similarly, he said that IBM GPFS and Sun Lustre, which are both parallel file systems like Panasas PanFS, are mainly used by universities and government research organizations for scratch storage, as they don't provide high enough I/O rates or a sufficient range of data management tools such as snapshots.

Tough talk indeed from Panasas. So how do its rivals respond to these claims?

Todd Neville, GPFS offering manager at IBM, said the GPFS installation base is diverse, including HPC, retail, media and entertainment, financial services, life sciences, healthcare, Web 2.0, telco, and manufacturing. Neville is also dismissive of the I/O rate claims.

Greg Nuss, director of the software business line at Quantum, is more emphatic, stating that the statement by Panasas about StorNext's capabilities is completely false.

"Each node in a StorNext cluster can act as an NFS server, each presenting the common file system namespace at the back end," he said. "Today our stated node support is 1,000 nodes, and we support both SAN-attached as well as LAN-attached nodes into the cluster. We have practical installations in the 300 to 400 node range deployed today. We don't typically run into Panasas in the market because StorNext is not typically deployed in scale-out NAS configurations, but rather in high-performance workflow and archive configurations."
HP, meanwhile, also took umbrage at the Panasas claims. The company said that HP Scalable NAS does not have an architectural limit on the number of NAS File Services server nodes that a customer can use in their clusters.

"The stated 16 server node limit is a test limit only," said Ian Duncan, director of marketing for NAS for HP StorageWorks. "HP has a number of NAS File Services customers using clusters with more than 16 server nodes."

Duncan said Panasas, Sun QFS, IBM GPFS, and Quantum StorNext are not true symmetrical file systems, but are cluster file systems based on master servers — whether for metadata operations, locking operations, or both — which are relatively easy to implement as an extension of traditional, single-node systems. However, Duncan believes they suffer from performance and availability limitations inherent in the master server's singular role.

"As servers are added, the load on the master server increases, undercutting performance and subjecting more nodes to loss of functionality in the event of a master server's failure," said Duncan. "By contrast, the 4400 Scalable NAS File Services uses the HP Clustered File System (CFS), which exploits multiple, independent servers to provide better scalability and availability, insulating the cluster from any individual node's failure or performance limitation."

With that out of the way, let's take a closer look at some of these file systems.

Panasas PanFS

The Panasas PanFS parallel file system is an object-based file system designed for scale-out applications that require high performance in both I/O and bandwidth. Unlike NFS or CIFS, which Panasas also supports, PanFS uses the parallel DirectFLOW protocol, which is the foundation of the upcoming pNFS (Parallel NFS) standard, the major advance in the upcoming NFS version 4.1. The key benefit of Panasas parallel storage is said to be superior application performance.

Where NFS servers require that all I/O requests go through a single NAS filer head, PanFS enables parallel transfer of data directly from the clients or server nodes into the storage system. With Panasas, the NAS head is removed from the data path and is no longer the I/O bottleneck. Case in point: Panasas parallel storage is installed with the highest performance computer system in the world, the Roadrunner system at Los Alamos National Lab in New Mexico. It generates close to 100 GB/s to a single shared file system.

"As a result of this architecture, Panasas parallel storage systems scale to thousands of users/servers, tens of petabytes and can generate over 100GB/s in bandwidth," said Rosenthal. "Other key features include its software-based RAID architecture that enables parallel RAID reconstructions that are 5X to 10X faster than most storage systems."

PanFS also includes Panasas Tiered Parity technology, which automatically detects and corrects unrecoverable media errors, which is important during reconstructions. Finally, this file system is optimized for use with many simulation and modeling applications.

Note, though, that Panasas systems are designed for file storage, not block storage. Therefore, it is typically not installed for transaction-oriented applications such as ERP, order entry or CRM. Instead, it tends to be deployed in applications where a large number of users or server nodes need shared access to a common pool of large files.

HP File Services

HP claims superiority by pushing symmetry over parallelism. The product is aimed at medium-sized customers who need to seamlessly increase application throughput far in excess of traditional NAS products and easily grow storage capacity online without service disruption. HP StorageWorks 4400 Scalable NAS File Services includes an HP StorageWorks 4400 Enterprise Virtual Array with dual array controllers and 4.8 TB of storage, three file serving nodes, management and replication software, and support for Windows or Linux. With three file serving nodes and dual array controllers, the 4400 Scalable NAS File Services does not have a single point of failure.
Downsides? "The 4400 Scalable NAS File Services is less suitable for high-performance computing applications that require more than 6 GB/sec of throughput," said Duncan.

Quantum StorNext

StorNext is certainly the platform of choice for anyone using Apple. Further, in media-rich environments where Apple, Windows, and other systems must interact, StorNext appears to have the market cornered. For example, StorNext is commonly used in demanding video production and playback applications because of its ability to handle the large capacity and frame rates of high-definition content. How does it do beyond that niche?

"The key differentiators between StorNext and other shared file systems are our tight level of integration with the archive tier (StorNext Storage Manager) along with the robust tape support, as well as the broad OS platform support," said Nuss. "No other file system can support varieties of Linux, Unix, Apple and Windows within a single cluster environment."

The StorNext file system is a heterogeneous, shared file system with integrated archive capability. It enables systems to share a high-speed pool of images, media, content, analytical data and other files so they can be processed and distributed rapidly, whether SAN or LAN connected. According to Nuss, it excels at both high-performance data rates and high capacity in terms of file size as well as the number of files in the file system.

IBM GPFS

The General Parallel File System (GPFS) from IBM has been out for a few years. "GPFS is a high-performance, shared disk, clustered file system for AIX and Linux," said John Webster, an analyst at Illuminata Inc.

Originally designed for technical high performance computing (HPC), it has since expanded into environments that require performance, fault tolerance and high capacity, such as relational databases, CRM, Web 2.0 and media applications, engineering, financial applications and data archiving.

"GPFS is built on a SAN model where all the servers see all the storage," said Neville. "To allow data access from systems not attached to the SAN, GPFS provides a software simulation of a SAN, allowing access to the data using general purpose networks such as Ethernet."

Data is striped across all the disks in each file system, which allows the bandwidth of each disk to be used for service of a single file or to produce aggregate performance for multiple files. This performance can be delivered to all the nodes that make up the cluster. GPFS can also be configured so that there are no single points of failure. On top of the core file service features, GPFS provides functions such as the ability to share data between clusters and a policy-based information life cycle management (ILM) tool where data is migrated among different tiers of storage, which can include tape.
In addition, GPFS can be used at the core of a file-serving NAS cluster where all the data is served via NFS, CIFS, FTP, or HTTP from all nodes of the cluster simultaneously. Further nodes or storage devices can be added or removed from the cluster as demands change. The IBM Scale Out File Services (SoFS) offering, based on GPFS, includes additional functionality.

"As file-centric data and storage continues to expand rapidly, NAS is expected to follow the trend of HPC, Web serving, and other similar industries into a scale-out model based on standard low-cost components, which is a core competency for GPFS," said Neville.
Managing Storage in a Virtual World
By Drew Robb

Demand for storage has been growing rapidly for some time to meet ever-expanding volumes of data. And it seems that the more common virtualized servers become, the more storage is required. Together, the two trends — data growth and virtualization — are becoming a potent combination for storage growth.

"Storage capacity continues to grow at a rate of nearly 60 percent per year," said Benjamin Woo, an analyst at IDC. "2008 is likely to represent an inflection point in the way applications and storage will be interfaced. And virtual servers will emerge as the killer application for iSCSI."

Are virtual machines (VMs) accelerating storage growth? According to Scott McIntyre, vice president of software and customer marketing at Emulex, VMware is typically given a larger storage allocation than normal. This acts as an extra reserve to supply capacity on demand to various virtual machines as they are created. In fact, VMware actually encourages storage administrators to provision far more storage than is physically present, for example, giving each of 20 VMs a 25 percent share of capacity. And it also makes it easier to provision away an awful lot of storage.
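The arithmetic of that example shows how quickly thin provisioning can over-commit a datastore. In the sketch below, the 20-VM, 25-percent figure comes from McIntyre's example; the physical array size is invented.

```python
# Over-commitment arithmetic from the example above; the array size is invented.
PHYSICAL_TB = 10        # hypothetical datastore capacity
VMS = 20
SHARE_PER_VM = 0.25     # each VM is promised 25% of the datastore

promised_tb = VMS * SHARE_PER_VM * PHYSICAL_TB   # 50 TB promised
overcommit = promised_tb / PHYSICAL_TB           # 5x over-commitment

print(f"Promised: {promised_tb:.0f} TB against {PHYSICAL_TB} TB physical "
      f"({overcommit:.0f}x over-committed)")
```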


In theory, this is supposed to make storage more efficient by improving utilization rates. But could it inadvertently be doing the opposite?

"VMware virtualized environments do not inherently need more storage than their physical counterparts," said Jon Bock, VMware's senior product marketing manager. "An important and relevant point is that customers do often change the way they use and manage storage in VMware environments to leverage the unique capabilities of VMware virtualization, and their storage capacity requirements will reflect that."

What seems to be happening is that companies are adapting their storage needs to take advantage of the capabilities built into virtual environments. For example, the snapshot capability provided by VMware's storage interface, VMFS (virtual machine file system), is used to enable online backups, to generate archive copies of virtual machines, and to provide a known good copy for rollback in cases such as failed patch installs, virus infections, and so on. While you can do a lot with it, it also requires a lot more space.
Solving Management Headaches

Perhaps the bigger problem, however, is the management confusion inherent in the collision of virtual servers and virtual storage.

"The question of coordinating virtualized servers and virtualized storage is a particularly thorny issue," said Mike Karp, an analyst with Enterprise Management Associates. "The movement toward virtualizing enterprise data centers, while it offers enormous opportunities for management and power use efficiencies, also creates a whole new set of challenges for IT managers."

Virtualization, after all, is all about introducing an abstraction layer to simplify management and administration. Storage virtualization, for example, refers to the presentation of a simple file, logical volume, or other storage object (such as a disk drive) to an application in such a way that allows the physical complexity of the storage to be hidden from both the storage administrator and the application.

However, even in one domain — such as servers — this "simple layer" can get pretty darn complicated. Just take a look at what it does to the traditional art of CPU measurement, using as an example an IBM micropartition in an AIX simultaneous multi-threaded (SMT) environment that consists of two virtual CPUs in a shared processor pool. This partition has a single process running that uses, let's say, 45 seconds of a physical CPU in a 60-second interval. When you come to measure such an environment, it presents some challenges. The results can be different, for example, if SMT is enabled or disabled, and if the processor is capped or uncapped.

The CPU statistic %busy represents the percentage of the virtual processor capacity consumed. In this example, it might come out as 37.5 percent. Now take another CPU measurement, this time by LPAR (Logical Partition), known as %entc. This represents the percentage of the entitled processor capacity consumed, and it comes out as 75 percent. Take another metric, %lpar_pool_busy, which is the percentage of the processor pool capacity consumed. It comes out at only 18.75 percent. Or %lpar_phys_busy — the percentage of the physical processor capacity consumed. It scores 9.38 percent. And there are other metrics that might show completely different results.
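The divergence among those counters is pure denominator arithmetic, as the sketch below shows. The entitlement (1.0 processor), pool size (four processors), and machine size (eight processors) are assumptions chosen to reproduce the percentages quoted above; the article does not state them.

```python
# One workload, four utilization figures: it is all in the denominator.
# Entitlement, pool size, and machine size are assumptions chosen to
# reproduce the percentages quoted above; the article does not state them.
cpu_seconds, interval = 45, 60   # 45 s of physical CPU in a 60 s window

denominators = {
    "%busy (2 virtual CPUs)":          2 * interval,
    "%entc (entitlement 1.0)":         1.0 * interval,
    "%lpar_pool_busy (4-CPU pool)":    4 * interval,
    "%lpar_phys_busy (8-CPU machine)": 8 * interval,
}
for metric, capacity in denominators.items():
    print(f"{metric:34s} {100 * cpu_seconds / capacity:6.2f}%")
# 37.50, 75.00, 18.75, 9.38 -- each correct from its own perspective
```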

"A capacity planner might look at one score and think utilization is low, whereas another takes a different view and sees an entirely different picture," said Jim Smith, an enterprise performance specialist at TeamQuest Corp. of Clear Lake, Iowa. "So who's right? It's not an easy question to answer with virtualized processors. Each answer is correct from its own perspective."

Finding the Root Cause

To make things more challenging, there is the ongoing trend of marrying up virtual servers with virtual storage. That means having to manage across two abstraction layers instead of one. Now let's suppose something goes wrong. How do you find out where the problem lies? Is it on the application server, on the storage, on the network or somewhere in between?

"Identifying the root cause of the problem that potentially could be in any one of several technology domains (storage, servers, network) is not a problem for the faint of heart and, in fact, is not a problem that is always solvable given the state of the art of the current generation of monitoring and analysis solutions," said Karp. "Few vendors offer solutions with an appropriate set of cross-domain analytics that allow real root cause analysis of the problem."

EMC — majority owner of VMware — starts to look pretty smart now for its acquisition of Smarts a little while back. It is heading down the road of being able to provide at least some of the vitally needed cross-virtualization management. And NetApp is heading down the same road with the acquisition of Onaro.

"Onaro extends the NetApp Manageability Software family, as SANscreen's VM Insight and Service Insight products help minimize complexity while maximizing return," said Patrick Rogers, vice president of solutions marketing at NetApp. "These capabilities make Onaro a key element in NetApp's strategy to help customers improve their IT infrastructure and processes."
For virtual machine environments, VM Insight provides virtual machine-to-disk performance information to optimize the number of virtual machines per server. For large-scale virtual machine farms, this type of cross-domain analytics assists in maintaining application availability and performance. SANscreen Service Insight makes it easier to map resources used to support an application in a storage virtualization environment. It provides service level visibility from the virtualized environment to the back-end storage systems.

Meanwhile, the management of multiple virtualization technologies is coming together under the banner of enterprise or data center virtualization. This encompasses server virtualization, storage virtualization, and fabric virtualization.

"IT managers are increasingly considering the prospect of a fully virtualized data center infrastructure," said Emulex's McIntyre. "One of the characteristics of enterprise data centers is the existence of storage area networks. There is a high degree of affinity between SANs and server virtualization, because the connectivity offered by a SAN simplifies the deployment and migration of virtual machines."

SAN-based storage can be shared between multiple servers, enabling data consolidation. Conversely, a virtual storage device can be constructed from multiple physical devices in a SAN and be made available to one or more host servers. Not surprisingly then, not only are storage devices being virtualized, but increasingly there is interest in virtualizing the SAN fabric itself in order to consolidate multiple physical SANs into one logical SAN, or segment one physical SAN into multiple logical storage networks.

Emulex, for example, is providing the virtual plumbing to handle some of the connectivity gaps between storage and server silos. Emulex LightPulse Virtual HBA technology virtualizes SAN connections so that each virtual machine has independent access to its own protected storage.

"The end result is greater storage security, enhanced management and migration of virtual machines and the ability to implement SAN best practices such as LUN masking and zoning for individual virtual machines," said McIntyre. "In addition, Virtual HBA Technology allows virtual machines with different I/O workloads to co-exist without impacting each other's I/O performance. This mixed workload performance enhancement is crucial in consolidated, virtual environments where multiple virtual machines and applications are all accessing storage through the same set of physical HBAs."

No doubt over time, more and more of the pieces of the virtual plumbing and a whole lot more analytics will have to be added to the mix to make virtualization function adequately in an enterprise-wide setting. Until then, get ready for an awful lot of complexity in the name of simplification.

"It is absolutely necessary to understand the topology, in real time — or at the very least, in near real-time — in order both to identify problems and to manage the entire environment proactively as a system and preempt problems," said Karp. "In a best-case scenario, a constantly updated topology map would be available for each process being monitored."
The Trouble with Virtual Disaster Recovery
By Richard Adhikari

As enterprises virtualize their data centers to cut costs and consolidate their servers, they may be setting themselves up for big trouble.

According to the latest disaster recovery research report from Symantec, based on surveys of 1,000 IT managers in large organizations worldwide, 35 percent of an organization's virtual servers are not included in its disaster recovery (DR) plans.

Worse yet, not all virtual servers included in an organization's DR plan will be backed up. Only 37 percent of respondents to the survey said they back up more than 90 percent of their virtual systems.

When companies virtualize, they need to overhaul their backup and DR plans, Symantec says; the survey found that 64 percent of organizations are doing so.

"That's no surprise, because virtualization has had a huge impact on the way enterprises do disaster recovery," said Symantec senior product marketing manager for high availability and disaster recovery Dan Lamorena.

So why are virtual servers being left out of DR plans, or, if they're included, aren't being backed up? That's because enterprise IT just does not have the right tools to back up virtual servers, according to Symantec.

The biggest problem for 44 percent of North American respondents was the plethora of different tools for physical and virtual environments. There are so many that IT doesn't know what to use and when.

Another 41 percent complained about the lack of automated recovery tools. Much of the disaster recovery process is manual, although VMware recently unveiled a tool to automate the run book.

Another 39 percent of respondents said the backup tools available are inadequate.

Hewlett-Packard, IBM, CA, and smaller vendors such as ManageIQ, Avocent, and Apani offer tools to manage both the virtual and physical environments. And companies like Hyperic are bringing out new tools.
However, virtual server management tools, being relatively new, are not as sophisticated as their counterparts for the physical environment. Also, they have not been around long enough for users to be familiar with them. For example, provisioning, or setting up, virtual machines from physical ones and vice versa can also be a problem, and tools for this have only recently emerged.

"Virtualization makes some aspects of backup and disaster recovery more difficult," said Symantec senior product marketing manager for NetBackup Eric Schou. "IT shops are still struggling with the steep learning curve."

Porting over solutions from the physical environment won't work, Schou said. "IT shops need to get solutions that are finely tuned for virtualization," he added.

Failing DR

Judging from the results of the survey, IT is still not as familiar with DR as it should be. DR testing is a mess.

A whopping 30 percent of respondents said their DR tests failed. That's better than the 50 percent failure rate in 2007, but it's still pretty scary.

For 35 percent of the respondents, the tests failed because "people didn't do what they were supposed to do," Lamorena said. This means that much of recovery is still a manual process, and companies must begin looking at automation, he said.

Another cause is that tests are not run frequently enough. That's because "when you run a test, it disrupts employees and customers," Lamorena said. He added that 20 percent of the respondents said their revenue is hurt by DR tests, so "the tests cause the same pain to their customers as if they had a real disaster."

Finally, the survey found that top-level executive involvement in DR planning has fallen. "Last year, the C-level involvement on disaster recovery committees was 55 percent; this year, it's 33 percent," Lamorena said. C-level executives are CIOs, CTOs and CEOs.

Lamorena finds the reduction in top-level involvement disturbing because it could lead to more problems with DR. "That's a huge drop, and we've been thinking about this day and night," he said. "What's alarming is, companies may be getting a little lax and don't think they'll be affected by a disaster."
