
Performance Comparison:

VERITAS File System 3.4 Patch 2 vs. UNIX File System on Solaris 8 Update 4

www.veritas.com


1. Full fsck
1.1 Introduction
1.2 Test Configurations
1.3 File Sets
1.4 Experiments
1.5 Results
1.5.1 E4000 Configuration Results
1.5.2 Blade 1000 Configuration Results
1.5.3 E4500 Configuration Results
3. SPECsfs97 Network File System Server Benchmark
3.1 Introduction
3.2 Test Configurations
3.3 Overview of Results
3.4 Detailed Results
3.5 SPECsfs97 Defects and Our Use of the Benchmark
4. TPC-C
4.1 Introduction
4.2 Test Configurations
4.3 Results
4.3.1 Buffered I/O File System Configurations
4.3.2 Non-buffered I/O File System Configurations (VERITAS File System Quick I/O vs. UFS CDIO)
4.3.3 Point-in-Time Backup File System Configurations Using Buffered I/O
4.3.4 Point-in-Time Backup Using Non-buffered I/O
5. Miscellaneous Commands
5.1 Summary
5.2 Introduction
5.3 Test Configurations
5.3.1 mkfile, cp
5.3.2 touch_files
5.3.3 uncompress and tar extract
5.4 Overview of Results
5.5 Detailed Results
6. PostMark 1.5 File System Benchmark
6.1 Introduction
6.2 Test Configuration
6.3 Results
7. Sequential File I/O (vxbench)
7.1 Introduction
7.2 Test Configurations
7.3 Experiments
7.4 Results
7.4.1 Scaling With Increasing Concurrency
7.4.2 Scaling With Increasing Number of Columns in a RAID 0 Stripe
1. E6500 (8x8)
2. E6500 (12x12)


Executive Summary
This paper compares the performance of VERITAS File System 3.4 Patch 2 and Sun UNIX File System (UFS) on
Solaris 8 Update 4 (64-bit unless otherwise specified) under the following workloads:
Recover file system after crash (e.g. power failure), fsck:
  69 GB:   UFS 47 Minutes;   UFS+logging Seconds;      VERITAS File System Seconds
  768 GB:  UFS 4.3 Hours;    UFS+logging Seconds;      VERITAS File System Seconds
Recover file system after unusual event (e.g. physical disk corruption), fsck:
  69 GB:   UFS 47 Minutes;   UFS+logging 46 Minutes;   VERITAS File System 6.5 Minutes
  768 GB:  UFS 4.3 Hours;    UFS+logging 4.3 Hours;    VERITAS File System 31 Minutes
Conclusion: Logging is a necessity in today's environment; true for both UFS and VERITAS File System. VERITAS File System has 6 to 7 times faster recovery than UFS with logging.

Since file system availability is critical to system availability, this summary focuses on comparing UNIX File
System with logging against VERITAS File System in the following chart (the report contains details on UFS, UFS
with logging and VERITAS File System results).
NFS File Serving: SFS 2.0
  Ops/sec (ORT):
    On an 8 CPU/8 GB server:    UFS+logging 5,648 (6.4);    VERITAS File System 11,670 (5.2)
    On a 12 CPU/12 GB server:   UFS+logging 5,872 (5.9);    VERITAS File System 13,986 (4.9)
  Conclusion: VERITAS File System has over twice the throughput with better Overall Response Time (ORT). In going to a significantly larger server (8x8 to 12x12), the UFS with logging performance gain was only 4%.

Sequential File I/O (vxbench, 24-column stripes)
  VERITAS File System outperforms UFS+logging:
    8K Reads:    31 to 320%
    64K Reads:   38 to 345%
    8K Writes:   63 to 387%
    64K Writes:  77 to 377%
  Conclusion: VERITAS File System performance lead increases as you access more files concurrently. VERITAS File System performance lead increases as the number of disks in the volume increases.

OLTP (TPC-C on Oracle)
  Peak tpmC:
    Buffered I/O:                                                   UFS+logging 1,557;    VERITAS File System 2,841
    UFS CDIO vs. VERITAS File System QIO/CQIO:                      UFS+logging 3,730;    VERITAS File System 4,447/5,137
    UFS Snapshot vs. VERITAS File System Checkpoint:                UFS+logging 1,211;    VERITAS File System 2,243
    UFS Snapshot+CDIO vs. VERITAS File System Checkpoint+QIO/CQIO:  UFS+logging 1,552;    VERITAS File System 3,499/4,139
  Conclusion: VERITAS File System is 82% faster on buffered I/O. VERITAS File System Quick I/O/Cached Quick I/O is 19% to 38% faster than UFS CDIO. VERITAS File System performance with point-in-time backups is 85% faster with buffered I/O, and 125% to 167% faster with non-buffered I/O.

Small File I/O (PostMark)
  Transactions/sec:
    1 process:                UFS+logging 48;    VERITAS File System (+QuickLog) 218 (247)
    8 concurrent processes:   UFS+logging 36;    VERITAS File System (+QuickLog) 558 (774)
    16 concurrent processes:  UFS+logging 29;    VERITAS File System (+QuickLog) 664 (988)
  Conclusion: UFS+logging does not scale with increasing concurrency. VERITAS File System+QuickLog is about 400%, 2,100% and 3,300% faster than UFS+logging at 1, 8 and 16 processes, respectively.

Miscellaneous File I/O (mkfile, touch_files, tar extract, cp)
  VERITAS File System outperforms UFS+logging:
    mkfile:       72 to 118% faster
    touch_files:  58 to 343% faster
    tar extract:  49 to 177% faster
    cp:           11 to 82% faster
  Conclusion: VERITAS File System shows stronger results as environments become more complex and concurrent commands (mkfile, touch_files, and tar) are run.


Availability: Why Journaling Is Mandatory

fsck: File system consistency check command (Section 1)
UNIX File System fsck performance essentially prohibits its use on high availability servers having large file
systems. Fsck time for 69 GB on a Sun StorEdge D1000 JBOD is high at 47 minutes. A 768 GB UFS fsck on three
Sun StorEdge A5200 JBODs takes about 4.3 hours. A typical server has many file systems, and multiple file
systems are commonly striped across several disks. Because of this, even with parallel fsck, a server with
non-journaling file systems will take many hours to run file system checks. Thus, the use of a journaling file system,
such as UFS with logging or VERITAS File System, should be considered mandatory on any server.

VERITAS File System full file system consistency checks are 6 to 7 times faster than UFS with logging.

VERITAS File System has been tuned to ensure maximum availability while delivering maximum
performance.

Scalability
SFS 2.0: A standard NFS file serving benchmark (Section 3)
On this benchmark, VERITAS File System achieves between 37 and 46 percent greater peak throughput (NFS ops
per second) than UFS, and between 107 and 138 percent greater peak throughput than UFS with logging. Despite
taking on this additional load, the NFS server running VERITAS File System also provided faster overall response
time to individual clients.

VERITAS File System produces over twice the throughput of UFS with logging, while delivering faster
response times.

UFS with logging has a performance penalty and does not scale as well as VERITAS File System to
additional processors.

Sequential File I/O: The vxbench utility measures high-bandwidth sequential disk transfers (Section 7)
When transferring large sequential amounts of data to and from a striped volume using vxbench, VERITAS File
System scales linearly as more disks are added to a striped array. In addition, when presented with a concurrent
load of disjoint sequential data transfers (up to 32 processes), VERITAS File System often outperforms UFS with
logging by 300 to 400 percent.

VERITAS File System scales with increasing concurrency, outperforming UFS with logging by 31 to 406
percent.

VERITAS File System scales with an increasing number of RAID 0 columns, providing consistent throughput in
mixed-traffic environments.

Small File I/O: The PostMark v1.5 benchmark creates, deletes, reads and writes small files (Section 6)
With a single PostMark process, VERITAS File System outperforms UFS with logging by about 350 percent. As
additional PostMark processes are run in parallel to test scalability, the performance advantage for VERITAS File
System increases significantly. The aggregate throughput of PostMark using VERITAS File System increases as
the number of processes increases from 1 to 16, but the UFS with logging aggregate throughput actually
decreases. At 16 processes, VERITAS File System is about 2,200 percent faster than UFS with logging. One
conclusion from this study is that UFS with logging can suffer serious performance degradation with multiple
concurrent processes.

When VERITAS File System is augmented with VERITAS QuickLog, the performance advantages over
UFS+logging increase to about 410, 2,100 and 3,300 percent at 1, 8 and 16 processes, respectively.

TPC-C: A standard OLTP throughput performance benchmark (Section 4)


VERITAS File System buffered I/O achieves greater tpmC throughput than UFS and UFS with logging. VERITAS
File System Quick I/O (QIO) and Cached Quick I/O (CQIO) run faster than Sun's Concurrent Direct I/O (CDIO).
The performance advantages are even more substantial when combined with technology for point-in-time backups
(VERITAS File System Storage Checkpoints vs. UFS Snapshots).

With buffered I/O, VERITAS File System outperforms UFS+logging by 82 percent.

With unbuffered I/O (VERITAS File System Quick I/O vs. UFS CDIO), VERITAS File System outperforms
UFS+logging by 19 percent. VERITAS File System Cached Quick I/O (CQIO) increases this lead to 38
percent.

With buffered I/O alongside Storage Checkpoints (VERITAS File System) and Snapshots (UFS), VERITAS
File System outperforms UFS+logging by 85 percent.

With unbuffered I/O and Checkpoints/Snapshots, VERITAS File System outperforms UFS+logging by 125
percent (with VERITAS File System QIO) and 167 percent (with VERITAS File System CQIO).

Note: VERITAS File System Storage Checkpoints are persistent and UFS Snapshots are not.

Management
Miscellaneous file benchmarks: mkfile, touch_files, tar extract, cp (Section 5)
VERITAS File System outperforms UFS with logging by as much as 343 percent in these tests and enabled better
system scalability.

VERITAS File System consistently outperforms UFS with logging: mkfile (72 to 118 percent), touch_files (58
to 343 percent), cp (11 to 82 percent) and uncompress/tar extract (49 to 177 percent).

In the touch_files benchmark, UFS with logging performs well on single-process operations, but as the
degree of multiprocessing increases, UFS with logging performance lags further and further behind
VERITAS File System.

In summary, VERITAS File System gives you the performance you need during operations as well as quick
recovery during reboots. Sun's CDIO implementation requires Solaris 8 Update 3 or later and Oracle 8.1.7 or
later. The VERITAS implementation works with Oracle 7 and later and Solaris 2.6 and later. VERITAS allows customers
to leverage their existing investments as well as provide for new implementations; there is no need to upgrade
immediately. VERITAS File System 3.4 performance is approximately the same across OS releases.


1. Full fsck
1.1 Introduction
This section evaluates the performance of file system checks (fsck) for VERITAS File System 3.4 Patch 2 and
Sun UNIX File System (UFS) running on the Solaris 8 Update 4 operating system. File system checking must be fast on
high availability servers, which cannot afford long downtimes. Journaling file systems, such as VERITAS File
System and UFS with logging, can usually perform a file system check on the order of seconds, needing only to
replay a log of metadata changes that have not yet been committed to disk. Only if the log has become damaged is
the more thorough "full fsck" required, where the entire file system's content is examined for consistency. In
contrast, checking a file system that does not have the benefit of logging (such as UFS without logging) always
requires the more expensive full fsck.
This section examines the performance of full fsck for UFS, UFS with logging and VERITAS File System on three
different machine and disk configurations. We find that due to the high cost of full fsck, the use of UFS without
logging in a high availability server environment is prohibitive. In such environments, the use of a journaling file
system should be considered mandatory, not a luxury. This conclusion has an important implication for the
performance studies presented in the remainder of this paper: because UFS without logging is not a viable file
system in a high availability server due to fsck time, the primary "baseline" competition to VERITAS File System is
UFS with logging. And as we will see, adding logging to UFS decreases performance in most benchmarks.
In the rare case when a journaling file system (such as UFS with logging and VERITAS File System) is unable to
replay its log during a fsck, the more expensive full fsck is required. At such times, VERITAS File System performs
a full fsck between 6.2 and 7.3 times faster than UFS with logging.
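For reference, the difference between the two cases is visible in the fsck invocation itself. The lines below are an illustrative sketch only, reusing the volume name shown in Section 1.4 (they are not command lines taken from the test logs): by default, fsck on a VERITAS File System volume performs an intent log replay, and the exhaustive structural check measured in this section must be requested explicitly.
fsck -F vxfs /dev/vx/rdsk/fsckvol                # log replay only (the normal, seconds-long case)
fsck -F vxfs -o full -y /dev/vx/rdsk/fsckvol     # force the full structural check measured in this section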

1.2 Test Configurations
Full fsck tests were performed on three systems, each running Solaris 8 Update 4. The first is a Sun E4000 system,
with four 168 MHz UltraSPARC I CPUs and 832 MB of RAM. Due to hardware bugs in some UltraSPARC I chips
that prevented 64-bit addressing, Solaris was booted in 32-bit mode for this machine only. (All other configurations
in this paper use 64-bit Solaris.) Two Sun StorEdge D1000 JBOD arrays, each with twelve Seagate ST39173W
(Barracuda 9LP) 7,200 RPM 9 GB disks, provided disk space. Using VERITAS Volume Manager 3.2, these arrays
were configured into a 12-column stripe-mirror (RAID 1+0) volume of 100 GB. The arrays were each directly
connected to fast-wide SCSI ports on the E4000.
The second test system is a Sun Blade 1000, with two 750 MHz UltraSPARC III CPUs and 1 GB of RAM. A single
Sun StorEdge T3 hardware RAID 5 array provided disk space for the experiment. The T3 contains nine Seagate
ST318304FC (Cheetah 36 LP) 10,000 RPM 18 GB disks, with seven disks used for data and one for redundancy.
Additionally, the T3 was configured to use write-back caching (the default). The array was directly connected to a
built-in Fibre Channel-Arbitrated Loop port on the Blade and configured to a 100 GB volume.
The third system used for fsck testing is a Sun E4500, with eight 400 MHz UltraSPARC II CPUs and 2 GB of RAM.
Three Sun StorEdge A5200 JBOD arrays, each with 22 Seagate ST318304FC (Cheetah 36 LP) 10,000 RPM
18 GB disks, were connected via gigabit fibre. (This study used 64 of the 66 total disks on the three arrays.) Two of
the arrays were connected to the same Sbus board via JNI 1083 cards, while the third was connected to a PCI
board via a Qlogic 2200 card. Using VERITAS Volume Manager 3.2, these arrays were configured into a striped
(RAID 0) volume of 1 TB.

1.3 File Sets
For the E4000 and Blade 1000 configurations, the same file set was used to populate the file system before running
fsck. This file set is a subset of the data produced by a run of the SPECsfs97 benchmark, with 2,870,713 files in
88,772 directories totaling 72,641,776 KB (about 69.3 GB). File sizes range from 0 bytes to 1.35 MB, with a heavy
concentration at most of the power-of-two file sizes. Table 1 shows the file size distribution used in the 100 GB
fsck tests.


File Size Range                      Number of Files
Up to 4K                             1,990,612
>4K to 16K                           473,044
>16K to 64K                          242,107
>64K to 256K                         135,901
>256K to 1MB                         28,063
>1MB to 1.35 MB                      981
About 31 MB (the .tar.gz files)      5

Table 1: File size distribution for 100 GB volume full fsck tests (E4000 and Blade 1000 configurations)

To avoid rerunning SPECsfs97 to produce the file set each time, the files were archived into five .tar files, which
were then compressed using gzip (each .tar.gz file representing one of the five top-level directories produced by
the prior run of SPECsfs97). The five .tar.gz files were each about 31 MB and are included among the files on
which fsck was run.
For the E4500 configuration, a larger (though similar) file set was used. First, the five top-level SPECsfs97
directories were brought into a single .tar file which, when compressed using gzip, is 156 MB. Then, 11 copies of
this .tar.gz file were created. When uncompressed and extracted, the file set totals about 768 GB, with a size
distribution that is summarized in Table 2.

File Size Range                      Number of Files
Up to 4K                             21,896,732
>4K to 16K                           5,203,484
>16K to 64K                          2,663,177
>64K to 256K                         1,494,911
>256K to 1MB                         308,693
>1MB to 1.35 MB                      10,791
About 156 MB (the .tar.gz files)     11

Table 2: File size distribution for 1 TB volume full fsck tests (E4500 configuration)


1.4 Experiments
For each machine configuration, mkfs was used to create a UFS or VERITAS File System file system across the
entire volume. For the E4000 and Blade 1000 configurations, the file systems were 100 GB. For the E4500
configuration, VERITAS File System created a 1 TB file system (2^31 - 1 512-byte sectors). Using default newfs
options, UFS was unable to create a file system encompassing the entire volume. Through trial and error, UFS
eventually succeeded in creating a file system of 1,996,000,000 sectors, or about 951 GB. For VERITAS File
System, the only non-default mkfs option used was large file support. For UFS, the default mkfs options were used.
After mkfs, a file system (either UFS, UFS with logging, or VERITAS File System) was mounted on the volume. The
UFS file system was mounted with either no options (UFS without logging) or with the logging flag (UFS with
logging). The VERITAS File System file system was mounted with default options.
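As an illustration of how the three file system configurations are typically created and mounted on Solaris (a sketch with an assumed mount point; these are not command lines reproduced from the test logs):
mkfs -F vxfs -o largefiles /dev/vx/rdsk/fsckvol       # VERITAS File System with large file support
mount -F vxfs /dev/vx/dsk/fsckvol /mnt                # VERITAS File System, default mount options
newfs /dev/vx/rdsk/fsckvol                            # UFS, default newfs options
mount -F ufs /dev/vx/dsk/fsckvol /mnt                 # UFS without logging
mount -F ufs -o logging /dev/vx/dsk/fsckvol /mnt      # UFS with logging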
After mounting, the file system was populated with the .tar.gz files, as described above. These files were then
uncompressed and extracted. (Section 5 contains timing information for the uncompress and extract steps on each
machine configuration.)
After uncompressing and extracting, the file system was unmounted and a full fsck was run using the following
command:
/bin/time fsck -F fstype -n /dev/vx/rdsk/fsckvol
Note that the command to fsck a UNIX File System volume is the same, whether or not it had been mounted with
logging. Nonetheless, because the volume had been populated using different UFS mount options, the full fsck
times for UFS and UFS with logging differ.
After the fsck, the volume was wiped clean by performing a mkfs for the next volume type in the experiment. Note
that although the mkfs options for UFS do not differentiate between logging and non-logging variants (that is a
mount-time option), we did not take a shortcut of bypassing mkfs when switching between UFS and UFS with
logging tests.


1.5 Results
This subsection presents the results for the experiments described above.

1.5.1 E4000 Configuration Results


Full fsck times for UFS, UFS with logging and VERITAS File System on the E4000 configuration are summarized in
Figure 1. The most important result arises from the full fsck time of UFS without logging: about 47 minutes. This
amount of time is too high to be acceptable in a high availability system. The conclusion is that, due to fsck time,
UFS (without logging) should not be used in a high availability server.
VERITAS File System fsck runs about 615 percent faster than both UFS and UFS with logging. Note that VERITAS
File System spends less time on CPU activities than its UFS counterparts. However, because the VERITAS File
System runs completed in much less time than the UFS runs, CPU time as a fraction of its respective run time is
much higher for VERITAS File System than with UFS. Higher relative CPU utilization indicates that VERITAS File
System is better positioned than UFS to take advantage of faster CPUs. In contrast, UFS fsck spends most of its
time in I/O, and thus will not benefit as much from faster CPUs.

[Chart: Full Fsck Time on E4000 (100 GB 12-Column RAID 1+0 Volume, 69 GB Used), showing real, user, system and user+system times in minutes for UFS, UFS+logging and VERITAS File System. Real times: UFS 46.5 minutes, UFS+logging 46.3 minutes, VERITAS File System 6.5 minutes.]

Figure 1: Full Fsck on E4000 (69 GB used, on a 100 GB file system). VERITAS File System full fsck is 615 percent faster than UFS, and 612
percent faster than UFS with logging. UFS is primarily disk-bound, not CPU-bound, so its performance would not be expected to improve
greatly in the presence of faster (or additional) CPUs.


1.5.2 Blade 1000 Configuration Results

The time to perform a full fsck on the Blade 1000 configuration is summarized in Figure 2. The most important result arises
from the full fsck time of UFS without logging: about 34 minutes. This amount of time is too high to be acceptable in
a high availability system. Due to fsck time, UFS (without logging) should not be used in a high availability server.
In the atypical case where journaling file systems cannot replay their logs during a fsck, a full fsck is required. At
such times, VERITAS File System is about 700 percent faster than UFS and about 670 percent faster than UFS
with logging. Also of interest is the time occupied by CPU activities. UFS and UFS with logging are primarily I/O
bound, spending only about 5 percent of their (longer) run times on CPU activity. Consequently, their full fsck
performance can improve only slightly if given faster (or additional) CPUs.

[Chart: Full Fsck Time on Blade 1000 (100 GB HW RAID 5 Volume, 69 GB Used), showing real, user, system and user+system times in minutes for UFS, UFS+logging and VERITAS File System. Real times: UFS 34.3 minutes, UFS+logging 33.2 minutes, VERITAS File System 4.3 minutes.]

Figure 2: Full Fsck on Blade 1000 (100 GB file system, 69 GB used). VERITAS File System full fsck is about 700 percent faster than UFS and
about 670 percent faster than UFS with logging. VERITAS File System also spends less total time on CPU activity than UFS and UFS with
logging.


1.5.3 E4500 Configuration Results

The time taken to perform a full fsck on the near-terabyte E4500 configuration is summarized in Figure 3. As with the
Blade 1000, the near-terabyte file system exhibits prohibitive full fsck performance on UFS: 4.3 hours. Again, we
conclude that due to fsck time, UFS without logging should not be used in a high availability server that has a file
system of this size.
In the unusual case where VERITAS File System and UFS with logging need to perform a full fsck (rather than a log
replay), VERITAS File System is about 725 percent faster than UFS and UFS with logging. Also of interest is the
time occupied by CPU activities, as a percentage of their respective run times: about 57 percent for VERITAS File
System and about 11 percent for UFS and UFS with logging. This result indicates that VERITAS File System can
benefit in the future from faster (or additional) CPUs, while fsck for UFS and UFS with logging is largely I/O bound
and can benefit only incrementally from faster CPUs.

[Chart: Full Fsck Time on E4500 System (1 TB RAID 0 Volume, 768 GB Used), showing real, user, system and user+system times in minutes for UFS, UFS+logging and VERITAS File System. Real times: UFS 257.0 minutes, UFS+logging 257.1 minutes, VERITAS File System 31.1 minutes.]

Figure 3: Full Fsck on E4500 (1 TB file system, 768 GB used). VERITAS File System full fsck is about 725 percent faster than UFS and UFS
with logging. VERITAS File System also spends less total time on CPU activity than UFS and UFS with logging.


3. SPECsfs97 Network File System Server Benchmark
3.1 Introduction
This section demonstrates the performance benefits of VERITAS File System 3.4 Patch 2 over Solaris 8 Update 4
UNIX File System (UFS) in a Network File System (NFS) version 3 file server environment. To evaluate file system
performance, we used the Standard Performance Evaluation Corporation (SPEC) System File Server (SFS)
benchmark sfs97, also known as SFS 2.0. VERITAS File System obtained peak throughput that was over 100
percent greater than UFS with logging and provided significantly faster response time to client requests.

3.2 Test Configurations
An E6500 system was configured as an NFS version 3 server with two different CPU and memory configurations:
eight 400 MHz UltraSPARC II CPUs with 8 GB of memory (8x8 configuration) and twelve 400 MHz UltraSPARC II CPUs with
12 GB of memory (12x12 configuration). The file systems that were measured were installed on Unisys Clariion arrays,
totaling 198 disks. The disks were 9 GB and 18 GB Seagate Fibre Channel 10,000 RPM drives. The arrays were attached to
12 Sun Sbus Socal HBAs. For each test, 11 file systems were created across the 198 disks, each configured to a RAID 1+0
volume layout of 9 columns.
Both the UFS and VERITAS File System file systems were created with default options. For mounting, the VERITAS File
System runs used large file support, and UFS runs used either no options or the logging option (for UFS with logging runs).
The 14 clients used were Sun Microsystems Netra T1 systems, each with one 400 MHz CPU and 256 MB of RAM. A Cisco
100/1000BaseT network switch (Catalyst 3500XL) was used to network both the clients (via 100BaseT interfaces) and the
NFS server (via a 1000BaseSX interface over fiber-optic cable). The NFS server was configured to a maximum of 1,600 threads, in
line with the recommendation in Cockcroft and Pettit's book, Sun Performance and Tuning, Second Edition.
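The mechanism used to apply the 1,600-thread limit is not shown in the paper; as a rough illustration only (an assumption about the usual approach on Solaris 8, not a detail quoted from the benchmark configuration), the NFS server thread count is the numeric argument given to nfsd, normally set in the /etc/init.d/nfs.server startup script:
/usr/lib/nfs/nfsd -a 1600     # allow up to 1,600 concurrent NFS server threads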

3.3 Overview of Results
Table 3 shows the improvement in peak throughput that VERITAS File System obtained over UFS and UFS with logging.
VERITAS File System provided 37 to 46 percent greater peak throughput than UFS, and 107 to 138 percent greater peak
throughput than UFS with logging.
Configuration    VERITAS File System Improvement Over UFS    VERITAS File System Improvement Over UFS+logging
8x8              46%                                         107%
12x12            37%                                         138%

Table 3: Increase in SPECsfs97 peak throughput obtained by VERITAS File System

3.4 Detailed Results
Table 4 and Table 5 show detailed results of each benchmark run with different UFS and VERITAS File System
mount options. The results show that VERITAS File System had lower CPU utilization than UFS and UFS with
logging, for both the 8x8 and 12x12 configurations. Lower CPU utilization enabled better scalability and higher peak
throughputs. The disk utilization at peak throughputs was similar for all tests, ranging from 20 to 40 percent.
As discussed in Section 3.3, Overview of Results, VERITAS File System provides peak bandwidth that is 37 to 46
percent greater than UFS, and 107 to 138 percent greater than UFS with logging. Despite enabling a higher load on
the server, VERITAS File System provided for Overall Response Time (ORT) that is 21 to 31 percent faster than
UFS, and 20 to 23 percent faster than UFS with logging. (The ORT provides a measurement of how the system
responds, averaged across all server throughputs.) In other words, although the VERITAS File System server is
able to take on a much greater workload (as measured by throughput), it still consistently provides a faster
turnaround time for clients (as measured by ORT).


A comparison of the 12x12 runs for UFS with logging reveals little improvement in peak bandwidth compared to the
8x8 configuration (5,872 for the 12x12 configuration, and 5,648 for the 8x8 configuration). We conclude that UFS
with logging does not scale well as an NFS server; with an additional four CPUs and 4 GB of RAM, peak
throughput improved only 4 percent.
UFS                              UFS+logging                      VERITAS File System
Ops/sec   Msec/op   %CPU         Ops/sec   Msec/op   %CPU         Ops/sec   Msec/op   %CPU
483       2.6       4            482       2.7       4            483       2.7       4
982       3.3       8            982       2.9       9            982       3.9       7
1,980     3.9       18           1,980     4.1       20           1,979     3.5       13
3,011     4.7       30           3,010     5.4       33           3,006     3.8       20
4,005     5.5       42           4,006     6.6       45           3,997     3.9       26
4,988     7.4       52           5,004     7.6       59           4,989     4.2       31
5,991     7.3       65           5,648     29.1      67           5,973     4.7       38
6,967     9.1       80                                            6,952     5.0       44
7,965     16.4      97                                            7,914     5.5       52
7,974     21.9      99                                            8,502     5.6       57
                                                                  8,996     6.5       62
                                                                  9,504     7.0       68
                                                                  10,026    7.1       74
                                                                  11,019    9.0       86
                                                                  11,670    12.7      94
ORT (Msec/op): 6.3               ORT (Msec/op): 6.4               ORT (Msec/op): 5.2

Table 4: SPECsfs97 statistics for 8x8 configuration. In the peak throughput rows (the final row for each file system), VERITAS File System
achieved 46 percent greater throughput than UFS and 107 percent greater throughput than UFS with logging. ORT measurements show that
VERITAS File System services requests about 21 percent faster than UFS and about 23 percent faster than UFS with logging. It should be noted
that VERITAS File System provides faster overall response time despite taking on a much greater overall load. For example, the table shows
that UFS and UFS with logging are able to provide a client response time of 5.5 msec, while servicing an overall load of about 4,000 and 3,000
NFS operations per second, respectively, but no greater. VERITAS File System, by contrast, is able to drive over 7,900 NFS operations per
second at that response time.


UFS                              UFS+logging                      VERITAS File System
Ops/sec   Msec/op   %CPU         Ops/sec   Msec/op   %CPU         Ops/sec   Msec/op   %CPU
482       2.6       3            482       2.7       3            483       2.1       3
982       3.2       6            982       2.9       6            982       2.6       5
1,980     3.2       13           1,980     3.4       14           1,979     2.5       9
3,011     3.9       21           3,009     4.2       22           3,011     2.9       14
4,004     4.8       28           3,995     5.5       29           3,998     3.3       18
4,995     6.0       33           5,001     7.2       36           4,994     3.8       22
5,985     6.8       41           5,872     23.7      45           5,981     4.1       27
6,959     7.4       49                                            6,951     4.4       30
7,947     8.5       60                                            7,933     4.5       34
8,504     9.6       67                                            8,488     4.6       38
9,030     10.9      76                                            9,021     5.1       40
9,551     12.3      84                                            9,532     5.5       44
10,058    13.9      91                                            10,010    5.5       47
10,192    17.4      95                                            10,998    6.3       53
                                                                  12,043    7.0       62
                                                                  12,965    8.7       71
                                                                  13,958    11.1      81
                                                                  13,986    12.5      84
ORT (Msec/op): 6.4               ORT (Msec/op): 5.9               ORT (Msec/op): 4.9

Table 5: SPECsfs97 statistics for 12x12 configuration. In the peak throughput rows (the final row for each file system), VERITAS File System
achieved 37 percent greater throughput than UFS and 138 percent greater throughput than UFS with logging. ORT measurements show that
VERITAS File System services requests about 31 percent faster than UFS and about 20 percent faster than UFS with logging, even though
VERITAS File System is able to service a greater load. Also of note is that the peak bandwidth obtained by UFS with logging is not much higher
than in the 8x8 configuration, indicating that systems that use UFS with logging do not scale well as NFS servers.


Figure 4 and Figure 5 illustrate the bandwidth and response time limits for the various file systems, for the 8x8 and 12x12 machine
configurations, respectively.
[Chart: Laddis 8x8 Configuration. Response time (msec/op) versus throughput (NFS ops/sec) for UFS, UFS+logging and VERITAS File System.]

Figure 4: Throughput and response times, 8x8 configuration. UFS with logging is unable to achieve 6,000 NFS ops/sec, and sees an explosion of
response time after about 5,000 NFS ops/sec. UFS without logging is unable to achieve 8,000 NFS ops/sec, and suffers significant response time
penalties in throughputs greater than about 7,000 NFS ops/sec. VERITAS File System, in contrast, is able to achieve 10,000 NFS ops/sec before
response time begins to increase at a significant rate.


[Chart: Laddis 12x12 Configuration. Response time (msec/op) versus throughput (NFS ops/sec) for UFS, UFS+logging and VERITAS File System.]

Figure 5: Throughput and response times, 12x12 configuration. As with the 8x8 configuration, UFS with logging is unable to achieve 6,000 NFS
ops/sec, and sees an explosion of response time after about 5,000 NFS ops/sec. We conclude that UFS with logging is unable to scale as an NFS
server to greater numbers of CPUs or greater RAM. UFS without logging is unable to scale much beyond 10,000 NFS ops/sec, and suffers
significant response time penalties in throughputs greater than that. VERITAS File System, in contrast, is able to achieve about 13,000 NFS
ops/sec before response time begins to increase at a significant rate.


For reference, Table 6 shows the workload distribution of a typical SFS benchmark run.

NFS Op         Percent
getattr        11%
setattr        1%
lookup         27%
readlink       7%
read           18%
write          9%
create         1%
remove         1%
readdir        2%
fsstat         1%
access         7%
commit         5%
fsinfo         1%
readdirplus    9%

Table 6: SFS 2.0 workload distribution

3.5 SPECsfs97 Defects and Our Use of the Benchmark

The Standard Performance Evaluation Corporation has announced that it has identified defects in the
SPECsfs97 benchmark and is no longer publishing results of that benchmark. At the time of this writing,
SPEC has not published a replacement benchmark. The majority of the defects in SPECsfs97 revolve around
changes in the file working set with changes in the total number of processes used (clients times processes). In this
paper, we used the same number of clients and processes for all SPECsfs97 runs. We also used the same clients,
NFS server, and network, with the only changes made to the file system being benchmarked. We feel that, with
these precautions, the SPECsfs97 benchmark provides a valid means of comparing file systems. Full disclosure of
these SPECsfs97 benchmark runs can be found in Appendix A.


4. TPC-C
4.1 Introduction
This section describes the performance of VERITAS File System 3.4 Patch 2 and Sun UNIX File System (UFS) while
running an Online Transaction Processing (OLTP) workload on Solaris 8 Update 4. Typical OLTP systems involve processing
simple to moderately complex transactions with multiple updates to the database. The benchmark used for this performance
comparison was derived from the commonly known TPC-C benchmark. This document contrasts the performance of
available I/O configurations offered by VERITAS File System and UFS.
The results of this study show that:
For configurations that use the operating system's page cache, VERITAS File System achieves 19 percent greater tpmC
throughput than UFS and 82 percent greater tpmC throughput than UFS with logging.
For configurations that bypass the operating system's page cache (VERITAS File System Quick I/O (QIO) and UFS
Concurrent Direct I/O (CDIO), also known as Database Direct I/O), VERITAS File System achieves 16 percent and 19 percent
greater tpmC throughput than UFS and UFS with logging, respectively. VERITAS File System Cached Quick I/O (CQIO)
increases the tpmC advantage over UFS and UFS with logging to 34 and 38 percent, respectively.
For configurations that manage point-in-time backups (checkpoints) through the operating system's page cache (VERITAS
File System Storage Checkpoints and UFS Snapshots), VERITAS File System achieves 36 and 85 percent greater tpmC
throughput than UFS and UFS with logging, respectively.
For configurations that combine QIO/CDIO with point-in-time backups, VERITAS File System QIO with Storage
Checkpoints achieves 115 and 125 percent greater tpmC throughput than UFS CDIO with Snapshots and UFS with logging
CDIO with Snapshots, respectively. Similarly, VERITAS File System Cached QIO with Storage Checkpoints achieves 155
and 167 percent greater tpmC throughput than UFS CDIO with Snapshots and UFS with logging CDIO with Snapshots,
respectively.

4.2 Test Configurations
Tests were run on a Sun E6500 computer system, with eight 400 MHz CPUs and 8 GB of RAM. Three Sun A5200
JBOD disk arrays were each connected to an Sbus controller on the E6500 via a Sun Sbus Socal HBA.
The TPC-C data, totaling 28.5 GB, was housed on a 300 GB 20-column stripe-mirrored (RAID 1+0) volume, which
was created with VERITAS Volume Manager 3.2. Two Sun A5200 FC-AL arrays provided the disk space for this
volume.
The Oracle redo logs, totaling about 1 GB, were placed on an 18 GB 2-way stripe-mirrored (RAID 1+0) volume on
a separate Sun Socal controller, using the third Sun A5200 array.
The Oracle executables and benchmark code resided on a separate internal disk outside of the volume group under
test.


The following software releases were used in testing:

Oracle 8.1.7 (32-bit)

Solaris 8 Update 4

VERITAS File System 3.4 Patch 2 (August 2001 Solaris release train)

VERITAS Volume Manager 3.2 (August 2001 Solaris release train)

The following VERITAS File System configurations were tested:

Buffered I/O

Quick I/O (QIO)

Cached Quick I/O (CQIO)

Storage Checkpoint (for pointintime backups)

The following UNIX File System configurations were tested:

UFS with and without logging

Concurrent Direct I/O (CDIO).

Snapshot (for pointintime backups)

For the UFS with Snapshot runs, the snapshot space was placed on a separate volume that resided on the same
disks as the TPC-C data. This placement matches that of VERITAS File System's Storage Checkpoints, which
always reside on the same volume as the underlying data.
The Snapshot or Storage Checkpoint was created after the database was initialized, but before the TPC-C
benchmark was run. The UFS Snapshot volume was created in six minutes; the VERITAS File System Storage
Checkpoint was created almost instantaneously.
The VERITAS File System file systems were created with large file support. The UFS file systems were created with
default parameters. For mounting, the VERITAS File System file systems used the largefiles option. The UFS file
systems were mounted with default options, with the following exceptions: UFS with logging runs use the logging
mount option, and CDIO runs use the forcedirectio option. In addition, CDIO runs add the following line to
Oracle's initialization file:
_filesystemio_options = setall
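For concreteness, the mount options described above would take roughly the following form (a sketch using assumed device and mount point names, not the actual volume names from this benchmark):
mount -F vxfs -o largefiles /dev/vx/dsk/tpccdg/datavol /oradata              # VERITAS File System runs
mount -F ufs /dev/vx/dsk/tpccdg/datavol /oradata                             # UFS without logging
mount -F ufs -o logging /dev/vx/dsk/tpccdg/datavol /oradata                  # UFS with logging
mount -F ufs -o forcedirectio /dev/vx/dsk/tpccdg/datavol /oradata            # UFS CDIO runs
mount -F ufs -o logging,forcedirectio /dev/vx/dsk/tpccdg/datavol /oradata    # UFS with logging, CDIO runs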
We configured Oracle to use a 2 GB SGA, which is the maximum available in a 32-bit environment (without
relinking the Oracle executables). Remaining memory is available for VERITAS File System Cached Quick I/O.
The database used in the test was 36 GB, consisting of 52 Oracle data files, including redo logs, indexes, rollback
segments, and temporary and user tablespaces. The database was a fully scaled TPC-C database with a scale
factor of 200 warehouses.
Additional software configuration information (shared memory and IPC settings, and the Oracle parameter file) used
in this benchmark is in Appendix B.
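Appendix B carries the exact values used; purely as a generic illustration of the kind of /etc/system shared memory and semaphore tuning an Oracle configuration of this size requires on Solaris 8 (the numbers below are assumptions, not the benchmark's settings):
set shmsys:shminfo_shmmax=4294967295     # allow a shared memory segment large enough for the 2 GB SGA
set semsys:seminfo_semmni=1024           # semaphore identifiers available to Oracle processes
set semsys:seminfo_semmsl=1024           # semaphores per set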


4.3 Results
4.3.1 Buffered I/O File System Configurations
Table 7 shows the performance improvement realized by VERITAS File System over UFS and UFS with logging, for file
system configurations that use the operating system's page cache. VERITAS File System improves on UFS and UFS with
logging's peak tpmC throughput by 19 and 82 percent, respectively. Figure 6 shows the data in chart form.
VERITAS File System Improvement Over UFS    VERITAS File System Improvement Over UFS+logging
19%                                         82%

Table 7: Comparing file system performance when using buffered I/O

[Chart: TPC-C Performance When Using Buffered I/O. Peak tpmC: UFS 2,386; UFS+logging 1,557; VERITAS File System 2,841.]

Figure 6: Peak tpmC for file system configurations that use the operating system's page cache


4.3.2 Non-buffered I/O File System Configurations (VERITAS File System Quick I/O vs. UFS CDIO)
Table 8 shows the performance of file system configurations that bypass the operating system's page cache,
VERITAS File System Quick I/O (QIO) and UFS Concurrent Direct I/O (CDIO). VERITAS File System QIO
outperforms UFS CDIO by about 16 percent, and UFS with logging CDIO by about 19 percent. VERITAS File
System Cached Quick I/O (CQIO) is faster still, outperforming UFS CDIO by about 34 percent, and UFS with
logging CDIO by about 38 percent. Figure 7 shows the data in chart form.

                    Improvement From Using VERITAS File System QIO    Improvement From Using VERITAS File System CQIO
UFS CDIO            16%                                               34%
UFS+logging CDIO    19%                                               38%

Table 8: Peak tpmC improvements gained by using VERITAS File System for configurations that bypass the operating system page cache (UFS
with Concurrent Direct I/O and VERITAS File System with Quick I/O or Cached Quick I/O)

[Chart: TPC-C Performance, UFS CDIO vs. VERITAS File System QIO/CQIO. Peak tpmC: UFS CDIO 3,844; UFS+logging CDIO 3,730; VERITAS File System Quick I/O 4,447; VERITAS File System Cached Quick I/O 5,137.]

Figure 7: Peak tpmC for file system configurations that bypass the operating system's page cache


4.3.3 Point-in-Time Backup File System Configurations Using Buffered I/O

This section contrasts the performance of VERITAS File System and UFS for file system configurations that use point-in-
time backups (VERITAS File System Storage Checkpoints and UFS Snapshots). In these configurations, a checkpoint or
snapshot is created after the database is initialized, but before the TPC-C benchmark begins execution. Thereafter, the first
time a unit of data is modified, a copy of the pre-modified data is written to the checkpoint or snapshot, thus preserving
(checkpointing) a backup of the database prior to the TPC-C run. Both VERITAS File System Storage Checkpoints and UFS
Snapshots operate using a copy-on-write strategy.
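As an illustration of how the two point-in-time mechanisms are created (assumed checkpoint name, mount point and backing-store path; these commands are not quoted from the benchmark setup), a VERITAS File System Storage Checkpoint is created with fsckptadm on the mounted file system itself, while a UFS Snapshot is created with fssnap and needs a separate backing store:
fsckptadm create before_tpcc /oradata                            # VERITAS File System Storage Checkpoint
fssnap -F ufs -o backing-store=/snapvol/oradata.bs /oradata      # UFS Snapshot with copy-on-write backing store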
Table 9 shows the performance improvement realized by VERITAS File System (with Storage Checkpoints) over
UFS (with Snapshots), when using buffered I/O. In these configurations, VERITAS File System provides 36 and 85
percent greater tpmC than UFS and UFS with logging, respectively. Figure 8 shows the data in chart form.
VERITAS File System Storage Checkpoint Improvement Over UFS Snapshot    VERITAS File System Storage Checkpoint Improvement Over UFS+logging Snapshot
36%                                                                     85%

Table 9: File system performance, VERITAS File System Storage Checkpoint vs. UFS Snapshot

[Chart: TPC-C Performance, UFS Snapshot vs. VERITAS File System Data Full Checkpoint. Peak tpmC: UFS with Snapshot 1,648; UFS+logging with Snapshot 1,211; VERITAS File System with Checkpoint 2,243.]

Figure 8: Peak tpmC for file system configurations that use Snapshot/Checkpoint technology


4.3.4 Point-in-Time Backup Using Non-buffered I/O

This section compares the performance of UFS and VERITAS File System when using both non-buffered I/O and Snapshot
technology. That is, we compare UFS and UFS with logging configurations that use both CDIO and Snapshots, and VERITAS
File System configurations that use both Quick I/O (or Cached Quick I/O) and Storage Checkpoints.
Table 10 shows the performance of these file system configurations. VERITAS File System Quick I/O with Storage
Checkpoint outperforms UFS CDIO with Snapshot and UFS with logging CDIO with Snapshot by 115 and 125 percent,
respectively. VERITAS File System Cached Quick I/O improves performance further, outperforming UFS and UFS with
logging CDIO with Snapshot by 155 and 167 percent, respectively. Figure 9 presents the data in chart form.

                                  Improvement From Using VERITAS File System QIO With Checkpoint    Improvement From Using VERITAS File System CQIO With Checkpoint
UFS CDIO with Snapshot            115%                                                              155%
UFS+logging CDIO with Snapshot    125%                                                              167%

Table 10: Peak tpmC improvements gained by using VERITAS File System for configurations that bypass the operating system page cache
(UFS with Concurrent Direct I/O and VERITAS File System with Quick I/O) and that use Snapshot/Checkpoint technology.

[Chart: TPC-C Performance, UFS CDIO vs. VERITAS File System QIO/CQIO, and UFS Snapshot vs. VERITAS File System Checkpoint. Peak tpmC: UFS CDIO with Snapshot 1,626; UFS+logging CDIO with Snapshot 1,552; VERITAS File System Quick I/O with Checkpoint 3,499; VERITAS File System Cached Quick I/O with Checkpoint 4,139.]

Figure 9: Peak tpmC for file system configurations that use both unbuffered I/O and Snapshot/Checkpoint technology. VERITAS File System
Quick I/O under these configurations provides 115 and 125 percent greater throughput than UFS and UFS with logging CDIO, respectively.
VERITAS File System Cached Quick I/O improves performance even further, providing 155 and 167 percent greater tpmC throughput than
UFS and UFS with logging CDIO, respectively.


5. Miscellaneous Commands
5.1 Summary
This study shows the performance advantages of VERITAS File System over Solaris 8 UNIX File System (UFS)
while running several small benchmarks. VERITAS File System outperformed UFS by as much as 343 percent in
these tests, and enabled better system scalability. The commands examined, and the improvements realized by
using VERITAS File System over UFS and UFS with logging, are:

mkfile: 72 to 118 percent

touch_files: 58 to 343 percent (VERITAS File System has a large advantage over UFS with logging when
performing multiprocess concurrent I/O.)

cp: 11 to 82 percent

uncompress/tar extract: 49 to 177 percent

5.2 Introduction
This report demonstrates the performance benefits of VERITAS File System 3.4 Patch 2 over UFS on a Sun
Microsystems Enterprise server running Solaris 8 Update 4. To evaluate the file system performance, four tests
were used. Two of the tests are standard UNIX commands, mkfile and cp. The third test, touch_files, creates
a file (like the UNIX touch command) and writes the first block of the file. (The source for touch_files is in
Appendix C.) The fourth test extracted large, compressed tar archives.
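The touch_files source itself appears in Appendix C and is not reproduced here; as a rough sketch of the per-file work described above (the 8 KB block size is an assumption made for illustration), each touch_files operation is roughly equivalent in shell terms to creating the file and writing its first block:
dd if=/dev/zero of=/testfs/dir001/file0001 bs=8192 count=1     # create the file and write one leading block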

5.3 Test Configurations
5.3.1 mkfile, cp
A Sun E4500 was configured with four 400 MHz CPUs and 4 GB of memory (4x4). The file systems tested were set up on an
IBM ESS 2105 F20 (Shark) with 128 disks. The Shark was configured as two volumes of RAID 5 and two volumes of RAID
0. The disks are 35 GB IBM 10,000 RPM drives connected by fibre to Brocade switches through the two Shark cluster IBM
AIX systems. The Shark is connected to one Brocade Silkworm switch that is cascaded to a second Brocade Silkworm switch
and then to a JNI FC641063N card in the E4500. Of the 128 disks, 64 are used as Shark RAID 0 and 64 are used as Shark
RAID 5 (7 + 1 hot standby). The file systems are created on VERITAS Volume Manager stripe-mirror (RAID 1+0) volumes.
The stripe-mirror 225 GB volumes over RAID 0 are seven columns (14 disks). The stripe-mirror 225 GB volumes over
RAID 5 volumes are two columns. Both UFS and VERITAS File System file systems were created and mounted using default
options (except mounting of UFS with logging, where the logging mount option was used). The VERITAS File System file
systems were mounted with default options.
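The exact command lines are not listed in the paper; as an illustrative sketch (file names, sizes and mount points assumed), the mkfile and cp measurements amount to timing standard Solaris commands against each file system, with the 1:5 and 1:10 cases running several cp commands in the background in the same way as the extract script shown in Section 5.3.3:
/bin/time mkfile 8g /vxfs_raid0/testfile                       # time creation of an 8 GB file
/bin/time cp /raid0_src/testfile /raid5_dst/testfile.copy      # time a single 1:1 copy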

5.3.2 touch_files
For this benchmark, a Sun UE10000 with twelve 250 MHz CPUs and 6 GB of RAM was configured with two Unisys Clariion
arrays, each containing ten Seagate Cheetah 10,000 RPM 18 GB disks. The disks were organized into a 20-way RAID 0
volume using VERITAS Volume Manager 3.2. Both UFS and VERITAS File System file systems were created and mounted
using default options (except mounting of UFS with logging, where the logging mount option was used).

5.3.3 uncompress and tar extract

These tests were performed on the three systems used for fsck performance comparisons in Section 1. The first
system is a Sun E4000, with four 168 MHz UltraSPARC I CPUs and 832 MB of RAM. Two Sun StorEdge D1000
JBOD arrays, connected via fast-wide SCSI connectors, were configured to a 100 GB RAID 1+0 volume. The
second system is a Sun Blade 1000, with two 750 MHz UltraSPARC III CPUs and 1 GB of RAM. A single Sun
StorEdge T3 hardware RAID 5 array, directly connected to a built-in Fibre Channel-Arbitrated Loop port on the
Blade, was configured to a 100 GB volume. The third system is a Sun E4500, with eight 400 MHz UltraSPARC II
CPUs and 2 GB of RAM. Three Sun StorEdge A5200 JBOD arrays provided 64 18 GB disks, configured for a
striped volume (RAID 0) of 1 TB.
Two different file sets were used in this study. For the E4000 and Blade 1000 configurations, the file set consists of
five compressed (using gzip) archive (tar) files. Each .tar.gz file represents distinct subsets of the data
produced by a run of the SPECsfs97 benchmark. After each file is uncompressed and extracted, the files total
72,641,776 KB of data (about 69.3 GB). The five .tar.gz files were each about 31 MB. For the E4500
configuration, a larger (though similar) file set was used. First, the five top-level SPECsfs97 directories were
brought into a single .tar file that, when compressed using gzip, is 164 MB. Then, 11 copies of this .tar.gz file
were created. When uncompressed and extracted into different directories on the same file system, the file set
totals about 768 GB.
The 69.3 GB and 768 GB file sets are the same as those used in the full fsck study of Section 1. (That
section contains a complete description of the file sizes.) In addition, the machine and disk configurations for the
E4000, Blade 1000 and E4500, as well as the mkfs and mount options that were used to create the file systems,
are the same as described in Section 1.
After mounting, the file system was populated with the .tar.gz files. The files in the 100 GB file systems were
extracted using the following script (the 1 TB file system uses a similar script):
gzcat laddis5.tar.gz | tar xf - &
gzcat laddis6.tar.gz | tar xf - &
gzcat laddis7.tar.gz | tar xf - &
gzcat laddis8.tar.gz | tar xf - &
gzcat laddis9.tar.gz | tar xf - &
gzcat laddis10.tar.gz | tar xf - &
/bin/time wait
After an experiment was completed on a given file system, the volume was wiped clean by performing a mkfs for
the next volume type in the experiment. Note that although the mkfs options for UFS do not differentiate between
logging and non-logging variants (that is a mount-time option), we did not take a shortcut of bypassing mkfs when
switching between UFS and UFS with logging tests.
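On Solaris, this wipe-and-recreate step between runs amounts to commands of roughly the following form. This is a hedged
sketch; the volume device and mount-point paths are placeholders.

# Recreate and mount the file system between runs (paths are placeholders).
newfs /dev/vx/rdsk/testdg/vol1                      # UFS (same mkfs output for UFS and UFS+logging)
mount /dev/vx/dsk/testdg/vol1 /mnt                  # plain UFS
mount -o logging /dev/vx/dsk/testdg/vol1 /mnt       # UFS with logging

mkfs -F vxfs /dev/vx/rdsk/testdg/vol1               # VERITAS File System
mount -F vxfs /dev/vx/dsk/testdg/vol1 /mnt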


5.4Overview of Results
The performance improvement realized by using VERITAS File System over UFS and UFS with logging varies by command
and disk configuration. Tables 11 through 14 summarize the results.
                        RAID 0                                       RAID 5
mkfile       VxFS Improvement    VxFS Improvement        VxFS Improvement    VxFS Improvement
File Size    Over UFS            Over UFS+logging        Over UFS            Over UFS+logging
1 GB         111%                111%                    118%                118%
2 GB          74%                 76%                     99%                109%
4 GB          73%                 80%                    100%                103%
8 GB          72%                 79%                     72%                 77%
Table 11: VERITAS File System performance improvements for mkfile benchmark

                    1:1                             1:5 parallel                    1:10 parallel
                    VxFS Improvement                VxFS Improvement                VxFS Improvement
Source, Dest        Over UFS    Over UFS+log        Over UFS    Over UFS+log        Over UFS    Over UFS+log
RAID 0, RAID 0      50%         41%                 31%         38%                 29%         22%
RAID 0, RAID 5      41%         30%                 18%         15%                 16%         14%
RAID 5, RAID 0      82%         66%                 43%         26%                 24%         14%
RAID 5, RAID 5      78%         49%                 29%         32%                 24%         11%
Table 12: cp Benchmark performance improvements for VERITAS File System

                        Single Process                           Multiple Processes (100 parallel processes/directory)
Directories/            VxFS Improvement    VxFS Improvement     VxFS Improvement    VxFS Improvement
Files per Directory     Over UFS            Over UFS+logging     Over UFS            Over UFS+logging
100/1000, RAID 0        184%                58%                  96%                 343%

Table 13: VERITAS File System performance improvements for the touch_files benchmark. Of special interest is the performance of UFS with
logging on the multi-process benchmark. In this case, VERITAS File System ran over 340 percent faster than UFS with logging.

                                                            VERITAS File System    VERITAS File System
                                                            Improvement            Improvement
Configuration                                               Over UFS               Over UFS+logging
E4000, 69 GB on a 100 GB JBOD RAID 1+0 volume,
  5 concurrent uncompress/extract pipelines                 91%                    49%
Blade 1000, 69 GB on a 100 GB hardware RAID 5 volume,
  5 concurrent uncompress/extract pipelines                 54%                    54%
E4500, 768 GB on a 1 TB JBOD RAID 0 volume,
  11 concurrent uncompress/extract pipelines                177%                   171%

Table 14: Uncompress and tar extract performance improvements for VERITAS File System

5.5Detailed Results
Detailed elapsed time results are shown in Figures 10 through 16.

[Chart: mkfile elapsed time (minutes) for 1, 2, 4 and 8 GB files on the RAID 0 and RAID 5 configurations; series: UFS, UFS+Logging, VERITAS File System.]

Figure 10: mkfile. VERITAS File System outperforms UFS and UFS with logging by 72 to 111 percent for RAID 0 configurations, and by 72 to
118 percent for RAID 5 configurations.


[Chart: mkfile 8 GB on RAID 0; bandwidth (MB/sec, 0-45) vs. minute in run; series: UFS, UFS+logging, VERITAS File System.]

Figure 11: File system performance of mkfile over time. VERITAS File System obtains its peak bandwidth quickly, and continues working at that
rate until it completes the benchmark, in about 6 minutes. UFS and UFS with logging initially reach approximately the same bandwidth as
VERITAS File System, but fail to maintain that rate after about 4 minutes. Consequently, the UFS and UFS with logging runs take longer to
complete than VERITAS File System.


[Chart: cp of a 4.5 GB file (1-to-1, 1-to-5 and 1-to-10), with source and destination on RAID 0 and RAID 5; elapsed time (minutes, 0-60) for each source/destination combination; series: UFS, UFS+Logging, VERITAS File System.]

Figure 12: cp. VERITAS File System outperforms UFS and UFS with logging by 30 to 82 percent for 1-to-1 copy, by 15 to 43 percent for 1-to-5
parallel copy, and by 11 to 29 percent for 1-to-10 parallel copy.

Figure 13: touch_files. Notice the performance of UFS with logging on the multi-process runs. VERITAS File System outperforms UFS with
logging by 58 percent in the single-process runs, and by 343 percent in the multi-process runs.


[Chart: touch_files 100 1000 6144 on a 20-column RAID 0 volume; elapsed time (minutes) for the single-process and multi-process runs; series: UFS, UFS+Logging, VERITAS File System. Data labels: 34.0, 19.8 and 12.0 minutes for the single-process runs; 4.0, 1.8 and 0.9 minutes for the multi-process runs.]

The time taken to uncompress and extract on the E4000 system (about 69.3 GB of data on a 100 GB volume) is shown in Figure 14.
Although the uncompress component (gunzip) involves significant CPU time, which is likely the same for each of the three
file systems, the extract (untar) component differs enough to show a significant overall win for VERITAS File System:
about 91 percent faster than UFS, and about 49 percent faster than UFS with logging. (These figures are one of the rare
instances in this paper where UFS with logging performed better than UFS without logging.)

Figure 14: Uncompress and extract on E4000 system. Note that the user and sys times are per-CPU. The user and system time as reported by
/bin/time, in units of CPU-minutes, is four times the displayed value, because /bin/time reports the sum of CPU times across all processes.


[Chart: Uncompress and untar times on the E4000 system (100 GB 12-column RAID 1+0 volume, about 69 GB used); minutes for the real, user, sys and user+sys components of time; series: UFS, UFS+logging, VERITAS File System. Real (elapsed) times: UFS 309.9, UFS+logging 241.5, VERITAS File System 162.4 minutes.]

The time taken to uncompress and extract on the Blade 1000 system (about 69.3 GB of data on a 100 GB volume) is shown in
Figure 15. Although the uncompress component (gunzip) involves significant CPU time, which is likely the same for each of the three
file systems, the extract (untar) component differs enough to show a significant overall win for VERITAS File System:
about 54 percent faster than both UFS and UFS with logging.

Figure 15: Uncompress and extract on Blade 1000. Note that the user and sys times are per-CPU. The user and system time as reported by
/bin/time, in units of CPU-minutes, is twice the displayed value, because /bin/time reports the sum of CPU times across all processes.


[Chart: Uncompress and untar times on the Blade 1000 system (100 GB hardware RAID 5 volume, about 69 GB used); minutes for the real, user, sys and user+sys components of time; series: UFS, UFS+logging, VERITAS File System. Real (elapsed) times: UFS 124.7, UFS+logging 124.7, VERITAS File System 80.9 minutes.]


The time to uncompress and extract on the E4500 configuration (about 768 GB used on a 1 TB file system) is
summarized in Figure 16. The uncompress component involves significant CPU time, which is likely the same for each file
system. However, the extract component differs enough between the file systems to show a significant win for
VERITAS File System: about 177 percent faster than UFS and 171 percent faster than UFS with logging.

[Chart: Uncompress and untar times on the E4500 system (1 TB RAID 0 volume, about 768 GB used); minutes for the real, user, sys and user+sys components of time; series: UFS, UFS+logging, VERITAS File System. Real (elapsed) times: UFS 1,350.1, UFS+logging 1,324.1, VERITAS File System 487.7 minutes.]

Figure 16: Uncompress and extract on E4500. Note that the user and sys times are per-CPU. The user and system time as reported by
/bin/time, in units of CPU-minutes, is eight times the displayed value, because /bin/time reports the sum of CPU times across all processors.


6.PostMark 1.5 File System Benchmark


6.1Introduction
This section evaluates the performance of VERITAS File System 3.4 Patch 2 and Sun UNIX File System (UFS)
running Solaris 8 Update 4 by using the PostMark version 1.5 file server benchmark. PostMark measures the
performance of small-file updates, in an effort to model the disk performance of electronic mail, net news and
Web-based commerce. Further details on the benchmark can be found at
http://www.netapp.com/tech_library/postmark.html.
Our performance studies show that VERITAS File System is about 80 to 230 percent faster than UFS, depending
on the number of PostMark processes run concurrently. Compared to UFS with logging, VERITAS File System is
between 350 and 2,200 percent faster, with the performance advantage consistently increasing as the number of
PostMark processes increases. When VERITAS File System is augmented with VERITAS QuickLog, the
performance enhancements increase even further, to 140 to 275 percent (over UFS) and 410 to 3,275 percent
(over UFS+logging).
The key conclusion drawn from this study is that while UFS and VERITAS File System performance increases as
more PostMark processes are added, UFS with logging performance consistently decreases as more processes
are added. We conclude that UFS with logging does not scale to a multi-process, small-file-set workload. A second
conclusion is that VERITAS QuickLog enhances VERITAS File System performance in this benchmark by 13 to 50
percent.

6.2Test Configuration
PostMark tests were run on a Sun E6500 system with eight 400 MHz UltraSPARCII CPUs, 8 GB of RAM, and six Sbus
controllers. All runs were performed on a 20-column RAID 1+0 volume (40 disks), created with VERITAS Volume
Manager 3.2. An exception is the VERITAS File System with QuickLog runs, which were performed on a 19-column RAID
1+0 volume (38 disks) with a single mirrored QuickLog volume (occupying the other two disks). The 40 disks involved in the
volume were spread across two Sun A5200 JBOD arrays, each containing 18 GB Seagate Fibre Channel 10,000 RPM disks.
Four file system configurations were used: UFS, UFS with logging, VERITAS File System 3.4 Patch 2, and VERITAS File
System 3.4 Patch 2 with VERITAS QuickLog. Each file system was created using default options. For mounting, default
options were used, with two exceptions: UFS with logging used the logging option, and VERITAS File System with
QuickLog used the qlog option.
We tested file system scalability by varying the number of concurrent PostMark processes from 1 to 16. Regardless of the
concurrency, each PostMark process operates on a distinct file set comprising 20,000 files across 1,000 directories, and
performs 20,000 transactions. In other words, as the number of concurrent processes is scaled up, the amount of work done
by any one PostMark process is kept constant. We kept PostMark's defaults for file sizes, which are linearly distributed across
a range of 500 bytes to 9.77 KB.
Aside from the number of directories and files, the only other PostMark option changed from the default was to bypass I/O
buffering of the standard C library.
We report performance results as PostMark throughput, in transactions per second. In concurrent multiprocess runs, we report
the aggregate throughput (the sum of the throughput of each individual process). All numbers shown are the average of five
runs.
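PostMark 1.5 is driven by a short command script, so a single-process run of the kind described above could be expressed
roughly as follows. This is an illustrative sketch; the working directory is a placeholder, and the file-size line simply
restates the benchmark default.

# Hedged sketch of one PostMark 1.5 process; /mnt/pm1 is a placeholder directory.
postmark <<'EOF'
set location /mnt/pm1
set subdirectories 1000
set number 20000
set transactions 20000
set size 500 10000
set buffering false
run
quit
EOF

For the concurrent runs, N copies of such a script are started at once, each pointed at its own directory, and the reported
throughput is the sum of the per-process transaction rates.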

6.3Results
The runs show that VERITAS File System, both with and without QuickLog, scales well to multiple processes. UFS (without
logging) scales as well, though to a much lesser degree. UFS with logging, on the other hand, does not scale at all; in fact, its
performance consistently decreases as the number of concurrent PostMark processes increases. (Note that this conclusion
regarding the scaling of UFS with logging in a small-file, multi-process workload was also shown by the touch_files benchmark
in Section 5.) The PostMark performance of UFS, UFS with logging, and VERITAS File System is
shown in Table 15.


Number of                   UFS With       VERITAS File System                VERITAS File System + QuickLog
Concurrent      UFS         Logging        ops/sec    Improvement Over        ops/sec    Improvement Over
PostMark        ops/sec     ops/sec                   UFS+logging                        UFS+logging
Processes
1               65.7        48.2           218.0      352%                    247.3      413%
2               120.7       44.3           256.2      479%                    300.9      580%
4               206.6       41.0           387.7      846%                    494.0      1,105%
6               267.5       38.5           505.9      1,215%                  658.8      1,612%
8               301.9       35.6           557.9      1,469%                  774.2      2,077%
10              321.8       33.3           617.4      1,753%                  870.4      2,512%
12              338.7       31.7           614.2      1,835%                  924.6      2,813%
16              358.7       29.3           663.6      2,166%                  987.6      3,272%

Table 15: PostMark performance improvements for VERITAS File System, compared to UFS and UFS with logging. Note the steadily increasing
margins over UFS with logging, which does not scale to multiple processes in this benchmark. At 16 processes, VERITAS File System+QuickLog
is about 33 times faster than UFS+logging.


Figure 17 shows the PostMark results in chart form, illustrating the degree to which the file systems scale to multiple concurrent
PostMark processes.
[Chart: Concurrent PostMark throughput (ops/sec, 0-1,000) vs. number of concurrent PostMark processes, on a 20-column RAID 1+0 volume (without QuickLog) or a 19-column RAID 1+0 volume plus one mirrored QuickLog volume (with QuickLog); series: UFS, UFS+logging, VERITAS File System, VERITAS File System+QuickLog.]


Figure 17: PostMark performance with increasing concurrency. Note the performance of UFS with logging, where the aggregate throughput
of all PostMark processes actually decreases as additional processes are added. Adding QuickLog increases VERITAS File System
performance by about 50 percent at the higher concurrencies.


7.Sequential File I/O (vxbench)


7.1Introduction
This section evaluates the performance of high-bandwidth disk reads and writes for VERITAS File System 3.4
Patch 2 and Sun UNIX File System (UFS) running on the Solaris 8 Update 4 operating system. The ability of a
server to process large disk operations quickly, under high loads, is a telling factor in the server's scalability.
This section compares read and write bandwidth that can be attained with UFS, UFS with logging, and VERITAS
File System file systems. By using a program called vxbench, we are able to measure the bandwidth of sequential
disk operations under varying conditions:

Reads vs. writes

I/O unit size (8K vs. 64K)

Varying concurrency. Note that when several processes concurrently perform sequential reads and writes
on different files, the workload presented to the file system as a whole is non-sequential.

We were able to draw several conclusions from these experiments:

VERITAS File System scales very well under concurrent loads; UFS and UFS with logging do not. In
particular, with 16 or 32 concurrent read or write operations, VERITAS File System often outperforms the
UFS variants by 300 percent or more. With increasing concurrency, UFS and UFS with logging performance
improves little or none for reads, and generally degrades for writes.

While both UFS and VERITAS File System bandwidth improves as the number of RAID 0 columns increases from 1 to
24, VERITAS File System scales better. VERITAS File System read performance scales linearly, and
VERITAS File System write performance scales almost linearly.

UFS with logging performance consistently lagged behind that of UFS, even though our tests used large
files, which presumably do not stress a file system's metadata.

7.2Test Configurations
The vxbench tests were performed on a Sun E6500 computer system, with twelve 400 MHz UltraSPARC-II CPUs and
12 GB of RAM. The machine has eight Sbus boards, with a total of 12 Fibre Channel Sbus controllers. Unisys
Clariion arrays, each with 18 GB Seagate 10,000 RPM Fibre Channel disks, were attached to the Sbus controllers
using Sun Socal HBAs. The arrays were configured into a striped (RAID 0) volume using VERITAS Volume
Manager 3.2.
Due to the high-bandwidth nature of the benchmark, it was important to configure the machine so that no
bandwidth bottleneck existed in the disk arrays, controllers or boards. Such bottlenecks might prevent the file
system software from being stressed.
To determine the maximum I/O bandwidth of the hardware configuration, we performed large sequential reads from
the raw disks. An example of a vxbench command to perform such a read is:
vxbench -w read -i iosize=64k,iocount=3000 -P rawdiskdevice(s)
This test showed that a single disk of a single Clariion array supported about 25 MB/sec. When additional raw disks
were added (by appending additional /dev/rdsk arguments to the vxbench command), scaling stopped after about
75 MB/sec, or three disks per array. The limitation was either the fibre connection out of the array (rated at
120 MB/sec) or the Sbus controller (rated at 200 MB/sec). (Obtaining bandwidth below the rated value is expected,
due to protocol overheads.) Regardless of the cause, we decided to use only three disks per Clariion array in
this study.
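In other words, the probe was simply repeated with more raw devices on the vxbench command line. The idea, sketched with
placeholder device names, looks like this:

# Hedged sketch of the raw-bandwidth probe; device names are placeholders.
vxbench -w read -i iosize=64k,iocount=3000 -P /dev/rdsk/c1t0d0s2      # one disk: about 25 MB/sec
vxbench -w read -i iosize=64k,iocount=3000 -P \
    /dev/rdsk/c1t0d0s2 /dev/rdsk/c1t1d0s2 /dev/rdsk/c1t2d0s2          # three disks on one array: about 75 MB/sec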


When the vxbench command was expanded to read from raw disks residing on different arrays, the bandwidth
obtained depended on whether the arrays were connected to controllers that share an Sbus board. As long as the
arrays were on different Sbus boards, the rate of 75 MB/sec per Clariion array scaled almost linearly. The E6500
had eight Sbus boards, and we connected only one Clariion array to each such board, yielding a total of 24 disks
(given the three-disk-per-brick limit). In this way, we were able to configure RAID 0 volumes of between 1 and 24
columns, confident that in the presence of sequential reads and writes, any measured scalability limits are due to
software, not hardware.

7.3Experiments
The experiments consisted of vxbench runs to write and read 4 GB of data. Several vxbench options altered the way in which
the data was transferred:
The transfer is either a read or a write. Note that before a transfer, we unmount and then remount the file system, to ensure
that its cache is cleared. Additionally, after a file system is mounted (at /mnt), and just before running vxbench, the mounted
file system has its directory metadata primed with ls -l /mnt, so that vxbench will not incur this cost.
Vxbench is configured to read or write with I/O units of either 8K or 64K.
Vxbench can read (or write) the 4 GB of data using a variable number of files, though the grand total of all files was always kept
at 4 GB. Note that one process is assigned to each file (using the vxbench -P flag). Our tests include runs that use 1, 4, 8, 12,
16, 20, 24, 28, and 32 processes (and files).
As an exception to the 4 GB size, our single-process vxbench runs write only 1 GB (because we used a vxbench binary that
accesses files using 32-bit signed offsets). Because we report our results as rates (MB/sec read or written) instead of run
times, this exception for single-process runs does not affect the validity of our results.
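Putting these pieces together, a single measurement (for example, 8K writes with 16 processes) might look roughly like the
following sketch. The mount point, device and file names are placeholders, and the write workload name is assumed to mirror
the read example shown earlier.

# Hedged sketch of one vxbench measurement; paths are placeholders.
umount /mnt
mount -F vxfs /dev/vx/dsk/testdg/vol1 /mnt       # remount to clear the cache
ls -l /mnt > /dev/null                           # prime directory metadata

# 16 writers, one process per file, 256 MB each (16 x 256 MB = 4 GB total), 8K I/O units
files=""
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do files="$files /mnt/f$i"; done
vxbench -w write -i iosize=8k,iocount=32768 -P $files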
In addition to vxbench variants, other variables in this experiment include:

Choice of file system: either UFS, UFS with logging, or VERITAS File System. No options for UFS mkfs
were specified beyond the default. For VERITAS File System, the only non-default mkfs option was largefile
support. The only mount-time option for UFS file systems was the logging flag (UFS with logging).
VERITAS File System file systems were mounted using default options.

Varying number of columns in the stripe. VERITAS Volume Manager 3.2 was used to configure a RAID 0
volume using between 1 and 24 disks (columns). Our tests include runs that use 1, 2, 3, 4, 5, 6, 8, 10,
12, 16, 20, and 24 columns. A volume is created using the command:
vxassist -g testdg make vol1 15g layout=stripe ncol=numcols disks
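The column-count sweep then amounts to recreating the volume and file system for each ncol value, roughly as follows. This is
a sketch under the assumptions above; the disk group, volume name and mount point are placeholders, and it shows the
VERITAS File System case (newfs is used instead for the UFS runs).

# Hedged sketch of the RAID 0 column sweep; names are placeholders.
for ncol in 1 2 3 4 5 6 8 10 12 16 20 24; do
    vxassist -g testdg make vol1 15g layout=stripe ncol=$ncol
    mkfs -F vxfs -o largefiles /dev/vx/rdsk/testdg/vol1
    mount -F vxfs /dev/vx/dsk/testdg/vol1 /mnt
    # run the read/write matrix against /mnt here
    umount /mnt
    vxassist -g testdg remove volume vol1
done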

7.4Results
Because the set of variables over which the benchmark was run is a five-dimensional space (file system type,
number of RAID 0 columns, reads vs. writes, 8K vs. 64K unit size, and number of processes and files), no single
graph can show all results. The full results are listed in Appendix D; this subsection illustrates the most important
points.

7.4.1Scaling With Increasing Concurrency


This subsection discusses how well UFS, UFS with logging, and VERITAS File System read performance scale with
increasing processes (and files). We focus on the case of a 24-way RAID 0 stripe, because that configuration provides the
maximum opportunity for high-bandwidth transfers. In one sense, multiple processes and files may be expected to decrease
the I/O rate, because having multiple files implies that the (grand total of) 4 GB is no longer read or written sequentially;
some disk seeking is necessary. On the other hand, multiple processes can allow I/O latency to be hidden, tending to increase
the I/O rate.
Figures 18 and 19 show the performance of 8K reads and writes, respectively, on this 24-column stripe. For 8K reads, all file systems
usually perform better under higher concurrencies. VERITAS File System improves its read transfer rate quickly as the
number of processes increases, obtaining about 210 MB/sec with only 12 processes. (This rate is about half of what vxbench
is able to obtain when run against the raw disk. The difference is likely due to the overhead of the OS page cache and
consequent copies.) UFS improves its read performance, though only modestly, up to 20 processes, and begins to degrade
beyond that. VERITAS File System is consistently 190 to 285 percent faster than UFS with four or more processes. Only with
one process, where VERITAS File System is 32 percent faster than UFS, is the gap narrower. UFS with logging has similar read
performance compared to UFS for the lower concurrencies, and inferior performance for eight processes or more. In fact,
beyond 12 processes, UFS with logging performance generally decreases. Compared to UFS with logging, VERITAS File
System obtains read performance improvements of 31 percent with 1 process, and between about 290 and 320 percent for the
higher concurrencies.
[Chart: File system scalability to increasing number of concurrent processes (8K reads, 24-column RAID 0); I/O rate (MB/sec, 0-250) vs. number of concurrent processes (1 process per file); series: UFS, UFS+logging, VERITAS File System.]

Figure 18: File system scalability to multiple processes and files using 8K Reads. UFS and UFS with logging read performance improves
only marginally, or even decreases, with 16 or more processes. VERITAS File System quickly obtains its peak read performance of over
200 MB/sec with 12 processes, and does not degrade significantly beyond that.


The performance of 8K writes, shown in Figure 19, tells a similar story. Compared to UFS, VERITAS File System obtains performance
increases from 51 percent (1 process) to 160 percent (16 processes). The performance increases are an indication of how
VERITAS File System write performance scales with concurrency, while UFS generally does not (except from 1 to 4
processes). The performance increases compared to UFS with logging are even more dramatic, where VERITAS File System
bests UFS with logging by between 63 percent (1 process) and 390 percent (28 processes).
[Chart: File system scalability to increasing number of concurrent processes (8K writes, 24-column RAID 0); I/O rate (MB/sec, 0-200) vs. number of concurrent processes (1 process per file); series: UFS, UFS+logging, VERITAS File System.]

Figure 19: File system scalability to multiple processes and files, Using 8K writes. UFS shows flat or slightly declining performance with 8 or
more processes. UFS with logging performs even worse, with performance declining steadily, to about 35 MB/sec, as concurrency increases
past 4 processes. VERITAS File System, by contrast, quickly achieves a peak write throughput that is about 85 percent of its read throughput,
and does not degrade significantly as concurrency increases.


Figures 20 and 21 compare UFS, UFS with logging, and VERITAS File System reads and writes (respectively) on a 24-way stripe with a
larger I/O size, 64K. Figure 20 shows that for reads, all file systems usually perform better with higher concurrencies. VERITAS File
System improves its read transfer rate quickly as the number of processes increases, obtaining its peak bandwidth of about
220 MB/sec with only 12 processes. UFS improves its read performance incrementally with multiple processes (and begins to
degrade beyond 20 processes); VERITAS File System is consistently 190 to 270 percent faster than UFS with 4 or more
processes. Only with 1 process, where VERITAS File System is 37 percent faster than UFS, is the gap narrower. UFS with
logging has similar read performance compared to UFS for the lower concurrencies, but inferior performance for 8 or more
processes. Compared to UFS with logging, VERITAS File System obtains read performance increases of 38 percent with 1
process, and between about 270 and 350 percent for the higher concurrencies.

[Chart: File system scalability to increasing number of concurrent processes (64K reads, 24-column RAID 0); I/O rate (MB/sec, 0-250) vs. number of concurrent processes (1 process per file); series: UFS, UFS+logging, VERITAS File System.]

Figure 20: File system scalability to multiple processes and files (64K Reads). The performance of UFS and UFS with logging improves
marginally with multiple processes, and begins to dip again after 16 processes. VERITAS File System quickly obtains its peak read
performance of over 200 MB/sec, and generally does not degrade beyond that.


The performance of 64K writes, shown in Figure 21, tells a similar story. Compared to UFS, VERITAS File System obtains
performance increases from 63 percent (1 process) to 209 percent (32 processes). The performance increases are an indication
of how VERITAS File System write performance scales with concurrency, while UFS generally does not (except from 1 to 4
processes, where UFS obtained its only improvement). The performance increases compared to UFS with logging are even
more dramatic, where VERITAS File System bests UFS with logging by between 70 percent (1 process) and 406 percent (32
processes).

[Chart: File system scalability to increasing number of concurrent processes (64K writes, 24-column RAID 0); I/O rate (MB/sec, 0-250) vs. number of concurrent processes (1 process per file); series: UFS, UFS+logging, VERITAS File System.]

Figure 21: File system scalability to multiple processes and files (64K Writes). UFS shows flat or slightly declining performance with 8 or
more processes. UFS with logging performs even worse, with performance declining steadily, to about 40 MB/sec, as concurrency increases
past 4 processes. VERITAS File System, by contrast, quickly achieves a peak write throughput that nearly equals its read throughput, and
generally does not degrade as concurrency increases.


7.4.2Scaling With Increasing Number of Columns in a RAID 0 Stripe


Figures 22 through 25 give an indication of how the file systems scale with increasing numbers of columns in a RAID 0 stripe.
To keep the graphs simple, the number of concurrent processes is fixed at 16.
Figure 22 shows the scaling for 8K reads. The figure illustrates that VERITAS File System is much better able to benefit from
the additional bandwidth of more columns (disks) in the stripe. For reads, VERITAS File System scales linearly to a higher
number of columns. UFS and UFS with logging also scale, though at a much slower pace. Compared to UFS, the VERITAS
File System 8K read performance increases at 1, 10, and 20 columns are 350, 200, and 220 percent, respectively. Compared to
UFS with logging, the VERITAS File System 8K read performance increases for 1, 10, and 20 columns are 333, 312, and 310
percent, respectively.

[Chart: Scaling with number of columns in a RAID 0 stripe (8K reads, 16 processes, 1 process/file); I/O rate (MB/sec, 0-250) vs. number of RAID 0 columns (0-25); series: UFS, UFS+logging, VERITAS File System.]


Figure 22: Scalability to increasing number of RAID 0 columns (8K reads). VERITAS File System scales its read performance linearly. UFS
and UFS with logging read performance scales at a much slower rate compared to VERITAS File System. Note that UFS with logging's
maximum read rate at 24 columns is about 57 MB/sec, a rate that VERITAS File System exceeded with only 8 columns.

Figure 23 shows the scaling for 64K reads. As with 8K reads, VERITAS File System scales linearly to additional columns, while UFS
and UFS with logging scale at much slower rates. Compared to UFS, the VERITAS File System 64K read performance increases
at 1, 10, and 20 columns are 376, 210, and 234 percent, respectively. Compared to UFS with logging, the VERITAS File
System 64K performance increases for 1, 10, and 20 columns are 429, 276, and 285 percent, respectively.
[Chart: Scaling with number of columns in a RAID 0 stripe (64K reads, 16 processes, 1 process/file); I/O rate (MB/sec, 0-250) vs. number of RAID 0 columns (0-25); series: UFS, UFS+logging, VERITAS File System.]


Figure 23: Scalability to increasing number of RAID 0 columns (64K reads). VERITAS File System scales its read performance linearly. UFS
and UFS with logging read performance scales at a much slower rate than VERITAS File System. Note that VERITAS File System achieves with
only 6 columns the read rate that UFS with logging needs 24 columns to reach.


The VERITAS File System performance increases for 8K writes are shown in Figure 24. Compared to UFS, the VERITAS File System 8K
write performance increases at 1, 10 and 20 columns are about 320, 170 and 180 percent, respectively. Compared to UFS with
logging, the VERITAS File System 8K write performance increases for 1, 10 and 20 columns are about 310, 230 and 320 percent,
respectively.

[Chart: Scaling with number of columns in a RAID 0 stripe (8K writes, 16 processes, 1 process/file); I/O rate (MB/sec, 0-200) vs. number of RAID 0 columns (0-25); series: UFS, UFS+logging, VERITAS File System.]


Figure 24: Scalability to increasing number of RAID 0 columns (8K Writes). VERITAS File System scales its write performance about
linearly. UFS and UFS with logging also scale, but often erratically, and always less than VERITAS File System.


The VERITAS File System performance lead when using 64K writes, shown in Figure 25, tells a similar story. VERITAS File System
performance increases over UFS at 1, 10, and 20 columns are about 330, 220, and 250 percent, respectively. Compared to
UFS with logging, the VERITAS File System 64K write performance increases for 1, 10, and 20 columns are about 380, 350, and
350 percent, respectively. As usual, UFS with logging performed significantly worse than UFS without logging.
[Chart: Scaling with number of columns in a RAID 0 stripe (64K writes, 16 processes, 1 process/file); I/O rate (MB/sec, 0-250) vs. number of RAID 0 columns (0-25); series: UFS, UFS+logging, VERITAS File System.]


Figure 25: Scalability to increasing number of RAID 0 columns (64K Writes). VERITAS File System scales its write performance almost linearly
for all stripe sizes. UFS and UFS with logging also scale, but to a much lesser degree. Note that even when using all 24 columns, UFS and UFS
with logging are unable to match the write performance that VERITAS File System achieves with only 8 columns.

Appendix A: Configuration Information for SPECsfs97 Benchmarks


1. E6500 (8X8)
E6500 JBOD SPECsfs97.v3 Result
UNIX File System
SPECsfs97.v3 = 7,974 Ops/Sec (Overall Response Time = 6.3)
UFS+logging
SPECsfs97.v3 = 5,648 Ops/Sec (Overall Response Time = 6.4)
VERITAS File System
SPECsfs97.v3 = 11,670 Ops/Sec (Overall Response Time = 5.2)

UFS                        UFS+logging                VERITAS File System
Throughput   Response      Throughput   Response      Throughput   Response
ops/sec      msec          ops/sec      msec          ops/sec      msec
483          2.6           482          2.7           483          2.7
982          3.3           982          2.9           982          3.9
1,980        3.9           1,980        4.1           1,979        3.5
3,011        4.7           3,010        5.4           3,006        3.8
4,005        5.5           4,006        6.6           3,997        3.9
4,988        7.4           5,004        7.6           4,989        4.2
5,991        7.3           5,648        29.1          5,973        4.7
6,967        9.1                                      6,952        5.0
7,965        16.4                                     7,914        5.5
7,974        21.9                                     8,502        5.6
                                                      8,996        6.5
                                                      9,504        7.0
                                                      10,026       7.1
                                                      11,019       9.0
                                                      11,670       12.7

CPU, Memory and Power


Model Name
Processor
# of Processors
Primary Cache
Secondary Cache
Other Cache
UPS
Other Hardware

Sun Microsystems Enterprise 6500


400 MHz UltraSPARC
8
16KBI+16KBD on chip
4096KB (I+D) off chip
N/A
N/A
N/A

Memory Size
NVRAM Size
NVRAM Type
NVRAM Description

8192 MB
N/A
N/A
N/A

Server Software
OS Name and Version
Other Software


Sun Solaris 8
VERITAS File System 3.4 Patch 2, VERITAS Volume Manager 3.2


File System
NFS version

VERITAS File System, UNIX File System


3

Server Tuning
Buffer Cache Size
# NFS Processes
Fileset Size

Default
1,600
119 GB (VERITAS File System) 84 GB (UFS) 59 GB (UFS+logging)

Network Subsystem
Network Type
Network Controller Desc.
Number Networks
Number Network Controllers 1
Protocol Type

1,000 Mbit Ethernet


1,000 Mbit Sun Gigabit Ethernet
1 (N1)
TCP

Switch Type
Bridge Type
Hub Type
Other Network Hardware

Cisco Catalyst 3500XL


N/A
N/A
N/A

Disk Subsystem and File system


Number Disk Controllers
Number of Disks
Number of Filesystems
File System Creation Ops
File System Config

11
198
11 (F1F11)
default (UFS, UFS+logging, and VERITAS File System)
default (See Notes)

Disk Controller
# of Controller Type
Number of Disks
Disk Type
File Systems on Disks
Special Config Notes

Sun internal UltraSCSI3 controller


1
4
Fujitsu 18 GB Enterprise 10K Series MAG3182L, 10,000 RPM
OS, swap, Misc.

Disk Controller
# of Controller Type
Number of Disks
Disk Type
Disk Type
File Systems on Disks
Special Config Notes

SUN Socal SCSI3 (5013060)


11
198
Seagate 9 GB Cheetah ST39103FC, 10,000 RPM (158)
Seagate 18 GB Cheetah ST318203FC, 10,000 RPM (40)
F1F11
Unisys ESM700 (Clariion) each enclosure with 10 drives


Load Generator (LG) Configuration


Number of Load Generators 14
Number of Processes per LG
Biod Max Read Setting
5
Biod Max Write Setting
5
LG Type
LG Model
Number and Type Processors
Memory Size
Operating System
Compiler
Compiler Options
Network Type

11

LG1
Sun Microsystems Netra T1
1 400MHz UltraSPARC
256 MB
Solaris 2.8
SUNWspro4
default
On board 100baseT

Testbed Configuration
LG # LG Type
Network

114 LG1
N1

Target File System


Notes

F1, F2, F3.F11

Notes and Tuning


Vxtunefs parameters are dynamically set at the time of making the file system; no effort was made to change the
default chosen by VERITAS File System.
read_pref_io = 65536
read_nstream = 9
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 9
write_unit_io = 65536
pref_strength = 20
buf_breakup_size = 262144
discovered_direct_iosz = 262144
max_direct_iosz = 9437184
default_indir_size = 8192
qio_cache_enable = 0
write_throttle = 492160
max_diskq = 9437184
initial_extent_size = 8
max_seqio_extent_size = 2048
max_buf_data_size = 8192
hsm_write_prealloc = 0
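These values are the defaults that VERITAS File System selected for this volume. If needed, they can be inspected, and
individually overridden, per mounted file system with the vxtunefs command, roughly as follows (illustrative only; the mount
point is a placeholder, and no such overrides were made for these tests):

# Print the current tunables for a mounted VxFS file system (mount point is a placeholder)
vxtunefs /mnt

# Override a single tunable at run time (not done in these benchmarks)
vxtunefs -o read_nstream=9 /mnt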


2. E6500 (12X12)
E6500 JBOD SPECsfs97.v3 Result
UNIX File System
SPECsfs97.v3 = 10,192 Ops/Sec (Overall Response Time = 6.4)
UFS+logging
SPECsfs97.v3 = 5,872 Ops/Sec (Overall Response Time = 5.9)
VERITAS File System
SPECsfs97.v3 = 13,986 Ops/Sec (Overall Response Time = 4.9)

UFS                        UFS+logging                VERITAS File System
Throughput   Response      Throughput   Response      Throughput   Response
ops/sec      msec          ops/sec      msec          ops/sec      msec
482          2.6           482          2.7           483          2.1
982          3.2           982          2.9           982          2.6
1,980        3.2           1,980        3.4           1,979        2.5
3,011        3.9           3,009        4.2           3,011        2.9
4,004        4.8           3,995        5.5           3,998        3.3
4,995        6.0           5,001        7.2           4,994        3.8
5,985        6.8           5,872        23.7          5,981        4.1
6,959        7.4                                      6,951        4.4
7,947        8.5                                      7,933        4.5
8,504        9.6                                      8,488        4.6
9,030        10.9                                     9,021        5.1
9,551        12.3                                     9,532        5.5
10,058       13.9                                     10,010       5.5
10,192       17.4                                     10,998       6.3
                                                      12,043       7.0
                                                      12,965       8.7
                                                      13,958       11.1
                                                      13,986       12.5

CPU, Memory and Power


Model Name
Processor
# of Processors
Primary Cache
Secondary Cache
Other Cache
UPS
Other Hardware

Sun Microsystems Enterprise 6500


400 MHz UltraSPARC
12
16KBI+16KBD on chip
4096KB (I+D) off chip
N/A
N/A
N/A

Memory Size
NVRAM Size
NVRAM Type
NVRAM Description

12,288 MB
N/A
N/A
N/A


Server Software
OS Name and Version
Other Software
File System
NFS version

Sun Solaris 8
VERITAS File System 3.4 Patch 2, VERITAS Volume Manager 3.2
VERITAS File System, UNIX File System
3

Server Tuning
Buffer Cache Size
# NFS Processes
Fileset Size

Default
1,600
148 GB (VERITAS File System) 109 GB (UFS) 59 GB (UFS+logging)

Network Subsystem
Network Type
Network Controller Desc.
Number Networks
Number Network Controllers 1
Protocol Type

1,000 Mbit Ethernet


1,000 Mbit Sun Gigabit Ethernet
1 (N1)
TCP

Switch Type
Bridge Type
Hub Type
Other Network Hardware

Cisco Catalyst 3500XL


N/A
N/A
N/A

Disk Subsystem and File system


Number Disk Controllers
Number of Disks
Number of Filesystems
File System Creation Ops
File System Config

11
198
11 (F1F11)
Default (UFS, UFS+logging, and VERITAS File System)
Default (See Notes)

Disk Controller
# of Controller Type
Number of Disks
Disk Type
File Systems on Disks
Special Config Notes

Sun internal UltraSCSI3 controller


1
4
Fujitsu 18 GB Enterprise 10K Series MAG3182L, 10,000 RPM
OS, swap, Misc.

Disk Controller
# of Controller Type
Number of Disks
Disk Type
Disk Type
File Systems on Disks
Special Config Notes

SUN Socal SCSI3 (5013060)


11
198
Seagate 9 GB Cheetah ST39103FC, 10,000 RPM(158)
Seagate 18 GB Cheetah ST318203FC, 10,000 RPM(40)
F1F11
Unisys ESM700 (Clariion) each enclosure with 10 drives


Load Generator (LG) Configuration


Number of Load Generators 14
Number of Processes per LG
Biod Max Read Setting
5
Biod Max Write Setting
5
LG Type
LG Model
Number and Type Processors
Memory Size
Operating System
Compiler
Compiler Options
Network Type

11

LG1
Sun Microsystems Netra T1
1 400 MHz UltraSPARC
256 MB
Solaris 2.8
SUNWspro4
Default
On board 100baseT

Testbed Configuration
LG # LG Type
Network

114 LG1
N1

Target File System


Notes

F1, F2, F3.F11

Notes and Tuning


Vxtunefs parameters are dynamically set at the time of making the file system; no effort was made to change the
default chosen by VERITAS File System.
read_pref_io = 65536
read_nstream = 9
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 9
write_unit_io = 65536
pref_strength = 20
buf_breakup_size = 262144
discovered_direct_iosz = 262144
max_direct_iosz = 9437184
default_indir_size = 8192
qio_cache_enable = 0
write_throttle = 492160
max_diskq = 9437184
initial_extent_size = 8
max_seqio_extent_size = 2048
max_buf_data_size = 8192
hsm_write_prealloc = 0


Appendix B: Additional Software Configuration Information for TPCC Benchmark
The following shared memory and IPC settings were enabled in the /etc/system file:
* Shared memory
*
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmax=6442450000
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=100
*
* Semaphores
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=4096
set semsys:seminfo_semmns=4096
set semsys:seminfo_semmnu=4096
set semsys:seminfo_semume=64
set semsys:seminfo_semmsl=75
set semsys:seminfo_semopm=50


Below is the Oracle parameter file used for the benchmark tests:

control_files                     = (/TPCC_disks/control_001)
#dbwr_io_slaves                   = 20
#dbwr_io_slaves                   = 10
#disk_asynch_io                   = FALSE
disk_asynch_io                    = TRUE
_filesystemio_options             = setall
parallel_max_servers              = 30
recovery_parallelism              = 20
# checkpoint_process              = TRUE          # obsolete
compatible                        = 8.1.5.0.0
db_name                           = TPCC
db_files                          = 200
db_file_multiblock_read_count     = 32
#db_block_buffers                 = 512000
#db_block_buffers                 = 1024000
db_block_buffers                  = 900000
# _db_block_write_batch           = 1024          # obsolete
# db_block_checkpoint_batch       = 512           # obsolete
dml_locks                         = 500
hash_join_enabled                 = FALSE
log_archive_start                 = FALSE
# log_archive_start               = TRUE
#log_archive_buffer_size          = 32            # obsolete
#log_checkpoint_timeout           = 600
log_checkpoint_interval           = 100000000
log_checkpoints_to_alert          = TRUE
log_buffer                        = 1048576
#log_archive_dest                 = /archlog
# gc_rollback_segments            = 220           # obsolete
# gc_db_locks                     = 100           # obsolete
gc_releasable_locks               = 0
max_rollback_segments             = 220
open_cursors                      = 200
#processes                        = 200
processes                         = 150
sessions                          = 600
transactions                      = 400
distributed_transactions          = 0
transactions_per_rollback_segment = 1
#rollback_segments                =
(t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14,t15,t16,t17,t18,t19,t20,t21,t22,t23,
t24,t25,t26,t27,t28,t29)
rollback_segments                 =
(t_0_1,t_0_2,t_0_3,t_0_4,t_0_5,t_0_6,t_0_7,t_0_8,t_0_9,t_0_10,t_0_11,t_0_12,t_0_13,t
_0_14,t_0_15,t_0_16,t_0_17,t_0_18,t_0_19,t_0_20,t_0_21,t_0_21,t_0_22,t_0_23,t_0_24,t
_0_25,t_0_26,t_0_27,t_0_28,t_0_29,t_0_30,t_0_31,t_0_32,t_0_33,t_0_34,t_0_35,t_0_36,t
_0_37,t_0_38,t_0_39,t_0_40,t_0_41,t_0_42,t_0_43,t_0_44,t_0_45,t_0_46,t_0_47,t_0_48,t
_0_49,t_0_50,t_0_51,t_0_52,t_0_53,t_0_54,t_0_55,t_0_56,t_0_57,t_0_58,t_0_59,t_0_60)
shared_pool_size                  = 75000000
# discrete_transactions_enabled   = FALSE         # obsolete
cursor_space_for_time             = TRUE

Appendix C: Source Code for touch_files Program


/* $Id: touch_files.c,v 1.7 2001/05/14 15:05:26 oswa Exp $ */
/*
* Touch/Create file benchmark.
*
* Compile with: gcc -O2 -o touch_files touch_files.c
* or for debug: gcc -O2 -Wall -g -o touch_files touch_files.c
*
*/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

static char ver[] = "$Id: touch_files.c,v 1.7 2001/05/14 15:05:26 oswa Exp $";
int
main(int argc, char **argv)
{
unsigned int ndirs = 0;
unsigned int nfiles = 0;
unsigned int bsize = 0;
unsigned int i, j;
char path[1024];
void *filler = NULL;
int multithreaded = 0, pid, parent = 1;
int fh;
time_t stime, ftime;
if (argc < 3) {
printf("usage: touch_files [m] numdirs numfiles");
printf(" [sizeinbytes]\n\n");
return (1);
}
while ((i = getopt(argc, argv, "m")) != -1) {
switch (i) {
case 'm':
multithreaded = 1;
break;
}
}
ndirs = atoi(argv[optind++]);
nfiles = atoi(argv[optind++]);
if (multithreaded && argc > 4 || !multithreaded && argc > 3 ) {
bsize = atoi(argv[optind]);
filler = malloc(bsize);
if (filler == NULL) {
printf("Unable to allocate memory\n");
exit(1);
}
}
printf("Creating %d top level directories ", ndirs);
printf("with %d files in each\n", nfiles);
printf("Using %s\n",
(multithreaded ? "multiple processes" : "single thread"));
/* start timer */
stime = time(NULL);
printf("Starting at: %s", ctime(&stime));
for (i = 0; i < ndirs; i++) {
/* top dirs */
sprintf(path, "./%d", i);
if (((mkdir(path, 0777)) == -1) && errno != EEXIST) {
perror("mkdir() failed");
exit(1);
}
if (multithreaded) {
pid = fork();
if (pid == -1) {
fprintf(stderr, "Cannot fork\n");
exit(1);
} else if (pid != 0) {
continue;
} else {
parent = 0;
}
}
for (j = 0; j < nfiles; j++) {
/* files per dir */
sprintf(path, "./%d/%d", i, j);
fh = open(path,
O_WRONLY|O_CREAT|O_TRUNC|O_EXCL|O_DSYNC|O_SYNC,
0666);
if (fh == -1) {
perror("open() failed");
exit(1);
}
if (write(fh, filler, bsize) == -1) {
perror("write() failed");
exit(1);
}
close(fh);
}
if (!parent) {
return 0;
}
}
if (multithreaded && parent) {
for (i = 0 ; i < ndirs ; i++) {
wait(NULL);
}
}
/* stop timer + print results */
ftime = time(NULL);
printf("Finished at: %s", ctime(&ftime));
printf("The run took %d seconds\n", (ftime stime));
free(filler);
return (0);
}


Appendix D: Complete vxbench Results


No single graph could summarize the entire set of vxbench runs from Section 7, Sequential File I/O (vxbench),
because the set of variables that were modified from run to run forms a five-dimensional space: file system, number
of RAID 0 columns, reads vs. writes, 8K vs. 64K I/O size, and concurrency. This appendix lists all such results for
reference.
Variations in the runs are the I/O size (8K vs. 64K) and how many concurrent files are accessed (there is always one process per
file). The total size read or written is always 4 GB, divided evenly among the number of files that are accessed (an exception
occurs for single-process runs, where only 1 GB is accessed). Results shown are I/O rates, in MB/sec.
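The percentage improvements quoted in Section 7 follow directly from these rates. For example, for the 24-column, 8K-read,
16-process row below, the improvement of VERITAS File System over UFS works out as:

# Worked example: VxFS vs. UFS for ncol=24, rd 8K, 16 concurrent files
#   UFS = 69.64 MB/sec, VxFS = 212.29 MB/sec
#   (212.29 / 69.64 - 1) * 100 = about 205 percent
echo 'scale=1; (212.29 / 69.64 - 1) * 100' | bc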
ncol=1
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
24.14
24.26
26.56
rd 8K
4
3.72
2.56
13.07
rd 8K
8
5.08
3.66
8.37
rd 8K
12
3.55
3.19
11.89
rd 8K
16
2.87
2.98
12.91
rd 8K
20
2.73
2.89
11.35
rd 8K
24
2.52
2.71
10.82
rd 8K
28
2.49
2.82
10.99
rd 8K
32
2.50
2.85
10.76
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

24.23
4.98
4.96
3.42
2.83
2.69
2.61
2.56
2.55

24.15
4.68
4.18
3.04
2.55
2.41
2.32
2.29
2.36

26.80
12.81
8.38
12.26
13.48
11.08
10.72
11.10
10.75

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

19.67
4.69
3.88
2.91
2.40
2.35
2.19
2.13
2.07

15.15
3.17
2.60
2.45
2.45
2.45
2.37
2.36
2.32

22.25
10.10
10.25
9.78
10.01
10.00
9.89
9.93
9.90

wr
wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

20.41
5.89
4.00
2.82
2.40
2.33
2.30
2.27
2.27

18.15
5.36
3.32
2.47
2.14
2.05
2.03
2.00
1.99

21.91
10.35
10.47
9.97
10.32
10.18
10.12
10.13
10.10


ncol=2
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
41.49
44.18
53.04
rd 8K
4
8.98
5.99
22.98
rd 8K
8
9.24
6.13
17.47
rd 8K
12
6.93
5.26
21.50
rd 8K
16
5.49
5.28
21.05
rd 8K
20
5.18
5.20
18.61
rd 8K
24
4.78
5.04
20.38
rd 8K
28
4.63
4.82
21.94
rd 8K
32
4.50
4.80
23.49
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

44.23
10.78
9.20
6.68
5.21
5.01
4.82
4.80
4.71

44.26
10.02
8.25
6.02
4.99
4.80
4.64
4.52
4.49

53.57
24.96
17.82
19.13
20.01
21.04
20.01
22.59
22.91

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

26.06
10.74
7.87
5.86
4.60
4.35
4.07
3.98
3.89

21.99
6.45
4.88
4.20
4.20
4.20
4.16
4.08
4.08

22.45
12.47
13.67
12.68
12.73
12.43
12.84
12.46
12.43

wr
wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

25.13
12.44
8.11
5.70
4.42
4.25
4.15
4.19
4.13

24.46
11.20
7.21
5.03
4.17
4.02
3.95
3.91
3.89

24.98
20.82
20.81
20.05
20.80
20.83
20.59
20.30
20.58

ncol=3
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
44.67
46.14
61.08
rd 8K
4
13.56
8.44
35.29
rd 8K
8
13.92
8.55
24.81
rd 8K
12
11.30
8.03
27.69
rd 8K
16
7.92
7.70
28.25
rd 8K
20
7.37
7.60
27.95
rd 8K
24
7.03
7.51
31.06
rd 8K
28
6.92
7.44
32.89
rd 8K
32
6.81
7.34
34.46
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

46.80
16.69
14.18
10.79
7.49
7.29
7.18
7.17
7.19

45.57
15.97
11.81
8.94
7.49
7.08
6.88
6.84
6.85

67.52
35.21
25.12
28.79
29.21
29.95
30.84
32.76
35.15

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

27.05
16.46
12.54
10.12
6.68
6.20
5.97
5.80
5.82

23.44
10.23
6.99
6.26
6.13
6.19
6.07
6.04
6.04

44.47
18.99
20.03
18.92
19.60
18.92
19.04
18.94
19.08

wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28

28.66
19.39
12.90
9.55
6.33
6.18
6.13
6.21

27.94
17.40
10.44
7.44
6.19
5.88
5.78
5.73

41.82
29.72
32.01
31.82
32.76
33.21
32.48
32.37

wr 64K  32          6.24          5.78          32.23

ncol=4
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
47.29
49.67
60.36
rd 8K
4
16.29
11.99
43.61
rd 8K
8
19.33
10.97
35.57
rd 8K
12
17.13
10.35
36.19
rd 8K
16
11.34
10.07
37.41
rd 8K
20
10.10
9.84
39.86
rd 8K
24
9.35
9.65
39.60
rd 8K
28
9.16
9.53
45.30
rd 8K
32
8.87
9.74
43.66
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

50.97
19.14
19.72
16.98
10.31
10.08
9.44
9.27
9.26

51.04
18.15
16.22
12.10
9.98
9.43
9.11
8.84
8.97

65.64
45.72
36.35
36.37
37.65
40.55
39.79
42.89
43.25

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

28.81
21.45
19.10
16.64
9.91
8.71
8.08
7.94
7.71

25.45
14.72
9.36
8.18
8.10
8.08
8.00
7.85
7.94

45.98
25.46
25.73
26.50
25.34
25.24
26.76
25.67
25.60

wr
wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

31.00
25.57
18.86
15.94
8.99
8.70
8.24
8.15
8.13

29.13
23.00
14.31
10.34
8.36
7.92
7.65
7.55
7.62

59.18
41.24
43.32
42.71
42.45
43.36
43.09
43.21
43.12

ncol=5
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
49.40
49.38
62.58
rd 8K
4
17.77
14.20
62.39
rd 8K
8
25.58
13.16
44.79
rd 8K
12
20.55
12.40
44.76
rd 8K
16
15.30
12.43
48.23
rd 8K
20
12.61
12.19
49.07
rd 8K
24
11.71
12.11
50.05
rd 8K
28
11.32
11.85
54.16
rd 8K
32
11.10
11.66
56.58
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

51.39
20.47
25.78
20.66
14.11
12.35
11.82
11.58
11.44

51.39
19.74
19.77
15.14
12.60
11.76
11.28
11.09
11.09

70.17
59.59
44.21
47.57
49.47
49.34
50.64
52.02
57.28

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

29.20
26.08
23.50
20.13
14.13
11.03
10.09
9.88
9.67

25.80
19.05
11.76
10.15
10.11
10.07
9.94
9.83
9.72

50.91
32.40
35.29
33.80
32.64
33.44
32.75
32.47
32.83

wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28

31.25
30.96
23.40
20.36
12.70
10.82
10.46
10.16

30.64
27.48
18.04
13.24
10.54
9.76
9.43
9.42

58.92
48.33
52.18
51.12
50.19
51.16
51.68
51.76

wr 64K  32          10.15         9.38          52.07

ncol=6
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
49.88
50.35
61.89
rd 8K
4
19.31
15.73
70.42
rd 8K
8
29.54
15.46
54.17
rd 8K
12
24.92
14.60
52.71
rd 8K
16
18.68
14.61
56.99
rd 8K
20
15.69
14.70
57.92
rd 8K
24
14.35
14.25
58.55
rd 8K
28
13.72
14.37
64.50
rd 8K
32
13.22
14.01
69.55
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

51.32
22.05
31.78
24.48
18.25
14.99
14.10
13.91
13.83

51.01
21.35
24.44
18.00
15.29
13.96
13.50
13.21
13.03

68.33
69.99
52.18
52.02
56.94
57.07
59.24
63.76
68.97

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

30.28
30.66
26.78
24.75
17.64
13.83
12.46
12.12
11.59

26.64
23.11
14.26
12.28
12.05
12.19
11.92
11.76
11.68

51.41
40.38
47.40
40.91
39.82
39.80
40.12
39.51
39.90

wr
wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

31.84
35.90
27.74
24.35
16.94
13.23
12.41
12.33
12.42

31.65
32.43
21.23
16.11
12.93
11.67
11.28
11.12
11.07

59.75
60.71
60.05
58.89
58.17
58.94
59.45
59.92
60.53

ncol=8
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
50.23
52.73
63.73
rd 8K
4
22.11
18.77
97.59
rd 8K
8
37.18
20.41
73.04
rd 8K
12
34.73
18.69
68.83
rd 8K
16
31.89
19.14
73.87
rd 8K
20
32.94
19.23
71.97
rd 8K
24
29.60
18.47
78.25
rd 8K
28
26.86
18.36
84.50
rd 8K
32
27.77
18.35
92.80
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

54.43
25.55
38.80
36.89
35.23
34.16
26.38
26.97
27.71

54.49
25.27
31.27
24.66
20.86
19.31
18.23
17.27
17.11

69.29
96.94
75.27
70.13
76.13
74.60
77.41
85.34
91.13

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

31.88
37.65
35.10
34.83
34.02
35.19
29.97
25.96
27.28

28.23
29.55
18.50
16.47
16.24
16.21
15.86
15.65
15.76

51.35
51.04
50.55
53.69
51.81
52.72
51.94
53.12
54.75

wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28

34.78
44.33
37.35
36.98
36.21
35.66
25.66
26.08

33.18
41.04
27.40
22.09
17.95
16.13
15.09
14.48

59.02
82.36
79.19
76.88
75.91
77.05
76.99
77.44

wr 64K  32          27.28         14.25         77.34

ncol=10
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
48.80
51.59
62.81
rd 8K
4
24.16
21.57
117.62
rd 8K
8
38.13
27.82
93.91
rd 8K
12
39.76
23.05
90.02
rd 8K
16
31.26
22.74
93.77
rd 8K
20
26.40
23.42
93.25
rd 8K
24
24.90
23.88
99.14
rd 8K
28
23.18
23.41
108.74
rd 8K
32
22.62
23.44
116.11
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

53.67
27.51
39.29
36.53
29.90
25.49
23.27
22.60
23.16

53.77
26.85
33.77
27.11
24.67
22.64
22.02
21.85
21.85

65.52
117.28
92.86
89.17
92.73
94.47
99.08
105.91
116.61

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

32.59
47.30
38.72
37.29
30.84
24.92
22.46
20.70
20.29

29.32
36.68
23.47
21.00
21.45
20.69
20.50
20.27
20.12

49.26
68.97
67.70
67.98
70.12
68.09
67.00
67.52
67.90

wr
wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

34.86
52.01
40.31
34.28
29.15
24.14
21.09
20.60
21.10

34.26
48.49
31.56
24.59
21.03
19.20
18.28
17.99
17.96

58.54
102.68
97.68
96.16
94.36
95.28
95.81
96.05
94.74

ncol=12
concurrent
UFS
UFS+log
VxFS
op iosize files
(MB/sec)
(MB/sec)
(MB/sec)

rd 8K
1
48.47
51.57
62.47
rd 8K
4
25.58
24.00
130.71
rd 8K
8
41.77
31.80
114.51
rd 8K
12
43.72
28.07
108.65
rd 8K
16
36.03
27.21
112.53
rd 8K
20
32.36
27.09
111.72
rd 8K
24
29.90
28.11
117.76
rd 8K
28
28.06
28.25
123.34
rd 8K
32
27.14
27.72
129.29
rd
rd
rd
rd
rd
rd
rd
rd
rd

64K
64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28
32

53.09
29.67
41.36
45.44
34.85
30.92
28.24
27.86
27.80

50.92
28.67
36.46
33.43
28.94
26.60
26.80
26.25
25.93

69.83
130.62
110.98
108.79
113.01
113.55
118.10
124.67
137.68

wr
wr
wr
wr
wr
wr
wr
wr
wr

8K
8K
8K
8K
8K
8K
8K
8K
8K

1
4
8
12
16
20
24
28
32

32.92
53.43
45.39
40.32
36.13
30.79
28.30
25.88
24.28

29.89
41.33
29.27
25.63
25.18
24.91
24.21
24.07
23.89

51.14
81.69
81.74
85.77
81.68
82.98
84.54
83.23
83.54

wr
wr
wr
wr
wr
wr
wr
wr

64K
64K
64K
64K
64K
64K
64K
64K

1
4
8
12
16
20
24
28

36.10
58.78
44.73
42.01
34.60
29.36
26.39
26.01

33.92
55.19
36.45
29.12
25.02
22.76
22.10
21.55

60.25
115.89
116.04
113.19
113.85
112.70
112.68
111.73

wr 64K  32          25.79         21.26         111.88

ncol=16
op   iosize  concurrent    UFS       UFS+log   VxFS
             files         (MB/sec)  (MB/sec)  (MB/sec)

rd   8K      1     48.16     52.08     62.62
rd   8K      4     28.75     28.93    147.80
rd   8K      8     47.40     38.45    144.60
rd   8K     12     54.87     34.81    143.59
rd   8K     16     58.43     32.50    147.17
rd   8K     20     56.32     34.07    149.84
rd   8K     24     55.51     34.35    156.39
rd   8K     28     59.65     33.74    169.07
rd   8K     32     61.39     34.06    172.52
rd   64K     1     52.37     53.16     69.83
rd   64K     4     33.04     32.50    157.80
rd   64K     8     49.04     42.48    150.38
rd   64K    12     56.88     46.04    146.61
rd   64K    16     60.27     39.63    148.14
rd   64K    20     57.14     37.22    149.18
rd   64K    24     62.29     35.92    152.59
rd   64K    28     61.87     35.69    169.03
rd   64K    32     61.10     34.94    175.68
wr   8K      1     33.10     29.74     50.61
wr   8K      4     62.68     48.01    102.48
wr   8K      8     53.40     36.04    107.43
wr   8K     12     54.94     31.21    103.43
wr   8K     16     57.86     30.26    106.73
wr   8K     20     59.07     30.17    107.25
wr   8K     24     59.65     29.83    108.08
wr   8K     28     63.43     29.74    106.18
wr   8K     32     67.17     28.87    105.40
wr   64K     1     34.81     34.48     59.48
wr   64K     4     65.94     63.78    138.66
wr   64K     8     55.56     44.11    153.11
wr   64K    12     57.30     38.12    153.44
wr   64K    16     58.82     34.92    147.69
wr   64K    20     59.56     31.95    149.10
wr   64K    24     64.28     29.95    150.04
wr   64K    28     65.34     28.50    148.29
wr   64K    32     67.78     27.61    148.98

ncol=20
op   iosize  concurrent    UFS       UFS+log   VxFS
             files         (MB/sec)  (MB/sec)  (MB/sec)

rd   8K      1     48.19     50.99     62.38
rd   8K      4     33.96     33.63    143.10
rd   8K      8     47.99     44.14    187.71
rd   8K     12     54.89     52.25    184.14
rd   8K     16     57.78     45.74    187.47
rd   8K     20     48.42     43.39    189.58
rd   8K     24     46.50     42.88    192.62
rd   8K     28     46.82     43.68    200.59
rd   8K     32     44.31     43.87    207.19
rd   64K     1     51.91     52.38     69.43
rd   64K     4     37.63     37.03    161.09
rd   64K     8     48.77     44.46    185.04
rd   64K    12     56.22     51.10    184.53
rd   64K    16     55.57     48.27    185.77
rd   64K    20     47.35     42.93    187.61
rd   64K    24     44.54     42.57    196.41
rd   64K    28     45.12     42.44    200.61
rd   64K    32     44.47     42.49    209.23
wr   8K      1     32.21     29.61     51.04
wr   8K      4     68.78     52.06    131.08
wr   8K      8     61.86     43.97    158.77
wr   8K     12     55.88     38.19    142.65
wr   8K     16     53.35     35.79    148.70
wr   8K     20     48.05     35.03    139.58
wr   8K     24     46.18     34.18    147.79
wr   8K     28     45.40     33.43    144.19
wr   8K     32     41.90     33.08    145.78
wr   64K     1     35.53     34.74     58.87
wr   64K     4     73.58     67.02    142.74
wr   64K     8     63.40     51.83    176.12
wr   64K    12     55.97     43.35    180.47
wr   64K    16     52.48     41.03    185.46
wr   64K    20     47.03     37.83    191.01
wr   64K    24     43.97     36.19    187.20
wr   64K    28     44.72     35.09    186.89
wr   64K    32     42.37     34.74    185.61

ncol=24
op   iosize  concurrent    UFS       UFS+log   VxFS
             files         (MB/sec)  (MB/sec)  (MB/sec)

rd   8K      1     47.18     47.83     62.43
rd   8K      4     37.93     37.29    146.35
rd   8K      8     54.27     49.39    194.11
rd   8K     12     63.39     59.35    210.73
rd   8K     16     69.64     56.89    212.29
rd   8K     20     73.49     50.96    212.25
rd   8K     24     68.75     51.69    217.29
rd   8K     28     64.74     49.54    205.28
rd   8K     32     70.51     53.14    207.13
rd   64K     1     51.64     51.50     70.87
rd   64K     4     41.25     41.42    152.33
rd   64K     8     55.26     48.86    204.49
rd   64K    12     66.25     55.34    217.05
rd   64K    16     72.13     59.21    212.12
rd   64K    20     72.70     50.70    215.03
rd   64K    24     67.95     49.07    218.46
rd   64K    28     67.77     49.24    213.94
rd   64K    32     63.71     50.70    212.05
wr   8K      1     32.90     30.48     49.55
wr   8K      4     70.93     56.16    136.33
wr   8K      8     71.02     48.38    166.23
wr   8K     12     71.75     41.93    179.65
wr   8K     16     69.73     38.37    179.83
wr   8K     20     73.09     36.27    167.03
wr   8K     24     72.48     36.12    169.23
wr   8K     28     69.01     35.39    172.48
wr   8K     32     75.22     35.39    167.54
wr   64K     1     35.59     34.11     57.99
wr   64K     4     76.93     70.95    146.73
wr   64K     8     72.84     58.85    183.29
wr   64K    12     73.16     50.62    194.23
wr   64K    16     73.24     47.49    202.09
wr   64K    20     74.03     43.33    203.34
wr   64K    24     69.55     43.07    207.59
wr   64K    28     72.90     41.11    207.22
wr   64K    32     65.66     40.13    202.88


VERITAS Software Corporation
Corporate Headquarters
350 Ellis Street
Mountain View, CA 94043
650-527-8000 or 800-327-2232

For additional information about VERITAS Software, its products, or the location of an office near you, please call our corporate headquarters or visit our Web site at www.veritas.com
sales@veritas.com

Copyright 2002 VERITAS Software Corporation. All Rights Reserved. VERITAS, VERITAS Software, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation in the US and/or other countries. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies. Specifications and product offerings subject to change without notice. January 2002. 9020167399
