
DATA COMPRESSION

Performed upon request
Does not affect applications
No inode number change
Supported on all Solaris and Linux platforms, except with the SF Basic license
Reads cause uncompression in memory, not on disk
Writes cause uncompression on disk
Compress files that are not accessed often and where performance is not critical
Supported with VxFS disk layout version 9
Large files are compressed as several chunks; the default chunk size is 1 MB
Writes/reads only uncompress the chunk where the I/O happens
Single compression algorithm in SFHA 6.0
Two new extent types: compressed / shared-compressed
Compressed extents have two sizes:
-Physical size on disk
-Logical uncompressed size
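For example, with the default 1 MB chunk size, reading a few bytes in the middle of a large compressed file uncompresses only the single 1 MB chunk containing that offset, and only in memory; the rest of the file stays compressed on disk.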
PERFORMING DATA COMPRESSION
To compress files or directory trees:
vxcompress [-v] [-r] [-t alg-strength] [-b bsize] \
[-n numthreads] {file_or_dirname ... | -}
vxcompress /app/largefile
To uncompress files or directory trees:
vxcompress -u [-v] [-r] [-n numthreads] \
{file_or_dirname ... | -}
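A minimal sketch combining the options above (the directory path and thread count are assumptions for illustration):
vxcompress -r -v -n 4 /app/archive    # recursively compress a directory tree using 4 threads
vxcompress -u -r /app/archive         # recursively uncompress the same tree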
DISPLAYING INFORMATION ABOUT DATA COMPRESSION
To report the compression of a file:
vxcompress -L filename
To report detailed information for all data extents:
fsmap -p filename
To check the compression of the entire file system:
fsadm -S compressed mount_point
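For example, to check the file compressed earlier and the overall savings on its file system (assuming /app is the mount point, which is not stated in the notes):
vxcompress -L /app/largefile          # per-file compression report
fsadm -S compressed /app              # file-system-wide compression report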
COMPRESSION ATTRIBUTE
-Compression algorithm
-Compression strength (1-9)
-Compression block size
A compressed file can have only one compression attribute.
Only uncompressing the file removes the compression attribute
The compression attribute can be used by backup programs to preserve file compression
DATA COMPRESSION AND BACKUP
The vxdump command:
-Uncompresses compressed extents as it encounters them
-Compression is not preserved across a backup or restore operation
Raw volume backups:
-Preserve compression
NBU FlashBackup:
-Does not support VxFS disk layout version 8 or 9
-Does not support compression
RECOMMENDATIONS FOR DATA COMPRESSION WITH ORACLE
-Do not compress database control files
-Do not compress files belonging to TEMPORARY tablespaces
-Do not compress files belonging to the SYSTEM and SYSAUX tablespaces
-Monitor the I/O of compressed files
Good candidates are (see the example after this list):
-Archive logs
-Read-only tablespaces
-Infrequently accessed datafiles
-Unstructured data stored in database tables
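A minimal sketch for the archive-log case (the /oradata/arch path is hypothetical):
vxcompress -r -v /oradata/arch        # recursively compress an archive-log directory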
FILE SNAP FEATURE
Clones files: two files share a single copy of the data
Benefits:
-Versioning, backup, restore
-No I/O impact during creation
-Unlimited number of FileSnaps
-No negative impact on I/O during sharing
-A single copy of a shared data page services multiple read requests from FileSnaps and the original file
USING FILESNAP
To clone one or more files:
vxfilesnap [-i] [-p] source destination
-p preserve mode, ownership and time stamp
-i prompt before overwriting an existing destination file
To observe size of shared and private data in the file:
fsmap -cH filename
To observe storage savings due to shared data:
/opt/VRTS/bin/fsadm [-H] -S shared mount_point
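An illustrative sketch reusing the /app file system from the compression examples (the file names are assumptions):
vxfilesnap -p /app/largefile /app/largefile.snap    # clone the file, preserving mode, ownership and timestamps
fsmap -cH /app/largefile.snap                       # shared vs private data in the clone
/opt/VRTS/bin/fsadm -H -S shared /app               # file-system-wide savings from shared data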

THE LAZY COPY-ON-WRITE IMPLEMENTATION


Keeps the file in memory if many users access the file constantly
COPY-ON-WRITE
1-Allocate a new block
2-Read the old data
3-Write the old data to the new block synchronously
4-Write the new data to the new block
LAZY COPY-ON-WRITE
Disabled by default
Eliminates reading and copying the old data in the copy-on-write process
Allocates a new block for the write as long as the new data covers the entire block
Must be enabled for the file system using:
vxtunefs -s -o lazy_copyonwrite=1 mount_point
Not safe in the event of a server crash
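For example, on the /app file system used earlier (the mount point is an assumption; running vxtunefs with only a mount point prints the current tunable values):
vxtunefs -s -o lazy_copyonwrite=1 /app    # enable lazy copy-on-write
vxtunefs /app | grep lazy_copyonwrite     # verify the setting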
BEST PRACTICES
-Keep application data separate from the boot image
-Allocate a single extent to the master boot image file
FILESNAP OVER NFS
Exported VxFS file system: /app
Mounted at client: /app
On the server: vxfilesnap -p /app/file1 /app/file2
From the NFS client: ln /app/file1 /app/file2::snap:vxfs
DEDUPLICATION
Requires VxFS disk layout version 9
Periodic and incremental
Enterprise license only
Requires the VRTSfsadv package
Shared page cache optimization improves read I/O performance for concurrent reads to shared regions
One copy of the data referenced from many different places
The fsdedupschd daemon:
Starts scheduled deduplication jobs
Is started and stopped manually
The fsdedupadm command:
Used to enable, disable, schedule, and query the status of deduplication
The fsdedup process:
Responsible for deduplicating the file system
Uses checkpoints, the File Change Log, and block sharing from the file system
Started manually or on a schedule
CONFIGURING DEDUPLICATION
Enable deduplication:
fsdedupadm enable -c chunk_size [-q] mount_point
-q quiet mode
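For example (the 4096-byte chunk size is an assumption, as is giving it in bytes; /fs1 matches the scheduling examples below):
fsdedupadm enable -c 4096 /fs1        # enable deduplication with an illustrative 4 KB chunk size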
Scheduling deduplication:
fsdedupadm setschedule "hours days dedup_run" mount_point
fsdedupadm setschedule "0 */2" /fs1
fsdedupadm setschedule "0,6,12,18 * 4" /fs1

Start deduplication:
fsdedupadm start [-q] mount_point
STARTING THE DEDUPLICATION SCHEDULER DAEMON
Solaris, Linux, AIX:
start: /etc/init.d/fsdedupschd start
stop: /etc/init.d/fsdedupschd stop

HP-UX:
start: /sbin/init.d/fsdedupschd start
stop: /sbin/init.d/fsdedupschd stop
DISPLAYING DEDUPLICATION
Initiating a dry run:
fsdedupadm dryrun [-o threshold=percentage] mount_point
Displaying deduplication configuration:
fsdedupadm list all | mount_point
Displaying deduplication status:
fsdedupadm status all | mount_point
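An illustrative sketch reusing the /fs1 mount point from the scheduling examples (the threshold value is an assumption):
fsdedupadm dryrun -o threshold=60 /fs1    # dry run with an illustrative 60% savings threshold
fsdedupadm status /fs1                    # query deduplication status for /fs1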
UNCONFIGURING DEDUPLICATION:
Disabling deduplication:
fsdedupadm disable [-q] mount_point
Stopping deduplication:
fsdedupadm stop [-q] mount_point
Removing the deduplication configuration:
fsdedupadm remove mount_point
CONFIGURATION SETTINGS AND LOGS FOR DEDUPLICATION
Chunk size:
default: min (4k, block size)
min value: file system block size
max value: 128k
Configuration file:
mount_point/lost+found/dedup/local_config
Log files:
mount_point/lost+found/dedup
LIMITATIONS OF USING DEDUPLICATION
-Shared extents cannot be compressed
-Deduplication skips compressed files
-You can use vxfilesnap on a compressed file (extents become both compressed and shared)
-Restoring the file system results in loss of deduplication
-Compression and deduplication result in higher fragmentation
