
Oracle Solaris 10 Syntax     Oracle Solaris 11 Syntax    Description
lucreate -n newBE            beadm create newBE          Create a new BE
lustatus                     beadm list                  Display BE information
luactivate newBE             beadm activate newBE        Activate a BE
ludelete BE                  beadm destroy BE            Destroy an inactive BE
luupgrade or patchadd        pkg update                  Upgrade or update a BE

How to Update Your ZFS Boot Environment


To update a ZFS boot environment, use the pkg update command. If you update a ZFS BE by using pkg update, a new BE is created and automatically activated. If the updates to the existing BE are minimal, a backup BE is created before the updates are applied. The pkg update command displays whether a backup BE or a new BE is created.
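
Before applying anything, you can preview the operation. This dry run is not part of the original procedure, but pkg update's -n (plan only) and -v (verbose) options can be used to see what would change, including whether a new BE would be created:

# pkg update -nv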
1. Display your existing BE information.

# beadm list
BE      Active Mountpoint Space  Policy Created
solaris NR     /          12.24G static 2011-10-04 09:42

In the above output, NR means the BE is active now and will be the active BE on reboot.

2. Update your BE.

# pkg update
Packages to remove:       117
Packages to install:      186
Packages to update:       315
Create boot environment:  Yes

DOWNLOAD        PKGS          FILES          XFER (MB)
Completed       618/618       29855/29855    600.7/600.7
.
.
.

If your existing BE name is solaris, a new BE, solaris1, is created and automatically activated after the pkg update operation is complete.

3. Reboot the system to complete the BE activation. Then, confirm your BE status.

# init 6
.
.
.

# beadm list
BE       Active Mountpoint Space  Policy Created
solaris  NR     /          12.24G static 2011-10-04 09:42
solaris1 -      -          6.08G  static 2011-10-11 10:42

4. If an error occurs when booting the new BE, activate and boot the previous BE.

# beadm activate solaris

You can also mount an inactive BE to inspect its contents, for example to check whether a particular package is installed in the old BE:

root@Unixarena-SOL11:~# mkdir /old-be
root@Unixarena-SOL11:~# beadm mount solaris /old-be
root@Unixarena-SOL11:~# pkg -R /old-be list diffstat
pkg list: no packages matching 'diffstat' installed
root@Unixarena-SOL11:~#
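
When you are finished inspecting the old BE, it can be unmounted again (a sketch, assuming the mount shown above):

root@Unixarena-SOL11:~# beadm unmount solaris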

Rollback operation

You can roll back Solaris 11 to the old boot environment at any time by using the command below.

root@Unixarena-SOL11:~# beadm activate solaris
root@Unixarena-SOL11:~# beadm list
BE      Active Mountpoint Space   Policy Created
--      ------ ---------- -----   ------ -------
UA-NEW  N      /          341.97M static 2013-07-16 23:31
solaris R      /old-be    5.26G   static 2013-02-12 03:26
root@Unixarena-SOL11:~#

N - Active now
R - Active upon reboot

Displaying ZFS Share Information


As in previous releases, you can display the value of the sharenfs property by using the zfs get sharenfs syntax or the zfs get all syntax.

# zfs get sharenfs rpool/fs1
NAME       PROPERTY  VALUE  SOURCE
rpool/fs1  sharenfs  on     local

The new share information is available by using the zfs get share command.

# zfs get share rpool/fs1
NAME       PROPERTY  VALUE                                    SOURCE
rpool/fs1  share     name=rpool_fs1,path=/rpool/fs1,prot=nfs  local

The new share information is not available in the zfs get all command syntax.
If you create a share on a newly created ZFS file system, use the zfs get share command to identify the share name or the share path. For example:

# zfs create -o mountpoint=/data -o sharenfs=on rpool/data
# zfs get share rpool/data
NAME        PROPERTY  VALUE                          SOURCE
rpool/data  share     name=data,path=/data,prot=nfs  local

ZFS Sharing Inheritance


Inheritance of the zfs share property and the sharenfs or sharesmb property works as follows:

The zfs share property is not inherited from a parent to a descendent file system. In addition, the zfs set share command does not support the -r option to set a ZFS property on descendent file systems.
If the sharenfs or the sharesmb property is set on a parent file system, the sharenfs or the sharesmb property is also set on the descendent file systems. For example:

# zfs create -o mountpoint=/ds rpool/ds
# zfs set share=name=ds,path=/ds,prot=nfs rpool/ds
name=ds,path=/ds,prot=nfs
# zfs set sharenfs=on rpool/ds
# cat /etc/dfs/sharetab
/ds     rpool_ds        nfs     sec=sys,rw
# zfs create rpool/ds/ds1
# zfs get sharenfs rpool/ds/ds1
NAME          PROPERTY  VALUE  SOURCE
rpool/ds/ds1  sharenfs  on     inherited from rpool/ds

Any existing child file system also inherits the parent's sharenfs or sharesmb property value.
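For example, assuming a child file system rpool/ds/ds2 already existed when sharenfs was set on rpool/ds above (a hypothetical sketch, not from the original text), it would report the same inherited value:

# zfs get sharenfs rpool/ds/ds2
NAME          PROPERTY  VALUE  SOURCE
rpool/ds/ds2  sharenfs  on     inherited from rpool/ds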
If the sharenfs or the sharesmb property is set to off on the parent file system, the sharenfs or sharesmb property is set to off on the descendent file systems. For example:

# zfs set sharenfs=off rpool/ds
# zfs get -r sharenfs rpool/ds
NAME          PROPERTY  VALUE  SOURCE
rpool/ds      sharenfs  off    local
rpool/ds/ds1  sharenfs  off    inherited from rpool/ds
rpool/ds/ds2  sharenfs  off    inherited from rpool/ds
rpool/ds/ds3  sharenfs  off    inherited from rpool/ds

Changing a ZFS Share


The name and protocol properties must be specified when you change share property values.
For example, create an NFS share like this:

# zfs create -o mountpoint=/ds -o sharenfs=on rpool/ds
# zfs set share=name=ds,path=/ds,prot=nfs rpool/ds
name=ds,path=/ds,prot=nfs

Then, add the SMB protocol:

# zfs set share=name=ds,prot=nfs,prot=smb rpool/ds
name=ds,path=/ds,prot=nfs,prot=smb

Remove the SMB protocol:

# zfs set -c share=name=ds,prot=smb rpool/ds
name=ds,path=/ds,prot=nfs

Removing a ZFS Share


You can remove an existing share by using the zfs set -c command. For example, identify the share name.

# zfs get share
NAME      PROPERTY  VALUE                      SOURCE
rpool/ds  share     name=ds,path=/ds,prot=nfs  local

Then, remove the share by identifying the share name. For example:

# zfs set -c share=name=ds rpool/ds
share 'ds' was removed.

If a share is established by creating a default share when the file system is created, then the share can be removed by the share name or the share path. For example, this share is given a default share name, data, and a default share path, /data.

# zfs create -o mountpoint=/data -o sharenfs=on rpool/data
# zfs get share rpool/data
NAME        PROPERTY  VALUE                          SOURCE
rpool/data  share     name=data,path=/data,prot=nfs  local

Remove the share by identifying the share name. For example:

# zfs set -c share=name=data rpool/data
share 'data' was removed.

Remove the share by identifying the share path. For example:

# zfs set -c share=path=/data rpool/data
share 'data' was removed.

ZFS File Sharing Within a Non-Global Zone


In previous Solaris releases, you could not create and publish NFS or SMB shares in an Oracle Solaris non-global zone. In this Solaris release, you can create and publish NFS shares by using the zfs set share command and the legacy share command in a non-global zone.

If a ZFS file system is mounted and available in a non-global zone, it can be shared in that zone.
A file system can be shared in the global zone if it is not mounted in a non-global zone or is not shared to a non-global zone.
If a ZFS file system's mountpoint property is set to legacy, the file system can be shared by using the legacy share command.

For example, the /export/home/data and /export/home/data1 file systems are available in the zfszone.

zfszone# share -F nfs /export/home/data
zfszone# cat /etc/dfs/sharetab
/export/home/data       export_home_data        nfs     sec=sys,rw
zfszone# zfs set share=name=data1,path=/export/home/data1,prot=nfs tank/zones/export/home/data1
zfszone# zfs set sharenfs=on tank/zones/export/home/data1
zfszone# cat /etc/dfs/sharetab
/export/home/data1      data1   nfs     sec=sys,rw

New ZFS Sharing and Legacy Share Command Summary


This table describes the new ZFS file system sharing syntax and the legacy sharing syntax.

Table 6-5 ZFS Sharing and Legacy Share Command Summary

Task: Share a ZFS file system over NFS.
New share syntax:
1. Create the NFS share.
   # zfs set share=name=fs1,path=/fs1,prot=nfs tank/fs1
2. Set the sharenfs property to on.
   # zfs set sharenfs=on tank/fs1
Legacy share syntax: Set the sharenfs property to on.
   # zfs set sharenfs=on tank/fs1

Task: Share a ZFS file system over SMB.
New share syntax:
1. Create the SMB share.
   # zfs set share=name=fs2,path=/fs2,prot=smb tank/fs2
2. Set the sharesmb property to on.
   # zfs set sharesmb=on tank/fs2
Legacy share syntax: Set the sharesmb property to on.
   # zfs set sharesmb=on tank/fs2

Task: Unshare the ZFS file system.
New share syntax: Set the sharenfs property to off, or the sharesmb property to off.
   # zfs set sharenfs=off tank/fs1
   # zfs set sharesmb=off tank/fs2
Legacy share syntax: Set the sharenfs property to off, or the sharesmb property to off.
   # zfs set sharenfs=off tank/fs1
   # zfs set sharesmb=off tank/fs2

Task: Add share options to an existing share.
New share syntax: Reset the share with the additional property.
   # zfs set share=name=fs1,prot=nfs,nosuid rpool/fs1
   name=fs1,path=/rpool/fs1,prot=nfs,nosuid=true
Legacy share syntax: Reset the sharenfs property.
   # zfs set sharenfs=nosuid tank/fs1

Task: Create a permanent NFS share.
New share syntax: Set the sharenfs property to on.
   # zfs set sharenfs=on tank/fs1
Legacy share syntax: Set the sharenfs property to on.
   # zfs set sharenfs=on tank/fs1
   For legacy share command syntax, you had to edit the /etc/dfs/dfstab file to create a permanent share. The /etc/dfs/dfstab file is not available in this Solaris release.

Task: Create a permanent SMB share.
New share syntax: Set the sharesmb property to on.
   # zfs set sharesmb=on tank/fs2
Legacy share syntax: Set the sharesmb property to on.
   # zfs set sharesmb=on tank/fs2
   Or, create the SMB share with sharemgr. The sharemgr feature is not available in this Solaris release.
   # sharemgr create -P smb fssmb
   # sharemgr add-share -r fs-smb -s /tank/fs2 fssmb

Troubleshooting ZFS Share Problems

You can't share a parent file system if a subdirectory or descendent file system is already shared.

# share -F nfs /rpool/fs2/dir1
# share -F nfs /rpool/fs2/dir2
# share -F nfs /rpool/fs2
share: NFS: descendant of path is shared: /rpool/fs2/dir1 in rpool_fs2_dir2

Renaming a share that is created with the zfs set share command is not supported.
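As a workaround (a sketch based on the commands shown earlier in this document, not from the original text; the name newds is hypothetical), you can remove the share and re-create it under the new name:

# zfs set -c share=name=ds rpool/ds
# zfs set share=name=newds,path=/ds,prot=nfs rpool/ds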

You can create a file system share with both NFS and SMB protocols by using the zfs set share command. For example:

# zfs set share=name=ds,path=/ds,prot=nfs,prot=smb rpool/ds
name=ds,path=/ds,prot=nfs,prot=smb

If you want to create a file system share with both NFS and SMB protocols by using the legacy share command, you must specify the command twice. For example:

# share -F nfs /rpool/ds
# share -F smb /rpool/ds
# zfs get share rpool/ds
name=rpool_ds,path=/rpool/ds,prot=nfs,prot=smb

A share path or description that includes a comma (,) must be quoted with double quotes.
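For illustration only (a hypothetical sketch, not from the original text; the file system and path are made up), a path containing a comma would be quoted like this:

# zfs set share=name=data,path="/data,archive",prot=nfs rpool/data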

CIFS Sharing on Solaris 11


By Paul Johnson-Oracle on Feb 20, 2012

Things have changed since Solaris 10 (and Solaris 11 Express too!) on how to properly set up a CIFS server on your
Solaris 11 machine so that Windows clients can access files. There's some documentation on the changes here, but
let me share the full instructions from beginning to end.

hostname: adrenaline
username: paulie
poolname: pool
mountpnt: /pool
share: mysharename

Install SMB server package

[paulie@adrenaline ~]$ sudo pkg install service/file-system/smb

Create the name of the share

[paulie@adrenaline ~]$ sudo zfs set share=name=mysharename,path=/pool,prot=smb pool

Turn on sharing using zfs

[paulie@adrenaline ~]$ sudo zfs set sharesmb=on pool

Turn on your smb server

[paulie@adrenaline ~]$ sudo svcadm enable -r smb/server

Check that the share is active

[paulie@adrenaline ~]$ sudo smbadm show-shares adrenaline
Enter password:
c$                  Default Share
IPC$                Remote IPC
mysharename
3 shares (total=3, read=3)

Enable an existing UNIX user for CIFS sharing (you may have to reset the password again, e.g. `passwd paulie`)

[paulie@adrenaline ~]$ sudo smbadm enable-user paulie

Edit PAM to allow for SMB authentication (add the line to the end of the file).

Solaris 11 GA only:

[paulie@adrenaline ~]$ vi /etc/pam.conf

other   password required       pam_smb_passwd.so.1     nowarn

Solaris 11 U1 or later:

[paulie@adrenaline ~]$ vi /etc/pam.d/other

password required       pam_smb_passwd.so.1     nowarn

Try to mount the share on your Windows machine

\\adrenaline\mysharename

Recovering Passwords in Solaris 11


By Paul Johnson-Oracle on Feb 11, 2013

About once a year, I'll find a way to lock myself out of a Solaris system. Here's how to get out of this scenario. You'll
need a Solaris 11 Live CD or Live USB stick.

Boot up from the Live CD/USB

Select the 'Text Console' option from the GRUB menu

Login to the solaris console using the username/password of jack/jack

Switch to root

$ sudo su
Password: jack

Mount the solaris boot environment in a temporary directory

# beadm mount solaris /a

Edit the shadow file

# vi /a/etc/shadow

Find your username and remove the password hash

Convert
username:iEwei23SamPleHashonf0981:15746::::::17216
to
username::15746::::::17216

Allow empty passwords at login

# vi /a/etc/default/login

Switch this line


PASSREQ=YES
to
PASSREQ=NO

Update the boot archive

# bootadm update-archive -R /a

Reboot and remove the Live CD/USB from system

# reboot
If prompted for a password, hit return since this has now been blanked.
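
Once you are back in, you will probably want to re-secure the account. This is not part of the original procedure, but a minimal follow-up (assuming the same username edited above) is to set a new password and turn the password requirement back on:

# passwd username
# vi /etc/default/login      (set PASSREQ=YES again)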

Configuring a Basic LDAP Server + Client in Solaris 11


By Paul Johnson-Oracle on Feb 21, 2013

Configuring the Server


Solaris 11 ships with OpenLDAP to use as an LDAP server. To configure, you're going to need a simple slapd.conf file
and an LDIF schema file to populate the database. First, let's look at the slapd.conf configuration:

# cat /etc/openldap/slapd.conf
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema

pidfile         /var/openldap/run/slapd.pid
argsfile        /var/openldap/run/slapd.args

database        bdb
suffix          "dc=buford,dc=hillvalley"
rootdn          "cn=admin,dc=buford,dc=hillvalley"
rootpw          secret
directory       /var/openldap/openldap-data
index           objectClass eq

You may want to change the lines suffix and rootdn to better represent your network naming schema. My LDAP
server's hostname is buford and domain name is hillvalley. You will need to add additional domain components (dc=)
if the name is longer. This schema assumes the LDAP manager will be called admin. Its password is 'secret'. This is
in clear-text just as an example, but you can generate a new one using slappasswd:

[paulie@buford ~]$ slappasswd


New password:
Re-enter new password:
{SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G
Replace 'secret' with the entire hash, {SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G, for the rootpw line. Now, let's
create a basic schema for my network.

# cat /etc/openldap/schema/hillvalley.ldif
dn: dc=buford,dc=hillvalley
objectClass: dcObject
objectClass: organization
o: bufford.hillvalley
dc: buford

dn: ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=world,ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: posixGroup
cn: world
gidNumber: 1001

dn: uid=paulie,ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: Paul Johnson
uid: paulie
uidNumber: 1001
gidNumber: 1001
homeDirectory: /paulie/
loginShell: /usr/bin/bash
userPassword: secret
I've created a single group, world, and a single user, paulie. Both share the uid and gid of 1001. LDAP supports lots
of additional variables for configuring a user and group account, but I've kept it basic in this example. Once again, be
sure to change the domain components to match your network. Feel free to also change the user and group details.
I've left the userPassword field in clear-text as 'secret'. The same slappasswd method above applies here as well. It's
time to turn on the server, but first, let's change some ownership permissions:

[paulie@buford ~]$ sudo chown -R openldap:openldap /var/openldap/


... and now ...

[paulie@buford ~]$ sudo svcadm enable ldap/server


Check that it worked:

[paulie@buford ~]$ svcs | grep ldap
online         12:13:49 svc:/network/ldap/server:openldap_24

Neat, now let's add our schema file to the database:

[paulie@buford ~]$ ldapadd -D "cn=admin,dc=buford,dc=hillvalley" -f /etc/openldap/schema/hillvalley.ldif
Enter bind password:
adding new entry dc=buford,dc=hillvalley
adding new entry ou=groups,dc=buford,dc=hillvalley
adding new entry ou=users,dc=buford,dc=hillvalley
adding new entry cn=world,ou=groups,dc=buford,dc=hillvalley
adding new entry uid=paulie,ou=users,dc=buford,dc=hillvalley

That's it! Our LDAP server is up, populated, and ready to authenticate against.
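As a quick sanity check (a sketch, not part of the original write-up; it assumes the Solaris native ldapsearch client and the example base DN), you can query the directory directly:

[paulie@buford ~]$ ldapsearch -h localhost -b "dc=buford,dc=hillvalley" "(objectclass=posixAccount)" uid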
Configuring the Client
I'm going to turn my example server, buford.hillvalley, into an LDAP client as well. To do this, we need to run the
`ldapclient` command to map our new user and group data:

[paulie@buford ~]$ ldapclient manual \


-a credentialLevel=proxy \
-a authenticationMethod=simple \
-a defaultSearchBase=dc=buford,dc=hillvalley \
-a domainName=buford.hillvalley \
-a defaultServerList=192.168.1.103 \
-a proxyDN=cn=admin,dc=buford,dc=hillvalley \
-a proxyPassword=secret \
-a attributeMap=group:gidnumber=gidNumber \
-a attributeMap=passwd:gidnumber=gidNumber \
-a attributeMap=passwd:uidnumber=uidNumber \
-a attributeMap=passwd:homedirectory=homeDirectory \
-a attributeMap=passwd:loginshell=loginShell \
-a attributeMap=shadow:userpassword=userPassword \
-a objectClassMap=group:posixGroup=posixgroup \
-a objectClassMap=passwd:posixAccount=posixaccount \
-a objectClassMap=shadow:shadowAccount=posixaccount \
-a serviceSearchDescriptor=passwd:ou=users,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=group:ou=groups,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=shadow:ou=users,dc=buford,dc=hillvalley
As usual, change the host and domain names as well as the IP address held in defaultServerList and the proxyPassword. The command should respond that the system was configured properly; however, additional changes will need to be made if you use DNS for hostname lookups (most people use DNS, so run these commands).

svccfg -s name-service/switch setprop config/host = astring: \"files dns ldap\"
svccfg -s name-service/switch:default refresh
svcadm restart name-service/cache

Now, we need to change how users log in so that the client knows that there is an extra LDAP server to authenticate against. This should not lock out local logins. Examine the two files /etc/pam.d/login and /etc/pam.d/other. Change any instance of

auth required           pam_unix_auth.so.1

to

auth binding            pam_unix_auth.so.1 server_policy

After this line, add the following new line:

auth required           pam_ldap.so.1

That's it! Finally, reboot your system and see if you can login with your newly created user.
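Before rebooting, a quick way to confirm that the client can actually resolve the LDAP user is a name-service lookup (a sketch, assuming the example user paulie):

[paulie@buford ~]$ getent passwd paulie

If the entry defined in hillvalley.ldif comes back, the client-side mapping is working.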
Update: Glenn Faden wrote an excellent guide to configuring OpenLDAP using the native Solaris user/group/role
management system.

Configuring a Basic DNS Server + Client in Solaris 11


By Paul Johnson-Oracle on Mar 04, 2013

Configuring the Server


The default install of Solaris 11 does not come with a DNS server, but this can be added easily through IPS like so:

[paulie@griff ~]$ sudo pkg install service/network/dns/bind


Before enabling this service, the named.conf file needs to be modified to support the DNS structure. Here's what
mine looks like:

[paulie@griff ~]$ cat /etc/named.conf


options {
        directory        "/etc/namedb/working";
        pid-file         "/var/run/named/pid";
        dump-file        "/var/dump/named_dump.db";
        statistics-file  "/var/stats/named.stats";
        forwarders       { 208.67.222.222; 208.67.220.220; };
};

zone "hillvalley" {
type master;
file "/etc/namedb/master/hillvalley.db";
};

zone "1.168.192.in-addr.arpa" {
type master;
file "/etc/namedb/master/1.168.192.db";
};
My forwarders use the OpenDNS servers, so any request that the local DNS server can't process goes through there. I've also set up two zones: hillvalley.db for my forward zone and 1.168.192.db for my reverse zone. We need both for a proper configuration. We also need to create some directories to support this file:

[paulie@griff ~]$ sudo mkdir /var/dump


[paulie@griff ~]$ sudo mkdir /var/stats
[paulie@griff ~]$ sudo mkdir -p /var/run/namedb
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/master
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/working
Now, let's populate the DNS server with a forward and reverse file.
Forward file

[paulie@griff ~]$ cat /etc/namedb/master/hillvalley.db


$TTL 3h
@       IN      SOA     griff.hillvalley. paulie.griff.hillvalley. (
                        2013022744      ;serial (change after every update)
                        3600            ;refresh (1 hour)
                        3600            ;retry (1 hour)
                        604800          ;expire (1 week)
                        38400           ;minimum (1 day)
                        )

hillvalley.     IN      NS      griff.hillvalley.

delorean        IN      A       192.168.1.1     ; Router
biff            IN      A       192.168.1.101   ; NFS Server
griff           IN      A       192.168.1.102   ; DNS Server
buford          IN      A       192.168.1.103   ; LDAP Server
marty           IN      A       192.168.1.104   ; Workstation
doc             IN      A       192.168.1.105   ; Laptop
jennifer        IN      A       192.168.1.106   ; Boxee
lorraine        IN      A       192.168.1.107   ; Boxee

Reverse File

[paulie@griff ~]$ cat /etc/namedb/master/1.168.192.db


$TTL 3h
@       IN      SOA     griff.hillvalley. paulie.griff.hillvalley. (
                        2013022744      ;serial (change after every update)
                        3600            ;refresh (1 hour)
                        3600            ;retry (1 hour)
                        604800          ;expire (1 week)
                        38400           ;minimum (1 day)
                        )

        IN      NS      griff.hillvalley.

1       IN      PTR     delorean.hillvalley.    ; Router
101     IN      PTR     biff.hillvalley.        ; NFS Server
102     IN      PTR     griff.hillvalley.       ; DNS Server
103     IN      PTR     buford.hillvalley.      ; LDAP Server
104     IN      PTR     marty.hillvalley.       ; Workstation
105     IN      PTR     doc.hillvalley.         ; Laptop
106     IN      PTR     jennifer.hillvalley.    ; Boxee
107     IN      PTR     lorraine.hillvalley.    ; Boxee

For reference on how these files work:

paulie is the admin user account name
griff is the hostname of the DNS server
hillvalley is the domain name of the network
I love BTTF
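
Before enabling the service, the configuration and zone files can be sanity-checked with the tools that ship with BIND (a sketch using the paths from this example):

[paulie@griff ~]$ sudo named-checkconf /etc/named.conf
[paulie@griff ~]$ sudo named-checkzone hillvalley /etc/namedb/master/hillvalley.db
[paulie@griff ~]$ sudo named-checkzone 1.168.192.in-addr.arpa /etc/namedb/master/1.168.192.db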

Feel free to tweak this example to match your own network. Finally, enable the DNS service and check that it's online:

[paulie@griff ~]$ sudo svcadm enable dns/server
[paulie@griff ~]$ sudo svcs | grep dns/server
online         22:32:20 svc:/network/dns/server:default

Configuring the Client


We will need the IP address (192.168.1.102), hostname (griff), and domain name (hillvalley) to configure DNS with
these commands:

[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/nameserver = net_address: 192.168.1.102
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/domain = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/search = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/ipnodes = astring: '"files dns"'
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/host = astring: '"files dns"'
Verify the configuration is correct:

[paulie@buford ~]$ svccfg -s network/dns/client listprop config
config                       application
config/value_authorization   astring       solaris.smf.value.name-service.dns.client
config/nameserver            net_address   192.168.1.102
config/domain                astring       hillvalley
config/search                astring       hillvalley

And enable:

[paulie@buford ~]$ sudo svcadm enable dns/client


Now we need to test that the DNS server is working using both forward and reverse DNS lookups:

[paulie@buford ~]$ nslookup lorraine
Server:         192.168.1.102
Address:        192.168.1.102#53

Name:    lorraine.hillvalley
Address: 192.168.1.107

[paulie@buford ~]$ nslookup 192.168.1.1
Server:         192.168.1.102
Address:        192.168.1.102#53

1.1.168.192.in-addr.arpa        name = delorean.hillvalley.

Solaris 11 IPoIB + IPMP


By Paul Johnson-Oracle on Jul 10, 2013

I recently needed to create a two port active:standby IPMP group to be served over Infiniband on Solaris 11. Wow
that's a mouthful of terminology! Here's how I did it:
List available IB links

[root@adrenaline ~]# dladm show-ib
LINK    HCAGUID           PORTGUID          PORT  STATE   PKEYS
net5    21280001CF4C96    21280001CF4C97          up      FFFF
net6    21280001CF4C96    21280001CF4C98          up      FFFF

Partition the IB links. My pkey will be 8001.

[root@adrenaline ~]# dladm create-part -l net5 -P 0x8001 p8001.net5


[root@adrenaline ~]# dladm create-part -l net6 -P 0x8001 p8001.net6
[root@adrenaline ~]# dladm show-part
LINK         PKEY   OVER   STATE     FLAGS
p8001.net5   8001   net5   unknown   ----
p8001.net6   8001   net6   unknown   ----

Create test addresses for the newly created datalinks

[root@adrenaline ~]# ipadm create-ip p8001.net5
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.101 p8001.net5/ipv4
[root@adrenaline ~]# ipadm create-ip p8001.net6
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.102 p8001.net6/ipv4

[root@adrenaline ~]# ipadm show-addr
ADDROBJ           TYPE     STATE   ADDR
p8001.net5/ipv4   static   ok      192.168.1.101/24
p8001.net6/ipv4   static   ok      192.168.1.102/24

Create an IPMP group and add the IB datalinks

[root@adrenaline ~]# ipadm create-ipmp ipmp0


[root@adrenaline ~]# ipadm add-ipmp -i p8001.net5 -i p8001.net6 ipmp0
Set one IB datalink to standby

[root@adrenaline ~]# ipadm set-ifprop -p standby=on -m ip p8001.net6


Assign an IP address to the IPMP group

[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.100/24 ipmp0/v4


That's it! Final checks:

[root@adrenaline ~]# ipadm
NAME               CLASS/TYPE STATE   UNDER   ADDR
ipmp0              ipmp       ok      --      --
   ipmp0/v4        static     ok      --      192.168.1.100/24
p8001.net5         ip         ok      ipmp0   --
   p8001.net5/ipv4 static     ok      --      192.168.1.101/24
p8001.net6         ip         ok      ipmp0   --
   p8001.net6/ipv4 static     ok      --      192.168.1.102/24

[root@adrenaline ~]# ping 192.168.1.100


192.168.1.100 is alive
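
To see the state of the group and which datalink is active versus standby, the IPMP observability command can be used (a sketch, not part of the original post):

[root@adrenaline ~]# ipmpstat -g
[root@adrenaline ~]# ipmpstat -i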
Configuring the Oracle ZFS Storage Appliance
Each database should be contained in its own project.
1. From the ZFS controller's CLI, create a project called mysql.
zfs:> shares project mysql
2. Set logbias to latency to leverage write flash capabilities:

zfs:shares mysql (uncommitted)> set logbias=latency
                          logbias = latency (uncommitted)
3. Set the default user to mysql and default group to mysql:
zfs:shares mysql (uncommitted)> set default_user=mysql
default_user = mysql (uncommitted)
zfs:shares mysql (uncommitted)> set default_group=mysql
default_group = mysql (uncommitted)
Note: If a name service such as LDAP or NIS is not being used, change these to the actual UID and GID found in
/etc/passwd and /etc/group on the host.
4. Disable Update access time on read:
zfs:shares mysql> set atime=false
atime = false (uncommitted)
5. Commit the changes:
zfs:shares mysql> commit
6. Create a filesystem called innodb-data to hold data files:
zfs:shares mysql> filesystem innodb-data
7. Set the database record size to 16K to match InnoDB's standard page size:
zfs:shares mysql/innodb-data (uncommitted)> set recordsize=16K
recordsize = 16K (uncommitted)
zfs:shares mysql/innodb-data (uncommitted)> commit
8. Create a filesystem called innodb-log to hold redo logs:
zfs:shares mysql> filesystem innodb-log
9. Set the database record size to 128K:
zfs:shares mysql/innodb-log (uncommitted)> set recordsize=128K
recordsize = 128K (uncommitted)
zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring the server


This example assumes a Linux server will be running the MySQL database. The following commands are roughly the
same for a Solaris machine:
1. A directory structure should be created to contain the MySQL database:
# mkdir -p /mysql/nas/innodb-data
# mkdir -p /mysql/nas/innodb-log
# chown -R mysql:mysql /mysql/nas
2. Each filesystem provisioned on the Oracle ZFS Storage Appliance should be mounted with the following options:
rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock
3. This should be supplied in /etc/fstab in order to be mounted automatically at boot (see the sketch after these commands), or it can be run manually from a shell like so:
# mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-data /mysql/nas/innodb-data
# mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-log /mysql/nas/innodb-log
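
For reference, equivalent /etc/fstab entries might look like the following (a sketch reusing the same mount options and the example appliance hostname zfs; adjust the hostname and paths for your environment):

zfs:/export/innodb-data  /mysql/nas/innodb-data  nfs  rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock  0 0
zfs:/export/innodb-log   /mysql/nas/innodb-log   nfs  rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock  0 0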
Configuring the MySQL database
