2 - Virtualization
By Peter Baer Galvin
For USENIX
Last Revision Apr 2009
Overview
Objectives
Virtualization choices in Solaris
Zones / Containers
LDoms and Domains
VirtualBox
xVM (aka Xen)
Or...
Use VirtualBox
Use your own system
Use a remote machine you have legitimate access to
Copyright 2009 Peter Baer Galvin - All Rights Reserved 21
Run other OSes (Linux, Windows) with S10+ as the host
Industry semi-standard
Each zone consumes some VM and physical memory (for processes and daemons running in the zone) - roughly 40MB of physical memory
Some apps are not supported in zones; some are supported only in whole-root zones, some in sparse-root zones
Check with each app's vendor!
Note that using ZFS clones for zone builds may mean that sparse root is no longer useful!
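The ZFS-clone approach to zone builds mentioned above can be sketched as follows. This is a hedged sketch: the zone names "oldzone" and "newzone" and the zonepath are hypothetical, and `zoneadm clone` uses a ZFS clone automatically only when the zonepath lives on a ZFS dataset (Solaris 10 10/08 and later).

```shell
# Sketch: clone an existing (halted) zone; with a ZFS-backed zonepath
# this is a near-instant ZFS clone rather than a file copy.
zonecfg -z newzone "create -t oldzone; set zonepath=/zones/newzone"
zoneadm -z newzone clone oldzone
zoneadm -z newzone boot
```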
# zoneadm list -v
ID NAME STATUS PATH
0 global running /
1 test running /opt/zone/test
# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0d0s0 5678823 2766177 2855858 50% /
/devices 0 0 0 0% /devices
/dev/dsk/c0d0p0:boot 10296 1401 8895 14% /boot
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
swap 594332 32 594300 1% /var/run
swap 594500 200 594300 1% /tmp
/dev/dsk/c0d0s7 4030684 32853 3957525 1% /export/home
# zlogin -C app1
[Connected to zone 'app1' console]
Select a Locale
. . .
Database gets 4 / (4+3+2+1) = 40% of all the CPU time available to the container
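The Fair Share Scheduler arithmetic above can be sketched in a few lines. The share values (4, 3, 2, 1) come from the slide; the function name is illustrative, not an actual Solaris API.

```python
# FSS entitlement: a project's CPU share is its shares divided by the
# total of all active shares.
def fss_entitlement(shares, all_shares):
    """Fraction of CPU a project is entitled to under FSS."""
    return shares / sum(all_shares)

# Four containers with 4, 3, 2 and 1 shares: the database (4 shares)
# gets 4 / (4+3+2+1) = 40% of the available CPU time.
shares = [4, 3, 2, 1]
print(f"{fss_entitlement(4, shares):.0%}")
```

Note that entitlements are only enforced under contention; an idle system lets any container use more than its share.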
Set an objective (if including a range of processors, i.e. min ≠ max)
# poolcfg -c 'modify pset email-pool (string
pset.poold.objectives="wt-load")'
Activate the configuration
# pooladm -c
Move all the processes in the default pool and its associated zones under the FSS.
(From http://blogs.sun.com/jerrysblog/feed/entries/atom?cat=%2FSolaris)
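Moving existing processes under FSS can be sketched with the standard Solaris scheduling commands. This is a sketch of the usual approach rather than the blog's exact steps:

```shell
# Make FSS the default scheduling class for new processes...
dispadmin -d FSS
# ...and move all currently running processes into FSS right away
# (otherwise a reboot would be needed for them to pick up FSS).
priocntl -s -c FSS -i all
```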
zonecfg:my-zone> set scheduling-class=FSS
zonecfg:my-zone> add dedicated-cpu
zonecfg:my-zone:dedicated-cpu> set ncpus=1-4
zonecfg:my-zone:dedicated-cpu> set importance=10
zonecfg:my-zone:dedicated-cpu> end
Need this if you want advanced networking features in a zone (firewalls, snooping,
DHCP client, traffic shaping)
Each zone gets its own IP stack (and soon xVM will too)
zonecfg:my-zone> set ip-type=exclusive
zonecfg:my-zone> add net
zonecfg:my-zone:net> set physical=e1000g1
zonecfg:my-zone:net> end
Now the zone can set its own IP address et al, and can do IPMP within a zone
Project Crossbow will allow virtual NICs to be the IP-instance entity (no longer tying up a physical Ethernet port)
Limited to Ethernet devices that use GLDv3 drivers (dladm show-link not reporting
“legacy”)
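The GLDv3 check mentioned above can be done from the global zone before configuring the exclusive-IP zone. A sketch, assuming the e1000g1 interface from the example:

```shell
# Exclusive-IP zones require a GLDv3 driver: a link that dladm
# reports as "legacy" cannot be assigned to the zone.
dladm show-link
# Confirm the candidate NIC is not a legacy link:
dladm show-link e1000g1 | grep -v legacy
```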
Fast reboot
DTrace
Live migration
Improved networking via Project Crossbow
Not just ip-exclusive: a virtual network stack for each container
S10 containers?
What do the file systems look like from the global zone?
[Diagram: data-center server utilization runs between 5 and 15% while energy costs continue to rise - spanning OS, server, storage, network, and management layers]
• The Hypervisor (with unallocated resources)
• Multiple Guest Domains: the control & service domain "primary" plus guests ldom1 and ldom2 (e.g. Solaris 10 08/07 and 11/06 +app+patches)
• Virtualised devices: vsw0 / primary-vsw0, vnet0 / vnet1, primary-vds0

[Diagram: the hypervisor sits between the domains and the hardware, which shares CPUs, memory, crypto units, and I/O devices (72GB disk, PCI-E, network)]
sc> showhost
Sun-Fire-T2000 System Firmware 6.5.5 2007/10/28 23:09
Step 2:
unzip and cd into 127580-05
Step 3:
- placeholder
c control domain
d delayed reconfiguration
n normal
s starting or stopping
t transition
v virtual I/O domain
• Set up the basic services needed: ldmd (the Logical Domains Manager daemon), drd (the dynamic reconfiguration daemon), and vntsd (the virtual network terminal server daemon)
Reconfiguration of the virtual switch (vswitch), the virtual console concentrator (vcc), and the virtual network terminal server daemon (vntsd)
T2000 example - building guest domain ldm1:

ldm add-domain ldm1
ldm add-mau 1 ldm1
ldm add-vnet vnet0 primary-vsw0 ldm1
ldm add-vdsdev /dev/dsk/c0t1d0s2 ldm1-vol1@primary-vds0
ldm add-vdisk ldm1-vdisk1 ldm1-vol1@primary-vds0 ldm1
(The disk device name can vary - find it via "ok show-devs")

[Diagram: control & service domain "primary" exports vsw0 (e1000g0) and ldm1-vol1 via primary-vds0 to guest ldom1's vnet0 and vdisk; hardware shares CPU, memory, crypto, and I/O (72GB disk, PCI-E, network)]
• vdc's are the objects passed to OBP and the operating system in guest systems
• Guest domain OBP and Solaris see normal SCSI devices
• Domain administrators may set up devaliases or use raw vdisk devices
• vdc's provide guest domains with virtual disk devices (vdisks) via device instances from Virtual Disk Servers running in the Service Domain(s)
• A future release will provide virtualised access to DVD/CD-ROM in service domains
– 'primary-vsw0'
– 'vnet0@ldm1'
2. Create a zpool
root@cmt1 > zfs create mypool/ldoms
root@cmt1 > zfs create mypool/ldoms/ldm1
root@cmt1 > cd /export/ldoms/ldm1
root@cmt1 > ls
root@cmt1 > mkfile 12G `pwd`/rootdisk
{0} ok
Check which PCI bus ports we own and are currently using, and be sure to only give away unused ones... i.e. we need to retain the Control Domain's boot-disk controller and network device.
Providing a PCI bus to a Guest makes the selected domain a Service Domain, by definition - access to physical IO = Service Domain.
[Hardware block diagram (1RU / 2RU chassis): LSI 1068E SAS controller (x4 SAS links to the disk chassis), PLX 8533/8517 PCI-E switches, MPC885-based ILOM service processor with FPGA, USB 2.0 hub (DVD via USB-to-IDE), dual Intel GbE controllers, 10GbE XFP fibre plugin (BCM8704 SerDes, 2RU only); rear panel: PCI-E x16/x8 slots, quad GbE, USB, serial and network management ports, DB-9 serial]
• Crypto Units
• Each T1/T2 physical CPU core has a Crypto Unit
> 8 in total on an 8-core system
> Referred to as an MAU
• Crypto cores can only be allocated to domains that have at least one vCPU (thread) on the same physical core as the crypto unit
• Crypto cores cannot be shared; they are owned by exactly one (or no) domain
• Probably best to allocate all four/eight threads on a core to a domain that wants to use the crypto core
T1 Cores 0, 1, and 2 each have an MAU (MAU0, MAU1, MAU2)
• LDOM2 has 6 vCPUs spread across Cores 0, 1 & 2
> Potentially has access to MAUs 0, 1 & 2
> But LDOM1 already binds MAU0
> So it can only take MAU1 and MAU2
• LDOM3 has 3 vCPUs on Core 2
> But can't access any MAUs, since LDOM2 has already taken MAU2
• Adding and removing vCPUs can cause access to previously accessible MAUs to be lost; currently you can't select specific vCPUs - the framework does that itself
• When MAUs are allocated to domains, vCPUs become delayed-reconfiguration properties in those domains
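The MAU-binding constraints above can be modeled in a few lines. This is an illustrative model, not an ldm API: cores are numbered so that core N owns MAU N, and a domain may bind a core's MAU only if it has a vCPU on that core and the MAU is still free.

```python
# Model of MAU eligibility: a domain can bind the MAU of core N only if
# it has a vCPU on core N and no other domain has bound that MAU.
def eligible_maus(domain_vcpu_cores, bound_maus):
    """Return the set of MAU numbers this domain could still bind."""
    return {core for core in domain_vcpu_cores if core not in bound_maus}

# LDOM1 already binds MAU0. LDOM2 has vCPUs on cores 0, 1 and 2,
# so it can only take MAU1 and MAU2.
print(sorted(eligible_maus({0, 1, 2}, bound_maus={0})))
# LDOM3 has vCPUs only on core 2, but MAU2 is already taken - nothing left.
print(sorted(eligible_maus({2}, bound_maus={0, 1, 2})))
```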
• LDOMs Blogs
– http://blogs.sun.com/hlsu/entry/logincal_domains_1_0_1
• The Hypervisor (with unallocated resources)
• Multiple Guest Domains (Solaris 10 11/06 and 08/07 +app+patches), each booting /dev/dsk/c0d0s0 from virtual disks vdisk0/vdisk1 backed by vol1
• The service domain runs vntsd and drd; vsw0 connects guest vnet0 and vnet1

[Diagram: the hypervisor sits between the domains and the hardware, which shares CPU, memory, crypto units, and I/O devices (72GB disk, PCI-E, network)]
sc> showhost
Sun System Firmware 7.0.1 2007/09/14 16:31
How to break
telnet> send brk
Debugging requested; hardware watchdog suspended.
{0} ok
• Use MPxIO or VxVM, VxFS, Sun Cluster on service domains (only VxFS in guests) for resilient storage devices
• Use IPMP in guest or service domains for resilient network connections
• http://www.sun.com/ldoms
• Sun Blueprints relating to LDOMs
– http://www.sun.com/blueprints/0207/820-0832.html
– http://www.sun.com/blueprints/0807/820-3023.html
• SDLC Release of LDOMs
– http://www.sun.com/download/products.xml?id=46e5ba66
• Official Documentation for the SDLC release
– http://www.sun.com/servers/coolthreads/ldoms/get.jsp
• LDOMs Blogs
– http://blogs.sun.com/hlsu/entry/logincal_domains_1_0_1
System    CPU boards  Max domains
M9000+EU  16          24
M9000     8           24
M8000     4           16
M5000     2           4
M4000     1           2
Live migration
Management is browser-based
TBD
DTrace
http://users.tpg.com.au/adsln4yb/dtrace.html
http://www.solarisinternals.com/si/dtrace/index.php
http://www.sun.com/bigadmin/content/dtrace/