There are several main reasons for providing an environment that allows process
cooperation:
• Information sharing
• Speedup
• Modularity
• Convenience
• Privilege separation
Date Release Notes
1982 Sun UNIX 0.7
• First version of Sun’s UNIX, based on 4.1BSD from UniSoft.
• Bundled with the Sun-1, Sun’s first workstation, based on the Motorola 68000 processor; SunWindows GUI.
1993 Solaris 2.3
• 8-way SMP.
• Device power management and system suspend/resume functionality added.
• New directory name lookup cache.
1994 Solaris 2.4
• 20-way SMP.
• New kernel memory allocator (slab allocator) replaces SVR4 buddy allocator.
1995 Solaris 2.5
• Caching file system (cachefs).
• Large-page support for kernel and System V shared memory.
• Fast local interprocess communication (Doors) added.
• NFS Version 3.
• Supports sun4u (UltraSPARC) architecture; UltraSPARC-I-based products introduced, including the Ultra-1 workstation.
1996 Solaris 2.5.1
• First release supporting multiprocessor UltraSPARC-based systems.
• 64-way SMP.
• Ultra-Enterprise 3000–6000 servers introduced.
1997 Solaris 2.6
• Added support for large (greater than 2-Gbyte) files.
• Dynamic processor sets.
• Kernel-based TCP sockets.
• Locking statistics.
1998 Solaris 7
• UFS direct I/O.
• 64-bit kernel and process address space.
• Logging UFS integrated.
• Priority Paging memory algorithm.
Of particular interest is the "Intimate Shared Memory" facility, where the translation
tables are shared as well as the memory. This enhances the effectiveness of the TLB
(Translation Lookaside Buffer), which is a CPU-based cache of translation table
information. Since the same information is used for several processes, available buffer
space can be used much more efficiently. In addition, ISM-designated memory cannot be
paged out, which helps keep frequently used data and binaries in memory.
Database applications are the heaviest users of shared memory. Vendor recommendations
should be consulted when tuning the shared memory parameters.
Let’s consider just one simple example. A database system uses shared memory for
caching various database objects, such as data, stored procedures, indexes, etc. All
commercial database products implement a caching mechanism that uses shared memory.
Assume a large, 2-Gbyte shared segment is configured and there are 400 database
processes, each attaching to the shared segment. Two gigabytes of RAM equates to
262,144 eight-kilobyte pages. Assuming that the kernel needs to maintain 8 bytes of
information for each page mapping (two 4-byte pointers), that’s about 2 Mbytes of kernel
space needed to hold the translation information for one process. Without ISM, those
mappings are replicated for each process, so multiply that figure by 400, and we now
need 800 Mbytes of kernel space just for those mappings. With ISM, the mappings are
shared, so we only need the 2 Mbytes of space, regardless of how many processes attach.
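The arithmetic above can be checked with a short sketch. The 8-bytes-per-mapping figure is the text's working assumption (two 4-byte pointers per page), not a fixed constant:

```python
# Kernel memory needed for shared-segment translation mappings,
# with and without ISM, using the text's example numbers.

SEG_SIZE = 2 * 1024**3   # 2-Gbyte shared segment
PAGE_SIZE = 8 * 1024     # 8-Kbyte pages
PER_PAGE = 8             # bytes of mapping info per page (two 4-byte pointers)
NPROCS = 400             # database processes attached to the segment

pages = SEG_SIZE // PAGE_SIZE        # number of page mappings
per_process = pages * PER_PAGE       # mapping space for one process
without_ism = per_process * NPROCS   # mappings replicated per process
with_ism = per_process               # one shared set of mappings

print(pages)                         # 262144 pages
print(per_process // 2**20)          # ~2 Mbytes per process
print(without_ism // 2**20)          # ~800 Mbytes without ISM
print(with_ism // 2**20)             # ~2 Mbytes with ISM
```

The savings scale with the number of attached processes: the non-ISM cost is linear in NPROCS, while the ISM cost is constant.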
In addition to the translation table sharing, ISM provides another useful feature: when
ISM is used, the shared pages are locked down in memory and will never be paged out.
In Solaris releases up to and including Solaris 7, there is no easy way to tell whether
or not a shared segment is an ISM shared segment. It can be done but requires root
permission and use of the crash(1M) utility.
System V Semaphores
memory, the system checks for the maximum amount of available kernel memory and
divides that number by 4, to prevent semaphore requirements from taking more than 25
percent of available kernel memory.
A kernel mutex lock is created for each semaphore set. This practice results in fairly
fine-grained parallelism on multiprocessor hardware, since multiple processes
can do operations on different semaphore sets concurrently. For operations on
semaphores in the same set, the kernel needs to ensure atomicity for the application.
Atomicity guarantees that a semaphore operation initiated by a process will complete
without interference from another process, whether the operation is on a single
semaphore or multiple semaphores in the same set.
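The atomicity guarantee can be sketched in a few lines: a semop-style operation on several semaphores in one set either applies every sub-operation or none. This is a toy model with invented names; the real semop(2) blocks (unless IPC_NOWAIT is set) rather than failing:

```python
def semop(semset, ops):
    """Apply (index, delta) operations to a semaphore set atomically:
    if any semaphore would go negative, apply nothing at all."""
    new = list(semset)
    for idx, delta in ops:
        new[idx] += delta
        if new[idx] < 0:
            return semset, False   # whole operation fails; set is unchanged
    return new, True

s = [1, 0, 2]
s, ok = semop(s, [(0, -1), (2, -2)])    # both decrements succeed together
print(s, ok)                            # [0, 0, 0] True
s2, ok2 = semop(s, [(1, -1), (2, +1)])  # sem 1 would go negative: nothing applied
print(s2, ok2)                          # [0, 0, 0] False
```

Note that the increment on semaphore 2 in the failing call is rolled back along with the failed decrement; that all-or-nothing behavior is exactly what the kernel must guarantee for the application.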
System V Message Queues
Message queues provide a means for processes to send and receive messages of various
sizes in an asynchronous fashion on a Solaris system. As with the other IPC facilities, the
initial call when message queues are used is an ipcget call, in this case, msgget(2). The
msgget(2) system call takes a key value and some flags as arguments and returns an
identifier for the message queue. Once the message queue has been established, it’s
simply a matter of sending and receiving messages. Applications use msgsnd(2) and
msgrcv(2) for those purposes. The sender simply constructs the message, assigns a
message type, and calls msgsnd(2). The system will place the message on the appropriate
message queue until a msgrcv(2) is successfully executed. Sent messages are placed at
the back of the queue, and messages are received from the front of the queue; thus the
queue is implemented as a FIFO (First In, First Out).
The message queue facility implements a message type field, which is user (programmer)
defined. So, programmers have some flexibility, since the kernel has no embedded or
predefined knowledge of different message types. Programmers typically use the type
field for priority messaging or for directing a message to a particular recipient.
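The FIFO ordering and the type field's selection behavior can be modeled in a few lines. This is a toy single-process model of the msgsnd/msgrcv semantics, not the real API (it omits blocking and the negative-msgtyp selection mode):

```python
from collections import deque

class MsgQueue:
    """Toy model of a System V message queue: FIFO overall, with
    msgrcv able to select the first message of a given type."""
    def __init__(self):
        self.q = deque()

    def msgsnd(self, mtype, body):
        self.q.append((mtype, body))       # sends go to the back of the queue

    def msgrcv(self, mtype=0):
        # mtype == 0: first message on the queue (pure FIFO);
        # mtype > 0: first message of that type.
        for i, (t, body) in enumerate(self.q):
            if mtype == 0 or t == mtype:
                del self.q[i]
                return t, body
        return None                        # the real msgrcv would block here

mq = MsgQueue()
mq.msgsnd(1, "first")
mq.msgsnd(2, "urgent")
mq.msgsnd(1, "second")
print(mq.msgrcv(2))   # type selection skips ahead to the type-2 message
print(mq.msgrcv())    # plain FIFO: oldest remaining message
```

The type values (1 and 2 here) are purely programmer-defined, as the text notes; the kernel attaches no meaning to them beyond this selection rule.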
Solaris 2.5.1 and before used very coarse-grained mutex locking for message queues,
which resulted in unnecessary contention as compared to 2.6 and later versions.
The number of resources that the kernel allocates for message queues is tunable. Values
for various message queue tunable parameters can be increased from their default values
so more resources are made available for systems running applications that make heavy
use of message queues.
A final note on kernel locking.
All versions of Solaris up to and including Solaris 2.5.1 do very coarse-grained
locking in the kernel message queue module. Specifically, one kernel mutex is initialized
to protect the message queue kernel code and data. As a result, applications running on
multiprocessor platforms using message queues do not scale well. This situation
changed in Solaris 2.6, which implements a finer-grained locking mechanism, allowing
for greater concurrency. The improved message queue kernel module has been
backported and is available as a patch for Solaris 2.5 and 2.5.1.
POSIX IPC:-
The evolution of the POSIX standard and associated application programming interfaces
(APIs) resulted in a set of industry-standard interfaces. The POSIX IPC facilities are
similar in functionality to System V IPC but are abstracted on top of memory mapped
files. The POSIX library routines are called by a program to create a new semaphore,
shared memory segment, or message queue using the Solaris file I/O system calls.
Internally in the POSIX library, the IPC objects exist as files. The object type exported to
the program through the POSIX interfaces is handled within the library routines.
The POSIX implementation of all three IPC facilities is built on the notion of POSIX IPC
names, which essentially look like file names but need not be actual files in a file system.
This POSIX name convention provides the necessary abstraction, a file descriptor, to use
the Solaris file memory mapping interface, on which all the POSIX IPC mechanisms are
built. This is very different from the System V IPC functions, where a key value was
required to fetch the proper identifier of the desired IPC resource. In System V IPC, a
common method used for generating key values was the ftok(3C) (file-to-key) function,
where a key value was generated based on the path name of a file. POSIX eliminates the
use of the key, and processes acquire the desired resource by using a file-name
convention.
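The contrast can be illustrated with the System V side of it: keys are commonly derived with ftok(3C) from a file's identity. A typical implementation folds the file's device and inode numbers together with an 8-bit project id; the exact bit layout below is one common scheme and varies by platform, so treat it as a sketch:

```python
import os
import tempfile

def ftok_like(path, proj_id):
    """Sketch of a typical ftok(3C) implementation: combine the file's
    st_dev and st_ino with an 8-bit project id. Real layouts vary."""
    st = os.stat(path)
    return ((proj_id & 0xFF) << 24) | ((st.st_dev & 0xFF) << 16) | (st.st_ino & 0xFFFF)

with tempfile.NamedTemporaryFile() as f:
    k1 = ftok_like(f.name, ord("A"))
    k2 = ftok_like(f.name, ord("B"))
    print(hex(k1), hex(k2))   # same file, different project ids -> different keys
```

POSIX IPC does away with this step entirely: the name passed to the POSIX interfaces plays the role the generated key plays here, and the library maps it onto a file descriptor internally.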
The POSIX IPC facilities are quite similar in form and function to their System V equivalents but very
different in implementation.
Solaris Doors:-
Doors provide a facility for processes to issue procedure calls to functions in other
processes running on the same system. Using the APIs, a process can become a door
server, exporting a function through a door it creates by using the door_create(3X)
interface. Other processes can then invoke the procedure by issuing a door_call(3X),
specifying the correct door descriptor.
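The calling model can be sketched in-process: a server exports a procedure through a descriptor, and clients invoke it by descriptor. This is a toy, single-process analogy with invented names; real doors cross process boundaries through door_create(3X) and door_call(3X), with the kernel transferring control between processes:

```python
# Toy analogy of the doors RPC model: "door_create" registers a server
# procedure and returns a descriptor; "door_call" invokes the procedure
# through that descriptor.

_doors = {}

def door_create(proc):
    fd = len(_doors)          # hand out a small integer "descriptor"
    _doors[fd] = proc
    return fd

def door_call(fd, arg):
    return _doors[fd](arg)    # real doors transfer control to the server process

# A "server" exports an upper-casing procedure through a door.
fd = door_create(lambda s: s.upper())
print(door_call(fd, "hello"))   # the "client" calls through the descriptor
```

The essential shape is the same as the real facility: the caller names a descriptor, not a process, and the procedure runs in the server's context.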
The door APIs were available in Solaris 2.5.1 but not documented and, at that point,
subject to change. Solaris 2.6 was the first Solaris release that included a relatively stable
set of interfaces (stable in the sense that they were less likely to change). The Solaris
kernel ships with a shared object library, libdoor.so, that must be linked to applications
using the doors APIs. Table 10-10 describes the door APIs available in Solaris 2.6 and
Solaris 7. During the course of our coverage of doors, we refer to the interfaces as
necessary for clarity.
Security Features:-
The Trusted Solaris environment supports the System V inter-process communication
(IPC) mechanism and provides security features for labeled communications between
System V IPC objects and both privileged and unprivileged processes.
• "Privileged Operations"
• "Data Types, Header Files, and Libraries "
• "Programming Interface Declarations"
• "Using Shared Memory Labels"
Privileged Operations: -
System V IPC objects are subject to discretionary and mandatory access controls, and
discretionary ownership controls.
A System V IPC object is created from a key and accessed by an object descriptor
returned when the IPC object is created. The object descriptor, like a file descriptor, is
used for future operations on the object. The sensitivity label of the System V IPC object
is the same as the sensitivity label of its creating process unless the creating process has
the privilege to create the System V IPC object at a different label. A process can access
a System V IPC object at its same sensitivity label unless the process has the privilege to
access a System V IPC object at another label. Because keys are qualified by the
sensitivity label at which they are created, there can be many objects that use the same
key, but no more than one instance of a key (object ID) at a given sensitivity label.
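The key-qualification rule above, that many objects may share a key but at most one object exists per key at a given sensitivity label, amounts to indexing objects by the (key, label) pair. A minimal model, with an invented id scheme:

```python
objects = {}   # (key, label) -> object id; at most one id per pair

def ipcget_model(key, label):
    """Model of the lookup rule: return the object id for this key at
    this sensitivity label, creating one if none exists yet."""
    if (key, label) not in objects:
        objects[(key, label)] = len(objects) + 100   # invented id scheme
    return objects[(key, label)]

a = ipcget_model(0x1234, "CONFIDENTIAL")
b = ipcget_model(0x1234, "SECRET")         # same key, different label: distinct object
c = ipcget_model(0x1234, "CONFIDENTIAL")   # same key and label: same object
print(a, b, c)
```

This is what lets an unmodified application use a fixed key at several labels without the instances colliding.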
• Message queues allow processes to place messages into a queue where any
process can retrieve the message.
• Semaphore sets synchronize processes and are often used to control concurrent
access to shared memory regions.
• Shared memory regions allow multiple processes to attach to the same region of
memory and share access to the data in that memory.
Discretionary access to a System V IPC object is granted or denied according to the read
and write modes associated with the object for owner, group, and other in much the same
way as file access. System V IPC objects also have creator user and creator group settings
that control attribute change requests. The process that creates a System V IPC object is
the owner and can set the discretionary permission bits to any value. To override
discretionary access and ownership restrictions, the process needs the ipc_dac_read,
ipc_dac_write, or ipc_owner privilege in its effective set, depending on the interface used
or operation requested.
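A sketch of the discretionary check just described, modeled on file-style owner/group/other mode bits, with the DAC-override privileges folded in. The privilege names come from the text; the order of the checks is an assumption of this sketch:

```python
R, W = 4, 2   # read/write permission bits, as in file mode octets

def dac_allows(mode, owner, group, proc_uid, proc_gids, want, privs=()):
    """Owner/group/other mode-bit check for a System V IPC object,
    overridable by the ipc_dac_read / ipc_dac_write privileges."""
    if want == R and "ipc_dac_read" in privs:
        return True
    if want == W and "ipc_dac_write" in privs:
        return True
    if proc_uid == owner:
        bits = (mode >> 6) & 7
    elif group in proc_gids:
        bits = (mode >> 3) & 7
    else:
        bits = mode & 7
    return bool(bits & want)

# 0o600: owner may read and write, others may not.
print(dac_allows(0o600, 100, 10, 100, [10], R))   # owner reads: allowed
print(dac_allows(0o600, 100, 10, 200, [20], R))   # other process: denied
print(dac_allows(0o600, 100, 10, 200, [20], R,
                 privs=("ipc_dac_read",)))        # privilege overrides DAC
```

The ipc_owner privilege, which overrides ownership restrictions on attribute changes rather than read/write access, is omitted from this sketch.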
Mandatory Access Controls: -
Unprivileged processes can refer to a System V IPC object, and obtain an IPC
descriptor for it, only at the process's own sensitivity label. This makes the mandatory access
controls read-equal and write-equal and eliminates naming and access conflicts when an
unmodified base Solaris application using System V IPC runs at multiple sensitivity
labels. To override mandatory access restrictions, the process needs the ipc_mac_read or
ipc_mac_write privilege in its effective set, depending on the interface used.
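For an unprivileged process, the mandatory rule reduces to a label-equality check, with the two MAC privileges as overrides. A sketch (real Trusted Solaris labels are dominance-ordered; this equality-only model reflects just the read-equal/write-equal rule stated above):

```python
def mac_allows(proc_label, obj_label, want, privs=()):
    """Read-equal / write-equal mandatory check for System V IPC:
    access requires matching sensitivity labels unless the process
    holds the corresponding override privilege in its effective set."""
    if want == "read" and "ipc_mac_read" in privs:
        return True
    if want == "write" and "ipc_mac_write" in privs:
        return True
    return proc_label == obj_label

print(mac_allows("SECRET", "SECRET", "read"))         # labels match: allowed
print(mac_allows("SECRET", "TOP SECRET", "write"))    # mismatch: denied
print(mac_allows("SECRET", "TOP SECRET", "write",
                 privs=("ipc_mac_write",)))           # privilege overrides MAC
```

Because the unprivileged path never crosses labels, an application running at several labels simply sees a separate set of objects at each one, which is what avoids the naming conflicts mentioned above.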