
Solaris Operating System:-

Solaris is a Unix operating system originally developed by Sun Microsystems. It
superseded their earlier SunOS in 1992. Oracle Solaris, as it is now known, has been
owned by Oracle Corporation since Oracle's acquisition of Sun in January 2010.
Solaris was historically developed as proprietary software; then, in June 2005, Sun
Microsystems released most of the code base under the CDDL license and founded the
OpenSolaris open source project. With OpenSolaris, Sun wanted to build a developer
and user community around the software. After the acquisition of Sun Microsystems in
January 2010, Oracle decided to discontinue the OpenSolaris distribution and its
development model. As a result, the OpenSolaris community forked the code base into
the OpenIndiana project, as part of the illumos Foundation. Starting with Solaris 11,
however, updates to the Solaris source code will still be distributed under the CDDL
license after full binary releases are made. Oracle will also begin a technology partner
program, called Oracle Technology Network (OTN), to permit their industry partners
access to the in-development Solaris source code.

What is Inter Process Communication?


Official definition:
Inter process communication (IPC) is used for programs to communicate data to each
other and to synchronize their activities. Semaphores, shared memory, and internal
message queues are common methods of inter process communication.
What it means:
IPC is a method for two or more separate programs or processes to communicate with
each other; put another way, it is a set of techniques for the exchange of data among
multiple threads in one or more processes. Processes may be running on one or more
computers connected by a network. IPC avoids using real disk-based files and the
associated I/O overhead to pass information. As with a file, you must first create or
open the resource, use it, and close it. Like real files, the resources have an owner, a
group, and permissions, and a resource continues to exist until you remove it. Unlike
real disk-based files, semaphores, message queues, and shared memory do not persist
across reboots. The method of IPC used may vary based on the bandwidth and latency
of communication between the threads and the type of data being communicated.

Reasons to use Inter Process Communication:


Use IPC when you need programs to talk to each other, you want that communication
to be fast, and you do not want to write the code to manage the low-level details of
communication between the processes. Since these are kernel routines, the kernel takes
care of the details of the communication. For example, when you are waiting for a
resource that is protected by a semaphore to become available, if you request access
and the resource is currently in use, the kernel places you in a waiting queue. When the
resource becomes available, the kernel unblocks your process and you can continue.
The kernel also ensures that operations are atomic, which means that a test-and-increment
operation to set a semaphore cannot be interrupted.

There are several main reasons for providing an environment that allows process
cooperation:
• Information sharing
• Speedup
• Modularity
• Convenience
• Privilege separation

IPC in Solaris:-


Interprocess communication (IPC) encompasses facilities provided by the operating
system to enable the sharing of data (shared memory segments), the exchange of
information and data (message queues), and synchronization of access to shared
resources (semaphores) between processes and threads on the same system. Contrast IPC
to networking-based facilities, such as sockets and RPC interfaces, which enable
communication over a network link between distributed systems. Early IPC facilities
originated in AT&T Unix System V, which added support for shared memory,
semaphores, and message queues around 1983. This original set of three IPC facilities is
generally known as System V IPC. Over time, a similar set of IPC features evolved from
the POSIX standards, and we now have POSIX semaphores, shared memory, and
message queues. The System V and POSIX IPCs use different APIs and are implemented
differently in the kernel, although for applications they provide similar functionality.
Other facilities for interprocess communication include memory mapped files, named
pipes (also known as FIFOs), Unix domain sockets, and recently added Solaris Doors,
which provides an RPC-like facility for threads running on the same system. Each
method by which an application can do interprocess communication offers specific
features and functionality which may or may not be useful for a given application. It’s
up to the application developer to determine what the requirements are and which
facility best meets those requirements. Doors are a low-latency method of invoking a
procedure in a local process. A door server contains a thread that sleeps, waiting for an
invocation from the door client. A client makes a call to the server through the door,
along with a small (16 Kbyte) payload. When the call is made from a door client to a
door server, scheduling control is passed directly to the thread in the door server. Once
a door server has finished handling the request, it passes control and response back to
the calling thread. The scheduling control allows ultra-low-latency turnaround because
the client does not need to wait for the server thread to be scheduled to complete the
request.

Date  Release  Notes

1982  Sun UNIX 0.7
• First version of Sun's UNIX, based on 4.1BSD from UniSoft.
• Bundled with the Sun-1, Sun's first workstation based on the Motorola 68000
processor; SunWindows GUI.

1983  SunOS 1.0
• Sun-2 workstation, 68010 based.

1985  SunOS 2.0
• Virtual file system (VFS) and vnode framework allows multiple concurrent file
system types.
• NFS implemented with the VFS/vnode framework.

1988  SunOS 4.0
• New virtual memory system integrates the file system cache with the memory system.
• Dynamic linking added.
• The first SPARC-based Sun workstation, the Sun-4. Support for the Intel-based
Sun 386i.

1990  SunOS 4.1
• Supports the SPARCstation 1+, IPC, SLC.
• OpenWindows graphics environment.

1992  SunOS 4.1.3
• Asymmetric multiprocessing (ASMP) for sun4m systems (SPARCstation-10 and
-600 series MP (multiprocessor) servers).

1992  Solaris 2.0
• Solaris 2.x is born, based on a port of System V Release 4.0.
• VFS/vnode, VM system, intimate shared memory brought forward from SunOS.
• Uniprocessor only.
• First release of Solaris 2, version 2.0, is a desktop-only developers release.

1992  Solaris 2.1
• Four-way symmetric multiprocessing (SMP).

1993  Solaris 2.2
• Large (> 2 Gbyte) file system support.
• SPARCserver 1000 and SPARCcenter 2000 (sun4d architecture).

1993  Solaris 2.1-x86
• Solaris ported to the Intel i386 architecture.

1993  Solaris 2.3
• 8-way SMP.
• Device power management and system suspend/resume functionality added.
• New directory name lookup cache.

1994  Solaris 2.4
• 20-way SMP.
• New kernel memory allocator (slab allocator) replaces SVR4 buddy allocator.

1995  Solaris 2.5
• Caching file system (cachefs).
• Large-page support for kernel and System V shared memory.
• Fast local interprocess communication (Doors) added.
• NFS Version 3.
• Supports sun4u (UltraSPARC) architecture. UltraSPARC-I-based products
introduced, including the Ultra-1 workstation.

1996  Solaris 2.5.1
• First release supporting multiprocessor UltraSPARC-based systems.
• 64-way SMP.
• Ultra-Enterprise 3000-6000 servers introduced.

1996  Solaris 2.6
• Added support for large (> 2 Gbyte) files.
• Dynamic processor sets.
• Kernel-based TCP sockets.
• Locking statistics.

1998  Solaris 7
• UFS direct I/O.
• 64-bit kernel and process address space.
• Logging UFS integrated.
• Priority Paging memory algorithm.

System V Shared Memory


Shared memory provides an extremely efficient means of sharing data between multiple
processes on a Solaris system, since the data need not actually be moved from one
process’s address space to another. As the name implies, shared memory is exactly that:
the sharing of the same physical memory (RAM) pages by multiple processes, such that
each process has mappings to the same physical pages and can access the memory
through pointer dereferencing in code.
Shared memory provides the fastest way for processes to pass large amounts of data to
one another.

Of particular interest is the "Intimate Shared Memory" facility, where the translation
tables are shared as well as the memory. This enhances the effectiveness of the TLB
(Translation Lookaside Buffer), which is a CPU-based cache of translation table
information. Since the same information is used for several processes, available buffer
space can be used much more efficiently. In addition, ISM-designated memory cannot be
paged out, which can be used to keep frequently-used data and binaries in memory.

Database applications are the heaviest users of shared memory. Vendor recommendations
should be consulted when tuning the shared memory parameters.

Intimate Shared Memory (ISM)


Intimate shared memory (ISM) is an optimization first introduced in Solaris 2.2. It
allows for the sharing of the translation tables involved in the virtual-to-physical
address translation for shared memory pages, as opposed to just sharing the actual
physical memory pages. Typically, non-ISM systems maintain a per-process mapping
for the shared memory pages. With many processes attaching to shared memory, this
scheme creates a lot of redundant mappings to the same physical pages that the kernel
must maintain. Additionally, all modern processors implement some form of a
Translation Lookaside Buffer (TLB), which is essentially a hardware cache of address
translation information. SPARC processors are no exception, and the TLB, just like an
instruction and data cache, has limits as to how many translations it can maintain at
any one time. As processes are context-switched in and out, the effectiveness of the
TLB is reduced. If those processes are sharing memory and we can share the memory
mappings as well, we make more effective use of the hardware TLB.

Let’s consider just one simple example. A database system uses shared memory for
caching various database objects, such as data, stored procedures, indexes, etc. All
commercial database products implement a caching mechanism that uses shared memory.
Assume a large, 2-Gbyte shared segment is configured and there are 400 database
processes, each attaching to the shared segment. Two gigabytes of RAM equates to
262,144 eight-kilobyte pages. Assuming that the kernel needs to maintain 8 bytes of
information for each page mapping (two 4-byte pointers), that’s about 2 Mbytes of kernel
space needed to hold the translation information for one process. Without ISM, those
mappings are replicated for each process, so multiply that number by 400, and we now
need 800 Mbytes of kernel space just for those mappings. With ISM, the mappings are
shared, so we need only the 2 Mbytes of space, regardless of how many processes attach.
In addition to the translation table sharing, ISM provides another useful feature: when
ISM is used, the shared pages are locked down in memory and will never be paged out.
In Solaris releases up to and including Solaris 7, there is not an easy way to tell whether
or not a shared segment is an ISM shared segment. It can be done but requires root
permission and use of the crash(1M) utility.

System V Semaphores

A semaphore, as defined in the dictionary, is a mechanical signaling device or a means of


doing visual signaling. The analogy typically used is the railroad mechanism of signaling
trains, where mechanical arms would swing down to block a train from a section of track
that another train was currently using. When the track was free, the arm would swing up,
and the waiting train could then proceed.
Semaphores are shareable resources that take on non-negative integer values. They are
manipulated by the P (wait) and V (signal) functions, which decrement and increment the
semaphore, respectively. When a process needs a resource, a "wait" is issued and the
semaphore is decremented. When the semaphore contains a value of zero, the resources
are not available and the calling process spins or blocks (as appropriate) until resources
are available. When a process releases a resource controlled by a semaphore, it
increments the semaphore and the waiting processes are notified.
The notion of using semaphores as a means of synchronization in computer software
originated with the Dutch mathematician E. W. Dijkstra in 1965. Dijkstra’s original
work defined two semaphore operations, wait and signal. The operations were referred to
as P and V operations. The P operation was the wait, which decremented the value of the
semaphore if it was greater than zero, and the V operation was the signal, which
incremented the semaphore value. The terms P and V originate from the Dutch terms for
try and increase. P is from Probeer, which means try or attempt, and V is from Verhoog,
which means increase.
Semaphores provide a method of synchronizing access to a sharable resource by
multiple processes. They can be used as a binary lock for exclusive access or as a
counter; they manage access to a finite number of shared resources, where the
semaphore value is initialized to the number of shared resources. Each time a process
needs a resource, the semaphore value is decremented. When the process is done with
the resource, the semaphore value is incremented. A zero semaphore value conveys to
the calling process that no resources are currently available, and the calling process
blocks until another process finishes using the resource and frees it.
The semaphore implementation in Solaris (System V semaphores) allows for semaphore
sets, meaning that a unique semaphore identifier can contain multiple semaphores. The
semaphore system calls allow for some operations on the semaphore set. This approach
makes dealing with semaphore sets programmatically a little easier. Just as with shared
memory, the system checks for the maximum amount of available kernel memory and
divides that number by 4, to prevent semaphore requirements from taking more than 25
percent of available kernel memory.
A kernel mutex lock is created for each semaphore set. This practice results in fairly
fine-grained parallelism on multiprocessor hardware, since it means that multiple processes
can do operations on different semaphore sets concurrently. For operations on
semaphores in the same set, the kernel needs to ensure atomicity for the application.
Atomicity guarantees that a semaphore operation initiated by a process will complete
without interference from another process, whether the operation is on a single
semaphore or multiple semaphores in the same set.

System V Message Queues

Message queues provide a means for processes to send and receive messages of various
size in an asynchronous fashion on a Solaris system. As with the other IPC facilities, the
initial call when message queues are used is an ipcget call, in this case, msgget(2). The
msgget(2) system call takes a key value and some flags as arguments and returns an
identifier for the message queue. Once the message queue has been established, it’s
simply a matter of sending and receiving messages. Applications use msgsnd(2) and
msgrcv(2) for those purposes. The sender simply constructs the message, assigns a
message type, and calls msgsnd(2). The system will place the message on the appropriate
message queue until a msgrcv(2) is successfully executed. Sent messages are placed at
the back of the queue, and messages are received from the front of the queue; thus the
queue is implemented as a FIFO (First In, First Out).
The message queue facility implements a message type field, which is user (programmer)
defined. Programmers thus have some flexibility, since the kernel has no embedded or
predefined knowledge of different message types. Programmers typically use the type
field for priority messaging or for directing a message to a particular recipient.

Solaris 2.5.1 and earlier used very coarse-grained mutex locking for message queues,
which resulted in unnecessary contention as compared to 2.6 and later versions.

Kernel Resources for Message Queues

The number of resources that the kernel allocates for message queues is tunable. Values
for the various message queue tunable parameters can be increased from their default
values so that more resources are made available for systems running applications that
make heavy use of message queues.
A final note on kernel locking: all versions of Solaris up to and including Solaris 2.5.1
do very coarse-grained locking in the kernel message queue module. Specifically, one
kernel mutex is initialized to protect the message queue kernel code and data. As a
result, applications running on multiprocessor platforms using message queues do not
scale well. This situation changed in Solaris 2.6, which implements a finer-grained
locking mechanism, allowing for greater concurrency. The improved message queue
kernel module has been backported and is available as a patch for Solaris 2.5 and 2.5.1.

POSIX IPC:-

The evolution of the POSIX standard and associated application programming interfaces
(APIs) resulted in a set of industry-standard interfaces. The POSIX IPC facilities are
similar in functionality to System V IPC but are abstracted on top of memory mapped
files. The POSIX library routines are called by a program to create a new semaphore,
shared memory segment, or message queue using the Solaris file I/O system calls.
Internally in the POSIX library, the IPC objects exist as files. The object type exported to
the program through the POSIX interfaces is handled within the library routines.
The POSIX implementation of all three IPC facilities is built on the notion of POSIX IPC
names, which essentially look like file names but need not be actual files in a file system.
This POSIX name convention provides the necessary abstraction, a file descriptor, to use
the Solaris file memory mapping interface, on which all the POSIX IPC mechanisms are
built. This is very different from the System V IPC functions, where a key value was
required to fetch the proper identifier of the desired IPC resource. In System V IPC, a
common method used for generating key values was the ftok(3C) (file-to-key) function,
where a key value was generated based on the path name of a file. POSIX eliminates the
use of the key, and processes acquire the desired resource by using a file-name
convention.
They are quite similar in form and function to their System V equivalents but very
different in implementation.

Solaris Doors:-

Doors provide a facility for processes to issue procedure calls to functions in other
processes running on the same system. Using the APIs, a process can become a door
server, exporting a function through a door it creates by using the door_create(3X)
interface. Other processes can then invoke the procedure by issuing a door_call(3X),
specifying the correct door descriptor.
The door APIs were available in Solaris 2.5.1 but not documented and, at that point,
subject to change. Solaris 2.6 was the first Solaris release that included a relatively stable
set of interfaces (stable in the sense that they were less likely to change). The Solaris
kernel ships with a shared object library, libdoor.so, that must be linked to applications
using the doors APIs. Table 10-10 describes the door APIs available in Solaris 2.6 and
Solaris 7. During the course of our coverage of doors, we refer to the interfaces as
necessary for clarity.

Security Features:-
The Trusted Solaris environment supports the System V inter-process communication
(IPC) mechanism and provides security features for labeled communications between
System V IPC objects and both privileged and unprivileged processes.

• "Privileged Operations"
• "Data Types, Header Files, and Libraries "
• "Programming Interface Declarations"
• "Using Shared Memory Labels"

Privileged Operations:-

System V IPC objects are subject to discretionary and mandatory access controls, and
discretionary ownership controls.

A System V IPC object is created from a key and accessed by an object descriptor
returned when the IPC object is created. The object descriptor, like a file descriptor, is
used for future operations on the object. The sensitivity label of the System V IPC object
is the same as the sensitivity label of its creating process unless the creating process has
the privilege to create the System V IPC object at a different label. A process can access
a System V IPC object at its same sensitivity label unless the process has the privilege to
access a System V IPC object at another label. Because keys are qualified by the
sensitivity label at which they are created, there can be many objects that use the same
key, but no more than one instance of a key (object ID) at a given sensitivity label.

• Message queues allow processes to place messages into a queue where any
process can retrieve the message.
• Semaphore sets synchronize processes and are often used to control concurrent
access to shared memory regions.
• Shared memory regions allow multiple processes to attach to the same region of
memory and share changes to it.

Discretionary Access and Ownership Controls:-

Discretionary access to a System V IPC object is granted or denied according to the read
and write modes associated with the object for owner, group, and other in much the same
way as file access. System V IPC objects also have creator-user and creator-group sets
that control attribute change requests. The process that creates a System V IPC object is
the owner and can set the discretionary permission bits to any value. To override
discretionary access and ownership restrictions, the process needs the ipc_dac_read,
ipc_dac_write, or ipc_owner privilege in its effective set, depending on the interface used
or operation requested.

Mandatory Access Controls:-

Unprivileged processes can refer to System V IPC objects, and obtain an IPC
descriptor, only at the process's own sensitivity label. This makes the mandatory access
controls read-equal and write-equal and eliminates naming and access conflicts when an
unmodified base Solaris application using System V IPC runs at multiple sensitivity
labels. To override mandatory access restrictions, the process needs the ipc_mac_read or
ipc_mac_write privilege in its effective set, depending on the interface used.

