
1. If a client and a server are placed far apart, we may see network latency
dominating overall performance. How can we tackle this problem?

Many systems can be characterized as dominated either by throughput limitations or by latency
limitations in terms of end-user utility or experience. In some cases, hard limits such as the speed
of light impose constraints that cannot be engineered away. Other systems allow for significant
balancing and optimization for the best user experience.

With back-end latency so dominating normal performance testing, we optimized our systems to
minimize delay in back-end processing. In our simplest case, small messages, we process
20k requests per second with latency in the sub-millisecond range, so in most cases the gateway
does not contribute any significant amount to latency.

Some policy elements have latency associated with them and can be avoided in latency-sensitive
applications. Auditing is the obvious one, since it synchronously waits for the auditing
subsystem to write to hard disk. Layer7 has identified some usage patterns that add undue
overhead to requests, and we can help you with those situations; just ask.

2. How does the distinction between kernel mode and user mode function as a
rudimentary form of protection (security) system?

In kernel mode, the executing code has complete and unrestricted access to the underlying
hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is
generally reserved for the lowest-level, most trusted functions of the operating system. Crashes
in kernel mode are catastrophic; they will halt the entire machine.

In user mode, the executing code cannot directly access hardware or reference arbitrary
memory. Code running in user mode must delegate to system APIs to access hardware or
memory. Because of the protection afforded by this isolation, crashes in user mode are
recoverable. Most of the code running on your computer executes in user mode.

The kernel is merely the "core" or lowest level of an operating system. The kernel provides
numerous callable routines that allow other software to access files, display text and graphics, get
input from a keyboard or mouse, and other such capabilities. 
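To make the delegation concrete, the snippet below writes a file from user mode. A minimal sketch: `os.open`, `os.write`, and `os.close` are thin wrappers around the `open()`, `write()`, and `close()` system calls, so each call traps from user mode into kernel mode, where the kernel performs the hardware access on the program's behalf.

```python
import os
import tempfile

# A user-mode program cannot touch the disk hardware directly; it asks
# the kernel via system calls. Each os.* call below traps into kernel
# mode and back.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
written = os.write(fd, b"hello via the kernel\n")  # kernel does the I/O
os.close(fd)
```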

The operating systems we come across today generally have many features that are not strictly
necessary to make a system work, but are required to make interaction with the system easier.
Such features include the graphical interface, file management, process management, the shell,
etc. These features rely on the core part of the OS (the kernel) to run and to provide an interface
to the user or to other application programs. In practice these features are indispensable; a
kernel alone is of little use to the user.

An operating system also includes utilities that use the kernel. For example, MS-DOS provides a
program known as COMMAND.COM, which is the program that allows a human to use the
operating system. Windows Explorer, the MacOS Finder, and the various UNIX shells offer
similar functionality. Other OS utilities may include a file manager, a software installer, and
other items that are necessary to make the computer useful (never mind that some don't find
computers useful in the first place).

3. In the design of operating systems there are two approaches, the modular kernel
and the layered approach. How are they different?
A modular kernel is an attempt to merge the good points of kernel-level drivers and third-party
drivers. In a modular kernel, some parts of the system core are located in independent files
called modules that can be added to the system at run time. Depending on the content of those
modules, the goal can vary, for example:

 only loading a driver if its device is actually found
 only loading a file system if it is actually requested
 only loading the code for a specific (scheduling/security/whatever) policy when it must be
evaluated

The basic goal remains however the same: keep what is loaded at boot-time minimal while still
allowing the kernel to perform more complex functions. The basics of modular kernel are very
close to what we find in implementation of plugins in applications or dynamic libraries in
general.

In the layered approach, the operating system is divided into a number of layers (levels), each
built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is
the user interface. With modularity, layers are selected such that each uses functions (operations)
and services of only lower-level layers.

4. Under what circumstances does a multithreaded solution using multiple kernel
threads provide better performance than a single-threaded solution on a single-
processor system?

When a kernel thread suffers a page fault, another kernel thread can be switched in to use the
interleaving time in a useful manner. A single-threaded process, on the other hand, will not be
capable of performing useful work when a page fault takes place. Therefore, in scenarios where a
program might suffer from frequent page faults or has to wait for other system events, a multi-
threaded solution would perform better even on a single-processor system.
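The effect can be demonstrated with a small timing sketch. Each thread below blocks on a sleep, standing in for a page fault or disk wait; because the waits overlap, four blocked "requests" complete in roughly the time of one, even on a single CPU.

```python
import threading
import time

def blocking_io():
    # Stands in for a page fault or disk wait: the thread is blocked,
    # so the CPU is free to run the other threads.
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=blocking_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# The four 0.2 s waits overlap, so elapsed is close to 0.2 s, not 0.8 s.
```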

A purely sequential program, by contrast, is not a good candidate to be threaded. An example is
a program that calculates an individual tax return.

Another example is a "shell" program such as the C-shell or Korn shell. Such a program must
closely monitor its own working space such as open files, environment variables, and current
working directory.

5. Consider a system implementing multilevel queue scheduling. What strategy can a
computer user employ to maximize the amount of CPU time allocated to the user's
process?

The program could maximize the CPU time allocated to it by not fully utilizing its time
quanta. It could use a large fraction of its assigned quantum but relinquish the CPU before
the quantum expires, thereby increasing the priority associated with the process.

In multilevel queue scheduling, the ready queue is partitioned into separate queues: foreground
(interactive) and background (batch). Each queue has its own scheduling algorithm: foreground
uses RR, background uses FCFS. Scheduling must also be done between the queues:

 Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility
of starvation.
 Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its
processes; e.g., 80% to foreground in RR and 20% to background in FCFS.
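The quantum-relinquishing strategy above can be sketched as a toy priority rule. The rule here is illustrative, not any particular OS's policy: a process that blocks before its quantum expires keeps its queue level, while one that consumes the whole quantum is demoted.

```python
# Toy model (hypothetical rules): level 0 is the highest-priority queue.
def next_level(level, used, quantum):
    if used < quantum:
        return level        # yielded voluntarily: stay in this queue
    return level + 1        # burned the full quantum: drop one level

level = 0
for used in (9, 9, 10, 9):  # quantum = 10 time units
    level = next_level(level, used, 10)
# Only the full-quantum burst (used == 10) costs the process a level.
```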
6. How does synchronization work in Windows XP?

File synchronization, in its simplest form, is automatic copying: files from a specified directory
on one system are mirrored to a directory on a second system. Whenever changes are made, or at
specified points, the computers communicate and share any changes that have been made to the
directory or files.

Obviously, file synchronization can be used in more than just a simple two-computer setup. An
administrator might enable offline file synchronization on certain folders in a network drive used
to store communal documents. In this case, authorized persons at work could make changes to
the documents outside office hours, and have these changes replicated automatically the
following day, allowing other people at work access to the newly updated files.

Synchronizing Files

Offline files are automatically synchronized by default when you log off the network. To
manually synchronize an offline file, right-click the file that you want, and then
click Synchronize. You must be connected to the network to synchronize files.

To edit the synchronization settings:

1. Right-click the offline file that you want, and then click Synchronize.
2. In the Synchronizing dialog box, click Setup.
3. Change the synchronization settings that you want, and then click OK.
