Practical 1
Aim: To study and practice working with the Beowulf project.
Theory:
What makes a cluster a Beowulf?
"Cluster" is a widely used term for independent computers combined into a unified system through software
and networking. At the most fundamental level, when two or more computers are used together to solve a
problem, it is considered a cluster. Clusters are typically used for High Availability (HA) for greater reliability or
High Performance Computing (HPC) to provide greater computational power than a single computer can provide.
Beowulf Clusters are scalable performance clusters based on commodity hardware, on a private system network,
with open source software (Linux) infrastructure. The designer can improve performance proportionally with
added machines. The commodity hardware can be any of a number of mass-market, stand-alone compute nodes as
simple as two networked computers each running Linux and sharing a file system or as complex as 1024 nodes
with a high-speed, low-latency network.
Class I clusters are built entirely using commodity hardware and software using standard technology such as
SCSI, Ethernet, and IDE. They are typically less expensive than Class II clusters which may use specialized
hardware to achieve higher performance.
Common uses are traditional technical applications such as simulations, biotechnology, and petro-clusters;
financial market modeling, data mining and stream processing; and Internet servers for audio and games.
Beowulf programs are usually written using languages such as C and FORTRAN. They use message passing to
achieve parallel computations. See Beowulf History for more information on the development of the Beowulf
architecture.
One question that is commonly enough asked on the Beowulf list is "How hard is it to build or care for a
beowulf?"
Mind you, it is quite possible to go into beowulfery with no more than a limited understanding of networking, a
handful of machines (or better, a pocketful of money) and a willingness to learn, and over the years I've watched
and sometimes helped as many groups and individuals (including myself) in many places went from a state of
near-total ignorance to a fair degree of expertise on little more than guts and effort.
However, this sort of school is the school of hard (and expensive!) knocks; one ought to be able to do better and
not make the same mistakes and reinvent the same wheels over and over again, and this book is an effort to
smooth the way so that you can.
One place that this question is often asked is in the context of trying to figure out the human costs of beowulf
construction or maintenance, especially if your first cluster will be a big one and has to be right the first time.
After all, building a cluster of more than 16 or so nodes is an increasingly serious proposition. It may well be that
beowulfs are ten times cheaper than a piece of "big iron" of equivalent power (per unit of aggregate compute
power by some measure), but what if it costs ten times as much in human labor to build or run? What if it uses
more power or cooling? What if it needs more expensive physical infrastructure of any sort?
These are all very valid concerns, especially in a shop with limited human resources, little Linux expertise, or
limited space, cooling, and power. Building a cluster with four nodes, eight nodes, perhaps even sixteen nodes can
often be done so cheaply that it seems "free", because the opportunity cost of the resources required is so
minimal and the benefits so much greater than the costs. Building a cluster of 256 nodes without thinking hard
about the design beforehand, however, is an altogether more serious undertaking.
Vishal Shah
INDUS Institute of Technology and Engineering
The Berkeley NOW project is building system support for using a network of workstations (NOW) to act as a
distributed supercomputer on a building-wide scale. Because of the volume production, commercial workstations
today offer much better price/performance than the individual nodes of MPPs. In addition, switch-based networks
such as ATM will provide cheap, high-bandwidth communication. This price/performance advantage is increased
if the NOW can be used for both the tasks traditionally run on workstations and these large programs.
In conjunction with complementary research efforts in operating systems and communication architecture, we
hope to demonstrate a practical 100 processor system in the next few years that delivers at the same time
(1) better cost-performance for parallel applications than a massively parallel processing architecture (MPP) and
(2) better performance for sequential applications than an individual workstation. This goal requires combining
elements of workstation and MPP technology into a single system. If this project is successful, this project has the
potential to redefine the high-end of the computing industry.
To realize this project, we are conducting research and development into network interface hardware, fast
communication protocols, distributed file systems, and distributed scheduling and job control.
The NOW project is being conducted by the Computer Science Division at the University of California at
Berkeley.
The core hardware/software infrastructure for the project will include 100 Sun UltraSPARCs and 40 Sun
SPARCstations running Solaris, 35 Intel PCs running Windows NT or a PC UNIX variant, and between 500 and 1000
disks, all connected by a Myrinet switched network. Most of this hardware/software has been donated by the
companies involved. In addition, the Computer Science Division has received a donation of more than 300 HP
workstations, which we are also planning on integrating into the NOW project.
Using GLUnix
Taking advantage of NOW functionality is straightforward. Simply ensure that /usr/now/bin is in your shell's
PATH, and /usr/now/man in the MANPATH. To start taking advantage of GLUnix functionality, log into
now.cs.berkeley.edu and start a glush shell. While the composition of the GLUnix partition may change over time,
we make every effort to guarantee that now.cs is always running GLUnix. The glush shell runs most commands
remotely on the lightly loaded nodes in the cluster.
Utility Programs
We have built a number of utility programs for GLUnix. All of these programs are located in /usr/now/bin. Man
pages are available for all of them by running man from a shell. A brief description of each utility program
follows:
glush:
The GLUnix shell is a modified version of tcsh. Most jobs submitted to the shell are load
balanced among GLUnix machines. However, some jobs must be run locally, since GLUnix
does not provide completely transparent TTY support and since I/O bandwidth to stdin, stdout,
and stderr is limited by TCP bandwidth. The shell automatically runs a number of these jobs
locally; however, users may customize this list by adding programs to the glunix_runlocal shell
variable. The variable indicates to glush which programs should be run locally.
glumake:
A modified version of GNU's make program. A -j argument specifies the degree of parallelism
for the make. The degree of parallelism defaults to the number of nodes available in the cluster.
glurun:
This program runs the specified program on the GLUnix cluster. For example, glurun bigsim
will run bigsim on the least loaded machine in the GLUnix cluster. You can run parallel
programs on the NOW by specifying the parameter -N, where N is a number representing the
degree of parallelism you wish. Thus glurun -5 bigsim will run bigsim on the 5 least-loaded nodes.
glustat:
Displays the status and load of each node in the GLUnix cluster.
glups:
Similar to Unix ps, reporting the status of jobs running under GLUnix.
glukill:
Similar to Unix kill, sending a signal to a running GLUnix job.
gluptime:
Similar to Unix uptime, reporting on how long the system has been up and the current system
load.
Remote Execution:
Jobs can be started on any node in the GLUnix cluster. A single job may spawn multiple
worker processes on different nodes in the system.
Load Balancing:
GLUnix maintains imprecise information on the load of each machine in the cluster. The
system farms out jobs to the node which it considers least loaded at request time.
Signal Propagation:
A signal sent to a process is multiplexed to all worker processes comprising the GLUnix
process.
Coscheduling:
Jobs spawned to multiple nodes can be gang scheduled to achieve better performance.
I/O Redirection:
Output to stdout or stderr is piped back to the startup node. Characters sent to stdin are
multiplexed to all worker processes. Output redirection is limited by network bandwidth.
Theory:
1. Introduction and Concepts
This section gives you an introduction to how Alchemi implements the concept of grid computing and
discusses concepts required for using Alchemi. Some key features of the framework are highlighted along
the way.
1.1. The Network is the Computer
The idea of meta-computing - the use of a network of many independent computers as if they were one
large parallel machine, or virtual supercomputer - is very compelling since it enables supercomputer-scale
processing power to be had at a fraction of the cost of traditional supercomputers.
While traditional virtual machines (e.g. clusters) have been designed for a small number of tightly coupled
homogeneous resources, the exponential growth in Internet connectivity allows this concept to be applied
on a much larger scale. This, coupled with the fact that desktop PCs in corporate and home environments
are heavily underutilized (typically only one-tenth of their processing power is used), has given rise to
interest in harnessing the vast amount of processing power available in the form of spare CPU cycles on
Internet- or intranet-connected desktops. This new paradigm has been dubbed Grid Computing.
A grid is created by installing Executors on each machine that is to be part of the grid and linking them to a
central Manager component. The Windows installer that comes with the Alchemi distribution, together with the
minimal configuration required, makes it very easy to set up a grid.
An Executor can be configured to be dedicated (meaning the Manager initiates thread execution directly) or
non-dedicated (meaning that thread execution is initiated by the Executor). Non-dedicated Executors can work
through firewalls and NAT servers since there is only one-way communication between the Executor and
Manager. Dedicated Executors are more suited to an intranet environment and non-dedicated Executors are
more suited to the Internet environment.
Users can develop, execute and monitor grid applications using the .NET API and tools which are part of the
Alchemi SDK. Alchemi offers a powerful grid thread programming model which makes it very easy to develop
grid applications and a grid job model for grid-enabling legacy or non-.NET applications.
An optional component (not shown) is the Cross Platform Manager web service which offers interoperability
with custom non-.NET grid middleware.
To install the Manager as a Windows application, use the Manager Setup installer. For service-mode
installation use the Manager Service Setup. The configuration steps are the same for both modes. In the case
of service-mode, the Alchemi Manager Service is installed and configured to run automatically on Windows
start-up. After installation, the standard Windows service control manager can be used to control the
service. Alternatively, the Alchemi ManagerServiceController program can be used. The Manager service
controller is a graphical interface which looks very similar to the normal Manager application.
Install the Manager via the Manager installer. Use the sa password noted previously to install the database
during the installation.
Under service-mode operation, the GUI shown in fig. 3 is used to start / stop the Manager service. The service
will continue to operate even after the service controller application exits.
Manager Logging
The manager logs its output and errors to a log file called alchemi-manager.log. This can be used to debug
the manager / report errors / verify the manager operation. The log file is placed in the dat directory under
the installation directory.
Users are administered via the 'Users' tab of the Alchemi Console (located in the Alchemi SDK). Only
Administrators have permissions to manage users; you must therefore initially log in with the default admin
account.
The Console lets you add users, modify their group membership and change passwords.
The Users group (grp_id = 3) is meant for users executing grid applications.
The Executors group (grp_id = 2) is meant for Alchemi Executors. By default, Executors attempting to connect to
the Manager will use the executor account. If you do not wish Executors to connect anonymously, you can change
the password for this account.
You should change the default admin password for production use.
2.4. Cross Platform Manager
The Cross Platform Manager (XPManager) requires:
Internet Information Services (IIS)
ASP.NET
Installation
Install the XPManager web service via the Cross Platform Manager installer.
Configuration: the XPManager reads the Manager's address from the ManagerUri key in its configuration file:
<appSettings>
<add key="ManagerUri" value="tcp://localhost:9000/Alchemi_Node" />
</appSettings>
Operation
The XPManager web service URL is of the format
http://[host_name]/[installation_path]
The default is therefore
http://[host_name]/Alchemi/CrossPlatformManager
The web service interfaces with the Manager. The Manager must therefore be running for the web service to
work.
2.5. Executor
Installation
The Alchemi Executor can be installed in two modes:
As a normal Windows desktop application
As a Windows service (supported only on Windows NT/2000/XP/2003)
To install the Executor as a Windows application, use the Executor Setup installer. For service-mode installation
use the Executor Service Setup. The configuration steps are the same for both modes. In the case of service-mode,
the Alchemi Executor Service is installed and configured to run automatically on Windows start-up. After
installation, the standard Windows service control manager can be used to control the service. Alternatively, the
Alchemi ExecutorServiceController program can be used. The Executor service controller is a graphical interface
which looks very similar to the normal Executor application.
Install the Executor via the Executor installer and follow the on-screen instructions.
If the Executor is configured for non-dedicated execution, you can start executing by clicking the "Start
Executing" button in the "Manage Execution" tab.
The Executor only utilises idle CPU cycles on the machine and does not impact the CPU usage of running
programs. When closed, the Executor sits in the system tray. Other options, such as the interval of the executor
heartbeat (i.e. the time between pings to the Manager), can be configured via the options tab.
Under service-mode operation, the GUI shown in fig. 8 is used to start / stop the Executor service. The
service will continue to operate even after the service controller application exits.
Executor Logging
The executor logs its output and errors to a log file called alchemi-executor.log. This can be used to debug the
executor / report errors / verify the executor operation. The log file is placed in the dat directory under the
installation directory.
2.6. Software Development Kit
Alchemi.Core.dll
Alchemi.Core.dll is a class library for creating grid applications to run on Alchemi grids. It is located in the bin
directory. It must be referenced by all your grid applications. (For more on developing grid applications,
please see section 3, Grid Programming.)
3. Grid Programming
This section is a guide to developing Alchemi grid applications.
3.1. Introduction to Grid Software
For the purpose of grid application development, a grid can be viewed as an aggregation of multiple machines
(each with one or more CPUs) abstracted to behave as one "virtual" machine with multiple CPUs. However, grid
implementations differ in the way they implement this abstraction and one of the key differentiating features of
Alchemi is the way it abstracts the grid, with the aim to make the process of developing grid software as easy as
possible.
Due to the nature of the grid environment (loosely coupled, heterogeneous resources connected over an
unreliable, high-latency network), grid applications have the following features:
They can be parallelised into a number of independent computation units
Work units have a high ratio of computation time to communication time
Alchemi supports two models for parallel application composition.
Coarse-Grained Abstraction: File-Based Jobs
Traditional grid implementations have only offered a high-level abstraction of the virtual machine, where the
smallest unit of parallel execution is a process. The specification of a job to be executed on the grid at the most
basic level consists of input files, output files and an executable (process). In this scenario, writing software to
run on a grid involves dealing with files, an approach that can be complicated and inflexible.
Fine-Grained Abstraction: Grid Threads
On the other hand, the primary programming model supported by Alchemi offers a lower-level (and hence
more powerful) abstraction of the underlying grid, providing a programming model that is object-oriented and
that imitates traditional multi-threaded programming.
The smallest unit of parallel execution in this case is a grid thread (.NET object), where a grid thread is
programmatically analogous to a "normal" thread (without inter-thread communication).
The grid application developer deals only with grid thread and grid application .NET objects, allowing him/her to
concentrate on the application itself without worrying about the "plumbing" details. Furthermore, abstraction at
this level allows the use of an elegant programming model with clean interfacing between remote and local code.
Note: Hereafter, applications and threads can be taken to mean grid applications and grid threads respectively,
unless stated otherwise.
// calculate the number of grid threads required
int NumThreads = (Int32)Math.Floor((double)NumberOfDigits / DigitsPerThread);
if (DigitsPerThread * NumThreads < NumberOfDigits)
{
    NumThreads++;
}
// create and add the required number of grid threads
for (int i = 0; i < NumThreads; i++)
{
    int StartDigitNum = 1 + (i * DigitsPerThread);
    // the number of digits for each thread:
    // each thread gets DigitsPerThread digits except the last one,
    // which might get fewer
    int DigitsForThisThread = Math.Min(DigitsPerThread,
        NumberOfDigits - i * DigitsPerThread);
    Console.WriteLine(
        "starting a thread to calculate the digits of pi from {0} to {1}",
        StartDigitNum,
        StartDigitNum + DigitsForThisThread - 1);
    PiCalcGridThread thread = new PiCalcGridThread(
        StartDigitNum,
        DigitsForThisThread);
    App.Threads.Add(thread);
}
// subscribe to events
if (th > 5)
{
    Console.WriteLine("For testing: aborting threads beyond th=5");
    try
    {
        Console.WriteLine("Aborting thread th=" + th);
        thread.Abort();
    }
    catch (Exception ex)
    {
        // report any error raised while aborting
        Console.WriteLine(ex.Message);
    }
}
App.config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<!-- Register a section handler for the log4net section -->
<configSections>
<section name="log4net" type="System.Configuration.IgnoreSectionHandler" />
</configSections>
<appSettings>
<!-- To enable internal log4net logging specify the following appSettings key -->
<!-- <add key="log4net.Internal.Debug" value="true"/> -->
</appSettings>
<!-- This section contains the log4net configuration settings -->
<log4net>
<!-- Define some output appenders -->
<appender
name="RollingLogFileAppender"
type="log4net.Appender.RollingFileAppender">
<file value="picalc.log" />
<appendToFile value="true" />
<maxSizeRollBackups value="5" />
<maximumFileSize value="1000000" />
<rollingStyle value="Once" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
We investigated the use of economics as a metaphor for the management of resources in Grid computing
environments. A Grid resource broker, called Nimrod-G [5], has been developed that performs scheduling of
parameter-sweep, task-farming applications on geographically distributed resources. It supports deadline- and
budget-based scheduling driven by market-based economic models. To meet users' quality-of-service
requirements, our broker dynamically leases Grid resources and services at runtime depending on their capability,
cost, and availability. Many scheduling experiments have been conducted on the execution of data-intensive
science applications, such as molecular modeling for drug design, under a few Grid scenarios (e.g. a 2 h deadline
and 10 machines for a single user). The ability to experiment with a large number of Grid scenarios was limited by
the number of resources available in the WWG (World-Wide Grid) testbed [9]. Also, it was impossible to
create a repeatable and controlled environment for the experimentation and evaluation of scheduling strategies,
because resources in the Grid span multiple administrative domains, each with its own policies, users,
and priorities.
Researchers and students investigating resource management and scheduling for large-scale
distributed computing need a simple framework for the deterministic modeling and simulation of resources and
applications in order to evaluate scheduling strategies. For most, who do not have access to ready-to-use testbed
infrastructures, building them is expensive and time consuming. Even for those who do have access, the testbed
size is limited to a few resources and domains, so testing scheduling algorithms for scalability and adaptability,
and evaluating scheduler performance across many application and resource scenarios, is difficult and hard to
reproduce. To overcome these limitations, we provide a Java-based Grid simulation toolkit called GridSim. Grid
computing researchers and educators have also recognized the importance of, and the need for, such a toolkit for
modeling and simulation environments [10]. It should be noted that this discussion has a major orientation
towards Grids; however, we believe that it applies equally well to P2P systems, since resource management and
scheduling issues in both kinds of systems are quite similar. The GridSim toolkit supports the modeling and
simulation of a wide range of heterogeneous resources, such as single or multiprocessors, shared and distributed
memory machines such as PCs, workstations, SMPs, and clusters with different capabilities and configurations. It
can be used for modeling and simulation of application scheduling on various classes of parallel and distributed
computing systems such as clusters [11], Grids [1], and P2P networks [2]. The resources in clusters are located in
a single administrative domain and managed by a single entity, whereas in Grid and P2P systems, resources are
geographically distributed across multiple administrative domains with their own management policies and goals.
Another key difference between cluster and Grid/P2P systems arises from the way application scheduling is
performed. Schedulers in cluster systems focus on enhancing overall system performance and utility, as they
are responsible for the whole system. In contrast, schedulers in Grid/P2P systems, called resource brokers, focus
on enhancing the performance of a specific application in such a way that its end-user's requirements are met. The
GridSim toolkit provides facilities for the modeling and simulation of resources and network connectivity with
different capabilities, configurations, and domains. It supports primitives for application composition, information
services for resource discovery, and interfaces for assigning application tasks to resources and managing their
execution.
SimJava [14] is a general-purpose discrete event simulation package implemented in Java. Simulations in SimJava
contain a number of entities, each of which runs in parallel in its own thread. An entity's behaviour is encoded in
Java using its body() method. Entities have access to a small number of simulation primitives:
sim_schedule() sends event objects to other entities via ports;
sim_hold() holds for some simulation time;
sim_wait() waits for an event object to arrive.
These features help in constructing a network of active entities that communicate by sending and
receiving passive event objects efficiently. The sequential discrete event simulation algorithm in SimJava is as
follows. A central object, Sim_system, maintains a timestamp-ordered queue of future events. Initially all entities
are created and their body() methods are put in the run state. When an entity calls a simulation function, the
Sim_system object halts that entity's thread and places an event on the future queue to signify processing the
function. When all entities have halted, Sim_system pops the next event off the queue, advances the simulation
time accordingly, and restarts entities as appropriate. This continues until no more events are generated. If the JVM
supports native threads, then all entities starting at exactly the same simulation time may run concurrently.
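The sequential loop described above can be condensed into a short sketch. This is a simplification (the class MiniSim is invented for illustration); real SimJava entities run in their own threads and block inside body(), whereas here each event simply carries an action to run:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class MiniSim {
    // A future event: a timestamp, the target entity's name, and the
    // work to perform when the event fires.
    record Event(double time, String entity, Runnable action) {}

    private final PriorityQueue<Event> future =
            new PriorityQueue<>(Comparator.comparingDouble(Event::time));
    private double clock = 0.0;

    public void schedule(double delay, String entity, Runnable action) {
        future.add(new Event(clock + delay, entity, action));
    }

    // Pop events in timestamp order, advancing the clock each time,
    // until no more events remain (the Sim_system loop).
    public void run() {
        while (!future.isEmpty()) {
            Event e = future.poll();
            clock = e.time();
            e.action().run();
        }
    }

    public double now() { return clock; }

    public static void main(String[] args) {
        MiniSim sim = new MiniSim();
        sim.schedule(2.0, "printer", () -> System.out.println("event at t=2"));
        sim.schedule(1.0, "printer", () -> System.out.println("event at t=1"));
        sim.run(); // events fire in timestamp order, not insertion order
    }
}
```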
GridSim entities
User. Each instance of the User entity represents a Grid user. Each user may differ from the
rest of the users with respect to the following characteristics:
type of job created, e.g. job execution time, number of parametric replications, etc.;
scheduling optimization strategy, e.g. minimization of cost, time, or both;
activity rate, e.g. how often it creates new jobs;
time zone; and
absolute deadline and budget, or
D- and B-factors, deadline and budget relaxation parameters, measured in the range [0, 1],
which express the deadline and budget affordability of the user relative to the application
processing requirements and available resources.
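One way to read these relaxation parameters is as a linear interpolation between the tightest and the most relaxed feasible values. This mapping is an assumption made here for illustration (the exact definition belongs to Nimrod-G and is not given in the text above), as are the names DeadlineBudget and absoluteDeadline:

```java
public class DeadlineBudget {
    // D = 0 demands the minimum feasible completion time; D = 1 accepts
    // the most relaxed one. (Assumed linear mapping, for illustration.)
    public static double absoluteDeadline(double dFactor, double tMin, double tMax) {
        return tMin + dFactor * (tMax - tMin);
    }

    // Analogous mapping for the budget relaxation parameter B.
    public static double absoluteBudget(double bFactor, double cMin, double cMax) {
        return cMin + bFactor * (cMax - cMin);
    }

    public static void main(String[] args) {
        // D = 0.5 with feasible completion times between 100 s and 500 s
        System.out.println(absoluteDeadline(0.5, 100, 500)); // prints 300.0
    }
}
```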
Broker.
Each user is connected to an instance of the Broker entity. Every job of a user is
first submitted to its broker, and the broker then schedules the parametric tasks according to the user's scheduling
policy. Before scheduling the tasks, the broker dynamically gets a list of available resources from the global
directory entity. Every broker tries to optimize the policy of its user; therefore, brokers are expected to face
extreme competition when gaining access to resources. The scheduling algorithms used by the brokers must be
highly adaptable to the market's supply and demand situation.
Resource.
Each instance of the Resource entity represents a Grid resource. Each resource may differ from the rest of the
resources with respect to the following characteristics:
number of processors;
cost of processing;
speed of processing;
internal process scheduling policy, e.g. time-shared or space-shared;
local load factor; and
time zone.
The resource speed and the job execution time can be defined in terms of the ratings of standard
benchmarks such as MIPS and SPEC. They can also be defined with respect to a standard machine. Upon
obtaining the resource contact details from the Grid information service, brokers can query resources directly for
their static and dynamic properties.
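For instance, if job length is expressed in millions of instructions (MI) and resource speed as a MIPS rating per processing element, an estimated execution time follows directly. This is an idealized sketch that ignores load, multitasking, and communication, not GridSim code; the names ExecTime and estimateSeconds are invented:

```java
public class ExecTime {
    // length in MI, rating in MIPS per PE; all numPE processors are
    // assumed fully dedicated to the job.
    public static double estimateSeconds(double lengthMI, double mipsRating, int numPE) {
        return lengthMI / (mipsRating * numPE);
    }

    public static void main(String[] args) {
        // a 42,000 MI job on a 4-PE machine rated at 500 MIPS per PE
        System.out.println(estimateSeconds(42000, 500, 4)); // prints 21.0
    }
}
```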
Grid information service.
Provides resource registration services and keeps track of a list of resources available in the Grid. The brokers
can query this service for resource contact, configuration, and status information.
The broker entity sends a query to the GIS for resource discovery. The GIS entity returns a list of registered
resources and their contact details. The broker entity then sends events to resources with a request for resource
configuration and properties. They respond with dynamic information such as the resource's cost, capability,
availability, load, and other configuration parameters. These events involving the GIS entity are synchronous in
nature.
Depending on the resource selection and scheduling strategy, the broker entity places asynchronous events
for resource entities in order to dispatch Gridlets for execution; the broker need not wait for a resource to
complete the assigned work. When the Gridlet processing is finished, the resource entity updates the Gridlet status
and processing time and sends it back to the broker by raising an event to signify its completion.
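The fire-and-forget dispatch just described (the broker does not block; the resource signals completion via an event) can be sketched as a callback. The names here (AsyncDispatch, dispatch) are invented for illustration:

```java
import java.util.function.Consumer;

public class AsyncDispatch {
    // The broker hands a gridlet to a worker together with a completion
    // callback; it is then free to dispatch further work meanwhile.
    public static Thread dispatch(String gridlet, Consumer<String> onDone) {
        Thread worker = new Thread(() -> {
            // ... the resource would process the gridlet here ...
            onDone.accept(gridlet + ": done");
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = dispatch("gridlet-1", status -> System.out.println(status));
        t.join(); // only the demo waits here; a real broker would not
    }
}
```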
The GridSim resources use internal events to simulate resource behavior and resource allocation. The
entity needs to be modeled in such a way that it is able to receive all events meant for it. However, it is up to the
entity to decide on the associated actions. For example, in time-shared resource simulations (see Figure 5), internal
events are scheduled to signify the completion time of the Gridlet that has the smallest remaining processing
time requirement. Meanwhile, if an external event arrives, it changes the shared resource availability for each
Gridlet, which means the most recently scheduled event may
not necessarily signify the completion of a Gridlet. The resource entity can discard such internal
events without processing.
Resource model: simulating multitasking and multiprocessing
In the GridSim toolkit, we can create Processing Elements (PEs) with different speeds (measured
in either MIPS or SPEC-like ratings). Then, one or more PEs can be put together to create a machine. Similarly,
one or more machines can be put together to create a Grid resource. Thus, the resulting Grid resource can be a
single processor, shared memory multiprocessors (SMP), or a distributed memory cluster of computers. These
Grid resources can simulate time- or space-shared scheduling depending on the allocation policy. A single PE or
SMP-type Grid resource is typically managed by time-shared operating systems that use a round-robin scheduling
policy for multitasking. The distributed memory multiprocessing systems (such as clusters) are managed by
queuing systems, called space-shared schedulers, that execute a Gridlet by running it on a dedicated PE (see
Figure 12) when allocated. The space-shared systems use resource allocation policies such as first-come-first-served
(FCFS), backfilling, shortest-job-first-served (SJFS), and so on. It should also be noted that resource
allocation within high-end SMPs could also be performed using space-shared schedulers.
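The FCFS space-shared policy mentioned above can be sketched as a simple queue discipline. This is an illustrative fragment (the names SpaceShared and startBatch are invented), not the GridSim scheduler itself:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SpaceShared {
    // Gridlets wait in arrival order; each one that starts occupies a
    // dedicated PE until it finishes (FCFS, no backfilling).
    public static List<String> startBatch(Deque<String> queue, int freePEs) {
        List<String> started = new ArrayList<>();
        while (!queue.isEmpty() && freePEs > 0) {
            started.add(queue.poll());
            freePEs--;
        }
        return started;
    }

    public static void main(String[] args) {
        Deque<String> q = new ArrayDeque<>(List.of("g1", "g2", "g3"));
        // two PEs free: g1 and g2 start, g3 keeps waiting
        System.out.println(startBatch(q, 2)); // prints [g1, g2]
    }
}
```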
Aneka
Manjrasoft is focused on the creation of innovative software technologies for simplifying the development and
deployment of applications on private or public Clouds. Our product Aneka plays the role of an Application
Platform as a Service for Cloud Computing. Aneka supports various programming models, including Task
Programming, Thread Programming and MapReduce Programming, and provides tools for the rapid creation of
applications and their seamless deployment on private or public Clouds.
Highlights of Aneka
Technical Value
Business Value
Improved reliability
Simplicity
APPLICATION
Distributed 3D Rendering
For 3D rendering, Aneka enables you to complete your jobs in a fraction of the usual time using existing
hardware infrastructure without having to do any programming.
Aneka includes a Software Development Kit (SDK), which contains a combination of APIs and
tools that enable you to express your application. Aneka also allows you to build different run-time
environments and new applications.
Accelerate
Life Sciences
In the life sciences sector, Aneka can be used for drug design, medical imaging, molecular & quantum
mechanics, genomic search, etc. Using Aneka, simulations take hours instead of days to complete,
enabling you to improve the quality and precision of your research by carrying out multiple simulations,
and to decrease your time to market by running simulations in parallel.
namespace Aneka.Examples.TaskDemo
{
/// <summary>
/// Class MyTask. Simple task function wrapping
/// the Gaussian normal distribution. It computes
/// the value of the distribution at a given point.
/// </summary>
[Serializable]
public class MyTask : ITask
{
/// <summary>
/// value at which to evaluate the
/// Gaussian normal distribution.
/// </summary>
private double x;
/// <summary>
/// Gets or sets the value at which to evaluate
/// the Gaussian normal distribution.
/// </summary>
public double X
{ get { return this.x; } set { this.x = value; } }
/// <summary>
/// result of evaluating the Gaussian
/// normal distribution at x.
/// </summary>
private double result;
/// <summary>
/// Gets or sets the result of evaluating the
/// Gaussian normal distribution at x.
/// </summary>
public double Result
{
get { return this.result; }
set { this.result = value; }
}
/// <summary>
/// Creates an instance of MyTask.
using System.Collections.Generic; // IList<...> class.
using System.Text; // StringBuilder class.
using System.IO;
using System.Drawing;
using Aneka.Entity;
using Aneka.Threading;
using System.Threading;
#endregion
namespace Aneka.Examples.ThreadDemo
{
/// <summary>
/// <para>
/// Class <i><b>WarholApplication</b></i>. This class manages the execution
/// of the <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> on Aneka,
/// creating a composite image of a given picture made up of 4 copies
/// of the same image, each with the filter applied with different settings.
/// </para>
}
finally
{
// we ensure that the application closes properly
// before leaving the method...
if (this.application != null)
{
if (this.application.Finished == false)
{
this.application.StopExecution();
}
}
}
}
#endregion
#region Helper Methods
/// <summary>
/// Loads the <see cref="T:Aneka.Entity.Configuration" /> and
/// initializes the <see cref="T:Aneka.Entity.AnekaApplication{W,M}" />
/// instance.
/// </summary>
protected void Init()
{
Configuration configuration = null;
if (string.IsNullOrEmpty(this.configPath) == true)
{
// we get the default configuration...
configuration = Configuration.GetConfiguration();
}
else
{
configuration = Configuration.GetConfiguration(this.configPath);
}
this.application = new AnekaApplication<AnekaThread, ThreadManager>(configuration);
}
/// <summary>
/// <para>
/// Starts the execution of the <see cref="T:Aneka.Threading.AnekaThread" />
/// instances.
/// </para>
/// <para>
/// This method creates a set of <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" />
/// instances and configures them with a <see cref="T:Aneka.Threading.AnekaThread" />
/// instance. All the threads are added to a local running queue and then
/// <see cref="T:Aneka.Threading.AnekaThread.Start" /> is invoked.
/// The <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> instances are configured with
/// the <see cref="T:System.Drawing.Bitmap" /> <paramref name="source"/> as the input image.
/// </para>
/// </summary>