
Software Requirement Specification

for
Cloud Enabled Application Programming Suite




1.1 Introduction

Cloud computing is the delivery of computing as a service rather than a product,
whereby shared resources, software and information are provided to computers and other
devices as a utility (like the electricity grid) over a network (typically the Internet). Cloud
computing provides computation, software, data access, and storage services that do not
require end-user knowledge of the physical location and configuration of the system that
delivers the services. The concept of cloud computing fills a perpetual need of IT: a way to
increase capacity or add capabilities on the fly without investing in new infrastructure,
training new personnel, or licensing new software.
Cloud computing is a model for enabling convenient, on-demand network access to a
shared pool of configurable computing resources that can be rapidly provisioned and
released with minimal management effort. This document describes an online compiler
that helps to reduce the problems of portability and storage space by making use of the
concept of cloud computing. The ability to use different compilers allows a programmer to
pick the fastest or the most convenient tool to compile the code and remove the errors.
Moreover, a web-based application can be used remotely over any network connection and
it is platform independent. The errors and outputs of the code are stored in a more
convenient way, and the trouble of installing the compiler on each computer is avoided.
These advantages make this application well suited for conducting examinations online.
Cloud computing is rapidly gaining the interest of service providers, programmers
and the public, as no one wants to miss the new hype. While there are many theories on
how the cloud will evolve, little real discussion of its programmability has taken place. In
this project an online compiler is described that enables users to compile programs that run
in a distributed manner in the cloud. This is done by creating an object-oriented syntax and
interpretation environment that can create objects on various distributed locations
throughout a network and address them in a scalable, fault-tolerant and transparent way.
This is followed by a discussion of the problems faced and an outlook into the future.
1.2 Statement of Problem

Cloud computing is seen to bring together many services that are provided through
the world-wide network of computers. A trend towards multifunctional environments is
currently taking place at the operating system kernel level, encouraged by new
virtualization techniques. On the other hand, at the highest level of abstraction,
object-oriented notations and ideas are mostly used. The general problem is that once a
cloud provider is chosen, a lock-in to its techniques and libraries occurs. Service
compatibility is then achieved by adding specific output filters to the program, which
emulate object usage. The result is that every Software as a Service (SaaS) provider
creates its own format. Other programs then have to retrieve this information, parse it
accordingly and create local object representations if they want to communicate with the
service. This creates many difficulties, especially when the format has to change. By these
methods, both ends of a cloud service stack have become scalable and, in a nutshell,
cloud enabled. Since the important layer of compilers and interpreters, and as such the
program constructs, has been neglected in the past few years, it is still the case that to use
other services of a cloud provider, the programmer has to include some specific library or
write the interface manually. Efforts to make compilers and interpreters more cloud friendly
have only resulted in incomplete products and are not generally used. As seen by the
success of SOAP and the object-oriented paradigm, an object-oriented distribution approach
bears many advantages for the cloud, but it has not yet been implemented at the compiler
layer.










1.3 Scope & objectives:
User Personalization
Online Programming Suite
Cloud Managed Compilation
Distributed Processors for Compilation
Users will be allowed to register with the system and manage their profiles. A user
can write programs online, save them in their profile, and later update or delete them.
Online programming allows users to write and manage their programs. The programs are
then stored on the cloud manager, and the compilation of the programs is managed by the
cloud, which forwards each request to the required processor for compilation. A
cloud-managed distributed architecture will be used for processor load balancing during
compiler allocation.
The project will create an innovative, Turing-complete online programming language
compiler that enables and promotes distribution of objects throughout a network. The core
principle of the compiler is that it makes no difference to the syntax of the code whether an
object is initialized locally or on an unknown resource indicated by a URL (Uniform Resource
Locator). The syntax of the compiler should seem familiar to anyone who knows C, C++,
Java or PHP. The project will also provide the basis for a discussion of whether and how
distributed objects can be used for cloud programming purposes, as illustrated by the
sketch below.
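To make the core principle concrete, the following is a minimal, hypothetical C# sketch (not the project's actual compiler syntax): the same factory call yields either a local object or a proxy for a remote one, so the calling code stays the same whether an object lives locally or behind a URL. The interface, class and factory names here are illustrative assumptions.

```csharp
// Hypothetical sketch: local object or remote proxy behind one factory call.
using System;

public interface ICalculator
{
    int Add(int a, int b);
}

// Local implementation: the object lives in the same process.
public class LocalCalculator : ICalculator
{
    public int Add(int a, int b) => a + b;
}

// Stand-in proxy: a real implementation would forward the call over the
// network to the resource identified by the URL.
public class RemoteCalculatorProxy : ICalculator
{
    private readonly Uri _endpoint;
    public RemoteCalculatorProxy(Uri endpoint) => _endpoint = endpoint;

    public int Add(int a, int b) =>
        throw new NotImplementedException($"network call to {_endpoint} omitted in this sketch");
}

public static class ObjectFactory
{
    // No URL given: create the object locally. URL given: create a remote proxy.
    public static ICalculator Create(string url = null) =>
        url == null ? (ICalculator)new LocalCalculator()
                    : new RemoteCalculatorProxy(new Uri(url));
}

public class Demo
{
    public static void Main()
    {
        ICalculator local  = ObjectFactory.Create();
        ICalculator remote = ObjectFactory.Create("http://example.com/calc"); // hypothetical URL
        Console.WriteLine(local.Add(2, 3)); // 5; remote.Add(2, 3) would go over the network
    }
}
```

With this arrangement, ObjectFactory.Create() and ObjectFactory.Create("http://example.com/calc") are used identically by the rest of the program, which is the transparency the compiler aims for.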

Personal Objectives
Gain a sound understanding of compilers, interpreters and the technology involved.
Understand the issues and problems associated with distributed computing and try
to find solutions.
Define cloud computing and gain knowledge about the general topic.
Become familiar with PHP and the tools linked to it.





Literature review


What is the Cloud?

Cloud computing is said to be one of the biggest shifts ever seen in the way computers are
used, but first it has to be clarified what the cloud stands for and how a cloud can
compute. The term cloud was coined from the cloud symbol commonly used in network
diagrams to represent the internet, a large amount of anonymous, interlinked computers.


A typical network diagram using a cloud


In essence this means that a cloud of computers and/or servers acts and reacts as a
single computer. These computers can be owned by a big company and as such be housed
in big server farms, can be personally owned home machines or virtualized resources. The
important thing is that this conglomerate of machines can be accessed via the internet.
Many synonyms have been associated with the cloud, such as Utility Computing (UC),
Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service
(SaaS) [Armbrust et al., 2009]. To discuss the topic in more detail, the ambiguous term
cloud computing has to be divided into two categories:






Storage

Data storage forms the base of all computing; it is one of the main requirements
for being able to process anything. In terms of cloud computing, cloud storage can be
defined as data being saved on multiple third-party servers. The storage appears to the
user as one coherent block of space available for their use. One of the most used storage
providers is the Amazon S3 (Simple Storage Service), which charges the user dynamically
based on usage, with a metric consisting of data uploaded/downloaded and data held. For
the user, the storage seems unlimited and is only bounded by the amount of money they
can pay. The user does not know where the data is housed, and it is known that Amazon
holds redundant copies in different countries. This, of course, holds some risks for
companies, as laws and regulations may change from country to country, but this issue
will not be discussed here. Further, as seen with the Google Docs incident, by uploading
data into the cloud it might become involuntarily accessible to the whole world. However,
the ability to share all documents that live in the cloud is seen as one of the great
advantages. By uploading data into a cloud storage service, data security (loss, corruption,
access) is outsourced to the storage provider. There are many such storage providers, but
they all have in common that they offer online accessible storage with the actual
implementation hidden from the user.
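As a rough illustration of such metered billing, the following C# sketch computes a monthly charge from data held and data transferred; the per-GB rates are hypothetical placeholders, not Amazon's actual prices.

```csharp
// Illustrative only: hypothetical per-GB rates, not any provider's real prices.
using System;

class StorageBillSketch
{
    static void Main()
    {
        double storedGb = 120;      // average data held during the month
        double transferredGb = 30;  // upload + download during the month

        const double storageRatePerGb  = 0.10; // hypothetical $/GB-month
        const double transferRatePerGb = 0.05; // hypothetical $/GB

        double bill = storedGb * storageRatePerGb + transferredGb * transferRatePerGb;
        Console.WriteLine($"Monthly charge: ${bill:F2}"); // 120*0.10 + 30*0.05 = $13.50
    }
}
```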
Software

Cloud programs are very similar to Software as a Service in that they are hosted
online and mostly accessible through a web browser. However, they are different in the
respect that the underlying hardware is not always provisioned by the creator of the service.
Software as a Service is a well-researched area, but Utility Computing is just at the
beginning. By having the administration of the services outsourced, maintenance and
software installation are greatly simplified. There are two main schools of thought here.
The first one is the way Amazon is taking: it is possible to buy time on a virtual machine,
which can then be installed and configured as needed. Further horizontal scaling, with only
money posing as a boundary, is possible by adding machines through a web interface. The
other school of thought is the one Google and Microsoft are proposing; in this case the
developer has to write the program in a special programming environment using
vendor-specific libraries. This makes maintenance, scalability and installation very easy, as
it is independent of the users. So cloud computing can be defined by accessibility on the
internet, mostly through a browser; a nearly infinite pool of resources; horizontal
scalability; and dynamic payment.
Finally, it has to be stated that cloud computing is not the same as grid computing. Clouds
may have properties similar to the grid and can internally use grid software to manage the
underlying architecture, but the cloud consists of a stack of services, whereas grid
computing is one layer of this stack.






Pros and Cons of Cloud Computing

The great advantage of cloud computing is elasticity: the ability to
add capacity or applications almost at a moment's notice. Companies buy
exactly the amount of storage, computing power, security and other IT
functions that they need from specialists in data-center computing. They get
sophisticated data center services on demand, in only the amount they need
and can pay for, at service levels set with the vendor, with capabilities that
can be added or subtracted at will.
The metered cost, pay-as-you-go approach appeals to small- and medium-
sized enterprises; little or no capital investment and maintenance cost is
needed. IT is remotely managed and maintained, typically for a monthly fee,
and the company can let go of plumbing concerns. Since the vendor has
many customers, it can lower the per-unit cost to each customer. Larger
companies may find it easier to manage collaborations in the cloud, rather
than having to make holes in their firewalls for contract research
organizations. SaaS deployments usually take less time than in-house ones,
upgrades are easier, and users are always using the most recent version of
the application. There may be fewer bugs because having only one version
of the software reduces complexity.
This may all sound very appealing but there are downsides. In the cloud you
may not have the kind of control over your data or the performance of your
applications that you need, or the ability to audit or change the processes
and policies under which users must work. Different parts of an application
might be in many places in the cloud. Complying with federal regulations
such as Sarbanes-Oxley, or an FDA audit, is extremely difficult. Monitoring and
maintenance tools are immature. It is hard to get metrics out of the cloud
and general management of the work is not simple. There are systems
management tools for the cloud environment but they may not integrate
with existing system management tools, so you are likely to need two
systems. Nevertheless, cloud computing may provide enough benefits to
compensate for the inconvenience of two tools.
Cloud customers may risk losing data by having them locked into proprietary
formats and may lose control of data because tools to see who is using them
or who can view them are inadequate. Data loss is a real risk. In October
2009, 1 million US users of the T-Mobile Sidekick mobile phone and emailing
device lost data as a result of server failure at Danger, a company recently
acquired by Microsoft. Bear in mind, though, that it is easy to
underestimate risks associated with the current environment while
overestimating the risk of a new one. Cloud computing is not risky for every
system. Potential users need to evaluate security measures such as
firewalls, and encryption techniques and make sure that they will have
access to data and the software or source code if the service provider goes
out of business.
It may not be easy to tailor service-level agreements (SLAs) to the specific
needs of a business. Compensation for downtime may be inadequate and
SLAs are unlikely to cover concomitant damages, but not all applications
have stringent uptime requirements. It is sensible to balance the cost of
guaranteeing internal uptime against the advantages of opting for the cloud.
It could be that your own IT organization is not as sophisticated as it might
seem.
Calculating cost savings is also not straightforward. Having little or no capital
investment may actually have tax disadvantages. SaaS deployments are
cheaper initially than in-house installations and future costs are predictable;
after 3-5 years of monthly fees, however, SaaS may prove more expensive
overall. Large instances of EC2 are fairly expensive, but it is important to do
the mathematics correctly and make a fair estimate of the cost of an on-
premises (i.e., in-house) operation.
Standards are immature and things change very rapidly in the cloud. All
IaaS and SaaS providers use different technologies and different standards.
The storage infrastructure behind Amazon is different from that of the typical
data center (e.g., big Unix file systems). The Azure storage engine does not
use a standard relational database; Google's App Engine does not support an
SQL database. So you cannot just move applications to the cloud and expect
them to run. At least as much work is involved in moving an application to
the cloud as is involved in moving it from an existing server to a new one.
There is also the issue of employee skills: staff may need retraining and they
may resent a change to the cloud and fear job losses.

Last but not least, there are latency and performance issues. The
Internet connection may add to latency or limit bandwidth. (Latency, in
general, is the period of time that one component in a system spends
waiting for another component. In networking, it is the amount of time
it takes a packet to travel from source to destination.) In future,
programming models exploiting multithreading may hide latency.
Nevertheless, the service provider, not the user, controls the hardware,
so unanticipated sharing and reallocation of machines may affect run times.
Interoperability is limited. In general, SaaS solutions work best for non-
strategic, non-mission-critical processes that are simple and standard and
not highly integrated with other business systems. Customized applications
may demand an in-house solution, but SaaS makes sense for applications
that have become commoditized, such as reservation systems in the travel
industry.



Existing System

Compilers

A compiler is a program that reads a well-defined source language and outputs a
related target language. This target language can be an executable that can be run directly
on an architecture, or byte code, an abstract syntax tree (AST), or a similar representation
that can be interpreted. The translation involves four distinct steps, as described below.











1. Lexical Analysis

In this step, the Lexical analyzer or scanner reads the source program and creates
meaningful tokens out of the characters. This means it tries to split the input up into little
lexemes.

2. Syntax Analysis

The parser uses the tokens to create a tree-like structure which is normally called a parse
tree. This is created based on a set of rules which describe how the syntax is recognized
and how the tree should be built. In the typical form of such a tree, the operator is the
root and its operands are the children.

3. Semantic Analysis

The semantic analyzer checks if the syntax tree has the correct semantic form and might
perform some optimizations. This means that the input is correct and can be understood by
further steps (semantic rules). Some compilers also do type checking and other changes to
the tree like type conversions. The output is then called an Abstract syntax tree (AST).

4. Code Generation

This is the final step, where the intermediate representation is converted into the actual
output code. If this output is some form of assembly language, the registers are allocated
and the output is generated. This step can vary from implementation to implementation:
some compilers split it into three sub-areas (intermediate code generation, code
optimization and code generation), whereas other compilers optimize based on the syntax
tree.
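The following is a minimal C# sketch of the first two steps, lexical and syntax analysis, for simple expressions such as "3 + 4 * 2"; the token kinds, grammar and class names are illustrative and are not the project's actual compiler.

```csharp
// Minimal lexer + recursive-descent parser sketch for "NUMBER (+|*) NUMBER ..." expressions.
using System;
using System.Collections.Generic;

enum TokenKind { Number, Plus, Star, End }

record Token(TokenKind Kind, int Value = 0);

static class Lexer
{
    // Lexical analysis: split the character stream into tokens (lexemes).
    public static List<Token> Tokenize(string src)
    {
        var tokens = new List<Token>();
        for (int i = 0; i < src.Length; )
        {
            char c = src[i];
            if (char.IsWhiteSpace(c)) { i++; }
            else if (char.IsDigit(c))
            {
                int start = i;
                while (i < src.Length && char.IsDigit(src[i])) i++;
                tokens.Add(new Token(TokenKind.Number, int.Parse(src[start..i])));
            }
            else if (c == '+') { tokens.Add(new Token(TokenKind.Plus)); i++; }
            else if (c == '*') { tokens.Add(new Token(TokenKind.Star)); i++; }
            else throw new Exception($"Unexpected character '{c}'");
        }
        tokens.Add(new Token(TokenKind.End));
        return tokens;
    }
}

// Syntax analysis: build a tree where the operator is the root
// and its operands are the children.
abstract record Node;
record Num(int Value) : Node;
record BinOp(char Op, Node Left, Node Right) : Node;

class Parser
{
    private readonly List<Token> _tokens;
    private int _pos;
    public Parser(List<Token> tokens) => _tokens = tokens;

    // Grammar: expr := term ('+' term)*   term := NUMBER ('*' NUMBER)*
    public Node ParseExpr()
    {
        Node left = ParseTerm();
        while (_tokens[_pos].Kind == TokenKind.Plus)
        {
            _pos++;
            left = new BinOp('+', left, ParseTerm());
        }
        return left;
    }

    private Node ParseTerm()
    {
        Node left = ParseNumber();
        while (_tokens[_pos].Kind == TokenKind.Star)
        {
            _pos++;
            left = new BinOp('*', left, ParseNumber());
        }
        return left;
    }

    private Node ParseNumber()
    {
        Token tok = _tokens[_pos++];
        if (tok.Kind != TokenKind.Number) throw new Exception("number expected");
        return new Num(tok.Value);
    }
}

class CompilerDemo
{
    static void Main()
    {
        Node ast = new Parser(Lexer.Tokenize("3 + 4 * 2")).ParseExpr();
        Console.WriteLine(ast); // BinOp { Op = +, Left = Num { Value = 3 }, Right = BinOp { ... } }
    }
}
```

Semantic analysis and code generation would then walk this tree, as described in steps 3 and 4 above.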






Compiled versus interpreted languages

Higher-level programming languages are generally divided for convenience
into compiled languages and interpreted languages. However, in practice there is rarely
anything about a language that requires it to be exclusively compiled or exclusively
interpreted, although it is possible to design languages that rely on re-interpretation at run
time. The categorization usually reflects the most popular or widespread implementations of
a language for instance, BASIC are sometimes called an interpreted language and C a
compiled one, despite the existence of BASIC compilers and C interpreters.
Modern trends toward just-in-time compilation and byte code interpretation at times
blur the traditional categorizations of compilers and interpreters.
Some language specifications spell out that implementations must include a
compilation facility; for example, Common Lisp. However, there is nothing inherent in the
definition of Common Lisp that stops it from being interpreted. Other languages have
features that are very easy to implement in an interpreter, but make writing a compiler
much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to
construct arbitrary source code at runtime with regular string operations, and then execute
that code by passing it to a special evaluation function. To implement these features in a
compiled language, programs must usually be shipped with a runtime library that includes a
version of the compiler itself.
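As a sketch of how a compiled language can offer a comparable evaluation function, the following assumes the Roslyn scripting package (Microsoft.CodeAnalysis.CSharp.Scripting) is available: a source string built at run time is compiled and executed on the fly, much like an interpreter's eval.

```csharp
// Runtime code evaluation in a compiled language, assuming the
// Microsoft.CodeAnalysis.CSharp.Scripting NuGet package is referenced.
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

class EvalSketch
{
    static async Task Main()
    {
        // Source code built as an ordinary string at run time...
        string code = "Enumerable.Range(1, 10).Sum()";

        // ...then handed to the embedded compiler/evaluator.
        int result = await CSharpScript.EvaluateAsync<int>(
            code,
            ScriptOptions.Default.WithImports("System.Linq"));

        Console.WriteLine(result); // 55
    }
}
```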


Problems with Existing System:



A compiler is a program whose purpose is to translate high-level languages such as
C and C++ into machine code, the binary code that is understandable by the computer.
After being converted into machine code, the program can be run on the computer.

Besides having the benefit of fast execution among others, there are some
disadvantages related to a compiler.

A compiler is not very good at pinpointing errors in a program, which makes the
removal of errors (debugging) a little difficult. Another disadvantage of a compiler is that
even after an error has been removed from the program, the whole program has to be
compiled again from the beginning, so the time taken to get a program running may be
longer.


The main disadvantage is a lack of flexibility in handling less well-designed
languages and/or target architectures. Any compiler that you are trying to specify and
generate with a compiler generator must be specifiable using the various grammars and
transformation specification languages supported by that generator. Not all programming
languages, or their constraint/type checking or to-target-code transformations, fall into
that specification space; consider, for example, languages such as C, C++ or Java. You
have the greatest amount of control and flexibility when you write the compiler by hand,
at the obvious cost of the considerably higher level of effort required compared to
generating a compiler from specifications with a compiler generator.






Proposed System

The project aims to create an online compiler for multiple programming languages
that acts as a layer of glue between the cloud hardware providers and the presentation
layer of the user interface, where objects are already emulated and used. It should be
possible to use an array of services provided in the cloud, through published objects, in an
independent and transparent way.

As described in the scope, users will be allowed to register with the system and
manage their profiles. A user can write programs online, save them in their profile, and
later update or delete them. The programs are stored on the cloud manager, and the
compilation of the programs is managed by the cloud, which forwards each request to the
required processor for compilation. A cloud-managed distributed architecture will be used
for processor load balancing during compiler allocation.
Multiple users can write programs in different programming languages and can also run,
compile and debug them. While a program is running, the user can give input to it, so the
program executes and the output is displayed.

Role of cloud computing in proposed system
The programs are stored on the cloud manager, and the compilation of the programs
is managed by the cloud, which forwards each request to the required processor for
compilation.
For example, suppose one user is writing a program in the C language and, at the same
time, another user is writing a program in Java. When both users compile their programs
at the same time, the cloud manager acts as an identifier: it identifies the programming
language and sends the C program to the C compiler and the Java program to the Java
compiler, as sketched below.
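A minimal sketch of this dispatching role follows; the detection rule (by file extension) and the compiler commands are illustrative assumptions, not the system's final design.

```csharp
// Sketch of the cloud manager's dispatch role: identify the language
// and forward the program to the matching compiler.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

class CloudManagerSketch
{
    // Map from source-file extension to a compiler command (illustrative).
    static readonly Dictionary<string, string> Compilers = new()
    {
        [".c"]    = "gcc",    // C program    -> C compiler
        [".java"] = "javac",  // Java program -> Java compiler
        [".cs"]   = "csc"     // C# program   -> C# compiler
    };

    static string CompileOutput(string sourcePath)
    {
        string ext = Path.GetExtension(sourcePath).ToLowerInvariant();
        if (!Compilers.TryGetValue(ext, out string compiler))
            throw new NotSupportedException($"No compiler registered for '{ext}'");

        // Forward the request to the selected compiler and capture its output,
        // which would then be sent back to the user's screen.
        var psi = new ProcessStartInfo(compiler, sourcePath)
        {
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false
        };
        using Process p = Process.Start(psi);
        string output = p.StandardOutput.ReadToEnd() + p.StandardError.ReadToEnd();
        p.WaitForExit();
        return output;
    }

    static void Main()
    {
        Console.WriteLine(CompileOutput("hello.c"));    // routed to the C compiler
        Console.WriteLine(CompileOutput("Hello.java")); // routed to the Java compiler
    }
}
```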


Advantages of Proposed System

Using cloud computing we can share any kind of data between a number of
clients.
The idea of the project is Carry Less, Use More!
This is a web-based application, so it can be opened on a home PC, an office
PC, a tablet and also on a mobile device.
The main advantage of this project is that it supports compiling in multiple languages.
Users can save their programs, and can also update and delete them.


Block Diagram











Cloud Enabled Application Programming Suite




Technology Used
.NET Framework
The .NET Framework is an environment for building, deploying, and running XML
Web services and other applications. It is the infrastructure for the overall .NET
platform. The .NET Framework consists of three main parts: the common
language runtime, the class libraries, and ASP.NET.
The common language runtime and class libraries, including Windows Forms,
ADO.NET, and ASP.NET, combine to provide services and solutions that can be
easily integrated within and across a variety of systems. The .NET Framework
provides a fully managed, protected, and feature-rich application execution
environment, simplified development and deployment, and seamless integration
with a wide variety of languages.

ASP.NET
ASP.NET is more than the next version of Active Server Pages (ASP); it is a unified
Web development platform that provides the services necessary for developers
to build enterprise-class Web applications. While ASP.NET is largely syntax-
compatible with ASP, it also provides a new programming model and
infrastructure that enables a powerful new class of applications. You can migrate
your existing ASP applications by incrementally adding ASP.NET functionality to
them.
ASP.NET is a compiled, .NET Framework-based environment. You can
author applications in any .NET Framework-compatible language, including Visual
Basic and Visual C#. Additionally, the entire .NET Framework platform is available
to any ASP.NET application. Developers can easily access the benefits of the .NET
Framework, which include a fully managed, protected, and feature-rich
application execution environment, simplified development and deployment, and
seamless integration with a wide variety of languages.
What is Classic ASP?
Microsoft's previous server side scripting technology ASP (Active
Server Pages) is now often called classic ASP.
ASP 3.0 was the last version of classic ASP.
ASP.NET is NOT ASP
ASP.NET is the next generation ASP, but it's not an upgraded version
of ASP.
ASP.NET is an entirely new technology for server-side scripting. It was
written from the ground up and is not backward compatible with
classic ASP.
ASP.NET is a major part of Microsoft's .NET Framework.

What is ASP.NET?
ASP.NET is a server side scripting technology that enables scripts
(embedded in web pages) to be executed by an Internet server.
ASP.NET is a Microsoft Technology
ASP stands for Active Server Pages
ASP.NET is a program that runs inside IIS
IIS (Internet Information Services) is Microsoft's Internet server
IIS comes as a free component with Windows servers
IIS is also a part of Windows 2000 and XP Professional
What is an ASP.NET File?
An ASP.NET file is just the same as an HTML file
An ASP.NET file can contain HTML, XML, and scripts
Scripts in an ASP.NET file are executed on the server
An ASP.NET file has the file extension ".aspx"
How Does ASP.NET Work?
When a browser requests an HTML file, the server returns the file
When a browser requests an ASP.NET file, IIS passes the request
to the ASP.NET engine on the server
The ASP.NET engine reads the file, line by line, and executes the
scripts in the file
Finally, the ASP.NET file is returned to the browser as plain HTML
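The following is a minimal sketch of server-side code behind a hypothetical Default.aspx Web Forms page; the page name and its behaviour are illustrative only, showing the kind of script that IIS hands to the ASP.NET engine before plain HTML is returned.

```csharp
// Minimal code-behind sketch for a hypothetical Default.aspx page (Web Forms).
using System;
using System.Web.UI;

public partial class _Default : Page
{
    // Runs on the server each time IIS hands the request to the ASP.NET engine.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Whatever is rendered here is what the browser finally receives as plain HTML.
            Response.Write("Rendered on the server at " + DateTime.Now);
        }
    }
}
```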

Microsoft SQL Server
Business today demands a different kind of data management
solution. Performance, scalability, and reliability are essential, but businesses now
expect more from their key IT investments.
SQL Server 2005 exceeds dependability requirements and provides
innovative capabilities that increase employee effectiveness, integrate
heterogeneous IT ecosystems, and maximize capital and operating budgets. SQL
Server 2005 provides the enterprise data management platform your organization
needs to adapt quickly in a fast changing environment.
Benchmarked for scalability, speed, and performance, SQL Server 2005 is a
fully enterprise-class database product, providing core support for Extensible
Markup Language (XML) and Internet queries.
Easy-to-use Business Intelligence (BI) Tools
Through rich data analysis and data mining capabilities that integrate with
familiar applications such as Microsoft Office, SQL Server 2005 enables you to
provide all of your employees with critical, timely business information tailored to
their specific information needs. Every copy of SQL Server 2005 ships with a suite
of BI services.

Self-Tuning and Management Capabilities
Revolutionary self-tuning and dynamic self-configuring features optimize
database performance, while management tools automate standard activities.
Graphical tools and performance wizards simplify setup, database design, and
performance monitoring, allowing database administrators to focus on meeting
strategic business needs.
Data Management Application and Services
Unlike its competitors, SQL Server 2005 provides a powerful and
comprehensive data management platform. Every software license includes
extensive management and development tools, a powerful extraction,
transformation, and loading (ETL) tool, business intelligence and analysis services,
and additional services such as Notification Services. The result is the best overall
business value available.
Enterprise Edition includes the complete set of SQL Server data
management and analysis features and is uniquely characterized by several
features that make it the most scalable and available edition of SQL Server 2005.
It scales to the performance levels required to support the largest web sites,
enterprise Online Transaction Processing (OLTP) systems and data warehousing
systems. Its support for failover clustering also makes it ideal for any mission-
critical line-of-business application. Additionally, this edition includes several
advanced analysis features that are not included in SQL Server 2005 Standard
Edition.
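As a sketch of how the suite might persist a user's program in SQL Server using ADO.NET, consider the following; the connection string and the Programs table with its columns are assumptions for illustration, not the system's actual schema.

```csharp
// Sketch of storing a user's program in SQL Server with ADO.NET.
// Connection string and the Programs table/columns are assumptions.
using System;
using System.Data.SqlClient;

class SaveProgramSketch
{
    static void SaveProgram(int userId, string language, string sourceCode)
    {
        const string connectionString =
            "Server=localhost;Database=ProgrammingSuite;Integrated Security=true;";

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(
            "INSERT INTO Programs (UserId, Language, SourceCode, SavedOn) " +
            "VALUES (@userId, @language, @sourceCode, @savedOn)", connection);

        // Parameters avoid SQL injection when the source code is user-supplied text.
        command.Parameters.AddWithValue("@userId", userId);
        command.Parameters.AddWithValue("@language", language);
        command.Parameters.AddWithValue("@sourceCode", sourceCode);
        command.Parameters.AddWithValue("@savedOn", DateTime.UtcNow);

        connection.Open();
        command.ExecuteNonQuery();
    }

    static void Main() =>
        SaveProgram(1, "C", "#include <stdio.h>\nint main() { return 0; }");
}
```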

Software Requirement Specification
HARDWARE REQUIREMENTS:

256 MB RAM.
80 GB HDD.
Intel Pentium 4 processor, 1.66 GHz

SOFTWARE REQUIREMENTS:
Windows XP Service Pack 2
Visual Studio 2008
.NET Framework

Functional Requirement Specification:
This section outlines the use cases for each of the actors separately. In our
system we describe the user use case.
User use-case:-



Brief Description:-
1. User must register to the system:-
The user who wishes to use the system must register with the system.
During registration the user enters their name, mobile number, address,
username and password.

2. User must login to the system:-
The user must login to the system to use the application.

3. User must select the language to write the code:-
The application offers multiple languages in which to write a program.
The user of the system selects any one of the languages to write the code.

4. User must write the code to get the output from the system:-
The user of the system must write the code to get the output from
the system. Here the user can compile the code, debug the code
and get the output from the system.

Following is the description of the user use-case.
1. First, the user registers with the system.
2. After that, the user logs in to the system.
3. After logging in, the user can manage their profile.
4. The user can select the language as per their convenience.
5. After selecting the language, the user writes the program, compiles it and then runs
the program. Since this application is running on the cloud, the program runs on the
cloud service.
6. The user can manage their programs.


Non-Functional Requirement:-
Our system has one important component, the cloud manager. The cloud manager is
the intermediary between the application and the compiler. The cloud manager accepts
the user login and verifies the login details. Its main work is to accept the user's program
and send it to the compiler for compilation. After compiling, the compiler sends the
program output back to the cloud manager, which then sends the data to the user's
application screen.
The cloud manager is thus the intermediary between the application GUI and the compiler
manager.


Requirement Analysis
FEASIBILITY STUDY
The very first phase in any system development life cycle is the preliminary
investigation. The feasibility study is a major part of this phase. The feasibility study is a
measure of how beneficial or practical the development of an information system would be
to the organization.
The feasibility of the development software can be studied in terms of the
following aspects:

1. Operational Feasibility.
2. Technical Feasibility.
3. Economical Feasibility.
4. Motivational Feasibility.
5. Legal Feasibility.



OPERATIONAL FEASIBILITY

The site will reduce the time consumed in maintaining manual records and removes the
tiresome and cumbersome work of maintaining them. Hence operational feasibility is
assured.
TECHNICAL FEASIBILITY

Minimum hardware requirements:
At least 166 MHz Pentium Processor or Intel compatible processor.
At least 16 MB RAM.
14.4 kbps or higher modem.
A video graphics card.
A mouse or other pointing device.
At least 3 MB free hard disk space.
Microsoft Internet Explorer 4.0 or higher.


ECONOMICAL FEASIBILITY
Once the hardware and software requirements are fulfilled, there is no need
for the user of our system to spend on any additional overhead.
For the user, the web site will be economically feasible in the following
aspects:
Our web site will reduce the time that is wasted in manual processes.
The storage and handling problems of the registers will be solved.


MOTIVATIONAL FEASIBILITY
The users of our system need no additional training. Visitors are not
required to enter a password and are shown the appropriate information.

LEGAL FEASIBILITY
The licensed copy of the required software is quite cheap and easy to
obtain. So from a legal point of view, the proposed system is feasible.












Software Development Model Used:
The software process model is the model which we are going to use for the
development of the project. There are many software process models available, but the
choice should be made according to the project size, that is, whether it is an industry-scale,
large-scale or medium-scale project.

Accordingly, the model we choose should be suitable for the project; as the
software process model changes, the cost of the project also changes, because the
steps in each software process model vary.

This software is built using the waterfall model. This model suggests work
cascading from step to step like a series of waterfalls. It consists of the
following steps, in the following order:




Waterfall model:









Analysis Phase:

To attack a problem, it is broken into sub-problems. The
objective of analysis is to determine exactly what must be done
to solve the problem. Typically, the system's logical elements (its
boundaries, processes, and data) are defined during analysis.
Design Phase:

The objective of design is to determine how the problem will be
solved. During design, the analyst's focus shifts from the logical to the physical:
structures, screens, reports, files and databases are designed.
Coding Phase:
The system is created during this phase. Programs are coded,
debugged, documented, and tested. New hardware is selected
and ordered. Procedures are written and tested. End-user
documentation is prepared. Databases and files are initialized.
Users are trained.
Testing Phase:
Once the system is developed, it is tested to ensure that it does
what it was designed to do. After the system passes its final test
and any remaining problems are corrected, the system is
implemented and released to the user.



PROJECT PLAN

5.1 IMPLEMENTATION PLAN
The following tables give the project plan for Phases 1 and 2 of our project:


Phase 1

Activity   Description                           Effort (person weeks)   Deliverable
P1-01      Requirement Analysis                  2 weeks                 Requirement Gathering
P1-02      Existing System Study & Literature    3 weeks                 Existing System Study & Literature
P1-03      Technology Selection                  2 weeks                 .NET
P1-04      Modular Specifications                2 weeks                 Module Description
P1-05      Design & Modeling                     4 weeks                 Analysis Report
Total                                            13 weeks













Phase 2

Activity   Description                           Effort (person weeks)   Deliverable
P2-01      Detailed Design                       2 weeks                 LLD / DLD Document
P2-02      UI and user interactions design       Included in above       UI Document
P2-03      Coding & Implementation               12 weeks                Code Release
P2-04      Testing & Bug Fixing                  2 weeks                 Test Report
P2-05      Performance Evaluation                4 weeks                 Analysis Report
P2-06      Release                               Included in above       System Release
Total                                            20 weeks                (Deployment efforts are extra)












Gantt Chart

The Gantt chart shows planned and actual progress for a number of tasks displayed against a horizontal
time scale.
It is an effective and easy-to-read method of indicating the actual current status of each of a set of tasks
compared to the planned progress for each activity of the set.
Gantt charts provide a clear picture of the current state of the project.









