
TABLE OF CONTENTS

1. Case Study of 10 cloud tools


2. Case Study of Amazon Web Services
3. Case Study of Aneka
4. Case Study of Microsoft Azure
5. Case Study of Apache Hadoop
6. Case Study of OwnCloud
7. Case Study of Google App Engine
Tools:

1. Eucalyptus

Objective:
To study cloud computing using Eucalyptus.

Introduction:
Eucalyptus is computer software, available in both open-source and paid editions, for building Amazon
Web Services (AWS)-compatible private and hybrid cloud computing environments, originally developed
by the company Eucalyptus Systems. Eucalyptus is an acronym for Elastic Utility Computing
Architecture for Linking Your Programs To Useful Systems. Eucalyptus enables pooling
compute, storage, and network resources that can be dynamically scaled up or down as application
workloads change. Mårten Mickos was the CEO of Eucalyptus. In September 2014, Eucalyptus
was acquired by Hewlett-Packard and is now maintained by DXC Technology.

Architecture:
The architecture of the Eucalyptus system is simple, flexible and modular, with a hierarchical
design reflecting the resource environments common in many academic settings. In essence,
the system allows users to start, control, access, and terminate entire virtual machines using an
emulation of Amazon EC2's SOAP and "Query" interfaces. That is, users of Eucalyptus
interact with the system using the exact same tools and interfaces that they use to interact with
Amazon EC2. The original implementation supported VMs running atop the Xen hypervisor, with
support for KVM/QEMU, VMware, and others planned. Each high-level system component is
implemented as a stand-alone web service, which has the following benefits: first, each
web service exposes a well-defined, language-agnostic API in the form of a WSDL document
describing both the operations the service can perform and its input/output data structures; second,
existing web-service features such as WS-Security policies can be leveraged for secure
communication between components.
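Because the "Query" interface is plain HTTP, a client can target a Eucalyptus endpoint with the same request format it would send to EC2. The sketch below builds such a request URL with the Python standard library; the endpoint host, port, and API version string are illustrative, and a real call would additionally carry an AWS-style request signature.

```python
from urllib.parse import urlencode

def build_query_request(endpoint, action, params):
    """Build an EC2 'Query'-style request URL for an EC2-compatible
    endpoint such as a Eucalyptus cloud. Parameters are carried in
    the query string, one Action per request."""
    query = {"Action": action, "Version": "2010-08-31", **params}
    return endpoint + "?" + urlencode(sorted(query.items()))

# Hypothetical private-cloud endpoint; a production request would also
# include AccessKeyId, Timestamp, and a Signature over the parameters.
url = build_query_request(
    "https://cloud.example.edu:8773/services/Eucalyptus",
    "DescribeInstances",
    {"InstanceId.1": "i-12345678"},
)
print(url)
```

The same tooling that speaks this format to Amazon EC2 can therefore be pointed at the Eucalyptus endpoint unchanged.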

Features:
- Supports both Linux and Windows virtual machines (VMs).
- Application program interface (API) compatible with the Amazon EC2 platform.
- Compatible with Amazon Web Services (AWS) and the Simple Storage Service (S3).
- Works with multiple hypervisors, including VMware, Xen and KVM.
- Can be installed and deployed from source code or from DEB and RPM packages.
- Internal process communications are secured through SOAP and WS-Security.
- Multiple clusters can be virtualized as a single cloud.
- Administrative features such as user and group management and reports.

System Requirements:
Central Processing Units (CPUs): We recommend that each host machine in your Eucalyptus
cloud contain either an Intel or AMD processor with a minimum of 4 2GHz cores.

Operating Systems: Eucalyptus supports the following Linux distributions: CentOS 7 and RHEL
7. Eucalyptus supports only 64-bit architecture.

Storage and Memory Requirements: Each machine in your network needs a minimum of 160GB
of storage.

Services Offered:
Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs to Useful
Systems) is an open-source software infrastructure that implements IaaS-style cloud computing. The
goal of Eucalyptus is to allow sites with existing clusters and server infrastructure to host a cloud
that is interface-compatible with Amazon's AWS (and, at the time, the planned Sun Cloud open API).
In addition, through its interfaces Eucalyptus is able to host cloud platform services such as AppScale
(an open-source implementation of Google's App Engine) and Hadoop, making it possible to "mix and
match" different service paradigms and configurations within the cloud. Finally, Eucalyptus can
leverage a heterogeneous collection of virtualization technologies within a single cloud, incorporating
resources that have already been virtualized without modifying their configuration.

Summary:
The Eucalyptus cloud platform pools together existing virtualized infrastructure to create cloud
resources for infrastructure as a service, network as a service and storage as a service.
2. Google App Engine

Objective:
To study cloud computing using Google App Engine.

Introduction:
Google App Engine (often referred to as GAE or simply App Engine) is a cloud computing
platform for developing and hosting web applications in Google-managed data centers.
Applications are sandboxed and run across multiple servers. App Engine offers automatic scaling
for web applications—as the number of requests increases for an application, App Engine
automatically allocates more resources for the web application to handle the additional demand.
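The scaling behaviour described above reduces to a capacity calculation: enough instances to cover current demand, never below a floor. The sketch below is a toy model of that idea, not App Engine's actual scheduler; the request rates and per-instance capacity are made-up numbers.

```python
import math

def instances_needed(requests_per_sec, capacity_per_instance, min_instances=1):
    """Request-driven autoscaling in the spirit of App Engine:
    allocate enough instances to serve the current request rate,
    never dropping below a minimum floor."""
    if capacity_per_instance <= 0:
        raise ValueError("capacity must be positive")
    return max(min_instances, math.ceil(requests_per_sec / capacity_per_instance))

print(instances_needed(25, 10))  # 25 req/s at 10 req/s per instance -> 3
print(instances_needed(0, 10))   # an idle app keeps the floor of 1 instance
```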

Google App Engine is free up to a certain level of consumed resources. Fees are charged for
additional storage, bandwidth, or instance hours required by the application. It was first released
as a preview version in April 2008 and came out of preview in September 2011.

Architecture:
App Engine is Google’s PaaS platform, a robust development environment for applications written
in Java, Python, PHP and Go. The SDK for App Engine supports development and deployment of
the application to the cloud. App Engine supports multiple application versions which enables easy
rollout of new application features as well as traffic splitting to support A/B testing.

Integrated within App Engine are the Memcache and Task Queue services. Memcache is an
in-memory cache shared across App Engine instances, providing extremely high-speed access
to information cached by the web server (e.g. authentication or account information).
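A minimal sketch of the caching pattern Memcache enables is shown below. It is a single-process stand-in (the real service is shared across instances over the network), and the key names and TTL values are illustrative.

```python
import time

class SimpleMemcache:
    """Tiny in-memory cache with per-entry expiry, illustrating the
    role Memcache plays for web instances: fast reads of data that
    is expensive to recompute or refetch."""

    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock

    def set(self, key, value, ttl=60):
        # Store the value together with its absolute expiry time.
        self._store[key] = (value, self._clock() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self._clock() >= expires:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

cache = SimpleMemcache()
cache.set("account:42", {"name": "Ada"}, ttl=30)
print(cache.get("account:42"))
```

A web handler would consult the cache first and fall back to the datastore only on a miss.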

Services Offered:
1. Computing and hosting services
   a. Application platform
   b. Containers
   c. Virtual machines
   d. Combining computing and hosting options
2. Storage services
3. Networking services
a. Networks, firewalls, and routes
b. Load balancing
c. Cloud DNS
d. Advanced connectivity
4. Big data services
a. Data analysis
b. Batch and streaming data processing
c. Asynchronous messaging

System Requirements:
The App Engine standard environment is based on container instances running on Google's
infrastructure. Containers are preconfigured with one of several available runtimes (Java 7, Java
8, Python 2.7, Go and PHP). Each runtime also includes libraries that support App Engine Standard
APIs.

Summary:
App Engine is Google’s PaaS platform, a robust development environment for applications written
in Java, Python, PHP and Go.

3. Aneka

Objective:
To study cloud computing using Aneka.

Introduction:
Aneka is a platform and a framework for developing distributed applications on the Cloud. It
harnesses the spare CPU cycles of a heterogeneous network of desktop PCs and servers or
datacenters on demand. Aneka provides developers with a rich set of APIs for transparently
exploiting such resources and expressing the business logic of applications by using the preferred
programming abstractions. System administrators can leverage on a collection of tools to monitor
and control the deployed infrastructure. This can be a public cloud available to anyone through the
Internet, or a private cloud constituted by a set of nodes with restricted access.

Architecture:
The Aneka-based computing cloud is a collection of physical and virtualized resources connected
through a network, either the Internet or a private intranet. Each of these resources hosts
an instance of the Aneka Container, representing the runtime environment where the distributed
applications are executed. The container provides the basic management features of the single node
and delegates all other operations to the services that it is hosting. The services are broken up
into fabric, foundation, and execution services. Fabric services directly interact with the node
through the Platform Abstraction Layer (PAL) and perform hardware profiling and dynamic
resource provisioning. Foundation services identify the core system of the Aneka middleware,
providing a set of basic features to enable Aneka containers to perform specialized and specific
sets of tasks. Execution services directly deal with the scheduling and execution of applications in
the Cloud.
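The division of labour above can be illustrated with a toy scheduler: containers execute tasks, and an execution-service stand-in dispatches tasks across them round-robin. This is a sketch of the idea, not Aneka's actual API; the container and task names are invented.

```python
from collections import deque

class Container:
    """Stand-in for an Aneka container: the per-node runtime
    that actually executes submitted tasks."""

    def __init__(self, name):
        self.name = name
        self.completed = []

    def execute(self, task):
        self.completed.append(task)

def schedule(tasks, containers):
    """Round-robin dispatch, a simplified stand-in for execution
    services that schedule application tasks across containers."""
    queue = deque(tasks)
    i = 0
    while queue:
        containers[i % len(containers)].execute(queue.popleft())
        i += 1

nodes = [Container("node-1"), Container("node-2")]
schedule(["t1", "t2", "t3"], nodes)
print([(c.name, c.completed) for c in nodes])
```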

Features:

One of the key features of Aneka is its ability to express distributed applications in different
ways by offering multiple programming models; execution services are mostly concerned with
providing the middleware with an implementation of these models. Additional services such as
persistence and security are transversal to the entire stack of services hosted by the Container.
At the application level, a set of different components and tools are provided to:

1) simplify the development of applications (SDK);

2) port existing applications to the Cloud;

3) monitor and manage the Aneka Cloud.

A common deployment of Aneka is presented in the accompanying figure. An Aneka-based cloud
consists of a set of interconnected resources that are dynamically modified according to user
needs, either through resource virtualization or by harnessing the spare CPU cycles of desktop
machines. If the deployment is a private cloud, all the resources are in-house, for example within
the enterprise. This deployment can be extended by adding publicly available resources on demand
or by interacting with other Aneka public clouds that provide computing resources connected over
the Internet.
System Requirements:

The basic system requirement for installing Aneka service is listed below:

- 1GB RAM, 40 MB disk space


- Microsoft Windows Operating System (including Windows 2000, Windows Server 2003,
Windows XP, Windows Vista, Windows 7, Windows Server 2008)
- Microsoft .NET framework 2.0 or higher
- Microsoft SQL Server / SQL Server Express 2005 version 9 or higher, or MySQL 5.1.30 or higher
(optional, if database support is required)

Summary:

All these features make Aneka a winning solution for enterprise customers in the Platform-as-a-
Service scenario.

4. Microsoft Azure

Objective:

To study cloud computing using Microsoft Azure.

Introduction:

Microsoft Azure (formerly Windows Azure) is a cloud computing service created by Microsoft
for building, testing, deploying, and managing applications and services through a global network
of Microsoft-managed data centers. It provides software as a service (SaaS), platform as a service
(PaaS) and infrastructure as a service (IaaS), and supports many different programming languages,
tools and frameworks, including both Microsoft-specific and third-party software and systems.

Azure was announced in October 2008 and released on 1 February 2010 as "Windows Azure"
before being renamed "Microsoft Azure" on 25 March 2014.

Deploy anywhere with your choice of tools
- Build your apps, your way
- Connect on-premises data and apps

Protect your business with the most trusted cloud
- Achieve global scale in local regions
- Detect and mitigate threats

Accelerate app innovation
- Build apps quickly and easily
- Manage apps proactively

Power decisions and apps with insights
- Add intelligence to your apps
- Predict and respond proactively

Services:

1. Compute
2. Storage
3. Databases
4. Data + Analytics
5. Containers
6. Enterprise Integration
7. Other Clouds
8. Networking
9. AI + Cognitive Services
10. Developer Tools
11. Internet of Things
12. Web + Mobile
13. Security + Identity
14. Monitoring + Management

System Requirements:

- OS: Windows, Linux.
- Internet browser: Mozilla Firefox (40.0 or above), IE9, Google Chrome (50.0 or above).
- Hard disk: 60GB or above.
- RAM: 4GB or above.
- Processor: 3.0 GHz or above.
Summary:

The "Cloud" allows for interesting scenarios in scaling, management and security, but you must
know your costs. The Windows Azure Platform is Microsoft's cloud offering: platform as a service,
providing compute, storage, an RDBMS, authorization and communication, with a local simulation
environment for most cases. Applications need to be designed for the cloud: there is no simple
"repackage and deploy". They are load-balanced by design and follow patterns for cloud applications.

5. Amazon Web Services

Objective:

To study cloud computing using Amazon Web Services.

Introduction:

Amazon Web Services (AWS) describes both a technology and a company. The company AWS
is a subsidiary of Amazon.com and provides on-demand cloud computing platforms to individuals,
companies and governments, on a paid subscription basis with a free-tier option available for 12
months. The technology allows subscribers to have at their disposal a full-fledged virtual cluster
of computers, available all the time, through the internet. AWS's virtual computers have
most of the attributes of a real computer, including hardware (CPU(s) & GPU(s) for processing,
local/RAM memory, hard-disk/SSD storage); a choice of operating systems; networking; and pre-
loaded application software such as web servers, databases, CRM, etc. Each AWS system also
virtualizes its console I/O (keyboard, display, and mouse), allowing AWS subscribers to connect
to their AWS system using a modern browser. The browser acts as a window into the virtual
computer, letting subscribers log-in, configure and use their virtual systems just as they would a
real physical computer. They can choose to deploy their AWS systems to provide internet-based
services for their own and their customers' benefit.
Architecture:

This is the basic structure of AWS EC2, where EC2 stands for Elastic Compute Cloud. EC2 allows
users to run virtual machines of different configurations as per their requirements. It offers
various configuration options, mapping of individual servers, various pricing options, and so on;
these are discussed in detail in the AWS Products section. The original source accompanies this
description with a diagram of the architecture.

Features:

1. Operating system support. Amazon Elastic Compute Cloud supports multiple OSes
without the need to pay additional licensing fees.
2. Security. You have complete control over the visibility of your AWS systems.
3. Pricing. Amazon EC2 provides its Web services through a simple interface that enables
users to configure compute resources and charges them by capacity used.
4. Fault tolerance and latency. Amazon EC2 is extremely flexible in enabling users to scale
across servers and procure compute resources to design fault-tolerant applications.

System Requirements:

- Four virtual processors assigned to the VM.
- 16 GB of RAM assigned to the VM.
- 80 GB of disk space for installation of the VM image and system data.

Services Offered:

1. Amazon EC2
2. Amazon EC2 Container Registry
3. Amazon EC2 Container Service
4. Amazon Lightsail
5. Amazon VPC
6. AWS Batch
7. AWS Elastic Beanstalk

Summary:
Whether you’re looking for compute power, database storage, content delivery or other
functionality, AWS has the services to help you build sophisticated applications with increased
flexibility, scalability and reliability.

6. Apache Hadoop

Objective:

To study cloud computing using Apache Hadoop.

Introduction:

Apache Hadoop (/həˈduːp/) is an open-source software framework used for distributed storage
and processing of big data sets using the MapReduce programming model. It consists of
computer clusters built from commodity hardware. All the modules in Hadoop are designed with
a fundamental assumption that hardware failures are common occurrences and should be
automatically handled by the framework.
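The MapReduce model Hadoop implements can be sketched in a few lines of pure Python with the classic word-count example. Hadoop itself distributes the map and reduce phases across a cluster and handles node failures; this single-process sketch only illustrates the programming model.

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in an input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each distinct word.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data needs big clusters", "hadoop processes big data"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(pairs)
print(counts["big"], counts["data"])  # 3 2
```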

Features:
 Scalability and Performance
 Reliability
 Flexibility
 Low Cost

Services Offered:
- log and/or clickstream analysis of various kinds
- marketing analytics
- machine learning and/or sophisticated data mining
- image processing
- processing of XML messages
- web crawling and/or text processing
- general archiving, including of relational/tabular data, e.g. for compliance
System Requirements:

There is no single hardware requirement set for installing Hadoop.

Summary:

The Apache Hadoop software library is a framework that allows for the distributed processing of
large data sets across clusters of computers using simple programming models. It is designed to
scale up from single servers to thousands of machines, each offering local computation and storage.
Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and
handle failures at the application layer, so delivering a highly-available service on top of a cluster
of computers, each of which may be prone to failures.

7. Heroku
Objective:
To study cloud computing using Heroku.

Introduction:
Heroku is a cloud platform as a service (PaaS), supporting several programming languages, that is
used as a web application deployment model. Heroku, one of the first cloud platforms, has been in
development since June 2007, when it supported only the Ruby programming language; it now
supports Java, Node.js, Scala, Clojure, Python, PHP, and Go. For this reason, Heroku is said to
be a polyglot platform, as it lets the developer build, run and scale applications in a similar
manner across all these languages. Heroku was acquired by Salesforce.com in 2010 for $212 million.

Architecture:
Applications that run on Heroku typically have a unique domain (typically
"applicationname.herokuapp.com") used to route HTTP requests to the correct dyno. Each of the
application containers, or dynos, is spread across a "dyno grid" which consists of several
servers. Heroku's Git server handles application repository pushes from permitted users.
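The routing step above can be sketched as follows, assuming a simple rotate-the-list balancing policy; Heroku's actual router is more sophisticated, and the app name and dyno identifiers below are invented for illustration.

```python
def route(host, dyno_grid):
    """Pick a dyno for an incoming request using Heroku-style
    subdomain routing ('<appname>.herokuapp.com')."""
    app = host.split(".")[0]          # the subdomain identifies the app
    dynos = dyno_grid.get(app)
    if not dynos:
        raise KeyError(f"no app registered for {host}")
    # Real routers balance load; rotating the list is a simple proxy.
    dyno = dynos.pop(0)
    dynos.append(dyno)
    return dyno

grid = {"myapp": ["web.1", "web.2"]}
print(route("myapp.herokuapp.com", grid))  # web.1
print(route("myapp.herokuapp.com", grid))  # web.2
```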

The working can be summarized into two major categories:


1. Deploy
2. Runtime

Features:
- Automated Certificate Management and Add-ons
- Heroku Local, Dyno Types, One-Off Dynos and Scaling Your Dyno Formation
- Heroku CI and Process Types and the Procfile
- Configuration and Config Vars and Application Metrics
- Logging and Using Multiple Buildpacks for an App
- Container Registry and Runtime and Heroku Private Spaces
- Log Drains and Platform Updates, Maintenance and Notifications

Services Offered:
- Maintenance Mode and Session Affinity
- Heroku SSL, Transferring Apps, and Private Spaces DNS Service Discovery
- Heroku Beta Features, Optimizing Dyno Usage and Slug Checksums
- Legacy Dynos, Language Runtime Metrics (Public Beta) and Private Space Logging
- Heroku CI: Browser and User Acceptance Testing (UAT)
- Heroku CI: Technical Detail on Test Run Lifecycle

System Requirements:
- A free Heroku account.
- Java 8 installed locally.
- Maven 3 installed locally.

Summary:

Data is at the heart of any app — and Heroku provides a secure, scalable database-as-a-service.

8. OwnCloud

Objective:

To study cloud computing using OwnCloud.

Introduction:
ownCloud is a suite of client-server software for creating and using file hosting services.
ownCloud is functionally very similar to the widely used Dropbox, with the primary functional
difference being that ownCloud is free and open-source, thereby allowing anyone to install
and operate it without charge on a private server. It also supports extensions that allow it to
work like Google Drive, with online document editing, calendar and contact synchronization, and
more. Its openness eschews enforced quotas on storage space or the number of connected clients;
instead, hard limits (such as on storage space or number of users) are defined only by the physical
capabilities of the server.

Features:
ownCloud files are stored in conventional directory structures, and can be accessed via WebDAV
if necessary. User files are encrypted both at rest and during transit. ownCloud can synchronise
with local clients running Windows (Windows XP, Vista, 7 and 8), macOS (10.6 or later), or
various Linux distributions.

Services Offered:
- Share with anybody on your terms and access everything you care about.
- ownCloud provides a safe home for your data.
- Calendars and contacts and work with your documents.
- Activity feed and notifications.

System Requirements:

For best performance, stability, support, and full functionality we officially recommend and
support:

- Ubuntu 16.04
- MySQL/MariaDB
- PHP 7.0
- Apache 2.4 with mod_php

OwnCloud needs a minimum of 128MB RAM, and we recommend a minimum of 512MB.

Summary:
You can download and install ownCloud on your own Linux server, use the Web Installer to install
it on shared Web hosting, try some prefab cloud or virtual machine images, or sign up for hosted
ownCloud services.

9. Apache CloudStack

Objective:

To study cloud computing using Apache CloudStack.

Introduction:

CloudStack was originally developed by Cloud.com, formerly known as VMOps. CloudStack is an
open-source cloud computing software for creating, managing, and deploying infrastructure cloud
services. It uses existing hypervisors such as KVM, VMware ESXi and XenServer/XCP for
virtualization. In addition to its own API, CloudStack also supports the Amazon Web Services
(AWS) API and the Open Cloud Computing Interface from the Open Grid Forum.

Architecture:

The minimum production installation consists of one machine running the CloudStack
Management Server and another machine to act as the cloud infrastructure (in this case, a very
simple infrastructure consisting of one host running hypervisor software). In its smallest
deployment, a single machine can act as both the Management Server and the hypervisor host
(using the KVM hypervisor).

Multiple management servers can be configured for redundancy and load balancing, all pointing
to a common MySQL database.
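The redundancy arrangement above amounts to failing over between interchangeable management servers that share one database. A minimal sketch of that selection logic, with health state simulated rather than probed over the network, and server names invented:

```python
def pick_server(servers):
    """Failover among redundant management servers: route to the
    first one reporting healthy. Since all servers share a common
    database, any healthy one can serve the request."""
    for name, healthy in servers:
        if healthy:
            return name
    raise RuntimeError("no management server available")

servers = [("mgmt-1", False), ("mgmt-2", True)]  # mgmt-1 is down
print(pick_server(servers))  # mgmt-2
```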

Features:

- Built-in high availability for hosts and VMs
- AJAX web GUI for management
- AWS API compatibility
- Hypervisor agnostic
- Snapshot management
- Usage metering
- Network management (VLANs, security groups)
- Virtual routers, firewalls, load balancers
- Multi-role support

System Requirements:

- Must support HVM (Intel VT or AMD-V enabled).
- 64-bit x86 CPU (more cores result in better performance).
- Hardware virtualization support required.
- 4 GB of memory.
- 36 GB of local disk.
- At least 1 NIC.
- Latest hotfixes applied to hypervisor software.

Summary:
CloudStack is open source software provided by Apache that provides AWS API compatibility,
usage metering and many more features.

10. LiquidSky

Objective:

To study cloud computing using LiquidSky.

Introduction:

LiquidSky is a cloud gaming service developed by LiquidSky Software, Inc., which was founded by
Ian McLoughlin and Scott Johnston in 2014. It was initially announced at the Consumer Electronics
Show 2017 and scheduled to launch on March 14, 2017. However, after several issues regarding
the datacenters, it was delayed to March 24, 2017. LiquidSky is a rent-a-PC service which allows
users to stream video games to devices running macOS, Windows, Linux and Android.
Architecture:

At the hardware level there isn't much to differentiate LiquidSky from other DaaS providers; it
uses IBM SoftLayer to provide basic Windows Server VMs. But that's where the similarity ends.
LiquidSky uses McLoughlin's H.264 algorithm along with a custom-built hypervisor and custom
Nvidia drivers. All three work together to provide a low-latency experience that is designed to
run the newest games and the heaviest rendering software without a hitch. LiquidSky planned to
launch its public beta in mid-September, with anyone who signed up getting a 24-hour free trial,
giving users curious about virtualizing away expensive 3D-rendering desktops a chance to try it.

Features:

- SkyStorage
- Ultra Low Latency
- No Cost Gaming
- Performance Hardware
- Lightning Internet Speed

System Requirements:

- Windows 7 and above
- 2GB RAM
- 250MB HDD storage
- Integrated graphics

Summary:

LiquidSky is a growing cloud-based gaming service provider that uses Desktop as a Service (DaaS),
a cloud service in which the back-end of a virtual desktop infrastructure (VDI) is hosted by a
cloud service provider.
AMAZON WEB SERVICES
OBJECTIVE

To study cloud computing through Amazon Web Services and implement its services.

TOOL DESCRIPTION

Amazon Web Services (AWS) is a secure cloud services platform, offering compute power,
database storage, content delivery and other functionality to help businesses scale and grow. It
is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to
individuals, companies and governments, on a paid subscription basis with a free-tier option
available for 12 months. In 2006, Amazon Web Services (AWS) began offering IT
infrastructure services to businesses in the form of web services -- now commonly known as
cloud computing. One of the key benefits of cloud computing is the opportunity to replace up-
front capital infrastructure expenses with low variable costs that scale with your business. With
the Cloud, businesses no longer need to plan for and procure servers and other IT infrastructure
weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of
servers in minutes and deliver results faster.

The technology allows subscribers to have at their disposal a full-fledged virtual cluster of
computers, available all the time, through the internet. AWS's virtual computers
have most of the attributes of a real computer, including hardware (CPU(s) & GPU(s) for
processing, local/RAM memory, hard-disk/SSD storage); a choice of operating systems;
networking; and pre-loaded application software such as web servers, databases, CRM, etc.
Each AWS system also virtualizes its console I/O (keyboard, display, and mouse), allowing
AWS subscribers to connect to their AWS system using a modern browser. The browser acts
as a window into the virtual computer, letting subscribers log-in, configure and use their virtual
systems just as they would a real physical computer. They can choose to deploy their AWS
systems to provide internet-based services for their own and their customers' benefit.

To minimize the impact of outages and ensure robustness of the system, AWS is
geographically diversified into regions. These regions have central hubs in the Eastern USA,
Western USA (two locations), Brazil, Ireland, Singapore, Japan, and Australia. Each region
comprises multiple smaller geographic areas called availability zones. Each region is wholly
contained within a single country and all of its data and services stay within the designated
region. Each region has multiple "Availability Zones", which consist of one or more
discrete data centres, each with redundant power, networking and connectivity, housed in
separate facilities. Availability Zones do not automatically provide additional scalability or
redundancy within a region, since they are intentionally isolated from each other to
prevent outages from spreading between Zones. Several services can operate across
Availability Zones (e.g., S3, Dynamo DB) while others can be configured to replicate across
Zones to spread demand and avoid downtime from failures.
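The zone-isolation-plus-replication pattern described above can be modelled in a few lines: writes go to every zone, so reads survive a single-zone outage. The zone names are real AWS identifiers, but the store itself is a toy model of the pattern, not any actual AWS API.

```python
class ZonalStore:
    """Replicate writes across availability zones so reads survive
    a single-zone outage; zones are isolated, so an outage in one
    does not spread to the others."""

    def __init__(self, zones):
        self.replicas = {zone: {} for zone in zones}
        self.down = set()  # zones currently unavailable

    def put(self, key, value):
        # Replicate the write to every zone.
        for data in self.replicas.values():
            data[key] = value

    def get(self, key):
        # Serve the read from any zone that is still up.
        for zone, data in self.replicas.items():
            if zone not in self.down and key in data:
                return data[key]
        return None

store = ZonalStore(["us-east-1a", "us-east-1b"])
store.put("order-1", "shipped")
store.down.add("us-east-1a")   # simulate a zone outage
print(store.get("order-1"))    # still readable from us-east-1b
```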

FEATURES

- Low Cost: AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term
commitments. Amazon builds and manages a global infrastructure at scale and passes the cost
savings on to customers in the form of lower prices; with the efficiencies of its scale and
expertise, it has been able to lower prices on 15 different occasions over the past four years.

- Agility and Instant Elasticity: AWS provides a massive global cloud infrastructure that
allows you to quickly innovate, experiment and iterate. Instead of waiting weeks or months for
hardware, you can instantly deploy new applications, instantly scale up as your workload
grows, and instantly scale down based on demand. Whether you need one virtual server or
thousands, for a few hours or 24/7, you still only pay for what you use.
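The pay-for-what-you-use model above reduces to simple arithmetic. The hourly rate below is invented for illustration, since real AWS prices vary by service, instance type and region.

```python
def usage_cost(hourly_rate, hours, instances=1):
    """Pay-as-you-go billing: cost scales linearly with how long
    and how many instances you run, with no up-front commitment.
    The rate is a made-up example, not an actual AWS price."""
    return round(hourly_rate * hours * instances, 2)

# e.g. four hypothetical $0.10/hour servers for one 8-hour workday
print(usage_cost(0.10, 8, instances=4))
```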

- Open and Flexible: AWS is a language- and operating-system-agnostic platform. You choose
the development platform or programming model that makes the most sense for your business,
which services you use (one or several), and how you use them. This flexibility allows you to
focus on innovation, not infrastructure.

- Third-Party Marketplace: In addition to the Amazon-provided web services, AWS also
provides a marketplace for third-party software and services. These applications have been
designed to integrate with the AWS platform and other AWS services. The AWS Marketplace
provides integrated software solutions from large software manufacturers like Microsoft and
Oracle down to smaller providers like Bitnami and Appian Corporation.

- Mobile-Friendly Access and Services: AWS provides mobile app versions of the AWS
Management Console for Android and iOS devices. These mobile apps provide the features of
the management console in a user interface specifically designed for the smaller form factor
of mobile phones. AWS also provides the AWS Mobile Hub, which helps you efficiently design
and create solutions targeting mobile devices. It includes a console that helps you find and
access AWS services related to mobile app development, including the development, testing,
and monitoring of mobile apps.

- Security and Compliance: AWS is security- and compliance-friendly. For example, all
AWS data centres meet strict security standards for protecting data, including personally
identifiable information (PII), and AWS encrypts data in motion and at rest. Customers who
build applications and services on top of AWS inherit these security benefits. AWS partners
with its customers to help them build their programs and ensure that they meet required
compliance standards. The AWS assurance programs help customers achieve their compliance
goals, providing guidance, documentation, and tools for meeting many global, regional, and
national compliance standards. AWS will even co-sign specific compliance documents to help
customers meet compliance requirements.

SERVICES

The growing AWS collection offers over three dozen diverse services including:

- CloudDrive, which allows users to upload and access music, videos, documents, and photos
from Web-connected devices. The service also enables users to stream music to their devices.

- CloudSearch, a scalable search service typically used to integrate customized search
capabilities into other applications.

- Dynamo Database (also known as DynamoDB or DDB), a fully managed NoSQL database
service known for low latencies and scalability.

- Elastic Compute Cloud, which allows business subscribers to run application programs and
can serve as a practically unlimited set of virtual machines (VMs).

- ElastiCache, a fully managed caching service that is protocol-compliant with Memcached, an
open-source, high-performance, distributed memory object caching system for speeding up
dynamic Web applications by alleviating database load.

- Mechanical Turk, an application program interface (API) that allows developers to integrate
human intelligence into Remote Procedure Calls (RPCs), using a network of humans to perform
tasks that computers are ill-suited for.

- Redshift, a petabyte-scale data warehouse service designed for analytic workloads,
connecting to standard SQL-based clients and business intelligence tools.

- Simple Storage Service (S3), a scalable, high-speed, low-cost service designed for
online backup and archiving of data and application programs.
All AWS offerings are billed according to usage. The rates vary from service to service.

Types of Services and Offerings:


As of December 2014, Amazon Web Services operated an estimated 1.4 Million servers across
28 availability zones. The global network of AWS Edge locations consists of 54 points of
presence worldwide, including locations in the United States, Europe, Asia, Australia, and
South America.

SYSTEM REQUIREMENTS

AWS provides an interface to deploy services and use the products, and the requirements vary
according to the services used. The basic requirements are:

- Operating system with browser support.
- SSH client on Linux-based systems.
- Minimum of 1 GB RAM.
- .NET 4.0 Full Profile (for the Amazon Chime client).

SERVICES DESCRIPTION
 Amazon Simple Storage Service (S3) - Amazon S3 is object storage built to store and
retrieve any amount of data from anywhere – websites and mobile apps, corporate
applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999%
(eleven nines) durability, and stores data for millions of applications used by market
leaders in every industry. S3
provides comprehensive security and compliance capabilities that meet even the most stringent
regulatory requirements. It gives customers flexibility in the way they manage data for cost
optimization, access control, and compliance. S3 is the only cloud storage solution with query-
in-place functionality, allowing you to run powerful analytics directly on your data at rest in
S3. And Amazon S3 is the most supported storage platform available, with the largest
ecosystem of ISV solutions and systems integrator partners.
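S3 bucket names must be globally unique and follow strict naming rules. As a sketch of the commonly documented constraints (3 to 63 characters; lowercase letters, digits and hyphens; starting and ending with a letter or digit), assuming we ignore the rarely used dotted names:

```python
import re

# Sketch of the common S3 bucket-name rules: 3-63 characters,
# lowercase letters, digits and hyphens only, beginning and ending
# with a letter or a digit. Dotted names are deliberately excluded.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("my-case-study-bucket"))  # True
print(is_valid_bucket_name("My_Bucket"))             # False: uppercase and underscore
print(is_valid_bucket_name("ab"))                    # False: too short
```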

 Amazon Elastic Compute Cloud (EC2) - EC2 is a web service that provides secure, resizable
compute capacity in the cloud. It is designed to make web-scale cloud computing easier for
developers. Amazon EC2’s simple web service interface allows you to obtain and configure
capacity with minimal friction. It provides you with complete control of your computing
resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces
the time required to obtain and boot new server instances to minutes, allowing you to quickly
scale capacity, both up and down, as your computing requirements change. Amazon EC2
changes the economics of computing by allowing you to pay only for capacity that you actually
use. Amazon EC2 provides developers the tools to build failure resilient applications and
isolate them from common failure scenarios.

STEPWISE CONFIGURATION AND INSTALLATION


 S3 –
1. Sign up for AWS and then log in to the AWS Management Console:

2. Go to Services (top-left corner) and click S3.

3. Click Create Bucket and choose a unique name, so as to distinguish the bucket worldwide.
4. Now set the properties and permissions according to your needs:

5. After creating the bucket, go to Overview to upload the files you want to store:

6. Go to Properties to enable different features such as website hosting, versioning, etc.:


7. Set a bucket policy if you want to make the files accessible to everyone:

8. Access the file using the link given in the object description, i.e. by clicking a file you want to
share.
The file link has the following format:
https://s3-<region>.amazonaws.com/bucket_name/object_name
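The link format above can be sketched as a small helper that fills in the three placeholders (the region, bucket and object names used in the example are illustrative):

```python
# Build an object URL in the path-style format shown above:
# https://s3-<region>.amazonaws.com/bucket_name/object_name
def s3_object_url(region: str, bucket: str, key: str) -> str:
    return f"https://s3-{region}.amazonaws.com/{bucket}/{key}"

print(s3_object_url("us-east-1", "my-bucket", "index.html"))
# https://s3-us-east-1.amazonaws.com/my-bucket/index.html
```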

APPLICATIONS OF S3:
 Static Website Hosting.
 For Storage purposes.
 Sharing of files made easy.

BENEFITS OF S3:
 Provides up to 5 GB of free storage.
 Easy to implement.
 Storage can be expanded beyond the free tier on a paid basis; individual objects can be up to 5 TB.
 We can store an unlimited number of objects.
 Expiration of old objects and clean-up of incomplete uploads are available.
 Backups of objects on a daily or monthly basis, according to the user's needs, are available.
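The last two benefits are implemented through lifecycle rules. A minimal sketch of such a configuration, shaped like the structure the S3 lifecycle API accepts (the prefix and day counts are illustrative choices, not defaults):

```python
# Sketch of an S3 lifecycle configuration: expire old objects under a
# prefix and abort incomplete multipart uploads after a few days.
# Key names follow the shape used by the S3 lifecycle API; the prefix
# and day counts are illustrative assumptions.
def lifecycle_config(prefix="backups/", expire_days=30, abort_days=7):
    return {
        "Rules": [{
            "ID": "expire-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Expiration": {"Days": expire_days},
            "AbortIncompleteMultipartUpload":
                {"DaysAfterInitiation": abort_days},
        }]
    }

rule = lifecycle_config()["Rules"][0]
print(rule["Expiration"]["Days"])  # 30
```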

 EC2 –
1. Go to Services (top-left corner), choose EC2, and then on the EC2 homepage choose
“Launch Instance”.
2. Choose an Amazon Machine Image (AMI); almost all the familiar operating systems are
available, along with a marketplace of pre-configured images:
3. Choose an instance type according to your requirement and click next:

4. Configuration is already done by AWS, but to change it, fill in your own details and click Next:

5. Add more storage if you need it; the storage can also be SSD, but you will need to pay for the
extra storage.
6. Add a tag; tags enable you to categorize your AWS resources in different ways, for example,
by purpose, owner, or environment. This is useful when you have many resources of the same
type: you can quickly identify a specific resource based on the tags you've assigned to it.
7. Configure the Security Group so as to control access to the resource: if you choose “My IP” as
the source you will be able to access the resource from that IP address only, whereas with
“Anywhere”, anyone can access the resource.
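The two source choices in step 7 boil down to CIDR ranges: "Anywhere" is the open range 0.0.0.0/0, while "My IP" restricts the rule to a single /32 address. A sketch (the IP address is a documentation placeholder, not a real one):

```python
# Sketch of the CIDR range behind each "Source" choice in step 7.
# The address 203.0.113.7 is a TEST-NET documentation placeholder.
def source_to_cidr(choice: str, my_ip: str = "203.0.113.7") -> str:
    if choice == "Anywhere":
        return "0.0.0.0/0"      # any host on the internet may connect
    if choice == "My IP":
        return f"{my_ip}/32"    # exactly one address may connect
    raise ValueError(f"unknown source choice: {choice}")

print(source_to_cidr("Anywhere"))  # 0.0.0.0/0
print(source_to_cidr("My IP"))     # 203.0.113.7/32
```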

8. Select an existing key pair or create a new pair, and then download and preserve the key pair,
as it won’t be available again.
9. Download the Remote Desktop file, run it, and then connect to your instance using
the username and password generated from the saved key pair.

10. Click Yes in the Remote Desktop connection verification to access the AMI.
11. On entering the correct username and password, a window with the running AMI will appear;
otherwise try again, as there was a problem with the username or password.

12. The final Remote Desktop connection running the server (here, a Windows Server).
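The wizard steps above can be sketched as the set of parameters a programmatic launch would carry. The dictionary shape follows boto3's ec2.run_instances conventions; the AMI ID, key name, and tag values are hypothetical placeholders:

```python
# Sketch of the parameters behind the "Launch Instance" wizard, shaped
# like a boto3 ec2.run_instances(**params) call. The AMI ID and key
# name are hypothetical placeholders.
def launch_params(ami_id, instance_type="t2.micro", key_name="my-key",
                  purpose="case-study"):
    return {
        "ImageId": ami_id,              # step 2: choose an AMI
        "InstanceType": instance_type,  # step 3: choose an instance type
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,            # step 8: key pair for login
        "TagSpecifications": [{         # step 6: tags for identification
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": purpose}],
        }],
    }

params = launch_params("ami-0123456789abcdef0")
print(params["InstanceType"])  # t2.micro
```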
BENEFITS OF EC2:
 Easy to Use:
 Root access control.
 Ability to stop or re-start remotely.
 Mobile Access.
 Easy to use Console and API.

 Elastic:
 Simplify operations.
 Run applications securely.
 Run any application globally.
 Reduce costs.

 Cost Effective:
 Low cost computing capacity.
 Pay as you go pricing.
 Flexible pricing models.

 Flexible:
 Familiar Operating Systems.
 Add or remove capacity according to the needs.
 Ready Made configured AMIs available.

SUMMARY
After studying Amazon Web Services (AWS), a subsidiary of Amazon, we learned about the
services offered by AWS and the regions into which AWS is divided, along with their
Availability Zones. We learned about S3 and EC2 implementation, as we installed and
configured these services and used their features to deploy a static site on the cloud and
successfully connected to the AMI (Windows Server). We also uploaded files to an S3 bucket
to store them and share them across users. We can conclude that AWS is a user-friendly cloud
computing tool which can be used to implement services like database, compute, network,
storage, etc., and the cost is based on the pay-as-you-go concept, that is, we pay only for
the resources or services we have used, giving flexibility. AWS also provides 1 year of free
usage of a few services, with certain restrictions, to help users understand the AWS console.
ANEKA
OBJECTIVE:

To study the installation and configuration of the Aneka tool, create virtual machines, and
implement the services provided by Aneka.

DESCRIPTION:

Aneka is a Cloud Application Development Platform (CAP) for developing and running
compute and data intensive applications. As a platform it provides users with both a runtime
environment for executing applications developed using any of the three supported
programming models, and a set of APIs and tools that allow you to build new applications or
run existing legacy code. The purpose of this document is to help you through the process of
installing and setting up an Aneka Cloud environment. This document will cover everything
from helping you to understand your existing infrastructure, different deployment options,
installing the Management Studio, configuring Aneka Daemons and Containers, and finally
running some of the samples to test your environment.

Aneka technology primarily consists of two key components:

1. SDK (Software Development Kit) containing application programming interfaces (APIs) and
tools essential for rapid development of applications. Aneka APIs supports three popular Cloud
programming models: Task, Thread, and MapReduce; and
2. A Runtime Engine and Platform for managing deployment and execution of applications on
private or public Clouds.

SELECTED SERVICES:

MapReduce Programming Model-

It is an implementation of MapReduce, as proposed by Google, for .NET and integrated with
Aneka. MapReduce originates from two functions of functional languages: map and
reduce. The map function processes a key/value pair to generate a set of intermediate key/value
pairs, and the reduce function merges all intermediate values associated with the same
intermediate key. This model is particularly useful for data-intensive applications. The
MapReduce Programming Model provides a set of client APIs that allow developers to specify
their map and reduce functions, to locate the input data, and to collect the results if
required. In order to execute a MapReduce application on Aneka, developers need to create a
MapReduce application, configure the map and reduce components, and – as happens for any
other programming model – submit the execution to Aneka.
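The map and reduce functions described above can be illustrated with a word count. This is only a sketch of the model itself in Python; Aneka's actual implementation targets .NET, and these function names are not Aneka's API:

```python
from collections import defaultdict

# Sketch of the MapReduce model: map emits intermediate key/value
# pairs; reduce merges all values that share an intermediate key.
def map_fn(document: str):
    for word in document.split():
        yield (word.lower(), 1)    # intermediate key/value pair

def reduce_fn(key, values):
    return (key, sum(values))      # merge all values for one key

def run_mapreduce(documents):
    groups = defaultdict(list)     # the "shuffle": group by key
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

print(run_mapreduce(["big data", "big clusters"]))
# {'big': 2, 'data': 1, 'clusters': 1}
```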

Licensing, Accounting, and Pricing-

Aneka provides an infrastructure that allows setting up public and private clouds. In a cloud
environment, especially in the case of public clouds, it is important to implement mechanisms
for controlling resources and pricing their usage in order to charge users. Licensing,
accounting, and pricing are the tasks that collectively implement a pricing mechanism for
applications in Aneka. The Licensing Service provides the very basic resource controlling
feature that protects the system from misuse. It restricts the number of resources that can be
used for a certain deployment. Every container that wants to join the Aneka Cloud is subject
to verification against the license installed in the system and its membership is rejected if
restrictions apply. These restrictions can involve the number of maximum nodes allowed in
the Aneka Cloud, or a specific set of services hosted by the container.
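The node-limit restriction described above can be sketched as a simple membership check. The limit value below is an assumed example, not an actual Aneka license term:

```python
# Sketch of the licensing check: a container asking to join the cloud
# is rejected once the licensed maximum number of nodes is reached.
# The default limit of 10 is an assumed example value.
def may_join(current_nodes: int, licensed_max: int = 10) -> bool:
    return current_nodes < licensed_max

print(may_join(9))   # True: there is room for one more container
print(may_join(10))  # False: membership is rejected
```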

INSTALLATION:

Aneka installation begins with installing Aneka Cloud Management Studio. The Cloud
Management Studio is your portal for creating, configuring and managing Aneka Clouds.
Installing Aneka using the distributed Microsoft Installer Package (MSI) is a quick process
involving three steps as described below.

Step 1 – Run the installer package to start the Setup Wizard

Step 2 – Specifying the installation folder

Step 3 – Confirm and start the installation


At this point you are ready to begin the installation. Click “Next” to start the installation,
or “Back” to change your installation folder.

CONFIGURATION:

Adding a new machine:

Give the host address and select the credentials to use.

CONCLUSION
Aneka is a framework providing a platform for cloud computing applications. As discussed in
the introduction, there are different solutions for providing support for cloud computing.
Aneka is an implementation of the Platform as a Service approach, which focuses on providing
a set of APIs that can be used to design and implement applications for the cloud.
The framework is based on an extensible and service oriented architecture that simplifies the
deployment of clouds and their maintenance and provides a customizable environment that
supports different design patterns for distributed applications. The heart of the framework is
represented by the Aneka Container which is the minimum unit of deployment for Aneka
Clouds and also the runtime environment for distributed applications. The container hosts a
collection of services that perform all the operations required to create an execution
environment for applications. They include resource reservation, storage and file management,
persistence, scheduling, and execution.
AZURE
OBJECTIVE:

Our objective is to study the services of Microsoft Azure and deploy them.

TOOL DESCRIPTION:

Azure is a complete cloud platform that can host your existing applications, streamline the
development of new applications, and even enhance on-premises applications. Azure
integrates the cloud services that you need to develop, test, deploy, and manage your
applications—while taking advantage of the efficiencies of cloud computing.

By hosting your applications in Azure, you can start small and easily scale your application as
your customer demand grows. Azure also offers the reliability that’s needed for high-
availability applications, even including failover between different regions. The Azure portal
lets you easily manage all your Azure services. You can also manage your services
programmatically by using service-specific APIs and templates.

Azure provides several cloud-based compute offerings to run your application so that you don't
have to worry about the infrastructure details. You can easily scale up or scale out your
resources as your application usage grows.

Azure offers services that support your application development and hosting needs. Azure
provides infrastructure-as-a-service (IaaS) to give you full control over your application
hosting. Azure's platform-as-a-service (PaaS) offerings provide the fully managed services
needed to power your apps. There is even true serverless hosting in Azure, where all you need
to do is write your code.

FEATURES:

 Azure Site Recovery: A service that enables Azure to orchestrate site-to-site replication and
recovery in the event of a disaster.
 Azure ExpressRoute: Lets you connect your datacentre with Azure via a private link that
does not travel over the internet. Its advantages are higher security and reliability.
 Build websites with ASP.NET, PHP and Node.js.
SERVICES:

COMPUTE SERVICES -

 Virtual Machines: Create, deploy, and manage virtual machines running in the Windows
Azure cloud.
 Web Sites: Create new websites or migrate your existing business website into the cloud.
 Cloud Services: Build and deploy highly available and almost infinitely scalable applications
with low administration costs.
 Mobile Services: Build and deploy apps and store data for mobile devices.
 Virtual Network: Treat the Windows Azure public cloud as if it is an extension of your on-
premises datacentre.
 Traffic Manager: Routes application traffic from users of the application to Windows Azure
datacentres.

DATA SERVICES -

 Data Management: Store your business data in SQL databases using Windows Azure SQL
Database, using NoSQL Tables via REST, or using BLOB storage.
 Business Analytics: Enables ease of discovery and data enrichment using Microsoft SQL
Server Reporting and Analysis Services.
 HDInsight: Brings a 100 percent Apache Hadoop solution to the cloud.
 Cache: Provides a distributed caching solution that can help speed up your cloud-based
applications and reduce database load.
 Backup: Helps to protect your server data offsite by using automated and manual backups to
Windows Azure.
 Recovery Manager: Hyper-V Recovery Manager helps you protect business-critical services
by coordinating their replication.

APP SERVICES -

 Media Services: Allows you to build workflows for the creation, management, and
distribution of media using Azure public cloud.
 Messaging: Allows you to keep your apps connected across your private cloud environment
and the Windows Azure public cloud.
 Notification Hubs: Provides a highly scalable, cross-platform push notification infrastructure
for applications running on mobile devices.
 Multifactor Authentication: Provides an extra layer of authentication, in addition to the
user’s account credentials, in order to better secure access for both on-premises and cloud
applications.

SELECTED SERVICES:

APP SERVICE -

Web Apps:

App Service Web Apps is a fully managed compute platform that is optimized for hosting
websites and web applications. This platform-as-a-service (PaaS) offering of Microsoft Azure
lets you focus on your business logic while Azure takes care of the infrastructure to run and
scale your apps. In App Service, a web app is the compute resources that Azure provides for
hosting a website or web application. The compute resources may be on shared or dedicated
virtual machines (VMs), depending on the pricing tier that you choose. Your application code
runs in a managed VM that is isolated from other customers.

Your code can be in any language or framework that is supported by Azure App Service, such
as ASP.NET, Node.js, Java, PHP, or Python. You can also run scripts that use PowerShell and
other scripting languages in a web app.
SCREENSHOTS:
DATA SERVICE:

Azure DB for MySQL:

Azure Database for MySQL is a relational database service in the Microsoft cloud based on
the MySQL Community Edition database engine. Azure Database for MySQL delivers:

 Predictable performance at multiple service levels.
 Dynamic scalability with no application downtime.
 Built-in high availability.
 Data protection.
These capabilities require almost no administration and all are provided at no additional cost.
They allow you to focus on rapid app development and accelerating your time to market rather
than allocating precious time and resources to managing virtual machines and infrastructure.
In addition, you can continue to develop your application with the open source tools and
platform of your choice to deliver with the speed and efficiency your business demands, all
without having to learn new skills.
CONCLUSION

Microsoft Azure (formerly Windows Azure) is a cloud computing service created
by Microsoft for building, testing, deploying, and managing applications and services through
a global network of Microsoft-managed data centers. It provides software as a service
(SaaS), platform as a service and infrastructure as a service and supports many
different programming languages, tools and frameworks, including both Microsoft-specific
and third-party software and systems.
APACHE HADOOP
OBJECTIVE

To study cloud computing through Apache Hadoop, a single-node cluster, and its services.

TOOL DESCRIPTION

APACHE HADOOP: SINGLE NODE CLUSTER

The Apache Hadoop project develops open-source software for reliable, scalable, distributed
computing.

The Apache Hadoop software library is a framework that allows for the distributed processing
of large data sets across clusters of computers using simple programming models. It is designed
to scale up from single servers to thousands of machines, each offering local computation and
storage. Rather than rely on hardware to deliver high-availability, the library itself is designed
to detect and handle failures at the application layer, so delivering a highly-available service
on top of a cluster of computers, each of which may be prone to failures.

The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File
System (HDFS), and a processing part which is a MapReduce programming model. Hadoop
splits files into large blocks and distributes them across nodes in a cluster. It then
transfers packaged code into nodes to process the data in parallel.

The base Apache Hadoop framework is composed of the following modules:

 Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
 Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on
commodity machines, providing very high aggregate bandwidth across the cluster;
 Hadoop YARN – a platform responsible for managing computing resources in clusters and
using them for scheduling users' applications; and
 Hadoop MapReduce – an implementation of the MapReduce programming model for large-
scale data processing.
Hadoop drew its inspiration from papers published by Google describing the technologies it
was using, namely the MapReduce programming model and its file system (GFS). Hadoop was
originally written for the Nutch search engine project by Doug Cutting and his team, but it
soon became a top-level Apache project due to its huge popularity.

Hadoop can be set up on a single machine (pseudo-distributed mode), but it shows its real power
with a cluster of machines. We can scale it to thousands of nodes on the fly, i.e. without any
downtime; therefore, we need not take any system down to add more systems to the cluster.
Follow this guide to learn Hadoop installation on a multi-node cluster.

Hadoop consists of three key parts –

 Hadoop Distributed File System (HDFS) – It is the storage layer of Hadoop.


 Map-Reduce – It is the data processing layer of Hadoop.
 YARN – It is the resource management layer of Hadoop.

FEATURES OF APACHE HADOOP

Apache Hadoop is the most popular and powerful big data tool. Hadoop provides the world's
most reliable storage layer (HDFS), a batch processing engine (MapReduce) and a resource
management layer (YARN). Its key features are:

 Open source
 Distributed processing
 Fault tolerance
 High availability
 Scalability
 Economy

SYSTEM REQUIREMENT

Hardware Specification:

Processor: Pentium IV processor or higher
RAM: 4 GB or more
Hard Disk Drive: 80 GB or more
CD Drive: For installation of software
Monitor: Any VGA colour monitor
LAN card: For network connectivity
Modem: For internet access

Software Specification:

 Java 1.6.x, preferably from Sun, must be installed.

 ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop
daemons.

 GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on
GNU/Linux clusters with 2000 nodes.

 Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so
it is not supported as a production platform.

SERVICES OFFERED BY APACHE HADOOP

MapReduce:

Hadoop MapReduce is a software framework for easily writing applications which process
vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of
nodes) of commodity hardware in a reliable, fault-tolerant manner.

A MapReduce job usually splits the input data-set into independent chunks which are
processed by the map tasks in a completely parallel manner. The framework sorts the outputs
of the maps, which are then input to the reduce tasks. Typically both the input and the output
of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring
them and re-executes the failed tasks.

The MapReduce framework consists of a single master ResourceManager, one slave NodeManager
per cluster node, and an MRAppMaster per application.

Minimally, applications specify the input/output locations and


supply map and reduce functions via implementations of appropriate interfaces and/or
abstract-classes. These, and other job parameters, comprise the job configuration.
The Hadoop job client then submits the job (jar/executable etc.) and configuration to
the Resource Manager which then assumes the responsibility of distributing the
software/configuration to the slaves, scheduling tasks and monitoring them, providing status
and diagnostic information to the job-client.
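The map and reduce functions an application supplies can be sketched in the text-line style used by Hadoop Streaming, where a mapper turns input lines into tab-separated key/value lines and a reducer sums the values received for one key. This is an illustration of the contract, not the Java API described above:

```python
# Sketch of a word-count job in the Hadoop Streaming style: the mapper
# emits "word<TAB>1" lines; the framework sorts and groups them; the
# reducer sums the counts it receives for each key.
def mapper(line: str):
    return [f"{word}\t1" for word in line.split()]

def reducer(key: str, values):
    return f"{key}\t{sum(int(v) for v in values)}"

print(mapper("hello hadoop hello"))  # ['hello\t1', 'hadoop\t1', 'hello\t1']
print(reducer("hello", ["1", "1"]))  # hello	2
```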

HDFS:

HDFS is a Java-based file system that provides scalable and reliable data storage, and it was
designed to span large clusters of commodity servers. HDFS has demonstrated production
scalability of up to 200 PB of storage and a single cluster of 4500 servers, supporting close to
a billion files and blocks. When that quantity and quality of enterprise data is available in
HDFS, and YARN enables multiple data access applications to process it, Hadoop users can
confidently answer questions that eluded previous data platforms.

HDFS is a scalable, fault-tolerant, distributed storage system that works closely with a wide
variety of concurrent data access applications, coordinated by YARN. HDFS will “just work”
under a variety of physical and systemic circumstances. By distributing storage and
computation across many servers, the combined storage resource can grow linearly with
demand while remaining economical at every amount of storage.

These specific features ensure that data is stored efficiently in a Hadoop cluster and that it is
highly available:

 Rack awareness: Considers a node's physical location when allocating storage and scheduling
tasks.
 Minimal data motion: Hadoop moves compute processes to the data on HDFS and not the other
way around. Processing tasks can occur on the physical node where the data resides, which
significantly reduces network I/O and provides very high aggregate bandwidth.
 Utilities: Dynamically diagnose the health of the file system and rebalance the data on
different nodes.
 Rollback: Allows operators to bring back the previous version of HDFS after an upgrade, in
case of human or systemic errors.
 Standby NameNode: Provides redundancy and supports high availability (HA).
 Operability: HDFS requires minimal operator intervention, allowing a single operator to
maintain a cluster of thousands of nodes.
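The splitting and replication described earlier can be sketched numerically. The 128 MB block size and replication factor of 3 assumed here are the common Hadoop 2.x defaults, not values taken from this document:

```python
import math

# Sketch of how HDFS splits a file into large blocks and replicates
# each block across nodes. A 128 MB block size and replication factor
# of 3 (common Hadoop 2.x defaults) are assumed.
def hdfs_footprint(file_mb, block_mb=128, replication=3):
    blocks = math.ceil(file_mb / block_mb)  # last block may be partial
    stored_mb = file_mb * replication       # every byte lives on 3 nodes
    return blocks, stored_mb

print(hdfs_footprint(500))  # (4, 1500): 4 blocks, 1500 MB across the cluster
```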

APACHE HIVE

Apache Hive is a data warehouse software project built on top of Apache Hadoop for
providing data summarization, query, and analysis. Hive gives an SQL-like interface to query
data stored in various databases and file systems that integrate with Hadoop. Traditional SQL
queries must be implemented in the MapReduce Java API to execute SQL applications and
queries over distributed data. Hive provides the necessary SQL abstraction to integrate SQL-
like queries (HiveQL) into the underlying Java without the need to implement queries in the
low-level Java API. Since most data warehousing applications work with SQL-based querying
languages, Hive aids portability of SQL-based applications to Hadoop. While initially
developed by Facebook, Apache Hive is used and developed by other companies such
as Netflix and the Financial Industry Regulatory Authority (FINRA). Amazon maintains a
software fork of Apache Hive included in Amazon Elastic MapReduce on Amazon Web
Services.

Apache Hive supports analysis of large datasets stored in Hadoop's HDFS and compatible file
systems such as Amazon S3 file system. It provides an SQL-like query language called
HiveQL with schema on read and transparently converts queries to MapReduce, Apache
Tez and Spark jobs. All three execution engines can run in Hadoop YARN. To accelerate
queries, it provides indexes, including bitmap indexes.

By default, Hive stores metadata in an embedded Apache Derby database, and other
client/server databases like MySQL can optionally be used.
The first four file formats supported in Hive were plain text, sequence file, optimized row
columnar (ORC) format and RCFile. Apache Parquet can be read via a plugin in versions later
than 0.10 and natively starting at 0.13. Additional Hive plugins support querying of
the Bitcoin blockchain.

Advantages

Apache Hive is a Hadoop ecosystem component which provides the ability to analyze data by
writing SQL-like queries, called HiveQL. The advantage of using Hive is that the data to be
analyzed is stored in HDFS, which provides features like scalability and redundancy, together
with SQL-like querying over data in Hadoop.

APACHE SQOOP

Sqoop is a command-line interface application for transferring data between relational
databases and Hadoop. Sqoop supports incremental loads of a single table or a free-form SQL
query, as well as saved jobs which can be run multiple times to import updates made to a
database since the last import. Imports can also be used to populate tables
in Hive or HBase, and exports can be used to put data from Hadoop into a relational database.
Sqoop gets its name from “SQL-to-Hadoop”. Sqoop became a top-level Apache project in March
2012.

Sqoop provides many salient features like:

 Full Load

 Incremental Load

 Parallel import/export

 Import results of SQL query

 Compression
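The incremental-load idea above can be sketched as a filter over rows: only rows whose check column exceeds the last imported value are fetched on the next run. This is a pure-Python illustration of the concept, not Sqoop's implementation, and the column name is illustrative:

```python
# Sketch of an incremental load: import only rows newer than the value
# recorded after the previous run, then advance that watermark.
def incremental_import(rows, check_column="id", last_value=0):
    new_rows = [r for r in rows if r[check_column] > last_value]
    next_last = max((r[check_column] for r in new_rows), default=last_value)
    return new_rows, next_last

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
imported, last = incremental_import(rows, last_value=1)
print(len(imported), last)  # 2 3
```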
CONCLUSION

After studying the Apache Hadoop tool of cloud computing we learned about the installation
and configuration of Apache Hadoop and installation and deployment of its various services
like apache HIVE, Apache Hadoop SQOOP.
OWNCLOUD
OBJECTIVE:

To study the services offered by ownCloud.

TOOL DESCRIPTION:

Frank Karlitschek, a KDE software developer, announced the development of ownCloud in
January 2010, in order to provide a free software replacement for proprietary storage service
providers. The company was founded in 2011.

ownCloud is open source file sync and share software for everyone from individuals operating
the free ownCloud Server edition, to large enterprises and service providers operating the
ownCloud Enterprise Subscription. ownCloud provides a safe, secure, and compliant file
synchronization and sharing solution on servers that you control. You can share one or more
files and folders on your computer, and synchronize them with your ownCloud server. Place
files in your local shared directories, and those files are immediately synchronized to the server
and to other devices using the ownCloud Desktop Sync Client, Android app, or iOS app.
ownCloud is functionally very similar to the widely used Dropbox, with the primary functional
difference being that ownCloud is free and open-source, thereby allowing anyone to install
and operate it without charge on a private server. It also supports extensions that allow it to work
like Google Drive, with online document editing, calendar and contact synchronization, and
more.

The ownCloud server is written in the PHP and JavaScript scripting languages. For remote
access, it employs sabre/dav, an open-source WebDAV server.
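Files on the server are addressed through that WebDAV interface. A sketch of building such an address, assuming the remote.php/webdav endpoint commonly documented for ownCloud clients; the server name and file path below are placeholders:

```python
# Sketch of addressing a file over ownCloud's WebDAV interface.
# The remote.php/webdav path is the endpoint commonly documented for
# ownCloud clients; server and path here are placeholders.
def webdav_url(server: str, path: str) -> str:
    return f"https://{server}/remote.php/webdav/{path.lstrip('/')}"

print(webdav_url("cloud.example.com", "/Documents/report.odt"))
# https://cloud.example.com/remote.php/webdav/Documents/report.odt
```

A sync client would issue ordinary WebDAV verbs (PUT, GET, PROPFIND) against such URLs.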

FEATURES:

 Access Your Data: Store your files, folders, contacts, photo galleries, calendars and more on
a server of your choosing. Access that folder from your mobile device, your desktop, or a web
browser. Access your data wherever you are, when you need it.

 Share Your Data: Share your data with others, and give them access to your latest photo
galleries, your calendar, your music, or anything else you want them to see. Share it publicly,
or privately. It is your data, do what you want with it.
 Versioning: With the Versions Application enabled, ownCloud automatically saves old file
versions – you configure how much to save. To revert, simply hover over your file and roll
back to a previous version.

 Encryption: With the Encryption Application enabled, all files stored on the ownCloud server
are encrypted to your password. This is helpful if you store your files on an untrusted storage
outside the ownCloud server. Add to this an SSL connection, and your files are secure while
in motion and at rest.

 Sync your Data: Keep your files, contacts, photo galleries, calendars and more synchronized
amongst your devices.

 Drag and Drop Upload: While working on a computer and don’t want to install the entire
ownCloud client then simply log into ownCloud in a web browser and drag and drop your files
from your desktop into your desired target directory in the web browser. They will be
automatically uploaded to the server.

 Theming: You can make ownCloud look and feel like the rest of your site by using the new
theming directory functionality. Any style or image that you place in this directory will be used
in place of the standard ownCloud fonts, colors and icons.

 Viewer for ODF Files: To read open document format files without downloading them, enable
this Application and you can click on any ODF formatted document (.odt, .odp, .ods) and read
it in your web browser with no download required.

 Migration and Backup: If you have multiple ownCloud instances, perhaps a primary and a
backup installation, now you can easily move your ownCloud user accounts between
ownCloud instances, and have a backup ready when you need it.

 Tasks: You can keep track of that all-important to-do list with the Tasks Application, and
easily sync your to-do lists with your ownCloud instance.
 Calendars: You can share your important calendar and events in no time with
the help of the Calendar Application.

 File Notifications: Now you can notify others when a file is shared, making it faster and easier
to start sharing those documents, home movies and whatever else you choose.

 Galleries: You can specify the ownCloud photo directories, sort order, share your galleries
with any email address you choose, and control whether they can share those photos with
anyone else.

 External Storage: It provides one place to access all of your Gdrive and Dropbox files. With
the External Storage Application enabled, you can mount your external storage as a folder
inside your ownCloud instance, and use 1 interface to access all of your files.

 LDAP / Active Directory: Now ownCloud enables admins to manage users and groups from
their LDAP or AD instance.

 ownCloud offers a full text search functionality (based on Elasticsearch), that allows you to
find documents and files based on the content within your files.

SERVICES AND PRODUCTS:


 File Clipboard
 Bookmark Manager
 Passman (Password Manager)
 Collaborative Tags Management
 Music and Audio Player
 E2EE File Sharing
 ownCloud Marketplace integration
 ownBackup Service
 QownNotesAPI (open source notepad)
 SAML / Shibboleth Identity Provider
 SharePoint as an External Storage
 GpxMotion, GpxPod and GpxEdit
 Impersonate
 OnlyOffice, PDF Viewer and Text Editor
 Calendar and Contacts
 Announcement Center
 Media Gallery and Activity
REQUIREMENTS:
Memory:

 A minimum of 128 MB of RAM; 512 MB is recommended.

Server:

 Debian 7 and 8

 SUSE Linux Enterprise Server 12 and 12 SP1

 Red Hat Enterprise Linux/Centos 6.5 and 7 (7 is 64-bit only)

 Ubuntu 14.04 LTS

Web Server:

 Apache 2.4 with mod_php

Database:

 MySQL

 MariaDB

 Oracle 11g (Enterprise Edition only)

 PostgreSQL

Hypervisor:

 Hyper-V

 VMware ESX

 XEN

 KVM

Operating System:

 Windows 7+

 Ubuntu 16.10, Ubuntu 16.04, Ubuntu 14.04

 Mac OS X 10.7+ (64 bit only)

 Debian 7.0 and Debian 8.0

 Fedora 24 and Fedora 25

 CentOS 7
Mobile:

 Android 4.0+

 iOS 9.0+

Web Browser:

 IE 11+ (Except Compatibility mode)

 Firefox 14+

 Chrome 18+

 Safari 5+

ADVANTAGES:
 Cost Effective: Personal clouds are cheap. Bringing that cloud to your own hardware
might have a high cost up front, but once that initial cost is paid, you're done.

 Better Control: No matter how large you want your cloud, how many users, how
you manage groups, how strong you want your security — it's all in your hands.

 Data Security: One of the benefits of rolling your own cloud is that you, and only
you, are responsible for the security of your data.

 True Flexibility: Your cloud can do whatever you need it to do, in a way that
perfectly fits your business model and needs.

 Sync Speed: Hosting your own cloud gives you syncing that is not bottlenecked by a
third-party provider.

 Company Integration: Not only will this make your cloud more powerful, it will
also make it more IT- and user-friendly.

 Unlimited Size: With your own internal cloud, storage space is limited only by your own
hardware.

 User Management: The ability to suspend users and set password requirements enhances
your ability to manage users on your own cloud.

 Backup Control: If your cloud is in-house, you can always be sure that you have an
up-to-date backup.

 The server software installation is very straightforward and compatible with both
Windows and Linux.

 There is even a mobile app for Android and iOS that syncs files and photos; you can
set your phone to synchronize whenever you connect to Wi-Fi.

 You can take a few photos and within five seconds they are readily accessible on ownCloud
for your colleague to add to his report; it saves time and is completely seamless.

 Admin and developer documentation, examples and best practices.

 Full lifecycle support, from project inception to production and user growth.

SUMMARY:
ownCloud is a self-hosted file sync and share server. It provides access to your data through
a web interface, sync clients or WebDAV while providing a platform to view, sync and
share across devices easily — all under your control. ownCloud’s open architecture is
extensible via a simple but powerful API for applications and plugins and it works with
any storage.
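
Because ownCloud exposes files over WebDAV, any HTTP client can script against it. The sketch below builds the per-user WebDAV URL (`remote.php/dav/files/<user>/`) and uploads a file with HTTP Basic authentication; the server address, username and path are hypothetical placeholders, and real deployments may also use the older `remote.php/webdav/` endpoint:

```python
import urllib.request

def webdav_url(server, username, remote_path):
    """Build the WebDAV URL for a file in a user's ownCloud storage."""
    return "{}/remote.php/dav/files/{}/{}".format(
        server.rstrip("/"), username, remote_path.lstrip("/"))

def upload(server, username, password, data, remote_path):
    """Upload bytes to ownCloud over WebDAV using HTTP Basic auth."""
    req = urllib.request.Request(
        webdav_url(server, username, remote_path), data=data, method="PUT")
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, server, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr))
    return opener.open(req)  # raises urllib.error.HTTPError on failure

# Example URL for a hypothetical server and user:
print(webdav_url("https://cloud.example.com/", "alice", "/docs/report.odt"))
```

The same URL scheme works for downloads (GET) and directory listings (PROPFIND), which is how the desktop and mobile sync clients talk to the server.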
GOOGLE APP ENGINE
OBJECTIVE

To study Cloud Computing Tool Google Cloud Platform and implementing its services.

INTRODUCTION

Google Cloud Platform is a set of services that enables developers to build, test and deploy
applications on Google’s reliable infrastructure. Cloud Platform consists of a set of physical
assets, such as computers and hard disk drives, and virtual resources, such as virtual
machines (VMs) that are contained in Google's data centers around the globe. Each data
center location is in a global region.

TOOL DESCRIPTION

Google App Engine (often referred to as GAE or simply App Engine) is a cloud computing
platform for developing and hosting web applications in Google-managed data centers.
Applications are sandboxed and run across multiple servers. App Engine offers automatic
scaling for web applications—as the number of requests increases for an application, App
Engine automatically allocates more resources for the web application to handle the
additional demand.
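
In the standard environment, this scaling behavior can be tuned in the application's app.yaml. A sketch for a Python 2.7 app follows; the scaling values are illustrative, not recommendations:

```yaml
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app      # hypothetical WSGI app object in main.py

automatic_scaling:
  min_idle_instances: 1       # keep one warm instance to absorb spikes
  max_idle_instances: 3
  max_concurrent_requests: 50 # per-instance limit before scaling out
```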

Google App Engine is free up to a certain level of consumed resources. Fees are charged
for additional storage, bandwidth, or instance hours required by the application. It was first
released as a preview version in April 2008 and came out of preview in September 2011.

FEATURES

1. Computing and hosting services - Cloud Platform gives you options for computing
and hosting. You can choose to work with a managed application platform, leverage
container technologies to gain lots of flexibility, or build your own cloud-based
infrastructure to have the most control and flexibility.

 Application Platform - Google App Engine is Cloud Platform's platform as a
service (PaaS). With App Engine, Google handles most of the management of the
resources for you.
 Containers - With container-based computing, you can focus on your application
code, instead of on deployments and integration into hosting environments. Google
Container Engine is built on the open source Kubernetes system, which gives you the
flexibility of on-premises or hybrid clouds, in addition to Cloud Platform's public
cloud infrastructure.
 Virtual machines - Cloud Platform's unmanaged compute service is Google Compute
Engine. You can think of Compute Engine as providing an infrastructure as a
service (IaaS), because the system provides a robust computing infrastructure, but you
must choose and configure the platform components that you want to use. With
Compute Engine, it's your responsibility to configure, administer, and monitor the
systems. Google will ensure that resources are available, reliable, and ready for you
to use, but it's up to you to provision and manage them. The advantage, here, is that
you have complete control of the systems and unlimited flexibility.
 Combining computing and hosting options - You don't have to stick with just one
type of computing service. For example, you can combine App Engine and Compute
Engine to take advantage of the features and benefits of each.

2. Storage services - Whatever your application, you'll probably need to store some data.
Cloud Platform provides a variety of storage services.

3. Networking services - While App Engine manages networking for you, and Container
Engine uses the Kubernetes model, Compute Engine provides a set of networking services.
These services help you to load-balance traffic across resources, create DNS records, and
connect your existing network to Google's network.

 Networks, firewalls, and routes - Compute Engine provides a set of networking
services that your VM instances use. Each instance can be attached to only one
network. Every Compute Engine project has a default network. You can create
additional networks in your project, but networks cannot be shared between
projects. Firewall rules govern traffic coming into instances on a network. The
default network has a default set of firewall rules, and you can create custom
rules, too. A route lets you implement more advanced networking functions in your
instances, such as creating VPNs. A route specifies how packets leaving an
instance should be directed. For example, a route might specify that packets
destined for a particular network range should be handled by a gateway virtual
machine instance that you configure and operate.
 Load balancing - If your website or application is running on Compute Engine, the
time might come when you're ready to distribute the workload across multiple
instances.
 Cloud DNS - You can publish and maintain Domain Name System (DNS) records by
using the same infrastructure that Google uses. You can use the Google Cloud
Platform Console, the command line, or a REST API to work with managed zones
and DNS records.
 Advanced connectivity - If you have an existing network that you want to connect to
Cloud Platform resources, Google Cloud Interconnect offers options for advanced
connectivity, such as Carrier Interconnect and Compute Engine VPN.
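
The route selection described under "Networks, firewalls, and routes" above (the most specific matching destination range wins) can be sketched with the standard library; the route entries are hypothetical, and real Compute Engine additionally breaks ties by a route priority field:

```python
import ipaddress

def pick_route(routes, dest_ip):
    """Pick the route whose destination range matches dest_ip most
    specifically (longest prefix), mirroring how a route table decides
    where packets leaving an instance should be directed."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes
               if dest in ipaddress.ip_network(r["dest_range"])]
    if not matches:
        return None
    return max(matches,
               key=lambda r: ipaddress.ip_network(r["dest_range"]).prefixlen)

routes = [
    {"dest_range": "0.0.0.0/0", "next_hop": "default-internet-gateway"},
    {"dest_range": "10.0.0.0/8", "next_hop": "vpn-gateway-instance"},
]
print(pick_route(routes, "10.1.2.3")["next_hop"])  # vpn-gateway-instance
print(pick_route(routes, "8.8.8.8")["next_hop"])   # default-internet-gateway
```

Traffic to the private 10.0.0.0/8 range is steered to the VPN gateway instance, while everything else falls through to the default internet gateway.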

4. Big data services - Big data services enable you to process and query big data in the
cloud to get fast answers to complicated questions.

a. Data analysis
b. Batch and streaming data processing
c. Asynchronous messaging

REQUIREMENTS

The App Engine standard environment is based on container instances running on Google's
infrastructure. Containers are preconfigured with one of several available runtimes (Java 7,
Java 8, Python 2.7, Go and PHP). Each runtime also includes libraries that support App
Engine Standard APIs.
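
The Python runtimes serve standard WSGI applications; a minimal framework-free sketch of the kind of callable App Engine routes requests to, exercised locally with the standard library's WSGI test helpers, looks like this:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI application: App Engine hands each HTTP request to a
    callable like this and scales instances automatically behind it."""
    body = b"Hello from App Engine"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Local smoke test, no server required:
environ = {}
setup_testing_defaults(environ)
captured = {}
def start_response(status, headers):
    captured["status"] = status
result = b"".join(app(environ, start_response))
print(captured["status"], result)
```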

APPLICATION

 Mobile Applications
Google Cloud Platform services let you focus on your application development
without worrying about infrastructure. Build your app for Android or iOS on
infrastructure that reliably scales as your app grows.

 Websites & Web Apps
Whether you build a multi-tiered web application from scratch or host a static
website, Google Cloud Platform services and infrastructure enable you to develop
and deploy robust, scalable, globally available applications and websites.

 Development & Test
Development and test is where any application starts. Google Cloud Platform
provides the agility to try things out quickly and move on without incurring
upfront costs or facing delays while procuring hardware.

 Big Data
Applications today are generating more and more data at speed like never before.
Google Cloud Platform products and services help you collect, ingest, and analyze
your data quickly and cost effectively.

 Gaming
Google Cloud Platform infrastructure and services enable you to build massively
scalable games to delight your customers without having to worry about IT
infrastructure and upfront financial investments.

 Internet of Things
By 2020, it is expected that 50 billion connected devices will exist. Google Cloud
Platform gives you tools to scale connections, gather and make sense of data, and
provide reliable customer experiences that hardware devices require.

ADVANTAGES

This distribution of resources provides several benefits, including redundancy in case of
failure and reduced latency by locating resources closer to clients. This distribution also
introduces some rules about how resources can be used together.

 Better Pricing Than Competitors
 Private Global Fiber Network
 Live Migration of Virtual Machines
 Improved Performance
 State of the Art Security
 Dedication to Continued Expansion
 Redundant Backups

SUMMARY

With Google App Engine, users have a cloud-based platform that enables them to create,
maintain, and scale apps without the need to have and/or maintain servers. Once the
application is uploaded, it is ready for deployment. Scaling is also automatic and the
process depends mostly on incoming traffic.

Google App Engine offers support for a host of programming languages.

Pricing- Google claims to have proof that they provide pricing that is up to 50% less than
their competitors. Based on per-minute billing and a (reverse) Moore’s Law pricing
philosophy, Google aspires to be a true replacement for on-premises hosting. Google wants
to provide as simple a billing experience as possible and remove the need to plan ahead and
calculate costs.
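
The effect of per-minute billing can be seen with simple arithmetic; the hourly rate below is made up purely for illustration:

```python
import math

def billed(minutes_used, price_per_hour, granularity_min):
    """Cost when usage is rounded up to the billing granularity."""
    billed_minutes = math.ceil(minutes_used / granularity_min) * granularity_min
    return round(billed_minutes / 60 * price_per_hour, 2)

rate = 0.10  # hypothetical $/hour
# 90 minutes of use: per-minute billing vs. rounding up to whole hours
print(billed(90, rate, 1))   # pay for exactly 90 minutes
print(billed(90, rate, 60))  # hourly billing rounds up to 2 hours
```

With the hypothetical $0.10/hour rate, 90 minutes costs $0.15 under per-minute billing but $0.20 under hourly billing, which is the gap Google's pricing claims target.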

Google provides a number of pre-packaged "solutions" that are architected around themes
and can provide you relevant information, guides, help, and tutorials for building or
migrating your business to the Google Cloud Platform: Media, Mobile Applications, Big
Data, Financial Services, Gaming, Retail and Commerce, Internet of Things, Websites and
Web Apps, and Development and Testing.

App Engine is Google’s PaaS platform, a robust development environment for applications
written in Java, Python, PHP and Go.
