
Q1. With the help of a neat diagram, explain the architecture of virtualized data center.

VMware vSphere virtualizes the entire IT infrastructure including servers, storage, and
networks.

VMware vSphere aggregates these resources and presents a uniform set of elements in the
virtual environment. With VMware vSphere, you can manage IT resources like a shared utility
and dynamically provision resources to different business units and projects.

Virtual Datacentre Architecture shows the key elements in a virtual datacentre.

You can use vSphere to view, configure, and manage these key elements. The following is a
list of the key elements:

■Computing and memory resources called hosts, clusters, and resource pools

■Storage resources called datastores

■Networking resources called networks

■Virtual machines

A host is the virtual representation of the computing and memory resources of a physical
machine running ESX/ESXi. When two or more physical machines are grouped to work and
be managed as a whole, the aggregate computing and memory resources form a cluster.



Machines can be dynamically added or removed from a cluster. Computing and memory
resources from hosts and clusters can be finely partitioned into a hierarchy of resource pools.

Datastores are virtual representations of combinations of underlying physical storage resources in the datacenter. These physical storage resources can come from the following
sources:

■Local SCSI, SAS, or SATA disks of the server

■Fibre Channel SAN disk arrays

■iSCSI SAN disk arrays

■Network Attached Storage (NAS) arrays

Networks in the virtual environment connect virtual machines to one another and to the
physical network outside of the virtual datacenter.

Virtual machines can be designated to a particular host, cluster or resource pool, and a
datastore when they are created. After they are powered-on, virtual machines consume
resources dynamically as the workload increases or give back resources dynamically as the
workload decreases.

Provisioning virtual machines is much faster and easier than provisioning physical machines. New
virtual machines can be created in seconds. When a virtual machine is provisioned, the
appropriate operating system and applications can be installed unaltered on the virtual
machine to handle a particular workload as though they were being installed on a physical
machine. A virtual machine can be provisioned with the operating system and applications
installed and configured.

Resources get provisioned to virtual machines based on the policies that are set by the
system administrator who owns the resources. The policies can reserve a set of resources for
a particular virtual machine to guarantee its performance. The policies can also prioritize and
set a variable portion of the total resources to each virtual machine. A virtual machine is
prevented from being powered-on and consuming resources if doing so violates the
resource allocation policies.
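A minimal sketch of this admission-control behaviour, using hypothetical names rather than the actual vSphere API: a VM is allowed to power on only if its reservations still fit within its resource pool.

```python
# Minimal sketch of reservation-based admission control (hypothetical model,
# not the vSphere API): a VM may power on only if its CPU and memory
# reservations fit within the resource pool's remaining capacity.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpu_mhz: int           # total CPU capacity of the pool
    mem_mb: int            # total memory capacity of the pool
    reserved_cpu: int = 0  # capacity already guaranteed to powered-on VMs
    reserved_mem: int = 0

@dataclass
class VirtualMachine:
    name: str
    cpu_reservation: int   # MHz guaranteed to this VM by policy
    mem_reservation: int   # MB guaranteed to this VM by policy

def power_on(vm: VirtualMachine, pool: ResourcePool) -> bool:
    """Admit the VM only if its reservations do not violate the pool's capacity."""
    if (pool.reserved_cpu + vm.cpu_reservation > pool.cpu_mhz or
            pool.reserved_mem + vm.mem_reservation > pool.mem_mb):
        print(f"{vm.name}: power-on denied, reservation exceeds pool capacity")
        return False
    pool.reserved_cpu += vm.cpu_reservation
    pool.reserved_mem += vm.mem_reservation
    print(f"{vm.name}: powered on")
    return True

pool = ResourcePool(cpu_mhz=10000, mem_mb=32768)
power_on(VirtualMachine("web01", 4000, 8192), pool)    # admitted
power_on(VirtualMachine("db01", 8000, 16384), pool)    # denied: reservations would exceed CPU capacity
```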



Q2. Explain public, private and hybrid clouds.
What is a public cloud?
Public clouds are the most common way of deploying cloud computing. The cloud
resources (like servers and storage) are owned and operated by a third-party cloud
service provider and delivered over the Internet. Microsoft Azure is an example of a
public cloud. With a public cloud, all hardware, software and other supporting
infrastructure are owned and managed by the cloud provider. In a public cloud, you
share the same hardware, storage and network devices with other organisations or
cloud “tenants”. You access services and manage your account using a web browser.
Public cloud deployments are frequently used to provide web-based email, online
office applications, storage, and testing and development environments.

Advantages of public clouds:

 Lower costs – no need to purchase hardware or software, and you only pay for the
service you use.

 No maintenance – your service provider provides the maintenance.

 Near-unlimited scalability – on-demand resources are available to meet your business needs.

 High reliability – a vast network of servers ensures against failure.

What is a private cloud?


A private cloud consists of computing resources used exclusively by one business or
organisation. The private cloud can be physically located at your organisation’s on-
site data centre, or it can be hosted by a third-party service provider. But in a private
cloud, the services and infrastructure are always maintained on a private network and
the hardware and software are dedicated solely to your organisation. In this way, a
private cloud can make it easier for an organisation to customise its resources to
meet specific IT requirements. Private clouds are often used by government agencies,
financial institutions and any other medium to large-sized organisations with
business-critical operations seeking enhanced control over their environment.

Advantages of a private cloud:

 More flexibility – your organisation can customise its cloud environment to meet
specific business needs.



 Improved security – resources are not shared with others, so higher levels of control
and security are possible.

 High scalability – private clouds still afford the scalability and efficiency of a public
cloud.

What is a hybrid cloud?


Often called “the best of both worlds”, hybrid clouds combine on-premises
infrastructure, or private clouds, with public clouds so that organisations can reap the
advantages of both. In a hybrid cloud, data and applications can move between
private and public clouds for greater flexibility and more deployment options. For
instance, you can use the public cloud for high-volume, lower-security needs such as
web-based email, and the private cloud (or other on-premises infrastructure) for
sensitive, business-critical operations like financial reporting. In a hybrid cloud, “cloud
bursting” is also an option. This is when an application or resource runs in the private
cloud until there is a spike in demand (such as a seasonal event like online shopping
or tax filing), at which point the organisation can “burst through” to the public cloud
to tap into additional computing resources.
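A minimal sketch of the cloud-bursting decision described above, with purely illustrative capacity numbers and function names:

```python
# Illustrative cloud-bursting policy: run in the private cloud until demand
# exceeds its capacity, then send the overflow ("burst") to the public cloud.
PRIVATE_CAPACITY = 100   # requests/sec the private cloud can absorb (assumed)

def place_load(demand_rps: float) -> dict:
    """Split incoming load between private and public clouds."""
    private = min(demand_rps, PRIVATE_CAPACITY)
    public = max(0.0, demand_rps - PRIVATE_CAPACITY)   # the burst portion
    return {"private": private, "public": public}

print(place_load(60))    # normal day: everything stays private
print(place_load(250))   # seasonal spike: 150 rps bursts to the public cloud
```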

Advantages of hybrid clouds:

 Control – your organisation can maintain a private infrastructure for sensitive assets.

 Flexibility – you can take advantage of additional resources in the public cloud when
you need them.

 Cost-effectiveness – with the ability to scale to the public cloud, you pay for extra
computing power only when needed.

 Ease – transitioning to the cloud doesn’t have to be overwhelming because you can
migrate gradually – phasing in workloads over time.



Q3. Explain the layered architecture development of cloud platform for IaaS, PaaS and
SaaS

IaaS: Infrastructure-as-a-Service

The first major layer is Infrastructure-as-a-Service, or IaaS. (Sometimes it’s called Hardware-as-a-Service.) Several years back, if you wanted to run business applications in your office and control your company website, you would buy servers and other pricy hardware in order to control local applications and make your business run smoothly.

But now, with IaaS, you can outsource your hardware needs to someone else. IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet. Freed from maintenance costs and wasted office space, companies can run their applications on this hardware and access it anytime.

Some of the biggest names in IaaS include Amazon, Microsoft, VMWare, Rackspace and Red Hat. While these companies have different specialties — some, like Amazon and Microsoft, want to offer you more than just IaaS — they are connected by a desire to sell you raw computing power and to host your website.

PaaS: Platform-as-a-Service

The second major layer of the cloud is known as Platform-as-a-Service, or PaaS, which is sometimes called middleware. The underlying idea of this category is that all of your company’s development can happen at this layer, saving you time and resources.

PaaS companies offer up a wide variety of solutions for developing and deploying applications over the Internet, such as virtualized servers and operating systems. This saves you money on hardware and also makes collaboration easier for a scattered workforce. Web application management, application design, app hosting, storage, security, and app development collaboration tools all fall into this category.

Some of the biggest PaaS providers today are Google App Engine, Microsoft Azure, Salesforce’s Force.com, the Salesforce-owned Heroku, and Engine Yard. A few recent PaaS startups we’ve written about that look somewhat intriguing include AppFog, Mendix and Standing Cloud.

SaaS: Software-as-a-Service

The third and final layer of the cloud is Software-as-a-Service, or SaaS. This layer is the one you’re most likely to interact with in your everyday life, and it is almost always accessible through a web browser. Any application hosted on a remote server that can be accessed over the Internet is considered a SaaS.

Services that you consume completely from the web like Netflix, MOG, Google Apps, Box.net, Dropbox and Apple’s new iCloud fall into this category. Regardless of whether these web services are used for business, pleasure or both, they’re all technically part of the cloud.

Some common SaaS applications used for business include Citrix’s GoToMeeting, Cisco’s WebEx, Salesforce’s CRM, ADP, Workday and SuccessFactors.



Q4. What is disaster recovery? Explain in detail.

Disaster recovery
Cloud computing, based on virtualization, takes a very different
approach to disaster recovery. With virtualization, the entire server,
including the operating system, applications, patches and data is
encapsulated into a single software bundle or virtual server. This entire
virtual server can be copied or backed up to an offsite data center and
spun up on a virtual host in a matter of minutes.
Since the virtual server is hardware independent, the operating
system, applications, patches and data can be safely and accurately
transferred from one data center to a second data center without the
burden of reloading each component of the server. This can
dramatically reduce recovery times compared to conventional (non-
virtualized) disaster recovery approaches where servers need to be
loaded with the OS and application software and patched to the last
configuration used in production before the data can be restored.
The cloud shifts the disaster recovery tradeoff curve to the left, as
shown below. With cloud computing (as represented by the red
arrow), disaster recovery becomes much more cost-effective with
significantly faster recovery times.

Given the cost-effectiveness of online backup between data centers, tape backup no longer makes sense in the
cloud. The cost-effectiveness and recovery speed of online, offsite
backup makes it difficult to justify tape backup.
The cloud makes cold site disaster recovery antiquated. With cloud
computing, warm site disaster recovery becomes a very cost-effective
option where backups of critical servers can be spun up in minutes on
a shared or private cloud host platform.
With SAN-to-SAN replication between sites, hot site DR with very
short recovery times also becomes a much more attractive, cost-
effective option. This is a capability that was rarely delivered with
conventional DR systems due to the cost and testing challenges. One
of the most exciting capabilities of disaster recovery in the cloud is the
ability to deliver multi-site availability. SAN replication not only
provides rapid failover to the disaster recovery site, but also the
capability to return to the production site when the DR test or disaster
event is over.
One of the added benefits of disaster recovery with cloud computing
is the ability to finely tune the costs and performance for the DR
platform. Applications and servers that are deemed less critical in a
disaster can be tuned down with fewer resources, while ensuring that
the most critical applications get the resources they need to keep the
business running through the disaster.
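The resource tuning described above can be pictured with a short sketch; the tiers and scaling factors below are assumptions, not prescriptions:

```python
# Illustrative tuning of DR resources by application criticality (assumed tiers):
# critical apps keep full resources at the DR site, less critical ones are scaled down.
DR_SCALE = {"critical": 1.0, "important": 0.5, "low": 0.25}   # fraction of production sizing

def dr_sizing(apps):
    """Return per-application (vCPU, RAM GB) to provision on the DR platform."""
    plan = {}
    for name, (vcpus, ram_gb, tier) in apps.items():
        factor = DR_SCALE[tier]
        plan[name] = (max(1, round(vcpus * factor)), max(1, round(ram_gb * factor)))
    return plan

production = {
    "erp":       (16, 64, "critical"),
    "intranet":  (8, 16, "important"),
    "test-farm": (8, 32, "low"),
}
print(dr_sizing(production))   # {'erp': (16, 64), 'intranet': (4, 8), 'test-farm': (2, 8)}
```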
Critical Path in Disaster Recovery – Networking
With the sea change in IT disaster recovery delivered by cloud
computing, network replication becomes the critical path. With fast
server recovery at an offsite data center, the critical path for a disaster
recovery operation is replicating the production network at the DR site
including IP address mapping, firewall rules & VLAN configuration.
Smart data center operators are providing full disaster recovery
services that not only replicate the servers between data centers, but
also replicate the entire network configuration in a way that recovers
the network as quickly as the backed up cloud servers.



Q5. Explain the structure of data center networking.

USER ACCESS NETWORKS

User access networks connect end-user devices, such as desktop and notebook
computers, printers and Voice-over-IP handsets to enterprise networks. Generally,
the user access network consists of a wiring plant between offices and per-floor
wiring closets and switches in each wiring closet, as well as the interconnection
between the wiring closets and building data centers.

THE EDGE LAYER

The switches that connect directly to end-user devices are called “edge” or “access”
switches. The edge switch makes the first connection between the relatively
unreliable patch and internal wiring out to each user’s workstation and the more
reliable backbone network within the building. Each user may have only a single
connection to a single edge switch, but everything above the edge switch should be
designed with redundancy in mind. The edge switch is usually chosen based on two
key requirements: high port density and low per-port costs.
Low port costs are desirable because of the cost of patching and repatching devices
in end-user workspaces. If ports are expensive, and only a few ports are available,
then each time a user moves a workstation, printer or phone, someone has to go into
a wiring closet and repatch to their network port — a cost that quickly overwhelms
the savings of buying fewer ports. Since the primary purpose of an edge switch is to
move around Ethernet packets, there’s no reason to buy expensive “feature-full”
switches for most buildings.

High port density is desirable because of the costs associated with managing
them. Each switch is a manageable element, so more switches lead to greater
management complexity, associated costs and potential network downtime due to
human error.
Network managers achieve density in different ways, depending on the size of their
building and the number of devices that must connect to each wiring closet. Chassis
devices, which include blades (typically with 48 ports each) are popular and can scale
up to a large number of users. Switch stacking, which treats a cluster of individual
switches as a single distributed chassis with a high-speed interconnect, is a very
popular and economical alternative.
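The port-density trade-off can be made concrete with some rough sizing arithmetic; the port counts and uplink reservations below are assumptions:

```python
# Rough sizing arithmetic for an edge layer (assumed numbers): how many 48-port
# blades or stacked switches does each wiring closet need?
import math

def edge_switches_needed(devices_per_closet: int, ports_per_switch: int = 48,
                         uplinks_per_switch: int = 2) -> int:
    """Usable ports = total ports minus ports reserved for redundant uplinks."""
    usable = ports_per_switch - uplinks_per_switch
    return math.ceil(devices_per_closet / usable)

print(edge_switches_needed(180))         # 4 switches/blades for 180 devices
print(edge_switches_needed(500, 48, 2))  # 11 for a dense floor
```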

THE DISTRIBUTION AND AGGREGATION LAYER

In small networks of a few hundred users, edge switches can be connected redundantly directly to core switch/router devices. However, for larger networks, an additional layer of switching, called the distribution layer, aggregates the edge switches. The main goal of the distribution layer is simply cabling reduction and network management, taking the many uplinks from edge switches and aggregating them into higher speed links.

If edge switches are chosen so that each wiring closet has only a single redundant
uplink, then the distribution layer is usually placed next to the network core, with a
minimum of two devices (one for each half of the redundant uplink) connecting to
each wiring closet.

Aggregation and distribution layer switches are usually selected over edge switches
for their greater reliability and larger feature set. Besides always being redundant, devices at this layer should offer nonstop service features,
such as in-service upgrades (software upgrades that don’t require reboots or
significant traffic interruption) and hot-swap fan and power supply modules.

Aggregation and distribution layer switches also have more stringent performance
requirements, including lower latency and larger MAC address table sizes. This is
because they may be aggregating traffic from thousands of users rather than the
hundreds that one would find in a single wiring closet.

THE CORE (OR BACKBONE LAYER)

For many network managers, a pair of core switches represents the top of their
network tree, the network backbone across which all traffic will pass. Although LANs
such as Ethernet are inherently peer to peer, most enterprise networks sink and source traffic from a data center (either local or in the WAN cloud) and (to a lesser extent) from the Internet.

This makes a large core switch a logical way to handle traffic passing between the
user access network and everything else. The advantage of a core switch is backplane
switching — the ability to pass traffic across the core without 1Gbps or even
10Gbps limits, achieving maximum performance.
Generally, the backbone of the network is where switching ends and routing begins,
with core switches serving as both switching and routing engines. In many cases,
core switches also have internal firewall capability as part of their routing feature set,
helping network managers segment and control traffic as it moves from one part of
the network to another.



Q6. Explain the layered architecture development of cloud platform for IaaS.

IaaS: Infrastructure-as-a-Service

The first major layer is Infrastructure-as-a-Service, or IaaS. (Sometimes it’s called Hardware-as-a-Service.) Several years back, if you wanted to run business applications in your office and control your company website, you would buy servers and other pricy hardware in order to control local applications and make your business run smoothly.

But now, with IaaS, you can outsource your hardware needs to someone else. IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet. Freed from maintenance costs and wasted office space, companies can run their applications on this hardware and access it anytime.

Some of the biggest names in IaaS include Amazon, Microsoft, VMWare, Rackspace and Red Hat. While these companies have different specialties — some, like Amazon and Microsoft, want to offer you more than just IaaS — they are connected by a desire to sell you raw computing power and to host your website.



Q7. Discuss the principal approach to plan and realize disaster recovery.

Disaster Recovery

Although the concept -- and some of the products and services -- of cloud-
based disaster recovery is still nascent, some companies, especially SMBs,
are discovering and starting to leverage cloud services for DR. It can be an
attractive alternative for companies that may be strapped for IT resources
because the usage-based cost of cloud services is well suited for DR where
the secondary infrastructure is parked and idling most of the time.
Having DR sites in the cloud reduces the need for data center space, IT
infrastructure and IT resources, which leads to significant cost reductions,
enabling smaller companies to deploy disaster recovery options that were
previously only found in larger enterprises. "Cloud-based DR moves the
discussion from data center space and hardware to one about cloud
capacity planning," said Lauren Whitehouse, senior analyst at Enterprise
Strategy Group (ESG) in Milford, Mass.

But disaster recovery in the cloud isn't a perfect solution, and its
shortcomings and challenges need to be clearly understood before a firm
ventures into it. Security usually tops the list of concerns:

 Is data securely transferred and stored in the cloud?

 How are users authenticated?

 Are passwords the only option or does the cloud provider offer some
type of two-factor authentication?

 Does the cloud provider meet regulatory requirements?

And because clouds are accessed via the Internet, bandwidth requirements
also need to be clearly understood. There's a risk of only planning for
bandwidth requirements to move data into the cloud without sufficient
analysis of how to make the data accessible when a disaster strikes:



 Do you have the bandwidth and network capacity to redirect all users to
the cloud?

 If you plan to restore from the cloud to on-premises infrastructure, how long will that restore take?

"If you use cloud-based backups as part of your DR, you need to design
your backup sets for recovery," said Chander Kant, CEO and founder at
Zmanda Inc., a provider of cloud backup services and an open-source
backup app.
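For the bandwidth questions above, a back-of-the-envelope estimate of restore time is often enough to expose a problem; the data sizes, link speeds and efficiency factor below are assumptions:

```python
# Back-of-the-envelope restore-time estimate for pulling backups out of the
# cloud over a WAN link (assumed figures, 8 bits per byte).
def restore_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Efficiency discounts protocol overhead and contention on the link."""
    seconds = (data_gb * 8 * 1000) / (link_mbps * efficiency)
    return seconds / 3600

print(f"{restore_hours(2000, 100):.1f} h")   # 2 TB over a 100 Mbps link -> ~63 h
print(f"{restore_hours(2000, 1000):.1f} h")  # same data over 1 Gbps   -> ~6.3 h
```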

Reliability of the cloud provider, its availability and its ability to serve your
users while a disaster is in progress are other key considerations. The choice
of a cloud service provider or managed service provider (MSP) that can
deliver service within the agreed terms is essential, and while making a
wrong choice may not land you in IT hell, it can easily put you in the
doghouse or even get you fired.



Q8. Discuss availability, reliability and serviceability of a system.

Reliability, availability and serviceability (RAS) is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was
originally used by International Business Machines (IBM) as a term to describe the robustness of
their mainframe computers.[1][2]
Computers designed with higher levels of RAS have many features that protect data integrity and
help them stay available for long periods of time without failure.[3] This data integrity
and uptime is a particular selling point for mainframes and fault-tolerant systems.
While RAS originated as a hardware-oriented term, systems thinking has extended the concept of
reliability-availability-serviceability to systems in general, including software.[4]

 Reliability can be defined as the probability that a system will produce correct outputs up to
some given time t.[5] Reliability is enhanced by features that help to avoid, detect and repair
hardware faults. A reliable system does not silently continue and deliver results that include
uncorrected corrupted data. Instead, it detects and, if possible, corrects the corruption, for
example: by retrying an operation for transient (soft) or intermittent errors, or else, for
uncorrectable errors, isolating the fault and reporting it to higher-level recovery mechanisms
(which may failover to redundant replacement hardware, etc.), or else by halting the affected
program or the entire system and reporting the corruption. Reliability can be characterized in
terms of mean time between failures (MTBF), with reliability = exp(-t/MTBF).[5]
 Availability means the probability that a system is operational at a given time, i.e. the amount
of time a device is actually operating as the percentage of total time it should be operating.
High-availability systems may report availability in terms of minutes or hours of downtime
per year. Availability features allow the system to stay operational even when faults do occur.
A highly available system would disable the malfunctioning portion and continue operating
at a reduced capacity. In contrast, a less capable system might crash and become totally
nonoperational. Availability is typically given as a percentage of the time a system is
expected to be available, e.g., 99.999 percent ("five nines").
 Serviceability or maintainability is the simplicity and speed with which a system can be
repaired or maintained; if the time to repair a failed system increases, then availability will
decrease. Serviceability includes various methods of easily diagnosing the system when
problems arise. Early detection of faults can decrease or avoid system downtime. For
example, some enterprise systems can automatically call a service center (without human
intervention) when the system experiences a system fault. The traditional focus has been on
making the correct repairs with as little disruption to normal operations as possible.
Note the distinction between reliability and availability: reliability measures the ability of a system
to function correctly, including avoiding data corruption, whereas availability measures how often
the system is available for use, even though it may not be functioning correctly. For example, a
server may run forever and so have ideal availability, but may be unreliable, with frequent data
corruption.[6]
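A worked example of the two formulas helps make this distinction concrete; the MTBF and MTTR values below are illustrative assumptions:

```python
# Worked example of the formulas above: reliability R(t) = exp(-t / MTBF) and
# steady-state availability A = MTBF / (MTBF + MTTR). Numbers are illustrative.
import math

MTBF_H = 10_000   # mean time between failures, hours (assumed)
MTTR_H = 2        # mean time to repair, hours (assumed)

reliability_1yr = math.exp(-8760 / MTBF_H)   # chance of running a full year with no failure
availability = MTBF_H / (MTBF_H + MTTR_H)    # fraction of time the system is usable

print(f"R(1 year)    = {reliability_1yr:.3f}")   # ~0.416: a failure within the year is likely
print(f"Availability = {availability:.5%}")      # ~99.980%: but each failure is repaired quickly
```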
Physical faults can be temporary or permanent.

 Permanent faults lead to a continuing error and are typically due to some physical failure
such as metal electromigration or dielectric breakdown.
 Temporary faults include transient and intermittent faults.



 Transient (a.k.a. soft) faults lead to independent one-time errors and are not due to
permanent hardware faults: examples include alpha particles flipping a memory bit,
electromagnetic noise, or power-supply fluctuations.
 Intermittent faults occur due to a weak system component, e.g. circuit parameters
degrading, leading to errors that are likely to recur.

Q9. Explain the layered architecture development of cloud platform for IaaS.

IaaS: Infrastructure-as-a-Service

The first major layer is Infrastructure-as-a-Service, or IaaS. (Sometimes it’s called Hardware-as-a-Service.) Several years back, if you wanted to run business applications in your office and control your company website, you would buy servers and other pricy hardware in order to control local applications and make your business run smoothly.

But now, with IaaS, you can outsource your hardware needs to someone else. IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet. Freed from maintenance costs and wasted office space, companies can run their applications on this hardware and access it anytime.

Some of the biggest names in IaaS include Amazon, Microsoft, VMWare, Rackspace and Red Hat. While these companies have different specialties — some, like Amazon and Microsoft, want to offer you more than just IaaS — they are connected by a desire to sell you raw computing power and to host your website.



Q10. What are the different issues in data center management?

Datacentre management challenges

Challenge 1: Maintaining Availability and Uptime

If you’re using spreadsheets or homegrown tools to manage your server information, you
probably already know the information stored can be outdated, inaccurate, or incomplete. This
can prove challenging when unplanned downtime requires troubleshooting, or when attempting
to map the power chain.

DCIM enables consistent and accurate record keeping, and provides instantaneous visual and
textual information to reduce the time it takes to locate assets and dependencies, thereby
reducing troubleshooting time.

Challenge 2: Improving Utilization of Capacity (Power, Cooling, Space)

In a dynamic data center it is almost impossible to understand how much space, power, and cooling you have; to predict when you will run out; to know which server is best for a new service; and to determine just how much power is needed to ensure uptime and availability.

DCIM tools that quickly model and allocate space and manage power and network connectivity are key to discovering hidden capacity while better managing the capacity you do possess.

Challenge 3: Reporting Reduced Operating Expenses

It’s not enough to implement solutions that reduce operating expenses; you also have to prove it. According to the Uptime Institute, “Going forward, enterprise data center managers will need to be
able to collect cost and performance data, and articulate their value to the business in order to
compete with third party offerings.”

A DCIM solution with dashboard and reporting tools capable of instantly aggregating data across several dimensions allows data center managers to quickly show stakeholders that the data center is moving toward full operational efficiency.

Challenge 4: Managing Energy Usage & Costs

According to a NY Times article, “Most data centers, by design, consume vast amounts of energy
in an incongruously wasteful manner…online companies typically run their facilities at maximum
capacity around the clock…as a result, data centers can waste 90 percent or more of the
electricity they pull off the grid.”

A DCIM solution helps data center managers monitor energy consumption, cycle off servers during off hours, and identify candidates for consolidation.
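As a sketch of how such monitoring data might be used, the following snippet flags mostly idle servers as candidates for off-hours power cycling or consolidation; the utilization figures and threshold are assumptions:

```python
# Illustrative use of monitoring data (assumed format) to flag servers that are
# candidates for off-hours power cycling or consolidation.
avg_cpu_utilization = {        # % CPU averaged over the last month (sample data)
    "web01": 45.0, "web02": 38.0, "legacy-app": 3.5, "batch07": 2.1, "db01": 61.0,
}

IDLE_THRESHOLD = 5.0   # assumed cut-off for "mostly idle"

candidates = [host for host, cpu in avg_cpu_utilization.items() if cpu < IDLE_THRESHOLD]
print("Consolidation / power-down candidates:", candidates)
# -> ['legacy-app', 'batch07']
```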

Challenge 5: Improving Staff Productivity

Non-automated or manual systems require facilities and IT staff to spend an extraordinary
amount of time logging activities into spreadsheets. This takes away time that can be spent
making strategic decisions for the data center and improving service offerings.

DCIM solutions automate manual processes like workflow approvals and the assignment of technicians to make adds, moves, and changes. This also assists with provisioning and auditing, all while achieving operational savings.

Q11. Discuss the challenges in developing cloud architecture.

1. Privacy and Security

Cloud architectures do not automatically grant security compliance for the end-user data or apps on them, so apps written for the cloud have to be secure on their own terms. Some of the
responsibility for this does fall to cloud vendors, but the lion’s share of it is still in the lap of the
application designers. Cloud computing introduces another level of risks because essential
services are often outsourced to a third party, making it harder to maintain data integrity and
privacy.

2. Client incomprehension

We have probably passed the days when people thought clouds were just big server clusters, but that doesn’t mean we can ignore client misconceptions about the cloud moving forward. There are also too many misunderstandings about how public and private clouds work together, and about how easy it is to move from one kind of infrastructure to another. A good way to combat this is to present customers with real-world examples of what is possible and why, so that they can base their understanding on how things actually work.

3. Data Security

One of the major concerns associated with cloud computing is its dependency on the cloud
service provider. For uninterrupted and fast cloud service you need to choose a vendor with
proper infrastructure and technical expertise. Since you would be running your company’s asset
and data from a third party interface ensuring data security and privacy are of utmost
importance. Hence, when engaging a cloud service provider, always inquire about their cloud-
based security policies. However, cloud service companies usually employ strict data security
policies to prevent hacking and invest heavily in improved infrastructure and software.

4. Address growing integration complexities

Many applications have complex integration needs to connect to applications on the cloud
network, as well as to other on-premises applications. These include integrating existing cloud
services with existing enterprise applications and data structures. There is a need to connect the cloud application with the rest of the enterprise in a simple, quick and cost-effective way.
Integrating new applications with existing ones is a significant part of the process and cloud
services bring even more challenges from an integration perspective.

5. Reliability and availability



Cloud service providers still lack round-the-clock service, and this results in frequent outages. It is
important to monitor the service being provided using internal or third-party tools. It is vital to
have plans to supervise usage, performance and business dependency of these cloud services.

6. Performance and Bandwidth Cost

Businesses can save money on hardware but they have to spend more for the bandwidth. This
could be a low cost for small applications but can be significantly high for the data-intensive
applications. Delivering intensive and complex data over the network requires sufficient
bandwidth. Because of this, many enterprises are waiting for reduced costs before switching to
the cloud services.
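A rough estimate of that bandwidth cost can be sketched as follows; the per-GB egress price is an assumption, since real provider pricing varies and is usually tiered:

```python
# Rough monthly bandwidth-cost estimate for a data-intensive application
# (assumed per-GB egress price; real provider pricing varies and is tiered).
def monthly_egress_cost(gb_per_day: float, price_per_gb: float = 0.09) -> float:
    return gb_per_day * 30 * price_per_gb

print(f"${monthly_egress_cost(10):,.2f}")     # small app:  ~$27/month
print(f"${monthly_egress_cost(5000):,.2f}")   # data-heavy: ~$13,500/month
```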

7. Selecting the right cloud set-up

There are three types of cloud environments available – private, public and hybrid. The secret of
successful cloud implementation lies in choosing the most appropriate cloud set-up. Big companies feel safer with their vast data in a private cloud environment, while small enterprises often benefit economically by hosting their services in a public cloud. Some companies also prefer the
hybrid cloud because it is flexible, cost-effective and offers a mix of public and private cloud
services.

8. Dependency on Service Providers

One of the major issues with cloud computing is its dependency on the service provider. The companies providing cloud services charge businesses for utilizing cloud computing services based on usage. Customers typically subscribe to cloud services in order to use them. For uninterrupted and fast services one needs to choose a vendor with proper infrastructure and technical expertise. You need a vendor who can meet the necessary standards. The service-level agreement should be read carefully and understood in detail, particularly with regard to outages, lock-in clauses, etc. A cloud service is any service made available to businesses or corporates from a cloud computing provider’s server. In other words, cloud services are professional services that support organizations in selecting, deploying and managing various cloud-based resources.

9. Seeing beyond the challenges

These challenges should not be considered as roadblocks in the pursuit of cloud computing. It is
rather important to give serious considerations to these issues and the possible ways out before
adopting the technology. Cloud computing is rapidly gaining enterprise adoption, yet many IT
professionals still remain skeptical, for good reason. Issues like security and standards continue to
challenge this emerging technology. Strong technical skills will be essential to address the
security and integration issues in the long run. There are also issues faced while making
transitions from the on-premise set-up to the cloud services like data migration issues and
network configuration issues. But planning ahead can avoid most of these problems with cloud
configurations. The extent of the advantages and disadvantages of cloud services vary from
business to business, so it is important for any business to weigh these up when considering their
move into cloud computing.



Q12. Discuss the scope of disaster recovery.

Disaster recovery

Defining DR systems
First, let’s nail down some definitions. DR is defined primarily as the protection of data in a
secure facility (generally off-site from production machines) with the intent of saving the
data in case of the loss of a data center or major data systems. DR does not include failover
capability, which is the domain of high availability (HA) systems. We'll discuss HA systems in
detail in an upcoming column. Many DR systems also include HA functionality; so if you are
considering using both types of systems, keep that fact in mind.

Both HA and DR are part of the overall science of business continuity planning (BCP), which
is the implementation of HA and DR for data systems, along with human resources and
facilities management policies, to ensure that both your data and your employees are safe.

Many DR products are on the market today, so I won't look at specific packages. Instead, I'll
go over the characteristics that most available products share. Generally speaking, DR
systems are split into two main types, defined by the methodology used to replicate data
from one location to another: synchronous and asynchronous data transfer systems.

Both DR systems let you create up-to-the-second backup copies of your valuable production
data in another physical location. This allows the data to survive intact if the data center is
lost for some reason, such as in a flood or fire. Unlike tape backup systems, the data is current and in a usable format, as it is already on a disk system and not stored on a tape, which
must be restored to disk. A data center in Houston can be secured with a data center in
Dallas, for example, allowing systems and people to be moved to another location and then
resume operations with a minimum of recovery issues.

RPO (Recovery Point Objective): the maximum amount of data, measured in time, that the business can afford to lose in a disaster; it determines how frequently data must be replicated or backed up to the secondary site.

RTO (Recovery Time Objective): the maximum acceptable time to restore systems and resume operations after a disaster is declared.
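A simple sketch of checking a DR design against these two objectives (all figures assumed for illustration):

```python
# Simple compliance check of a DR design against agreed RPO/RTO targets
# (all figures assumed for illustration).
def meets_objectives(replication_lag_min: float, expected_recovery_min: float,
                     rpo_min: float, rto_min: float) -> bool:
    """RPO bounds how much data (in time) may be lost; RTO bounds the downtime."""
    return replication_lag_min <= rpo_min and expected_recovery_min <= rto_min

# Asynchronous replication lagging 5 min, recovery drill measured at 45 min,
# against an RPO of 15 min and an RTO of 60 min:
print(meets_objectives(5, 45, rpo_min=15, rto_min=60))   # True
print(meets_objectives(30, 45, rpo_min=15, rto_min=60))  # False: RPO violated
```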



Q13. Explain primary disaster recovery systems.

For most applications and businesses, asynchronous DR technologies offer a much more
cost-effective—and still quite sufficient—solution. Figure B offers a view of the typical
asynchronous system.

These systems are generally software-based and reside on the host server rather than on the
attached storage array. They can protect both local and attached disk systems. In an
asynchronous system, I/O requests are committed to the primary disk systems immediately
(blue line) while a copy of that I/O is sent via some medium (usually TCP/IP) to the backup
disk systems (red line). Since there is no waiting for the commit signal from the remote
systems, these systems can send a continuous stream of I/O data to the backup systems
without slowing down I/O response time on the primary system.

Most asynchronous systems have some methodology to make sure that if something is lost
in transmission, it can be resent. Some can also make sure that transactions are written to
both disks in the same order, which is vital for database-driven applications. In addition,
since the usual method of transmission is TCP/IP, these systems have no real distance
limitations, and there's no limit to splitting the primary and backup systems across WAN
segments or subnets.
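Conceptually, the asynchronous path described above behaves like a local commit plus an ordered queue of pending copies; the sketch below is illustrative, not a replication product:

```python
# Conceptual sketch of asynchronous replication: writes are committed locally
# at once, and copies are queued (in order) for the DR site.
from collections import deque

primary_disk = []              # stands in for the local storage array
replication_queue = deque()    # ordered stream of I/O headed to the DR site

def write(block_id: int, data: bytes) -> None:
    primary_disk.append((block_id, data))       # commit locally, no waiting on the WAN
    replication_queue.append((block_id, data))  # ship a copy asynchronously

def drain_to_dr_site(dr_disk: list) -> None:
    """Apply queued writes at the DR site in the same order they were issued."""
    while replication_queue:
        dr_disk.append(replication_queue.popleft())

write(1, b"order-123"); write(2, b"order-124")
dr_copy = []
drain_to_dr_site(dr_copy)
print(dr_copy == primary_disk)   # True once the queue has drained
# Anything still queued when the primary fails is the (small) data-loss window.
```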



The main drawback is the potential for a few transactions to be lost during a failover event. If
the primary server suddenly goes offline, anything waiting to be transmitted to the backup
system will be lost. However, since this usually involves only a few transactions (and a few
bytes of data), the performance of asynchronous systems is well within the required
parameters of almost all business applications. For example: Exchange, SQL Server, and
Oracle systems can easily recover during a failover event using these systems without the
need for advanced recovery operations.

In addition to server-to-server replication, asynchronous solutions can allow you to send multiple replication streams from multiple primary servers to a single DR server, known as a
many-to-one configuration. This methodology allows for protection of the data without the
expense of obtaining duplicate hardware for each primary server in the DR location. Adding
a SAN or NAS system into the mix at the DR site can further reduce overall TCO. Many of the
new NAS systems are being shipped with replication software already in place or ready to be
installed directly onto the storage device itself, allowing it to act as the host to receive
replication data at the DR site.



Unit 2
Q1. What is fibre channel? What is storage area network? Discuss the evolution of Fibre
channel SAN from arbitrated loop to enterprise SAN.

Fibre Channel is a high-speed network technology used to connect servers to data storage area networks. Fibre Channel technology handles high-performance disk
storage for applications on many corporate networks, and it supports data backups,
clustering and replication.

Fibre Channel vs. Fiber Optic Cables

Fibre Channel technology supports both fiber and copper cabling, but copper
limits Fibre Channel to a maximum recommended reach of 100 feet, whereas more
expensive fiber optic cables reach up to 6 miles. The technology was specifically
named Fibre Channel rather than Fiber Channel to distinguish it as supporting both
fiber and copper cabling.

Fibre Channel Speed and Performance

The original version of Fibre Channel operated at a maximum data rate of 1 Gbps.
Newer versions of the standard increased this rate up to 128 Gbps, with 8, 16, and 32
Gbps versions also in use.

Fibre Channel does not follow the typical OSI model layering. It is split into five
layers:

 FC-4 – Protocol-mapping layer
 FC-3 – Common services layer
 FC-2 – Signalling Protocol
 FC-1 – Transmission Protocol
 FC-0 – PHY connections and cabling

Fibre Channel networks have a historical reputation for being expensive to build,
difficult to manage, and inflexible to upgrade due to incompatibilities between
vendor products. However, many storage area network solutions use Fibre Channel
technology. Gigabit Ethernet has emerged, however, as a lower cost alternative for
storage networks. Gigabit Ethernet can better take advantage of internet standards
for network management like SNMP.

SAN

Storage area networks (SANs) are the most common storage networking architecture
used by enterprises for business-critical applications that need to deliver high
throughput and low latency. A rapidly growing portion of SAN deployments
leverages all-flash storage to gain its high performance, consistent low latency, and
lower total cost when compared to spinning disk. By storing data in centralized
shared storage, SANs enable organizations to apply consistent methodologies and
tools for security, data protection, and disaster recovery.

A SAN is block-based storage, leveraging a high-speed architecture that connects servers to their logical disk units (LUNs). A LUN is a range of blocks provisioned from
a pool of shared storage and presented to the server as a logical disk. The server
partitions and formats those blocks—typically with a file system—so that it can store
data on the LUN just as it would on local disk storage.

SANs make up about two-thirds of the total networked storage market. They are
designed to remove single points of failure, making SANs highly available and
resilient. A well-designed SAN can easily withstand multiple component or device
failures.

Arbitrated Loop, also known as FC-AL, is a Fibre Channel topology in which devices
are connected in a one-way loop fashion in a ring topology. Historically it was a
lower-cost alternative to a fabric topology. It allowed connection of many servers
and computer storage devices without using then-costly Fibre Channel switches.
The cost of the switches dropped considerably, so by 2007, FC-AL had become rare
in server-to-storage communication. It is however still common within storage
systems.

 It is a serial architecture that can be used as the transport layer in a SCSI network, with up to 127 devices. The loop may connect into a fibre channel fabric via one of its ports.
 The bandwidth on the loop is shared among all ports.
 Only two ports may communicate at a time on the loop. One port wins
arbitration and may open one other port in either half or full duplex mode.
 A loop with two ports is valid and has the same physical topology as point-to-
point, but still acts as a loop protocol-wise.
 Fibre Channel ports capable of arbitrated loop communication are NL_port
(node loop port) and FL_port (fabric loop port), collectively referred to as the
L_ports. The ports may attach to each other via a hub, with cables running
from the hub to the ports. The physical connectors on the hub are not ports in
terms of the protocol. A hub does not contain ports.
 An arbitrated loop with no fabric port (with only NL_ports) is a private loop.
 An arbitrated loop connected to a fabric (through an FL_port) is a public loop.



 An NL_Port must provide fabric logon (FLOGI) and name registration facilities
to initiate communication with other node through the fabric (to be an
initiator).

Arbitrated loop can be physically cabled in a ring fashion or using a hub. The physical
ring ceases to work if one of the devices in the chain fails. The hub on the other
hand, while maintaining a logical ring, allows a star topology on the cable level. Each
receive port on the hub is simply passed to next active transmit port, bypassing any
inactive or failed ports.

Fibre Channel hubs therefore have another function: They provide bypass circuits
that prevent the loop from breaking if one device fails or is removed. If a device is
removed from a loop (for example, by pulling its interconnect plug), the hub’s bypass
circuit detects the absence of signal and immediately begins to route incoming data
directly to the loop’s next port, bypassing the missing device entirely. This gives
loops at least a measure of resiliency—failure of one device in a loop doesn’t cause
the entire loop to become inoperable.
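To make the arbitration behaviour described above concrete, here is a toy model in which one port wins the loop per round and opens a single partner port; real FC-AL arbitration uses AL_PA priorities and ARB primitives, so treat this only as an illustration:

```python
# Toy model of arbitrated-loop behaviour: only one port wins arbitration at a
# time, and the winner opens exactly one other port while the rest wait.
def arbitrate(requesting_ports: list[int]) -> int:
    """Lowest address wins arbitration (simplified priority rule)."""
    return min(requesting_ports)

def loop_round(requests: dict[int, int]) -> None:
    winner = arbitrate(list(requests))
    target = requests[winner]
    print(f"Port {winner} wins arbitration and opens port {target}; "
          f"ports {sorted(set(requests) - {winner})} must wait")

# Three ports want the loop at once; the shared medium serves one pair per round.
loop_round({5: 20, 9: 20, 12: 7})   # Port 5 wins, opens port 20
```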



Q2. What is FCIP? Explain the encapsulation of FC frames into IP payload.

Fibre Channel over IP, or FCIP, is a tunnelling protocol used to connect Fibre Channel
(FC) switches over an IP network, enabling interconnection of remote locations. From
the fabric view, an FCIP link is an inter-switch link (ISL) that transports FC control and
data frames between switches.

FCIP routers link SANs to enable data to traverse fabrics without the need to merge
fabrics. FCIP as an ISL between Fibre Channel SANs makes sense in situations such as:

 Where two sites are connected by existing IP-based networks but not dark
fibre.
 Where IP networking is preferred because of cost or the distance exceeds the
FC limit of 500 kilometres.
 Where the duration or lead time of the requirement does not enable dark
fibre to be installed.

FCIP ISLs have inherent performance, reliability, data integrity and manageability
limitations compared with native FC ISLs. Reliability measured in percentage of
uptime is on average higher for SAN fabrics than for IP networks. Network delays
and packet loss may create bottlenecks in IP networks. FCIP troubleshooting and
performance analysis requires evaluating the whole data path from FC fabric, IP LAN
and WAN networks, which can make it more complex to manage than other
extension options.

Protocol conversion from FC to FCIP can impact the performance that is achieved,
unless the IP LAN and WAN are optimally configured, and large FC frames are likely
to fragment into two Ethernet packets. The default maximum transfer unit (MTU) size
for Ethernet is 1,500 bytes, and the maximum Fibre Channel frame size is 2,172 bytes,
including FC headers. So, a review of the IP network’s support of jumbo frames is
important if sustained gigabit throughput is required. To determine the optimum
MTU size for the network, you should review IP WAN header overheads for network
resources such as the VPN and MPLS.

FCIP is typically deployed for long-haul applications that are not business-critical and
do not need especially high performance.

Applications for FCIP include:

 Remote data asynchronous replication to a secondary site.


 Centralised SAN backup and archiving, although tape writes can fail if packets
are dropped.
 Data migration between sites, as part of a data centre migration or
consolidation project.



Fiber Channel Frames and Ethernet
Fibre Channel frames that would be transferred on the Fibre Channel network are encapsulated within standard IP packets. This is called tunneling. Essentially, a particular Fibre Channel frame is put inside an IP packet. By doing this, the IP packet can be transmitted on our Ethernet network just like any other IP packet. Switches or any other hardware on our network don't know that it is any different; they don't know that inside we have a hidden Fibre Channel frame.

Software on both ends of the connection is configured to strip off the IP information
leaving the native fiber channel frame available.
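The "frame inside a packet" idea can be sketched as follows; this is not the real FCIP encapsulation header defined in RFC 3821, just a simplified illustration of tunneling:

```python
# Simplified illustration of tunneling: an FC frame is carried as opaque payload
# inside a TCP/IP packet. (NOT the real FCIP header layout; illustration only.)
import struct

def encapsulate(fc_frame: bytes) -> bytes:
    """Prefix the FC frame with a toy 4-byte length header, as a TCP payload would carry it."""
    return struct.pack("!I", len(fc_frame)) + fc_frame

def decapsulate(tcp_payload: bytes) -> bytes:
    """Strip the toy header at the far-end gateway, recovering the native FC frame."""
    (length,) = struct.unpack("!I", tcp_payload[:4])
    return tcp_payload[4 : 4 + length]

fc_frame = b"\x22\xff\xff\xfd" + b"SCSI-WRITE-PAYLOAD"   # stand-in for a real FC frame
wire_bytes = encapsulate(fc_frame)                       # what crosses the IP network
print(decapsulate(wire_bytes) == fc_frame)               # True: the frame emerges intact
```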



Q3. What is iFCP . Explain the architecture of iFCP.

The iFCP specification defines iFCP as a gateway-to-gateway protocol for the implementation of a Fibre Channel fabric in which TCP/IP switching and routing elements replace Fibre Channel components. The protocol enables the attachment of existing Fibre Channel storage products to an IP network by supporting the fabric services required by such devices.

iFCP supports FCP, the ANSI SCSI serialization standard to transmit SCSI commands, data, and status information between a SCSI initiator and SCSI target on a serial link, such as a Fibre Channel network (FC-2). iFCP replaces the transport layer (FC-2) with an IP network (i.e. Ethernet), but retains the upper layer (FC-4) information, such as FCP. This is accomplished by mapping the existing Fibre Channel transport services to TCP/IP. iFCP, through the use of TCP/IP, can therefore accommodate deployment in environments where the underlying IP network is not reliable.

iFCP's primary advantage as a SAN gateway protocol is the mapping of Fibre Channel transport services over TCP, allowing networked, rather than point-to-point, connections between and among SANs without requiring the use of Fibre Channel fabric elements. Existing FCP-based drivers and storage controllers can safely assume that iFCP, also being Fibre Channel-based, provides the reliable transport of storage data between SAN domains via TCP/IP, without requiring any modification of those products. iFCP is designed to operate in environments that may experience a wide range of latencies.

iFCP is designed for customers who may have a wide range of Fibre Channel devices (i.e. Host Bus Adapters, Subsystems, Hubs, Switches, etc.), and want the flexibility to interconnect these devices with an IP network. iFCP can interconnect Fibre Channel SANs with IP, as well as allow customers the freedom to use TCP/IP networks in place of Fibre Channel networks for the SAN itself. Through the implementation of iFCP as a gateway-to-gateway protocol, these customers can maintain the benefit of their Fibre Channel devices while leveraging a highly scalable, manageable and flexible enterprise IP network as the transport medium of choice.

iFCP enables Fibre Channel device-to-device communication over an IP network, providing more flexibility compared to only enabling SAN-to-SAN communication. For example, iFCP has a TCP connection per N_Port to N_Port couple, and such a connection can be set to have its own Quality of Service (QoS) identity. With SAN-to-SAN communication, varying connections cannot be prioritized over one another.

Using a multi-connection model for TCP is important for iFCP as it provides higher aggregate throughput compared to an implementation of a single-connection model. With the single-connection model, a single TCP connection links multiple SAN islands, and therefore multiple N_Port to N_Port sessions. One congestion loss in the connection can disrupt the entire fabric and affect all N_Port to N_Port sessions using that tunnel. With the multi-connection model of iFCP, congestion loss in one N_Port to N_Port session will only affect the throughput of that session. This model isolates the effects of congestion to specific sessions, without impact to other sessions operating in parallel.
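The multi-connection model can be illustrated with a small sketch; the N_Port addresses and QoS labels are assumptions, and this is a data-structure illustration rather than a protocol implementation:

```python
# Sketch of iFCP's multi-connection model (illustrative data structures only):
# each N_Port-to-N_Port session gets its own TCP connection, so congestion in
# one session stays isolated from the others.
sessions = {
    # (initiator N_Port, target N_Port) -> per-session TCP connection settings
    ("0x010100", "0x020100"): {"tcp_conn": 1, "qos": "high"},
    ("0x010200", "0x020200"): {"tcp_conn": 2, "qos": "best-effort"},
}

def on_congestion(initiator: str, target: str) -> None:
    affected = sessions[(initiator, target)]
    print(f"Loss on TCP connection {affected['tcp_conn']} throttles only "
          f"{initiator}->{target}; other sessions keep their throughput")

on_congestion("0x010200", "0x020200")
```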



Q4.What is FCoE? Explain the FCoE architecture?

Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel
to use 10 Gigabit Ethernet networks (or higher speeds) while preserving the Fibre
Channel protocol. The specification was part of the International Committee for
Information Technology Standards T11 FC-BB-5 standard published in 2009.[1]

FCoE transports Fibre Channel directly over Ethernet while being independent of the
Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and
FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre
Channel constructs, FCoE was meant to integrate with existing Fibre Channel
networks and management software.

The main application of FCoE is in data center storage area networks (SANs). FCoE
has particular application in data centers due to the cabling reduction it makes
possible, as well as in server virtualization applications, which often require many
physical I/O connections per server.

With FCoE, network (IP) and storage (SAN) data traffic can be consolidated using a
single network. This consolidation can:

 reduce the number of network interface cards required to connect to disparate storage and IP networks
 reduce the number of cables and switches
 reduce power and cooling costs

Data centers used Ethernet for TCP/IP networks and Fibre Channel for storage area
networks (SANs). With FCoE, Fibre Channel becomes another network protocol
running on Ethernet, alongside traditional Internet Protocol (IP) traffic. FCoE operates
directly above Ethernet in the network protocol stack, in contrast to iSCSI which runs
on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer, and will
not work across routed IP networks.

Since classical Ethernet had no priority-based flow control, unlike Fibre Channel,
FCoE required enhancements to the Ethernet standard to support a priority-based
flow control mechanism (to reduce frame loss from congestion). The IEEE standards
body added priorities in the data center bridging Task Group.

Fibre Channel required three primary extensions to deliver the capabilities of Fibre
Channel over Ethernet networks:



 Encapsulation of native Fibre Channel frames into Ethernet Frames.
 Extensions to the Ethernet protocol itself to enable an Ethernet fabric in which
frames are not routinely lost during periods of congestion.
 Mapping between Fibre Channel N_port IDs (aka FCIDs) and Ethernet MAC
addresses.
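The third extension above, mapping FC N_Port IDs to MAC addresses, can be sketched as follows. FCoE fabrics commonly derive a fabric-provided MAC address (FPMA) by prefixing the 24-bit FCID with a 24-bit FC-MAP value; the 0x0EFC00 prefix shown here is the commonly cited default, and the details should be treated as illustrative:

```python
# Sketch of FCID-to-MAC mapping: build a fabric-provided MAC (FPMA) by placing
# a 24-bit FC-MAP prefix above the 24-bit FCID (illustrative values).
FC_MAP = 0x0EFC00   # commonly cited default prefix (assumed here)

def fcid_to_mac(fcid: int) -> str:
    """Concatenate FC-MAP (upper 24 bits) and FCID (lower 24 bits) into a MAC address."""
    mac = (FC_MAP << 24) | (fcid & 0xFFFFFF)
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

print(fcid_to_mac(0x010203))   # -> 0e:fc:00:01:02:03
```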

"Converged" network adapter

Computers can connect to FCoE with converged network adapters (CNAs), which
contain both Fibre Channel host bus adapter (HBA) and Ethernet network interface
controller (NIC) functionality on the same physical card. CNAs have one or more
physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card; however, FCoE CNAs offload (from the
CPU) the low level frame processing and SCSI protocol functions traditionally
performed by Fibre Channel host bus adapters.



Q8. What is an iSCSI session? Why is it required?

What is iSCSI Connection and Session

Here is a listing of technical definitions for iSCSI Connection & iSCSI Session.

Connection: A connection is a TCP connection. Communication between the initiator and target occurs over one or more TCP connections. The TCP connections carry
control messages, SCSI commands, parameters, and data within iSCSI Protocol Data
Units (iSCSI PDUs).

Session: The group of TCP connections that link an initiator with a target form a
session (loosely equivalent to a SCSI I-T nexus). TCP connections can be added and
removed from a session. Across all connections within a session, an initiator sees one
and the same target.

SSID (Session ID): A session between an iSCSI initiator and an iSCSI target is defined
by a session ID that is a tuple composed of an initiator part (ISID) and a target part
(Target Portal Group Tag). The ISID is explicitly specified by the initiator at session
establishment. The Target Portal Group Tag is implied by the initiator through the
selection of the TCP endpoint at connection establishment. The Target Portal Group
Tag key must also be returned by the target as a confirmation during connection
establishment when TargetName is given.

CID (Connection ID): Connections within a session are identified by a connection ID.
It is a unique ID for this connection within the session for the initiator. It is generated
by the initiator and presented to the target during login requests and during logouts
that close connections.

ISID: The initiator part of the Session Identifier. It is explicitly specified by the initiator
during Login.

A session is established between an iSCSI initiator and an iSCSI target when the iSCSI
initiator performs a logon or connects with the target. The link between an initiator
and a target which contains the group of TCP connections forms a session. A session
is identified by a session ID that includes an initiator part and a target part.

Every connection in a session is identified through a connection ID (CID).


Communication between the initiator and the target is carried out over one or more TCP connections. An implementation must support at least one TCP connection between iSCSI targets and initiators, and may support multiple connections in a session. The TCP connections carry control messages, SCSI commands, parameters, and data within iSCSI Protocol Data Units (iSCSI PDUs). In case of a connection failure, at least two connections in a session are needed for error recovery.

There are two types of sessions that can be established:

Discovery-session: This type of session is used only for target discovery. The iSCSI
target may permit SendTargets text requests in such a session.

Normal operational session: This type of session is an unrestricted session used for normal I/O between the initiator and the target.
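As an illustration of these concepts, the following is a minimal sketch using the Windows iSCSI initiator PowerShell cmdlets; the portal address and target IQN are hypothetical placeholders. Registering the portal and listing targets uses a discovery-session, while Connect-IscsiTarget establishes a normal operational session whose session and connection objects can then be inspected.

# Register the target portal; SendTargets discovery uses a discovery-session
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"

# List the targets reported by the portal
Get-IscsiTarget

# Log in to a target; this establishes a normal operational session (hypothetical IQN)
Connect-IscsiTarget -NodeAddress "iqn.2019-04.com.example:storage.lun1" -IsPersistent $true

# Inspect the resulting session and the TCP connection(s) that belong to it
Get-IscsiSession
Get-IscsiConnection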

Q9. Explain the Fibre Channel architecture.

Fibre Channel (FC) Architectures

Fibre Channel supports three architectures: point-to-point, arbitrated loop (FC-AL), and switched fabric (FC-SW). Their configurations are explained below.

Point-to-point Architecture: In this configuration, two nodes are connected directly to each
other. This configuration provides a dedicated connection for data transmission between
nodes. However, the point-to-point configuration offers limited connectivity and scalability
and is used in a DAS environment.

FC-Arbitrated Loop: In this configuration, the devices are attached to a shared loop. Each
device contends with other devices to perform I/O operations. The devices on the loop must
“arbitrate” to gain control of the loop. At any given time, only one device can perform I/O
operations on the loop. Because each device in a loop must wait for its turn to process an I/O
request, the overall performance in FC-AL environments is low.

Further, adding or removing a device results in loop re-initialization, which can cause a momentary pause in loop traffic. As a loop configuration, FC-AL can be implemented without any interconnecting devices by directly connecting each device to its two neighbouring devices in a ring through cables. However, FC-AL implementations may also use FC hubs, through which the arbitrated loop is physically connected in a star topology.

FC-Switch: It involves a single FC switch or a network of FC switches (including FC directors) to interconnect the nodes. It is also referred to as fabric connect. A fabric is a logical space in
which all nodes communicate with one another in a network. In a fabric, the link between any
two switches is called an interswitch link (ISL). ISLs enable switches to be connected together
to form a single, larger fabric.

They enable the transfer of both storage traffic and fabric management traffic from one switch
to another. In FC-SW, nodes do not share a loop; instead, data is transferred through a
dedicated path between the nodes. Unlike a loop configuration, an FC-SW configuration
provides high scalability. The addition or removal of a node in a switched fabric is minimally
disruptive; it does not affect the ongoing traffic between other nodes.

Q10. What are the two flow control mechanisms in fiber channel technology?

Unit 3

Q1. What is cloud OS? What are the three key platforms of Microsoft Cloud OS?
Enumerate the four key goals of Microsoft Cloud OS.

What Is the Cloud OS?


Microsoft’s Cloud OS allows you to build IaaS public, private, and hybrid clouds that
have all of the generally accepted characteristics of a cloud. There are two
components to the Cloud OS, which I’ll go into further in a moment.

 Windows Server 2012: Hyper-V provides the virtualization and networking functionality for IaaS.
 System Center 2012 with Service Pack 1 (SP1): This suite of hugely powerful enterprise management and automation products builds upon Windows Server 2012 to complete the package.

Windows Server 2012 Hyper-V

The list of new features and enhancements in Windows Server 2012 Hyper-V (let
alone the rest of the OS) is staggering. A number of the features were designed for
building the compute resources of a cloud, and some even started life in Windows
Azure.

 Hyper-V Network Virtualization (HNV): In a public cloud, each tenant needs to be isolated. This has been done using VLANs, a solution that was originally
intended to be used for breaking up broadcast domains and network subnets.
VLANs are messy to own and have limited scalability (4096 VLANs). HNV, also
referred to as Software Defined Networking, allows a cloud to operate a
simple physical layer of provider (IP) addresses. Each tenant is assigned one or
more virtual networks, called a VM Network, where their services operate
using abstracted consumer (IP) addresses. VM Networks provide the tenant
isolation that is a requirement in a cloud. A side effect of the abstraction of
VM Networks is that they can have overlapping consumer addresses. This
means many tenants can continue to use the commonly used 192.168.1.0/24
or 10.0.0.0 IP ranges when they deploy or move services into a cloud. This
feature also makes it possible for companies to merge server infrastructures.
 Virtual Switch: The Virtual Network of the past has been replaced by an extensible Virtual Switch. Third-party ISVs can build (and already have built) extensions to the functionality of the Hyper-V Virtual Switch; monitoring and security extensions are already available. Companies such as Cisco and NEC have created so-called forwarding extensions to make the virtual switch manageable like a physical network appliance. This returns the complexity of virtual network management, which has been done by virtualization engineers, to the network engineers who are best skilled to do this work. Other troubleshooting features have also been added, such as logging and port mirroring to assist with network/service diagnostics without installing software in tenant VMs.
 Automation: Hyper-V underwent huge increases in scalability with the release
of WS2012, which means that automated administration is more important.
There are over 2,000 PowerShell cmdlets in WS2012, and over 160 of those are
for Hyper-V alone. This enables automated administration, rapid changes at
huge scales, and the achievement of repeatable results.
 Flexibility and mobility: Live Migration (the movement of virtual machines
and/or their files with no downtime to service availability) leads the market.
You can live migrate a virtual machine to different storage locations, between
clustered hosts, between clusters, between non-clustered hosts, and between
clustered hosts and non-clustered hosts. Other features leverage this, such as
host maintenance mode and Cluster-Aware Updating (the orchestrated
patching of clustered hosts).
 Storage: Microsoft continued their investment in traditional block storage
such as Fibre Channel SANs. Host bus adapters (HBAs) in Hyper-V hosts can be
virtualized and shared with virtual machines via virtual HBAs, and this is done
with the support of Live Migration. TRIM and Unmap features are supported
in virtual SCSI attached VHDX files. Offloaded Data Transfer (ODX) enables
near instant VHD/X (even fixed-size) creation and deployment of virtual
machine templates from one LUN to another. But the most interesting new
feature is SMB 3.0 support, where Windows Server 2012 Hyper-V supports storing
running virtual machines on a Windows Server 2012 file server. Don’t be
prejudiced: SMB 3.0 storage might not offer enhanced features like LUN
replication, but it offers the fastest and most economical storage available.
Features such as SMB Multichannel (think of it as dynamic auto-configuring
MPIO for file server traffic) and SMB Direct (leveraging Remote Direct
Memory Access or RDMA) give really fast networking over 1 GbE, 10 GbE, and faster (iWARP, RoCE, and InfiniBand) networks. Continuous availability and
scalability are offered through an active/active file server cluster for
application data called a Scale-Out File Server.
 Networking: There are a huge number of new networking elements in
Windows Server 2012 to improve performance, scalability, and manageability.
Built-in NIC teaming and quality of service (QoS) allows us to create
converged networks to simplify host networking. SR-IOV, RSS, and DVMQ
improve performance and scalability.

These are just some of the features of Windows Server 2012 that take Hyper-V
beyond virtualization. There are many more ingredients: WS2012 brings a lot to the
table on its own, but the cloud OS really comes to life when some of these features
are lit up by System Center 2012 SP1.
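To make the Automation, SMB 3.0, and Live Migration points above concrete, here is a minimal, hedged sketch using the Windows Server 2012 Hyper-V PowerShell module; the VM name, file share, and host name are hypothetical.

# Count the cmdlets exposed by the Hyper-V module (automation at scale)
(Get-Command -Module Hyper-V).Count

# Create a virtual machine whose files live on an SMB 3.0 file share
New-VM -Name "Web01" -MemoryStartupBytes 2GB -Path "\\SOFS01\VMStore"

# Live migrate the virtual machine to another host with no downtime
Move-VM -Name "Web01" -DestinationHost "HYPERV02"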

Q2. Explain the different components of Microsoft System Center 2012.

System Center 2012 SP1

Prior to the release of System Center 2012, there were numerous independent System Center products that had some level of integration. Then came System Center 2012, a suite (that some consider a single product) that is made up of a deeply integrated set of components. Used separately, each tool brings a lot to the Cloud OS. When used together, System Center 2012 SP1 can revolutionize how a business consumes (or provides) cloud computing.

 Virtual Machine Manager (VMM/SCVMM): VMM is at the heart of the Cloud OS with Hyper-V. To those coming from vSphere, VMM is like vCenter,
but it does much more. VMM provides the mechanism for deploying your
hosts, configuring host networks and enabling HNV, and deploying – not only
virtual machines, but also services made up of many elastic tiers, networks,
storage, load balancers – all from a template that is IT-designed and
managed. It should be noted that, like much of System Center, while VMM
offers the most when used with Hyper-V, it can also deploy clouds, virtual
machines, and services on vSphere and XenServer.
 App Controller: A key trait of the cloud is self-service. VMM enables self-
service with self-service roles (featuring optional quotas and resource access
control), but it requires a portal. App Controller offers a portal to your private
cloud(s) that is (are) managed by VMM. App Controller also offers an IT-
managed portal to a company account with public clouds such as Azure or
those provided by hosting companies. This allows tenants to deploy their
services onto the cloud that best suits their service. This provides the tenant
access to private, public, and hybrid clouds through a single portal.
 Data Protection Manager (DPM): The most important thing that a service
provider can do is secure a tenant’s data. Host security features such as
BitLocker provide access security. Backup provides data access security by
backing up virtual machines and data. Host-level backup policies can backup
large collections of virtual machines, optionally leveraging Volume Shadow
Copy Service (VSS) compatible LUN snapshot functionality of supporting
SANs. And in-guest policies can backup guest operating systems and/or their
data. A new feature is that data can be sent offsite automatically to Windows Azure Online Backup for automated off-site storage.
 Configuration Manager (SCCM/ConfigMgr): ConfigMgr often gets
forgotten in the Microsoft Cloud OS story – this is a mistake. ConfigMgr has
the power to deliver patching, auditing, compliance control, and change
control from a central point, and comes with an app store called the
Application Catalog. This makes ConfigMgr much more than a client endpoint
management solution.

 Endpoint Protection: The basics of security are covered too; Endpoint
Protection moved from the more-or-less defunct Forefront product group to
System Center, and provides anti-virus protection for your cloud.
 Operations Manager (SCOM/OpsMgr): Service providers must offer service
level agreements (SLAs) to their customers, internal or external alike. There is
much more to service availability than checking for a full disk, CPU utilization,
or running a ping test. OpsMgr performs automatic deep discovery of
resources with a multi-function agent. This automation is important in a cloud
– the operators of the cloud usually have no clue what tenants are deploying
and therefore traditional IT-driven agent configuration cannot work.
Operators and engineers get the deep level of monitoring that it takes to run
a reliable service. On the other hand, services can be modeled and monitored at the element level and from the client perspective (remote monitoring, even from Microsoft data centers around the world). This high-level view gives the tenant
(and business) the view that they want, and the SLA reporting that they need.
 Orchestrator (SCOrch): There are integrations throughout System Center,
such as Performance and Resource Optimization (PRO) between OpsMgr and
VMM. Integration Packs allow SCOrch to integrate all of System Center and
other products (such as vSphere, Active Directory, Exchange, and so on) in
deep ways using runbooks. A runbook is an automated implementation of
some process, such as creating a user (user account, mailbox, emailing a
password, etc). By itself, SCOrch gives operators a very powerful set of tools.
 Service Manager: This must be said: Service Manager is not a helpdesk
product, although you can run a helpdesk from it. Service Manager is much
more than a $90/year product. Think of this huge business centric product as a
means for revealing the functions of IT and revealing them through a request
mechanism in the SharePoint-based Service Manager portal. Entire solutions,
such as the Cloud Process Pack, glue together the Cloud OS. For example, a
tenant can request a service through the portal. A runbook kicks off, maybe
seeking the approval of the assigned budget owner. The components that
make up the service are deployed on Hyper-V by VMM, including the VM
networks for HNV. Monitoring of the service is enabled by deploying OpsMgr
agents (leading to automated discovery and monitoring). DPM backup
policies can be updated to include the new virtual machines and/or SQL
databases. Finally, a notification can be sent out to the tenant to let them know that
their new service is online. That’s an example of how a deeply integrated
cloud OS operates.
 Microsoft is positioning System Center 2012 as a cloud management tool for
both your "private cloud" of internal servers (Windows, Solaris, and Linux) and
for public cloud services. That "public cloud" claim is a stretch, though, as it
means only resources hosted in Microsoft's Windows Azure cloud, not in
competing public clouds.

 Also new in the 2012 version is System Center 2012's ability to manage
Android, iOS, Symbian, and Windows Phone 7 mobile devices using the same
EAS (Exchange ActiveSync) policies that Microsoft Exchange has long had to
manage the same devices. System Center also remains a desktop and virtual-
desktop management tool for Windows PCs, as in its previous version.

 Configuration Manager lets you deploy software to employees’ devices and computers, inventory their hardware, push out OS / software updates as well
as deploy OSs to bare metal computers. New in Configuration Manager 2012
is user centric management of devices and software with the concept of a
user’s primary device(s), self-service software catalog, management of mobile
devices, a vastly simplified infrastructure hierarchy, remediation of
configuration drift through settings management and Role Based Access.


 Virtual Machine Manager manages your fabric infrastructure for virtualization:
hosts, clusters and networks from the bare metal to the ultimate abstraction in
private clouds. The 2012 version has changed fundamentally from its
predecessor in the overall scope by now managing the entire fabric, creating
Hyper-V clusters from bare metal, managing resource and power optimization
natively, interfacing with Hyper-V, VMware ESX, and Citrix XenServer hosts, and orchestrating patching of clusters. There’s also a Service model that lets you deploy (and subsequently update) entire groups of related VMs for a distributed application, plus Server App-V application virtualization deployment, built-in High Availability for Virtual Machine Manager itself, storage control (iSCSI and FC SANs), and built-in self-service with Role Based Access control.
 Operations Manager keeps an eye on your servers (physical and virtual), OS
(Windows and Unix / Linux), applications (All Microsoft, many, many third
party and Java application servers) and your networks through Microsoft and
third party Management Packs that contain knowledge about each
component. New in 2012 is a simpler infrastructure with built in High
Availability, much better and easier network monitoring, enhanced
dashboards that can be published to SharePoint for wider audiences,
Application Performance Monitoring (formerly known as Avicode), Java
Enterprise Edition (JEE) monitoring and enhanced functionality and security for
*nix monitoring.
 Data Protection Manager is the best backup product for Microsoft’s
workloads, following supported processes to backup from Disk to Disk, Disk to
Tape, Disk to Disk to Tape as well as Disk to Disk to Cloud. New in 2012 is a
centralized console (through Operations Manager) that can manage hundreds
of Data Protection Manager servers (including DPM 2010), a new console UI,
scoped consoles for troubleshooting, Role Based Access, improved Item Level
Recovery (ILR) for recovering files from within VMs that have only been
backed up through a host level backup, much faster SharePoint recoveries and
certificate based communication for workgroup data sources.


 Orchestrator (formerly known as Opalis) is a newcomer in the System Center 2012 suite, but it’s very important as it integrates and links the other components through automation. Via a Visio-like interface, Activities are linked together into Runbooks that can then automate IT processes on demand; Runbooks can be started from a web interface, from Service Manager, or from any other product that talks to the new Orchestrator Web Service. The true power of Orchestrator comes in the form of Integration Packs that allow it to “talk” to many other systems, including all the components in System Center 2012 as well as earlier System Center versions and many other third-party systems, giving you a true automation engine to better provide IT as a service.
 Service Manager is a help desk system for tracking incidents, change requests, service requests and configuration management in a central Configuration Management Database (CMDB). There’s also a Data Warehouse server that manages long-term storage of data, not just from Service Manager but from all the System Center 2012 products. Data is brought into Service Manager through connectors to AD, Virtual Machine Manager, Operations Manager and Configuration Manager for consolidated reporting; templates are used to build service offerings, and self-service IT is enabled through a SharePoint-integrated portal.
 Endpoint Protection is Microsoft’s business anti-malware solution that’s
integrated into Configuration Manager. This means that the distribution of the
application is done through the normal model in Configuration Manager
(either as part of new OS deployments or as a distributed program after OS
installation) as well as all the signature updates. Reporting and policy
management is also integrated which means that there’s no separate
infrastructure to manage and no new user interfaces to learn. Endpoint
Protection is a multi-engine solution and one engine can be active and scan
files and traffic while other engines are being updated with new signatures.
 App Controller is an end-user / application owner focused web portal that lets
you see Virtual Machine Manager private clouds and services deployed in
them as well as Windows Azure public cloud services and monitor, scale out
and scale in these services.
 Unified Installer lets you install the entire System Center 2012 product, all the
components, and more importantly, all the prerequisites through one
interface. This brings the total number of screens required to click through to
install all the required software and all the components from over 400 to 16.
This installation is designed for evaluation, learning, testing and proofs of concept; it’s not designed for production deployments (unless your environment is small) and only works with Windows Server 2008 R2 SP1 and SQL Server 2008 R2 as the underlying components. A full installation requires nine separate servers / VMs, each with 2 GB of memory, so if you’re going to install this on one Hyper-V, ESX, or XenServer host it’ll need 20+ GB of memory.

Q. What is AppController? What are the different day to day tasks that can be
performed using AppController?

Microsoft System Center App Controller is a new member of the System Center
family of products. Although other products in this suite can be implemented
independently of one another (with the ability to integrate, of course), App Controller
is highly dependent on System Center Virtual Machine Manager (VMM) or Windows
Azure. In case you aren't familiar with App Controller's purpose, let me make a brief
introduction.

App Controller is a product for managing applications and services that are deployed
in private or public cloud infrastructures, mostly from the application owner's
perspective. It provides a unified self-service experience that lets you configure,
deploy, and manage virtual machines (VMs) and services. Some people mistakenly
think that App Controller is simply the replacement for the VMM Self-Service Portal.
Although App Controller does indeed serve this function, and in some way can
replace the Self-Service Portal, its focus is different. VMM Self-Service portal was
used primarily for creating and managing VMs, based on predefined templates; App
Controller also focuses on services and applications. App Controller lets users focus
on what is deployed in the VM, rather than being limited to the VM itself.

To understand this concept, you need to be familiar with System Center 2012 Virtual Machine Manager (VMM 2012). Although this article is not about VMM, I must mention some important things so you can get the full picture. VMM 2012 has significantly changed from VMM 2008
R2. VMM 2012 still manages and deploys hosts and VMs, but its main focus is on
private clouds and service templates. The end result is that an administrator or end
user can deploy a service or application to a private cloud even without knowing
exactly what lies beneath it.

I mentioned earlier that you can use App Controller to connect to both private and
public clouds. Connecting to a private cloud means establishing a connection to a
VMM 2012 Management Server. However, you can also add a Windows Azure
subscription to App Controller.

Target users for App Controller are not administrators, although some admin tasks
can be performed through the App Controller console. App Controller is intended to
be used by application or service owners: the people that deploy and manage an
application or service. (Don't confuse these folks with the end users that actually use
a service or application. End users should not be doing anything with App
Controller.) An owner might be an administrator, or an owner might be a developer
that needs a platform to test an application. The key point is self-servicing: App
Controller enables application owners to deploy new instances of a service or application without requiring them to deal with jobs such as creating VMs, Virtual
Hard Disks (VHDs), or networks or installing OSs. To achieve that level of automation,
administrators should do a lot of work in VMM.

App Controller can't create or manage building blocks for VMs or services. Nor can it
be used to create new objects from scratch (except for service instances). Anything
you work with in App Controller must first be prepared in VMM. That means creating
VM templates, guest OS profiles, hardware profiles, application profiles and
packages, and logical networks, as well as providing Sysprepped .vhd files, ISO
images, and private cloud objects. To deploy services through App Controller, a
VMM administrator must create a service template and deployment configuration.
Self-service user roles also should be created in VMM and associated with one or
more private clouds and quotas.

App Controller doesn't have its own security infrastructure: It relies completely on
security settings in VMM, so available options for a user in App Controller depend
directly on the rights and permissions that are assigned to the user in VMM.
Authentication is performed by using a web-based form, but you can opt to use
Windows Authentication in Microsoft IIS to achieve single sign-on (SSO).

Q5. Enumerate the benefits of Microsoft System Center 2012.

SC2012 ADVANTAGES

Here is a list of ten things that are new in Microsoft System Center 2012 R2:

1. Windows Azure Pack and cloud tools


In addition to the familiar SCOM, SCVMM, and other System Center components, the
2012 R2 lineup includes new tools for duty in the private data center: Service
Management Automation (SMA), Service Bus Clouds, Windows Azure Pack (WAP),
and Service Reporting. These are portal, data center automation, and multi-tenancy
features so important in the service provider and hosting scenarios. This portfolio
(which brings proven, cost-effective Windows Azure public cloud technologies to the
private cloud) extends the CloudOS and represents a strategic asset for Microsoft.

2. Server support
System Center 2012 R2 server-side components prefer the latest server operating
system (OS), Windows Server 2012 R2. The major System Center component that
requires Windows Server 2012 R2 is SCVMM. Windows Server 2012 is a second
choice, and as a third choice, Windows Server 2008 R2 will host most components as
well. Orchestrator and DPM servers can still run even on Windows Server 2008. (Users
of the SharePoint-based Service Manager Self-Service Portal (SSP) must use
Windows Server 2008 or 2008 R2.)

3. Highly available backup and recovery service


The Data Protection Manager (DPM) component of System Center 2012 R2 now
supports the use of clustered SQL Server nodes for its database. This removes the
standalone limitation that previously existed, and provides for higher reliability by
mitigating the single point of failure when a standalone SQL server is used. DPM can
now also be installed on a virtual machine, and back up to storage using .VHD (virtual hard disk) storage pool disks that are shared through the VMM library.

4. Backup Linux Virtual Machines


DPM now provides support for the protection and backup of Linux virtual machines
when they are guests in a Hyper-V environment. These backups take the form of file-
consistent snapshots. (Linux application-consistent snapshots are not yet available.)

5. Support for monitoring IPv6

In System Center 2012 R2 Operations Manager (SCOM) the Operations console can
take IPv6 addresses as input for Network Discovery and display IPv6 addresses in the
network-related views. IPv6 information was previously collected and displayed in
the SCOM console, but the only way to index network information was using IPv4.
This makes SCOM a leader in network management for enterprises using IPv6 in
production roles.

6. System Center Advisor integrated with Operations Manager
System Center Advisor is a Microsoft online service that analyzes uploaded
configuration and performance data from selected Microsoft server software. Advisor
returns information in the form of alerts. Previously these alerts were only available
for push notification in a weekly email dump from Microsoft. New is the ability to
view Advisor alerts in the Operations Manager Operations console. This makes it a lot
easier to assess the importance and relevance of Advisor alerts in the overall context
of the operations environment.

7. Service Management Automation for cloud-based workflow orchestration
You can install the Service Management Automation (SMA) web service from the System Center 2012 R2 Orchestrator setup program. SMA can be used as part of the Windows Azure Pack (WAP), or to enable you to execute runbooks and perform other automation tasks using PowerShell.
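SMA runbooks are authored as PowerShell workflows. The following is a minimal, illustrative sketch; the runbook name, parameter, and body are hypothetical, and a real runbook would call the relevant VMM or Azure cmdlets.

workflow Restart-TenantVM
{
    param(
        [string]$VMName
    )

    # InlineScript runs ordinary PowerShell statements inside the workflow
    InlineScript {
        Write-Output "Restarting $Using:VMName ..."
        # A real runbook would invoke the appropriate VMM or Azure cmdlets here
    }
}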

8. Create an extended network spanning on-premises site(s) and a service provider cloud
Leverage service provider-hosted private clouds as extensions of your network just as
Azure Virtual Network functions with the Azure public cloud. New site-to-site NVGRE
gateway network virtualization management in Virtual Machine Manager (VMM)
allows you to create your own virtualized network on top of the service provider
network infrastructure. You can utilize your own private IP numbering plan within the
virtualized network, connect it to your on-premises network, and route into the
private cloud as if it were co-located on your wide area network (WAN).

9. Manage the latest high-value Hyper-V features


You can now create generation 2 virtual machines and VM templates that are based
on these VMs. Generation 2 VMs provide new functionality such as a faster boot and boot from virtual SCSI controllers. Gain support for live cloning, where virtual
machines are exported without downtime, and for online VHDX resize, which allows
for online resizing of virtual hard disks while the disks are in use. Another nice
feature: enhanced support for Hyper-V Dynamic Memory modification, which can
now be changed and applied to a running virtual machine.

10. Out-of-band VM management, even from iOS and Android
Remote Console provides the ability to access the console of a VM in scenarios when
other remote tools (such as Remote Desktop Protocol, or RDP) are unavailable. Note
that remote console clients need a computer that supports Remote Desktop Protocol
8.1, which means Windows 8.1 and now even iOS and Android. Remote Console
works like the keyboard, video, and mouse (KVM) connection that is used by physical
computers.

Q6. What is virtual machine manager? What are the benefits of virtual machine manager to an enterprise?

System Center Virtual Machine Manager (SCVMM) forms part of Microsoft's System Center line of virtual machine management and reporting tools, alongside
previously established tools such as System Center Operations Manager and System
Center Configuration Manager. SCVMM is designed for management of large
numbers of Virtual Servers based on Microsoft Virtual Server and Hyper-V, and was
released for enterprise customers in October 2007. [1] A standalone version for small
and medium business customers is available.

System Center Virtual Machine Manager enables increased physical server utilization
by making possible simple and fast consolidation on virtual infrastructure. This is
supported by consolidation candidate identification, fast Physical-to-Virtual (P2V)
migration and intelligent workload placement based on performance data and user
defined business policies (NOTE: P2V migration capability was removed in SCVMM 2012 R2). VMM enables rapid provisioning of new virtual machines by the
administrator and end users using a self-service provisioning tool. Finally, VMM
provides the central management console to manage all the building blocks of a
virtualized data center.

Microsoft System Center 2016 Virtual Machine Manager was released in September
2016. This product enables the deployment and management of a virtualized, software-defined datacenter with a comprehensive solution for networking, storage,
computing, and security.

The latest release is System Center 2019 Virtual Machine Manager, which was
released in March 2019. It added features in the areas of Azure integration,
computing, networking, security and storage.

Q7. What are the components of AppController?

Q9. What are the components of virtual machine manager? Explain each
component.

VMM COMPONENTS

 Datacenter: Configure and manage your datacenter components as a single fabric in VMM. Datacenter components include virtualization servers, networking components, and storage resources. VMM provisions and manages the resources needed to create and deploy virtual machines and services to private clouds.
 Virtualization hosts: VMM can add, provision, and manage Hyper-V and
VMware virtualization hosts and clusters.
 Networking: Add networking resources to the VMM fabric, including network
sites defined by IP subnets, virtual LANs (VLANs), logical switches, static IP
address and MAC pools. VMM provides network virtualization, including
support for creating and managing virtual networks and network gateways.
Network virtualization allows multiple tenants to have isolated networks and
their own IP address ranges for increased privacy and security. Using
gateways, VMs on virtual networks can connect to physical networks in the
same site or in different locations.
 Storage: VMM can discover, classify, provision, allocate, and assign local and
remote storage. VMM supports block storage (fibre channel, iSCSI, and Serial
Attached SCSI (SAS) storage area networks (SANs)).
 Library resources: The VMM fabric retains a library of file-based and non-file-based resources that are used to create and deploy VMs and services on virtualization hosts. File-based resources include virtual hard disks, ISO images, and scripts. Non-file-based resources include templates and profiles that are used to standardize the creation of VMs. Library resources are accessed through library shares.
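As a rough illustration of how these fabric components are managed, the following is a hedged sketch using the VMM PowerShell module; the Run As account, host group, host name, and classification names are hypothetical.

# Look up an existing Run As account and host group (hypothetical names)
$creds = Get-SCRunAsAccount -Name "HostAdmin"
$group = Get-SCVMHostGroup -Name "Production"

# Bring a Hyper-V server under VMM management as part of the fabric
Add-SCVMHost "hyperv03.contoso.local" -RunAsAccount $creds -VMHostGroup $group

# Classify storage so placement can distinguish service levels
New-SCStorageClassification -Name "Gold" -Description "SSD in RAID 1"
New-SCStorageClassification -Name "Bronze" -Description "HDD in RAID 6"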

Q10. What is virtual machine placement? What are the different ways of deploying virtual machines?

When a virtual machine is deployed on a host, the process of selecting the most
suitable host for the virtual machine is known as virtual machine placement, or simply
placement. During placement, hosts are rated based on the virtual machine’s
hardware and resource requirements, the anticipated usage of resources, and
capabilities resulting from the specific virtualization platform. Host ratings also take
into consideration the placement goal: resource maximization on individual hosts,
load balancing among hosts, or whether the virtual machine is highly available. The
administrator selects a host for the virtual machine based on the host ratings.

Automatic Placement
In the following cases, a virtual machine is automatically placed on the most suitable
host in a host group, in a process known as automatic placement:

 In virtual machine self-service, users' virtual machines are automatically placed on the most suitable host in the host group that is used for self-service.

 Automatic placement also occurs when the drag-and-drop method is used to migrate a virtual machine to a host group in Virtual Machines view.

During automatic placement, the configuration files for the virtual machine are
moved to the volume judged most suitable on the selected host. For automatic
placement to succeed, a virtual machine path must be configured on the
recommended volume. For more information, see About Default Virtual Machine
Paths.

The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, it is usually deployed as a cluster spread across two or more virtual machines; if a Hyper-V node crashes, the application must keep working.

But VM placement also concerns storage and networking. Think about a storage solution where you have several LUNs (or Storage Spaces) that correspond to different service levels. Maybe you have a LUN with HDDs in RAID 6 and another with SSDs in RAID 1. You do not want a VM that requires intensive I/O to be placed on the HDD LUN.

Thanks to Virtual Machine Manager, we are able to deploy a VM on the right network and on the wanted storage. Moreover, the VM can be constrained to be hosted on a specific hypervisor. In this topic, we will see how to deploy this kind of solution. I assume that you have some knowledge about VMM.

About the network placement


With a proper network configuration in VMM, you are able to “place” the VM on the right network. It is not really network placement, but rather automatic configuration of the virtual NICs. You need to configure logical networks, logical switches, port profiles and so on. A sketch of such a configuration follows.
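The following is a hedged PowerShell sketch of the kind of logical network / VM network configuration described above; all names, subnets, and the host group are hypothetical, and additional flags may be needed depending on the VMM version.

# Logical network representing the physical datacenter network (hypothetical name)
$logical = New-SCLogicalNetwork -Name "Tenant-LN"

# Network site (logical network definition) scoped to a host group, with an IP subnet/VLAN
$subnet = New-SCSubnetVLan -Subnet "10.10.0.0/24" -VLanID 0
New-SCLogicalNetworkDefinition -Name "Tenant-Site" -LogicalNetwork $logical -VMHostGroup (Get-SCVMHostGroup -Name "Production") -SubnetVLan $subnet

# Isolated VM network on top of the logical network (Hyper-V Network Virtualization)
New-SCVMNetwork -Name "TenantA-VMNet" -LogicalNetwork $logical -IsolationType "WindowsNetworkVirtualization"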

Q11. Explain the three basic role types in virtual machine manager.

Roles

Q. What are the different virtual machine manager management interfaces?
Explain each in detail.

 Windows Management Instrumentation (WMI)
 PowerShell management interfaces

WMI

Windows Management Instrumentation (WMI) consists of a set of extensions to the Windows Driver Model that provides an operating system interface through which
instrumented components provide information and notification. WMI is Microsoft's
implementation of the Web-Based Enterprise Management (WBEM) and Common
Information Model (CIM) standards from the Distributed Management Task Force
(DMTF).

WMI allows scripting languages (such as VBScript or Windows PowerShell) to manage Microsoft Windows personal computers and servers, both locally and
remotely. WMI comes preinstalled in Windows 2000 and in newer Microsoft OSes. It
is available as a download for Windows NT,[1] Windows 95 and Windows 98.[2]

The purpose of WMI is to define a proprietary set of environment-independent specifications which allow management information to be shared between
management applications. WMI prescribes enterprise management standards and
related technologies for Windows that work with existing management standards,
such as Desktop Management Interface (DMI) and SNMP. WMI complements these
other standards by providing a uniform model. This model represents the managed
environment through which management data from any source can be accessed in a
common way.
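For example, WMI classes can be queried locally or remotely from PowerShell; in this sketch the remote computer name is a hypothetical placeholder.

# Operating system details of the local machine via the Win32_OperatingSystem class
Get-WmiObject -Class Win32_OperatingSystem | Select-Object Caption, Version, LastBootUpTime

# Fixed disks (DriveType=3) on a remote server
Get-WmiObject -Class Win32_LogicalDisk -ComputerName "HYPERV01" -Filter "DriveType=3" | Select-Object DeviceID, FreeSpace, Size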

PowerShell

System Center Virtual Machine Manager 2012 R2 has enormous PowerShell support.
Every task that you can perform on the SCVMM console can also be performed using
PowerShell. Also, there are some tasks in SCVMM that can only be performed using
PowerShell.

There are two ways in which you can access the PowerShell console for SCVMM:

The first technique is to launch it from the SCVMM console itself. Open the SCVMM console in administrator mode and click on the PowerShell icon in the GUI console. This will launch the PowerShell console with the virtualmachinemanager PowerShell module already imported.

You can also import the virtualmachinemanager PowerShell module using the
Import-module cmdlet. Launch the PowerShell console in an administrative mode
and type the following command:

Import-module virtualmachinemanager

This will import the cmdlets in the virtualmachinemanager module for administrative use.
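Once the module is loaded, you can connect to a VMM management server and query the environment; the server name below is a hypothetical placeholder.

Import-Module virtualmachinemanager

# Connect to the VMM management server
Get-SCVMMServer -ComputerName "vmm01.contoso.local"

# List virtual machines and where they are placed
Get-SCVirtualMachine | Select-Object Name, VMHost, Cloud, Status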

Unit 4

Q. What client management functionality is provided by Configuration Manager 2012 for managing end user devices?

Microsoft System Center Configuration Manager 2012 (SCCM 2012) is a Windows product that enables administrators to manage the deployment and security of
devices and applications across an enterprise. SCCM is part of the Microsoft System
Center 2012 systems management suite.

The SCCM integrated console enables management of Microsoft Application Virtualization (App-V), Microsoft Enterprise Desktop Virtualization (Med-V), Citrix
XenApp, Microsoft Forefront and Windows Phone applications from a single location.
System Center Configuration Manager 2012 discovers servers, desktops, tablets, and
mobile devices connected to a network through Active Directory and installs client
software on each node. It then manages application deployments and updates on a
device or group basis, allowing for automated patching with Windows Server Update
Services and policy enforcement with Network Access Protection. System Center
Endpoint Protection Manager 2012, formerly known as Forefront Endpoint
Protection, is built into System Center Configuration Manager to secure data stored
on those devices.

Several key features of System Center Configuration Manager 2012 help administrators address the bring-your-own-device (BYOD) trend in the enterprise,
including user-centric management. End users can search for applications with a self-
service Software Center and specify times when installations and upgrades take
place. IT administrators can install applications in different ways on different devices -
- for example, as a native application on a primary device or as a Remote Desktop
Services app or App-V program on a tablet. SCCM 2012 also includes role-based
access control (RBAC), which enhances system security by only showing end users
the interface elements that apply to their specific roles as defined by Active Directory.

SCCM features remote control, patch management, operating system deployment, network protection and various other services.

Note: SCCM was formerly known as SMS (Systems Management Server), originally
released in 1994. In November 2007, SMS was renamed to SCCM and is sometimes
called ConfigMgr.

Users of SCCM can integrate with Microsoft Intune, allowing them to manage computers connected to a business, or corporate, network. SCCM allows users to manage computers running Windows or macOS, servers running Linux or UNIX, and even mobile devices running the Windows, iOS, and Android operating systems.

SCCM is available from Microsoft and can be used on a limited time trial basis. When
the trial period expires, a license needs to be purchased to continue using it.

Q. What are the operation changes in configuration manager 2012?

The newest addition to the System Center Configuration Manager family is System Center 2012 Configuration Manager (SCCM 2012). There have been quite a few
improvements over its predecessor, System Center Configuration Manager 2007
(SCCM 2007). These improvements stretch from the big picture improvements in
hierarchy to the more granular improvements in custom client settings. This product
has made many leaps and bounds from its SMS days. Here are some of the big
picture changes that were made.

MOF changes

No more sms_def.mof to extend hardware inventory. It is now as simple as checking a box for the hardware inventory classes you are looking for, but testing is still very necessary to make sure you are getting specifically what you want. The ability to export and import inventory settings is now possible.

Management Point (MP) Enhancement

More than one MP per site is now supported, extending the number of clients each site can handle and helping with redundancy. Clients choose which one to use based on capability and proximity.

Distribution Point (DP) Improvements

 There is only one kind of DP now, and it can be installed on a workstation or a server, eliminating branch DPs
 DP Groups make it easy to deploy software by taking away the need to target multiple DPs per package
 Tools built in to manage pre-staged content help avoid saturating the WAN link
 The same throttling as Secondary Site servers, plus scheduling, is now available, which almost completely removes the need for Secondary Site servers
 PXE Role integrated with DP since they go hand in hand anyway
 Content validation can be scheduled to let you know when the hash fails and the content goes out of sync, helping warn you before you begin getting content mismatch errors

Addition of Software Center

This new tool improves the end user experience by giving users limited ability to manage settings for interacting with SCCM, empowering them with self-service. With the end user having the ability to set up “business hours”, this will help reduce the downtime associated with updates, software distribution, and OSD, since the end user decides when they want it done. This replaced Run Advertised Programs in SCCM 2007.

The big-picture changes are going to make a very big impact site- and hierarchy-wide. The granular changes being made at the client and feature level will surely make life easier for admins. Here are some of the more critical changes made with SCCM 2012.

Role Based Administration

Permissions can now go across sites and be made granular with security scopes. This
is a much needed improvement since it was really difficult with previous versions to
separate permissions properly.

Centrally managed client settings

Once a change is made to the default client settings, it applies hierarchy-wide, or you can be granular and mix in custom client settings for specific collections. Remember, the custom client settings take precedence over the default settings.

Application Distribution vs. Package Distribution

Packages contain source files that are run from the command line through programs. Applications now have dependency intelligence built into the agent, which will:

 Remove processing burden from the site server
 Speed up deployment because no query evaluation is required to determine whether a computer goes in or out of a collection
 Remove the need to write requirements into the installer package, because of the logic provided through criteria
 Manage superseded apps and app uninstalls

Software Updates Improvements

 Auto cleanup of expired content
 Helps with auto-approval of certain updates
 Allows deployment of Superseded updates
 Better criteria and exclusions are now possible (criteria can be saved and
reused)
 Automatic deployment rules allowing complete automation of the update
process
 Software center allows the user to set business hours so that productivity is
not affected and grouping of deadlines together for pending updates so there
are fewer reboots, improving end user experience

Q. Explain the process of managing deployment.

Configuration Manager provides you with tools and infrastructure you can use to create and
deploy operating system images for servers and virtual machines in your environment. It
does this using the same technologies as client management, including Windows Imaging
(WIM) and the Microsoft Deployment Toolkit (MDT), which offers additional customization
capabilities. These server images can also include enterprise applications, OEM device
drivers, and additional customizations needed for your environment.

Servers can be organized by group, user, or region to phase a deployment rollout. Servers
that are upgraded also have the option to migrate their user state information. Bootable
media containing operating system images can also be created, and this can be particularly
helpful in datacenters where PXE boot isn’t possible. Configuration Manager 2012 R2 can
store images as VHD files and optionally place them in a Virtual Machine Manager library
share together with App-V packages. Virtual Machine Manager can then use these library
objects to deploy preconfigured virtual machines or inject application packages into
application profiles and virtual machines.

An application in Configuration Manager contains the files and information required to
deploy a software package to a device and the information about the software that all
deployment types share. Applications are similar to packages in Configuration Manager
2007, but contain more information to support smart deployment. When you make changes
to an application, a new revision of the application is created. Previous versions of the
application are stored, and you can retrieve them at a later time.

When we deploy applications, we will come across a few of the elements of applications:
1) Application Information – This provides general information about the application, such as the name, description, version, owner and administrative categories. Configuration Manager can read this information from the application installation files if it is present.

2) Application Catalog – Provides information about how the application is displayed to users who are browsing the Application Catalog.

3) Deployment Types – A deployment type is contained within an application and contains the information that is required to install software. A deployment type also contains rules
that specify if and how the software is deployed. A single application can have multiple
deployment types that use the same technology. There are multiple deployment types
available in CM2012.

a) Windows Installer – Creates a deployment type from a Windows Installer file. Configuration Manager can retrieve information from the Windows Installer file and related
files in the same folder to automatically populate some fields of the Create Deployment Type
Wizard.

b) Microsoft Application Virtualization – Detects application information and deployment types from a Microsoft Application Virtualization 4 manifest (.xml) file.

c) Windows Mobile Cabinet – Creates a deployment type from a Windows Mobile Cabinet
(CAB) file.

d) Nokia SIS file – Creates a deployment type from a Nokia Symbian Installation Source (SIS)
file.

When you deploy an application in CM 2012 you come across two things, Deployment Action and Deployment Purpose. Both of these are really important.

– Deployment Action – The Deployment Action is either “Install” or “Uninstall”. We can install an app or uninstall an app by providing the relevant information in the deployment action.

– Deployment Purpose – This is really important; you have an option to specify the Deployment Purpose as “Available” or “Required”. If the application is deployed to a user, the user sees the published application in the Application Catalog and can request it on demand. If the application is deployed to a device, the user sees it in Software Center and can install it on demand.
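A hedged sketch of these ideas using the ConfigurationManager PowerShell module is shown below; the site drive, application details, and collection name are hypothetical, and cmdlet names vary slightly between ConfigMgr releases.

# Change to the site drive exposed by the ConfigurationManager module (hypothetical site code)
Set-Location "PS1:"

# Create the application object
New-CMApplication -Name "7-Zip" -Publisher "Igor Pavlov" -SoftwareVersion "9.20"

# Deploy it to a user collection: DeployAction = Install, DeployPurpose = Available
Start-CMApplicationDeployment -CollectionName "All Users" -Name "7-Zip" -DeployAction Install -DeployPurpose Available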

Q. How operating system can be deployed using configuration manager 2012?

The operating system deployment process

Configuration Manager provides several methods that you can use to deploy an operating
system. There are several actions that you must take regardless of the deployment method
that you use:

 Identify Windows device drivers that are required to start the boot image or install
the operating system image that you have to deploy.
 Identify the boot image that you want to use to start the destination computer.
 Use a task sequence to capture an image of the operating system that you will
deploy. Alternatively, you can use a default operating system image.
 Distribute the boot image, operating system image, and any related content to a
distribution point.
 Create a task sequence with the steps to deploy the boot image and the operating
system image.
 Deploy the task sequence to a collection of computers.
 Monitor the deployment.

Methods to deploy operating systems

There are several methods that you can use to deploy operating systems to Configuration
Manager client computers.

PXE-initiated deployments: PXE-initiated deployments let client computers request a deployment over the network. In this method of deployment, the operating system image and a Windows PE boot image are sent to a distribution point that is configured to accept PXE boot requests. For more information, see Use PXE to deploy Windows over the network with System Center Configuration Manager.

Make operating systems available in Software Center: You can deploy an operating system and make it available in the Software Center. Configuration Manager clients can initiate the operating system installation from Software Center. For more information, see Replace an existing computer and transfer settings.

Multicast deployments: Multicast deployments conserve network bandwidth by concurrently sending data to multiple clients instead of sending a copy of the data to each client over a
separate connection. In this method of deployment, the operating system image is sent to a
distribution point. This in turn deploys the image when client computers request the
deployment. For more information, see Use multicast to deploy Windows over the network.

Bootable media deployments: Bootable media deployments let you deploy the operating
system when the destination computer starts. When the destination computer starts, it
retrieves the task sequence, the operating system image, and any other required content
from the network. Because that content is not included on the media, you can update the
content without having to re-create the media. For more information, see Create bootable
media.

Stand-alone media deployments: Stand-alone media deployments let you deploy operating
systems in the following conditions:

In environments where it is not practical to copy an operating system image or other large
packages over the network.

In environments without network connectivity or low bandwidth network connectivity.

Pre-staged media deployments: Pre-staged media deployments let you deploy an operating
system to a computer that is not fully provisioned. The pre-staged media is a Windows
Imaging Format (WIM) file that can be installed on a bare-metal computer by the
manufacturer or at an enterprise staging center that is not connected to the Configuration
Manager environment.

Later in the Configuration Manager environment, the computer starts by using the boot
image provided by the media, and then connects to the site management point for available
task sequences that complete the download process. This method of deployment can reduce
network traffic because the boot image and operating system image are already on the
destination computer. You can specify applications, packages, and driver packages to include
in the pre-staged media.

Unit 5

Q1. What are the hardware, software and networking requirements of operations
manager 2012?

Q. What is the functionality provided by operations manager 2012?

Every enterprise relies on its underlying services and applications for everyday
business and user productivity. SCOM is a monitoring and reporting tool that checks
the status of various objects defined within the environment, such as server
hardware, system services, operating systems (OSes), hypervisors and applications.
Administrators set up and configure the objects. SCOM then checks the relative
health -- such as packet loss and latency issues -- of each object and alerts
administrators to potential problems. Additionally, SCOM offers possible root causes
or corrective action to assist troubleshooting procedures.

SCOM uses traffic-light color coding for object health states: green is healthy, yellow is a warning, and red is a critical issue. (Gray can denote that an item is under maintenance or that SCOM cannot connect to the object.) Administrators set thresholds for each object's health states to determine when SCOM should issue an alert. For example, an admin might mark a disk drive as green/healthy while less than 70% of its capacity is filled, yellow/warning when 70% to 80% is filled, and red/critical when more than 80% of storage capacity is filled. The admin can adjust these levels as needed.
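
The thresholds themselves are just simple comparisons against collected performance values. The following Python sketch mirrors the disk-capacity example above; it illustrates the threshold logic only and is not SCOM code (in SCOM these thresholds are defined through monitors and overrides in the console).

def disk_health_state(percent_used: float) -> str:
    """Map disk usage to the traffic-light health states described above."""
    if percent_used > 80:
        return "red"      # critical: more than 80% of capacity filled
    if percent_used >= 70:
        return "yellow"   # warning: between 70% and 80% filled
    return "green"        # healthy: less than 70% filled

def should_alert(state: str) -> bool:
    """Raise an alert only for warning or critical states."""
    return state in ("yellow", "red")

# Example: a drive at 85% usage is critical and triggers an alert.
state = disk_health_state(85.0)
assert state == "red" and should_alert(state)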

Operations Manager enables you to monitor hardware, virtual machines, operating systems, services, applications, devices, and operations for the systems in a computing environment. Operations Manager can be used to monitor environments for businesses both large and small, in datacenter environments, and for private, public, or hosted cloud solutions. Operations Manager can monitor both client and server systems, displaying health, availability, and performance information collected from these systems within a single console that you can use to detect and resolve real-time problems in your environment. Monitored systems can be running a version of Microsoft Windows, a supported version of the Linux or UNIX operating systems, or a variety of third-party infrastructure servers, such as the VMware and Citrix virtualization platforms.

The basic unit of management functionality for Operations Manager is the
management group, which consists of one or more management servers, a reporting
server, and two databases. The operational database contains the configuration data
for the management group. The data warehouse database contains the historical
monitoring and alert information collected from the systems being monitored. The
reporting server generates reports from the information stored in the data
warehouse database. The management server administers the management-group
configuration and databases, and it collects information from the systems being
monitored.
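
A rough way to picture these relationships is as a small data model. The Python sketch below is purely illustrative; the class and field names are invented for this example and are not an Operations Manager API. It shows the parts of a management group and where collected monitoring data ends up.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OperationalDatabase:
    """Holds the configuration data for the management group."""
    configuration: Dict[str, str] = field(default_factory=dict)

@dataclass
class DataWarehouse:
    """Holds historical monitoring and alert data from monitored systems."""
    history: List[dict] = field(default_factory=list)

@dataclass
class ManagementGroup:
    """One or more management servers, a reporting server, and two databases."""
    management_servers: List[str]
    reporting_server: str
    operational_db: OperationalDatabase
    data_warehouse: DataWarehouse

    def collect(self, monitored_system: str, sample: dict) -> None:
        # A management server collects data from a monitored system, and the
        # historical record lands in the data warehouse for later reporting.
        self.data_warehouse.history.append({"system": monitored_system, **sample})

group = ManagementGroup(
    management_servers=["MS01"],
    reporting_server="RS01",
    operational_db=OperationalDatabase(),
    data_warehouse=DataWarehouse(),
)
group.collect("WEB01", {"metric": "cpu_percent", "value": 42})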

Q. Explain the phases of a Service Manager project.

System Center Service Manager provides you with an integrated platform for
delivering IT as a service through automation, self-service, standardization, and
compliance. System Center Orchestrator enables you to create and manage
workflows for automating cloud and datacenter management tasks. And Windows
Azure Pack lets you implement the Windows Azure self-service experience right
inside your own datacenter using your own hardware.

IT isn't just about technology; it's also about the people and processes that use those services. Employees don't care which Microsoft Exchange Server their Microsoft Outlook client gets their mail from; they just need to be able to get their mail so that they can do their job. They also don't want to know the details of how mail servers are upgraded or patched; they just want the newest capabilities without any service interruptions. From the user's perspective, IT simply delivers a service they depend on as they perform their daily routine.

The design goal of System Center Service Manager is to provide organizations with an integrated platform for delivering IT as a Service (ITaaS) through automation, self-service, standardization, and compliance. Service Manager does this by enabling:

■ Automation of IT Service Management (ITSM) processes, such as activity management, change management, incident management, problem management, and release management as defined by industry-standard frameworks like Microsoft Operations Framework (MOF), Information Technology Infrastructure Library (ITIL), and Control Objectives for Information and Related Technology (COBIT). Service Manager provides automation interfaces you can use for automating the delivery of IT services and processes. Service Manager also provides a centralized configuration management database (CMDB), an OLAP-based data warehouse built on Microsoft SQL Server that integrates with other System Center products for centralized data storage.

■ Self-service for users by providing a self-service portal (known as the Service Manager Portal or “SMPortal”) that allows consumers of IT services to submit requests and view their status, search the knowledge base, and perform other tasks. The self-service portal is customizable and is built on top of Microsoft SharePoint. Service Manager also provides customizable dashboards and reporting based on SQL Server Reporting Services (SSRS) that can provide both real-time and historical information for the service desk.

■ A standardized experience for implementing ITSM processes according to standardized frameworks. Service Manager uses templates for defining business processes, and you can build and customize these templates to meet the specific needs of your business through GUI-based wizards, so no coding is required.

■ Compliance by logging every service management action in a database so that it can be reviewed and analyzed when needed. Compliance can be continuously evaluated in real time against a predefined baseline, and incidents can be automatically generated when deviation from the baseline is detected (a rough sketch of this idea follows the list).
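
As a loose illustration of that last point, the Python sketch below logs each action and raises an incident when an action falls outside a predefined baseline. It is a conceptual example only; the action names and baseline are invented, and Service Manager implements this through its CMDB, workflows, and connectors rather than code like this.

from datetime import datetime, timezone

# Hypothetical baseline: the set of actions considered compliant in this example.
BASELINE_ALLOWED_ACTIONS = {"create_incident", "approve_change", "close_request"}

audit_log = []   # stands in for the database that records every action
incidents = []   # stands in for automatically generated incidents

def record_action(user: str, action: str) -> None:
    """Log every service management action and flag deviations from the baseline."""
    entry = {"user": user, "action": action, "at": datetime.now(timezone.utc).isoformat()}
    audit_log.append(entry)
    if action not in BASELINE_ALLOWED_ACTIONS:
        incidents.append({"reason": "baseline deviation", **entry})

record_action("alice", "approve_change")      # compliant: logged only
record_action("bob", "delete_audit_history")  # deviation: logged and raises an incident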

Q. What is service management automation? Explain in detail.

Building automation

A workflow is a sequence of activities, actions, or tasks in Service Manager, such as the copying of documents from one system to another as part of an automated business process. In earlier versions of Service Manager, the primary tool for building workflows was the Service Manager Authoring Tool. Beginning with Service Manager 2012, however, Microsoft recommends that Orchestrator be implemented with Service Manager. Orchestrator is a workflow-management solution you can use to automate the creation, monitoring, and deployment of resources in your environment. Orchestrator 2012 and later includes an Integration Pack for common Service Manager activities, such as editing an incident and creating an object. This approach to Service Manager workflow automation requires less management overhead, provides better error handling, and requires less knowledge of Windows PowerShell scripting.
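
To make the idea of a workflow concrete, here is a minimal Python sketch of a sequence of activities run in order with basic error handling. The activity names are invented placeholders; real Service Manager and Orchestrator workflows are assembled graphically from activities in the Authoring Tool or the Runbook Designer rather than written as code.

from typing import Callable, Dict, List

# An activity takes the workflow context and returns the (possibly updated) context.
Activity = Callable[[Dict], Dict]

def copy_documents(context: Dict) -> Dict:
    # Placeholder activity: a real process would copy files between two systems.
    context["copied"] = True
    return context

def notify_requester(context: Dict) -> Dict:
    # Placeholder activity: a real process would notify the person who made the request.
    context["notified"] = True
    return context

def run_workflow(activities: List[Activity], context: Dict) -> Dict:
    """Run each activity in order; stop and record the failure if one raises."""
    for activity in activities:
        try:
            context = activity(context)
        except Exception as error:
            context["failed_activity"] = activity.__name__
            context["error"] = str(error)
            break
    return context

result = run_workflow([copy_documents, notify_requester], {"request_id": 42})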

System Center Orchestrator can be used to create and manage workflows for
automating cloud and datacenter management tasks. These tasks might include
automating the creation, configuration, management, and monitoring of IT systems;
provisioning new hardware, software, user accounts, storage, and other kinds of
resources; and automating various IT services or operational processes.
Orchestrator provides end-to-end automation, coordination, and management using
a graphical interface to connect diverse IT systems, software, processes, and
practices. Orchestrator provides tools for building, testing, and managing custom IT
solutions that can streamline cloud and datacenter management. Orchestrator also
facilitates cross-platform integration of disparate hardware and software systems in
heterogeneous environments.

Orchestrator is a key component of System Center for building private clouds because it allows you to connect complex actions or processes and make them into single tasks that can then be run automatically through scheduling or in response to service requests from customers or users. Orchestrator is also a valuable tool for automating complex, repetitive tasks in traditional datacenter environments and can simplify the management of a large, heterogeneous datacenter.
