
Enterprise Reference Architecture for Client Virtualization for HP VirtualSystem

Implementing the HP Architecture for Citrix XenDesktop on Microsoft Windows Server 2008 R2 Hyper-V
Technical white paper

Table of contents
HP and Client Virtualization ... 2
Software used for this document ... 3
The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem ... 6
Why HP, Citrix, and Microsoft for Client Virtualization ... 7
The partners ... 7
What this document produces ... 10
Citrix and Client Virtualization ... 13
Why Citrix XenDesktop 5 ... 13
Rack layout ... 15
Configuring the platform ... 18
External Insight Control ... 18
Configuring the enclosures ... 19
Creating a Virtual Connect domain with stacked enclosures ... 20
Defining profiles for hosts ... 31
Setting up management hosts ... 40
Deploying storage ... 42
Configuring the P4800 G2 SAN for BladeSystem ... 42
Configuring Hyper-V hosts to access SAN ... 49
Configuring the management group for hosts ... 50
Configuring and attaching storage for management hosts ... 52
Setting up DAS for non-persistent VM hosts ... 56
Installation of servers, physical and virtual ... 62
Setting up the infrastructure ... 63
Setting up management VMs ... 63
Understanding storage for XenDesktop ... 64
Bill of materials ... 66
Installing and configuring XenDesktop 5 ... 68
Summary ... 84
Appendix A: Storage patterning and planning for Citrix XenDesktop environments ... 87
Appendix B: Scripting the configuration of the Onboard Administrator ... 90
Appendix C: CLIQ commands for working with P4000 ... 95
For more information ... 96

HP and Client Virtualization


Planning a Microsoft Windows 7 migration? How much of your corporate data is at the airport today in a lost or stolen laptop? What is the cost per year to manage your desktops? Are you prepared to support the upcoming always-on workforce? HP Client Virtualization can help customers achieve the goals of IT and workforce support without compromising performance, operating costs, information security, or user experience. The HP Client Virtualization Reference Architectures provide:

 Simplicity: an integrated data center solution for rapid installation/startup and easy ongoing operations
   Self-contained and modular server, storage, and networking architecture; no virtualization data egresses the rack
   3x improvement in IT productivity
 Optimization: a tested solution with the right combination of compute, storage, networking, and system management tuned for Client Virtualization efficiency
   Scalable performance, enhanced security, always available
   60% less rack space compared to competitors
   95% fewer NICs, HBAs, and switches; 65% lower cost; 40% less power for LAN/SAN connections
 Flexibility: options to scale up and/or scale out to meet precise customer requirements
   A flexible solution for all workers in an organization, from task workers to PC power users
   Support for up to 7,800 Virtual Desktop Infrastructure (VDI) users and 6,400 Citrix XenApp connections in three racks, using the different desktop delivery methods offered by Citrix XenDesktop with FlexCast technology and leveraging Microsoft Hyper-V Dynamic Memory
   Unmatched price/performance with both direct attached (DAS) and SAS tiered storage in a single rack (50% cheaper than SAN)

By adopting Client Virtualization, IT can drive new levels of flexibility, security, control, cost savings, management simplification, and power reduction, as well as meet some of the business's top initiatives. The idea is simple: remove the traditional dependency between the end user and the compute device by managing the OS, applications, and data separately from the core compute resource. The results can enable new levels of IT agility, flexibility and control.

The complete reference architecture is a tool for HP VirtualSystem, a strategic portfolio of infrastructure solutions, which serves as the foundation for your virtualized workloads. Based on HP Converged Infrastructure (CI), HP VirtualSystem utilizes market-leading capabilities from Citrix and Microsoft to centralize administrative tasks, improve scalability, optimize workloads, and reduce complexity.

Purpose of this document
This document serves three primary functions:
 Give IT decision makers, architects and implementation specialists an overview of how HP, Citrix and Microsoft approach Client Virtualization and how the joint solutions they bring to market enable simpler, optimized and more flexible IT.
 Outline the steps required to configure and deploy the hardware platform in an optimized fashion to support Citrix XenDesktop as an enterprise-level desktop virtualization implementation.
 Assist IT planners and architects with understanding storage patterning and tiering within the context of the overall architecture.

This document does not discuss the in-depth implementation steps to install and configure Citrix and Microsoft software unless they directly affect the successful deployment of the overall platform.

Abbreviations and naming conventions
Table 1 lists abbreviations and names used throughout this document and their intended meaning.
Table 1. Abbreviations and names used in this document

Convention    Definition
SCVMM         System Center Virtual Machine Manager
MS RDP        Microsoft Remote Desktop Protocol
SSD           Solid State Drives
VDI           Virtual Desktop Infrastructure
OA            Onboard Administrator
LUN           Logical Unit Number
IOPs          Input and Output Operations per second
POD           The scaling unit of this reference architecture
SIM           HP Systems Insight Manager
RBSU          ROM-Based Setup Utility

Target audience
This document is targeted at IT architects and engineers who plan to implement Citrix XenDesktop on Windows Server 2008 R2 SP1 and who are interested in understanding the unique capabilities and solutions that HP, Citrix, and Microsoft bring to the Client Virtualization market, as well as how a viable, enterprise-level desktop virtualization solution is crafted. This document is one in a series of reference architecture documents available at http://www.hp.com/go/cv.

Skillset
It is expected that the installer utilizing this document will be familiar with server, networking and storage principles and have skills around Microsoft virtualization. The installer should also be familiar with HP BladeSystem. Familiarity with Client Virtualization and the various desktop and application delivery model concepts and definitions is helpful, but not necessary.

Software used for this document


This document references numerous software components. The acceptable version of each OS and the versions of software used for testing are listed in this section.

Hypervisor hosts

Component    Software description
OS           Microsoft Windows Server 2008 R2 SP1

Management server operating systems

Component                                        Software description
SCVMM                                            Microsoft Windows Server 2008
HP Systems Insight Manager (SIM) server          Microsoft Windows Server 2008
HP P4000 Central Management Console server       Microsoft Windows 7 Professional, x64
Microsoft SQL Server servers [1]                 Microsoft Windows Server 2008

Management software

Component                                        Software description
VM Management                                    System Center Virtual Machine Manager (SCVMM)
HP Systems Insight Manager                       HP Systems Insight Manager 6.0
HP P4000 SAN/iQ Centralized Management Console   HP P4000 SAN/iQ Centralized Management Console (CMC) 9.0
Microsoft SQL Server 2008                        Microsoft SQL Server 2008 Enterprise edition, x64

XenDesktop 5.0 components

Component                     Software description
Desktop Delivery Controller   XD 5.0, broker software
Setup Wizard                  Citrix XenDesktop 5 Setup Wizard
Provisioning Server           Citrix Provisioning Services 5.6 SP1
Citrix Web Interface          Citrix Web Interface 5.4
Citrix Licensing              Citrix Licensing 11.6.1

[1] It is assumed that an existing SQL Server cluster will be used to host the necessary databases.

Firmware revisions

Component                              Version
HP Onboard Administrator               3.30
HP Virtual Connect                     3.18
HP ProLiant Server System ROM          Varies by server
HP SAS Switch                          2.2.15.0
HP Integrated Lights-Out 3 (iLO 3)     1.20
HP 600 Modular Disk Array (MDS600)     2.66

End user virtual machines


Component             Software description
Operating System      Microsoft Windows 7, x64
Connection Protocol   Microsoft RDP and Citrix ICA

The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem


The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem is shown in Figure 1. As discussed in the document entitled Virtual Desktop Infrastructure for the Enterprise (http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW), HP is focusing on Client Virtualization as a whole, with VDI as a specific implementation of Client Virtualization technologies. In Figure 1, this means that the VDI session is represented by the Compute Resource.

Figure 1: The HP Converged Infrastructure for Client Virtualization Reference Architecture for Desktop Virtualization

From the endpoint device the user interfaces with, to the back-end data center servers and storage, HP has the hardware and management capabilities for a complete end-to-end infrastructure.

Client Virtualization is much more than delivering a virtual desktop to an endpoint device. There are multiple methods of delivering a desktop and applications to a user or users, and Client Virtualization is inclusive of multiple desktop and application delivery options. No one option will sufficiently satisfy a complete organization. Some users can share a virtual desktop (session virtualization), served up with Microsoft Remote Desktop Services, formerly known as Terminal Services, while other users may require a more secure and personalized desktop environment but still not require dedicated control over their desktop. Some users do not need the capability to install software or make changes to the underlying operating system, but still need a separate, isolated desktop when they log in. Nothing needs to be maintained in these desktops between login sessions; they receive a clean, fresh desktop at each login. Profile management can be used for user virtualization, allowing users to customize their environment without making changes to the desktop. The user customizations, such as drive and printer mappings, desktop layout, color schemes and preferences, are loaded into the desktop at user login. This can be accomplished using non-persistent virtual machines (VMs) in a Virtual Desktop configuration.

Along with the non-persistent users, there are persistent users. These users need to preserve operating system and application installation changes across logins, and may have requirements for administrator access rights to their virtual desktop. These users will either have dedicated VMs, one for each user, creating a large storage footprint, or they may start with the same base image file and utilize smaller differential files to maintain their personalities.

Whether supporting session-based, persistent, or non-persistent workers, virtualization of applications should be implemented for better management and performance. Using tools like Citrix XenApp, a key component of XenDesktop, and Microsoft Application Virtualization (App-V) to virtualize and deliver applications allows offloading the running of applications to dedicated servers, decreasing the load in the VMs being used to support the virtual desktops.

The Citrix approach to delivering multiple types of virtual desktops and applications, whether hosted or local, is its FlexCast delivery technology. Using Citrix XenDesktop with FlexCast on Microsoft Windows Server 2008 R2 SP1 and HP hardware offers a complete Client Virtualization solution. This document focuses on using the Hosted VDI (commonly known as VDI) delivery model of Citrix FlexCast to create the HP Enterprise Reference Architecture for Client Virtualization with Citrix XenDesktop 5 and Microsoft Windows Server 2008 R2 SP1. The document also touches on other FlexCast delivery technologies, such as Hosted Shared desktops and On-Demand applications, to show how a complete Client Virtualization solution could be built by starting with this VDI reference architecture.

Why HP, Citrix, and Microsoft for Client Virtualization


Great solutions deliver value at a level that cobbled-together components and poorly coordinated partnerships cannot approach. The HP Enterprise Reference Architecture for Client Virtualization with Citrix XenDesktop 5 on Microsoft Windows Server 2008 R2 SP1 brings together three companies that have partnered for many years and understand the value and process of partnering.

The partners
HP, Microsoft and Citrix all have long, strong relationships around partnering. The HP and Microsoft global strategic alliance is one of the longest standing alliances of its kind in the industry. The goal is helping businesses around the world improve services through the use of innovative technologies. HP and Microsoft have more than 25 years of market leadership and technical innovation.

Since 1996, HP and Citrix have shared a close, collaborative relationship, being mutual customers as well as partners. HP and Citrix work together to deliver joint engineering solutions, with a dedicated HP team supporting Citrix sales, operations, marketing, consulting and integration services, and technical development.
 HP supports Citrix StorageLink technology to simplify storage management.
 HP offers the full suite of products and services to support Citrix solutions. HP ProLiant and BladeSystem servers, HP P4000 storage technology, and HP Networking all provide solutions specifically designed to support Citrix solutions.
 HP thin clients are certified as Citrix Ready, and provide support for the latest HDX and HDX 3D protocols.
 HP management tools provide a single pane of glass for managing the reference architecture as part of an overall IT environment.
 HP is a leading global system integrator, with hundreds of Citrix-certified professionals with deep experience implementing Citrix and HP solutions.
 HP Technology Services provides strategic assessment, solution design and deployment, and migration services for Citrix products.
 HP Enterprise Services Client Virtualization Service provides application and desktop virtualization as a managed service based on XenDesktop.

For Citrix and Microsoft, 2011 marks the 22nd anniversary of the Citrix/Microsoft partnership. Citrix builds on Windows as its innovation platform and continues to expand upon the successful alignment pioneered through the collaboration between Microsoft and Citrix in the application delivery marketplace. Most recently, Citrix and Microsoft have joined forces again to deliver joint desktop virtualization offerings, a market now dominated by these joint Citrix-Microsoft solutions. In recognition of the outstanding infrastructure solutions that Citrix brings to the Microsoft marketplace, Microsoft has awarded its annual Global Infrastructure Partner of the Year award to Citrix four out of the last eight years.

More information about the HP/Microsoft partnership can be found at www.hp.com/go/microsoft. For information about the HP/Citrix partnership, go to www.hp.com/go/citrix.

HP
HP brings a self-contained and modular hardware solution providing performance within the enclosure, with integrated tools that give you enhanced visibility and failure prevention notifications. With everything in a rack, the involvement of multiple IT teams is limited or not required. The rack has redundant networks for connecting to the data center management and production links. All iSCSI network traffic, virtualized application traffic, and VM provisioning traffic stays within the rack. With the storage inside the rack, the storage team is not required to manage or be involved in the storage configuration.

When looking at networking, the HP Virtual Connect Flex-10 modules and the FlexNICs on the HP BladeSystem servers offer a tremendous reduction in external network ports. Each blade has two NIC ports, and each port represents four physical NICs with a combined bandwidth of 10Gb. The NICs can be teamed across the ports to create network redundancy, all without adding more NICs or switches to the configuration.

At the management level, HP offers the ability to manage many systems with one core infrastructure, using the Onboard Administrator of the BladeSystem enclosure to manage the blades, enclosure, SAS switches and Virtual Connect modules. Additionally, the HP Insight Control software can manage all of the servers and hardware, providing failure prevention notifications; and, to highlight the partnerships, HP Insight Control is fully integrated with Microsoft System Center management software.

In creating the reference architecture (RA), HP looked at an optimally sized and engineered set of hardware that leverages HP's Converged Infrastructure to pull everything together end-to-end, running Microsoft Windows Server 2008 R2 SP1 as a solid software base and Citrix XenDesktop 5 to give users the best possible experience.

Microsoft
Microsoft Windows Server 2008 R2 SP1
Microsoft released Windows Server 2008 R2 SP1 in early 2011 with several major enhancements, including major improvements in the performance of Hyper-V. The most important enhancement for building the reference architectures is Dynamic Memory. Dynamic Memory allows utilization of physical memory to its fullest capacity without sacrificing performance, making it possible to use all of the physical memory in the server. For this RA, all VMs had Dynamic Memory configured. The SP1 release also introduced RemoteFX, designed to bring a full Windows 7 Aero experience to the VDI user. More information about utilizing RemoteFX can be found at www.hp.com/go/cv.
RemoteFX is not supported in a Server Core installation of Windows Server 2008 R2 SP1.
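
Dynamic Memory was enabled for every VM in this reference architecture. As a minimal, read-only sketch (not an HP-documented procedure), the Hyper-V WMI provider in the root\virtualization namespace can be queried from PowerShell on a host to confirm the memory settings of its VMs; the property names below come from the v1 Hyper-V WMI API shipped with Windows Server 2008 R2 SP1 and should be verified in your environment:

   # Lists memory settings objects for VMs on this host; DynamicMemoryEnabled is True when
   # Dynamic Memory is configured. Reservation and Limit correspond approximately to the
   # startup and maximum RAM values (in MB). A VM may return more than one settings object,
   # because snapshots also carry settings data.
   Get-WmiObject -Namespace root\virtualization -Class Msvm_MemorySettingData |
       Select-Object InstanceID, DynamicMemoryEnabled, Reservation, Limit, Weight |
       Format-Table -AutoSize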

Citrix
Citrix XenDesktop
Citrix XenDesktop transforms Windows desktops into an on-demand service for any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or Windows, web and SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops and thin clients, all with a high-definition HDX user experience. FlexCast delivery technology enables IT to optimize the performance, security and cost of virtual desktops for any type of user, including task workers, mobile workers, power users and contractors. XenDesktop helps IT rapidly adapt to business initiatives, such as offshoring, M&A and branch expansion, by simplifying desktop delivery and enabling user self-service. The open, scalable and proven architecture simplifies management, support and integration.

Benefits of Citrix XenDesktop
Citrix XenDesktop key features include:
 Any device, anywhere with Receiver. Today's digital workforce demands the flexibility to work from anywhere at any time using any device they'd like. Leveraging Citrix Receiver as a lightweight universal client, XenDesktop users can access their desktop and corporate applications from the latest tablets, smartphones, PCs, Macs, or thin clients. This enables virtual workstyles, business continuity and user mobility.
 HDX user experience. XenDesktop 5 delivers an HDX user experience on any device, over any network, while using up to 90% less bandwidth compared to competing solutions. With HDX, the desktop experience rivals a local PC, even when using multimedia, real-time collaboration, USB peripherals, and 3D graphics. Integrated WAN optimization capabilities boost network efficiency and performance even over challenging, high-latency links.
 Beyond VDI with FlexCast. Different types of workers across the enterprise have varying performance and personalization requirements. Some require offline mobility of laptops, others need simplicity and standardization, while still others need high performance and a fully personalized desktop. XenDesktop can meet all these requirements in a single solution with the unique Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, hosted or local, optimized to meet the performance, security and mobility requirements of each individual user.
 Any Windows, web or SaaS app. With XenDesktop, you can provide your workforce with any type of application they need, including Windows, web and SaaS apps. For Windows apps, XenDesktop includes XenApp, the on-demand application delivery solution that enables any Windows app to be virtualized, centralized, and managed in the data center and instantly delivered as a service to users anywhere on any device. For web and SaaS apps, Receiver seamlessly integrates them into a single interface, so users only need to log on once to have secure access to all their applications.
 Open, scalable, proven. With numerous awards, industry-validated scalability and over 10,000 Citrix Ready products, XenDesktop 5 provides a powerful desktop computing infrastructure that's easier than ever to manage. The open architecture works with your existing hypervisor, storage, Microsoft, and system management infrastructures, with complete integration and automation via the comprehensive SDK.
 Single-instance management. XenDesktop enables IT to separate the device, OS, applications and user personalization and maintain single master images of each. Instead of juggling thousands of static desktop images, IT can manage and update the OS and apps once, from one location. Imagine being able to centrally upgrade the entire enterprise to Windows 7 in a weekend, instead of months. Single-instance management dramatically reduces ongoing patch and upgrade maintenance efforts, and cuts data center storage costs by up to 90 percent by eliminating redundant copies.
 Data security and access control. With XenDesktop, users can access desktops and applications from any location or device, while IT uses policies that control where data is kept. XenDesktop can prevent data from residing on endpoints, centrally controlling information in the data center. In addition, XenDesktop can ensure that any application data that must reside on the endpoint is protected with XenVault technology. Extensive access control and security policies ensure that intellectual property is protected, and regulatory compliance requirements are met.

What this document produces


Utilizing components of Citrix FlexCast delivery technology, this document will help construct a platform capable of supporting more than 1,200 task workers leveraging direct-attached storage, and more than 400 productivity workers connected to an HP P4800 G2 SAN for BladeSystem. Before diving deep into how the HP platform, when combined with Citrix XenDesktop, creates a robust, enterprise-ready VDI solution, it is first necessary to define VDI. HP's view of VDI is captured in Figure 2: an end user, from a client device, accesses a brokering mechanism, which provides a desktop over a connection protocol to the end user.

Figure 2: The HP Approach for VDI


This desktop OS is, at logon, combined with the user's personality (application and data settings) and with applications to create a runtime VDI instance, as shown in Figure 3.

Figure 3: The VDI runtime instance

The entire application stack must be housed on resilient, cost-effective and scalable infrastructure that can be managed by a minimal number of resources. There are many different terms used to define the types of VMs associated with VDI; in this document, persistent/non-persistent will be used. A persistent VM saves changes across logins. The user usually has admin rights to the VM and can make changes, add software, and customize the VM as needed. For a non-persistent VM, any changes or modifications are lost when the user logs out, and at login the user is always presented with a pristine, fresh VM. Customization of non-persistent VMs is handled by user virtualization utilizing Citrix Profile Management. The use of non-persistent VMs minimizes the amount of SAN storage required, allows for the use of Direct Attached Storage (DAS), and minimizes the amount of data required to be backed up.


Figure 4 shows the Citrix XenDesktop architecture software stack. The XenApp component of XenDesktop provides the application virtualization layer. The Desktop Delivery Controller server acts as the broker, and desktops are delivered over the network via Citrix HDX or Microsoft RDP. Citrix allows multiple models for managing user data within the overall ecosystem. HP recommends selecting a mechanism for user virtualization that minimizes the network impact from the movement of user files and settings and allows for customization of the user's environment based on a number of factors, including location, operating system and device.

Figure 4: Citrix XenDesktop on HP Converged Infrastructure


Figure 5 below shows the networks required to configure the platform for XenDesktop and where the various components reside. Note the dual-homed storage management approach, which allows all storage traffic to remain within the Virtual Connect domain, reducing complexity and involvement from multiple teams.

Figure 5: A Citrix XenDesktop specific implementation viewed from an overall network standpoint

Citrix and Client Virtualization


Why Citrix XenDesktop 5
Many IT organizations are looking for a better way to manage desktops. The continuous cycle of imaging, patching and upgrading a myriad of physical devices dispersed throughout the organization is costly, time consuming and frustrating. With the ever-increasing push to be more agile and flexible, IT organizations are increasingly looking to desktop virtualization as an alternative to traditional desktop management solutions. Citrix XenDesktop helps organizations deliver on their key priorities to simplify management, increase flexibility, improve security, and lower costs with the following market-leading technologies and features.

FlexCast delivery technology
Different types of workers across the enterprise have varying performance and personalization requirements. Some require simplicity and standardization while others need high performance or a fully personalized desktop. XenDesktop can meet all these requirements in a single solution with the unique Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, hosted or local, physical or virtual, each specifically tailored to meet the performance, security and flexibility requirements of each individual user.
 Hosted Shared Desktops provide a locked-down, streamlined and standardized environment with a core set of applications, ideally suited for task workers where personalization is not needed or allowed.
 Hosted VDI Desktops offer a personalized Windows desktop experience, typically needed by office workers, which can be securely delivered over any network to any device.
 Streamed Virtual Hard Drive (VHD) Desktops leverage the local processing power of rich clients, while providing centralized single-image management of the desktop. These types of desktops are often used in computer labs and training facilities, and when users require local processing for certain applications or peripherals.
 Local VM Desktops extend the benefits of centralized, single-instance management to mobile workers that need to use their laptops offline. When they are able to connect to a suitable network, changes to the OS, apps and user data are automatically synchronized with the data center.

Modular architecture
The Citrix XenDesktop modular architecture provides the foundation for building a scalable desktop virtualization infrastructure. It creates a single design for a data center, integrating all FlexCast models. The modular architecture consists of three main modules:
 Control Module: manages user access and virtual desktop allocation, containing components like the XenDesktop Controllers, SQL database, License Server and Web Interface.
 Desktop Modules: contain a module for each of the above-mentioned FlexCast models, managing physical endpoints, XenApp servers, hypervisor pools, physical machines, and so on.
 Imaging Module: provides the virtual desktops with the master desktop image, managing Installed Images, Provisioning Server and Machine Creation Services.
For a detailed description of the modular architecture, please refer to the Citrix XenDesktop 5 Reference Architecture document at http://support.citrix.com/article/CTX127587.

Desktop provisioning technologies
Provisioning Server (PVS)
Citrix Provisioning Server provides images to physical and virtual desktops. Desktops utilize network booting to obtain the image, and only portions of the desktop images are streamed across the network as needed. Provisioning Server does require additional server resources, which can be either physical or virtual servers depending on the capacity requirements and hardware configuration. Provisioning Server does not require the desktop to be virtualized, as it can also deliver desktop images to physical desktops.

Machine Creation Services (MCS)
Citrix Machine Creation Services was introduced in XenDesktop 5 and provides powerful provisioning and lifecycle management of hosted virtual desktop machines. As it is integrated directly into XenDesktop, no additional servers or connections are required, making MCS simple to use for even the smallest deployments. MCS delivers storage savings by building virtual machines from a common master image and only storing differences for persistent desktops. This enables administrators to apply updates to the master image once and have those changes applied to all existing virtual machines without the need to re-provision.


Machine Creation Services and Provisioning Services
The decision between utilizing Machine Creation Services desktops or Provisioning Services desktops will be based on the overall architecture. If there are plans to utilize other FlexCast options, like Streamed VHD or Hosted Shared Desktops, the Provisioning Services infrastructure will already be in place, and expanding it to include streamed desktops is inconsequential. However, if the implementation is focused on the use of Hosted VDI desktops only, then Machine Creation Services might be the better option, as it requires fewer infrastructure servers.

Rack layout
Figure 6 shows the overall rack layout.

Figure 6: Citrix XenDesktop/Microsoft Windows Server 2008 R2 SP1/HP BladeSystem RA (front and back)


Figure 7 shows the overall function of each component in the rack, leveraging different blade servers to support VDI desktops, with both DAS and the P4800 SAN supporting persistent and non-persistent VDI sessions using both PVS and MCS. The rack also includes XenApp servers for session-based applications and session-based desktops.

Figure 7: Hardware platform being created for this document


The two management blades run Windows Server 2008 R2 SP1 Hyper-V with Microsoft Failover Clustering and Cluster Shared Volumes configured. This allows for high availability and live migration of the management VMs. The following VMs run on the management servers:
 Web Interface server
 Desktop Delivery Controller (DDC)
 SCVMM administration server
These VMs reside on the P4800, presented as shared storage to the management servers, to allow for HA. Six BL490c blades are configured for MCS and persistent VDI users and use the P4800 as storage. Twelve BL460c servers supporting task workers are used for non-persistent VDI users. Two BL460c servers are configured as PVS servers; a single PVS server can handle up to 5,000 connections, but two are configured for HA. Eight additional BL460c servers are configured to run XenApp for application virtualization.
NOTE: SQL is required by multiple applications, including the DDC, PVS and SCVMM servers. It is assumed the data center has a clustered SQL configuration already running. If not, additional servers are required to support a clustered SQL implementation.
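
If you prefer to script this part of the build, the roles on the management blades and the cluster itself can be stood up from PowerShell on Windows Server 2008 R2 SP1. The following is a minimal sketch only; the cluster name, node names, IP address and disk name are placeholders rather than values from this reference architecture:

   # Run on each management blade (full installation assumed for this sketch)
   Import-Module ServerManager
   Add-WindowsFeature Hyper-V, Failover-Clustering -Restart

   # Run once, after both hosts are back up and the shared P4800 volumes are presented
   Import-Module FailoverClusters
   New-Cluster -Name MGMT-CLU -Node MGMT01,MGMT02 -StaticAddress 192.168.10.50
   (Get-Cluster).EnableSharedVolumes = "Enabled"        # enable Cluster Shared Volumes
   Add-ClusterSharedVolume -Name "Cluster Disk 1"       # disk name as listed in Failover Cluster Manager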


Figure 8 shows the cabling for the platform outlined in this document. This minimal configuration uses four cables to support all users. Redundant 10GbE is dedicated to production and management traffic via a pair of cables. The enclosures communicate via a highly available, bi-directional 10GbE network that carries migration and storage traffic without egressing the rack. This minimizes network team involvement while enhancing flexibility. The configuration can be expanded to include a 10GbE uplink to the core from each enclosure, enhancing availability.

Figure 8: Minimal total cabling within the rack required to support all users and hosts within the rack. Optionally a second set of uplinks to the core may be defined in the lower enclosure.

Configuring the platform


External Insight Control
Prior to configuring the platform, you will need to decide where to locate your Insight Control suite. As a rule, external servers are recommended in VDI environments where the desktop virtual machines will not be monitored, as external servers scale easily across numerous sets of hardware. This reduces the number of management servers required and minimizes licensing costs. For other implementation scenarios, it is recommended that the Insight Control software be installed within the enclosure on a management host. If you are using the Insight Control plug-ins for Microsoft System Center, the required software will be installed within the VC domain.


Configuring the enclosures


Once the infrastructure is physically in place, it needs to be configured to work within a VDI environment. The setup is straightforward and can be accomplished via a single web browser session. Configuration settings for the Onboard Administrator (OA) vary from customer to customer and thus are not covered in depth here. Appendix B offers a sample script to aid with OA configuration and can be used to build a script customized to your environment. There are a couple of steps that must be undertaken to ensure that your infrastructure is optimized to work with your storage in a lights-out data center. This involves setting the startup timings for the various interconnects and servers within the infrastructure stack. For both enclosures, log on to your OA. In the left column, expand Enclosure Information, then Enclosure Settings, and then click on Device Power Sequence. Ensure that the Interconnect Bays tab is highlighted. Set the power on for your SAS switches to Enabled and the delay to 180 seconds, as in Figure 9. This step should be done for any enclosure that contains SAS switches. It ensures that, in the event of a catastrophic power event, the disks in the P4800 G2 SAN or the MDS600 will have time to fully initialize before the SAS switches communicate with them.

Figure 9: Setting the SAS switch power on delay


From the same page, set the Virtual Connect Flex-10 power on to Enabled and set the delay to some point beyond the time when the SAS switches will power on. A delay of 210 seconds is generally acceptable. Click on Apply when finished to save the settings, as in Figure 10.

Figure 10: Configuring the power on sequence of the Flex-10 modules within the enclosures

Highlight the Device Bays tab by clicking on it. Set the power on timings for any P4460sb G2 blades or persistent-user hosts that are attached to storage to 240 seconds. Click on Apply when finished. Set the remaining hosts to power on at some point later than this. Before proceeding to the next section, ensure your enclosures are fully configured for your environment. The same power-on delays can also be scripted against the OA, as shown below.
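
As a companion to the sample script in Appendix B, the power-on delays can also be pushed to an enclosure from a Windows management station by sending OA CLI commands over SSH, for example with plink.exe from the PuTTY suite. This is a sketch only: the OA address, credentials and bay numbers are placeholders, and the exact OA CLI command names and arguments should be verified against Appendix B and the OA CLI user guide for your firmware revision.

   # Placeholder OA address, credentials and bay numbers; adjust for your enclosure
   $oa     = "10.0.0.1"
   $oaUser = "Administrator"
   $oaPass = "password"
   $cmds   = @(
       "SET INTERCONNECT POWERDELAY 5 180"      # SAS switch bays (example bay numbers)
       "SET INTERCONNECT POWERDELAY 6 180"
       "SET INTERCONNECT POWERDELAY 1 210"      # Virtual Connect Flex-10 bays
       "SET INTERCONNECT POWERDELAY 2 210"
       "SET SERVER POWERDELAY 1 240"            # storage blade / storage-attached host bay
   )
   $cmds | Set-Content -Encoding Ascii oacmds.txt
   # plink.exe (PuTTY) runs the command file against the OA over SSH
   plink.exe -ssh -l $oaUser -pw $oaPass -m oacmds.txt $oa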

Creating a Virtual Connect domain with stacked enclosures


This section focuses on configuring the Virtual Connect domain. The simplest way to do this is to undertake the configuration with either a minimal number of servers or no servers within the enclosures. It is recommended that you begin the configuration with the enclosure that will house the P4800 G2 SAN. To begin, launch the Virtual Connect Manager console either by clicking the link from the OA interface or by entering the IP address or hostname directly into your browser. Log onto the VC modules using the information provided on the asset tag that came with the primary module, as in Figure 11. NOTE: It is recommended that you carry out the configurations in this section without the enclosures stacked, but the stacking will need to be in place prior to completing the import of the second enclosure. Plan on connecting the modules prior to importing the second enclosure.


Figure 11: Virtual Connect Manager logon screen

Once logged in, you'll be presented with the Domain Setup Wizard. Click on Next, as in Figure 12, to proceed with setup.

Figure 12: The initial Domain Setup Wizard screen


You will be asked for the Administrator credentials for the local enclosure as in Figure 13. Enter the appropriate information and click on Next.

Figure 13: Establishing communication with the local enclosure

At the resulting screen choose to Create a new Virtual Connect domain by importing this enclosure and click on Next as in Figure 14.

Figure 14: Importing the first enclosure


When asked, click on Yes as in Figure 15 to confirm that you wish to import the enclosure.

Figure 15: Confirm importing the enclosure

You should receive a success message as in Figure 16 that highlights the successful import of the enclosure. Click on Next to proceed.

Figure 16: Enclosure import success screen


At the next screen, you will be asked to assign a name to the Virtual Connect domain. Keep scaling in mind as you do this: a moderately sized implementation with 4,000 users can potentially be housed within a single Virtual Connect domain, while very large implementations may require multiple domains. If you will scale to very large numbers, a naming convention that scales is advisable. Enter the name of the domain in the text box as in Figure 17 and then click on Next to proceed.

Figure 17: Virtual Connect domain naming screen

Configure local user accounts at the Local User Accounts screen. Ensure that you change the default Administrator password. When done with this section, click on Next as in Figure 18.

Figure 18: Configuring local user accounts within Virtual Connect Manager


This will complete the initial domain configuration. Check the box to Start the Network Setup Wizard as in Figure 19. Click Finish to start configuring the network.

Figure 19: Final screen of the initial configuration

Configuring the network The next screen to appear is the initial Network Setup Wizard screen. Click on Next to proceed as in Figure 20.

Figure 20: Initial network setup screen


At the Virtual Connect MAC Address screen, choose to use the static MAC addresses of the adapters rather than Virtual Connect assigned MAC addresses, as in Figure 21. Click on Next to proceed when done.

Figure 21: Virtual Connect MAC Address settings

Select Map VLAN Tags, as in Figure 22. You may change this setting to optimize for your environment, but this document assumes mapped tags. Click on Next when done.

Figure 22: Configuring how VLAN tags are handled


At the next screen, you will choose to create a new network connection. The connections used in this document create shared uplink sets. This initial set will be linked externally and will carry both the management and production networks. Choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 23. You will need to know your network information including VLAN numbers to complete this section.

Figure 23: Defining the network connections


Provide a name for the uplink set and grant the connection the two network ports that are cabled from the Virtual Connect modules to the network core. Add in the management and production networks as shown in Figure 24. Click on Apply to proceed.

Figure 24: Configuring external networks

You will be returned to the network setup screen. Once again, choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 25.

Figure 25: Setting up the second uplink set


This second set of uplinks will carry the migration and iSCSI networks. These networks will not egress the Virtual Connect domain. Give a name to the uplink set and define your networks along with their VLAN IDs, but do not assign any uplink ports, as in Figure 26. This ensures the traffic stays inside the domain. Click on Next to proceed.

Figure 26: Defining the internal network


When you return to the Defined Networks screen, choose No, I have defined all available networks as in Figure 27. Click on Next to continue.

Figure 27: Final defined networks screen

Click on Finish at the final wizard screen. This will take you to the home screen for the Virtual Connect Manager as in Figure 28. This completes the initial setup of the Virtual Connect domain.

Figure 28: The initial Virtual Connect Manager screen


Defining profiles for hosts


Virtual Connect allows you to build a network profile for a device bay within the enclosure. No server need be present; the profile will be assigned to any server that is placed into the bay. The profile configures the networks and the bandwidth associated with the onboard FlexNICs. The following recommendations work for all ProLiant servers, but if you are using the ProLiant BL620c G7 you will have twice as many NICs to work with (up to 16 FlexNICs) and may wish to maximize bandwidth accordingly. Table 2 reiterates the networks created for the Virtual Connect domain as well as how they are assigned for hypervisor and management hosts.
Table 2. Virtual Connect networks and path

Network      External or Internal Network   VLAN'd   Uplink Set

Production   External                       Yes      External
Migration    Internal                       No       Internal
iSCSI        Internal                       No       Internal
Management   External                       Yes      External

The device bays for the P4800 G2 SAN are simply configured with two adapters of 10GbE bandwidth each. Both adapters are assigned to the iSCSI network. HP suggests the following bandwidth allocations for each network as in Table 3.
Table 3. Bandwidth recommendations for hypervisor and management host profiles

Network      Assigned Bandwidth

Production   1.5 Gb/s
Migration    2 Gb/s
iSCSI        6 Gb/s
Management   500 Mb/s


To begin the process of defining and assigning the server profiles, click on Define and then Server Profile as in Figure 29.

Figure 29: Define a server profile via dropdown menu


You will create a single profile for the hypervisor and management hosts, as in Figure 30. This profile will be copied and assigned to each device bay. Right-click on the Ethernet Adapter Connections and choose Add Connection; you will do this eight times. Assign two adapters to each network defined in Table 2 and assign the bandwidth shown in Table 3 to those adapters. Do not assign the profile to any bay, as this will serve as your master profile. Click on Apply when finished.

Figure 30: Configuring the profile for hypervisor and management hosts


Figure 31 shows the screen as it appears once the networks are properly defined.

Figure 31: The host profile for hypervisor and management hosts


Repeat the prior process to define a second profile to be copied to any slot where a P4460sb G2 storage blade resides. Assign the full bandwidth of 10Gb to each of two adapters. Do not create any extra Ethernet connections. Click on Apply as in Figure 32 to save the profile.

Figure 32: Master profile to be copied to P4800 storage blades


Importing the second enclosure
With the configuration of the initial enclosure and VC domain complete, additional enclosures can be incorporated into the domain. In the left column, click on Domain Enclosures and then click on the Domain Enclosures tab. Click on the Find button and enter the information for your second enclosure, as in Figure 33.

Figure 33: Find enclosure screen


At the next screen, click the check box next to the second enclosure and choose the Import button as in Figure 34.

Figure 34: Import the second enclosure into the domain


You should receive an enclosure import success screen as in Figure 35 below.

Figure 35: Enclosure import success


Optionally, you may click on the Domain IP Address tab and assign a single IP address from which to manage the entire domain as in Figure 36.

Figure 36: Configuring an IP address for the domain


Be sure to back up your configuration as in Figure 37 prior to proceeding. This will provide you with a baseline to return to.

Figure 37: Domain settings backup screen

With your domain configured, copy the server profiles you created and assign them to the desired bays. Once complete, back up your entire domain configuration again so you have a baseline configuration, with profiles in place, that can be used to restore to a starting point if needed.

Setting up management hosts


If you have not done so already, now is the time to insert your hosts into their respective locations within the enclosures. Because Virtual Connect works off the concept of a device bay rather than an actual server, the hosts have not been needed up to this point.

Installing Windows Server 2008 R2 SP1
Except for the P4800 controller blades, the remaining physical servers are installed with Microsoft Windows Server 2008 R2 SP1 and the Hyper-V role is enabled. However, do not install the Hyper-V role on the PVS and XenApp servers unless you plan to virtualize them. Installing the full Windows Server or Server Core is up to the installer's preference. When running only the Hyper-V role, Hyper-V is less affected by Windows updates and patches, so the benefit of running in Server Core mode is uptime and reliability. When running in Server Core mode, the server can be managed either from the command line using sconfig.cmd or by using the Server Management tools on a full Windows installation on another server; Server Core is fully supported by SCVMM (a command-line sketch for enabling the Hyper-V role on Server Core follows below). It is recommended to install at least one of the management servers with the full Windows Server installation, as a minimum, to have the ability to run GUI-based applications from within the infrastructure. Configure the onboard disks as a RAID 10 set in the Option ROM Configuration for Arrays (ORCA) utility and ensure they are set as the boot volume in the ROM-Based Setup Utility (RBSU).
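
The sketch below assumes a Server Core installation; the DISM feature name is the standard one for Windows Server 2008 R2, and a reboot is required after the feature is enabled.

   # Enable the Hyper-V role on a Server Core installation, then reboot
   Dism /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V
   # After the reboot, sconfig.cmd can set the computer name, domain membership,
   # network addresses and remote management options
   sconfig.cmd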


NOTE: RemoteFX is only supported in a full installation of Microsoft Windows Server 2008 R2 SP1.

License each server as appropriate.

Configure the management servers
Ensure the management servers have been installed with Windows Server 2008 R2 SP1 and the Hyper-V role has been configured. These servers need to be installed first, before any other servers or storage can be configured.

Configure networks
If you followed the recommended configuration advice for the setup of the Virtual Connect domain (in the Creating a Virtual Connect domain with stacked enclosures section of this document), this section completes the networking configuration through to SCVMM. Configure your networks for each hypervisor host as in Table 4.
Table 4. Network configuration for hypervisor hosts

Network       Bandwidth    Function

Management    500 Mb/s     Management
Production    1.5 Gb/s     Production network for protocol, application and user traffic
Special_net   2 Gb/s       Migration traffic
iSCSI         6 Gb/s       iSCSI network

Once the management servers have been installed, Hyper-V configured, and the necessary networks created within Hyper-V, the next step is to create a basic VM to allow for management of the P4800. It is recommended to create a dual-homed management VM, but it is not strictly necessary. The management console for the P4800 can be installed directly onto one of the management servers if it is running the full Windows installation, but best practice suggests creating a separate VM that can be migrated between management servers.

Dual-homed management VM
At this point there is no access to the P4800, as no external networks are defined. To address this, create a management VM and assign it two Ethernet adapters, the first on the management network and the second on the iSCSI network. Install the operating system into this VM. You may choose from a variety of operating systems supported by the P4000 Centralized Management Console (CMC). For the purpose of this document, we installed a copy of Microsoft Windows 7 Professional and granted the VM a single vCPU with 1GB of RAM. This VM should be installed on the local data store. Once the VM has been installed and patched, install the P4000 CMC. You may choose to migrate the VM to a shared storage volume once the management hosts are fully configured.


Deploying storage
Configuring the P4800 G2 SAN for BladeSystem
In order to access and manage the P4800 G2 you must first set IP addresses for the P4460sb G2 storage blades. For each blade, perform the following steps to configure the initial network settings. It is assumed you will not have DHCP available on the private storage network. If you are configuring storage traffic to egress the enclosure and are running DHCP you can skip ahead.
1. Log onto the blade from the iLO. The iLO for each blade can be launched from within the Onboard Administrator as long as the OA user has appropriate permissions. If not, use the asset tag on the P4460sb to locate the iLO name and administrator password.
2. From the command line, type the word Start.
3. Choose Network TCP/IP Settings from the available options.
4. Choose a single adapter to configure.
5. At the Network Settings screen, enter the IP information for the node.

When you have completed these steps for each P4460sb, proceed to the next section.

Configuring the SAN
With P4000 SANs, there is a hierarchy of relationships between nodes and between SANs that should be understood by the installer. In order to create a P4800 G2 SAN, you will need to define the nodes, a cluster and a management group. A node in a P4000 SAN is an individual storage server, in this case an HP P4460sb G2. A cluster is a group of nodes combined to form a SAN. A management group houses one or more clusters/SANs and serves as the management point for those devices. Launch the HP P4000 Centralized Management Console by logging into your management VM and clicking the icon.


Detecting nodes
You will locate the two (2) P4460sb nodes that you just configured. Figure 38 shows the initial wizard for identifying nodes.

Figure 38: CMC find nodes wizard

Click on the Find button to proceed. You can now walk through the wizard adding nodes by IP address or finding them via mask. Once you have validated that all nodes are present in the CMC you can move on to the next section to create the management group.


Creating the management group
When maintaining an internal iSCSI network, each Virtual Connect domain must have its own CMC and management group. The management group is the highest level from which the administrator will manage and maintain the P4800 SAN. To create the first management group, click on the Management Groups, Clusters, and Volumes Wizard at the Welcome screen of the CMC, as shown in Figure 39.

Figure 39: CMC Welcome screen


Click on the Next button when the wizard starts. This will take you to the Choose a Management Group screen as in Figure 40.

Figure 40: Choose a Management Group screen

Select the New Management Group radio button and then click on the Next button. This will take you to the Management Group Name screen. Assign a name to the group and ensure all P4460sb nodes are selected prior to clicking on the Next button. Figure 41 shows the screen.

Figure 41: Name the management group and choose the nodes


It will take time for the management group creation to complete. When the wizard finishes, click on Next to continue. Figure 42 shows the resulting screen where you will be asked to add an administrative user.

Figure 42: Creating the administrative user

Enter the requested information to create the administrative user and then click on Next. You will have the opportunity to create more users in the CMC after the initial installation. At the next screen, enter an NTP server on the iSCSI network, if available, and click on Next. If one is unavailable, manually set the time; an NTP server is highly recommended. Immediately after, you will be asked to configure DNS information for email notifications. Enter the information requested and click on Next. Enter the SMTP information for email configuration and click Next. The next screen begins the process of cluster creation, described in the following section.


Create the cluster
At the Create a Cluster screen, select the radio button to choose a Standard Cluster, as in Figure 43.

Figure 43: Create a standard cluster

Click on the Next button once you are done. At the next screen, enter a Cluster name and verify all P4460sb nodes are highlighted. Click on the Next button.


At the next screen, you will be asked to assign a virtual IP address for the cluster as in Figure 44. Click on Add and enter a Virtual IP Address and Subnet Mask on the private iSCSI network. This will serve as the target address for your hypervisor side iSCSI configuration.

Figure 44: Select a virtual IP address

Click on Next when done. At the resulting screen, check the box in the lower right corner that says Skip Volume Creation and then click Finish. You will create volumes in another section. To create an Adaptive Load Balancing (ALB) bond on the first P4460sb G2 node, click the plus sign next to the node and select TCP/IP Network, as in Figure 45.

Figure 45: Highlight TCP/IP Network


Highlight both adapters in the right TCP/IP tab, right-click and select New Bond. Define the IP address of the bond. When done, it should appear as in Figure 46. Repeat this until every P4460sb node in the cluster has an ALB bond defined.

Figure 46: New ALB bond screen

Once done, close all windows. At this point, the sixty (60) day evaluation period for SAN/iQ 9.0 begins. You will need to license and register each of the P4460sb nodes within this sixty (60) day period.

Configuring Hyper-V hosts to access SAN


Preparing hosts for shared storage
The management hosts, SCVMM virtual machines, and the MCS Hyper-V hosts require access to the P4800. To access the P4800, these servers need to have the Microsoft iSCSI Initiator enabled. If running a full Windows installation, go to Start->Administrative Tools->iSCSI Initiator. If using Server Core, from the command line enter the command iscsicpl.exe. If prompted, start the Microsoft iSCSI service, as shown in Figure 47.

Figure 47: Configuration Information
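If you prefer to script this step, the iSCSI Initiator service can also be enabled from the command line. The following is a minimal PowerShell sketch for a full installation; the plain command-prompt equivalents, useful on Server Core, are shown as comments.

# Set the Microsoft iSCSI Initiator service to start automatically and start it now
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# On Server Core without PowerShell, the equivalents are:
#   sc config MSiSCSI start= auto
#   net start MSiSCSI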


Once the service has started, click on the Configuration tab. This will show the iSCSI Initiator Name associated with the server, Figure 48.

Figure 48: Configuration Information

You may choose to simplify this name by eliminating the characters after the colon (:), or leave it as is. Copy down this iSCSI name; it will be needed later. The servers and associated initiator names must be added to the P4800 cluster before the hosts can connect. Before the iSCSI Initiator can be fully configured on the servers, the associated volumes must be created and access must be granted on the P4800 management group.
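The initiator name can also be read from the command line rather than from the Configuration tab. This is a sketch only, and assumes the root\wmi iSCSI initiator WMI class available on Windows Server 2008 R2:

# Display the local iSCSI initiator node name (IQN) so it can be recorded
# and later entered into the New Server dialog in the CMC
(Get-WmiObject -Namespace root\wmi -Class MSiSCSIInitiator_MethodClass).iSCSINodeName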

Configuring the management group for hosts


Configuring the P4000 storage and hosts for iSCSI communication is a two-part process. Each hypervisor host must have its software-based iSCSI initiator enabled and pointed at the target address of the P4800, and the P4800 must have each host that will access it defined in its host list by a logical name and iSCSI initiator name.

This section covers the configuration of servers within the CMC. From the CMC virtual machine, start a session with the CMC as the administrative user. Highlight the management group you created earlier and log in if prompted. Right-click on Servers in the CMC and select New Server as in Figure 49.


Figure 49: Adding a new server from the CMC

The resulting window as in Figure 50 appears.

Figure 50: The New Server window in the CMC

Enter a name for the server (the hostname of the server works well) and a brief description of the host, and then enter the initiator node name for your host that was saved earlier. If you are using CHAP, configure it at this time. Click on OK when done. This process will need to be repeated for every host that will attach to the P4800 G2. Currently only the management servers have been installed; once the remaining servers have been installed, you will need to repeat this process for the MCS Hyper-V hosts and the SCVMM VMs.


Configuring and attaching storage for management hosts


You will need to create an initial set of volumes on the P4800 G2 SAN that will house your management VMs. A volume of 350 GB will be created to hold the management VMs, and a second volume of 300 GB will be utilized by the SCVMM server VMs as a shared cluster volume to hold ISO images, templates, VHD files, and VM information. From the CMC, verify the management servers have been properly defined in the servers section before proceeding. From the CMC, expand the cluster and click on Volumes (0) and Snapshots (0). Right click to create a New Volume as in Figure 51.

Figure 51: Volumes and snapshots in the CMC

Click the drop down labeled Tasks to be presented with options. From the drop down, select the option for New Volume. In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 350GB. This volume will house the management VMs. Figure 52 shows the window.

Figure 52: New Volume window


Once you have entered the data, click on the Advanced tab. Ensure you have selected your cluster and RAID-10 replication, and then click the radio button for Thin Provisioning.

Figure 53: The Advanced tab of the New Volume window

Click on the OK button when done. Repeat this process to create the other management volume. When all volumes have been created, return to the Servers section of the CMC under the main management group. You will initially assign the volumes you just created to the first management host. In this document this host is in device bay 1. Right click on your first management server and choose to Assign and Unassign Volumes and Snapshots as in Figure 54.

Figure 54: Server options

A window will appear with the volumes you have defined. Select the appropriate volumes to assign to the host by selecting the check boxes under the Assigned column. You will repeat these steps when you create your other volumes after the other servers have been installed.


NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your P4000 software DVD. See Appendix C of this document for samples.

Finish configuration of management servers
Once the volumes have been assigned, go back to the management server and run the iSCSI Initiator again. Click on the Discovery tab, then on the Discover Portal button (Figure 55).

Figure 55: Discovery

Enter the virtual IP address of the P4800 cluster and click OK.
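The same discovery step can be scripted with the in-box iscsicli utility. In this sketch the cluster virtual IP is assumed to be 172.16.0.130; substitute the VIP you assigned earlier.

# Add the P4800 cluster VIP as a discovery portal, then list the discovered targets
iscsicli QAddTargetPortal 172.16.0.130
iscsicli ListTargets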


Select the Targets tab. The volume(s) should now be listed, and shown as Inactive, Figure 56.

Figure 56: Targets tab

Click on the Connect button.


Check the Add this connection to the list of Favorite Targets check box. If the Multipath I/O (MPIO) feature has been installed on the server, also select the Enable multi-path check box (Figure 57).

Figure 57: Connect to Target

Adding the volume to the Favorite Targets means the server will attempt to connect to the volume when the server restarts. This process needs to be repeated for the other management server, and will need to be repeated for the SCVMM VMs and the MCS Hyper-V hosts.
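Where the in-box Microsoft DSM is used (rather than the HP P4000 DSM for MPIO), the Multipath I/O feature can be installed and associated with iSCSI devices from the command line. This is a sketch only, assuming a full installation of Windows Server 2008 R2 SP1:

Import-Module ServerManager
# Install the Multipath I/O feature
Add-WindowsFeature Multipath-IO
# Claim iSCSI-attached devices for the Microsoft DSM and reboot (-r)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"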

Setting up DAS for non-persistent VM hosts


The steps in this section will help you configure direct attached storage for the hypervisor hosts that will be part of the non-persistent users pool. This direct attached storage will house the write cache files used by PVS as well as temporary files that will be eliminated when users log off. The number of disks you assign each host will vary based on your expected I/O profile. This section assumes four 15K RPM LFF Hot Plug SAS disks for each host, creating one volume per host for a total of twelve (12) volumes. When mapping the drives to the server, follow the zoning rule as described in Figure 58 and in the documentation provided with your SAS switches.


Figure 58: SAS switch zoning rules by blade slot

Launch the Virtual SAS Manager by highlighting a SAS switch and clicking on Management Console. The following screen appears as in Figure 59. Highlight the Zone Groups and then click on Create Zone Group.

Figure 59: HP Virtual SAS Manager


You will highlight between 4 and 6 disks based on expected I/O patterns for the individual hosts. Figure 60 highlights the selection of 4 disks for the new Zone Group. Click on OK once you have selected the disks and assigned a name to the zone group.

Figure 60: Assigning drives to a Zone Group


Repeat the process until you have Zone Groups defined for your DAS hosts. Figure 61 shows four (4) Zone Groups that have been created. Click on Save Changes prior to proceeding.

Figure 61: Zone Groups created


For each device bay that you have created a Zone Group, highlight the device bay and then click Modify Zone Access as in Figure 62.

Figure 62: Modifying zone access by device bay


Select the Zone Group that belongs to the device bay by clicking on the check box next to it. Click on OK to complete the assignment as in Figure 63.

Figure 63: Assigning Zone Group to a device bay

The server can boot either from the internal drives attached to the P410i controller or from the drives just assigned through the P700m controller. To define where your server boots from, change the settings in the RBSU to boot from the P700m array if so desired. Use the ORCA for the P700m controller, not the P410i controller, to configure the disks you assigned in this section as a RAID 10 set.

For non-persistent users, a file will be created for each VM to hold the page file and the client-side write cache for the provisioning server. By default, the page file is 1.5 times memory and the client-side write cache file is a minimum of 5 GB. For a task worker this means a 1.5 GB page file plus 5 GB of write cache, so a 6.5 GB file will be created for each task worker supported on a server. From a performance and space consideration, you may want to put drives in the server, mirror those drives with RAID 10, and install Windows Server 2008 R2 SP1 to the internal drives. Whichever path is chosen, verify that the RBSU is set to boot from the correct device.


Installation of servers – physical and virtual


This section looks at the physical and virtual servers that need to be installed, what the role of each is, and memory and disk configurations associated with each server.

Physical Machines

Server Type   Role                  No. of Servers   CPUs (Sockets/Cores)   Memory   Hard Disk      NICs
BL460c G7     XenApp Servers        8                2x6                    32GB     2              4
BL490c G7     Hyper-V for MCS VMs   6                2x6                    144GB    2              4
BL460c G7     Hyper-V for PVS VMs   12               2x6                    96GB     8 (from DAS)   4
BL460c G7     Management Servers    2                2x6                    96GB     2              4
BL460c G7     PVS                   2                2x6                    96GB     2              4

Virtual Machines

Role                           vCPUs   Memory   Hard Disk   NICs
Desktop Delivery Controller    2       4GB      40GB        1
SCVMM                          2       4GB      40GB        1
Windows 7 Desktop base image   1       1.5GB    40GB        1
Web Interface                  1       1.5GB    40GB        1

Desktop Delivery Controller (DDC) VM:
- Operating System: Windows Server 2008 R2 SP1
- XenDesktop 5 Desktop Delivery Controller
- Desktop Studio Console
- System Center Virtual Machine Manager Administration Console
- Desktop Director
- Citrix Web Interface 5.4
- Citrix Licensing 11.6.1

Microsoft System Center Virtual Machine Manager (SCVMM) VM:
- Windows Server 2008 R2 SP1
- System Center Virtual Machine Manager 2008
- SQL Server 2008 (required, assumed to be installed elsewhere)

Web Interface Server VM:
- Windows Server 2008 R2 SP1
- Internet Information Services (IIS)
- Web Interface 5.4


Setting up the infrastructure


Installing Windows Server 2008 R2 SP1
Now that storage has been completed, the remaining physical servers can be installed with Windows Server 2008 R2 SP1. Once the physical servers have been installed, create the remaining server VMs on the management servers:
- Two SCVMM VMs
- Two DDC VMs
- One Web Interface VM

These VMs require a full installation of Windows Server 2008 R2 Enterprise SP1 (do not install as Server Core) due to the graphical applications that will be running on them. Dynamic Memory was not configured for these server VMs; they were each configured as stated in the table above.

For redundancy, NIC teaming can be configured for all of the physical servers; however, to build redundant networks when utilizing the Microsoft iSCSI Initiator and Hyper-V you must enable and use MPIO. To accomplish this, give both iSCSI network NICs separate IP addresses, install the MPIO DSM, and select Enable multi-path in the iSCSI Initiator.

Clustering needs to be configured on the management servers to allow for migration and HA of the management VMs. When utilizing Microsoft Failover Clustering and Hyper-V, always configure Cluster Shared Volumes.
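Cluster creation and CSV assignment can also be scripted. The following PowerShell sketch assumes two management hosts named MGMT-HOST1 and MGMT-HOST2, a placeholder cluster name and IP address, and a clustered disk name as shown in Failover Cluster Manager; adjust all of these to your environment.

Import-Module FailoverClusters

# Form the two-node management cluster (names and IP address are placeholders)
New-Cluster -Name MGMT-CLUS -Node MGMT-HOST1,MGMT-HOST2 -StaticAddress 10.10.0.50

# After enabling Cluster Shared Volumes for the cluster, add the clustered
# disk that will hold the management VM VHDs (disk name is an example)
Add-ClusterSharedVolume -Cluster MGMT-CLUS -Name "Cluster Disk 2"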

Setting up management VMs


In this document, each management server, with the exception of Microsoft SQL Server, is housed in a virtual machine. You may choose to virtualize SQL Server as well; it is not virtualized in this configuration because the assumption is that a large, redundant SQL Server entity exists on an accessible network and is sized to accommodate all of the databases required for this configuration. Keeping the management components virtualized ensures that the majority of the overall architecture is made highly available via standard Hyper-V practices and reduces the server count required to manage the overall stack.

Each of the following management VMs should be created based on the best practices of the software vendor. The vendor's installation instructions should be followed to produce an optimized VM for the particular application service. These VMs should be created on the first management host that was installed, and the VMs should be stored in the management servers volume created earlier. You may do a live migration to migrate the CMC VM volumes to this storage if you wish.

SCVMM VMs
For performance, multiple SCVMM servers are required. For best performance, the number of VMs for each SCVMM server should be in the range of 700. For this RA, two SCVMM servers are utilized, with one VM installed on each of the management servers. Store the VM VHD files on the Cluster Shared Volumes associated with the management servers; this will allow for migration and HA of the SCVMM management VMs.

Once the operating system has been installed, install SCVMM and the SCVMM administrator console into each VM. Configure each administrator console to connect to the local host as the SCVMM server. Install the Failover Clustering feature into both SCVMM VMs, then configure Cluster Shared Volumes between the two VMs to access the volume defined on the P4800. This storage will be used to hold the necessary library files for templates, images, and ISOs for installation. Documentation for installing SCVMM can be found at http://technet.microsoft.com/en-us/library/cc917964.aspx.
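The Failover Clustering feature mentioned above can be added to each SCVMM VM from PowerShell before the guest cluster is formed. A minimal sketch using the ServerManager module that ships with Windows Server 2008 R2:

Import-Module ServerManager
# Install the Failover Clustering feature inside the SCVMM VM
Add-WindowsFeature Failover-Clustering

The guest cluster between the two SCVMM VMs can then be created following the same pattern shown earlier for the management hosts.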


HP Insight Control Plugins for Microsoft System Center
HP Insight Control for Microsoft System Center provides seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center consoles. By integrating the server management features of HP ProLiant and HP BladeSystem into Microsoft System Center consoles, administrators can gain greater control of their technology environments.

Failover Manager
HP P4000 SANs utilize a Failover Manager (FOM) to ensure that data remains available within a management group in the event of a single node failure. If you want to run multi-site, then follow the recommendations of the P4000 Multi-Site HA/DR Solution Pack user guide at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf.

Understanding storage for XenDesktop


The following sections will highlight how storage is attached to support persistent and non-persistent pools. Figure 64 shows the storage connectivity of the persistent VM hosts, where each Hyper-V host supports an image as well as the differential and identity disks for the associated VMs.

Figure 64: Persistent VMs and the relationship between hosts and volumes

Figure 65 shows an overview of how storage will connect to volumes supporting non-persistent VM pools. The master image is held by the provisioning server, and can be stored on a volume on the SAN or on local drives of the server. A single PVS server can support up to 5000 connections, with approximately 400 connections per NIC. Each host is assumed to hold 95-100 task workers per the sizing numbers HP has calculated in the document entitled Virtual Desktop Infrastructure for the Enterprise at http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW. Each Page File/Write Cache is 1.5 times memory plus 5 GB.

Figure 65: Non-persistent VMs and the relationship between hosts

For persistent VMs, MCS is used. The master image will be 40 GB in size, and a replica of the master image is copied to each volume to be created. Then the differential and identity disks for the VMs associated with that volume will be created during XenDesktop configuration. Each differential file associated with the image could grow to be the same size as the master image if not managed correctly. HP recommends consulting the Citrix documentation for managing the size of differential files with MCS. For planning purposes, 20 GB will be allocated to hold a differential file and its associated identity disk for each VM created. HP suggests aligning the volumes with approximately 30-35 VMs per volume. Determining the number of volumes is simple math:

Assume a total of 420 VMs is planned; at 30 VMs per volume, 14 volumes would be sufficient (420/30). To determine the total amount of space required, the equation is:

(Number of VMs * (VM Differential Size + 300 MB)) + (Number of Volumes * (2 * Master Image Size))

The 300 MB allows space per VM for the identity disk associated with each differential disk. The Number of Volumes * (2 * Master Image Size) term allows space for a master image and a copy of the master image per volume. In a worst-case scenario, if the differential files were to grow to match the size of the master image file, the total space required would be 420 * (40 GB + 300 MB) + (14 * (2 * 40 GB)), approximately 17.9 TB of required space. For our sizing, we assumed a maximum of 20 GB per differential disk, thereby requiring 9.5 TB of disk space. To calculate each volume size, divide the total space required by the number of volumes:

For this document, our volume size is 9.5 TB / 14, approximately 680 GB for each volume. It should be noted that all P4800 SANs are ready for thin provisioning from initialization. This allows for overprovisioning of space to ensure that storage is not constrained by physical limits that don't always make sense in VDI environments. This means volumes can be sized for a 100% match between the master image and the differential files. The installer must understand that growth must be accommodated and reacted to when thin provisioning is used.
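The sizing arithmetic above can be captured in a short PowerShell sketch. The inputs mirror the planning assumptions in this section (420 VMs, 30 VMs per volume, a 40 GB master image, a 20 GB planned differential disk, and a 300 MB identity disk per VM); adjust them for your own environment.

$vmCount      = 420
$vmsPerVolume = 30
$masterGB     = 40
$diffGB       = 20        # planned differential disk size per VM
$identityGB   = 0.3       # identity disk, approximately 300 MB per VM

$volumes  = [math]::Ceiling($vmCount / $vmsPerVolume)                       # 14 volumes
$totalGB  = ($vmCount * ($diffGB + $identityGB)) + ($volumes * (2 * $masterGB))
$perVolGB = [math]::Ceiling($totalGB / $volumes)                            # ~689 GB per volume

"{0} volumes, {1:N0} GB total, approximately {2:N0} GB per volume" -f $volumes, $totalGB, $perVolGB

The result is consistent with the approximately 680 GB per volume figure used above.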

Bill of materials
This section shows the equipment needed to build the sample configuration contained in this document. It does not include clients, operating systems, alternative application virtualization technology, user virtualization, or application costs, as those are unique to each implementation. Some items related to power and overall infrastructure may need to be customized to meet customer requirements.

Core Blade Infrastructure
Quantity   Part Number   Description
2          507019-B21    HP BladeSystem c7000 Enclosure with 3 LCD
2          413379-B21    Single Phase Power Module
2          517521-B21    6x Power supply bundle
2          517520-B21    6x Active Cool Fan Bundle
2          456204-B21    c7000 Redundant Onboard Administrator
4          455880-B21    Virtual Connect Flex-10 Ethernet Module for HP BladeSystem

Rack and Power


Quantity   Part Number       Description
1          AF002A            10642 G2 (42U) Rack Cabinet - Shock Pallet*
1          AF009A            HP 10642 G2 Front Door
1          AF054A            10642 G2 (42U) Side Panels (set of two) (Graphite Metallic)
           Customer Choice   Power distribution unit
           Customer Choice   Uninterruptible power supply


Management
Quantity   Part Number   Description
2          603718-B21    HP ProLiant BL460c G7 CTO Blade
2          610859-L21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
2          610859-B21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
4          512545-B21    72GB 6G SAS 15K SFF DP HDD
24         500662-B21    HP 8GB Dual Rank x4 PC3-10600 DIMMs

Optional External Management Host (Virtualized)


Quantity   Part Number   Description
1          579240-001    HP ProLiant DL360 G7 E5640 1P

Persistent VDI Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)
Quantity   Part Number   Description
6          603719-B21    ProLiant BL490c G7 CTO Blade
6          603600-L21    HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
6          603600-B21    HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
6          572075-B21    60GB 3G SATA SFF Non-hot plug SSD
108        500662-B21    HP 8GB Dual Rank x4 PC3-10600 DIMMs

Non-persistent Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)
Quantity   Part Number   Description
12         603718-B21    ProLiant BL460c G7 CTO Blade
12         610859-L21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
12         610859-B21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
144        500662-B21    HP 8GB Dual Rank x4 PC3-10600 DIMMs
12         508226-B21    HP Smart Array P700m SAS Controller
12         452348-B21    HP Smart Array P-Series Low Profile Battery


P4800 G2 SAN for BladeSystem


Quantity   Part Number       Description
1          BV931A            HP P4800 G2 SAN Solution for BladeSystem
1          AJ865A            HP 3Gb SAS BL Switch Dual Pack
2          Customer Choice   Mini-SAS Cable
1          HF383E            P4000 Training Module

Direct Attached Storage


Quantity   Part Number       Description
1          AJ866A            HP MDS600 with two Dual Port IO Module System
2          AP763A            HP MDS600 Dual I/O Module Option Kit
1          AF502B            C-13 Offset Power Cord Kit
48         516816-B21        HP 450GB SAS 3.5 15K DP HDD
1          AJ865A            HP 3Gb SAS BL Switch Dual Pack
4          Customer Choice   Mini-SAS Cable

NOTE: The drive count should reflect the number of drives (four or six) planned for each of the direct attached storage hosts within the reference architecture.

User Data Storage
Quantity   Part Number   Description
2          BV871A        HP X3800 G2 Gateway
1          BQ890A        HP P4500 G2 120TB MDL SAS Scalable Capacity SAN Solution

HP Software
Quantity   Part Number   Description
2          TC277AAE      HP Insight Control for BladeSystem 16 Server license
1          436222-B21    HP Insight Software Media Kit
VAR                      HP Client Automation, Enterprise

Installing and configuring XenDesktop 5


Installing XenDesktop 5 will require installing the Desktop Delivery Controller (DDC), Provisioning Server (PVS), and XenApp. The process for installing XenDesktop 5 can also be found at http://edocs.citrix.com.


The first step is to install the Desktop Delivery Controller. You must install the SCVMM Administrator Console prior to installing the DDC. For more information visit: http://technet.microsoft.com/en-us/library/bb740758.aspx.

When installing the DDC, select all items on the Components to Install screen. The license server for the Citrix configuration will run on the DDC. Changes to the firewall may be required depending on your firewall settings; consult the XenDesktop 5 Product Documentation (http://edocs.citrix.com) for firewall recommendations. Once the DDC has finished installation, go to the Start menu and launch the Desktop Studio console to configure the desktop deployment. This will define the XenDesktop site, licensing and database options, as well as define Microsoft Virtualization as the host type. When prompted, specify the SCVMM server address and credentials to authenticate to the SCVMM server.

For Citrix Licensing configuration visit: http://support.citrix.com/proddocs/index.jsp?topic=/licensing/lic-licensing-115.html

For more information about using an existing SQL database visit: http://support.citrix.com/article/CTX128008

Once the first DDC VM has been installed, repeat the process on the second DDC VM, joining it to the farm created when the first DDC was configured. Since the RA is using both PVS and MCS (Machine Creation Services) to deploy the VMs, multiple catalogs will need to be configured with the DDC. For PVS, no clustering is required because DAS storage will be used to support the write cache files; this process is defined later in this document. For MCS, a Microsoft Cluster Shared Volume is required to support each master image that will be created. This will determine the number of DDC groups required to support MCS.

XenApp
When installing XenApp there are two possibilities. The first is to install XenApp on bare-metal servers, requiring eight servers running XenApp to support the RA. The XenApp servers can also be virtualized if so desired. To support a load similar to a single bare-metal server using virtualization requires four (4) VMs, each with 4 vCPUs and 8 GB of memory. For this document, the XenApp servers were installed bare metal, with no virtualization.

NOTE: If you choose to run XenApp virtualized, additional volumes will need to be created and configured on the P4800.

For instructions for implementing XenApp, refer to the Citrix eDocs website: http://support.citrix.com/proddocs/topic/xenapp6-w2k8/ps-install-config-wrapper.html. Only the XenApp server role was installed using the Server Role Administrator in this exercise. In addition to the installation, the following applications were installed and published as hosted applications:
- Microsoft Word 2007
- Microsoft Outlook 2007
- Microsoft Excel 2007
- Microsoft PowerPoint 2007
- Microsoft Visio 2007

For this exercise, virtual desktops leveraged applications by using the Citrix Online Plugin via XenApp. To allow for this functionality, additional configurations were made to the Web Interface.
1. Open the Desktop Studio Console on the Desktop Delivery Controller.


2. Expand the Access folder, expand Web Interface, right click on XenApp Services Site and click on Manage Server Farms.
3. Remove any currently configured farms, and then click Add.
4. Specify the name of the farm and add all respective XenApp servers into the farm (Figure 66), then click OK.

Figure 66: Selecting XenApp servers

5. Verify all settings are correct and click OK (Figure 67).

Figure 67: Setting Verification


6. Right click on the site and choose Configure Authentication Methods.
7. Verify that Pass-through and Prompt are enabled (Figure 68).
8. Set Pass-through as the default authentication method and click OK.

Figure 68: Non-persistent VMs and the relationship between hosts

9. Desktops will now pass through credentials to XenApp to enumerate applications within the desktop session.

Windows 7 Base Image
Once the DDC has been installed, create the base image file(s) that will be used for provisioning. The same base image file can be used for both PVS and MCS. The Windows 7 Optimization Guide from Citrix was used to optimize the desktop delivery. In addition, the following steps were done to improve performance:
1. Create a virtual machine in SCVMM or Hyper-V with the following:
   - Desired HD size, normally 40GB for Windows 7
   - 1.5GB RAM
   - Legacy Hyper-V NIC (required to network boot VMs)
2. Boot the VM with Microsoft Windows 7 media.
3. Install Windows 7.
4. Verify Hyper-V Integration Services have been installed.
5. Add the machine to the domain.

When creating the base image, a value of 1.5 GB was used for memory. For the VDI VMs in the template, Dynamic Memory was configured with a minimum of 512 MB and the maximum dependent on the type of work: 1 GB for task workers, 1.5 GB for productivity users, and 2.0 GB for knowledge users. Once the Windows 7 VM has been created, XenApp support with the Citrix Online Plugin is installed, along with any additional software desired in the image.

As a final step, install the Virtual Desktop Agent into the VM. To install the Virtual Desktop Agent, attach the XenDesktop5.iso to the Windows 7 VM using SCVMM. Once the application has started, select Install Virtual Desktop Agent, then select Advanced Install. In the Advanced Install, select Virtual Desktop Agent and Support for XenApp Application Delivery, and specify the URL of the XenApp Services Site. Manually specify the DDC controller location, and allow for XenDesktop Performance Optimizations, User Desktop shadowing, and Real Time Monitoring. Once the installation has completed, the VM can be shut down. Two copies need to be made: one to be the PVS master image and one to be the MCS master image.

Machine Creation Services
Machine Creation Services will be used to create the persistent VMs on the SAN storage. All of the servers that will be supporting the persistent users should be installed with Windows Server 2008 R2 SP1 with the Hyper-V role enabled, and configured into a cluster using Cluster Shared Volumes for each volume that was created on the P4800 to support the persistent VMs. The VM to be used as the MCS master image should have its properties modified to enable Dynamic Memory and to set the maximum memory limit for the VM as defined by the user type. Task workers are assigned 1 GB, productivity workers get 1.5 GB, and knowledge workers normally get 2 GB.

From the Desktop Studio console on the DDC, right click on Machines and select Create Catalog. When prompted for the host name, specify the name of the Hyper-V cluster. For the Machine Type, specify Dedicated. When prompted, specify the Windows 7 master image created earlier to be used by MCS. Then specify the number of virtual machines to create, the number of vCPUs and memory to allocate to each VM, and select Create New Accounts for the Active Directory computer accounts. You will also need to specify the OU to store the computer names, and a naming scheme for the computer names. NOTE: for each # used in the Account Naming Scheme, another digit is added to the name. Finally, specify the administrators that can manage this catalog, and select Finish to start the process.

Once the desktop catalog has been created you must create a desktop group to assign users to desktops. In the Desktop Studio Console, right click on Assignments and choose Create Desktop Group. Select the catalog of machines created in the previous step, then specify the Active Directory user group that will have access to these VMs and the number of virtual desktops a user can launch within this group at any one time. You will also need to specify the desktop administrators that can manage the group, and the Desktop Display Name and Desktop Group name. Once complete, select Finish to create the group. The VMs can now be booted, and the users can log in using the DDC to access their persistent desktops.

Provisioning Services (PVS)
PVS will be used to support the non-persistent workers. Local storage will be used to keep the write cache files associated with the PVS image for the users. A write-cache file gets re-created on every login, so no data is saved between logins. If not configured correctly, or if an error occurs, the write cache for all of the VMs will default back to the location of the PVS image file. This document assumes Windows Server 2008 R2 SP1 has been installed onto the two physical servers in the RA to support PVS. Prior to installing PVS on the servers, the SCVMM Administration Console must be installed.
In addition to the steps below, the following optimizations were performed for this exercise:
- 15 threads per port were configured for the Provisioning Server
- TCP Large Send Offload was disabled on the Provisioning Server (http://support.citrix.com/article/CTX117374)

PVS will be installed and configured on the first server; the second server will be configured to join the existing farm defined during configuration of the first server. This installation assumes one image file for all non-persistent users, and the image file will be maintained on each physical PVS server. When installing and configuring the first server you will need to specify:
- Where DHCP services run
- That the PXE server runs on this computer
- The name of the new Farm
- The SQL server and instance name (it is assumed SQL is running in the data center)
- The database name, farm name, and farm administrators
- The store path for the vDisk images
- The licensing server (currently the DDC is the license server)
- The services account for streaming and SOAP services; this account must have access to the vDisk location to be able to stream
- Whether Active Directory computer account passwords should be updated
- The NICs and ports for network communication (defaults were used)
- That the TFTP service provides the ARDBP32.BIN file at boot time
- That the PVS server is listed in the Stream Servers Boot List

Once everything is ready, select Finish to install PVS. For the second PVS server, add it to the existing farm just created.

Settings in the DHCP server scope must be configured for PXE to work correctly. Configure options 66 and 67 on the DHCP server scope that the desktops will boot from. Option 66 should contain one of the Provisioning Server TFTP IP addresses. Option 67 should be configured for the ARDBP32.BIN bootstrap file (a sample netsh command is shown at the end of this section). As final steps, change the default threads per port from 8 to 15 for all NICs being used on the PVS servers, then install the PVS 5.6 XenDesktop Setup Wizard hotfix to enable quick provisioning and deployment of VMs: http://support.citrix.com/article/CTX129381.

Once PVS has been installed and configured, it is necessary to create the vDisk image that will be the master image for the VDI VMs. On the PVS server, run the Provisioning Server Management Console from the Start menu. Log in and specify one of the PVS servers as the host to connect to. Once logged in, create a vDisk; for optimum performance, specify a fixed vDisk equal to or larger than the Windows 7 base image disk. Once the disk is created, verify it is in Private Image Mode by right clicking on the vDisk and selecting File Properties. Select the Mode tab to verify Private Image Mode. On the Options tab, verify Active Directory machine account password management is selected. Select OK to exit.

Right click on your Collection and specify Create Device. Specify the name and MAC address of the base image VM to be captured. For the Boot from option, specify Hard Disk. Under the vDisks tab, select the vDisk created earlier.

In SCVMM, boot the PVS base image VM and install the Provisioning Services Target Device software. To accomplish this, attach the PVS 5.6 SP1 ISO to the VM as a DVD device. From within the VM, run the software and select Target Device configuration. When completed, shut down the VM. In SCVMM, modify the Windows 7 base image VM hardware configuration to boot from the network. To boot from the network, the network adapter must be the legacy adapter. Boot the base image VM. Once booted, you should now see the vDisk in the task bar for the VM, and it should be active.

From the Start Menu, launch XenConvert. Specify the From to be This Machine and the To to be Provisioning Services vDisk. Verify the size and capacity of the source and destination drives, select the AutoFit feature to ensure the target device software will use the correct vDisk size, and click Next.
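As referenced above, DHCP scope options 66 and 67 can be set from the command line on the DHCP server. This is a hypothetical example; the scope (10.0.0.0), the PVS TFTP server address (10.0.0.21), and the bootstrap file location should be adjusted to your environment.

netsh dhcp server scope 10.0.0.0 set optionvalue 066 STRING 10.0.0.21
netsh dhcp server scope 10.0.0.0 set optionvalue 067 STRING ARDBP32.BIN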


Click Optimize for Provisioning Service; it is recommended to accept all specified features. When ready, click Convert. The conversion process will start and can take a while to complete. Figure 69 shows screen shots of the conversion process.

Figure 69: Citrix XenConvert Screen Shots


Once the conversion is complete, shut down the VM. The next steps create the write cache file associated with the PVS image in Standard Image mode and store the cache on the local storage. This means creating a local hard disk to hold the VM page file and the write cache for PVS. When the vDisk is in Standard Image mode and the VM page file is on a writeable disk, the PVS write cache (if so specified) will be created on the same hard disk that contains the page file. The VM page file will be 1.5 times the memory assigned to the VM, and the file to be created needs to be the VM page file size plus 5 GB for the write cache. So a VM with 2 GB of memory would require a page file size of 3 GB, and the vDisk being created would need to be 8 GB in size.

Once the Windows 7 VM has shut down, go to the PVS server and convert the Windows 7 device to boot from the vDisk instead of from the hard disk; then, using SCVMM, remove the hard disk from the VM but do not delete it.


In SCVMM, on the same VM, create a new 8GB fixed-size hard disk (.vhd) using the IDE controller and attach it to the Windows 7 desktop VM. This drive will be used for the Provisioning Server write cache information and is known as the write cache drive. It should be created on the DAS or local storage associated with the Hyper-V host. The final configuration is shown in Figure 70.

Figure 70: Write-cache disk


Once the additional disk is created, boot the VM and verify it sees the additional drive. Format the drive with NTFS (a quick format is sufficient). Once formatted, configure the page file for the VM to reside locally by moving the paging file from C: to your write cache drive and removing the paging file completely from the C: drive. Set the paging file size to 1.5 x the device RAM. Reboot when prompted.

Figure 71: Move Page File from C to another drive
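If you prefer to script the page file move shown in Figure 71 rather than use the System Properties dialogs, the change can be made through WMI from PowerShell. This is a sketch only; the write cache drive letter (D:) and the 2304 MB size (1.5 x the 1.5 GB of RAM assigned to the base image VM) are assumptions to adjust for your image.

# Disable automatic page file management
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $false
$cs.Put() | Out-Null

# Remove the existing page file on C:, if one is defined
Get-WmiObject Win32_PageFileSetting | Where-Object { $_.Name -like "C:*" } | ForEach-Object { $_.Delete() }

# Create a fixed-size page file on the write cache drive, then reboot for the change to take effect
Set-WmiInstance -Class Win32_PageFileSetting -Arguments @{ Name = "D:\pagefile.sys"; InitialSize = 2304; MaximumSize = 2304 } | Out-Null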

After the reboot, verify the paging file has been removed from C: and placed on the write cache drive. Once verified, shut down the VM. The next step converts the VM into a template that can be used to deploy VMs. Before converting the VM into a template, edit the properties for the VM and configure Dynamic Memory as done before. These settings will be carried into the template and made available to all of the new VMs created from this template.


To convert this virtual machine to a template using SCVMM, right click on the VM name and select New template.

Figure 72: Creating a template

Choose Customization not required for Guest operating system profile during template creation.

Figure 73: Configuring a template

This will convert the VM into a template that can be used for deploying VMs with PVS. To deploy this image to multiple desktops, you must first place the vDisk in Standard Image Mode on the PVS server. This allows a one-to-many relationship between the vDisk and the VMs.


On the PVS server, navigate to your vDisk store, right click on your vDisk, and choose File Properties. Select the Mode tab and choose Standard Image (multi-device, write-cache enabled) for the Access Mode, and choose Cache on device's HD for the Cache Type.

Figure 74: Setting Standard Image Mode


To deploy the desktops, you will need to launch the XenDesktop Wizard from the Provisioning Services console. Within the Provisioning Services Console right click on the site name and select XenDesktop Setup Wizard.

Figure 75: Running XenDesktop Setup Wizard

When prompted, specify the host name of the DDC (XenDesktop Controller) and specify where the VM template is stored. You can select multiple hosts to deploy VMs to. Once you have authenticated to the host, choose the template and click OK. When prompted, specify a collection name for the VMs and select the Windows 7 vDisk assigned to the virtual machines. Specify the number of virtual machines to create, the vCPUs desired, and the memory for the VMs; the default machine settings are recommended since they are pulled from the template. Select Create new accounts under Active Directory computer accounts to have the XenDesktop Setup Wizard create AD computer accounts automatically. Specify the OU where the AD computer accounts need to be created and provide a machine account naming scheme (each # used adds another digit to the VM name). Specify the name of the desktop catalog that will be visible in the Desktop Delivery Controller and the appropriate credentials to authenticate with the Desktop Delivery Controller. At the Confirm configuration settings screen, click Finish to start building the VMs. Once complete, the VMs will be ready to use. Figure 76 has screen shots of the steps.


Figure 76: Running XenDesktop Setup Wizard



Citrix Profile Management
Citrix Profile Management allows users to access non-persistent desktops in a persistent manner by retaining user settings and data between sessions. For persistent desktops, it also ensures that the user settings and data are retained in the event of corruption within the user's virtual desktop. In order to leverage Citrix Profile Management, a file share must be created and the Citrix User Profile Management Agent must be installed into the virtual desktop. When leveraging applications or remote desktops via XenApp, the Citrix Profile Management Agent can be installed on the XenApp servers to retain user settings and data. To manage and control the behavior of the Citrix Profile Management Agent on the desktops and XenApp servers, a Group Policy Object (GPO) included with the Citrix Profile Management product is leveraged. The following settings were configured in the GPO to control the agent:

Figure 77: GPO configuration


Summary
As stated earlier, the goal of this document was to create a self-contained reference architecture on HP ProLiant servers and storage using Citrix XenDesktop 5 on Microsoft Windows Server 2008 R2 SP1, supporting 1600 VDI users: 400 persistent and 1200 non-persistent users. In summary, the advantages of this RA include:
- DAS and SAN storage: HP hardware allows configuration of both DAS and SAN hardware in the same RA, and Citrix XenDesktop 5 and Microsoft Hyper-V can take advantage of both. Running users on DAS storage reduces the storage cost per user by more than 50%.
- Offload of application execution to XenApp servers, lessening the workload and IOPs for the VDI VMs and allowing support of 10-15% more VMs per server.
- Microsoft Windows Server 2008 R2 SP1 Dynamic Memory allows utilization of all of the physical memory for VDI VMs and makes the most of the hardware configurations.
- A complete, self-contained POD with integrated management. All application, boot, login, migration, and execution traffic related to the VDI infrastructure stays within the RA rack. The only network wiring required to leave the rack is the redundant connections to the corporate production network and the data center management network.

To extend this further, HP has multiple end-point devices, all supporting the Citrix Receiver technology. From the end-point device, to the networking, to the data center, HP can meet the requirements to implement the VDI RA.

This RA is the basis for a full Client Virtualization solution leveraging Citrix FlexCast and the power of Microsoft Hyper-V on HP ProLiant servers and storage. In this RA, XenApp servers were shown to offload application execution from the VMs, providing additional headroom on each Hyper-V host server to run more VMs and take advantage of Hyper-V Dynamic Memory. This was done to highlight the ease of adding XenApp and Remote Sessions servers to the VDI RA to extend the capability of the solution, and to leverage the full capabilities of Citrix FlexCast while maintaining the same management infrastructure.


Expanding the RA
One of the benefits of the POD approach to the RA is the ease of expanding and growing it. Figure 78 looks at extending the RA VDI components to a multi-rack solution.

Figure 78: Client Virtualization Solution

The racks in Figure 78 consist of two Virtual Connect domains. The left-most rack is a P4800 domain using a six node P4800 configuration, and will support 1200 persistent productivity workers with 16 BL490c G7 servers, 1200 XenApp connections using six (6) BL460c G7 servers, 4 BL460c G7 servers to run App-V and Sessions, and two servers to be management servers for the domain. The second domain is a DAS domain, supporting 6600 non-persistent VDI task workers with 60 BL490c G7 servers and utilizing four (4) BL460c G7 servers as provisioning servers. This domain uses twenty-six (26) BL460c G7 servers supporting 6400 XenApp connections and 2 BL460c G7 servers for App-V and Sessions.


Actual numbers supported in this configuration will depend on the user type and load. No single approach (remote desktops, non-persistent VMs, or persistent VMs) will address an enterprise-level solution on its own. A best practice is to do an environment assessment using HP's Client Virtualization Analysis and Modeling service to understand the current user environment. This will help identify the types of users best aligned with remote desktops, non-persistent or persistent VDI, and determine the best deployment scenario using the Citrix FlexCast model on Microsoft Hyper-V with HP ProLiant servers and storage.


Appendix A Storage patterning and planning for Citrix XenDesktop environments


HP has taken a proprietary approach to storage testing since 2008. One of the goals for the original test design was to address the somewhat predictable storage I/O patterns produced by the test methodologies that were in use at the time. These I/O patterns were the result of user scripts that utilized a minimal set of applications and services. The consequence was that the results of storage testing observed in the lab frequently differed greatly from what was observed in the field. HP used diverse user patterning to create three (3) storage workloads, each tied to an end user type, that incorporated I/O counts, read/write ratios over time, and more variability in block sizes. These workloads were captured over a Fibre Channel trace analyzer and were fed into an HP-developed program that allowed them to be played against a variety of storage types. This allowed HP to do comparative performance analysis between storage containers as well as look for areas that could benefit from various optimizations.

Storage patterning
The primary area of focus for testing this RA was the knowledge worker workload. Of all workloads, this workload produces the most diverse set of behaviors. An 80 user load was used. For the Provisioning Server, the client-side write cache was configured. With this configuration, the read/write ratio averaged 5/95 for the tests. During the login process of the 80 VMs, writing to the cache averaged 150 write IOPs for the 80 VMs, with peaks as high as 700 IOPs. During the test run, the average for the 80 VMs was closer to 60 write IOPs, with occasional peaks over 500 IOPs, as shown in Figure A1.

Figure A1: Write IOPs, 80 User Run

80VM Run - 15K IOPS Write




For reads, the average during login was 8-10 IOPs, with an overall average across the entire test run of 5-6 read IOPs. When using MCS for the persistent VMs, the read/write ratio is closer to 50/50. In the 80 user VM run, the average total IOPs was just over 5,000.

The option of using SSDs or I/O accelerator cards is often looked at, but in a properly configured XenDesktop implementation with Provisioning Server these will bring little if any performance gain while increasing cost. When using Provisioning Server configured with client-side write cache, PVS will put into memory the commonly referenced bits from the master image file. The VMs will read from the common bits in memory, and little to no I/O is generated from the image file. To highlight this, a test was done with 80 VMs accessing the same image from a PVS server configured with 48 GB of memory and using client-side write cache. The PVS server was started, then all 80 VMs were booted. After 15 seconds, I/O to the image file went to nil. Once all 80 VMs had been started, they were shut down, and then restarted. On reboot of the 80 VMs, no I/O traffic was seen to the master image file on the PVS server.

When using Machine Creation Services (MCS), SSDs become more problematic. MCS requires the master image file and the differential files for the VMs to reside on the same storage repository. This means the writes to the differential files will be written to the SSDs. SSD technology has progressed well and will improve, but with current SSD technology a high degree of writes can cause failure of the SSDs. Due to these considerations, it is not recommended to run MCS on SSDs at this time. Other I/O acceleration cards can be considered, but the write performance and its impact must be understood by the implementer.

Storage planning
Based on observations and analysis of the storage workload, storage planning for the client-side cache must account for a primarily write-driven workload. Based on HP's analysis, the bulk of the reads will be offloaded to the provisioning server. The remaining I/O is observed as the write portion of the overall I/O per user plus an average of 1 read I/O. The per-user requirement can thus be estimated as:

((Total User I/O x Write Percentage) + 1)


A user generating 20 IOPs with a 60% write ratio would thus need ((20 x 0.6) + 1), or 13 IOPs. Similarly, an end user generating 10 IOPs with a 20% write ratio would need only ((10 x 0.2) + 1), or 3 IOPs, on the direct-attached storage. It is important to note that this produces an estimate only, but one that is an intelligent beginning to proper storage planning.

Understanding user patterning en masse will improve estimates. As an example, a group of users with an 80/20 read/write ratio and an average of only 6 IOPs per user may be tempting to use as a planning number. However, if your end users have a shared peak of 18 IOPs with a 50/50 read/write ratio for a 20-minute period over the course of the day, this could result in overcommitted storage that is not capable of handling the required load.

Similarly, understanding what files will reside in all locations is critical as well. As an example, if a locally installed application builds a local, per-user cache file and then performs frequent reads against it, it is likely the cache hits will come from the linked clone, which can change the overall profile. In HP testing, user profile information is located within a database and moved over only as needed, resulting in a very small I/O impact. User data is also treated as a separate I/O pattern and not recorded on either of the monitored volumes. Failing to relocate your user data, or implementing fully provisioned virtual machines, will result in a dramatic difference in I/O patterning as well as a number of difficulties around management and overall system performance, and is not recommended.
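The estimate above can be wrapped in a small helper for quick planning checks. The function name is hypothetical, and the two calls reproduce the worked examples in the text:

function Get-DasIopsPerUser {
    param([double]$TotalIops, [double]$WriteRatio)
    # DAS IOPs per user = (total user I/O x write percentage) + 1 read I/O
    [math]::Ceiling(($TotalIops * $WriteRatio) + 1)
}

Get-DasIopsPerUser -TotalIops 20 -WriteRatio 0.6   # returns 13
Get-DasIopsPerUser -TotalIops 10 -WriteRatio 0.2   # returns 3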


For the best understanding of overall user requirements including I/O, CPU, memory and even application specific information, HP offers the Client Virtualization Analysis and Modeling Service. For more information about the service, see the information sheet at http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-2409ENW.pdf.


Appendix B Scripting the configuration of the Onboard Administrator


This is a sample script for a single enclosure with a Virtual Connect Flex-10 configuration. The installer will need to alter settings as appropriate to their environment. Consult the BladeSystem documentation at http://www.hp.com/go/bladesystem for information on individual script options. Note that this script alters power-on timings to ensure the systems involved in the HP P4800 SAN for BladeSystem are properly delayed to allow the disks in the MDS600 to spin up and enter a full ready state.
#Script Generated by Administrator
#Set Enclosure Time
SET TIMEZONE CST6CDT
#SET DATE MMDDhhmm{{CC}YY}
#Set Enclosure Information
SET ENCLOSURE ASSET TAG "TAG NAME"
SET ENCLOSURE NAME "ENCL NAME"
SET RACK NAME "RACK NAME"
SET POWER MODE REDUNDANT
SET POWER SAVINGS ON
#Power limit must be within the range of 2700-16400
SET POWER LIMIT OFF
#Enclosure Dynamic Power Cap must be within the range of 2013-7822
#Derated Circuit Capacity must be within the range of 2013-7822
#Rated Circuit Capacity must be within the range of 2082-7822
SET ENCLOSURE POWER_CAP OFF
SET ENCLOSURE POWER_CAP_BAYS_TO_EXCLUDE None
#Set PowerDelay Information
SET INTERCONNECT POWERDELAY 1 210
SET INTERCONNECT POWERDELAY 2 210
SET INTERCONNECT POWERDELAY 3 0
SET INTERCONNECT POWERDELAY 4 0
SET INTERCONNECT POWERDELAY 5 30
SET INTERCONNECT POWERDELAY 6 30
SET INTERCONNECT POWERDELAY 7 0
SET INTERCONNECT POWERDELAY 8 0
SET SERVER POWERDELAY 1 0
SET SERVER POWERDELAY 2 0
SET SERVER POWERDELAY 3 0
SET SERVER POWERDELAY 4 0
SET SERVER POWERDELAY 5 0
SET SERVER POWERDELAY 6 0
SET SERVER POWERDELAY 7 240
SET SERVER POWERDELAY 8 240
SET SERVER POWERDELAY 9 0
SET SERVER POWERDELAY 10 0
SET SERVER POWERDELAY 11 0
SET SERVER POWERDELAY 12 0
SET SERVER POWERDELAY 13 0
SET SERVER POWERDELAY 14 0
SET SERVER POWERDELAY 15 240
SET SERVER POWERDELAY 16 240

# Set ENCRYPTION security mode to STRONG or NORMAL. SET ENCRYPTION NORMAL


#Configure Protocols ENABLE WEB ENABLE SECURESH DISABLE TELNET ENABLE XMLREPLY ENABLE GUI_LOGIN_DETAIL #Configure Alertmail SET ALERTMAIL SMTPSERVER 0.0.0.0 DISABLE ALERTMAIL #Configure Trusted Hosts #REMOVE TRUSTED HOST ALL DISABLE TRUSTED HOST #Configure NTP SET NTP PRIMARY 10.1.0.2 SET NTP SECONDARY 10.1.0.3 SET NTP POLL 720 DISABLE NTP #Set SNMP Information SET SNMP CONTACT "Name" SET SNMP LOCATION "Locale" SET SNMP COMMUNITY READ "public" SET SNMP COMMUNITY WRITE "private" ENABLE SNMP #Set Remote Syslog Information SET REMOTE SYSLOG SERVER "" SET REMOTE SYSLOG PORT 514 DISABLE SYSLOG REMOTE #Set Enclosure Bay IP Addressing (EBIPA) Information for Device Bays #NOTE: SET EBIPA commands are only valid for OA v3.00 and later SET EBIPA SERVER 10.0.0.1 255.0.0.0 1 SET EBIPA SERVER GATEWAY NONE 1 SET EBIPA SERVER DOMAIN "vdi.net" 1 ENABLE EBIPA SERVER 1 SET EBIPA SERVER 10.0.0.2 255.0.0.0 2 SET EBIPA SERVER GATEWAY NONE 2 SET EBIPA SERVER DOMAIN "vdi.net" 2 ENABLE EBIPA SERVER 2 SET EBIPA SERVER 10.0.0.3 255.0.0.0 3 SET EBIPA SERVER GATEWAY NONE 3 SET EBIPA SERVER DOMAIN "vdi.net" 3 ENABLE EBIPA SERVER 3 SET EBIPA SERVER 10.0.0.4 255.0.0.0 4 SET EBIPA SERVER GATEWAY NONE 4 SET EBIPA SERVER DOMAIN "vdi.net" 4 ENABLE EBIPA SERVER 4 SET EBIPA SERVER 10.0.0.5 255.0.0.0 5 SET EBIPA SERVER GATEWAY NONE 5 SET EBIPA SERVER DOMAIN "vdi.net" 5 ENABLE EBIPA SERVER 5 SET EBIPA SERVER 10.0.0.6 255.0.0.0 6 SET EBIPA SERVER GATEWAY NONE 6 SET EBIPA SERVER DOMAIN "vdi.net" 6 ENABLE EBIPA SERVER 6 SET EBIPA SERVER 10.0.0.7 255.0.0.0 7 SET EBIPA SERVER GATEWAY NONE 7 SET EBIPA SERVER DOMAIN "vdi.net" 7 ENABLE EBIPA SERVER 7 SET EBIPA SERVER 10.0.0.8 255.0.0.0 8 SET EBIPA SERVER GATEWAY NONE 8


SET EBIPA SERVER DOMAIN "vdi.net" 8 ENABLE EBIPA SERVER 8 SET EBIPA SERVER 10.0.0.9 255.0.0.0 9 SET EBIPA SERVER GATEWAY NONE 9 SET EBIPA SERVER DOMAIN "vdi.net" 9 ENABLE EBIPA SERVER 9 SET EBIPA SERVER 10.0.0.10 255.0.0.0 10 SET EBIPA SERVER GATEWAY NONE 10 SET EBIPA SERVER DOMAIN "vdi.net" 10 ENABLE EBIPA SERVER 10 SET EBIPA SERVER 10.0.0.11 255.0.0.0 11 SET EBIPA SERVER GATEWAY NONE 11 SET EBIPA SERVER DOMAIN "vdi.net" 11 ENABLE EBIPA SERVER 11 SET EBIPA SERVER 10.0.0.12 255.0.0.0 12 SET EBIPA SERVER GATEWAY NONE 12 SET EBIPA SERVER DOMAIN "vdi.net" 12 ENABLE EBIPA SERVER 12 SET EBIPA SERVER 10.0.0.13 255.0.0.0 13 SET EBIPA SERVER GATEWAY NONE 13 SET EBIPA SERVER DOMAIN "vdi.net" 13 ENABLE EBIPA SERVER 13 SET EBIPA SERVER 10.0.0.14 255.0.0.0 14 SET EBIPA SERVER GATEWAY NONE 14 SET EBIPA SERVER DOMAIN "vdi.net" 14 ENABLE EBIPA SERVER 14 SET EBIPA SERVER NONE NONE 14A SET EBIPA SERVER GATEWAY 10.65.1.254 14A SET EBIPA SERVER DOMAIN "" 14A SET EBIPA SERVER 10.0.0.15 255.0.0.0 15 SET EBIPA SERVER GATEWAY NONE 15 SET EBIPA SERVER DOMAIN "vdi.net" 15 ENABLE EBIPA SERVER 15 SET EBIPA SERVER 10.0.0.16 255.0.0.0 16 SET EBIPA SERVER GATEWAY NONE 16 SET EBIPA SERVER DOMAIN "vdi.net" 16 ENABLE EBIPA SERVER 16 #Set Enclosure Bay IP Addressing (EBIPA) Information for Interconnect Bays #NOTE: SET EBIPA commands are only valid for OA v3.00 and later SET EBIPA INTERCONNECT 10.0.0.101 255.0.0.0 1 SET EBIPA INTERCONNECT GATEWAY NONE 1 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 1 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 1 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 1 ENABLE EBIPA INTERCONNECT 1 SET EBIPA INTERCONNECT 10.0.0.102 255.0.0.0 2 SET EBIPA INTERCONNECT GATEWAY NONE 2 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 2 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 2 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 2 ENABLE EBIPA INTERCONNECT 2 SET EBIPA INTERCONNECT 10.0.0.103 255.0.0.0 3 SET EBIPA INTERCONNECT GATEWAY NONE 3 SET EBIPA INTERCONNECT DOMAIN "" 3 SET EBIPA INTERCONNECT NTP PRIMARY NONE 3 SET EBIPA INTERCONNECT NTP SECONDARY NONE 3 ENABLE EBIPA INTERCONNECT 3 SET EBIPA INTERCONNECT 10.0.0.104 255.0.0.0 4 SET EBIPA INTERCONNECT GATEWAY NONE 4 SET EBIPA INTERCONNECT DOMAIN "" 4 SET EBIPA INTERCONNECT NTP PRIMARY NONE 4 SET EBIPA INTERCONNECT NTP SECONDARY NONE 4 ENABLE EBIPA INTERCONNECT 4 SET EBIPA INTERCONNECT 10.0.0.105 255.0.0.0 5 SET EBIPA INTERCONNECT GATEWAY NONE 5 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 5 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 5


SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 5 ENABLE EBIPA INTERCONNECT 5 SET EBIPA INTERCONNECT 10.0.0.106 255.0.0.0 6 SET EBIPA INTERCONNECT GATEWAY NONE 6 SET EBIPA INTERCONNECT DOMAIN "vdi.net" 6 SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 6 SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 6 ENABLE EBIPA INTERCONNECT 6 SET EBIPA INTERCONNECT 10.0.0.107 255.0.0.0 7 SET EBIPA INTERCONNECT GATEWAY NONE 7 SET EBIPA INTERCONNECT DOMAIN "" 7 SET EBIPA INTERCONNECT NTP PRIMARY NONE 7 SET EBIPA INTERCONNECT NTP SECONDARY NONE 7 ENABLE EBIPA INTERCONNECT 7 SET EBIPA INTERCONNECT 10.0.0.108 255.0.0.0 8 SET EBIPA INTERCONNECT GATEWAY NONE 8 SET EBIPA INTERCONNECT DOMAIN "" 8 SET EBIPA INTERCONNECT NTP PRIMARY NONE 8 SET EBIPA INTERCONNECT NTP SECONDARY NONE 8 ENABLE EBIPA INTERCONNECT 8 SAVE EBIPA #Uncomment following line to remove all user accounts currently in the system #REMOVE USERS ALL #Create Users add at least 1 administrative user ADD USER "admin" SET USER CONTACT "Administrator" SET USER FULLNAME "System Admin" SET USER ACCESS ADMINISTRATOR ASSIGN SERVER 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,1A,2A,3A,4A,5A,6A,7A,8A,9A,10A,11A,12A,13A ,14A,15A,16A,1B,2B,3B,4B,5B,6B,7B,8B,9B,10B,11B,12B,13B,14B,15B,16B "Administrator" ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "Administrator" ASSIGN OA "Administrator" ENABLE USER "Administrator" #Password Settings ENABLE STRONG PASSWORDS SET MINIMUM PASSWORD LENGTH 8 #Session Timeout Settings SET SESSION TIMEOUT 1440 #Set LDAP Information SET LDAP SERVER "" SET LDAP PORT 0 SET LDAP NAME MAP OFF SET LDAP SEARCH 1 "" SET LDAP SEARCH 2 "" SET LDAP SEARCH 3 "" SET LDAP SEARCH 4 "" SET LDAP SEARCH 5 "" SET LDAP SEARCH 6 "" #Uncomment following line to remove all LDAP accounts currently in the system #REMOVE LDAP GROUP ALL DISABLE LDAP #Set SSO TRUST MODE SET SSO TRUST Disabled #Set Network Information #NOTE: Setting your network information through a script while # remotely accessing the server could drop your connection.


#      If your connection is dropped this script may not execute to conclusion.
SET OA NAME 1 VDIOA1
SET IPCONFIG STATIC 1 10.0.0.255 255.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0
SET NIC AUTO 1
DISABLE ENCLOSURE_IP_MODE
SET LLF INTERVAL 60
DISABLE LLF
#Set VLAN Information
SET VLAN FACTORY
SET VLAN DEFAULT 1
EDIT VLAN 1 "Default"
ADD VLAN 21 "VDI"
ADD VLAN 29 Migration
ADD VLAN 93 PUB_ISCSI
ADD VLAN 110 "MGMT_VLAN"
SET VLAN SERVER 1 1
SET VLAN SERVER 1 2
SET VLAN SERVER 1 3
SET VLAN SERVER 1 4
SET VLAN SERVER 1 5
SET VLAN SERVER 1 6
SET VLAN SERVER 1 7
SET VLAN SERVER 1 8
SET VLAN SERVER 1 9
SET VLAN SERVER 1 10
SET VLAN SERVER 1 11
SET VLAN SERVER 1 12
SET VLAN SERVER 1 13
SET VLAN SERVER 1 14
SET VLAN SERVER 1 15
SET VLAN SERVER 1 16
SET VLAN INTERCONNECT 1 1
SET VLAN INTERCONNECT 1 2
SET VLAN INTERCONNECT 1 3
SET VLAN INTERCONNECT 1 4
SET VLAN INTERCONNECT 1 5
SET VLAN INTERCONNECT 1 6
SET VLAN INTERCONNECT 1 7
SET VLAN INTERCONNECT 1 8
SET VLAN OA 1
DISABLE VLAN
SAVE VLAN
DISABLE URB
SET URB URL ""
SET URB PROXY URL ""
SET URB INTERVAL DAILY 0


Appendix C: CLIQ commands for working with P4000


This section offers examples of command line syntax for creating SAN/iQ volumes, adding hosts, and presenting volumes to hosts. These samples can be combined into scripts that make the initial deployment of P4000 volumes simple. For instructions on installing and using CLIQ, consult the HP P4000 documentation that ships with your P4800 G2 SAN for BladeSystem.

The following line, when run in CLIQ, creates a server named mtx-esx01 with an initiator name of iqn.1998-01.com.citrix:xd01:

cliq createServer serverName=mtx-esx01 useChap=0 initiator=iqn.1998-01.com.citrix:xd01 login=172.16.0.130 userName=admin passWord=password

The following line creates a 200GB, thinly provisioned volume named data-01 within a cluster titled P4800_VDI:

cliq createVolume prompt=0 volumeName=data-01 clusterName=P4800_VDI size=200GB replication=2 thinprovision=1 login=172.16.0.130 username=admin password=password

The following line assigns the previously created volume to the server that was created:

cliq assignVolumeToServer volumeName=data-01 serverName=mtx-esx01 login=172.16.0.130 username=admin password=password

These lines can be combined in a batch file and run as a single script to create all volumes and servers and assign them as needed, as shown in the sketch below.
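As a minimal sketch of that approach, the batch file below strings the three CLIQ calls together for two hosts and two volumes. The first host, its initiator name, the volume name, the cluster name, and the management login, user name, and password are reused from the examples above; the second host (mtx-esx02), its initiator name (iqn.1998-01.com.citrix:xd02), and the second volume (data-02) are hypothetical placeholders included only to show the pattern. Substitute your own values, and consider CHAP and secure credential handling, before using anything like this in production.

@echo off
rem Assumed values reused from the examples above - replace with your own.
set LOGIN=172.16.0.130
set USER=admin
set PASS=password
set CLUSTER=P4800_VDI

rem Register hosts by their iSCSI initiator names (the second host is hypothetical).
cliq createServer serverName=mtx-esx01 useChap=0 initiator=iqn.1998-01.com.citrix:xd01 login=%LOGIN% userName=%USER% passWord=%PASS%
cliq createServer serverName=mtx-esx02 useChap=0 initiator=iqn.1998-01.com.citrix:xd02 login=%LOGIN% userName=%USER% passWord=%PASS%

rem Create 200GB thinly provisioned volumes with two-way replication (replication=2), matching the example above.
cliq createVolume prompt=0 volumeName=data-01 clusterName=%CLUSTER% size=200GB replication=2 thinprovision=1 login=%LOGIN% username=%USER% password=%PASS%
cliq createVolume prompt=0 volumeName=data-02 clusterName=%CLUSTER% size=200GB replication=2 thinprovision=1 login=%LOGIN% username=%USER% password=%PASS%

rem Present each volume to its host.
cliq assignVolumeToServer volumeName=data-01 serverName=mtx-esx01 login=%LOGIN% username=%USER% password=%PASS%
cliq assignVolumeToServer volumeName=data-02 serverName=mtx-esx02 login=%LOGIN% username=%USER% password=%PASS%

Keeping one createServer/createVolume/assignVolumeToServer group per host makes the volume-to-host mapping explicit and easy to extend as additional desktop hosts are added.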


For more information


To read more about HP and Client Virtualization, go to www.hp.com/go/cv. Other documents in the Client Virtualization reference architecture series can be found at the same URL.

HP Insight Control Integrations, Insight Control for Microsoft System Center: http://h18000.www1.hp.com/products/servers/management/integration-msc.html
HP Client Virtualization Analysis and Modeling service: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-2409ENW.pdf
HP and Citrix: http://www.hp.com/go/citrix and http://www.citrix.com/hp
Citrix XenDesktop: http://www.citrix.com/xendesktop
Citrix XenApp: http://www.citrix.com/xenapp
Microsoft Hyper-V: http://www.microsoft.com/hyper-v-server

To help us improve our documents, please provide feedback at http://h71019.www7.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. 4AA3-5327ENW, Created June 2011
