Implementing the HP Architecture for Citrix XenDesktop on Microsoft Windows Server 2008 R2 Hyper-V
Technical white paper
Table of contents
HP and Client Virtualization
Software used for this document
The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem
Why HP, Citrix, and Microsoft for Client Virtualization
The partners
What this document produces
Citrix and Client Virtualization
Why Citrix XenDesktop 5
Rack layout
Configuring the platform
External Insight Control
Configuring the enclosures
Creating a Virtual Connect domain with stacked enclosures
Defining profiles for hosts
Setting up management hosts
Deploying storage
Configuring the P4800 G2 SAN for BladeSystem
Configuring Hyper-V hosts to access SAN
Configuring the management group for hosts
Configuring and attaching storage for management hosts
Setting up DAS for non-persistent VM hosts
Installation of servers, physical and virtual
Setting up the infrastructure
Setting up management VMs
Understanding storage for XenDesktop
Bill of materials
Installing and configuring XenDesktop 5
Summary
Appendix A: Storage patterning and planning for Citrix XenDesktop environments
Appendix B: Scripting the configuration of the Onboard Administrator
Appendix C: CLIQ commands for working with P4000
For more information
This document does not discuss the in-depth implementation steps to install and configure the Citrix and Microsoft software unless they directly affect the successful deployment of the overall platform.

Abbreviations and naming conventions

Table 1 lists the abbreviations and names used throughout this document and their intended meanings.
Table 1. Abbreviations and names used in this document
SCVMM: System Center Virtual Machine Manager
MS RDP: Microsoft Remote Desktop Protocol
SSD: Solid State Drives
VDI: Virtual Desktop Infrastructure
OA: Onboard Administrator
LUN: Logical Unit Number
IOPs: Input and Output Operations per second
POD: The scaling unit of this reference architecture
SIM: HP Systems Insight Manager
RBSU: ROM Based Setup Utility
Target audience

This document is targeted at IT architects and engineers who plan to implement Citrix XenDesktop on Windows Server 2008 R2 SP1 and who are interested in understanding the unique capabilities and solutions that HP, Citrix, and Microsoft bring to the Client Virtualization market, as well as how a viable, enterprise-level desktop virtualization solution is crafted. This document is one in a series of reference architecture documents available at http://www.hp.com/go/cv.

Skillset

It is expected that the installer utilizing this document will be familiar with server, networking, and storage principles and have skills in Microsoft virtualization. The installer should also be familiar with HP BladeSystem. Familiarity with Client Virtualization and the various desktop and application delivery model concepts and definitions is helpful, but not necessary.
Management software
Components and software:
VM Management: System Center Virtual Machine Manager (SCVMM)
HP Systems Insight Manager: HP Systems Insight Manager 6.0
HP P4000 SAN/iQ Centralized Management Console
Microsoft SQL Server 2008 [1]

[1] It is assumed that an existing SQL Server cluster will be used to host the necessary databases.
Firmware revisions
HP Onboard Administrator: 3.30
HP Virtual Connect: 3.18
HP ProLiant Server System ROM: Varies by server
HP SAS Switch: 2.2.15.0
HP Integrated Lights-Out 3 (iLO 3): 1.20
HP 600 Modular Disk Array (MDS600): 2.66
Figure 1: The HP Converged Infrastructure for Client Virtualization Reference Architecture for Desktop Virtualization
From the endpoint device the user interfaces with, to the backend data center servers and storage, HP has the hardware and management capabilities for a complete end-to-end infrastructure. Client Virtualization is much more than delivering a virtual desktop to an endpoint device. There are multiple methods of delivering a desktop and applications to users, and Client Virtualization is inclusive of multiple desktop and application delivery options. No one option will sufficiently satisfy a complete organization. Some users can share a virtual desktop (session virtualization), which is served up with Microsoft Remote Desktop Services, formerly known as Terminal Services, while other users may require a more secure and personalized desktop environment but still not require dedicated control over their desktop. Some users do not need the capability to install software or make changes to the underlying operating system, but still need a separate, isolated desktop when they log in. Nothing needs to be maintained in these desktops between login sessions; users receive a clean, fresh desktop at each login. Profile management can be used for user virtualization, allowing users to customize their environment without making changes to the desktop. User customizations like drive and printer mappings, desktop layout, color schemes and
preferences are loaded into the desktop at user login. This can be accomplished using non-persistent virtual machines (VMs) in a Virtual Desktop configuration. Along with the non-persistent users, there are persistent users. These users need to preserve operating system and application installation changes across logins, and may have requirements for administrator access rights to their virtual desktops. These users will either have dedicated VMs, one for each user, creating a large storage footprint, or may start with the same base image file and utilize smaller differential files to maintain their personalities. Whether supporting session-based, persistent, or non-persistent workers, virtualization of applications should be implemented for better management and performance. Using tools like Citrix XenApp, a key component of XenDesktop, and Microsoft Application Virtualization (App-V) to virtualize and deliver applications allows offloading the running of applications to dedicated servers, decreasing the load on the VMs being used to support the virtual desktops. The Citrix approach to delivering multiple types of virtual desktops and applications, whether hosted or local, is its FlexCast delivery technology. Using Citrix XenDesktop with FlexCast on Microsoft Windows Server 2008 R2 SP1 and HP hardware offers a complete Client Virtualization solution. This document focuses on using the Hosted VDI (commonly known as VDI) delivery model of Citrix FlexCast to create the HP Enterprise Reference Architecture for Client Virtualization with Citrix XenDesktop 5 and Microsoft Windows Server 2008 R2 SP1. The document also touches on other FlexCast delivery technologies, such as Hosted Shared desktops and On-Demand applications, to show how a complete Client Virtualization solution could be built by starting with this VDI reference architecture.
The partners
HP, Microsoft, and Citrix all have long, strong partnering relationships. The HP and Microsoft global strategic alliance is one of the longest-standing alliances of its kind in the industry. Its goal is helping businesses around the world improve services through the use of innovative technologies. HP and Microsoft have more than 25 years of market leadership and technical innovation. Since 1996, HP and Citrix have shared a close, collaborative relationship, being mutual customers as well as partners. HP and Citrix work together to deliver joint engineering solutions, with a dedicated HP team supporting Citrix sales, operations, marketing, consulting and integration services, and technical development. HP supports Citrix StorageLink technology to simplify storage management. HP offers a full suite of products and services to support Citrix solutions. HP ProLiant and BladeSystem servers, HP P4000 storage technology, and HP Networking all provide solutions specifically designed to support Citrix solutions. HP thin clients are certified as Citrix Ready and provide support for the latest HDX and HDX 3D protocols. HP management tools provide a single pane of glass for managing the reference architecture as part of an overall IT environment. HP is a leading global system integrator, with hundreds of Citrix-certified professionals with deep experience implementing Citrix and HP solutions. HP Technology Services provides strategic assessment, solution design and deployment, and migration services for Citrix products. HP Enterprise Services Client Virtualization Service provides application and desktop virtualization as a managed service based on XenDesktop.
For Citrix and Microsoft, 2011 marks the 22nd anniversary of the Citrix/Microsoft partnership. Citrix builds on Windows as its innovation platform and continues to expand upon the successful alignment pioneered through the collaboration between Microsoft and Citrix in the application delivery marketplace. Most recently, Citrix and Microsoft have joined forces again to deliver joint desktop virtualization offerings, a market now dominated by these joint Citrix-Microsoft solutions. In recognition of the outstanding infrastructure solutions that Citrix brings to the Microsoft marketplace, Microsoft has awarded its annual Global Infrastructure Partner of the Year award to Citrix four out of the last eight years. More information about the HP/Microsoft partnership can be found at www.hp.com/go/microsoft. For information about the HP/Citrix partnership, go to www.hp.com/go/citrix.

HP

HP brings a self-contained and modular hardware solution, providing performance within the enclosure with integrated tools that give you enhanced visibility and prevention notifications. With everything in a rack, the involvement of multiple IT teams is limited or not required. The rack has redundant networks for connecting to the data center management and production links. All iSCSI network traffic, virtualized application traffic, and VM provisioning traffic stays within the rack. With the storage within the rack, the storage team is not required to manage or be involved in the storage configuration. Looking at networking, the HP Virtual Connect Flex-10 modules and the Flex-NICs on the HP BladeSystem servers offer a tremendous reduction in external network ports. Each blade has two NIC ports, and each port represents four physical NICs with a combined bandwidth of 10Gb. The NICs can be teamed across the ports to create network redundancy, all without adding more NICs or switches to the configuration.
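To make the Flex-NIC carve-up concrete, the sketch below validates a bandwidth allocation across the four FlexNICs presented by a single Flex-10 port. The network names and per-NIC values are illustrative assumptions, not figures taken from this document.

```python
# Sketch: validating a FlexNIC bandwidth carve-up on one Virtual Connect Flex-10 port.
# Each physical 10Gb port presents up to four FlexNICs; their combined allocation
# must not exceed the 10 Gb port capacity. Values below are assumed for illustration.

PORT_CAPACITY_GB = 10.0
MAX_FLEXNICS_PER_PORT = 4

def validate_port(allocations):
    """allocations: dict of network name -> bandwidth in Gb for one physical port."""
    if len(allocations) > MAX_FLEXNICS_PER_PORT:
        raise ValueError("A Flex-10 port presents at most four FlexNICs")
    total = sum(allocations.values())
    if total > PORT_CAPACITY_GB:
        raise ValueError(f"Allocated {total} Gb exceeds the 10 Gb port capacity")
    return total

# One possible carve-up for a hypervisor host port (illustrative values)
port_a = {"Management": 0.5, "Production": 2.0, "iSCSI": 4.0, "Migration": 3.5}
print(validate_port(port_a))  # 10.0
```

Running the same check on both ports of a blade confirms a profile never oversubscribes the physical links before it is applied in Virtual Connect Manager.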
At the management level, HP offers the ability to manage many systems with one core infrastructure, using the Onboard Administrator of the BladeSystem enclosure to manage the blades, enclosure, SAS switches, and Virtual Connect modules. Additionally, HP Insight Control software can manage all of the servers and hardware, providing failure-prevention notifications; and to highlight the partnerships, HP Insight Control is fully integrated with Microsoft System Center management software. In creating the reference architecture (RA), HP looked at an optimally sized and engineered set of hardware that leverages HP's Converged Infrastructure to pull everything together end-to-end, running Microsoft Windows Server 2008 R2 SP1 as a solid software base and Citrix XenDesktop 5 to give users the best possible experience.

Microsoft

Microsoft Windows Server 2008 R2 SP1

Microsoft released Windows Server 2008 R2 SP1 in early 2011 with several major enhancements, including a major improvement in the performance of Hyper-V. The most important enhancement for building the reference architectures is Dynamic Memory. Dynamic Memory allows utilization of physical memory to its fullest capacity without sacrificing performance, enabling use of all physical memory in the server. For this RA, all VMs had Dynamic Memory configured. The release of SP1 has seen great improvements in performance, as well as the introduction of RemoteFX, designed to bring a full Windows 7 Aero experience to the VDI user. More information about utilizing RemoteFX can be found at www.hp.com/go/cv. Note that RemoteFX is not supported in a Server Core installation of Windows Server 2008 R2 SP1.
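As a rough illustration of how Dynamic Memory feeds into consolidation planning, the sketch below estimates how many desktop VMs can start on a host given their startup RAM and memory buffer. The host size, parent-partition reserve, and buffer percentage are assumptions for illustration, not sizing figures from this document.

```python
# Sketch: upper-bound VM density estimate under Hyper-V Dynamic Memory.
# Each VM must be able to commit its startup RAM plus the configured memory
# buffer; the parent partition keeps a reserve. All inputs here are assumed.

def max_vms(host_gb, parent_reserve_gb, startup_mb, buffer_pct=20):
    """Upper bound on VMs that can start on one host."""
    per_vm_mb = startup_mb * (1 + buffer_pct / 100)   # startup RAM + buffer
    available_mb = (host_gb - parent_reserve_gb) * 1024
    return int(available_mb // per_vm_mb)

# e.g. a 96 GB blade with 4 GB reserved and 1 GB startup RAM per desktop VM
print(max_vms(96, 4, 1024))  # 76
```

Real density depends on measured working sets; this bound only shows why Dynamic Memory, by sizing VMs from demand rather than fixed maximums, lets the host's physical memory be used fully.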
Citrix

Citrix XenDesktop

Citrix XenDesktop transforms Windows desktops into an on-demand service for any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or Windows, web, and SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, all with a high-definition HDX user experience. FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives, such as offshoring, M&A, and branch expansion, by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

Benefits of Citrix XenDesktop

Citrix XenDesktop key features include:

Any device, anywhere with Receiver. Today's digital workforce demands the flexibility to work from anywhere at any time using any device they'd like. Leveraging Citrix Receiver as a lightweight universal client, XenDesktop users can access their desktop and corporate applications from the latest tablets, smartphones, PCs, Macs, or thin clients. This enables virtual workstyles, business continuity, and user mobility.

HDX user experience. XenDesktop 5 delivers an HDX user experience on any device, over any network, while using up to 90% less bandwidth compared to competing solutions. With HDX, the desktop experience rivals a local PC, even when using multimedia, real-time collaboration, USB peripherals, and 3D graphics. Integrated WAN optimization capabilities boost network efficiency and performance, even over challenging, high-latency links.

Beyond VDI with FlexCast. Different types of workers across the enterprise have varying performance and personalization requirements.
Some require offline mobility of laptops, others need simplicity and standardization, while still others need high performance and a fully personalized desktop. XenDesktop can meet all these requirements in a single solution with the unique Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, hosted or local, optimized to meet the performance, security, and mobility requirements of each individual user.

Any Windows, web or SaaS app. With XenDesktop, you can provide your workforce with any type of application they need, including Windows, web, and SaaS apps. For Windows apps, XenDesktop includes XenApp, the on-demand application delivery solution that enables any Windows app to be virtualized, centralized, and managed in the data center and instantly delivered as a service to users anywhere, on any device. For web and SaaS apps, Receiver seamlessly integrates them into a single interface, so users only need to log on once to have secure access to all their applications.

Open, scalable, proven. With numerous awards, industry-validated scalability, and over 10,000 Citrix Ready products, XenDesktop 5 provides a powerful desktop computing infrastructure that's easier than ever to manage. The open architecture works with your existing hypervisor, storage, Microsoft, and system management infrastructures, with complete integration and automation via the comprehensive SDK.

Single-instance management. XenDesktop enables IT to separate the device, OS, applications, and user personalization and maintain single master images of each. Instead of juggling thousands of static desktop images, IT can manage and update the OS and apps once, from one location. Imagine being able to centrally upgrade the entire enterprise to Windows 7 in a weekend, instead of months. Single-instance management dramatically reduces ongoing patch and upgrade maintenance efforts, and cuts data center storage costs by up to 90 percent by eliminating redundant copies.
Data security and access control. With XenDesktop, users can access desktops and applications from any location or device, while IT uses policies that control where data is kept. XenDesktop can prevent data from residing on endpoints, centrally controlling information in the data center. In addition, XenDesktop can ensure that any application data that must reside on the endpoint is protected with XenVault technology. Extensive access control and security policies ensure that intellectual property is protected, and regulatory compliance requirements are met.
This desktop OS is, at logon, combined with the user's personality, application and data settings, and applications to create a runtime VDI instance, as in Figure 3.
The entire application stack must be housed on resilient, cost-effective, and scalable infrastructure that can be managed by a minimal number of resources. There are many different terms used to define the types of VMs associated with VDI; in this document, persistent/non-persistent will be used. A persistent VM saves changes across logins. The user usually has admin rights to the VM and can make changes, add software, and customize the VM as needed. For a non-persistent VM, any changes or modifications are lost when the user logs out, and at login the user is always presented with a pristine, fresh VM. Customization of non-persistent VMs is handled by user virtualization utilizing Citrix Profile Management. The use of non-persistent VMs minimizes the amount of SAN storage required, allows for the use of Direct Attached Storage (DAS), and minimizes the amount of data required to be backed up.
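The storage impact of the persistent/non-persistent split can be sketched with simple arithmetic: persistent full-clone VMs each consume a full image, while desktops built from one master image need only small per-VM differential files. The image and differential sizes below are illustrative assumptions, not sizing guidance from this document.

```python
# Sketch: comparing storage footprint for full-clone VMs vs a shared master
# image with per-VM differencing files. All sizes are assumed examples.

def full_clone_gb(vm_count, image_gb):
    # every VM carries a complete copy of the desktop image
    return vm_count * image_gb

def differencing_gb(vm_count, image_gb, diff_gb):
    # one shared master image plus a small differential file per VM
    return image_gb + vm_count * diff_gb

users = 1000
print(full_clone_gb(users, 30))       # 30000 GB
print(differencing_gb(users, 30, 2))  # 2030 GB
```

The order-of-magnitude gap is what makes DAS viable for non-persistent hosts while persistent desktops are kept on the SAN.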
Figure 4 shows the Citrix XenDesktop architecture software stack. The XenApp component of XenDesktop provides the application virtualization layer. The Desktop Delivery Controller server acts as the broker, and desktops are delivered over the network via Citrix HDX or Microsoft RDP. Citrix allows multiple models for managing user data within the overall ecosystem. HP recommends selecting a mechanism for user virtualization that minimizes the network impact from the movement of user files and settings and allows for the customization of the user's environment based on a number of factors, including location, operating system, and device.
Figure 5 below shows the networks required to configure the platform for XenDesktop and where the various components reside. Note the dual-homed storage management approach, which allows all storage traffic to remain within the Virtual Connect domain, reducing complexity and involvement from multiple teams.
Figure 5: A Citrix XenDesktop specific implementation viewed from an overall network standpoint
desktop, hosted or local, physical or virtual, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.

Hosted Shared Desktops provide a locked-down, streamlined, and standardized environment with a core set of applications, ideally suited for task workers where personalization is not needed or allowed.

Hosted VDI Desktops offer a personalized Windows desktop experience, typically needed by office workers, which can be securely delivered over any network to any device.

Streamed Virtual Hard Drive (VHD) Desktops leverage the local processing power of rich clients, while providing centralized single-image management of the desktop. These types of desktops are often used in computer labs and training facilities, and when users require local processing for certain applications or peripherals.

Local VM Desktops extend the benefits of centralized, single-instance management to mobile workers who need to use their laptops offline. When they are able to connect to a suitable network, changes to the OS, apps, and user data are automatically synchronized with the data center.

Modular architecture

The Citrix XenDesktop modular architecture provides the foundation for building a scalable desktop virtualization infrastructure. It creates a single design for a data center, integrating all FlexCast models. The modular architecture consists of three main modules. The Control Module manages user access and virtual desktop allocation, containing components like the XenDesktop Controllers, SQL database, License Server, and Web Interface. The Desktop Module contains a sub-module for each of the above-mentioned FlexCast models, managing physical endpoints, XenApp servers, hypervisor pools, physical machines, etc. The Imaging Module provides the virtual desktops with the master desktop image, managing installed images, Provisioning Server, and Machine Creation Services.
For a detailed description of the modular architecture, please refer to the Citrix XenDesktop 5 Reference Architecture document at http://support.citrix.com/article/CTX127587.

Desktop provisioning technologies

Provisioning Server (PVS)

Citrix Provisioning Server provides images to physical and virtual desktops. Desktops utilize network booting to obtain the image, and only portions of the desktop image are streamed across the network as needed. Provisioning Server does require additional server resources, which can be either physical or virtual servers depending on the capacity requirements and hardware configuration. Provisioning Server does not require the desktop to be virtualized, as it can deliver desktop images to physical desktops as well.

Machine Creation Services (MCS)

Citrix Machine Creation Services was introduced in XenDesktop 5 and provides powerful provisioning and lifecycle management of hosted virtual desktop machines. As it is integrated directly into XenDesktop, no additional servers or connections are required, making MCS simple to use for even the smallest deployments. MCS delivers storage savings by building virtual machines from a common master image and storing only the differences for persistent desktops. This enables administrators to apply updates to the master image once and have those changes applied to all existing virtual machines without the need to re-provision.
Machine Creation Services and Provisioning Services

The decision between Machine Creation Services desktops and Provisioning Services desktops will be based on the overall architecture. If there are plans to utilize other FlexCast options, like Streamed VHD or Hosted Shared Desktops, the Provisioning Services infrastructure will already be in place, and expanding it to include streamed desktops is inconsequential. However, if the implementation is focused on the use of Hosted VDI desktops only, then Machine Creation Services might be a better option, as it requires fewer infrastructure servers.
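The decision rule above can be reduced to a small helper: if any planned FlexCast model already depends on Provisioning Services, reuse PVS; a Hosted-VDI-only deployment can stay with MCS. The function and model names are an illustrative encoding of this document's guidance, not a Citrix tool.

```python
# Sketch: the MCS-vs-PVS provisioning decision described in the text.
# FlexCast models that require a PVS infrastructure anyway tip the choice to PVS.

def choose_provisioning(flexcast_models):
    pvs_dependent = {"Streamed VHD", "Hosted Shared"}
    if pvs_dependent & set(flexcast_models):
        return "PVS"   # PVS infrastructure is in place; extend it
    return "MCS"       # Hosted VDI only: fewer infrastructure servers

print(choose_provisioning(["Hosted VDI"]))                  # MCS
print(choose_provisioning(["Hosted VDI", "Streamed VHD"]))  # PVS
```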
Rack layout
Figure 6 shows the overall rack layout.
Figure 6: Citrix XenDesktop/Microsoft Windows Server 2008 R2 SP1/HP BladeSystem RA (front and back)
Figure 7 shows the overall function of each component in the rack, leveraging different blade servers to support VDI desktops and both DAS and P4800 SAN to support persistent and non-persistent VDI sessions using both PVS and MCS. This also includes XenApp servers for session based applications and session based desktops.
The two management blades run Windows Server 2008 R2 SP1 Hyper-V with Microsoft Failover Clustering and Cluster Shared Volumes configured. This allows for high availability and live migration of the management VMs. The following VMs run on the management servers: the Web Interface server, the Desktop Delivery Controller (DDC), and the SCVMM administration server. The server VMs reside on the P4800, presented as shared storage to the management servers, to allow for HA. Six BL490c blades are configured for MCS and persistent VDI users, and use the P4800 as storage. Twelve BL460c servers supporting task workers are used for non-persistent VDI users. Two BL460c servers are configured as PVS servers for redundancy; a single PVS server can handle up to 5000 connections, but two servers are configured for HA. Eight BL460c servers are configured to run XenApp for application virtualization. NOTE: SQL is required by multiple applications, including the DDC, PVS, and SCVMM servers. It is assumed the data center already has a clustered SQL configuration running. If not, additional servers are required to support a clustered SQL implementation.
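The PVS sizing above follows an N+1 pattern, using the document's figure of up to 5000 connections per PVS server. The helper below is an illustrative convenience for that arithmetic, not part of any Citrix tool.

```python
# Sketch: PVS server count with N+1 redundancy, per the ~5000-connections-per-
# server figure cited in the text. The redundancy parameter is an assumption.

import math

def pvs_servers(connections, per_server=5000, redundancy=1):
    return max(1, math.ceil(connections / per_server)) + redundancy

print(pvs_servers(1200))  # 2  (one server suffices for the load, plus one for HA)
print(pvs_servers(8000))  # 3
```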
Figure 8 shows cabling for the platform outlined in this document. This minimal configuration shows four cables supporting all users. Redundant 10GbE is dedicated to production and management traffic via a pair of cables. The enclosures communicate via a highly available 10GbE bi-directional network that carries migration and storage traffic without egressing. This minimizes network team involvement while enhancing flexibility. This configuration can be expanded to include a 10GbE uplink to the core from each enclosure enhancing availability.
Figure 8: Minimal total cabling within the rack required to support all users and hosts within the rack. Optionally a second set of uplinks to the core may be defined in the lower enclosure.
From the same page, set the Virtual Connect Flex-10 power-on setting to Enabled and set the delay to some point beyond the time when the SAS switches will power on. A delay of 210 seconds is generally acceptable. Click on Apply when finished to save the settings, as in Figure 10.
Figure 10: Configuring the power on sequence of the Flex-10 modules within the enclosures
Click on the Device Bays tab to highlight it. Set the power-on timing to 240 seconds for any P4460sb G2 blades or persistent-user hosts that are attached to storage. Click on Apply when finished. Set the remaining hosts to power on at some later point. Before proceeding to the next section, ensure your enclosures are fully configured for your environment.
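The power-on ordering in the last two steps can be sanity-checked as data: SAS switches first, Flex-10 modules around 210 seconds, storage blades and their attached hosts at 240 seconds, and remaining hosts later still. Only the 210 s and 240 s figures come from the text; the other delays are assumptions for illustration.

```python
# Sketch: verifying the enclosure power-on sequence described above.
# Delays are seconds from enclosure power-on; "remaining hosts" is an assumed
# value, per the text's "some later point".

delays = {
    "SAS switches": 0,
    "Flex-10 modules": 210,
    "P4460sb storage blades": 240,
    "persistent-user hosts": 240,
    "remaining hosts": 300,   # assumption: any point beyond the storage blades
}

def check_order(delays, before, after):
    """True if every device in `before` powers on no later than any in `after`."""
    return max(delays[d] for d in before) <= min(delays[d] for d in after)

print(check_order(delays, ["SAS switches"], ["Flex-10 modules"]))            # True
print(check_order(delays, ["Flex-10 modules"], ["P4460sb storage blades"]))  # True
```

Sequencing matters because hosts that boot before their storage is available will fail to mount their volumes.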
Once logged in, you'll be presented with the Domain Setup Wizard. Click on Next, as in Figure 12, to proceed with setup.
You will be asked for the Administrator credentials for the local enclosure as in Figure 13. Enter the appropriate information and click on Next.
At the resulting screen choose to Create a new Virtual Connect domain by importing this enclosure and click on Next as in Figure 14.
When asked, click on Yes as in Figure 15 to confirm that you wish to import the enclosure.
You should receive a success message as in Figure 16 that highlights the successful import of the enclosure. Click on Next to proceed.
At the next screen, you will be asked to assign a name to the Virtual Connect domain. Keep scaling in mind as you do this. A moderately sized domain with 4,000 users can potentially be housed within a single Virtual Connect domain, while very large implementations may require multiple domains. If you will scale to very large numbers, a naming convention that scales is advisable. Enter the name of the domain in the text box, as in Figure 17, and then click on Next to proceed.
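One way to read the naming advice above: derive domain names from a site code and a zero-padded index, so additional domains sort cleanly as the deployment grows past the ~4,000-user practical ceiling per domain mentioned in the text. The site code and name pattern here are invented for illustration.

```python
# Sketch: a scalable Virtual Connect domain naming convention. The 4,000-user
# figure comes from the text; the "SITE-VDI-VCnn" pattern is an assumption.

import math

def domain_names(site, total_users, users_per_domain=4000):
    count = math.ceil(total_users / users_per_domain)
    return [f"{site}-VDI-VC{n:02d}" for n in range(1, count + 1)]

print(domain_names("DC1", 10000))
# ['DC1-VDI-VC01', 'DC1-VDI-VC02', 'DC1-VDI-VC03']
```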
Configure local user accounts at the Local User Accounts screen. Ensure that you change the default Administrator password. When done with this section, click on Next as in Figure 18.
Figure 18: Configuring local user accounts within Virtual Connect Manager
This will complete the initial domain configuration. Check the box to Start the Network Setup Wizard as in Figure 19. Click Finish to start configuring the network.
Configuring the network The next screen to appear is the initial Network Setup Wizard screen. Click on Next to proceed as in Figure 20.
Click Next. At the Virtual Connect MAC Address screen, choose to use the static MAC addresses of the adapters rather than Virtual Connect-assigned MAC addresses, as in Figure 21. Click on Next to proceed when done.
Select to Map VLAN Tags as in Figure 22. You may change this setting to be optimized for your environment, but this document assumes mapped tags. Click on Next when done.
At the next screen, you will choose to create a new network connection. The connections used in this document create shared uplink sets. This initial set will be linked externally and will carry both the management and production networks. Choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 23. You will need to know your network information including VLAN numbers to complete this section.
Provide a name for the uplink set and grant the connection the two network ports that are cabled from the Virtual Connect modules to the network core. Add in the management and production networks as shown in Figure 24. Click on Apply to proceed.
You will be returned to the network setup screen. Once again, choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed, as in Figure 25.
This second set of uplinks will carry the migration and iSCSI networks. These networks will not egress the Virtual Connect domain. Give a name to the uplink set and define your networks along with their VLAN ID, but do not assign uplink ports as in Figure 26. This will ensure the traffic stays inside the domain. Click on Next to proceed.
When you return to the Defined Networks screen, choose No, I have defined all available networks as in Figure 27. Click on Next to continue.
Click on Finish at the final wizard screen. This will take you to the home screen for the Virtual Connect Manager as in Figure 28. This completes the initial setup of the Virtual Connect domain.
The networks and uplink sets defined above can be summarized as follows (VLAN IDs are site-specific):

Network      Uplink Set                                    Egresses the VC domain
Management   First shared uplink set (external uplinks)    Yes
Production   First shared uplink set (external uplinks)    Yes
Migration    Second shared uplink set (no uplink ports)    No
iSCSI        Second shared uplink set (no uplink ports)    No
The device bays for the P4800 G2 SAN are simply configured with two adapters of 10GbE bandwidth each. Both adapters are assigned to the iSCSI network. HP suggests the following bandwidth allocations for each network as in Table 3.
Table 3. Bandwidth recommendations for hypervisor and management host profiles
To begin the process of defining and assigning the server profiles, click on Define and then Server Profile as in Figure 29.
You will create a single profile for the hypervisor and management hosts as in Figure 30. This profile will be copied and assigned to each device bay. Right click on the Ethernet Adapter Connections and choose to Add Connection. You will do this eight times. Assign two adapters to each network defined in Table 3 above and assign the bandwidth to those adapters as shown. Do not assign the profile to any bay as this will serve as your master profile. Click on Apply when finished.
Figure 30: Configuring the profile for hypervisor and management hosts
Figure 31 shows the screen as it appears once the networks are properly defined.
Figure 31: The host profile for hypervisor and management hosts
Repeat the prior process to define a second profile to be copied to any slot where a P4460sb G2 storage blade resides. Assign the full bandwidth of 10Gb to each of two adapters. Do not create any extra Ethernet connections. Click on Apply as in Figure 32 to save the profile.
Importing the second enclosure
With the configuration of the initial enclosure and VC domain complete, additional enclosures can be incorporated into the domain. In the left column, click on Domain Enclosures and then click on the Domain Enclosures tab. Click on the Find button and enter the information for your second enclosure as in Figure 33.
At the next screen, click the check box next to the second enclosure and choose the Import button as in Figure 34.
Optionally, you may click on the Domain IP Address tab and assign a single IP address from which to manage the entire domain as in Figure 36.
Be sure to back up your configuration as in Figure 37 prior to proceeding. This will provide you with a baseline to return to.
With your domain configured, copy the server profiles you created and assign them to the desired bays. Once complete, back up your entire domain configuration again so you have a baseline configuration, with profiles in place, that can be used to restore to a starting point if needed.
NOTE: RemoteFX is only supported in a full installation of Microsoft Windows Server 2008 R2 SP1. License each server as appropriate.

Configure the management servers
Ensure the management servers have been installed with Windows Server 2008 R2 SP1 and that the Hyper-V role has been configured. These servers must be installed first, before any other servers or storage can be configured.

Configure networks
If you followed the recommended configuration advice for the setup of the Virtual Connect domain (in the Creating a Virtual Connect domain with stacked enclosures section of this document), this section completes the networking configuration through to SCVMM. Configure the networks for each hypervisor host as in Table 4.
Table 4. Network configuration for hypervisor hosts
Function
- Management
- Production network for protocol, application, and user traffic
- Migration traffic
- iSCSI network
Once the management servers have been installed, Hyper-V configured, and the necessary networks created within Hyper-V, the next step is to create a basic VM for managing the P4800. A dual-homed management VM is recommended, though it is not strictly necessary. The management console for the P4800 can be installed directly onto one of the management servers if it is running the full Windows installation, but best practice is to create a separate VM that can be migrated between management servers.

Dual-homed management VM
At this point there is no access to the P4800 because no external networks are defined. To address this, create a management VM and assign it two Ethernet adapters: the first on the management network and the second on the iSCSI network. Install the operating system into this VM; you may choose from any of the operating systems supported by the P4000 Centralized Management Console (CMC). For the purpose of this document we installed a copy of Microsoft Windows 7 Professional and granted the VM a single vCPU with 1GB of RAM. Install this VM on the local data store. Once the VM has been installed and patched, install the P4000 CMC. You may choose to migrate the VM to a shared storage volume once the management hosts are fully configured.
Deploying storage
Configuring the P4800 G2 SAN for BladeSystem
In order to access and manage the P4800 G2 you must first set IP addresses for the P4460sb G2 storage blades. For each blade, perform the following steps to configure the initial network settings. It is assumed you will not have DHCP available on the private storage network. If you are configuring storage traffic to egress the enclosure and are running DHCP you can skip ahead.
1. Log onto the blade from the iLO. The iLO for each blade can be launched from within the Onboard Administrator as long as the OA user has appropriate permissions. If not, use the asset tag on the P4460sb to locate the iLO name and administrator password.
2. From the command line, type the word Start.
3. Choose Network TCP/IP Settings from the available options.
4. Choose a single adapter to configure.
5. At the Network Settings screen, enter the IP information for the node.
When you have completed these steps for each P4460sb, proceed to the next section.

Configuring the SAN
With P4000 SANs, there is a hierarchy of relationships between nodes and between SANs that the installer should understand. In order to create a P4800 G2 SAN, you will need to define the nodes, a cluster, and a management group. A node in a P4000 SAN is an individual storage server, in this case an HP P4460sb G2. A cluster is a group of nodes combined to form a SAN. A management group houses one or more clusters/SANs and serves as the management point for those devices. Launch the HP P4000 Centralized Management Console by logging into your management VM and clicking the icon.
Detecting nodes
You will locate the two (2) P4460sb nodes that you just configured. Figure 38 shows the initial wizard for identifying nodes.
Click on the Find button to proceed. You can now walk through the wizard adding nodes by IP address or finding them via mask. Once you have validated that all nodes are present in the CMC you can move on to the next section to create the management group.
Creating the management group
When maintaining an internal iSCSI network, each Virtual Connect domain must have its own CMC and management group. The management group is the highest level from which the administrator will manage and maintain the P4800 SAN. To create the first management group, click on the Management Groups, Clusters, and Volumes Wizard at the Welcome screen of the CMC as shown in Figure 39.
Click on the Next button when the wizard starts. This will take you to the Choose a Management Group screen as in Figure 40.
Select the New Management Group radio button and then click on the Next button. This will take you to the Management Group Name screen. Assign a name to the group and ensure all P4460sb nodes are selected prior to clicking on the Next button. Figure 41 shows the screen.
Figure 41: Name the management group and choose the nodes
It will take time for the management group creation to complete. When the wizard finishes, click on Next to continue. Figure 42 shows the resulting screen where you will be asked to add an administrative user.
Enter the requested information to create the administrative user and then click on Next. You will have the opportunity to create more users in the CMC after the initial installation. Enter an NTP server on the iSCSI network if available at the next screen and click on Next. If unavailable, manually set the time. An NTP server is highly recommended. Immediately after, you will be asked to configure DNS information for email notifications. Enter the information requested and click on Next. Enter the SMTP information for email configuration and click Next. The next screen will begin the process of cluster creation described in the following section.
Create the cluster
At the Create a Cluster screen, select the radio button to choose a Standard Cluster as in Figure 43.
Click on the Next button once you are done. At the next screen, enter a Cluster name and verify all P4460sb nodes are highlighted. Click on the Next button.
At the next screen, you will be asked to assign a virtual IP address for the cluster as in Figure 44. Click on Add and enter a Virtual IP Address and Subnet Mask on the private iSCSI network. This will serve as the target address for your hypervisor side iSCSI configuration.
Click on Next when done. At the resulting screen, check the box in the lower right corner that says Skip Volume Creation and then click Finish. You will create volumes in another section. To create an Adaptive Load Balancing (ALB) bond on the first P4460sb G2 node, click the plus sign next to the node and select TCP/IP Network as in Figure 45.
Highlight both adapters in the right TCP/IP tab, right click and select New Bond. Define the IP address of the bond. When done, it should appear as in Figure 46. Repeat this until every P4460sb node in the cluster has an ALB bond defined.
Once done, close all windows. At this point, the sixty (60) day evaluation period for SAN/iQ 9.0 begins. You will need to license and register each of the P4460sb nodes within this sixty (60) day period.
Once the service has started, click on the Configuration tab. This will show the iSCSI Initiator Name associated with the server, Figure 48.
You may choose to simplify this name by eliminating the characters after the colon, or leave it as is. Copy down this iSCSI name; it will be needed later. The servers and associated Initiator names must be added to the P4800 cluster before the hosts can connect. Before the iSCSI Initiator can be fully configured on the servers, the associated volumes must be created and access granted on the P4800 management group.
Enter a name for the server (the hostname of the server works well), a brief description of the host and then enter the initiator node name for your host that was saved earlier. If you are using CHAP you should configure it at this time. Click on OK when done. This process will need to be repeated for every host that will attach to the P4800 G2. Currently only the management servers have been installed. Once the remaining servers have been installed you will need to repeat this process for the MCS Hyper-V hosts, and the SCVMM VMs.
Click the drop down labeled Tasks to be presented with options. From the drop down, select the option for New Volume. In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 350GB. This volume will house the management VMs. Figure 52 shows the window.
Once you have entered the data, click on the Advanced tab. Ensure you have selected your cluster and RAID-10 replication, and then click the radio button for Thin Provisioning.
Click on the OK button when done. Repeat this process to create the other management volume. When all volumes have been created, return to the Servers section of the CMC under the main management group. You will initially assign the volumes you just created to the first management host. In this document this host is in device bay 1. Right click on your first management server and choose to Assign and Unassign Volumes and Snapshots as in Figure 54.
A window will appear with the volumes you have defined. Select the appropriate volumes to assign to the host by selecting the check boxes under the Assigned column. You will repeat these steps when you create your other volumes after the other servers have been installed.
NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your P4000 software DVD. See Appendix C of this document for samples.

Finish configuration of management servers
Once the volumes have been assigned, go back to the management server and run the iSCSI Initiator again. Click on the Discovery tab, then on the Discover Portal button, Figure 55.
Select the Targets tab. The volume(s) should now be listed, and shown as Inactive, Figure 56.
Check the Add this connection to the list of Favorite Targets box. If the Multipath IO (MPIO) feature has been installed in the server, then select the Enable multi-path check box, Figure 57.
Adding the volume to the Favorite Targets means the server will attempt to connect to the volume when the server restarts. This process needs to be repeated for the other management server, and will need to be repeated for the SCVMM VMs and the MCS Hyper-V hosts.
Launch the Virtual SAS Manager by highlighting a SAS switch and clicking on Management Console. The following screen appears as in Figure 59. Highlight the Zone Groups and then click on Create Zone Group.
You will highlight between 4 and 6 disks based on expected I/O patterns for the individual hosts. Figure 60 highlights the selection of 4 disks for the new Zone Group. Click on OK once you have selected the disks and assigned a name to the zone group.
Repeat the process until you have Zone Groups defined for your DAS hosts. Figure 61 shows four (4) Zone Groups that have been created. Click on Save Changes prior to proceeding.
For each device bay for which you have created a Zone Group, highlight the device bay and then click Modify Zone Access as in Figure 62.
Select the Zone Group that belongs to the device bay by clicking on the check box next to it. Click on OK to complete the assignment as in Figure 63.
When booting, the server can boot either from the internal drives attached to the P410i controller, or from the drives just assigned using the P700m controller. To boot from the P700m array, change the boot settings in the RBSU. Use the ORCA for the P700m controller, not the P410i controller, to configure the disks you assigned in this section as a RAID10 set.

For non-persistent users, a file will be created for each VM to hold the page file and the client-side write cache for the provisioning server. By default, the page file is 1.5 times memory and the client-side write file is a minimum of 5 GB. For a task worker this means a 1.5 GB page file + 5 GB, so a 6.5 GB file will be created for each task worker supported on a server.

From a performance and space standpoint, you may instead want to put drives in the server, mirror those drives with RAID10, and install Windows Server 2008 R2 SP1 to the internal drives. Whichever path is chosen, verify that the RBSU is set to boot from the correct device.
Virtual Machines
Role                           vCPUs   Memory   Hard Disk   NICs
Desktop Delivery Controller    2       4GB      40GB        1
SCVMM                          2       4GB      40GB        1
Windows 7 Desktop base image   1       1.5GB    40GB        1
Web Interface VM               1       1.5GB    40GB        1
Desktop Delivery Controller (DDC) VM:
- Operating System: Windows Server 2008 R2 SP1
- XenDesktop 5 Desktop Delivery Controller
- Desktop Studio Console
- System Center Virtual Machine Manager Administration Console
- Desktop Director
- Citrix Web Interface 5.4
- Citrix Licensing 11.6.1

Microsoft System Center Virtual Machine Manager (SCVMM) VM:
- Windows Server 2008 R2 SP1
- System Center Virtual Machine Manager 2008
- SQL Server 2008 (required; assumed to be installed elsewhere)

Web Interface Server VM:
- Windows Server 2008 R2 SP1
- Internet Information Services (IIS)
- Web Interface 5.4
HP Insight Control Plugins for Microsoft System Center
HP Insight Control for Microsoft System Center provides seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center consoles. By integrating the server management features of HP ProLiant and HP BladeSystem into Microsoft System Center consoles, administrators can gain greater control of their technology environments.

Failover Manager
HP P4000 SANs utilize a Failover Manager (FOM) to ensure that data remains available across management groups in the event of a single node failure. If you want to run multi-site, follow the recommendations of the P4000 Multi-Site HA/DR Solution Pack user guide at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf.
Figure 64: Persistent VMs and the relationship between hosts and volumes
Figure 65 shows an overview of how storage will connect to volumes supporting non-persistent VM pools. The master image is held by the provisioning server, and can be stored on a volume on the
SAN or on local drives of the server. A single PVS server can support up to 5000 connections, with approximately 400 connections per NIC. Each host is assumed to hold 95-100 task workers per the sizing numbers HP has calculated in the document entitled Virtual Desktop Infrastructure for the Enterprise at http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW. Each Page File/Write Cache is 1.5 times memory plus 5 GB.
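The per-VM file sizing just described (pagefile of 1.5 times memory, plus a 5 GB write cache) can be sketched as a quick calculation. The function names below are illustrative only, not part of any HP or Citrix tool:

```python
def pvs_cache_file_gb(vm_memory_gb: float, write_cache_gb: float = 5.0) -> float:
    """Per-VM local file = pagefile (1.5 x assigned memory) + PVS write cache."""
    return 1.5 * vm_memory_gb + write_cache_gb

def host_local_storage_gb(vms_per_host: int, vm_memory_gb: float) -> float:
    """Total local (DAS) space a host needs for its page files and write caches."""
    return vms_per_host * pvs_cache_file_gb(vm_memory_gb)

# A task worker VM with 1 GB RAM needs 1.5 + 5 = 6.5 GB;
# a host with 100 such VMs needs 650 GB of local storage.
per_vm = pvs_cache_file_gb(1.0)             # 6.5
per_host = host_local_storage_gb(100, 1.0)  # 650.0
```

The same arithmetic applies to the vDisk sizing later in this document: a VM with 2 GB of memory yields a 3 GB pagefile plus 5 GB of write cache, or 8 GB in total.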
For persistent VMs, MCS is used. The master image will be 40 GB in size, and a replica of the master image is copied to each volume to be created. Then the differential and identity disks for the VMs associated with that volume will be created during XenDesktop configuration. Each differential file associated with the image could grow to be the same size as the master image if not managed correctly. HP recommends consulting the Citrix documentation for managing the size of differential files with MCS. For planning purposes, 20 GB will be allocated to hold a differential file and its associated identity disk for each VM created. HP suggests aligning the volumes with approximately 30-35 VMs per volume. Determining the number of volumes is simple math:
Assuming a total of 420 VMs is planned, 14 volumes would be sufficient (420 / 30 = 14). To determine the total amount of space required, the equation is:
(Number of VMs * (VM Differential Size + 300MB)) + (Number of Volumes * (2 * Master Image Size))
The 300MB is to allow space per VM for each identity disk associated with the differential disks. The Number of Volumes * (2 * Master Image Size) allows for space for a master image and a copy of
the master image per volume. In a worst-case scenario, if the differential files were to grow to match the size of the master image file, the total space required would be 420 * (40GB + 300MB) + (14 * (2 * 40GB)), approximately 17.9 TB. For our sizing, we assumed a maximum of 20 GB per differential disk, thereby requiring 9.5 TB of disk space. To calculate each volume size:
For this document, our volume size is 9.5 TB / 14, approximately 680 GB for each volume. It should be noted that all P4800 SANs are ready for thin provisioning from initialization. This allows for overprovisioning of space to ensure that storage is not constrained by physical limits that don't always make sense in VDI environments, meaning volumes can be sized for a 100% match between the master image and the differential files. The installer must understand that growth must be accommodated and reacted to when thin provisioning is used.
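The MCS sizing formula above can be expressed directly in code. This is a sketch of the paper's arithmetic (the function name and defaults are ours), using the document's planning values of a 40 GB master image, 20 GB differential disks, 300 MB identity disks, and roughly 30 VMs per volume:

```python
import math

def mcs_storage_plan(num_vms: int,
                     master_gb: float = 40.0,
                     diff_gb: float = 20.0,
                     identity_gb: float = 0.3,
                     vms_per_volume: int = 30):
    """(Number of VMs * (diff + identity)) + (volumes * 2 * master),
    per the sizing formula in the text. Returns (volumes, total GB,
    GB per volume)."""
    volumes = math.ceil(num_vms / vms_per_volume)
    total_gb = num_vms * (diff_gb + identity_gb) + volumes * (2 * master_gb)
    return volumes, total_gb, total_gb / volumes

volumes, total_gb, per_volume_gb = mcs_storage_plan(420)
# 14 volumes, 9646 GB total (~9.5 TB), ~689 GB per volume
```

Raising diff_gb to the 40 GB worst case gives roughly 18,046 GB, consistent with the paper's worst-case figure of about 17.9 TB.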
Bill of materials
This section shows the equipment needed to build the sample configuration contained in this document. It does not include clients, operating systems, alternative application virtualization technology, user virtualization, or application costs, as those are unique to each implementation. Some items related to power and overall infrastructure may need to be customized to meet customer requirements.

Core Blade Infrastructure
Quantity   Part Number   Description
2          507019-B21    HP BladeSystem c7000 Enclosure with 3 LCD
2          413379-B21    Single Phase Power Module
2          517521-B21    6x Power supply bundle
2          517520-B21    6x Active Cool Fan Bundle
2          456204-B21    c7000 Redundant Onboard Administrator
4          455880-B21    Virtual Connect Flex-10 Ethernet Module for HP BladeSystem
Management
Quantity   Part Number   Description
2          603718-B21    HP ProLiant BL460c G7 CTO Blade
2          610859-L21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
2                        HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
4                        72GB 6G SAS 15K SFF DP HDD
24                       HP 8GB Dual Rank x4 PC3-10600 DIMMs
Persistent VDI Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)
Quantity   Part Number   Description
6          603719-B21    ProLiant BL490c G7 CTO Blade
6          603600-L21    HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
6          603600-B21    HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
6          572075-B21    60GB 3G SATA SFF Non-hot plug SSD
108        500662-B21    HP 8GB Dual Rank x4 PC3-10600 DIMMs
Non-persistent Hypervisor Hosts (Processors may be substituted to match the configurations found in the document Virtual Desktop Infrastructure for the Enterprise.)
Quantity   Part Number   Description
12         603718-B21    ProLiant BL460c G7 CTO Blade
12         610859-L21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
12         610859-B21    HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
144        500662-B21    HP 8GB Dual Rank x4 PC3-10600 DIMMs
12         508226-B21    HP Smart Array P700m SAS Controller
12         452348-B21    HP Smart Array P-Series Low Profile Battery
NOTE: The drive count should reflect the number of drives (four or six) planned for each of the direct attached storage hosts within the reference architecture.

User Data Storage
Quantity   Part Number   Description
2          BV871A        HP X3800 G2 Gateway
1          BQ890A        HP P4500 G2 120TB MDL SAS Scalable Capacity SAN Solution
HP Software
Quantity   Part Number   Description
2          TC277AAE      HP Insight Control for BladeSystem 16 Server license
1          436222-B21    HP Insight Software Media Kit
VAR                      HP Client Automation, Enterprise
The first step is to install the Desktop Delivery Controller. You must install the SCVMM Administrator Console prior to installing the DDC; for more information visit: http://technet.microsoft.com/en-us/library/bb740758.aspx. When installing the DDC, on the Components to Install screen select all components. The license server for the Citrix configuration will run on the DDC. Changes to the firewall may be required depending on your firewall settings; consult the XenDesktop 5 Product Documentation (http://edocs.citrix.com) for firewall recommendations.

Once the DDC has finished installation, go to the Start menu and launch the Desktop Studio console to configure the desktop deployment. This defines the XenDesktop site, licensing, and database options, and defines Microsoft Virtualization as the host type. When prompted, specify the SCVMM server address and credentials to authenticate to the SCVMM server.

For Citrix Licensing configuration visit: http://support.citrix.com/proddocs/index.jsp?topic=/licensing/lic-licensing-115.html
For more information about using an existing SQL database visit: http://support.citrix.com/article/CTX128008

Once the first DDC VM has been installed, repeat the process on the second DDC VM, joining it to the farm created when the first DDC was configured. Since the RA uses both PVS and MCS (Machine Creation Services) to deploy the VMs, multiple catalogs will need to be configured with the DDC. For PVS, no clustering is required because DAS storage will be used to support the write cache files; this process is defined later in this document. For MCS, a Microsoft Cluster Shared Volume is required to support each master image that will be created. This determines the number of DDC groups required to support MCS.

XenApp
When installing XenApp there are two possibilities. The first is to install XenApp on bare-metal servers, requiring eight servers running XenApp to support the RA.
The XenApp servers can also be virtualized if desired. To support a load similar to a single bare-metal server requires four (4) VMs, each with 4 vCPUs and 8 GB of memory. For this document, the XenApp servers were installed on bare metal, with no virtualization.

NOTE: If you choose to run XenApp virtualized, additional volumes will need to be created and configured on the P4800.

For instructions on implementing XenApp, refer to the Citrix eDocs website: http://support.citrix.com/proddocs/topic/xenapp6-w2k8/ps-install-config-wrapper.html. Only the XenApp server role was installed using the Server Role Administrator in this exercise. In addition to the installation, the following applications were installed and published as hosted applications:

- Microsoft Word 2007
- Microsoft Outlook 2007
- Microsoft Excel 2007
- Microsoft PowerPoint 2007
- Microsoft Visio 2007

For this exercise, virtual desktops leveraged applications by using the Citrix Online Plugin via XenApp. To allow for this functionality, additional configurations were made to the Web Interface.
1. Open the Desktop Studio Console on the Desktop Delivery Controller.
2. Expand the Access folder, expand Web Interface, right click on XenApp Services Site and click
click OK.
5. Verify all settings are correct and click OK, Figure 67.
6. Right click on the site and choose Configure Authentication Methods.
7. Verify that Pass-through and Prompt are enabled, Figure 68.
8. Set Pass-through as the default authentication method and click OK.
9. Desktops will now pass through credentials to XenApp to enumerate applications within the desktop session.

Windows 7 Base Image
Once the DDC has been installed, create the base image file(s) that will be used for provisioning. The same base image file can be used for both PVS and MCS. The Windows 7 Optimization Guide from Citrix was used to optimize the desktop delivery. In addition, the following steps were taken to improve performance:
1. Create a virtual machine in SCVMM or Hyper-V with the following:
- Desired HD size, normally 40GB for Windows 7
- 1.5GB RAM
- Legacy Hyper-V NIC (required to network boot VMs)
2. Boot the VM with Microsoft Windows 7 media.
3. Install Windows 7.
4. Verify Hyper-V Integration Services have been installed.
5. Add the machine to the domain.
When creating the base image, a value of 1.5 GB was used for memory. For the VDI VMs in the template, Dynamic Memory was configured with a minimum of 512 MB and a maximum dependent on the type of worker: 1 GB for task workers, 1.5 GB for productivity users, and 2.0 GB for knowledge users. Once the Windows 7 VM has been created, install XenApp support with the Citrix Online Plugin, along with any additional software desired in the image. As a final step, install the Virtual Desktop Agent into the VM. To install the Virtual Desktop Agent, attach the XenDesktop5.iso to the Windows 7 VM using SCVMM. Once the application has started select Install Virtual Desktop Agent, then select
Advanced Install. In the Advanced Install, select Virtual Desktop Agent and Support for XenApp Application Delivery, and specify the URL of the XenApp Services Site. Manually specify the DDC controller location, and allow for XenDesktop Performance Optimizations, User Desktop Shadowing, and Real Time Monitoring. Once the installation has completed, the VM can be shut down. Two copies need to be made: one to be the PVS master image and one to be the MCS master image.

Machine Creation Services
Machine Creation Services will be used to create the persistent VMs on the SAN storage. All of the servers that will support the persistent users should be installed with Windows Server 2008 R2 SP1 with the Hyper-V role enabled, and configured into a cluster using Cluster Shared Volumes for each volume that was created on the P4800 to support the persistent VMs. The VM to be used as the MCS master image should have its properties modified to enable Dynamic Memory and set the maximum memory limit for the VM as defined by the user type: task workers are assigned 1 GB, productivity workers 1.5 GB, and knowledge workers normally 2 GB.

From the Desktop Studio console on the DDC, right click on Machines and select Create Catalog. When prompted for the host name, specify the name of the Hyper-V cluster. For the Machine Type, specify Dedicated. When prompted, specify the Windows 7 master image created earlier to be used by MCS. Then specify the number of virtual machines to create, the number of vCPUs and the memory to allocate to each VM, and select Create New Accounts for the Active Directory computer accounts. You will also need to specify the OU in which to store the computer names, and a naming scheme for the computer names. NOTE: for each # used in the Account Naming Scheme another digit is added to the name. Finally, specify the administrators that can manage this catalog, and select Finish to start the process.
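The Dynamic Memory limits described above for each worker type can be captured as simple configuration data. The dictionary and helper below are illustrative only — not an SCVMM or XenDesktop API:

```python
# Minimum is 512 MB for every worker type (per the base image template);
# maximum depends on the workload type.
DYNAMIC_MEMORY_MB = {
    "task":         {"minimum": 512, "maximum": 1024},
    "productivity": {"minimum": 512, "maximum": 1536},
    "knowledge":    {"minimum": 512, "maximum": 2048},
}

def memory_limits(worker_type: str) -> tuple:
    """Return (minimum_mb, maximum_mb) for the given worker type."""
    cfg = DYNAMIC_MEMORY_MB[worker_type]
    return cfg["minimum"], cfg["maximum"]
```

Keeping these values in one place makes it easier to apply them consistently when creating catalogs for each user population.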
Once the desktop catalog has been created, you must create a desktop group to assign users to desktops. In the Desktop Studio Console, right click on Assignments and choose Create Desktop Group. Select the catalog of machines created in the previous step, then specify the Active Directory user group that will have access to these VMs and the number of virtual desktops a user can launch within this group at any one time. You will also need to specify the desktop administrators that can manage the group, along with the Desktop Display Name and Desktop Group name. Once complete, select Finish to create the group. The VMs can now be booted, and users can log in through the DDC to access their persistent desktops.

Provisioning Services (PVS)
PVS will be used to support the non-persistent workers. Local storage will be used to keep the write cache files associated with the PVS image for the users. A write-cache file is re-created on every login, so no data is saved between logins. If the configuration is incorrect or an error occurs, the write cache for all of the VMs will default back to the location of the PVS image file. This document assumes Windows Server 2008 R2 SP1 has been installed onto the two physical servers in the RA to support PVS. Prior to installing PVS on the servers, the SCVMM Administration Console must be installed. In addition to the steps below, the following optimizations were performed for this exercise:

- 15 threads per port were configured for the Provisioning Server
- TCP Large Send Offload was disabled on the Provisioning Server (http://support.citrix.com/article/CTX117374)

PVS will be installed and configured on the first server; the second server will be configured to join the existing farm defined during configuration of the first server. This installation assumes one
image file for all non-persistent users, and the image file will be maintained on each physical PVS server. When installing and configuring the first server you will need to specify:

- Where DHCP services run
- That the PXE server runs on this computer
- The name of the new farm
- The SQL server and instance name (it is assumed SQL is running in the data center)
- The database name, farm name, and farm administrators
- The store path for the vDisk images
- The licensing server; currently the DDC is the license server
- The services account for streaming and SOAP services; this account must have access to the vDisk location to be able to stream
- The Active Directory computer account password update, if desired
- The NICs and ports for network communication (defaults were used)
- The TFTP service to provide the ARDBP32.BIN file at boot time
- That the PVS server is listed in the Stream Servers Boot List

Once everything is ready, select Finish to install PVS. For the second PVS server, add it to the existing farm just created.

Settings in the DHCP server scope must be configured for PXE to work correctly. Configure options 66 and 67 on the DHCP server scope that the desktops will boot from. Option 66 should contain one of the Provisioning Server TFTP IP addresses. Option 67 should be configured for the ARDBP32.BIN bootstrap file. As final steps, change the default threads per port from 8 to 15 for all NICs being used on the PVS servers. Then install the PVS 5.6 XenDesktop Setup Wizard hotfix to enable quick provisioning and deployment of VMs: http://support.citrix.com/article/CTX129381.

Once PVS has been installed and configured, it is necessary to create the vDisk image that will be the master image for the VDI VMs. On the PVS server, run the Provisioning Server Management Console from the Start menu. Log in and specify one of the PVS servers as the host to connect to. Once logged in you will need to create a vDisk; for optimum performance, specify a fixed vDisk equal to or larger than the Windows 7 base image disk.
Once the disk is created, verify it is in Private Image mode by right-clicking the vDisk and selecting File Properties. Select the Mode tab to verify Private Image mode. On the Options tab, verify that Active Directory machine account password management is selected. Select OK to exit. Right-click your Collection and select Create Device. Specify the name and MAC address of the base image VM to be captured. For the Boot from option, specify Hard Disk. Under the vDisks tab, select the vDisk created earlier. In SCVMM, boot the PVS base image VM and install the Provisioning Services Target Device software: attach the PVS 5.6 SP1 ISO to the VM as a DVD device, then from within the VM run the software and select the Target Device configuration. When it completes, shut down the VM. In SCVMM, modify the Windows 7 base image VM hardware configuration to boot from the network; to boot from the network, the network adapter must be the legacy adapter. Boot the base image VM. Once booted, you should see the vDisk in the task bar of the VM, and it should be active. From the Start menu, launch XenConvert. Set From to This Machine and To to Provisioning Services vDisk. Verify the size and capacity of the source and destination drives, select the AutoFit feature to ensure the target device software uses the correct vDisk size, and click Next.
Click Optimize for Provisioning Service; it is recommended to accept all specified features. When ready, click Convert. The conversion process will start and can take a while to complete. Figure 69 shows screen shots of the conversion process.
Once the conversion is complete, shut down the VM. The next steps create the write-cache file associated with the PVS image in Standard Image mode and store the cache on local storage. This means creating a local hard disk to hold the VM pagefile and the PVS write cache. In Standard Image mode, if the VM pagefile is on a writeable disk, the PVS write cache can be placed on the same hard disk that contains the pagefile. The VM pagefile will be 1.5 times the memory assigned to the VM, and the disk to be created needs to be the VM pagefile size plus 5 GB for the write cache. So a VM with 2 GB of memory would require a 3 GB pagefile, and the disk being created would need to be 8 GB in size. Once the Windows 7 VM has shut down, go to the PVS server and convert the Windows 7 device to boot from the vDisk instead of from the hard disk; then, using SCVMM, remove the hard disk from the VM but do not delete it.
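The sizing rule above can be sketched as a quick calculation. This is a minimal illustration of the arithmetic stated in the text (pagefile = 1.5 x VM RAM, plus a 5 GB write-cache allowance); the function name and defaults are assumptions for illustration only.

```python
def write_cache_disk_gb(vm_ram_gb: float, cache_gb: float = 5.0) -> float:
    """Size the local write-cache disk for a PVS Standard Image VM.

    The pagefile is 1.5 times the VM's assigned RAM, and the same
    disk must also hold the PVS write cache (5 GB per the text above).
    """
    pagefile_gb = 1.5 * vm_ram_gb
    return pagefile_gb + cache_gb

# A VM with 2 GB of RAM needs a 3 GB pagefile plus 5 GB of cache
print(write_cache_disk_gb(2))  # 8.0
```

This matches the worked example in the text: a 2 GB VM yields an 8 GB write-cache drive.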
In SCVMM, create a new 8 GB fixed-size hard disk (.vhd) on the IDE controller and attach it to the Windows 7 desktop VM. This drive will hold the Provisioning Server write-cache information and is known as the write-cache drive. It should be created on the DAS or local storage associated with the Hyper-V host. The final configuration is shown in Figure 70.
Once the additional disk is created, boot the VM and verify it sees the additional drive. Format the drive with NTFS using a quick format. Once formatted, configure the pagefile for the VM to reside locally by moving the paging file from C: to the write-cache drive and removing the paging file completely from the C: drive. Set the paging file size to 1.5 x the device RAM. Reboot when prompted.
After the reboot, verify the paging file has been removed from C: and placed on the write cache drive. Once verified, shutdown the VM. The next step converts the VM into a template that can be used to deploy VMs. Before converting the VM into a template, edit the properties for the VM and configure Dynamic Memory as done before. These settings will be carried into the template and made available to all of the new VMs to be created using this template.
To convert this virtual machine to a template using SCVMM, right click on the VM name and select New template.
Choose Customization not required for Guest operating system profile during template creation.
This will convert the VM into a template that can be used for deploying VMs with PVS. To deploy this image to multiple desktops, you must first place the vDisk in Standard Image mode on the PVS server. This allows a one-to-many relationship between the vDisk and the VMs.
On the PVS server, navigate to your vDisk store, right click on your vDisk and choose File Properties. Select the Mode tab and choose Standard Image (multi-device, write-cache enabled) for the Access Mode, and choose Cache on devices HD for the Cache Type.
To deploy the desktops, launch the XenDesktop Setup Wizard from the Provisioning Services Console: right-click the site name and select XenDesktop Setup Wizard.
When prompted, specify the host name of the DDC (XenDesktop Controller) and specify where the VM template is stored. You can select multiple hosts to deploy VMs to. Once you have authenticated to the host, choose the template and click OK. When prompted, specify a collection name for the VMs and select the Windows 7 vDisk to be assigned to the virtual machines. Specify the number of virtual machines to create, the vCPUs desired, and the memory for the VMs; the default machine settings are recommended since they are pulled from the template. Select Create new accounts under Active Directory computer accounts to have the XenDesktop Setup Wizard create AD computer accounts automatically. Specify the OU where the AD computer accounts should be created and provide a machine account naming scheme; each # adds another digit to the VM name. Specify the name of the desktop catalog that will be visible in the Desktop Delivery Controller and the appropriate credentials to authenticate with the Desktop Delivery Controller. At the Confirm configuration settings screen, click Finish to start building the VMs. Once complete, the VMs will be ready to use. Figure 76 has screen shots of the steps.
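The # naming-scheme behavior can be illustrated with a small sketch. This is an assumption-laden illustration, not the wizard's actual code: the function name is hypothetical, it assumes the # characters sit at the end of the scheme, and the wizard's exact zero-padding may differ.

```python
def expand_names(scheme: str, count: int, start: int = 1) -> list[str]:
    """Illustrate a machine-account naming scheme where each '#'
    becomes one digit of a zero-padded sequence number.

    e.g. 'VDI-###' with count=3 -> VDI-001, VDI-002, VDI-003.
    Assumes the '#' characters are a trailing run.
    """
    width = scheme.count("#")
    if width == 0:
        raise ValueError("scheme must contain at least one '#'")
    prefix = scheme.split("#")[0]
    return [f"{prefix}{n:0{width}d}" for n in range(start, start + count)]

print(expand_names("VDI-###", 3))  # ['VDI-001', 'VDI-002', 'VDI-003']
```

More # characters in the scheme allow a larger pool of machine names, which matters when creating hundreds of VMs in one pass.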
Citrix Profile Management

Citrix Profile Management allows users to access non-persistent desktops in a persistent manner by retaining user settings and data between sessions. For persistent desktops, it also ensures that user settings and data are retained in the event of corruption within the user's virtual desktop. In order to leverage Citrix Profile Management, a file share must be created and the Citrix Profile Management agent must be installed in the virtual desktop. When delivering applications or remote desktops via XenApp, the Citrix Profile Management agent can be installed on the XenApp servers to retain user settings and data. The behavior of the agent on the desktops and XenApp servers is managed through a Group Policy Object (GPO) that is included with the Citrix Profile Management product. The following settings were configured in the GPO to control the agent:
Summary
As stated earlier, the goal of this document was to create a self-contained reference architecture on HP ProLiant servers and storage using Citrix XenDesktop 5 on Microsoft Windows Server 2008 R2 SP1, supporting 1600 VDI users: 400 persistent and 1200 non-persistent. In summary, the advantages of this RA include:
- DAS and SAN storage. HP hardware allows configuration of both DAS and SAN hardware in the same RA, and Citrix XenDesktop 5 and Microsoft Hyper-V can take advantage of both. Running users on DAS storage reduces the storage cost per user by more than 50%.
- Offload of application execution to XenApp servers, lessening the workload and IOPs for the VDI VMs and allowing support of 10-15% more VMs per server.
- Microsoft Windows Server 2008 R2 SP1 Dynamic Memory, which allows utilization of all of the physical memory for VDI VMs and makes the most of the hardware configurations.
- A complete, self-contained POD with integrated management. All application, boot, login, migration, and execution traffic related to the VDI infrastructure stays within the RA rack. The only network wiring required to leave the rack is the redundant connections to the corporate production network and the data center management network.
To extend this further, HP has multiple end-point devices, all supporting the Citrix Receiver technology. From the end-point device, to the networking, to the data center, HP can meet the requirements to implement the VDI RA. This RA is the basis for a full Client Virtualization solution leveraging Citrix FlexCast and the power of Microsoft Hyper-V on HP ProLiant servers and storage. In this RA, XenApp servers were shown to offload application execution from the VMs, providing additional headroom on each Hyper-V host server to run more VMs and take advantage of Hyper-V Dynamic Memory.
This was done to highlight the ease of adding XenApp and remote session servers to the VDI RA, extending the capability of the solution and leveraging the full capabilities of Citrix FlexCast while maintaining the same management infrastructure.
Expanding the RA One of the benefits of the POD approach to the RA is the ease in expanding and growing. Figure 78 looks at extending the RA VDI components to a multi-rack solution.
The racks in Figure 78 consist of two Virtual Connect domains. The left-most rack is a P4800 domain using a six-node P4800 configuration; it supports 1200 persistent productivity workers on 16 BL490c G7 servers and 1200 XenApp connections on six BL460c G7 servers, with four BL460c G7 servers running App-V and sessions and two servers acting as management servers for the domain. The second domain is a DAS domain supporting 6600 non-persistent VDI task workers on 60 BL490c G7 servers, utilizing four BL460c G7 servers as provisioning servers. This domain uses twenty-six BL460c G7 servers supporting 6400 XenApp connections and two BL460c G7 servers for App-V and sessions.
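The per-host densities implied by these counts can be checked with simple arithmetic. This sketch only restates the figures from the paragraph above; the helper function is illustrative, not part of any HP tool.

```python
def users_per_host(users: int, hosts: int) -> float:
    """Average VDI user density per physical host."""
    return users / hosts

# Figures from the multi-rack layout described above
print(users_per_host(1200, 16))  # persistent workers per BL490c G7: 75.0
print(users_per_host(6600, 60))  # non-persistent workers per BL490c G7: 110.0
```

The higher density on the DAS domain reflects the lighter task-worker profile and the PVS/DAS design rather than any hardware difference between the hosts.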
The actual numbers supported in this configuration will depend on the user type and load. No single approach (remote desktops, non-persistent VMs, or persistent VMs) will address an enterprise-level solution on its own. A best practice is to perform an environment assessment using HP's Client Virtualization Analysis and Modeling service to understand the current user environment. This will help identify the types of users best aligned with remote desktops, non-persistent VDI, or persistent VDI, and determine the best deployment scenario using the Citrix FlexCast model on Microsoft Hyper-V with HP ProLiant servers and storage.
For reads, the average during login was 8-10 IOPs, with an overall average of 5-6 read IOPs across the entire test run. When using MCS for the persistent VMs, the read/write ratio is closer to 50/50. In the 80-user VM run, the average total IOPs was just over 5,000.

The option of using SSDs or I/O accelerator cards is often considered, but in a properly configured XenDesktop implementation with Provisioning Server these bring little if any performance gain while increasing cost. When using Provisioning Server configured with client-side write cache, PVS holds in memory the commonly referenced bits from the master image file. The VMs read from these common bits in memory, so little to no I/O is generated against the image file. To highlight this, a test was done with 80 VMs accessing the same image from a PVS server configured with 48 GB of memory and using client-side write cache. The PVS server was started, then all 80 VMs were booted. After 15 seconds, I/O to the image file went to nil. Once all 80 VMs had started, they were shut down and then restarted. On reboot of the 80 VMs, no I/O traffic was seen to the master image file on the PVS server.

When using Machine Creation Services (MCS), SSDs become more problematic. MCS requires the master image file and the differential files for the VMs to reside on the same storage repository, which means the writes to the differential files land on the SSDs. SSD technology has progressed well and will continue to improve, but with current SSD technology a high volume of writes can cause SSD failure. Due to these considerations, it is not recommended to run MCS on SSDs at this time. Other I/O acceleration cards can be considered, but the implementer must understand their write performance and its impact.

Storage planning

Based on observations and analysis of the storage workload, storage planning with client-side cache must account for a primarily write-driven workload.
Based on HP's analysis, the bulk of the reads will be offloaded to the provisioning server. The remaining I/O is observed as the write portion of the overall I/O per user plus an average of 1 read I/O. The per-user requirement can thus be estimated as:
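The reasoning above can be expressed as a small sizing helper. This is a sketch under stated assumptions, not the document's original formula: the per-user write IOPs figure is an input the implementer must measure in their own environment (the value 5 below is purely an assumed example), while the "+1 read" term follows the text.

```python
def per_user_iops(write_iops: float, read_iops: float = 1.0) -> float:
    """Estimate back-end IOPs per user when PVS absorbs most reads.

    Per the analysis above, plan for the measured write portion of
    per-user I/O plus an average of about 1 read IOPs hitting storage.
    """
    return write_iops + read_iops

# Assumed example: 5 measured write IOPs per user across 400 users
users = 400
total = users * per_user_iops(5)
print(total)  # 2400.0
```

A helper like this makes it easy to re-run the estimate once the write workload has been measured with a tool such as the CVAM service described below.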
For the best understanding of overall user requirements, including I/O, CPU, memory, and even application-specific information, HP offers the Client Virtualization Analysis and Modeling Service. For more information about the service, see the information sheet at http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-2409ENW.pdf.
#Configure Protocols
ENABLE WEB
ENABLE SECURESH
DISABLE TELNET
ENABLE XMLREPLY
ENABLE GUI_LOGIN_DETAIL

#Configure Alertmail
SET ALERTMAIL SMTPSERVER 0.0.0.0
DISABLE ALERTMAIL

#Configure Trusted Hosts
#REMOVE TRUSTED HOST ALL
DISABLE TRUSTED HOST

#Configure NTP
SET NTP PRIMARY 10.1.0.2
SET NTP SECONDARY 10.1.0.3
SET NTP POLL 720
DISABLE NTP

#Set SNMP Information
SET SNMP CONTACT "Name"
SET SNMP LOCATION "Locale"
SET SNMP COMMUNITY READ "public"
SET SNMP COMMUNITY WRITE "private"
ENABLE SNMP

#Set Remote Syslog Information
SET REMOTE SYSLOG SERVER ""
SET REMOTE SYSLOG PORT 514
DISABLE SYSLOG REMOTE

#Set Enclosure Bay IP Addressing (EBIPA) Information for Device Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA SERVER 10.0.0.1 255.0.0.0 1
SET EBIPA SERVER GATEWAY NONE 1
SET EBIPA SERVER DOMAIN "vdi.net" 1
ENABLE EBIPA SERVER 1
SET EBIPA SERVER 10.0.0.2 255.0.0.0 2
SET EBIPA SERVER GATEWAY NONE 2
SET EBIPA SERVER DOMAIN "vdi.net" 2
ENABLE EBIPA SERVER 2
SET EBIPA SERVER 10.0.0.3 255.0.0.0 3
SET EBIPA SERVER GATEWAY NONE 3
SET EBIPA SERVER DOMAIN "vdi.net" 3
ENABLE EBIPA SERVER 3
SET EBIPA SERVER 10.0.0.4 255.0.0.0 4
SET EBIPA SERVER GATEWAY NONE 4
SET EBIPA SERVER DOMAIN "vdi.net" 4
ENABLE EBIPA SERVER 4
SET EBIPA SERVER 10.0.0.5 255.0.0.0 5
SET EBIPA SERVER GATEWAY NONE 5
SET EBIPA SERVER DOMAIN "vdi.net" 5
ENABLE EBIPA SERVER 5
SET EBIPA SERVER 10.0.0.6 255.0.0.0 6
SET EBIPA SERVER GATEWAY NONE 6
SET EBIPA SERVER DOMAIN "vdi.net" 6
ENABLE EBIPA SERVER 6
SET EBIPA SERVER 10.0.0.7 255.0.0.0 7
SET EBIPA SERVER GATEWAY NONE 7
SET EBIPA SERVER DOMAIN "vdi.net" 7
ENABLE EBIPA SERVER 7
SET EBIPA SERVER 10.0.0.8 255.0.0.0 8
SET EBIPA SERVER GATEWAY NONE 8
SET EBIPA SERVER DOMAIN "vdi.net" 8
ENABLE EBIPA SERVER 8
SET EBIPA SERVER 10.0.0.9 255.0.0.0 9
SET EBIPA SERVER GATEWAY NONE 9
SET EBIPA SERVER DOMAIN "vdi.net" 9
ENABLE EBIPA SERVER 9
SET EBIPA SERVER 10.0.0.10 255.0.0.0 10
SET EBIPA SERVER GATEWAY NONE 10
SET EBIPA SERVER DOMAIN "vdi.net" 10
ENABLE EBIPA SERVER 10
SET EBIPA SERVER 10.0.0.11 255.0.0.0 11
SET EBIPA SERVER GATEWAY NONE 11
SET EBIPA SERVER DOMAIN "vdi.net" 11
ENABLE EBIPA SERVER 11
SET EBIPA SERVER 10.0.0.12 255.0.0.0 12
SET EBIPA SERVER GATEWAY NONE 12
SET EBIPA SERVER DOMAIN "vdi.net" 12
ENABLE EBIPA SERVER 12
SET EBIPA SERVER 10.0.0.13 255.0.0.0 13
SET EBIPA SERVER GATEWAY NONE 13
SET EBIPA SERVER DOMAIN "vdi.net" 13
ENABLE EBIPA SERVER 13
SET EBIPA SERVER 10.0.0.14 255.0.0.0 14
SET EBIPA SERVER GATEWAY NONE 14
SET EBIPA SERVER DOMAIN "vdi.net" 14
ENABLE EBIPA SERVER 14
SET EBIPA SERVER NONE NONE 14A
SET EBIPA SERVER GATEWAY 10.65.1.254 14A
SET EBIPA SERVER DOMAIN "" 14A
SET EBIPA SERVER 10.0.0.15 255.0.0.0 15
SET EBIPA SERVER GATEWAY NONE 15
SET EBIPA SERVER DOMAIN "vdi.net" 15
ENABLE EBIPA SERVER 15
SET EBIPA SERVER 10.0.0.16 255.0.0.0 16
SET EBIPA SERVER GATEWAY NONE 16
SET EBIPA SERVER DOMAIN "vdi.net" 16
ENABLE EBIPA SERVER 16

#Set Enclosure Bay IP Addressing (EBIPA) Information for Interconnect Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA INTERCONNECT 10.0.0.101 255.0.0.0 1
SET EBIPA INTERCONNECT GATEWAY NONE 1
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 1
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 1
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 1
ENABLE EBIPA INTERCONNECT 1
SET EBIPA INTERCONNECT 10.0.0.102 255.0.0.0 2
SET EBIPA INTERCONNECT GATEWAY NONE 2
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 2
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 2
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 2
ENABLE EBIPA INTERCONNECT 2
SET EBIPA INTERCONNECT 10.0.0.103 255.0.0.0 3
SET EBIPA INTERCONNECT GATEWAY NONE 3
SET EBIPA INTERCONNECT DOMAIN "" 3
SET EBIPA INTERCONNECT NTP PRIMARY NONE 3
SET EBIPA INTERCONNECT NTP SECONDARY NONE 3
ENABLE EBIPA INTERCONNECT 3
SET EBIPA INTERCONNECT 10.0.0.104 255.0.0.0 4
SET EBIPA INTERCONNECT GATEWAY NONE 4
SET EBIPA INTERCONNECT DOMAIN "" 4
SET EBIPA INTERCONNECT NTP PRIMARY NONE 4
SET EBIPA INTERCONNECT NTP SECONDARY NONE 4
ENABLE EBIPA INTERCONNECT 4
SET EBIPA INTERCONNECT 10.0.0.105 255.0.0.0 5
SET EBIPA INTERCONNECT GATEWAY NONE 5
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 5
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 5
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 5
ENABLE EBIPA INTERCONNECT 5
SET EBIPA INTERCONNECT 10.0.0.106 255.0.0.0 6
SET EBIPA INTERCONNECT GATEWAY NONE 6
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 6
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 6
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 6
ENABLE EBIPA INTERCONNECT 6
SET EBIPA INTERCONNECT 10.0.0.107 255.0.0.0 7
SET EBIPA INTERCONNECT GATEWAY NONE 7
SET EBIPA INTERCONNECT DOMAIN "" 7
SET EBIPA INTERCONNECT NTP PRIMARY NONE 7
SET EBIPA INTERCONNECT NTP SECONDARY NONE 7
ENABLE EBIPA INTERCONNECT 7
SET EBIPA INTERCONNECT 10.0.0.108 255.0.0.0 8
SET EBIPA INTERCONNECT GATEWAY NONE 8
SET EBIPA INTERCONNECT DOMAIN "" 8
SET EBIPA INTERCONNECT NTP PRIMARY NONE 8
SET EBIPA INTERCONNECT NTP SECONDARY NONE 8
ENABLE EBIPA INTERCONNECT 8
SAVE EBIPA

#Uncomment following line to remove all user accounts currently in the system
#REMOVE USERS ALL
#Create Users add at least 1 administrative user
ADD USER "admin"
SET USER CONTACT "Administrator"
SET USER FULLNAME "System Admin"
SET USER ACCESS ADMINISTRATOR
ASSIGN SERVER 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,1A,2A,3A,4A,5A,6A,7A,8A,9A,10A,11A,12A,13A,14A,15A,16A,1B,2B,3B,4B,5B,6B,7B,8B,9B,10B,11B,12B,13B,14B,15B,16B "Administrator"
ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "Administrator"
ASSIGN OA "Administrator"
ENABLE USER "Administrator"

#Password Settings
ENABLE STRONG PASSWORDS
SET MINIMUM PASSWORD LENGTH 8

#Session Timeout Settings
SET SESSION TIMEOUT 1440

#Set LDAP Information
SET LDAP SERVER ""
SET LDAP PORT 0
SET LDAP NAME MAP OFF
SET LDAP SEARCH 1 ""
SET LDAP SEARCH 2 ""
SET LDAP SEARCH 3 ""
SET LDAP SEARCH 4 ""
SET LDAP SEARCH 5 ""
SET LDAP SEARCH 6 ""
#Uncomment following line to remove all LDAP accounts currently in the system
#REMOVE LDAP GROUP ALL
DISABLE LDAP

#Set SSO TRUST MODE
SET SSO TRUST Disabled

#Set Network Information
#NOTE: Setting your network information through a script while
# remotely accessing the server could drop your connection.
# If your connection is dropped this script may not execute to conclusion.
SET OA NAME 1 VDIOA1
SET IPCONFIG STATIC 1 10.0.0.255 255.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0
SET NIC AUTO 1
DISABLE ENCLOSURE_IP_MODE
SET LLF INTERVAL 60
DISABLE LLF

#Set VLAN Information
SET VLAN FACTORY
SET VLAN DEFAULT 1
EDIT VLAN 1 "Default"
ADD VLAN 21 "VDI"
ADD VLAN 29 Migration
ADD VLAN 93 PUB_ISCSI
ADD VLAN 110 "MGMT_VLAN"
SET VLAN SERVER 1 1
SET VLAN SERVER 1 2
SET VLAN SERVER 1 3
SET VLAN SERVER 1 4
SET VLAN SERVER 1 5
SET VLAN SERVER 1 6
SET VLAN SERVER 1 7
SET VLAN SERVER 1 8
SET VLAN SERVER 1 9
SET VLAN SERVER 1 10
SET VLAN SERVER 1 11
SET VLAN SERVER 1 12
SET VLAN SERVER 1 13
SET VLAN SERVER 1 14
SET VLAN SERVER 1 15
SET VLAN SERVER 1 16
SET VLAN INTERCONNECT 1 1
SET VLAN INTERCONNECT 1 2
SET VLAN INTERCONNECT 1 3
SET VLAN INTERCONNECT 1 4
SET VLAN INTERCONNECT 1 5
SET VLAN INTERCONNECT 1 6
SET VLAN INTERCONNECT 1 7
SET VLAN INTERCONNECT 1 8
SET VLAN OA 1
DISABLE VLAN
SAVE VLAN

DISABLE URB
SET URB URL ""
SET URB PROXY URL ""
SET URB INTERVAL DAILY 0
Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. 4AA3-5327ENW, Created June 2011