
P2V Considerations and Pre/Post-Migration Checklist


Candidate selection: Use capacity tools (Capacity Planner, for example) to qualify a physical server as a P2V candidate. Some points to note:

Small to medium-sized workloads with low SPECint requirements are the best fit for virtualization. The available SPECint capacity will eventually increase as newer hardware with faster CPUs arrives, but most applications are not CPU-intensive anyway.

- Check the memory utilization at the 95th percentile. Memory allocation can be huge (30+ GB), but keep an eye on active memory, which is always the key (see the counter-collection sketch after this section).
- Average disk read/write at the 95th percentile should not be excessively high; benchmarks are set within the organization depending on bandwidth and usage, and the same goes for the network.
- It is the storage protocol that dictates your disk space requirements. SAN can be an expensive deal, so thin provisioning (e.g. on NFS) is worth looking at if there are large disk space requirements with low disk usage.

- Set benchmarks for OS drives only if your applications and databases reside on a separate data volume.
- Applications without low-latency requirements can be anything: Exchange, SQL, Oracle, SAP or any other business-critical application, as long as it gets the required resources.
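If you do not have Capacity Planner output at hand, the Windows typeperf utility can collect the raw counters these 95th-percentile checks are based on. A minimal sketch follows; the counter list, sampling interval and output path are only suggested starting points, not part of the original checklist.

:: Sample CPU, memory and disk counters every 5 minutes for 24 hours (288 samples)
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Disk Read Bytes/sec" "\PhysicalDisk(_Total)\Disk Write Bytes/sec" -si 300 -sc 288 -f CSV -o C:\Temp\p2v-counters.csv

The resulting CSV can then be loaded into Excel or your capacity tool of choice to read off the 95th-percentile values.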

Some showstoppers which might prevent candidate selection:


- Applications with low-latency requirements
- Very large workloads and large data sets
- Non-standard hardware requirements: modems, fax cards, dongles, etc.

Pre-Migration Checks:


- Use capacity tools to qualify a physical server as a P2V candidate, preferably Capacity Planner
- Hostname
- OS type
- Server model
- # of CPU sockets and cores

- Physical memory installed
- Disk capacity requirements, including any LUNs from FC or SCSI, or NFS/CIFS mount points
- CPU, memory and disk usage (capacity tools can give an insight)
- Decide on the vCPUs, virtual memory and disk space to be allocated
- It is always good to have a local Administrator account on the physical box prior to the P2V; otherwise log in at least once with your domain credentials so that they are cached locally
- Record the IP configuration, possibly a screen dump of ipconfig (a few example commands for gathering this inventory follow the list)
- iLO information
- Check for disk defragmentation
- Check whether the applications are hard-coded to any IP or MAC addresses
- RDP access
- Information about all the applications, and which services need to be stopped during migration
- Possibly a runbook that records each milestone: P2V started, in progress, successful, failed, etc.
- On-board resources from the OS and application teams, along with their contact information
- Ensure there are no hardware dongles and take note of compatibility, otherwise the effort is wasted
- Ensure the firewall rules are opened for the destination network, if applicable
- Ensure your ESX/ESXi host's management port group is connected to at least a 1 Gb port
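As a rough sketch, much of the inventory above can be captured from a command prompt on the source server. These are standard Windows commands; the output directory C:\Temp\P2V is just an example location.

mkdir C:\Temp\P2V 2>nul
:: Hostname, OS details, server model, CPU sockets/cores and installed memory
hostname
systeminfo > C:\Temp\P2V\systeminfo.txt
wmic cpu get DeviceID,NumberOfCores,NumberOfLogicalProcessors
wmic computersystem get Manufacturer,Model,TotalPhysicalMemory
:: IP configuration dump to attach to the runbook
ipconfig /all > C:\Temp\P2V\ipconfig.txt
:: Confirm a usable local Administrator account exists
net localgroup Administrators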

Post-Migration Checklist:


- Ensure the physical server is powered off before powering on the VM; otherwise ensure the network adapter is disabled on the VM
- Install VMware Tools and ensure the hardware version is compliant with the ESX/ESXi version
- Remove unwanted serial ports and uninstall all the hardware management software (HP Insight Manager, etc.)
- Ensure there are no yellow exclamation marks in Device Manager
- Check and adjust the Hardware Abstraction Layer (HAL), if required
- Once the server is powered on, enable the NIC and assign the IP address
- Start up any application services that were disabled, and check that all Automatic services have started (see the spot-check commands after this list)
- Ensure the physical server is no longer pingable on the network, otherwise you may run into duplicate entries in AD
- Get your OS team (check the Windows event logs) and application team to carry out stress testing
- Test network and disk latency
- Most importantly, test the user experience
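A minimal set of spot checks, run inside the new VM once it is on the network, might look like the following; the old server name is a placeholder for your own.

:: The old physical box should no longer answer on the network
ping -n 2 oldphysicalserver01
:: List Automatic services that are not running
wmic service where "StartMode='Auto' and State<>'Running'" get Name,State
:: List devices reporting a problem (the yellow exclamation marks in Device Manager)
wmic path Win32_PnPEntity where "ConfigManagerErrorCode<>0" get Name,ConfigManagerErrorCode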

Please feel free to add anything critical that I have missed.

P2V Pre-Migration Checklist and Considerations

My previous post was a P2V post-migration checklist. This post is a pre-migration checklist covering all the information that should be gathered and checked before doing any P2V conversions.

I have been involved in a number of larger P2V projects (50+ P2Vs) and, in my experience, proper planning is a key element of a successful project. Typically you, as the VMware or P2V person, have no real knowledge of the Windows servers to be converted; they're just another server. This means that you rely on other people to collect relevant data on your behalf. Such a setup has an important implication: as you have no knowledge of the server, you cannot release it into production yourself. You should let a Windows admin verify the OS, after which it can be handed over for application testing. Resources for both tests should be allocated up front by the project manager, and they should be standing by in the agreed maintenance window.

With regard to the length of maintenance windows, we have had the most success with long time frames during weekends, e.g. 36 hours from Saturday 08.00 a.m. to Sunday 08.00 p.m. Obviously, such a window can be difficult to obtain, but it has two significant advantages: 1) Specifying the actual conversion time can be tricky; it happens that a 30 GB server takes 12 hours to convert for one reason or another. 2) It is less stressful to do P2Vs during weekends, and a long window lets you work at your own pace. Furthermore, conversions with large disks (e.g. 200+ GB) can run overnight.

Now, a few words about the checklist. Over time it has been gradually extended as we have learned important lessons, some of them the hard way: for example, a server that hadn't been checked for hardware dongles (forcing a rollback), or a VLAN that hadn't been properly trunked. A specific list will match a specific scenario, so the list will typically be modified to some degree for each project. However, a large part of the list will remain the same, so hopefully it can be used for inspiration. We use SharePoint 2007 to organise the lists; they can be updated dynamically, which is practical when multiple people have to update them at the same time.

- Server name
- OS type
- Server model
- Has Capacity Planner run for this server?
- # of CPU sockets
- # of CPU cores
- Amount of physical memory installed
- Physical disk capacity (C-drive, D-drive, etc.)
- Current CPU usage (preferably from Capacity Planner)
- Current memory usage (preferably from Capacity Planner)
- Current physical disk usage (C-drive, D-drive, etc.)
- # of vCPUs that should be assigned
- Amount of memory to be assigned to the VM
- Sizes of vDisks after resizing (C-drive, D-drive, etc.; remember separate .vmdks for each logical volume)
- Total size of vDisks (so you can sum up the total disk capacity needed and ask for storage up front; see the per-volume command after these columns)

- Local administrator credentials (local Windows accounts are recommended)
- Ipconfig /all screen dump attached to the list (to ensure you have the right IP and MAC address)
- iLO information (address, credentials), in case you have to do a cold migration
- Has the server been defragmented? (this can significantly speed up conversion rates)
- Has the server been checked for hardware dongles?
- Has the VLAN been trunked?

- Do server application licenses have any binding to a MAC or IP address?
- Remote access type (RDP, Netop)? (for stopping services up front)
- Physical server location
- Applications on the server
- Which services to stop on the server before conversion
- OS tester contact info
- Application tester contact info (for pre- and post-migration tests)
- Server to be converted by (employee)
- Date for conversion
- Conversion progress/status (not begun, P2V begun, handed over to OS testing, released to production, etc.)
- Has the physical server been shut down?
- Notes
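For the disk-related columns (physical disk capacity, current disk usage, vDisk sizes), a quick per-volume dump can be taken on each candidate server; used space is simply Size minus FreeSpace, and summing the planned vDisk sizes then gives the total datastore capacity to request up front.

:: Capacity and free space for every local volume (values are in bytes)
wmic logicaldisk where "DriveType=3" get DeviceID,VolumeName,Size,FreeSpace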

VMware P2V Converter Best Practices


By ldhoore, 8 January 2010 (category: VMware)

The best and easiest approach to converting a Windows operating system from a physical machine to a virtual machine is to perform a hot migration with VMware Converter installed locally on the source (physical) machine. Below is an overview of the different steps involved in a P2V (physical-to-virtual) conversion. Note that these steps only apply to Windows operating systems.

BEFORE CONVERSION

- Confirm that the source server has at least 200 MB of free disk space on its system volume. This space is required by the disk snapshot features in VMware Converter.
- Confirm that the source machine has at least 364 MB of RAM.
- If you have software mirroring, break the mirror (but not the data), since VMware Converter does not support software mirrors.
- Clean up any temporary files and unneeded data.
- Change all hardware-related services to the Disabled startup mode.
- Download the following utilities/scripts and install them to the directory c:\Temp\P2V on the source server. The following files are included:
  o p1-HWPhysical.bat
  o p2-installP2VConverter.bat
  o p3-SystemConfigUtil.bat
  o v1-HWVirtual.bat
  o v2-vmprofile.bat
  o v3-enablehdwacc.bat
  o v3-enablehdwacc.vbs
  o v4-renameNICs.bat
  o v4-renameNICs.vbs
  o v5-setip.bat
  o v6-uninstallP2Vconverter.bat
  o v7-SystemConfigUtil.bat
  o v8-hiddendevices.bat
  o v9-HALupdate.bat
  o PSPCleaner.exe
  o comm.exe
  o libiconv2.dll
  o libintl3.dll
  o devcon.exe
  o VMware vCenter Converter Standalone 4.0.1 build 161434 (see also http://www.vmware.com/download/converter/)
- Log on to the source machine (mstsc -v:servername /F -console) with a local administrator account and open a command window.
- Run c:\Temp\P2V\p1-HWPhysical.bat
  o The script creates a list of all devices on the physical machine (including non-present devices).
- Install the VMware vCenter Converter Standalone software by executing c:\Temp\P2V\p2-installP2VConverter.bat
- Run the System Configuration Utility on the source server by executing c:\Temp\P2V\p3-SystemConfigUtil.bat to reduce the number of services and applications running at startup (disable everything except the Microsoft services and the VMware Converter services):
  o On the General tab, select Selective Startup and uncheck Load Startup Items
  o On the Services tab, select Hide All Microsoft Services, click Disable All, then re-mark the VMware vCenter Converter Agent and VMware vCenter Converter Server services
  o Click Apply, click Close, then click Restart
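Before kicking off the conversion, the free-space, memory and hardware-service prerequisites above can be checked quickly from a command prompt. This is only a sketch: the findstr patterns and the service name "SomeHardwareAgent" are placeholders, since the actual vendor agent names vary from server to server.

:: Free space on the system volume (Converter needs at least 200 MB for its snapshot)
dir C:\ | find "bytes free"
:: Installed RAM (Converter needs at least 364 MB)
wmic computersystem get TotalPhysicalMemory
:: Find vendor hardware agents, then disable and stop each one found
sc query state= all | findstr /i "cpq hp insight dell ibm"
sc config "SomeHardwareAgent" start= disabled
sc stop "SomeHardwareAgent"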


CONVERSION

- Log on to the source machine with a local administrator account.
- Start the VMware vCenter Converter Standalone application (a shortcut should be available on the desktop).
- Select Convert Machine.
- Specify Source:
  o Select source type: Powered-on machine
  o Specify the powered-on machine: This local machine
  o Click Next
- Specify Destination:
  o Destination Type: select VMware Infrastructure virtual machine
  o VMware Infrastructure details:
    Server: type the IP address of the ESX server you want to convert to (or of your vCenter server)
    User name: type the user name of an administrative account for the above server
    Password: type the password of the above user name
    Click Next
  o Host/Resource:
    Select the ESX host/group you want to convert to
    Virtual machine name: type the name of the destination virtual machine (this is normally already set)
    Datastore: select the datastore in which to place the destination virtual machine
    Click Next
- View/Edit Options:
  o Data to copy:
    Keep the defaults unless you need to resize the partitions
    Select Ignore page file and hibernation file
  o Devices:
    Select the number of processors; adapt if needed (remember to revert the HAL after conversion if you are changing from a multi-processor to a uniprocessor machine)
    Disk controller: select SCSI LSI Logic for Windows 2003 and later
    Memory for this virtual machine: adapt if needed
  o Network adapters:
    Choose the number of network adapters you need
    Select the appropriate VLAN
    Select Connect at power-on
  o Services:
    Go to the Destination Services tab
    Change the startup mode to Disabled for all services you will not need in the virtual machine
  o Advanced options:
    De-select Synchronize changes that occur to the source during cloning
    De-select Power on target machine
    De-select Install VMware Tools on the imported machine
    Select Remove System Restore checkpoints on destination
    Select Reconfigure destination virtual machine
  o Click Next
- Ready to Complete:
  o Review the summary information
  o Click Finish

AFTER CONVERSION

- Shut down the physical machine.
- Use the VI Client to log on to your vCenter server or to your ESX server.
- Review and adjust the virtual hardware settings:
  o Adjust the number of NICs, CPUs, RAM, etc.
  o Remove any unnecessary devices such as serial ports, USB controllers, COM ports and floppy drives.
- Start the virtual machine.
- Log on to the virtual machine.
- Run c:\Temp\P2V\v1-HWVirtual.bat
  o Creates a list of all devices that match the virtual machine
  o Compares the list of all devices of the physical machine (created in the step before conversion) with the list of all devices that match the virtual machine (created in the previous step)
  o Removes all phantom hardware
  o Rescans hardware
  o Reboots the server
- Change the registry key for the profile problem in VMware (see the related VMware article) by executing c:\Temp\P2V\v2-vmprofile.bat
- Enable video hardware acceleration by executing c:\Temp\P2V\v3-enablehdwacc.bat (this command calls the VBS script c:\Temp\P2V\v3-enablehdwacc.vbs)
- Rename your network connections by executing c:\Temp\P2V\v4-renameNICs.bat (this command calls the VBS script c:\Temp\P2V\v4-renameNICs.vbs)
- Set the IP information on the first network connection by executing c:\Temp\P2V\v5-setip.bat %1 %2 %3 %4 %5 %6 %7. From this step on you should be able to connect to the virtual server via RDP.
- Uninstall VMware vCenter Converter Standalone by executing c:\Temp\P2V\v6-uninstallP2Vconverter.bat
- If you are converting from HP ProLiant hardware, you can clean up the HP hardware-related drivers, utilities and agents using the HP ProLiant Support Pack Cleaner from Guillermo Musumeci. Execute c:\Temp\P2V\PSPCleaner.exe
- Run the System Configuration Utility on the virtual server by executing c:\Temp\P2V\v7-SystemConfigUtil.bat, select the Normal Startup option and reboot your server.
- Show all hidden devices and uninstall any unused devices by executing c:\Temp\P2V\v8-hiddendevices.bat
  o Select Show hidden devices
  o Check whether there are still unused devices present and uninstall them manually
- Update the HAL if you changed from multi- to uniprocessor by executing the script c:\Temp\P2V\v9-HALupdate.bat
- Reboot your server.
- Install VMware Tools.
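The scripts themselves are not reproduced in this post. Purely as an illustration, a v5-setip.bat along the lines below would cover the basic case with netsh (taking the IP, mask, gateway and primary DNS as parameters), and the hidden-device clean-up typically relies on the devmgr_show_nonpresent_devices trick; the connection name and parameter order here are assumptions, not the actual script contents.

:: Hypothetical v5-setip.bat: %1=IP address, %2=subnet mask, %3=default gateway, %4=primary DNS
netsh interface ip set address name="Local Area Connection" static %1 %2 %3 1
netsh interface ip set dns name="Local Area Connection" static %4
:: Show non-present (phantom) devices in Device Manager for manual clean-up
set devmgr_show_nonpresent_devices=1
start devmgmt.msc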

VMware vSphere Pre-installation Checklist Suggestion:

1. Is your hardware supported? Check the HCL.
2. How many NICs will you have per host?
3. Will it be 1 Gb or 10 Gb?
4. Do you have enough ports and cables/fiber?
5. Is your network set up?
6. Are your VLANs ready?
7. Is your DNS set up?
8. Do you have host IP addresses ready?
9. Do you have management and data networks (VLANs) ready?
10. Do you know what your primary and secondary DNS IPs are?
11. Do you know what your gateway IP is?
12. Make a list of names for your host servers. FQDNs are required.
13. Do you have a list of IP addresses for your data and management networks?
14. Is your external network configured correctly (VLANs, routing)?
15. Is your storage configured correctly (SAN or NAS)? IQNs, igroups, LUNs, iSCSI, etc.
16. Do you have a storage plan with LUNs and IDs ready?
17. Are you using multi-pathing, and what is your plan for it?
18. Do you have physical or virtual servers ready for running vCenter and MS SQL?
19. Do you have the software for ESXi and vCenter downloaded?
20. Do you have a database ready with SA access?
21. Do you have the right VMware licenses?
22. Do you have enough VMware licenses?
23. What patches do you need on your ESXi hosts and vCenter?
24. Do you have a configuration plan for DRS, HA, vMotion and Storage vMotion?
25. Do you have a user access plan for ESXi and vCenter?
26. What else should be on your vSphere pre-installation checklist? [add it now]
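Several of these points (DNS, host naming, gateway addressing) can be sanity-checked from any Windows workstation before installation day. The host name and addresses below are purely hypothetical examples.

:: Forward and reverse DNS lookups for an ESXi host
nslookup esx01.example.com
nslookup 192.168.10.21
:: Gateway reachability on the management VLAN
ping -n 2 192.168.10.1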

Blades vs Rack Servers For vSphere


By Joe

Blades Are Sexy!


Are you trying to decide whether to use blades and a chassis instead of rack-mounted servers for virtualization? Here are the pros and cons of blades vs. rack servers.
Pros of Blade and Chassis for VMware vSphere

First of all, blades and chassis are cool, especially UCS! (Now that we have that out of the way, let's move on.)

- Uses fewer network ports (2 or 4 per chassis, depending on storage type).
- Uses less power.
- Uses less cabling.
- Creates less heat.
- Uses less rack space (approx. 10U for 16 blades).
- Less rack space and square footage means a higher-density data center.
- Centralized server management.

Cons of Blade and Chassis

- Cool is more expensive (1 chassis, 16 blades and networking: approx. $300K).
- Higher density means a larger failure domain. Approx. 350 VMs in one fully populated chassis would require 2 more fully populated chassis (3 would be better) running at 35-70% utilization; a rough worked example follows this list. Remember that when you carry out maintenance on these chassis, the VMs need to be vMotioned somewhere or powered off.

- Firmware upgrades of the chassis, blades and other chassis components are a huge hassle.
- Networking and storage configuration can get tricky, requiring staff training.
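To put rough numbers on the failure-domain point (illustrative only, assuming evenly sized VMs): if one fully populated chassis hosts about 350 VMs and it fails or is taken down for maintenance, the surviving chassis must absorb them. With two other identical chassis, each has to pick up roughly 175 VMs, so neither can normally run much above half of its capacity; with three other chassis, each picks up roughly 115 VMs, leaving more day-to-day headroom. Either way, the spare chassis and the headroom on them have to be part of the design from day one.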
Pros of Rack Mounted Servers for vSphere

- Cheaper per server (2 sockets, 24 cores, 192 GB memory: approx. $16K).
- Smaller failure domain; it is easier to evacuate one host server (25-30 VMs) if something fails.
- Nothing new for DC staff to learn (easy setup).
- Easy firmware and driver upgrades.

Cons of Rack Mounted Servers

- Uses more network ports.
- Uses more power.
- Requires more cabling (copper/fiber).
- Produces more heat (higher cooling bill).
- Uses more rack space (2-4U per server).
- Lower density per square foot of DC.
- Individually managed servers (more IPs).

Blades vs Rack Server Conclusion:

First, let me apologize to all my friends at Dell and HP, but I need to be honest here. There is an obvious sales benefit to blade and chassis over rack-mounted servers, and that is why many sales reps will push them. If blade and chassis are already an established hardware standard, it is a no-brainer to stick with the standard. However, if you are being sold blade and chassis right out of the gate and your IT operation is smaller than 1,000 VMs, consider that HA will require spanning blade servers across 2-4 chassis. This is the main reason why rack servers may be the better solution. Rack-mounted servers aren't sexy, but they are much easier to scale at a lower cost. It can be frustrating to purchase your first two chassis with 4 blades per chassis, and then later find out that blade server technology has changed and you now need to purchase chassis upgrades in order to use new-generation blades. This is a real-world example and an important fact to consider. Here's my simple formula:

1,500 VMs or more: blade servers are the better solution; remember to plan for multiple failure domains, maintenance mode and upgrades.

1,000 VMs or fewer: rack-mounted servers are the better solution; remember to build enough extra capacity into your environment to allow a host to be evacuated.
