How to Efficiently Deploy Virtual Machines from VMware vSphere Content Library - VMware VROOM! Blog

by Joanna Guan and Davide Bergamasco

This post is the first of a series that aims to assess the performance of the VMware vSphere Content Library solution in various scenarios and to give vSphere administrators some ideas about how to set up a high-performance Content Library environment. After an architectural overview of the Content Library components and inner workings, the post delves into the analysis and optimization of the most basic Content Library operation, i.e., the deployment of a virtual machine.

Introduction

The VMware vSphere Content Library empowers vSphere administrators to effectively and efficiently manage virtual machine templates, vApps, ISO images, and scripts. Specifically, an administrator can leverage Content Library to: store and manage content from a central location; share content across the boundaries of vCenter Servers; and deploy virtual machine templates from the Content Library directly onto a host or cluster for immediate use. Typically, a vSphere datacenter includes a multitude of vCenter Servers, ESXi servers, networks, and datastores.

In such an environment it can be time-consuming to clone or deploy a virtual machine across all the ESXi servers, vCenter Servers, and networks from a source datastore to a destination datastore. Moreover, this problem is compounded by the fact that the size of virtual machines and other content keeps growing over time.
The objective of Content Library is to address these issues by transferring large amounts of data in the most efficient way.

Architectural Overview
Content Library is composed of three main components, which run on a vCenter Server: a Content Library Service, which organizes and manages content sitting on various storage locations; a Transfer Service, which oversees the transfer of content across said storage locations; and a database, which stores all the metadata associated with the content.
The architecture diagram in Figure 1 shows how the three components interact with each other and with other vCenter components, along with the control path (depicted as thin black lines) and the data path (depicted as thick red lines).

Figure 1. VMware vSphere Content Library architecture.

The Content Library Service implements the control plane that manages storage and handles content operations such as deployment, upload, download, and synchronization. The Transfer Service implements the data plane that is responsible for the actual data transfers between content stores, which may be datastores attached to ESXi hosts, NFS file systems mounted on the vCenter Server, or remote HTTP(S) servers.
Data Transfer

Data transfer performance varies depending on the storage type and the available connectivity. The Transfer Service can transfer data in two ways: streaming mode and direct copy mode. The diagram in Figure 2 shows how the two modes work in a data transfer between datastores.

Figure 2. Content Library data transfer flows.

If the source and destination hosts have direct connectivity, the Transfer Service asks vCenter to instruct the source host to copy the content directly to the target host. When this is not possible (e.g., when the content must cross vCenter Server boundaries), streaming mode is used instead. In streaming mode the data flows through the Transfer Service itself. This involves one extra hop for the data, as well as compression and decompression of the VMDK disk files. Moreover, vCenter appliances are usually connected to a management network, which could become a bottleneck due to its limited bandwidth. For these reasons, direct copy mode typically performs better than streaming mode.

Optimizing Virtual Machine Deployment
Having covered the Content Library architecture and transfer modes, we can now discuss how to optimize its performance, starting from the most basic operation: the deployment of a virtual machine. Deploying a virtual machine from Content Library creates a new virtual machine by cloning it from a template. We assess the performance of deployment operations by measuring their completion time, which is obviously the most visible and important metric from an administrator's perspective. The following table summarizes the hardware and software specifications of the testbed.
We ran a workload that consisted of deploying a virtual machine from a Content Library item onto a cluster. All experiments used the same 3 GB OVF template. We conducted various experiments based on the possible configurations of the source content store (the storage backing the Content Library) and of the destination content store (the storage where the new virtual machine was deployed), as shown in the following table.
Experiment 1. An ESXi host is connected to a VAAI-capable storage array hosting both the source and destination datastores. (VAAI stands for vStorage APIs for Array Integration, a technology that enables ESXi hosts to offload specific virtual machine and storage management operations to compliant storage hardware.)

Experiment 2. The source and destination datastores are connected to the same ESXi host. However, these datastores are either hosted on a non-VAAI array or on two different arrays.

Experiment 3. One ESXi host is connected to the source datastore, while a different host is connected to the destination datastore. The datastores are hosted on different arrays.

Experiment 4. The source content store is an NFS file system mounted on the vCenter Server, while the destination content store is a datastore hosted on a storage array.
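These four configurations map onto the transfer paths described earlier (VAAI offload inside the array, direct copy on one host, host-to-host network copy, and streaming through vCenter). The following sketch models that path selection; the field names and the decision function are illustrative assumptions, not the actual Transfer Service algorithm.

```python
from dataclasses import dataclass

@dataclass
class Config:
    same_array: bool          # source and destination datastores on one array
    vaai: bool                # that array supports VAAI offload
    same_host: bool           # a single ESXi host sees both datastores
    source_on_vcenter: bool   # library backed by NFS mounted on the vCenter Server

def transfer_path(c: Config) -> str:
    """Pick the most efficient transfer path available (illustrative model)."""
    if c.source_on_vcenter:
        return "streaming via vCenter"        # Experiment 4
    if c.same_array and c.vaai:
        return "VAAI offload inside array"    # Experiment 1
    if c.same_host:
        return "direct copy on one host"      # Experiment 2
    return "host-to-host network copy"        # Experiment 3

experiments = {
    1: Config(same_array=True,  vaai=True,  same_host=True,  source_on_vcenter=False),
    2: Config(same_array=False, vaai=False, same_host=True,  source_on_vcenter=False),
    3: Config(same_array=False, vaai=False, same_host=False, source_on_vcenter=False),
    4: Config(same_array=False, vaai=False, same_host=False, source_on_vcenter=True),
}
for n, c in experiments.items():
    print(f"Experiment {n}: {transfer_path(c)}")
```

The ordering of the checks encodes the preference the post describes: array-internal copies beat host-local copies, which beat network copies, which beat streaming.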
Figure 3. Storage configurations and data transfer flows.

Experimental Results

Figure 4 shows the results of the four experiments described above in terms of deployment duration (lower is better), while the following table summarizes the main observations for each experiment.

Experiment 1. The best performance was achieved in Experiment 1 (two datastores backed by the same VAAI array). This was expected: in this scenario the actual data transfer occurs entirely inside the storage array, without any involvement of the ESXi host. This is obviously the most efficient scenario from a deployment perspective.
Experiment 2. In Experiment 2, although the array is not VAAI-capable (or the datastores are hosted on two separate arrays), the source and destination datastores are connected to the same ESXi host, so the data transfer occurs through the 8 Gb/s Fibre Channel connection. This scenario is about two times slower than Experiment 1.

Experiment 3. The scenario of Experiment 3 is significantly slower (about three times) than Experiment 1 because the datastores are attached to two different ESXi hosts.
This causes the data transfer to go through the 1 Gbps Ethernet connection. We also ran this experiment using a 10 Gbps Ethernet network and found that the deployment duration was similar to the one measured in Experiment 2. This suggests that the 1 Gbps Ethernet connection is a significant bottleneck for this scenario.
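The relative durations line up with raw link speeds. As a back-of-the-envelope check, here is the ideal wire time for a template of a given size over each link; the template size and the protocol efficiency factor are assumptions for illustration, and real transfers add disk and protocol overhead.

```python
def transfer_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Ideal transfer time for size_gb gigabytes over a link_gbps link,
    derated by an assumed protocol efficiency factor."""
    bits = size_gb * 8 * 10**9                      # payload in bits
    return bits / (link_gbps * 10**9 * efficiency)  # seconds on the wire

size = 3.0  # GB, assumed template size
for name, gbps in [("8 Gb/s FC", 8), ("10 Gbps Ethernet", 10), ("1 Gbps Ethernet", 1)]:
    print(f"{name}: {transfer_seconds(size, gbps):.1f} s")
```

The 1 Gbps link is roughly an order of magnitude slower than the other two, which is consistent with it being the bottleneck in Experiment 3 and with the 10 Gbps rerun approaching Experiment 2's Fibre Channel result.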
Experiment 4. In the final scenario, Experiment 4, the template resides on an NFS file system mounted on the vCenter Server. Because the template is stored in a compressed format on the NFS file system in order to save network bandwidth, its decompression on the vCenter Server slows the data transfer quite noticeably. The network hops between the vCenter Server and the destination ESXi host may further slow the end-to-end data transfer. For these reasons, this scenario was about seven times slower than Experiment 1. Given that compression and decompression are CPU-heavy operations, using a faster network may yield only a marginal performance improvement.

Figure 4. Deployment completion time for the four storage configurations.
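The streaming-mode observation, that faster networking helps little once decompression is CPU-bound, can be made concrete with a simple pipeline model in which the slowest stage dictates end-to-end throughput. All rates here are assumed values for illustration, not measurements from the experiments.

```python
def streaming_time(size_gb: float, net_gbps: float, decompress_mbps: float) -> float:
    """End-to-end time for a pipelined stream: the slowest stage dominates.
    decompress_mbps is the assumed CPU-bound decompression throughput in MB/s."""
    net_mbs = net_gbps * 1000 / 8              # link speed in MB/s
    bottleneck = min(net_mbs, decompress_mbps) # pipeline runs at the slowest stage
    return size_gb * 1000 / bottleneck         # seconds

# With decompression assumed at 200 MB/s, upgrading the management network
# from 1 Gbps (125 MB/s) to 10 Gbps (1250 MB/s) helps somewhat, but the CPU
# then caps throughput: a further jump to 40 Gbps changes nothing.
for gbps in (1, 10, 40):
    print(f"{gbps} Gbps: {streaming_time(3.0, gbps, 200):.1f} s")
```

Once the network stage is faster than the decompression stage, additional bandwidth is wasted, which matches the post's conclusion that a faster network yields only marginal gains in streaming mode.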
Conclusions

This blog post explored how different Content Library backing storage configurations can affect the performance of a virtual machine deployment operation. The following guidelines may help an administrator optimize Content Library performance for this operation based on the storage options at their disposal. If no other optimizations are possible, the Content Library should at least be backed by a datastore connected to one of the ESXi hosts (the scenario of Experiment 3); ideally, a 10 Gbps Ethernet connection should be employed. A better option is to have each ESXi host connected to both the source datastore (the one backing the Content Library) and the destination datastore(s) (the one(s) where new virtual machines are deployed); this is the scenario of Experiment 2. The best case is when all the ESXi hosts are connected to a VAAI-capable storage array and both the source and destination datastores reside on that array (Experiment 1).