
      VMware vRealize Orchestrator

      VMware vRealize Orchestrator (previously named VMware vCenter Orchestrator) is a workflow automation engine that is automatically installed with every instance of vCenter Server. Using vRealize Orchestrator, you can build automated workflows, ranging from simple to complex, for a wide variety of tasks available within vCenter Server. VMware also provides vRealize Orchestrator plug-ins that extend its reach to other products, including Microsoft Active Directory, Cisco's Unified Computing System (UCS), and VMware vRealize Automation. This makes vRealize Orchestrator a powerful tool for building automated workflows in the virtualized datacenter.

      Now that we’ve discussed the specific products in the VMware vSphere product suite, I’d like to take a closer look at some of the significant features.

      Examining the Features in VMware vSphere

      In the following sections, we’ll take a closer look at some of the features that are available in the vSphere product suite. We’ll start with Virtual SMP.

      vSphere Virtual Symmetric Multi-Processing

The vSphere Virtual Symmetric Multi-Processing (vSMP or Virtual SMP) product allows you to construct VMs with multiple virtual processor cores and/or sockets. vSphere Virtual SMP is not the licensing product that allows ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM. Figure 1.2 identifies the differences between multiple processors in the ESXi host system and multiple virtual processors.

Figure 1.2 vSphere Virtual SMP allows VMs to be created with more than one virtual CPU.

      With vSphere Virtual SMP, applications that require and can actually use multiple CPUs can be run in VMs configured with multiple virtual CPUs. This allows organizations to virtualize even more applications without negatively impacting performance or being unable to meet service-level agreements (SLAs).

      In vSphere 5, VMware expanded this functionality by also allowing users to specify multiple virtual cores per virtual CPU. Using this feature, a user could provision a dual “socket” VM with two cores per “socket” for a total of four virtual cores. This approach gives users tremendous flexibility in carving up CPU processing power among the VMs.
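      To make this concrete, here's a rough sketch of applying that dual-socket, dual-core layout to an existing VM using the open-source pyVmomi Python SDK (not a tool covered in this chapter); the vCenter Server address, credentials, and the VM name app01 are placeholders I've made up, and the lookup logic is deliberately simplified.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Connect to a placeholder vCenter Server (unverified SSL for brevity; use real certs in production).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Simplified inventory lookup for a VM named "app01" (hypothetical name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")

# Two virtual sockets with two cores each: four virtual cores in total.
# numCPUs is the total core count; numCoresPerSocket determines the socket layout.
spec = vim.vm.ConfigSpec(numCPUs=4, numCoresPerSocket=2)
vm.ReconfigVM_Task(spec=spec)  # the VM must be powered off unless CPU hot-add is enabled

Disconnect(si)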

      vSphere vMotion and vSphere Storage vMotion

      If you have read anything about VMware, you have most likely read about the extremely useful feature called vMotion. vSphere vMotion, also known as live migration, is a feature of ESXi and vCenter Server that allows you to move a running VM from one physical host to another physical host without having to power off the VM. This migration between two physical hosts occurs with no downtime and with no loss of network connectivity to the VM. The ability to manually move a running VM between physical hosts on an as-needed basis is a powerful feature that has a number of use cases in today’s datacenters.

      Suppose a physical machine has experienced a nonfatal hardware failure and needs to be repaired. You can easily initiate a series of vMotion operations to remove all VMs from an ESXi host that is to undergo scheduled maintenance. After the maintenance is complete and the server is brought back online, you can use vMotion to return the VMs to the original server.
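      As an illustration only, the following pyVmomi sketch walks that same evacuation scenario; the vCenter address, credentials, and host names are assumptions of mine, and a production script would also wait on each returned task before proceeding.

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_host(name):
    # Simplified lookup of an ESXi host object by name.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    return next(h for h in view.view if h.name == name)

source = find_host("esxi01.example.com")   # host going down for repair
target = find_host("esxi02.example.com")   # host that will absorb its VMs

# vMotion each running VM off the source host with no downtime.
for vm in source.vm:
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        vm.MigrateVM_Task(host=target,
                          priority=vim.VirtualMachine.MovePriority.defaultPriority)

# With the host evacuated, it can enter maintenance mode for the hardware work.
source.EnterMaintenanceMode_Task(timeout=0)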

      Alternatively, consider a situation in which you are migrating from one set of physical servers to a new set of physical servers. Assuming that the details have been addressed – and I’ll discuss the details of vMotion in Chapter 12, “Balancing Resource Utilization” – you can use vMotion to move the VMs from the old servers to the newer servers, making quick work of a server migration with no interruption of service.

      Even in normal day-to-day operations, vMotion can be used when multiple VMs on the same host are in contention for the same resource (which ultimately causes poor performance across all the VMs). With vMotion, you can migrate any VMs facing contention to another ESXi host with greater availability for the resource in demand. For example, when two VMs contend with each other for CPU resources, you can eliminate the contention by using vMotion to move one VM to an ESXi host with more available CPU resources.

      vMotion moves the execution of a VM, relocating the CPU and memory footprint between physical servers but leaving the storage untouched. Storage vMotion builds on the idea and principle of vMotion: you can leave the CPU and memory footprint untouched on a physical server but migrate a VM’s storage while the VM is still running.

      Deploying vSphere in your environment generally means that lots of shared storage – Fibre Channel or iSCSI SAN or NFS – is needed. What happens when you need to migrate from an older storage array to a newer storage array? What kind of downtime would be required? Or what about a situation where you need to rebalance utilization of the array, either from a capacity or performance perspective?

      With the ability to move storage for a running VM between datastores, Storage vMotion lets you address all of these situations without downtime. This feature ensures that outgrowing datastores or moving to a new SAN does not force an outage for the affected VMs and provides you with yet another tool to increase your flexibility in responding to changing business needs.
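      By way of a rough sketch (again using pyVmomi purely for illustration, with the VM name and destination datastore name as placeholders I've invented), a Storage vMotion of a running VM comes down to a relocate call whose spec names only a new datastore:

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Simplified inventory lookup by object type and name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find_obj(vim.VirtualMachine, "app01")          # running VM whose disks will move
new_ds = find_obj(vim.Datastore, "new-array-ds01")  # datastore on the new array

# Only the datastore is specified, so the VM's host, CPU, and memory footprint
# stay where they are while its disks migrate to the new storage.
spec = vim.vm.RelocateSpec(datastore=new_ds)
vm.RelocateVM_Task(spec=spec)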

      vSphere Distributed Resource Scheduler

      vMotion is a manual operation, meaning that you must initiate it yourself. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.
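      As a hedged sketch of what that looks like programmatically (the cluster name and connection details are assumptions, and pyVmomi is again used only for illustration), enabling DRS in fully automated mode on an existing cluster is a single reconfiguration call:

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Simplified lookup of a cluster named "Prod-Cluster" (hypothetical name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")

# Enable DRS and let it both place VMs at power-on and migrate them automatically.
drs_config = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated)
spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
cluster.ReconfigureComputeResource_Task(spec, True)  # True applies the spec as an incremental change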

      Given the prevalence of Microsoft Windows Server in today’s datacenters, the use of the term cluster often draws IT professionals into thoughts of Microsoft Windows Server clusters. Windows Server clusters are often active-passive or active-active-passive clusters. However, ESXi clusters are fundamentally different, operating in an active-active mode to aggregate and combine resources into a shared pool. Although the underlying concept of aggregating physical hardware to serve a common goal is the same, the technology, configuration, and feature sets are quite different between VMware ESXi clusters and Windows Server clusters.

      Aggregate Capacity and Single Host Capacity

      Although I say that a DRS cluster is an implicit aggregation of CPU and memory capacity, it’s important to keep in mind that a VM is limited to using the CPU and RAM of a single physical host at any given time. If you have two ESXi servers with 32 GB of RAM each in a DRS cluster, the cluster will correctly report 64 GB of aggregate RAM available, but any given VM will not be able to use more than approximately 32 GB of RAM at a time.

      An ESXi cluster is an implicit aggregation of the CPU power and memory of all hosts involved in the cluster. After two or more hosts have been assigned to a cluster, they work in unison to provide CPU and memory to the VMs assigned to the cluster (keeping in mind that any given VM can only use resources from one host; see the sidebar “Aggregate Capacity and Single Host Capacity”). The goal of DRS is twofold:

      • At startup, DRS attempts to place each VM on the host that is best suited to run that VM at that time.

      • Once a VM is running, DRS seeks to provide that VM with the required hardware resources while minimizing the amount of contention for those resources in an effort to maintain balanced utilization levels.

      The first part of DRS is often referred to as intelligent placement. DRS can automate the placement of each VM as it is powered on within the cluster.