What’s New In Virtualization

Published by: Processor.com
Written by: Kurt Marko

Server virtualization is one of the hottest technology trends of the last several years, and now that many IT departments have gained real-world experience with production implementations, some warts are starting to show. A primary impetus for virtualization has been the de facto best practice of isolating enterprise applications on dedicated machines due to their often-conflicting OS and middleware requirements. This proliferation of servers, coupled with hardware performance that has rapidly outpaced software demands, has left most systems vastly underutilized. The ability to run multiple OSes and application environments on increasingly common 4- and 8-way servers, along with the promise of faster provisioning of new servers, provides strong motivation for going virtual.

While most IT organizations consider their virtualization deployments successful, a recent survey by The Strategic Council found that a significant 44% reported unsatisfactory or ambiguous outcomes from their virtualization projects. Successful or not, companies are finding that virtualization can succumb to the Law of Unintended Consequences, creating second-order problems that are not obvious at the outset. (See the “Biggest Hurdle When Implementing Server Virtualization” graph for more information.) In response to such issues, vendors specializing in VM (virtual machine) and application management software have come forth with a variety of innovative, and in some cases revolutionary, new products.

Core VM Software: Hypervisors From XenSource

The foundation of most virtualization systems is the hypervisor—software that allows multiple OSes to run concurrently on a single server. In this market, XenSource may be David to VMware’s Goliath, but CTO Simon Crosby feels that the company’s pricing and feature set are particularly attractive to SMEs, adding that Xen has traditionally focused on smaller customers. Crosby points out that XenSource eliminates many of the “bells and whistles,” making it easier for new customers to get started. He adds that “most customers can have VMs up within 10 minutes of getting the installation CD” and that Xen is “focused on delivering an optimized package to address server consolidation.”

Crosby feels that VMware’s reliance on SANs for its centralized storage pool is a particular hindrance for SMEs, which often don’t have the budget or expertise to implement a complex storage architecture. While XenSource can work with SANs (indeed, support for iSCSI-based SANs is a new feature of the latest XenEnterprise 3.2 release), it is equally at home using NAS devices or server-based file sharing as a virtual image repository. A major addition in the 3.2 product is support for multiprocessor guest OSes running compute-intensive applications, such as Microsoft Exchange or SQL Server (www.microsoft.com). The 3.2 release also supports a broader range of guest OSes, including “thin clients” that run on a central server using a remote desktop display.
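To make the multiprocessor-guest support concrete, the sketch below defines a two-vCPU Xen guest through the libvirt Python bindings, a common way to script the open-source Xen hypervisor. XenEnterprise ships its own management tools, so this is a generic illustration only; the domain name, disk path, and sizes are assumptions.

# Minimal sketch: boot a two-vCPU Xen guest via the libvirt Python
# bindings. This is generic open-source Xen, not XenEnterprise's own
# console; the guest name, disk image, and memory size are illustrative.
import libvirt

GUEST_XML = """
<domain type='xen'>
  <name>sql-guest</name>
  <bootloader>/usr/bin/pygrub</bootloader>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <os><type>linux</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/sql-guest.img'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <interface type='bridge'><source bridge='xenbr0'/></interface>
  </devices>
</domain>
"""

conn = libvirt.open("xen:///")      # connect to the local Xen hypervisor
dom = conn.createXML(GUEST_XML, 0)  # boot a transient guest from the XML
print(dom.name(), "is running with", dom.maxVcpus(), "vCPUs")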

Combating Server Sprawl

The proliferation of OS images in virtualized environments is so common that it has earned its own pejorative moniker: server sprawl. Several companies, however, have developed creative solutions to prevent sprawl before it takes hold.

FastScale’s dynamic application provisioning. FastScale Technology is a new startup. CEO Lynn LeBlanc heralds its recently released signature product, FastScale Composer Suite, as a treatment designed to “attack software bloat and server sprawl at the root cause, not by treating symptoms.” It does this by building so-called thin images using technology FastScale terms DABs (Dynamic Application Bundles). DABs are small, self-contained bundles of OS and application resources that are initially defined by creating an application blueprint. DABs are provisioned and executed in real time by the FastScale runtime environment, which can run on dedicated machines or VMware virtual servers. According to LeBlanc, Fast-Scale’s benchmarking shows that a DAB’s small memory footprint allows VMware to support three times as many VMs per physical server and enables booting 40 FastScale VMs faster than one traditional image.
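The thin-image idea itself can be illustrated in a few lines: resolve only the files an application actually needs and stage just those, rather than a full OS image. The sketch below uses a shared-library closure (via ldd) as a stand-in for a blueprint; it illustrates the principle only and is not FastScale’s DAB format or tooling.

# Conceptual sketch of a "thin image": instead of shipping a full OS,
# resolve just the files an application needs (here, its shared-library
# closure via ldd) and stage only those into a bundle directory.
# An illustration of the principle, not FastScale's actual mechanism.
import os
import shutil
import subprocess

def library_closure(binary: str) -> set[str]:
    """Return the shared libraries the binary links against."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True)
    libs = set()
    for line in out.stdout.splitlines():
        for token in line.split():
            if token.startswith("/") and os.path.exists(token):
                libs.add(token)
    return libs

def build_thin_bundle(binary: str, bundle_dir: str) -> None:
    """Copy the binary plus only its dependencies into bundle_dir."""
    os.makedirs(bundle_dir, exist_ok=True)
    for path in {binary} | library_closure(binary):
        dest = os.path.join(bundle_dir, path.lstrip("/"))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(path, dest)

build_thin_bundle("/bin/ls", "/tmp/thin-bundle")  # a few MB, not a full OS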

CiRBA application consolidation and optimization. Using a slightly less revolutionary approach, CiRBA turns the mapping of server applications to virtual machines into an operations research problem, applying mathematical optimization techniques to determine the most efficient use of servers for a given workload. CiRBA’s software answers the questions, “what could go together, what should go together, and what fits together.”

CiRBA’s DCI (Data Center Intelligence) solution analyzes the constraints most critical when consolidating servers. Parameters examined include hardware, OS, and application configurations; CPU, I/O, and memory workload patterns; and business requirements, such as maintenance windows or service levels. The result of this analysis is a graphical matrix illustrating which combinations of virtual servers and associated applications are most compatible (a form of affinity analysis). The tool is particularly useful when deploying new applications because it identifies compatible, underutilized machines capable of supporting additional workload, thus obviating the need for new hardware. According to CiRBA CTO Andrew Hillier, users of the company’s software find they can increase server virtualization ratios by 50%, easily paying for the product with just a handful of systems.
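The affinity-matrix idea can likewise be sketched: score every pair of candidate guests against a few compatibility rules and tabulate the results. The guests, rules, and weights below are invented for illustration and are not DCI’s actual criteria.

# Toy affinity analysis: rate every pair of candidate guests on simple
# compatibility rules and print a score for each combination.
from itertools import combinations

guests = {
    "web01":  {"os": "linux",   "peak": "day",   "maint": "sun"},
    "batch1": {"os": "linux",   "peak": "night", "maint": "sun"},
    "crm":    {"os": "windows", "peak": "day",   "maint": "sat"},
}

def affinity(a: dict, b: dict) -> int:
    score = 0
    score += 2 if a["os"] == b["os"] else 0        # fewer host images to patch
    score += 2 if a["peak"] != b["peak"] else 0    # peak loads don't collide
    score += 1 if a["maint"] == b["maint"] else 0  # one shared outage window
    return score

for x, y in combinations(guests, 2):
    print(f"{x} + {y}: affinity {affinity(guests[x], guests[y])}/5")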

VM performance monitoring and analysis with Netuitive. A vital IT function that has been slow to catch up with the virtualization trend is application and performance monitoring. According to Daniel Heimlich, vice president at Netuitive, a recent Gartner survey found that 27% of IT executives indicated “no confidence” in their current performance management tools. Responding to this need, Netuitive has developed what Heimlich terms “the industry’s first self-learning analysis software for virtualization.” The company’s Netuitive SI for VMware is a self-learning monitoring package that applies algorithmic trend analysis to actual server performance data to “automatically pinpoint root causes of performance issues.” Although currently available only for VMware ESX, Heimlich states that Netuitive plans to add other virtualization platforms, such as XenSource and IBM’s LPARs (logical partitions), as demand warrants.
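In its simplest form, the self-learning idea reduces to learning a baseline from observed metrics and flagging deviations from it. The sketch below keeps a rolling window of samples and raises an alert on a large z-score; Netuitive’s algorithms are proprietary, so this is only a conceptual stand-in.

# Baseline-driven monitoring sketch: learn a rolling mean/deviation for
# a metric, then flag readings that stray far from the learned norm.
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.samples) >= 10:                  # need some history first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

mon = BaselineMonitor()
for cpu in [22, 25, 24, 23, 26, 25, 24, 23, 25, 24, 26, 91]:  # last sample spikes
    if mon.observe(cpu):
        print(f"alert: CPU {cpu}% deviates from learned baseline")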

Storage Solutions For Virtual Servers

Virtualizing the computational load is only half the battle for most organizations; efficiently provisioning storage in a dynamic, expanding application environment is an equally challenging task. Two companies offering virtualized storage, albeit with very different technologies, are Attune Systems and Compellent Technology.

Attune’s network file management. Attune’s Maestro File Manager is a network appliance that works with NAS devices and file servers (anything using the CIFS or NFS protocols) to enable virtualization of unstructured file data. According to Dan Liddle, vice president of marketing at Attune, one of the product’s key features is the ability to “nondestructively migrate data while it’s in use,” a critical function in a VM environment where data may often need to be moved among virtual servers. The Maestro also builds a customized namespace such that a file’s storage path remains unchanged even as it moves across storage platforms. Finally, Attune’s product offers what the company terms “real-time policy management,” a feature that examines file usage and can automatically migrate infrequently used data from high-performance storage arrays to less expensive near-line systems.
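Policy-driven migration of the sort described above can be sketched simply: find files on the fast tier that haven’t been accessed within a cutoff and move them to near-line storage, recording each move so a namespace layer could keep client-visible paths stable. The paths and the 90-day policy below are illustrative, not Maestro’s actual mechanics.

# Tiering-policy sketch: walk a file tree and move anything not read
# within the cutoff from the fast tier to a near-line tier, recording
# old->new locations for a namespace layer to consume.
import os
import shutil
import time

DAY = 24 * 3600

def migrate_cold_files(fast_tier: str, nearline_tier: str,
                       max_idle_days: int = 90) -> dict[str, str]:
    """Move cold files off the fast tier; return old->new path mapping."""
    cutoff = time.time() - max_idle_days * DAY
    mapping: dict[str, str] = {}
    for root, _dirs, files in os.walk(fast_tier):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:          # not read recently
                dst = os.path.join(nearline_tier,
                                   os.path.relpath(src, fast_tier))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                mapping[src] = dst                      # feed the namespace map
    return mapping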

Liddle notes that Attune’s simplicity appeals to SMEs because they are often strapped for IT personnel and don’t usually have storage experts, adding that Maestro can automate many storage management processes “without a lot of hand tuning.”

Compellent’s dynamic capacity and automated tiering. Compellent’s suite of hardware and software products offers features similar to those of Attune’s Maestro but for SAN-based, block-level storage. Compellent’s architecture builds a virtualization layer between servers and disk arrays using a combination of intelligent SAN switches and disk controllers. Like Attune, its software can automatically migrate seldom-used data to slower devices, a capability Compellent calls “automated tiered storage.” Another feature of particular importance in a consolidated VM environment is thin provisioning, or dynamic capacity: the ability to transparently add storage space as it is needed and eliminate unused disk overhead. Compellent’s software also includes a number of features found in traditional SAN environments, such as disk image snapshots and remote data replication.
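Thin provisioning itself is easy to illustrate: a volume advertises a large logical size but allocates physical blocks only on first write. The sketch below is a generic model of the concept, not Compellent’s implementation.

# Thin provisioning in miniature: physical blocks are allocated only
# when first written; unwritten blocks read back as zeros for free.
BLOCK_SIZE = 4096

class ThinVolume:
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.blocks: dict[int, bytes] = {}       # allocated on first write only

    def write(self, block_no: int, data: bytes) -> None:
        assert 0 <= block_no < self.logical_blocks
        self.blocks[block_no] = data.ljust(BLOCK_SIZE, b"\0")

    def read(self, block_no: int) -> bytes:
        # unwritten blocks consume no space and read back as zeros
        return self.blocks.get(block_no, b"\0" * BLOCK_SIZE)

    @property
    def allocated_bytes(self) -> int:
        return len(self.blocks) * BLOCK_SIZE

vol = ThinVolume(logical_blocks=262144)          # advertises 1GB logical
vol.write(0, b"boot sector")
print(vol.allocated_bytes, "bytes actually consumed")  # 4096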

New & Exciting

Hardware consolidation and the concomitant utilization improvements continue to be a major impetus for virtualization. The proliferation of virtualization technologies is leading to a natural maturation and enhancement of functionality, particularly as real-world experience reveals the shortcomings of early products. While the base VM platforms are typically provided by the major IT server or OS vendors, many small, innovative companies are filling out the virtualization product portfolio with exciting new offerings.