
Mixed VMmark and Horizon View Desktop Workloads with Hitachi Unified Compute Platform for VMware vSphere on Hitachi Virtual Storage Platform G1000
Lab Validation Report
By Jose Perez
January 19, 2015
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by
sending an email message to [email protected]. To assist the routing of this
message, use the paper number in the subject and the title of this white paper in
the text.
Table of Contents

Product Features
    Unified Compute Platform for VMware vSphere
    Virtual Storage Platform G1000
    Hitachi Dynamic Provisioning
    Hitachi Compute Blade 500
    Hitachi Accelerated Flash
    Brocade Storage Area Network Switches
    Cisco Ethernet Switches
    VMware vSphere 5
    VMware Horizon 6
Test Environment Configuration
    Hardware Components
    Software Components
    Network Infrastructure
    Storage Infrastructure
    Solution Infrastructure
Test Methodology
    VMmark Configuration
    VMware Horizon View Configuration
    Login VSI Test Harness Configuration
    Test Cases
Analysis
    Test Case 1: VMmark Servers Initialization and Boot Storm for Virtual Desktops
    Test Case 2: VMmark Benchmark Workloads, Login Storm, and Steady State for Extreme Power Users and Task Users
    Test Case 3: VMmark Only
Conclusion
Mixed VMmark and Horizon View Desktop Workloads with Hitachi Unified Compute Platform for VMware vSphere on Hitachi Virtual Storage Platform G1000

Lab Validation Report
This lab validation report demonstrates the ability to run mixed VDI workloads together with other popular business application workloads without impacting the user experience. This allows organizations to share a single hardware stack not only for VDI workloads such as Task Users, but also for software-development-class workloads such as Extreme Power Users, alongside a mix of application workloads such as email, database, and web servers.
The report documents the performance capabilities of Hitachi Unified Compute Platform concurrently running a mix of application workloads. Specifically, this report shows how Hitachi Data Systems used a single hardware stack to simultaneously run the following workloads:

• Task User workloads on linked clones
• Extreme Power User workloads on dedicated persistent desktops
• VMmark benchmark workloads, which include:
  - Mail servers
  - Web 2.0 load simulation
  - E-commerce simulation
  - Standby systems
  - Common virtualization operations such as virtual machine cloning and deployment, vMotion, Storage vMotion, and automated load balancing of resources
The VDI workloads were generated with Login VSI, and the other application workloads were generated with VMware VMmark tiles. This lab validation report shows the capability of Hitachi Unified Compute Platform for VMware vSphere on Hitachi Virtual Storage Platform G1000 to easily handle mixed workloads while maintaining an acceptable end-user experience.
Product Features
The following information describes the hardware and software features
used in testing.
Unified Compute Platform for VMware vSphere
Unified Compute Platform for VMware vSphere offers the following:

• Full parity across the RESTful API
• Command line interface
• Graphical user interface
Hitachi Unified Compute Platform Director software on the Unified
Compute Platform for VMware vSphere integrates directly into VMware
vSphere. It provides unified end-to-end infrastructure orchestration within
a single interface.
Unified Compute Platform for VMware vSphere leverages your existing storage in one of two ways:

• Connect to your existing Hitachi Virtual Storage Platform family or Hitachi Unified Storage VM storage
• Virtualize storage arrays from other vendors using Virtual Storage Platform or Unified Storage VM
Unified Compute Platform for VMware vSphere provides the following benefits:

• Centralization and automation of compute, storage, and networking components
• Significant reduction of time to value and operational costs across data centers
• Faster deployment of converged infrastructure with more efficient resource allocation
• A foundation for the journey to the software-defined data center through full support of the RESTful API
Virtual Storage Platform G1000
Virtual Storage Platform G1000 provides the always-available, agile, and automated foundation that you need for a continuous infrastructure cloud. It delivers enterprise-ready software-defined storage, advanced global storage virtualization, and powerful storage capabilities.

Supporting always-on operations, Virtual Storage Platform G1000 includes self-service, non-disruptive migration and active-active storage clustering for zero recovery time objectives. Automate your operations with self-optimizing, policy-driven management. Virtual Storage Platform G1000 supports Oracle Real Application Clusters and VMware vSphere Metro Storage Cluster.
Hitachi Dynamic Provisioning
On Hitachi storage systems, Hitachi Dynamic Provisioning provides wide
striping and thin provisioning functionalities.
Using Dynamic Provisioning is like using a host-based logical volume
manager (LVM), but without incurring host processing overhead. It
provides one or more wide-striping pools across many RAID groups.
Each pool has one or more dynamic provisioning virtual volumes (DP-VOLs) created against it, each with a logical size that you specify of up to 60 TB, without any physical space allocated initially.
Deploying Dynamic Provisioning avoids the routine issue of hot spots that
occur on logical devices (LDEVs). These occur within individual RAID
groups when the host workload exceeds the IOPS or throughput capacity
of that RAID group. Dynamic Provisioning distributes the host workload
across many RAID groups, which provides a smoothing effect that
dramatically reduces hot spots.
When used with the Virtual Storage Platform family, Dynamic Provisioning has the benefit of thin provisioning. Physical space is assigned from the pool to the dynamic provisioning volume as needed, in 42 MB pages, up to the logical size specified for each dynamic provisioning volume. Pool capacity can be expanded or reduced dynamically without disruption or downtime. You can rebalance an expanded pool across the current and newly added RAID groups for an even striping of the data and the workload.
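To make the page-based allocation concrete, the following minimal sketch (Python, illustration only; it models the 42 MB page granularity described above, not the actual Hitachi microcode behavior) estimates how much pool capacity a dynamic provisioning volume consumes for a given amount of written data.

```python
import math

PAGE_MB = 42  # Dynamic Provisioning assigns pool capacity in 42 MB pages

def pages_consumed(written_gb: float) -> int:
    """Number of 42 MB pages needed to back the data written to a DP-VOL."""
    return math.ceil(written_gb * 1024 / PAGE_MB)

def pool_space_gb(written_gb: float) -> float:
    """Physical pool capacity consumed, in GB, after rounding up to whole pages."""
    return pages_consumed(written_gb) * PAGE_MB / 1024

# Example: a DP-VOL with a 2 TB logical size but only 150 GB written consumes
# roughly 150 GB of pool space; the rest stays unallocated until it is written.
print(pages_consumed(150), "pages =", round(pool_space_gb(150), 1), "GB of pool space")
```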
Hitachi Compute Blade 500
Hitachi Compute Blade 500 combines high-end features with the high compute density and adaptable architecture you need to lower costs and protect your investment. Safely mix a wide variety of application workloads on a highly reliable, scalable, and flexible platform. Add server management and system monitoring at no cost with Hitachi Compute Systems Manager, which seamlessly integrates with Hitachi Command Suite in IT environments using Hitachi storage.
The chassis of Hitachi Compute Blade 500 contains internal Fibre
Channel and network switches for the high availability requirements of
Unified Compute Platform for VMware vSphere.
Hitachi Accelerated Flash
Hitachi Accelerated Flash features a flash module built specifically for
enterprise-class workloads. Developed for Virtual Storage Platform,
Accelerated Flash is available for Unified Storage VM.
Accelerated Flash features innovative Hitachi-developed embedded flash
memory controller technology. Hitachi flash acceleration software speeds
I/O processing to increase flash device throughput.
Brocade Storage Area Network Switches
Brocade and Hitachi Data Systems partner to deliver storage networking
and data center solutions. These solutions reduce complexity and cost,
as well as enable virtualization and cloud computing to increase business
agility.
The lab validation report uses the following Brocade products:

• Brocade 6510 Switch
• Brocade 5460 8 Gb SAN Switch Module for Hitachi Compute Blade 500
Cisco Ethernet Switches
The lab validation report uses the following Cisco products:

• Cisco Nexus 5548UP Switch

  The design of Cisco Nexus 5000 Series Switches is for data center environments. As part of the foundation for Cisco Unified Fabric, Cisco Nexus 5000 Series Switches help address the business, application, and operational requirements of evolving data centers. They provide infrastructure simplicity to decrease the total cost of ownership.

  The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity, and business agility, and it supports networking standards. For traditional, virtualized, unified, and high-performance computing environments, the Nexus 5548UP Switch offers IT and business advantages.

• Cisco Nexus 2232PP Fabric Extender

• Cisco Nexus 3048 Switch
VMware vSphere 5
VMware vSphere 5 is a virtualization platform that provides a datacenter
infrastructure. It features vSphere Distributed Resource Scheduler (DRS),
high availability, and fault tolerance.
VMware vSphere 5 has the following components:

• ESXi 5: A hypervisor that loads directly on a physical server. It partitions one physical machine into many virtual machines that share hardware resources.
• vCenter Server: Manages the vSphere environment through a single user interface. With vCenter, features such as vMotion, Storage vMotion, Storage Distributed Resource Scheduler, High Availability, and Fault Tolerance are available.
VMware Horizon 6
VMware Horizon 6 provides virtual desktops as a managed service. Using Horizon View, you can create images of approved desktops and then deploy them automatically, as needed. Desktop users access their personalized desktops, including data, applications, and settings, from anywhere with network connectivity to the server. PCoIP, a high-performance display protocol, provides an enhanced end-user experience compared to traditional remote display protocols.
Test Environment Configuration
Testing of the mixed VMmark, Extreme Power Users, and Task User
desktop workloads took place in the Hitachi Data Systems laboratory
using Hitachi Compute Blade 500, Virtual Storage Platform G1000, and
Hitachi Unified Compute Platform Director.
Hardware Components
Table 1 describes the details of the hardware components used.
Table 1. Infrastructure Hardware Components

Hitachi Virtual Storage Platform G1000 (quantity: 1)
    Description: Dual controllers; 64 × 8 Gb/sec Fibre Channel ports; 1 TB cache memory; 24 × 900 GB 10k RPM SAS disks, 2.5-inch SFF; 4 × 800 GB SSD disks, 2.5-inch SFF; 16 × 1.6 TB Hitachi Accelerated Flash module drives (FMD)
    Version: Microcode 80-01-22-00/01; SVP A0206-A-9296

Hitachi Compute Blade 500 chassis (quantity: 1)
    Description: 8-blade chassis; 2 × Brocade 5460 Fibre Channel switch modules, each with 6 × 8 Gb/sec uplink ports (UCP utilizes 4 ports); 2 × Hitachi 10 GbE LAN pass-through modules, each with 16 × 10 Gb/sec uplink ports (UCP utilizes 8 ports); 2 × management modules; 6 × cooling fan modules; 4 × power supply modules
    Version: Brocade 5460: FOS 7.2.1

520HB2 server blade, Task User desktops (quantity: 2)
    Description: Half blade; 2 × 12-core Intel Xeon E5-2697 v2 processors, 2.7 GHz; 384 GB RAM; 1 × 2-port 10 Gb/sec Emulex PCIe Ethernet; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: Firmware 04-30; BMC/EFI 04-23/10-46

520HB2 server blade, Extreme Power User desktops (quantity: 2)
    Description: Half blade; 2 × 12-core Intel Xeon E5-2697 v2 processors, 2.7 GHz; 384 GB RAM; 1 × 2-port 10 Gb/sec Emulex PCIe Ethernet; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: Firmware 04-30; BMC/EFI 04-23/10-46

520HB2 server blade, VMmark (quantity: 2)
    Description: Half blade; 2 × 12-core Intel Xeon E5-2697 v2 processors, 2.7 GHz; 384 GB RAM; 1 × 2-port 10 Gb/sec Emulex PCIe Ethernet; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: Firmware 04-30; BMC/EFI 04-23/10-46

520HB2 server blade, infrastructure (quantity: 2)
    Description: Half blade; 2 × 12-core Intel Xeon E5-2697 v2 processors, 2.7 GHz; 384 GB RAM; 1 × 2-port 10 Gb/sec Emulex PCIe Ethernet; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: Firmware 04-30; BMC/EFI 04-23/10-46

CR 210HM, UCP management (quantity: 2)
    Description: 2 × 8-core Intel Xeon E5-2620 processors, 2.00 GHz; 96 GB RAM; 1 × 2-port 10 Gb/sec Emulex PCIe Ethernet; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: BIOS/BMC 01-09-00

CB 2000, VMmark prime client (quantity: 1)
    Description: 1 blade; 2 × Intel Xeon X5670 processors, 6 cores, 2.93 GHz; 120 GB RAM; 1 × 2-port 1 Gb/sec onboard Ethernet
    Version: BMC/EFI 03-93/03-57

Brocade 6510 switch (quantity: 2)
    Description: SAN switch with 48 × 8 Gb/sec Fibre Channel ports
    Version: FOS v7.2.1a

Cisco Nexus 3048 Ethernet switch (quantity: 2)
    Description: Ethernet switch with 48 × 1 Gb/sec ports
    Version: 6.0(2)U2(4)

Cisco Nexus 5548UP Ethernet switch (quantity: 2)
    Description: 32 unified ports plus one expansion module with 16 unified ports
    Version: 6.0(2)N2(4)

Cisco Nexus 2232PP fabric extender (quantity: 2)
    Description: 32 × 10 Gb Ethernet server ports and 8 × 10 Gb Ethernet uplink ports
Software Components
Table 2 describes the details of the software components used in this
testing.
Table 2. Software Components

    Hitachi Storage Navigator Modular 2: Microcode dependent
    Dynamic Provisioning: Microcode dependent
    Unified Compute Platform Director: 3.5 Patch 2, Build 4743
    VMware vCenter Server: 5.5 U1a, Build 1750787
    VMware ESXi: 5.5 U1a, Build 1746018
    VMware Virtual Infrastructure Client: 5.5, Build 1746248
    Microsoft® Windows Server® 2012: Datacenter, R2
    Microsoft SQL Server®: 2012 SP1
    Microsoft Windows Server 2008: Enterprise Edition, R2
    VMware Horizon View: 6.0.1, Build 2088845
    VMware Horizon View Client: 3.1.0, Build 2085634
    Login VSI: 4.0.12.754
    VMware VMmark: 2.5.2, Patch 02052014
Network Infrastructure
The network infrastructure for Unified Compute Platform for VMware
vSphere is configured for management and automation through Unified
Compute Platform Director. For more information, please visit the Unified
Compute Platform for VMware vSphere product homepage.
Storage Infrastructure
The storage infrastructure for Unified Compute Platform for VMware
vSphere is configured for management and automation through Unified
Compute Platform Director. For more information, please visit the Unified
Compute Platform for VMware vSphere product homepage.
Solution Infrastructure
For this testing, the infrastructure servers for VMware Horizon View used
for this solution were placed on separate infrastructure clusters with
dedicated resources.
Table 3 describes the details of the server components required for
VMware Horizon View.
Table 3. VMware Horizon View Components

View Connection Server
    vCPU: 4; Memory: 16 GB; Disk size: 40 GB; Disk type: Eager zeroed thick
    Operating system: Microsoft® Windows Server® 2012 R2
View Composer
    vCPU: 4; Memory: 12 GB; Disk size: 40 GB; Disk type: Eager zeroed thick
    Operating system: Microsoft® Windows Server® 2012 R2
Domain Controller
    vCPU: 2; Memory: 8 GB; Disk size: 40 GB; Disk type: Eager zeroed thick
    Operating system: Microsoft® Windows Server® 2012 R2
Database Server
    vCPU: 4; Memory: 16 GB; Disk size: 40 GB (OS) + 60 GB (data); Disk type: Eager zeroed thick
    Operating system: Microsoft® Windows Server® 2012 R2 with Microsoft® SQL Server® 2012 SP1
All of the above virtual machines were configured with the VMware Paravirtual SCSI controller. The domain controller was deployed to support user authentication and domain services for the VMware Horizon View solution infrastructure. The VMware vCenter used for Horizon View was the same vCenter used in the Unified Compute Platform management cluster.
The UCP compute nodes were configured as follows:

• Compute Cluster 01: 2 × ESXi hosts, dedicated to Task User desktops
• Compute Cluster 02: 2 × ESXi hosts, dedicated to Extreme Power User desktops
• Compute Cluster 03: 2 × ESXi hosts, dedicated to VMmark

To separate network traffic for the three different workloads, each compute cluster was configured with its own dvPortGroup and separate VLANs for the application virtual machines (VMmark, Extreme Power Users, and Task Users).
To isolate storage traffic, Hitachi Dynamic Provisioning (HDP) Pool virtual
volumes were mapped (with Unified Compute Platform Director) to the
ESXi hosts from each compute cluster via separate storage ports as
indicated in Table 4. The storage multipathing policy was set to Round
Robin.
Table 4. Compute Clusters and Storage Ports Mapping

Task Users
    Number of hosts: 2; Number of HBAs: 2; Storage ports (2 in each VSP cluster): 5A, 6A, 7A, 8A
Extreme Power Users
    Number of hosts: 2; Number of HBAs: 2; Storage ports (2 in each VSP cluster): 1B, 2B, 3B, 4B
VMmark
    Number of hosts: 2; Number of HBAs: 2; Storage ports (2 in each VSP cluster): 5B, 6B, 7B, 8B
Unified Compute Platform contains a dedicated management cluster and a minimum of one compute cluster. For this testing, the VMware Horizon View management and administration components were placed on a separate infrastructure cluster, and the virtual desktops (Task Users and Extreme Power Users) were placed on separate compute clusters. The VMmark virtual machines were placed in a separate compute cluster as well. Figure 1 gives a high-level overview of the infrastructure and component placement.
Figure 1
Test Methodology
This section describes the test methodology used. The purpose of the
tests is to show acceptable levels of performance and end user
experience when running mixed VMmark and Horizon View workloads
concurrently.
VMmark Configuration
Each VMmark tile consists of the virtual machines listed in Table 5.
Table 5. Configuration Details of Virtual Machines for Each VMmark Tile

Virtual Client
    vCPU: 4; Memory: 4 GB; Boot disk: 40 GB
    Operating system: Windows 2008 Enterprise SP2 (64-bit)
Mail Server
    vCPU: 4; Memory: 8 GB; Boot disk: 40 GB; Data disk: 40 GB
    Operating system: Windows 2008 Enterprise SP2 (64-bit)
Standby
    vCPU: 1; Memory: 512 MB; Boot disk: 4 GB
    Operating system: Windows 2003 Enterprise SP2 (32-bit)
Olio Database
    vCPU: 2; Memory: 2 GB; Boot disk: 10 GB; Data disk: 4 GB
    Operating system: SLES 11 SP2 (64-bit)
Olio Web
    vCPU: 4; Memory: 6 GB; Boot disk: 10 GB; Data disk: 70 GB
    Operating system: SLES 11 SP2 (64-bit)
DVD Store 2 Database
    vCPU: 4; Memory: 4 GB; Boot disk: 10 GB; Data disk: 35 GB
    Operating system: SLES 11 SP2 (64-bit)
DVD Store 2 Web (A)
    vCPU: 2; Memory: 2 GB; Boot disk: 10 GB
    Operating system: SLES 11 SP2 (64-bit)
DVD Store 2 Web (B)
    vCPU: 2; Memory: 2 GB; Boot disk: 10 GB
    Operating system: SLES 11 SP2 (64-bit)
DVD Store 2 Web (C)
    vCPU: 2; Memory: 2 GB; Boot disk: 10 GB
    Operating system: SLES 11 SP2 (64-bit)
Each tile represented a simulation of the following types of workloads:

• Microsoft Exchange 2007 mail servers for general email workloads
• Apache Olio web and database servers for Web 2.0 workloads
• DVD Store 2 web and database servers for OLTP workloads
• Standby servers for idle general infrastructure workloads
Table 6 shows the workload definitions and the performance metrics used in VMmark.

Table 6. Workload Definitions for VMmark

Mail Server
    Application: Microsoft Exchange 2007
    Simulated load per tile and metric: Microsoft Exchange LoadGen, 1,000 users with a heavy workload profile. The metric is measured in Send Mail Actions per Minute.
Standby System
    Application: None
    Simulated load per tile and metric: Non-load-based functional test that activates idle resources for on-demand usage.
Web 2.0 (Olio)
    Applications: Back end: Olio database (MySQL database). Front end: Olio web application servers, a 2-tier Java-based implementation of the Olio workload that includes the HomePage, Login, TagSearch, EventDetail, PersonDetail, AddPerson, and AddEvent operations.
    Simulated load per tile and metric: 400 concurrent users. The metric is measured in Operations per Minute.
E-commerce (DVD Store 2)
    Applications: Back end: DS2 database (MySQL database). Front end: DS2 web servers (Apache 2.2 web server).
    Simulated load per tile and metric: 10 constant driver thread loads from one web server and 20 burst-based driver threads from two web servers. The metric is measured in Transactions per Minute.
Infrastructure operations: dynamic storage relocation
    Application: VMware Storage vMotion
    Simulated load per tile and metric: A standby virtual machine is selected in round-robin fashion from among all the tiles; its files are moved to a temporary location and then back to the original location; the operation then waits 5 minutes and repeats. The metric is measured in Storage vMotioned VMs per hour.
Infrastructure operations: dynamic virtual machine relocation
    Application: VMware vMotion
    Simulated load per tile and metric: An Olio database virtual machine is selected in round-robin fashion from among all the tiles and a destination host is selected at random; the virtual machine is relocated, then the operation waits 3 minutes and repeats. Concurrent relocations increase with the number of tiles and the number of hosts. The metric is measured in vMotioned VMs per hour.
Infrastructure operations: virtual machine cloning and deployment
    Application: VMware virtual machine cloning technology
    Simulated load per tile and metric: Clones a virtual machine, customizes the operating system, upgrades VMware Tools, and verifies successful deployment; the new virtual machine is then destroyed and the operation waits 5 minutes and repeats. Concurrent clone and deploy operations increase with the number of tiles and the number of hosts. The metric is measured in the number of deployed VMs per hour.
Infrastructure operations: automated load balancing
    Application: VMware Distributed Resource Scheduler, set to aggressive
    Simulated load per tile and metric: Infrastructure functionality to load balance the tile workload.
The VMmark test used 14 tiles distributed across 2 ESXi hosts. Together, the workload virtual machines in these tiles comprised:

• 112 virtual machines
• 294 virtual CPUs
• 371 GB of virtual machine memory

In addition, the VMmark workloads required a prime client, which was configured on a bare-metal server, and one virtual client for each tile. These virtual clients ran on other hosts, outside of the ESXi hosts used for the workload virtual machines.
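These totals follow directly from the per-tile sizing in Table 5. The short sketch below (Python, for illustration only) recomputes them, assuming that the per-tile virtual clients are excluded because, as noted above, they run outside the ESXi hosts used for the workload virtual machines.

```python
# Per-tile workload virtual machines from Table 5: name -> (vCPUs, memory in GB).
# The virtual client is omitted because it runs outside the workload hosts.
TILE_VMS = {
    "mail_server": (4, 8),
    "standby": (1, 0.5),
    "olio_db": (2, 2),
    "olio_web": (4, 6),
    "dvd_store_2_db": (4, 4),
    "dvd_store_2_web_a": (2, 2),
    "dvd_store_2_web_b": (2, 2),
    "dvd_store_2_web_c": (2, 2),
}
TILES = 14

vms = TILES * len(TILE_VMS)                                # 112 virtual machines
vcpus = TILES * sum(c for c, _ in TILE_VMS.values())       # 294 virtual CPUs
memory_gb = TILES * sum(m for _, m in TILE_VMS.values())   # 371 GB of memory
print(vms, vcpus, memory_gb)
```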
The VMmark pool was configured to use 18 datastores, which were presented to both hosts in the Unified Compute Platform for VMware vSphere compute cluster used for VMmark workloads. Each datastore was presented to the cluster as an HDP pool virtual volume, with the HDP pool containing 8 × 1.6 TB FMDs in 2 RAID-10 (2D+2D) parity groups.
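As a rough sizing check (an approximation only: drive capacities are treated as nominal decimal terabytes, and formatting, spare, and pool metadata overheads are ignored), the usable capacity of this pool works out as follows.

```python
# RAID-10 (2D+2D) mirrors the data, so usable capacity is half of raw capacity.
FMD_TB = 1.6
DRIVES = 8                      # 2 parity groups x (2D+2D)
DATASTORES = 18

raw_tb = DRIVES * FMD_TB                 # 12.8 TB raw
usable_tb = raw_tb / 2                   # ~6.4 TB usable in the VMmark HDP pool
per_volume_tb = usable_tb / DATASTORES   # ~0.36 TB per DP-VOL if carved evenly
                                         # (DP-VOLs are thin and may be oversubscribed)
print(raw_tb, usable_tb, round(per_volume_tb, 2))
```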
Table 7 shows the distribution of the VMmark Tiles across the storage
resources.
Table 7. Distribution of VMmark Tiles

HDP pool: 1 × HDP pool (2 × parity groups)
    LUN 1 - 14: Tiles 1 - 14 (1 tile per LUN, except standby servers)
    LUN 15: Standby servers 1 - 7
    LUN 16: Standby servers 8 - 14
    LUN 17: Deploy template and deploy target
    LUN 18: Target for Storage vMotion
Figure 2 illustrates the storage configuration used for the VMmark VMs.
Figure 2
VMware Horizon View Configuration
Most of the configuration of the Horizon View Pools for the Task Users
and Extreme Power Users is based on previous tests from the Lab
Validation Report "Mixed Virtual Desktop Workloads with Hitachi Unified
Compute Platform for VMware Horizon View on Hitachi Virtual Storage
Platform G1000".
Extreme Power Users - Persistent Full Clone Desktop
Pool
A full clone desktop pool of 180 desktops with dedicated user assignment
was configured in VMware Horizon View for use in the concurrent
workload testing. The virtual machine template used for the full clones
was configured for a Heavy Power User workload type as defined in the
VMware whitepaper "Storage Considerations for VMware Horizon View".
The virtual machines were then configured to run a total average load of
150-175 IOPS to simulate software development build and compile tasks.
A 3 GB memory reservation was configured on the virtual machines in
order to reduce the datastore space consumed by the vswap files. The
virtual machine template was prepared for VDI use by following the
guidelines in the VMware optimization guide "VMware Horizon View
Optimization Guide for Windows 7 and Windows 8". The virtual machine
template was also configured for use with Login VSI by following the
guidelines established in the Login VSI 4.x Documentation.
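The effect of that reservation on vswap consumption can be sketched as follows (a simple model: ESXi sizes a virtual machine's swap file at configured memory minus the memory reservation; per-VM overhead swap files are ignored).

```python
DESKTOPS = 180
CONFIGURED_MEMORY_GB = 4
RESERVATION_GB = 3

vswap_per_desktop_gb = CONFIGURED_MEMORY_GB - RESERVATION_GB   # 1 GB instead of 4 GB
total_vswap_gb = DESKTOPS * vswap_per_desktop_gb               # 180 GB on the datastores
datastore_space_saved_gb = DESKTOPS * RESERVATION_GB           # 540 GB not consumed
print(vswap_per_desktop_gb, total_vswap_gb, datastore_space_saved_gb)
```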
Table 8 lists the configuration details of the virtual machine template used
for the Extreme Power User full clone desktops.
Table 8. Configuration Details of Virtual Machine Template for Full Clones

    Operating system: Microsoft Windows 7, 64-bit
    vCPU allocation: 2
    Memory allocation: 4 GB (3 GB reserved)
    Desktop disk/type: 34 GB, thin provisioned; 2 GB, thick provisioned (Vdbench test)
    SCSI controller: LSI Logic SAS
    Average steady state IOPS: 150-175
    High-density vCPU per core: 7.5
The desktop pool was configured to use two datastores, which were presented to both hosts in the Unified Compute Platform for VMware vSphere compute cluster used for the Extreme Power User workloads. Each datastore was presented to the cluster as an HDP pool virtual volume, with the HDP pool containing 8 × 1.6 TB FMDs in 2 RAID-10 (2D+2D) parity groups.
Figure 3 illustrates the storage configuration used for the Extreme Power
User full clone desktops.
Figure 3
Each VMFS datastore contained 90 desktops. Each ESXi server in the
compute cluster used for Extreme Power Users workloads was
configured to host exactly 90 desktops in order to obtain accurate end
user experience metrics during the testing.
Task Users - Linked Clone Desktop Pool
A linked clone desktop pool with a floating user assignment of 400
desktops was configured in VMware Horizon View for use in the
concurrent workload testing. The virtual machine template used for the
linked clones was configured for a Task User workload type as defined in
the VMware whitepaper "Storage Considerations for VMware Horizon
View". The virtual machine template was prepared for VDI use by
following the guidelines in the VMware optimization guide "VMware
Horizon View Optimization Guide for Windows 7 and Windows 8". The
virtual machine template was also configured for use with Login VSI by
following the guidelines established in the Login VSI 4.0 Documentation.
Table 9 lists the configuration details of the virtual machine template used
for the linked clone desktops.
Table 9. Configuration Details of Virtual Machine Template for Linked Clones

    Operating system: Microsoft Windows 7, 32-bit
    vCPU allocation: 1
    Memory allocation: 1 GB
    Desktop disk/type: 24 GB, thin provisioned
    SCSI controller: LSI Logic SAS
    Average steady state IOPS: 3-7
    High-density vCPU per core: 8.3
The desktop pool was configured to use eight datastores for linked clone storage, which were presented to both hosts in the Unified Compute Platform for VMware vSphere compute cluster used for Task User workloads. Each datastore was presented to the cluster as an HDP pool virtual volume, with the HDP pool containing 2 RAID-10 (2D+2D) parity groups of 900 GB 10k RPM SAS drives.

The desktop pool was configured to use a single datastore for replica storage. The replica datastore was presented to the cluster as an HDP pool virtual volume, with the HDP pool containing a single RAID-10 (2D+2D) parity group of 800 GB SSDs.
Figure 4 illustrates the storage configuration used for the Task User
linked clone desktops.
Figure 4
Each VMFS datastore contained 50 desktops (+/- 2 desktops). Each ESXi
server in the compute cluster used for Task User workloads was
configured to host exactly 200 desktops in order to obtain accurate end
user experience metrics during the testing.
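The "High-Density vCPU per Core" figures in Table 8 and Table 9 appear to correspond to the desktop placement described above. The following sketch (Python, illustration only; hyper-threading is ignored) reproduces them from the host core counts.

```python
# Each 520HB2 blade has 2 x 12-core Intel Xeon E5-2697 v2 processors.
CORES_PER_HOST = 2 * 12

def vcpu_per_core(desktops_per_host: int, vcpus_per_desktop: int) -> float:
    """Ratio of allocated virtual CPUs to physical cores on one ESXi host."""
    return desktops_per_host * vcpus_per_desktop / CORES_PER_HOST

print(vcpu_per_core(90, 2))              # Extreme Power Users: 7.5 vCPU per core
print(round(vcpu_per_core(200, 1), 1))   # Task Users: 8.3 vCPU per core
```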
Login VSI Test Harness Configuration
Extreme Power Users - Persistent and Full Clone
Desktop Pool
Login VSI was used to generate a heavy workload on the desktops. The Login VSI launchers were each configured to initiate up to 15 PCoIP sessions to the VMware Horizon View Connection Server to simulate end-to-end execution of the entire VMware Horizon View infrastructure stack.

The standard "Heavy" Login VSI workload was modified to include running Vdbench in the background during all test phases in order to generate the additional IOPS necessary to meet the 150-175 IOPS target for the workload. To ensure the I/O profile used in Vdbench was applicable to desktop application usage, individual application I/O profiles were captured from previous testing and used as Vdbench workload definitions. Table 10 lists the applications and associated I/O profiles used to define the Vdbench workload definitions.
Table 10. Vdbench Application I/O Profiles Used in Workload Definitions

    Microsoft PowerPoint®: 19% read, 81% write; 75% random, 25% sequential
    Adobe Acrobat Reader: 22% read, 78% write; 71% random, 29% sequential
    Microsoft Outlook®: 17% read, 83% write; 81% random, 19% sequential
    Microsoft Excel®: 18% read, 82% write; 74% random, 26% sequential
    Mozilla Firefox: 36% read, 64% write; 55% random, 45% sequential
    Microsoft Internet Explorer®: 17% read, 83% write; 80% random, 20% sequential
    Microsoft Web Album: 27% read, 73% write; 68% random, 32% sequential
    Microsoft Media Player: 23% read, 77% write; 61% random, 39% sequential
    Microsoft Word®: 26% read, 74% write; 62% random, 38% sequential
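As a hypothetical illustration of how profiles like these could be expressed, the sketch below emits Vdbench-style workload definition lines, mapping the read percentage to rdpct and the random percentage to seekpct. The storage definition (sd=sd*) and the 4 KB transfer size are assumptions for illustration; the actual parameter files used in the testing are not reproduced here.

```python
# (read_pct, random_pct) per application, taken from Table 10.
PROFILES = {
    "powerpoint": (19, 75), "acrobat_reader": (22, 71), "outlook": (17, 81),
    "excel": (18, 74), "firefox": (36, 55), "internet_explorer": (17, 80),
    "web_album": (27, 68), "media_player": (23, 61), "word": (26, 62),
}

for name, (read_pct, random_pct) in PROFILES.items():
    # rdpct = percentage of reads, seekpct = percentage of random I/O in Vdbench.
    print(f"wd=wd_{name},sd=sd*,xfersize=4k,rdpct={read_pct},seekpct={random_pct}")
```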
Login VSI was configured to stagger logins of all 180 users with a single
login occurring every 5.5 seconds. This ensured all 180 users logged into
the desktops and began the steady state workloads within a 17 minute
period in order to simulate a moderate login storm.
Task Users - Linked Clone Desktop Pool
Login VSI was used to generate a light workload on the desktops. Login
VSI launchers were each configured to initiate up to 20 PCoIP sessions to
the VMware Horizon View Connection Server to simulate end-to-end
execution of the entire VMware Horizon View infrastructure stack. The
standard "Light" Login VSI workload was used to simulate a Task User
workload.
Login VSI was configured to stagger logins of all 400 users with a single
login occurring every 6 seconds. This ensured all 400 users logged into
the desktops and began the steady state workloads within a 40 minute
period in order to simulate a moderate login storm.
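A quick check of the login windows quoted above, computed from the stagger intervals (the first login is taken to start at time zero):

```python
def login_storm_minutes(users: int, interval_seconds: float) -> float:
    """Time from the first to the last session launch, in minutes."""
    return (users - 1) * interval_seconds / 60.0

print(round(login_storm_minutes(180, 5.5), 1))  # Extreme Power Users: ~16.4 minutes (~17 minute window)
print(round(login_storm_minutes(400, 6.0), 1))  # Task Users: ~39.9 minutes (~40 minute window)
```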
Test Cases
Test Case 1: Concurrent VMmark Servers Initialization
and Boot Storm for Virtual Desktops
In this test, an immediate power-on of 580 desktops (linked clone and full clone) and 112 VMmark virtual servers (14 VMmark tiles) was performed through the VMware Virtual Infrastructure Client. This boot storm test contained the following mix of desktop and server types:

• 400 linked clone desktops configured for Task User workloads
• 180 full clone desktops configured for Extreme Power User workloads
• 112 virtual servers (14 tiles) configured for VMmark workloads, containing the following mix of server types:
  - 14 Exchange servers
  - 56 web servers (DVD Store 2 and Olio)
  - 28 database servers (DVD Store 2 and Olio)
  - 14 standby systems

This test case was performed to ensure that the storage assigned to the desktop pools and VMmark virtual servers performed adequately under stress, and to determine the amount of time necessary for the desktops to be ready for login by the end user.
Test Case 2: Concurrent VMmark Benchmark Workloads,
Login Storm, and Steady State for Extreme Power Users
and Task Users
Login VSI allows for staggering user logins within a desktop pool and then
looping the configured workload for a specified amount of time before
logging the session off. For this test case, both desktop pools started their
login storm at the same time and each pool logged on a single user every
5.5 seconds. In addition, VMmark workloads were started 24 minutes
before the completion of all the desktop logins.
All the Login VSI sessions were configured to log off approximately five
hours after logging on to ensure that each desktop ran a steady state
workload loop at least twice, and to ensure that both desktop workloads
(Extreme Power Users and Task Users) and VMmark workloads ran
together for at least 3 hours.
This test case was performed to ensure that all 580 desktops ran the full steady state workload at the same time that the 14 VMmark tiles were running their workloads, to show sustained concurrent performance, and to show that the infrastructure can support not only the login storm but also the steady state workloads of all 580 desktops and 14 VMmark tiles concurrently.
Test Case 3: VMmark Only

For this test, all VMware Horizon View desktops were kept powered off and only the VMmark workloads for the 14 tiles were running. The VMmark tiles were configured to run for 3 hours (the same as in Test Case 2).

This test was performed to measure the scores when running the VMmark workloads only and to compare them with the scores obtained when running together with the other workloads (Test Case 2).
Analysis
Test Case 1: VMmark Servers Initialization and
Boot Storm for Virtual Desktops
The immediate power-on of 580 desktops and 112 VMmark virtual servers (14 tiles) took 3 minutes and 20 seconds. This was measured from the time that the desktops were concurrently powered on from vCenter until the time all 580 desktops showed as "Available" within VMware Horizon View Administrator. Six minutes of metrics are graphed to illustrate the periods prior to power-on and after the desktops were marked as available and ready for login.
Storage Infrastructure
Multiple performance metrics from Virtual Storage Platform G1000 were
analyzed to ensure that the storage could support the stress of an
immediate power on of all of the desktops and VMmark servers.
HDP Pool IOPS

Figure 5 and Figure 6 illustrate the total combined IOPS during the boot storm for the LDEVs used within the HDP pools used for the virtual desktops and VMmark tiles.

Write IOPS peak at approximately:

• 0 for the Replica HDP Pool
• 6,500 for the Linked Clone HDP Pool
• 4,000 for the Full Clone HDP Pool (Extreme Power User)
• 1,700 for the VMmark HDP Pool

Read IOPS peak at approximately:

• 13,000 for the Replica HDP Pool
• 13,700 for the Linked Clone HDP Pool
• 11,600 for the Full Clone HDP Pool (Extreme Power User)
• 11,500 for the VMmark HDP Pool

Figure 5

• Total write IOPS peak at approximately 9,981.
• Total read IOPS peak at approximately 46,400.
Figure 6
Processor and Cache Write Pending

Figure 7 and Figure 8 illustrate the management processor utilization and cache write pending rate observed during the boot storm.

• MPU utilization does not rise above 17%.
• Cache write pending does not rise above 2%.

These metrics show that, even during a boot storm, Virtual Storage Platform G1000 still has plenty of resources for additional workloads.

Figure 7
Figure 8
Cache Hit Percentage

Figure 9 illustrates the read cache hit percentage observed during the boot storm.

• The read cache hit percentage of the Replica HDP Pool is approximately 100% throughout the boot storm.
• The read cache hit percentage of the Linked Clone HDP Pool is approximately 85% throughout the boot storm.
• The read cache hit percentage of the VMmark HDP Pool decreases as individual VMmark servers power on.

Figure 9
Physical Disk

Figure 10 illustrates the average physical disk busy rate for the LDEVs used within the HDP pools used for the virtual desktops and VMmark tiles.

• The disk busy rate on the 900 GB 10K SAS drives for the Linked Clone HDP Pool peaked at 100% during one minute of the boot storm operation.
• The disk busy rate on the 800 GB SSD drives for the Replica HDP Pool does not rise above 1%.
• The disk busy rate on the 1.6 TB FMD drives for the Extreme Power Users HDP Pool does not rise above 36%.
• The disk busy rate on the 1.6 TB FMD drives for the VMmark HDP Pool does not rise above 25%.

Figure 10
Front End Ports

Figure 11 and Figure 12 illustrate the throughput observed on the front end Fibre Channel ports of Virtual Storage Platform G1000 during the boot storm.

• Throughput on ports used for the Linked Clone Compute Cluster peaked at 111 MB/sec.
• Throughput on ports used for the Extreme Power Users (Full Clone) Compute Cluster peaked at 113 MB/sec.
• Throughput on ports used for the VMmark Compute Cluster peaked at 82 MB/sec.
• The total throughput peaked at 1,214 MB/sec during the boot storm.

Figure 11

Figure 12 shows the average throughput for each of the workloads during the boot storm.
Figure 12
Storage Port Latency

Figure 13 illustrates the average storage latency observed on Virtual Storage Platform G1000 during the boot storm.

• Average latency on ports used for the Linked Clone Compute Cluster peaked at 9 milliseconds during the boot storm.
• Average latency on ports used for the Extreme Power Users (Full Clones) Compute Cluster peaked at 1.2 milliseconds.
• Average latency on ports used for the VMmark Compute Cluster peaked at 0.6 milliseconds.

Figure 13
HDP Pool Latency

Figure 14 illustrates the average storage latency for the LDEVs used within the HDP pools during the boot storm.

• The average latency for the LDEVs used for the Linked Clone HDP Pool peaked at 15 milliseconds during the boot storm.
• The average latency for the LDEVs used within the Extreme Power Users HDP Pool and the VMmark HDP Pool did not rise above 1 millisecond during the boot storm.

Figure 14
Test Case 2: VMmark Benchmark Workloads,
Login Storm, and Steady State for Extreme
Power Users and Task Users
This test generates multiple performance metrics from ESXi hypervisors,
Hitachi Virtual Storage Platform G1000, VMmark, and Login VSI test
harnesses during the login storm and steady state operations. This test
contained the following mix of desktop and server types:

400 linked clone desktops configured for Task User workloads

180 full clone desktops configured for Extreme Power User workloads

112 virtual servers (14 Tiles) configured for VMmark workloads
Compute Infrastructure
Esxtop was used to capture performance metrics on all 6 hosts in the
Horizon and VMmark clusters during the VMmark workloads and login
storm/steady state operations for the desktops. The following metrics
illustrate the performance of the hypervisor and guest virtual machines.
Hypervisor CPU Performance

Figure 15 illustrates the CPU utilization representative of a host in the Extreme Power User cluster during the login storm and steady state operations.

• Percent core utilization does not rise above 94%, averaging 81% during steady state. At this rate, the blades are running with high CPU utilization; this represents the maximum load per host for the workloads used in this test. For high availability, either reduce the number of desktops or add ESXi servers to the cluster.
• Percent utilization time does not rise above 86%, averaging 64% during steady state.
• Percent processor utilization does not rise above 52%, averaging 44% during steady state.

Figure 15
Figure 16 illustrates the CPU utilization representative of a host in the VMmark cluster during the workload setup and steady workloads of the VMmark tiles.

• Percent core utilization averages 98% during the steady VMmark tile workloads.
• Percent utilization time averages 92% during the steady VMmark tile workloads.
• Percent processor utilization does not rise above 55%.

Figure 16
Hypervisor CPU Load

Figure 17 illustrates the CPU load representative of a host in the Extreme Power User cluster during the login storm and steady state operations.

• The 1-minute average CPU load does not rise above 2.3.

Figure 17
Figure 18 illustrates the CPU load representative of a host in the VMmark cluster during the workload setup and steady workloads of the VMmark tiles.

• The 1-minute average CPU load does not rise above 2.9 during the steady VMmark tile workloads.

Figure 18
Hypervisor Memory Performance

Figure 19 illustrates the memory performance representative of a host in the Extreme Power User cluster during the login storm and steady state operations.

• The total used memory averages 350 GB of the total 384 GB during the login storm and steady state.

Figure 19
Figure 20 illustrates the memory performance representative of a host in the VMmark cluster during the workload setup and steady workloads of the VMmark tiles.

• The total used memory does not rise above 45% of the available physical RAM.

Figure 20
Hypervisor Network Performance

Figure 21 shows the network performance representative of a host in the Extreme Power Users cluster during the login storm and steady state operations.

• MBits/sec transmitted does not rise above 140 MBits/sec.
• MBits/sec received does not rise above 70 MBits/sec.
• Considering that each hypervisor has 2 × 10 GbE ports, these metrics show that there is plenty of network bandwidth available for additional workloads.

Figure 21
Figure 22 shows the network performance representative of a host in the VMmark cluster during the workload setup and steady workloads of the VMmark tiles.

• MBits/sec transmitted does not rise above 958 MBits/sec.
• MBits/sec received does not rise above 480 MBits/sec.

Figure 22
Guest CPU Performance - Extreme Power User

Figure 23 and Figure 24 illustrate the in-guest CPU performance metrics obtained from vCenter during the login storm and steady state operations of the Extreme Power Users. These graphs show four representative desktops from the 180 that ran the workloads. The CPU Ready time metrics were obtained from vCenter past-day summation values and converted to CPU Ready %. The CPU CoStop metrics represent 1 hour of real-time statistics from vCenter for four desktops of the 180 that ran the workloads.
Percent Ready should not average above 10% for every vCPU assigned to the virtual desktop (20% in the case of the tested infrastructure). Percent CoStop is an important metric to observe when deploying virtual machines with multiple vCPUs. A high CoStop percentage indicates that the underlying host's CPU resources are overloaded to the point where the virtual machine is ready to execute a command but is waiting for the ESXi CPU scheduler to make multiple pCPUs available. As a best practice, CoStop should not rise above 3%.

• Percent Ready does not rise above 21% and averages between 8% and 9%.
• Percent CoStop does not rise above 1.4%.
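For reference, the conversion from a vCenter CPU Ready summation value to CPU Ready % can be sketched as follows (an illustration; the past-day statistics interval is assumed to be 300 seconds, and the summation value aggregates all vCPUs of the virtual machine).

```python
def cpu_ready_percent(ready_summation_ms: float, interval_seconds: float = 300.0) -> float:
    """CPU Ready % for one virtual machine over one vCenter sampling interval."""
    return ready_summation_ms / (interval_seconds * 1000.0) * 100.0

# Example: a 2-vCPU desktop reporting 24,000 ms of ready time in a 300 second
# interval is at 8% CPU Ready, consistent with the 8-9% average above and
# below the 20% guideline (10% per vCPU).
print(cpu_ready_percent(24_000))  # 8.0
```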
Figure 23
Figure 24
Guest IOPS Performance - Extreme Power User

Figure 25 shows the in-guest IOPS performance metrics obtained from esxtop during the login storm and steady state operations for the Extreme Power Users. The graphs show four representative desktops from the 180 that ran the workloads.

• IOPS peak at 656 during the login storm.
• IOPS peak at 227 and average 157 during steady state.

Figure 25
Guest CPU Performance - Task User

Figure 26 illustrates the in-guest CPU performance metrics obtained from vCenter during the login storm and steady state operations of the Task Users. These graphs show four representative desktops from the 400 that ran the workloads. The CPU Ready time metrics were obtained from vCenter past-day summation values and converted to CPU Ready %. Percent Ready should not average above 10% for every vCPU assigned to the virtual desktop (20% in the case of the tested infrastructure).

• Percent Ready does not rise above 10.5% and averages between 6% and 7%.
Figure 26

Guest IOPS Performance - Task User

Figure 27 shows the in-guest IOPS performance metrics obtained from esxtop during the login storm and steady state operations for the Task Users.

• IOPS peak at 120 during the login storm.
• IOPS peak at 16 and average between 3 and 4 during steady state.

Figure 27
Storage Infrastructure
Multiple performance metrics were captured from Virtual Storage Platform
G1000 during the VMmark workloads and login storm/steady state
operations for the virtual desktops. The following metrics illustrate the
performance of the storage array.
HDP Pool IOPS

Figure 28 and Figure 29 illustrate the total combined IOPS during the VMmark workloads and login storm/steady state operations for the virtual desktops.

Write IOPS peak at approximately:

• 1 for the Replica HDP Pool
• 3,500 for the Linked Clone HDP Pool
• 42,000 for the Full Clone HDP Pool (Extreme Power User)
• 21,200 for the VMmark HDP Pool

Read IOPS peak at approximately:

• 300 for the Replica HDP Pool
• 900 for the Linked Clone HDP Pool
• 10,300 for the Full Clone HDP Pool (Extreme Power User)
• 11,700 for the VMmark HDP Pool

Figure 28

• The total write IOPS for all the HDP pools peaked at approximately 18,700.
• The total read IOPS for all the HDP pools peaked at approximately 45,120.
Figure 29
Processor and Cache Write Pending

Figure 30 and Figure 31 illustrate the management processor utilization and cache write pending rate observed during the VMmark workloads and login storm/steady state operations for the virtual desktops.

• MPU utilization does not rise above 49% during the login storm and stays below 34% during the steady workloads. This indicates that there is more than 65% headroom during steady workloads.
• Cache write pending does not rise above 38% during the login storm and stays below 22% during the steady workloads, indicating plenty of headroom for additional workloads.

Figure 30
Figure 31
Cache Hit Percentage

Figure 32 illustrates the read cache hit percentage observed during the VMmark workloads and login storm/steady state operations for the virtual desktops.

• The read cache hit percentage of the Replica HDP Pool is approximately 100% throughout the login storm and steady workloads.
• The read cache hit percentage of the Linked Clone HDP Pool decreases throughout the login storm and increases during the steady state to an average of 95%.
• The read cache hit percentage of the Extreme Power Users HDP Pool decreases throughout the login storm and stays below 33% during the steady workloads.
• The read cache hit percentage of the VMmark HDP Pool decreases during the VMmark workload setup and stays below 60% during the VMmark tile workloads.

Figure 32
Physical Disk

Figure 33 illustrates the average physical disk busy rate for the LDEVs used within the HDP pools during the VMmark workloads and login storm/steady state operations.

• The disk busy rate on the 900 GB 10K SAS drives for the Linked Clone HDP Pool does not rise above 65%.
• The disk busy rate on the 800 GB SSD drives for the Replica HDP Pool does not rise above 1%.
• The disk busy rate on the 1.6 TB FMD drives for the Extreme Power Users HDP Pool peaked at 94% during the login storm and remained below 70% during steady workloads.
• The disk busy rate on the 1.6 TB FMD drives for the VMmark HDP Pool peaked at 99% during the VMmark workload setup task (for about 5 minutes) and remained below 60% during steady VMmark workloads.
• There is 30% to 40% headroom during steady workloads.

Figure 33
Front End Ports

Figure 34 and Figure 35 illustrate the throughput observed on the front end Fibre Channel ports of Virtual Storage Platform G1000 during the VMmark workloads and login storm/steady state operations.

• Throughput on ports used for the Linked Clone Compute Cluster peaked at 20 MB/sec.
• Throughput on ports used for the Extreme Power Users (Full Clone) Compute Cluster peaked at 110 MB/sec.
• Throughput on ports used for the VMmark Compute Cluster peaked at 500 MB/sec during the VMmark workload setup task (for about 5 minutes); the total throughput during this time peaked at 2,400 MB/sec.

Figure 34

Figure 35 illustrates the average storage port throughput for each of the workloads.
Figure 35
Storage Port Latency

Figure 36 and Figure 37 illustrate the average latency observed on the front end ports of Virtual Storage Platform G1000 during the VMmark workloads and login storm/steady state operations.

• Average latency on ports used for the Linked Clone and Extreme Power Users desktops peaked at 1.3 milliseconds during the login storm.
• Average latency on ports used for the Linked Clone and Extreme Power Users desktops peaked at 2.7 milliseconds during steady state.

Figure 36

• Average latency on ports used for the VMmark Compute Cluster remained below 0.5 milliseconds during the steady VMmark workloads.

Figure 37
HDP Pool Latency

Figure 38 and Figure 39 illustrate the average latency for the LDEVs used within the HDP pools during the VMmark workloads and login storm/steady state operations.

• The average latency for the LDEVs used for the Linked Clone HDP Pools peaked at 2.4 milliseconds during steady state.
• The average latency for the LDEVs used within the Extreme Power Users HDP Pools peaked at 0.9 milliseconds during steady state.

Figure 38

• The average latency for the LDEVs used for the VMmark HDP Pool remained below 1 millisecond during the steady VMmark workloads.
• This data shows the capability of Virtual Storage Platform G1000 to handle a mix of workloads with very low latency.
Figure 39
Application Experience
Login VSI was used to measure end user application response times and
to determine the maximum number of desktops that can be supported on
the tested infrastructure running the specified workloads.
Application Response Times
Login VSI reported the time required for various operations to complete within the desktop during the test.

• All operations completed in less than 3.8 seconds.
• If the "Zip High Compression (ZHC)" metric from the linked clone desktops is excluded from the analysis, all other operations completed in less than 2.3 seconds.

These performance metrics show that the tested infrastructure provides an adequate end-user experience for 400 Task Users and 180 Extreme Power Users, even when running simultaneously with the 112 virtual machines from the 14 VMmark tiles.
Table 11 shows the operation abbreviations used in Login VSI and a
description of the action taken during the operation.
Table 11. Login VSI Operation Descriptions

    FCDL: File Copy Document Local
    FCDS: File Copy Document Share
    FCTS: File Copy Text Share
    FCTL: File Copy Text Local
    NFP: Notepad File Print
    NSLD: Notepad Start/Load File
    WFO: Windows File Open
    WSLD: Word Start/Load File
    ZHC: Zip High Compression
    ZNC: Zip No Compression
Figure 40 shows the application experience metrics as reported by Login
VSI.
Figure 40
VSI Max Metric

VSI Max is a metric that indicates the number of desktop sessions the tested infrastructure can support while running a specified workload.

Several metrics were monitored during the concurrent workload testing. While the VSI Max metric from Login VSI gives a good indication of when the maximum user density has been reached due to infrastructure resource saturation, we chose to lower densities slightly below VSI Max while carefully monitoring guest CPU ready percentage, ESXi CPU load and utilization, and guest CPU CoStop values. This ensured that we could guarantee the end-user experience while still having enough overall resource headroom to support bursts of CPU or I/O usage by the underlying guest operating systems.
VMmark

The figures below show the results for the VMmark workloads; this is the result of 14 VMmark tiles running in the same infrastructure where 580 VMware Horizon View desktops (full clones and linked clones) were running at the same time.

• The average workload throughput and average workload latency for each of the VMmark workloads are essentially the same as those from standard VMmark results, where VMmark is the only workload running in the infrastructure.

Figure 41 shows the average workload throughput for each of the application workloads.

Figure 41
Figure 42 shows the average response time measured for each of the
application workloads.
Figure 42
Test Case 3: VMmark Only

The figures below show the results for the VMmark workloads; this is the result of 14 VMmark tiles running in the same infrastructure as in Test Case 2, the only difference being that VMmark is the only workload running.
Figure 43 and Figure 44 show the average workload throughput and
average workload latency.
Figure 43
Figure 44
Figure 45 shows the results from both VMmark tests.

• The workload throughput for each of the applications is essentially the same.

Figure 45

Figure 46 shows that the workload latency increases slightly when running VMmark together with other workloads in the same infrastructure. However, as indicated in the previous graph, the VMmark scores are essentially the same.

Figure 46
Conclusion
All performance metrics analyzed show that the underlying hardware contained within the Unified Compute Platform for VMware vSphere converged stack supported:

• An immediate power-on of 400 linked clones and 180 persistent desktops, including the power-on of 112 virtual machines from 14 VMmark tiles
• Adequate end-user experience for 400 Task Users running at 3-7 IOPS per desktop
• Adequate end-user experience for 180 Extreme Power Users running at 150-175 IOPS per desktop
• Adequate application throughput and latency for 14 VMmark tiles
• Simultaneous virtualization operations such as cloning and deployment, vMotion, and Storage vMotion

With respect to the VMmark results, it is important to highlight that the VMmark scores obtained while running concurrently with the VDI workloads are essentially the same as the standard standalone VMmark results.
For More Information
Hitachi Data Systems Global Services offers experienced storage consultants,
proven methodologies and a comprehensive services portfolio to assist you in
implementing Hitachi products and solutions in your environment. For more
information, see the Hitachi Data Systems Global Services website.
Live and recorded product demonstrations are available for many Hitachi
products. To schedule a live demonstration, contact a sales representative. To
view a recorded demonstration, see the Hitachi Data Systems Corporate
Resources website. Click the Product Demos tab for a list of available recorded
demonstrations.
Hitachi Data Systems Academy provides best-in-class training on Hitachi
products, technology, solutions and certifications. Hitachi Data Systems Academy
delivers on-demand web-based training (WBT), classroom-based instructor-led
training (ILT) and virtual instructor-led training (vILT) courses. For more
information, see the Hitachi Data Systems Services Education website.
For more information about Hitachi products and services, contact your sales
representative or channel partner or visit the Hitachi Data Systems website.
Corporate Headquarters
2845 Lafayette Street, Santa Clara, California 95050-2627 USA
www.HDS.com
Regional Contact Information
Americas: +1 408 970 1000 or [email protected]
Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]
Asia-Pacific: +852 3189 7900 or [email protected]
© Hitachi Data Systems Corporation 2015. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. “Innovate with Information” is a trademark or registered
trademark of Hitachi Data Systems Corporation. Microsoft, Windows Server, SQL Server, PowerPoint, Outlook, Excel, Word, and Internet Explorer are trademarks or registered trademarks
of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi
Data Systems Corporation.
AS-356-00, January 2015