
White Paper
Oracle Database on Cisco UCS C-Series Servers
with Fusion-io ioMemory
August 2014
© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.
Page 1 of 32
Contents
What You Will Learn
Objectives
Audience
Purpose of This Document
Solution Overview
Cisco Unified Computing System
   Cisco Nexus 5548UP Switch
   Fusion-io
Hardware and Software Components
Infrastructure Setup
   Network Design
   Fusion-io ioDrive2 Configuration
   Oracle Database 11g Configuration
Performance and Failover Analysis
   Workload Description
   Test Scenarios
Conclusion
For More Information
What You Will Learn
The Cisco Unified Computing System™ (Cisco UCS®) is a next-generation data center platform that unites
computing, networking, storage access, and virtualization resources in a cohesive system designed to reduce total
cost of ownership (TCO) and increase business agility. Cisco UCS C-Series Rack Servers extend unified
computing innovations to a rack-mount form factor, including unified management through a single wire, a
standards-based unified network fabric, and radical simplification through Cisco® fabric extender technology.
Based on industry-standard, intelligent Intel Xeon processors, Cisco UCS C-Series Rack Servers can be
configured with PCI Express (PCIe) based Fusion-io ioDrive2 cards, which provide ultra-low latency and storage
capacity of up to 3 terabytes (TB) per Fusion-io ioDrive2 card.
PCIe-connected flash-memory technology is widely accepted in data centers because it allows existing
applications to be used transparently while offering a significant performance improvement with a smaller board
footprint. PCIe flash-memory-based storage is connected directly to the PCIe bus, giving the flash device a direct
path to the CPU and to system memory. This tightly coupled connection provides a storage environment with lower
latency than a SAS or SATA solid-state drive (SSD). Fusion-io
ioDrive2, a PCIe flash-memory-based storage solution, provides low latency, high capacity, and high performance.
Oracle Database provides the foundation that allows IT to successfully deliver more information with higher quality
of service (QoS), reduce the risk of change, and make more efficient use of the IT budget. For small and
midsized businesses, high performance, database failover, data backup, and data restoration are the
major challenges in adopting a single-instance Oracle Database deployment.
This document discusses how customers can achieve high I/O throughput with very low disk latency when using
Cisco UCS Rack Servers with Fusion-io cards for their enterprise-class Oracle Database deployments. Cisco UCS
Rack Server direct-connect technology allows the use of Cisco UCS Service Profiles on Cisco UCS Managed Rack
Servers, resulting in little downtime during migration after a server failure.
Objectives
This document provides a reference architecture that illustrates the benefits of using Cisco UCS and Fusion-io
with Oracle Database to provide a robust, resilient, and efficient infrastructure solution that can meet the
demanding needs of businesses today. This document assumes that the user is familiar with Cisco UCS, Cisco
Nexus® switches, Fusion-io, and Oracle Database product technologies.
Audience
This document is intended for solution architects, sales engineers, field engineers, and design consultants involved
in planning, designing, and deploying an Oracle Database solution on Cisco UCS and Fusion-io infrastructure. It
assumes that the reader has an architectural understanding of the base configuration and implementation of
Oracle Database, Cisco UCS, and Fusion-io.
Purpose of This Document
The testing reported in this document had two purposes:
● To evaluate the maximum performance of a single instance of Oracle Database on Cisco UCS C240 M3 Rack Servers with Fusion-io
● To illustrate how a single-instance Oracle Database with Fusion-io, combined with robust backup and recovery techniques, can serve as an enterprise solution with little downtime and high data reliability
Enterprises can use the comparisons and the failover and backup techniques reported here to help make value-based
decisions in choosing Cisco UCS and Fusion-io technologies to implement reliable and cost-effective Oracle
Database solutions.
Solution Overview
The solution discussed in this document demonstrates the deployment of a single instance of Oracle Database 11g
on a Cisco UCS C240 M3 server equipped with two 3-TB Fusion-io ioDrive2 cards. Cisco UCS C240 M3 servers
are directly connected to the Fabric Interconnect using 10-Gbps unified network fabric. Cisco UCS Manager 2.2
supports an option to connect Cisco UCS C-Series Rack Servers directly to the Cisco UCS Fabric Interconnects.
This option enables Cisco UCS Manager to manage the Cisco UCS C-Series servers using a single cable for both
management traffic and data traffic. The Cisco UCS 6248UP 48-Port Fabric Interconnect (Figure 2) is a 1RU 10
Gigabit Ethernet, FCoE, and Fibre Channel switch offering up to 960-Gbps throughput and up to 48 unified ports.
The switch has 32 1/10-Gbps fixed Ethernet, FCoE, and Fibre Channel ports on the base with the option of one
expansion slot with 16 unified ports. Cisco fabric interconnects create a unified network fabric and provide
uniform access to both network and storage resources. Fabric interconnects are deployed in pairs for high
availability. The Fabric Interconnects are connected to two upstream Cisco Nexus 5548UP Switches.
Oracle Database (single instance) on Cisco UCS Rack Servers with Fusion-io cards can be configured with
multiple options. The configuration depends on the performance and high-availability requirements of the deployed
solution. Some possible configurations are listed here:
● Oracle Database (single instance) on a Cisco UCS Rack Server with a single Fusion-io card: This option provides high performance but lacks high-availability features. If the Fusion-io card fails, the customer must install another Fusion-io card on the server and restore the database from backups and archive logs.
● Oracle Database (single instance) on a Cisco UCS Rack Server with two Fusion-io cards: This option provides protection from failure of a Fusion-io card. Oracle Automatic Storage Management (ASM) with normal redundancy can be configured to mirror data across the two Fusion-io cards, so failure of one card is tolerated. A maximum of four 3-TB Fusion-io cards can be installed in a Cisco UCS C240 M3 server.
● Oracle Database (single instance) with Oracle Data Guard on a Cisco UCS Rack Server and Fusion-io card: This option provides high performance and high availability for Oracle Database during Fusion-io card failure. Oracle Data Guard is a high-availability and disaster-recovery solution that provides very fast automatic failover (referred to as fast-start failover) after database failures, node failures, data corruption, and media failures.
In the solution discussed here, two Fusion-io ioDrive2 cards are installed on a single Cisco UCS C240 M3 and
configured as a normal-redundancy disk group. This configuration provides storage capacity equivalent to one
Fusion-io card and protection during failure of one Fusion-io card. Additionally, the Cisco UCS C240 M3 was
provisioned with external network-attached storage (NAS), which allows storage of Oracle database backup and
archive logs. Oracle Recovery Manager (RMAN) was used to back up Oracle Database regularly. This setup
allows a failed Oracle Database instance to be reprovisioned with limited downtime.
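As a sketch of such an RMAN backup cycle (the backup destination path and the retention window shown here are assumptions for illustration, not the values used in the tested setup):

```
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> RUN {
  # /nas/backup is a hypothetical mount point for the external NAS device
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/nas/backup/%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}
```

Writing both the database backup and the archived logs to the NAS keeps all restoration inputs off the Fusion-io cards, which is what allows recovery after a card or server failure.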
The solution described here presents the following aspects of deployment:
● Configuration of Oracle Database Enterprise Edition (single instance) on a Cisco UCS C240 M3 server (single-wire Cisco UCS management) with two Fusion-io ioDrive2 cards and Red Hat Enterprise Linux 6.4
● Configuration of Oracle ASM using ASMLib with normal redundancy for the Oracle ASM disk group
● Performance validation using an online transaction processing (OLTP) workload
● Performance impact during failure of one of the Fusion-io cards
● Restoration of the database from backups and archive logs
Figure 1 shows the physical layout of the test environment.
Figure 1. Physical Layout of Managed Cisco UCS C240 M3 Rack Server
Cisco Unified Computing System
Cisco UCS is a next-generation data center platform that unites computing, network, and storage access. The
platform, optimized for virtual environments, is designed using open industry-standard technologies. It aims to
reduce the total cost of ownership (TCO) and increase business agility. The system integrates a low-latency,
lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an
integrated, scalable, multichassis platform in which all resources participate in a unified management domain.
The main components of Cisco UCS are:
● Computing: The system is based on an entirely new class of computing system that incorporates blade and rack servers based on the Intel® Xeon® processor 5500, 5600, and E5-2600 series.
● Network: The system is integrated on a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed and by decreasing the power and cooling requirements.
● Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended to virtualized environments to support changing business and IT requirements.
● Storage access: The system provides consolidated access to both the SAN and NAS over the unified fabric. By unifying storage access, Cisco UCS enables access to storage over Ethernet, Fibre Channel, FCoE, and Small Computer System Interface over IP (iSCSI). This capability provides customers with a choice of storage access plus investment protection. In addition, server administrators can preassign storage access policies for system connectivity to storage resources. The result is simplified storage connectivity and increased productivity.
● Management: The system uniquely integrates all the components within the solution. This single entity can be effectively managed using Cisco UCS Manager, which has an intuitive GUI, a command-line interface (CLI), and a robust API to manage all system configuration and operations.
Cisco UCS is designed to deliver:
● Reduced TCO and increased business agility
● Increased IT staff productivity through just-in-time provisioning and mobility support
● A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole
● Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand
● Industry standards supported by a partner ecosystem of industry leaders
Cisco UCS C240 M3 Rack Server
Building on the success of the Cisco UCS C-Series M2 Rack Servers, the enterprise-class Cisco UCS C240 M3
Rack Server (Figure 2) enhances the capabilities of the Cisco UCS portfolio in a 2RU form factor. With the addition
of the Intel Xeon processor E5-2600 product family, it delivers significant performance and efficiency gains.
Figure 2. Cisco UCS C240 M3 Rack Server
The Cisco UCS C240 M3 also offers up to 768 GB of RAM, 24 hard disk drives or SSD drives, and four Gigabit
Ethernet LAN interfaces built into the motherboard. The result is a server with outstanding levels of density and
performance in a compact package.
The Cisco UCS C240 M3 balances simplicity, performance, and density for production-level virtualization and other
mainstream data-center workloads. The server is a 2-socket server with substantial throughput and scalability. The
Cisco UCS C240 M3 extends the capabilities of Cisco UCS. It uses the latest Intel Xeon processor E5-2600 series
multicore CPUs to deliver enhanced performance and efficiency. These processors adjust the server performance
according to application needs and use DDR3 memory technology with memory scalable up to 768 GB for
demanding virtualization and large-data-set applications. Alternatively, these servers can have a more
cost-effective memory footprint for less demanding workloads. Cisco UCS C240 M3 Rack Servers offer these main
benefits:
● Outstanding server performance and expandability: The Cisco UCS C240 M3 offers up to two Intel Xeon processor E5-2600 or E5-2600 v2 CPUs, 24 DIMM slots, 24 disk drives, and four 1 Gigabit Ethernet LAN-on-motherboard (LOM) ports to provide outstanding levels of internal memory and storage expandability and exceptional performance.
● Flexible deployment and easy management: The Cisco UCS C240 M3 can operate in standalone environments or as part of Cisco UCS for greater flexibility and computing density in a small footprint. In a multiple-rack environment, because of the ease of data migration between racks, the complexity of rack management is greatly reduced.
● Workload scalability: Cisco UCS C240 M3 servers with up to four Fusion-io ioDrive2 cards, each with 3-TB capacity in an x8 PCIe slot, allow organizations to scale to meet growing workload demands, handle greater spikes in traffic, and consolidate infrastructure.
Cisco UCS VIC 1225
A Cisco innovation, the Cisco UCS VIC 1225 (Figure 3) is a dual-port Enhanced Small Form-Factor Pluggable
(SFP+) 10 Gigabit Ethernet and FCoE-capable PCIe card designed exclusively for Cisco UCS C-Series Rack
Servers. With its half-height design, the card preserves full-height slots in servers for third-party adapters certified
by Cisco. It incorporates next-generation converged network adapter (CNA) technology from Cisco, providing
investment protection for future feature releases. The card enables a policy-based, stateless, agile server
infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically
configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC
1225 supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco
UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.
Figure 3. Cisco UCS VIC 1225 Architecture
Cisco Nexus 5548UP Switch
The Cisco Nexus 5548UP (Figure 4) is a 1RU 10 Gigabit Ethernet switch that offers up to 960 Gbps of
throughput and scales up to 48 ports. It provides 32 fixed 1/10 Gigabit Ethernet SFP+ ports, which support
Ethernet and FCoE or 1/2/4/8-Gbps native Fibre Channel, and one expansion slot that supports a combination of
Ethernet, FCoE, and native Fibre Channel ports.
Figure 4. Cisco Nexus 5548UP Switch
Fusion-io
The Fusion ioMemory platform (Fusion-io ioDrive2) combines Fusion-io Virtual Storage Layer (VSL) software
with ioMemory modules to provide enhanced capabilities for enterprise applications and databases. This platform
uses flash-based memory technology to significantly increase data center efficiency with enterprise-class
performance, reliability, availability, and manageability. Some of the important features of Fusion-io ioDrive2 are:
● Consistent, low-latency performance: The Fusion-io ioMemory platform provides consistent, low-latency access for mixed workloads. The sophisticated Fusion-io ioMemory architecture enables nearly symmetrical read and write performance, with best-in-class low-queue-depth performance. This feature makes Fusion-io ioDrive2 devices, powered by Fusion ioMemory, excellent solutions for a wide variety of real-world, high-performance enterprise environments.
● Industry-leading capacity: Fusion-io ioDrive2 is available in 365-GB, 785-GB, 1,205-GB, and 3-TB capacities to help increase CPU efficiency in even the most data-intensive environments.
The 3-TB Fusion-io ioDrive2 adapter for Cisco UCS has the following specifications:
● 3-TB multilevel cell (MLC) flash-memory capacity
● 1.5-GBps bandwidth (1-MB read)
● 1.1-GBps bandwidth (1-MB write)
● 143,000 I/O operations per second (IOPS; 512-byte random read)
● 535,000 IOPS (512-byte random write)
● 136,000 IOPS (4-KB random read)
● 242,000 IOPS (4-KB random write)
● 15 microseconds of write latency and 68 microseconds of read latency
● Hardware supported: all Cisco UCS M3 servers
● Software supported: Cisco UCS Manager 2.1 and later
Hardware and Software Components
The configuration presented in this document is based on the following main software and hardware components.
Hardware Requirements
● One Cisco UCS C240 M3 Rack Server (Intel Xeon processor E5-2695 v2 at 2.4 GHz) with 256 GB of memory
● One Cisco UCS VIC 1225
● Two Cisco UCS 6248UP fabric interconnects
● Two Cisco Nexus 5548UP switches
● Two 3-TB Fusion-io ioDrive2 cards
Client Hardware Requirements
● One Cisco UCS C210 M2 Rack Server
Software Requirements
● Cisco UCS firmware Release 2.2(1b)
● Cisco Integrated Management Controller (IMC) Release 1.5(4)
● Red Hat Enterprise Linux (RHEL) 6.4
● Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 (64-bit)
● Oracle ASM Library (ASMLib): generic Linux Version 2.0.4
● Swingbench Release 2.4
Cisco UCS C240 M3 Server BIOS Settings
The BIOS tests and initializes the hardware components of a system and boots the operating system from a
storage device. In a typical computational system, several BIOS settings control the system’s behavior. Some of
these settings are directly related to the performance of the system.
Figure 5 shows the BIOS settings that control performance.
Figure 5. BIOS Settings
Infrastructure Setup
This section provides important design and configuration information for deployment of Oracle Database 11g on
Cisco UCS C240 M3 servers configured with Fusion-io ioDrive2 cards. Cisco UCS Manager 2.2 supports
integration of Cisco UCS C-Series Rack Servers with Cisco UCS Manager through direct connection to the fabric
interconnects. Fabric extenders are not required in this configuration. This option enables Cisco UCS Manager to
manage the Cisco UCS C-Series servers using a single cable for both management traffic and data traffic. For
complete details about single-wire management through direct-connect mode, please refer to
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c-series_integration/ucsm2-2/b_C-SeriesIntegration_UCSM2-2/b_C-Series-Integration_UCSM2-2_chapter_0110.html.
Network Design
Figure 6 shows the network connectivity from the Cisco UCS C240 M3 to the Fabric Interconnects. The server is
configured using direct-connect mode. Each port on the Cisco UCS VIC1225 on the Cisco UCS C240 M3 server is
attached to the server ports on Fabric Interconnects A and B through a 10-Gbps SFP+ cable.
The paths shown in multiple colors in the figure carry both management and data traffic.
Figure 6. Logical Connectivity of Cisco UCS C240 M3 and Cisco UCS 6248UP Fabric Interconnect
Fusion-io ioDrive2 Configuration
Oracle Database was configured on a mirrored group of 3-TB Fusion-io ioDrive2 cards. Some of the main steps for
configuring Fusion-io ioDrive2 cards on the Cisco UCS C240 M3 server are presented here.
Step 1. Verify that the latest drivers for RHEL 6.4 are installed for each virtual NIC (vNIC) interface on the Cisco
UCS C240 M3. The latest Ethernet NIC (eNIC) drivers can be downloaded from www.cisco.com. Extract
the driver for the Cisco VIC 1225 from the downloaded ISO file, in the
Linux\Network\Cisco\12x5x\RHEL\RHEL6.4 directory.
[root@fusionio-c240-1 downloads]# rpm -ivh kmod-enic-2.1.1.52rhel6u4.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:kmod-enic              ########################################### [100%]
[root@fusionio-c240-1 downloads]#
[root@fusionio-c240-1 downloads]# modinfo enic
filename:       /lib/modules/2.6.32-358.el6.x86_64/extra/enic/enic.ko
version:        2.1.1.52
license:        GPL v2
author:         Scott Feldman <[email protected]>
description:    Cisco VIC Ethernet NIC Driver
srcversion:     F55DFDB1146824B7E94F5F5
alias:          pci:v00001137d00000071sv*sd*bc*sc*i*
alias:          pci:v00001137d00000044sv*sd*bc*sc*i*
alias:          pci:v00001137d00000043sv*sd*bc*sc*i*
depends:
vermagic:       2.6.32-358.el6.x86_64 SMP mod_unload modversions
Step 2. Install the latest firmware and utilities for the Fusion-io ioDrive2 cards.
[root@fusionio-c240-1 Software Binaries]# rpm -ivh iomemory-vsl-2.6.32-358.el6.x86_64-3.2.6.1212-1.0.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iomemory-vsl-2.6.32-358########################################### [100%]
[root@fusionio-c240-1 Utilities]# ls
fio-util-3.2.6.1212-1.0.el6.x86_64.rpm
[root@fusionio-c240-1 Utilities]# rpm -ivh fio-util-3.2.6.1212-1.0.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:fio-util               ########################################### [100%]
[root@fusionio-c240-1 Firmware]# ls
fio-firmware-fusion-3.2.6.20131003-1.noarch.rpm  fusion_3.2.6-20131003.fff
[root@fusionio-c240-1 Firmware]# rpm -ivh fio-firmware-fusion-3.2.6.20131003-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:fio-firmware-fusion    ########################################### [100%]
[root@fusionio-c240-1 Firmware]#
Step 3. Format the Fusion-io card and attach it.
[root@fusionio-c240-1 ~]# fio-format /dev/fct0
/dev/fct0: Creating block device.
Block device of size 1205.00GBytes (1122.24GiBytes).
Using block (sector) size of 4096 bytes.
WARNING: Formatting will destroy any existing data on the device!
Do you wish to continue [y/n]? y
WARNING: Do not interrupt the formatting! If interrupted, the fio-sure-erase
utility may help recover from format errors. Please see documentation or contact
support.
Formatting: [====================] (100%) /
/dev/fct0 - format successful.
[root@fusionio-c240-1 ~]# fio-format /dev/fct1
/dev/fct1: Creating block device.
Block device of size 1205.00GBytes (1122.24GiBytes).
Using block (sector) size of 4096 bytes.
WARNING: Formatting will destroy any existing data on the device!
Do you wish to continue [y/n]? y
WARNING: Do not interrupt the formatting! If interrupted, the fio-sure-erase
utility may help recover from format errors. Please see documentation or contact
support.
Formatting: [====================] (100%) |
/dev/fct1 - format successful.
[root@fusionio-c240-1 ~]#
[root@fusionio-c240-1 ~]# fio-attach /dev/fct0
Attaching: [====================] (100%) /
fioa - attached.
[root@fusionio-c240-1 ~]# fio-attach /dev/fct1
Attaching: [====================] (100%) /
fiob - attached.
[root@fusionio-c240-1 ~]#
Step 4. Format each of the Fusion-io cards (/dev/fioa and /dev/fiob) into four partitions of 675 GB each.
[root@fusionio-c240-1 ~]# parted /dev/fioa
GNU Parted 2.1
Using /dev/fioa
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 2048s 675GB
(parted) print
Number  Start   End    Size   File system  Name     Flags
 1      1049kB  675GB  675GB               primary
(parted) mkpart primary 675GB 1350GB
(parted) mkpart primary 1350GB 2025GB
(parted) mkpart primary 2025GB 2700GB
(parted) print
Model: Unknown (unknown)
Disk /dev/fioa: 3000GB
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt

Number  Start   End     Size   File system  Name     Flags
 1      1049kB  719GB   719GB               primary
 2      719GB   1439GB  719GB               primary
 3      1439GB  2158GB  719GB               primary
 4      2158GB  2878GB  719GB               primary
(parted) quit
Information: You may need to update /etc/fstab.
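The interactive session above can also be scripted. The following sketch (an illustration, not from the paper) prints the equivalent non-interactive parted commands so the layout can be reviewed before anything destructive is run against the real devices; the device names are assumptions and should be confirmed with fio-status first.

```shell
#!/bin/sh
# Print the parted commands that reproduce the four-partition layout above.
# Nothing is executed against the devices: the commands are only echoed,
# so the destructive step stays explicit and reviewable.
plan_partitions() {
    dev="$1"
    echo "parted -s $dev mklabel gpt"
    start=2048s
    for end in 675GB 1350GB 2025GB 2700GB; do
        echo "parted -s $dev mkpart primary $start $end"
        start="$end"
    done
}

plan_partitions /dev/fioa
plan_partitions /dev/fiob
```

Piping the output through sh would apply the layout; keeping it as printed commands makes review before execution the default.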
Oracle Database 11g Configuration
Oracle Database was configured on a mirrored group of 3-TB Fusion-io ioDrive2 cards. Some of the main steps for
configuring Oracle Database 11g are presented here.
Step 1. Before installing Oracle Database, verify that all the prerequisite components are installed on RHEL 6.4
(see http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm).
Step 2. Download and install the Oracle ASM packages on the Red Hat kernel (see
https://access.redhat.com/solutions/315643).
Step 3. Verify that Oracle ASM is configured with the following parameters:
[oracle@fusionio-c240-1 ~]$ oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="fi"
ORACLEASM_SCANEXCLUDE="sd"
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="true"
Step 4. Set the asm_diskstring parameter: alter system set asm_diskstring='ORCL:*' scope=both sid='*';
Step 5. Create Oracle ASM disks from the four partitions created on each of the Fusion-io cards.
/usr/sbin/asmtool -C -l /dev/oracleasm -n fioa1 -s /dev/fioa1 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fioa2 -s /dev/fioa2 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fioa3 -s /dev/fioa3 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fioa4 -s /dev/fioa4 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fiob1 -s /dev/fiob1 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fiob2 -s /dev/fiob2 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fiob3 -s /dev/fiob3 -a force=yes
/usr/sbin/asmtool -C -l /dev/oracleasm -n fiob4 -s /dev/fiob4 -a force=yes
Step 6. Before creating the Oracle ASM disk group, verify that all the disks appear as candidate disks.
SQL> select header_status,sector_size from v$asm_disk;

HEADER_STATU SECTOR_SIZE
------------ -----------
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096
CANDIDATE           4096

8 rows selected.
Step 7. Create the Oracle ASM disk group.
CREATE DISKGROUP fiodg NORMAL REDUNDANCY
FAILGROUP fio1 DISK
'ORCL:fioa1',
'ORCL:fioa2',
'ORCL:fioa3',
'ORCL:fioa4'
FAILGROUP fio2 DISK
'ORCL:fiob1',
'ORCL:fiob2',
'ORCL:fiob3',
'ORCL:fiob4'
ATTRIBUTE 'sector_size'='4096',
'compatible.asm' = '11.2.0.4',
'compatible.rdbms' = '11.2.0.4';
Step 8. After the Oracle ASM disk group is configured, create Oracle Database with both data files and log files
on the same disk group. Because the Fusion-io ioDrive2 3-TB cards have a default sector size of 4 KB,
the Oracle ASM disk group was created with a sector size of 4096 bytes.
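A quick sanity check of this kind (shown as a sketch; the illustrative output assumes the fiodg disk group created above) confirms the 4-KB sector size and the redundancy level before the database is created:

```
SQL> select name, sector_size, type from v$asm_diskgroup;

NAME       SECTOR_SIZE TYPE
---------- ----------- ------
FIODG             4096 NORMAL
```

A mismatched sector size is much cheaper to catch at this point than after data files have been created on the disk group.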
Step 9. Install Oracle Database and Oracle Grid Infrastructure (Release 11.2.0.4) on the Cisco UCS C240 M3 server
as the Oracle user. Oracle binaries are installed on the local disk of the server. The data and redo log files
reside in the Oracle ASM disk group created on the Fusion-io cards.
Step 10. To enable restoration during database failure, verify that the archive redo logs and backup files are
configured on a separate NAS device.
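A minimal sketch of directing archived redo logs to such a NAS mount follows; the mount point /nas/arch is an assumption for illustration, not the path used in the tested setup.

```
SQL> alter system set log_archive_dest_1='LOCATION=/nas/arch' scope=both sid='*';
SQL> archive log list;  -- confirm the destination and that ARCHIVELOG mode is enabled
```

If the database is not yet in ARCHIVELOG mode, it must be restarted to the MOUNT state and switched with alter database archivelog before the destination takes effect.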
Performance and Failover Analysis
Workload Description
In this solution, Swingbench is used for OLTP workload testing. Swingbench is a simple-to-use, free, Java-based
tool used to generate a database workload and perform stress testing using various benchmarks in the Oracle
Database environment. Swingbench provides four separate benchmarks: Order Entry, Sales History, Calling Circle,
and Stress Test. In the workload testing described in this section, the Swingbench Order Entry benchmark was
used. The Order Entry benchmark is based on the Swingbench Order Entry schema and resembles the TPC-C
benchmark with respect to the types of transactions. The workload uses a balanced read-to-write ratio of 60:40
and can be configured to run continuously, testing the performance of a typical order-entry workload against a
small set of tables and producing contention for database resources.
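As a sketch, a comparable Order Entry run can be launched with Swingbench's command-line front end, charbench; the configuration file name, connect string, schema credentials, and user count below are assumptions for illustration, not the exact parameters used in these tests.

```
./charbench -c configs/ordertentry.xml \
    -cs //dbhost:1521/orcl -u soe -p soe \
    -uc 450 -rt 0:30 -a
```

Scaling the -uc value across runs (here 100 to 450 users) is what produces the load curve reported in the following section.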
Test Scenarios
In an enterprise solution deployment, system administrators and solution architects need to understand both database performance and the server restoration process in the event of a hardware or software failure.
This section discusses three test scenarios that evaluate the performance of Fusion-io cards deployed on
Cisco UCS C240 M3 servers and the best practices for backing up and restoring the database in the event of
any hardware failure.
● Performance validation using OLTP workload: This test scenario evaluates the main server characteristics, such as CPU, memory, and the total IOPS value generated when scaling from 100 to 450 OLTP users.
● Performance impact of failure of one Fusion-io card: This test scenario analyzes the impact on end-user response time and throughput of the failure of one of the Fusion-io cards.
● Restoration of database from archive and backup logs: In the unlikely event of a server failure, database administrators (DBAs) and other administrators should have a strategy to restore the database. The scenario measures the total restoration time for a database of about 1.8 TB.
Performance Validation Using OLTP Workload
This scenario evaluates the performance of Oracle Database deployed on a Cisco UCS C240 M3 server with two Fusion-io cards, with the workload scaling from 100 to 450 users. System resource utilization, including CPU, memory, and disk I/O, was measured using the nmon analyser tool. The number of transactions per second (TPS) was measured from Oracle Automatic Workload Repository (AWR) reports generated at 10-minute intervals as the number of OLTP users scaled from 100 to 450. Oracle Enterprise Manager was configured for the database to measure I/O, throughput, and disk latency.
Figure 7 shows linear scalability when scaling from 100 to 450 OLTP users. The total number of transactions per
second at 450 users as measured in the Oracle AWR report was about 14,000 TPS.
Figure 7. Transaction Throughput for OLTP Workload
CPU Utilization
The Cisco UCS C240 M3 server was equipped with two Intel Xeon processor E5-2680 v2 CPUs running at a 2.4-GHz clock frequency. As shown in Figure 8, total CPU utilization ranged from about 18 percent to a maximum of about 80 percent when scaling from 100 to 450 OLTP users. CPU utilization in the system gradually increased, reflecting the linear scalability of the workload.
Figure 8. CPU Utilization for OLTP Workload
Figure 9 shows the CPU utilization captured by the nmon analyser tool during scalability testing for 450 OLTP
Swingbench users.
Figure 9. CPU Utilization for 450-User OLTP Workload
I/O Performance
The Cisco UCS C240 M3 server was equipped with two 3-TB Fusion-io ioDrive2 cards configured as a mirrored
Oracle ASM disk group. As shown in Figure 10, the maximum IOPS value generated was about 39,400 IOPS when
the system was scaled to 450 OLTP users. The total disk read-and-write wait time was less than 1 millisecond (ms)
when the system was scaled from 100 to 450 users. The workload read-to-write ratio was about 60:40 at 450 users. These IOPS values were measured through both the nmon analyser tool and Oracle AWR reports.
Figure 10. I/O Performance
Figure 11 shows the Oracle AWR data captured during scalability testing for 450 OLTP Swingbench users. The highlighted numbers show the read and write I/O generated during the 450-user OLTP test.
Figure 11. Oracle AWR Data for 450 Users
Oracle Enterprise Manager Snapshot
Figure 12 shows the Oracle Enterprise Manager snapshot for 450 OLTP users. The data corroborates the statistics
presented in the previous section.
As shown in the figure, the total number of IOPS was about 39,000, whereas I/O throughput was about 350 MBps.
Figure 12. Oracle Enterprise Manager Statistics
Performance Impact of Failure of One Fusion-io Card
Oracle Database deployed on a Cisco UCS C240 M3 server was configured with two 3-TB Fusion-io ioDrive2
cards with mirrored Oracle ASM disk groups. This section shows the variation in end-user throughput during the
failure of one of the Fusion-io cards. It details the system characteristics during the following events:
● Normal phase: Both Fusion-io cards are in normal working condition with the OLTP user workload.
● Failure phase: One of the Oracle ASM disk groups was taken offline, and the impact on system characteristics such as CPU utilization and IOPS was analyzed.
● Restore phase: The failed card was brought back online, and the system characteristics were analyzed until normal working conditions were achieved.
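The failure and restore phases were driven by taking one Oracle ASM failure group offline and bringing it back online. A sketch of the commands, assuming the disk group is named DATA (an assumption; the failure group name fio2 comes from the disk group definition shown earlier):
-- Failure phase: take the disks of one Fusion-io card offline
ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP fio2;
-- Restore phase: bring the disks back online; Oracle ASM resynchronizes the stale extents
ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP fio2;
By default, Oracle ASM 11.2 force-drops an offline disk after the disk_repair_time attribute (3.6 hours) expires, so this window should be extended if a physical card replacement may take longer.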
Figure 13 shows the TPS and IOPS values during each of the three phases.
Figure 13. Performance Impact During Fusion-io Card Failure
Figure 14 shows server CPU utilization for all phases during the attach and detach processes for one Fusion-io
card.
Figure 14. CPU Utilization During Fusion-io Card Failure
Normal Phase: 450-User Workload
Figure 15 shows a snapshot from Oracle Enterprise Manager during a system load of 450 OLTP users. It shows total IOPS of about 39,000, throughput of about 380 MBps, and disk latency of less than 1 ms. There were about 70 active sessions at this point. The total TPS value measured through an Oracle AWR report was about 13,800.
Figure 15. Normal Conditions: 450-User Workload
Failure Phase: 450-User Workload
In this phase, the disk group on one of the Fusion-io cards was taken offline while Swingbench was running a 450-user OLTP workload. As shown in Figure 16, there was no change in total IOPS or total throughput. Disk latency increased slightly but remained less than 1 ms, and there was a very small increase in total active user sessions. The total TPS value measured by an Oracle AWR report was about 13,400.
Figure 16. Failure Condition: 450-User Workload
Restore Phase: 450-User Workload
This phase shows the system characteristics when the Oracle ASM disk group on the Fusion-io card was brought
back online. This phase consists of two main events:
● Oracle ASM synchronizes the Oracle ASM disk group that was brought back online.
● After synchronization, the database works under normal conditions.
As shown in Figure 17, while Oracle ASM synchronizes the disk group that was brought back online, disk latency rises to about 5 ms. The number of active sessions on the database increases from about 70 to 170. The disk IOPS value drops to about 30,000, and the total TPS value measured through an Oracle AWR report was about 10,500. The synchronization was completed in about 50 to 60 minutes for a database of about 1.8 TB.
Figure 17. Restore Condition: 450-User Workload with Oracle ASM Disk Synchronization
After synchronization, the database returns to normal operation. As shown in Figure 18, the system again achieves the expected 39,000 IOPS and throughput of about 380 MBps. The number of transactions was about 13,500 TPS.
Figure 18. Restore Condition: 450-User Workload After Synchronization with Normal Conditions
Restoration of Database from Archive and Backup Logs
Oracle Database (single instance) deployed on a Cisco UCS C240 M3 server with Fusion-io cards was analyzed
for several types of failures. The previous section analyzed the failure of one Fusion-io card (two Fusion-io cards
are installed with Oracle ASM mirroring) and detailed the performance during the failure and database
synchronization stages. Some failures may cause the system to become unusable and require intervention by
system administrators and DBAs for business continuity. These failures are discussed here:
● Fusion-io card failure: A single Fusion-io card is installed on a Cisco UCS C240 M3 server.
● Oracle Database failure: The database needs restoration from backup.
● Local disk failure: The operating system and Oracle Relational Database Management System (RDBMS) binaries need rehosting.
● CPU or memory failure: The workload may need to be moved to another server to reduce downtime.
The last two cases are outside the scope of this document. A local disk failure requires disk replacement and reinstallation of the OS and RDBMS binaries, and a CPU or memory failure may require another server. However, the use of Cisco UCS service profiles can greatly reduce the downtime from these two failure scenarios.
In the first case, a single Fusion-io card is installed on the server. When it fails, the card is replaced, and the Oracle database files are restored from the NAS backup.
In the second case, an Oracle Database failure occurs. There are different types of Oracle Database failures. Some can be mitigated with simple steps taken by DBAs or with the help of Oracle support. However, in some cases the only option is to restore the database files, requiring full tablespace recovery using Oracle Recovery Manager (RMAN). In the test bed, because of the high I/O throughput of the PCIe-based Fusion-io cards, multiple tests were run to measure the backup and restoration of Oracle Database from external storage.
The total time taken for RMAN full database restoration was about 52 minutes for a 1.8-TB database.
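The restoration measured above follows the standard RMAN sequence. A sketch, assuming the backup pieces on the NAS device are accessible and the instance is started in mount state:
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;  -- use OPEN RESETLOGS after incomplete (point-in-time) recovery
At about 52 minutes for a 1.8-TB database, this restore sustained a rate of roughly 580 MBps.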
Conclusion
The Cisco UCS C240 M3 Rack Server provides high performance and internal storage from a single vendor known
for its long history of innovation in architecture, technology, partnerships, and services. The Cisco UCS C240 M3 is
designed for organizations with increasing data and storage demands. The enterprise-class Cisco UCS C240 M3 is
equipped with Intel Xeon processor E5-2600 technology, which delivers top-class performance, energy efficiency,
and flexibility. Cisco engineers have designed the Cisco UCS C240 M3 with Cisco UCS 1225 VIC to handle a
broad range of applications, including workgroup collaboration, virtualization, consolidation, massive data
infrastructure, and small and medium-size business (SMB) databases.
The Cisco UCS C240 M3, with its comprehensive stack of technological offerings, offers excellent computing and
networking performance and provides internal storage in one cohesive system to meet the challenges of
virtualization technologies and the growing demands of SMBs.
The combination of the Cisco UCS C240 M3 server, Fusion-io ioMemory platform, and Oracle Database
dramatically improves performance. The solution delivers fast access to information, enabling organizations to
capture important insights to help guide business decisions. Even with a single Oracle Database instance, the
solution helps organizations achieve exceptional performance with a failover strategy that meets today’s needs, at
a lower cost. The solution achieves about 39,000 IOPS with a read-write ratio of 60:40, which is comparable to
performance achieved by enterprise-class storage targeted at SMBs. In the event of a single Fusion-io card failure,
the server sustains the throughput and IOPS achieved during normal conditions. During a server failure, system downtime is decreased by the direct attachment of storage to the Cisco UCS rack server and the stateless-computing capabilities of Cisco UCS. By deploying Cisco UCS, Fusion-io, and Oracle Database together, companies gain an enterprise-class, high-performance, low-latency, reliable, and scalable solution. The solution can help organizations increase customer and user loyalty, gain competitive advantage, and lower operating costs. The performance study discussed in this document showed that the Cisco UCS C240 M3 Rack Server with Fusion-io cards improves the application performance and operational efficiency of Oracle Database.
For More Information
● http://www.cisco.com/c/en/us/products/index.html
● http://www.fusionio.com/
● https://access.redhat.com/solutions/315643
● http://www.oracle.com
Printed in USA
C11-732623-00 08/14