HPE Virtualized NonStop
Deployment and Configuration
Guide
Part Number: 875814-001
Published: March 2017
Edition: L17.02 and subsequent L-series RVUs
© Copyright 2017 Hewlett Packard Enterprise Development LP
Legal Notice
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services
are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting
an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not
responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Intel, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks of The Open
Group in the U.S. and other countries.
Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc.
OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
OSF shall not be liable for errors contained herein or for incidental consequential damages in connection with the furnishing, performance, or
use of this material.
© 1990, 1991, 1992, 1993 Open Software Foundation, Inc. The OSF documentation and the OSF software to which it relates are derived in part
from materials supplied by the following:
© 1987, 1988, 1989 Carnegie-Mellon University. © 1989, 1990, 1991 Digital Equipment Corporation. © 1985, 1988, 1989, 1990 Encore Computer
Corporation. © 1988 Free Software Foundation, Inc. © 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. © 1985, 1987, 1988, 1989,
1990, 1991, 1992 International Business Machines Corporation. © 1988, 1989 Massachusetts Institute of Technology. © 1988, 1989, 1990 Mentat
Inc. © 1988 Microsoft Corporation. © 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. © 1990, 1991 Siemens Nixdorf Informationssysteme
AG. © 1986, 1989, 1996, 1997 Sun Microsystems, Inc. © 1989, 1990, 1991 Transarc Corporation.
OSF software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University
of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S.
Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. © 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989
Regents of the University of California.
Contents
About This Document.............................................................................................6
Supported Release Version Updates (RVUs).......................................................................................6
New and Changed Information in 875814-001.....................................................................................6
Publishing History.................................................................................................................................6
1 Introduction to HPE Virtualized NonStop (vNS)..................................................7
Core Licensing......................................................................................................................................9
Deployment environment for vNS systems...........................................................................................9
OpenStack services for vNS deployment......................................................................................10
Virtualized NonStop System interconnect using RoCE......................................................................11
RoCE configuration requirements.................................................................................................11
RoCE switches and VLAN considerations...............................................................................11
Virtualized NonStop hardware and software requirements................................................................11
2 Supported Virtual Machines (VMs) for vNS.......................................................12
Virtualized NonStop System Console (vNSC)....................................................................................12
NonStop Virtualized CPU (NS vCPU) ................................................................................................12
Requirements for NS vCPUs.........................................................................................................12
Considerations for hyperthreads..............................................................................................13
NonStop Virtualized CLuster I/O Modules (vCLIMs).....................................................................13
IP vCLIM and Telco vCLIM.......................................................................................................13
Supported network interface configuration (NIC) options...................................................14
NIC interfaces for IP vCLIM and Telco vCLIM....................................................................14
Supported NICs for SR-IOV and PCI-Passthrough............................................................14
Storage vCLIM.........................................................................................................................15
Storage vCLIM virtio network interfaces.............................................................................15
Requirements for Storage vCLIMs......................................................................................15
LUN Manager changes to support virtualized environments..............................................................16
3 Managing Virtualized NonStop (vNS)................................................................20
Virtualized NonStop Deployment Tools..............................................................................................20
Features of Virtualized NonStop Deployment Tools......................................................................21
Relation of Virtualized NonStop Deployment Tools to OpenStack Services.................................21
Fault zone isolation options for vNS..............................................................................................22
Flavor management for NonStop virtual machines.......................................................................23
License management for Virtualized NonStop...................................................................................24
Examples of vNS License management screens..........................................................................24
OpenStack CLI commands for vNS License management and system creation..........................25
License add command.............................................................................................................25
License delete command.........................................................................................................25
License show command...........................................................................................................25
License list command...............................................................................................................25
License update command........................................................................................................26
4 Planning tasks for a Virtualized NonStop system..............................................27
Mandatory prerequisites for a vNS system.........................................................................................27
5 Configuring a Virtualized NonStop system for deployment...............................30
Configuring the Host OS on the compute nodes................................................................................30
Configuring OpenStack.......................................................................................................................30
6 Installing and configuring Virtualized NonStop software and tools....................32
Obtaining Virtualized NonStop software.............................................................................................32
Sourcing in an OpenStack resource file.............................................................................................32
Installing Virtualized NonStop Deployment Tools...............................................................................32
Configuring Virtualized NonStop Deployment Tools...........................................................................34
Importing Virtualized NonStop images into Glance............................................................................35
Creating the Virtualized NonStop system flavors...............................................................................36
Installing licenses on the vNS license server.....................................................................................38
Create the vNSC.................................................................................................................................38
7 Deploying the Virtualized NonStop System.......................................................39
Post-deployment procedures for vCLIMs...........................................................................................49
8 Configuring a provisioned Virtualized NonStop system....................................51
Booting the vNS system.....................................................................................................................55
9 vNS administrator tasks.....................................................................................56
Scenarios for vNS administrators.......................................................................................................56
Deprovisioning a vNS system........................................................................................................56
Listing created vNS systems.........................................................................................................56
Viewing virtual resources for a vNS system..................................................................................57
Reviewing maximum transmission unit (MTU) in OpenStack ......................................................57
Managing vNS resources..............................................................................................................58
Scenarios that prompt re-provision of resources...........................................................................59
Reprovision examples..............................................................................................................60
10 Troubleshooting vNS problems.......................................................................62
Collecting vCLIM crash dumps and debug logs ................................................................................62
vCLIM is unresponsive at the Horizon console..................................................................................62
Using the vCLIM serial log to assist with troubleshooting..................................................................62
Debugging hypervisor issues..............................................................................................................63
Networking issues for vCLIMs............................................................................................................63
Issues with HSS boot, reload, or CPU not being online.....................................................................63
11 Support and other resources...........................................................................64
Accessing Hewlett Packard Enterprise Support.................................................................................64
Accessing updates..............................................................................................................................64
Websites.............................................................................................................................................64
Customer self repair...........................................................................................................................65
Remote support..................................................................................................................................65
Documentation feedback....................................................................................................................65
A Creating a Virtualized NonStop System console (vNSC).................................66
Prerequisites for creating the Virtualized NonStop System Console (vNSC).....................................66
Creating a vNSC............................................................................................................................66
B Warranty and regulatory information.................................................................70
Warranty information...........................................................................................................................70
Regulatory information........................................................................................................................70
Belarus Kazakhstan Russia marking.............................................................................................71
Turkey RoHS material content declaration....................................................................................72
Ukraine RoHS material content declaration..................................................................................72
Figures
1 Virtualization example...................................................................................................................7
2 Virtualized NonStop running VMs in a private cloud.....................................................................9
3 OpenStack services for vNS deployment....................................................................................10
4 Virtualized NonStop control plane...............................................................................................20
5 Virtualized NonStop Deployment Tools in OpenStack................................................................21
6 Flavor screen examples for Virtualized NonStop........................................................................23
7 vNS Horizon Panel displaying multiple vNS License Files..........................................................24
8 Upload screen for a vNS License File.........................................................................................24
9 Login screen for vCLIM...............................................................................................................49
10 MTU result in OpenStack..........................................................................................................57
11 vCLIM serial log.........................................................................................................................62
12 HSS does not boot, reload issues, CPU not online...................................................................63
Tables
1 Characteristics of Virtualized NonStop.........................................................................................8
Examples
1 Creating a Maria DB....................................................................................................................33
2 Creating vNS service, user, role and endpoints in OpenStack...................................................33
3 vNS configuration file..................................................................................................................34
4 Uploading an image into Glance.................................................................................................35
5 Verifying an image (QCOW)........................................................................................................35
6 Configuring eth0 on vCLIM..........................................................................................................50
7 Example display of virtual resources for a vNS system..............................................................57
8 Example display of reprovisioning CPUs on a vNS system........................................................60
9 Example display of reprovisioning CLIMs on a vNS system.......................................................60
10 Example display of NSK volumes on a vNS system.................................................................61
About This Document
This guide provides an overview of HPE Virtualized NonStop (vNS) and describes the tasks
required to deploy a vNS system in an OpenStack private cloud. This guide is intended for
personnel who are OpenStack administrators or have completed Hewlett Packard Enterprise
training on vNS system support.
Supported Release Version Updates (RVUs)
This publication supports L17.02 and all subsequent L-series RVUs until otherwise indicated in
a replacement publication.
New and Changed Information in 875814-001
This is a new guide.
Publishing History

Part Number: 875814-001
Product Version: N.A.
Publication Date: March 2017
1 Introduction to HPE Virtualized NonStop (vNS)
HPE Virtualized NonStop (vNS) expands the NonStop system family by introducing virtualization
to NonStop. Virtualization lets you create a virtual machine (VM) from a physical resource (such
as an Intel® Xeon® based physical server) and share that resource with other VMs and a host
operating system (OS). A guest OS runs in each VM.
In the case of vNS, the guest OS is the NonStop software stack running on a Linux Kernel-based
Virtual Machine (KVM). KVM is a hypervisor that creates the emulated hardware environment
for the VMs. The hypervisor provides a consistent interface between the VMs and the physical
hardware as shown in Figure 1 (page 7).
Figure 1 Virtualization example
Virtualized NonStop is cloud-ready and provides a new deployment tool that lets you specify the
VMs and deploy them into a private cloud running OpenStack. Table 1 (page 8) describes the
vNS characteristics and Figure 2 (page 9) shows an example of vNS VMs running in a private
cloud.
Table 1 Characteristics of Virtualized NonStop

Deployment: Private cloud using OpenStack Mitaka

Processor/Processor model: Intel® Xeon® x86 processors running in 64-bit mode

Supported RVU: L17.02 and later RVUs

Virtualized environment: KVM hypervisor (included with Ubuntu 16.04)

Virtual Machines (VMs): These NonStop resources are supported as VMs:
• NonStop Virtualized CPUs (vNS CPUs) — Enterprise-Edition (also known as high-end) and entry-class options are supported
• Virtualized NonStop System Console (vNSC) — 1 vNSC is required
• NonStop Virtualized CLuster I/O Modules (vCLIMs) — up to 56 vCLIMs are supported
For more information, see “Supported Virtual Machines (VMs) for vNS” (page 12)

Fault zone isolation: Offers three options for the vNS:
• NonStop Standard (recommended for fault tolerance)
• CPUs only
• None (allows OpenStack to provision resources wherever they fit and is not fault tolerant)

IP CLIM and Telco CLIM networking: Supports VLANs and VXLANs for virtio, SR-IOV, and PCI-passthrough network interfaces

Software / OpenStack implementation:
• Ubuntu 16.04, including KVM hypervisor
• OpenStack Mitaka

Software delivery:
• Custom SUT in QEMU Copy on Write (QCOW2) format for initial deployment
• QCOW2 image for vCLIMs for vCLIM initial deployment (CLIM DVD for subsequent CLIM software updates)
• ISO image for NonStop System Console DVD Update 27
• ISO image for Halted System Services (HSS)
• Virtualized NonStop deployment tools for OpenStack

System interconnect: 40 Gbps RoCE (RDMA over Converged Ethernet)

Clustering: Supports native RoCE clustering between high-end Virtualized NonStop systems

Expand networking: Supports connectivity to NonStop X and NonStop i systems using Expand-over-IP

Storage: Supports several storage options, including SAS drives, HPE StoreVirtual Virtual Storage Appliance (VSA), and storage arrays

Software Core Licensing: A core license file is required for Virtualized NonStop and clustering. See Core Licensing (page 9).
• 2-, 4-, and 6-core software licensing options (high-end)
• 1-core software license (entry-class)

Minimum development system: Requires two compute nodes that can be provisioned for two NS vCPU VMs, two Storage vCLIM VMs, two IP vCLIM VMs, and one vNSC VM
Figure 2 Virtualized NonStop running VMs in a private cloud
For more information, see “Deployment environment for vNS systems” (page 9).
Core Licensing
There are core licensing requirements for Virtualized NonStop and RoCE clustering. For more
information about licensing for RoCE clustering, see the NonStop Core Licensing Guide. For
more information on managing vNS licenses, see “License management for Virtualized NonStop”
(page 24) and the NonStop Core Licensing Guide.
Deployment environment for vNS systems
Deploying and hosting vNS in a private cloud environment requires OpenStack Mitaka and Ubuntu
16.04. The vNS system uses the OpenStack services described at “OpenStack services for vNS
deployment” (page 10) and Figure 3 (page 10) shows these services.
OpenStack services for vNS deployment
The vNS system uses these OpenStack services for deployment.
Nova (compute service): Supports an API to instantiate and manage VMs on KVM

Keystone (identity service): Provides authentication services

Glance (image service): Manages VM images, including querying/updating image metadata and retrieving actual image data

Neutron (networking service): Provides network connectivity and IP addressing for VMs managed by the Nova compute service

Cinder (block storage service): Provides an API to instantiate and manage block storage volumes

Horizon (dashboard service): Provides web-based user interfaces for creating, allocating, and managing OpenStack resources within a cloud
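A simple way to think about this list is as a completeness check: before deploying vNS, all six services should be available in the target cloud. The sketch below is illustrative; in practice the list of registered services would come from the cloud itself (for example, via the OpenStack service catalog), not from a hard-coded sample.

```python
# Sketch: verifying that the OpenStack services vNS deployment relies on are
# all present. The sample catalog below is hypothetical test data.
REQUIRED_SERVICES = {"nova", "keystone", "glance", "neutron", "cinder", "horizon"}

def missing_services(registered):
    """Return the required vNS services not found in the registered list."""
    return REQUIRED_SERVICES - {name.lower() for name in registered}

sample = ["Nova", "Keystone", "Glance", "Neutron", "Cinder"]
gaps = missing_services(sample)  # Horizon is absent from this sample cloud
```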
Figure 3 OpenStack services for vNS deployment
Virtualized NonStop System interconnect using RoCE
Virtualized NonStop uses an RDMA over Converged Ethernet (RoCE) system interconnect that
supports a Single Root I/O Virtualization (SR-IOV) network interface for sharing a single RoCE
NIC with multiple NonStop VMs and any other operating system running on the same physical
server.
RoCE clustering of High-end Virtualized NonStop systems is supported. For more information,
see the NonStop X Cluster Solution Manual.
RoCE configuration requirements
NOTE: A vNS system requires RoCE v2.

HPE 544+ series Ethernet adapters in the compute nodes, providing:
• Two 40 Gbps Ethernet ports
• Drivers that support RoCE

Ethernet switches, providing:
• 40 Gbps Ethernet ports
• Support for Data Center Bridging (DCB) protocols, specifically IEEE 802.3x Global Pause and IEEE 802.1Qaz Enhanced Transmission Selection (ETS), to provide buffer management for the Ethernet switches

Two independent interconnect fabrics:
• Two ports on the Ethernet host adapter card must be connected to two separate HPE c7000 enclosure switches or two separate 40 Gbps Ethernet switches.

NOTE: The two switches function as independent interconnect fabrics to provide fault tolerance.
RoCE switches and VLAN considerations
The Virtualized NonStop CPUs and vCLIMs communicate through a Virtual LAN (VLAN) configured
on top of the RoCE fabrics. The VLAN enforces network traffic isolation for security and provides
Quality of Service (QoS) for the RoCE traffic between VMs.
Virtualized NonStop hardware and software requirements
For more information about the hardware and software requirements for vNS, ask your HPE representative to consult the Reference Architecture and Solution Guide for Virtualized NonStop, located on the Services Access Workbench (SAW).
2 Supported Virtual Machines (VMs) for vNS
• “Virtualized NonStop System Console (vNSC)” (page 12)
• “NonStop Virtualized CPU (NS vCPU)” (page 12)
• “NonStop Virtualized CLuster I/O Modules (vCLIMs)” (page 13)
Virtualized NonStop System Console (vNSC)
The vNSC provides access to the OSM tools for Virtualized NonStop system management.
vNSC VM characteristics:
• Default is 1 vNSC per instance, and the vNSC instance must run in a licensed compute node.
• Runs Windows Server 2012 R2 in a VM.
• 4 virtual hyperthreads (for example, 2 hyperthreaded physical cores with 2 hyperthreads per core).
• Core pinning is not required.
• 8 GB virtual memory (1 GB huge pages are not required but offer better performance if selected).
NonStop Virtualized CPU (NS vCPU)
The NS vCPU provides the vNS application and database compute workload and has these
characteristics.
NS vCPU VM characteristics:
• Runs as a VM in an Intel Xeon-based server as long as hardware requirements are met.
• Up to 16 NS vCPUs per vNS system are supported; NS vCPUs in the same system must be deployed in different physical servers for fault tolerance.
• Each NS vCPU in a vNS system should be configured identically to all the other instances.
• Cores are dedicated (pinned) and isolated to ensure deterministic timing, fault isolation, and performance.
• Core options:
  ◦ 2, 4, and 6 cores (high-end)
  ◦ 1 core (entry-class)
Requirements for NS vCPUs
NS vCPU (entry-class): 1 pinned (dedicated) physical core with a single active hyperthread per core; 32 to 64 GB memory in 1 GB increments, backed by 1 GB pinned huge pages.

NS vCPU (high-end): 2, 4, or 6 pinned physical cores with a single active hyperthread per core; 64 to 96 GB memory in 1 GB increments, backed by 1 GB pinned huge pages.

All NS vCPU types: NS vCPUs in the same system must be deployed in different physical servers for fault tolerance.
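These sizing rules can be captured in a small validation helper. This is an illustrative sketch only; the class names and ranges simply mirror the table above, and the real checks are enforced by the Virtualized NonStop Deployment Tools.

```python
# Illustrative validator for the NS vCPU sizing rules described above.
# Not part of the Virtualized NonStop Deployment Tools.
RULES = {
    "entry-class": {"cores": {1},       "mem_gb": range(32, 65)},  # 32-64 GB
    "high-end":    {"cores": {2, 4, 6}, "mem_gb": range(64, 97)},  # 64-96 GB
}

def valid_ns_vcpu(vm_class, cores, mem_gb):
    """True if the core count and memory (in 1 GB increments) fit the class."""
    rule = RULES[vm_class]
    return cores in rule["cores"] and mem_gb in rule["mem_gb"]
```

For example, a high-end NS vCPU with 4 cores and 96 GB is valid, while a 2-core entry-class configuration is not, because entry-class supports only a single core.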
Considerations for hyperthreads
HPE recommends that physical cores assigned to NS vCPUs and vCLIMs reside in the same
Non-uniform memory access (NUMA) zone for best performance.
• Hyperthreads are enabled for all cores.
• For compute nodes that run vCLIMs, the processors must have hyperthreading enabled.
• The NS vCPUs require dedicated cores with hyperthreading enabled. One hyperthread is used by the vCPU; the host operating system must keep the other hyperthread idle, dedicating the core to the vCPU.
• The vCLIMs require hyperthreading to be enabled, and the vCLIM uses both hyperthreads. The host operating system must ensure the core is not used for other purposes, dedicating it to the vCLIM.
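In an OpenStack cloud, these hyperthread rules map naturally onto Nova flavor extra specs. The property keys below are standard Nova scheduler properties; treating them as exactly what the vNS flavors use is an assumption, shown here only to connect the rules above to OpenStack terms.

```python
# Hedged sketch: Nova flavor extra specs expressing the hyperthread rules
# above. Whether the vNS Deployment Tools set precisely these properties is
# an assumption; the keys themselves are standard Nova extra specs.
ns_vcpu_specs = {
    "hw:cpu_policy": "dedicated",       # pin guest vCPUs to host cores
    "hw:cpu_thread_policy": "isolate",  # use one hyperthread, keep the sibling idle
    "hw:mem_page_size": "1048576",      # back guest memory with 1 GB huge pages
}

vclim_specs = {
    "hw:cpu_policy": "dedicated",
    "hw:cpu_thread_policy": "require",  # vCLIM uses both hyperthreads of each core
    "hw:mem_page_size": "1048576",
}
```

The `isolate` thread policy matches the NS vCPU rule (sibling hyperthread kept idle), while `require` matches the vCLIM rule (both hyperthreads consumed by the VM).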
NonStop Virtualized CLuster I/O Modules (vCLIMs)
Virtualized NonStop systems support the IP vCLIM, Telco vCLIM, and Storage vCLIM, which function as offload engines. With vCLIMs, there are no SNMP agents, iLO communications, or firmware to manage. vCLIMs are deployed through the Virtualized NonStop Deployment Tools.
IP vCLIM and Telco vCLIM
The IP vCLIM and Telco vCLIM provide virtualized NonStop networking and function as networking offload engines with 10 Gigabit Ethernet (10GbE) network interface cards (NICs) and five customer-configurable Ethernet ports. vCLIMs are deployed using the Virtualized NonStop Deployment Tools.
IP vCLIM and Telco vCLIM characteristics:
• Runs as a VM in a blade server or rack-mount physical server
• Communicates with the NonStop vCPUs over 40GbE RoCE
• Provides the internal (maintenance) communication and external (customer) communication
• Supports up to 5 virtual network interfaces. Each virtual network interface can be:
  ◦ A virtio_net virtual network interface
  ◦ A virtual function on a 10GbE NIC, shared with the vCLIM using Single Root I/O Virtualization (SR-IOV)
  ◦ A physical function on a 10GbE NIC, passed to the vCLIM using PCI pass-through
• The vCLIM supports the use of VLAN and VXLAN for each of these interfaces.
• Cores are dedicated (pinned) and isolated to ensure deterministic timing, fault isolation, and performance. Core options, virtual memory, and pinned huge pages are:
  ◦ IP/Telco vCLIM (default) – 16 virtual hyperthreads backed by 8 pinned hyperthreaded physical cores; 16 GB of virtual memory backed by 1 GB pinned huge pages
  ◦ IP/Telco vCLIM (entry-class) – 8 virtual hyperthreads backed by 4 pinned hyperthreaded physical cores; 16 GB of virtual memory backed by 1 GB pinned huge pages
• Supports several features found on physical IP and Telco CLIMs, such as UDP, TCP, SCTP, and raw sockets over IPv4, IPv6, and IPsec
• Supports connectivity through Expand-over-IP to HPE Integrity NonStop X and HPE Integrity NonStop i systems
Supported network interface configuration (NIC) options
PCI passthrough:
• Passes the entire NIC to the vCLIM, providing direct hardware access and the best performance of the NIC options, comparable to a physical CLIM
• Fewer VMs can share the NIC, and specific supported NICs are required for the vCLIM (see “Supported NICs for SR-IOV and PCI-Passthrough” (page 14))

SR-IOV:
• Passes one virtual function of the NIC to the vCLIM, providing direct hardware access and good performance, comparable to a physical CLIM
• Multiple VMs can share the NIC through other virtual functions, although multiple VMs compete for NIC throughput
• Requires specific supported NICs for the vCLIM (see “Supported NICs for SR-IOV and PCI-Passthrough” (page 14))

virtio_net:
• Sends Ethernet packets through a virtual (virtio_net) device for the vCLIM; the hypervisor directs these packets to the applicable physical NICs. A virtio device for each network interface is provided to the vCLIM
• Multiple VMs can share the NIC through multiple virtio_net devices
• Lets the hypervisor implement Software Defined Networking (SDN) technologies such as VLAN, VXLAN, Open vSwitch (OVS), and Distributed Virtual Routing (DVR) between the vCLIM and the physical network
• Provides easier networking management, although performance might not be as strong, and some SDN technologies might further reduce performance
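The trade-offs above can be summarized as a simple decision sketch. The scoring below is an informal simplification of the text, not an HPE rule; real NIC selection also depends on the supported NIC models and the cloud's networking design.

```python
# Illustrative helper summarizing the vCLIM NIC option trade-offs described
# above. This is a simplification for discussion, not an HPE selection rule.
def pick_nic_option(need_max_performance, nic_supports_sriov, must_share_nic):
    if need_max_performance and not must_share_nic:
        return "pci-passthrough"  # whole NIC to one vCLIM, best performance
    if nic_supports_sriov:
        return "sr-iov"           # direct access via a virtual function, shareable
    return "virtio_net"           # most flexible, hypervisor-managed SDN
```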
NIC interfaces for IP vCLIM and Telco vCLIM
NIC interface
Function
eth0
Reserved for manageability networking
eth1-eth5
10GbE NIC ports (customer-configurable)
eth6
Reserved for manageability support (maintenance LAN)
NOTE: Network manageability requires that at least one port be configured on two vCLIMs for $ZTCP0 or $ZTCP1.
Supported NICs for SR-IOV and PCI-Passthrough
If an IP vCLIM or Telco vCLIM uses SR-IOV or PCI passthrough virtualization, one of these
10GbE NICs is required:
• HPE 560M (for blade servers only)
• HPE 560SFP+ (for rack-mount servers)
• HPE 530T (for rack-mount servers)
• HPE 530SFP+ (for rack-mount servers)
Supported Virtual Machines (VMs) for vNS
Storage vCLIM
The Storage vCLIM functions as an offload engine for Storage access. Volume level encryption
is supported with an additional license.
Storage vCLIM VM characteristics:
• Runs as a VM in a blade server or rack-mount physical server
• Offers two configuration options: standard and encrypted (encryption requires
an additional license)
• Supports ETI-NET vBackBox iSCSI Virtual tape, Virtual Storage Appliance (VSA),
SAS drives, and storage arrays
• Supports virtio_blk interfaces to access virtual block I/O devices (drives)
• Provides up to 25 drives per Storage vCLIM in OpenStack (24 drives are
supported when both iSCSI tape and encryption are used)
• Runs Maintenance Entity Units (MEUs) in the first two Storage vCLIMs
• Cores are dedicated (pinned) and isolated to ensure deterministic timing, fault
isolation, and performance. Core options are:
◦ 4 cores for standard Storage vCLIM
◦ 8 cores for NSVLE Storage vCLIM
Storage vCLIM virtio network interfaces:
• eth0: Reserved for manageability support
• eth1: Customer-configurable port that provides Enterprise Security Key Manager (ESKM)
connectivity for NonStop Volume Level Encryption (NSVLE)
• eth2: Customer-configurable port that provides iSCSI connectivity for virtual tape
Requirements for Storage vCLIMs:
• Storage vCLIM (no encryption): 8 virtual hyperthreads backed by 4 pinned hyperthreaded
physical cores; 4 GB of virtual memory backed by 1 GB pinned huge pages
• Storage vCLIM (encrypted): 16 virtual hyperthreads backed by 8 pinned hyperthreaded
physical cores; 4 GB of virtual memory backed by 1 GB pinned huge pages
• All vCLIMs: vCLIM failover pairs (primary and backup vCLIMs) in the same system must be
deployed in different physical servers for fault tolerance
LUN Manager changes to support virtualized environments
In a virtualized environment, you use the Logical Unit Number (LUN) Manager to manage virtual
block I/O devices and iSCSI tape devices.
• Virtual Block I/O devices have this device type and LUN range:
◦ Type 9 devices
◦ LUN number range 10001-10999
• iSCSI tape devices have this device type and LUN range:
◦ Type 3 devices
◦ LUN number range 1-32
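The two device-type ranges above can be expressed as a small classifier. This is a hypothetical helper for illustration only, not part of lunmgr:

```shell
# Hypothetical helper (not an HPE tool): classify a LUN number using the
# ranges above -- Type 3 iSCSI tape devices use LUNs 1-32 and Type 9
# virtual block I/O devices use LUNs 10001-10999.
lun_device_type() {
  if [ "$1" -ge 1 ] && [ "$1" -le 32 ]; then
    echo "Type 3 (iSCSI tape)"
  elif [ "$1" -ge 10001 ] && [ "$1" -le 10999 ]; then
    echo "Type 9 (virtual block I/O)"
  else
    echo "out of range"
  fi
}

lun_device_type 10013   # prints: Type 9 (virtual block I/O)
lun_device_type 7       # prints: Type 3 (iSCSI tape)
```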
There are several new and updated LUN Manager commands to support a virtualized environment.
NOTE: These commands do not apply to physical NonStop system environments. To
differentiate between LUN Manager commands for physical and virtual environments, issue the
help command: lunmgr -h (--help)
For more information on using LUN Manager and its relation to the Storage CLIM, see the NonStop
Cluster I/O Protocols (CIP) Configuration and Management Manual.
New LUN Manager commands
lunmgr -t (--addiscsitape) <IP address>
Issues an iSCSI Discovery command to the IP address that you input. Once discovery completes, the LUN manager
logs in to all available tape devices to establish an iSCSI communication session. The LUN manager then assigns
the next available LUN number to the tape device and adds it to stomod.
This command example shows:
• Two tape devices (VBACK00 and VBACK01) are discovered
• IP port and IP address information for the devices
• Login attempts and successful logins for the devices
lunmgr --deliscsitape <iSCSI name>
Issues an iSCSI logoff command to the iSCSI name that you input and closes the iSCSI communication. The LUN
manager deletes the tape device from stomod.
NOTE: You must type the entire iSCSI name of a tape device to delete it.
This command example shows:
• A user deletes the VBACK00 tape device by entering the entire iSCSI name and the tape logs off to close the
communication:
lunmgr --deliscsitape iqn.2000-01.com.etinet:VBACK00
lunmgr -v (--printvolname)
Prints the LUN number and the primary and alternate volume names of each known (approved) Virtual
Block I/O device on the CLIM.
lunmgr -x (--cleancaches)
Cleans old LUNs out of the LUN and SID caches. Use this command when devices are displayed in the LUN and SID
caches but are no longer seen by the vCLIM.
TIP: Running the --find command after running the --cleancaches command should show that the LUN cache no
longer contains devices that are not seen on the vCLIM.
Changed LUN Manager commands
lunmgr -a (--approve)
NOTE: The Yesall parameter is not valid in a virtual environment.
• Displays the next Virtual Block I/O device LUN number assignment, the OpenStack Virtual ID, and the NSK
primary and alternate volume names (if present) that require approval. Valid user replies are:
◦ y (approve)
◦ n (don't approve)
◦ a LUN number valid for a virtual block device
TIP: If you prefer a different LUN number assignment than the one the LUN Manager proposes, you can enter a
different number as long as it has not been used before and is within the LUN number range.
This command example shows:
• A new device on a CLIM that was previously assigned LUN Manager number 10013
• A virtual ID that functions like a serial number and is the first 20 characters of the ID that OpenStack assigned
to the device
• A proposed Static ID (SID) for the device, which is unique and goes on the label of the disk in the master
boot record
• A user deciding whether to assign 10011 or 10013; because the device was previously assigned to 10013,
the user assigns the device to 10013, which creates a static address for the device
• The user also assigning 10011 to 10011 and 10012 to 10012, and entering y for each to assign the devices; the
devices are then assigned static IDs
lunmgr -d (--delete) <LUN>
Deletes the input LUN from the device table. The LUN number is not an optional parameter in a virtual environment.
This command example shows the deletion of LUNs 10011, 10012, and 10013.
lunmgr -f (--find)
Provides information about the virtual Block I/O devices. This information includes:
• Devices known to this vCLIM.
• Devices seen (present) by this vCLIM.
• Devices that are in the LUN and SID caches but that are no longer seen by the vCLIM; if any devices are displayed
here, the user should run the cleancaches command (lunmgr --cleancaches).
This command example shows:
• Four LUNs: 10007, 10010, 10012, and 10015 with stable addresses.
• Under Devices that are no longer present but in cache, there are LUNs listed; these LUNs
were once assigned to the CLIM but are no longer available to it (they can be freed by issuing
the cleancaches command).
This command example shows the LUNs for iSCSI tape devices (Type 3).
lunmgr -h (--help)
Displays a list of valid commands and their effects. The help text has been updated to reflect the command
changes and additions for the virtualized environment, and it also describes commands for the physical
environment.
3 Managing Virtualized NonStop (vNS)
The topics in this chapter describe the vNS management tools including the tasks and
requirements associated with these tools.
Deployment management tools
• “Virtualized NonStop Deployment Tools” (page 20)
◦ “Features of Virtualized NonStop Deployment Tools” (page 21)
◦ “Relation of Virtualized NonStop Deployment Tools to OpenStack Services” (page 21)
• “Fault zone isolation options for vNS” (page 22)
• “Flavor management for NonStop virtual machines” (page 23)
Licensing management tools and requirements
• “License management for Virtualized NonStop” (page 24)
◦ “Examples of vNS License management screens” (page 24)
◦ “OpenStack CLI commands for vNS License management and system creation” (page 25)
Virtualized NonStop Deployment Tools
The Virtualized NonStop Deployment Tools use an OpenStack control plane, which provides the
foundation for all deployment operations, including:
• A Virtualized NonStop RESTful API and license server that communicates with:
◦ A Horizon plugin that is accessed via a web browser
◦ The OpenStack command line interface (CLI)
Figure 4 Virtualized NonStop control plane
Features of Virtualized NonStop Deployment Tools
• Provide a Horizon plugin for license and system management
• Provide an OpenStack CLI interface for license and system management
• Offer license and system management that:
◦ Stores all Virtualized NonStop licenses for you
◦ Verifies that a valid license is present before launching a new system
◦ Lets administrators or their delegates create “Flavors” using the Horizon interface;
Flavors define your virtual machine attributes
◦ Lets users create and delete vNS system deployments in OpenStack
Relation of Virtualized NonStop Deployment Tools to OpenStack Services
Whenever possible, the NonStop Deployment Tools simplify operations and let OpenStack
services do the work for you.
• No authentication is done by the NonStop Deployment Tools:
◦ CLI authentication is handled by Keystone using an OpenStack Python client
configuration package
◦ Tokens that are returned are used with future requests
◦ Token management is handled by several Keystone Python packages
• RESTful APIs provide communication with other services:
◦ Keystone keeps a service catalog
◦ The service catalog looks up the RESTful endpoint
◦ The request is sent to that endpoint
Figure 5 (page 21) shows the NonStop Deployment Tools in relation to OpenStack Services.
Figure 5 Virtualized NonStop Deployment Tools in OpenStack
Fault zone isolation options for vNS
Fault zone isolation options let you select what constraints are placed on the placement of virtual
resources when creating the system with the Virtualized NonStop Deployment Tools.
Options
NonStop Standard (recommended and fault-tolerant)
• Requires that:
◦ An administrator configures 2 separate OpenStack availability zones (an OpenStack availability zone is
a group of compute nodes)
◦ You select different availability zones (First Availability Zone and Second Availability Zone) during
deployment
• Guarantees:
◦ CPU virtual machines for this system do not run in the same host (odd CPUs are in one availability zone;
even CPUs are in another)
◦ Each half of a mirrored volume is provisioned from a separate availability zone
◦ The primary and backup vCLIMs in a failover pair do not run in the same host
• IP and Telco vCLIMs are evenly split into the two availability zones
• Storage vCLIMs are split into the two availability zones, based on the disks that are attached
CPUs Only
• Guarantees CPU virtual machines for this system will not run in the same host
• No limitations on vCLIMs or disks; however, the CPUs Only option is not fault-tolerant with respect to vCLIMs
and disks
• (Optional) Can have one OpenStack availability zone: First Availability Zone
None
• Allows OpenStack to provision resources wherever they fit
• (Optional) Can have one OpenStack availability zone: First Availability Zone
• The None option is not fault-tolerant
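The odd/even CPU split that NonStop Standard guarantees can be sketched as a small helper. This is illustrative only; which parity lands in which named zone is an assumption here, not something the deployment tools specify:

```shell
# Illustrative only: under the NonStop Standard option, even-numbered NSK
# CPUs land in one OpenStack availability zone and odd-numbered CPUs in the
# other. The even->First mapping below is an assumption for the sketch.
zone_for_cpu() {
  if [ $(( $1 % 2 )) -eq 0 ]; then
    echo "First Availability Zone"
  else
    echo "Second Availability Zone"
  fi
}

for cpu in 0 1 2 3; do
  echo "CPU $cpu: $(zone_for_cpu "$cpu")"
done
```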
Flavor management for NonStop virtual machines
IMPORTANT: Until the administrator grants permission to a user, only the administrator (by
default) can manage flavors for NonStop virtual machines (VMs).
CPU flavor selections determine the number of cores and memory reserved for each vNS VM
in the vNS system. Because NS vCPUs and vCLIMs (Storage, IP, and Telco) require different
flavors for VMs, the Horizon interface lets a user select:
• Core and memory size for NS vCPUs
• Cores for Storage, IP, and Telco vCLIMs
• The SR-IOV or PCI passthrough alias for the RoCE NIC, which must be specified for each type
This illustration shows the prompts for the flavor information in the Horizon interface. For procedure
details, see “Installing and configuring Virtualized NonStop software and tools” (page 32).
Figure 6 Flavor screen examples for Virtualized NonStop
License management for Virtualized NonStop
NOTE: The license will also need to be installed on the vNS system after NSK is running, using
the Install Core License guided procedure found on the system object in the OSM Service
Connection.
Each Virtualized NonStop instance requires a license file. A user with administrator privileges
can add the license file using a Horizon dashboard action or an OpenStack CLI command.
The license file contains important information about your vNS instance such as the unique
System Serial Number.
vNS License File installation: The license file is sent to the controller node’s operating
system via SFTP and can be added using the Horizon dashboard action or an OpenStack CLI
command.
Examples of vNS License management screens
Figure 7 vNS Horizon Panel displaying multiple vNS License Files
Figure 8 Upload screen for a vNS License File
OpenStack CLI commands for vNS License management and system creation
License add command
The license add command requires a positional argument using a path and filename as shown
in this example.
vnonstop license add /home/licenses/555555
License was added successfully.
License delete command
NOTE: The license file must be in the “unused” state before deletion can be done.
The license delete command requires a System Serial Number argument as shown in this
example:
vnonstop license delete 000001
System Serial Number (000001) delete passed.
License show command
The license show command requires one argument, the System Serial Number, as shown in
this example, which includes the command output.
vnonstop license show 999999
+------------------------+----------------------------------+
| Name                   | Value                            |
+------------------------+----------------------------------+
| System Serial Number   | 999999                           |
| Enabled IPU Count      | 2                                |
| Enabled Memory         | 64                               |
| Number Of CPUs         | 8                                |
| System Class           | High                             |
| NSADI Enabled          | 0                                |
| Clustering Enabled     | false                            |
| Virtual NonStop        | true                             |
| Number Of CLIMs        | 3,2,2                            |
| Number Of NSCs         | 5                                |
| Expiration Date        | 20210224                         |
| Cloud ID               | aaaaaaaabbbb4ccc8dddeeeeee999999 |
| Use State              | unused                           |
| Database Table Version | 1                                |
+------------------------+----------------------------------+
License list command
The license list command does not require any arguments to display all licenses. There is an
optional argument: ‘—use-state’ of “unused” or “using” as shown in this these examples.
vnonstop license list
+----------------------+--------+
| System Serial Number | Use    |
+----------------------+--------+
| 000001               | using  |
| 000002               | unused |
| 555555               | using  |
| 999998               | unused |
| 999999               | unused |
+----------------------+--------+
vnonstop license list --use-state unused
+----------------------+--------+
| System Serial Number | Use    |
+----------------------+--------+
| 000002               | unused |
| 999998               | unused |
| 999999               | unused |
+----------------------+--------+
vnonstop license list --use-state using
+----------------------+-------+
| System Serial Number | Use   |
+----------------------+-------+
| 000001               | using |
| 555555               | using |
+----------------------+-------+
License update command
The license update command replaces a currently installed license with a new license and can
be performed on a license that is in use or not in use.
There is one required parameter which is the path and file name of the new license as shown in
this example.
vnonstop license update /home/licenses/555555a
License was updated.
4 Planning tasks for a Virtualized NonStop system
Mandatory prerequisites for a vNS system
Verify that you have the items in this checklist. Several procedures and dialogs will require these.
Record the information that you gather (such as Expand Node Number) to use during subsequent
procedures.
√ Verify that you have:
1. vNS system components planned and unique identifiers for some components
The vNS System Name (alphanumeric, up to 6 characters, not counting the leading \)
The Expand Node Number which must be:
• Unique in any Expand network that the node will participate on, including a RoCE cluster
• Unique between all vNS systems in the cloud
The number of vCLIMs being deployed by vCLIM type, including IP addresses for these vCLIMs, and network
assignments for each interface on the vCLIMs
NSK volume names and sizes to be connected to Storage vCLIMs and primary and mirror choices for each
volume
The VSA back-end has a unique name and is registered with Cinder and configured by the OpenStack
Administrator in cinder.conf:
[vsa-1]
hplefthand_password: hpnonstop
hplefthand_clustername: cluster-vsa1
hplefthand_api_url: https://192.168.0.4:8081/lhos
hplefthand_username: vsaroot
hplefthand_iscsi_chap_enabled: true
volume_backend_name: vsabackend1
volume_driver: cinder.volume.drivers.san.hp.hp_lefthand_iscsi.HPLeftHandISCSIDriver
hplefthand_debug: false
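The system-name constraint in item 1 can be checked with a small validator. This is a hypothetical helper, not an HPE tool, and it assumes the rule stated above: up to six alphanumeric characters following the leading backslash.

```shell
# Hypothetical validator, assuming the rule above: a leading backslash
# followed by one to six alphanumeric characters.
valid_system_name() {
  case "$1" in
    \\*) name=${1#?} ;;   # strip the leading backslash
    *)   return 1 ;;      # no leading backslash
  esac
  [ -n "$name" ] && [ ${#name} -le 6 ] || return 1
  case "$name" in
    *[!A-Za-z0-9]*) return 1 ;;  # reject non-alphanumeric characters
  esac
  return 0
}

valid_system_name '\VNS1' && echo "valid vNS system name"
```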
2. Software and licenses ready
CLIM and HSS ISO image versions (L17.02 or later)
Licenses that support the vNS CPUs or vCLIMs for the intended configuration (for example, number of CPUs,
vCLIMs, cores, system class, RoCE clustering, volume-level encryption, and memory)
An installed license in the unused state in the NonStop license location. Record the license serial number for
later use. For license states, see “License list command” (page 25).
3. Planned the network and set up manageability support
The names of the provider networks registered with the OpenStack OVS agent by the OpenStack Administrator.
[ml2_type_vlan]
network_vlan_ranges = opsnet1,extnetA,extnetB
Created the Virtualized NonStop external customer networks by following the procedures in the OpenStack
administration guides while adhering to the considerations mentioned in this guide.
IP addresses for $ZTCP0 and $ZTCP1 NonStop Maintenance LAN TCP/IP stacks used for SSH and SSL. HPE
recommends using the default LAN IP addresses used on NonStop X systems as well as using the other default
IP address already used on those systems.
• Created the IPv4 Virtualized Maintenance LAN for your new vNS system that is available to your project in
OpenStack. For example:
. ./service.osrc
neutron net-create vNonStop_Maintenance_LAN
neutron net-show vNonStop_Maintenance_LAN
neutron subnet-create <netID> 192.168.0.0/16 --enable-dhcp --no-gateway \
--name "vNonStop_Maintenance_Subnet"
NOTE: In the previous example, the net-show command provides the maximum transmission unit (MTU)
result. Record the MTU result for later use. For more information, see “Reviewing maximum transmission
unit (MTU) in OpenStack ” (page 57).
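Recording the MTU from the net-show output can be scripted. This sketch parses a stand-in table; the field layout mirrors typical neutron net-show output, and the values here are placeholders:

```shell
# Sketch: pull the MTU out of `neutron net-show` style output so it can be
# recorded for later use. The here-text stands in for real command output.
net_show_output='+-------+--------------------------+
| Field | Value                    |
+-------+--------------------------+
| mtu   | 1500                     |
| name  | vNonStop_Maintenance_LAN |
+-------+--------------------------+'

mtu=$(printf '%s\n' "$net_show_output" | awk -F'|' '$2 ~ /mtu/ {gsub(/ /, "", $3); print $3}')
echo "record this MTU for later use: $mtu"
```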
• Created the Virtualized NonStop Operations LAN, which must be an external network that a physical Windows
console can access for vNSC, OSM, and TACL. For example:
. ./service.osrc
neutron net-create "vNonStop_Operations_LAN" \
--shared \
--provider:network-type vlan \
--provider:physical_network physnet1 \
--provider:segmentation_id 102
neutron subnet-create <netID> 10.1.0.0/16 --enable-dhcp --no-gateway \
--name "vNonStop_Operations_Subnet"
4. Proper NIC configurations for RoCE and vCLIMs (IP and Telco)
The name of the 560-series or 530-series physical device(s) registered with OpenStack (optional, but
recommended)
TIP: These items will have been configured by the OpenStack Administrator on each compute node in the
ml2_conf_sriov_agent.ini file. For example:
[sriov_nic]
physical_device_mappings = extnetA:hed1,extnetB:hed2
exclude_devices =
The NIC configuration on each compute node has been verified by the OpenStack administrator.
The RoCE NIC with RoCEv2 enabled. For example:
~# dmesg | grep RoCE
[618.282135] mlx4_core: device is working in RoCE mode: RoCE V2
[633.360934] mlx4_core: device is working in RoCE mode: RoCE V2
The RoCE NIC with SR-IOV enabled (virtual functions are listed). For example:
~# lspci | grep Mell
87:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
87:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
The NIC with virtual functions enabled (partial example shown):
09:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01)
A NIC with connectivity at 10GbE speed (or the desired speed). For example:
root@comp004:~# ethtool hed1 | grep Speed
Speed: 10000Mb/s
root@comp004:~# ethtool hed2 | grep Speed
Speed: 10000Mb/s
The RoCE NIC with link alive at 40GbE, on both ports. For example:
root@comp004:~# ethtool hed5 | grep Speed
Speed: 40000Mb/s
root@comp004:~# ethtool hed6 | grep Speed
Speed: 40000Mb/s
5. Planned fault isolation
Reviewed “Fault zone isolation options for vNS” (page 22) and selected an option.
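The SR-IOV check above (virtual functions listed by lspci) can be scripted. The sample text below stands in for real compute-node output:

```shell
# Sketch: count the virtual functions in lspci-style output. The sample
# stands in for real `lspci | grep Mell` output on a compute node.
lspci_sample='87:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
87:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
87:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]'

vf_count=$(printf '%s\n' "$lspci_sample" | grep -c 'Virtual Function')
echo "virtual functions found: $vf_count"   # prints: virtual functions found: 4
```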
1 GB huge pages enabled for vCLIMs with:
• 16 GB for IP vCLIM or Telco vCLIM
• 4 GB for Storage vCLIM
• These pages evenly spread between available NUMA zones
Reviewed the huge pages on each compute node (this example assumes you have a static workload during
deployment):
root@comp004:~# cat /sys/devices/system/node/node*/meminfo | grep Huge
Node 0 HugePages_Total: 123
Node 0 HugePages_Free: 67
Node 1 HugePages_Total: 125
Node 1 HugePages_Free: 81
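Totaling the free 1 GB huge pages across NUMA nodes can be scripted against the same meminfo output. The sample data below mirrors the example above:

```shell
# Sketch: total the free 1 GB huge pages across NUMA nodes. The sample text
# stands in for /sys/devices/system/node/node*/meminfo output.
meminfo_sample='Node 0 HugePages_Total: 123
Node 0 HugePages_Free: 67
Node 1 HugePages_Total: 125
Node 1 HugePages_Free: 81'

free_pages=$(printf '%s\n' "$meminfo_sample" | awk '/HugePages_Free/ {sum += $NF} END {print sum}')
echo "free 1 GB huge pages across all nodes: $free_pages"
```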
The vCLIMs have compute nodes with isolated hyperthreads. For example:
root@comp004:~$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.21-1-amd64-hpelinux root=/dev/mapper/hlm--vg-root ro crashkernel=384M-2G:64M,2G-:256M
hugepagesz=1G hugepages=224
default_hugepagesz=1G
isolcpus=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47
intel_iommu=on
iommu=pt quiet
Isolated the CPUs and created a chart of available CPUs that is guided by CPU architecture. For example:
root@comp004:~$ sudo lscpu | grep NUMA
NUMA node(s):        2
NUMA node0 CPU(s):   0-11,24-35
NUMA node1 CPU(s):   12-23,36-47
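Expanding the lscpu range notation into individual CPU numbers helps when building the chart. A minimal sketch:

```shell
# Sketch: expand an lscpu-style CPU list such as "0-11,24-35" into one CPU
# number per line, which helps when building the per-NUMA-node CPU chart.
expand_cpus() {
  printf '%s\n' "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done
}

expand_cpus "0-11,24-35" | wc -l   # 24 hyperthreads on NUMA node 0
```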
You have obtained the list of VMs. This assumes you have a static workload. For example:
root@comp004:~$ sudo virsh list --all
 Id    Name                 State
------------------------------------
 2     instance-0000000f    running
 3     instance-00000017    running
 4     instance-00000019    running
 6     instance-0000001e    running
For each VM, you have marked the used cells in the CPU chart and associated hyperthreads (if not explicitly
listed). This example shows a 4-core, 8-hyperthread vCLIM and assumes a static workload:
root@comp0004:~$ sudo virsh dumpxml instance-00000019 | grep cpupin
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='25'/>
<vcpupin vcpu='2' cpuset='9'/>
<vcpupin vcpu='3' cpuset='33'/>
<vcpupin vcpu='4' cpuset='16'/>
<vcpupin vcpu='5' cpuset='40'/>
<vcpupin vcpu='6' cpuset='13'/>
<vcpupin vcpu='7' cpuset='37'/>
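Collecting the pinned hyperthreads from the dumpxml output can be scripted. The sample lines below mirror the example above:

```shell
# Sketch: list the physical hyperthreads a VM is pinned to, parsed from
# `virsh dumpxml <instance> | grep cpupin` style output (sample lines below).
vcpupin_sample="<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='25'/>
<vcpupin vcpu='2' cpuset='9'/>
<vcpupin vcpu='3' cpuset='33'/>"

pinned=$(printf '%s\n' "$vcpupin_sample" | sed -n "s/.*cpuset='\([0-9]*\)'.*/\1/p" | sort -n | xargs)
echo "mark these hyperthreads as used in the chart: $pinned"
```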
Repeat the previous step for each VM and complete this checklist, then proceed to the next chapter.
5 Configuring a Virtualized NonStop system for deployment
These topics describe the prerequisites or steps required to deploy vNS and should be followed
in this order.
1. “Configuring the Host OS on the compute nodes” (page 30)
2. “Configuring OpenStack” (page 30)
Configuring the Host OS on the compute nodes
You must configure the compute nodes to support SRIOV passthrough, Global Pause on the
RoCE NICs, huge page memory configuration, and CPU isolation to support the vNS system.
You must perform these steps on each compute node.
1. Install the OFED driver (version 3.3 or later) for your Host OS using this download link:
http://www.mellanox.com/page/firmware_download
2. Follow the firmware package instructions for downloading and installing the latest firmware
(minimum 2.35.5100 or later) on each HPE ConnectX-3 Pro 2p 544+ RoCE NIC.
3. Enable IOMMU on the node: add the following flags to the default command line arguments
in the grub configuration file to configure SRIOV on the NIC.
intel_iommu=on
iommu=pt
4. Run update-grub to update the grub loader and reboot the node.
5. Open this directory: /etc/modprobe.d/
a. Create the following file to configure SRIOV on the NICs.
mlx4_core.conf
options mlx4_core num_vfs=4 debug_level=1 roce_mode=2 port_type_array=2,2 probe_vf=1
b. After the file is created, reboot the node or restart the OFED driver by issuing:
/etc/init.d/openibd restart
6. Add these flags to the default command line arguments in the grub configuration file to
allocate memory for the VMs that will be deployed on the compute node. In the following
arguments, <num> is the number of 1 GB pages to pre-allocate. Make sure you leave sufficient
unallocated memory on the host to support the overhead of the Host OS and the KVM.
hugepagesz=1G
hugepages=<num>
transparent_hugepage=never
7. Any CPU cores that will be assigned to VMs must be isolated so that the Host OS does not
use them. Add this flag to the default command line arguments in the grub configuration file.
isolcpus=<1-9,11-19,21-29,31-39>
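Taken together, steps 3, 6, and 7 end up on a single kernel command line. Below is a sketch of how the /etc/default/grub entry might look; the page count and the isolated-core list are illustrative values that must be sized to your own hardware (this example reserves hyperthreads 0 and 24 for the Host OS, matching the planning-checklist example):

```shell
# Illustrative /etc/default/grub entry combining the flags from steps 3, 6,
# and 7. hugepages=224 and the isolcpus list are examples only; size them
# to your hardware, leaving memory and cores for the Host OS and KVM.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt hugepagesz=1G hugepages=224 default_hugepagesz=1G transparent_hugepage=never isolcpus=1-23,25-47"
```

After editing, run update-grub and reboot the node, as in step 4.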
Configuring OpenStack
You must perform some OpenStack configuration to support deployment of a Virtualized NonStop
system into the OpenStack cloud.
1. On each compute node, edit the pci_passthrough_whitelist parameter in the nova.conf
file located at /etc/nova/nova.conf with the applicable product ID entries (1004 for
SRIOV or 1007 for PCI passthrough, as shown following).
For SRIOV:
pci_passthrough_whitelist = [{"vendor_id":"15b3", "product_id":"1004"}]
For PCI passthrough:
pci_passthrough_whitelist = [{"vendor_id":"15b3", "product_id":"1007"}]
2. On each node in the control plane, add a pci_alias parameter to the nova.conf file for
the product ID for PCI passthrough and/or SRIOV.
For SRIOV:
pci_alias={"name":"<alias name>", "product_id":"1004", "vendor_id":"15b3", "device_type":"type-VF"}
For PCI passthrough:
pci_alias={"name":"<alias name>", "product_id":"1007", "vendor_id":"15b3", "device_type":"type-PF"}
where <alias name> in each is a unique string that will be used when adding the vNS flavors.
3. On the compute nodes, ensure the nova.conf file has these options:
In the DEFAULT section:
vcpu_pin_set (this option should match the isolcpus setting for CPU isolation set during
step 7 of “Configuring the Host OS on the compute nodes”)
In the libvirt section:
cpu_mode = host-passthrough (passes physical CPU information to the VMs)
disk_cachemodes = block-directsync (disables write cache on block devices)
4. Add the Neutron SRIOV NIC agent to the compute node by following the OpenStack
instructions for configuring PCI passthrough or SRIOV for any networking NICs to be passed
in to the IP or Telco vCLIMs.
6 Installing and configuring Virtualized NonStop software
and tools
These topics describe the steps or prerequisites required to deploy vNS and should be followed
in this order.
1. “Obtaining Virtualized NonStop software” (page 32)
2. “Installing Virtualized NonStop Deployment Tools” (page 32)
3. “Configuring Virtualized NonStop Deployment Tools” (page 34)
4. “Importing Virtualized NonStop images into Glance” (page 35)
5. “Creating the Virtualized NonStop system flavors” (page 36)
6. “Installing licenses on the vNS license server” (page 38)
7. “Create the vNSC” (page 38)
8. “Mandatory prerequisites for a vNS system” (page 27)
Obtaining Virtualized NonStop software
The vNS software for OpenStack is obtained from HPE electronically or via DVD.
Sourcing in an OpenStack resource file
If OpenStack is installed, you can retrieve a resource file for your OpenStack user using the
Horizon dashboard.
1. Log into your domain as your OpenStack user.
2. Select the Virtualized NonStop project name, and navigate to Compute->Access & Security
in the left-hand pane.
3. In the tabs shown, select API Access, then click the button to download the appropriate
version of the OpenStack RC file for the OpenStack user account.
4. Move this file to the Linux controller node in your Linux user account home directory to make
it easy to locate.
5. Depending on how you name your resource file, you will issue a "source" or "." command,
such as: $ source <my-openstack-user-name>-openrc
Here is an example using ".":
$ . admin-openrc
Installing Virtualized NonStop Deployment Tools
NOTE: The installation procedure for the Virtualized NonStop Deployment Tools assumes that
you are root on the controller or have sudo permissions. OpenStack commands shown here
must be run as an administrator and assume the appropriate OpenStack resource file has been
sourced for an administrative user.
1. Download the HPE Virtualized NonStop Deployment Tools from Scout. This download
includes a file named vnonstop-openstack-<x.y.z>.tar.gz, where <x.y.z> is a version
string.
2. Copy the package to the control node.
3. Untar/gzip the package:
tar -pzvxf vnonstop-openstack-x.y.z.tar.gz
4. Change directories into the untarred package:
cd vnonstop-openstack-x.y.z/
5. Run the install script.
• If you are running as the root user: ./install.sh
• If you are not running as the root user: sudo ./install.sh
6. Verify the installed version:
$ vnonstop --version
7. Create the MySQL database for the vNS service, and grant privileges to the vNS user of
MySQL on local and remote hosts. MySQL should have been installed and started at an
early stage of OpenStack deployment, during installation of the Keystone identity service.
TIP:
• MariaDB is compatible with MySQL and can be used for the Virtualized NonStop
database. When using MySQL, the banner and prompt will differ from what is shown in
the example below.
• Ensure that you replace <PASSWORD> in the example with a suitable password. This
password will be required later to configure the vNS service in OpenStack.
Example 1 Creating a MariaDB database
$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. ...
...
MariaDB [(none)]> create database vnonstop;
...
MariaDB [(none)]> grant all privileges on vnonstop.* to 'vnonstop'@'localhost' identified by '<PASSWORD>';
...
MariaDB [(none)]> grant all privileges on vnonstop.* to 'vnonstop'@'%' identified by '<PASSWORD>';
...
MariaDB [(none)]> exit
Bye
8. Make sure you source the resource file for the OpenStack user with admin rights before
performing the series of commands in the example.
This example displays the sequence of commands for creating the vNS service, user, role,
and endpoints in OpenStack.
Note that you need to provide information for <service project>, <RegionName>, and
<host or IP> as described in the Role, region, and host/IP table.
Example 2 Creating vNS service, user, role and endpoints in OpenStack
$ openstack service create vlicense --name "vnonstop" --description "vNonStop Licensing and API service"
$ openstack user create vnonstop --password-prompt
$ openstack role add --user vnonstop --project <service project> admin
$ openstack endpoint create --region <RegionName> vlicense admin http://<host or IP>:9990/v1
$ openstack endpoint create --region <RegionName> vlicense internal http://<host or IP>:9990/v1
$ openstack endpoint create --region <RegionName> vlicense public http://<host or IP>:9990/v1
Role, region, and host/IP

<service project>   OpenStack project in which other OpenStack services such as Nova were created
<host or IP>        Hostname or IP address of the controller
<RegionName>        OpenStack region name (only necessary if a multi-region OpenStack setup is being used)
Installing Virtualized NonStop Deployment Tools
TIP: If you have problems finding the correct <service project> name or
<RegionName>, review the output of these commands:
$ openstack project list --long
$ openstack region list
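Because the same URL is registered for all three endpoint interfaces, the endpoint-create commands can be generated with a small loop. A sketch under assumed placeholder values for the region and controller host; the echoed lines match the commands shown in Example 2:

```shell
# REGION and HOST are placeholder values -- substitute your own
# region name and controller hostname or IP address.
REGION="RegionOne"
HOST="controller"
for iface in admin internal public; do
  echo "openstack endpoint create --region ${REGION} vlicense ${iface} http://${HOST}:9990/v1"
done
```

Remove the `echo` (or pipe the output to a shell) to actually run the commands once the values are verified.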
9. Create a configuration file called /etc/vnonstop/vnonstop.conf.
Example 3 “vNS configuration file” shows a full configuration file.
Example 3 vNS configuration file
[DEFAULT]
log_file=/var/log/vnonstop/vnonstop-api.log
[database]
host = localhost
user = vnonstop
password = password
name = vnonstop
[keystone_authtoken]
auth_uri = http://localhost:5000
auth_url = http://localhost:35357
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = vnonstop
password = password
10. Open the configuration file with an editor to set or add the entries described in this table.

Edit this section of file    With this setting

[DEFAULT]
• Set the log file. Recommended: /var/log/vnonstop/vnonstop-api.log
• If a different IP port than 9990 is desired, add a line port=<port #>

[database]
• Set the host for the database connection. Current default is localhost.
• Set the database name for the database connection. Current default is vnonstop.
• Set the user for the database connection. Current default is vnonstop.
• Set the password for the database connection. No default.

[keystone_authtoken]
Based on the Keystone configuration in your cloud, set the keystone_authtoken for the vNS user previously created.
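A quick sanity check of the configuration file can catch the most common mistake, leaving a placeholder password in place. A minimal sketch; a temporary file stands in for /etc/vnonstop/vnonstop.conf here so the check itself is visible:

```shell
# Check a vnonstop.conf-style file for a placeholder [database] password.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
log_file=/var/log/vnonstop/vnonstop-api.log
[database]
host = localhost
user = vnonstop
password = password
name = vnonstop
EOF
if grep -Eq '^password[[:space:]]*=[[:space:]]*password$' "$conf"; then
  echo "WARNING: placeholder password still set"
fi
rm -f "$conf"
```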
11. Restart the vNS-api service by issuing:
restart vnonstop-api service
Configuring Virtualized NonStop Deployment Tools
When the Virtualized NonStop tools are installed, the Virtualized NonStop license service will be
started on the control plane. Prior to deploying any systems, several administrative tasks need
to be performed as described in the following topics.
Importing Virtualized NonStop images into Glance
You must acquire the images for the vNS components and save them to either:
• The node where you installed the Virtualized NonStop Deployment tools, so you can use the command-line tools to import the images
• The system that will be used to access Horizon and upload the images
NOTE: The core license file will also need to be installed on the vNS system after NSK is
running, using the Install Core License guided procedure found on the system object in the
OSM Service Connection.
vNS component                                  Initial delivery          Updates

SUT                                            QCOW2 image               BACKUP format
Core license file                              File                      File
vCLIM software                                 QCOW2 image               Installer
vNSC software                                  ISO image                 Installer
Halted State Services (HSS) initial boot OS    ISO image                 ISO image
Independent Products                           Various formats           Various formats
Software Product Revisions (SPRs)              -                         Files
vNS deployment tools and license server        Zipped file collection    Zipped file collection
Once the images have been acquired from Scout, use the openstack image create command
to upload the images into Glance.
The images may be added to the admin project and made public so that the users of the Virtualized
NonStop project or projects have access to them, or they may be added directly to the Virtualized
NonStop project(s).
Example 4 Uploading an image into Glance
root@comp004:~$ . ./service.osrc
root@comp004:~$ glance image-create --name $name --file ../Imgs/$name \
--container-format bare --disk-format $format --visibility public
Example 5 Verifying an image (QCOW)
root@comp004:~$ sha256sum T0976L03_15FEB2017_11JAN2017_L03.qcow2
6167dbd47f6e9b59a37f948a113a227cb9bd704440d9ef0b6c12c8d77c01b48b
root@comp004:~$ cat T0976L03_15FEB2017_11JAN2017_L03.sha256
6167dbd47f6e9b59a37f948a113a227cb9bd704440d9ef0b6c12c8d77c01b48b  T0976L03_15FEB2017_11JAN2017_L03.qcow2
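The manual comparison above can be automated with `sha256sum -c`, which reads the expected hash and filename from the .sha256 file and verifies the image in one step. A sketch using a throwaway sample file; substitute your actual image and checksum file names:

```shell
# Create a sample file and its checksum file, then verify it the way
# a downloaded image would be verified. sha256sum -c reads standard
# "hash  filename" lines and reports OK or FAILED per file.
echo "sample image data" > sample.qcow2
sha256sum sample.qcow2 > sample.sha256
sha256sum -c sample.sha256
rm -f sample.qcow2 sample.sha256
```

For a real image, this is simply `sha256sum -c T0976L03_15FEB2017_11JAN2017_L03.sha256` run in the download directory.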
Creating the Virtualized NonStop system flavors
IMPORTANT: Until the administrator grants permission to a user, only the administrator (by
default) can manage flavors for NonStop virtual machines (VMs).
NOTE:
• Before a vNS system can be deployed, you must create flavors for a minimum of 1 vNS CPU, 1 Storage vCLIM, and 1 IP vCLIM or 1 Telco vCLIM.
• For an overview of creating flavors, see “Flavor management for NonStop virtual machines” (page 23).
1. Log on to Horizon.
2. Using the Virtualized NonStop deployment tool tab, select Admin->NonStop->Flavors. The flavors panel displays. If this is the first time you are adding a flavor, the panel will be empty.
The following example shows a flavor panel with flavors.
3. Click +Create Flavor. The Create Virtualized NonStop Flavor dialog appears.
4. Click Next. The Flavor Information dialog box appears.
a. Enter a name in Flavor Name.
b. Using the Core Count drop-down menu, select a Core Count that is compatible with your license(s) and system model (entry-class or high-end).
c. In RAM Size in Gigabytes, enter a random access memory (RAM) size that is compatible with your license(s) and system model (entry-class or high-end).
d. Enter a string based on the pci_alias field values for the RoCE NIC PCI interface. The PCI alias uses either VF for SR-IOV or PF for PCI passthrough. The PCI alias information is contained in /etc/nova/nova.conf on the OpenStack control node. It can also be obtained by logging on to the control node with SSH and issuing this command:
~$ sudo cat /etc/nova/nova.conf | grep pci_alias
Example result:
pci_alias={"name":"Mellanox_VF", "product_id":"1004", "vendor_id":"15b3", "device_type":"type-VF"}
In the example above, "Mellanox_VF" is the alias name for the SR-IOV virtual function of a RoCE NIC. A new CPU flavor entry for this alias name could be Mellanox_VF:1, so that the flavor helps select a Nova virtual machine location requiring one (:1) SR-IOV virtual function on a Mellanox PCI interface.
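The alias name can also be pulled out of nova.conf with a one-line filter rather than read by eye. A sketch; the pci_alias line is hardcoded below with the example value from above so the parsing is visible:

```shell
# Extract the "name" value from a pci_alias line in nova.conf.
# In practice, $line would come from: sudo grep pci_alias /etc/nova/nova.conf
line='pci_alias={"name":"Mellanox_VF", "product_id":"1004", "vendor_id":"15b3", "device_type":"type-VF"}'
alias=$(printf '%s\n' "$line" | sed 's/.*"name":"\([^"]*\)".*/\1/')
echo "$alias"   # prints Mellanox_VF
```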
5. If you have finished your selections, click Create Flavor.
6. Repeat this procedure to create your other vNS flavors.
Installing licenses on the vNS license server
TIP: For more information about Virtualized NonStop licenses, see “License management for
Virtualized NonStop” (page 24).
1. Install the Virtualized NonStop license on the PC that uses the Horizon dashboard.
2. Log on to Horizon as a user for the OpenStack project where you intend to create the vNS system.
3. Select Project->NonStop->Licensing in the tabs on the left of the Horizon dashboard. This is where licenses will be uploaded to OpenStack and where files can be located later.
4. Click Upload New License on the upper right to display the dialog.
5. Click Browse to locate the license on your PC. Click Upload License in the dialog.
6. Repeat this procedure to install any other licenses.
Create the vNSC
The vNS system requires a Windows Server 2012 VM running as a vNSC. If you do not have a Windows image for use, see “Creating a Virtualized NonStop System console (vNSC)” (page 66).
7 Deploying the Virtualized NonStop System
IMPORTANT: Make sure you complete “Mandatory prerequisites for a vNS system” (page 27)
before proceeding with deployment.
1. Log on to Horizon and select your vNS project. Select Project->NonStop->Servers. Click Launch System. The initial user interface displays.
2. The Enter the Virtualized NonStop system details dialog appears.
a. In System Name, enter the system name.
b. In Expand Node Number, enter the Expand Node Number.
c. Select the HSS image and CLIM DVD image from the OpenStack images available to
this project and user.
d. From the Virtualized NonStop License drop-down menu, select the license that the vNS
system will use. (Licenses that have been uploaded are listed in
Project->NonStop->Licensing).
3. Click Next. The Select the desired Fault Isolation level for this system dialog appears.
IMPORTANT: If you have not planned your fault isolation level, or need more information about fault isolation, see “Fault zone isolation options for vNS” (page 22).
From the Fault Isolation drop-down menu, select one of the following:
• NonStop Standard
• CPUs Only
• None
IMPORTANT: HPE recommends selecting NonStop Standard because it is fault-tolerant.
4. After selecting your Fault Isolation level for the system (and, if applicable, availability zones), click Next. The Select which network you would like to use for the Maintenance LAN dialog appears. You must select a Maintenance LAN for the vNS system that uses IPv4.
NOTE: If you have more than one subnet (optional) the Select subnet: field displays the
subnets.
5. Enter the IP addresses in the $ZTCP0 IP Address (IPv4 format) and the $ZTCP1 IP Address (IPv4 format) fields. HPE recommends that you use the same default IP addresses (192.168.36.10 and 192.168.36.11) as physical NonStop X systems. These default IP addresses are shown in this example dialog.
TIP: If two or more deployed vNS systems use the same Maintenance LAN and subnet
configured in OpenStack, see the procedure for adjusting IP addresses described in the
NonStop Dedicated Service LAN Configuration and Management Manual for NonStop X
Systems. That procedure also applies to vNS systems.
6. Click Next. The Select the number of logical CPUs for the vNS system and the flavor of the CPUs dialog appears. For more information, see “Flavor management for NonStop virtual machines” (page 23).
IMPORTANT: Ensure your license(s) allow your CPU count and flavor selections.
a. In CPU Count, enter the number of NonStop logical processors.
b. Under CPU Flavor, select a flavor name.
7. You must configure a minimum of two Storage vCLIMs named SCLIM000 and SCLIM001. From the CLIM Type drop-down menu, select Storage.
a. Expand the bottom of the dialog to display CLIM Flavor. Select the radio button that has the flavor for SCLIM000.
b. In Maintenance LAN IP Address for this CLIM, enter the IP address for the virtualized SCLIM000. HPE recommends that you use the same default IP address (192.168.37.0) as a physical SCLIM000.
c. Under the IP address, click Add New.
The first required Storage CLIM (SCLIM000) appears in CLIMs to be created.
8. The initial CLIM type dialog box reappears and the CLIMs selection remains selected in the left pane. Add the second required Storage vCLIM (SCLIM001) with a new Maintenance LAN IP address (the SCLIM001 vCLIM uses the same flavor as the SCLIM000 vCLIM). HPE recommends that you use the same default IP address (192.168.37.1) as a physical SCLIM001.
9. Under the IP address, click Add New.
The second required Storage vCLIM (SCLIM001) appears in CLIMs to be created along with the previously added first Storage vCLIM.
TIP: If you have only two Storage vCLIMs, proceed to the next step to add IP vCLIMs. Or, continue adding the number of Storage vCLIMs allowed by your license (with distinct IP addresses). Remember to select Add New after filling in the entries for each Storage vCLIM in the dialog.
10. After you click Add New for the last Storage vCLIM, select IP for the vCLIM type. You must add a minimum of two vCLIMs named NCLIM000 and NCLIM001.
a. Expand the bottom of the dialog to display CLIM Flavor. Select the radio button that has the flavor for NCLIM000.
b. HPE recommends that you enter Maintenance LAN IP address 192.168.38.0 for the first CLIM (IP or Telco) named NCLIM000.
c. Under the IP address, click Add New.
The first vCLIM (IP or Telco) NCLIM000 appears in CLIMs to be created along with the previously added Storage vCLIMs.
11. HPE recommends that you enter Maintenance LAN IP address 192.168.38.1 for the second vCLIM (IP or Telco) named NCLIM001.
a. Under the IP address, click Add New.
The second vCLIM (IP or Telco) NCLIM001 appears in CLIMs to be created along with the previously added vCLIMs. At this point, the minimum required number of vCLIMs (2 Storage, 2 IP) have been added to create a new vNS system, as shown in this example dialog.
b. Repeat the same process for each additional vCLIM beyond the two required Storage and IP vCLIMs.
c. Click Next.
12. The Enter the requested information for each Volume you would like to create dialog
appears. This dialog lets you add OpenStack Volumes (block storage) and attach them to
Storage vCLIMs.
a. The first OpenStack Volume you must create is $SYSTEM which is required and
predefined. In this example dialog, $SYSTEM is shown in Name for this Volume.
b. In Size for Volume in Gigabytes, enter a value. The dialog shows a value of 100 just as an example. If your OpenStack administrator has created multiple volume "Types" in OpenStack Cinder block storage, for different storage options, you can select the appropriate Type for your Volume.
c. In Type for this Volume, select the radio button that corresponds with your storage
option. Click Add New to add $SYSTEM as the first volume.
NOTE: For the L17.02 release, you may attach an OpenStack Volume to only one
Storage vCLIM. NonStop Backup and Mirror Backup paths are not supported in this
release, due to OpenStack restrictions.
13. Create the $AUDIT volume. You’ll need $AUDIT for later installation steps involving TMF,
DSMSCM, and KMSF.
a. In Name for this Volume, enter $AUDIT.
b. In Size for Volume in Gigabytes, enter a value. The dialog shows a value of 600 just
as an example.
c. In Type for this Volume, select the radio button that corresponds with your storage
option.
NOTE: The selected volume type should correspond to raid-0 with full provisioning
in the storage backend.
d. Click Add New to add $AUDIT as the second volume after $SYSTEM.
14. Click Next. The Select CLIM to configure network interfaces dialog appears.
IMPORTANT: Do not select the Maintenance LAN (provider-mlan) or RoCE network
(rocenet) in the Select Network section. You only use this dialog to configure the IO networks
for the CLIMs.
a. From the Select VNIC port type drop-down, select one of the following:
• normal — provides a default VirtIO port in OpenStack
• direct — provides an SR-IOV port
• direct-physical — provides a PCI passthrough port
For more information, click ? to bring up the help dialog.
b. After adding all the networks for the vCLIMs, click Create System to deploy the OpenStack instances, volumes, and other components for your vNS system. A minimum system (2 CPUs, 2 IP/Telco CLIMs, 2 Storage CLIMs, and 1 vNSC) takes about 20 minutes to be created as long as the administrator is familiar with the process. Larger systems, more disk volumes, and so on, can increase the create time.
c. After Create System has completed, a new vNS system appears under the Project->NonStop->Servers screen. The OpenStack components for NS vCPUs, vCLIMs, and Volumes appear in your project as Project->Compute->Instances and Volumes. The NonStop system name will appear as the first part of the name of these instances and volumes.
Post-deployment procedures for vCLIMs
Perform this procedure on each vCLIM. This procedure is done inside the vNS system's project (for example, after sourcing system.osrc).
1. Log in to Horizon. Select Project>Instances and then the vCLIM (for example, NCLIM000). There are two interfaces for the vCLIM:
• Log — A virtual serial port
  ◦ Provides diagnostic logs through vCLIM boot and run. Can be used to troubleshoot unresponsive vCLIMs.
  ◦ Only intended for authorized service providers or administrators
• Console — Has similar functionality as iLO remote console and provides a login shell to the vCLIM with a virtual display.
Figure 9 Login screen for vCLIM
2. Using the Horizon console login shell, configure each vCLIM’s eth0 address by logging in with the vCLIM's login credentials.
• user: root
• password: hpnonstop
Example 6 Configuring eth0 on vCLIM
climconfig interface -add eth0
climconfig ip -add eth0 -ipaddress 192.168.38.0 -netmask 255.255.0.0
climconfig interface -modify eth0 -mtu 1450
NOTE:
The IP address should be the same as the port-allocated IP address.
Use the MTU that you recorded earlier during “Planning tasks for a Virtualized NonStop system” (page 27).
3. Do not configure any other ports until the vCLIM is in a STARTED state. The vCLIM configuration is not complete until the vCLIM and other VMs are configured through the vNS System Configuration Tool and the system is coldloaded.
4. Repeat steps 1-4 on each vCLIM. Once completed, proceed to “Configuring a provisioned Virtualized NonStop system” (page 51).
8 Configuring a provisioned Virtualized NonStop system
A provisioned vNS system still needs to be configured to make sure that the provisioned vNS
VMs can communicate.
1. Log on to the vNSC, preferably using Remote Desktop. From the Windows Server 2012 R2
Start menu on the lower left, click OSM System Configuration Tool to launch the tool.
The OSM Console Tools on the vNSC have a new option, Configure a Virtualized NonStop
System. Select that option in the initial dialog displayed.
Click Next to view the Log on to MEUs dialog.
2. Click Next to view the Input System Info dialog.
In this screen, enter the system name you configured in the first dialog of the OpenStack
project "Launch New Server" wizard. Enter a \ as a prefix, even though you did not do so in
the "Launch New Server" wizard. Then enter the Expand Node Number you chose earlier
as the Expand Node Number in the System Configuration Tool. Finally enter the Virtualized
NonStop system serial number of the license you selected in the "Launch New Server"
wizard. Review these values to be sure that they match what you entered earlier.
NOTE: The "Advanced Configuration" button should not be used in the L17.02 release.
3. Click Next. The Select Number of Processors dialog appears.
In number of processors, enter the number of CPUs previously entered in the "NSK CPUs"
dialog of the "Launch New Server" wizard.
4. Click Next. The Input CLIM info dialog appears. Review and follow the instructions on the dialog.
5. Continue discovering CLIMs until all of the CLIMs are listed in the pane at the bottom of the dialog.
NOTE: Ignore the Change iLO Config option since it is not used for Virtualized NonStop.
6. Click Next to see a summary of system information. If the information is as expected, click Next again to perform the configuration steps on the CLIMs and MEUs.
7. Now that the vNS system has been provisioned and configured, you can boot from HSS into NSK using the System Startup Tool.
TIP: You can use the System Startup Tool to connect to the CLIMs and MEUs the same
way as physical NonStop X systems. For more information, see the online help within the
OSM System Startup Tool.
8.
a. Select Start System in the Operations menu. A dialog like the following appears.
b. Make only these changes in the System Startup dialog box:
• Under SYSnn and CIIN Option, enter 00 in SYSnn: (this is for initial system load)
• Under Configuration File, select Saved Version (CONFxxyy) and enter 0000 (which results in CONF0000)
• You can modify LUNs for the $SYSTEM primary drive or mirror drive by double-clicking the row of the relevant CLIM. This brings up an additional dialog for altering the LUN.
Click System Start. After a few minutes, the MR-Win6530, CLCI, and CNSL windows should
appear.
At this point, you need to configure initial NSK settings and perform the necessary vCLIM and SCF configuration so that the attached OpenStack disk volumes are recognized as NonStop disks and the network interfaces on vCLIMs are configured with NSK CIP providers in SCF, as described in “Booting the vNS system” (page 55).
Booting the vNS system
1. The system will first boot with a minimal configuration. The provisioned vCLIMs and volumes will need to be configured in SCF so that they are available for use in NSK.
2. Configure additional network interfaces on the vCLIMs.
3. Configure IP providers in SCF.
4. Add additional vCLIMs to SCF.
5. Run LUNMGR on each storage vCLIM. For more information on adding LUNs and other
LUN Manager commands, see “LUN Manager changes to support virtualized environments”
(page 16).
6. Add disks to SCF. For more information, see the SCF Reference Manual for the Storage
Subsystem.
7. Make sure that the time and time zone offset are correct.
8. After setting up NonStop volumes, processor swap files should be added for the Virtualized
NonStop processors, using limited space on $SYSTEM and additional space on any KMSF
swap volumes you have created in OpenStack, and then configured in vCLIMs (accept LUNs
for the vCLIM storage devices) and in the SCF Storage Subsystem.
9. Save your new SCF configuration as some valid CONFxx.yy so that it can be referred to
explicitly at a later system load.
10. DSMSCM should generally be moved to the configured $DSMSCM from the custom
$SYSTEM disk image for your system, so that DSM/SCM can be configured and started.
11. NSK users should be added, and other basic NSK setup should be performed.
12. Use NonStop Software Essentials to install or update any desired SPRs. Once the vCLIMs
and volumes have been configured, NSE can be used to install any additional SPRs that
are needed by the customer, or to update any SPRs that need updating. See the L17.02
Software Installation and Upgrade Guide.
At this point, the system is ready for the customer application to be deployed on it.
9 vNS administrator tasks
Scenarios for vNS administrators
These topics describe administrator tasks for the vNS system. All tasks (except viewing and
reprovisioning resources) offer the option of using the Horizon interface or OpenStack CLI.
• “Deprovisioning a vNS system” (page 56)
• “Listing created vNS systems” (page 56)
• “Viewing virtual resources for a vNS system” (page 57)
• “Reviewing maximum transmission unit (MTU) in OpenStack” (page 57)
• “Managing vNS resources” (page 58)
• “Scenarios that prompt re-provision of resources” (page 59)
NOTE: All tasks in this chapter that use OpenStack CLI procedures show the optional [-h] argument. The [-h] and [--help] optional arguments show a help message and exit. You can choose not to provide these optional arguments.
Deprovisioning a vNS system
CAUTION: The deprovisioning procedures delete all resources for the system. If any data
should be saved, back up and save those volumes before proceeding. Ensure that the backup
remains accessible after the system is deleted.
Deprovisioning using the Horizon interface:
1. Log on to the Horizon interface.
2. Select the Delete System action for the system that is to be deprovisioned.
3. After the delete action issues successfully, a Delete successful popup dialog displays. The state of the system will change from OK to DELETING in Horizon. Once all resources are deprovisioned, the system disappears from the list.
Deprovisioning using the OpenStack CLI:
Using the OpenStack CLI, issue the following:
vnonstop system delete [-h] <system-serial-number>
Listing created vNS systems
To list vNS systems, select one of these methods:
• Using the Horizon interface, select the Horizon->NonStop Servers page to show a list of created systems.
• Using the OpenStack CLI, issue the following:
vnonstop system list [-h]
+------+-------+--------------------+------+---------------+----------+-------------+------------+
| Name | SSN   | Expand Node Number | cpus | storage clims | ip clims | telco clims | task state |
+------+-------+--------------------+------+---------------+----------+-------------+------------+
| VOSM | 55555 | 55                 | 2    | 2             | 2        | 2           | 0          |
+------+-------+--------------------+------+---------------+----------+-------------+------------+
Viewing virtual resources for a vNS system
To view virtual resources, issue the following OpenStack CLI command:
vnonstop system show [-h] <system-serial-number>
Example 7 Example display of virtual resources for a vNS system
System Name: VOSM1
SSN: 555555
Expand Node Number: 55
CPUs:
VOSM1-CPU01: flavor = vn.nskcpu, image = 28f4fe52-f86f-4ef3-acad-66f20eb0a31e
VOSM1-CPU00: flavor = vn.nskcpu, image = 28f4fe52-f86f-4ef3-acad-66f20eb0a31e
Storage CLIMS:
VOSM1-SCLIM001: flavor = vn.sclim
VOSM1-SCLIM000: flavor = vn.sclim
IP CLIMS:
VOSM1-NCLIM000: flavor = vn.nclim8p
VOSM1-NCLIM001: flavor = vn.nclim8p
NSK Volumes:
$SYSTEM: size = 100, clims: VOSM1-SCLIM001, VOSM1-SCLIM000
$AUDIT: size = 50, clims: VOSM1-SCLIM001, VOSM1-SCLIM000
$DSMSCM: size = 50, clims: VOSM1-SCLIM001, VOSM1-SCLIM000
Reviewing maximum transmission unit (MTU) in OpenStack
An incorrect MTU can result in fragmentation or, on some configurations, even packet loss. This example shows the result of checking the MTU on a vNS system using the net-show command and the network tab in the Horizon interface.
Figure 10 MTU result in OpenStack
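The MTU can also be read from the command line on a compute node. A sketch that parses the mtu field out of `ip link` style output; the sample line below is hardcoded so the parsing logic itself is testable:

```shell
# Parse the MTU value from an "ip link show" style line.
# In practice, $sample would come from: ip link show <interface>
sample='33: tapf72804f7-3d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UNKNOWN'
mtu=$(printf '%s\n' "$sample" | sed 's/.* mtu \([0-9]*\).*/\1/')
echo "$mtu"   # prints 1450
```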
Managing vNS resources
$AUDIT is needed for TMF. TMF must be started in order to use the DSM/SCM tool for installing new versions of software on the NonStop system. The initial configuration of the DSM/SCM database will be made available in a customized $SYSTEM image for the initially ordered RVU, based on a customer's purchased software. You will be able to move the initial DSM/SCM database to the $DSMSCM volume, to provide additional storage for installing and keeping track of software configuration changes such as new RVUs.
KMSF volumes will be configured for processor swap space after the Virtualized NonStop system is first started, using the NSKCOM KMSF tool. Some initial swap files may be placed on $SYSTEM, but large processor memory sizes and a large number of NonStop logical processors will create a need for one or more KMSF volumes.
Scenarios that prompt re-provision of resources

Scenario                            Options

A system resource is not working    vCLIMs, vNS CPUs, and NSK volumes can be reprovisioned.
HSS update                          vNS CPU VMs are deleted and re-created using the updated HSS ISO image from Glance.
Expand node number has changed      CPU identity is passed in to the CPU by OpenStack and cannot be changed after the VM is created.
vCLIM reimage                       By default, only deletes and recreates the vCLIM VM using the same boot volume and networks. Can pass an optional parameter to specify a vCLIM image, resulting in a new boot volume being created.
Disk errors                         NSK volume re-provisioning is done one “disk” at a time; for a mirrored volume, this is one half of the mirrored volume at a time.
For more information, see “Reprovision examples” (page 60).
Reprovision examples
Example 8 Example display of reprovisioning CPUs on a vNS system
vnonstop reprovision cpus [-h]
[--new-expand-node-number <new-expand-node-number>]
[--new-image-id <new-image-id>]
[--new-cpu-flavor <new-cpu-flavor>]
[--single-cpu <single-cpu>]
<system-serial-number>
Reprovisions the cpus on a Virtualized NonStop system.
positional arguments:
<system-serial-number>
System serial number
optional arguments:
-h, --help
show this help message and exit
--new-expand-node-number <new-expand-node-number>
New Expand Node number
--new-image-id <new-image-id>
New Image ID
--new-cpu-flavor <new-cpu-flavor>
New Cpu Flavor
--single-cpu <single-cpu>
Number of a single cpu to reprovision with the current
values
Example 9 Example display of reprovisioning CLIMs on a vNS system
vnonstop reprovision clim [-h] --clim-name <clim-name>
[--image-id <image-id>]
<system-serial-number>
Reprovisions a CLIM on a Virtualized NonStop system.
positional arguments:
<system-serial-number>
System serial number
optional arguments:
-h, --help
show this help message and exit
--clim-name <clim-name>
CLIM name. Required.
--image-id <image-id>
CLIM image ID. Default: None
Example 10 Example display of reprovisioning NSK volumes on a vNS system
vnonstop reprovision nsk-volume [-h] --volume-name <volume-name>
--clim-name <clim-name>
[--image-id <image-id>]
<system-serial-number>
Reprovisions an NSK Volume on a Virtualized NonStop system.
positional arguments:
<system-serial-number>
System serial number
optional arguments:
-h, --help
show this help message and exit
--volume-name <volume-name>
Volume name. Required.
--clim-name <clim-name>
Attached CLIM name. Required.
--image-id <image-id>
Volume image ID. Default: None
10 Troubleshooting vNS problems
• “Collecting vCLIM crash dumps and debug logs” (page 62)
• “vCLIM is unresponsive at the Horizon console” (page 62)
• “Using the vCLIM serial log to assist with troubleshooting” (page 62)
• “Debugging hypervisor issues” (page 63)
• “Networking issues for vCLIMs” (page 63)
• “Issues with HSS boot, reload, or CPU not being online” (page 63)
Collecting vCLIM crash dumps and debug logs
The primary tools for capturing debug information from a live vCLIM are:
• CLIMCMD <climName> clim abort
• CLIMCMD <climName> climdebuginfo
• CLIMCMD <climName> clim onlinedebug
vCLIM is unresponsive at the Horizon console
If a vCLIM is unresponsive to commands issued with CLIMCMD, and you are unable to log into
the vCLIM through the Horizon interface, you can force a vCLIM to abort and generate a debug
archive through the hypervisor. This is equivalent to sending NMI to a physical CLIM through the
iLO.
1. Find the compute node hosting the VM and the instance number.
root@cp1-comp004-mgmt:~$ nova show VNS1_NCLIM000 | grep OS-EXT-SRV-ATTR
| OS-EXT-SRV-ATTR:host                | cp1-comp004-mgmt  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | cp1-comp004-mgmt  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000019 |
2. Issue the virsh inject-nmi command to the VM directly from the compute node:
ssh root@cp1-comp004-mgmt virsh inject-nmi instance-00000019
Using the vCLIM serial log to assist with troubleshooting
The vCLIM moves initial boot logs to the virtual serial port during boot. If the vCLIM fails to boot,
this log might provide additional troubleshooting information. To access the log, click VIEW FULL
LOG which provides a full history of the serial log since boot. This can be downloaded for
attachment to support tickets.
TIP: The Console may be quiet during boot, especially during the initial boot when disks are resized.
Figure 11 vCLIM serial log
Debugging hypervisor issues
When debugging problems involving the hypervisor, dumping the XML for the virtual machine can provide useful information, as can other hypervisor setup information such as items validated during pre-installation.
Networking issues for vCLIMs
Tcpdump is the main tool for network troubleshooting. With virtio_net configurations, many network problems turn out to be switch or network problems. Many vCLIM network problems, or OpenStack or vCLIM configuration problems, are related to mismatched IP addresses, mismatched MTUs, or traffic being blocked by security policies.
To troubleshoot virtio_net, simultaneously collect tcpdump output from the CLIM interface (for example, eth1) and its tap interface (for example, tapf72804f7-3d). To find the tap interface:
1. Find the neutron port ID of interest:
root@cp1-c1-m1-mgmt:~$ neutron port-list | grep NCLIM000_MNT_PORT
| f72804f7-3d60-480c-864c-94be2c1afd73 | NCLIM000_MNT_PORT | fa:16:3e:62:c0:ed | {"subnet_id": "3423f1bc-fc80-4b75-a566-067c682bbc1a", "ip_address": "192.168.38.0"} |
2.
3.
Find and login to the compute node for the vCLIM:
The tap interface contains the first 10 bytes of ID:
root@comp004:~$ ip link show | grep tapf7280
33: tapf72804f7-3d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master qbrf72804f7-3d
state UNKNOWN mode DEFAULT group default qlen 1000
4.
Trace it with tcpdump in the compute node:
root@cp1-comp0001-mgmt:~$ sudo tcpdump –itapf72804f7-3d –n
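As a shortcut, the tap device name can be derived directly from the Neutron port ID: it is tap followed by the first 11 characters of the port UUID (Linux interface names are capped at 15 characters). A minimal sketch, using the port ID from the example above:

```shell
# Derive the tap interface name from a Neutron port UUID.
port_id="f72804f7-3d60-480c-864c-94be2c1afd73"   # from `neutron port-list`
tap="tap$(printf '%s' "$port_id" | cut -c1-11)"
echo "$tap"   # tapf72804f7-3d
# On the compute node: sudo tcpdump -i "$tap" -n
```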
TIP: Often this will simply prove that the vCLIM is sending and receiving what you expect, and
the problem will need OpenStack troubleshooting. See the OpenStack administration or
troubleshooting guides on the Internet.
Issues with HSS boot, reload, or CPU not being online
The following illustration shows how to access the CPU instance on a compute node using the
virtual serial port.
Figure 12 HSS does not boot, reload issues, CPU not online
11 Support and other resources
Accessing Hewlett Packard Enterprise Support
• For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
  www.hpe.com/assistance
• To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website:
  www.hpe.com/support/hpesc
Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components
Accessing updates
• Some software products provide a mechanism for accessing software updates through the
  product interface. Review your product documentation to identify the recommended software
  update method.
• To download product updates, go to either of the following:
  ◦ Hewlett Packard Enterprise Support Center Get connected with updates page:
    www.hpe.com/support/e-updates
  ◦ Software Depot website:
    www.hpe.com/support/softwaredepot
• To view and update your entitlements, and to link your contracts and warranties with your
  profile, go to the Hewlett Packard Enterprise Support Center More Information on Access
  to Support Materials page:
  www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed
through the Hewlett Packard Enterprise Support Center. You must have an HPE Passport
set up with relevant entitlements.
Websites
Website
Link
Hewlett Packard Enterprise Information Library
www.hpe.com/info/enterprise/docs
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Contact Hewlett Packard Enterprise Worldwide
www.hpe.com/assistance
Subscription Service/Support Alerts
www.hpe.com/support/e-updates
Software Depot
www.hpe.com/support/softwaredepot
Customer Self Repair
www.hpe.com/support/selfrepair
Insight Remote Support
www.hpe.com/info/insightremotesupport/docs
Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
www.hpe.com/storage/spock
Storage white papers and analyst reports
www.hpe.com/storage/whitepapers
Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product.
If a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at
your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized
service provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
www.hpe.com/support/selfrepair
Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware
event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution
based on your product’s service level. Hewlett Packard Enterprise strongly recommends that
you register your device for remote support.
For more information and device support details, go to the following website:
www.hpe.com/info/insightremotesupport/docs
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To
help us improve the documentation, send any errors, suggestions, or comments to Documentation
Feedback ([email protected]). When submitting your feedback, include the document
title, part number, edition, and publication date located on the front cover of the document. For
online help content, include the product name, product version, help edition, and publication date
located on the legal notices page.
A Creating a Virtualized NonStop System console (vNSC)
Prerequisites for creating the Virtualized NonStop System Console (vNSC)
Prior to creating the vNSC, make sure you have the following:
• Microsoft Windows Server 2012 R2 license and corresponding ISO file, or a physical DVD
  from which you can create an ISO file
• Downloaded ISO image of the virtio drivers for Windows virtual machines running on KVM:
  https://fedoraproject.org/wiki/Windows_Virtio_Drivers
• NonStop System Console (NSC) Update 27 or later ISO
• At least 20 GB for the empty volume that will hold the installed version of Windows and for
  creating the image
• Horizon logon permissions
• Internet Explorer 11.0
Creating a vNSC
1. Make sure you have met the prerequisites for vNSC installation.
2. Upload each of the following into Glance as individual images:
   • Microsoft Windows Server 2012 R2 ISO file
   • virtio driver ISO from the Fedora Project
   • NSC Update 27 or later ISO
3. Create an empty volume to hold the Microsoft ISO.
4. Create the VM from one of the OpenStack controllers by selecting an NSC flavor and using
   nova boot.

   NOTE: If you have multiple networks, nova requires you to select one of those networks
   and add the <Network ID> to nova boot (as shown in the example). This is not required if
   you have a single network.
   a. Select a vNSC flavor with at least 2 cores and 8 GB of RAM. For information on flavors,
      see “Flavor management for NonStop virtual machines” (page 23).
   b. Use nova boot to launch the vNSC.

      nova boot \
        --flavor <flavor name> \
        --image <Windows DVD ISO image ID> \
        --block-device id=<Empty vol ID>,source=volume,dest=volume,shutdown=preserve \
        --block-device id=<VirtIO driver ISO image ID>,source=image,dest=volume,size=1,bus=ide,type=cdrom,shutdown=remove \
        --block-device id=<NSC DVD ISO>,source=image,dest=volume,size=3,bus=ide,type=cdrom,shutdown=remove \
        --nic net-id=<Network ID, only necessary if more than one network is present> \
        <Instance name>
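Steps 2 and 3 above can be performed from an OpenStack controller with the Glance and Cinder command-line clients. The image names, file paths, and volume name below are examples only, and the option spellings should be checked against your installed client versions (for example, older Cinder clients use --display-name instead of --name):

```shell
# Upload the three ISOs into Glance as individual images (names/paths are examples).
glance image-create --name win2012r2-iso --disk-format iso --container-format bare --file Win2012R2.iso
glance image-create --name virtio-win-iso --disk-format iso --container-format bare --file virtio-win.iso
glance image-create --name nsc-update27-iso --disk-format iso --container-format bare --file NSC_Update27.iso

# Create the empty volume (at least 20 GB per the prerequisites) that will
# hold the installed version of Windows.
cinder create --name vnsc-win-boot 20
```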
   c. Log on to Horizon. From the Instances panel, view the newly created VM. Once the VM
      is in a Running state, select the instance and the Console tab.
5. Install Microsoft Windows Server 2012 R2.
a. Select your language and keyboard input.
b. Enter the Microsoft Windows Server 2012 R2 license key.
c. Select the Server with a GUI that matches your license type: Windows Standard or
Windows Datacenter.
d. Select Custom: Install Windows only (advanced).
e. Use the “Where do you want to install Windows” dialog to load the VirtIO storage drivers.
   Click Load driver.
f. The Select the driver to install dialog appears. Click Browse.
g. Expand the virtio CD drive (by clicking “+”). Expand viostor, expand 2k12R2, and select
   amd64. Click OK.
h. Select the Red Hat VirtIO SCSI controller option displayed and click Next.
i. Select the Drive 0 Partition and click New to create the partitions. Click Next to continue
   OS installation.
j. When prompted, enter a new password for the Administrator user.
6. Update drivers for the virtio network interface.
a. Open Device Manager using Start > Control Panel > Hardware > Device Manager.
b. Under Other Devices, locate Ethernet controller with a yellow warning icon. Right-click
and select Update driver software...
c. Click Browse my computer for driver software. Click Browse.
d. Navigate to the VirtIO CD drive. Click “+” to expand NetKVM. Expand 2k12R2 and
select amd64. Click OK and click Next.
e. When prompted, click Install to install the Red Hat VirtIO Ethernet adapter driver.
7. Update drivers for the VirtIO memory balloon device.
a. Under Other Devices, locate PCI Device with a yellow warning icon. Right-click and
select Update driver software...
b. Click Browse my computer for driver software. Click Browse.
c. Navigate to the VirtIO CD drive. Click “+” to expand Balloon. Expand 2k12R2 and select
amd64. Click OK and click Next.
d. When prompted, click Install to install the VirtIO Balloon Driver.
8. Close Device Manager and the Control Panel.
9. Update Windows settings.
a. Enable Remote Desktop from the Server Manager under Local Server. Temporarily
   disable the Windows Firewall. Start Internet Explorer 11.0 and from the toolbar select
   Tools > Internet Options.
b. In Internet Options, under the Security tab, disable Protected Mode for Local intranet
and Trusted sites.
c. Add http://192.168.*.* and https://192.168.*.* to the Local intranet site list.
d. Add localhost to the Compatibility View settings in Internet Explorer.
10. Install .NET Framework 3.5 features by launching the Server Manager. From the Manage
Menu, select Add Roles and Features.
a. Click Next at both the Before you Begin page and the Installation Type page.
b. Make sure Select a server from the server pool and local server are selected. Click
Next.
c. Click Next at the Server Roles page. Check only the box next to .NET Framework 3.5
   Features. Click Next.
d. Click the Specify an alternate source path link at the bottom of the Confirmation
   page. Enter D:\Sources\SxS in the Path field and click OK. Click Install.
11. Install NSC Update 27 or later from the NSC DVD.
    a. Open the MASTER folder on the NSC DVD image and run Setup.exe. The License
       Agreement dialog box appears. After accepting the agreement, click Next. The Welcome
       dialog displays.
    b. Select the following products: comforte MR-Win6530, OSM Low Level Link, OSM
       Console Tools, PuTTY, and OpenSSH. Click Next.
    c. After the products in the previous step finish installing, launch MR-Win6530 and then
       exit (this prepares the vNSC for automatic CLCI and CNSL launch from the OSM System
       Startup Tool).
12. Shut down and delete the Windows VM.
13. Upload the boot drive as an image into Glance.
14. Create vNSCs from the boot drive image.
NOTE: For a production system, HPE recommends you select the NSC boot volume size
of 250 GB to ensure adequate space for future CLIM software update packages, vNSC tool
updates, and possibly large volumes of problem data collection in the event of issues with
the future operation of your vNS system.
    a. In OpenStack, create a vNSC boot volume in Cinder from this image — one boot volume
       for each vNSC instance to be booted.
    b. Using Horizon, name the boot volumes for the vNSC name chosen (for example,
       system2-vnsc1-boot).
15. Launch a vNSC instance (for example, system2-vnsc1) from its boot volume and ensure
    that you associate the vNSC instance with your previously created maintenance network
    and some external Operations network (for use with Remote Desktop).
16. After launching the vNSC instance, connect to the Horizon console to perform the next step
    before attempting to use Remote Desktop.
17. Using the Horizon console window, rename the Windows Server in the Windows GUI to your
    preference.
18. Check IP addresses in Windows for the maintenance LAN and operations LAN. Since there
    is no requirement for a DHCP server on the vNS Maintenance LAN, the Windows operating
    system is likely to choose a default IP address starting with 169.254. Change this IP address
    (and subnet mask) to a static address on your vNS Maintenance LAN, such as those used
    for a physical NonStop System Console:
    • 192.168.36.1 for IP address
    • 255.255.0.0 for subnet mask
19. Set up firewall rules and security software.
20. Resize the local drive from the initial size selected during step 3 (when you created the
empty volume to hold the Microsoft ISO) to the production size selected during step 14 using
these steps.
a. From the desktop of your Windows Server 2012 Cloud Server, open the Server Manager
and select Tools > Computer Management.
b. In the left pane, under the Storage folder, select Disk Management. The Disk
Management left pane displays the current formatted hard drive for your server, generally
(C:), and the right pane displays the amount of unallocated space.
c. Right-click the C: drive. From the drop-down menu, select Disk Management. The
   left pane of Disk Management displays the amount of unallocated space.
d. Right-click the C: drive. From the drop-down menu, select Extend Volume. The Extend
   Volume Wizard dialog displays. Click Next.
e. The default selection is to add all available space to your C: drive (Disk 0). Click Next
   to add all available space. Once you see the C: drive expand to the maximum available
   space, click Finish.
f. The additional disk drive volume displays in Computer Management as available to
   use.
g. (Optional) Verify that the resizing of the drive worked correctly by loading the Computer
   Manager from the Server Manager and checking the disk size for the C: drive in Disk
   Management.
B Warranty and regulatory information
For important safety, environmental, and regulatory information, see Safety and Compliance
Information for Server, Storage, Power, Networking, and Rack Products, available at
www.hpe.com/support/Safety-Compliance-EnterpriseProducts.
Warranty information
HPE ProLiant and x86 Servers and Options
www.hpe.com/support/ProLiantServers-Warranties
HPE Enterprise Servers
www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products
www.hpe.com/support/Storage-Warranties
HPE Networking Products
www.hpe.com/support/Networking-Warranties
Regulatory information
Belarus Kazakhstan Russia marking
Manufacturer and Local Representative Information
Manufacturer information:
• Hewlett Packard Enterprise Company, 3000 Hanover Street, Palo Alto, CA 94304 U.S.
Local representative information (Russian):
• Russia:
• Belarus:
• Kazakhstan:
Local representative information (Kazakh):
• Russia:
• Belarus:
• Kazakhstan:
Manufacturing date:
The manufacturing date is defined by the serial number.
CCSYWWZZZZ (serial number format for this product)
Valid date formats include:
• YWW, where Y indicates the year counting from within each new decade, with 2000 as the
  starting point; for example, 238: 2 for 2002 and 38 for the week of September 9. In addition,
  2010 is indicated by 0, 2011 by 1, 2012 by 2, 2013 by 3, and so forth.
• YYWW, where YY indicates the year, using a base year of 2000; for example, 0238: 02 for
  2002 and 38 for the week of September 9.
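The YYWW form can be decoded mechanically. A quick sketch (the variable names and example code are illustrative):

```shell
# Decode a YYWW date code (base year 2000), e.g. "0238" -> 2002, week 38.
code="0238"
yy=$(printf '%s' "$code" | cut -c1-2)
ww=$(printf '%s' "$code" | cut -c3-4)
year=$((2000 + ${yy#0}))   # ${yy#0} drops one leading zero to avoid octal parsing
week=$((${ww#0}))
echo "manufactured in $year, week $week"
```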
Turkey RoHS material content declaration
Ukraine RoHS material content declaration