HP P6000 Replication Solutions Manager
User Guide
Abstract
This document describes how to use HP P6000 Replication Solutions Manager (the replication manager).
It is intended for operators and administrators of storage area networks (SANs) that include HP storage arrays. It is helpful to
have previous experience with SANs, LANs, and operating systems, including Windows.
IMPORTANT: General references to HP P6000 Replication Solutions Manager may also refer to earlier versions of HP
Replication Solutions Manager EVA.
HP Part Number: T3680-96089
Published: October 2012
Edition: 11
© Copyright 2004, 2012 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
Warranty
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Oracle® is a registered US trademark of Oracle Corporation, Redwood City, California.
Linux® is a U.S. registered trademark of Linus Torvalds.
UNIX® is a registered trademark of The Open Group.
Java™ is a US trademark of Sun Microsystems, Inc.
Contents
1 HP P6000 Replication Solutions Manager...................................................13
Prerequisites...........................................................................................................................13
Compatibility.........................................................................................................................13
Logging in to the GUI..............................................................................................................13
New support..........................................................................................................................14
New GUI features...................................................................................................................14
New job features....................................................................................................................14
New job templates.................................................................................................................14
New and updated job commands............................................................................................14
New and updated CLUI features...............................................................................................14
Overview..............................................................................................................................14
Capabilities......................................................................................................................14
Local replication................................................................................................................15
Remote replication.............................................................................................................15
Server software..................................................................................................................15
Host agent software...........................................................................................................16
CLUI software....................................................................................................................16
Jobs, templates, and commands...........................................................................................17
Replication kits and downloads............................................................................................18
GUI......................................................................................................................................18
GUI window overview........................................................................................................18
Configuration window........................................................................................................18
Content pane....................................................................................................................19
CLUI window (in the GUI)...................................................................................................19
Keyboard and right-click shortcuts........................................................................................20
Menu bar.........................................................................................................................21
Navigation pane...............................................................................................................22
Online help.......................................................................................................................22
Status bar.........................................................................................................................22
Toolbar............................................................................................................................22
Tooltips.............................................................................................................................23
View history......................................................................................................................23
Configuration.........................................................................................................................23
Accessing the configuration window.....................................................................................23
CLUI ports configuration......................................................................................................24
HP P6000 Replication Solutions Manager database configuration............................................24
RSM database cleanup.......................................................................................................24
Internet protocol configurations............................................................................................24
Jobs email server configuration............................................................................................25
Jobs run history configuration..............................................................................................25
Licenses configuration (applications).....................................................................................25
Logs configuration..............................................................................................................25
Security credentials configuration.........................................................................................26
Simulation mode................................................................................................................26
Single sign-on with HP P6000 Command View......................................................................27
Array management server configuration................................................................................27
Host volume......................................................................................................................27
User preferences configuration.............................................................................................27
RSM database.......................................................................................................................27
About the RSM database....................................................................................................27
About exports....................................................................................................................28
About imports...................................................................................................................28
Exporting an RSM database................................................................................................28
Importing an RSM database................................................................................................29
Importing a remote RSM database.......................................................................................30
Completing imports............................................................................................................30
Troubleshooting......................................................................................................................31
Troubleshooting—General...................................................................................................31
Browser window is blank...............................................................................................31
Database file name extension is missing...........................................................................31
Documents are not visible when selected..........................................................................31
Enabling failsafe on unavailable member in a managed set fails.........................................32
Illegal characters...........................................................................................................32
Invalid DR group pair—Source and source.......................................................................32
Invalid DR group settings—Failsafe on unavailable member with synchronous replication........33
Job instance fails with get error lock message...................................................................33
Job interacts with wrong array........................................................................................34
Job with host volume remount fails after an unclean unmount..............................................34
Logical volumes and volume groups in job commands........................................................34
Low-level refresh returns an error......................................................................................34
Maximum DR group log size error...................................................................................35
Monitor job window has no details.................................................................................35
Pop-up windows are not visible.......................................................................................35
Resource is not selectable...............................................................................................36
Scheduled job events do not run.....................................................................................36
Scheduled job event run times are wrong (AM/PM)...........................................................37
Second snapclone of the same storage volume or host volume fails......................................37
Slow login times...........................................................................................................38
Unable to resume a DR group.........................................................................................38
Troubleshooting–HP-UX.......................................................................................................38
Host agent discovery hangs and times out–HP-UX..............................................................38
Job with CreateHostVolume command fails–HP-UX.............................................................38
Job with MountEntireVolumeGroup command fails–HP-UX...................................................39
Host volume with PVLinks multipathing.............................................................................39
Troubleshooting–Linux.........................................................................................................40
Job with volume group remount fails–Linux........................................................................40
Troubleshooting–Tru64 UNIX...............................................................................................40
Job with AdvFS replication fails–Tru64 UNIX.....................................................................40
2 Replication resources................................................................................41
Working with resources...........................................................................................................41
Best practices for automatic refresh.......................................................................................41
Copying properties............................................................................................................41
Copying properties - tips.....................................................................................................41
Filtering displayed resources................................................................................................42
Global refresh monitor........................................................................................................43
Organizing displayed resources...........................................................................................43
Refreshing display panes....................................................................................................44
Refreshing resources (automatic)..........................................................................................44
Refreshing resources (global)...............................................................................................45
Refreshing individual resources............................................................................................45
Selection of multiple resources.............................................................................................45
Simulation mode................................................................................................................46
Resource concepts..................................................................................................................46
About replication resources.................................................................................................46
Resource states..................................................................................................................47
Resource names and UNC formats.......................................................................................48
Licenses............................................................................................................................49
License displays............................................................................................................49
License states................................................................................................................50
Replication licenses overview..........................................................................................50
Replication license policies.............................................................................................50
Dynamic Capacity Management licenses overview............................................................51
Thin provisioning license overview...................................................................................51
Security credentials...........................................................................51
Security credentials for the server....................................................................................51
Security credentials for enabled hosts...............................................................................52
Security credentials versus tasks......................................................................................52
Topology views......................................................................................................................54
About topology views.........................................................................................................54
Displaying the topology tab................................................................................................55
DR groups topology view....................................................................................................56
Host volumes topology view................................................................................................57
Virtual disks topology view..................................................................................................59
Filters for topology views.....................................................................................................61
Tips (topology views)..........................................................................................................61
3 DR groups...............................................................................................63
Working with DR groups.........................................................................................................63
About DR group resources...................................................................................................63
DR group actions summary..................................................................................................63
DR group actions cross reference..........................................................................................64
DR group properties summary.............................................................................................66
DR group views.................................................................................................................66
Adding DR groups to a managed set....................................................................................66
Adding virtual disks to a DR group pair................................................................................67
Creating a DR group pair...................................................................................................67
Deleting a DR group pair....................................................................................................68
Editing DR group properties................................................................................................68
Enabling failsafe on unavailable member for a DR group pair.................................................69
Disabling the failsafe on unavailable member........................................................................69
Failing over a DR group pair...............................................................................................69
Forcing a full copy.............................................................................................................70
Launching the device manager............................................................................................71
Listing individual resource events..........................................................................................71
Low-level refreshing DR groups.............................................................................................71
Removing DR groups from a managed set.............................................................................72
Removing virtual disks from a DR group pair.........................................................................72
Resuming a DR group pair..................................................................................................72
Reverting a DR group pair to home......................................................................................73
Suspending a DR group pair...............................................................................................73
Using DR groups................................................................................................................74
Viewing DR groups............................................................................................................76
Viewing DR group properties...............................................................................................76
DR group concepts.................................................................................................................77
DR group pairs (source and destination)................................................................................77
Auto suspend on link-down..................................................................................................79
Auto suspend on full copy...................................................................................................79
Cascaded replication.........................................................................................................81
Copy state........................................................................................................................81
Destination access mode.....................................................................................................81
DR group states and icons...................................................................................................81
Failover............................................................................................................................82
Failsafe on link-down/power-up...........................................................................................82
Failsafe on unavailable member..........................................................................................83
Failsafe states....................................................................................................................83
Full copy mode..................................................................................................................84
Home...............................................................................................................................84
I/O throttling.....................................................................................................................85
Job command processing....................................................................................................85
Logs.................................................................................................................................85
Log overview and states.................................................................................................85
Log and disk group planning..........................................................................................86
Log size.......................................................................................................................86
Logging.......................................................................................................................87
Log merging.................................................................................................................88
Log disk and states.............................................................................................................88
Low-level refresh.................................................................................................................89
Managed sets of DR groups................................................................................................89
Normalization...................................................................................................................89
Operational state - blocked ................................................................................................90
Remote replication guidelines..............................................................................................91
Suspend on failover...........................................................................................................92
Suspension state................................................................................................................92
Write mode (async/sync replication)....................................................................................92
Write mode transitions........................................................................................................93
4 Enabled hosts..........................................................................................95
Working with enabled hosts.....................................................................................................95
About enabled host resources..............................................................................................95
Enabled hosts actions summary...........................................................................95
Enabled hosts actions cross reference...................................................................................96
Enabled hosts properties summary.......................................................................................97
Enabled hosts views...........................................................................................................97
Adding enabled hosts........................................................................................................97
Adding VM servers............................................................................................................98
Adding enabled hosts to a managed set...............................................................................98
Changing enabled host OS type..........................................................................................99
Deleting enabled hosts.......................................................................................................99
Deleting VM servers.........................................................................................................100
Executing a host script, command or batch file.....................................................................100
Low-level refreshing enabled hosts......................................................................................101
Low-level refreshing VM servers..........................................................................................101
Removing enabled hosts from a managed set......................................................................101
Setting security credentials for enabled hosts.......................................................................102
Setting security credentials for VM servers...........................................................................102
Viewing enabled hosts......................................................................................................103
Viewing enabled host properties........................................................................................103
Viewing VM server properties............................................................................................103
VM server actions summary...............................................................................................104
VM server actions cross reference.......................................................................................104
VM server properties summary..........................................................................................104
Enabled host concepts...........................................................................................................105
Enabled and standard hosts..............................................................................................105
Host names and ports.......................................................................................................105
Low-level refresh of enabled hosts.......................................................................................105
Low-level refresh of VM servers...........................................................................................105
Security credentials for enabled hosts.................................................................................106
Host names and ports.......................................................................................................106
VM servers......................................................................................................................106
5 Host volumes.........................................................................................107
Working with host volumes....................................................................................................107
About host volume resources.............................................................................................107
Host volume actions summary............................................................................................107
Host volume actions cross reference....................................................................................109
Host volume properties summary........................................................................................112
Host volume views............................................................................................................112
Adding host volumes to a managed set..............................................................................113
Cancelling (removing) replicas from round robin rotation.......................................................114
Creating a DR group pair (from host volume).......................................................................114
Creating a managed set for a host disk device container......................................................114
Creating a managed set of containers for host volumes.........................................................115
Creating a managed set of containers for host volume groups................................................115
Creating host volumes......................................................................................................116
Creating local replicas.....................................................................................................116
Creating round-robin replicas............................................................................................117
Deleting replicas..............................................................................................................117
Deleting host volumes, host volume groups, and host disk devices..........................................118
Editing replica properties..................................................................................................118
Flushing the file system cache of host volumes and host volume groups....................................119
Viewing a host volume capacity utilization report.................................................................119
Enabling host volume capacity utilization analysis................................................................119
Disabling host volume capacity utilization analysis...............................................................120
Mounting host volumes (assigning a drive letter)...................................................................120
Removing host volumes from a managed set........................................................................120
Restoring host volumes (Instant Restore)...............................................................................121
Unmounting host volumes (removing a drive letter)................................................................121
Using snapclones.............................................................................................................122
Using snapshots...............................................................................................................122
Using logical volumes and volume groups...........................................................................123
Using raw disks...............................................................................................................123
Viewing host volume resources...........................................................................................124
Viewing host volume resource properties.............................................................................124
Extending host volume capacity.........................................................................................125
Shrinking host volume capacity..........................................................................................125
Setting a dynamic capacity policy......................................................................................126
Editing a dynamic capacity policy......................................................................................126
Removing a dynamic capacity policy..................................................................................127
Disabling a dynamic capacity policy for multiple host volumes...............................................127
Enabling a dynamic capacity policy for multiple host volumes................................................128
Host volume concepts............................................................................................................128
Host volumes overview......................................................................................................128
Host volumes FAQ............................................................................................................129
Disk Devices....................................................................................................................129
File system types..............................................................................................................130
Instant Restore.................................................................................................................130
Logical volumes and volume groups...................................................................................130
LUN...............................................................................................................................130
Mounting all logical volumes in a replicated volume group....................................................131
Mount points (drive letters) and device names......................................................................131
Partitions and slices..........................................................................................................132
Raw disks.......................................................................................................................134
Local replication wizard....................................................................................................135
Replica repository............................................................................................................135
Round robin replicas (wizard)............................................................................................135
Snapclones (host volume)..................................................................................................136
Snapshots (host volume)....................................................................................................136
Snapshot FAQ.................................................................................................................136
Snapshot types (allocation policy)......................................................................................137
Types (components)..........................................................................................................137
Dynamic capacity management.........................................................................................138
Dynamic capacity management overview.......................................................................138
DC-Management operation..........................................................................................138
Methods for resizing a host volume file system and virtual disk..........................................139
DC-Management support.............................................................................................140
Selecting the proper dynamic capacity policy thresholds..................................................141
Using DC-Management with replication.........................................................................142
DC-Management FAQ.................................................................................................143
DC-Management best practices.....................................................................................144
DC-Management examples..........................................................................................144
6 Jobs......................................................................................................145
Working with jobs................................................................................................................145
About jobs......................................................................................................................145
Job actions summary........................................................................................................145
Job Planning - Tru64 UNIX................................................................................................146
Suspending I/O before replicating AdvFS volumes - Tru64 UNIX.......................................146
Job actions cross reference................................................................................................147
Job properties summary....................................................................................................148
Job views........................................................................................................................148
Aborting job instances......................................................................................................149
Continuing job instances...................................................................................................150
Copying jobs..................................................................................................................150
Creating jobs..................................................................................................................150
Deleting jobs...................................................................................................................151
Deleting job instances......................................................................................................151
Developing jobs..............................................................................................................152
Editing jobs.....................................................................................................................152
Exporting jobs.................................................................................................................152
Job editing tips and shortcuts.............................................................................................153
Editing individual commands (tasks)...................................................................................155
Generating job templates..................................................................................................155
Importing jobs.................................................................................................................156
Importing legacy jobs.......................................................................................................156
Importing overview......................................................................................................156
Preparing to import.....................................................................................................156
Legacy BC 2.x job command equivalence......................................................................157
Logical volumes and volume groups in job commands..........................................................160
Monitoring and managing job instances.............................................................................160
Pausing job instances.......................................................................................................161
Resource is not selectable..................................................................................................161
Scheduled job events do not run........................................................................................162
Running jobs...................................................................................................................162
Selecting values for arguments...........................................................................................163
Scheduling job events.......................................................................................................163
Creating scheduled job events......................................................................................163
Editing scheduled job events.........................................................................................164
Enabling and disabling scheduled job events..................................................................164
Choosing a run interval................................................................................................165
Removing scheduled job events.....................................................................................165
Viewing scheduled job events.......................................................................................166
Validating jobs................................................................................................................166
Viewing job status............................................................................................................167
Viewing jobs and job instances.........................................................................................167
Viewing job properties.....................................................................................................168
Job concepts........................................................................................................................168
Job language overview.....................................................................................................168
Jobs, templates, and commands.........................................................................................169
Job instances...................................................................................................................169
Aborted job instances......................................................................................................170
Arguments......................................................................................................................170
Argument lists..................................................................................................................170
Assignments (variables).....................................................................................................170
Branches........................................................................................................................171
Commands.....................................................................................................................171
Command result values.....................................................................................................172
Comments.......................................................................................................................177
E-mail from jobs...............................................................................................................177
Exits...............................................................................................................................177
Implicit jobs.....................................................................................................................177
Implicit job startup...........................................................................................................177
Imported jobs..................................................................................................................177
Job commands list............................................................................................................178
Job templates list..............................................................................................................184
Labels............................................................................................................................185
Pause and continue..........................................................................................................185
Resource names and UNC formats.....................................................................................186
Status and states..............................................................................................................188
Transactions....................................................................................................................188
Validation.......................................................................................................................189
Wait/nowait argument.....................................................................................................190
Job templates.......................................................................................................................191
Empty template................................................................................................................191
Fracture host volumes, mount to a host (template).................................................................191
Instant restore storage volumes to other storage volumes (template).........................................193
Mount existing storage volumes (template)...........................................................................195
Perform cascaded replication (template)..............................................................................196
Perform planned failover (template) ...................................................................................199
Perform unplanned failover (template).................................................................................201
Replicate (via snapclone) a host volume multiple times, mount to a host (template)....................202
Replicate host disk devices, mount to a host (template)..........................................................204
Replicate host volume group, mount components to a host (template)......................................205
Replicate host volume group, mount entire group to a host (template)......................................207
Replicate host volumes (template).......................................................................................208
Replicate host volumes, mount to a host (template)................................................................209
Replicate host volumes, mount to a host, then to a different host (template)...............................211
Replicate host volumes via preallocated replication, mount to a host (template).........................213
Replicate host volume, mount components to a host (template)...............................................215
Replicate raw storage volumes mount (raw) to a host (template)..............................................217
Replicate storage volumes (template)..................................................................................218
Replicate storage volumes via preallocated replication (template)...........................................219
Setup Continuous Access (remote replication template) .........................................................221
Throttle replication I/O (remote replication template)............................................................222
Unmount and delete existing host volumes (template) ...........................................................223
Unmount existing host volumes (template)............................................................................224
7 Managed sets........................................................................................226
Working with managed sets..................................................................................................226
About managed sets........................................................................................................226
Managed set actions summary..........................................................................................226
Managed set actions cross reference..................................................................................226
Managed set properties summary......................................................................................227
Managed set views..........................................................................................................227
Adding resources to a managed set...................................................................................228
Creating managed sets.....................................................................................................228
Deleting managed sets.....................................................................................................228
Renaming managed sets...................................................................................................229
Removing resources from managed sets..............................................................................229
Viewing managed sets.....................................................................................................229
Viewing managed set properties........................................................................................230
Managed sets concepts.........................................................................................................230
Managed sets overview....................................................................................................230
Managed sets of DR groups..............................................................................................230
Managed sets of virtual disks (or containers).......................................................................231
8 Storage systems......................................................................................232
Working with storage systems................................................................................................232
About storage system resources..........................................................................................232
Storage system actions summary........................................................................................232
Storage system actions cross reference................................................................................232
Storage system properties summary....................................................................................233
Storage system views........................................................................................................233
Adding storage systems to a managed set...........................................................................234
Checking and printing storage system licenses.....................................................................234
Launching the device manager..........................................................................................234
Listing individual resource events........................................................................................235
Removing storage systems from a managed set....................................................................235
Setting Remote Replication Port Preferences.........................................................................235
Setting DR Protocol Type...................................................................................................235
Viewing storage systems...................................................................................................236
Viewing storage system properties......................................................................................236
Storage system concepts........................................................................................................236
Replication licenses overview.............................................................................................236
Replication license policies................................................................................................237
Controller software features...............................................................................................237
Controller software features - local replication......................................................................237
Controller software features - remote replication...................................................................238
Disk groups.....................................................................................................................238
Remote data replication protocols......................................................................................238
Remote replication tunnels.................................................................................................238
Storage system types........................................................................................................239
9 Virtual disks...........................................................................................240
Working with virtual disks......................................................................................................240
About virtual disk resources...............................................................................................240
Virtual disk actions summary.............................................................................................240
Virtual disk actions cross reference.....................................................................................241
Virtual disk properties summary.........................................................................................243
Virtual disk views.............................................................................................................243
Adding virtual disks to a managed set................................................................................243
Creating containers for virtual disks....................................................................................244
Creating a DR group pair.................................................................................................244
Creating mirrorclones.......................................................................................................245
Creating snapclones (preallocated)....................................................................................245
Creating snapclones (standard).........................................................................................246
Creating snapshots (preallocated)......................................................................................246
Creating snapshots (standard)...........................................................................................246
Migrating a virtual disk.....................................................................................................247
Creating virtual disks........................................................................................................247
Deleting virtual disks........................................................................................................248
Detaching mirrorclones.....................................................................................................248
Editing virtual disk properties.............................................................................................249
Fracturing mirrorclones.....................................................................................................249
Launching the device manager..........................................................................................250
Listing individual resource events........................................................................................250
Restoring virtual disks (Instant Restore).................................................................................250
Low-level refreshing virtual disks.........................................................................................251
Presenting virtual disks......................................................................................................251
Removing virtual disks from a managed set.........................................................................252
Removing virtual disks from a DR group pair.......................................................................252
Restoring virtual disks (Instant Restore).................................................................................253
Resynchronizing mirrorclones.............................................................................................253
Migrating a mirrorclone....................................................................................................254
Unpresenting virtual disks.................................................................................................254
Viewing virtual disks.........................................................................................................255
Viewing virtual disk properties...........................................................................................255
Virtual disk concepts.............................................................................................................255
Virtual disks overview.......................................................................................................255
Controller software features...............................................................................................256
Controller software features - local replication......................................................................256
Controller software features - remote replication...................................................................256
Cache policies.................................................................................................................256
Containers......................................................................................................................257
Container guidelines........................................................................................................258
Cross Vraid replication.....................................................................................................258
Cross Vraid FAQ..............................................................................................................258
Disk groups.....................................................................................................................258
Cross Vraid guidelines......................................................................................................260
Instant restore overview (virtual disks)..................................................................................260
Instant restore from mirrorclones.........................................................................................261
Instant restore from snapclones..........................................................................................261
Instant restore from snapshots............................................................................................262
Instant restore..................................................................................................................262
Low-level refresh of virtual disks..........................................................................................262
LUN...............................................................................................................................262
Mirrorclones - fractured.....................................................................................................263
Mirrorclones - synchronized...............................................................................................264
Mirrorclone FAQ..............................................................................................................265
Mirrorclone guidelines......................................................................................................265
Mirrorclone states............................................................................................................267
Normalization.................................................................................................................267
Preferred controller...........................................................................................................267
Presentation (to host)........................................................................................................268
Remote replication guidelines............................................................................................268
Redundancy (Vraid) levels.................................................................................................269
Snapclones.....................................................................................................................269
Snapclone FAQ...............................................................................................................270
Snapclone guidelines.......................................................................................................271
Snapshots.......................................................................................................................271
Snapshot FAQ.................................................................................................................272
Snapshot guidelines.........................................................................................................272
Snapshots per virtual disk.................................................................................................273
Snapshot types (allocation policy)......................................................................................273
Thin provisioning.............................................................................................................274
Tru64 UNIX host volumes..................................................................................................275
Types.............................................................................................................................275
Virtual disk guidelines......................................................................................................275
10 Events.................................................................................................276
Working with events.............................................................................................................276
About events...................................................................................................................276
Event actions summary.....................................................................................................276
Event actions cross reference.............................................................................................276
Refreshing display panes..................................................................................................277
Organizing displayed events.............................................................................................277
Viewing events................................................................................................................278
Viewing the trace log.......................................................................................................278
Filtering displayed events..................................................................................................279
Event concepts.....................................................................................................................279
Events overview...............................................................................................................279
Events log.......................................................................................................................280
Event log views................................................................................................................281
Event severity...................................................................................................................281
Trace log........................................................................................................................281
11 CLUI....................................................................................................282
Accessing the CLUI via GUI...................................................................................................282
CLUI documentation..............................................................................................................282
Copying CLUI command responses.........................................................................................282
Legacy HP EVMCL commands cross reference..........................................................................283
Reusing CLUI commands........................................................................................................284
Using CLUI help...................................................................................................................284
12 Support and other resources...................................................................286
Release history.....................................................................................................................286
Contacting HP......................................................................................................................288
Related information...............................................................................................................288
Glossary..................................................................................................290
Index.......................................................................................................294
1 HP P6000 Replication Solutions Manager
Prerequisites
Use of this product requires:
• HP storage array and controller software
• Array management software
• Local and/or remote replication licenses
• Replication manager server and host agent software
• Application integration license (optional)
For supported storage arrays, management server hardware and software, and replication
environments, including restrictions, see HP P6000 Enterprise Virtual Array Compatibility Reference
on the HP P6000 Continuous Access or HP P6000 Business Copy website.
Compatibility
See the HP P6000 Enterprise Virtual Array Compatibility Reference for support and version details
for HP Replication Solutions Manager and related products.
Logging in to the GUI
To log in to the replication manager GUI:
1. Do one of the following:
   • Browse to <server name>:4096. See Notes.
   • On a management server desktop, select Start > Programs > HP Replication Solutions Manager or double-click the replication manager icon.
   • Open the replication manager through another HP application. See Notes.
   An account login window opens.
2. Enter a user account name and password and then click OK. See Notes.
   The replication manager GUI appears.
Notes
• Replication manager service. The replication manager server software runs continuously as a service on a management server. This enables the server to run jobs, process CLUI commands, interact with enabled hosts (host agents), and access storage resources whether or not the GUI is active.
• Browsing. You can enter a server name, fully qualified server name, or IP address. Specific browsers and JREs are required. See the HP P6000 Enterprise Virtual Array Compatibility Reference for support details.
• Desktop. The replication manager appears in a management server's Start menu and as a desktop icon.
• Other applications. For other access methods, see the HP P6000 Replication Solutions Manager Administrator Guide.
• User accounts. A default account may exist (user name: admin, password: nimda). For more about security, see the HP P6000 Replication Solutions Manager Administrator Guide.
NOTE: The HP Storage Admins user account must also be a member of the host Administrators group. If the user account is not a member of that group, you cannot log in to the replication manager.
New support
Version 5.6 adds support for HP P6000 Command View 10.2. See the HP P6000 Enterprise Virtual Array Compatibility Reference for support and version details.
New GUI features
There are no new GUI features in version 5.6.
New job features
There are no new job features in version 5.6.
New job templates
There are no new job templates in version 5.6.
New and updated job commands
There are no new or updated job commands in version 5.6.
New and updated CLUI features
There are no new or updated CLUI features in version 5.6.
Overview
Capabilities
HP P6000 Replication Solutions Manager is a centralized tool that simplifies and automates local
and remote replication features of HP arrays. The replication manager allows you to perform tasks
by using its graphical user interface (GUI), jobs, and a command line user interface (CLUI).
General replication management
• Automatically discover resources such as arrays, virtual disks, and hosts. See Automatic refresh of resources.
• View resource properties in graphical trees and tabular lists. For example, see Virtual disk views.
• Create and manage copies of data (replicas) in real time.
• Create, run, monitor, and manage jobs that automate replication tasks. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
• Present storage volumes to hosts.
• Dynamically mount storage volumes containing file systems on enabled hosts*.
• Dynamically extend or shrink a host volume on enabled hosts*.
• Interact with host applications on enabled hosts*.
• Create and manage collections of resources (managed sets). See Managed sets.
• View replication event logs.
• Run the replication manager in a resource simulation mode. See Simulation mode.
* Requires an HP P6000 Replication Solutions Manager host agent.
Local replication
• Create local copies using snapclone and snapshot technology. See Snapclones and Snapshots.
• Create local copies by specifying source arrays and virtual disk names.
• Create local copies by specifying source enabled hosts and volume names*.
• Create local copies by specifying source enabled hosts and logical volume names*.
* Requires an HP P6000 Replication Solutions Manager host agent.
Remote replication
• Create, configure, and manage DR groups (remote copies). See DR group pairs.
• Reverse (failover) remote replication direction. See DR group failover.
• Manage remote replication. See Using DR groups.
Local replication
Local replication is a licensed feature of HP arrays that enables you to quickly create local copies
of your data using the array’s replication engine. These copies are known as Mirrorclones,
Snapclones, and Snapshots.
• HP P6000 Business Copy is the HP brand name for the set of features, replication licenses, and interfaces for local replication on arrays.
• Local replication features are part of (installed with) the controller software on each storage array and can vary with the controller software version. See Controller software features - local replication.
• To use the local replication features on a given array, the replication manager verifies that a valid HP P6000 Business Copy replication LTU exists for the array. For information about acquiring and installing local replication licenses, see the HP P6000 Replication Solutions Manager Administrator Guide.
Remote replication
Remote replication is a licensed feature of HP arrays that enables you to create remote,
disaster-tolerant copies of your data using the array's replication engine. Remote copies are created
and managed through the use of DR group pairs.
• HP P6000 Continuous Access is the HP brand name for the set of features, replication licenses, and interfaces for remote replication on arrays.
• Remote replication features are part of (installed with) the controller software on each storage array and can vary with the controller software version. See Controller software features - remote replication.
• To use the remote replication features on a given pair of arrays, the replication manager verifies that a valid HP P6000 Continuous Access replication license-to-use (LTU) exists for each array. For information on acquiring and installing remote replication licenses, see the HP P6000 Continuous Access Implementation Guide.
Server software
The replication manager server software includes a graphical user interface (GUI) and job engine,
a database, and a command line user interface (CLUI).
GUI and job engine
The GUI allows you to:
• View all available resources in tabular lists, graphical tree views, and topology views. See About replication resources.
• Perform actions on resources.
• Create jobs using the integrated job editor and job templates.
• Validate job task logic and syntax before running a job.
• Monitor job progress and view detailed job activity logs.
• Configure the replication manager.
• View replication manager events and logs.
For a visual overview, see GUI window.
Replication manager database
The server software includes an internal database of available resources, jobs, job instances, and
replication manager events. See About the RSM database.
The database can be exported and imported allowing you to:
• Back up and restore the database.
• Copy the database to other instances of the replication manager. This is especially useful in remote replication environments with multiple management servers.
CLUI server
The server software also includes a CLUI server application which supports several types of command
line style CLUI clients. See CLUI overview.
Host agent software
A replication manager host agent is OS-specific software that enables interactions between a host
and the replication manager server. A host that has a replication manager host agent installed is
called an enabled host. See Enabled and standard hosts.
A host agent allows you to:
• Perform replication by specifying the host and host volume name.
• Mount virtual disks on the host without rebooting the host (dynamic mount).
• Dynamically extend or shrink a host volume.
• Suspend and resume application I/O on the host.
• Launch (run) applications on the host.
• Interact with database, backup, and other applications.
• View host properties, such as operating systems, file systems, logical volume managers, cluster software, multipathing software, and Fibre Channel HBAs.
CLUI remote client
Host agents also include a CLUI remote client that you can use to run replication manager CLUI
commands from the host's command line.
CLUI software
The replication manager includes a CLUI server and client in the server software and a CLUI remote
client in the host agent software.
The CLUI software allows you to:
• Issue CLUI commands and run scripts from CLUI clients.
• Run jobs and manage job instances from CLUI clients.
• Use job return codes for conditional interactions between jobs and scripts.
See Accessing the CLUI via GUI and CLUI documentation.
Jobs, templates, and commands
You can create, save, run, schedule, and manage jobs that automate replication tasks. For more
information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Job editor
Use the replication manager's specialized job editor to create and edit jobs.
Job templates
Job templates allow you to quickly create typical jobs, for example, making local or remote copies
of virtual disks. For more information on jobs, see the HP P6000 Replication Solutions Manager
Job Command Reference.
Job commands
You can also create custom jobs from the set of specialized job commands. For more information
on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Replication kits and downloads
The HP P6000 Replication Solutions Manager DVD includes the following components:
• HP P6000 Replication Solutions Manager Software
• HP P6000 Replication Solutions Manager Software Host Agent
• HP P6000 Replication Solutions Manager Software documentation
GUI
GUI window overview
The GUI window provides a menu bar, toolbar, navigation pane, content pane, event pane, and
status bar. Click the links in the callout list for more information.
1. Menu bar
2. Toolbar
3. Navigation pane
4. Event pane
5. Status bar
6. Content pane (jobs example)
Configuration window
The configuration window provides access to the replication manager configuration settings. See
also Accessing the configuration window.
1. Navigation pane
2. Status bar
3. Settings pane (jobs example)
Content pane
Information about available replication resources is displayed in the content pane. See Replication
resources.
The following features are available in the pane.
• Actions > Print: Prints the content pane.
• Actions > Refresh: Refreshes the content pane using data from the database.
• Actions > Help: Displays context-sensitive help.
• Filter: Select and filter information displayed in the list tab. See filtering.
• View: Select the information displayed in tree tabs.
CLUI window (in the GUI)
The CLUI window (in the GUI) includes a command line, command history and response pane.
See also Accessing the CLUI via GUI.
1. Command line
2. Response pane
3. Command history
Keyboard and right-click shortcuts
Right-click actions
Right-click a resource to open its Actions menu.
General shortcuts
Copy selection: Ctrl+C
Cut selection: Ctrl+X
Extend selection left or right: Shift+left arrow or Shift+right arrow
Extend selection left or right: Ctrl+Shift+left arrow or Ctrl+Shift+right arrow
Extend selection to start or end: Shift+Home or Shift+End
Move to start or end of text: Home or End
Paste from clipboard: Ctrl+V
Select all: Ctrl+A
Print: Alt+P
Refresh: Alt+R
Help: Alt+H
Button shortcuts
Navigate forward: Tab
Navigate backward: Shift+Tab
List view shortcuts
Extend selection up: Shift+up arrow
Extend selection down: Shift+down arrow
Move to next cell: Tab or right arrow
Move to previous cell: Shift+Tab or left arrow
Move to first cell in row: Home
Move to last cell in row: End
Move to first row in table: Ctrl+Home
Move to last row in table: Ctrl+End
Select all cells: Ctrl+A
Edit cell without overriding content: F2
Menu shortcuts
Move between items in the menu: Arrow keys
Select first item: F10
Select next item: Right arrow
Select previous item: Left arrow
Select default or selected item: Enter
Tree view shortcuts
Navigate out forward: Tab
Navigate out backward: Shift+Tab
Move up/down one entry: Up arrow or Down arrow
Move to first entry: Home or PgUp
Move to last visible entry: End or PgDn
Menu bar
The menu bar provides access to features, tools, the online help, and other documentation.
• File. Export and import an RSM database or import RSM jobs. See Exporting an RSM database, Importing an RSM database, Importing a remote RSM database, and Importing jobs. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
• View. Display a session's view history and select a previously displayed view.
• Tools. Configure the replication manager, issue CLUI commands, and run the replication manager in simulation mode. See Accessing the configuration window, Accessing the CLUI via GUI, and Simulation mode.
• Help. Display the replication manager version, access the online help, and list other available documentation. See Online help.
Navigation pane
The navigation pane alphabetically lists the data replication resources you can manage, organizing them by group in a hierarchical tree. Icons identify the state of the resources. (States other than normal are displayed in the navigation pane. In the tree, each resource group displays the most critical state of all its resources.)
• To view a resource in the content pane, click its group, or right-click its group and select View.
• To display online help for the resource, right-click its group and select Help.
Behaviors
• To expand or collapse an item in the tree, click the expand icon.
• To resize the navigation pane, place your cursor over the sizing bar until it changes to a double-headed arrow, and then drag the pane larger or smaller as needed.
Online help
The replication manager online help is a cross-platform, browser-based application that includes
the following types of help:
• Main help (user guide). The main help is accessed from the Help button on the toolbar. Information is organized and easily accessed with the use of Contents, Index, and Search tabs.
• Context-sensitive help. Context-sensitive help is specific to the window that you are viewing and is accessed by clicking a question mark icon or help button in the window.
The online help includes the following features:
• Contents tab. Displays the help contents. The contents are organized into books and topic pages.
• Index tab. Displays an index of topics.
• Search tab. Displays a list of topics containing the words for which you searched.
Status bar
The status bar, located at the bottom of the window, displays an icon indicating the status of the server connection, as well as a graphical indication of server communication activity.
Toolbar
• View history - back: Changes the content pane to show one view back in the view history.
• View history - forward: Changes the content pane to show one view forward in the view history.
• View home: Changes the content pane to the home view (Replication).
• Global refresh: Performs a global resource discovery and refresh of the database. See Refreshing resources (global).
Tooltips
Placing the cursor over a GUI item momentarily displays a tooltip.
View history
View History shows a list of recently displayed resource views. See About replication resources.
• The view that is currently displayed in the content pane is indicated with a check mark. Selecting another view from the list changes the content pane to that view.
• When a new session begins, the history list includes the top-level Replication view and the most recent view from the previous session.
• History list navigation:
  Back: Displays one view before the current view (if any).
  Forward: Displays one view after the current view (if any).
  Up: Displays the top-level resource view (Replication).
• For legibility, only the most recent views are shown in the list. However, older views are tracked and can be displayed from a Back or Forward selection.
Configuration
Accessing the configuration window
Open the configuration window. See Configuration window.
Considerations
IMPORTANT: Changing the configuration, especially storage access, should be carefully planned
to avoid impacting current jobs and scheduled jobs.
Procedure
1. On the toolbar, select Tools > Configure.
   The Configuration window opens.
2. In the Configuration navigation pane, select the item to configure.
CLUI ports configuration
You can configure the ports that are used by CLUI clients to access the CLUI server.
• Default:
  Unsecure port: enabled, port 9000, 10 simultaneous sessions
  Secure SSL port: enabled, port 9001, 10 simultaneous sessions
  (A connectivity check for these default ports is sketched below.)
• HP recommends that you not change a default port number unless it conflicts with other applications on the management server.
• To allow unlimited simultaneous sessions, enter 0 in the Maximum Sessions box.
For the procedure, see Accessing the configuration window.
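If CLUI clients cannot connect, a quick probe of the two listener ports can confirm that the management server is reachable before you look at session limits or port conflicts. The following is only an illustrative sketch (Python); the ports are the defaults listed above and "mgmt-server.example.com" is a placeholder name for your management server.

    # Probe the default CLUI listener ports (9000 unsecure, 9001 secure SSL).
    # Sketch only; substitute your management server name or IP address and
    # adjust the ports if the defaults were changed.
    import socket

    CLUI_PORTS = {"unsecure": 9000, "secure SSL": 9001}   # defaults listed above

    def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host = "mgmt-server.example.com"   # placeholder management server name
        for label, port in CLUI_PORTS.items():
            state = "reachable" if port_open(host, port) else "not reachable"
            print(f"CLUI {label} port {port}: {state}")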
HP P6000 Replication Solutions Manager database configuration
You can configure the port that the replication manager server uses to access its internal database.
Database port
• Default: port 1315.
• HP recommends that you not change the default port number unless it conflicts with other applications on the management server.
RSM database cleanup
You can remove records of unavailable storage resources from the database by using the cleanup
feature.
• Over time, the database can accumulate records of resources on storage systems that the replication manager can no longer communicate with.
• HP recommends exporting a copy of the database (as a backup) before performing a cleanup. See Exporting an RSM database.
For the procedure, see Accessing the configuration window.
Internet protocol configurations
HP P6000 Replication Solutions Manager is designed for use with internet protocol versions as
follows:
• HP Replication Solutions Manager versions 4.0.1 or later: IPv4, IPv6, and mixed IPv4/IPv6 networks
• HP Replication Solutions Manager versions 1.0 through 4.0: IPv4 networks only
IP address formats
IP addresses in IPv6 have a different format than in IPv4.
• IPv6 addresses are generally shown as eight groups of four hexadecimal digits, separated by colons, for example: 2001:0db8:85a3:08d3:1319:8a2e:0370:7334.
• IPv4 addresses are generally shown in dotted decimal notation, for example: 192.0.2.235.
• IPv6 addresses used with a port number are enclosed in brackets to separate the address from the port, for example: https://[2001:0db8:85a3:08d3:1319:8a2e:0370:7344]:443/.
• IPv4 addresses with port numbers do not use brackets. (A URL-formatting sketch follows this list.)
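The bracketing rule above is easy to get wrong when scripting against web interfaces on the management server. The following sketch (Python) shows one way to build a URL that brackets IPv6 literals and leaves IPv4 addresses and DNS names unbracketed; the addresses are the documentation examples and port 443 mirrors the example URL above.

    # Build an https URL, adding brackets around IPv6 literal addresses only.
    # Illustrative sketch; not part of the replication manager itself.
    import ipaddress

    def https_url(host: str, port: int = 443, path: str = "/") -> str:
        """Return an https URL, bracketing the host only when it is an IPv6 literal."""
        try:
            if ipaddress.ip_address(host).version == 6:
                host = f"[{host}]"
        except ValueError:
            pass  # not an IP literal (for example, a DNS name): leave it unchanged
        return f"https://{host}:{port}{path}"

    print(https_url("2001:0db8:85a3:08d3:1319:8a2e:0370:7334"))
    # https://[2001:0db8:85a3:08d3:1319:8a2e:0370:7334]:443/
    print(https_url("192.0.2.235"))
    # https://192.0.2.235:443/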
Jobs email server configuration
You can configure the replication manager to use an email server to send messages from jobs.
• Default: No email server is specified.
• To configure an email server, you must know its network name and a user name and password to access the email application.
• See also the SendEmail and SetNotificationPolicy job commands.
For the procedure, see Accessing the configuration window.
Jobs run history configuration
You can configure the number of run histories (job instances) per job that are displayed in the GUI
and retained in the database.
• Default: 7 run histories per job.
• The minimum is 1 run history per job. The maximum is limited only by practical considerations such as readability in the GUI and RSM database size and performance.
• When the limit is exceeded, a job's oldest run history is no longer displayed and its (job instance) record is deleted from the database. (A sketch of this retention rule follows.)
For the procedure, see Accessing the configuration window.
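The retention behavior described above can be pictured with a small sketch (Python). This is purely illustrative of the documented rule, not the replication manager's implementation.

    # Illustrative sketch: keep only the newest `limit` run histories per job;
    # anything older is dropped from the display and deleted from the database.
    from collections import deque

    def retain_run_histories(histories, limit=7):
        """histories is ordered oldest -> newest; keep only the newest `limit` entries."""
        return list(deque(histories, maxlen=limit))

    runs = [f"run-{n}" for n in range(1, 11)]   # 10 recorded run histories
    print(retain_run_histories(runs))           # ['run-4', ..., 'run-10']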
Licenses configuration (applications)
You can review and add HP application-integration licenses to the replication manager
configuration.
• Review details of installed HP application-integration licenses.
• Retrieve an HP application-integration license key file from a website.
• Add a retrieved license key to the RSM database.
For the general configuration procedure, see Accessing the configuration window.
Logs configuration
See Event log and Trace log.
Event log configuration
You can configure when the replication manager event log is rolled. When the log is rolled, the current event log is saved as a historical log and a new current log is started.
• Default: roll over on size. The current event log is rolled when its size exceeds 10 MB.
• Instead of size, you can configure the event log to roll on a regular interval. Interval choices include every hour, day, week, or month.
• You can delete historical event logs by using the configuration window.
• Properties. See Event log.
For the procedure, see Accessing the configuration window. See also Viewing events.
Configuration
25
Trace log configuration
You can enable and disable the replication manager trace log.
• Default: enabled. The log file has a maximum size of 60 MB. As the log gets full, the oldest events are discarded.
• You can disable the trace log by using the configuration window.
• Properties. See Trace log.
For the procedure, see Accessing the configuration window. See also Viewing the trace log.
Security credentials configuration
To log on and use the replication manager, special security groups must be configured on the
management server and each enabled host.
• You cannot establish or maintain security credentials by using the replication manager configuration window.
• For further information on the creation and administration of security groups, see the HP P6000 Replication Solutions Manager Installation Guide and the HP P6000 Replication Solutions Manager Administrator Guide.
Management server security groups
The following Windows OS-based security groups are required or supported on the management
server.
IMPORTANT: Group names are case sensitive. Use of all replication manager functions and features, except viewing data, requires membership in the administrators group.
• HP Storage Admins. To access the replication manager server, this security group is required on the management server and must have at least one member (user account). Members have full replication manager execute and read/write privileges.
• HP Storage Users. Members have only view/read replication manager privileges. This group is needed only when you want accounts that are limited to viewing replication manager data.
Enabled hosts security groups
One of the following OS-based security groups is required on each enabled host, as appropriate.
IMPORTANT: Group names are case sensitive.
• HP Host Agent Admins. To interact with the replication manager server, each Windows enabled host must have this security group and at least one member (user account).
• hphaadm. To interact with the replication manager server, each Linux, UNIX, and OpenVMS enabled host must have this security group and at least one member (user account).
(A group-membership check is sketched below.)
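On a Windows management server or Windows enabled host, you can quickly confirm that the required group exists and has at least one member. The following sketch (Python) shells out to the standard Windows net localgroup command; the group name comes from the list above, and the script is assumed to run on the server being checked.

    # Check that the "HP Storage Admins" local group exists and list its members.
    # Sketch only; run it on the Windows server itself.
    import subprocess

    GROUP = "HP Storage Admins"   # group name from the list above; case sensitive

    def show_group(group: str) -> None:
        """Print the group's details and members, or a message if it is missing."""
        result = subprocess.run(["net", "localgroup", group],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(result.stdout)   # output includes the member list
        else:
            print(f'Group "{group}" was not found or could not be queried.')

    if __name__ == "__main__":
        show_group(GROUP)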
Simulation mode
For more information, see the HP P6000 Replication Solutions Manager Simulation Guide, available
from the HP Storage website. See (page 289).
CAUTION: HP strongly recommends that you do not run simulation mode on a production machine
(that is, on the same machine as HP P6000 Command View). If you disconnect the replication
manager from HP P6000 Command View, you lose control of the storage resources that were
being managed by the replication manager. If you purge the replication manager database and
have not backed it up, you will lose all jobs.
26
HP P6000 Replication Solutions Manager
Overview
Simulation mode allows you to use all of the functions of the replication manager without having
to use any of your production data or resources. These functions include creating snapshots and
snapclones, creating DR groups and adding virtual disks to them, and performing failovers.
Features and benefits:
• No SAN infrastructure, storage arrays, hosts, or management servers are required.
• Requires only the replication manager server software running on a business-class Windows desktop or laptop computer.
• True simulation, not a set of predefined scenarios. Simulates GUI actions, jobs, and CLUI commands.
• Allows you to create your own:
  ◦ Simulated storage arrays. Select features such as array capacity, controller software versions, and number of virtual disks.
  ◦ Simulated hosts. Include hosts and select the storage arrays to which they are connected.
Single sign-on with HP P6000 Command View
Administrators can establish a single sign-on (SSO) trust relationship in HP P6000 Command View
that allows the replication manager to connect without sending the Storage Agent user name and
password. The on/off status of SSO is displayed in the replication manager GUI but cannot be
configured from there.
To configure SSO, access HP P6000 Command View and select Server Options→System Insight
Manager/Replication Solutions Manager trust relationships.
Array management server configuration
Administrators can add and delete array management servers. The management server SMI-S port
can also be set.
For the procedure, see Accessing the configuration window.
Host volume
You can configure the interval at which data is collected for host volume capacity analysis. The
value selected is used for all host volumes.
For the procedure, see Accessing the configuration window.
User preferences configuration
You can configure various display aspects of lists and trees so that changes are remembered.
• Default: User preferences are disabled.
• Configurable display aspects include column order, relative column width, sort column, and tree expansion state.
For the procedure, see Accessing the configuration window. See also Organizing displayed
resources.
RSM database
About the RSM database
The RSM database contains information on available resources, jobs, job instances, replication
manager events, and DC-Management settings. The database is updated through several methods,
including real-time updates, automatic refreshes, and manual global refreshes. See Refreshing
resources (automatic) and Refreshing resources (global).
The active RSM database cannot be accessed by administrators or users. Administrators, however,
can export and import copies of the database. See Exporting an RSM database, Importing an
RSM database, and Importing a remote RSM database.
About exports
The export feature creates a copy (XML file) of the active RSM database.
• Exported copies can only be created in an existing folder on the storage management server.
• You can assign a file name or let the replication manager automatically assign a name.
• Automatically assigned file names have the format CADATA_timestamp.export.xml, where timestamp is the year, month, day, hour, minute, and second the file was created. For example, the file CADATA_20050211135232.export.xml was created on February 11, 2005, at 13:52:32. (A sketch for working with this format follows.)
• Stored security credentials for resources and scheduled job events are not included in the exported copies.
• All scheduled job events are exported with a run status of disabled. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
• See also Exporting an RSM database.
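Because automatically assigned export file names encode their creation time, scripts that age out or report on exported copies can work from the name alone. The following sketch (Python) generates and parses names in the documented CADATA_timestamp.export.xml format; it is illustrative only.

    # Generate and parse automatically assigned export file names of the form
    # CADATA_<yyyymmddHHMMSS>.export.xml, as described above. Illustrative only.
    from datetime import datetime

    def export_file_name(when: datetime) -> str:
        return f"CADATA_{when.strftime('%Y%m%d%H%M%S')}.export.xml"

    def export_timestamp(name: str) -> datetime:
        stamp = name[len("CADATA_"):-len(".export.xml")]
        return datetime.strptime(stamp, "%Y%m%d%H%M%S")

    name = "CADATA_20050211135232.export.xml"      # example from the text above
    print(export_timestamp(name))                  # 2005-02-11 13:52:32
    print(export_file_name(datetime(2005, 2, 11, 13, 52, 32)) == name)  # True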
About imports
The import feature merges the content of a previously exported RSM database or a remote RSM
database with the content of the currently active database.
• Import processing is as follows (these rules are sketched after this list):
  ◦ When a storage or host resource exists in both the imported and the active RSM database, the active database record is overwritten with the imported record.
  ◦ When a storage or host resource exists only in the imported RSM database, a record is added to the active database.
  ◦ When a job or managed set exists in both the imported and the active RSM database, the active database record is not overwritten. Instead, a new record is added and the job or managed set name is modified with the characters _# to make it unique.
• If you use RSM jobs to import RSM databases, do not allow more than one import job to run at a time. If two or more import jobs run at the same time, the jobs may fail but will incorrectly report that they completed successfully.
• After an import has been run, additional procedures are required to make the imported resources available. See Completing imports.
• See also Importing an RSM database and Importing a remote RSM database.
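The merge rules above can be summarized in a small sketch: storage and host records from the import win on conflict, while conflicting jobs and managed sets are kept under a suffixed name. This is only a conceptual illustration (Python) of the documented behavior, not the product's implementation.

    # Conceptual sketch of the documented import-merge rules.
    import copy

    def merge_databases(active: dict, imported: dict) -> dict:
        merged = copy.deepcopy(active)
        # Storage and host resources: the imported record wins (overwrite or add).
        for name, record in imported.get("resources", {}).items():
            merged.setdefault("resources", {})[name] = record
        # Jobs and managed sets: never overwritten; a conflicting name gets "_#".
        for kind in ("jobs", "managed_sets"):
            existing = merged.setdefault(kind, {})
            for name, record in imported.get(kind, {}).items():
                new_name, n = name, 1
                while new_name in existing:
                    new_name, n = f"{name}_{n}", n + 1
                existing[new_name] = record
        return merged

    active = {"resources": {"vdisk1": "active"}, "jobs": {"nightly_copy": "v1"}}
    imported = {"resources": {"vdisk1": "imported"}, "jobs": {"nightly_copy": "v2"}}
    result = merge_databases(active, imported)
    print(result["resources"]["vdisk1"])   # imported (active record overwritten)
    print(sorted(result["jobs"]))          # ['nightly_copy', 'nightly_copy_1']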
Exporting an RSM database
This feature allows administrators to create a copy of the active RSM database. The copy is created
on the same management server as the active database. See RSM database.
28
HP P6000 Replication Solutions Manager
Considerations
• You can use the GUI, jobs, or the CLUI. See the Export job command and the CLUI Set Server command.
• After creating a copy, HP recommends that administrators move or back up the copy to a physically separate server or storage device.
• The security credentials for enabled hosts are not included in an exported database.
Procedure
This procedure uses the GUI.
1. Select File > Export RSM Database.
   The Export RSM Database window opens.
2. Enter an existing directory (full path) where you want the copy to be created. For example: C:\colorado.
3. Do one of the following:
   • If you want to name the file, enter a name, for example RSMdb_on_server2_Mar_15_2007. Do not include a file extension. The replication manager automatically adds the extension .xml.
   • If you want the replication manager to assign a file name, leave the file name box empty. See About exports for the file name format.
4. Accept or deselect the option to overwrite the file you entered in the previous step.
5. Click OK.
   An implicit job is started. The results and number of processed resources appear in the Monitor job window.
6. The replication manager creates an XML copy of the RSM database in the location that you specified. See About exports.
Importing an RSM database
This feature allows administrators to import a previously exported copy of the RSM database. See
RSM database. See also Importing a remote RSM database.
Considerations
• You can use the GUI, jobs, or the CLUI. See the Import job command and the CLUI Set Server command.
• The database copy to be imported must be on the same management server as the active database. Importing an RSM database by specifying a network share drive is not supported.
• You must know the exact database copy file name and location (drive letter, full path name). You cannot search for exported files from the replication manager.
• Security. IMPORTANT: The security credentials for enabled hosts are not included in an exported database.
Procedure
This procedure uses the GUI.
1. Select File > Import RSM Database.
   The Import RSM Database window opens.
2. Enter the drive letter, full path, and file name of the RSM database copy to import, for example C:\colorado\RSMdb_on_server2_Mar_15_2007.xml or C:\colorado\CADATA_20050211135232.export.xml.
3. Click OK.
   An implicit job is started. The results and number of processed resources appear in the Monitor job window.
4. After importing, perform the required follow-up procedures to ensure that all required resources are available.
Importing a remote RSM database
This feature allows administrators to import an RSM database from another management server.
See RSM database.
Considerations
• You can use the GUI or the CLUI. See the CLUI Set Server command.
• After importing, HP recommends that administrators examine the active database for duplicate jobs and managed sets and make corrections, as necessary.
• Security. IMPORTANT: The security credentials for enabled hosts are not included in an exported database.
Procedure
This procedure uses the GUI.
1. Select File > Import RSM Database from Remote RSM.
   The Import RSM Database window opens.
2. Enter the network name or IP address of the remote management server.
3. Enter the user name and password to access the remote instance of the replication manager.
4. Click OK.
   An implicit job is started. The results and number of processed resources appear in the Monitor job window.
5. After importing, perform the required follow-up procedures to ensure that all required resources are available. See Completing imports.
Completing imports
After an import is run, perform the following procedures, as appropriate:
IMPORTANT: These procedures help ensure that resources, jobs, and managed sets are available
after importing an RSM database.
• Re-enter the password to access the local instance of HP P6000 Command View. See Storage management server configuration.
• Re-enter the security credentials that are required to access enabled hosts. See Setting security credentials for enabled hosts.
• Re-enter the security credentials for imported scheduled job events, if any. See Editing scheduled job events and Security credentials for the server.
• Re-enable scheduled job events to run automatically. See Enabling and disabling scheduled job events.
• Re-configure the replication manager to use an email server for jobs notification, if any. See Jobs email server configuration.
• HP recommends that administrators examine the active database for duplicate jobs and managed sets and make corrections, as necessary. See About imports.
Troubleshooting
Troubleshooting—General
Browser window is blank
Problem
The replication manager GUI is blank (gray) in the browser window.
Explanation / resolution
This can be caused by browsing away from, and back to, the replication manager.
To display the GUI:
1. While viewing the blank browser window, press and hold Ctrl and click the browser refresh button.
   The replication manager logon window opens.
2. Log on to the replication manager.
   The replication manager GUI appears.
Database file name extension is missing
Problem
When viewing an exported copy of the replication manager database on the management server (that is, a Windows computer), the file name may appear to be missing the .xml extension.
Explanation / resolution
When a copy is created, the file name is automatically assigned in the following format:
CADATA_timestamp.export.xml
When viewing files on a Windows computer, the file name may appear as:
CADATA_timestamp.export
This can occur when Windows is set to hide extensions for known file types.
To resolve this issue:
1. In the window, select Tools > Folder Options.
   The Folder Options window opens.
2. Select the View tab.
3. Under Files and Folders, clear the Hide extensions for known file types box.
4. Click OK.
   File name extensions are displayed in the window.
Documents are not visible when selected
Problem
Documents are not visible when selected from the Help menu.
Explanation / resolution
This can happen when the document's File Download window opens behind the replication manager
window. This only occurs when running the replication manager in an application mode on the
management server desktop.
To make the document's File Download window visible, select it from the Taskbar or minimize the
replication manager window.
Enabling failsafe on unavailable member in a managed set fails
Problem
Attempting to enable failsafe on unavailable member for a managed set of DR groups fails. The
replication manager logs the message:
Not available:: ERROR: failsafe() not available in current state
Explanation / resolution
This can occur when the managed set in a GUI action or CLUI command contains one or more DR
groups whose remote replication I/O mode is set to asynchronous. Enabling the failsafe on
unavailable member results in an invalid DR group configuration and the action fails.
See also Invalid DR group pair configuration.
Illegal characters
In some cases the following characters are not valid in entries. The table shows whether each character is valid in resource names, virtual disk names, and comments. (A name-validation sketch follows the table.)

Character                                   Resource names   Virtual disk names   Comments
= (equal to symbol)                         no               no                   no
& (ampersand)                               no               no                   no
* (asterisk)                                no               no                   ok
: (colon)                                   no               no                   ok
, (comma)                                   no               no                   ok
> (greater than symbol)                     no               no                   ok
< (less than symbol)                        no               no                   no
% (percent sign)                            no               no                   no
+ (plus sign)                               no               no                   no
? (question mark)                           no               no                   no
" (quotes, double)                          no               no                   no
\ (slash, backward)                         no               no                   ok
/ (slash, forward)                          no               no                   no
| (vertical bar)                            no               no                   ok
Other
Spaces at the end of a virtual disk name    ~                no                   ~
Two or more consecutive spaces              ok               no                   ok
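Based on the table above, a pre-flight check in a provisioning script can reject virtual disk names that the replication manager will not accept. The following sketch (Python) encodes the virtual disk name rules from the table; it is illustrative only and not an official validator.

    # Pre-flight check of a virtual disk name against the character rules in the
    # table above. Illustrative only.
    import re

    ILLEGAL_IN_VDISK_NAMES = set('=&*:,><%+?"\\/|')

    def vdisk_name_problems(name: str) -> list:
        """Return a list of rule violations; an empty list means the name passes."""
        problems = []
        bad = sorted(ILLEGAL_IN_VDISK_NAMES & set(name))
        if bad:
            problems.append(f"illegal characters: {' '.join(bad)}")
        if name != name.rstrip(" "):
            problems.append("ends with a space")
        if re.search(r" {2,}", name):
            problems.append("contains two or more consecutive spaces")
        return problems

    print(vdisk_name_problems("payroll_db  copy?"))
    # ['illegal characters: ?', 'contains two or more consecutive spaces']
    print(vdisk_name_problems("payroll_db_copy"))   # []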
Invalid DR group pair—Source and source
Problem
Both groups in a suspended DR group pair are identified as the source group.
Explanation / resolution
This can occur if a DR group pair is suspended while the intersite links are unavailable. When the
intersite links become available again, both DR groups are identified as the source. The
source-source identification continues until remote replication is resumed.
This can happen when:
• The DR group pair is suspended while the intersite links are unavailable. See Suspension state.
• The DR group pair is failed over with the suspend-on-failover action while the intersite links are unavailable. See Suspend on failover.
• Auto suspend mode is enabled while the intersite links are unavailable. See Auto suspend on link down.
To resolve this issue
1. Use HP P6000 Command View to resume the suspended DR group pair.
2. After resuming the DR group pair, perform a manual refresh of the DR groups. See Low-level
refreshing DR groups.
The source and destination DR groups are correctly identified.
NOTE: While the incorrect source-source identification exists:
• The GUI Actions menu correctly shows only actions that are appropriate for the true role of the DR group (source actions for the true source and destination actions for the true destination).
• In the CLUI, issuing a Show DR_Group command on the "new source group" shows it correctly as a destination, and only destination operations can be executed.
Invalid DR group settings—Failsafe on unavailable member with asynchronous replication
Problem
When attempting to use CLUI commands to disable the failsafe on unavailable member, the
replication manager logs the message:
Failed to set failsafe for DR Group, group is currently :failsafe is
already enabled.
Explanation / resolution
You can use GUI actions or CLUI commands to create a DR group pair with the following invalid configuration: failsafe on unavailable member enabled with asynchronous remote replication I/O mode. Attempting to disable the failsafe on unavailable member in this invalid configuration results in an error.
To resolve this issue, use GUI actions or CLUI commands to set a valid configuration. To retain an asynchronous I/O mode and disable the failsafe on unavailable member:
1. Change the remote replication I/O mode to synchronous.
2. Change the failsafe on unavailable member to disabled.
3. Change the remote replication I/O mode back to asynchronous.
Job instance fails with get error lock message
Problem
A job instance fails with a message: get error lock failed.
Explanation / resolution
Too many job instances that interact with a specific enabled host may be running simultaneously.
To resolve the problem, reduce the number of running job instances that involve the enabled host.
Job interacts with wrong array
Problem
When a job is run, storage commands are attempted on the wrong array. Typically, the job does
not validate properly or fails.
Explanation / resolution
If two storage arrays in a replication manager configuration have the same name, a job can
attempt to interact with the wrong array. Ensure that all arrays have unique names.
Job with host volume remount fails after an unclean unmount
Problem
A job with host volume remount fails and the replication manager logs the error message:
The filename, directory name, or volume label syntax is incorrect for
the operating system associated with this mount point.
Explanation / resolution
If a host volume is uncleanly unmounted, such as during an unplanned failover, the host OS or file
system may retain information about the host volume. When the replication manager attempts to
remount that same volume, the host handles the request as a duplication and the remount fails. To
resolve this issue:
1. Ensure that the host volume is cleanly unmounted.
2. If necessary, clean up resources that were involved in the failed job.
3. After the host volume is cleanly unmounted, you can use the replication manager to mount it.
Logical volumes and volume groups in job commands
Some command arguments require the selection or entry of a logical volume (a component of a
volume group). Because the replication manager considers logical volumes and volume groups to
be host volumes, you must select the resource from a host volumes list. There is not a separate list
of logical volumes or volume groups.
See also Host volumes and Logical volumes and volume groups.
Low-level refresh returns an error
Problem
When you perform a low-level refresh on a virtual disk or DR group, the replication manager logs
the message:
Unable to complete the requested action because of an unexpected error.
Explanation / resolution
This can occur when the replication manager is unable to discover a virtual disk or DR group. Use
HP P6000 Command View to verify that the virtual disk or DR group exists, and then retry the
low-level refresh.
The following CLUI commands (each with its refresh switch) and GUI actions perform a low-level refresh. See also Tru64 UNIX host volumes.
CLUI commands and switches:
• Set DR_Group refresh
• Set Vdisk refresh
• Show Container refresh
• Show DR_Group refresh
• Show Snapclone refresh
• Show Snapshot refresh
• Show Vdisk refresh
GUI actions:
• DR Groups > Actions > Low-Level Refresh
• Managed Sets > Actions > Low-Level Refresh (for virtual disk and DR group managed sets)
• Virtual Disks > Actions > Low-Level Refresh
Maximum DR group log size error
Problem
If you enter an invalid size, the replication manager logs an error and does not set the maximum
log size for the DR group.
Explanation / resolution
Invalid sizes are ignored.
To resolve this issue, do one of the following:
• Accept or enter zero (0) to apply the controller software default size algorithm.
• Enter a valid size. See Maximum log disk size.
Monitor job window has no details
Problem
The Monitor Job window indicates that job details are currently unavailable for display. Even after
waiting several minutes, the details are not displayed. This happens frequently, regardless of the
job being run.
Explanation / resolution
If this happens frequently, there may be a port conflict between the replication manager server
software and other applications on the management server.
Contact the replication manager administrator to have the issue resolved. Default ports are listed
in the HP P6000 Replication Solutions Manager Administrator Guide.
Pop-up windows are not visible
Problem
After navigating from an open replication manager pop-up window to the replication manager
main window or to another application window, clicking a taskbar button does not redisplay the
pop-up window.
Explanation / resolution
Replication manager pop-up windows can become hidden under the main replication manager
window or under other application windows. Clicking the taskbar button for the replication manager
displays the main window, but not the hidden pop-up window.
To resolve this issue on a Windows computer:
1. Press and hold Alt.
2. Press and release Tab.
   A menu of available windows opens.
3. Press and release Tab until the pop-up window is selected.
4. Release Alt.
   The selected pop-up window is displayed.
Resource is not selectable
When a command's argument is not selectable (does not appear in a selection list), you must
manually enter the resource name or a variable that represents the resource.
Such circumstances typically occur when one command in a job refers to a resource that is created
by another command in the same job.
Example—Mounting a replica
When a job creates a replica of a disk, the replica will not exist until the job is run. Thus, when
editing a command that refers to the replica, the replica's UNC name does not appear in a selection
list. In this case, you can manually enter the name (as specified by the command that creates it),
or reference the resource by using a variable.
Example—Working with a new DR group
When a job creates a new DR group, a new destination storage volume will not exist until the job
is run. Thus, when editing the commands that refer to the new disks, the disk's UNC name does
not appear in a selection list. Because in this case the new disk cannot be referenced by a variable,
you must manually enter the name (as specified by the command that creates it).
Scheduled job events do not run
Problem
Scheduled job events do not run. In the events pane, the following message appears:
Internal error occurred starting job <job name> .
See Events pane.
Explanation / resolution
The security credentials (user name and password) that were saved with scheduled job events may
be incorrect. See Security credentials for the server. For more information on jobs, see the HP
P6000 Replication Solutions Manager Job Command Reference.
1. Open a scheduled job event.
2. Enter the correct security credentials (to log on to the replication manager server), and then
click OK.
3. Repeat for each scheduled job event that has incorrect security credentials.
In some cases, subject to your security policies, consider changing the security credentials of the
replication manager server (one change) to match those saved with the job events (potentially
many changes).
Scheduled job event run times are wrong (AM/PM)
Problem
Symptoms that can indicate this problem:
• When using the Schedule a Job wizard, a time of day entered as PM appears as AM on the
  final page.
• A scheduled job runs at an unexpected time.
Explanation / resolution
This can occur due to a non-English setting on a Windows-based storage management server. A
non-English setting changes the time entered in the Schedule a Job wizard to an AM time. For
example, a time entered in the wizard as 2:00 PM is saved as 2:00 AM. For more information
on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
To correct this problem, perform the following procedure. You must be a member of the server's
Administrators group.
CAUTION: Performing this procedure requires a reboot. Coordinate reboot activities to avoid
disruption of services to others.
1. Ensure that no replication manager jobs are running or are scheduled to run during the reboot.
2. On the Windows-based storage management server, select Start > Control Panel > Regional
   and Language Options > Advanced tab.
3. On the Advanced tab, under Standards and formats, select English (United States).
4. Click OK and follow the Windows on-screen instructions.
5. When prompted, reboot the computer. Click Yes to complete the change to Windows.
6. In the replication manager, edit any scheduled job events that had incorrect run times.
7. If you have previously exported the replication manager database (that included incorrect
   times), HP recommends that you perform a new export.
Second snapclone of the same storage volume or host volume fails
Problem
The second snapclone of the same storage volume or host volume fails and the replication manager
logs a message such as:
Copy type CLONE not available or no resource bundle
Explanation / resolution
You can make multiple snapclones of a source volume. However, only one snapclone can be in
the normalization (unsharing) phase at one time. If the second snapclone action is started too soon,
the action can fail.
This issue can occur when:
• You run and then quickly rerun the same job.
• The replication manager database was not refreshed after the first snapclone was completed.
  When the second snapclone is requested, the replication manager encounters the out-of-date
  normalization status in the database and fails the requested snapclone action.
To resolve this issue:
• Allow sufficient time between snapclones for the unsharing process to complete.
• Allow sufficient time between snapclones for the replication manager database to be refreshed.
Slow login times
Problem
After entering a user name and password to log in to the replication manager server, there is no
response for 30 seconds or more.
Explanation / resolution
In some cases, a slow login response can occur. This does not indicate that the server application
has stopped.
Unable to resume a DR group
Problem
When attempting to resume a DR group, the replication manager logs the message:
Unable to resume DR group <GroupName> because of current state of group
for job <JobName>
Explanation / resolution
You cannot use the replication manager to resume remote replication if the status of a DR group
pair is failed.
In some cases, the replication manager temporarily reports the status as failed, even though the
actual status is normal (as reported by HP P6000 Command View). This can happen because the
status of a DR group pair is updated in the replication manager only when an automatic refresh
occurs or you manually perform a refresh.
To resolve this issue and resume the DR group as quickly as possible:
1. In the replication manager DR Groups List tab or Tree tab, verify the operational status of the
DR group.
2. If the status is failed (red), use HP P6000 Command View to verify the status of the DR group
(see HP P6000 Command View online help).
3. In HP P6000 Command View, if the status is normal, issue a resume command to resume
the DR group.
4. After the DR group is resumed, manually refresh it. See Low-level refreshing DR groups.
You can again use the replication manager to resume/suspend the DR group.
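In jobs, suspending and resuming remote replication for a DR group pair is handled by the
SetDrGroupSuspend command (listed in the DR group actions cross reference). The line below is only a
sketch: the %variable% placeholder and the second argument are assumptions, not the confirmed
signature. See the HP P6000 Replication Solutions Manager Job Command Reference for the exact syntax.
SetDrGroupSuspend ( %dr_group_unc_name%, FALSE )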
Troubleshooting–HP-UX
Host agent discovery hangs and times out–HP-UX
Problem
If a device on an HP-UX enabled host is in a failed state, the discovery of host volumes on that host
can hang and cause the discovery to time out.
Explanation / resolution
This issue is caused by HP-UX and can happen on Versions 11.23 and 11.31.
To resolve this issue:
1. On the enabled host, remove or correct the failed devices.
2. Allow sufficient time for a discovery of enabled hosts to occur. See Automatic refresh of
resources.
3. Verify that the discovery was completed.
Job with CreateHostVolume command fails–HP-UX
Problem
A job that replicates a storage volume fails at a CreateHostVolume command when attempting
to create a host volume on an HP-UX enabled host. The replication manager logs a message such
as:
Unknown exception on host: Volume group /dev/h32xg02_RV0 failed to
create: Volume Group /dev/h32xg02_RV0 is still active.
Explanation / resolution
The HP-UX kernel recognizes device drivers and peripheral devices by major and minor numbers.
The driver uses minor numbers to locate specific devices. HP-UX sometimes does not release device
minor numbers after a volume group is removed. When the CreateHostVolume command is
run, the volume group appears to still be active and the command fails.
To resolve this issue, you must reboot the enabled host.
IMPORTANT: Before proceeding, coordinate the reboot activities to avoid disruption of host
services to others.
1. Ensure that no replication manager jobs involving the host are running or scheduled to run
   during the reboot.
2. On the enabled host, close applications as appropriate.
3. Reboot the host.
   This clears the device minor numbers from host memory.
4. Restart applications on the host, as appropriate.
5. If necessary, clean up resources that were involved in the failed job.
6. Rerun the job.
Job with MountEntireVolumeGroup command fails–HP-UX
Problem
A job that replicates and mounts an HP-UX volume group fails at a MountEntireVolumeGroup
command when attempting to mount the volume group on an enabled host. The replication manager
logs a message such as:
Host Volume has unknown or incompatible ...
Explanation / resolution
This can occur when a job is rerun.
If the replication manager discovers an HP-UX volume group at the same time that a job command
is attempting to remove the volume group, an error allows the underlying storage to be unpresented
from the host, even though the volume group is still active.
When the job is run again, the volume group cannot be replicated and the job fails at the mount
step because there is no underlying storage available.
To resolve this issue:
1. In the replication manager or HP P6000 Command View, correct the presentation of the
underlying storage for the volume group.
2. If necessary, clean up resources that were involved in the failed job.
3. Rerun the job.
Host volume with PVLinks multipathing
Problem
If the source host volume uses HP-UX PVLinks for multiple paths, a replica will have only one path
defined.
Explanation / resolution
When replicating a host volume with PVLinks multipathing, replication manager jobs define only
one path in the replica. If necessary, you can use PVLinks to add paths to the replica after the job
is finished.
See CreateHostVolume and CreateHostVolumeGroup job commands.
Troubleshooting–Linux
Job with volume group remount fails–Linux
Problem
A job creates a replica of a Linux volume group and mounts it on a host. In subsequent steps, the
job unmounts the replica, and then attempts to remount it on another host. During the remount step,
the job fails.
Explanation / resolution
You cannot remount the replica of a Linux volume group because the context of the volume group
is lost after the first mount.
IMPORTANT: The job template named Replicate host volume(s), mount to a host, then to a different
host, contains steps to mount and remount a replica. Do not use the template with Linux volume
groups.
To resolve this issue:
1. If necessary, clean up resources that were involved in the failed job.
2. Edit the job and remove the remount steps.
3. Create another job to mount the replica on a different enabled host.
Troubleshooting–Tru64 UNIX
Job with AdvFS replication fails–Tru64 UNIX
Problem
A job that replicates Tru64 UNIX AdvFS host volumes fails on job commands such as
CreateHostVolumeFromDiskDevices or CreateHostVolumeGroup.
Messages in the replication manager event log indicate that the host volume could not be found
in the volume group.
In the host directory /var/adm/messages, a message similar to the following can appear:
Jun 5 18:50:44 rita vmunix: Found bad xor in sbm_total_free_space!
Corrupted SBM metadata file!
Jun 5 18:50:44 rita vmunix: An AdvFS domain panic has occurred due to
either a metadata write error or an internal inconsistency. This domain
is being rendered Jun 5 18:50:44 rita vmunix: Please refer to guidelines
in AdvFS Guide to File System Administration regarding what steps to
take to recover this domain.
Explanation / resolution
When a job is run, heavy I/O in any of the mounted filesets in the domain can lead to domain
panic, causing the job to fail.
Resolve any issues on the host, modify the job to use freezefs and thawfs utilities, and rerun the
job.
2 Replication resources
Working with resources
Best practices for automatic refresh
The refresh of storage systems information during an automatic refresh can take a significant amount
of time and place heavy demands on the management server and storage systems. See Refreshing
resources (automatic).
Depending on the circumstances, administrators may want to adjust the replication manager's
automatic refresh interval. For more information, see “Contacting HP” (page 288).
Typical circumstances include:
• If other applications are using the same storage arrays, or are installed on the same
  management server, a short discovery and database refresh interval can slow the performance
  of the storage arrays or management server. In this case, consider a longer interval.
• If the storage configuration is large, or there are delays in the environment, discovery and
  database refresh can take a long time and may slow the replication manager. In this case,
  consider a longer interval.
• If you are configuring an environment or performing testing, the default discovery and database
  refresh interval may be too long for changes to be reflected quickly. In this case, consider
  temporarily setting the interval to a few minutes.
• If you are only monitoring storage, the default discovery and database refresh interval may
  be shorter than required. In this case, consider setting the interval to several hours.
Copying properties
Select a resource's properties and copy them to another window or application.
Considerations
• You can copy a single property, multiple adjacent properties, or all properties in a window
  or window section. See Copying properties tips.
• The entire property is copied. Except for comments, you cannot copy individual words or
  characters in a property.
• The copy-properties feature is not available in the Configuration windows or the Licensing tab
  in the Storage Systems Properties window.
Procedure
1. Open the window from which to copy properties.
   See viewing properties for: DR groups, Enabled hosts, Host volumes, Jobs, Managed sets,
   Storage systems, and Virtual disks. See also Editing DR group properties.
2. Select the properties and press Ctrl+C.
   The properties are copied to the clipboard (Windows) or similar memory area (other OSs).
3. Open the window or application in which to paste the properties, navigate to the appropriate
   location, and then press Ctrl+V.
   The properties are pasted.
Copying properties - tips
Each procedure below is completed by pressing Ctrl+C to copy the selection.
Selecting adjacent property cells
In a properties window, do one of the following:
1. Click the first property cell to copy then press and hold the mouse button.
2. Drag the selection to the adjacent property cells. On the last property cell to copy, release
the mouse button.
or
1. Select the first property cell to copy.
2. Press and hold the Shift key and select the last property cell to copy.
Selecting all property cells
1. Click any cell in a property window or section of a window.
2. Press Ctrl+A.
Selecting a comment
In a comment cell, do one of the following:
1. To select the entire comment, click anywhere in the comment cell.
2. Press Ctrl+A.
or
1. To select part of a comment, click before the first character to copy, then press and hold the
   mouse button.
2. Drag the selection past the last character to copy and release the mouse button.
Nonadjacent cells
The replication manager does not support selection of nonadjacent cells.
Filtering displayed resources
You can filter (select) the resources that display in content pane list tabs.
1. Filter type
2. Filter value
Procedure
1. On the content pane, click the Filter property box.
   A list of filters appears.
2. Select a resource filter.
   The selected filter type is displayed.
3. Click the Filter value box.
   A list of values for the filter appears.
4. Select a value. Selecting <Choose Value> is the same as selecting no filter.
   The filter is applied and only the selected resources appear.
Global refresh monitor
The Global Refresh Monitor window opens when you start a global refresh. Progress bars indicate
the percent of the resources that have been refreshed.
1. Refresh button (restarts the refresh).
2. Progress of dynamic capacity volumes and policies refresh.
3. Progress of enabled hosts and host volumes refresh.
4. Progress of HP licenses refresh.
5. Progress of storage systems refresh (includes DR groups and virtual disks).
6. Progress of VM servers refresh.
Organizing displayed resources
On the content pane, you can change the size and position of the columns in list and tree views.
You can sort displayed resources in list views.
1. Sort indicator
2. Column edge
Procedures
Resizing columns
1. Move the cursor over a column edge in the heading.
2. When a selection arrow appears, click and drag the column edge as required.
Moving columns
1. Click the heading of the column to move.
2. Hold and drag the column to the desired location.
Sorting a list view
1. Click the heading of the column on which to sort the list.
   The list is sorted and a sort indicator appears.
2. To reverse the sort order, click the column heading again.
Refreshing display panes
Update the information that is displayed in the content and event panes.
Considerations
• Do not use the browser refresh button.
  IMPORTANT: Do not use your browser’s refresh button to update the panes. Using the
  browser refresh may end the replication manager session. To restart the session you must log
  in again to the replication manager server. See Troubleshooting.
• The content pane is not updated automatically. Following a resource change that is initiated
  with the replication manager (GUI, jobs, or CLUI), HP recommends that you refresh the content
  pane.
• Whenever you select a different type of resource, the content pane is refreshed. For example,
  if you are viewing DR Groups in the content pane and then select Host Volumes, the content
  pane is refreshed and displays the current host volume resources.
• See also Refreshing resources (automatic) and Refreshing resources (global).
Procedure
1. On the content or event pane, click the refresh icon to refresh the display.
   1. Refresh icon for content or event pane
Refreshing resources (automatic)
The replication manager automatically discovers resources and refreshes its database at the intervals
noted below.
Automatic discovery type                                | Refresh interval | Remarks
New storage systems, their DR groups and virtual disks | Every minute     | Not changeable
Known enabled hosts and their host volumes             | Every 4 hours    | Not changeable
New host volumes on known enabled hosts                | Every 10 minutes | Not changeable
HP replication and application-integration licenses    | Every 6 hours    | Not changeable
IMPORTANT: New enabled hosts are not automatically discovered or added to the database.
To be visible to the replication manager, an administrator must manually add them after installing
replication manager host agents.
• To change the automatic storage refresh interval, see “Best practices for automatic refresh”
  (page 41).
• When new resources are found, it may take some time to gather the new information and
  add it to the RSM database.
Refreshing resources (global)
Update replication manager database entries by manually performing a global discovery and
refresh of resources.
Considerations
• Manual refresh
  IMPORTANT: HP recommends that you manually refresh storage resources after using another
  interface that changes the properties of storage systems, DR groups, or virtual disks. Refreshing
  resources, especially storage resources, can take a significant amount of time and place heavy
  demands on the management server and storage systems.
• Information in the content pane is not automatically updated after a refresh.
Procedure
1. On the toolbar, click the refresh icon.
   The Global Refresh Monitor window appears. See Global refresh monitor.
   1. Global refresh icon on toolbar
2. You can do one of the following:
   • Click Close to close the window and continue working.
   • View the refresh progress bars.
   • Click Refresh to restart the refresh.
3. Update the content pane. See Refreshing display panes.
Refreshing individual resources
Update replication manager database entries by manually performing a discovery and refresh of
individual resources.
Considerations
• Individual resource refresh is only available for DR groups, enabled hosts, and virtual disks.
• You can use the GUI or CLUI.
Procedures
• DR groups. See Low-level refreshing DR groups.
• Enabled hosts. See Low-level refreshing enabled hosts.
• Virtual disks. See Low-level refreshing virtual disks.
Selection of multiple resources
When performing an action from the GUI, HP recommends selecting only a few resources at one
time. Selecting a large number of resources at once can cause slow response times.
For example, deleting several virtual disks may appear to take only a few seconds in the GUI, but
deleting 50 virtual disks can take several minutes. The slower response can be misinterpreted as a
problem.
Simulation mode
For more information, see the HP P6000 Replication Solutions Manager Simulation guide. The
guide is available from the help menu of the replication manager GUI and from the HP Storage
website (page 289).
CAUTION: HP strongly recommends that you do not run simulation mode on a production machine
(that is, on the same machine as HP P6000 Command View). If you disconnect the replication
manager from HP P6000 Command View, you lose control of the storage resources that were
being managed by the replication manager. If you purge the replication manager database and
have not backed it up, you will lose all jobs.
Overview
Simulation mode allows you to use all of the functions of the replication manager without having
to use any of your production data or resources. These functions include creating snapshots and
snapclones, creating DR groups and adding virtual disks to them, and performing failovers.
Features and benefits:
• No SAN infrastructure, storage arrays, hosts, or management servers are required.
• Requires only replication manager server software running on a business-class Windows
  desktop or laptop computer.
• True simulation—Not a set of predefined scenarios. Simulates GUI actions, jobs, and CLUI
  commands.
• Allows you to create your own:
  ◦ Simulated storage arrays. Select features such as array capacity, controller software
    versions, and number of virtual disks.
  ◦ Simulated hosts. Include hosts and select the storage arrays they are connected to.
Resource concepts
About replication resources
Resource types are displayed on the left, in the navigation pane. Individual resources are displayed
on the right, in the content pane.
1. Resource types
2. Individual resources
Resource types
The following resource types are supported. For more information on each type, click the link.
DR Groups: About DR groups
Enabled Hosts: About enabled hosts
Host Volumes: About host volumes
Jobs: For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Managed Sets: About managed sets
Storage Systems: About storage systems
Virtual Disks: About virtual disks
Resource states
The states of resources are indicated as follows.
Icon
Description
Normal/Good. Resource is operating normally.
Warning. Resource is operating normally but is in a temporary abnormal state; the operator should monitor
the resource. Also, indicates any condition that needs attention.
Severe. Resource has experienced a catastrophic failure; the operator must act immediately to prevent further
failures or data loss.
Unknown. Resource cannot be reached or its state cannot be determined.
NOTE: One of the conditions that can result in an Unknown state is attempting to use invalid credentials
to access the resource.
Event states are similar and are displayed in the event pane. A DR group's operational state uses
special icons to represent a combination of DR group states and modes.
Resource names and UNC formats
Resources in a SAN are identified in several ways, including UNC format. UNC (Universal or
Uniform Naming Convention) identifies a resource in terms of its hierarchical location in a network.
See the following for name and UNC formats: DR groups, Enabled hosts, Host volumes, Storage
systems, and Virtual disks (storage volumes).
Resource names in job commands
When you initially enter a command in a job, the command's resource arguments are displayed
as %variable% names, for example:
CreateDiskDevice (%storvol_unc_name%, %host_name%, 0, READ_WRITE)
When selecting a specific resource from an Editing Task menu, the resource names are presented
in a UNC format, or other format, as appropriate for the resource. If you enter a resource name
from the keyboard, you must use the appropriate format.
IMPORTANT:
UNC names in job commands are case sensitive.
After selection or entry of arguments, the command is displayed with resource names in quotes,
for example:
CreateDiskDevice ( "\\ArrayA2\Cats", "HostA6", 12, READ_WRITE )
Name formats and examples for resource types follow.
DR groups
Name format (UNC): \\array name\path\DR group name
Example: \\ArrayA2\DrGrpPets
Identifies the DR group named DrGrpPets on the storage array named ArrayA2.
Enabled hosts
The job editor and command validation accept these formats.
Name format (other)          | Example
Computer network name        | HostA6
Fully qualified network name | HostA6.SiteA.corp
IP address                   | 88.15.42.101
Each example identifies an enabled host using an accepted format.
Host volumes
Applies to standard host volumes, volume groups, logical volumes, and host volume components
such as partitions and slices.
Name format (UNC): \\host name\path\host volume name
OS-specific examples:
AIX:     \\HostA2\/home/cats
HP-UX:   \\HostA2\/users/cats
Linux:   \\HostA2\/var/cats
OpenVMS: \\HostA2\[pets.cats]
Solaris: \\HostA2\/usr/cats
Windows: \\HostA2\E:\pets\cats
For each OS, the example identifies the path and file named cats on the enabled host named HostA2.
Storage systems
Name format (other): storage name
Example: ArrayA2
Identifies the storage array named ArrayA2.
Virtual disks (storage volumes)
Applies to standard storage volumes (virtual disks), snapclones (standard virtual disks), snapshot
virtual disks, and storage containers.
Name format (UNC): \\array name\path\storage volume name
Example: \\ArrayA2\Cats
Identifies the storage volume named Cats on the storage array named ArrayA2.
Licenses
License displays
License information is displayed in the GUI. See the following table.
If the licenses summary indicates an issue, you can use the License Events pane to identify the
affected storage array. Then, you can review the affected array's license information in the Storage
Properties window.
License summary. A license status summary appears on the right side of the status bar at the bottom
of the replication manager window.
• The status includes replication licenses.
• Status messages indicate if all discovered licenses are valid or if there is an issue.
License events. Events that involve license status appear on the License Events tab of the events
content pane.
• License events include replication licenses.
• See Events overview and Viewing events.
Licenses for an array. The status of licenses for an individual array appears on the array's Storage
Properties window.
• Properties include the status of replication licenses.
• See Storage Properties window - Licensing tab and Viewing storage properties.
License states
General
• Expiring. All discovered licenses are currently valid. However, one or more expires within 60
  days.
• Over capacity. One or more resources has exceeded a license's capacity.
Replication licenses overview
To use local and remote replication features on a given array, the replication manager verifies
that appropriate HP replication licenses exist for that array.
• For information on acquiring and installing local replication licenses, see the HP P6000
  Replication Solutions Manager Administrator Guide.
• For information on acquiring and installing remote replication licenses, see the HP P6000
  Continuous Access Implementation Guide.
• Replication licenses are verified during automatic and global refreshes. See Refreshing resources
  (automatic) and Refreshing resources (global).
• Replication license status is displayed in the GUI. See License status.
Replication license policies
HP license policies determine how the replication manager interacts with resources that require
HP replication licenses.
The following list summarizes the policies:
• Verification. License verification includes factors such as:
  ◦ Type of license
  ◦ Expiration date (if any)
  ◦ Capacity range (amount of storage)
• Multi-array volumes. If a volume is comprised of virtual disks on two or more arrays, each
  array must have an HP replication license in order to be considered fully licensed.
• GUI actions. Some GUI actions may be disabled if an applicable HP license is not verified.
  Disabled actions appear in gray and cannot be selected. A tooltip indicates why the action
  cannot be performed. See Tooltips.
  For example, if you select a virtual disk and the New Snapclone action is disabled, it can be
  because the storage system that contains the disk does not have a local replication license.
• Job commands. Some job commands may not execute if an applicable HP license is not
  verified when a job is run. An event in the job log indicates the command did not execute
  due to a licensing issue.
  For example, if a SnapcloneStorageVolume command in a job did not execute when the job
  was run, it may be because the storage system that contains the disk did not have a local
  replication license.
IMPORTANT: HP recommends that you include validation commands in jobs and perform
validation tests to prevent or minimize the effect of license-related issues at run time.
• CLUI commands. CLUI commands that run jobs or create implicit jobs behave like job
  commands. An event in the job log indicates that the command did not execute due to a
  licensing issue.
Dynamic Capacity Management licenses overview
The DC-Management feature requires an array-based DC-Management license in HP P6000
Replication Solutions Manager.
DC-Management license
HP P6000 Replication Solutions Manager includes an instant-on license for the DC-Management
feature. The instant-on license is valid for 60 days from the time of install, after which you may
request an extension to the license. This license enables the DC-Management feature for all arrays
during the validity period.
When the instant-on license expires, the DC-Management feature is disabled unless a permanent
license is installed. Permanent licenses are installed on a per-array basis.
The DC-Management license can be viewed, retrieved, and added by selecting Tools > Configure
> Licensing.
Thin provisioning license overview
The thin provisioning feature requires a separate license.
HP P6000 Replication Solutions Manager includes an instant-on license for the thin provisioning feature.
The instant-on license is valid for 60 days from the time of install, or until you install a permanent
license.
When the instant-on license expires, thin provisioning is disabled unless a permanent license is
installed. Permanent licenses are installed on a per-array basis.
Security Credentials
Security credentials for the server
To establish and manage replication manager server security credentials, administrators must use
the OS on the management server. See Security groups configuration or the HP P6000 Replication
Solutions Manager Installation Guide and HP P6000 Replication Solutions Manager Administrator
Guide.
Server security credentials (user name and password) must be provided for the following actions:
• GUI logon. To access the GUI, you must enter replication manager security credentials in the
  logon window.
• Creating a scheduled job event. To schedule a job event, you must enter replication manager
  security credentials in the Schedule Job window.
• CLUI logon. To access the CLUI using any method other than the GUI, you must include
  replication manager security credentials in the CLUI logon commands.
Password change considerations
Administrators should carefully plan and coordinate server security credential changes with replication
manager operations. For example, if the security policy in your environment is to reset passwords
every six months, consider that such changes can result in the following:
• Inability to log on to the GUI
• Inability to run scheduled job events
• Inability to log on to the CLUI
Security credentials for enabled hosts
To establish and manage replication manager host agent security credentials, administrators must
use the OS on each enabled host. See Security groups configuration and the HP P6000 Replication
Solutions Manager Installation Guide and HP P6000 Replication Solutions Manager Administrator
Guide.
For an enabled host to interact with the server, a valid host agent security credential must be present
in the replication manager database. See Setting security credentials for enabled hosts and Adding
a new enabled host.
IMPORTANT: If a valid security credential is not entered or if the credential expires or is changed,
the replication manager (GUI, jobs, and CLUI) will not be able to interact with the enabled host.
Host agent security credentials (user name and password) must be provided for the following cases:
• New enabled host. To add a new enabled host to the replication manager database, an
  administrator must enter a valid security credential.
• Changed credentials. To update the security credential that is saved in the database, an
  administrator must enter a new security credential.
• Imported RSM database. After importing an RSM database that includes enabled hosts, an
  administrator must re-enter the security credentials.
Password change considerations
Administrators should carefully plan and coordinate host agent security credential changes with
replication manager operations. For example, if the security policy in your environment is to reset
passwords every six months, consider that such changes can result in enabled hosts not interacting
with the replication manager server. This can lead to the following:
• Jobs that interact with impacted enabled hosts do not validate, or they fail while running.
• CLUI commands that interact with impacted enabled hosts fail.
• Enabled host and host volume information is not updated in the server.
Security credentials versus tasks
Replication manager tasks can only be performed by members of the HP Storage Admins group
or HP Storage Users security groups on the management server. See Security credentials
configuration.
52
Replication resources
Most tasks require membership in the HP Storage Admins group. The following table shows the
relationship between security credentials and tasks that are allowed.
Item or resource    | Task                                            | Admin allowed | User allowed | Remarks
Replication manager | Configure                                       | Yes           | No           |
Replication manager | Enable and disable simulation mode              | Yes           | No           |
Replication manager | Export and import RSM database                  | Yes           | No           |
Replication manager | Manage and view event logs                      | Yes           | Partial      | Users can view only
DR groups           | Add, change, delete, and view                   | Yes           | Partial      | Users can view only
DR groups           | Fail over, suspend, resume, and force full copy | Yes           | No           |
Enabled hosts       | Add, change, delete, and view                   | Yes           | Partial      | Users can view only
Enabled hosts       | Run host script and set security credentials    | Yes           | No           |
Host volumes        | Add, change, delete, and view                   | Yes           | Partial      | Users can view only
Host volumes        | Replicate                                       | Yes           | No           |
Jobs                | Add, edit, delete, monitor, and view            | Yes           | Partial      | Users can view only
Jobs                | Schedule, run, pause, continue, and abort       | Yes           | No           |
Managed sets        | Add, change, delete, and view                   | Yes           | Partial      | Users can view only
Storage systems     | Add, change, delete, and view resources         | Yes           | No           |
Storage systems     | Manage replication licenses                     | Yes           | Partial      | Users can view only
Virtual disks       | Add, change, delete, and view resources         | Yes           | Partial      | Users can view only
Virtual disks       | Replicate, present, and unpresent               | Yes           | No           |
Topology views
About topology views
The topology tab provides an interactive graphical environment in which to view resources and
perform replication tasks.
Key features include:
• Select from predefined resource views
• Perform actions on resources
• View resource properties and labels
• Change the layout
• Filter the view using managed sets
Sample topology view of two storage systems
and their virtual disks.
To access the topology tab, see Displaying the topology tab.
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
Views
The View menu allows you to select the resources to display. Each of the predefined views shows
the logical relationships among resources. See “DR groups topology view” (page 56), “Host
volumes topology view” (page 57), and “Virtual disks topology view” (page 59).
Menu of actions
You can right-click a resource to display a menu of actions or to navigate (jump) from the topology tab to a content
pane.
Properties
You can move the cursor over a resource to display its key properties.
Labels
To show or hide all resource labels, use the Toggle Labels action.
Layout control
You can change the layout by using the following tools:
Drag-and-drop: Move a resource to a new location. See Pinned view tips.
Zoom buttons: Change the size and extent of the view.
Layout button: Redraw the view. Layout behavior varies. See Layout tips and Pinned view tips.
Pin/unpin button: Pin (lock) or unpin (unlock) the locations of resources in the view. See Pinned view tips and Layout tips.
Clear all pins action: Unpin (unlock) the locations of all resources in all views. See Clear all pins tips.
Filters
By default all appropriate objects for a view are displayed. You can use the Filter menu to select
and apply a custom filter to the view. See Filters for views.
Displaying the topology tab
1. In the GUI navigation pane, select Replication.
   The Replication content pane is displayed.
2. In the Replication content pane, select the Topology tab.
   The current view of the Topology tab appears.
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
DR groups topology view
Description
The DR groups topology view shows logical relationships among storage systems, DR groups, and
virtual disks within DR groups. The default layout is organized in concentric circles. Storage systems,
DR groups, and virtual disks appear in the inner, middle, and outer circles, respectively.
Sample default layout
Typical resources in a view
1. Storage system
2. DR group (source)
3. Virtual disks in DR group
4. Remote replication link (points from source to destination)
5. DR group (destination)
6. Virtual disks in DR group
7. Storage system
Features
The following features are available in the tab:
Actions > Toggle Labels: Displays or hides resource labels in all views.
Actions > Clear all pins: Unpins (unlocks) the locations of all resources in all views. See Clear all pins tips.
Filter: Filters the resources that are displayed. See Filters for views.
View: Selects the resource type (and related resources) to view.
Double-click a resource: Opens the content pane for the resource (and closes the topology tab).
Move the cursor over a resource: Displays a short list of the resource's properties.
Right-click a resource: Opens an actions menu for the resource.
Drag-and-drop a resource: Moves a resource to a new location in an unpinned view. See Pinned view tips.
Pin toggle: Pins and unpins (locks/unlocks) all resource locations in a view. See Pinned view tips.
Zoom buttons: Zoom the view in or out.
Layout button: Redraws the view. Layout behavior varies. See Layout tips and Pinned view tips.
Print button: Prints the view.
Help button: Displays context-sensitive help.
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
Icons
The following icons may appear in the view:
Icon
Description
Storage system is normal
Storage system is initializing or in an unknown state
Storage system is disabled, degraded, or unmanaged
Storage system is failed or there is a bad connection to the storage manager
DR group is normal
DR group is disabled, degraded, unknown, copying, constructing, or deleting
DR group is failed
Remote replication is in a steady state
Remote replication is not in a steady state
Virtual disk is normal
Virtual disk is in an unknown state
Virtual disk is degraded, disabled, deleting, copying, or constructing
Virtual disk is failed
More help
• Overview. See About topology views.
• Tips. See topology view Tips.
• Filters. See Filters for views.
Host volumes topology view
Description
The host volumes topology view shows logical relationships among enabled hosts, their host
volumes, and the underlying virtual disks in the host volumes. The default layout is organized in
concentric circles. Enabled hosts, host volumes, and virtual disks appear in the inner, middle, and
outer circles, respectively.
Sample default layout
Typical resources in a view
1. Enabled host
2. Host volume
3. Virtual disk
For more information, see About host volumes.
Features
The following features are available:
Actions > Toggle Labels: Displays or hides resource labels in all views.
Actions > Clear all pins: Unpins (unlocks) the locations of all resources in all views. See Clear all pins tips.
Filter: Filters the resources that are displayed. See Filters for views.
View: Selects the resource type (and related resources) to view.
Double-click a resource: Opens the content pane for the resource (and closes the topology tab).
Move the cursor over a resource: Displays a short list of the resource's properties.
Right-click a resource: Opens an actions menu for the resource.
Drag-and-drop a resource: Moves a resource to a new location in an unpinned view. See Pinned view tips.
Pin toggle: Pins and unpins (locks/unlocks) all resource locations in a view. See Pinned view tips.
Zoom buttons: Zoom the view in or out.
Layout button: Redraws the view. Layout behavior varies. See Layout tips and Pinned view tips.
Print button: Prints the view.
Help button: Displays context-sensitive help.
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
Icons
The following icons may appear in the view:
Icon
Description
Enabled host is normal
Enabled host is in an unknown state
Host volume
Virtual disk is normal
Virtual disk is in an unknown state
Virtual disk is degraded, disabled, deleting, copying, or constructing
Virtual disk is failed
Snapshot of a virtual disk
More help
• Overview. See About topology views.
• Tips. See topology view Tips.
• Filters. See Filters for views.
Virtual disks topology view
Description
The virtual disks topology view shows logical relationships between storage systems and virtual
disks. The default layout is organized into concentric circles. Storage systems and virtual disks appear
in the inner and outer circles, respectively.
Sample default layout
Typical resources in a view
1. Storage system
2. Container
3. Virtual disk, original
4. Virtual disk, snapshot
5. Virtual disk, mirrorclone
Features
The following features are available:
Actions > Toggle Labels: Displays or hides resource labels in all views.
Actions > Clear all pins: Unpins (unlocks) the locations of all resources in all views. See Clear all pins tips.
Filter: Filters the resources that are displayed. See Filters for views.
View: Selects the resource type (and related resources) to view.
Double-click a resource: Opens the content pane for the resource (and closes the topology tab).
Move the cursor over a resource: Displays a short list of the resource's properties.
Right-click a resource: Opens an actions menu for the resource.
Drag-and-drop a resource: Moves a resource to a new location in an unpinned view. See Pinned view tips.
Pin toggle: Pins and unpins (locks/unlocks) all resource locations in a view. See Pinned view tips.
Zoom buttons: Zoom the view in or out.
Layout button: Redraws the view. Layout behavior varies. See Layout tips and Pinned view tips.
Print button: Prints the view.
Help button: Displays context-sensitive help.
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
Icons
The following icons may appear in the view:
Icon
Description
Storage system is normal
Storage system is initializing or in an unknown state
Storage system is disabled, degraded, or unmanaged
Storage system is failed or there is a bad connection to the storage manager
Virtual disk is normal
Virtual disk is in an unknown state
Virtual disk is degraded, disabled, deleting, copying, or constructing
Virtual disk is failed
Snapshot of a virtual disk
Container for a virtual disk
More help
• Overview. See About topology views.
• Tips. See topology view Tips.
• Filters. See Filters for views.
Filters for topology views
There can be many resources in a default view. To simplify a view, you can create a managed
set then select it as a filter for the view.
In the following example, a filter is applied to a complex view. The example filter is a managed
set of virtual disks that includes three disks.
Sample view without a filter
Sample view with a filter
Applicable filters
The following table identifies the types of managed sets that are applicable to each view. If you
select a filter that is not applicable to a view, the view is not filtered.
Applicable managed set filters by view:
View                                                              | DR groups | Enabled hosts | Host volumes | Storage systems | Virtual disks
DR groups view (Storage Systems > DR Groups > Virtual Disks)     | Yes       | -             | -            | Yes             | Yes
Host volumes view (Enabled Hosts > Host Volumes > Virtual Disks) | -         | Yes           | Yes          | -               | -
Virtual disks view (Storage Systems > Virtual Disks)             | Yes       | -             | -            | Yes             | Yes
More help
• General information. See About managed sets.
• Procedures. See Creating managed sets and Adding resources to a managed set.
Tips (topology views)
General
IMPORTANT: Do not use browser buttons to refresh the topology view or to navigate. Using
browser buttons will end the session. See troubleshooting Browser window is blank. The view is
automatically refreshed from the replication manager database every 15 seconds. See also
Automatic refresh of resources.
Pinned view
• When a view is pinned (locked) you cannot move (drag-and-drop) resources in the layout. To
  move resources, unpin the view, move the resources, then re-pin the view.
• Each view can be independently pinned or unpinned.
Layout
• An unpinned view is redrawn in its default layout and is sized to fit the current image area.
  IMPORTANT: The locations of all moved resources in the view are discarded. If you have moved
  many resources, HP recommends that you pin the view.
• A pinned view is not redrawn unless it has been previously zoomed. When redrawn, the
  layout is the original size but retains the locations of moved resources.
• In some cases, resources can extend beyond the scrollable image area. This is indicated by
  resource connection lines that extend off the tab.
  To access these resources, you can zoom out until the resources appear in the view.
Clear all pins action
• The Clear all pins action unpins all views and resets all views to their defaults. The displayed
  view is immediately redrawn to fit the tab.
  IMPORTANT: You cannot undo a Clear all pins action.
Toggle labels action
• The Toggle labels action displays or hides all labels in all views.
• You cannot move a label independently of the object. If it is necessary to improve legibility,
  you can move the object.
Filters
• You can have multiple managed sets that are used as filters. However, only one managed set
  can be applied as a filter at one time.
• If you create an empty (but applicable) managed set and apply it to a view, no objects are
  displayed.
• If you create a managed set only for the purpose of filtering a view, HP recommends that you
  assign a filter-related name. For example, you can assign a name such as: disk filter
  - sales department.
  This can help to quickly identify the type of filter. It can also help reduce the chance of
  subsequently applying an action to the managed set (in the managed set content pane) that
  might be undesirable, for example attempting to create snapshots of all the virtual disks in the
  managed set.
3 DR groups
Working with DR groups
About DR group resources
The DR Groups content pane displays storage resources for remote replication. See GUI window
Content pane.
The properties of DR groups, and the remote replication actions that you can perform, depend on
the controller software version of the storage systems. See Controller software features - remote
replication.
For overviews of remote replication and licensing, see Remote replication overview and Replication
licenses overview. DR group resources are available only if a storage system is licensed for remote
replication.
Views
• Tabular list view. See DR groups List view.
• Graphical tree views. See DR Grp Source/Destination tree view and System/DR Grp/Virtual
  Disk tree view.
Actions
• Actions in the GUI. See DR groups Actions summary.
• You can also interact with DR groups from a job and the CLUI. See DR groups actions cross
  reference.
Properties
• Properties displayed in the GUI. See DR groups Properties summary.
• You can also display properties from the CLUI. See the CLUI command Show DR_Group.
DR group actions summary
The following DR group actions are available on the content pane. Some actions have equivalent
job commands or CLUI commands. See DR groups actions cross reference.
Some actions are permitted only on a source DR group; other actions are permitted only on a
destination DR group. See Using DR group actions.
• View Properties. View the properties of a DR group. Procedure.
• Edit Properties. Edit the properties of a DR group. Procedure.
• New. Create a DR group pair. Procedure.
• Delete. Delete a DR group pair. Procedure.
• Add to Managed Set. Add a DR group to a managed set. Procedure.
• Remove from Managed Set. Remove a DR group from a managed set. Procedure.
• Add Members. Add virtual disks to a DR group pair. Procedure.
• Remove Members. Remove virtual disks from a DR group pair. Procedure.
• Suspend. Stop remote replication in a DR group pair. Procedure.
• Resume. Resume remote replication in a DR group pair. Procedure.
• Failover. Reverse the direction of remote replication in a DR group pair. Procedure.
• Enable Failsafe On Unavailable Member. Enable remote replication failsafe on unavailable
  member for a DR group pair. See DR groups Failsafe on unavailable member. Procedure.
• Disable Failsafe On Unavailable Member. Disable remote replication failsafe on unavailable
  member for a DR group pair. See DR groups Failsafe on unavailable member. Procedure.
• Revert to Home Roles. Revert the DR group pair to its home configuration. See DR groups
  Home configuration. Procedure.
• Force Full Copy. Force data in a source DR group to be copied to its destination DR group,
  rather than logging the data. See DR groups Full copy mode. Procedure.
• Low-Level Refresh. Update the properties of a DR group. Procedure.
• List Events. Display a list of events for the resource. Procedure.
• Launch the Device Manager. Access HP P6000 Command View from the replication manager.
  Procedure.
DR group actions cross reference
You can work with DR groups using GUI actions, jobs and CLUI commands. This table provides a
cross reference for performing typical tasks.
Create DR group
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > New (New DR Group wizard); Virtual Disk > Create DR Group (Replicate wizard)           | CreateDrGroup                             | Add DR_Group
-                                                                                                   | Setup HP Continuous Access (template)     | -
-                                                                                                   | Perform Cascaded Replication (template)   | -
-                                                                                                   | CreateDrGroupFromHostVolume               | -
Configure DR group
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Add Members                                                                             | AddDrGroupMember                          | Set DR_Group
DR Groups > Remove Members                                                                          | -                                         | Set DR_Group
DR Groups > Failover (with or without suspend)                                                      | FailoverDrGroup (with or without suspend) | Set DR_Group (with or without suspend)
-                                                                                                   | Perform planned failover (template)       | -
-                                                                                                   | Perform unplanned failover                | -
DR Groups > Force Full Copy                                                                         | ForceFullCopyDrGroup                      | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupAutoSuspend                     | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupComments                        | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupDestinationAccess               | Set DR_Group
DR Groups > Enable Failsafe on Unavailable Member; DR Groups > Disable Failsafe on Unavailable Member; DR Groups > Edit | SetDrGroupFailsafe   | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupFailsafeOnLinkDownPowerUp       | Set DR_Group
DR Groups > Revert to Home Roles; DR Groups > Edit                                                  | SetDrGroupHome                            | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupIoMode                          | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupMaxLogSize                      | Set DR_Group
DR Groups > Edit                                                                                    | SetDrGroupName                            | Set DR_Group
DR Groups > Suspend; DR Groups > Resume; DR Groups > Edit                                           | SetDrGroupSuspend                         | Set DR_Group
Delete DR group
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Delete                                                                                  | DeleteDrGroup                             | Delete DR_Group
Edit DR group
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Edit                                                                                    | -                                         | Set DR_Group
Manage sets of DR groups
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Add to Managed Set                                                                      | -                                         | Set Managed_Set
DR Groups > Remove from Managed Set                                                                 | -                                         | Set Managed_Set
Other DR group tasks
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Low-Level Refresh                                                                       | -                                         | Set DR_Group
-                                                                                                   | DiscoverDiskDevicesForDrGroup             | -
-                                                                                                   | Throttle replication I/O (template)       | -
-                                                                                                   | WaitDrGroupNormalization                  | -
View DR groups
GUI action                                                                                          | Job template or command                   | CLUI command
DR Groups > Low-Level Refresh                                                                       | -                                         | Set DR_Group
DR Groups > View Properties                                                                         | -                                         | Show DR_Group
DR group properties summary
For help on properties, see the following tabs in the DR group properties window.
• DR group and the DR group pair. See General tab.
• DR group's log disk. See Log tab.
• Virtual disks in a DR group. See Members tab.
• Managed sets in which the DR group is a member. See Membership tab.
See also Viewing DR group properties.
DR group views
See the following examples: List view, DR Grp Source/Destination tree view, and System/DR
Grp/Virtual Disk tree view.
List view
DR Grp Source/Destination Tree view
System/DR Grp/Virtual Disk Tree view
Adding DR groups to a managed set
Add DR groups to a managed set.
Considerations
• You can use the GUI or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Managed sets of DR groups.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the DR groups to add to a managed set.
3. Select Actions > Add to Managed Set.
   The Create New Managed Set window opens.
4. Select a managed set, or select Create New Managed Set, and then enter a name.
5. Click OK.
Adding virtual disks to a DR group pair
Add source and destination virtual disks to a DR group pair (source and destination).
When you add virtual disks to a source DR group, the remote copies (virtual disks) are automatically
added to the destination DR group.
Considerations
•
You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
•
Guidelines apply. See Remote replication guidelines.
•
To add virtual disks to a DR group pair, you must specify the source DR group. See DR group
pair.
•
You cannot directly add virtual disks to a destination DR group.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group of the DR group pair in which to add virtual disks.
3. Select Actions > Add Members.
   The Add Members wizard opens.
4. Follow the instructions in the wizard.
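A job-based alternative uses the AddDrGroupMember command listed in the DR groups actions cross
reference. The line below is only a sketch: the %variable% placeholders and argument order are
assumptions, not the confirmed signature. See the HP P6000 Replication Solutions Manager Job
Command Reference for the exact syntax.
AddDrGroupMember ( %dr_group_unc_name%, %storvol_unc_name% )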
Creating a DR group pair
Create a DR group pair (source and destination). See DR group pair.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
Procedures
The following procedure uses the GUI. See also virtual disks Creating a DR group pair.
DR group procedure
1. In the navigation pane, select DR Groups.
2. On the List tab, select Actions > New.
   The New DR Group wizard opens.
3. Follow the instructions in the wizard.
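A job-based alternative uses the CreateDrGroup command, and the CLUI uses Add DR_Group (both
listed in the DR groups actions cross reference). The job line below is only a sketch: the %variable%
placeholders and argument order are assumptions, not the confirmed signature. See the HP P6000
Replication Solutions Manager Job Command Reference for the exact syntax.
CreateDrGroup ( %dr_group_name%, %source_storvol_unc_name%, %destination_array_name% )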
Deleting a DR group pair
Delete a DR group pair (source and destination). See DR group pair.
When you delete a DR group, you delete the DR group pair (source and destination). Virtual disks
that are members of the source DR group are retained. You can discard or retain the remote copies
(virtual disks).
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• If you choose to discard the remote copies when the remote link is not operational, the remote
  virtual disks are not deleted. You will need to manually delete them.
• You cannot delete a DR group pair if any destination virtual disk is presented to a host.
CAUTION: If you discard a remote copy, the destination virtual disk and its data are deleted.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source or destination DR group in the DR group pair to delete.
3. Select Actions > Delete.
   The Discard Remote Copies window opens.
4. Select the remote copy you want to discard, and then click OK.
5. Click Finish.
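A job-based alternative uses the DeleteDrGroup command, and the CLUI uses Delete DR_Group (both
listed in the DR groups actions cross reference). The job line below is only a sketch: the %variable%
placeholder is an assumption, not the confirmed signature, and it does not show how remote copies
are discarded or retained. See the HP P6000 Replication Solutions Manager Job Command Reference
for the exact syntax.
DeleteDrGroup ( %dr_group_unc_name% )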
Editing DR group properties
Edit (set) the properties of a DR group.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• DR groups exist in pairs. Editing one partner's property can affect the properties in the other
  partner. For example, if you enable failsafe on unavailable member on a source DR group,
  its destination DR group is failsafe-enabled, too.
Procedure
This
1.
2.
3.
procedure uses the GUI.
In the navigation pane, select DR Groups.
On the List tab, select the DR group to edit.
Select Actions > Edit Properties.
The Editing DR Group window opens.
68
DR groups
4.
5.
Edit the properties.
Click Finish.
Enabling failsafe on unavailable member for a DR group pair
Enable failsafe on unavailable member of a DR group pair. See DR groups Failsafe and states.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See virtual disks Remote replication guidelines.
• To set failsafe on unavailable member on a DR group pair, you must specify the source DR group. See DR group pair.
IMPORTANT: The failsafe on unavailable member setting for a DR group pair can impact host I/O and data consistency between the source and destination DR groups. Ensure that you understand the potential impacts of changing the mode.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group in the DR group pair to failsafe-enable.
3. Select Actions > Enable Failsafe on Unavailable Member.
4. Click OK.
Disabling the failsafe on unavailable member
Disable failsafe on unavailable member of a DR group pair. See DR groups Failsafe on unavailable
member.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To set failsafe on unavailable member on a DR group pair, you must specify the source DR group. See DR group pair.
• You cannot disable failsafe on unavailable member if the status of the DR group pair is failsafe locked. See DR groups Failsafe on unavailable member and Failsafe states.
IMPORTANT: The failsafe on unavailable member setting for a DR group pair can impact host I/O and data consistency between the source and destination DR groups. Ensure that you understand the potential impacts of changing the mode.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group in the DR group pair to failsafe-disable.
3. Select Actions > Disable Failsafe on Unavailable Member.
4. Click OK.
Failing over a DR group pair
Fail over a DR group pair. See DR groups Failover.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• HP recommends that you not fail over a DR group pair more often than every 15 minutes.
• To fail over a DR group pair, you must specify the destination DR group. See DR group pair.
• You cannot fail over a DR group pair if remote replication is suspended or if a remote copy (virtual disk) is in an unknown state. See DR groups Suspension state and virtual disks Resource states.
• If supported by controller software, an option is available to suspend remote replication after the failover. See Controller software features - remote replication and DR groups Suspend on failover.
IMPORTANT: Failing over a DR group pair impacts host I/O. Ensure that you understand the potential impacts of performing a failover. For more information on failover, see the HP P6000 Continuous Access Implementation Guide.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the destination group in the DR group pair to fail over.
3. Select Actions > Failover.
The Confirm Failover window appears.
4. If supported by controller software, an option to suspend remote replication is enabled. Select Suspend on failover to suspend remote replication immediately after the failover is executed.
IMPORTANT: Do not perform this action when the links are down, especially during an unplanned failover. Doing so can create an invalid source-source configuration. See Invalid DR group pair - source and source.
5. Click OK.
Forcing a full copy
Force a full copy in a DR group pair. See DR groups Full copy.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To force a full copy in a DR group pair, you must specify the source DR group. See DR group pair.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group in the DR group pair to force a full copy.
3. Select Actions > Force Full Copy.
4. Click OK.
Launching the device manager
Access HP P6000 Command View from the replication manager. Each time you use this action in
the same replication manager session, a new window for HP P6000 Command View is opened.
Considerations
• You can only use the GUI to launch the device manager. The action is not available unless an individual resource is selected (highlighted) in the replication manager content pane.
• You must know the security credentials (user name and password) to log on to HP P6000 Command View.
Procedure
1. In the navigation pane, select DR Groups, Storage Systems, or Virtual Disks.
2. On a List or Tree tab, select any storage resource.
3. Select Actions > Launch the Device Manager.
A new browser window opens.
4. Respond to the security alert message, and then log on to HP P6000 Command View.
Listing individual resource events
Display the events for an individual resource.
Considerations
• You can only use the GUI.
• Applies only to individual DR groups, storage systems, and virtual disks.
Procedure
1. In the navigation pane, select a resource type.
2. On the List tab, select the specific resource whose events are to be displayed.
3. Select Actions > List events.
An events window for the resource opens.
Low-level refreshing DR groups
Perform a low-level refresh of the virtual disks and log disk in a DR group. See virtual disks Low-level
refresh.
Considerations
• You can use the GUI or CLUI. See DR groups Actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the DR group whose virtual disks and log disk properties are to be
updated.
3. Select Actions > Low-Level Refresh.
The Confirmation Action window appears.
4. To continue, click OK.
The properties of the virtual disks and log disk in the DR group are updated.
Removing DR groups from a managed set
Remove DR groups from a managed set.
Considerations
• You can use the GUI or the CLUI. See DR groups Actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the DR groups to remove from a managed set.
3. Select Actions > Remove From Managed Set.
The Select Managed Sets window opens.
4. Select the managed set from which to remove the DR groups.
5. Click OK.
Removing virtual disks from a DR group pair
Remove virtual disks from a DR group pair.
When you remove virtual disks from a source DR group, the source disks are retained. You can
discard or retain the remote copies (virtual disks).
Considerations
• You can use the GUI or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To remove virtual disks from a DR group pair, you must specify the source DR group. See DR group pair.
• You cannot directly remove virtual disks from a destination DR group.
• If you attempt to discard the remote copies when the remote link is not operational, the remote virtual disks are not deleted. You will need to manually delete them.
• You cannot remove virtual disks from a DR group pair if remote replication is suspended. See DR groups Suspension state.
CAUTION: If you discard a remote copy, the destination virtual disk and its data are deleted.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group in the DR group pair from which to remove virtual
disks.
3. Select Actions > Remove members.
The Remove Member Wizard opens.
4. Follow the instructions in the wizard.
Resuming a DR group pair
Resume (allow) remote replication in a suspended DR group pair. See DR groups Suspension state.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To resume remote replication in a DR group pair, you must specify the source DR group. See DR group pair.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group of the DR group pair in which to resume remote
replication.
3. Select Actions > Resume.
4. Click OK.
Reverting a DR group pair to home
Revert a DR group pair to its home configuration. See DR groups Home.
Considerations
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To revert a DR group pair to its home configuration, you must specify the destination DR group. See DR group pair.
• You cannot revert to home if the DR group pair is suspended. See DR groups Suspension state.
• If the DR group pair is already in its home configuration, reverting to home has no impact.
• If the DR group pair is not already in its home configuration, reverting to home causes a failover. See DR groups Failover.
IMPORTANT: Failing over a DR group pair impacts host I/O. Ensure that you understand the potential impacts of performing a failover. For more information on failover, see the HP P6000 Continuous Access Implementation Guide.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the destination group in the DR group pair to revert to home configuration.
3. Select Actions > Revert to Home.
4. Click OK.
Suspending a DR group pair
Suspend remote replication in a DR group pair. See DR groups Suspension state.
Considerations
IMPORTANT: Do not suspend remote replication when the links are down, especially during an unplanned failover. Doing so can create an invalid source-source configuration. See Invalid DR group pair - source and source.
• You can use the GUI, jobs, or the CLUI. See DR groups Actions cross reference.
• Guidelines apply. See Remote replication guidelines.
• To suspend remote replication in a DR group pair, you must specify the source DR group. See DR group pair.
• You cannot suspend a DR group pair if it is failsafe-enabled. See DR groups Failsafe on unavailable member.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the source group of the DR group pair in which to suspend remote
replication.
3. Select Actions > Suspend.
A confirmation window appears.
IMPORTANT: Read the confirmation notice before performing the next step.
4. Click OK.
Using DR groups
When using a DR group pair, some actions and properties require that you specify either the
source or destination. See the following tables: Controlling, Creating and deleting, Adding and
deleting virtual disks, and Editing (setting) properties.
Controlling
• Fail over a DR group pair (do not suspend). DR group to specify: Destination. Result on source DR group: the source becomes the destination. Result on destination DR group: the destination becomes the source. Remarks: Procedure. See also DR group Failover.
• Fail over and suspend a DR group pair. DR group to specify: Destination. Result on source DR group: the source becomes the destination, and then remote replication is suspended. Result on destination DR group: the destination becomes the source, and then remote replication is suspended. Remarks: Procedure. See also DR group Suspend on failover.
• Force a full copy in a DR group pair. DR group to specify: Source. Result on source DR group: when applied after resuming, the source begins to copy all data on its virtual disks to the destination. No logs are used. Result on destination DR group: the destination begins to receive the data from the source virtual disks. No logs are used. Remarks: Procedure. See also DR group Full copy mode.
• Resume remote replication in a DR group pair. DR group to specify: Source. Result on source DR group: remote replication from the source is allowed. If applicable, begins log merging or full copy from the source. Result on destination DR group: remote replication to the destination is allowed. If applicable, begins log merging or full copy to the destination. Remarks: Procedure. See also DR group Suspension state.
• Revert a DR group pair to its home configuration. DR group to specify: Destination. Result on source DR group: if the DR group pair is not in its home configuration a failover occurs. Otherwise, there is no change in operation. Result on destination DR group: if the DR group pair is not in its home configuration a failover occurs. Otherwise, there is no change in operation. Remarks: Procedure. See also DR group Home.
• Suspend remote replication in a DR group pair. DR group to specify: Source. Result on source DR group: remote replication from the source is not allowed. Host writes to the source continue but are logged. Result on destination DR group: remote replication to the destination is not allowed. Remarks: Procedure. See also DR group Suspension state.
Creating and deleting
• Create a DR group pair. DR group to specify: Source. Result on source DR group: creates a source DR group that includes the specified virtual disks. Result on destination DR group: creates the destination DR group and remote copies (virtual disks). Remarks: Procedure.
• Delete a DR group or pair. DR group to specify: Either. Result on source DR group: deletes the source DR group. Its virtual disks are retained. Result on destination DR group: deletes the destination DR group. Its virtual disks are retained or discarded (deleted) as requested. Remarks: Procedure.
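For readers who script against these tables, the role that must be specified for each controlling task can be summarized in a small lookup. The following Python sketch is purely illustrative; the function and dictionary are hypothetical helpers, not part of the replication manager, its jobs, or its CLUI:

# Which DR group role to specify for each task, per the tables above.
ROLE_TO_SPECIFY = {
    "fail over": "destination",
    "fail over and suspend": "destination",
    "force full copy": "source",
    "resume": "source",
    "revert to home": "destination",
    "suspend": "source",
    "create pair": "source",
    "delete pair": "either",
}

def role_for(task: str) -> str:
    """Return which DR group role to specify for a task (hypothetical helper)."""
    return ROLE_TO_SPECIFY[task]

if __name__ == "__main__":
    print(role_for("fail over"))        # destination
    print(role_for("force full copy"))  # source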
Adding and deleting virtual disks
• Add virtual disks to a DR group pair. DR group to specify: Source. Result on source DR group: adds a source virtual disk to the source DR group. Result on destination DR group: adds a corresponding remote copy to the corresponding destination DR group. Remarks: Procedure.
• Remove virtual disks from a DR group pair. DR group to specify: Source. Result on source DR group: deletes a source virtual disk from the source DR group. Result on destination DR group: prompts you to keep or discard the corresponding remote copy. Remarks: Procedure.
Editing (setting) properties
• Edit (general) a DR group. DR group to specify: Either. Result on source DR group: properties are changed. Result on destination DR group: properties are changed. Remarks: Procedure.
• Auto suspend on link down for a DR group pair. DR group to specify: Source. Result on source DR group: auto suspend on link down is disabled or enabled. Result on destination DR group: auto suspend on link down is disabled or enabled. Remarks: Procedure. See also DR group Auto suspend on link down.
• Auto suspend on full copy for a DR group pair. DR group to specify: Source. Result on source DR group: auto suspend on full copy is disabled or enabled. Result on destination DR group: auto suspend on full copy is disabled or enabled. Remarks: Procedure. See also DR group Auto suspend on full copy.
• Comment for a DR group. DR group to specify: Either. Result on source DR group: comment for the DR group is edited. Result on destination DR group: comment for the DR group is edited. Remarks: Procedure.
• Destination access mode. DR group to specify: Destination. Result on source DR group: destination access mode is changed. Result on destination DR group: destination access mode is changed. Remarks: Procedure.
• Failsafe on unavailable member for a DR group pair. DR group to specify: Source. Result on source DR group: failsafe on unavailable member is disabled or enabled. Result on destination DR group: failsafe on unavailable member is disabled or enabled. Remarks: Disabling - Procedure. Enabling - Procedure. See also DR group Failsafe on unavailable member and DR group Failsafe on link-down/power-up.
• Failsafe on link-down/power-up for a DR group pair. DR group to specify: Source. Result on source DR group: failsafe on link-down/power-up is disabled or enabled. Result on destination DR group: failsafe on link-down/power-up is disabled or enabled.
• Home. DR group to specify: Either. Result on source DR group: sets home true or false in coordination with the other group. Result on destination DR group: sets home true or false in coordination with the other group. Remarks: Procedure. See also DR group Home.
• I/O mode. DR group to specify: Source. Result on source DR group: remote replication I/O mode is changed. Result on destination DR group: remote replication I/O mode is changed. Remarks: Procedure. See also DR group I/O mode.
• Maximum log disk size. DR group to specify: Source. Result on source DR group: maximum log disk size is changed. Result on destination DR group: maximum log disk size is changed. Remarks: Procedure. See also DR group Maximum log disk size.
• Name. DR group to specify: Either. Result on source DR group: name of the DR group is changed. Result on destination DR group: name of the DR group is changed. Remarks: Procedure.
Viewing DR groups
Display DR group list and tree views. See DR groups Views.
Considerations
• You can use the GUI or CLUI to display lists. See DR groups Actions cross reference.
• Tree views are available only in the GUI.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
The content pane displays DR groups.
2. Click the List tab.
A tabular list of DR groups is displayed.
3. Click the Tree tab.
A graphical tree of DR groups is displayed.
4. Click View to select another tree view.
Viewing DR group properties
View the properties of a specific DR group. See DR groups Properties summary.
Considerations
• You can use the GUI or CLUI. See DR groups Actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select DR Groups.
2. On the List tab, select the DR group to view.
3. Select Actions > View Properties.
The DR Group Properties window opens.
4. Click the properties tabs.
DR group concepts
DR group pairs (source and destination)
DR groups operate in a paired relationship, with one DR group being a source and the other a
destination. The terms source and destination are sometimes called a DR mode or DR role. The
terms source and destination are also applied to virtual disks, with a destination virtual disk also
known as a remote copy. Various actions, such as a planned or unplanned failover, allow you to
reverse the relationship so that the source DR group becomes the destination and destination
becomes the source. See DR groups Failover.
IMPORTANT: The role of a DR group in remote replication is of critical importance and can be
reversed at any time.
For example, a DR group can be a source at 8:00 and then, after a user-initiated failover, be a
destination at 8:05. When reviewing a DR group's role, ensure that you are looking at current
properties.
In a typical environment, hosts perform I/O with the virtual disks in the source DR group at one
site and the data is remotely replicated to the destination DR group at another site. The virtual disks
in a source DR group remotely replicate to the same destination, fail over together, share a DR
group log, and preserve write-order within the group.
[Figure: A DR group pair replicating from Site 1 to Site 2 over a remote link. Callouts: 1. A source virtual disk (1 of 3 in the source DR group). 2. A destination virtual disk, or remote copy (1 of 3 in the destination DR group). 3. The source DR group log disk.]
A DR group log is a designated virtual disk in a source DR group that is used under certain
circumstances to temporarily store host writes to source virtual disks. Logging can be started
automatically when remote replication to the destination DR group cannot be performed, for
example when a remote link is down. Logging can also be started on demand by performing a
suspend action from the GUI, jobs, or a CLUI command. When remote replication is re-established,
the contents of the source DR group log are written to the destination virtual disks. See DR groups
Logs.
Home group and home configuration
When designated as a home group, the group is considered to be the preferred source in the remote
replication relationship. By default, the property for the source and destination DR group is set true
and false, respectively. This is called the home configuration. See DR groups Home.
DR group names
When a DR group pair is created, the same DR group name is assigned to the source and
destination DR group.
IMPORTANT: After creation, HP recommends that you edit partner DR groups and change the names.
After creation, you can change the names to something logical for your environment. For example, rename them based on their physical locations, Boston and London, or their home roles SiteA_Srce and SiteB_Dest. Certain characters are not allowed in the names. See Illegal characters.
Auto suspend on link-down
The auto suspend feature controls whether remote replication in a DR group pair is automatically
suspended when a remote link goes down. Values are:
• Enabled. Remote replication in the DR group pair is automatically suspended when a remote link goes down.
• Disabled. Remote replication in the DR group pair is not automatically suspended when a remote link goes down.
Auto suspend on link-down usage guidelines:
• Remote replication is not automatically restarted after the link becomes operational again.
IMPORTANT: A resume command must be issued to resume remote replication.
• Improper use of auto suspend can create an invalid source-source configuration. See Invalid DR group pair - source and source.
IMPORTANT: Do not set auto suspend when the links are down, especially during an unplanned failover.
• If not specified when creating a DR group pair, the value is set to disabled.
• You cannot change this property when failsafe on unavailable member is enabled. See DR groups Failsafe on unavailable member.
See also DR groups Suspension state.
Controller software versions
Implementation of this feature is controller software dependent. See Controller software features - remote replication.
Auto suspend on full copy
The auto suspend feature controls whether remote replication in a DR group pair is automatically
suspended when a full copy is required. Values are:
• Enabled. Remote replication in the DR group pair is automatically suspended when a full copy is required.
• Disabled. Remote replication in the DR group pair is not automatically suspended when a full copy is required.
Auto suspend on full copy guidelines:
• Auto suspend on full copy is enabled by default.
• This feature enables you to control when a full copy begins. This allows you to create a replica of the destination before the full copy begins, protecting the data in the event of a disruption during the full copy.
IMPORTANT: A resume command must be issued to resume remote replication.
See also DR groups Suspension state.
Controller software versions
Implementation of this feature is controller software dependent. See Controller software features - remote replication.
Cascaded replication
Cascaded replication refers to a replication event, and a configuration, that involves three sites
(storage systems). See the following diagram.
The source volume (a) at Site 1 is in a remote replication relationship with its copy (b) at Site 2.
Whenever a point-in-time snapclone copy is needed (c), it is created at Site 2. The snapclone
copy is then remotely replicated (cascaded) to another remote copy (d) at Site 3.
[Diagram: Cascaded replication across three sites. The source (a) at Site 1 is remotely replicated to the remote copy (b) at Site 2. At Site 2, a snapclone (c) is created from the remote copy, and the snapclone is then remotely replicated to another remote copy (d) at Site 3.]
The result is that a point-in-time replica of the source exists at two different sites.
Copy state
Copy state identifies the state of remote replication between a source and destination virtual disk.
Values are:
• Normal. Remote replication involving the virtual disk is normal. There is no logging or merging involving the virtual disk.
• Full Copy. The DR group log threshold has been reached and the virtual disk is involved in a full copy. See DR groups Full copy mode.
Destination access mode
Destination access mode specifies the type of host I/O that is allowed when a virtual disk is in a
destination (remote copy) role. Values are:
• None. The virtual disk cannot be presented to any hosts.
• Inquiry Only. The virtual disk can be presented to hosts, but no reads (or writes) are allowed.
• Read Only. The virtual disk can be presented to hosts for read only.
The Inquiry Only mode allows SCSI inquiry but prohibits host reads. This mode is typically used
to support host clusters.
Controller software versions
Implementation of this feature is controller software dependent. See Controller software features - remote replication.
DR group states and icons
Icons in the DR group content pane indicate the key states of each DR group. You can move the
cursor over an icon to display details.
[Figure: DR group state icons. 1. Operational state. 2. Failsafe state. 3. Suspension state. 4. Log state.]
Failover
To fail over means to reverse the direction of remote replication in a DR group pair. A failover
event is an event in which the remote replication direction was reversed.
When a failover event occurs, the roles of source and destination in a DR group pair are reversed.
For example, in a planned failover, if DR group A was originally the source and DR group B was
the destination, those roles are reversed when DR group B is failed over.
In the case of an unplanned failover (disaster), there may not be a literal reversal of remote
replication because Site A might not be operational. Instead, when DR group B is failed over, Site
B can become an independent site, or a source that remotely replicates to a third site.
Performing a planned or unplanned failover involves technical preparation and significant
operational planning. Host I/O is impacted by a failover. The following guidelines capture just a
few of the technical considerations.
• To fail over a DR group pair, you must specify the destination DR group. See DR group pair.
• You cannot fail over a DR group pair if remote replication is suspended or if a remote copy (virtual disk) is in an unknown state. See DR groups Suspension state and virtual disks Resource states.
• You can fail over a DR group while it is normalizing.
• If only one component in a DR group pair fails, repairing that single component may be preferable to performing a failover.
IMPORTANT: Failing over a DR group pair impacts host I/O. Ensure that you understand
the potential impacts of performing a failover. For more information on failover, see the HP
P6000 Continuous Access Implementation Guide.
See also DR groups Suspend on failover and Remote replication guidelines.
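As a minimal sketch of the failover preconditions described above (you must specify the destination DR group, and a pair cannot be failed over while it is suspended or while a remote copy is in an unknown state), consider the following Python fragment. The DrGroup fields and the check are hypothetical illustrations only, not replication manager APIs:

from dataclasses import dataclass, field
from typing import List

@dataclass
class DrGroup:
    # Hypothetical fields used only to illustrate the documented failover rules.
    role: str                      # "source" or "destination"
    suspended: bool = False
    remote_copy_states: List[str] = field(default_factory=list)

def can_fail_over(group: DrGroup) -> bool:
    """Return True if the documented preconditions for a failover are met."""
    if group.role != "destination":            # failover is specified on the destination
        return False
    if group.suspended:                        # cannot fail over a suspended pair
        return False
    if "unknown" in group.remote_copy_states:  # no remote copy may be in an unknown state
        return False
    return True

if __name__ == "__main__":
    ok = DrGroup(role="destination", remote_copy_states=["normal", "normal"])
    bad = DrGroup(role="destination", suspended=True)
    print(can_fail_over(ok), can_fail_over(bad))  # True False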
Failsafe on link-down/power-up
Failsafe on link-down/power-up is a data protection feature that specifies whether or not virtual
disks in a source DR group are automatically re-presented to hosts after a power-up (reboot) of the
source storage system when the links to the destination DR group are down. Values are:
• Enabled. When enabled, virtual disks in a source DR group are not automatically re-presented to hosts after a link-down/power-up (reboot) of the source storage system. This behavior is called presentation blocking and provides data protection under several circumstances. See Operational State - Blocked.
• Disabled. When disabled, virtual disks in a source DR group are automatically re-presented to hosts after a link-down/power-up (reboot) of the source storage system.
Default setting and user control
• When a DR group pair is created, Failsafe on link-down/power-up (presentation blocking) is enabled by default.
• After a DR group pair is created, some versions of controller software allow you to change the failsafe on link-down/power-up setting. In other versions, the setting cannot be changed. See Page 238.
Failsafe on unavailable member
Failsafe on unavailable member is a data protection feature that specifies how host-writes, logging,
and remote replication occur when a component in the DR pair becomes unavailable during normal
power-on operation. Values are:
• Enabled
◦ While all components in the DR group pair function normally, host writes, logging, and remote replication continue normally.
◦ When any virtual disk in the DR group pair becomes unavailable, host writes, logging, and remote replication are stopped.
• Disabled
◦ While all components in the DR group pair function normally, host writes, logging, and remote replication continue normally.
◦ If any remote copy (virtual disk) in the DR group pair becomes unavailable, all host writes to the source DR group and logging continue, but remote replication to the destination DR group is automatically stopped. Host writes are stored in the source DR group log until remote replication is re-established.
◦ If a source virtual disk becomes unavailable, host writes to the virtual disk are automatically stopped. Remote replication to the remote copy is also automatically stopped. Host writes, logging, and remote replication to other virtual disks in the source and destination DR groups continue normally.
Default setting and user control
When you create a DR group pair, you can choose to initially enable or disable the feature. The
setting can be changed after the DR group pair is created.
In some older versions of storage management software, failsafe on unavailable member is called
failsafe mode. See also Failsafe states, Remote replication guidelines, and Failsafe on
link-down/power-up.
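The enabled and disabled behaviors described above can be pictured as a small decision helper. The following Python sketch merely restates the documented rules; the event labels and result strings are assumptions, not product behavior code:

def failsafe_result(failsafe_enabled: bool, unavailable: str) -> str:
    """Summarize the documented behavior when a DR pair component becomes unavailable.

    unavailable: "none", "remote_copy", or "source_disk" (hypothetical labels).
    """
    if unavailable == "none":
        return "host writes, logging, and remote replication continue normally"
    if failsafe_enabled:
        # Any unavailable virtual disk stops host writes, logging, and replication.
        return "host writes, logging, and remote replication are stopped"
    if unavailable == "remote_copy":
        return ("host writes and logging continue; remote replication stops and "
                "writes are stored in the source DR group log")
    # unavailable == "source_disk"
    return ("host writes to that disk and replication of its remote copy stop; "
            "other virtual disks in the DR groups continue normally")

if __name__ == "__main__":
    print(failsafe_result(True, "remote_copy"))
    print(failsafe_result(False, "source_disk"))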
Failsafe states
The failsafe state for a DR group pair indicates the state of host I/O, logging, and remote replication
functions. Values are:
• Locked. A locked failsafe state for a DR group pair indicates that host writes, logging, and remote replication have been automatically stopped due to a problem.
• Unlocked. An unlocked failsafe state for a DR group pair indicates that host writes, logging, and remote replication are occurring normally.
Failsafe states are related to settings for Failsafe on link-down/power-up and Failsafe on unavailable
member.
Full copy mode
When in full copy mode, data in a DR group pair is copied directly from the source group virtual
disk to the destination group, without using any data in the log disk. Typically, this is an automatic
event that happens when remote replication is resumed and the DR group log disk is full. See
Default maximum log size.
To control when an automatic full copy occurs, use the Auto suspend on full copy feature. This
enables you to make a replica of the destination before the full copy begins. See Auto suspend
on full copy.
Force full copy
In a forced full copy:
• All virtual disks in the source DR group are copied; you cannot copy individual disks.
• All host write-transactions in the DR group log are deleted.
Even when supported by the controller software, the force full copy feature is not available if
conditions in the DR group pair are not appropriate.
Controller software versions
Implementation of this feature is controller software dependent. See Controller software features - remote replication.
Home
When designated as home, the DR group is the preferred source in the DR group pair. Values are:
• True/Yes. The DR group is the preferred source.
• False/No. The DR group is the preferred destination.
Home group is a property that is maintained only by the replication manager.
Home configuration
When the replication manager creates a DR group pair, the home property for the source and
destination groups is set to true and false, respectively. This is called the home configuration. When
failed over from the home configuration, the home property remains unchanged.
Reverting to home
Reverting to home is an event that causes a failed-over DR group pair to return to its original roles in the home configuration.
Setting the home property
• You can change the home property using a GUI action, job command, or CLUI command. See DR groups Actions cross reference.
• During discovery, the replication manager automatically adds the home group property to source and destination DR group pairs that do not have it. See resources Discovery.
Use of home
The home concept is useful because the preferred source is flagged by the home property, even
if it is currently operating as the destination. This helps you to revert to home configuration easily.
Consider using home group to designate a business center for your replication activities. For
example, if your headquarters is in Chicago, you can set your Chicago DR group as home. If your
headquarters moves or becomes inoperable, you can then change home to some other location.
I/O throttling
I/O throttling refers to suspending some DR groups during a log merge or full copy event. By
suspending noncritical DR groups, the critical DR groups complete the merge or full copy sooner.
Throttling of I/O after logging
When logging, and not in the failsafe mode, remote replication automatically resumes when the
links are restored. If there are several DR groups with large logs, they can compete for remote
replication bandwidth, thus slowing the overall merge or full copy between the two sites.
By suspending the merge or full copy of the noncritical DR groups, the critical data is available
sooner at the destination. After the critical DR groups finish merging or copying, the remote
replication of the noncritical DR groups can be resumed.
Job command processing
When a job is run, some job commands may specify features that are not available on the target
array. See Controller software versions - remote replication. Rather than fail the job, the replication
manager ignores the unsupported feature. The following job commands are processed in this
manner.
Job commands and remarks:
• CreateDrGroup, CreateDrGroupFromHostVolume. The following arguments are ignored if the feature is not available: AutoSuspend On Link Down, Destination RAID level, Source Log Disk Group, Destination Log Disk Group, Maximum Log Disk Size.
• ForceFullCopyDrGroup. This job command is ignored if the feature is not available.
• SetDrGroupAutoSuspend. This job command is ignored if the feature is not available.
• SetDrGroupFailsafeOnLinkdownPowerup. This job command is ignored if the feature is not available.
• PresentStorageVolume. When a virtual disk is a remote copy, the Access Type = None (No read) argument is ignored if the feature is not available.
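The general idea of this processing (drop arguments for unsupported features rather than fail the job) can be sketched as follows. The argument names come from the list above; the function and data structures are hypothetical Python, not the replication manager's job engine:

# Arguments of CreateDrGroup / CreateDrGroupFromHostVolume that are ignored
# when the corresponding controller software feature is unavailable.
FEATURE_DEPENDENT_ARGS = {
    "AutoSuspend On Link Down",
    "Destination RAID level",
    "Source Log Disk Group",
    "Destination Log Disk Group",
    "Maximum Log Disk Size",
}

def effective_arguments(requested: dict, supported_features: set) -> dict:
    """Drop feature-dependent arguments that the target array does not support."""
    return {
        name: value
        for name, value in requested.items()
        if name not in FEATURE_DEPENDENT_ARGS or name in supported_features
    }

if __name__ == "__main__":
    requested = {"Name": "DRG_Boston", "Maximum Log Disk Size": "10GB"}
    print(effective_arguments(requested, supported_features=set()))
    # {'Name': 'DRG_Boston'}  -- the unsupported argument is silently ignored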
Logs
Log overview and states
A DR group log is a designated virtual disk that stores the host data that is written to the virtual
disks in a source DR group. The contents of the log are replicated to the destination virtual disks
to synchronize them with their sources.
The process of storing host data, in the exact order received in the DR group, is called logging.
The process of replicating the logged data to the destination DR group, in the same order as
received, and synchronizing with the sources is called merging or DR group normalization. See
Logging, Merging, and DR group normalization.
When a DR group log becomes full, a full copy is automatically initiated. See full copy mode and
log size.
Log states
The DR group log state depends on the Write mode as shown below.
Replication write mode and log states:
• Enhanced asynchronous
◦ Normal. All source disks in the DR group are continuously logging and merging.
◦ Running down. All source disks in the DR group are merging as the result of a requested transition to synchronous mode. See Write mode transitions.
• Basic asynchronous
◦ Not in use. No source virtual disk in the DR group is logging or merging.
◦ Logging. At least one source virtual disk in the DR group is logging but none are merging.
◦ Merging. At least one source virtual disk in the DR group is merging.
• Synchronous
◦ Not in use. No source virtual disk in the DR group is logging or merging.
◦ Logging. At least one source virtual disk in the DR group is logging but none are merging.
◦ Merging. At least one source virtual disk in the DR group is merging.
Log contents
A small portion of log space is reserved for array commands. The largest portion stores host data
that is written to all of the virtual disks in the disk group.
Log and disk group planning
When you create a DR group pair, a log disk is automatically created for the source and destination
DR group. An important planning factor is selecting the disk groups in which to create the log
disks.
Disk group category
• Select a disk group whose category is appropriate to the write mode of the DR group pair. See Write mode and Online and near-online disk groups. You can use HP P6000 Command View to determine the category of the disk groups that are displayed in the replication manager.
Write mode and recommended disk group category:
◦ Enhanced asynchronous: Online
◦ Basic asynchronous: Near-online
◦ Synchronous: Near-online
• If near-online disk groups cannot be made available, use the online disk groups that have the most free space.
Disk group size
• When selecting a disk group, ensure there is space for log disks to expand to their maximum size. See Log size.
• When creating disk groups, plan for the impacts of DR group logs.
Log size
When a DR group pair is created, the size of each log disk (source and destination) is 136 MB
(Vraid1) by default. The initial size is also called its minimum size.
When logging, the size automatically increases in proportion to the host data written to the DR
group, up to a preset maximum. By default, the maximum size is determined by the controller
software. However, you can manually set the maximum.
[Figure: Log size example. A log disk starts at its initial (minimum) size and can grow up to a maximum size, which varies.]
• The actual size and the requested size of a log are reported. When these differ, it indicates the requested size change cannot be made until the log state changes.
• In synchronous write mode, the log is automatically returned to its minimum size when a merge is completed or a full copy is performed. In enhanced asynchronous write mode, the log is not returned to its minimum size when a merge is completed or when a full copy is performed.
• Sometimes a log can become much larger than is typically needed. In this case, HP recommends that you edit the DR group properties and make the log smaller.
Default maximum size
The default maximum size depends on the controller software version and equals the DR group
size multiplied by a factor. For example, if a DR group contains virtual disks with a total of 10 GB
and the factor is 1x then the default maximum size is 10 GB. For specific factors, see Controller
software features - remote replication.
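A minimal worked sketch of this calculation, assuming a hypothetical per-controller factor (see Controller software features - remote replication for the actual factors):

def default_max_log_size_gb(total_dr_group_gb: float, factor: float) -> float:
    """Default maximum log size = total capacity of the DR group's virtual disks x factor."""
    return total_dr_group_gb * factor

if __name__ == "__main__":
    # Example from the text: a 10 GB DR group with a 1x factor has a 10 GB default maximum.
    print(default_max_log_size_gb(10, 1.0))  # 10.0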
Log size and disk group occupancy warnings
The size of a log disk can cause a disk group occupancy warning. See disk group occupancy
alarm level. If this happens, consider the following options:
• Temporarily stop host I/O to the virtual disks in the impacted DR group and plan necessary changes.
• Allow the log to completely fill and trigger an automatic full copy, or manually force a full copy. See Full copy mode. This returns the log to its minimum size.
• Add storage to the disk group that the log disk is in.
• Delete unneeded virtual disks or containers in the disk group to create more free space.
Logging
The process of storing host write-transactions to the virtual disks in a source DR group, in the exact
order received, is called logging. The details of the logging process depend on remote replication
write mode. See Write mode.
Logging when in enhanced asynchronous write mode
When using enhanced asynchronous write mode, logging occurs continuously, even when replication is not suspended and the destination is available. In effect, the log continuously acts as a FIFO buffer, accepting host write-transactions in and replicating them out to the destination.
In many cases, an asynchronous log can buffer up to several days of host write-transactions,
depending on the log size and the transaction rate.
Logging when in synchronous write mode
When in synchronous write mode, logging occurs only if replication is suspended or the destination
is not available. When replication is resumed or the destination becomes accessible, the logged
host write-transactions are merged to the destination. See Merging.
Log-full event
When a log is full and its size cannot be increased, a full copy is automatically started. See Full
copy mode. The controller software considers a log to be full when any of the following occur:
• The log disk reaches its maximum log size.
• The log disk reaches the maximum size for a disk. See Virtual disk guidelines.
• No free space remains in the disk group that contains the log disk.
Log merging
The process of replicating a log's host write-transactions to a destination DR group, in the same
order as received, is called merging.
• In asynchronous write mode, merging occurs whenever required and the destination is available.
• In synchronous write mode, merging occurs when replication is resumed and the destination is available. When a merge is complete, the log disk is automatically returned to its minimum size. See Log size.
Log disk and states
The DR group log is a designated virtual disk that stores a source DR group's host writes while
remote replication to the destination DR group is stopped.
When replication is re-established, the contents of the log are written to the destination virtual disks
within the destination DR group to synchronize the destinations with their sources. This process of
writing the log disk contents to the destination in the order that the writes occurred is called merging.
Log states
A DR group log can be in one of the following states:
• Unused (Normal). No source virtual disk in the DR group is logging or merging.
• Logging. At least one source virtual disk in the DR group is logging but none are merging.
• Merging. At least one source virtual disk in the DR group is merging and logging.
Log-full threshold
Storage system controller software considers a log disk full when any of the following occurs:
• No free space remains in the disk group.
• The log disk reaches 2 TB of Vraid1 (4 TB total).
• The log disk reaches its maximum size.
When the log disk is declared full, DR group members are marked for full copy and the log disk
is deleted.
Maximum log disk size
When a DR group is created, the log disk is initially sized as 136 MB (Vraid1). Whenever logging
occurs, the size is increased in proportion to the amount of writes. The size can increase only up
to a user specified maximum value or to the controller software's default maximum value.
The default maximum value depends on the controller software version and is expressed as a
multiple of the total capacity of the virtual disks in the DR group. For example, if the DR group
contains virtual disks with a total of 10 GB and the multiple is 1x then the default maximum size
for the log disk is 10 GB.
For specific values, see Controller software features - remote replication.
When creating disk groups, ensure there is sufficient space for log disks to expand to their maximum
size. HP recommends creating DR group log disks in near-line disk groups, if available. Otherwise,
create log disks in the online disk groups with the most free space.
Low-level refresh
Update replication manager database entries by manually performing a discovery and refresh of
individual DR groups. To update the DR group pair, apply a low-level refresh to the source and
destination DR groups.
The action does the following:
• Performs a discovery refresh at the storage system level to gather the properties of the specified DR groups, including the log disks.
• Updates the replication manager database with the new properties.
Managed sets of DR groups
Managed sets of DR groups require careful planning. Consider the following guidelines:
• Some DR group actions are permitted only on source DR groups; other actions are permitted only on destination DR groups. See Using DR group actions.
• HP recommends that you create separate managed sets for source and destination DR groups. When you conduct failover operations, you can perform the actions that correspond to the new roles of the managed set.
• Source and destination DR groups (from different DR group pairs) can be members of the same managed set.
• The source and destination DR group in a DR group pair cannot be members of the same managed set.
• If you plan to use DR group managed sets for failover operations, ensure the managed sets are controlled by the same management server at the time of failover.
Normalization
Normalization is a replication background process which verifies, on a block-by-block basis, that
data is identical on a source virtual disk and its replica. When the source and its replica have
identical data, they are said to be normalized or in a normalized state. The following types of
normalization can apply.
DR group normalization
DR group normalization is a process which verifies that data is identical on source virtual disks
and their remote copies.
Some actions should not be performed during remote copy normalization. When creating jobs,
use wait commands to ensure that normalization is completed. See the WaitDrGroupNormalization
job command.
The disks in a DR group cannot become normalized while any of the following conditions are in
effect:
• Remote replication is suspended. See suspension state.
• An intersite link is down.
• In basic asynchronous write mode, or synchronous write mode, its DR group log is logging or merging. See Write mode and DR group log.
• In enhanced asynchronous write mode, its DR group log is logging, merging, or contains any transactions to be merged.
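The wait-command advice above can be pictured as a simple polling loop. This Python sketch only illustrates the concept behind a command such as WaitDrGroupNormalization; the status callback, timing parameters, and implementation are hypothetical and do not reflect the job command's actual syntax:

import time
from typing import Callable

def wait_for_normalization(is_normalized: Callable[[], bool],
                           poll_seconds: float = 30.0,
                           timeout_seconds: float = 3600.0) -> bool:
    """Poll until the DR group reports a normalized state, or until the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if is_normalized():
            return True
        time.sleep(poll_seconds)
    return False

if __name__ == "__main__":
    # Trivial demonstration with an always-true status check.
    print(wait_for_normalization(lambda: True, poll_seconds=0.1, timeout_seconds=1.0))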
Mirrorclone normalization
Mirrorclone normalization is a process that verifies that data is identical on a source virtual disk
and its synchronized mirrorclone. See Synchronized mirrorclones.
Some actions should not be performed during mirrorclone normalization. When creating jobs, use
wait commands to ensure that normalization is completed. See the
WaitStorageVolumeNormalization, WaitStorageVolumesNormalization, and
WaitVolumeGroupNormalization job commands.
Snapclone normalization (unsharing)
Snapclone normalization is a process which verifies that data is identical on a source virtual
disk and its snapclone, before the snapclone can become an independent disk (a point-in-time
copy). See Snapclones. Snapclone normalization is also called unsharing.
Some actions should not be performed during snapclone normalization. When creating jobs, use
wait commands to ensure that normalization is completed. See the
WaitStorageVolumeNormalization, WaitStorageVolumesNormalization, and
WaitVolumeGroupNormalization job commands.
Operational state - blocked
Blocked is an operational state of a DR group, or a virtual disk, that indicates if the array has
detected a potential replication issue and is preventing presentation of impacted source virtual
disks. See Presentation to hosts.
The presentation of virtual disks in a source DR group is blocked after power is cycled off and on
to both controllers in the source array and:
• The destination DR group was unavailable after power was cycled off and on in the source array.
• Remote replication was not suspended after power was cycled off and on in the source array.
• The Destination mode was something other than Read Only after power was cycled off and then on in the source array.
You can determine if disk presentation is blocked by checking the operational state of the DR group
or individual disk. See Viewing DR group properties and Viewing virtual disk properties.
Unblocking
The virtual disks in a source DR group are automatically unblocked when the destination DR group
becomes available again. You can manually unblock virtual disks by suspending the source DR
group in which they are members. See Suspending a DR group pair.
Presentation blocking provides protection
Disk presentation blocking is a feature that is similar to failsafe operation. See Failsafe mode.
Blocking can provide the following:
• Data consistency during a site failure, when a source and its destination virtual disk could be simultaneously presented to different hosts.
• Protection in stretched-host clusters against host members accessing two copies of the same data.
• Protection for boot-from-SAN servers against two hosts booting from the same boot image.
Remote replication guidelines
Storage arrays
• Source and destination arrays must have remote replication licenses. See Replication licenses overview.
• The array selected for the destination DR group depends on the remote replication configuration. For supported configurations, see the HP P6000 Continuous Access Implementation Guide.
• A storage array can have DR group relationships with up to two other storage arrays.
• The maximum number of virtual disks in a DR group and the maximum number of DR groups per array vary with controller software versions. See Controller software features - remote replication.
DR groups and failover
• Failover of a DR group pair is permitted only by specifying a destination DR group.
• When failsafe mode is enabled, a DR group pair cannot be suspended. See Failsafe on unavailable member and Suspend on failover.
• When a DR group pair is suspended, it cannot be failed over or reverted to home.
• A DR group cannot be deleted if a virtual disk in the destination DR group is presented.
• For additional help on usage, see Using DR groups.
Log disks
• Log disks for the source and destination DR groups should be kept the same size.
• The log size cannot be changed when in enhanced asynchronous write mode. See Write mode.
Write mode
• The write mode cannot be changed if the source log disk is in use or contains data. Before changing, a non-empty source log disk must be merged to its destination.
• If a source and destination array are not running the same implementation of write mode (basic or enhanced), only synchronous write mode can be selected.
Virtual disks
• All virtual disks that contain data for an application must be in the same DR group.
• When in enhanced asynchronous write mode, virtual disks cannot be added to or deleted from a DR group. This restriction is controller software version dependent. See Controller software features - remote replication.
• When suspended, virtual disks cannot be removed from the source or destination DR group.
• Virtual disks cannot be directly added to a destination DR group.
• To be added to a source DR group, a virtual disk (see the sketch after this list):
◦ Cannot be a member of another DR group
◦ Cannot be a snapshot or a mirrorclone. See virtual disks Types
◦ Cannot have a mirrorclone; this restriction is controller software version dependent, see Controller software features - remote replication.
◦ Must be in a normal operational state, see resources Operational states
◦ Must use mirrored cache, see virtual disks Cache policies
◦ Must have the same presentation status as other virtual disks in the DR group. With some versions of controller software, the virtual disk must be presented to a host. See storage systems Controller software features - remote replication. See also virtual disks Presentation.
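The eligibility requirements above can be collected into a single check. The following Python sketch is a hypothetical illustration of the documented rules (the field names are assumptions), not a replication manager API:

from dataclasses import dataclass

@dataclass
class VirtualDisk:
    # Hypothetical fields mirroring the documented eligibility rules.
    in_dr_group: bool
    disk_type: str            # e.g. "original", "snapshot", "mirrorclone"
    has_mirrorclone: bool
    operational_state: str    # e.g. "normal"
    mirrored_cache: bool
    presented: bool

def eligible_for_source_dr_group(disk: VirtualDisk, group_presented: bool) -> bool:
    """Return True if the disk meets the documented requirements for a source DR group."""
    return (not disk.in_dr_group
            and disk.disk_type not in ("snapshot", "mirrorclone")
            and not disk.has_mirrorclone        # controller software version dependent
            and disk.operational_state == "normal"
            and disk.mirrored_cache
            and disk.presented == group_presented)  # same presentation status as the group

if __name__ == "__main__":
    disk = VirtualDisk(False, "original", False, "normal", True, True)
    print(eligible_for_source_dr_group(disk, group_presented=True))  # True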
Suspend on failover
Suspend on failover is an event that combines the failover of a DR group pair and suspension of
remote replication. The DR group pair is failed over, followed immediately by suspension of remote
replication. See DR groups Suspension state and Remote replication guidelines.
This event can be initiated through the replication manager GUI, jobs, or the CLUI. See DR groups
Actions cross reference.
IMPORTANT: Do not initiate this event when the remote replication links are down, especially
during an unplanned failover. Doing so can create an invalid source-source configuration. See
Invalid DR group pair - source and source.
IMPORTANT: Do not initiate this event if the failsafe mode of the DR group pair is enabled. This
can create an invalid configuration.
Controller software versions
Implementation of this feature is controller software dependent. See Controller software features - remote replication.
Suspension state
Suspension state indicates if remote replication in a DR group pair is allowed or has been stopped.
Values are:
• Suspended. Remote replication in the DR group pair has been stopped.
• Resumed. Remote replication in the DR group pair is allowed.
IMPORTANT: You cannot remove virtual disks from a DR group pair when remote replication is suspended.
Write mode (async/sync replication)
The replication write mode of each DR group pair can be asynchronous or synchronous. The choice
is typically a business decision based on your goals and the bandwidth of the intersite link. Unless
specified otherwise, replication write mode is set to synchronous when you create a DR group
pair.
Write mode comparison (summary)
• Best data protection: Synchronous
• Best host I/O performance: Asynchronous
Asynchronous write mode
In asynchronous write mode, the source array acknowledges host writes before the data is replicated
on the destination array. This process allows faster host I/O than with synchronous. From a data
protection standpoint, there can be brief instances in which the data is not identical on the source
and destination DR group.
Asynchronous write mode can be basic or enhanced, depending on the controller software version,
or it may be user selectable. See Controller software version - remote replication features. The
following is a summary of basic and enhanced operations.
Basic asynchronous operation
1. Host data written to the virtual disks in a source DR group is stored in a write-pending cache on the source array.
2. The source array acknowledges the writes to the host.
3. The source array replicates the data in the write-pending cache to the destination array.
4. The destination array writes the data to the virtual disks in the destination DR group and acknowledges completion back to the source array.
Enhanced asynchronous operation
1. Host data written to the virtual disks in a source DR group is stored in a write-pending cache on the source array.
2. The source array copies the cached data to the source DR group log.
3. The source array acknowledges the writes to the host.
4. The source array replicates the data in the DR group log to the destination array.
5. The destination array writes the data to the virtual disks in the destination DR group and acknowledges completion back to the source array.
IMPORTANT: If a source and destination array are not running the same implementation of write
mode (basic or enhanced), only synchronous write mode can be selected.
Synchronous write mode
In synchronous write mode, the array acknowledges I/O completion after the data is cached on
the source and destination arrays. This process maintains identical data on a source DR group
and its destination DR group at all times. The following is a summary of synchronous operation.
Synchronous operation
1. Host data written to the virtual disks in a source DR group is stored in a write-pending cache on the source array.
2. The source array replicates the data in the write-pending cache to the destination array.
3. The destination array writes the data to the virtual disks in the destination DR group and acknowledges completion
back to the source array.
4. The source array acknowledges the writes to the host.
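One way to compare the three write modes is to look at when the host receives its acknowledgment. The step lists below are a Python rendering of the operation summaries above; the data structure itself is illustrative only:

# Step sequences from the write-mode summaries above; the host acknowledgment
# happens earlier in the asynchronous modes and last in synchronous mode.
WRITE_MODE_STEPS = {
    "basic asynchronous": [
        "store host data in write-pending cache on source array",
        "acknowledge writes to the host",
        "replicate cached data to destination array",
        "destination writes data and acknowledges the source",
    ],
    "enhanced asynchronous": [
        "store host data in write-pending cache on source array",
        "copy cached data to the source DR group log",
        "acknowledge writes to the host",
        "replicate log data to destination array",
        "destination writes data and acknowledges the source",
    ],
    "synchronous": [
        "store host data in write-pending cache on source array",
        "replicate cached data to destination array",
        "destination writes data and acknowledges the source",
        "acknowledge writes to the host",
    ],
}

if __name__ == "__main__":
    for mode, steps in WRITE_MODE_STEPS.items():
        ack_position = steps.index("acknowledge writes to the host") + 1
        print(f"{mode}: host acknowledged at step {ack_position} of {len(steps)}")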
Write mode states
• Asynch normal. Normal conditions for replication.
• Asynch paused. Host writes can be accepted but not replicated.
• Asynch rundown normal. Normal conditions with an increased priority to replicate from the log.
• Asynch rundown paused. Host writes can be accepted but not replicated.
• Full copy normal and full copy paused. See Full copy mode.
• Synch normal. Normal conditions for replication.
• Synch merge normal. Also called merging. Host writes in the log are being replicated.
• Synch merge paused. Also called logging. Host writes are being stored in the log because replication is suspended or the destination is not available.
• Unknown. Uninitialized or startup state.
Write mode transitions
The write mode in a DR group pair can be changed, subject to operational considerations and
technical guidelines. See Remote replication guidelines - write mode.
Transition from synchronous to asynchronous
• The source DR group log cannot be Logging or Merging, and the destination DR group must be available.
• On the source and destination arrays, the disk groups specified for the logs must accommodate the log's maximum size.
• The DR group pair cannot be suspended, failsafe enabled, or performing a full copy. See Suspension, Failsafe mode, and Full copy.
Transition from asynchronous to synchronous
• The DR group pair cannot be suspended.
• The DR group log (with basic asynchronous) cannot be logging.
• The transition from enhanced asynchronous does not occur until the source DR group log is run down (merged) to the destination.
• During a transition, the log is run down or merged to the destination and host I/O is automatically throttled. See I/O throttling. The merging may cause an apparent delay in completing the transition, but this is expected behavior.
4 Enabled hosts
Working with enabled hosts
About enabled host resources
The Enabled Hosts content pane displays hosts that you can interact with. See GUI window Content
pane.
These are standard storage hosts that also are running the replication manager host agent. See
Enabled and standard hosts.
NOTE: VM servers are also included as enabled host resources although they differ from standard
enabled hosts. A host agent is installed on an enabled host providing the mechanism by which
HP P6000 Replication Solutions Manager interacts with the host. A host agent is not installed on
a VM server. HP P6000 Replication Solutions Manager recognizes a VM server but does not
interact with it directly to perform replication tasks. See VM servers for more information.
Views
• Tabular list view. See Enabled hosts list view.
• Graphical tree views. See Host/Host Vol/Mt Point tree view.
• VM Servers view. See VM Servers view.
Actions
• Actions in the GUI. See Enabled hosts actions summary.
• You can also interact with enabled hosts from a job and the CLUI. See Enabled hosts actions cross reference.
Properties
• Properties displayed in the GUI. See Enabled hosts properties summary.
• You can also display properties from the CLUI. See the CLUI command Show Host_Agent.
Enabled Hosts actions summary
The following enabled host actions are available on the content pane. Some actions have equivalent job commands or CLUI commands. See Enabled host actions cross reference.
• View Properties. View the properties of an enabled host. Procedure.
• Low Level Refresh. Update the properties of an enabled host. Procedure.
• Set Credentials. Add or update the logon credentials for accessing an enabled host. Procedure.
• Execute Script. Run a command, batch file, or script on an enabled host. Procedure.
• New. Add an enabled host to the replication manager database of resources. Procedure.
• Delete. Delete an enabled host from the replication manager database of resources. Procedure.
• Change to Traditional Host OS. Change the enabled host from a VM server guest OS to a traditional host OS. Procedure.
• Change to VM Guest OS. Change the enabled host from a traditional host OS to a VM server guest OS. Procedure.
• Add to Managed Set. Add an enabled host to a managed set. Procedure.
• Remove from Managed Set. Remove an enabled host from a managed set. Procedure.
Enabled hosts actions cross reference
You can work with enabled hosts using GUI actions, jobs and CLUI commands. The following
tables provide a cross reference for performing typical tasks.
Create enabled hosts

GUI action                                      Job template or command    CLUI command
Enabled Hosts > New                             –                          Add Host_Agent

Delete enabled hosts

GUI action                                      Job template or command    CLUI command
Enabled Hosts > Delete                          –                          Delete Host_Agent

Manage sets of enabled hosts

GUI action                                      Job template or command    CLUI command
Enabled Hosts > Add to Managed Set              –                          Set Managed_Set
Enabled Hosts > Remove from Managed Set         –                          Set Managed_Set

Other enabled host tasks

GUI action                                      Job template or command    CLUI command
Enabled Hosts > Execute Script                  Launch                     Set Host_Agent
Enabled Hosts > Low Level Refresh               –                          Set Host_Agent
Set Credentials                                 –                          –

Validate enabled hosts

GUI action                                      Job template or command    CLUI command
Enabled Hosts > View Properties                 ValidateHost               –

View enabled hosts

GUI action                                      Job template or command    CLUI command
Enabled Hosts > View Properties                 –                          Show Host_Agent

Change enabled host OS type

GUI action                                      Job template or command    CLUI command
Enabled Hosts > Change to Traditional Host OS   –                          Set Host_Agent
Enabled Hosts > Change to VM Guest OS           –                          Set Host_Agent
Enabled hosts properties summary
For help on properties, see the following tabs in the enabled hosts properties window.
• Enabled host computer. See General tab.
• Host bus adapters in an enabled host. See HBAs/ports tab.
• Storage communication ports. See P6000 EVA Hosts tab.
• Managed sets in which the enabled host is a member. See Membership tab.
See also Viewing enabled hosts properties.
Enabled hosts views
See the following examples: List view, Host/Host Vol/Mt Point tree view, and VM Servers view.
Adding enabled hosts
Add an enabled host to the replication manager database. After an enabled host is added, the
host can interact with the replication manager.
Considerations
• You can use the GUI or the CLUI. See Enabled hosts actions cross reference.
• Before an enabled host can be added, an administrator must first install the appropriate OS-specific HP P6000 Replication Solutions Manager host agent software on the host and ensure the agent is running correctly.
• To add the host, the administrator must know the fully qualified network name or IP address of the host.
• When adding a guest OS-enabled host, be sure to use the network name or IP address of the guest OS and not the network name or IP address of the parent VM server on which the guest OS is running.
• VM server. Guest OS-enabled hosts can be added before the parent VM server is added. The properties will display the parent VM server as Unknown until it is added.
• HP recommends using the same host name in HP P6000 Command View and HP P6000 Replication Solutions Manager to identify each host. This provides consistency in both interfaces when presenting a virtual disk to a host.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select Actions > New.
The New Enabled Host window opens.
3. Follow the instructions in the window.
Adding VM servers
Add a VM server to the replication manager database. After a VM server is added, it is recognized
by the replication manager and any guest OS-enabled hosts on it can be managed by HP P6000
Replication Solutions Manager.
Considerations
• You can use the GUI or the CLUI. See VM servers actions cross reference.
• To add a VM server, the administrator must know the fully qualified network name or IP address of the host.
• A VM server can be added before or after adding the guest OS-enabled hosts on the VM server.
• HP recommends using the same host name in HP P6000 Command View and HP P6000 Replication Solutions Manager to identify each host. This provides consistency in both interfaces when presenting a virtual disk to a host.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the VM Servers tab, select Actions > New VM Server.
The New VM Server window opens.
3. Follow the instructions in the window.
Adding enabled hosts to a managed set
Add enabled hosts to a managed set.
Considerations
• You can use the GUI or the CLUI. See Enabled hosts actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled hosts to add to a managed set.
3. Select Actions > Add to Managed Set.
   The Create New Managed Set window opens.
4. Select a managed set, or select Create New Managed Set and enter a name.
5. Click OK.
Changing enabled host OS type
Change the type of enabled host OS from Traditional Host OS to VM Guest OS, or from VM Guest OS to Traditional Host OS.
Considerations
• You can use the GUI or the CLUI. See Enabled hosts actions cross reference.
• Some examples of when it is necessary to change the OS type include:
  ◦ You entered an IP address for the enabled host but selected the wrong host OS type. You can change it to the correct type using the GUI or CLUI.
  ◦ Because VMware virtual machines can be migrated between VM servers, the IP address previously assigned to a guest OS could be reused by a physical machine, or vice versa.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled host for which you want to change the OS type.
3. Depending on which OS type you want for the enabled host, select either Actions > Change to Traditional Host OS or Actions > Change to VM Guest OS.
4. Click OK.
Deleting enabled hosts
Delete an enabled host from the replication manager database. After an enabled host is deleted,
the host cannot interact with the replication manager.
Considerations
• You can use the GUI or the CLUI. See Enabled hosts actions cross reference.
• Check for running jobs and scripts
  CAUTION: Ensure that no jobs or scripts are running that involve the enabled host. Deleting an enabled host while a job or script is running can cause job or script failures.
• Check impacted jobs and scripts
  CAUTION: If the enabled host is deleted, jobs or scripts that involve it can fail to run or not run properly.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled host to delete.
3. Select Actions > Delete.
4. Click OK.
Deleting VM servers
Delete a VM server from the replication manager database. After a VM server is deleted, it is no
longer recognized by the replication manager.
Considerations
• You can use the GUI or the CLUI. See VM server actions cross reference.
• Any guest OS-enabled hosts on the VM server being deleted will remain. The properties panel for guest OS-enabled hosts will display the parent VM server as Unknown.
• Check for running jobs and scripts
  CAUTION: Ensure that no jobs or scripts are running that involve a guest OS on the VM server. Deleting a VM server while a job or script is running can cause job or script failures.
• Check impacted jobs and scripts
  CAUTION: If the VM server is deleted, jobs or scripts that involve a guest OS on the VM server can fail to run or not run properly.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the VM Servers tab, select the VM server to delete.
3. Select Actions > Delete.
4. Click OK.
Executing a host script, command or batch file
Execute a command, batch file, or script on an enabled host. For example, you can run a script
that pauses database I/O before you make a snapshot of a virtual disk. After making a snapshot,
run another script to resume the I/O.
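As an illustration, the following is a minimal sketch of such a host-side script, written here in Python. It is not part of the product: the service name MyAppService is a placeholder for whatever application you need to quiesce, and the use of the standard Windows net command assumes a Windows enabled host.

# Hypothetical quiesce/resume script for an enabled host (illustrative only).
# Usage: python quiesce.py stop    (before replication)
#        python quiesce.py start   (after replication)
import subprocess
import sys

SERVICE = "MyAppService"   # placeholder service name; adapt to your application

def main():
    action = sys.argv[1] if len(sys.argv) > 1 else "stop"
    # "net stop" / "net start" are standard Windows service commands.
    return subprocess.call(["net", action, SERVICE])

if __name__ == "__main__":
    sys.exit(main())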
Considerations
• You can use the GUI, jobs, or the CLUI. See Enabled hosts actions cross reference.
• Root and administrator permissions
  CAUTION: Because commands are executed using root or administrator permissions on the host, it is possible to severely impact the host and host storage operations.
• Commands are not validated before sending them to the host. Ensure that commands fully comply with the host OS or application software rules and syntax.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled host on which you want to run the command, batch file, or script.
3. Select Actions > Execute Script.
   The Execute Console Command window opens.
4. Enter the full path to the command, batch file, or script you want to run. You can also enter parameters for a command/script.
5. Click OK.
Low-level refreshing enabled hosts
Perform a low-level refresh of specific enabled hosts. See Low-level refresh of enabled hosts.
Considerations
• You can use the GUI or CLUI. See Enabled host actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the specific enabled hosts whose properties are to be updated.
3. Click Low-Level Refresh.
   The Confirmation Action window opens.
4. To continue, click OK.
   The enabled host properties are updated.
Low-level refreshing VM servers
Perform a low-level refresh of specific VM servers. See Low-level refresh of VM servers.
Considerations
• You can use the GUI or CLUI. See VM server actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the VM Servers tab, select the specific VM servers whose properties are to be updated.
3. Click Low-Level Refresh.
   The Confirmation Action window opens.
4. To continue, click OK.
   The VM server properties are updated.
Removing enabled hosts from a managed set
Remove enabled hosts from a managed set.
Considerations
• You can use the GUI or the CLUI. See Enabled host actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled hosts to remove from a managed set.
3. Select Actions > Remove From Managed Set.
   The Select Managed Sets window opens.
4. Select the managed set from which to remove the enabled hosts.
5. Click OK.
Setting security credentials for enabled hosts
Add or update a security credential for accessing one or more enabled hosts. See Security credentials for enabled hosts.
Considerations
• You can only use the GUI to update security credentials.
• When setting security credentials for multiple enabled hosts in a single action, the selected enabled hosts must have the same security credentials.
• Valid credentials
  IMPORTANT: If a valid security credential is not entered or if the credential expires or is changed, the replication manager (GUI, jobs, and CLUI) will not be able to interact with the enabled host.
Procedure
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled host or hosts whose security credential you want to save in the replication manager database.
3. Select Set Credentials.
4. Follow the instructions in the window.
Setting security credentials for VM servers
Add or update a security credential for accessing one or more VM servers. See Security credentials for enabled hosts.
Considerations
• You can use the GUI or the CLUI. See VM server actions cross reference.
• When setting security credentials for multiple VM servers in a single action, the selected VM servers must have the same security credentials.
• Valid credentials
  IMPORTANT: If a valid security credential is not entered or if the credential expires or is changed, the replication manager (GUI, jobs, and CLUI) will not be able to interact with the VM server.
Procedure
1. In the navigation pane, select Enabled Hosts.
2. On the VM Servers tab, select the VM servers whose security credential you want to save in the replication manager database.
3. Select Set Credentials.
4. Follow the instructions in the window.
Viewing enabled hosts
Display enabled host list and tree views. See Enabled hosts views.
Considerations
• You can use the GUI or the CLUI to display lists. See Enabled hosts actions cross reference.
• Tree views are available only in the GUI.
Procedures
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
   The content pane displays enabled hosts.
2. Click the List tab.
   A tabular list of enabled hosts appears.
3. Click the Tree tab.
   A graphical tree appears that shows the enabled hosts, their host volumes, and the OS components and virtual disks associated with the host volumes.
4. Click the VM Servers tab.
   A tabular list of VM servers appears.
Viewing enabled host properties
View the properties of a specific enabled host. See Enabled hosts properties summary.
Considerations
• You can use the GUI or the CLUI. See Enabled hosts actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the List tab, select the enabled host to view.
3. Select Actions > View Properties.
   The Enabled Host Properties window opens.
4. Click the appropriate properties tabs.
Viewing VM server properties
View the properties of a specific VM server. See VM servers properties summary.
Considerations
• You can use the GUI or the CLUI. See VM server actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Enabled Hosts.
2. On the VM Servers tab, select the VM server to view.
3. Select Actions > View Properties.
   The VM server properties window opens.
4. Click the appropriate properties tab.
VM server actions summary
The following VM server actions are available on the VM Servers tab. Some actions have equivalent job commands or CLUI commands. See VM server actions cross reference.
• View Properties. View the properties of a VM server. Procedure.
• Low Level Refresh. Update the properties of a VM server. Procedure.
• Set Credentials. Add or update the logon credentials for accessing a VM server. Procedure.
• New VM Server. Add a VM server to the replication manager database of resources. Procedure.
• Delete. Delete a VM server from the replication manager database of resources. Procedure.
VM server actions cross reference
You can work with VM servers using GUI actions and CLUI commands. The following tables provide
a cross reference for performing typical tasks.
Add VM servers

GUI action                                      Job template or command    CLUI command
Enabled Hosts > VM Server > New                 –                          add vm_server

Delete VM servers

GUI action                                      Job template or command    CLUI command
Enabled Hosts > VM Server > Delete              –                          delete vm_server

Other VM server tasks

GUI action                                      Job template or command    CLUI command
Enabled Hosts > VM Server > Low Level Refresh   –                          set vm_server
Enabled Hosts > VM Server > Set Credentials     –                          set vm_server

View VM servers

GUI action                                      Job template or command    CLUI command
Enabled Hosts > VM Server > View Properties     –                          show vm_server
VM server properties summary
For help on properties, see the following tabs in the VM server properties window.
• VM server computer. See General tab.
• Host bus adapters in a VM server. See HBAs/ports tab.
• Storage communication ports. See P6000 EVA Hosts tab.
• Guest OSs running as enabled hosts on the VM server. See VM Guests tab.
See also Viewing VM server properties.
Enabled host concepts
Enabled and standard hosts
An enabled host is a computer in a SAN that is capable of performing I/O with connected storage
arrays and that has been configured to interact with the replication manager server. Only enabled
hosts are displayed in the replication manager interface.
A standard host is a computer in a SAN that is capable of performing I/O with connected storage
arrays, but has not been configured to interact with the replication manager server. Standard hosts
are not displayed in the replication manager interface.
Notes:
• In HP P6000 Replication Solutions Manager documentation, the term “enabled host” refers specifically to a host that has been configured with an HP P6000 Replication Solutions Manager host agent.
• A host can also be enabled with an HP Business Copy EVA/MA/EMA V2.x host agent. Such host agents are part of HP Business Copy 2.x software and do not interact with or appear in the replication manager interface. A host that is enabled with both host agents appears in both interfaces.
Enabled host capabilities
When hosts are enabled, you can interact with them using the replication manager GUI, jobs, and
CLUI. For example, you can instruct an enabled host to dynamically mount a storage volume, or
suspend and resume application I/O when backing up virtual disks.
In addition, the replication manager can discover and interact with host volumes, volume groups,
and logical volumes.
Host names and ports
HP recommends that you use the same host name in HP P6000 Command View and the replication
manager to identify each host. This provides consistency in both interfaces when presenting a
virtual disk to a host.
An enabled host's storage system communication ports that are shared with HP P6000 Command
View are listed in the Enabled Hosts properties window, P6000 EVA Hosts tab. See Viewing
enabled host properties.
Low-level refresh of enabled hosts
Update replication manager database entries by manually performing a discovery and refresh of
individual enabled hosts.
The action does the following:
• Performs a discovery refresh of the enabled hosts to gather the host's properties
• Updates the replication manager database with the new properties
Low-level refresh of VM servers
Update replication manager database entries by manually performing a discovery and refresh of individual VM servers.
The action does the following:
• Performs a discovery refresh of the VM servers to gather the server's properties
• Updates the replication manager database with the new properties
Security credentials for enabled hosts
To establish and manage replication manager host agent security credentials, administrators must
use the OS on each enabled host. See security Groups configuration or the HP P6000 Replication
Solutions Manager Installation Guide and HP P6000 Replication Solutions Manager Administrator
Guide.
For an enabled host to interact with the server, a valid host agent security credential must be present
in the replication manager database. See Setting security credentials for enabled hosts and Adding
a new enabled host.
IMPORTANT: If a valid security credential is not entered or if the credential expires or is changed,
the replication manager (GUI, jobs, and CLUI) will not be able to interact with the enabled host.
Host agent security credentials (user name and password) must be provided in the following cases:
• New enabled host. To add a new enabled host to the replication manager database, an administrator must enter a valid security credential.
• Changed credentials. To update the security credential that is saved in the database, an administrator must enter a new security credential.
• Imported replication manager database. After importing a replication manager database that includes enabled hosts, an administrator must re-enter the security credentials.
Password change considerations
Administrators should carefully plan and coordinate host agent security credential changes with
replication manager operations. For example, if the security policy in your environment is to reset
passwords every six months, consider that such changes can result in enabled hosts not interacting
with the replication manager server. This can lead to the following:
• Jobs that interact with an impacted enabled host fail to validate or fail while running.
• CLUI commands that interact with an impacted enabled host fail.
• Enabled host and host volume information is not updated in the server.
VM servers
VM servers are included with the enabled host resources in HP P6000 Replication Solutions
Manager. However, VM servers differ from enabled hosts in some important respects:
• An HP P6000 Replication Solutions Manager host agent is not installed on the VM server itself. HP P6000 Replication Solutions Manager recognizes a VM server but does not interact with it directly to perform replication tasks.
• The operating systems running on the VM server are added to HP P6000 Replication Solutions Manager as guest OS-enabled hosts. This distinguishes them from traditional enabled hosts.
• The appropriate HP P6000 Replication Solutions Manager host agents are installed on each guest OS on the VM server.
5 Host volumes
Working with host volumes
About host volume resources
The Host Volumes content pane displays host volumes that have been discovered by the replication
manager. See GUI window Content pane.
Host volumes are an enabled host's identification of a storage device. See Host volumes overview.
Views
• Tabular list views. See Host volumes list view: Host Volumes, Host Volume Groups, Host Disk Devices, Dynamic Capacity Volumes, and Replica Repositories.
• Graphical tree views. See Host/Replicable Components tree view, Host/Host Volume/Component tree view, and Host/Devices/Partitions tree view.
Actions
• Actions in the GUI. See Host volumes actions summary.
• You can also interact with host volumes from a job and the CLUI. See Host volumes actions cross reference.
Properties
• Properties displayed in the GUI. See Host volumes properties summary.
• You can also display properties from the CLUI. See the CLUI command Show Host_Volume.
Host volume actions summary
The following actions are available on the Host Volume content pane. Some actions have equivalent
job commands or CLUI commands. See Host volumes actions cross reference.
Host volumes
• View Properties. View the properties of a host volume. Procedure.
• Mount. Mount a host volume on an enabled host. Procedure.
• Unmount. Unmount a host volume from an enabled host. Procedure.
• Add to Managed Set. Add a host volume to a managed set. Procedure.
• Remove from Managed Set. Remove a host volume from a managed set. Procedure.
• New DR Group. Create a DR group pair by specifying a host volume. Procedure.
• Instant Restore. Restore a host volume from one of its replicas. Procedure.
• New Container. Create (preallocate) a managed set of virtual disk containers based on a host volume's size. Procedure.
• Replicate. Create a copy of a host volume. Procedure.
• Extend Capacity. Increase the capacity of the host volume. Procedure.
• Shrink Capacity. Decrease the capacity of the host volume. Procedure.
• Set Dynamic Capacity Policy. Set a policy to automatically extend or shrink a host volume. Procedure.
• Edit Dynamic Capacity Policy. Change an existing dynamic capacity policy. Procedure.
• Remove Dynamic Capacity Policy. Remove an existing dynamic capacity policy. Procedure.
• Disable Policy. Disable a dynamic capacity policy for multiple host volumes. Procedure.
• Enable Policy. Enable a dynamic capacity policy for multiple host volumes. Procedure.
• Flush Cache. Flush the file system cache of a host volume. Procedure.
• Analyze Capacity Utilization. Configure and view the host volume capacity utilization reports. Procedure.
• Enable Capacity Utilization Analysis. Enable capacity utilization analysis for a host volume. Procedure.
• Disable Capacity Utilization Analysis. Disable capacity utilization analysis for a host volume. Procedure.
Host volume groups
• View Properties. View the properties of a host volume group. Procedure.
• Instant Restore. Restore a host volume group from one of its replicas. Procedure.
• New Container. Create (preallocate) a managed set of virtual disk containers based on a host volume group's size. Procedure.
• Replicate. Create a copy of a host volume group. Procedure.
• Flush Cache. Flush the file system caches of the logical volumes in a volume group. Procedure.
Host disk devices
• View Properties. View the properties of a raw host disk device. Procedure.
• Instant Restore. Restore a raw host disk device from one of its replicas. Procedure.
• New Container. Create (preallocate) a managed set of virtual disk containers for a host disk device. Procedure.
• Replicate. Create a copy of a raw host disk device. Procedure.
Dynamic capacity volumes
• View Properties. View the properties of a host volume. Procedure.
• Mount. Mount a host volume on an enabled host. Procedure.
• Unmount. Unmount a host volume from an enabled host. Procedure.
• Add to Managed Set. Add a host volume to a managed set. Procedure.
• Remove from Managed Set. Remove a host volume from a managed set. Procedure.
• New DR Group. Create a DR group pair by specifying a host volume. Procedure.
• Instant Restore. Restore a host volume from one of its replicas. Procedure.
• New Container. Create (preallocate) a managed set of virtual disk containers based on a host volume's size. Procedure.
• Replicate. Create a copy of a host volume. Procedure.
• Extend Capacity. Increase the capacity of the host volume. Procedure.
• Shrink Capacity. Decrease the capacity of the host volume. Procedure.
• Edit Dynamic Capacity Policy. Change an existing dynamic capacity policy. Procedure.
• Remove Dynamic Capacity Policy. Remove an existing dynamic capacity policy. Procedure.
• Disable Policy. Disable a dynamic capacity policy for multiple host volumes. Procedure.
• Enable Policy. Enable a dynamic capacity policy for multiple host volumes. Procedure.
• Flush Cache. Flush the file system cache of a host volume. Procedure.
Replicas
• Edit Properties. Change the properties of a host volume replica. Procedure.
• View Properties. View the properties of a host volume replica. Procedure.
• Delete. Delete a host volume replica. Procedure.
• Cancel Round Robin. Cancel (remove) a replica from a round robin rotation. Procedure.
Host volume actions cross reference
You can work with host volumes and raw disk devices using GUI actions, jobs and CLUI commands.
This table provides a cross reference for performing typical tasks.
NOTE: There are no jobs or CLUI commands for DC-Management tasks.
Create host volume resources

GUI action                                          Job template or command               CLUI command
Host Volumes > Host Volumes > New Container         CreateContainersForHostVolume         -
Host Volumes > Host Volume Groups > New Container   CreateContainersForHostVolumeGroup    -
Virtual Disks > Add Presentations                   PresentStorageVolume                  Set virtual disk
Host Volumes > Host Disk Devices > New Container    CreateContainerForHostDiskDevice      -
-                                                   CreateDiskDevice                      -
-                                                   CreateHostVolume                      -
-                                                   CreateHostVolumeGroup                 -
-                                                   CreateHostVolumeDiscrete              -
-                                                   CreateReplicaRepository               -
-                                                   DiscoverDiskDevice                    -

Delete host volume resources

GUI action                                          Job template or command               CLUI command
Host Volumes > Replicas > Delete                    DeleteReplicaRepository               -
Virtual Disks > Remove Presentations                UnpresentStorageVolume                Set virtual disk
-                                                   DeleteHostVolume                      -
-                                                   RemoveDiskDevice                      -

Edit host volume resource properties

GUI action                                          Job template or command               CLUI command
Host Volumes > Replicas > Edit Properties           -                                     -
Instant Restore host volume resources

GUI action                                              Job template or command                 CLUI command
Host Volumes > Host Disk Devices > Instant Restore      -                                       -
Host Volumes > Host Volumes > Instant Restore           -                                       -
Host Volumes > Host Volume Groups > Instant Restore     -                                       -

Manage sets of resources

GUI action                                              Job template or command                 CLUI command
Host Volumes > Host Volumes > Add To Managed Set        -                                       Set Managed_Set
Host Volumes > Host Volumes > Remove From Managed Set   -                                       Set Managed_Set

Mount host volume resources

GUI action                                              Job template or command                 CLUI command
Host Volumes > Host Volumes > Mount                     MountHostVolume                         Set Host_Agent
-                                                       MountEntireVolumeGroup                  -
-                                                       MountVolumeGroupComponent               -

Other host volume resource tasks

GUI action                                              Job template or command                 CLUI command
Host Volumes > Host Volumes > Flush Cache               FlushCache                              -
Host Volumes > Host Volume Groups > Flush Cache         -                                       -
Host Volumes > Replicas > Cancel Round Robin            -                                       -
Host Volumes > Replicas > Edit Properties               -                                       -
-                                                       SetHostDiskDeviceWriteCacheMode         -
-                                                       SetHostVolumeGroupWriteCacheMode        -
-                                                       SetHostVolumeWriteCacheMode             -
-                                                       WaitForHostDiskDeviceWriteCacheFlush    -
-                                                       WaitForHostVolumeGroupWriteCacheFlush   -
-                                                       WaitHostDiskDeviceNormalization         -
Replicate host volume resources

GUI action                                              Job template or command                 CLUI command
Host Volumes > Host Disk Devices > Replicate (Wizard for snapclones, snapshots and fractured mirrorclones)   Multiple job commands are required to provide equivalent functions. See commands below.   -
Host Volumes > Host Volumes > Replicate (Wizard for snapclones, snapshots and fractured mirrorclones)        Multiple job commands are required to provide equivalent functions. See commands below.   -
Host Volumes > Host Volume Groups > Replicate (Wizard for snapclones, snapshots and fractured mirrorclones)  Multiple job commands are required to provide equivalent functions. See commands below.   -
Host Volumes > Host Volumes > New DR Group (remote replication)                                               CreateDrGroupFromHostVolume                                                                Add DR_Group
-                                                       FractureHostDiskDeviceMirrorclone                    -
-                                                       FractureHostVolumeGroupMirrorclones                  -
-                                                       MirrorcloneHostDiskDeviceToContainer                 -
-                                                       MirrorcloneHostDiskDeviceToContainerInManagedSet     -
-                                                       RetainLatestRoundRobinReplicasForHostDiskDevice      -
-                                                       RetainLatestRoundRobinReplicasForHostVolume          -
-                                                       RetainLatestRoundRobinReplicasForHostVolumeGroup     -
-                                                       SnapcloneHostDiskDeviceToContainerInManagedSet       -
-                                                       SnapshotHostDiskDeviceToContainerInManagedSet        -
-                                                       SnapcloneHostVolume                                  -
-                                                       SnapcloneHostVolumes                                 -
-                                                       SnapcloneHostVolumeGroup                             -
-                                                       SnapcloneHostVolumeGroups                            -
-                                                       SnapshotHostVolumeGroupToContainersInManagedSet      -
-                                                       SnapshotHostVolume                                   -
-                                                       SnapshotHostVolumeGroup                              -
-                                                       SnapcloneDiskDevice (raw disks)                      -
-                                                       SnapshotDiskDevice (raw disks)                       -
-                                                       Replicate host volumes (template)                    -
-                                                       Replicate host volumes, mount to a host (template)   -
-                                                       Replicate host volumes, mount to a host, then to a different host (template)   -
-                                                       Replicate host volume, mount component to a host (template)                    -
-                                                       Replicate host volume group, mount components to a host (template)             -
-                                                       Replicate host volume group, mount entire group to a host (template)           -
Unmount host volume resources

GUI action                                              Job template or command                    CLUI command
Host Volumes > Host Volumes > Unmount                   UnmountHostVolume                          Set Host_Agent
-                                                       UnmountEntireVolumeGroup                   -
-                                                       Unmount existing host volumes (template)   -

Validate host volume resources

GUI action                                              Job template or command                    CLUI command
-                                                       ValidateHostVolume                         -
-                                                       ValidateHostVolumeGroup                    -
-                                                       ValidateSnapcloneHostVolume                -
-                                                       ValidateSnapcloneHostVolumeGroup           -
-                                                       ValidateSnapshotHostVolume                 -
-                                                       ValidateSnapshotHostVolumeGroup            -

View host volume resources

GUI action                                              Job template or command                    CLUI command
Host Volumes > Host Disk Devices > View Properties      -                                          -
Host Volumes > Host Volumes > View Properties           -                                          Show Host_Volume
Host Volumes > Host Volume Groups > View Properties     -                                          -
Host Volumes > Replicas > View Properties               -                                          -
Host volume properties summary
Help for host volumes properties is available on the following properties windows.
• Host Volumes. See Host Volume Properties window General tab, Structure tab, Membership tab, Extend Policy tab, and Shrink Policy tab.
• Host Volume Groups. See host volume group properties window General tab and Structure tab.
• Host Disk Devices. See Disk Device Properties window General tab.
• Replica Repositories. See Replica Repository properties window General tab.
• Dynamic Capacity Volumes. See Host Volume Properties window General tab, Structure tab, Membership tab, Extend Policy tab, and Shrink Policy tab.
See also Viewing host volume properties.
Host volume views
See the following examples: List view, Host/Replicable Components tree view, Host/Host
Volume/Component tree view, and Host/Devices/Partitions tree view.
List view
List views show host volumes, host volume replicas, host volume groups, dynamic capacity volumes,
and host disk devices.
Adding host volumes to a managed set
Add host volumes to a managed set.
Considerations
• You can use the GUI or the CLUI. See Host volumes actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes. The content pane displays host volumes.
2. Click the List tab.
3. Select the host volumes to add to a managed set.
4. Select Actions > Add to Managed Set.
   The Create New Managed Set window opens.
5. Follow the instructions in the window.
Cancelling (removing) replicas from round robin rotation
Remove a host volume replica from rotation in a round robin job.
Considerations
• You can only use the GUI to remove a host volume replica from its job rotation.
• Once removed, a host volume replica cannot be placed back into a job rotation.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Replica Repository list tab, select the replica to remove from round robin rotation.
3. Select Actions > Cancel Round Robin.
   The replica is removed from rotation.
Creating a DR group pair (from host volume)
Create a DR group pair (source and destination) by specifying the source host volume. See DR
group pair.
Considerations
• You can use the GUI, jobs, or the CLUI. See Host volumes actions cross reference.
• Guidelines apply. See Remote replication guidelines.
Procedure
This procedure uses the GUI.
DR group procedure
1. In the navigation pane, select Host Volumes.
2. On the List tab, select Actions > New DR Group.
   The Replicate wizard opens.
3. Follow the instructions in the wizard.
Creating a managed set for a host disk device container
Create a storage container corresponding to the virtual disk that underlies a host disk device, and
add it to the specified managed set. See Containers and Managed sets.
Considerations
• You can use the GUI or jobs to create host disk device containers. See Host volume actions cross reference.
• You cannot create a container for an internal disk or a disk that is not part of an HP Enterprise Virtual Array.
• Although you can create a container for all array host disk devices, only raw disk devices can be replicated.
• See also Creating a managed set of containers for host volume groups.
• See also Creating a managed set of containers for host volumes.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. On the Host Disk Devices List tab, select the disk device for which you want to create a
container.
3. Select Actions > New Container.
The Create Container window opens.
4. Follow the instructions in the window.
Creating a managed set of containers for host volumes
Create a managed set of storage containers that correspond to the virtual disks that underlie a
host volume. See Containers and Managed sets.
Considerations
• You can use the GUI or jobs to create the containers. See Host volume actions cross reference.
• See also Creating a managed set of containers for host volume groups.
• See also Creating a managed set of containers for host disk devices.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes List tab, select the host volume for which you want to create containers.
3. Select Actions > New Container.
   The Create Container for Host Volume window opens.
4. Follow the instructions in the window.
Creating a managed set of containers for host volume groups
Create a managed set of storage containers that correspond to the virtual disks that underlie a
host volume group. See Containers and Managed sets.
Considerations
• You can use the GUI or jobs to create the containers. See Host volume actions cross reference.
• See also Creating a managed set of containers for host volumes.
• See also Creating a managed set of containers for host disk devices.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. On the Host Volume Groups List tab, select the host volume group for which you want to create containers.
3. Select Actions > New Container.
   The Create Container window opens.
4. Follow the instructions in the window.
Creating host volumes
Create host volumes on an enabled host.
Considerations
• You can use the GUI, jobs, or the CLUI to present the virtual disk to the enabled host.
• You can use a job with create-host-volume style commands. See Host volumes actions cross reference.
Procedures
Creating a host volume using a GUI presentation action
1. In the navigation pane, select Virtual Disks.
   The content pane displays virtual disks.
2. Click the List tab.
3. Select the unpresented virtual disk that is to be the basis for a host volume.
4. Present the virtual disk to the enabled host. Procedure.
   A host volume is created on the enabled host.
5. To identify the host volume:
   1. Navigate to the Host Volumes content pane and select the Host/Host Volumes/Components tree view.
   2. In the Tree view, locate the enabled host and expand the tree to the component level.
      The new host volume is the one whose component is the virtual disk that you specified in the presentation action.
Creating a host volume using job commands
1. Create a job and include the appropriate host volume commands. See CreateHostVolume, CreateHostVolumeGroup, CreateHostVolumeDiscrete, and PresentStorageVolume job commands. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
2. Run the job.
   The host volume is created on the enabled host.
Creating local replicas
Create local replicas (snapclones, snapshots, or mirrorclone copies) of the virtual disks that underlie
a host volume, a host volume group, or a host disk device. See Snapclones, Snapshots, and
Mirrorclones.
Considerations
• You can use the GUI or jobs to create the copies. See Host volume actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Click the tab for host volumes, host volume groups, or host disk devices.
3. Select one or more host resources to replicate. To make a fractured mirrorclone replica, you must select a resource which has an underlying virtual disk with a mirrorclone.
4. Select Actions > Replicate.
   The local replication wizard opens.
5. Follow the instructions in the wizard.
Creating round-robin replicas
Create and schedule a job to produce replicas in a round robin rotation.
Considerations
• Round robin features for host volumes are only available in the GUI replication wizard.
• To ensure proper operation when using the round robin feature with the replicate option, ensure that all replicas created using the round robin policy are not presented to hosts. If any replicas are presented to a host when the job runs, the resulting replicas will not be created in the correct order and more than the specified number of replicas may be created.
Procedure
1. In the navigation pane, select Host Volumes.
2. Click the tab for host volumes, host volume groups, or host disk devices.
3. Select the host resource to replicate.
4. Select Actions > Replicate.
   The local replication wizard opens. Complete pages 1 and 2 as desired.
5. On page 3, Replication Options, select Use Round Robin Replicas, and select the number of replicas to keep in the rotation.
6. On wizard page 4, select an option that saves the wizard results as a job and enter a job name.
7. Click Finish to close the wizard and save your round robin instructions as a job.
8. Manually run the round-robin job when needed, or schedule it to run automatically.
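The retention behavior of a round robin rotation can be pictured as follows. This is a conceptual sketch only (not product code): each scheduled run adds a replica, and only the most recent replicas, up to the number you selected, remain in the rotation.

# Conceptual sketch of round robin retention (not product code).
from collections import deque

def round_robin_run(rotation, new_replica, keep):
    """Add the new replica and retire the oldest entries beyond the rotation size."""
    rotation.append(new_replica)
    while len(rotation) > keep:
        rotation.popleft()        # oldest replica leaves the rotation
    return rotation

rotation = deque()
for run in range(1, 5):           # four scheduled runs, keeping three replicas
    round_robin_run(rotation, "replica-%d" % run, keep=3)
print(list(rotation))             # ['replica-2', 'replica-3', 'replica-4']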
Deleting replicas
Delete a host volume replica from the replica repository. Optionally, delete the virtual disks that
underlie the replica.
Considerations
• You can only use the GUI to delete a replica from the repository.
• The host volume replica repository contains only replicas that were created using the local replication wizard.
• Deleting host volume replicas that were not created using the local replication wizard requires a different procedure. See Deleting host volume resources.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Replica Repository tab, select the replica to delete.
3. Select Actions > Delete.
   The Delete Replica Repository window opens.
4. Follow the instructions in the window.
Deleting host volumes, host volume groups, and host disk devices
Delete a host volume, host volume group or host disk device on an enabled host.
Considerations
• You can use the GUI, jobs, or the CLUI. See Host volumes actions cross reference.
• See also Deleting host volume replicas.
• Coordinating with host I/O
  CAUTION: Unpresenting the virtual disks that underlie a host volume resource can result in loss of data if not coordinated with the host's I/O.
Procedures
Deleting host volume resources using a GUI presentation action
1. In the navigation pane, select Virtual Disks.
   The content pane displays virtual disks.
2. Click the List tab.
3. Select the presented virtual disks that underlie the host volume.
4. Unpresent the virtual disks from the enabled host. Procedure.
   The host volume resource is deleted from the enabled host.
Deleting host volumes using job commands
1. Create a job and include the appropriate host volume commands. See DeleteHostVolume and UnpresentStorageVolume. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
2. Run the job.
   The host volume is deleted from the enabled host.
Editing replica properties
Edit the properties of a replica in the replica repository.
Considerations
• You can only use the GUI to edit the properties of a replica in the replica repository.
• The host volume replica repository contains only replicas that were created using the local replication wizard.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Replica Repository tab, select the replica to edit.
3. Select Actions > Edit Properties.
   The Editing Replica Repository window opens.
4. Follow the instructions in the window.
Flushing the file system cache of host volumes and host volume groups
Flush the file system cache for a host volume or host volume group.
Considerations
• You can use the GUI or jobs to flush a host's file system cache.
• Flushing a host cache does not flush the underlying virtual disks' write caches on the storage array.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Select the tab for host volumes or host volume groups.
3. Select the host resource whose cache you want to flush.
4. Select Actions > Flush Cache.
   A confirmation window opens.
5. Click OK to confirm the action.
Viewing a host volume capacity utilization report
View the capacity utilization report for a host volume.
Considerations
• You can use the GUI to view a capacity utilization report.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Select the tab for host volumes.
3. Select the host volume on which you want to perform a capacity utilization analysis.
4. Select Actions > Analyze Capacity Utilization.
   The Analyze Capacity Utilization window is displayed. Follow the instructions in the window.
Enabling host volume capacity utilization analysis
Enable capacity utilization analysis for a host volume.
Considerations
• You can use the GUI to enable capacity utilization analysis.
• Capacity utilization can be enabled on multiple host volumes simultaneously.
• Although the capacity utilization tool can be used on any host volume, it is primarily intended for use with host volumes that support DC-Management. This enables you to use the data from the capacity analysis reports to create the appropriate DC-Management expand and shrink policies for the host volume. For more information, see Page 140.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Select the tab for host volumes.
3. Select the host volume on which you want to enable capacity utilization analysis.
4. Select Actions > Enable Capacity Utilization Analysis.
5. Click OK to confirm the action.
Disabling host volume capacity utilization analysis
Disable capacity utilization analysis for a host volume.
Considerations
• You can use the GUI to disable capacity utilization analysis.
• Capacity utilization can be disabled on multiple host volumes simultaneously.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Select the tab for host volumes.
3. Select the host volume on which you want to disable capacity utilization analysis.
4. Select Actions > Disable Capacity Utilization Analysis.
5. Click OK to confirm the action.
Mounting host volumes (assigning a drive letter)
Mount a host volume on an enabled host.
Considerations
• You can use the GUI, jobs, or the CLUI. See Host volumes actions cross reference.
• The mount point or drive letter cannot already be in use.
• You must enter the mount point or drive letter using the OS-specific format required by the enabled host.
• Mounting restrictions can apply to file system types, cluster types, and raw devices.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
   The content pane displays host volumes.
2. Click the List tab.
3. Select the host volume you want to mount.
4. Select Actions > Mount.
   The Mount Host Volume window opens.
5. Follow the instructions in the window.
Removing host volumes from a managed set
Remove host volumes from a managed set.
Considerations
• You can use the GUI or the CLUI. See Host volumes actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
   The content pane displays host volumes.
2. Click the List tab.
3. Select the host volumes to remove from a managed set.
4. Select Actions > Remove From Managed Set.
   The Select Managed Sets window opens.
5. Follow the instructions in the window.
Restoring host volumes (Instant Restore)
Restore a host volume or host volume group by restoring the virtual disks that underlie it.
Considerations
• You can use the GUI or jobs to restore a host resource. See Host volume actions cross reference.
• The host volume to restore must already have a replica (fractured mirrorclone, snapclone, or snapshot) that was created by the replication manager. Only those replicas can be selected in the Instant Restore wizard.
• To ensure a Windows host can see the restored data, after the wizard completes, unmount the host volume, unpresent the virtual disk underlying the host volume, and present it back with the same mount point.
• Before using the following procedure, ensure there is no host I/O to the host volume to restore or its replica.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
2. Select the tab for host volumes or host volume groups.
3. Select the host resource to restore.
4. Select Actions > Instant Restore.
   The Instant Restore wizard opens.
5. Follow the instructions in the wizard.
Unmounting host volumes (removing a drive letter)
Unmount a host volume from an enabled host.
Considerations
• You can use the GUI, jobs, or the CLUI. See Host volumes actions cross reference.
• Look for running jobs and scripts.
  CAUTION: Ensure that no jobs or scripts are running that involve the mounted volume. Unmounting a volume while a job or script is running can cause job or script failures.
• Verify impacted jobs and scripts.
  CAUTION: After the volume is unmounted, jobs or scripts that involve the volume can fail or not run correctly.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
   The content pane displays host volumes.
2. Click the List tab.
3. Select the host volume to unmount.
4. Select Actions > Unmount.
   The Unmount Host Volume window appears.
5. Follow the instructions in the window.
Using snapclones
The following job commands are available when using snapclones with host volumes. For more
information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Job commands for snapclones of host volumes
SnapcloneHostVolume
SnapcloneHostVolumeToContainers
SnapcloneHostVolumeToContainersInManagedSet
SnapcloneHostVolumeGroup
ValidateSnapcloneHostVolume
ValidateSnapcloneHostVolumeGroup
Using snapshots
The following job commands are available when using snapshots with host volumes. For more
information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Job commands for snapshots of host volumes
SnapshotHostVolume
SnapshotHostVolumeGroup
ValidateSnapshotHostVolume
SnapshotHostVolumeToContainers
SnapshotHostVolumeToContainersInManagedSet
ValidateSnapshotHostVolumeGroup
Using logical volumes and volume groups
You can present storage volumes (virtual disks) to enabled hosts and subsequently use a host's
logical volume manager to create and manage host volume groups and logical volumes.
The following job commands are available when using logical volumes and volume groups. For
more information on jobs, see the HP P6000 Replication Solutions Manager Job Command
Reference.
Job commands for volume groups and logical volumes
CreateHostVolumeGroup
MountEntireVolumeGroup
MountVolumeGroupComponent
SnapcloneHostVolumeGroup
SnapshotHostVolumeGroup
UnmountEntireVolumeGroup
ValidateHostVolumeGroup
ValidateSnapcloneHostVolumeGroup
ValidateSnapshotHostVolumeGroup
WaitVolumeGroupNormalization
Using raw disks
The replication manager discovers raw host volumes (raw disks) on enabled hosts. In general, you
can work with raw host volumes just as you would with other host volumes, except that you should
not use replication manager mounting features to attempt to mount them. You can also work directly
with the raw storage volumes (virtual disks) that underlie raw host volumes.
Raw disks are listed in the Host Disk Devices tab along with disk devices that contain file systems.
Raw disks are not displayed as host volumes. Raw disks can be replicated from the Host Disk
Devices tab by creating a container for the disk device.
Job templates and commands for raw disks
The following job templates can be used with raw host volumes and storage volumes. For more
information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Job templates for raw disks
Replicate host disk (raw) devices, mount (raw) to a host
Replicate host volume group, mount components to a host
Replicate raw storage volumes, mount (raw) to a host
Most job templates which include mounting steps can be adapted for use with raw host volumes
by removing the mounting steps.
The following job commands are typically used with raw disks. For more information on jobs, see
the HP P6000 Replication Solutions Manager Job Command Reference.
Job commands for raw disks
CreateDiskDevice
DiscoverDiskDevicesForDrGroup
RemoveDiskDevice
SnapcloneDiskDevice
SnapshotDiskDevice
Viewing host volume resources
Display list and tree views for host volumes, host volume groups, host disk devices and replica
repositories. See Host volumes views.
Considerations
• You can use the GUI or CLUI to display lists. See Host volumes actions cross reference.
• Tree views are available only in the GUI.
• Tru64 UNIX clusters. Only host volumes on the enabled host in a cluster are displayed in the replication manager GUI. Host volumes on other hosts in the cluster (that do not have a replication manager host agent) are not displayed.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
   The content pane displays host volume resources.
2. Click the appropriate List tab.
   A tabular list of host resources is displayed.
3. Click the Tree tab.
   A graphical tree of host volumes is displayed.
4. Click View to select another tree view.
Viewing host volume resource properties
View the properties of a specific host volume, host volume group, host disk device, or replica. See
Host volumes properties summary.
Considerations
• You can use the GUI or the CLUI. See Host volumes actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Host Volumes.
   The content pane displays host volume resources.
2. Click the appropriate List tab.
3. Select the resource to view.
4. Select Actions > View Properties.
   The Host Volume Properties window opens.
5. Click the properties tabs.
Extending host volume capacity
Increase the size of a host volume.
Considerations
• You can only use the GUI to extend a host volume.
• You can extend the file system or the underlying array virtual disk.
• You can extend a host volume until its size reaches that of the underlying virtual disk. At that point, it is necessary to extend the size of the virtual disk to provide additional capacity to extend the file system.
• You can execute custom commands or scripts when extending the host volume.
• Make sure the installed DC-Management license supports the desired extend capacity.
• When you enter a New Size for a virtual disk, the File System New Size automatically updates to the same value.
• Entering a File System New Size has no effect on the virtual disk New Size.
• If you increase the File System New Size, the VDisk New Size will not change.
• You can use the Capacity Utilized value to determine the percent of capacity currently being used.
• Use care when manually extending a host volume on which you have set a dynamic policy. See Using DC-Management with replication for more information.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes tab, select the host volume that you want to extend.
3. Select Actions > Extend Capacity.
   The Host Volume Extend window opens.
4. Follow the instructions in the window.
Shrinking host volume capacity
Decrease the size of a host volume.
Considerations
• You can only use the GUI to shrink a host volume.
• You can shrink the file system or the underlying array virtual disk.
• The minimum size to which you can shrink a host volume is determined by the amount of data
  on the host volume. You cannot make a host volume too small to accommodate the data on it.
• The minimum virtual disk size depends on the file system size.
• You can execute custom commands or scripts when shrinking the host volume.
• When you enter a File System New Size, the virtual disk New Size automatically updates to
  the same value.
• Entering a New Size for a virtual disk has no effect on the File System New Size.
• You can use the Capacity Utilized value to determine the percent of capacity currently being
  used.
• Use care when manually shrinking a host volume on which you have set a dynamic policy.
  For more information, see Using DC-Management with replication.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes tab, select the host volume that you want to shrink.
3. Select Actions > Shrink capacity.
   The Host Volume Shrink window opens.
4. Follow the instructions in the window.
Setting a dynamic capacity policy
Set a policy to automatically extend or shrink a host volume.
Considerations
• You can only use the GUI to set a dynamic capacity policy.
• A dynamic capacity policy can be applied simultaneously to multiple host volumes.
• An extend policy and a shrink policy can be implemented for the same host volume.
• When using an extend or shrink policy, the size of both the file system and the underlying
  virtual disk is changed when the specified threshold is reached.
• When selecting a policy enforcement period, be sure to take into account the current setting
  for the automatic storage refresh interval. See Refreshing resources (automatic). If the current
  refresh interval is longer than the policy enforcement period, the policy may not be triggered
  properly. For example, if the refresh interval is set to 6 hours and the extend policy enforcement
  period is set to 4 hours, the policy enforcement period lies between refresh periods. If the
  policy threshold is exceeded during the enforcement period, it will not be detected and the
  extend policy will not be triggered.
• If the set threshold is reached but the host volume cannot be resized, an event will be logged
  and displayed in the event panel. This can occur in the following situations:
  ◦ The threshold was reached during a period of time that falls outside of the specified policy
    enforcement period. Resize can only occur during the enforcement period.
  ◦ The specified maximum resize limit for the host volume has been reached.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes tab, select the host volume for which you want to set a dynamic capacity
   policy. Select multiple host volumes to apply the policy to all of them.
3. Select Actions > Set Dynamic Capacity Policy.
   The Set Dynamic Capacity Policy wizard opens.
4. Follow the instructions in the wizard.
Editing a dynamic capacity policy
Change an existing dynamic capacity policy.
Considerations
•
You can only use the GUI to edit a dynamic capacity policy.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes tab, select the host volume whose dynamic capacity policy you want to
   edit.
3. Select Actions > Edit Dynamic Capacity Policy.
   The Set Dynamic Capacity Policy wizard opens.
4. Follow the instructions in the wizard.
Removing a dynamic capacity policy
Remove an existing dynamic capacity policy.
Considerations
•
You can only use the GUI to remove a dynamic capacity policy.
•
If a dynamic capacity policy has been applied to multiple host volumes, it can be removed
simultaneously from all of them.
Procedure
1. In the navigation pane, select Host Volumes.
2. On the Host Volumes tab, select the host volume for which you want to remove a dynamic
   capacity policy. Select multiple host volumes to remove a policy from all of them.
3. Select Actions > Remove Dynamic Capacity Policy.
   The Remove Dynamic Capacity Policy window opens.
4. Follow the instructions in the window.
Disabling a dynamic capacity policy for multiple host volumes
Considerations
•
You can only use the GUI to disable a dynamic capacity policy.
•
If a dynamic capacity policy has been applied to multiple host volumes, it can be disabled
simultaneously from all of them.
•
If you select a host volume with an already disabled policy status, the disable policy action
is not available.
Procedure
1. In the navigation pane, select Host Volumes.
2. Click the Dynamic Capacity Volumes tab.
   An Enabled check box indicates the policy status.
3. Select the volumes whose policy you want to disable.
4. Right-click the window, and select Disable Policy.
Enabling a dynamic capacity policy for multiple host volumes
Considerations
• You can only use the GUI to enable a dynamic capacity policy.
• A dynamic capacity policy can be enabled simultaneously for multiple host volumes.
• If you select a host volume with an already enabled policy status, the enable policy action is
  not available.
Procedure
1. In the navigation pane, select Host Volumes.
2. Click the Dynamic Capacity Volumes tab.
   An Enabled check box indicates the policy status.
3. Select the volumes whose policy you want to enable.
4. Right-click the window, and select Enable Policy.
Host volume concepts
Host volumes overview
The replication manager uses the term host volumes in two ways:
• Broadly
• Narrowly
When used broadly, as in the phrase “select the host volumes content pane”, the term refers to all
types of host storage resources that are discovered by the replication manager on an enabled
host. These resources include the following major categories:
• Entire disks, partitions, slices, and logical volumes.
• Host volume groups. See Logical volumes and volume groups.
• Host disk devices. See Disk devices.
When used narrowly, as in the phrase “select the host volumes tab”, the term refers only to the
category of physical disks, partitions, slices, and logical volumes.
The properties of host volumes, host volume groups, and host disk devices are automatically
discovered by the replication manager and are maintained in the replication manager's database.
See Automatic discovery (refresh). An important aspect of host volumes is where the underlying
physical storage is located, either in the SAN or on the host proper.
SAN-based host volumes
With SAN-based host volumes, the underlying physical storage is in an HP storage system and is
organized as virtual disks. When a virtual disk is presented to a host, the host identifies the virtual
disk (LUN) as a specific host volume.
Underlying storage volume                          Host volume
(Array + Virtual disk + presentation = LUN)   <=>  (Enabled host + OS: host volume name, mount point (or raw))

Examples
ArrayA2 + Cats + presentation  <=>  HostA1 + AIX:         /dev/hd1             /home/cats
ArrayA2 + Cats + presentation  <=>  HostA2 + HP-UX:       /dev/dsk/c2t0d2      /users/cats
ArrayA2 + Cats + presentation  <=>  HostA3 + Linux:       /dev/sda3            /var/cats
ArrayA2 + Cats + presentation  <=>  HostA4 + Solaris:     /dev/rdsk/c2t0d5s2   /usr/cats
ArrayA2 + Cats + presentation  <=>  HostA5 + OpenVMS:     $1$DGA2:             CATS
ArrayA2 + Cats + presentation  <=>  HostA6 + Tru64 UNIX:  /dev/disk/disk100c   /users/cats
ArrayA2 + Cats + presentation  <=>  HostA7 + Windows:     Disk 3               E:\pets\cats

UNC format
\\array name\virtual disk name  <=>  \\host name\host volume name
See Resource names and UNC formats.
SAN-based host volumes are displayed in the Host Volumes content pane and can represent storage
with a file system, or raw storage. Storage with a file system can be mounted or unmounted. See
Host volumes views.
You can use a GUI action, a job, or CLUI command to mount and unmount SAN-based host
volumes. You can also locally replicate them using a job with the appropriate host volume
commands.
Non-SAN-based host volumes
Non-SAN-based host volumes, such as floppy drives, CDs, and internal hard drives, are also
discovered and displayed in the Host Volumes content pane. You cannot use the replication manager
to interact with these non-SAN-based host volumes.
Network volumes
Depending on the host OS, the replication manager may or may not discover network volumes on
an enabled host.
Host volumes FAQ
• With host volumes, what does the replicable property indicate?
  The replicable property indicates whether you can locally replicate a host volume. If snapclones
  or snapshots are indicated, you can replicate the host volume.
• How do I locally replicate a host volume?
  Host volumes are replicated by using a GUI action or a job with the appropriate host volume
  commands. You cannot locally replicate a host volume by using a CLUI command.
• What is the advantage in replicating a host volume rather than a storage volume (virtual disk)?
  The main advantage is simplicity. When you use a job to specify the host volume to replicate,
  the replication manager automatically determines the underlying storage systems and virtual
  disks that are involved and issues the required low level replication commands via the job.
• Why can't I locally replicate a specific host volume?
  When the replicable property is N/A or No, a host volume cannot be locally replicated. This
  occurs when the underlying physical storage for a host volume is not on an HP storage system
  or the storage system is not licensed for local replication. See Local replication licensing. In
  some cases, local replication is not possible because the underlying virtual disks do not comply
  with local replication guidelines. See virtual disk Snapclone guidelines and Snapshot guidelines.
Disk Devices
The replication manager uses the term host disk device to refer to a host's identification of storage
devices. With SAN-based storage resources, for each virtual disk that is presented to a host, there
is a corresponding entry (identification) in a host's low-level devices table. The replication manager
discovers these devices on enabled hosts and displays them in the Disk Devices tab on the Host
Volumes content pane. See Host volume views and Viewing host volumes.
File system types
The file system property indicates whether a host volume is formatted with a file system or not.
Examples of file system types:
AdvFS. Tru64 UNIX
ufs. AIX, HP-UX, Solaris, Tru64 UNIX
vxfs. HP-UX
NTFS. Windows
Files-11 (ODS-2, ODS-5). OpenVMS
ext2, ext3. Linux
If a host volume is not formatted with a file system, the property displays raw disk information. See
host volumes Raw disks.
Instant Restore
The instant restore feature allows you to restore data on a host volume or host volume group with
data from one of its previously created replicas (snapclone or snapshot). The operation is instant
because the restored data is available within seconds for host I/O (the actual data transfer occurs
in the background).
For example, assume that the database named sales_db becomes corrupt. You can instantly restore
it to a prior state from one of its replicas.
sales_db (volume being restored)  <======  RR-20070523... (replica to restore from)
Logical volumes and volume groups
Some enabled hosts use a logical volume manager (LVM) to organize storage into volume groups
and logical volumes. The replication manager discovers host volumes on enabled hosts if the
volumes were created with a supported OS and LVM.
The following description uses general terms that may differ from the terminology used in a specific
OS and LVM. See also Tru64 UNIX host volumes.
Volume groups
A host-based volume group is a named pool of storage that the host's LVM can access. When the
underlying physical storage is on an HP storage system, you can use the replication manager to
interact with a volume group. A volume group is not considered to be a component with which
hosts can perform I/O.
Logical volumes
An LVM organizes a host's volume group into smaller portions called logical volumes. When the
underlying physical storage is on an HP storage system, you can use the replication manager to
interact with a logical volume.
Each logical volume in a volume group is considered to be a component with which hosts can
perform I/O. Logical volumes can contain file systems or be raw storage. See Raw disks.
LUN
In each storage system, logical unit numbers (LUNs) are assigned to its virtual disks. When a virtual
disk is presented to hosts, the storage system and the hosts perform I/O by referencing the LUN.
At a low level, a host OS typically reports each storage device that it detects in the format
C#T#D#, where:
C#   Identifies a host I/O controller
T#   Identifies the target storage system on the controller
D#   Identifies the virtual disk (LUN) on the storage system
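As an illustration only (not part of RSM), the following Python sketch shows how a C#T#D# device name such as /dev/dsk/c2t0d2 breaks down into its controller, target, and LUN numbers; the function name and regular expression are hypothetical.

import re

def parse_ctd(device_name):
    """Split a C#T#D# style device name into controller, target, and disk
    (LUN) numbers. Illustration only; exact naming varies by host OS."""
    match = re.search(r"c(\d+)t(\d+)d(\d+)", device_name)
    if match is None:
        return None
    return tuple(int(part) for part in match.groups())

print(parse_ctd("/dev/dsk/c2t0d2"))  # (2, 0, 2)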
Automatic LUN assignment
•
When presenting a virtual disk to a host, enter zero (0) to allow the storage controller software
to automatically assign the LUN.
Mounting all logical volumes in a replicated volume group
The local replication wizard mounts all of the logical volumes in a replicated volume group, as
appropriate:
•
If a logical volume is mounted when the volume group is replicated, then the logical volume
replica will be mounted.
•
If a logical volume is not mounted when the volume group is replicated, the logical volume
replica will not be mounted.
•
When replicated, the original volume group must have at least one mounted logical volume.
If the volume group has no mounted logical volumes, or has only raw logical volumes, this
operation may fail when the wizard's job is run.
Mount point prefixes – general
•
When mounting to the same host as the original, you must include a mount point prefix.
•
When mounting to a different host than the original, a prefix is optional.
•
If you use the wizard to create multiple replicas of a volume group and you plan to mount the
replicas to the same host at the same time, you must use a unique mount point prefix for each
volume group replica.
Mount point prefixes – OS specifics
Mount points that you specify must comply with OS formats. For example, with most UNIX OSs,
specifying /backup would add /backup to mount points for all logical volumes in the volume
group.
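For illustration only, the following Python sketch shows how a /backup prefix would map the original mount points of the logical volumes in a replicated volume group; the function and the sample mount points are hypothetical, not RSM behavior.

def apply_prefix(mount_points, prefix="/backup"):
    """Prepend a mount point prefix to each logical volume's original
    mount point (UNIX-style paths assumed; illustration only)."""
    return [prefix.rstrip("/") + mp for mp in mount_points]

print(apply_prefix(["/users/cats", "/users/dogs"]))
# ['/backup/users/cats', '/backup/users/dogs']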
Linux. You cannot mount a volume group replica on the same host as the source volume group.
OpenVMS. OpenVMS does not support volume groups. This operation is not applicable to OpenVMS
volume sets.
Solaris. You cannot mount a volume group replica on the same host as the source volume group.
Tru64 UNIX.
Windows. Windows does not support volume groups.
Mount points (drive letters) and device names
The mount point property indicates whether a host volume is mounted in the host's file system or
not. Values are:
•
Mount point identifier. Identifies the location in the host's file system where a host volume is
attached. Mount points are displayed in OS-specific format.
•
Not mounted. Indicates that the host volume is not mounted in the host's file system.
See also host volumes Raw disks.
Linux and UNIX mount points and device names
Examples
Host OS        Device name            Mount point
AIX            /dev/dsk/hd1           /home/cats
HP-UX          /dev/dsk/c2t0d2        /users/cats
Linux          /dev/sda3              /var/cats
Solaris        /dev/dsk/c0t5d0s6      /usr/cats
Tru64 UNIX     /dev/disk/dsk100c      /users/cats
Windows mount points and device names
Examples (Host OS: Windows)
                   Device name    Mount point
Drive              Disk3          E:\
Drive & folders    Disk3          E:\pets\cats
In a Windows OS, mount points are typically called drive letters. In the drive example the host
volume is mounted as drive letter E:\. In the drive & folder example, the host volume is mounted
as drive letter E:\, in the folder \pets\cats.
OpenVMS mount points and device names
Examples (Host OS: OpenVMS)
Device name    Volume label    Mount point
$1$DGA2:       CATS_DB         CATS_DB
$1$DGA2:       PETS.CATS       PETS.CATS
Mount point names are based on OpenVMS volume labels.
Partitions and slices
In most OSs a single disk (host volume) is divided into logical parts called partitions. In some OSs,
partitions are called slices or disk sections.
For some replication manager job commands, you may need to enter a host volume's partition or
ID into a command argument. If necessary, see your host operating system documentation for
details on identifying partitions.
HP-UX disk sections
If you are a superuser for an HP-UX host, you can identify host volume disk sections by viewing
the /etc/mnttab file.
HP-UX supports up to 16 disk sections, numbered 0 through 15. Disk section number 2 refers to
the entire disk.
In the example below, the root directory is on controller 0, target 0, disk section 10, and the /users
drive is on controller 2, target 0, disk section 2 (entire disk).
/dev/dsk/c0t0d10 /      hfs rw 0 1 # root directory #7937
/dev/dsk/c2t0d2  /users hfs rw 0 1 # /users drive #7937
Linux partitions
If you are an administrator for a Linux host, you can identify host volume partition numbers using
utilities such as parted and fdisk -l.
Red Hat Linux partition examples
Partition ID    Partition    Remarks
1               /boot        boot, Linux native
2               /            root, Linux native
3               swap         Linux swap
4
5
6
7 and up
SUSE Linux Enterprise Server (SLES) partition examples
Partition ID
Partition
Remarks
/dev/sda1
/
root
/dev/sda2
swap
/dev/sda3
/var
/dev/sda4
extended partition
/dev/sda5
/home
/dev/sda6
/export
Solaris slices
If you are a superuser for a Solaris host, you can identify host volume slice numbers using the
format utility.
Solaris slice examples
Slice ID    Slice           Remarks
0           /               root
1           swap
2           -               entire disk
3           /export         alternative versions of the OS
4           /export/swap    swap space for client systems
5           /opt            applications
6           /usr            programs and libraries
7           /home           user files
Tru64 UNIX partitions
If you are a superuser of a Tru64 UNIX host, you can identify host volume partitions by using the
command disklabel <diskname>.
After a virtual disk is presented to a Tru64 UNIX host, you must execute the host command
disklabel –rwn <diskname> on the disk before the default partitions (a to h) will be available
for use.
Tru64 UNIX partition examples
Partition ID    Partition    Remarks
a               /            root (OS directories and files)
b               swap
c                            entire disk
d               swap
e                            unused
f                            unused
g               /usr         user programs and libraries
h                            unused
UNIX partitions
If you are a superuser for a UNIX host, you can identify host volume partitions by viewing the
/etc/fstab file.
UNIX partition examples
Partition    Remarks
/            root
swap
/opt         applications
/usr         user files
/home
Windows partitions
If you are an administrator for a Windows host, you can identify host volume partitions using Disk
Management.
Raw disks
When a host volume is not controlled and formatted by a host file system, it is termed a raw disk
or raw storage. Typically, the host's I/O with a raw disk is under the direct control of a database
or a similar application. Raw disks are typically used to improve I/O performance or facilitate the
creation of disk-image copies.
You can use the replication manager to work with raw disks. See host volumes Using raw disks.
Local replication wizard
The local replication wizard allows you to make point-in-time copies of a host volume, host volume
group, or host disk device. The wizard generates replication manager jobs and keeps track of
your replication actions in the replica repository. See Creating replicas and Replica repository.
You can restore a host volume from the replicas in the repository. See Restoring host volumes
(Instant Restore).
Replica repository
The replica repository feature supports use of the local replication wizard and the instant restore
wizard. Whenever the local replication wizard is used, replica information is saved in the repository.
Whenever the instant restore wizard is used, the wizard checks the repository to find relevant
replicas to restore from. At a low level, the replica repository contains a large amount of technical
information. Summary information is displayed in the Replica Repository tab and in the Replica
Repository Properties window.
Use of the replica repository
Generally, you do not need to directly perform actions on replicas. The replication manager
automatically uses the replica repository for many functions. When necessary, you can view, edit,
and delete replicas in the repository. You can also remove replicas from round robin rotation.
Replica names
The wizard assigns replica names based on the date and time when the replication is performed.
Names begin with RR (replica repository) and use this format.
Replica name format                          Example
RR-YYYYMMDD.HHMMSS.SSS                       RR-20070414.012616.753
YYYY=year, MM=month, DD=day                  2007, 04, 14
HH=hours, MM=minutes, SS.SSS=seconds         01, 26, 16.753
To determine if a replica was made in the AM or PM you can view its properties. See Viewing
host volume properties (replicas).
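For illustration only, the following Python sketch rebuilds the example name above from its timestamp; the function is hypothetical and assumes a 12-hour clock, because AM or PM is not encoded in the name.

from datetime import datetime

def replica_name(timestamp):
    """Build a repository-style name, RR-YYYYMMDD.HHMMSS.SSS, from a
    timestamp. Illustration only; the wizard assigns the actual names."""
    return timestamp.strftime("RR-%Y%m%d.%I%M%S.") + "%03d" % (timestamp.microsecond // 1000)

print(replica_name(datetime(2007, 4, 14, 1, 26, 16, 753000)))
# RR-20070414.012616.753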
Replica types
The replicas in the repository are classified by type. Values for the replica type property are:
•
Round robin. Indicates the replica is part of a round robin rotation (rolling backup).
•
Instant Restore. Indicates the replica is not part of a round robin rotation (rolling backup).
Has-replica property
The has-replica property (in the Replica Repository Properties window) indicates if a replica is
logically complete. For example, if one or more virtual disks underlie a host volume replica and
one or more virtual disks are deleted, the replica will still appear in the repository, but when its
properties are viewed, the has-replica property will be set to no. This property is used in the Instant
Restore wizard to ensure that you cannot accidentally restore from a logically incomplete replica.
Round robin replicas (wizard)
Round robin refers to repeatedly creating replicas of a host volume in a way that reuses the
underlying virtual disk resources. This reduces resource consumption and simplifies replica
management. A local replication wizard allows you to set up round robin replication. See Creating
round robin replicas.
For example, say that you use the wizard to create and save a round robin job that maintains
three replicas of a host volume. The first three times that you run a job instance, a new replica will
be created and added to the wizard's replica repository; and, each replica will have its own
underlying virtual disks. When you run a job instance four times or more, the oldest replica will
be deleted from the repository and a new replica added. However, no additional underlying virtual
disks will be required for the new replica.
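The following Python sketch (not RSM code) illustrates the rotation described above for a job that maintains three replicas; the function and replica names are hypothetical.

from collections import deque

def add_round_robin_replica(replicas, new_replica, max_replicas=3):
    """Once max_replicas exist, the oldest replica is dropped so its
    underlying virtual disk resources can be reused (illustration only)."""
    if len(replicas) >= max_replicas:
        replicas.popleft()  # a real job would also delete the oldest replica's virtual disks
    replicas.append(new_replica)
    return replicas

history = deque()
for run in range(1, 6):
    add_round_robin_replica(history, "RR-run-%d" % run)
print(list(history))  # ['RR-run-3', 'RR-run-4', 'RR-run-5']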
Snapclones (host volume)
Snapclone replication of a host volume instantly creates independent point-in-time copies of the
virtual disks that underlie a host volume. The copies are called snapclones.
The snapclone property indicates whether the host volume can be locally replicated using the
snapclone method. Values are:
•
Yes. All virtual disks that underlie the host volume comply with snapclone guidelines. Snapclone
replication can be performed.
•
No. One or more virtual disks that underlie the host volume do not comply with snapclone
guidelines. Snapclone replication cannot be performed.
See also virtual disks Snapclones, Snapclone FAQ and Snapclone guidelines.
Snapshots (host volume)
Snapshot replication of a host volume instantly creates virtual, point-in-time copies of the virtual
disks that underlie a host volume. The copies are called snapshots.
The snapshot property indicates whether the host volume can be locally replicated using the snapshot
method. Values are:
•
Yes. All virtual disks that underlie the host volume comply with snapshot guidelines. Snapshot
replication can be performed.
•
No. One or more virtual disks that underlie the host volume do not comply with snapshot
guidelines. Snapshot replication cannot be performed.
See also virtual disks Snapshots, Snapshot types, Snapshot FAQ and Snapshot guidelines.
Snapshot FAQ
•
How can I tell a snapshot from other types of virtual disks?
Because snapshots are not independent virtual disks, they are identified differently than original
virtual disks. See virtual disks Types.
•
How long does it take to create a snapshot?
A snapshot requires only a matter of seconds, no matter how large the original virtual disk.
•
If it is virtual, can a host write to a snapshot?
Yes. A snapshot is functionally equivalent to a physical disk with both read and write capability.
•
After I create a snapshot, can I delete the original virtual disk?
No. A snapshot always relies, at least in part, on the original (active) virtual disk for data. If
the original virtual disk is deleted, its associated snapshot becomes unusable. A snapshot
should be thought of as a temporary copy.
•
Can I make multiple snapshots of an original virtual disk?
Yes. However, there is a limit. See virtual disks Snapshot guidelines.
•
What is the maximum number of snapshots on a storage system?
There is no limit. However, the greater the number of snapshots, the longer it takes to shut
down the storage system during maintenance and upgrade activities.
•
Can I create a snapshot of a snapclone?
Yes.
•
Can I create a snapshot of a snapshot?
No.
Snapshot types (allocation policy)
Snapshot types (allocation policy) specifies how the storage system allocates space in a disk group
for a snapshot. Values are:
•
Demand allocated. The space reserved for the snapshot can automatically change from an
initial minimum amount, up to the full capacity of the original virtual disk.
•
Fully allocated. The space reserved for the snapshot is initially set to, and remains fixed at,
the full capacity of the source virtual disk.
When selecting a snapshot type, one consideration is the lifetime of the snapshot and the amount
of source data that will change during its lifetime.
Snapshot lifetime    Estimated changes in source data    Recommended snapshot type
Short                Less than 25%                       Demand allocated
Long                 25% or more                         Fully allocated
Demand-allocated snapshots
When a snapshot is demand allocated, the storage system allocates only enough space to store
metadata and pointers to the source data. As the source is overwritten, the array allocates more
space and copies the original data to the snapshot. If all the original data on the source is
overwritten, the controller increases the allocated space on the snapshot to the full size of the
source.
The size of the disk group in which the source and snapshot are located must be sufficient to handle
increases in snapshot size, whenever the increases might occur. Insufficient space in the disk group
can not only prevent the controller from increasing the space allocation, but it can also prevent
writes to both the source and snapshot.
Fully allocated snapshots
When a snapshot is fully allocated, the storage system allocates only enough space to store
metadata and pointers to the source data, but reserves space equal to the capacity of the source
virtual disk. As the source is overwritten, the array allocates more space and copies the original
data to the snapshot.
Once created, a fully allocated snapshot cannot run out of space.
Types (components)
The type property indicates the component structure of the host volume. This varies with the host
operating system and logical volume manager. Examples:
Device. All OSs.
Logical volume. AIX, HP-UX, Linux, Solaris, Tru64 UNIX.
Partition (slice). AIX, HP-UX, Linux, Solaris, Windows. See Partitions and slices.
Volume set, dynamic disk, spanned volume. Windows
Dynamic capacity management
Dynamic capacity management overview
DC-Management provides you with the capability to extend (increase) or shrink (decrease) the size
of a host volume without disrupting host I/O. This gives you greater control over the size of host
volumes as your system storage requirements change.
If a host volume is nearing capacity, you can extend the size of the volume to avoid the situation
that occurs when a host volume reaches its full capacity. If a host volume has too much capacity
allocated to it, you can shrink the size of the volume, freeing up the unused capacity for other
applications.
DC-Management can be implemented manually or automatically.
•
Manual DC-Management enables you to immediately change the volume size. The new size
remains in effect until you manually reset it again.
•
Automatic DC-Management policies enable you to specify the threshold at which you want
the size of the volume to be changed. RSM monitors the capacity of the host volume and
automatically changes the size when the specified threshold is reached. You can set e-mail
notification alerting you when a volume has been resized. You can also control the time during
which the policy is enforced.
NOTE: The policy threshold is checked against the current size of the host volume during
each automatic storage refresh cycle (default 30 minutes). Therefore, even if the current size
of the host volume exceeds the set threshold, the earliest the policy will be triggered is at the
next refresh cycle. You can also manually invoke a global refresh, which may trigger a policy
as well.
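The following Python sketch (not RSM's implementation) illustrates the kind of evaluation an automatic extend policy performs at each storage refresh cycle; the function and parameters are hypothetical.

def check_extend_policy(used_gb, size_gb, threshold_pct, increment_gb,
                        in_enforcement_period):
    """At each refresh cycle: if utilization has crossed the threshold and
    the enforcement period allows it, the host volume (file system and
    virtual disk together) grows by the configured increment."""
    utilization = 100.0 * used_gb / size_gb
    if utilization < threshold_pct or not in_enforcement_period:
        return size_gb                      # no change at this refresh
    return size_gb + increment_gb

print(check_extend_policy(85, 100, 80, 10, True))   # 110
print(check_extend_policy(85, 100, 80, 10, False))  # 100 (outside enforcement period)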
TIP: To help you evaluate your storage and use DC-Management effectively, you can analyze
the current host volume capacity utilization. For more information on enabling this feature, see
Page 119.
DC-Management operation
NOTE: During a resize operation, DC-Management communicates with both the RSM host agent
and HP P6000 Command View. It is necessary to have both of these components installed to
implement DC-Management on a host.
When resizing the host volume file system, DC-Management communicates with the RSM host
agent running on the enabled host. The Windows RSM host agent uses the Windows Virtual Disk
Service (VDS) to resize the file system.
When resizing a virtual disk, DC-Management communicates with HP P6000 Command View,
which processes virtual disk resize requests and passes them to the array, where the resize operation
is implemented.
Because it is communicating with both the array and the host operating system, DC-Management
can coordinate the interaction between the file system and the underlying virtual disk during a
resize operation. This ensures you do not change the size of one component without making the
necessary changes to the other as well. And you can make the necessary changes in a single step
using RSM.
Methods for resizing a host volume file system and virtual disk
In addition to DC-Management, you can resize a host volume from the operating system and from
HP P6000 Command View. The following table lists the various resize options available and
describes what you should consider when choosing a specific option.
Resize option: DC-Management
Operations available: Extend or shrink a host volume or the underlying virtual disk.
Comments: Provides full resizing capability and integration with the operating system. With a single
step you can resize the file system and the underlying virtual disk simultaneously, simplifying
capacity management.

Resize option: HP-UX operating system (without DC-Management)
Operations available:
• Non-LVM host volumes using the Veritas file system can be resized using the fsadm command
  if online JFS is available, and the extendfs command if online JFS is not available.
• LVM host volumes can be extended in two steps. First, the volume group and logical volume
  have to be extended using vgmodify and other LVM commands. Then the file system is extended
  using the fsadm command if online JFS is available, and the extendfs command if online JFS
  is not available.
See the following website for information on resizing disks:
http://docs.hp.com/en/5991-6481/ch03s04.html
Comments:
• HP P6000 Command View must be used to manually resize the underlying virtual disks before
  (extend) or after (shrink) resizing the file system. The storage stack on the host needs to be made
  aware of the change in the virtual disk's capacity.
• Before extending virtual disks on the array, care must be taken to ensure that the additional
  capacity can be used by the host file system. This can be a problem if the firmware version does
  not allow shrinking of the virtual disk to recover the storage. DC-Management prevents such
  accidental growth of virtual disks by validating the host volume structure and blocking the
  operation if necessary.
• Extra care must be taken to not shrink a host volume beyond the file system allocated boundaries.

Resize option: Windows operating system (without DC-Management)
Operations available: Extend (Windows 2003 and Windows Server 2008) or shrink (Windows
Server 2008) a host volume file system using Windows Disk Manager.
Comments:
• HP P6000 Command View must be used to manually resize the underlying virtual disks before
  (extend) or after (shrink) resizing the file system.
• Before extending virtual disks on the array, care must be taken to ensure that the additional
  capacity can be used by the host file system. This can be a problem if the firmware version does
  not allow shrinking of the virtual disk to recover the storage. DC-Management prevents such
  accidental growth of virtual disks by validating the host volume structure and blocking the
  operation if necessary.

Resize option: HP P6000 Command View (without DC-Management)
Operations available: Extend or shrink a virtual disk presented to a host.
Comments: Does not provide integration with the host operating system. You must resize the file
system manually.
CAUTION: Care must be taken to not shrink a virtual disk beyond the boundaries of usage by
either file systems or volume/partition managers on the host.
DC-Management support
The following table lists the general configuration requirements for using DC-Management.
Item                     Requirement
Host agent               An RSM host agent must be installed on a host to enable resizing of host volumes.
                         See below for host agents that support DC-Management.
Licensing                A DC-Management license must be purchased for each array on which you want to
                         use this feature.
File system              See the operating system requirements and considerations below for supported file systems.
Controller software      DC-Management is supported on all versions of controller software supported by RSM 4.0
                         and later. See Controller software features.
HP P6000 Command View    A supported version of HP Command View must be installed on the management server
                         to enable DC-Management.
Windows requirements and considerations
•
Windows 2003 and Windows Server 2008 host agents support DC-Management.
For information on using Windows Server 2008 with the array, see the Implementing Microsoft
Windows Server "Longhorn" RCO on HP StorageWorks Arrays White Paper available on the
following HP website:
http://www.hp.com/go/ws2008
•
Extending a host volume is supported on Windows 2003 and Windows Server 2008.
•
Shrinking a host volume is supported on Windows Server 2008, but only on basic disks and
dynamic simple volumes that are spanned or are mirrors. Striped or Vraid5 dynamic disks do
not support shrink.
•
Supported file systems include NTFS. MBR and GPT partitions are supported. Replication and
DC-Management of Windows dynamic disks and VERITAS Volume Manager are not supported.
HP-UX requirements and considerations
•
HP-UX 11.31 and 11.23 host agents support DC-Management. The following table identifies
the DC-Management features supported for different HP-UX configurations.
•
DC-Management policies are supported only for single virtual disk host volumes.
•
All offline resize operations must be performed using the manual DC-Management feature.
Automatic DC-Management policies can only be used to perform an online resize.
•
When taking a host volume offline to perform a resize operation, all other host volumes that
are part of same volume group must also be unmounted. Before performing a resize, stop I/O
on all host volumes and close all open files so that RSM can take the host volumes offline.
After completion of the resize operation, all host volumes will be mounted and you can resume
I/O.
•
Veritas file system (VxFS) is supported. HFS file system is not supported.
•
The following resize operations are not supported when using LVM:
◦
Extend logical volumes in native LVM Volume Groups with mirrored, striped, or contiguous
logical volumes.
◦
Extend logical volumes in native LVM Volume Group when VG is cluster aware (activated
in shared or exclusive mode).
◦
Extend logical volumes in native LVM Volume Group using PV that is not allocatable
(includes spare PVs and PVs in read-only mode).
Host volume configuration                        Extend support?                  Shrink support?
HP-UX 11.31
  LVM and Veritas online JFS                     Manual(1) (offline(2) only)      None
  Non-LVM and Veritas online JFS                 Policy(3) and manual (online)    Policy(3) and manual(1) (online)
  LVM and VxFS without online JFS                Manual(1) (offline(2) only)      None
  Non-LVM and VxFS without online JFS            Manual(1) (offline(2) only)      None
  VxVM                                           None                             None
  HFS filesystem                                 None                             None
HP-UX 11.23
  LVM and VxFS with or without online JFS        Manual(1, 4) (offline(2) only)   None
  Non-LVM and VxFS with or without online JFS    Manual(1) (offline(2) only)      Manual(1) (offline(2) only)
  VxVM                                           None                             None
  HFS filesystem                                 None                             None

1. When performing a manual extend or shrink, the operation is executed immediately. No policy is created when
   performing a manual operation.
2. If the host volume is not part of a volume group, taking it offline involves unmounting it. If the host volume is part of
   a volume group, taking it offline involves unmounting it and deactivating the volume group.
3. Policies are supported only for single virtual disk host volumes.
4. Requires installation of the vgmodify tool. This tool is part of the HP-UX patch PHCO_35524 for 11.23.
VMFS requirements and considerations
•
Only extend is supported with VMFS.
•
Support is provided only for host volumes on basic disks on VMFS volumes. Dynamic disks
are not supported.
•
Support is provided only for host volumes created on LUNs from the same storage system.
Selecting the proper dynamic capacity policy thresholds
When setting a dynamic policy, the following guidelines apply:
•
Use an extend threshold of 75% or greater.
•
Use a shrink threshold of 35% or less.
•
Use an increment value of approximately 10% of the current host volume size. For example,
if the current size is 50 GB use an increment value of 5 GB.
NOTE: Not following the above guidelines can result in a situation where the values used to set
one policy may prevent you from setting the opposite policy. For example, if inappropriate values
are used to set an extend policy, they may prevent you from setting a shrink policy on the same
host volume. In this situation, you will be informed that the shrink policy option is not available for
the selected host volume. If this situation occurs, reset the policy values using the above guidelines.
When setting a dynamic capacity policy threshold, you must be aware of the interaction between
the extend and shrink policy values. RSM will not allow you to select threshold values that could
result in cyclic resize—a condition in which the host volume goes into a loop resizing itself. If you
attempt to select a policy threshold that would result in cyclic resize, RSM alerts you and recommends
a policy threshold that will avoid the situation.
For example, assume you have a 1 GB host volume on which you have set a shrink policy threshold
of 10% and an extend policy threshold of 50%. The desired increment size is 5 GB. When the
host volume capacity reaches 0.5 GB (50% of 1 GB) the extend policy is triggered, extending the
host volume size to 6 GB (1 GB + 5 GB increment). This causes the host volume capacity to drop
to 8.33%, which invokes the shrink policy (<10%). This in turn causes the extend policy to be
retriggered. The host volume would now be in cyclic resize.
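The following Python sketch (not RSM's actual validation) illustrates the kind of check that catches such cyclic thresholds, using the numbers from the example above and from the guidelines; the function name is hypothetical.

def thresholds_would_cycle(size_gb, extend_pct, shrink_pct, increment_gb):
    """If an extend triggered at the extend threshold immediately drops
    utilization below the shrink threshold, the two policies would loop."""
    used_at_trigger = size_gb * extend_pct / 100.0                      # e.g. 0.5 GB
    utilization_after = 100.0 * used_at_trigger / (size_gb + increment_gb)
    return utilization_after < shrink_pct

print(thresholds_would_cycle(1, 50, 10, 5))   # True  -> the 1 GB example above cycles
print(thresholds_would_cycle(50, 80, 35, 5))  # False -> values within the guidelines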
Manually resizing a host volume on which you have set a policy should be done with care. The
manual resize will be performed, but interaction with the existing policy may cause undesired
results. For example, assume you manually extend a host volume on which a shrink policy has
been set. The manual extend will be implemented, but the larger size of the host volume may cause
the shrink policy to be triggered at the next refresh cycle, causing the size of the host volume to
be reduced.
Using DC-Management with replication
Guidelines for using DC-Management with HP P6000 Business Copy
•
You cannot use DC-Management with snapshots or mirrorclones created using HP P6000
Business Copy. You cannot use DC-Management on the source or the replica.
•
You can resize a snapclone once normalization is complete. During normalization, you cannot
use DC-Management on a snapclone.
•
If you set a dynamic policy on a host volume and then create a snapshot or mirrorclone on
the underlying virtual disk, the policy will not be executed properly. When the policy threshold
is reached, the policy will trigger and the associated job will be created. However, the job
will fail with a message indicating the resize cannot be performed because a replica exists
on the virtual disk. You should disable or remove the policy to prevent it from running.
Guidelines for using DC-Management with HP P6000 Continuous Access
•
When the source virtual disk of a DR pair is extended using DC-Management, the size of the
destination virtual disk is increased automatically. If the destination array does not have enough
capacity to increase the size of the virtual disk, the attempt to extend the source virtual disk
fails. The automatic increase of the destination virtual disk does not require a DCM license
on the destination array. The increase is enabled by the CA license.
•
RSM checks for a DCM license on the source array only. If the destination array does not
have a DCM license, DC-Management will stop working after a failover.
•
You cannot shrink a destination DR group member.
•
You cannot extend a DR group destination.
•
You cannot extend a source virtual disk if the DR link is suspended.
•
Resizing is not supported when the destination is unavailable.
•
You can expand the size of a DR group member (virtual disk) whether the DR group is in
synchronous or asynchronous mode. However, there are certain considerations for expansion
in enhanced asynchronous mode (between arrays running controller software 6.0x or later).
If you expand a virtual disk while in enhanced asynchronous mode, the DR group log is not
resized to reflect the expanded virtual disk. If the expansion is small, the log may be able to
accommodate it. If the expansion is substantial, the log may not be large enough to handle
it.
Expand a DR group member only when the DR group is in synchronous mode as follows:
1. Change the write mode from asynchronous to synchronous.
2. Complete the expansion of the DR group member.
3. If you previously set the maximum size of the DR group log, change the size of the log
to accommodate the expanded DR group member. If the log size was previously
determined by the controller software, no user action is necessary.
4. Change the write mode to asynchronous.
The DR group log size is adjusted to accommodate the expanded DR group member.
DC-Management FAQ
•
Can I extend or shrink any host volume?
No. DC-Management is not supported on all host agents. See DC-Management support.
•
Will extending or shrinking a host volume impact host I/O to the array?
Using DC-Management automatic policies, you can extend or shrink a host volume at any
time without disrupting host I/O.
•
Can I use DC-Management with HP P6000 Continuous Access?
Yes. See Using DC-Management with replication for more information.
•
Can I use DC-Management with HP P6000 Business Copy?
You cannot use DC-Management with snapshots or mirrorclones created using HP P6000
Business Copy. See Using DC-Management with replication for more information.
•
Can I create a snapshot or mirrorclone on a virtual disk that has a DC-Management policy set
on it?
No. If you set a dynamic policy on a host volume and then create a snapshot or mirrorclone
on the underlying virtual disk, the policy will not execute properly. See Using DC-Management
with replication for more information.
•
Can I manually resize a host volume that has a DC-Management policy set on it?
Yes, but you should be aware of the interaction. See Using DC-Management with replication
for more information.
•
How much can I expand a host volume?
You can extend a host volume until its size reaches that of the underlying virtual disk. At that
point, it is necessary to extend the size of the virtual disk to provide additional capacity to
extend the file system.
•
How much can I shrink a host volume?
The minimum size to which you can shrink a host volume is determined by the amount of data
on the host volume. You cannot make a host volume too small to accommodate the data on
it.
•
When I perform a manual extend or shrink, I can choose to extend the virtual disk or the file
system. When I use a dynamic capacity policy, I don't have this choice. How does an extend
or shrink policy work?
When using an extend or shrink policy, the size of both the file system and the underlying
virtual disk is changed when the specified threshold is reached.
•
Can I install more than one extend policy or shrink policy on the same host volume?
No. You can only install one extend policy or shrink policy on a host volume.
DC-Management best practices
•
When setting a dynamic policy, observe the guidelines in Selecting the proper dynamic
capacity policy thresholds.
•
You cannot extend or shrink a file system on a write-protected disk. You can extend or shrink
the underlying virtual disk on a write-protected disk. Shrinking the virtual disk in this case
would enable you to recover unused capacity.
•
Be sure to observe the guidelines when using DC-Management with replication. See Using
DC-Management with replication for more information.
•
Ensure that you have the necessary DC-Management license installed. When using
DC-Management with HP P6000 Continuous Access, HP recommends you install a
DC-Management license on both the source and destination arrays to ensure proper operation
during failover.
•
Use care when manually resizing a host volume on which you have set a policy. For more
information, see Selecting the proper dynamic capacity policy thresholds.
DC-Management examples
The following example presents one scenario for using DC-Management to manage storage
capacity.
Scenario 1
An administrator needs to create a database for the production group. The administrator does not
know how much storage to allocate, but the request is for 200 GB. The administrator decides to
allocate 100 GB initially and use DC-Management policies to resize the capacity as necessary.
An extend policy is set using a threshold of 80% and an increment value of 10 GB. When the size
of the database reaches 80 GB (80% of 100 GB), the DCM policy will trigger and incrementally
increase the size of the host volume.
The administrator also decides to set a DC-Management shrink policy to reduce the size of the
host volume. This may be useful to account for periods of activity during which the database
temporarily grows to a larger size. When the size of the database decreases to the set threshold,
the shrink policy triggers to reduce the size of the host volume.
The administrator uses the email notification feature to be alerted when either policy triggers.
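For illustration only, the arithmetic behind the extend policy in this scenario can be sketched as follows; the variable names are hypothetical.

# Worked numbers from Scenario 1 (illustration only).
initial_size_gb = 100
extend_threshold_pct = 80
increment_gb = 10

trigger_point_gb = initial_size_gb * extend_threshold_pct / 100  # policy triggers at 80 GB of data
size_after_extend_gb = initial_size_gb + increment_gb            # host volume grows to 110 GB
print(trigger_point_gb, size_after_extend_gb)                    # 80.0 110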
6 Jobs
Working with jobs
About jobs
The jobs content pane displays the jobs that you can use, their run histories and scheduled run
times, if any.
Views
•
List of all jobs. See Jobs list tab.
•
List of job instances. See Jobs Run History tab.
•
List of scheduled job events. See Jobs schedule tab.
Actions
•
Actions in the GUI. See Jobs actions summary.
•
You can also interact with jobs from the CLUI. See Job actions cross reference.
Properties
•
Properties displayed in the GUI. See Job properties summary.
•
You can also display properties from the CLUI. See the CLUI command Show Job.
Job actions summary
The following jobs actions are available on the content pane. Some actions have equivalent CLUI
commands. See Job actions cross reference.
Help on job actions on the: List tab, Run History tab, Monitor Job window, and Schedule tab.
Job actions on the List tab
You can perform the following job actions by right-clicking a job in the List tab, or by selecting a
job and using the Actions menu.
•
View Properties. View the commands in a job. Procedure.
•
New. Create a new job. Procedure.
•
Edit. Edit a job. Procedure.
•
Delete. Delete a job. Procedure.
•
Run. Run or validate a job. Procedure.
•
Monitor. Monitor the progress of a job instance and control its execution. Procedure.
•
Export. Export the selected job(s) to an XML format file. Procedure.
Job instance actions on the Run History tab
You can perform the following actions on a job instance by right-clicking a job instance in the Run
History tab, or by selecting a job instance and then using the Run History tab's Actions menu.
•
Monitor. Monitor the progress of a job instance and control its execution. Procedure.
•
Delete. Delete a job instance. Procedure.
•
Abort. Stop a job instance. Procedure.
•
Continue. Continue a paused job instance. Procedure.
•
Pause. Pause a job instance. Procedure.
Scheduled job event actions on the Schedule tab
•
View Properties. View the interval (frequency) and start time of a scheduled job event.
Procedure.
•
Enable/Disable. Toggle a scheduled job event on or off. Procedure.
•
Schedule Job. Schedule a job event. Procedure.
•
Edit Schedule. Edit a scheduled job event. Procedure.
•
Remove. Delete a scheduled job event. Procedure.
Job instance actions on the Monitor Job window
•
Continue. Continue a paused job instance. Procedure.
•
Pause. Pause a running job instance. Procedure.
•
Abort. Stop a running job instance. Procedure.
•
Refresh. Refresh the Monitor Job window.
Job Planning - Tru64 UNIX
Suspending I/O before replicating AdvFS volumes - Tru64 UNIX
If you plan to replicate an AdvFS host volume or volume group (fileset or domain) that has heavy
I/O, HP recommends that the job includes steps to briefly suspend I/O of the appropriate mount
points just before the replication steps.
When generating a job template, select the option Suspend Source Before Replication to add the
steps. If creating a custom job, manually enter the steps. An example follows.
Line  Task
...
14    // Suspend the host application.
15    Launch ( $source_host, %suspend_command_line%, "", WAIT, "0" ) onerror pauseat E1:
16    DO {
17    $Rep1 = SnapshotHostVolume ( $source_hostvol_unc1, FULLY_ALLOCATED, SAME, NOWAIT ) onerror pauseat E1:
18    //
19    } ALWAYS {
20    // Resume the host application.
21    Launch ( $source_host, %resume_command_line%, "", WAIT, "0" )
22    }
23    //
...
In the first Launch command (example, line 15) enter a host command or script file name to suspend
I/O (freeze the mount point). For example, the syntax for the freezefs utility is:
"freezefs –t -1 /mountPoint"
Be sure to include the -1 argument. This prevents the OS from automatically performing a subsequent
thaw. If omitted, the job can fail on the resume step (example, line 21).
In the second Launch command (example, line 21) enter a host command or script file name to
resume I/O (thaw the mount point), for example:
"thawfs /mountPoint"
Job actions cross reference
You can work with jobs using GUI actions and CLUI commands. See Jobs and job instances and
Scheduled job events.
Jobs and job instances

Create jobs
GUI action                    Job command or template    CLUI command
Jobs > New                    -                          -

Delete jobs
GUI action                    Job command or template    CLUI command
Jobs > Delete                 -                          -

Edit jobs
GUI action                    Job command or template    CLUI command
Jobs > Edit                   -                          -

Other job tasks
GUI action                    Job command or template    CLUI command
Jobs > Abort                  -                          Set Job
Jobs > Export                 -                          Set Server
Jobs > Monitor (status)       -                          Show Job
Jobs > Monitor > Abort        -                          Set Job
Jobs > Monitor > Continue     -                          Set Job
Jobs > Monitor > Pause        -                          Set Job
Jobs > Run                    -                          Set Job

View jobs
GUI action                    Job command or template    CLUI command
Jobs > View Properties        -                          Show Job

Scheduled job events

Create scheduled job event
GUI action                          Job command or template    CLUI command
Jobs > Schedule > Schedule job      -                          -

Delete scheduled job event
GUI action                          Job command or template    CLUI command
Jobs > Schedule > Remove            -                          -

Edit scheduled job event
GUI action                          Job command or template    CLUI command
Jobs > Schedule > Edit              -                          -

Other scheduled job event tasks
GUI action                          Job command or template    CLUI command
Jobs > Schedule > Enable/Disable    -                          Set Job

View scheduled job event
GUI action                          Job command or template    CLUI command
Jobs > View Properties              -                          Show Job
Job properties summary
For help on properties, see the following tab in the jobs properties window.
•
Job listing.
See also Viewing job properties.
Job views
See the following examples: List view, Run history, Schedule
Aborting job instances
Abort (stop) a running or paused job instance.
Considerations
•
You can use the GUI or CLUI. See Job actions cross reference.
•
If using the GUI, consider using the Monitor Job window to be more selective about when (on
which task) to abort the job.
Procedures
Aborting a job instance from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
3. Select the job instance you want to abort.
4. Select Actions > Abort.
   NOTE: You can also abort a job using the Monitor Job window. Access this window from
   either the List tab or Run History tab by selecting a job or job instance, and then selecting
   Actions > Monitor. In the Monitor Job window, click Abort.
5. Click OK to confirm the action.
Messages appear in the Job Events pane of the Monitor Job window, indicating cancellation
of the job instance and its current task.
Aborting a job instance from the CLUI
1. Open a CLUI window.
2. Issue a Show Job command. Include the job name and the instances switch. Review the output
   and note the instance name or instance ID of the specific instance you want to manage, for
   example, daily_backup-4 or ID: 55ed822a-0afb-48ad-95bd-d362389344ad.
3. Issue a Set Job command. Include the job instance name or ID and the abort switch.
4. Review the output and confirm the action.
Continuing job instances
Continue a paused job or a running job that is waiting. See Pause and continue.
Considerations
•
You can use the GUI Run History tab or Monitor Job window. You can also use the CLUI. See
Job actions cross reference.
Procedures
Continuing a job instance from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
3. Select the job instance to continue.
4. Do one of the following:
   • Select Continue.
     A confirmation window opens.
   • Select Monitor.
     On the Monitor Job window, click Continue.
     A confirmation window opens.
5. Click OK to confirm the action.
   Execution of the job instance resumes.
Continuing a job instance from the CLUI
1. Open a CLUI window.
2. Issue a Set Job command. Include the job instance name (or ID) and the continue switch.
Copying jobs
Copy an existing job and use it to create a new job.
Considerations
•
You can only use the GUI to copy a job.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job you want to copy.
4. Select Actions > Edit.
   The job is displayed in the Editing Job window.
5. Enter a new name for the job.
6. Click Save As.
   The job is saved under the new name.
Creating jobs
Create a job. When you create a job you define which tasks (commands) a job will perform, the
order in which the commands will be executed, and the parameters for each command. See Job
commands list and Job templates list.
Considerations
•
You can only use the GUI to create a job. You cannot use a separate text editor or the CLUI.
•
If appropriate, make a copy of an existing job to use as a starting point.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Select Actions > New.
   The Create Job window opens.
3. Enter the job name (required).
4. Enter the job description (optional).
5. Enter the job commands. To use a template, click Generate and select a template. See
   Generating job templates. To enter individual commands, select a command. See Adding
   commands.
6. As appropriate, double-click each command in the Job content pane to open it in an editing
   window. Select or enter values for the command arguments. If applicable, enter a label, or
   an assignment (variable), or select a branching type (see Job language for more information).
7. Click Save.
Deleting jobs
Considerations
•
You can delete a job from the GUI or CLUI. See Job actions cross reference.
•
Some jobs cannot be deleted because the status of one or more instances prevents deletion.
For example, you cannot delete a job with a job instance that is paused.
Procedures
Deleting a job from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job you want to delete.
4. Select Actions > Delete.
5. Click OK.
Deleting a job from the CLUI
1. Open a CLUI window.
2. Issue a Delete Job command.
Deleting job instances
Delete a job instance.
Considerations
• You can use the GUI or the CLUI to delete a job instance.
Procedures
Deleting a job instance from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
3. Select the job instance you want to delete.
4. Select Actions > Delete.
5. Click OK to confirm the action.
Deleting a job instance from the CLUI
1. Open a CLUI window.
2. Issue a Delete Job command. Include the instance switch and job instance name (or ID).
Developing jobs
HP recommends that you develop and thoroughly test jobs in a non-production environment before
using them in production.
1. Plan the job. Determine the goals for the job and identify resources that must be identified in
the job.
2. Create and save the job.
3. Validate the job.
4. Run the job and evaluate the results.
5. Edit and retest the job, if necessary.
Editing jobs
Edit a job.
Considerations
•
You can only use the GUI to edit a job. You cannot use a separate text editor or the CLUI.
•
If you edit a job name and click Save, the job and its run history listings are renamed.
•
If you edit a job name and click Save As, a new job is created.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job you want to edit.
4. Select Actions > Edit.
   The job is displayed in the Editing Job window.
5. Edit the job.
6. Click Save.
Exporting jobs
Export one or more jobs.
Considerations
•
You can use the GUI or the CLUI set server command to export jobs.
•
You can select a single job or multiple jobs to export.
•
The jobs file is saved in XML format.
GUI Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job or jobs you want to export.
4. Select Actions > Export.
5. Enter the full path and file name.
6. Click OK.
CLUI Procedure
1. Open a CLUI window.
2. Issue a set server command with the export_job=joblist option.
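For example, a minimal sketch that exports two jobs. The job names are hypothetical and the exact option syntax may vary; confirm it in the CLUI help.

    set server export_job=daily_backup,weekly_report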
Job editing tips and shortcuts
In this section, the word command refers to a job command (script action).
Adding a command
Job editing shortcuts
Copying lines
Moving a line
Commenting-out lines
Nonadjacent lines
Deleting lines
Selecting all lines
Deleting a transaction
Selecting adjacent lines
Editing a command
Adding a command
Using double-click:
1. In the right pane, select a line.
2. In the left pane, double-click the command to be added.
3. The command is added in the right pane after the selected line.
Using drag and drop:
1. In the left pane, select the command.
2. Click the selected command and hold the mouse button down, or press Ctrl+C.
3. Drag and drop the command on a line in the right pane, or press Ctrl+V.
The command is added after the line.
Copying lines
1. In the right pane, select the lines.
2. Click the selected lines and hold the mouse button down, or press Ctrl+C.
3. Drag and drop the lines on another line in the right pane.
   The copied lines are added after the line.
Changing command lines to comments
Changing one line
1. In the right pane, select the line you want to comment out.
2. Press Ctrl+/.
   The line is changed to a comment line that retains the command.
To restore the line as a command, select it and press Ctrl+/.
Changing all lines
1. In the right pane, select any line.
2. Press Ctrl+Alt+/.
   All active lines are changed to comments.
Deleting lines
1. In the right pane, select the lines that you want to delete.
2. Press the Delete key.
   The lines are deleted.
Deleting a transaction
1. In the right pane, double-click the DO { line.
   The transaction lines are selected.
2. Press the Delete key.
   The transaction lines are deleted.
Editing a command
1. In the right pane, select the command (line).
2. Double-click the command.
   The Editing Task window appears.
3. The entries in the window vary with the type of command. Edit as appropriate for the command.
4. Click OK.
Job editing shortcuts
The following keyboard shortcuts are helpful when working in the job editor.
Action                                       Key combination
Copy selection                               Ctrl+C
Cut selection                                Ctrl+X
Extend selection left or right               Shift+left arrow or Shift+right arrow
Extend selection to previous or next word    Ctrl+Shift+left arrow or Ctrl+Shift+right arrow
Extend selection to start or end             Shift+Home or Shift+End
Move to start or end of text                 Home or End
Paste from clipboard                         Ctrl+V
Select all                                   Ctrl+A
Moving a line
1. In the right pane, select the line.
2. Click (and release) the up arrow or down arrow. The selected line moves up or down in the job.
Selecting adjacent lines
1. In the right pane, select the first line.
2. Press and hold the Shift key.
3. Select the last line.
   All lines between the first and last line are selected.
Selecting all lines
1. Click anywhere in the right pane.
2. Press Ctrl+A.
   All lines are selected.
Nonadjacent lines
The replication manager does not support selection of nonadjacent lines.
Editing individual commands (tasks)
Select or enter argument values for the command. If applicable, enter job flow controls for the
command.
Procedure
1. In the Create Job or Edit Job window, double-click the command (task) to edit. The Editing Task window opens.
2. Enter values for the command arguments.
3. If applicable, enter a label for this command (task). See Job command labels.
4. If applicable, enter an assignment (variable name) for this command (task). See Job command assignment (variables).
5. If applicable, select a branching instruction for this command (task). See Job command branches.
6. Click OK.
Properties
• Label. Label for this command (task).
• Assignment. Variable name assigned for this command (task).
• Branching Type and Label. Branching instruction for this command (task).
Generating job templates
Generate a job template that contains the correct commands, in the proper order, to perform your
tasks. Complete the commands by specifying the specific resources for the job. See Job templates
list for brief descriptions.
Considerations
•
You can only use the GUI to generate a job template.
Procedure
1. In the Create Job or Editing Job window, enter a job name. Otherwise, you cannot save the job that you are generating.
2. Click Template.
   The Job Template Creation window opens.
Importing jobs
Import a saved jobs file.
Considerations
•
You can use the GUI or the CLUI set server command to import jobs.
GUI Procedure
1. Select File > Import RSM Job(s).
2. Enter the full path and file name of the jobs file to import.
3. Click OK.
CLUI Procedure
1. Open a CLUI window.
2. Issue a set server command with the import path=path/filename option.
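For example, a minimal sketch. The file path is hypothetical and the exact option syntax may vary; confirm it in the CLUI help.

    set server import path=C:\jobs\exported_jobs.xml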
Importing legacy jobs
Importing overview
Import an HP P6000 Business Copy legacy job into the replication manager.
Considerations
•
You can only use the GUI to import a legacy job.
Procedure
1. Locate and prepare legacy job files for importing. See Preparing to Import.
2. Using a text editor, open the legacy job that you want to import. Select and copy the legacy job lines.
3. In the replication manager, start a new job. See Creating jobs.
4. In the Create Job window, click Import.
   The Import Business Copy Legacy Job window opens.
5. Press CTRL+V to paste the captured legacy job lines.
6. Click Import.
   The importing window closes. The legacy job is imported and appears in the Create Job window.
7. Review the imported job and edit as necessary. See Imported jobs.
8. Validate and save the job.
Preparing to import
Determine the management servers in your environment on which HP Business Copy EVA 2.X is
installed. For each of these servers, perform the following procedure.
Procedure
1. Log on to the management server.
2. Navigate to the directory: C:\Program Files\Compaq\SANworks\Enterprise Volume Manager\bin\Jobs
3. Select files with the extension .evm. These are the job files. Do not select the files named sample job.evm and Scheduler.evm.
4. Copy the files to a location that is readily available to you when using the replication manager. You will need to open each job file in a job editor while using the Create Job window.
Legacy BC 2.x job command equivalence
The following table lists HP Business Copy EVA/MA/EMA 2.x job operations and the equivalent
job commands in the replication manager. Details regarding individual command parameters are
not included.
Business Copy EVA/MA/EMA 2.x Job operation name and use | Replication Solutions Manager equivalent job command
;
Indicates a comment line.
//
CLONE UNIT
Creates a mirrorset of a virtual disk. The clone operation
applies to MA/EMA arrays only.
None—MA/EMA arrays are not supported.
CLONE VOLUME
None—MA/EMA arrays are not supported.
Creates mirrorsets of virtual disks that comprise a volume.
The clone operation applies to MA/EMA arrays only.
DELAY
Wait
Halts the BC job for the specified number of seconds before
executing the next step in the job.
LAUNCH
Executes the specified command, batch file or script on a
BC enabled host.
Launch
LAUNCHUNDO
Executes the specified command, batch file or script on a BC enabled host when undoing a job. Launchundo is used to perform an action on a BC enabled host when that action is required only when undoing a job.
Launch
Create a rollback (undo) branch in the job and include a launch command that specifies the command, batch file or script.
MOUNT UNIT
Presents a virtual disk replica (or pre-existing virtual disk)
to a host OS and requests mounting using the specified
parameters.
There is no single equivalent command. The following can
apply:
PresentStorageVolume
CreateHostVolume
CreateHostVolumeDiscrete
MountHostVolume
MOUNT VOLUME_ALL
Presents the virtual disks that underlie a host volume replica
(or pre-existing host volume) to a host OS and requests
mounting of all components using the specified parameters.
Components can only be logical volumes.
There is no single equivalent command. The following can
apply:
CreateHostVolumeGroup
CreateHostVolumeDiscrete
MountEntireVolumeGroup
MOUNT VOLUME_ALL {raw}
CreateDiskDevice
Presents the virtual disks that underlie a host volume replica
(or pre-existing host volume) to a host OS and requests that
all components be handled as raw devices. The
components can only be logical volumes.
MOUNT VOLUME_SINGLE
Presents the virtual disks that underlie a host volume replica
(or pre-existing host volume) to a host OS and requests
mounting of one component using the specified parameters.
The component may be an OS defined partition, slice, disk
section or logical volume.
There is no single equivalent command. The following can
apply:
CreateHostVolumeGroup
CreateHostVolumeDiscrete
MountVolumeGroupComponent
MOUNT VOLUME_SINGLE {raw}
CreateDiskDevice
Presents the virtual disks that underlie a host volume replica
(or pre-existing host volume) to a host OS and requests that
the specified component be handled as a raw device.
Components can only be logical volumes
NORMALIZE UNIT
Checks the state of a unit (virtual disk). When the state is normal or unshared, BC executes the next step in the job. While the state is normalizing or unsharing, BC continues to check the state. Normalization applies to EVA and MA/EMA arrays.
WaitStorageVolumeNormalization
WaitStorageVolumesNormalization
MA/EMA arrays are not supported.
NORMALIZE VOLUME
Checks the states of units (virtual disks) that comprise a volume. When all of the states are normal or unshared, BC executes the next step in the job. While any one state is normalizing or unsharing, BC continues to check the state. Normalization applies to EVA arrays and MA/EMA arrays.
WaitHostVolumeNormalization
WaitVolumeGroupNormalization
MA/EMA arrays are not supported.
PAUSE
Pause
BC stops the job at the step containing the pause operation.
When a continue command is issued from the graphical
user interface, or from the BC command line interface
(EVMCL), BC restarts the job, beginning at the step after
the pause step.
RESUME
Launch
Executes the specified command, batch file, or script on a
given host. Resume is typically used to restart database
I/O that was halted by a suspend operation.
SCHEDULE
Creates a job event in the BC job scheduler.
Use the job scheduling feature in the GUI. The feature
replaces the schedule command.
SET CA_SUBSYSTEM
None—Identification is handled automatically.
Stores the name of an EVA array in a $BCV variable. This
identifies the storage system in Continuous Access
environment on which to select the source virtual disk and
create the snapshot or snapclone BCV.
SET DISKGROUP
SetDiskGroupForSnapclone
For a given job, specifies the disk group in which EVA
snapclones are created by BC when the job is run. Setting
the disk group for snapclones applies to EVA arrays only.
SET UNIT_BCV
None—Not required
Stores the name of an existing unit (virtual disk) in a $BCV
variable. The operation is required to mount a virtual disk
that already exists before the job is run.
SET VOLUME_BCV
Stores references to an existing host volume in a $BCV
variable. The operation is required to unmount a host
volume that already exists before the job is run.
None—Not required
SNAP UNIT {demand_allocated_hsv}
SnapshotStorageVolume (demand_allocated)
Creates a point-in-time copy of a virtual disk. The copy is
a demand allocated snapshot. Demand allocation applies
to EVA arrays only.
SNAP UNIT {fully_allocated}
Creates a point-in-time copy of a unit (virtual disk). The
copy is a fully allocated snapshot. Full allocation applies
to EVA and MA/EMA arrays.
SnapshotStorageVolume (fully_allocated)
MA/EMA arrays are not supported.
SNAP UNIT {snapclone_hsv}
Creates a point-in-time copy of a virtual disk. The copy is
a snapclone. Snapclones apply to EVA arrays only.
SnapcloneStorageVolume
SNAP VOLUME {demand_allocated_hsv}
Creates point-in-time copies of the virtual disks that comprise a host volume. The copies are demand allocated snapshots. Demand allocation applies to EVA arrays only.
SnapshotHostVolume (demand_allocated)
SnapshotHostVolumeGroup (demand_allocated)
SNAP VOLUME {fully_allocated}
Creates point-in-time copies of the virtual disks that
comprise a host volume. The copies are fully allocated
snapshots. Full allocation applies to EVA and MA/EMA
arrays only.
SnapshotHostVolume (fully_allocated)
SnapshotHostVolumeGroup (fully_allocated)
SNAP VOLUME {snapclone_hsv}
Creates point-in-time copies of the virtual disks that
comprise a host volume. The copies are snapclones.
Snapclones apply to EVA arrays only.
SnapcloneHostVolume
SnapcloneHostVolumeGroup
SPLIT_BEGIN UNIT
Begins the split of a clone virtual disk (mirrorset). Clone
split operations apply to MA/EMA arrays only.
None—MA/EMA arrays are not supported.
SPLIT_BEGIN VOLUME
Begins the split of the clone virtual disks (mirrorsets) that
comprise a host volume. Clone split operations apply to
MA/EMA arrays only.
None—MA/EMA arrays are not supported.
SPLIT_FINISH UNIT
None—MA/EMA arrays are not supported.
Completes the split of a clone virtual disk (mirrorset). Clone
split operations apply to MA/EMA arrays only.
SPLIT_FINISH VOLUME
None—MA/EMA arrays are not supported.
Completes the split of the clone virtual disks (mirrorsets)
that comprise a host volume. Clone split operations apply
to MA/EMA arrays only.
SPLIT UNIT
None—MA/EMA arrays are not supported.
Splits a clone virtual disk (mirrorset) in a single operation.
Clone split operations apply to MA/EMA arrays only.
SPLIT VOLUME
Splits the clone virtual disks (mirrorsets) that comprise a
host volume in a single operation. Clone split operations
apply to MA/EMA arrays only.
None—MA/EMA arrays are not supported.
SUSPEND
Launch
Executes the specified command, batch file, or script on a
given host. Suspend is typically used to briefly halt the I/O
of a database or other application that is running on a
host computer.
UNDO
Causes BC to immediately undo the preceding steps in a
job.
There is no single equivalent command. Steps can be
undone by including rollback branches that contain the
appropriate inverse commands. For example, to undo
create steps, include appropriate delete steps in the
rollback branch.
UNMOUNT
Requests that a host OS unmount a host volume.
UnmountHostVolume
UnmountEntireVolumeGroup
WAITUNTIL
Halts the BC job until the specified date and time, before
continuing with the next step.
Waituntil
Job background operations
VGFREEZE
This background operation is generated automatically at
runtime when the job requires that a host volume group be
frozen (host I/O is not allowed).
None—The replication manager does not automatically
freeze volume groups. If special processing is needed for
the volume group, you must include a launch command
that runs the appropriate command, batch file, or script on
the host.
VGTHAW
This background operation is generated automatically at
runtime when the job requires that host volume group be
thawed (host I/O is allowed).
None—The replication manager does not automatically
thaw volume groups. If special processing is needed for
the volume group, you must include a launch command
that runs the appropriate command, batch file, or script on
the host.
Logical volumes and volume groups in job commands
Some command arguments require the selection or entry of a logical volume (a component of a
volume group). Because the replication manager considers logical volumes and volume groups to
be host volumes, you must select the resource from a host volumes list. There is not a separate list
of logical volumes or volume groups.
Monitoring and managing job instances
Monitor the start date and time, elapsed time, and status of each task in a job instance, as well
as job events and progress using the Monitor Job window. You can also pause, continue, and
abort job instances from this window.
You can access the Monitor Job window from either the List tab or the Run History tab. This
procedure uses the Run History tab.
Considerations
•
You can also manage job instances from the CLUI, using the Set Job command.
•
You cannot monitor a job instance that is not running or paused.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
3. Select the job instance you want to monitor or manage.
4. Select Actions > Monitor.
   The Monitor Job window opens, displaying the operating state and progress of the selected job instance.
5. To manage the job instance, click one of the following:
   • Pause. Pauses the job instance at the next break in its execution.
   • Continue. Continues the job instance at its paused step.
   • Abort. Stops the job instance and sets the job status to failed.
   • Refresh. Refreshes the window.
6. Click OK.
Pausing job instances
Pause a job instance at the next step in its execution.
Considerations
•
You can use the GUI Run History tab or Monitor Job window. You can also use the CLUI.
Procedures
Pausing a job instance from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
3. Select the job instance to pause.
4. Do one of the following:
   • Select Pause. A confirmation window opens.
   • Select Monitor, and then in the Monitor Job window, click Pause. A confirmation window opens.
5. Click OK to confirm the action.
   Execution of the job instance is paused.
Pausing a job instance from the CLUI
1. Open a CLUI window.
2. Issue a Set Job command. Include the job instance name (or ID) and pause switch.
Resource is not selectable
When a command's argument is not selectable (does not appear in a selection list), you must manually enter the resource name or enter a variable that represents the resource.
This typically occurs when one command in a job refers to a resource that is created by another
command in the same job.
Example - Mounting a replica
When a job creates a replica of a disk, the replica will not exist until the job is run. Thus, when
editing a command that refers to the replica, the replica's UNC name does not appear in a selection
list. In this case, you can manually enter the name (as specified by the command that creates it),
or reference the resource by using a variable.
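A minimal sketch of the variable approach, using argument values from the examples elsewhere in this chapter (the source virtual disk name is hypothetical):

    $Rep1 = SnapshotStorageVolume ( "\\Array2\Cats", FULLY_ALLOCATED, SAME, "", WAIT )
    // A later command in the same job can reference the new replica as $Rep1
    // instead of a UNC name, for example DeleteStorageVolumes ( $Rep1 ) or a
    // mount command that accepts the replica as its storage volume argument.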
Example - Working with a new DR group
When a job creates a new DR group, a new destination storage volume will not exist until the job
is run. Thus, when editing the commands that refer to the new disks, the disk's UNC name does
not appear in a selection list. Because in this case the new disk cannot be referenced by a variable,
you must manually enter the name (as specified by the command that creates it).
Scheduled job events do not run
Problem
Scheduled job events do not run. In the events pane, one of the following messages is displayed:
Internal error occurred starting job <job name>.
Missed scheduled execution time of <job name>.
Explanation / resolution
The security credentials (user name and password) that were saved with scheduled job events
might be incorrect. This can occur if the security credentials have been changed on the RSM server.
The job credentials must be changed to match those of the RSM server.
1. Open a scheduled job event. See Editing scheduled job events.
2. Enter the correct security credentials (to log on to the replication manager server) and click
OK.
3. Repeat for each scheduled job event that has incorrect security credentials.
In some cases, subject to your security policies, consider changing the security credentials of the
replication manager server (one change) to match those saved with the job events (potentially
many changes).
Running jobs
Run a job. Whenever you run a job, HP recommends that you validate it. Validation helps prevent
job failures by checking resource availability before the job begins executing commands. See Jobs validation.
Considerations
•
You can use the GUI or CLUI.
•
Jobs often interact with resources in the SAN. If those resources change, the jobs may not run
properly.
•
Jobs that interact with the same resources can impact each other. One job can change resources
in such a manner that another job is impacted.
Procedures
Running a job from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job you want to run.
4. Select Actions > Run.
   The Confirm Job Run window opens.
5. Select an execution mode:
   • Normal - Validates the job and then runs it.
   • Validate Only - Validates the job but does not run it.
   • Skip Validation - Runs the job without first validating it.
6. Click OK to run the job.
   The Monitor Job window opens, indicating the job execution mode.
Running a job from the CLUI
1. Open a CLUI window.
2. Issue a Set Job command. Include the job name, mode switch, and run switch. The mode should be normal or skip validation.
Selecting values for arguments
Select an argument value. The argument editor allows you to select one or more values for job
command arguments. (See Arguments and Argument lists.) Depending on the command, you can
select from several types of resources. For example, the SetListVariable command allows you to
select from storage volumes and storage containers.
Considerations
• You can only use the GUI.
Procedure
The following is the general procedure for selecting arguments in the Editing Task window.
1. Select an argument and click the expand-argument button [...].
The Argument Editor window opens.
2. Select a resource type (if appropriate) and click OK.
   A list of available resources appears. In some cases, you can filter the list or narrow the choices.
3. Select a resource (or several if appropriate) to include in the argument and click OK.
   The selected resources are added to the command argument and the Argument Editor window closes.
4. In the Editing Task window, select another argument, or click OK to close the window.
IMPORTANT: Selected values appear in the job editor but are not saved until the job is saved.
Scheduling job events
Creating scheduled job events
You can create a scheduled job event that runs a job at a specified interval (frequency) and start
time.
You can create multiple scheduled job events for a single job. For example, create a scheduled
job event to run a job every day at 5:00 PM and create another event to run the job once a week
(every 7 days) at 12:00 AM.
See also Editing scheduled job events.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Schedule tab.
   The Schedule tab shows the list of scheduled job events.
3. Select Actions > Schedule Job.
   The Schedule a Job wizard opens, displaying the Select a Job page.
4. Follow the instructions in the wizard.
Editing scheduled job events
You can edit a scheduled job event to change the start time and run interval (frequency). You can
also update the saved logon credentials. See also Creating scheduled job events.
Considerations
•
You cannot save a scheduled job event without entering security credentials.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Schedule tab.
   The Schedule tab shows the list of scheduled job events.
3. Select the scheduled job event to edit.
4. Select Actions > Edit Schedule.
   The Edit Job Schedule window opens.
5. On the Credentials tab, enter the logon credentials that are required for the job to access the replication manager server.
6. On the Interval tab, you can change the start time and interval (frequency). See Choosing a run interval.
7. Click OK.
   The scheduled event is updated.
Enabling and disabling scheduled job events
A scheduled job event can be enabled or disabled. When enabled, the job is automatically run
at the scheduled interval and start time.
When disabled, the job event remains in the list of scheduled events but is not automatically run.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Schedule tab. The Schedule tab shows the list of scheduled job events.
3. Select the scheduled job event.
4. Review the Enabled property check box. If the box is selected, the scheduled job event is enabled; otherwise it is not. Performing the next step changes the property.
5. Select Actions > Enable/Disable.
   The enabled property is changed (toggled) and the check box display is updated.
NOTE: You cannot change the enabled property by clicking the check box.
Choosing a run interval
Use the following table to help determine the type of interval to choose in the Schedule a Job
wizard.
Interval type: Run once
Behavior: Runs a job one time (today) at the specified time. Example: run at 2:00 PM (today).
Past-time adjustment: If you enter a time of day that is in the past, the schedule wizard adjusts the start time by adding 5 minutes to the current time. Example: You enter 2:00 PM when it is already 2:05 PM. The adjusted start time becomes 2:10 PM.

Interval type: Every x hours-minutes
Behavior: Runs a job every x hours, or hours-and-minutes, or minutes only, after initially running at the specified time (today). Example: run every 48 hours after first running at 2:00 PM (today). Minimum interval = 10 minutes. Maximum interval = 999 hours. See also Hourly interval equivalents.
Past-time adjustment: If you enter a time of day that is in the past, the schedule wizard adjusts the initial run time by adding whole hours until the time is at least 1 hour in the future. Example: You enter 2:00 PM when it is already 2:05 PM. The adjusted time for the initial run becomes 4:00 PM. Subsequent runs are at 4:00 PM.

Interval type: Once per day
Behavior: Runs the job once a day at the specified time. Example: run at 2:00 PM every day.
Past-time adjustment: If you enter a time of day that is in the past, the schedule wizard makes no adjustment to the start time, but the first run occurs the next day. Example: You enter 2:00 PM when it is already 2:05 PM. The first run of the job is at 2:00 PM the next day.

Interval type: Weekly on
Behavior: Runs a job once a week at the specified day and time. Example: run on Mondays at 2:00 PM.
Past-time adjustment: If you specify a day of the week or time of day that is in the past, the schedule wizard adjusts the start day by adding one week. Example: On Monday, 21 November you specify Monday at 2:00 PM when it is already 2:05 PM or is already Tuesday. The adjusted start day becomes next Monday, 28 November.
Hourly interval equivalents
Typical hourly interval equivalents are shown below. A value of 999 hours is the maximum that
can be entered.
Hourly interval equivalents
24 hrs Every day
48 hrs Every other day
168 hrs Every week
336 hrs Every other week
720 hrs Every 30 days
728 hrs Every month (average)
999 hrs Every 41.63 days, 5.95 weeks, or 1.40 months
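As a check on the table, 336 hours is 336 / 24 = 14 days (every other week), and 999 hours is 999 / 24 = 41.625 days, which the table shows rounded to 41.63 days.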
Removing scheduled job events
Remove a scheduled job event from the scheduler.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Schedule tab.
   The Schedule tab shows the list of scheduled job events.
3. Select the scheduled job event to remove.
4. Select Actions > Remove.
   The Confirm Action window opens.
5. Click OK.
The scheduled event is removed from the list.
Viewing scheduled job events
A scheduled job event can be enabled or disabled. When enabled, the job is automatically run
at the scheduled interval and start time.
When disabled, the job event remains in the list of scheduled events but is not automatically run.
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Schedule tab.
   The Schedule tab shows the list of scheduled job events.
3. Select the scheduled job event.
4. Select Actions > View Properties.
   The Job Schedule Properties window opens and displays the scheduled job event description.
5. Click OK to close the window.
Validating jobs
Validate a job to check resource availability before the job begins executing. See Jobs validation.
Considerations
•
You can use the GUI or CLUI. See Job actions cross reference.
Procedures
Validating a job from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job to validate.
4. Select Actions > Run.
   The Confirm Run Job window opens.
5. Select Validate.
6. Click OK.
Validating a job from the CLUI
1. Open a CLUI window.
2. Issue a Set Job command. Include the job name, mode switch, and run switch. The mode switch should be validate.
Viewing job status
You can view the overall job status or the status of an individual job.
Considerations
•
You can use the GUI or CLUI. See Job actions cross reference.
Procedures
Viewing the overall job status from the GUI
1. In the navigation pane, locate the Jobs resources in the resources tree.
   The icon indicates the overall job status. For example, if just one job has run and failed, the icon indicates there is a job failure.
Viewing a single job's status from the GUI
1. In the Jobs content pane, click the Run History tab.
   The Run History window appears.
2. Click the Name column heading to sort the jobs by name.
3. Locate the run history records for the job.
4. Select the Run Date/Time of the job instance whose status you want to check.
   An icon in the Completion Status column indicates the job status.
5. To see the job status in text form, place the cursor in the Completion Status field.
   A tooltip displays the job status text.
Viewing jobs by job status from the GUI
1. In the Jobs content pane, click the Run History tab.
   The Run History window appears.
2. Click the Completion Status column heading to sort the jobs by status.
3. If necessary, scroll down the list.
Viewing jobs and job instances
View the list of the jobs and job instances.
Considerations
•
You can use the GUI or CLUI. See Job actions cross reference.
Procedures
Viewing the jobs list from the GUI
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
   A list of available jobs is displayed.
Viewing job instances
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the Run History tab.
   A list of job instances is displayed.
Viewing job properties
View a job's listing without opening it in the job editor.
Considerations
•
You can only use the GUI to view a job's properties (listing).
Procedure
1. In the navigation pane, select Jobs to display the Jobs window in the content pane.
2. Click the List tab.
3. Select the job whose properties you want to view.
4. Select Actions > View Properties.
   The Job Properties window opens, displaying the job's run state and commands.
Job concepts
Job language overview
The job language provides a simple and structured way to automate replication and other storage
related tasks. A typical job consists of commands, assignments (variables), branching instructions,
labels, comments and exits. Jobs with launch commands often include transactions. See also
Imported jobs.
Click the links below for information.
Line  Task
1     // Replicate storage volumes.                                         (comment)
2     //
3     ValidateStorageSystem ( %array_name% )                                (command)
...
9     Launch ( %source_host%, %suspend_command_line%, "", WAIT, "0" ) onerror pauseat E1:    (branch)
10    DO {                                                                  (transaction start)
11    $Rep1 = SnapshotStorageVolume ( %array_name_source_storvol_unc1%, FULLY_ALLOCATED, SAME, %dest_storvol1%, NOWAIT ) onerror pauseat E1:    (assignment)
12    //
13    } ALWAYS {
14    // Resume the host application.
15    Launch ( %source_host%, %resume_command_line%, "", WAIT, "0" )
16    }                                                                     (transaction end)
...
27    Exit (SUCCESS)                                                        (exit)
28    //
29    // Failure exit - no rollback needed.
30    E1: Exit (FAILURE)                                                    (label)
...
Jobs, templates, and commands
You can create, save, run, schedule, and manage jobs that automate replication tasks.
Job editor
Use the replication manager's specialized job editor to create and edit jobs.
Job templates
Job templates allow you to quickly create typical jobs, for example, making local or remote copies
of virtual disks. See Job templates list.
Job commands
You can also create custom jobs from the set of specialized job commands. See Job commands
list.
Job instances
When a job is running or has been run, it is called a job instance. Job instances are displayed in
the Jobs Run History tab.
The format of a job-instance name is the job name, plus a sequence number. For example, the job
named daily_backup when run two times would have job instance names of daily_backup-1 and
daily_backup-2.
Aborted job instances
In some cases, a job instance may not stop when it is aborted. This can happen if the instance is
hung while executing a command to a low level device.
When the replication manager server is stopped and restarted, all aborted job instances are
cancelled.
Arguments
Most job commands include arguments. When a command is initially entered in a job, default
values may appear in command arguments. In some cases, there are no defaults and you must
specify a value. Required arguments are denoted by % characters. In the following example, %storvol_unc_name% indicates that a value must be entered for the UNC name of the storage volume.
SnapcloneStorageVolume ( %storvol_unc_name%, "", SAME, "", WAIT )
(The first argument, %storvol_unc_name%, is the required argument.)
See Resource names and UNC format.
Argument lists
In some commands an argument list can be specified. An argument list consists of individual
resource names, separated by commas, and enclosed in parenthesis.
For example, a list of storage volumes (virtual disks):
("\\Array2\Cats", "\\Array2\Dogs", "\\Array3\Cars")
Assignments (variables)
The Editing Task window allows you to create assignments (variables) that refer to specific resources
or that reference the results of a command.
In the following example, a lengthy UNC-formatted name is stored in a variable on line 5. When
the job is run, a snapclone is created by the command on line 10, which refers to the variable.
Line  Task
...
5     $disk = SetVariable ("\\ArrayA2\Pets\Cats\Vdisk66")
...
10    SnapcloneStorageVolume ($disk, "", SAME, "", WAIT)
...
In the next example, the results of a command are saved in a variable when the job is run. On
line 7, the UNC name of the snapshot is stored in the variable $Rep1. Then on line 13 the variable
is referenced to delete the snapshot.
Line  Task
...
7     $Rep1 = SnapshotHostVolume ("\\source_host\path\source_hostvol1", FULLY_ALLOCATED, SAME) ONERROR PAUSEAT E1:
...
13    E2: DeleteStorageVolumes ( $Rep1 ) ONERROR PAUSEAT E2:
...
Usage
Assignments are:
•
Local to each job and cannot be referenced across jobs.
•
Not case sensitive.
Format
•
The first two characters must be a dollar sign ($) followed by an alpha character. No special
characters are allowed after the first character.
•
Upper and lower case, alpha and numeric are allowed.
•
Underscores are allowed; spaces are not allowed.
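For example (hypothetical names): $disk1, $SourceVol, and $rep_list follow these rules; 1disk (does not start with a dollar sign), $2disk (digit immediately after the dollar sign), and $my disk (contains a space) do not.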
Branches
Branches and labels are typically used to handle errors and to create jobs that can be looped
repeatedly. In line 9 of the following example, the command branches to label E1 on line 30 if
there is an error when the command is executed.
Line  Task
...
9     Launch ( %source_host%, %suspend_command_line%, "", WAIT, "0" ) onerror pauseat E1:
...
30    E1: Exit (FAILURE)
Branching types
• Default. If the command fails, abort the job at this task (line).
• None. Use the default behavior.
• Onerror Goto. If the command fails, go to the label. Execute the command at the label.
• Onerror Pauseat. If the command fails, go to the label and pause the job. When the job is continued from the GUI or CLUI, resume the job by executing the command at the label.
• Onsuccess Goto. If the command is successful, go to the label. Execute the command at the label.
Commands
When you include a command in a job, the command's arguments and default values are displayed
in the job editor window. Argument names that appear with red % markers indicate that specific
values are required. See job Arguments. For example:
SnapcloneStorageVolume ( %storvol_unc_name%, "", SAME, "", WAIT )
(SnapcloneStorageVolume is the command; the values in parentheses are its arguments.)
You must edit the command and select or enter values for any required arguments. After editing,
the command displays the argument value as normal text, in quotes.
SnapcloneStorageVolume ( "\\Array2\Cats", "", SAME, "", WAIT )
You can also edit a command to change its default values. In the example below, the defaults for
disk group name and snapclone name have been changed.
SnapcloneStorageVolume ( "\\Array2\Cats", "DskGrp3", SAME, "CatsCopy", WAIT )
You can also edit a command to add job flow controls and assignments. See job Labels, Branches
and Assignments.
R1: $Rep1 = SnapcloneStorageVolume ( "\\Array2\Cats", "", SAME, "", WAIT ) onerror pauseat E1:
(In this example, R1: is the label, $Rep1 = is the assignment, and onerror pauseat E1: is the branch.)
See also Editing individual commands.
Command result values
Some commands return a result that can be assigned to a variable. The variable can then be
referenced in a succeeding step in the job. See also job Assignments.
The following table lists Command result values and formats. See also Resource names and UNC
formats.
Job command
Command result value (format)
//
-
AddDrGroupMember
-
AddReplicaToReplicaRepository
-
AddReplicasToReplicaRepository
-
CombineLists
Combined list of resources (UNC)
ConvertStorageVolumeIntoContainer
Combined list of resources (UNC)
ConvertStorageVolumesInManagedSetIntoContainers
List of container names (UNC)
ConvertStorageVolumesIntoContainers
List of container names (UNC)
CreateContainer
Container name (UNC)
CreateContainerForHostDiskDevice
Managed set name of virtual disk containers
(simple)
CreateContainersForHostVolume
Managed set name of virtual disk containers
(simple)
CreateContainersForHostVolumeGroup
Managed set name of virtual disk containers
(simple)
CreateDiskDevice
-
CreateDrGroup
DR group name (UNC)
CreateDrGroupFromHostVolume
DR group name (UNC)
CreateHostVolume
Host volume name (UNC)
CreateHostVolumeDiscrete
Host volume name (UNC)
CreateHostVolumeFromDiskDevices
Host volume name (UNC)
CreateHostVolumeGroup
Host volume group name (UNC)
CreateReplicaRepository
Replica repository name (simple)
CreateStorageVolume
Storage volume name (UNC)
CreateThinProvisionedStorageVolume
Storage volume name (UNC)
DeleteContainer
-
DeleteDrGroup
-
DeleteDrGroupMember
-
DeleteHostVolume
-
DeleteHostVolumeGroup
-
DeleteReplicaRepository
-
DeleteStorageVolume
-
DeleteStorageVolumes
-
DeleteStorageVolumesInManagedSet
-
DetachMirrorclones
-
DiscoverDiskDevice
-
DiscoverDiskDevices
-
DiscoverDiskDevicesForDrGroup
-
DiscoverDiskDevicesGuest
-
DiscoveryRefresh (obsolete)
-
Exit
-
Export
-
ExportJobs
-
FailoverDrGroup
-
FailoverDrGroups
-
FlushCache
-
ForceFullCopyDrGroup
-
FractureHostDiskDeviceMirrorclone
Name of the fractured mirrorclone (UNC)
FractureHostVolumeGroupMirrorclones
List of fractured mirrorclone names (UNC)
FractureHostVolumeMirrorclones
List of fractured mirrorclone names (UNC)
FractureMirrorclones
-
Import
-
ImportJobs
-
InstantRestoreFromMirror
-
InstantRestoreFromSnapshot
-
Launch
Result from enabled host
LaunchJob
-
Log
MigrateStorageVolume
-
MirrorcloneHostDiskDeviceToContainer
-
MirrorcloneHostDiskDeviceToContainerInManagedSet
-
MirrorcloneHostVolumeGroupToContainers
-
MirrorcloneHostVolumeGroupToContainersInManagedSet
-
MirrorcloneHostVolumeToContainers
-
MirrorcloneHostVolumeToContainersInManagedSet
-
MirrorcloneStorageVolumeToContainer
-
MountVolumeGroupComponent
Mount point name (UNC)
Pause
-
PrepareContainer
-
PrepareContainers
-
PrepareContainerForHostDiskDeviceReplication
-
PrepareContainersForHostVolumeReplication
-
PrepareContainersForHostVolumeGroupReplication
-
PresentStorageVolume
-
PresentStorageVolumes
-
PresentStorageVolumeGuest
-
PresentStorageVolumesGuest
-
PresentThinProvisionedStorageVolume
-
PresentThinProvisionedStorageVolumes
-
RemoveDiskDevice
-
ResyncMirrorclone
-
ResyncMirrorclones
-
RetainLatestRoundRobinReplicasForHostDiskDevice
-
RetainLatestRoundRobinReplicasForHostVolume
-
RetainLatestRoundRobinReplicasForHostVolumeGroup
-
SendEmail
-
SetDiskGroupForSnapclone
-
SetDrGroupAutoSuspend
-
SetDrGroupComments
-
SetDrGroupDestinationAccess
-
SetDrGroupFailsafe
-
SetDrGroupFailsafeOnLinkDownPowerUp
-
SetDrGroupHome
-
SetDrGroupIoMode
-
SetDrGroupMaxLogSize
-
SetDrGroupName
-
SetDrGroupSuspend
-
SetDrProtocolType
-
SetHostDiskDeviceWriteCacheMode
-
SetHostVolumeGroupWriteCacheMode
-
SetHostVolumesWriteCacheMode
-
SetListVariable
The resources listed in the argument (UNC)
SetNotificationPolicy
-
SetPreferredPortConfiguration
-
SetStorageVolumeName
New name for the storage volume (UNC)
SetStorageVolumeSize
-
SetStorageVolumeWriteCacheMode
-
SetStorageVolumesWriteCacheMode
-
SetVariable
The resource in the argument (UNC)
SnapcloneDiskDevice
Snapclone storage volume name (UNC)
SnapcloneHostDiskDeviceToContainerInManagedSet
Snapclone storage volume name (UNC)
SnapcloneHostVolume
List of snapclone storage volume names (UNC)
SnapcloneHostVolumeGroup
List of snapclone storage volume names (UNC)
SnapcloneHostVolumeGroupToContainersInManagedSet
List of snapclone storage volume names (UNC)
SnapcloneHostVolumeToContainers
List of snapclone storage volume names (UNC)
SnapcloneHostVolumeToContainersInManagedSet
List of snapclone storage volume names (UNC)
SnapcloneStorageVolume
Snapclone storage volume name (UNC)
SnapcloneStorageVolumeToContainer
Snapclone storage volume name (UNC)
SnapcloneStorageVolumesToContainers
List of snapclone storage volume names (UNC)
SnapshotDiskDevice
Snapshot storage volume name (UNC)
SnapshotHostDiskDeviceToContainerInManagedSet
Snapshot name (UNC)
SnapshotHostVolume
List of snapshot storage volume names (UNC)
SnapshotHostVolumes
SnapshotHostVolumeGroup
List of snapshot storage volume names (UNC)
SnapshotHostVolumeGroups
-
SnapshotHostVolumeGroupToContainersInManagedSet
List of snapshot storage volume names (UNC)
SnapshotHostVolumeToContainers
List of snapshot storage volume names (UNC)
SnapshotHostVolumeToContainersInManagedSet
List of snapshot storage volume names (UNC)
SnapshotStorageVolume
Snapshot storage volume name (UNC)
SnapshotStorageVolumeToContainer
Snapshot storage volume name (UNC)
SnapshotStorageVolumesToContainers
List of snapshot storage volume names (UNC)
MigrateMirrorclone
MigrateMirrorclones
-
TestJobState
Boolean true-false
UnmountEntireVolumeGroup
-
UnmountHostVolume
-
UnmountHostVolumes
-
UnpresentStorageVolume
-
UnpresentStorageVolumes
-
UnpresentThinProvisionedStorageVolume
-
UnpresentThinProvisionedStorageVolumes
-
ValidateHost
-
ValidateHostVolume
-
ValidateHostVolumeDoesNotExist
-
ValidateHostVolumeGroup
-
ValidateHostVolumeMirrorclones
-
ValidateSnapcloneHostVolume
-
ValidateSnapcloneHostVolumeGroup
-
ValidateSnapcloneStorageVolume
-
ValidateSnapshotHostVolume
-
ValidateSnapshotHostVolumeGroup
-
ValidateSnapshotStorageVolume
-
ValidateStorageSystem
-
ValidateStorageVolume
-
ValidateStorageVolumes
-
Wait
-
WaitDrGroupSynchronizationTransition
-
WaitForHostDiskDeviceWriteCacheFlush
-
WaitForHostVolumeGroupWriteCacheFlush
-
WaitForHostVolumeWriteCacheFlush
-
WaitForHostVolumesWriteCacheFlush
-
WaitForJob
-
WaitForStorageVolumeDiscovery
-
WaitForStorageVolumesDiscovery
-
WaitForStorageVolumeWriteCacheFlush
-
WaitForStorageVolumesWriteCacheFlush
-
WaitHostDiskDeviceNormalization
-
WaitHostVolumeNormalization
-
WaitStorageVolumeNormalization
-
WaitStorageVolumesNormalization
-
WaitUntil
-
WaitVolumeGroupNormalization
-
Comments
The comment command can be used to add comments to a job. You can also comment-out other
commands. See also job Comment command and Commenting-out commands.
E-mail from jobs
A job (each job instance) can send e-mail messages.
Job instances can send e-mail messages that you write and they can also send predefined job
status notification messages. See SendEmail and SetNotificationPolicy, respectively.
Exits
Exit commands identify termination points in a job and can help provide confirmation of success
or failure. HP recommends the following best practices for using exit commands:
•
Include at least one successful exit command in a job.
•
Include a successful exit command for each branch of the job that can result in successful
termination.
•
If you create branches in a job to handle failures, conclude each with a failure exit command.
See job Exit command.
Implicit jobs
When responding to certain requests, the replication manager may create and immediately run a
job. Such jobs are called implicit jobs. Implicit jobs are not saved and cannot be edited.
When an implicit job runs, it appears in the Monitor Job window and the Events pane. See also
job Implicit job startup.
Implicit job startup
When clicking OK or Finish to perform an action on a resource, the window or wizard immediately
closes and the Monitor Job window appears. This is normal operation. To perform your requested
action, the replication manager creates and starts an implicit job.
It is not necessary to leave the Monitor Job window open. If you close the window, the implicit job
continues to run.
Imported jobs
Replication manager jobs that have been imported from legacy HP Business Copy 2.X jobs include
special comments to help resolve potential command conversion issues. For example:
Line  Task
1     // This job was imported to RSM from an existing BC job,              (import note)
2     // which included the following operations:
3     // [line 1] SNAP UNIT Array2 Cats\ACTIVE $BCV1 SNAPCLONE_HSV SAME_AS_SOURCE    (legacy job start)
...
12    // [line 5] "we're done"                                              (legacy job end)
13
14    // Replicate storage volumes.                                         (template applied)
15
16    ValidateStorageSystem ( "Array2" )                                    (commands start)
17    ValidateStorageVolume ( "\\Array2\Cats\ACTIVE" )
18    ValidateSnapshotStorageVolume ( "\\Array2\Cats\ACTIVE" )
19    //
20    //$BCV1 = SnapshotStorageVolume ( "\\Array2\Cats\ACTIVE", FULLY_ALLOCATED, SAME, %dest_storvol1%, WAIT ) onerror pauseat E1:    (required argument)
...
33    E1: Exit ( FAILURE )                                                  (commands end)
In the example:
•
Lines 1 and 2 indicate the job was created by importing a legacy job.
•
Line 3 shows (as a comment) the first legacy job command that was encountered. All legacy
commands are displayed in this manner.
•
Line 12 indicates that all legacy job commands have been listed.
•
Line 15 indicates the start of the replication manager template that has been applied to create
an equivalent job.
•
Line 20 indicates a required argument. See Job arguments.
Job commands list
The following commands can be included in jobs. The replication type indicates if the command
is specifically for use with the local or remote replication features of a storage system. Storage
family indicates the storage system family that the command supports. Some commands cannot
be used unless a host agent is running on the target host.
Job command | Command category | Replication type | Requires host agent
general
~
~
DR group
remote
~
AddReplicaToReplicaRepository
host volume
local
~
AddReplicasToReplicaRepository
host volume
local
~
general
~
~
ConvertStorageVolumeIntoContainer
storage volume
local,
container
~
ConvertStorageVolumesInManagedSetIntoContainers
storage volume
local,
container
~
ConvertStorageVolumesIntoContainers
storage volume
local,
container
~
CreateContainer
storage volume
local,
container
~
//
AddDrGroupMember
CombineLists
host disk
device
local,
container
yes
CreateContainersForHostVolume
host volume
local,
container
yes
CreateContainersForHostVolumeGroup
host volume
group
local,
container
yes
CreateDiskDevice
host disk
device
~
yes
CreateDrGroup
DR group
remote
~
CreateDrGroupFromHostVolume
DR group
remote
~
CreateHostVolume
host volume
~
yes
CreateHostVolumeDiscrete
host volume
~
yes
CreateHostVolumeFromDiskDevices
host volume
~
yes
CreateHostVolumeGroup
host volume
group
~
yes
CreateReplicaRepository
host volume
local
yes
CreateStorageVolume
storage volume
~
~
CreateThinProvisionedStorageVolume
storage volume
~
~
DeleteContainer
storage volume
local,
container
~
DeleteDrGroup
DR group
remote
~
DeleteDrGroupMember
DR group
remote
~
DeleteHostVolume
host volume
~
yes
DeleteHostVolumeGroup
host volume
group
~
yes
DeleteReplicaRepository
host volume
local
yes
DeleteStorageVolume
storage volume
~
~
DeleteStorageVolumes
storage volume
~
~
DeleteStorageVolumesInManagedSet
storage volume
~
~
DetachMirrorclones
storage volume
local,
mirrorclone
~
DiscoverDiskDevice
host disk
device
~
yes
DiscoverDiskDevices
host disk
device
~
yes
DiscoverDiskDevicesForDrGroup
host disk
device
remote
yes
DiscoverDiskDevicesGuest
host disk
device
~
yes
DiscoveryRefresh (obsolete)
general
~
~
script flow
~
~
CreateContainerForHostDiskDevice
Exit
Export
general
~
~
ExportJobs
general
~
~
FailoverDrGroup
DR group
remote
~
FailoverDrGroups
DR group
remote
~
host volume
~
yes
ForceFullCopyDrGroup
DR group
remote
~
FractureHostDiskDeviceMirrorclone
host disk
device
local,
mirrorclone
yes
FractureHostVolumeGroupMirrorclones
host volume
group
local,
mirrorclone
yes
FractureHostVolumeMirrorclones
host volume
local,
mirrorclone
yes
storage volume
local,
mirrorclone
~
Import
general
~
~
ImportJobs
general
~
~
InstantRestoreFromMirror
storage volume
local,
mirrorclone
~
InstantRestoreFromSnapshot
storage volume local, snapshot
FlushCache
FractureMirrorclones
Launch
host
~
yes
script flow
~
~
general
~
~
storage volume
~
~
MirrorcloneHostDiskDeviceToContainer
host disk
device
local,
mirrorclone
yes
MirrorcloneHostDiskDeviceToContainerInManagedSet
host disk
device
local,
mirrorclone
yes
MirrorcloneHostVolumeGroupToContainers
host volume
group
local,
mirrorclone
yes
MirrorcloneHostVolumeGroupToContainersInManagedSet
host volume
group
local,
mirrorclone
yes
MirrorcloneHostVolumeToContainers
host volume
local,
mirrorclone
yes
MirrorcloneHostVolumeToContainersInManagedSet
host volume
local,
mirrorclone
yes
MirrorcloneStorageVolume
storage volume
local,
mirrorclone
~
MountEntireVolumeGroup
host volume
group
~
yes
MountHostVolume
host volume
~
yes
MountVolumeGroupComponent
host volume
group
~
yes
LaunchJob
Log
MigrateStorageVolume
~
Pause
script flow
~
~
PrepareContainer
storage volume
local,
container
~
PrepareContainers
storage volume
local,
container
~
host disk
device
local,
container
~
PrepareContainersForHostVolumeReplication
host volume
local,
container
~
PrepareContainersForHostVolumeGroupReplication
host volume
group
local,
container
~
PresentStorageVolume
storage volume
~
~
PresentStorageVolumes
storage volume
~
~
PresentStorageVolumeGuest
storage volume
~
~
PresentStorageVolumesGuest
storage volume
~
~
PresentThinProvisionedStorageVolume
storage volume
~
~
PresentThinProvisionedStorageVolumes
storage volume
~
~
RemoveDiskDevice
host disk
device
~
yes
ResyncMirrorclone
storage volume
local,
mirrorclone
~
ResyncMirrorclones
storage volume
local,
mirrorclone
~
host disk
device
local
yes
RetainLatestRoundRobinReplicasForHostVolume
host volume
local
yes
RetainLatestRoundRobinReplicasForHostVolumeGroup
host volume
group
local
yes
general
~
~
host volume
local,
snapclone
~
SetDrGroupAutoSuspend
DR group
remote
~
SetDrGroupComments
DR group
remote
~
SetDrGroupDestinationAccess
DR group
remote
~
SetDrGroupFailsafe
DR group
remote
~
SetDrGroupFailsafeOnLinkDownPowerUp
DR group
remote
~
SetDrGroupHome
DR group
remote
~
SetDrGroupIoMode
DR group
remote
~
SetDrGroupMaxLogSize
DR group
remote
~
SetDrGroupName
DR group
remote
~
SetDrGroupSuspend
DR group
remote
~
PrepareContainerForHostDiskDeviceReplication
RetainLatestRoundRobinReplicasForHostDiskDevice
SendEmail
SetDiskGroupForSnapclone
SetDrProtocolType
DR group
remote
~
SetHostDiskDeviceWriteCacheMode
host disk
device
~
yes
SetHostVolumeGroupWriteCacheMode
host volume
group
~
yes
SetHostVolumeWriteCacheMode
host volume
~
yes
SetHostVolumesWriteCacheMode
host volume
~
yes
SetListVariable
general
~
~
SetNotificationPolicy
general
~
~
SetPreferredPortConfiguration
storage volume
~
~
SetStorageVolumeName
storage volume
~
~
SetStorageVolumeSize
storage volume
~
~
SetStorageVolumeWriteCacheMode
storage volume
~
~
SetStorageVolumesWriteCacheMode
storage volume
~
~
SetVariable
general
~
~
SnapcloneDiskDevice
host disk
device
local,
snapclone
yes
SnapcloneHostDiskDeviceToContainerInManagedSet
host disk
device
local,
snapclone
yes
SnapcloneHostVolume
host volume
local,
snapclone
yes
SnapcloneHostVolumeGroup
host volume
group
local,
snapclone
yes
SnapcloneHostVolumeGroupToContainersInManagedSet
host volume
group
local,
snapclone
yes
SnapcloneHostVolumeToContainers
host volume
local,
snapclone
yes
SnapcloneHostVolumeToContainersInManagedSet
host volume
local,
snapclone
yes
SnapcloneStorageVolume
storage volume
local,
snapclone
~
SnapcloneStorageVolumeToContainer
storage volume
local,
snapclone
~
SnapcloneStorageVolumesToContainers
storage volume
local,
snapclone
~
SnapshotDiskDevice
host disk
device
local, snapshot
yes
SnapshotHostDiskDeviceToContainerInManagedSet
host disk
device
local, snapshot
yes
SnapshotHostVolume
host volume
local, snapshot
yes
SnapshotHostVolumes
host volume
local, snapshot
yes
SnapshotHostVolumeGroup
host volume
group
local, snapshot
yes
SnapshotHostVolumeGroups
host volume
group
local, snapshot
yes
SnapshotHostVolumeGroupToContainersInManagedSet
host volume
group
local, snapshot
yes
SnapshotHostVolumeToContainers
host volume
local, snapshot
yes
SnapshotHostVolumeToContainersInManagedSet
host volume
local, snapshot
yes
SnapshotStorageVolume
storage volume local, snapshot
~
SnapshotStorageVolumeToContainer
storage volume local, snapshot
~
SnapshotStorageVolumesToContainers
storage volume local, snapshot
~
MigrateMirrorclone
storage volume
~
~
MigrateMirrorclones
storage volume
~
~
script flow
~
~
UnmountEntireVolumeGroup
host volume
group
~
yes
UnmountHostVolume
host volume
~
yes
UnmountHostVolumes
host volume
~
yes
UnpresentStorageVolume
storage volume
~
~
UnpresentStorageVolumes
storage volume
~
~
UnpresentThinProvisionedStorageVolume
storage volume
~
~
UnpresentThinProvisionedStorageVolumes
storage volume
~
~
ValidateHost
validation
~
yes
ValidateHostVolume
validation
~
yes
ValidateHostVolumeDoesNotExist
validation
~
yes
ValidateHostVolumeGroup
validation
~
yes
ValidateHostVolumeMirrorclones
validation
local,
mirrorclone
yes
ValidateSnapcloneHostVolume
validation
local,
snapclone
yes
ValidateSnapcloneHostVolumeGroup
validation
local,
snapclone
yes
ValidateSnapcloneStorageVolume
validation
local,
snapclone
~
ValidateSnapshotHostVolume
validation
local, snapshot
yes
ValidateSnapshotHostVolumeGroup
validation
local, snapshot
yes
ValidateSnapshotStorageVolume
validation
local, snapshot
~
ValidateStorageSystem
validation
~
~
ValidateStorageVolume
validation
~
~
ValidateStorageVolumes
validation
~
~
Wait
script flow
~
~
TestJobState
Job concepts 183
Job command
Command
category
Replication
type
Requires host
agent
WaitDrGroupNormalization
DR group
remote
~
WaitDrGroupSynchronizationTransition
DR group
remote
~
WaitForHostDiskDeviceWriteCacheFlush
host disk
device
~
yes
WaitForHostVolumeGroupWriteCacheFlush
host volume
group
~
yes
WaitForHostVolumeWriteCacheFlush
host volume
~
yes
WaitForHostVolumesWriteCacheFlush
host volume
~
yes
script flow
~
~
WaitForStorageVolumeDiscovery
storage volume
~
~
WaitForStorageVolumesDiscovery
storage volume
~
~
WaitForStorageVolumeWriteCacheFlush
storage volume
~
~
WaitForStorageVolumesWriteCacheFlush
storage volume
~
~
host disk
device
~
yes
host volume
~
yes
WaitStorageVolumeNormalization
storage volume
~
~
WaitStorageVolumesNormalization
storage volume
~
~
script flow
~
~
host volume
group
~
yes
WaitForJob
WaitHostDiskDeviceNormalization
WaitHostVolumeNormalization
WaitUntil
WaitVolumeGroupNormalization
Job templates list
Job templates provide frameworks and guidelines for creating typical jobs.
Template name (alphabetical order) | Replication type | Remarks
Empty template | - | -
Fracture host volumes, mount to a host | local | requires host agent
Instant restore storage volumes to other storage volumes | local | -
Mount existing storage volumes | - | requires host agent
Perform cascaded replication | remote and local | -
Perform planned failover | remote | requires host agent
Perform unplanned failover | remote | requires host agent
Replicate (via snapclone) a host volume multiple times, mount to a host | local | requires host agent
Replicate host disk devices, mount to a host | local | requires host agent
Replicate host volume group, mount components to a host | local | requires host agent
Replicate host volume group, mount entire group to a host | local | requires host agent
Replicate host volumes | local | requires host agent
Replicate host volumes via pre-allocated replication, mount to a host | local | requires host agent
Replicate host volumes, mount to a host | local | requires host agent
Replicate host volume, mount components to a host | local | requires host agent
Replicate host volumes, mount to a host, then to a different host | local | requires host agent
Replicate raw storage volumes, mount (raw) to a host | local | requires host agent
Replicate storage volumes | local | -
Replicate storage volumes via pre-allocated replication | local | -
Setup Continuous Access | remote | -
Throttle replication I/O | remote | -
Unmount and delete existing host volumes | - | requires host agent
Unmount existing host volumes | - | requires host agent
* Template can be modified for use without a host agent.
Labels
Labels are used in conjunction with branches. See Job branches.
Usage
Labels are:
•
Local to each job and cannot be referenced across jobs.
•
Not case sensitive.
Format
•
Must be at least two characters and end with a colon.
•
Upper and lower case, alpha and numeric are allowed.
•
Underscores are allowed; spaces are not allowed.
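For example, a label typically serves as the target of an onerror branch. The following fragment is a minimal sketch; the label name CLEANUP1 and the %variable% names are illustrative only:
$Rep1 = SnapshotStorageVolume (%source_storvol_unc1%, FULLY_ALLOCATED, SAME, %dest_storvol1%, WAIT) onerror goto CLEANUP1:
Exit (SUCCESS)
CLEANUP1: Exit (FAILURE)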
Pause and continue
Pause
A pause action or command halts execution of a running job instance and places the job in a
paused state.
•
You can use a GUI action, a pause command within the job, or a CLUI command. See Pausing
a job instance and Job status and states.
•
When using a Pause command within a job, execution is halted on the pause step.
•
When using the GUI or CLUI, the job instance typically completes the current step, then halts
before starting the next step. However, if the step is an interrupt-enabled wait command, the
job instance immediately halts. Interrupt-enabled wait commands include: Wait, WaitForJob,
and WaitUntil.
Continue
A continue action or command resumes execution of a job instance that is paused, or waiting. See
Job status and states.
•
You can use a GUI action or CLUI command. See Continuing jobs.
•
When used for a waiting job instance, the wait is ended and execution continues at the next
step.
•
When used for a paused job instance, execution typically continues at the next step.
However, if the job instance was paused on an interrupt-enabled wait command, the wait
condition is checked before continuing. If the wait condition has been detected, then the next
step is executed. Otherwise, the job instance waits for the condition to be detected before
starting the next step.
For example, if a job command specifies a 10-minute wait and the continue occurs after 20
minutes, the next step would be immediately executed. If the continue occurs after 5 minutes,
the job instance would wait 5 minutes before starting the next step.
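A minimal sketch of both commands in a job follows; the wait duration is illustrative:
// A planned wait. A GUI or CLUI pause issued during this step halts the job immediately, because Wait is interrupt-enabled.
Wait ("0:10:0")
// Halt here until a user continues the job instance.
Pause ()
Exit (SUCCESS)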
Resource names and UNC formats
Resources in a SAN are identified in several ways, including UNC format. UNC (Universal or
Uniform Naming Convention) identifies a resource in terms of its hierarchical location in a network.
See the following for name and UNC formats: DR groups, Enabled hosts, Host volumes, Storage
systems, and Virtual disks (storage volumes).
Resource names in job commands
When you initially enter a command in a job, the command's resource arguments are displayed
as %variable% names, for example:
CreateDiskDevice ( %storvol_unc_name%, %host_name%, 0, READ_WRITE)
When selecting a specific resource from an Editing Task menu, the resource names are presented
in a UNC format, or other format, as appropriate for the resource. If you enter a resource name
from the keyboard, you must use the appropriate format.
IMPORTANT:
UNC names in job commands are case sensitive.
After selection or entry of arguments, the command is displayed with resource names in quotes,
for example:
CreateDiskDevice ( "\\ArrayA2\Cats", "HostA6", 12, READ_WRITE )
Name formats and examples for resource types follow.
DR groups
Name format (UNC): \\array name\path\DR group name
Example: \\ArrayA2\DrGrpPets
Identifies the DR group named DrGrpPets on the storage array named ArrayA2.
Enabled hosts
The job editor and command validation accept these formats.
•
Computer network name. Example: HostA6
•
Fully qualified network name. Example: HostA6.SiteA.corp
•
IP address. Example: 88.15.42.101
Each example identifies an enabled host using an accepted format.
Host volumes
Applies to standard host volumes, volume groups, logical volumes, and host volume components
such as partitions and slices.
Name format (UNC): \\host name\path\host volume name
OS-specific examples:
AIX: \\HostA2\/home/cats
HP-UX: \\HostA2\/users/cats
Linux: \\HostA2\/var/cats
OpenVMS: \\HostA2\CATS
Solaris: \\HostA2\/usr/cats
Tru64 UNIX: \\HostA2\/users/cats
Windows: \\HostA2\E:\pets\cats
For each OS, the example identifies the path and file named Cats on the enabled host named HostA2.
Storage systems
Name format (other): storage name
Example: ArrayA2
Identifies the storage array named ArrayA2.
Virtual disks (storage volumes)
Applies to standard storage volumes (virtual disks), snapclones (standard virtual disks), snapshot
virtual disks, and storage containers.
Name format (UNC): \\array name\path\storage volume name
Example: \\ArrayA2\Cats
Identifies the storage volume named Cats on the storage array named ArrayA2.
Status and states
Job status
The collection of all jobs may have one of the following status conditions. These status conditions
appear in the resources pane.
Status
•
When last checked, new information on jobs was not available.
•
When last checked, new information on jobs was available.
Job instance states
A job instance may have one of the following states. These states appear in the Run History tab.
Job instance states and descriptions:
•
Aborted: Aborted by a user while the job was running.
•
Completed: Completed and the exit status was success.
•
Executing: Running.
•
Failed: Finished but the exit status was failed.
•
Paused: Running, but is paused by a job command or user action.
•
Paused/Error: Running, but is paused due to an error condition.
•
Stalled: Running, but is halted due to an unexpected condition.
•
Waiting: Running, but is executing a planned wait.
Job task states
An individual job task (command) within a job instance may have one of the following states. These
states appear in the Monitor Job window.
•
Not executed (no icon): The task (command) has not been executed or is a comment.
•
Completed: The task (command) was completed successfully.
•
Failed: The task (command) failed.
•
Paused: The job instance was paused on this task.
•
Paused/Error: The job instance was paused with an error on this task.
•
Stalled: The job instance is in an unknown state.
Transactions
A transaction is a special block of lines that is executed as an entity. It consists of two parts: Do
and Always. Transactions are typically used with jobs that launch external activities.
In the following example, a launch command on line 9 suspends host I/O to a storage volume
(virtual disk). The transaction begins at line 10 and ends on line 16. The Do consists of lines 10
through 12, and the Always consists of lines 13 through 16.
Line  Task
...
9     Launch ( %source_host%, %suspend_command_line%, "", WAIT, "0" ) onerror pauseat E1:
10    DO {                                                          <-- transaction start
11    $Rep1 = SnapshotStorageVolume ( %array_name_source_storvol_unc1%, FULLY_ALLOCATED, SAME, %dest_storvol1%, NOWAIT ) onerror pauseat E1:
12    //
13    } ALWAYS {
14    // Resume the host application.
15    Launch ( %source_host%, %resume_command_line%, "", WAIT, "0" )
16    }                                                             <-- transaction end
...
In this example, if the snapshot command in the Do portion of the transaction fails, for whatever
reason, the launch command in the Always portion is executed to resume host I/O to the storage
volume. This transaction helps ensure that host I/O is not suspended indefinitely.
Validation
Validation refers to the use of resource validation commands in a job and subsequently performing
a validation of the job. Job validation, especially at run time, helps ensure that a job instance runs
successfully.
The values that you enter for arguments within job commands are checked in the job editor only
for compliance with basic syntax rules. See job command Arguments.
Resource validation commands
There are several types of resource validation commands. Each type validates a different type of
resource or the status of the resource.
For example, ValidateStorageVolume checks for the availability of a specific virtual disk on a
specific storage system, while ValidateSnapcloneStorageVolume checks whether the storage volume
is available and if it can be copied using snapclone replication.
IMPORTANT:
HP recommends placing resource validation commands together, in the first lines
of a job. They can be preceded by comments, but not by other types of commands.
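A minimal sketch of this placement follows; the %variable% names are illustrative. The validation commands appear in the first lines of the job, after the opening comments:
// Validate that resources are as expected.
ValidateStorageSystem (%array_name%)
ValidateSnapcloneStorageVolume (%source_storvol_unc1%)
//
$Rep1 = SnapcloneStorageVolume (%source_storvol_unc1%, %DiskGroup_name%, SAME, %dest_storvol1%, WAIT) onerror pauseat E1:
Exit (SUCCESS)
E1: Exit (FAILURE)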
Resource validation processing
Jobs can be run and validated for resources from the GUI or the CLUI. When done from the GUI,
a new job instance is displayed in the Monitor Job window. Validation processing generally occurs
as follows.
•
Normal. If the line is a validation command, it is executed. If successful, the next line is
executed. If not successful, the job is stopped with a failure status.
•
Validate-only. If the line is a comment, SetVariable command or validation command, it is
executed. If successful, the next line is checked. If not successful, the job is stopped with a
failure status. When the first command is encountered that is not a SetVariable or validation
command, the job is stopped with a success status.
•
Skip validation. If the line is a validation command it is ignored.
In the validate-only and skip validation cases, job transactions and branches within validation
commands are ignored.
Resource validation processing in transactions
Special cases exist for validation commands in a transaction block:
•
Validate-only. The validation command in the transaction block is not executed.
•
Skip validation. The validation command in the transaction block is executed.
See also job Transactions.
Wait/nowait argument
Some job commands include a wait/nowait argument. Values are:
•
Wait. When the job is run, issue the command, then wait until the command has been
completed before going to the next task in the job.
•
Nowait. When the job is run, issue the command, then immediately go to the next task in the
job.
In general, use wait when you need to ensure that all aspects of a command have been performed
by the command before any other task in the job is performed. Use nowait when other tasks in
the job do not depend on its completion.
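For example, the same snapclone command can be issued with either value. This is a sketch; the %variable% names are illustrative:
// WAIT: the job does not start the next task until the snapclone is complete.
$Rep1 = SnapcloneStorageVolume (%source_storvol_unc1%, %DiskGroup_name%, SAME, %dest_storvol1%, WAIT)
// NOWAIT: the job issues the snapclone and immediately starts the next task.
$Rep2 = SnapcloneStorageVolume (%source_storvol_unc2%, %DiskGroup_name%, SAME, %dest_storvol2%, NOWAIT)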
Wait/nowait for discovery
When you include a local replication command in a job, the wait/nowait for discovery argument
requires that you specify whether or not to wait until the new replica is discovered before executing
the next step. Choices are wait and nowait. For example, in the job below, the replication manager
waits at line 7 until the new replica is discovered, before executing line 8.
Line Task
...
7
$Rep1 = SnapshotHostVolume ( "%\\source_host\source_hostvol1%", FULLY_ALLOCATED, SAME, WAIT )
onerror pauseat E1:
8
//
...
With both choices, the replica is instantly created on the storage system. With the nowait choice,
the replication manager immediately executes the next job step, whether or not its database of
resources has been updated with the new replica information. With the wait choice, the replication
manager does not execute the next step until its database of resources has been updated with the
new replica information.
Typical use of nowait with wait-for-discovery commands
The nowait for discovery option is typically used in conjunction with the
WaitForStorageVolumeDiscovery and WaitForStorageVolumesDiscovery commands to minimize
the amount of time that a database is suspended while being locally replicated.
For example, the general structure of a job to replicate a database might be:
1. A launch command suspends the database I/O.
2. A local replication command instantaneously creates a replica. The command uses the nowait
   option so that the next step (resuming I/O) is immediately executed.
3. A launch command resumes the database I/O.
4. With I/O resumed, a WaitForStorageVolumeDiscovery command causes the job to wait until
   the replica is detected by a forced discovery of storage volumes. This helps ensure that
   subsequent steps that involve the new replica, for example a mount step, will be valid.
   (A sketch of this structure follows this list.)
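A minimal sketch of that structure, using the same command signatures as the transaction example earlier in this chapter; the suspend and resume command lines are illustrative:
// 1. Suspend database I/O on the host.
Launch (%source_host%, %suspend_command_line%, "", WAIT, "0") onerror pauseat E1:
// 2. Instantly create the replica; nowait lets the job continue before the replica is discovered.
$Rep1 = SnapshotStorageVolume (%array_name_source_storvol_unc1%, FULLY_ALLOCATED, SAME, %dest_storvol1%, NOWAIT) onerror pauseat E1:
// 3. Resume database I/O immediately.
Launch (%source_host%, %resume_command_line%, "", WAIT, "0")
// 4. Wait for the replication manager to discover the new replica before any later step uses it.
WaitForStorageVolumeDiscovery ($Rep1)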
Job templates
Empty template
Template summary
Provides a series of comments that outline the basic structure of a job. After generating this template
you must add individual job commands to accomplish specific tasks.
Template options
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to create the basic structure of a job. No template option was selected.
Line
Task
1
// This is an empty job template.
2
//
3
//
4
// Wait for user to initiate rollback.
5
Pause ()
6
//
7
// Rollback.
8
Exit (SUCCESS)
9
//
10
// Failure exit - no rollback needed.
11
E1: Exit (FAILURE)
Fracture host volumes, mount to a host (template)
Template summary
A. Fractures the mirrorclones of storage volumes that underlie a host volume on an enabled host.
B. Presents the fractured mirrorclones to a second enabled host (creates a new host volume).
C. Mounts the new host volume on the second enabled host.
D. Pauses the job.
E. After continuing, unmounts and deletes the new host volume from the second enabled host;
   unpresents the mirrorclones; then resynchronizes the mirrorclones with their sources.
Template options
•
Number of volumes to replicate. Adds commands for each host volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to fracture and mount one host volume on an enabled host. No other
template options were selected.
Line Task
1
// Fracture host volume and mount it to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host = SetVariable(%mount_host%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($source_host)
10
ValidateHost ($mount_host)
11
ValidateHostVolumeMirrorclones ($source_hostvol_unc1)
12
//
13
WaitHostVolumeNormalization($source_hostvol_unc1)
14
//
15
//
16
DO {
17
// The return value from FractureHostVolumeMirrorclones is a list of mirrorclones of the underlying storage volumes of a host volume.
18
$Rep1 = FractureHostVolumeMirrorclones ($source_hostvol_unc1)
19
//
20
} ALWAYS {
21
}
22
//
23
// Mount the replicated volume(s) on a host.
24
PresentStorageVolumes ($Rep1, $mount_host) onerror pauseat E2:
25
DiscoverDiskDevices ($mount_host, $Rep1) onerror continue
26
$HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host) onerror pauseat E3:
27
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E4:
28
//
29
// Wait for user to initiate rollback.
30
Pause ()
31
//
32
// Rollback.
33
E5: UnmountHostVolume ($MP1) onerror pauseat E5:
34
E4: DeleteHostVolume ($HV1) onerror pauseat E4:
35
//
36
E3: UnpresentStorageVolumes ($Rep1, $mount_host) onerror pauseat E3:
37
//
38
E2: ResyncMirrorclones ($Rep1) onerror pauseat E2:
39
//
40
// Uncomment the following line if you want the job to wait for the mirrors to resynchronize before completing
41
//WaitHostVolumeNormalization($source_hostvol_unc1)
42
//
43
//
44
Exit (SUCCESS)
45
//
46
// Failure exit - no rollback needed.
47
E1: Exit (FAILURE)
Instant restore storage volumes to other storage volumes (template)
Template summary
A. Disables (flushes) the write cache of the storage volume (virtual disk) to restore from.
B. Converts the storage volume to restore to into a container.
C. Copies (restores) from the storage volume (by snapclone) to the container.
D. Re-enables the write cache of the source storage volume.
NOTE:
This template cannot be used with some older versions of controller software.
Template options
•
Number of volumes. Adds commands for each storage volume to restore from.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
This template cannot be used with some older versions of controller software.
•
HP recommends that you perform restores of host volumes using the GUI.
Example
This template was generated to copy (restore) one storage volume. No other template options were
selected.
Line Task
1
// Synchronize storage volume(s) to other storage volume(s)
2
//
3
// Assign some variables that will be used in this job.
4
$source_storvol_unc1 = SetVariable(%source_storvol_unc1%)
5
$dest_storvol_unc1 = SetVariable(%dest_storvol_unc1%)
6
//
7
// Validate that resources are as expected.
8
ValidateStorageVolume ($dest_storvol_unc1)
9
//
10
// Begin flushing the cache on the storage volume(s).
11
DO {
12
SetStorageVolumeWriteCacheMode ($source_storvol_unc1, WRITE_CACHE_DISABLED, NOWAIT)
13
//
14
// Wait for the cache flush to complete.
15
WaitForStorageVolumeWriteCacheFlush ($source_storvol_unc1)
16
//
17
// Convert the destination storage volume(s) into container(s).
18
$CT1 = ConvertStorageVolumeIntoContainer ($dest_storvol_unc1)
19
//
20
SnapcloneStorageVolumeToContainer ($source_storvol_unc1, $CT1, NOWAIT)
21
//
22
} ALWAYS {
23
// Restore the writeback cache on the storage volume(s).
24
SetStorageVolumeWriteCacheMode ($source_storvol_unc1, WRITE_CACHE_ENABLED, NOWAIT) onerror
continue
25
//
26
}
27
//
28
Exit (SUCCESS)
29
//
Mount existing storage volumes (template)
Template summary
A. Creates a host volume by presenting an existing* storage volume to an enabled host.
B. Mounts the host volume.
C. Pauses the job until continued by a user.
D. After continuing, unmounts and deletes the host volume.
* A storage volume that is created by means other than this job. The volume must exist and be in
the replication manager database when this job is run.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
This template cannot be used with HP-UX host volumes.
•
When used with Tru64 UNIX hosts, this template supports only UFS file systems.
Example
This template was generated to mount one existing storage volume on a host. No other template
options were selected.
Line Task
1
// Mount existing storage volume(s).
2
//
3
// Assign some variables that will be used in this job.
4
$storvol_unc1 = SetVariable(%storvol_unc1%)
5
$mount_host = SetVariable(%mount_host%)
6
//
7
// Validate that resources are as expected.
8
ValidateHost ($mount_host)
9
ValidateStorageVolume ($storvol_unc1)
10
//
11
// Mount the volume(s) on a host.
12
$HV1 = CreateHostVolumeDiscrete (%component%, $storvol_unc1, $mount_host) onerror pauseat E1:
13
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E2:
14
//
15
// Wait for user to initiate rollback.
16
Pause ()
17
//
18
// Rollback.
19
E3: UnmountHostVolume ($MP1) onerror pauseat E3:
20
E2: DeleteHostVolume ($HV1) onerror pauseat E2:
21
//
22
Exit (SUCCESS)
23
//
24
// Failure exit - no rollback needed.
25
E1: Exit (FAILURE)
Perform cascaded replication (template)
Template summary
Performs a three-site cascaded replication. Sites 1 and 2 have an existing remote replication
relationship. Remote replication between sites 2 and 3 is temporarily added and point-in-time
snapclone copies of the storage volumes at site 1 are remotely replicated to site 3.
A. Configures an existing DR group pair for synchronous replication with failsafe on unavailable
member data protection.
B. Halts the job (waits) to ensure that data on the destination disks is identical to the source disks
(DR group normalization).
C. When ready, continues the job and makes point-in-time snapclone copies of the storage
volumes at site 2 (which are identical to those at site 1).
D. Returns the existing DR group pair to its prior operational settings.
E. Presents the snapclones to a host at site 2. (This is required by some controller software versions
before the snapclones can be members of a DR group.)
F. Creates a new DR group pair that contains the snapclone copies at site 2.
G. Halts the job (waits) to ensure that the snapclones are remotely replicated to new storage
volumes at site 3.
H. Deletes the newly created DR group pair for sites 2 and 3, but retains the new storage volumes
at site 3.
When the job is completed, point-in-time copies of the storage volumes at site 1 exist at site 3.
Guidelines apply.
Template Options
•
Number of virtual disks in DR group. Adds commands for each virtual disk.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Guidelines apply.
Example
This template was generated to perform one iteration of the cascaded replication involving one
virtual disk. No other template options were selected.
Line Task
1
// Perform cascaded replication.
2
//
3
// This template assumes that you have three storage systems located at three different sites:
4
// Site 1: Contains a source DR group.
5
// Site 2: Contains a destination DR group.
6
// Site 3: After the script completes, it will contain a snapclone of the virtual disk.
7
//
8
// Assign some variables that will be used in this job.
9
$site_1_array = SetVariable(%site_1_array%)
10
$site_1_array_DR_Group_name_unc = SetVariable(%site_1_array_DR_Group_name_unc%)
11
$source_storvol_unc1 = SetVariable(%source_storvol_unc1%)
12
$dest_storvol1 = SetVariable(%dest_storvol1%)
13
$DiskGroup_name = SetVariable(%DiskGroup_name%)
14
$site_2_array = SetVariable(%site_2_array%)
15
$host_name = SetVariable(%host_name%)
16
$site_3_array = SetVariable(%site_3_array%)
17
$DR_group_name = SetVariable(%DR_group_name%)
18
$DR_source_storvol_unc1 = SetVariable(%DR_source_storvol_unc1%)
19
//
20
// Validate some resources needed by this job:
21
// - The arrays at Sites 1, 2 and 3.
22
// - The source DR group at Site 1 and its virtual disk member(s).
23
// - The storage system at Site 2 with the destination DR group and its virtual disk member(s).
24
// - The host name at Site 2.
25
// - The source host used for suspend/resume application at Site 1 (if applicable).
26
//
27
// Validate that resources are as expected.
28
ValidateHost ($host_name)
29
ValidateStorageSystem ($site_1_array)
30
ValidateStorageSystem ($site_2_array)
31
ValidateStorageSystem ($site_3_array)
32
ValidateStorageVolume (%site_1_array_source_storvol_unc1%)
33
ValidateStorageVolume (%site_2_array_source_storvol_unc1%)
34
//
35
// The setting of the following 2 steps may be dependent on the configuration
36
// of the DR group. Please consult the documentation for the appropriate setting options.
37
// Set the DR group to synchronous.
38
// [Use 'onerror continue' in case the mode is already set.]
39
SetDrGroupIoMode ($site_1_array_DR_Group_name_unc, SYNCHRONOUS) onerror continue
40
//
41
// Set the DR group to failsafe enabled.
42
// [Use 'onerror continue' in case the mode is already set.]
43
SetDrGroupFailsafe ($site_1_array_DR_Group_name_unc, ENABLED) onerror continue
44
//
45
// If the DR group from Site 1 to Site 2 is not normalized, wait.
46
WaitDrGroupNormalization ($site_1_array_DR_Group_name_unc)
47
//
48
// Create a snapclone at Site 2.
49
$Rep1 = SnapcloneStorageVolume ($source_storvol_unc1, $DiskGroup_name, SAME, $dest_storvol1, WAIT)
onerror pauseat E1:
50
//
51
// Wait for the snapclone(s) to finish.
52
WaitStorageVolumeNormalization ($Rep1) onerror pauseat E2:
53
//
54
// Set the DR group at Site 1 back to failsafe disabled.
55
SetDrGroupFailsafe ($site_1_array_DR_Group_name_unc, DISABLED) onerror continue
56
//
57
// Set the DR group at Site 1 back to asynchronous.
58
SetDrGroupIoMode ($site_1_array_DR_Group_name_unc, ASYNCHRONOUS) onerror continue
59
//
60
// Present the snapclone at Site 2 to a host.
61
// NOTE: This is required only for VCS 3.x. For later versions, you may remove this step.
62
// Typically you will NOT want to present to an Enabled Host Agent, but instead
63
// use the name of a 'dummy' EVA host that you have set up in Command View.
64
PresentStorageVolume ($Rep1, $host_name, %LUN1%, READ_WRITE) onerror pauseat E2:
65
//
66
// Make a DR group with the Site 3 storage system.
67
$DRG1 = CreateDrGroup ($DR_group_name, $DR_source_storvol_unc1, $site_3_array, "", "", SAME, "", "",
0, FALSE) onerror pauseat E3:
68
//
69
// Check the state of the DR group before we continue.
70
WaitDrGroupNormalization ($DRG1)
71
//
72
// Delete the DR group from Site 2 to Site 3, but keep the copy at Site 3.
73
// Keep trying until the destination is finished allocating.
74
// The time required depends on virtual disk size, workload, etc.
75
ddgwt: Wait ("0:0:3")
76
DeleteDrGroup ($DR_group_name, DETACH) onerror goto ddgwt:
77
//
78
// Delete the presentation.
79
// This is required only for VCS 3.x. For later versions, you may remove this step.
80
E3: UnpresentStorageVolume ($Rep1, $host_name) onerror pauseat E3:
81
//
82
// Delete the snapclone(s) at Site 2 that the DR group was using.
83
E2: DeleteStorageVolume ($Rep1) onerror pauseat E2:
84
//
85
Exit (SUCCESS)
86
//
87
// Failure exit - no rollback needed.
88
E1: Exit (FAILURE)
Perform planned failover (template)
Template summary
Performs a failover of two sites in the case where the source site resources and link remain available.
The DR group pair contains one storage volume (virtual disk) at the source and destination sites.
A. Stops an application on the enabled host at site 1 (if necessary).
B. Unmounts the host volume on the enabled host at site 1.
C. Performs a failover from site 1 to site 2.
D. Discovers the host volume at site 2
E. Mounts host volume on the enabled host at site 2.
F. Starts applications on enabled host at site 2.
Guidelines apply.
Template options
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Guidelines apply.
Example
This template was generated to perform one iteration of a planned failover of a DR group pair that
contains one virtual disk. No template option was selected.
Line Task
1
// Perform a planned failover of a CA configuration.
2
//
3
// Assign some variables that will be used in this job.
4
$site_2_array = SetVariable(%site_2_array%)
5
$site_1_host = SetVariable(%site_1_host%)
6
$site_2_host = SetVariable(%site_2_host%)
7
$site_1_host_hostvol_unc = SetVariable(%site_1_host_hostvol_unc%)
8
//
9
// Validate that resources are as expected.
10
ValidateHost ($site_1_host)
11
ValidateHost ($site_2_host)
12
ValidateStorageSystem ($site_2_array)
13
//
14
// Execute any necessary commands on the host to get the volumes ready to dismount,
15
// such as export volumegroups, stop the local application running on the volume(s), etc.
16
Pause()
17
//
18
// Unmount the local volume(s).
19
UnmountHostVolume ($site_1_host_hostvol_unc)
20
// Failover all DR groups to the remote site.
21
FailoverDrGroups ( %DR_group_name_list%, FALSE )
22
//
23
// Do a bus scan for the new volumes, make sure that new devices are seen by multipath driver.
24
// Repeat the Discover for each DR group.
25
DiscoverDiskDevicesForDrGroup($site_2_host, %DR_group_name%)
26
//
27
// Execute any necessary commands on the remote host to get the volumes ready to mount,
28
// such as import volumegroups, fsck volumes, etc.
29
// Mount devices on the remote host.
30
// Start the application on the remote host.
31
Pause()
32
//
33
Exit (SUCCESS)
34
//
Perform unplanned failover (template)
Template summary
Performs an unplanned failover of two sites in the case where the source site resources or link is
no longer available. The DR group pair contains one storage volume (virtual disk) at the source
and destination sites.
A. Performs a failover from site 1 to site 2.
B. Discovers the host volume on an enabled host at site 2.
C. Pauses the job prior to mounting the host volume on the enabled host at site 2.
Template options
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Guidelines apply.
IMPORTANT: During actual emergencies, HP recommends that you manually perform a failover
by using the Failover action in the GUI.
Example
This template was generated to perform one iteration of an unplanned failover of a DR group pair
containing one virtual disk.
Line Task
1
// Perform an unplanned failover of a CA configuration.
2
// Since it is 'unplanned', we do not quiesce or unmount the source volumes.
3
//
4
// Assign some variables that will be used in this job.
5
$site_2_array = SetVariable(%site_2_array%)
6
$site_2_host = SetVariable(%site_2_host%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($site_2_host)
10
ValidateStorageSystem ($site_2_array)
11
//
12
// Failover all DR groups to the remote site.
13
FailoverDrGroups ( %DR_group_name_list%, FALSE )
14
//
15
// Do a bus scan for the new volumes, make sure that new devices are seen by multipath driver.
16
// Repeat the Discover for each DR group.
17
// This assumes that the devices are presented on the site 2 host
18
DiscoverDiskDevicesForDrGroup($site_2_host, %DR_group_name%)
19
//
20
// Execute any necessary commands on the remote host to get the volumes ready to mount,
21
// such as import volumegroups, fsck volumes, etc.
22
// Mount devices on the remote host.
23
// Start the application on the remote host.
24
Pause()
25
//
26
Exit (SUCCESS)
27
//
Replicate (via snapclone) a host volume multiple times, mount to a host (template)
Template summary
A. Replicate the same host volume more than once, via snapclone.
B. Between each snapclone, wait for the previous one to normalize.
C. Mount the replicas on a host.
D. Pause.
E. Delete the mounted host volumes.
F. Delete the storage volumes.
Template options
•
Number of times to snapclone. Adds commands for each snapclone.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to replicate a host volume one time. No other template options were
selected.
Line Task
1
// Make multiple snapclones of the same Host Volume, and mount to a host.
2
// This requires normalization between each snapclone.
3
//
4
// Assign some variables that will be used in this job.
5
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
6
$mount_host1 = SetVariable(%mount_host1%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($mount_host1)
10
ValidateSnapcloneHostVolume ($source_hostvol_unc1)
11
//
12
$Rep1 = SnapcloneHostVolume ($source_hostvol_unc1, SAME, WAIT) onerror pauseat E1:
13
//
14
// Mount the replicated volume(s) on a host.
15
PresentStorageVolumes ($Rep1, $mount_host1) onerror pauseat E2:
16
DiscoverDiskDevices ($mount_host1, $Rep1) onerror continue
17
$HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host1) onerror pauseat E2:
18
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E3:
19
//
20
// Wait for user to initiate rollback.
21
Pause ()
22
//
23
// Rollback.
24
E4: UnmountHostVolume ($MP1) onerror pauseat E4:
25
E3: DeleteHostVolume ($HV1) onerror pauseat E3:
26
//
27
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
28
//
29
Exit (SUCCESS)
30
//
31
// Failure exit - no rollback needed.
32
E1: Exit (FAILURE)
Replicate host disk devices, mount to a host (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a raw host volume on an enabled
   host.
B. Presents the underlying storage volume copies to a second enabled host (creates a new raw
   host volume).
C. Pauses the job.
D. After continuing, removes the raw host volume from the second enabled host.
E. Unpresents and deletes the storage volume copies from the storage system.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to replicate one raw host volume and duplicate it on another enabled
host. No other template options were selected.
Line Task
1
// Replicate host disk device(s), and mount to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_host = SetVariable(%source_host%)
5
$source_disk_device_unc1 = SetVariable(%source_disk_device_unc1%)
6
$dest_storvol1 = SetVariable(%dest_storvol1%)
7
$mount_host = SetVariable(%mount_host%)
8
//
9
// Validate that resources are as expected.
10
ValidateHost ($source_host)
11
ValidateHost ($mount_host)
12
//
13
$Rep1 = SnapshotDiskDevice ($source_disk_device_unc1, FULLY_ALLOCATED, SAME, $dest_storvol1, WAIT)
onerror pauseat E1:
14
//
15
// Create disk device(s) on a host.
16
CreateDiskDevice ($Rep1, $mount_host, %LUN%, READ_WRITE) onerror pauseat E2:
17
//
18
// Wait for user to initiate rollback.
19
Pause ()
20
//
21
// Rollback.
22
E3: RemoveDiskDevice ($Rep1, $mount_host) onerror pauseat E2:
23
//
24
E2: DeleteStorageVolume ($Rep1) onerror pauseat E2:
25
//
26
Exit (SUCCESS)
27
//
28
// Failure exit - no rollback needed.
29
E1: Exit (FAILURE)
Replicate host volume group, mount components to a host (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a host volume group on an enabled
   host.
B. Presents the underlying storage volume copies to a second enabled host (creates a new host
   volume group).
C. By default, mounts the components (logical volumes) in the new host volume group on the
   second enabled host. Optionally, use raw disk I/O (do not mount the replicated components).
D. Pauses the job.
E. After continuing, unmounts the new volume group components from the second enabled host,
   unpresents and deletes the storage volume copies from the storage system.
Template options
•
Number of components to mount. Enter 1 or more. Adds mount and unmount commands for
each component.
To use raw disk I/O, instead of mounting, enter zero (0). No mount or unmount commands
are added.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Tru64 UNIX. Replication is not supported when an AdvFS domain spans partitions.
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to locally replicate one host volume group and mount one of its
components (logical volume). No other template options were selected.
Line Task
1
// Replicate a host volume group, and mount component(s) to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_VolumeGroup_unc1 = SetVariable(%source_VolumeGroup_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host = SetVariable(%mount_host%)
7
$source_VG_component_unc1 = SetVariable(%source_VG_component_unc1%)
8
//
9
// Validate that resources are as expected.
10
ValidateHost ($source_host)
11
ValidateHost ($mount_host)
12
ValidateSnapshotHostVolumeGroup ($source_VolumeGroup_unc1)
13
//
14
$Rep1 = SnapshotHostVolumeGroup ($source_VolumeGroup_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror
pauseat E1:
15
//
16
// Mount the replicated volume(s) on a host.
17
$HV1 = CreateHostVolumeGroup ($source_VolumeGroup_unc1, $Rep1, $mount_host) onerror pauseat E2:
18
$MP1 = MountVolumeGroupComponent ($HV1, $source_VolumeGroup_unc1, $source_VG_component_unc1,
%mount_point1%) onerror pauseat E3:
19
//
20
// Wait for user to initiate rollback.
21
Pause ()
22
//
23
// Rollback.
24
E4: UnmountHostVolume ($MP1) onerror pauseat E4:
25
E3: DeleteHostVolumeGroup ($HV1) onerror pauseat E3:
26
//
27
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
28
//
29
Exit (SUCCESS)
30
//
31
// Failure exit - no rollback needed.
32
E1: Exit (FAILURE)
Replicate host volume group, mount entire group to a host (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a host volume group on an enabled
   host.
B. Presents the underlying storage volume copies to a second enabled host (creates a new host
   volume group).
C. Mounts all of the components (logical volumes) in the new host volume group on the second
   enabled host.
D. Pauses the job.
E. After continuing, unmounts the new volume group components from the second enabled host,
   unpresents and deletes the storage volume copies from the storage system.
Template options
•
Number of volume groups to replicate. Adds commands for each component.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Tru64 UNIX. Replication is not supported when an AdvFS domain spans partitions.
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to locally replicate one host volume group and mount all of its
components (logical volumes). No other template options were selected.
Line Task
1
// Replicate host volume group(s), and mount to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_VolumeGroup_unc1 = SetVariable(%source_VolumeGroup_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host = SetVariable(%mount_host%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($source_host)
10
ValidateHost ($mount_host)
11
ValidateSnapshotHostVolumeGroup ($source_VolumeGroup_unc1)
12
//
13
$Rep1 = SnapshotHostVolumeGroup ($source_VolumeGroup_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror
pauseat E1:
14
//
15
// Mount the replicated volume(s) on a host.
16
$HV1 = CreateHostVolumeGroup ($source_VolumeGroup_unc1, $Rep1, $mount_host) onerror pauseat E2:
17
$VG1 = MountEntireVolumeGroup ($HV1, $source_VolumeGroup_unc1, %VGcopyPrefix%) onerror pauseat
E3:
18
//
19
// Wait for user to initiate rollback.
20
Pause ()
21
//
22
// Rollback.
23
E4: UnmountEntireVolumeGroup ($VG1) onerror pauseat E3:
24
E3: DeleteHostVolumeGroup ($HV1) onerror pauseat E3:
25
//
26
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
27
//
28
Exit (SUCCESS)
29
//
30
// Failure exit - no rollback needed.
31
E1: Exit (FAILURE)
Replicate host volumes (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a host volume on an enabled host.
B. Pauses the job.
C. After continuing, deletes the storage volume copies from the storage system.
Template Options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to replicate one host volume. No other template options were selected.
Line Task
1
// Replicate host volume(s).
2
//
3
// Assign some variables that will be used in this job.
4
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
5
$source_host = SetVariable(%source_host%)
6
//
7
// Validate that resources are as expected.
8
ValidateHostVolume ($source_hostvol_unc1)
9
ValidateSnapshotHostVolume ($source_hostvol_unc1)
10
//
11
$Rep1 = SnapshotHostVolume ($source_hostvol_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror pauseat E1:
12
//
13
// Wait for user to initiate rollback.
14
Pause ()
15
//
16
// Rollback.
17
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
18
//
19
Exit (SUCCESS)
20
//
21
// Failure exit - no rollback needed.
22
E1: Exit (FAILURE)
Replicate host volumes, mount to a host (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a host volume on an enabled host.
B. Presents the underlying storage volume copies to a second enabled host (creates a new host
   volume).
C. Mounts the new host volume on the second enabled host.
D. Pauses the job.
E. After continuing, unmounts the new host volume from the second enabled host, unpresents
   and deletes the storage volume copies from the storage system.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to replicate one host volume and mount it on an enabled host. No
other template options were selected.
Line Task
1
// Replicate host volume(s), and mount to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host = SetVariable(%mount_host%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($mount_host)
10
ValidateHostVolume ($source_hostvol_unc1)
11
ValidateSnapshotHostVolume ($source_hostvol_unc1)
12
//
13
$Rep1 = SnapshotHostVolume ($source_hostvol_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror pauseat E1:
14
//
15
// Mount the replicated volume(s) on a host.
16
PresentStorageVolumes ($Rep1, $mount_host) onerror pauseat E2:
17
DiscoverDiskDevices ($mount_host, $Rep1) onerror continue
18
$HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host) onerror pauseat E2:
19
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E3:
20
//
21
// Wait for user to initiate rollback.
22
Pause ()
23
//
24
// Rollback.
25
E4: UnmountHostVolume ($MP1) onerror pauseat E4:
26
E3: DeleteHostVolume ($HV1) onerror pauseat E3:
27
//
28
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
29
//
30
Exit (SUCCESS)
31
//
32
// Failure exit - no rollback needed.
33
E1: Exit (FAILURE)
Replicate host volumes, mount to a host, then to a different host (template)
Template summary
Involves five enabled hosts, EH1 through EH5*.
A. Locally replicates (copies) the storage volumes that underlie a host volume on an EH1.
B. Presents the underlying storage volume copies to EH2 (creates a new host volume).
C. Mounts the new host volume on EH2, launches a backup process on EH3, and waits for the
process to complete. After completion, unmounts and deletes the new host volume from EH2.
D. Mounts the new host volume on EH4, launches a backup process on EH5, and waits for the
process to complete. After completion, unmounts and deletes the new host volume from EH4.
E. Unpresents and deletes the storage volume copies from the storage system.
* EH1 through EH5 are labels in the summary only. They are not variables in the template.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Linux. Do not use this template with Linux volume groups.
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to replicate one host volume and mount it on two different enabled
hosts. No other template options were selected.
Line Task
1
// Replicate host volume(s), mount to a host, then to a different host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host1 = SetVariable(%mount_host1%)
7
$launch_host_name = SetVariable(%launch_host_name%)
8
$mount_host2 = SetVariable(%mount_host2%)
9
//
10
// Validate that resources are as expected.
11
ValidateHost ($launch_host_name)
12
ValidateHost ($mount_host2)
13
ValidateHost ($mount_host1)
14
ValidateHostVolume ($source_hostvol_unc1)
15
ValidateSnapshotHostVolume ($source_hostvol_unc1)
16
//
17
$Rep1 = SnapshotHostVolume ($source_hostvol_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror pauseat E1:
18
//
19
// Mount the replicated volume(s) on a host.
20
PresentStorageVolumes ($Rep1, $mount_host1) onerror pauseat E2:
21
DiscoverDiskDevices ($mount_host1, $Rep1) onerror continue
22
$HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host1) onerror pauseat
E2:
23
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E5:
24
//
25
// Launch a backup process on a host.
26
Launch ($launch_host_name, %command_line%, "", WAIT, "0") onerror pauseat E6:
27
//
28
// Unmount the volume(s).
29
E6: UnmountHostVolume ($MP1) onerror pauseat E6:
30
E5: DeleteHostVolume ($HV1) onerror pauseat E5:
31
//
32
//
33
PresentStorageVolumes ($Rep1, $mount_host2) onerror pauseat E2:
34
DiscoverDiskDevices ($mount_host2, $Rep1) onerror continue
35
$HV2 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host2) onerror pauseat
E2:
36
$MP2 = MountHostVolume ($HV2, %mount_point2%) onerror pauseat E3:
37
//
38
// Launch a backup process on a host.
39
Launch ($launch_host_name, %command_line%, "", WAIT, "0") onerror pauseat E4:
40
//
41
// Wait for user to initiate rollback.
42
Pause ()
43
//
44
// Rollback.
45
E4: UnmountHostVolume ($MP2) onerror pauseat E4:
46
E3: DeleteHostVolume ($HV2) onerror pauseat E3:
47
E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
48
//
49
Exit (SUCCESS)
50
//
51
// Failure exit - no rollback needed.
52
E1: Exit (FAILURE)
Replicate host volumes via preallocated replication, mount to a host (template)
Template summary
A. Disables (flushes) the write cache of the storage volumes that underlie a host volume on an
   enabled host.
B. Prepares the container(s) for host volume replication.
C. Locally replicates (copies) the storage volumes that underlie the host volume to containers.
D. Re-enables the write cache of the source storage volumes that underlie the host volume.
E. Waits for the container copies to become storage volume copies and be discovered.
F. Presents the underlying storage volume copies to a second enabled host (creates a new host
   volume group).
G. Mounts the new host volume on the second enabled host.
H. Pauses the job.
I. After continuing, unmounts the new host volume from the second enabled host, unpresents
   and deletes the storage volume copies from the storage system.
J. Converts the storage volume copies back into containers.
NOTE:
This template cannot be used with some older versions of controller software.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Use snapclone instead of snapshot. Generates a template that uses preallocated snapclone
replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
This template cannot be used with some older versions of controller software.
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to replicate one host volume. No other template options were selected.
Line Task
1
// Replicate host volume(s) via pre-allocated replication, and mount to a host.
2
//
3
// Assign some variables that will be used in this job.
4
$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
5
$source_host = SetVariable(%source_host%)
6
$mount_host = SetVariable(%mount_host%)
7
//
8
// Validate that resources are as expected.
9
ValidateHost ($source_host)
10
ValidateHost ($mount_host)
11
ValidateHostVolume ($source_hostvol_unc1)
12
//
13
// Begin flushing the cache on the host volume(s).
14
SetHostVolumeWriteCacheMode ($source_hostvol_unc1, WRITE_CACHE_DISABLED, NOWAIT) onerror pauseat
E1:
15
//
16
// Wait for the cache flush to complete.
17
WaitForHostVolumeWriteCacheFlush ($source_hostvol_unc1) onerror pauseat E1:
18
//
19
// Prepare the container(s) for host volume replication.
20
PrepareContainerForHostDiskDeviceReplication ( %hostvol_unc_name%, %managed_set_of_containers%,
%copytype% ) onerror continue
21
DO {
22
$Rep1 = SnapshotHostVolumeToContainersInManagedSet ($source_hostvol_unc1, %dest_container_set1%,
FULLY_ALLOCATED, NOWAIT) onerror pauseat E1:
23
//
24
} ALWAYS {
25
// Restore the writeback cache on the host volume(s).
26
SetHostVolumeWriteCacheMode ($source_hostvol_unc1, WRITE_CACHE_ENABLED, NOWAIT) onerror continue
27
//
28
}
29
//
30
// Wait for replicated storage to appear.
31
WaitForStorageVolumesDiscovery ($Rep1) onerror pauseat E2:
32
//
33
// Mount the replicated volume(s) on a host.
34
PresentStorageVolumes ($Rep1, $mount_host) onerror pauseat E2:
35
DiscoverDiskDevices ($mount_host, $Rep1) onerror continue
36
$HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host) onerror pauseat E2:
37
$MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E3:
38
//
39
// Wait for user to initiate rollback.
40
Pause ()
41
//
42
// Rollback.
43
E4: UnmountHostVolume ($MP1) onerror pauseat E4:
44
E3: DeleteHostVolume ($HV1) onerror pauseat E3:
45
//
46
E2: DeleteStorageVolumesInManagedSet (%dest_container_set1%) onerror pauseat E2:
47
//
48
Exit (SUCCESS)
49
//
50
// Failure exit - no rollback needed.
51
E1: Exit (FAILURE)
Replicate host volume, mount components to a host (template)
Template summary
A. Locally replicates (copies) the storage volumes that underlie a host volume on an enabled host.
B. Presents the underlying storage volume copies to a second enabled host (creates a new host
   volume).
C. Mounts a component in the new host volume on the second enabled host.
D. Pauses the job.
E. After continuing, unmounts the component from the second enabled host, unpresents and
   deletes the storage volume copies from the storage system.
Template options
•
Number of components to mount. Adds commands for each component.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Launch backup after replication. Adds a launch command for interacting with an enabled
host, for example, to start a tape backup.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating AdvFS volumes that have heavy I/O, select the option Suspend
source before replication. See Suspending I/O before replicating AdvFS volumes.
Example
This template was generated to locally replicate and mount one host volume. No other template
options were selected.
Line Task
1   // Replicate a host volume, and mount slices/partitions/LVs to a host.
2   //
3   // Assign some variables that will be used in this job.
4   $mount_host = SetVariable(%mount_host%)
5   $source_host = SetVariable(%source_host%)
6   $source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
7   //
8   // Validate that resources are as expected.
9   ValidateHost ($mount_host)
10  ValidateHostVolume ($source_hostvol_unc1)
11  ValidateSnapshotHostVolume ($source_hostvol_unc1)
12  //
13  $Rep1 = SnapshotHostVolume ($source_hostvol_unc1, FULLY_ALLOCATED, SAME, WAIT) onerror pauseat E1:
14  //
15  // Mount the replicated volume(s) on a host.
16  PresentStorageVolumes ($Rep1, $mount_host) onerror pauseat E2:
17  DiscoverDiskDevices ($mount_host, $Rep1) onerror continue
18  $HV1 = CreateHostVolumeFromDiskDevices ($source_hostvol_unc1, $Rep1, $mount_host) onerror pauseat E2:
19  $MP1 = MountHostVolume ($HV1, %mount_point1%) onerror pauseat E3:
20  //
21  // Wait for user to initiate rollback.
22  Pause ()
23  //
24  // Rollback.
25  E4: UnmountHostVolume ($MP1) onerror pauseat E4:
26  E3: DeleteHostVolume ($HV1) onerror pauseat E3:
27  //
28  E2: DeleteStorageVolumes ($Rep1) onerror pauseat E2:
29  //
30  Exit (SUCCESS)
31  //
32  // Failure exit - no rollback needed.
33  E1: Exit (FAILURE)
Replicate raw storage volumes, mount (raw) to a host (template)
Template summary
A. Locally replicates (copies) raw storage volumes.
B. Presents the storage volume copies to an enabled host (creates a raw host volume).
C. Pauses the job.
D. After continuing, removes the raw host volume from the enabled host.
E. Unpresents and deletes the storage volume copy from the storage system.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to locally replicate one storage volume. No other template options
were selected.
Line Task
1   // Replicate raw storage volume(s), and mount to a host.
2   //
3   // Assign some variables that will be used in this job.
4   $source_storvol_unc1 = SetVariable(%source_storvol_unc1%)
5   $dest_storvol1 = SetVariable(%dest_storvol1%)
6   $mount_host = SetVariable(%mount_host%)
7   //
8   // Validate that resources are as expected.
9   ValidateHost ($mount_host)
10  ValidateStorageVolume ($source_storvol_unc1)
11  ValidateSnapshotStorageVolume ($source_storvol_unc1)
12  //
13  $Rep1 = SnapshotStorageVolume ($source_storvol_unc1, FULLY_ALLOCATED, SAME, $dest_storvol1, WAIT) onerror pauseat E1:
14  //
15  // Create disk device(s) on a host.
16  CreateDiskDevice ($Rep1, $mount_host, %LUN%, READ_WRITE) onerror pauseat E2:
17  //
18  // Wait for user to initiate rollback.
19  Pause ()
20  //
21  // Rollback.
22  E3: RemoveDiskDevice ($Rep1, $mount_host) onerror pauseat E2:
23  //
24  E2: DeleteStorageVolume ($Rep1) onerror pauseat E2:
25  //
26  Exit (SUCCESS)
27  //
28  // Failure exit - no rollback needed.
29  E1: Exit (FAILURE)
Replicate storage volumes (template)
Template summary
A. Locally replicates (copies) storage volumes.
B. Pauses the job.
C. After continuing, deletes the storage volume copies from the storage system.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host, for example to suspend and resume host application I/O.
•
Use snapclone instead of snapshot. Generates a template that uses snapclone replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating virtual disks with AdvFS volumes that have heavy I/O, select
the option Suspend source before replication. See Suspending I/O before replicating AdvFS
volumes.
Example
This template was generated to replicate one storage volume. No other template options were
selected.
Line Task
1   // Replicate storage volume(s).
2   //
3   // Assign some variables that will be used in this job.
4   $source_storvol_unc1 = SetVariable(%source_storvol_unc1%)
5   $dest_storvol1 = SetVariable(%dest_storvol1%)
6   //
7   // Validate that resources are as expected.
8   ValidateStorageVolume ($source_storvol_unc1)
9   ValidateSnapshotStorageVolume ($source_storvol_unc1)
10  //
11  $Rep1 = SnapshotStorageVolume ($source_storvol_unc1, FULLY_ALLOCATED, SAME, $dest_storvol1, WAIT) onerror pauseat E1:
12  //
13  // Wait for user to initiate rollback.
14  Pause ()
15  //
16  // Rollback.
17  E2: DeleteStorageVolume ($Rep1) onerror pauseat E2:
18  //
19  Exit (SUCCESS)
20  //
21  // Failure exit - no rollback needed.
22  E1: Exit (FAILURE)
Replicate storage volumes via preallocated replication (template)
Template summary
A. Disables (flushes) the write cache of a storage volume.
B. Prepares the container(s) for storage volume replication.
C. Locally replicates (copies) the storage volume to a container.
D. Re-enables the write cache of the storage volume.
NOTE:
This template cannot be used with some older versions of controller software.
Template options
•
Number of volumes to replicate. Adds commands for each volume.
•
Suspend source before replication. Adds launch commands for interacting with an enabled
host. For example, to suspend and resume host application I/O.
•
Use snapclone instead of snapshot. Generates a template that uses preallocated snapclone
replication.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Considerations
•
Tru64 UNIX. When replicating virtual disks with AdvFS volumes that have heavy I/O, select
the option Suspend source before replication. See Suspending I/O before replicating AdvFS
volumes.
Example
This template was generated to replicate one storage volume. No other template options were
selected.
Line Task
1   // Replicate storage volume(s) via pre-allocated replication.
2   //
3   // Assign some variables that will be used in this job.
4   $source_storvol_list = SetListVariable(%source_storvol_list%)
5   $dest_container_list = SetListVariable(%dest_container_list%)
6   //
7   // Validate that resources are as expected.
8   ValidateStorageVolumes ($source_storvol_list)
9   //
10  // Begin flushing the cache on the storage volume(s).
11  SetStorageVolumesWriteCacheMode ($source_storvol_list, WRITE_CACHE_DISABLED, NOWAIT) onerror pauseat E1:
12  //
13  // Wait for the cache flush to complete.
14  WaitForStorageVolumesWriteCacheFlush ($source_storvol_list) onerror pauseat E1:
15  //
16  #Prepare the container(s) for storage volume replication
17  PrepareContainers ( $source_storvol_list, $dest_container_list, FULLY_ALLOCATED ) onerror continue
18  DO {
19  $Rep1 = SnapshotStorageVolumesToContainers ($source_storvol_list, $dest_container_list, FULLY_ALLOCATED, NOWAIT) onerror pauseat E2:
20  //
21  } ALWAYS {
22  // Restore the writeback cache on the storage volume(s).
23  SetStorageVolumesWriteCacheMode ($source_storvol_list, WRITE_CACHE_ENABLED, NOWAIT) onerror continue
24  //
25  }
26  //
27  // Wait for user to initiate rollback.
28  Pause ()
29  //
30  E2: DeleteStorageVolumes ( $Rep1 ) onerror pauseat E2:
31  //
32  Exit (SUCCESS)
33  //
34  // Failure exit - no rollback needed.
35  E1: Exit (FAILURE)
Setup Continuous Access (remote replication template)
Template summary
Establishes and configures a DR group pair and initiates remote replication between two sites. The
DR group pair contains one storage volume (virtual disk) at the source and destination sites.
A. Creates a DR group pair that contains the source storage volume that is to be remotely
replicated. The source DR group is at site 1. The destination DR group is at site 2.
B. Configures the remote replication with failsafe on unavailable member data protection, I/O
mode, and the destination host I/O access mode.
C. Initiates (resumes) remote replication between the sites.
D. Presents the destination storage volume to an enabled host at site 2.
Guidelines apply.
Template options
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Comments
•
Guidelines apply.
Example
This template was generated to create one DR group pair.
Line Task
1   // Setup Continuous Access.
2   //
3   // Assign some variables that will be used in this job.
4   $DR_source_array = SetVariable(%DR_source_array%)
5   $DR_dest_array = SetVariable(%DR_dest_array%)
6   $DR_group_name = SetVariable(%DR_group_name%)
7   $DR_source_storvol_unc1 = SetVariable(%DR_source_storvol_unc1%)
8   $dest_host_name = SetVariable(%dest_host_name%)
9   //
10  // Validate that resources are as expected.
11  ValidateHost ($dest_host_name)
12  ValidateStorageSystem ($DR_source_array)
13  ValidateStorageSystem ($DR_dest_array)
14  ValidateStorageVolume ($DR_source_storvol_unc1)
15  //
16  $DRG1 = CreateDrGroup ($DR_group_name, $DR_source_storvol_unc1, $DR_dest_array, "", "", SAME, "", "", 0, FALSE) onerror pauseat E1:
17  //
18  SetDrGroupFailsafe($DRG1, %failsafe_mode%) onerror continue
19  SetDrGroupIoMode($DRG1, %io_mode%) onerror continue
20  SetDrGroupDestinationAccess($DRG1, NONE) onerror continue
21  SetDrGroupSuspend($DRG1, RESUMED) onerror continue
22  //
23  // Present destination vdisk to a destination host.
24  WaitDrGroupNormalization ($DRG1)
25  // The dest_storvol will be created AFTER the job,
26  // so enter the name of the storage volume manually.
27  // Default name is the same as the source storage volume.
28  PresentStorageVolume (%dest_storvol%, $dest_host_name, %lun_number%, READ_WRITE)
29  //
30  Exit (SUCCESS)
31  //
32  // Failure exit - no rollback needed.
33  E1: Exit (FAILURE)
Throttle replication I/O (remote replication template)
IMPORTANT:
Jobs generated by this template are intended to be run only in conjunction with
log merge or full-copy events on a single storage system.
Template summary
Suspends remote replication in non-critical DR group pairs until critical DR group pairs have
normalized across the two sites. The DR groups must be source groups on a single storage system.
The terms critical and non-critical used in the template do not refer to the critical operational status.
For the purpose of using the template, you can consider any DR group to be critical or non-critical.
A. Suspends the remote replication of the non-critical DR group pairs.
B. Halts the job (waits) until the critical DR group pairs have normalized across the two sites.
C. After normalization is completed, resumes the remote replication in the non-critical DR group pairs.
Template options
•
Number of non-critical DR groups in the CA configuration. Adds commands for the non-critical
DR groups.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to perform one iteration of throttling I/O for source DR groups on
one storage system.
Line Task
1   // Throttle replication I/O on non-critical DR groups,
2   // so that critical DR groups will have the best performance.
3   //
4   // Assign some variables that will be used in this job.
5   $DR_Group_name_unc1 = SetVariable(%DR_Group_name_unc1%)
6   //
7   // Suspend the non-critical DR group(s).
8   SetDrGroupSuspend ($DR_Group_name_unc1, SUSPENDED) onerror pauseat E1:
9   //
10  // Wait for the critical DR group(s) to normalize.
11  WaitDrGroupNormalization (%critical_DR_Group_name_unc%)
12  //
13  // Resume the non-critical DR group(s).
14  E2: SetDrGroupSuspend ($DR_Group_name_unc1, RESUMED) onerror pauseat E2:
15  //
16  Exit (SUCCESS)
17  //
18  // Failure exit - no rollback needed.
19  E1: Exit (FAILURE)
Unmount and delete existing host volumes (template)
Template summary
Unmounts and deletes host volumes.
Template options
•
Number of volumes to unmount and delete. Adds commands for each volume.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to unmount and delete one host volume. No other template options
were selected.
Line Task
1   // Unmount and delete the host volume(s).
2   //
3   // Assign some variables that will be used in this job.
4   $hostvol_unc1 = SetVariable(%hostvol_unc1%)
5   $host_name = SetVariable(%host_name%)
6   //
7   // Validate that resources are as expected.
8   ValidateHost ($host_name)
9   ValidateHostVolume ($hostvol_unc1)
10  //
11  E1: UnmountHostVolume ($hostvol_unc1)
12  E2: DeleteHostVolume ($hostvol_unc1)
13  //
14  Exit (SUCCESS)
15  //
Unmount existing host volumes (template)
Template summary
A. Unmounts an existing host volume on an enabled host. Existing refers to a host volume that is created by means other than this job. The volume must exist and be in the replication manager database when this job is run.
B. The underlying storage volume remains presented to the host.
Template options
•
Number of volumes to unmount. Adds commands for each volume.
•
Include e-mail notification. Adds a command for e-mail notification of the job instance status.
See SetNotificationPolicy.
Example
This template was generated to unmount one host volume on an enabled host. No other template
options were selected.
Line Task
1   // Unmount existing host volume(s).
2   //
3   // Assign some variables that will be used in this job.
4   $hostvol_unc1 = SetVariable(%hostvol_unc1%)
5   $host_name = SetVariable(%host_name%)
6   //
7   // Validate that resources are as expected.
8   ValidateHost ($host_name)
9   ValidateHostVolume ($hostvol_unc1)
10  //
11  // Unmount the volume(s).
12  E1: UnmountHostVolume ($hostvol_unc1)
13  Exit (SUCCESS)
14  //
7 Managed sets
Working with managed sets
About managed sets
The Managed Sets content pane displays resource sets that you can interact with through the
replication manager. See GUI window Content pane.
Views
•
List of all managed sets. See managed set views list view.
•
Members in a managed set. See managed set views Members.
Actions
•
Actions in the GUI. See Managed set actions summary.
•
You can also interact with Managed Sets from the CLUI. See Managed set actions cross
reference.
Properties
•
Properties displayed in the GUI. See Managed sets properties summary.
•
You can also display properties from the CLUI. See the CLUI command Show Managed_Set.
Managed set actions summary
The following actions are available on the Managed Sets content pane. Some actions have
equivalent job commands or CLUI commands. See Managed sets actions cross reference.
•
View/Edit Properties. Displays the properties of the selected managed set and allows you to
edit the managed set name. Procedure.
•
New. Create a managed set. Procedure.
•
Delete. Delete a managed set. Procedure.
•
Remove Member. Remove a resource from a managed set. Procedure.
The following actions are available on the resource content panes. Some actions have equivalent
job commands or CLUI commands.
•
Add To Managed Set. See DR groups, Enabled hosts, Host volumes, Storage systems, and
virtual disks.
•
Remove From Managed Set. See DR groups, Enabled hosts, Host volumes, Storage systems,
and virtual disks.
Managed set actions cross reference
You can work with managed sets using GUI actions, jobs and CLUI commands. This table provides
a cross reference for performing typical tasks.
Create managed sets
GUI action                          Job command or template                             CLUI command
Managed Sets > New                  -                                                   Add Managed_Set

Delete managed sets
GUI action                          Job command or template                             CLUI command
Managed Sets > Delete               -                                                   Delete Managed_Set

Other managed set tasks
GUI action                          Job command or template                             CLUI command
Managed Sets > Add Member           -                                                   Set Managed_Set
Managed Sets > Low-Level Refresh    -                                                   Set Managed_Set
-                                   -                                                   Show Managed_Set
Managed Sets > Remove Member        -                                                   Set Managed_Set
-                                   ConvertStorageVolumesInManagedSetIntoContainers     -
-                                   SnapcloneHostVolumeToContainersInManagedSet         -
-                                   SnapshotHostVolumeToContainersInManagedSet          -

View managed sets
GUI action                          Job command or template                             CLUI command
Managed Sets > View Properties      -                                                   Show Managed_Set
Add resources to a managed set. See Adding DR groups, Adding enabled hosts, Adding host
volumes, Adding storage systems, and Adding virtual disks.
Remove resources from a managed set. See Removing DR groups, Removing enabled hosts,
Removing host volumes, Removing storage systems, and Removing virtual disks.
Managed set properties summary
For help on properties, see the following tab in the managed set properties window.
•
Managed set listing. See General tab.
See also Viewing managed set properties.
Managed set views
See the following examples: List view, Members
List view
Members
Adding resources to a managed set
Add a resource to a managed set.
Considerations
•
You can use the GUI or CLUI. See Managed set actions cross reference.
•
A managed set can include only resources of the same type.
Procedure
Resources are added from the resource's action menu. See Adding DR groups, Adding enabled
hosts, Adding host volumes, Adding storage systems, and Adding virtual disks.
Creating managed sets
Create a managed set of resources.
Considerations
•
You can use the GUI or CLUI. See Managed sets actions cross reference.
•
A managed set can include only resources of the same type.
Procedures
This procedure uses the GUI.
1. In the navigation pane, select Managed Sets.
2. On the List tab, select Actions > New. The New Managed Set window opens.
3. Select the type of managed set you want to create in the drop-down list.
4. Enter a name for the managed set.
5. Click OK. An empty managed set is created. To add members, see Adding resources to managed sets.
Deleting managed sets
Delete a managed set of resources.
Considerations
•
You can use the GUI or CLUI. See Managed sets actions cross reference.
•
When you delete a managed set, its members (resources) are removed from the managed set
but they are not deleted as individually available resources.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Managed Sets.
2. On the List tab, select the managed set you want to delete.
3. Select Actions > Delete.
4. Click OK.
Renaming managed sets
Rename a managed set.
Considerations
•
You can only use the GUI.
•
If you rename a managed set that is referenced in a job, you must also edit the job to reflect
the name change, or the job will fail when run.
Procedure
1. In the navigation pane, select Managed Sets.
2. On the List tab, select the managed set to rename.
3. Select Actions > View/Edit Properties. The Managed Set Properties window opens.
4. In the name value field, enter a new name.
5. Click OK. The managed set is renamed.
Removing resources from managed sets
Remove a resource from a managed set.
Considerations
•
You can use the GUI or CLUI. See Managed sets actions cross reference.
•
You can also remove resources from the resource's action menu. See Removing DR groups,
Removing enabled hosts, Removing host volumes, Removing storage systems, and Removing
virtual disks.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Managed Sets.
2. On the List tab, select the managed set.
3. Select Actions > Remove Member. The Remove Members window opens.
4. Select the members to remove.
5. Click OK. The members are removed.
Viewing managed sets
Display the managed set list and member views. See Managed sets views.
Considerations
•
You can use the GUI or CLUI. See Managed sets actions cross reference.
Procedures
This procedure uses the GUI.
1. In the navigation pane, select Managed Sets. The content pane displays managed sets.
2. Click the List tab. A tabular list of managed sets appears.
3. Click the Members tab. A tabular list of managed set members appears.
Viewing managed set properties
View information about managed sets and their members. See Managed sets properties summary.
You can use the GUI or CLUI. See Managed sets actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Managed Sets.
2. On the List tab, select the managed set whose properties you want to view.
3. Select Actions > View Properties. The Managed Set Properties window opens.
Managed sets concepts
Managed sets overview
A managed set is a named set of resources that can be acted on as a single entity. For example,
the managed set Sales_Disks might include two virtual disks, West_Sales and East_Sales.
Performing an action on a managed set performs the action on all the members in the set. For
example, if you perform the New Snapshot action on the managed set Sales_Disks, the replication
manager creates a snapshot of West_Sales and a snapshot of East_Sales.
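The same fan-out applies to job commands that accept a managed set. As an illustration only, the following minimal sketch is assembled from the job commands and %...% placeholder variables used in the templates in the Jobs chapter: one command replicates a host volume into every container in a managed set, and one command later deletes every copy created in that set.

$source_hostvol_unc1 = SetVariable(%source_hostvol_unc1%)
// One command replicates the host volume into every container in the managed set.
$Rep1 = SnapshotHostVolumeToContainersInManagedSet ($source_hostvol_unc1, %dest_container_set1%, FULLY_ALLOCATED, NOWAIT) onerror pauseat E1:
WaitForStorageVolumesDiscovery ($Rep1) onerror pauseat E1:
// One command later deletes every copy that was created in the set.
DeleteStorageVolumesInManagedSet (%dest_container_set1%) onerror pauseat E1:
Exit (SUCCESS)
// Failure exit - no rollback needed.
E1: Exit (FAILURE)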
Set types
When you create a managed set, you must specify its resource type. After the set is created, you
can only add members of that resource type. The following resource types are supported.

Set type                          Guidelines
DR groups                         See Managed sets of DR groups.
Enabled hosts
Host volumes
Storage systems
Virtual disks (and containers)    See Managed sets of virtual disks (and containers).
Membership
•
A specific resource can be a member of more than one managed set.
•
The order in which members are added to, or appear in, a managed set does not affect the
order in which actions are performed on the set's members.
•
Performing actions on a large managed set (a set with many members) can take a long time.
For flexibility, consider using several small managed sets rather than a single large set.
Managed sets of DR groups
Managed sets of DR groups require careful planning. Consider the following guidelines.
•
Some DR group actions are permitted only on source DR groups; other actions are permitted
only on destination DR groups. See Using DR group actions.
•
HP recommends that you create separate managed sets for source and destination DR groups.
When you conduct failover operations, you can perform the actions that correspond to the
new roles of the managed set.
•
Source and destination DR groups (from different DR group pairs) can be members of the
same managed set.
•
The source and destination DR group in a DR group pair cannot be members of the same
managed set.
•
If you plan to use DR group managed sets for failover operations, ensure the managed sets
are controlled by the same management server at the time of failover.
Managed sets of virtual disks (or containers)
A managed set of the type virtual disks can contain two types of resources, either virtual disks or
virtual disk containers. The distinction between disks and containers is important when using
managed sets. See Virtual disks overview.
Managed sets of virtual disks (and containers) require careful planning. Consider the following
guidelines.
General guidelines
•
In general, do not include virtual disks and containers in the same managed set. Doing so
can result in errors when certain actions are applied to the set. For example, applying a
replication action to a managed set that includes containers will result in errors because
containers cannot be replicated.
•
A managed set of virtual disks can include disks (or containers) from more than one storage
system.
•
Ensure that you are aware of the current content of managed sets. When some GUI actions,
job commands, or CLUI commands are applied to a managed set, the members are converted
from virtual disks (storage volumes) to containers or from containers to virtual disks.
Guidelines for managed sets of containers
•
When using a managed set of containers for preallocated snapclones, the number of containers
and the size of each container must correspond to the source virtual disks being replicated.
See About containers.
8 Storage systems
Working with storage systems
About storage system resources
The Storage Systems content pane displays HP storage arrays that you can interact with through
the replication manager. See GUI window Content pane.
Views
•
Tabular list view. See Storage systems list view.
•
Graphical tree views. See: System/Disk Group/Virtual Disk tree view and DR group
Source/Destination tree view
Actions
•
Actions in the GUI. See Storage systems actions summary.
•
You can also interact with storage systems from a job and the CLUI. See Storage systems
actions cross reference.
Properties
•
Properties displayed in the GUI. See Storage systems properties summary.
•
You can also display properties from the CLUI. See the CLUI command Show System.
Storage system actions summary
The following storage system actions are available on the content pane. Some actions have
equivalent CLUI commands. See Storage systems actions cross reference.
•
View Properties. View properties of a storage system. Procedure.
•
Add to Managed Set. Add a storage system to a managed set. Procedure.
•
Remove from Managed Set. Remove a storage system from a managed set. Procedure.
•
Launch the Device Manager. Access HP P6000 Command View from the replication manager.
Procedure.
•
List Events. Display a list of events for the resource. Procedure.
•
Set Remote Replication Port Preferences. Change the priority settings on the host ports used
for HP P6000 Continuous Access remote replication. Procedure.
•
Set DR Protocol Type. Set the protocol used for HP P6000 Continuous Access remote replication.
Procedure.
Storage system actions cross reference
You can work with storage systems using GUI actions, jobs and CLUI commands. This table provides
a cross reference for performing typical tasks.
Manage storage systems
GUI action                                                   Job template or command         CLUI command
Storage Systems > Add to Managed Set                         -                               Set Managed_Set
Storage Systems > Set DR Protocol Type                       SetDrProtocolType               Set System
Storage Systems > Set Remote Replication Port Preferences    SetPreferredPortConfiguration   -

Validate storage systems
GUI action                                                   Job template or command         CLUI command
-                                                            ValidateStorageSystem           -

View storage systems
GUI action                                                   Job template or command         CLUI command
Storage Systems > View Properties                            -                               Show System
Storage system properties summary
For help on properties, see the following tabs in the storage systems properties window.
•
Storage system. See General tab.
•
Local and remote replication licensing for the storage system. See Licensing tab.
•
Managed sets in which the storage system is a member. See Membership tab.
See also Viewing storage system properties.
Storage system views
See the following examples: List view, System/Disk Group/Virtual Disk tree view, DR group
Source/Destination tree view
List view
System/Disk Group/Virtual Disk tree view
DR group Source/Destination tree view
Adding storage systems to a managed set
Add storage systems to a managed set.
Considerations
•
You can use the GUI or CLUI. See Storage systems actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Storage Systems.
2. On the List tab, select the storage systems to add to a managed set.
3. Select Actions > Add to Managed Set. The Create New Managed Set window opens.
4. Select a managed set, or select Create New Managed Set and enter a name.
5. Click OK.
Checking and printing storage system licenses
Check and print the status of replication licenses and application-integration licenses for a specific
storage array.
1. In the navigation pane, select Storage Systems. The Storage Systems content pane opens.
2. Select a storage system.
3. Select Actions > View Properties. The Storage Properties window opens.
4. Select the Licensing tab.
5. Click the Print button or select Actions > Print.
Launching the device manager
Access HP P6000 Command View from the replication manager. Each time you use this action in
the same replication manager session, a new window for HP P6000 Command View is opened.
Considerations
•
You can only use the GUI to launch the device manager. The action is not available unless
an individual resource is selected (highlighted) in the replication manager content pane.
•
You must know the security credentials (user name and password) to log on to HP P6000
Command View.
Procedure
1. In the navigation pane, select DR Groups, Storage Systems or Virtual Disks.
2. On a List or Tree tab, select any storage resource.
3. Select Actions > Launch the Device Manager. A new browser window opens.
4. Respond to the security alert message, and then log on to HP P6000 Command View.
Listing individual resource events
Display the events for an individual resource.
Considerations
•
You can only use the GUI.
•
Applies only to individual DR groups, storage systems, and virtual disks.
Procedure
1. In the navigation pane, select a resource type.
2. On the List tab, select the specific resource whose events are to be displayed.
3. Select Actions > List events. An events window for the resource opens.
Removing storage systems from a managed set
Remove storage systems from a managed set.
Considerations
•
You can use the GUI or CLUI. See Storage systems actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Storage Systems.
2. On the List tab, select the storage systems to remove from a managed set.
3. Select Actions > Remove From Managed Set. The Select Managed Sets window opens.
4. Select the managed set from which to remove the storage systems.
5. Click OK.
Setting Remote Replication Port Preferences
Change the priority settings on the host ports used for HP P6000 Continuous Access remote
replication.
Considerations
•
You can use the GUI or jobs to set replication port preferences. See Storage systems actions
cross reference.
Procedure
1. In the navigation pane, select Storage Systems.
2. On the List tab, select the storage system on which you want to change the port settings.
3. Select Actions > Set Remote Replication Port Preferences. The Storage Systems Set Remote Replication Preferred Port Preferences window opens. Follow the instructions in the window.
Setting DR Protocol Type
Change the protocol used for HP P6000 Continuous Access remote replication. For more information
on DR protocols, see “Remote data replication protocols” (page 238).
Considerations
•
You can use the GUI, CLUI, or jobs to set the DR protocol. See Storage systems actions cross
reference.
Procedure
1. In the navigation pane, select Storage Systems.
2. On the List tab, select the storage system on which you want to change the protocol.
3. Select Actions > Set DR Protocol Type. The Set DR Protocol window opens.
4. Select the desired DR protocol type.
5. Click Apply.
Viewing storage systems
Display storage system list and tree views. See Storage systems views.
Considerations
•
You can use the GUI or CLUI. See Storage systems actions cross reference.
•
Tree views are available only in the GUI.
Procedures
This procedure uses the GUI.
1. In the navigation pane, select Storage Systems. The content pane displays storage systems.
2. Click the List tab. A tabular list of storage systems is displayed.
3. Click the Tree tab. A graphical tree of storage systems, their disk groups, and virtual disks is displayed. Click View to select another tree view.
Viewing storage system properties
View the properties of a specific storage system. See Storage systems properties summary.
Considerations
•
You can use the GUI or CLUI. See Storage systems actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Storage Systems.
2. On the List tab, select the storage system to view.
3. Select Actions > View Properties. The Storage System Properties window opens.
4. Click the properties tabs.
Storage system concepts
Replication licenses overview
To use local and remote replication features on a given array, the replication manager verifies
that appropriate HP replication licenses exist for that array.
•
For information on acquiring and installing local replication licenses, see the HP P6000
Replication Solutions Manager Administrator Guide.
•
For information on acquiring and installing remote replication licenses, see the HP P6000
Replication Solutions Manager Implementation Guide.
•
Replication licenses are verified during automatic and global refreshes. See Refreshing resources
(automatic) and Refreshing resources (global)
•
Replication license status is displayed in the GUI. See License status.
Replication license policies
HP license policies determine how the replication manager interacts with resources that require
HP replication licenses.
The following summarizes the policies.
•
Verification. License verification includes factors such as:
◦
Type of license
◦
Expiration date (if any)
◦
Capacity range (amount of storage)
•
Multi-array volumes. If a volume is comprised of virtual disks on two or more arrays, each
array must have an HP replication license to be considered fully licensed.
•
GUI actions. Some GUI actions may be disabled if an applicable HP license is not verified.
Disabled actions appear in gray and cannot be selected. A tooltip indicates why the action
cannot be performed. See Tooltips.
For example, if you select a virtual disk and the New Snapclone action is disabled, it can be
because the storage system that contains the disk does not have a local replication license.
•
Job commands. Some job commands may not be executed if an applicable HP license is not
verified when a job is run. An event in the job log indicates the command did not execute
due to a licensing issue.
For example, if a SnapcloneStorageVolume command in a job did not execute when the job
was run, it may be because the storage system that contains the disk did not have a local
replication license.
IMPORTANT: HP recommends that you include validation commands in jobs and perform
validation tests to prevent or minimize the effect of license-related issues at run time. A
minimal sketch follows this list.
•
CLUI commands. CLUI commands that run jobs or create implicit jobs behave like job
commands. An event in the job log indicates that the command did not execute due to a
licensing issue.
Controller software features
The properties of storage systems, their virtual disks, and replication features depend upon the
controller software version. For information about controller software versions that are supported
with HP P6000 Replication Solutions Manager, see Table 1.0, Array software solution compatibility,
in the HP P6000 Enterprise Virtual Array Compatibility Reference. See (page 288) for document
location.
For support, visit the HP Storage website. See (page 289).
Controller software features - local replication
HP P6000 Replication Solutions Manager supports the local replication features of the array
controller software. For more information, see Table 6.2, Supported local replication features by
controller software version, in the HP P6000 Enterprise Virtual Array Compatibility Reference. See
(page 288) for document location. For support of newer versions, visit the HP Storage website.
Controller software features - remote replication
HP P6000 Replication Solutions Manager supports the remote replication features of the array
controller software listed below. For more information, see Table 6.3, Supported remote replication
features by controller software version, in the HP P6000 Enterprise Virtual Array Compatibility
Reference. See (page 288) for document location. For support of newer versions, visit the HP Storage
website.
Disk groups
Disk group is the term for a named pool of storage on a storage system in which virtual disks can
be created. See Online and near-online disk group categories.
IMPORTANT: Unless noted otherwise, the term disk group applies to storage system disk groups
and not to host volume disk groups that are under the control of a host OS or logical volume
manager. Storage system disk groups are managed through HP P6000 interfaces and not by
logical volume managers.
See also DR groups Maximum log disk size.
Remote data replication protocols
Remote replication can be configured with the following protocols.
HP FC Data Replication Protocol. This protocol uses Fibre Channel in-order delivery in which all
exchanges between the source and destination use the same path. Performance may be limited
because routing cannot utilize available alternate paths. A storage system that is configured with this
protocol will only successfully create a DR group with another storage system that uses the same
protocol.
This protocol requires that fabric switches be configured for in-order delivery and Source
ID/Destination ID (SID/DID) routing. See the HP P6000 Continuous Access Implementation Guide
for specific settings.
CAUTION: Improper configuration of fabric switches can result in significant performance
degradation. HP FC Data Replication Protocol should not be used with a SAN configured for
exchange-based routing.
HP SCSI FC Compliant Data Replication Protocol. This protocol uses Fibre Channel exchange-based
routing in which all frames within a given exchange use the same path, but other exchanges can
use other paths. This allows for improved performance.
This protocol supports fabric switches that are configured for exchange-based routing. For more
information on data replication protocols, see the HP P6000 Continuous Access Implementation
Guide.
Remote replication tunnels
In HP P6000 Continuous Access remote replication configurations, controller software attaches
remote replication tunnels to the path between a controller host port on the source and destination
storage systems.
Although there can be several paths between source and destination controllers, there can only
be one tunnel connecting a source and destination controller (two tunnels between a given pair of
source and destination storage systems). Controllers optimize performance by automatically
changing the host ports (paths) that are used for the tunnels.
With some versions of controller software, you can set the priorities for host port selection, for
example to help load balance remote replication traffic. User requested priorities are targets; they
are not absolute settings. The controllers will make the actual host port assignments based on
real–time circumstances; for example, when a link, port or switch failure in the highest priority
choice requires assigning another port. See Controller software features - remote replication to
determine if your controller software supports this feature.
Default host port priorities for remote replication are:
Default priorities for tunnels
                1st       2nd       3rd       4th
Controller A    Port 3    Port 1    Port 4    Port 2
Controller B    Port 4    Port 2    Port 3    Port 1
Storage system types
The replication manager discovers and reports the following storage system types:
Array model           Controller type
EVA3000               HSV100
EVA5000               HSV110
EVA4000, EVA4100      HSV200, HSV200-B
EVA4400               HSV300, HSV300-S
EVA6000, EVA6100      HSV200, HSV200-B
P6300 EVA             HSV340
EVA6400               HSV400
P6500 EVA             HSV360
EVA8000, EVA8100      HSV210, HSV210-B
EVA8400               HSV450
P6350                 HSV340
P6550                 HSV360
9 Virtual disks
Working with virtual disks
About virtual disk resources
The Virtual Disks content pane displays the virtual disks (storage volumes) that you can interact
with through the replication manager. See GUI window Content pane.
Views
•
Tabular list view. See Virtual disks list view.
•
Graphical tree views. See: Virtual Disk tree view
Actions
•
Actions in the GUI. See Virtual disks actions summary.
•
You can also interact with virtual disks from a job and the CLUI. See Virtual disks actions
cross reference.
Properties
•
Properties displayed in the GUI. See Virtual disks properties summary.
•
You can also display properties from the CLUI. See the CLUI command Show Vdisk.
Virtual disk actions summary
The following virtual disk actions are available on the content pane. Some actions have equivalent
CLUI commands. See Virtual disks actions cross reference.
•
View Properties. View the properties of a virtual disk. Procedure.
•
Edit Properties. Change the name of a virtual disk. Procedure.
•
Delete. Deletes a virtual disk. Procedure.
•
Add to Managed Set. Add a virtual disk to a managed set. Procedure.
•
Remove from Managed Set. Remove a virtual disk from a managed set. Procedure.
•
Remove from DR group. Remove a virtual disk from a DR group. Procedure.
•
Add Presentations. Present a virtual disk to an enabled host. Procedure.
•
Remove Presentations. Remove a virtual disk's presentation from an enabled host. Procedure.
•
New DR Group. Create a new DR group pair. Procedure.
•
New Container. Create a container. Procedure.
•
New Snapshot. Create a point-in-time snapshot copy of a virtual disk. Procedure.
•
Migrate. Migrate a virtual disk to a new disk group and/or change the Vraid level of the
virtual disk. Both operations can be performed simultaneously. Procedure.
•
New Snapclone. Create a point-in-time snapclone copy of a virtual disk. Procedure.
•
Snapclone-Preallocated. Use a container to create a point-in-time snapclone copy of a virtual
disk. Procedure.
•
Snapshot-Preallocated. Use a container to create a point-in-time snapshot copy of a virtual
disk. Procedure.
•
Low-Level Refresh. Update the properties of a virtual disk. Procedure.
•
Launch the Device Manager. Access HP P6000 Command View from the replication manager.
Procedure.
•
List Events. Display a list of events for the resource. Procedure.
•
New Virtual Disk. Create a new virtual disk. Procedure.
•
Instant Restore. Restore a virtual disk from one of its replicas. Procedure.
•
Create Mirrorclone. Create a synchronized mirrorclone of a virtual disk. Procedure.
•
Detach Mirrorclone. Detach a fractured mirrorclone from its source virtual disk. Procedure.
•
Fracture Mirrorclone. Fracture a synchronized mirrorclone. Procedure.
•
Resync Mirrorclone. Resynchronize a fractured mirrorclone with its source virtual disk.
Procedure.
•
Migrate mirrorclone. Swap Vraid level and disk groups between the mirrorclone and its source
virtual disk without changing their names or roles. Procedure.
Virtual disk actions cross reference
You can work with virtual disks using GUI actions, jobs and CLUI commands. The following tables
provide a cross reference for performing typical tasks.
Create virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > New                         CreateStorageVolume                                                    Add Vdisk
Virtual Disks > New Container               CreateContainer                                                        Add Container

Delete virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > Delete                      DeleteStorageVolume                                                    Delete Vdisk
-                                           DeleteStorageVolumes                                                   -
-                                           DeleteContainer                                                        -

Edit virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > Add presentations           PresentStorageVolume                                                   Set Vdisk
Virtual Disks > Edit properties             -                                                                      Set Vdisk
Virtual Disks > Remove presentations        UnpresentStorageVolume                                                 Set Vdisk
-                                           SetStorageVolumeWriteCacheMode                                         Set Vdisk

Manage virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > Add to managed set          -                                                                      Set Managed_Set
Virtual Disks > Remove from managed set     -                                                                      Set Managed_Set

Mount virtual disks
GUI action                                  Job command                                                            CLUI command
-                                           Mount existing storage volumes (template)                              -

Other virtual disk tasks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > Low-Level Refresh           -                                                                      Set Vdisk
Virtual Disks > Migrate                     MigrateStorageVolume                                                   Set Vdisk
-                                           -                                                                      Set Container
-                                           -                                                                      Show Container
-                                           -                                                                      Show Snapclone
-                                           -                                                                      Show Snapshot
-                                           -                                                                      Show Vdisk
-                                           ConvertStorageVolumeIntoContainer                                      Set Vdisk
-                                           WaitForStorageVolumeWriteCacheFlush                                    -

Replicate virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > New DR group                -                                                                      -
Virtual Disks > New Snapclone               SnapcloneStorageVolume                                                 Add Snapclone
Virtual Disks > New Snapshot                SnapshotStorageVolume                                                  Add Snapshot
Virtual Disks > Preallocated-Snapclone      SnapcloneStorageVolumeToContainer                                      -
Virtual Disks > Instant Restore             -                                                                      Set Container
Virtual Disks > Migrate Mirrorclone         MigrateMirrorclone                                                     Set Mirrorclone
-                                           Replicate storage volumes (template)                                   -
-                                           Replicate storage volumes via preallocated snapclones (template)      -
-                                           Instant restore storage volumes to other storage volumes (template)   -

Validate virtual disks
GUI action                                  Job command                                                            CLUI command
-                                           ValidateSnapcloneStorageVolume                                         -
-                                           ValidateSnapshotStorageVolume                                          -

View virtual disks
GUI action                                  Job command                                                            CLUI command
Virtual Disks > View Properties             -                                                                      Show Vdisk
Virtual disk properties summary
For help on properties, see the following tabs in the virtual disk properties window.
•
Virtual disk. See General tab.
•
Hosts to which the virtual disk is presented. See Presentation tab.
•
DR group in which the virtual disk is a member, if any. See Remote Replication tab.
•
Managed sets in which the virtual disk is a member. See Membership tab.
See also Viewing virtual disk properties.
Virtual disk views
See the following examples: List view and System/Virtual Disk tree view.
List view
System/Virtual Disk tree view
Adding virtual disks to a managed set
Add virtual disks to a managed set.
Considerations
•
You can use the GUI or the CLUI. See Virtual disks actions cross reference.
•
A managed set can include only resources of the same type.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disks to add to a managed set.
3. Select Actions > Add to Managed Set. The Create New Managed Set window opens.
4. Select a managed set, or select Create New Managed Set and enter a name.
5. Click OK. The virtual disks are added.
Creating containers for virtual disks
Create a container on a storage system by specifying virtual disk properties. See virtual disk
Containers. To create containers by specifying a host volume or host volume group, see Creating
a managed set of containers for host volumes and Creating a managed set of containers for host
volume groups respectively.
Considerations
•
You can use the GUI, jobs, or the CLUI to create containers on a storage system. See Virtual
disks actions cross reference.
•
In the GUI, the New Container action can be started by selecting an existing virtual disk, and
the new container inherits the properties of that disk. Alternately, if a virtual disk is not selected,
you can specify all of the container properties.
Procedures
The following procedures use the GUI.
Creating a container from a selected disk
This procedure creates a new container that inherits the properties of an existing virtual disk.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select a virtual disk from which the new container will inherit properties.
3. Select Actions > New Container. The New Container window opens.
4. Follow the instructions in the window.
Creating a container by specifying its properties
This procedure creates a new container with properties that you specify.
1. In the navigation pane, select Virtual Disks.
2. Click the List tab.
3. If you are already viewing the Virtual Disk content pane, refresh the pane to clear any selections. See Refreshing the content pane.
4. Select Actions > New Container. The New Container window opens.
5. Follow the instructions in the window.
Creating a DR group pair
Create a DR group pair by specifying source virtual disks to remotely replicate. See DR group pair.
Considerations
•
To create a DR group by specifying the source virtual disks, you can only use the GUI. See
Virtual disks actions cross reference.
•
Maximum number of virtual disks per DR group. See Controller software features - remote
replication.
•
General guidelines for remote replication. See virtual disks Remote replication guidelines.
Procedure
This procedure uses the GUI. See also DR groups Creating a DR group pair.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the source virtual disks you want to remotely replicate.
3.
Select Actions > New DR Group.
The Replicate Wizard opens.
4.
Follow the instructions in the wizard.
Creating mirrorclones
Use a container to create a synchronized mirrorclone copy of a virtual disk. See virtual disks
Synchronized mirrorclones and Containers.
Considerations
•
You can use the GUI, or jobs to create a synchronized mirrorclone. See Virtual disks actions
cross reference.
•
A container that is the same size as the source virtual disk must already exist. Procedure.
•
Guidelines apply. See virtual disks Mirrorclone guidelines.
•
Dynamic capacity management guidelines apply. For more information, see Using
DC-Management with replication.
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the source virtual disk to replicate using the mirrorclone method.
3. Select Actions > Create Mirrorclone. The Mirrorclone window opens.
4. Follow the instructions in the window.
Creating snapclones (preallocated)
Use a container to create a point-in-time snapclone copy of a virtual disk. See virtual disks
Preallocated snapclones and containers.
Considerations
•
You can use the GUI, jobs, or the CLUI to create preallocated snapclones. See Virtual disks
actions cross reference.
•
A container that is the same size as the original virtual disk must already exist. Procedure.
•
Guidelines apply. See virtual disks Snapclone guidelines.
•
A flush of the source virtual disk write cache is required before replication is started. See
Preallocated snapclones.
•
Dynamic capacity management guidelines apply. See Using DC-Management with replication
for more information.
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to replicate using the preallocated snapclone method.
3. Select Actions > Snapclone-Preallocated. The New Preallocated Snapclone window opens.
4. Follow the instructions in the window.
Creating snapclones (standard)
Create a point-in-time snapclone copy of a virtual disk. See virtual disks Snapclones.
Considerations
•
You can use the GUI, jobs, or the CLUI to create snapclones. See Virtual disks actions cross
reference.
•
Guidelines apply. See virtual disks Snapclone guidelines.
•
Dynamic capacity management guidelines apply. See Using DC-Management with replication
for more information.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to replicate using snapclone replication.
3. Select Actions > New Snapclone. The New Snapclone wizard opens.
4. Follow the instructions in the wizard. The Monitor Job window opens. An implicit job is begun to execute the action.
5. After the implicit job completes, refresh the content pane to display the most current resources. See Refreshing the content pane.
Creating snapshots (preallocated)
Use a container to create a point-in-time snapshot copy of a virtual disk. See virtual disks Containers
and Preallocated snapshots.
Considerations
•
You can use the GUI, jobs, or the CLUI to create preallocated snapshots. See Virtual disks
actions cross reference.
•
A container that is the same size as the original virtual disk must already exist. Procedure.
•
Guidelines apply. See virtual disks Snapshot guidelines.
•
A flush of the source virtual disk write cache is required before replication is started. See
Preallocated snapshots. A job-level sketch of the flush sequence follows this list.
•
Dynamic capacity management guidelines apply. See Using DC-Management with replication
for more information.
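When this replication is scripted rather than run from the GUI, the flush is explicit. The following is a minimal sketch assembled from the commands used in the Replicate storage volumes via preallocated replication job template in the Jobs chapter; the %...% names are placeholder variables.

// Flush the source write cache and wait for the flush to complete before replicating.
$source_storvol_list = SetListVariable(%source_storvol_list%)
$dest_container_list = SetListVariable(%dest_container_list%)
SetStorageVolumesWriteCacheMode ($source_storvol_list, WRITE_CACHE_DISABLED, NOWAIT) onerror pauseat E1:
WaitForStorageVolumesWriteCacheFlush ($source_storvol_list) onerror pauseat E1:
// Prepare the containers and create the copies; the ALWAYS block re-enables the write cache
// whether or not the replication command succeeds.
PrepareContainers ( $source_storvol_list, $dest_container_list, FULLY_ALLOCATED ) onerror continue
DO {
$Rep1 = SnapshotStorageVolumesToContainers ($source_storvol_list, $dest_container_list, FULLY_ALLOCATED, NOWAIT) onerror pauseat E1:
} ALWAYS {
SetStorageVolumesWriteCacheMode ($source_storvol_list, WRITE_CACHE_ENABLED, NOWAIT) onerror continue
}
Exit (SUCCESS)
// Failure exit - no rollback needed.
E1: Exit (FAILURE)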
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to replicate using the preallocated snapshot method.
3. Select Actions > Snapshot-Preallocated. The New Preallocated Snapshot window opens.
4. Follow the instructions in the window.
Creating snapshots (standard)
Create a point-in-time snapshot copy of a virtual disk. See virtual disks Snapshots.
Considerations
•
You can use the GUI, jobs, or the CLUI to create snapshots. See Virtual disks actions cross
reference.
•
Guidelines apply. See virtual disks Snapshot guidelines.
•
Dynamic capacity management guidelines apply. See Using DC-Management with replication
for more information.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to replicate using the snapshot method.
3. Select Actions > New Snapshot. The New Snapshot wizard opens.
4. Follow the instructions in the wizard.
Migrating a virtual disk
Migrate a virtual disk to a new disk group and/or change the Vraid level of a virtual disk. Both
operations can be performed simultaneously.
Considerations
•
You can use the GUI, jobs, or the CLUI to migrate a virtual disk. See Virtual disks actions cross
reference.
•
Only virtual disks that do not have local or remote replicas can be migrated.
•
Only fully allocated virtual disks can be migrated. Virtual disks using thin provisioning cannot
be migrated.
•
While a migration is in progress, other operations cannot be performed on the virtual disk.
This includes presenting or unpresenting the virtual disk and creating local replicas.
•
When migrating a virtual disk, the write cache policy is set to write-back as part of the migrate
process. If the virtual disk had write cache policy set to write-through, the policy will be changed
to write-back.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the source virtual disk to migrate.
3. Select Actions > Migrate. The Migrate a Virtual Disk window opens.
4. Follow the instructions in the window.
Creating virtual disks
Create a virtual disk on a storage system.
Considerations
•
You can use the GUI, jobs or the CLUI to create virtual disks. See Virtual disks actions cross
reference.
•
The virtual disk must be a member of an enhanced disk group to use Vraid6.
•
If you have obtained a thin provisioning license, you have the option of creating a thin provision
virtual disk.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. Select Actions > New Virtual Disk. The New Virtual Disk window opens.
3. Follow the instructions in the window.
Deleting virtual disks
Delete a virtual disk on a storage system.
Considerations
•
You can use the GUI, jobs, or the CLUI to delete virtual disks. See Virtual disks actions cross
reference.
•
You cannot delete a virtual disk that is presented. You must first unpresent the disk. See
Unpresenting virtual disks
•
You cannot delete an original virtual disk that has a snapshot. You must first delete the
snapshots.
•
You cannot delete a virtual disk that is in a DR group. You must first remove it from the DR
group. You can remove a virtual disk from a DR group by deleting the DR group that contains
it, removing the virtual disk from the DR group, or removing replication.
CAUTION: Using the Delete action permanently deletes virtual disks and the data on them. You
cannot undo this action.
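If you script the deletion instead of using the GUI, a minimal job sketch looks like the following. The variable name is a placeholder, and the commands are those shown in the virtual disk actions cross reference and the job templates.

// Confirm the virtual disk is as expected before deleting it. Per the considerations above,
// the disk must already be unpresented, removed from any DR group, and without snapshots.
$storvol_unc1 = SetVariable(%storvol_unc1%)
ValidateStorageVolume ($storvol_unc1) onerror pauseat E1:
DeleteStorageVolume ($storvol_unc1) onerror pauseat E1:
Exit (SUCCESS)
// Failure exit - no rollback needed.
E1: Exit (FAILURE)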
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the source virtual disk to delete.
3. Select Actions > Delete. A confirmation window appears.
4. Click OK to confirm the action.
Detaching mirrorclones
Detach a fractured mirrorclone from its source virtual disk. The detached mirrorclone becomes an
independent (original) virtual disk. See Fractured mirrorclones.
Considerations
• You can use the GUI or jobs to detach fractured mirrorclones. See Virtual disks actions cross reference.
• A mirrorclone must be fractured before it can be detached. See Mirrorclone guidelines and Mirrorclone states.
Procedure
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the fractured mirrorclone virtual disk to detach. Alternately, you can select the mirrorclone's source virtual disk.
3. Select Actions > Detach Mirrorclone. The Detach Mirrorclone confirmation window opens.
4. Click OK. The window closes and the Monitor Job window opens. An implicit job is begun to execute the action.
5. After the implicit job completes, refresh the content pane to display the most current resources. See Refreshing display panes.
Editing virtual disk properties
Edit (set) the properties of a virtual disk.
Considerations
• You can use the GUI or the CLUI. See Virtual disks actions cross reference. The CLUI allows you to edit several different properties. The GUI action allows you to edit only the virtual disk name.
• Guidelines apply. See Virtual disk guidelines.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk whose properties you want to edit.
3. Select Actions > Edit Properties.
The Editing the Virtual Disk window opens.
Fracturing mirrorclones
Stop (fracture) the synchronized local replication from a source virtual disk to its mirrorclone. This
action converts a synchronized mirrorclone to a fractured mirrorclone. See fractured mirrorclones
and synchronized mirrorclones.
Considerations
• You can use the GUI or jobs to fracture a mirrorclone. See Virtual disks actions cross reference.
• Guidelines apply. See virtual disks Mirrorclone guidelines.
• When using jobs, you must explicitly flush the source virtual disk write cache before the fracture is started. See Fractured mirrorclone write cache flush.
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the mirrorclone virtual disk to fracture. Alternatively, you can select the mirrorclone's source virtual disk.
3. Select Actions > Fracture Mirrorclone.
The Fracture Mirrorclone confirmation window opens.
4. Click OK.
The window closes and the Monitor Job window opens. An implicit job is begun to execute the action.
5. After the implicit job completes, refresh the content pane to display the most current resources. See Refreshing display panes.
Launching the device manager
Access HP P6000 Command View from the replication manager. Each time you use this action in
the same replication manager session, a new window for HP P6000 Command View is opened.
Considerations
• You can only use the GUI to launch the device manager. The action is not available unless an individual resource is selected (highlighted) in the replication manager content pane.
• You must enter the security credentials (user name and password) to log on to HP P6000 Command View.
Procedure
1. In the navigation pane, select DR Groups, Storage Systems or Virtual Disks.
2. On a List or Tree tab, select any storage resource.
3. Select Actions > Launch the Device Manager.
A new browser window opens.
4. Respond to the security alert message, and then log on to HP P6000 Command View.
Listing individual resource events
Display the events for an individual resource.
Considerations
• You can only use the GUI.
• Applies only to individual DR groups, storage systems, and virtual disks.
Procedure
1. In the navigation pane, select a resource type.
2. On the List tab, select the specific resource whose events are to be displayed.
3. Select Actions > List events.
An events window for the resource opens.
Low-level refreshing virtual disks
Perform a low-level refresh of specific virtual disks and containers. See virtual disks Low-level refresh.
Considerations
• You can use the GUI or CLUI. See Virtual disks actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the specific virtual disks and containers whose properties are to be
updated.
3. Select Actions > Low-Level Refresh.
The Confirmation Action window appears.
4. To continue, click OK.
The virtual disk and container properties are updated.
Presenting virtual disks
Present a virtual disk to a host. See virtual disks Presentation.
Considerations
• You can use the GUI, jobs, or the CLUI to present a virtual disk to a host. See Virtual disks actions cross reference.
• You can allow storage controller software to assign a LUN. See LUN automatic assignment.
CAUTION: Presenting a virtual disk to more than one host at a time can cause I/O write conflicts
on the disk and the possible loss of host application data.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk you want to present to an enabled host.
3. Select Actions > Add Presentation.
The Add Presentations window opens.
4. Follow the instructions in the window.
Removing virtual disks from a managed set
Remove virtual disks from a managed set.
Considerations
• You can use the GUI or the CLUI. See Storage systems actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disks to remove from a managed set.
3. Select Actions > Remove From Managed Set.
The Select Managed Sets window opens.
4. Select the managed set from which to remove the virtual disks.
5. Click OK.
The virtual disks are removed from the set.
Removing virtual disks from a DR group pair
Remove virtual disks from a DR group pair. See DR group pair.
When you remove virtual disks from a source DR group, the source disks are retained. You can
discard or retain the remote copies (virtual disks).
Considerations
• You can use the GUI or CLUI. See Virtual disks actions cross reference.
• Guidelines apply. See virtual disks Remote replication guidelines.
• To remove virtual disks from a DR group pair, you must specify virtual disks in the source DR group. See DR group pair.
• You cannot directly remove virtual disks from a destination DR group.
• If you choose to discard the remote copies when the remote link is not operational, the remote virtual disks are not deleted. You must delete them manually.
• You cannot remove virtual disks from a DR group pair if remote replication is suspended. See DR group Suspension state.
• You cannot remove a virtual disk from a DR group if its remote copy is presented. You must first unpresent the remote copy.
CAUTION: If you discard a remote copy, the destination virtual disk and its data are deleted.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the source virtual disks you want to remove from a DR group pair.
3. Select Actions > Remove From DR Group.
The Remove RSM Replication window opens.
4. Follow the instructions in the window.
Restoring virtual disks (Instant Restore)
Restore a virtual disk by replacing its data with data from one of its replicas.
Considerations
• You can use the GUI, jobs, or the CLUI. See Virtual disks actions cross reference.
IMPORTANT: HP recommends that you use the GUI to restore virtual disks.
• The virtual disk to restore must already have a replica (mirrorclone, snapclone, or snapshot) that was created by the replication manager. Only those replicas can be selected in the Instant Restore wizard.
• Before using the following procedure, ensure there is no host I/O to the virtual disk being restored or its replica.
• If either disk is part of a host volume, do not use the procedure below. Instead, use the procedure Restoring a host volume (Instant Restore). See also Host volumes overview.
• A virtual disk cannot be restored from a snapshot of its mirrorclone. To restore from a mirrorclone, the mirrorclone must be fractured. See Fractured mirrorclones and Fracturing mirrorclones.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to restore.
3. Select Actions > Instant Restore.
The Instant Restore wizard opens.
4. Follow the instructions in the wizard.
Resynchronizing mirrorclones
Restart the synchronized local replication from a source virtual disk to its mirrorclone. This action
converts a fractured mirrorclone to a synchronized mirrorclone. See Fractured mirrorclones and
Synchronized mirrorclones.
Considerations
• You can use the GUI or jobs to resynchronize a mirrorclone to its source virtual disk. See Virtual disks actions cross reference.
• Guidelines apply. See virtual disks Mirrorclone guidelines.
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the fractured mirrorclone to resynchronize with its source. Alternatively, you can select the mirrorclone's source virtual disk.
3. Select Actions > Resync Mirrorclone.
The Resync Mirrorclone confirmation window opens.
4. Click OK.
The window closes and the Monitor Job window opens. An implicit job is begun to execute the action.
5. After the implicit job completes, refresh the content pane to display the most current resources. See Refreshing display panes.
Migrating a mirrorclone
Swap the Vraid level and disk group between a mirrorclone and its source virtual disk without changing their names or roles.
Considerations
• You can use the GUI, CLUI, or jobs to migrate a mirrorclone. See Virtual disks actions cross reference.
• After creating a mirrorclone, you must wait for it to normalize before migrating it.
• The mirrorclone must be in a synchronized state to be migrated.
Procedure
The following procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the mirrorclone virtual disk to swap. You can select multiple mirrorclones
and swap them all simultaneously.
3. Select Actions > Migrate Mirrorclone.
The Migrate Mirrorclone confirmation window opens.
4. Click OK.
The Monitor Job window opens. An implicit job is begun to execute the action.
5. After the implicit job completes, refresh the content pane to display the most current resources. See Refreshing display panes.
Unpresenting virtual disks
Unpresent a virtual disk from a host. See Virtual disk presentation.
Considerations
• You can use the GUI, jobs, or the CLUI to unpresent a virtual disk from a host. See Virtual disks actions cross reference.
• You can select multiple virtual disks to unpresent at one time.
CAUTION: When presentation is removed, the host can no longer perform I/O with the virtual
disk. This can cause loss of host application data.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select one or more virtual disks to unpresent from enabled hosts.
3. Select Actions > Remove Presentation.
The Remove Presentations window opens.
4. Follow the instructions in the window.
Viewing virtual disks
Display virtual disk list and tree views. See Virtual disk views.
Considerations
• You can use the GUI or CLUI to display lists. See Virtual disks actions cross reference.
• Tree views are available only in the GUI.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
The content pane displays virtual disks.
2. Click the List tab.
A tabular list of virtual disks is displayed.
3. Click the Tree tab.
A graphical tree of storage systems and their virtual disks is displayed.
Viewing virtual disk properties
View the properties of a specific virtual disk. See Virtual disks properties summary.
Considerations
• You can use the GUI or the CLUI. See Virtual disks actions cross reference.
Procedure
This procedure uses the GUI.
1. In the navigation pane, select Virtual Disks.
2. On the List tab, select the virtual disk to view.
3. Select Actions > View Properties.
The Virtual Disks Properties window opens.
4. Click the properties tabs.
5. To perform a low-level refresh of the properties in the General tab, click Refresh. See also Low-level refresh of virtual disks.
Virtual disk concepts
Virtual disks overview
The replication manager uses the terms virtual disk and virtual disk container to indicate a storage system's identification or name for a storage device. An important distinction between virtual disks and virtual disk containers is that virtual disks can be presented to hosts for I/O, but containers cannot be presented. See virtual disks Presentation and Containers.
Virtual disk properties are automatically discovered by the replication manager and are maintained
in the replication manager's database. See resources Automatic discovery (refresh). Virtual disks
and containers are displayed in the Virtual Disks content pane. See Virtual disks views. You can
use a GUI action, a job, or CLUI command to work with virtual disks and containers.
Virtual disks are also called storage volumes, especially in replication manager job commands.
Controller software features
The properties of storage systems, their virtual disks, and replication features depend upon the
controller software version. For information about controller software versions that are supported
with HP P6000 Replication Solutions Manager, see Table 1.0, HP P6000 software solutions compatibility, in the HP P6000 Enterprise Virtual Array Compatibility Reference. See (page 288) for the document location.
For support, visit the HP Storage website. See (page 289).
Controller software features - local replication
HP P6000 Replication Solutions Manager supports the local replication features of the HP P6000
EVA controller software. For more information, see Table 6.2, Supported local replication features
by controller software version, in the HP P6000 Enterprise Virtual Array Compatibility Reference.
See (page 288) for document location. For support of newer versions, visit the HP Storage website.
Controller software features - remote replication
HP P6000 Replication Solutions Manager supports the remote replication features of the HP P6000
EVA controller software. For more information, see Table 6.3, Supported remote
replication features by controller software version, in the HP P6000 Enterprise Virtual Array
Compatibility Reference. See (page 288) for document location. For support of newer versions, visit
the HP Storage website.
Cache policies
Each virtual disk on a storage system has a read cache and a write cache. The following are key
cache policies (settings): Write cache, Write cache mirror, and Read cache.
Write cache
The virtual disk write cache policy specifies how host writes to virtual disks are performed. Values
are:
• Write-back. The storage system notifies a host that a write operation is complete after data is written to the storage system cache memory (but before the data is written from the cache to disk media). Write-back is the default.
• Write-through. The storage system notifies a host that a write operation is complete after the data is written from the cache to disk media.
Write-back caching improves host write performance because data is written to the high-speed cache memory faster than to disk media. Write-through caching is safer in terms of fault tolerance.
Storage systems that are fully functional use write-back only. Under certain failure conditions,
storage systems automatically switch to write-through for data safety.
Actual and requested values. Policy values are reported as actual and requested. An actual value
is the policy that is in effect for the virtual disk. A requested value is a value that has been requested
but is not yet in effect. For example, a policy change can be requested when a virtual disk is not
presented to a host but the change does not take effect until the disk is presented to a host.
Write cache mirror
The virtual disk write cache mirror policy specifies whether each controller in a storage system
contains a copy of the other controller's write cache. The following values are reported:
• Enabled. In the storage system, each controller contains a copy of the other controller's write cache. Enabled is the default.
• Disabled. In the storage system, the controller does not contain a copy of the companion controller's write cache.
Mirrored write-back is the default mode for all virtual disks, provided that a cache battery or UPS
is present and operational for the preferred controller. Mirrored write-back provides the high
performance of write back under normal operation, but reverts to the safety of write-through in the
event of failure.
You cannot use the replication manager to set the write cache mirror policy.
Read cache
The virtual disk read cache policy specifies how host reads from virtual disks are performed. Values
are:
• On. The storage system satisfies host read requests, when possible, from the storage system's cache memory. On is the default.
• Off. The storage system always satisfies host read requests from the disk media.
Read caching increases host read performance because data is read from the high speed cache
memory faster than from disk media.
Containers
A container is disk space that is reserved, or preallocated, for later use as a mirrorclone, snapclone
or a snapshot. A container has most of the same properties as a virtual disk, including a name
and size. However, a container:
• Cannot be presented to a host
• Cannot store data
Implementation of container features is controller software dependent. See Controller software
features - local replication.
Containers for preallocated replication
Preallocated replication refers to a snapclone or snapshot that is created by copying data from a
source virtual disk to a container and immediately converting the container into a virtual disk.
Compared to standard snapclones and snapshots, the preallocated methods are faster. In cases
where host I/O must be suspended, the improved speed of preallocated replication reduces the
time that a host application is suspended.
Container planning
A container must be the same size as the virtual disk that you want to copy. For example, if you
have virtual disks of 10-, 20-, and 30-GB to copy, you should create 10-, 20-, and 30-GB containers.
A container of 25-GB would not be usable with any of those disks.
When used for mirrorclones and preallocated snapclones, the container can be in a different disk
group than the source virtual disk. When used for preallocated snapshots, the container must be
in the same disk group as the source virtual disk.
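The sizing and disk group rules above can be checked with simple logic. The following Python sketch is an illustration only; the function name and its inputs are hypothetical and are not part of the replication manager or its CLUI.

# Illustrative check of the container planning rules described above.
# The function name and inputs are hypothetical, not replication manager APIs.
def container_usable(container_gb, source_gb, same_disk_group, use_for):
    """Return True if a container can be used to replicate the source disk.

    use_for is one of 'mirrorclone', 'snapclone', or 'snapshot'.
    """
    if container_gb != source_gb:
        return False          # the container must match the source size exactly
    if use_for == "snapshot" and not same_disk_group:
        return False          # preallocated snapshots need the same disk group
    return True

# A 25 GB container is not usable with 10, 20, or 30 GB sources.
for source_gb in (10, 20, 30):
    print(source_gb, container_usable(25, source_gb, True, "snapclone"))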
Conversion of virtual disks to containers
A snapclone virtual disk can be converted to a container. This is useful when you repeatedly create
a snapclone of the same virtual disk, for example, when making daily backups of a virtual disk.
See the following job commands: ConvertStorageVolumeIntoContainer,
ConvertStorageVolumesIntoContainers, and ConvertStorageVolumesInManagedSetIntoContainers.
For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command
Reference.
The following virtual disks cannot be converted into containers:
• Virtual disks that are presented to hosts
• Snapshots
Container guidelines
The following guidelines apply to containers:
• The array must have a local replication license.
• When used for mirrorclones and preallocated snapclones, the container may be in a different disk group than the source virtual disk. When used for preallocated snapshots, the container must be in the same disk group as the source virtual disk.
• A container must be the same size as the source of the preallocated snapclone or snapshot.
• The redundancy (Vraid) level of the container determines the Vraid level of a preallocated snapclone. See Redundancy level (Vraid).
• For a preallocated snapshot, the redundancy (Vraid) level of the container must be the same or lower than the source. If the source has other snapshots, the Vraid level of the container must be the same as the other snapshots.
• Containers cannot be presented to hosts.
• A snapclone virtual disk can be converted to a container. See the following job commands: ConvertStorageVolumeIntoContainer, ConvertStorageVolumesIntoContainers, and ConvertStorageVolumesInManagedSetIntoContainers. For more information on jobs, see the HP P6000 Replication Solutions Manager Job Command Reference.
Virtual disks cannot be converted to containers if the disk is:
• A snapshot
• Presented to a host
• In the process of normalizing or being deleted. See Normalization.
Cross Vraid replication
Cross Vraid replication is a feature that allows the copy of a virtual disk to have a different
redundancy level (Vraid) than the source disk. See virtual disk Redundancy level.
Implementation of cross Vraid is controller software dependent. See Controller software features - local replication and Controller software features - remote replication.
Cross Vraid FAQ
Can I use cross Vraid when locally or remotely copying any virtual disk? Typically, yes. However,
the source virtual disk must be on a storage system that supports the feature. See Controller software
features - local replication and Controller software features - remote replication.
Why would I use cross Vraid? Typically, to create a copy that is smaller than the source virtual
disk.
For example, if you copy a disk whose redundancy level is Vraid 1 or Vraid 5 and use a cross
Vraid assignment of Vraid0, the copy will be smaller. This more efficient use of space can be
appropriate when the copy is retained only for a short time and is not intended to be a high
availability resource.
Disk groups
Disk group is the term for a named pool of storage on a storage system in which virtual disks can
be created. See Online and near-online disk group categories. On newer versions of controller
software you can create enhanced disk groups, which support Vraid6. To determine if your controller
software supports enhanced disk groups, see the HP P6000 Enterprise Virtual Array Compatibility
Reference.
IMPORTANT: Unless noted otherwise, the term disk group applies to storage system disk groups
and not to host volume disk groups that are under the control of a host OS or logical volume
manager. Storage system disk groups are managed through HP Enterprise Virtual Array interfaces
and not by logical volume managers.
Online and near-online disk categories
When a disk group is created using HP P6000 Command View, it is assigned a disk category of
online or near-online. Whenever a virtual disk is created, it assumes the disk category of its disk
group.
• Online disk category. The online disk category provides first-tier performance and reliability. All physical disk drives in this category of disk group are dual-ported, high-performance Fibre Channel drives.
• Near-online disk category. The near-online disk category provides second-tier performance and reliability. All physical disk drives in this category of disk group are lower-cost, hybrid Fibre Channel disk drives. Hybrid drives are dual-ported, FATA (Fibre Attached Technology Adapted) drives.
Typical uses and recommended category are as follows.

Disk use                  Online    Near-online
Disaster tolerance        yes       yes
Disk-to-disk backup       -         yes
Disk-to-tape backup       -         yes
Frequent file access      yes       -
Infrequent file access    -         yes
Highest performance       yes       -
Virus attack recovery     -         yes
Disk group capacity and occupancy
The capacity of a disk group is established when it is created using HP P6000 Command View
and is expressed in terms of redundancy level (Vraid). For example, the disk group named Test
might have the following Vraid-based capacities:
Vraid0: 337 GB
Vraid1: 156 GB
Vraid5: 270 GB
Vraid6: 224 GB
These capacity values are estimates only and can vary depending on the configuration.
The term occupancy refers to the amount of disk space (GB) being used in a disk group.
Disk group occupancy alarm level
The occupancy alarm level is the point (percent of capacity) at which the array issues a disk group occupancy warning. For example, if a disk group's capacity is 200 GB and the occupancy alarm level is 80%, the warning is issued when the total capacity of the virtual disks in the disk group reaches 160 GB.
The default occupancy alarm level is 95% of capacity.
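The warning point in the example above is simple arithmetic. The following Python sketch assumes the threshold is just the capacity multiplied by the alarm level percentage; it is an illustration, not replication manager output.

# Worked example of the disk group occupancy alarm level described above.
def occupancy_warning_point_gb(capacity_gb, alarm_level_percent):
    """Return the occupancy (GB) at which the warning is issued."""
    return capacity_gb * alarm_level_percent / 100.0

print(occupancy_warning_point_gb(200, 80))   # 160.0, as in the example above
print(occupancy_warning_point_gb(200, 95))   # 190.0, with the default 95% level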
Impacts of an occupancy warning
When an occupancy warning is issued:
• No virtual disks (LUNs) in the disk group can be created or enlarged.
• No mirrorclones, snapclones, or snapshots can be created in the disk group.
• The disk drive failure protection level (spares) in the disk group cannot be increased.
Cross Vraid guidelines
Cross Vraid (redundancy) guidelines:
• Snapclone copy. Redundancy level can be higher or lower than the source.
• Snapshot copy. Redundancy level cannot be higher than the source.
• Remote copy. Redundancy level can be higher or lower than the source.
See also virtual disk Redundancy level.
Quick reference
• Vraid0 (none) source. Snapclone copy: Vraid0, Vraid1, Vraid5, or Vraid6. Snapshot copy: Vraid0. Remote copy: Vraid0, Vraid1, Vraid5, or Vraid6.
• Vraid1 (high) source. Snapclone copy: Vraid0, Vraid1, Vraid5, or Vraid6. Snapshot copy: Vraid0, Vraid1, or Vraid5. Remote copy: Vraid0, Vraid1, Vraid5, or Vraid6.
• Vraid5 (medium) source. Snapclone copy: Vraid0, Vraid1, Vraid5, or Vraid6. Snapshot copy: Vraid0 or Vraid5. Remote copy: Vraid0, Vraid1, Vraid5, or Vraid6.
• Vraid6 source. Snapclone copy: Vraid0, Vraid1, Vraid5, or Vraid6. Snapshot copy: Vraid0, Vraid1, Vraid5, or Vraid6. Remote copy: Vraid0, Vraid1, Vraid5, or Vraid6.
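The combinations in the quick reference can be expressed as a small rule set. The Python sketch below is a simplified model inferred from the guidelines and table above; the redundancy ranking and the function name are assumptions, not replication manager behavior.

# Simplified model of the cross Vraid rules summarized above.
# The redundancy ranking is inferred from the Vraid descriptions in this
# chapter (Vraid0 none, Vraid5 medium, Vraid1 high, Vraid6 highest); it is
# an assumption, not a documented value.
REDUNDANCY_RANK = {"Vraid0": 0, "Vraid5": 1, "Vraid1": 2, "Vraid6": 3}

def copy_vraid_allowed(source_vraid, copy_vraid, copy_type):
    """Return True if the copy Vraid level is allowed for the given copy type."""
    if copy_type in ("snapclone", "remote"):
        return copy_vraid in REDUNDANCY_RANK        # higher or lower than source
    if copy_type == "snapshot":
        # A snapshot copy cannot have higher redundancy than its source.
        return REDUNDANCY_RANK[copy_vraid] <= REDUNDANCY_RANK[source_vraid]
    raise ValueError("unknown copy type: " + copy_type)

print(copy_vraid_allowed("Vraid1", "Vraid5", "snapshot"))   # True
print(copy_vraid_allowed("Vraid5", "Vraid1", "snapshot"))   # False
print(copy_vraid_allowed("Vraid0", "Vraid6", "snapclone"))  # True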
Instant restore overview (virtual disks)
The instant restore feature allows you to restore data on a virtual disk with data from one of its
previously created replicas (mirrorclone, snapclone or snapshot). The operation is instant because
the restored data is available within seconds for host I/O (the actual data transfer occurs in the
background).
For example, assume that the database named sales_db becomes corrupt. You can instantly restore
it to a prior state from one of its replicas.
sales_db (disk being restored) <====== sales_db_backup (replica to restore from)
See also, instant restore from Mirrorclones, Snapclones, and Snapshots.
Best practices
• For most situations, HP recommends that you perform instant restores using GUI actions, not jobs.
• To ensure data integrity, an instant restore should only be performed on an unmounted, unpresented virtual disk.
• Instant restore of the snapshot of a mirrorclone is supported on some versions of controller software. See Controller software features - local replication for the software versions that support this feature.
• If the virtual disk is mounted, use the following general approach.
1. Stop host I/O to the disk that is being restored.
2. Unmount the disk.
This forces the disk's write cache to be flushed (emptied).
3. Perform an instant restore to the disk. Procedure.
If the restored host volume does not have consistent data left in cache after following this procedure, repeat the procedure.
See Restoring host volumes (instant restore) for the detailed procedure.
Instant restore from mirrorclones
This feature allows you to restore data on a virtual disk with data from its fractured mirrorclone.
For example, restore the virtual disk named sales_db with the data from its fractured mirrorclone sales_db_backup.
sales_db (disk being restored) <====== sales_db_backup (fractured mirrorclone to restore from)
See also, Instant restore overview (virtual disks).
Scenario, restoring from "a planned point-in-time"
1. Create a mirrorclone of the disk to be restored at some future date. Procedure.
2. At a point-in-time you want, fracture the mirrorclone.
The mirrorclone contains point-in-time data.
3. When necessary, restore the disk from its mirrorclone. Procedure.
Scenario, restoring from "the most recent"
1. Create a mirrorclone of the disk to be restored at some future date. Procedure.
2. Leave the mirrorclone in a synchronized state with its source.
The mirrorclone always contains the most recent data.
3. When necessary to restore the disk, fracture its mirrorclone. Procedure.
4. When a restore is required, restore the disk from its mirrorclone. Procedure.
Instant restore from snapclones
This feature allows you to restore data on a virtual disk with data from one of its snapclones. For
example, restore the virtual disk named sales_db with the data from the snapclone sales_db_backup.
sales_db (disk being restored) <====== sales_db_backup (snapclone to restore from)
See also, Instant restore overview (virtual disks).
Scenario, restoring from "planned points-in-time"
1. At the points-in-time you want (for example, daily), create snapclones or preallocated snapclones of the disk. The snapclones contain point-in-time data. Procedures: Creating snapclones and Creating preallocated snapclones.
2. When necessary, restore the disk from an appropriate snapclone. Procedure.
Instant restore from snapshots
This feature allows you to restore data on a virtual disk with data from one of its snapshots. For
example, restore the virtual disk named sales_db with the data from one of its snapshots
sales_db_backup_B.
sales_db (disk being restored) <====== sales_db_backup_B, sales_db_backup_C (snapshots to restore from)
See also, Instant restore overview (virtual disks).
Scenario, restoring from "planned points-in-time"
1. At the points-in-time you want (for example, daily), create snapshots or preallocated snapshots of the disk.
The snapshots contain point-in-time data. Procedures: Creating snapshots and Creating preallocated snapshots.
2. When necessary, restore the disk from an appropriate snapshot. Procedure.
Instant restore
The instant restore feature in the replication manager uses snapclone replication to copy the contents
of a virtual disk to a like-sized container. The container is then converted to a virtual disk. See
Instantly restoring disks.
Low-level refresh of virtual disks
Update replication manager database entries by manually performing a discovery and refresh of
individual virtual disks.
The action does the following:
• Performs a discovery refresh at the storage system level to gather the properties of the specified virtual disks.
• Updates the replication manager database with the new properties.
Low-level refresh by specifying a DR group
When applied to a DR group, the properties of the log disk and the DR group state are updated.
To update the properties of the DR group pair, apply the refresh to both the source and destination DR groups.
LUN
In each storage system, logical unit numbers (LUNs) are assigned to its virtual disks. When a virtual
disk is presented to hosts, the storage system and the hosts perform I/O by referencing the LUN.
At a low level, a host OS typically reports each storage device that it detects in the format c#t#d#, where:
c# identifies a host I/O controller
t# identifies the target storage system on the controller
d# identifies the virtual disk (LUN) on the storage system
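As an illustration of that reporting format, the following Python sketch splits a hypothetical device name such as c2t1d3 into its parts. The example name and the parsing code are illustrative assumptions, not output from any particular host OS.

# Hypothetical example of splitting a c#t#d# style device name into its parts.
import re

def parse_ctd(device_name):
    """Return (controller, target, lun) numbers from a c#t#d# string."""
    match = re.fullmatch(r"c(\d+)t(\d+)d(\d+)", device_name)
    if match is None:
        raise ValueError("not in c#t#d# format: " + device_name)
    controller, target, lun = (int(part) for part in match.groups())
    return controller, target, lun

print(parse_ctd("c2t1d3"))   # (2, 1, 3): controller 2, target 1, LUN 3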
Automatic LUN assignment
• When presenting a virtual disk to a host, enter 0 to allow the storage controller software to automatically assign the LUN.
Mirrorclones - fractured
Mirrorclone replication establishes and maintains a copy of an original virtual disk, via a local replication link. See virtual disk Types.
source =====||===== mirrorclone (local link, fractured)
When the local replication between a synchronized mirrorclone and its source is stopped by an
action or command, the mirrorclone is said to be fractured. In a fractured state, the mirrorclone is
not updated when the source virtual disk is updated. At the instant replication is stopped, the
mirrorclone is a point-in-time copy of its source. See also Mirrorclone states and Synchronized
mirrorclones.
Task summary for fractured mirrorclones
Fractured mirrorclones
Deleting: No. The disk must first be detached, then deleted.
Detaching: Yes.
Fracturing: Not applicable.
Presenting: Yes. The disk can immediately be presented to hosts for I/O.
Replicating - snapclones: No. Snapclones of mirrorclones are not supported.
Replicating - snapshots: Yes. Multiple snapshots are allowed.
Restoring: Yes. The disk must be unpresented first.
Resynchronizing: Yes. The disk must be unpresented first.
Swapping: No.
See also Mirrorclone guidelines and Mirrorclone FAQ.
Mirrorclone write-cache flush
The source virtual disk write cache must be flushed before a fracture is started. (See cache policies
Write cache.) This ensures that the source virtual disk and its mirrorclone contain identical data
when the fracture occurs. The following table shows how a write cache flush is implemented.
IMPORTANT: When using jobs, you must explicitly ensure that write caches are flushed.

GUI action
Flush implementation: The replication manager automatically sets the source disk to write-through mode and ensures the flush is completed before starting the fracture.
Write cache setting after replication: When the fracture is completed, the controller software automatically sets the source disk and mirrorclone to write-back mode.

Job
Flush implementation: You must include job commands to set the source disk to write-through mode and wait for the flush to complete before starting the fracture.
Write cache setting after replication: If you want the source disks or mirrorclones to be in write-through mode, you must explicitly set them.
Fracture for disk group failure
In the unusual event that a source virtual disk with a mirrorclone experiences a disk group failure,
the mirrorclone is automatically fractured by the array. After the failure is corrected, the source
can be restored from the fractured mirrorclone.
The mirrorclone feature is controller software version dependent. See Controller software features
- local replication.
Mirrorclones - synchronized
Mirrorclone replication establishes and maintains a copy of an original virtual disk, via a local
replication link. See virtual disk Types.
source ===========> mirrorclone (local link, synchronized)
When first created (and whenever resynchronized by an action or command), a mirrorclone is
said to be synchronized. In a synchronized state, the mirrorclone is automatically updated whenever
its source virtual disk is updated. See also mirrorclone states and Fractured mirrorclones.
Task summary for synchronized mirrorclones
Synchronized mirrorclones
Deleting: No. The disk must first be detached, then deleted.
Detaching: No. The disk must first be fractured, then detached.
Fracturing: Yes.
Presenting: No.
Replicating - snapclones: No. Snapclones of mirrorclones are not supported.
Replicating - snapshots: No. The disk must first be fractured, then replicated.
Restoring: No. The disk must first be fractured, then used to restore.
Resynchronizing: Not applicable.
Swapping: Yes.
See also general Mirrorclone guidelines and Mirrorclone FAQ.
Benefits of separate disk groups
Mirrorclones created in a different disk group than their source provide the following benefits:
• Hardware protection. Compared to a snapshot, a mirrorclone is better protected from hardware failure. (A snapshot must be in the same disk group as its source.)
• Redundancy levels (Vraid). Compared to a snapshot, a mirrorclone can have a higher redundancy level. (A snapshot must have the same or lower redundancy level as its source.)
• Optimization. To optimize cost and performance, a mirrorclone can have a different type of physical disks than its source.
The mirrorclone feature is controller software version dependent. See Controller software features
- local replication.
Mirrorclone FAQ
• How can I tell a mirrorclone from other types of virtual disks?
Because mirrorclones are not independent virtual disks, they are identified differently than original (independent) virtual disks. See virtual disk Types.
• How long does it take to create a mirrorclone?
A mirrorclone requires only a matter of seconds to create.
• What do the terms synchronized and fractured refer to?
When a mirrorclone is in a synchronized state, the local replication link to its source is active. Changes to data on the source are automatically replicated to the mirrorclone.
When a mirrorclone is in a fractured state, the local replication link is inactive. Changes to data on the source are not replicated to the mirrorclone.
See Synchronized mirrorclones and Fractured mirrorclones.
• When can a host read from or write to a mirrorclone?
Hosts can read from and write to fractured mirrorclones but not synchronized mirrorclones.
• After I create a mirrorclone, can I delete the source virtual disk?
No.
• Can I make multiple mirrorclones of a virtual disk?
No. See "Mirrorclone guidelines" (page 265).
• What is the maximum number of mirrorclones on a storage system?
There is no limit.
• Can I create a mirrorclone of a mirrorclone?
No.
• Can I create snapclones of a mirrorclone?
No.
• Can I create snapshots of a mirrorclone?
Yes.
• Can I use the round robin feature to make multiple snapshots of a fractured mirrorclone?
Yes.
• Can I create mirrorclones of virtual disks that are in DR groups?
Yes, but only with specific versions of controller software. See Controller software features - local replication.
Mirrorclone guidelines
The following general guidelines apply:
• The array must have a local replication license. See Replication licenses overview.
• A mirrorclone can be in a different disk group than its source. For optimum protection from hardware failures, HP recommends creating a mirrorclone in a different disk group than its source. (A mirrorclone is created in the same disk group as its source, unless specified otherwise.)
• The redundancy (Vraid) level of a mirrorclone can be the same, lower, or higher than its source. See Redundancy level (Vraid).
• The maximum number of mirrorclones per source is one.
• Neither the source disk nor its mirrorclone can be a member of a DR group.
• A detached mirrorclone cannot be reattached to its source.
A mirrorclone cannot be created if the intended source virtual disk:
• Is a snapshot or has any snapshots.
• Has any snapclones that are in the process of being normalized.
• Is a member of a DR group.
Synchronized mirrorclone guidelines
Synchronized mirrorclones
Deleting: No. The disk must first be fractured and detached, then deleted.
Detaching: No. The disk must first be fractured, then detached.
Fracturing: Yes.
Presenting: No. The disk must first be fractured, then presented.
Replicating - snapclones: No. Snapclones of mirrorclones are not supported.
Replicating - snapshots: No. The disk must first be fractured, then replicated.
Restoring: No. The disk must first be fractured, then used to restore.
Resynchronizing: Not applicable.
Swapping: Yes.
Fractured mirrorclone guidelines
Fractured mirrorclones
Deleting: No. The disk must first be detached, then deleted.
Detaching: Yes.
Fracturing: Not applicable.
Presenting: Yes. The disk can immediately be presented to hosts for I/O.
Replicating - snapclones: No.
Replicating - snapshots: Yes. Multiple snapshots are allowed.
Restoring: Yes. The disk must be unpresented first.
Resynchronizing: Yes. The disk must be unpresented first.
Swapping: No.
Mirrorclone states
The following mirrorclone states are reported by the replication manager.
fractured: There is no replication activity between the mirrorclone virtual disk and its source virtual disk. At the instant of the fracture, data on the mirrorclone is identical to its source. After a fracture, data on the two disks may no longer be identical due to host I/O to the source or the mirrorclone.
restore in progress: In response to a replication command, data is being copied from the mirrorclone virtual disk to its original source virtual disk. Data on the original source is not yet identical to its mirrorclone.
synchronized or normalized: The source virtual disk and its mirrorclone are synchronized. Data on the mirrorclone is identical to its source. Any changes to data on the source virtual disk are automatically copied to its mirrorclone.
sync in progress: In response to a replication command, data is being copied from the source virtual disk to its mirrorclone virtual disk. Data on the mirrorclone is not yet identical to its source.
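The actions described in this chapter move a mirrorclone between these states. The following Python sketch is a simplified model based only on the descriptions above; the state and action names are informal labels, not values reported by the replication manager.

# Simplified model of the mirrorclone state transitions described in this chapter.
# "sync in progress" and "restore in progress" are transient states that the
# array passes through while data is being copied.
TRANSITIONS = {
    ("synchronized", "fracture"): "fractured",
    ("fractured", "resynchronize"): "synchronized",
    ("fractured", "detach"): "independent virtual disk",
}

def next_state(state, action):
    """Return the resulting state, or None if the action is not allowed."""
    return TRANSITIONS.get((state, action))

print(next_state("synchronized", "fracture"))   # fractured
print(next_state("synchronized", "detach"))     # None: must be fractured first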
Normalization
Snapclone normalization (unsharing)
Snapclone normalization is a local replication background process in which a storage system
copies the data from the initial snapshot to an independent virtual disk. Snapclone normalization
is also called unsharing.
Some actions should not be performed during snapclone normalization. When creating jobs, use
wait commands to ensure that normalization is completed. See the
WaitStorageVolumeNormalization, WaitStorageVolumesNormalization, and
WaitVolumeGroupNormalization job commands. For more information on jobs, see the HP P6000
Replication Solutions Manager Job Command Reference.
Remote copy normalization
Remote copy normalization is a remote replication background process in which two storage
systems ensure that the data on the source and destination virtual disks in a DR group are identical.
Some actions should not be performed during remote copy normalization. When creating jobs,
use wait commands to ensure that normalization is completed. See the WaitDrGroupNormalization
job command.
Preferred controller
Preferred controller specifies which controller manages and presents the virtual disk. The primary
purpose is for load balancing. Although set by a storage administrator, the controller software can
override the setting when required. Values are:
• No preference. Presentation of the virtual disk is completely controlled by the controller software.
• Path A - Failover only. Controller A presents the virtual disk to hosts when both controllers are simultaneously started.
• Path A - Failover/failback. Controller A presents the virtual disk to hosts, but if controller A fails, control of the virtual disk is transferred to controller B, then transferred back to controller A when it is available again.
• Path B - Failover only. Controller B presents the virtual disk to hosts when both controllers are simultaneously started.
• Path B - Failover/failback. Controller B presents the virtual disk to hosts, but if controller B fails, control of the virtual disk is transferred to controller A, then transferred back to controller B when it is available again.
Presentation (to host)
Presentation of a virtual disk to a host makes the virtual disk visible to the host and allows the host
to perform I/O with it. After a virtual disk is presented to an enabled host, the replication manager
can identify the presented virtual disk as a host volume.
CAUTION: Presenting a virtual disk to more than one host at a time, or multiple times to the same
host, can cause I/O write conflicts on the disk and the possible loss of host application data.
Remote replication guidelines
Storage arrays
• Source and destination arrays must have remote replication licenses. See Replication licenses overview.
• The array selected for the destination DR group depends on the remote replication configuration. For supported configurations, see the HP P6000 Continuous Access Implementation Guide.
• A storage array can have DR group relationships with up to two other storage arrays.
• The maximum number of virtual disks in a DR group and the maximum number of DR groups per array vary with controller software versions. See Controller software features - remote replication.
DR groups and failover
• Failover of a DR group pair is permitted only by specifying a destination DR group.
• When failsafe on unavailable member is enabled, a DR group pair cannot be suspended. See Failsafe on unavailable member and Suspend on failover.
• When a DR group pair is suspended, it cannot be failed over or reverted to home.
• A DR group cannot be deleted if a virtual disk in the destination DR group is presented.
• For additional help on usage, see Using DR groups.
Virtual disks
• All virtual disks that contain the data for an application must be in the same DR group.
• When in enhanced asynchronous write mode, virtual disks cannot be added to or deleted from a DR group. This restriction is controller software version dependent. See Controller software features - remote replication.
• When suspended, virtual disks cannot be removed from the source or destination DR group.
• Virtual disks cannot be directly added to a destination DR group.
• To be added to a source DR group, a virtual disk:
◦ Cannot be a member of another DR group
◦ Cannot be a snapshot or a mirrorclone. See virtual disks Types.
◦ Cannot have a mirrorclone. This restriction is controller software version dependent. See Controller software features - remote replication.
◦ Must be in a normal operational state. See resources Operational states.
◦ Must use mirrored cache. See virtual disks Cache policies.
◦ Must have the same presentation status as other virtual disks in the DR group. With some versions of controller software, the virtual disk must be presented to a host. See storage systems Controller software features - remote replication. See also virtual disks Presentation.
Redundancy (Vraid) levels
Vraid is an HP term for implementing RAID storage (Redundant Array of Independent Disks). Virtual
disks with HP Vraid use three key RAID methods: data striping, data mirroring, and parity error
checking.
Data striping improves speed by performing virtual disk I/O with an entire group of physical disks
at the same time. Data mirroring provides data redundancy by storing data and a copy of the
data. Parity error checking provides automatic detection and correction if corruption of a physical
disk occurs. Unlike traditional RAID, all HP P6000 Vraid levels distribute data across all available
physical disks.
The redundancy (Vraid) level for a virtual disk determines the virtual disk's availability (data
protection) and influences its I/O performance. Once a virtual disk is created, its Vraid type cannot
be changed.
Vraid levels are as follows. (Approximate space overheads are illustrated in the example after this list.)
• Vraid0. Vraid0 (striping) is optimized for speed and disk space utilization but does not provide any redundancy.
IMPORTANT: HP does not recommend using Vraid0 when high availability is required.
• Vraid1. Vraid1 (striping with mirroring) is optimized for speed and high data redundancy. A Vraid1 virtual disk can automatically recover (reconstruct) from the failure of one physical disk. Vraid1 uses about twice the physical disk space of Vraid0.
• Vraid5. Vraid5 (striping with parity) is optimized for speed, disk space utilization, and moderate redundancy. A Vraid5 virtual disk can automatically recover (reconstruct) from the failure of one physical disk. Vraid5 uses about 20% more physical disk space than Vraid0.
• Vraid6. Vraid6 (striping with dual parity) is optimized for speed and the highest redundancy. A Vraid6 virtual disk can automatically recover (reconstruct) from the concurrent failure of two physical disks. Vraid6 uses about 33% more physical disk space than Vraid0. Vraid6 support is dependent on the controller software version. See the HP P6000 Enterprise Virtual Array Compatibility Reference.
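As a rough illustration of the space overheads quoted above, the following Python sketch estimates the physical space consumed by a virtual disk at each Vraid level relative to Vraid0. The multipliers come from the approximate figures in the list; actual consumption varies by configuration.

# Rough space estimate per Vraid level, using the approximate overheads above.
# These multipliers are approximations only; actual values vary by configuration.
OVERHEAD_VS_VRAID0 = {
    "Vraid0": 1.00,   # striping only, no redundancy
    "Vraid5": 1.20,   # about 20% more than Vraid0
    "Vraid6": 1.33,   # about 33% more than Vraid0
    "Vraid1": 2.00,   # about twice the space of Vraid0
}

def approx_physical_gb(requested_gb, vraid_level):
    return requested_gb * OVERHEAD_VS_VRAID0[vraid_level]

for level in ("Vraid0", "Vraid5", "Vraid6", "Vraid1"):
    print(level, approx_physical_gb(100, level), "GB for a 100 GB virtual disk")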
Disk groups with large physical disks
IMPORTANT: If a disk group includes large physical disks (for example, 1 TB or larger), HP
recommends using Vraid6 for the virtual disks.
The recovery (reconstruction) time for virtual disks increases with the size of the physical disks in
the disk group. With Vraid1 or Vraid5, a second physical disk failure during the longer recovery
time could prevent a complete recovery. The use of Vraid6 reduces the risk.
Snapclones
Snapclone replication instantly creates an independent, point-in-time copy of a virtual disk. The
copy begins as a fully allocated snapshot, and then automatically becomes an independent virtual
disk. The copy is called a snapclone. See also virtual disks Snapclone FAQ and Snapclone
guidelines.
The snapclone property indicates whether a virtual disk can be locally replicated using the snapclone
method. Values are:
• Yes. The virtual disk complies with snapclone guidelines. Snapclone replication can be performed.
• No. The virtual disk does not comply with snapclone guidelines. Snapclone replication cannot be performed.
Preallocated snapclones
Preallocated snapclone refers to a snapclone that is created by copying data from a source virtual
disk to a container and immediately converting the container into a virtual disk. Compared to a
standard snapclone, creating a preallocated snapclone is faster. In cases where host I/O must be
suspended, the improved speed of preallocated snapclones reduces the time that a host application
is suspended.
When a preallocated snapclone is created, the source virtual disk write cache must be flushed
before replication is started. (See cache policies Write cache.) This ensures that the source virtual
disk and snapclone copy contain identical data. The following table shows how a write cache
flush is implemented.
IMPORTANT: When using jobs or the CLUI, you must explicitly ensure that write caches are flushed.
GUI action
Flush implementation: The replication manager automatically sets the source disk to write-through mode and ensures the flush has completed before starting the replication.
Write cache setting after replication: When replication is complete, the controller software automatically sets the source disk and preallocated copy (converted container) to write-back mode.

Job
Flush implementation: You must include job commands to set the source disk to write-through mode and wait for the flush to complete before starting the replication.
Write cache setting after replication: If you want the source disks or copies to be in write-through mode, you must explicitly set them.

CLUI
Flush implementation: You must issue CLUI commands to set the source disk to write-through mode and wait for the flush to complete before starting the replication.
Write cache setting after replication: If you want the source disks or copies to be in write-through mode, you must explicitly set them.
This feature is controller software version dependent. See Controller software features - local
replication. See also virtual disk Containers.
Snapclone FAQ
• How can I tell a snapclone from other types of virtual disks?
Because snapclones are independent virtual disks, they are identified as original (active) virtual disks. This distinguishes them from snapshots, but not from other independent virtual disks. See virtual disks Types.
• How long does it take to create a snapclone?
A snapclone requires only a matter of seconds to create, no matter how large the source. However, the snapclone does not become an independent virtual disk until unsharing is completed.
• What is snapclone normalization or unsharing?
Normalization or unsharing is a snapclone background process. See virtual disks Normalization.
• When can a host read from or write to a snapclone?
A host can immediately read from and write to a snapclone, even during the unsharing process.
• After I create a snapclone, can I delete the source virtual disk?
Yes. However, you cannot delete the source virtual disk until the background unsharing process is completed. After the snapclone is an independent virtual disk, you can delete the source.
Snapclone guidelines
The following guidelines apply to snapclone virtual disks:
• The array must have a local replication license. See Replication licenses overview.
• A snapclone can be in a different disk group than the source. (A snapclone is created in the same disk group as its source, unless specified otherwise.)
• The redundancy (Vraid) level of a snapclone can be the same, lower, or higher than the source. See Redundancy level (Vraid).
• Until a snapclone is normalized, another snapclone of the same source cannot be created. See Normalization.
Snapclones cannot be created when the disk to be replicated is:
• A snapshot
• A disk that has a snapshot
• In the process of normalizing or being deleted
Snapshots
Snapshot replication of a virtual disk instantly creates a virtual, point-in-time copy of the disk. The
copy is called a snapshot. See also virtual disks Snapshot types and Snapshot FAQ.
The snapshot property indicates whether a virtual disk can be locally replicated using the snapshot
method. Values are:
• Yes. The virtual disk complies with snapshot guidelines. Snapshot replication can be performed.
• No. The virtual disk does not comply with snapshot guidelines. Snapshot replication cannot be performed.
Preallocated snapshots
Preallocated snapshot refers to a fully allocated snapshot that is created by copying data from a
source virtual disk to a container and immediately converting the container into a virtual disk.
Compared to a standard fully allocated snapshot, creating a preallocated snapshot is faster. In
cases where host I/O must be suspended, the improved speed of preallocated snapshots reduces
the time that a host application is suspended.
When a preallocated snapshot is created, the source virtual disk write cache must be flushed before replication is started. (See cache policies Write cache.) This ensures that the source virtual disk and the snapshot copy contain identical data. The following table shows how a write cache flush is implemented.
IMPORTANT: When using jobs or the CLUI, you must explicitly ensure that write caches are flushed.
GUI action
Flush implementation: The replication manager automatically sets the source disk to write-through mode and ensures the flush has completed before starting the replication.
Write cache setting after replication: When replication is complete, the controller software automatically sets the source disk and preallocated copy (converted container) to write-back mode.

Job
Flush implementation: You must include job commands to set the source disk to write-through mode and wait for the flush to complete before starting the replication.
Write cache setting after replication: If you want the source disks or copies to be in write-through mode, you must explicitly set them.

CLUI
Flush implementation: You must issue CLUI commands to set the source disk to write-through mode and wait for the flush to complete before starting the replication.
Write cache setting after replication: If you want the source disks or copies to be in write-through mode, you must explicitly set them.
This feature is controller software version dependent. See Controller software features - local
replication. See also virtual disk Containers.
Snapshot FAQ
• How can I tell a snapshot from other types of virtual disks?
Because snapshots are not independent virtual disks, they are identified differently than original (active) virtual disks. See virtual disks Types.
• How long does it take to create a snapshot?
A snapshot requires only a matter of seconds, no matter how large the original (active) virtual disk.
• If it is virtual, can a host write to a snapshot?
Yes. A snapshot is functionally equivalent to a physical disk with both read and write capability.
• After I create a snapshot, can I delete the original (active) virtual disk?
No. A snapshot always relies, at least in part, on the original (active) virtual disk for data. If the original virtual disk is deleted, its associated snapshot becomes unusable. A snapshot should be thought of as a temporary copy.
• Can I make multiple snapshots of an original (active) virtual disk?
Yes. However, there is a limit. See virtual disks Snapshot guidelines.
• What is the maximum number of snapshots on a storage system?
There is no limit. However, the greater the number of snapshots, the longer it takes to shut down the storage system during maintenance and upgrade activities.
• Can I create a snapshot of a snapclone?
Yes.
• Can I create a snapshot of a snapshot?
No.
Snapshot guidelines
The following guidelines apply to snapshot virtual disks:
• The array must have a local replication license. See Replication licenses overview.
• The maximum number of snapshots per source varies with controller software versions. See Controller software features - local replication.
• A snapshot is always created in the same disk group as the source virtual disk.
• The redundancy level (Vraid) of a snapshot must be the same or lower than the source. See Redundancy level (Vraid).
• All snapshots of the same virtual disk must be the same type (demand allocated or fully allocated) and redundancy level. See Snapshot types (allocation policy).
• If the disk group has insufficient space to increase the capacity of demand-allocated snapshots, the snapshots will automatically be invalidated, but the source virtual disks will continue accepting requests.
• Snapshots count against the maximum number of virtual disks per array.
• You can perform an instant restore of a snapshot of a mirrorclone.
Snapshots cannot be created when the disk to be replicated is:
• A snapshot
• In the process of normalizing or being deleted. See Normalization.
Snapshots per virtual disk
In HP XCS controller software, the maximum number of snapshots per virtual disk varies with the
size of the disk. This is because the total snapshot size (per disk) cannot exceed 15 TB.
Virtual disk snapshot estimator

Source size (TB)    Snapshots (max)
> 0 to 0.94         16
0.95 to 1.00        15
1.01 to 1.07        14
1.08 to 1.15        13
1.16 to 1.25        12
1.26 to 1.36        11
1.37 to 1.50        10
1.51 to 1.67        9
1.68 to 1.88        8
1.89 to 2.00        7
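The limit behind the table is the 15 TB total: the maximum snapshot count for a source is the largest whole number of snapshots whose combined capacity stays within 15 TB, capped at 16 per virtual disk. The following Python sketch reproduces the estimator; it is an approximation inferred from the table (whose boundary values appear to be rounded to two decimals), not an official sizing tool.

import math

def max_snapshots(source_size_tb, total_limit_tb=15.0, per_disk_cap=16):
    """Estimate the maximum snapshots for an XCS source virtual disk.

    Inferred from the estimator table: the combined snapshot capacity
    (count x source size) may not exceed 15 TB, and no more than 16
    snapshots are allowed per virtual disk.
    """
    if source_size_tb <= 0:
        raise ValueError("source size must be greater than 0 TB")
    return min(per_disk_cap, math.floor(total_limit_tb / source_size_tb))

# Examples matching the table above (results at the exact printed
# boundaries, such as 0.94 or 1.67, can differ by one because those
# boundaries are rounded):
print(max_snapshots(0.90))   # 16
print(max_snapshots(1.00))   # 15
print(max_snapshots(1.50))   # 10
print(max_snapshots(2.00))   # 7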
Snapshot types (allocation policy)
The snapshot type (allocation policy) specifies how the storage system allocates space in a disk
group for a snapshot. Values are:
•
Demand allocated. The space allocated for the snapshot can automatically change from an
initial minimum amount, up to the full capacity of the original (active) virtual disk.
•
Fully allocated. The space allocated for the snapshot is initially set to, and remains fixed at,
the full capacity of the source (active) virtual disk.
Demand-allocated snapshots
When a snapshot is demand allocated, the storage system initially allocates only a small amount
of space for the snapshot, just enough to store point-in-time information and pointers to data on
the source. As data on the source is over-written, the controller increases the allocated space for
the snapshot and copies the original (point-in-time) data from the source to the snapshot.
If all the original data on the source is over-written, the controller increases the allocated space of
the snapshot to the full size of the source.
The size of the disk group in which the source and snapshot are located must be sufficient to handle
increases in snapshot size, whenever the increases might occur. Insufficient space in the disk group
can not only prevent the controller from increasing the space allocation, but it can also prevent
writes to both the source and snapshot.
Fully allocated snapshots
When a snapshot is fully allocated, the storage system initially allocates an amount of space that
is equal to the capacity of the source virtual disk, plus a small amount of space for point-in-time
information and pointers to data on the source. As data is over-written on the source, the controller
copies the original (point-in-time) data from the source to the snapshot. The amount of space
allocated on the snapshot never changes.
Once created, a fully allocated snapshot cannot run out of space.
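To make the difference concrete, the following Python sketch models space allocation for the two snapshot types as source data is over-written. It is a simplified, hypothetical model (one unit of space per copied block); the actual controller allocates space in its own internal units.

class SnapshotSpaceModel:
    """Simplified model of snapshot space allocation (illustrative only)."""

    def __init__(self, source_blocks, fully_allocated=False):
        self.source_blocks = source_blocks   # capacity of the source virtual disk
        self.copied_blocks = set()           # original data preserved in the snapshot
        self.fully_allocated = fully_allocated

    @property
    def allocated_blocks(self):
        # A fully allocated snapshot reserves the full source capacity up front;
        # a demand-allocated snapshot grows only as original data is preserved.
        if self.fully_allocated:
            return self.source_blocks
        return len(self.copied_blocks)

    def overwrite_source_block(self, block):
        # Before a source block is over-written, its original (point-in-time)
        # data is copied to the snapshot if it has not been copied already.
        self.copied_blocks.add(block)

demand = SnapshotSpaceModel(source_blocks=1000)
full = SnapshotSpaceModel(source_blocks=1000, fully_allocated=True)
for block in range(250):                  # over-write 25% of the source
    demand.overwrite_source_block(block)
    full.overwrite_source_block(block)

print(demand.allocated_blocks)   # 250  (grows with over-writes)
print(full.allocated_blocks)     # 1000 (fixed at source capacity)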
Thin provisioning
Thin provisioning is a licensed feature that allows you to specify the full capacity of a virtual disk
when you create it, without allocating that capacity up front. The physically allocated space
dynamically increases from 0 to the requested capacity as data is added to the virtual disk.
You can set thresholds on a thin provisioned virtual disk and its disk group to alert you when the
disk allocation is approaching capacity.
NOTE: This feature requires both an HP P6000 Command View license and an HP Thin
Provisioning license.
Thin provisioned virtual disks differ from standard virtual disks in several significant ways (a
simplified sketch of this behavior follows the list):
•
The amount of physical disk space which is allocated to a thin provisioned virtual disk can
automatically change in response to the amount of data being stored, up to the specified size
of the virtual disk. A traditional virtual disk requires the full amount of physical disk space to
be allocated at all times.
•
A well planned thin provisioned virtual disk does not require explicit resizing (manually or
with scripts). With a traditional virtual disk, any time the size needs to be changed, it must
be explicitly resized.
•
There is no unused physical disk space associated with a thin provisioned virtual disk, thus
physical disk space cannot become stranded. With a traditional virtual disk, the allocated but
unused physical disk space can create stranded capacity.
•
The requested capacity for a thin provisioned virtual disk can actually exceed the amount of
physical disk drive capacity that can be allocated. This is not possible with traditional virtual
disks. (An example is also provided in the OLH.)
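The following Python sketch models the behavior described above: physical allocation grows with the data written, never exceeds the requested capacity, and an alert fires as allocation approaches capacity. It is a conceptual illustration only; the 90% threshold and the function name are assumptions, not replication manager or controller software interfaces.

def thin_provision_status(requested_gb, written_gb, alert_threshold=0.90):
    """Conceptual model of thin-provisioned allocation with a capacity alert.

    The alert_threshold of 90% is an example value, not a product default.
    """
    # Physical allocation grows on demand but never exceeds the requested capacity.
    allocated_gb = min(written_gb, requested_gb)
    utilization = allocated_gb / requested_gb
    return {
        "allocated_gb": allocated_gb,
        "utilization": utilization,
        "alert": utilization >= alert_threshold,
    }

# A 500 GB thin-provisioned virtual disk with 470 GB of data written:
print(thin_provision_status(500, 470))
# {'allocated_gb': 470, 'utilization': 0.94, 'alert': True}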
The following restrictions apply to thin provisioned virtual disks:
•
You must have a valid HP Thin Provisioning license to create or modify a thin provisioned virtual
disk.
•
You cannot create a snapshot, snapclone, or mirrorclone from a thin provisioned virtual disk.
•
A thin provisioned virtual disk cannot be included in a remote replication DR group.
•
When creating a thin provisioned virtual disk, you cannot request a capacity greater than:
◦
The largest virtual disk capacity of a corresponding RAID level in the array
◦
The largest virtual disk capacity of a corresponding RAID level in the disk group
◦
32 TB
CAUTION: An array’s addressable capacity value is commonly larger than the maximum supported
capacity of the array. The controller software prevents oversubscription of the addressable capacity
limit, but does not prevent oversubscription of the physical limits of the LDAD (logical disk
addressable space) or the array. If you oversubscribe the capacity, you must delete the virtual disk
from the disk group.
Tru64 UNIX host volumes
General
•
The replication manager displays information about UFS and AdvFS host volumes and about LSM
logical volumes on enabled hosts.
•
In a host cluster, only the volumes on the cluster's enabled host are displayed. See Enabled
hosts.
•
Local replication of UFS and AdvFS host volumes is supported. Local replication of LSM logical
volumes is not supported. See Local replication.
Mounting
•
The replication manager does not support mounting a Tru64 host volume in a subdirectory of
/tmp.
Types
Virtual disk types reported by the replication manager (each type is identified by an icon):

Type          Remarks
Container     Empty disks (disk space) that are preallocated for later use. See Containers.
Mirrorclone   Dependent virtual disks that use mirrorclone technology. See Synchronized mirrorclones and Fractured mirrorclones.
Original      Independent virtual disks, including snapclones. See snapclones.
Snapshot      Dependent virtual disks that use snapshot technology. See snapshots.
Virtual disk guidelines
The following apply to virtual disks and containers.
•
Names. Up to 32 characters long. Names are not case sensitive but are subject to character
restrictions. See Illegal characters.
•
Size. When creating, the size must be specified in whole GBs (no decimals).
NOTE: Some versions of controller software support virtual disks larger than 2 TB. The
following considerations apply to virtual disks larger than 2 TB:
◦
Local replication (snapshots, snapclones, mirrorclones) cannot be used with them.
◦
They cannot be a member of an HP P6000 Continuous Access remote replication group.
◦
DC-Management cannot be used to expand or shrink them.
10 Events
Working with events
About events
The events pane displays replication manager event messages.
Views
•
Tabular list views are available for storage events and license events. See Events pane.
•
Standard and correlated views are available. See Viewing events.
Actions
•
Actions in the GUI. See Event actions summary.
•
You can also work with events using a job command or a CLUI command. See Event actions
cross reference.
Properties
•
Properties displayed in the GUI. See Event and Trace log properties.
Event actions summary
The following event actions are available. Some actions have related job commands or CLUI
commands. See Event actions cross reference.
•
Details. From the events pane, view an event's details. See Viewing events.
•
List Events. From the content pane, display a list of events for the resource. See Listing individual
resource events.
Event actions cross reference
You can work with events using GUI actions, job commands, and CLUI commands. This table
shows the actions and commands that are available for performing typical tasks.
Create event records

GUI action    Job command    CLUI command
-             Log            -

View event records

GUI action                       Job command    CLUI command
Events > Details                 -              Show Job (event switch)
DR groups > List events          -              -
Storage Systems > List events    -              -
Virtual Disks > List events      -              -
Refreshing display panes
Update the information that is displayed in the content and event panes.
Considerations
•
Do not use the browser refresh button.
IMPORTANT: Do not use your browser’s refresh button to update the panes. Using the
browser refresh may end the replication manager session. To restart the session, you must log in
again to the replication manager server. See Troubleshooting.
•
The content pane is not updated automatically. Following a resource change that is initiated
with the replication manager (GUI, jobs or CLUI) you should refresh the content pane.
•
Whenever you select a different type of resource, the content pane is refreshed. For example,
if you are viewing DR Groups in the content pane and then select Host Volumes, the content
pane is refreshed and displays the current host volume resources.
•
See also Refreshing resources (automatic) and Refreshing resources (global).
Procedure
1. On the content or event pane, click the refresh icon.
2. The display is refreshed.
Organizing displayed events
You can change the size and position of the columns in the event pane. You can also sort the
displayed events.
Procedures
Resizing columns
1. Move the cursor over a column edge in the heading.
2. When a selection arrow appears, click and drag the column edge as required.
Moving columns
1. Click the heading of the column to move.
2. Hold and drag the column to the desired location.
Sorting
1. Click the heading of the column on which to sort the list.
2. The list is sorted and a sort indicator appears.
3. To reverse the sort order, click the column heading again.
Viewing events
View storage events and license events. See Events overview.
Procedures
Selecting events to view
1. On the events pane, select the Storage Events tab or License Events tab. The events in a
corresponding current or historical event log are displayed.
2. To display events in another log (if any), select the log. The log is applied. Depending on the
number of events, the pane may not refresh immediately.
Selecting a view type
1. On the events pane, select a View. A list of views appears. See Event log views.
2. The view is applied. Depending on the number of events, the pane may not refresh immediately.
Viewing event details
When using the Standard view, the details window displays the full text message of the event.
When using the Correlated view, the details window displays the full text message of the last
recorded event, plus other events for the individual resource.
1. On the events pane, select an event.
2. Select Actions > Details. The Details window opens.
Viewing the trace log
View the trace log.
Considerations
•
The trace log is a detailed log for the use of administrators and technical support personnel.
Procedure
1. On the toolbar, click Tools > Configure. The Configuration window opens.
2. Click Logs. The logs pane appears.
3. Click View Trace Log. The View Trace Log window opens.
Filtering displayed events
You can filter (select) the events to display in the event pane. Filters are available for event severity
and event source.
Procedure
1. On the event pane, select a filter property box. A list of filters appears.
2. Select an event filter. The selected filter name is displayed.
3. Click the filter value box. A list of values for the filter appears.
4. Select a value. Selecting Any is the same as selecting no filter.
5. The filter is applied and only the selected events appear.
Event concepts
Events overview
Events can be generated by GUI actions, jobs, the CLUI, and background activities.
Events pane
The events pane displays summary and detailed information for storage events and license events.
See Viewing events.
Event details pane
The top line of the event resource pane identifies the type of event. Event identification formats
include the resource name format, the job name format, and the implicit job ID format.
Event logs
•
All generated events are stored in the trace log, by default. The trace log is for use by technical
support personnel. See Trace log, Viewing the trace log, and Trace log configuration.
•
Events that are useful to administrators and operators are copied to the current storage event
log. See Event log and Event log configuration.
•
Events that involve license status are copied to the current license event log.
Event log
The replication manager generates and logs many events. See Events overview. The event log
contains those events that are useful to administrators and operators.
•
Properties

Property          Remarks
Date/Time         Date and time the event was recorded.
Event Severity    Severity of the event. See event and trace log Severity.
Message           Details of the event.
Source            Replication manager resource type, job, or CLUI command that wrote the event record. For example, a storage system resource or a job.
•
A current log (server.evt) and multiple historical logs are maintained. The current event log is
rolled (saved as a historical log) when a specified condition applies. See Event log
configuration.
•
See also Viewing events.
Event log views
Two views are available in the event pane.
•
Correlated view. The current log, or history log, displays only the last recorded event for each
individual resource. For example, the last recorded event for each storage system. For the
jobs source type, the last recorded event for each job instance is displayed.
•
Standard view. The current log, or history log, displays recorded events for all resources. For
the jobs source type, events for all job instances are displayed. The standard view displays
up to 400 events.
See Viewing events.
Event severity
The following severity levels, each indicated by an icon, are used for events.

Severity         Description
Informational    The resource was operating normally.
Warning          The resource experienced a temporary abnormal state. The operator should monitor the resource.
Severe           The resource has experienced a catastrophic failure. The operator should act immediately to prevent further failure or data loss.
Trace log
The replication manager generates and logs many events. See Events overview.
The trace log contains detailed event information that is intended for technical support personnel.
•
Properties

Property          Remarks
Event Severity    Icons indicating the severity of the event. See event and trace log Severity.
Date/Time         When the event occurred.
Message           Summary description of the event.
Source            Replication manager resource type, job, or CLUI command that wrote the event record. For example, a storage system resource or a job.
•
As the log gets full, the oldest events are discarded.
•
See also Viewing the trace log and Logs configuration, trace logs.
11 CLUI
Accessing the CLUI via GUI
Start an interactive CLUI session via the GUI.
Considerations
•
For a list of CLUI commands, see Using CLUI help.
•
Some commands can take several minutes or more to return a result.
Procedure
1. Do one of the following:
   • Browse to the replication manager.
   • Start the replication manager as an application.
2. In the GUI, select Tools→Command Line User Interface. The Command Line User Interface
window opens.
3. In the Command box, enter a CLUI command, and then press Enter. The command is processed
and results are displayed in the Command Response pane.
4. To end the session, close the window or click Cancel.
CLUI documentation
For detailed information on CLUI clients and CLUI commands, see the HP P6000 Replication
Solutions Manager CLUI Reference. The document is available from the replication manager help
menu and on the HP Storage website. See Related information (page 289).
Copying CLUI command responses
Copy CLUI command responses and paste them into other applications.
Procedure
1. On the CLUI window, click anywhere in the Command Response pane.
2. Press Ctrl+A. All information within the pane is selected.
3. Press Ctrl+C. The information is copied.
4. Open or select another application.
5. Press Ctrl+V to paste the command response information.
Example
To display and copy information about all host agents:
1. Enter show host_agents full, and then press Enter.
2. Click anywhere in the Command Response pane, select (highlight) the response, and then
press Ctrl+C.
Legacy HP EVMCL commands cross reference
The following table lists legacy HP Business Copy EVMCL commands and the equivalent replication
manager CLUI commands.
HP EVMCL 2.X command                         CLUI command                                     Remarks
evmcl <bc server> abort <job name>           Set Job <job instance> abort                     -
evmcl <bc server> continue <job name>        Set Job <job instance> continue [wait|nowait]    Default is wait
evmcl <bc server> pause <job name>           Set Job <job instance> pause                     -
evmcl <bc server> run <job name>             Set Job <job instance> run [wait|nowait]         Default is nowait
evmcl <bc server> status <job name>          Show Job <job instance>                          -
evmcl <bc server> statusdetail <job name>    Show Job <job instance> events                   -
evmcl <bc server> statusfull <job name>      -                                                -
evmcl <bc server> undo <job name>            -                                                -
evmcl <bc server> validate <job name>        Set Job <job name> run mode = validate           -
evmcl <bc server> getjoblist                 Show Job list                                    list of job names
evmcl <bc server> getjoblist                 Show Job full                                    list of jobs and properties
-                                            Show Job <job name>                              list of single job's properties
-                                            Show Job <job name> tasks                        list of single job's tasks
Reusing CLUI commands
You can reuse a command previously entered during an interactive session. This can help eliminate
errors in retyping complex commands.
Procedure
1. Click the command history arrow, and then select a command from the list.
2. Press Enter. Results are displayed in the Command Response pane.
Using CLUI help
The following procedures assume that you have already started an interactive CLUI session.
Displaying the short help menu
Type ?, and then press Enter.
The short main help menu appears. The menu includes command categories, for example add
commands, and individual commands that are not included in the major categories.
Command ?
0 Success
Usage:
{ a[dd] | c[apture] | del[ete] | exit | h[elp] | login
| sel[ect] | set | sho[w] }
To display a list of commands in a category, enter the category followed by a question mark. For
example, to display a list of add commands, enter add ?, and then press Enter.
Command: add ?
0 Success
Usage:
a[dd]
{ cont[ainer] <container name>
| dr[_group]|drg <dr_group name>
| host_a[gent]|ha <host agent name>
| managed[_set]|ms|mset <managed_set name>
| snapc[lone]|sc <snapclone name>
| snaps[hot]|ss <snapshot name>
| vd[isk] <vdisk name> }
Displaying the full help menu
Enter help, and then press Enter.
The full main help menu appears.
Displaying short help on a specific command
To view short help on a specific command, enter the command followed by ?, and then press
Enter.
The short help for the command appears.
Command: add ha ?
0 Success
Usage:
a[dd] host_a[gent]|ha <host agent name>
Displaying full help on a specific command
To view the full help on a specific command, enter help followed by the command, and then
press Enter.
The full help for the command appears.
Command: help add ha
0 Success
Usage:
Name:
ADD HOST_AGENT
Synopsis:
A[DD] HOST_A[GENT]|HA <Host Agent Name>
Description:
Add a Host Agent.
a[dd] host_a[gent]|ha <host agent name>
12 Support and other resources
Release history
HP Replication Solutions Manager releases:
• Version 5.6. Released 2012 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Red Hat Enterprise Linux 6.1, Sun Solaris, Microsoft Windows, VMware.
• Version 5.5. Released 2012 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Red Hat Enterprise Linux 6.1, Sun Solaris, Microsoft Windows, VMware.
• Version 5.4. Released 2011 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Red Hat Enterprise Linux 5.6, Sun Solaris, Microsoft Windows, VMware.
• Version 5.3. Released 2011 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows, VMware.
• Version 5.2. Released 2010 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows, VMware.
• Version 5.1. Released 2010 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 5.0. Released February 2009 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 4.0.1. Released June 2008 (Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 4.0. Released February 2008 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 3.1. Released November 2007 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 3.0. Released June 2007 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 2.1. Released June 2006 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, HP Tru64 UNIX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 2.0. Released February 2006 (Kit and Web update). Host agents: HP-UX, HP OpenVMS, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 1.2. Released August 2005 (Kit and Web update). Host agents: HP-UX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 1.1. Released May 2005 (Kit and Web update). Host agents: HP-UX, IBM AIX, Linux, Sun Solaris, Microsoft Windows.
• Version 1.0. Released December 2004 (Kit and Web update). Host agents: HP-UX, Linux, Sun Solaris, Microsoft Windows.
Contacting HP
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website:
http://www.hp.com/support/
Collect the following information before calling:
•
Technical support registration number (if applicable)
•
Product serial numbers (if applicable)
•
Product model names and numbers
•
Error messages
•
Operating system type and revision level
•
Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website at:
http://www.hp.com/go/wwalerts
After registering, you will receive email notification of product enhancements, new driver versions,
firmware updates, and other product resources.
HP replication feedback
To provide feedback on replication features, functionality, or documentation, send a message to
[email protected]
For continuous quality improvement, calls may be recorded or monitored.
Related information
Documents
Related documents include:
•
HP P6000 Replication Solutions Manager Administrator Guide
•
HP P6000 Replication Solutions Manager Command Line User Interface (CLUI) Reference
•
HP P6000 Replication Solutions Manager Installation Guide
•
HP P6000 Replication Solutions Manager Job Command Reference
•
HP P6000 Replication Solutions Manager User Guide
Software kits. The Replication Solutions Manager documentation can be found on the HP P6000
Replication Solutions Manager DVD that came with your media kit.
Here are the titles of all DVDs that came with your media kit:
•
HP P6000 Command View SW Suite DVD
•
HP P6000 Replication Solution Mg DVD
•
HP P6000 SmartStart SW for Linux CD
Replication manager GUI. The replication manager GUI includes the online help.
1. Start or browse to the replication manager GUI.
2. To view the online help, select Help > Topics.
3. To download documentation from the HP website, select Help > Documentation.
Locating documentation
Replication Solutions Manager documentation can be found on the HP P6000 Replication Solutions
Manager DVD that came with your media kit or on the Manuals page of the HP Business Support
Center website (http://www.hp.com/support/manuals).
To access the Manuals page: In the Storage section, click Storage Software, and then select your
product.
Glossary
A
ABM
array-based management
array
Synonymous with storage array, storage system, and virtual array. A group of disks in one or
more disk enclosures connected to two controllers running controller software that presents disk
storage capacity as one or more virtual disks. See also virtual array, storage system.
array-based
management
A management structure in which HP P6000 Command View is installed on the management
module within the array controller enclosure.
asynchronous
A term used to describe computing models that eliminate timing dependencies between sequential
processes. In asynchronous replication, the array controller acknowledges that data has been
written at the source before the data is copied at the destination. Asynchronous write mode is an
optional DR group property. See also synchronous.
B
bandwidth
The transmission capacity of a link or system, usually measured in bits per second (b/s).
bandwidth latency
product
The measurement of the ability to buffer data; the raw transfer speed in bytes/second times the
round-trip latency in seconds.
bidirectional
An array that contains both source and destination virtual disks. A bidirectional configuration
allows multidirectional I/O flow among several arrays.
C
container
Virtual disk space that is preallocated for later use as a snapclone, snapshot, or mirrorclone.
controller software
Software used by array controllers to manage all aspects of array operations.
copy set
A source-destination pair of virtual disks that are members of a DR group.
D
data currency
A measure of how current the last I/O on the DR group destination member is when compared
to the last I/O written to the DR group source member. The time difference between the last I/O
written to the source member and the last I/O written to the destination member represents the
amount of data that would be lost if the source member was no longer available, (assuming a
non-recoverable event at the source site). See also RPO.
DC-Management
Dynamic Capacity Management. A feature enabling you to extend or shrink the size of a host
volume.
default disk group
The disk group created when the array is initialized. The disk group must contain a minimum of
eight disks. The maximum is the number of installed disks.
destination
The targeted recipient (for example, array or virtual disk) of replicated data.
disk group
A named group of disks selected from all the available disks in a disk array. One or more virtual
disks can be created from a disk group. Also refers to the physical disk locations associated with
a parity group.
DR group
Data Replication group. A logical group of virtual disks in a remote replication relationship with
a corresponding group on another array.
DR relationship
When one array mirrors data to a second, remote array, they are said to have a DR relationship.
E
enabled host
A host that is equipped with a replication manager host agent.
enhanced
asynchronous
A write mode in which all host write I/Os are added to the write history log (WHL). The controller
then acknowledges that data has been written at the source before the data is copied at the
destination.
event
Any significant change in the state of the Enterprise storage system hardware or software
component reported by the controller to HP P6000 Command View.
F
fabric
A network of Fibre Channel switches or hubs and other devices.
failover
An operation that reverses replication direction so that the destination becomes the source and
the source becomes the destination. Failovers can be planned or unplanned and can occur
between DR groups, managed sets, fabrics or paths, and array controllers.
failsafe
A safe state that devices automatically enter after a malfunction. Failsafe DR groups stop accepting
host input and stop logging write history if a group member becomes unavailable.
fast copy
The process for quickly synchronizing the data on a source virtual disk and a destination virtual
disk by copying only data blocks that have changed. As I/O is written to the write history log, a
bitmap tracks the region of the virtual disk written. Only the regions that are indicated by the
corresponding bit in the bitmap are copied to the destination virtual disk. This process synchronizes
the source and destination virtual disks faster than a full copy.
full copy
The process of copying all data written on a source virtual disk directly to the destination virtual
disk. If a data block contains all zeros, a zeroing message is sent to the destination array's virtual
disk for the corresponding data block. This minimizes the amount of data that must be written to
the destination virtual disk. When the full copy is complete, the source and destination virtual
disks contain the same data (synchronized).
G
GBIC
Gigabit interface converter. A hardware module that connects fiber optic cables to a device and
converts electrical signals to optical signals.
GBps
Gigabytes per second. A measurement of the rate at which the transfer of bytes of data occurs.
A GBps is a transfer rate of 1,000,000,000 (10^9) bytes per second.
Gbps
Gigabits per second. A measurement of the rate at which the transfer of bits of data occurs.
Nominally, a Gbps is a transfer rate of 1,000,000,000 (10^9) bits per second.
general-purpose
server
A server that runs customer applications such as file and print services. HP P6000 Command
View and HP P6000 Replication Solutions Manager can be used on a general purpose server
in limited configurations.
H
home
A DR group that is the preferred source in a replication relationship. By default, home is the
original source, but it can be set to the destination DR group.
host
A computer that runs user applications and uses the information stored on an array.
host volume
The storage capacity that is defined and mountable by a host operating system.
HP P6000
Continuous Access
A storage-based HP solution consisting of two or more arrays performing disk-to-disk replication,
along with management user interfaces that facilitate configuring, monitoring, and maintaining
the replicating capabilities of the arrays.
I
initialization
A configuration step that binds the controllers together and establishes preliminary data structures
on the array. Initialization also sets up the first disk group, called the default disk group, and
makes the array ready for use.
intersite link
A connection from an E-port on a local switch to an E-port on a remote switch.
ISL
Intersite link.
IVR
Inter-VSAN routing.
J
job
A repeatable custom script that automates replication tasks. A job can be simple (for example,
create a DR group) or complex (for example, perform cascaded replication). Jobs can be run
from the GUI, from the command line, from batch files, or by a scheduler.
M
managed set
A selection of resources grouped together for convenient management. For example, you can
create a managed set to manage all DR groups whose sources reside in the same rack.
management
server
A server on which HP P6000 Enterprise Virtual Array management software is installed, such as
HP P6000 Command View and HP P6000 Replication Solutions Manager.
merge
The act of transferring the contents of the write history log contents to the destination virtual disk
to synchronize the source and destination.
mirrorclone
A copy of a virtual disk that is continually updated to reflect changes in the source. When first
created (and whenever re-synchronized by an action or command), the content of a mirrorclone
is synchronized to the source virtual disk.
N
near-online
storage
The on-site storage of data on media that takes only slightly longer to access than online storage
kept on high-speed disk drives.
normalization
The background process of copying the contents of a source virtual disk to a snapclone. The
snapclone is dependent on the source until normalization is complete. Also called unsharing.
O
online storage
An allotment of storage space that is available for immediate use, such as a peripheral device
that is turned on and connected to a server.
P
presentation
The array controller action that makes a virtual disk accessible to a host computer.
R
remote copy
A copy of a virtual disk on the destination array.
resource
An object that is listed in the Replication Solutions Manager navigation pane. Data replication
groups, enabled hosts, host volumes, managed sets, storage systems, and virtual disks are
resources. Replication is performed using these resources.
RPO
Recovery point objective. The maximum age of the data you want the ability to restore in the
event of a disaster. For example, if your RPO is six hours, you want to be able to restore systems
back to the state they were in as of no longer than six hours ago. To achieve this objective, you
need to make backups or other data copies at least every six hours.
RSM
Replication Solutions Manager.
RTO
Recovery time objective. The time needed to recover from a disaster—usually determined by how
long you can afford to be without your systems.
RWP
Replication Workload Profiler
S
SAN
Storage area network. A network of storage devices available to one or more servers.
server-based
management
Management from a server. See also management server.
SFP
Small form-factor pluggable transceiver.
SMI-S
Storage Management Initiative Specification.
snapclone
A copy that begins as a fully allocated snapshot and becomes an independent virtual disk. Applies
only to the HP Enterprise Virtual Array.
snapshot
A nearly instantaneous copy of the contents of a virtual disk created without interruption of
operations on the source virtual disk. Snapshots are typically used for short-term tasks such as
backups.
source
The virtual disk, DR group, or virtual array where I/O is stored before replication. See also
destination.
source-destination
pair
A copy set.
SPOF
Single point of failure.
standby
management
server
A backup management server. See management server.
storage system
A system consisting of one or more arrays; synonymous with virtual array. See also array.
synchronous
Describes computing models that perform tasks in chronological order without interruption. In
synchronous replication, the source waits for data to be copied at the destination before
acknowledging that it has been written at the source.
V
virtual array
Synonymous with disk array and storage system; a group of disks in one or more disk enclosures
combined with control software that presents disk storage capacity as one or more virtual disks.
See also virtual disk.
Virtual Controller
Software (VCS)
See controller software.
virtual disk
Variable disk capacity that is defined and managed by the array controller and presented to
hosts as a disk. May be called Vdisk in the user interface.
Vraid
The level to which user data is protected. Redundancy is directly proportional to cost in terms of
storage usage; the greater the level of data protection, the more storage space is required. See
also: Vraid0, Vraid1, Vraid5, Vraid6.
Vraid0
A virtualization technique that provides no data protection. The host data is broken down into
chunks and distributed on the disks comprising the disk group from which the virtual disk was
created. Reading and writing to a Vraid0 virtual disk is very fast and makes the fullest use of the
available storage, but there is no data protection (redundancy) unless there is parity.
Vraid1
A virtualization technique that provides the highest level of data protection. All data blocks are
mirrored, or written on two separate disks. For read requests, the block can be read from either
disk, which can increase performance. Mirroring requires the most storage space because twice
the storage capacity must be allocated for a given amount of data.
Vraid5
A virtualization technique that uses parity striping to provide moderate data protection. For a
striped virtual disk, data is broken into chunks and distributed across the disk group. If the striped
virtual disk has parity, another chunk (a parity chunk) is calculated from the data chunks and
written to the disks. If a data chunk becomes corrupted, the data can be reconstructed from the
parity chunk and the remaining data chunks.
Vraid6
Offers the features of Vraid5 while providing more protection for an additional drive failure, but
uses additional physical disk space.
W
WDM
Wavelength division multiplexing. The technique of placing multiple optical signals on a single
optical cable simultaneously.
write history log
A dedicated area of disk capacity used to record host write I/O to the source virtual disks.
X
XCS
See controller software.
Index
C
capacity utilization analysis, 119
cascaded remote replication, 81
changing OS type
enabled hosts, 99
CLUI, 16, 24, 282, 284
configuring the replication manager, 23
containers, 257, 258
content pane, 19
controller software features, 237, 256
copy state, 81
creating, 67, 244, 245, 246
cross Vraid replication, 258
FAQ, 258
guidelines, 260
D
database
about RSM, 27
cleanup, 24
exporting RSM, 28
importing RSM, 30
port number, 24
DC-Management
disabling dynamic capacity policy, 127
editing dynamic capacity policy, 126
enabling dynamic capacity policy, 128
HP-UX requirements, 140
overview, 138
removing dynamic capacity policy, 127
setting dynamic capacity policy, 126
support, 140
VMFS requirements, 141
Windows requirements, 140
deleting
containers, 248
DR groups, 68
enabled hosts, 99
remote copies, 248
snapclones, 248
snapshots, 248
VM servers, 100
destination access mode, 81
discovery, 27
automatic, 44
database refresh, 27
global, 45
disk groups
enhanced, 258
document
prerequisites, 13
documentation
HP website, 289
DR groups
about, 63
adding virtual disks to, 67
creating, 67
deleting, 68
DR group pair (source and destination), 77
editing properties, 68
guidelines, 91
home, 84
logs, 88
removing virtual disks from, 72
resuming, 72
suspend on failover, 92
suspending, 73
suspension state, 92
topology view, 56
viewing, 76
viewing properties, 76
dynamic capacity management see DC-Management
E
enabled hosts
about, 105
actions, 95
adding, 97
changing OS type, 99
deleting, 99
executing commands, batch files, and scripts, 100
security credentials, 106
viewing, 103
viewing properties, 103
enhanced disk groups, 258
events
about, 276
states, 281
executing a host script, command, or batch file, 100
F
failover
about, 82
failing over DR groups, 69
suspend on failover, 92
failsafe on link-down/power-up, 82
failsafe on unavailable member
about, 83
disabling, 69
enabling, 69
failsafe states, 83
filter
filtering displayed resources, 42
filters for topology views, 61
full copy
about, 84
forcing, 70
G
GUI
about, 18
content pane, 19
keyboard shortcuts and right-click actions, 20
H
home (DR group)
about, 84
reverting to home, 73
host agents, 16
host volume
capacity utilization analysis, 119
host volumes
actions, 107
disabling dynamic capacity policy, 127
editing dynamic capacity policy, 126
enabling dynamic capacity policy, 128
extending capacity, 125
mounting, 120
removing dynamic capacity policy, 127
setting dynamic capacity policy, 126
shrinking capacity, 125
snapclones, 136
snapshots, 136
topology view, 57
unmounting, 121
using volume groups and logical volumes, 123
viewing, 124
viewing properties, 124
volume groups and logical volumes, 130
HP FC Data Replication Protocol , 238
HP SCSI FC Compliant Data Replication Protocol, 238
I
illegal characters, 32
instant restore, 262
Internet protocol configurations, 24
J
jobs
arguments, 170
Command result values, 172
commands, 17
implicit, 177
imported, 177
instances, 169
templates, 184
transactions, 188
validation, 189
K
keyboard shortcuts, 20
L
licensing
checking, 234
replication licenses overview, 236
states, 50
links (remote replication)
auto suspend (on link-down), 79
auto suspend (on links down), 79
DR group pair (source and destination), 77
local replication
about, 15
licensing, 50, 236
local replication features, 237, 256
logging in to the GUI, 13
logs
DR group logs, 88
event log, 276
RSM logs, 25
trace log, 278
low level refresh
about, 262
refreshing, 251
LUN, 262
M
mirrorclone
swapping, 254
mounting (assigning a drive letter)
mount points, 131
mounting, 120
unmounting, 121
N
normalization, 267
O
online help, 22
P
passwords, 23
preallocated snapclones, 269
prerequisites, 13
presentation of virtual disks
unpresenting virtual disks, 254
protocols, remote data replication , 238
R
raw disks
about, 134
using, 123
refresh
automatic, 44
content pane, 277
manual, 45
related documentation, 289
remote data replication protocols, 238
remote replication
about, 15
licensing, 50, 236
remote replication features, 238, 256
remote replication tunnels, 238
replicating virtual disks
creating a DR group pair, 244
creating snapclones (preallocated), 245
creating snapclones (standard), 246
creating snapshots (preallocated), 246
creating snapshots (standard), 246
Replication Solutions Manager
capabilities, 14
database, 27
host agents, 16
jobs, templates and commands, 17
kits and downloads, 18
server, 15
simulation mode, 26
resource states, 47
resources
about, 46
names and UNC formats, 48
selection of multiple, 45
states, 47
resuming DR groups, 72
reverting a DR group pair to home, 73
running
CLUI via the GUI, 282
host commands, 100
S
security, configuring, 23
snapclones
about, 269
creating (preallocated), 245
creating (standard), 246
deleting, 248
guidelines, 271
snapshots
about, 271
creating (preallocated), 246
creating (standard), 246
deleting, 248
guidelines, 272
types (allocation policy), 273
states
event severity, 281
resource states, 47
storage systems
about, 232
actions, 232
disk groups, 238
firmware features, 237
viewing, 236
viewing properties, 236
suspend on failover, 92
suspending DR groups, 73
swapping a mirrorclone, 254
T
templates, job
list, 184
thin provisioning, 274
throttling i/o
about, 85
template, 222
toolbar, 22
tooltips, 23
topology
about topology views, 54
filters for views, 61
tips, 61
trace logs
states, 281
viewing, 278
U
UNC format, 48
unmounting, 121
unmounting host volumes, 121
unpresenting virtual disks, 251
V
virtual disk
thin provisioning , 274
virtual disks
about, 240
actions, 240
adding to DR groups, 67
adding to managed sets, 243
deleting, 248
presenting, 251
removing from DR groups, 252
topology view, 59
unpresenting, 254
viewing, 255
viewing properties, 255
VM server
actions, 104
VM servers
adding, 98
deleting, 100
viewing properties, 103