What’s New in 12c High Availability
Aman Sharma
@amansharma81
http://blog.aristadba.com
Who Am I?
Aman Sharma
• 12+ years using Oracle Database
• Oracle ACE
• Frequent contributor to the OTN Database forum (Aman....)
• Oracle Certified
• Sun Certified
@amansharma81
http://blog.aristadba.com
Agenda
(Actual) Agenda
• Flex Cluster
• Flex Cluster – Server Pool Enhancements
• Multitenant Database with 12c RAC
• Bundled Agents (XAG)
• What-If Command
• Transaction Guard
• Application Continuity
• Flex ASM
• Cloud File System
Pre-12c Oracle RAC – Database Tier
• Software-based clustering using the Grid Infrastructure software
• Cluster nodes contain only database and ASM instances
• Homogeneous configuration
• Dedicated access to the shared storage for the cluster nodes
• Applications/users connect via nodes outside the cluster
• Reflects a point-to-point model
Pre-12c Oracle RAC – Application Tier
[Diagram: application tier nodes connecting to the database tier cluster]
Pre-12.1 Cluster vs 12c Flex Cluster
Oracle RAC Using a Point-to-Point System
• Requires a lot of resources
• Each node is connected to every other node via the interconnect for node-to-node heartbeats
• Each node is connected to the storage directly
• Possible connection paths for an N-node cluster:
  – N*(N-1)/2 interconnect paths for node heartbeats
  – N connection paths for storage
• For a 16-node RAC:
  – Heartbeat paths: 16*(16-1)/2 = 120
  – Storage paths: 16
Let’s Talk Big!
• Recap:
  – N*(N-1)/2 node heartbeat paths
  – N storage paths
• For a 16-node RAC:
  – 120 interconnects, 16 storage paths
• What about a 500-node cluster?
  – 124,750 heartbeat connections
  – 500 storage paths
Introducing the 12c Flex Cluster
[Diagram: a 12c Flex Cluster. The database tier has four Hub nodes, each running Oracle Clusterware, with database instances ORCL1-ORCL4, Flex ASM instances +ASM1-+ASM3 and GNS, all attached to Flex ASM storage. The application tier has Leaf nodes, each running Oracle Clusterware and connecting to a Hub node.]
12c Flex Clusters – Overview
• Based on a hub-and-spoke topology
• Two different categories of cluster nodes:
  – Hub Nodes
    • Run database and ASM instances
  – Leaf Nodes
    • Loosely coupled
    • Run applications
    • Connect to a Hub node
• Flex ASM
  – Required for a Flex Cluster
  – Hub nodes connect to Flex ASM-based storage
11.2 RAC vs 12c Flex Cluster
11.2 RAC:
• 16-node cluster
  – 120 interconnects
  – 16 storage paths
• 500-node cluster
  – 124,750 interconnects
  – 500 storage paths
12c Flex Cluster:
• 16-node cluster (5 Hub, 11 Leaf)
  – 10 hub-to-hub interconnects
  – 11 hub-leaf links: 21 node-to-node connections in all
  – 5 storage paths
• 500-node cluster (25 Hub, 475 Leaf)
  – 300 hub-to-hub interconnects
  – 475 hub-leaf links: 775 node-to-node connections in all
  – 25 storage paths
Flex Cluster Benefits
• Much lower resource requirements
• Much greater scalability: a cluster can now have up to 2,000 nodes
• Higher availability for the application tier; previously, application HA depended on the application code
• Application nodes can now also use Server Pools
• Better management of dependency mapping for applications
Say Hello to Leaf & Hub Nodes
• Each Leaf node attaches to exactly one Hub node
• Leaf nodes don’t talk to each other (nor do they need to)
• Leaf nodes choose their Hub node when they join the cluster
• Applications running on Leaf nodes connect to the database via the Hub nodes
• Much less internode interaction is required (hub-and-spoke model)
Leaf Nodes – A Closer Look
• Lightweight
• Loosely coupled
• Work as spokes
• Each Leaf node connects to one Hub node
• Heartbeat only to the Hub node
• Intended to run applications and clients
• No direct access to the storage managed by Flex ASM (it is accessible only to Hub nodes)
Leaf Nodes – A Closer Look (contd.)
• Require GNS to discover the Hub nodes
• No private interconnect between the Leaf nodes, i.e. no inter-leaf communication
• Use the same public and private networks as the Hub nodes
• If a Hub node goes down, the Leaf node(s) connected to it get evicted
• An evicted Leaf node can be added back by restarting the Clusterware on it
Leaf Nodes – Resource Requirements
• Much lower than for Hub nodes
• Host only the application-specific workload
• Do not contain:
  – Database instances
  – ASM instances
  – VIPs
• Can be either virtual or physical
• Hold no Voting Disk or OCR
• Can be converted into Hub nodes if they have access to the storage
Grid Naming Service (GNS) & Flex Cluster
• GNS is mandatory for enabling Flex Cluster mode
• GNS runs on one of the Hub nodes
• Leaf nodes use GNS as a naming service to locate the Hub nodes
• Applications and services running on Leaf nodes require GNS to locate the resources they need in order to function
• Leaf nodes use GNS only when they join the cluster for the first time
• As in 11.2, GNS requires a static IP (the GNS VIP)
12c – Shared GNS Configuration
• In previous versions, only one GNS per cluster was allowed
• Multiple clusters therefore needed multiple GNS VIPs, which increased resource requirements
• In 12c, a GNS configuration can be shared among clusters
• The GNS configuration needs to be exported before being shared with other clusters:
$ srvctl export gns -clientdata /tmp/gnsconfig
• Use the USE SHARED GNS option when performing the next cluster installation
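On the client cluster, a minimal sketch of registering the exported client data after installation (assumes the file produced by the export above has been copied over):
$ srvctl add gns -clientdata /tmp/gnsconfig
$ srvctl status gns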
So What Are Hub Nodes?
• Just the same as cluster nodes in pre-12c clusters
• Have access to the ASM-managed storage
• Run database instances, Flex ASM instances and application resources
• The maximum number of Hub nodes is 64 in 12.1 (HUBSIZE)
Enabling Flex Cluster Mode
To convert a Standard Cluster:
• Check the current cluster mode:
$ crsctl get cluster mode status
• Check whether GNS is enabled:
# srvctl status gns
• If GNS is not added, add it:
# srvctl add gns -vip 192.168.10.12 -domain cluster01.example.com
• Set Flex Cluster mode:
# crsctl set cluster mode flex
• Stop and start the Clusterware on each node:
# crsctl stop crs
# crsctl start crs
• Note: a Flex Cluster can’t be converted back to a Standard Cluster
Flex Cluster Administration – Example Commands
• Show the current role of the node:
$ crsctl get node role status -node rac01
Node 'rac01' active role is 'hub'
• Change the node role (requires a CRS restart on the node):
$ crsctl set node role -node rac01 leaf
• Check the maximum number of Hub nodes allowed (HUBSIZE):
$ crsctl get cluster hubsize
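The hub size can also be changed; a minimal sketch with an illustrative value (takes effect after the Clusterware restarts):
$ crsctl set cluster hubsize 32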
Server Pools – Recap
• Introduced in 11.2
• Provide a logical division of the cluster
• Nodes are allocated to the pools
• Resources are hosted on the pools; a resource can be an application, a database, or a process
• Policy-managed interface
• Resource allocation is based on priority
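As a refresher, a server pool can be created and inspected with srvctl; a minimal sketch (the pool name and sizes are illustrative):
$ srvctl add serverpool -serverpool oltp_sp -min 1 -max 3 -importance 3
$ srvctl config serverpool -serverpool oltp_sp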
Hub & Leaf Node Server Pools
• Server pools are now available for both Hub and Leaf nodes
• Provide better resource management by isolating workloads
• Leaf nodes and Hub nodes can never be in the same server pool
• Server pool management for Leaf nodes is independent of the server pools containing Hub nodes
Flex Cluster – Server Pool Enhancements
[Diagram: Leaf-node server pools running Siebel and Apache; Hub-node server pools OLTP_SP (MIN_SIZE=1, MAX_SIZE=3, IMPORTANCE=3) and DSS_SP (MIN_SIZE=2, MAX_SIZE=2, IMPORTANCE=2)]
Flex Cluster – Policy-Based Cluster Administration
• Enhances the Server Pools concept introduced in 11.2
• Previously, only server pool attributes determined node placement in server pools
• 12c Flex Clusters add two new concepts:
  – Server Categorization
    • Extended node attributes for servers that decide their allocation to server pools
  – Cluster Configuration Policy Sets
    • Workload-based management of servers in the server pools
Flex Cluster – Server Categorization
A server pool (e.g. OLTP_SP) can select its servers via a SERVER_CATEGORY.
Server configuration attributes:
• ACTIVE_CSS_ROLE: HUB | LEAF
• CONFIGURED_CSS_ROLE: HUB | LEAF
• CPU_CLOCK_RATE (MHz)
• CPU_COUNT
• CPU_EQUIVALENCY
• CPU_HYPERTHREADING
• MEMORY_SIZE
• NAME
• RESOURCE_USE_ENABLED: 1 | 0
• SERVER_LABEL
Server category attributes:
• NAME
• ACTIVE_CSS_ROLE: HUB | LEAF
• EXPRESSION, built from the operators:
  – =: equal
  – eqi: equal, case insensitive
  – >: greater than
  – <: less than
  – !=: not equal
  – co: contains
  – coi: contains, case insensitive
  – st: starts with
  – en: ends with
  – nc: does not contain
  – nci: does not contain, case insensitive
Flex Cluster – Server Categorization in Action
[root@rac0 ~]# crsctl status server rac0 -f
NAME=rac0
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Generic ora.ORCL
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub

[root@rac0 ~]# crsctl status server rac3 -f
NAME=rac3
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=AUTOSTART QUEUED
ACTIVE_CSS_ROLE=leaf
Flex Cluster – Listing Server Categories
[root@rac0 ~]# crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=
NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

[root@rac0 ~]# crsctl status server -category ora.hub.category
NAME=rac0
STATE=ONLINE
NAME=rac1
STATE=ONLINE
NAME=rac2
STATE=ONLINE
Flex Cluster – Creating a Server Category
[root@rac0 ~]# crsctl add category testcat -attr "EXPRESSION='(MEMORY > 1900)'"
[root@rac0 ~]# crsctl status server -category ora.leaf.category
NAME=rac3
STATE=ONLINE
[root@rac0 ~]# crsctl status category testcat
NAME=testcat
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=( MEMORY > 1900 )
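A category created this way can then drive server placement; a minimal sketch of binding it to a server pool (the pool name and sizes are illustrative):
[root@rac0 ~]# crsctl add serverpool bigmem_sp -attr "MIN_SIZE=1,MAX_SIZE=2,SERVER_CATEGORY=testcat"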
Flex Cluster – Cluster Policy Set
• Policy-based server pool assignment
• Default policy: CURRENT
• Managed by a policy set
• A policy set contains two attributes:
  – SERVER_POOL_NAMES
  – LAST_ACTIVATED_POLICY
• A policy set may contain zero or more policies
• Each policy contains a definition for each server pool it manages
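The current policy set can be inspected with crsctl; a minimal sketch:
$ crsctl status policyset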
[Diagram: a 4-node cluster hosting app1 in POOL1 (MIN_SIZE=2, MAX_SIZE=2, IMP=0), app2 in POOL2 (MIN_SIZE=1, MAX_SIZE=1, IMP=0) and app3 in POOL3 (MIN_SIZE=1, MAX_SIZE=1, IMP=0)]
Varying Times & Varying Workloads
Node allocation should follow the requirements at different times:
• Day time: app1 uses two servers; app2 and app3 use one server each
• Night time: app1 uses one server, app2 uses two servers, app3 uses one server
• Weekend: app1 is not running (0 servers), app2 uses one server, app3 uses three servers
Flex Cluster – Proposed Cluster Policy Set
SERVER_POOL_NAMES=Free pool1 pool2 pool3

POLICY NAME=DayTime
  SERVERPOOL NAME=pool1  IMPORTANCE=0  MIN_SIZE=2  MAX_SIZE=2  SERVER_CATEGORY=
  SERVERPOOL NAME=pool2  IMPORTANCE=0  MIN_SIZE=1  MAX_SIZE=1  SERVER_CATEGORY=
  SERVERPOOL NAME=pool3  IMPORTANCE=0  MIN_SIZE=1  MAX_SIZE=1  SERVER_CATEGORY=

POLICY NAME=NightTime
  SERVERPOOL NAME=pool1  IMPORTANCE=0  MIN_SIZE=1  MAX_SIZE=1  SERVER_CATEGORY=
  SERVERPOOL NAME=pool2  IMPORTANCE=0  MIN_SIZE=2  MAX_SIZE=2  SERVER_CATEGORY=
  SERVERPOOL NAME=pool3  IMPORTANCE=0  MIN_SIZE=1  MAX_SIZE=1  SERVER_CATEGORY=

POLICY NAME=Weekend
  SERVERPOOL NAME=pool1  IMPORTANCE=0  MIN_SIZE=0  MAX_SIZE=0  SERVER_CATEGORY=
  SERVERPOOL NAME=pool2  IMPORTANCE=0  MIN_SIZE=1  MAX_SIZE=1  SERVER_CATEGORY=
  SERVERPOOL NAME=pool3  IMPORTANCE=0  MIN_SIZE=3  MAX_SIZE=3  SERVER_CATEGORY=
Flex Cluster – Cluster Policy Set Creation
Modify the default policy set to manage the three server pools:
$ crsctl modify policyset -attr "SERVER_POOL_NAMES=Free pool1 pool2 pool3"
Add the required three policies:
$ crsctl add policy DayTime
$ crsctl add policy NightTime
$ crsctl add policy Weekend
Modify the server pools within each policy:
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy DayTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=0,MAX_SIZE=0" -policy Weekend
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy NightTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy Weekend
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=3,MAX_SIZE=3" -policy Weekend
Flex Cluster – Cluster Policy Set Creation (contd.)
Activate the Weekend policy:
$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Weekend"
Server allocations after the policy has been applied:
$ crsctl status resource -t
--------------------------------------------------------------------------------
Name       Target   State     Server        State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
app1
      1    ONLINE   OFFLINE                 STABLE
      2    ONLINE   OFFLINE                 STABLE
app2
      1    ONLINE   ONLINE    mjk_has3_1    STABLE
app3
      1    ONLINE   ONLINE    mjk_has3_0    STABLE
      2    ONLINE   ONLINE    mjk_has3_2    STABLE
      3    ONLINE   ONLINE    mjk_has3_3    STABLE
--------------------------------------------------------------------------------
12c Multitenant Database & 12c RAC
• A multitenant database consists of a container database (CDB) and pluggable databases (PDBs)
• Supported with 12c RAC
• Each PDB is accessed through its own service
• Each PDB service can run on one or more RAC instances
• Each PDB service can be deployed over server pool(s), as sketched below
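A minimal sketch of creating a RAC service for a PDB (the database, PDB, service and instance names are illustrative):
$ srvctl add service -db raccdb -service hr_svc -pdb hrpdb -preferred raccdb1 -available raccdb2
$ srvctl start service -db raccdb -service hr_svc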
Flex Cluster – Bundled Agents (XAG)
[Diagram: the same Flex Cluster topology as before, with XAG agents running on the Leaf nodes of the application tier alongside the Hub nodes of the database tier]
Flex Cluster – Bundled Agents (XAG) Introduction
• Oracle Clusterware can be used to provide HA to applications
• HA for applications was available earlier through the application APIs and services
• With 11.2.0.3, standalone agents became available for download (http://oracle.com/goto/clusterware)
• 12.1 introduced Bundled Agents (XAG), supplied with the GI software itself
• In 12c, XAG agents can reside on both Leaf and Hub nodes
http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/ogiba-2189738.pdf
GI & Bundled Agents
• GI provides a pre-configured public core network resource, ora.net1.network
• Applications bind application VIPs (APPVIP) to this network layer
• AGCTL is the interface for adding an application resource to the GI, managed by the bundled agents (see the sketch after this list)
• Shared storage access via ACFS/NFS/DBFS
• Applications for which XAG agents are available:
  – Apache HTTP Server & Tomcat
  – GoldenGate
  – Siebel
  – JD Edwards
  – PeopleSoft
  – MySQL
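A minimal sketch of the agctl command pattern, agctl <verb> <application> <instance> (the instance name gg_1 is hypothetical, and the registration options at add time vary per agent and XAG version):
$ agctl start goldengate gg_1
$ agctl status goldengate gg_1
$ agctl config goldengate gg_1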
12c Cluster – What-If Command
• From 12c, DBAs can predict the impact of an operation before running it
• Available with both CRSCTL and SRVCTL commands
• Available for the following categories of events (an srvctl sketch follows this list):
  – Resources: start, stop, relocate, add, modify
  – Server pools: add, remove, modify
  – Servers: add, remove, relocate
  – Policies: change active policy
  – Server categories: modify
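On the srvctl side, the what-if interface is the predict command; a minimal sketch (the database and service names are illustrative):
$ srvctl predict database -db orcl
$ srvctl predict service -db orcl -service gold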
12c Cluster – What-If Command (contd.)
[root@rac0 ~]# crsctl eval stop res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
     2        Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
--------------------------------------------------------------------------------

[root@rac0 ~]# crsctl eval start res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        N         Error code [223] for entity [ora.rac0.vip]. Message is
                        [CRS-5702: Resource 'ora.rac0.vip' is already running on 'rac0'].
--------------------------------------------------------------------------------
[root@rac0 ~]# crsctl eval delete server rac0 -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.ASMNET1LSNR_ASM.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.DATA.dg' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.asm' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.net1.network' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.ons' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.orcl.db' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.proxy_advm' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.scan1.vip' (1/1) will be in state [OFFLINE]
              Y         Server 'rac0' will be removed from pools [Generic ora.ORCL]
     2        Y         Resource 'ora.gns.vip' (1/1) will be in state [ONLINE] on server [rac1]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [rac1]
<<output abridged>>
--------------------------------------------------------------------------------
Transaction Issues Before 12c
• An outage at the database or application level can cause loss of in-flight work
• A user’s reattempt of the transaction may lead to logical errors, e.g. duplication of data
• Handling such exceptions at the application level is not easy
[Diagram: a transaction round trip with failure points – the commit outcome can be left in doubt after application errors or database errors at various steps]
Solution: Transaction Guard & Application Continuity
• Transaction Guard
  – Provides a generic protocol and API that applications can use for at-most-once execution across planned and unplanned outages and repeated submissions
• Application Continuity
  – Enables the replay of in-flight, recoverable transactions following a database outage
What Is Transaction Guard?
• Part of both Standard and Flex Clusters
• Returns the outcome of the last transaction after a recoverable error, using a Logical Transaction ID (LTXID)
• Used by Application Continuity (enabled automatically)
• Can also be used independently
What Is Transaction Guard? (contd.)
• Database request
  – A unit of work submitted via SQL, PL/SQL, etc.
• Recoverable error
  – An error caused by something independent of the application, e.g. a network, node, database, or storage failure
• Reliable commit outcome
  – The outcome of the last transaction (preserved by Transaction Guard using the LTXID)
• Session state consistency
  – Describes how the application changes non-transactional session state during a database request
• Mutable functions
  – Functions whose results change with every execution (e.g. SYSDATE, sequence values)
What Is a Logical Transaction ID (LTXID)?
• LTXID = Logical Transaction ID
• Used to fetch the commit status of the last transaction, via DBMS_APP_CONT.GET_LTXID_OUTCOME
• The client driver is supplied a new, unique LTXID at each authentication and after each round trip that commits
• Both the client and the database hold the LTXID
• Transaction Guard ensures that each LTXID is unique
• The LTXID is kept after the commit for the default retention period of 24 hours
• While the outcome is being obtained, the LTXID is blocked to ensure its integrity
Transaction Guard – Pseudo Workflow
Receive a FAN down event (or recoverable error)
FAN aborts the dead session
If recoverable error (new OCI_ATTRIBUTE for OCI, isRecoverable for JDBC)
  Get last LTXID from dead session using getLTXID or from your callback
  Obtain a new session
  Call GET_LTXID_OUTCOME with last LTXID to obtain COMMITTED and USER_CALL_COMPLETED status
  If COMMITTED and USER_CALL_COMPLETED
    Then return result
  ELSEIF COMMITTED and NOT USER_CALL_COMPLETED
    Then return result with a warning (that details such as out binds or row count were not returned)
  ELSEIF NOT COMMITTED
    Cleanup and resubmit request, or return uncommitted result to the client
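A minimal PL/SQL sketch of the GET_LTXID_OUTCOME step, run from the new session (the bind :ltxid stands for the RAW LTXID captured from the dead session's client driver):
DECLARE
  l_committed      BOOLEAN;
  l_call_completed BOOLEAN;
BEGIN
  -- Ask Transaction Guard for the reliable outcome of the failed
  -- session's last transaction, identified by its LTXID.
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => :ltxid,   -- RAW LTXID from the dead session
    committed           => l_committed,
    user_call_completed => l_call_completed);
  IF l_committed AND l_call_completed THEN
    DBMS_OUTPUT.PUT_LINE('Committed; results are complete');
  ELSIF l_committed THEN
    DBMS_OUTPUT.PUT_LINE('Committed, but out binds/row counts were lost');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Not committed; safe to resubmit');
  END IF;
END;
/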
Transaction Guard – (Un)Supported Transactions
• Supported:
  – Local transactions
  – Parallel transactions
  – Distributed and remote transactions
  – DDL and DCL transactions
  – Auto-commit and commit-on-success
  – PL/SQL with embedded COMMIT
• Unsupported:
  – Recursive transactions
  – Autonomous transactions
  – Active Data Guard with read/write DB links for forwarding transactions
  – GoldenGate and Logical Standby
• API supported for:
  – 12c JDBC Type 4 driver
  – 12c OCI/OCCI client drivers
  – 12c ODP.NET
Configuring a Database for Transaction Guard
• Database release 12.1.0.1 or later
• GRANT EXECUTE ON DBMS_APP_CONT TO <user>;
• Configure Fast Application Notification (FAN)
• Locate and define the transaction history table (LTXID_TRANS)
• Configure the following parameters for the service:
  – COMMIT_OUTCOME = TRUE
  – FAILOVER_TYPE = TRANSACTION
  – RETENTION_TIMEOUT = <value>
Sample Service Configuration for Transaction Guard
Adding an admin-managed service:
srvctl add service -database orcl -service GOLD -preferred inst1 -available inst2 -commit_outcome TRUE -retention 604800
Modifying a single-instance service:
DECLARE
  params dbms_service.svc_parameter_array;
BEGIN
  params('COMMIT_OUTCOME') := 'true';
  params('RETENTION_TIMEOUT') := 604800;
  dbms_service.modify_service('<service-name>', params);
END;
/
What Is Application Continuity?
• Masks outages from applications
• Replays in-flight transactions after a failure
• Uses Transaction Guard implicitly
• Configured at the service level, as sketched below
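A minimal sketch of enabling Application Continuity on an existing service via srvctl (the database and service names are illustrative, and the timing values are examples, not recommendations):
$ srvctl modify service -db orcl -service GOLD -failovertype TRANSACTION -commit_outcome TRUE -replay_init_time 900 -retention 86400 -failoverretry 30 -failoverdelay 10 -session_state DYNAMIC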
Application Continuity – Workflow
[Image courtesy: Oracle documentation]
Application Continuity – Resource Requirements
• For the Java client:
  – Increased memory for replay queues
  – Additional CPU for garbage collection
• For the database server:
  – Additional CPU for validation
• Transaction Guard:
  – Bundled with the kernel
  – Minimal overhead
Disabling Application Continuity
• Use the disableReplay() API
• Check for use of:
  – UTL_FILE, UTL_MAIL, UTL_FILE_TRANSFER, UTL_HTTP, UTL_TCP, UTL_SMTP, DBMS_ALERT
• Disable replay when the application:
  – Assumes that location values don’t change
  – Assumes that ROWID values don’t change
  – Uses autonomous transactions or external PL/SQL
ASM of Past Times
• ASM instances run locally on each node
• ASM clients can access ASM only from the local node
• Loss of the local ASM instance makes storage unavailable to the clients connected to it
[Image courtesy: Oracle documentation]
12c’s Flex ASM
• A 1:1 mapping of ASM instances to clients is no longer required
• The number of ASM instances is controlled by the cardinality (default 3)
• Uses a dedicated network, the ASM network
• The ASM network is used exclusively for communication between ASM instances and their clients
• If the local ASM instance fails, clients fail over to another Hub node running an ASM instance
• Mandatory for a 12c Flex Cluster
[Image courtesy: Oracle documentation]
Dedicated ASM Network in 12c Flex ASM
[Diagram: three Hub nodes running ORCL1-ORCL3, +ASM1, +ASM2 and GNS, attached to the public network, a dedicated ASM network, the CSS network, and the storage network leading to the ASM storage]
Flex ASM – Failover
[Diagram: after a local ASM instance fails, its database clients reconnect to a surviving ASM instance on another Hub node]
Administering Flex ASM
• Flex ASM can be managed using:
  – ASMCA
  – CRSCTL
  – SQL*Plus
  – SRVCTL
$ asmcmd showclustermode
ASM cluster : Flex mode enabled
$ srvctl status asm -detail
ASM is running on mynoden02,mynoden01
ASM is enabled.
$ srvctl config asm
ASM instance count: 3
SQL> SELECT instance_name, db_name, status FROM V$ASM_CLIENT;
INSTANCE_NAME   DB_NAME   STATUS
--------------- --------- ------------
+ASM1           +ASM      CONNECTED
orcl1           orcl      CONNECTED
orcl2           orcl      CONNECTED
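A minimal sketch of adjusting the ASM cardinality and relocating an instance (the node name is illustrative):
$ srvctl modify asm -count 3
$ srvctl relocate asm -currentnode mynoden02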
12c ASM – Mixed-Mode Configurations
• Pure 12c mode
  – Cardinality != number of nodes
  – Supports DB instance failover to other ASM instances
  – Supports any DB instance connecting to any ASM instance
  – Managed by cardinality
• Mixed mode
  – Flex ASM with cardinality = number of nodes
  – An ASM instance on every node
  – Allows 12c DB instances to connect to remote ASM instances
  – Pre-12c DB instances can connect to the local ASM instance
• Standard mode
  – Standard ASM installation and configuration
  – Can be converted to Flex ASM mode using:
    • ASMCA
    • converttoFlexASM.sh
12c Cloud File System (Cloud FS)
• A next-generation file system
• 12c Cloud File System integrates:
  – ASM Cluster File System (ACFS)
  – ASM Dynamic Volume Manager (ADVM)
• Cloud FS lets applications, databases and storage be deployed in private clouds
Overview of Cloud FS in 12c
[Image courtesy: Google Images]
Cloud FS – Advanced Data Services
• Support for all types of files
• Enhanced snapshots (snap-of-snap; see the acfsutil sketch below)
• Auditing
• Encryption
• Tagging
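A minimal sketch of two of these services using acfsutil (the mount point, snapshot names and tag name are illustrative):
$ acfsutil snap create -w snap1 /acfsmounts/acfs1            # writable snapshot
$ acfsutil snap create -w snap2 -p snap1 /acfsmounts/acfs1   # snap-of-snap
$ acfsutil tag set repfiles /acfsmounts/acfs1/reports        # tag a directory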
Take Away
• 12c has revolutionized the HA stack, yet again
• Flex Cluster and Flex ASM are new paradigms
• Multitenancy is the solution for database consolidation
• Using Flex Cluster together with Multitenancy gives you a much better foundation for building a private cloud
• Cloud FS is the foundation of the next-generation storage solution for Oracle clusters
Thank You!
@amansharma81
http://blog.aristadba.com
[email protected]