OCFS2 (Oracle Cluster FS) 11g RAC Build Test

 INDEX
 Environment
 Adding shared storage
 OCFS2 installation and cluster configuration
 GRID Infrastructure installation
 RAC software installation
 RAC Database installation
 Environment
Virtual machine : VMware 8.0.2
OS              : Linux Server 5.2 - 64bit
Grid            : 11.2.0.4 (p13390677_112040_Linux-x86-64_3of7.zip)
RAC S/W         : 11.2.0.4 (p13390677_112040_Linux-x86-64_1of7.zip, p13390677_112040_Linux-x86-64_2of7.zip)
Nodes           : ocfsrac1 & ocfsrac2
Node IPs        : 192.168.81.138 & 192.168.81.140
 Adding Shared Storage
1. Settings -> Add -> Hard Disk
2. Select the disk type and allocate the space
3. Specify the location
- Assign the new disk to SCSI 1:n so it can be shared between the nodes
* At least two shared disks are required, one of them dedicated to the Voting/OCR files.
- Repeat the same steps for SCSI 1:1
4. Edit the VMX file of ocfstest
- Open D:\oracle\ocfs\ocfstest1\ocfstest.vmx in Notepad and add the lines below
* These options release the VMware disk locks so the disks can be shared between the VMs
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
** Verify (ocfstest.vmx)
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\oracle\ocfs\disk\ocfstest-1.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\oracle\ocfs\disk1\ocfstest-0.vmdk"
scsi1:1.mode = "independent-persistent"
 OCFS2 Installation and Cluster Configuration
1. Partition the added disks
[root@ocfsrac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1): <Enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): <Enter>
Using default value 130
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
** Verify
[root@ocfsrac1 ~]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3916    31350847+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   83  Linux
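The same partitioning has to be done for the second shared disk, /dev/sdc (its /dev/sdc1 partition already appears in the listing above). A minimal non-interactive sketch of the equivalent fdisk session, assuming the whole disk becomes a single primary partition:

# Sketch: same keystrokes as the interactive session above, fed through a here-document
# (n = new, p = primary, 1 = partition number, two blank lines accept the
#  default first/last cylinder, w = write the partition table).
fdisk /dev/sdc <<EOF
n
p
1


w
EOF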
2. Install the OCFS2 packages
* Install the ocfs2, ocfs2console and ocfs2-tools packages from the installation media:
/media/Enterprise Linux dvd 20080528/Server
[root@ocfsrac1 Server]# rpm -Uvh ocfs2-2.6.18-92.el5-1.2.8-2.el5.x86_64.rpm
warning: ocfs2-2.6.18-92.el5-1.2.8-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package ocfs2-2.6.18-92.el5-1.2.8-2.el5 is already installed
[root@ocfsrac1 Server]# rpm -Uvh ocfs2console-1.2.7-1.el5.x86_64.rpm
warning: ocfs2console-1.2.7-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package ocfs2console-1.2.7-1.el5 is already installed
[root@ocfsrac1 Server]#
[root@ocfsrac1 Server]# rpm -Uvh ocfs2-tools-1.2.7-1.el5.x86_64.rpm
warning: ocfs2-tools-1.2.7-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package ocfs2-tools-1.2.7-1.el5 is already installed
[root@ocfsrac1 Server]#
[root@ocfsrac1 Server]# service o2cb configure
>> This fails with an error -> expected, because the cluster has not been created yet
3. Create the cluster
[root@ocfsrac1 Server]# o2cb_ctl -C -n ocfs2 -t cluster -a name=ocfs2
4. Register each node with the cluster
(arguments: node name, node number, node IP, inter-node communication port, cluster name)
o2cb_ctl -C -n ocfsrac1 -t node -a number=0 -a ip_address=192.168.81.138 -a ip_port=7777 -a cluster=ocfs2
o2cb_ctl -C -n ocfsrac2 -t node -a number=1 -a ip_address=192.168.81.140 -a ip_port=7777 -a cluster=ocfs2
** Verify the cluster registration
[root@ocfsrac1 Server]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.81.138
        number = 0
        name = ocfsrac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.81.140
        number = 1
        name = ocfsrac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
5. Configure o2cb
[root@ocfsrac1 Server]# service o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
ENTER without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: <Enter>
Cluster to start on boot (Enter "none" to clear) [ocfs2]: <Enter>
Specify heartbeat dead threshold (>=7) [31]: <Enter>
Specify network idle timeout in ms (>=5000) [30000]: <Enter>
Specify network keepalive delay in ms (>=1000) [2000]: <Enter>
Specify network reconnect delay in ms (>=2000) [2000]: <Enter>
Writing O2CB configuration: OK
Setting cluster stack "o2cb": OK
Registering O2CB cluster "ocfscluster1": OK
Setting O2CB cluster timeouts : OK
6. Create and mount the cluster file system
[root@ocfsrac1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 2 -L "OCFS2Filesystem" /dev/sdb1
To have the volumes mounted automatically when the OS reboots, add the following entries to /etc/fstab (a mount sketch follows below):
[root@ocfsrac1 ~]# vi /etc/fstab
/dev/sdb1    /oradata     ocfs2    _netdev,datavolume,nointr    0 0
/dev/sdc1    /oradata1    ocfs2    _netdev,datavolume,nointr    0 0
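A minimal sketch of bringing the volumes up on each node, assuming /dev/sdc1 has been formatted with mkfs.ocfs2 in the same way as /dev/sdb1 and that the mount points do not exist yet:

mkdir -p /oradata /oradata1   # create the mount points on both nodes
mount -a                      # mount everything in /etc/fstab, including the two ocfs2 volumes
mount | grep ocfs2            # verify that both cluster volumes are mounted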
o2cb commands
service o2cb status   - check the status
service o2cb online   - bring the OCFS2 cluster online
service o2cb offline  - take the OCFS2 cluster offline
When changing a cluster IP or adding a node to the cluster, take the cluster offline and then back online.
* The change can also be made with tunefs.ocfs2.
7. Check the cluster status
[root@ocfsrac1 Server]# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "ocfscluster1": Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Local
Checking O2CB heartbeat: Active
=============== Node 1 setup complete =============
Node 2 setup
 Copy the node 1 image files and rename the image
 Change the IP
 Change the hostname
Restart the OCFS2 volume after making the changes
* Verify
Check the cluster file systems (oradata & oradata1) on each node; a sketch of the check follows below.
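A sketch of what that check might look like on either node (mounted.ocfs2 ships with ocfs2-tools; the mount points come from the fstab entries above):

df -h /oradata /oradata1   # both OCFS2 volumes should be mounted on both nodes
mounted.ocfs2 -f           # full detection mode: shows which cluster nodes have each ocfs2 device mounted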
 GRID Infrastructure Installation
1. Check the OS installation requirements
Install the required packages (a verification sketch follows the list):
[root@ocfsrac1 ~]# cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
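A quick way to confirm the packages are in place, assuming the same package names as above; anything missing is reported as "not installed":

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
      gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh \
      libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat \
      unixODBC unixODBC-devel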
Set resource limits for the oracle user (append; a quick check follows below):
[root@ocfsrac1 ~]# vi /etc/security/limits.conf
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536
Register the pam_limits module (append):
session    required     pam_limits.so
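A minimal sketch of verifying the limits from a fresh oracle login after the PAM change:

su - oracle -c 'ulimit -u'   # max user processes; should report the 2047 soft limit
su - oracle -c 'ulimit -n'   # max open files; should report the 1024 soft limit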
Set the Linux kernel parameters (add & change):
[root@ocfsrac1 ~]# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
Apply:
[root@ocfsrac1 ~]# /sbin/sysctl -p
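An optional check that the new values are active, querying a few of the keys set above:

/sbin/sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.sem net.ipv4.ip_local_port_range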
2. Configure /etc/hosts and SSH equivalence
Check the hostnames and register every address in /etc/hosts.
Configure both nodes identically.
(The VIP and SCAN IP are virtual addresses, so they only need to be entered in /etc/hosts.)
VIP = the actual service IP
SCAN IP = the IP used for external connections; clients come in through the SCAN listener and are then routed to the database.
* The PUBLIC IP and PRIVATE IP must be on different subnets, so configure the
PUBLIC IP as NAT and the PRIVATE IP as a host-only network.
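A minimal /etc/hosts sketch for both nodes; only the two public IPs come from this setup, while the private, VIP and SCAN addresses shown here are placeholder assumptions:

# Public (NAT)
192.168.81.138   ocfsrac1
192.168.81.140   ocfsrac2
# Private interconnect (host-only network; addresses are assumptions)
10.10.10.138     ocfsrac1-priv
10.10.10.140     ocfsrac2-priv
# Virtual IPs and SCAN (hosts-file only; addresses are assumptions)
192.168.81.139   ocfsrac1-vip
192.168.81.141   ocfsrac2-vip
192.168.81.142   ocfsrac-scan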
* Ping the public and private IPs in both directions:
[root@ocfsrac2 ~]# ping ocfsrac1
[root@ocfsrac2 ~]# ping ocfsrac2
[root@ocfsrac2 ~]# ping ocfsrac1-priv
[root@ocfsrac2 ~]# ping ocfsrac2-priv
Generate an SSH key pair on each node and append the public keys to authorized_keys so that both nodes can connect to each other without password prompts.
Node 1
[root@ocfsrac1 ~]# ssh-keygen -t rsa
[root@ocfsrac1 ~]# cat id_rsa.pub >> authorized_keys
Node 2
[root@ocfsrac2 ~]# ssh-keygen -t rsa
[root@ocfsrac2 ~]# cat id_rsa.pub >> authorized_keys
Node 1
/root/.ssh> ssh ocfsrac2 cat /home/oracle/.ssh/id_rsa.pub >> /home/oracle/.ssh/authorized_keys
/root/.ssh> ssh ocfsrac1 cat /home/oracle/.ssh/id_rsa.pub >> /home/oracle/.ssh/authorized_keys
Node 2
/root/.ssh> ssh ocfsrac2 cat /home/oracle/.ssh/id_rsa.pub >> /home/oracle/.ssh/authorized_keys
/root/.ssh> ssh ocfsrac1 cat /home/oracle/.ssh/id_rsa.pub >> /home/oracle/.ssh/authorized_keys
* Check the key values on each node
/home/oracle> cat /home/oracle/.ssh/authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEA1kQhkjNW0S8aoat/Kf77VbJkeBKUDXnzWTovdq1IQ
piIoN7ksSHQS7+80ieqtL8nZR3lqgTJRsJ4aCXIvi7BSwGwMLXSi5IAqC/fdtOzpzmerUHQkMM
h3YG8lyABX4EIcuetL3Rvhy3XgZVR8J0xCuSChfAwkQWiFXmEFFgUEwuc7i+zdZXLd6/7QBL
zFsTv2T7tMSnPMErDk8VLFG9mvMH6i56x2+ywQhx3P0etl6K2BtfYKShCmKu2aBIin4BMkJB
acQMdVaDvBmwgvWcRG7LlOa6fcbwYQtY8eR1K8xxX1wzDTwhKgUtLx0Wjw9Qu2FHHrKdo
b5DsBjgjek4CXw== oracle@ocfsrac1
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEApW3M6QQYMCUfBQK5Wg6nn69EAMVkYvz2QFRqU
G2zG64kLel8j70TlnfZz7S6FirnJvNpamWCJMmrNBHybZx+XEwBoIEbYHbY3PI17kozPLYYE
mI/FQz/Iay0ZRfiTUb9T2o5xcnA/+qXZpiPPCuV7FqeZc0tkIWKxKgN0dEgnGWYVFgIKfstdW
1WXy5YH3gXIf+f43p2u41GvHuK/LPvjiWEBygNtmP4wMCBbIw+A65XXCWsZFCha8C2lu1m
0CA3a4MSPEswEg1/yJvR4mQ/cUFkMSrcr7S2RfR+QFw7Fm0uEpJ/77liZ1QT2zv0uZBGxvY
duLGCOXbZcLZYiOmF2Q== oracle@ocfsrac2
/home/oracle>
* Verify SSH equivalence from each node
/home/oracle> ssh ocfsrac1 date
Thu Dec 15 10:59:10 KST 2016
/home/oracle> ssh ocfsrac2 date
Thu Dec 15 10:59:14 KST 2016
/home/oracle> ssh ocfsrac2-priv date
Thu Dec 15 10:59:17 KST 2016
/home/oracle> ssh ocfsrac1-priv date
Thu Dec 15 10:59:22 KST 2016
/home/oracle>
* Check .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
####GRID Environment ##########
export ORA_CRS_HOME=/oracle/app/grid/product/11.2.0
####Oracle Environment ##########
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0
export ORACLE_SID=ocfs1
export NLS_LANG=AMERICAN_AMERICA.KO16MSWIN949
########## PATH/LIB ##########
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:/bin:/bin/mkdir:/usr/local/bin:/usr/bin:/usr/sbin:/etc:/usr/ccs/bin:/usr/ucb:/usr/bin/X11:$ORACLE_HOME/OPatch:/usr/openwin/bin:.
export LD_LIBRARY_PATH=/usr/local/lib:$ORACLE_HOME/lib:.
export ORA_NLS10=$ORA_CRS_HOME/nls/data
########## OS Environment ##########
umask 022
export TERM=vt220
export EDITOR=vi
export LANG=C
export PS1="\$PWD> "
set -o vi
export LC_ALL=C
stty erase ^?
stty erase ^H
alias sm='sqlplus / as sysdba'
alias oh='cd $ORACLE_HOME'
alias ob='cd $ORACLE_BASE'
3. Start the GRID installation
./runInstaller
Install and Configure Grid Infrastructure for a Cluster -> Next
Advanced Installation -> Next
Specify the SCAN name configured in /etc/hosts (the SCAN IP is a virtual IP)
Add -> add node 2 -> select all -> Next
Configure the PUBLIC and PRIVATE IPs (the two subnets must be in different address ranges)
Specify the OCR location (the disk that holds the complete RAC configuration information)
Specify the Voting disk location (used to decide whether a node has failed; multiplexing is recommended)
From 11g RAC onwards, both the OCR and the voting disk can also be stored in ASM storage.
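Since this build keeps the OCR and voting files on the OCFS2 volume, the target directory has to exist and be writable by the grid owner before the installer is pointed at it. A sketch, assuming the /oradata1/storage path that appears later in the root.sh output and the oracle:dba ownership used elsewhere in this setup:

# Prepare the shared directory for the OCR and voting files (the volume is shared, so run once)
mkdir -p /oradata1/storage
chown -R oracle:dba /oradata1/storage
chmod -R 775 /oradata1/storage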
Specify the groups
Specify ORACLE_BASE and GRID_HOME
Specify the ORACLE Inventory location
Start the installation
To complete the installation, run the following scripts on each node:
[root@ocfsrac1 ~]# sh /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to dba.
The execution of the script is complete.
[root@ocfsrac1 ~]# sh /oracle/app/grid/product/11.2.0/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'ocfsrac1'
CRS-2676: Start of 'ora.mdnsd' on 'ocfsrac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ocfsrac1'
CRS-2676: Start of 'ora.gpnpd' on 'ocfsrac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ocfsrac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ocfsrac1'
CRS-2676: Start of 'ora.gipcd' on 'ocfsrac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'ocfsrac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ocfsrac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ocfsrac1'
CRS-2676: Start of 'ora.diskmon' on 'ocfsrac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ocfsrac1' succeeded
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting disk: /oradata1/storage/vdsk1.
Now formatting voting disk: /oradata1/storage/vdsk2.
Now formatting voting disk: /oradata1/storage/vdsk3.
CRS-4603: Successful addition of voting disk /oradata1/storage/vdsk1.
##  STATE    File Universal Id                 File Name                  Disk group
--  -----    -----------------                 ---------                  ---------
 1. ONLINE   19cb0bcb1d0b4f0cbf7e9a0b200871a7  (/oradata1/storage/vdsk1)  []
Located 3 voting disk(s).
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ocfsrac1 ~]#
Check the cluster after the installation (the Oracle software and database are not installed yet, so no database status can be checked):
/home/oracle> crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.gsd
               OFFLINE OFFLINE      ocfsrac1
               OFFLINE OFFLINE      ocfsrac2
ora.net1.network
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.ons
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.registry.acfs
               OFFLINE OFFLINE      ocfsrac1
               OFFLINE OFFLINE      ocfsrac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ocfsrac1
ora.cvu
      1        ONLINE  ONLINE       ocfsrac1
ora.oc4j
      1        ONLINE  ONLINE       ocfsrac1
ora.ocfsrac1.vip
      1        ONLINE  ONLINE       ocfsrac1
ora.ocfsrac2.vip
      1        ONLINE  ONLINE       ocfsrac2
ora.scan1.vip
      1        ONLINE  ONLINE       ocfsrac1
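A few additional standard clusterware checks one might run at this point (a sketch; these generic commands are not part of the original capture):

crsctl check crs          # verify that the CRS, CSS and EVM daemons are online
olsnodes -n               # list the cluster nodes with their node numbers
srvctl status nodeapps    # VIP, network and ONS status for each node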
 RAC Software Installation
./runInstaller
RAC database installation -> Select All -> Next
Specify ORACLE_BASE and ORACLE_HOME
Specify the groups
Start the installation
* If the clusterware error "PRCT-1011: failed to run oifcfg, detailed error: null" occurs during the installation,
set export ORA_NLS10=$ORA_CRS_HOME/nls/data and then re-run the installation.
Before the installation completes, run the script on each node:
[root@ocfsrac1 ~]# sh /oracle/app/oracle/product/11.2.0/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/oracle/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@ocfsrac1 ~]#
 RAC Database Installation
/home/oracle> dbca
Create Database -> Next
Custom Database -> Next
Select the database name and the nodes to install on
Specify the sys/system passwords
Datafile location (Cluster File System) -> /oradata (the OCFS2 data area) -> Next
Configure the remaining database installation options -> Next
Installation complete
* Verify the installation
/home/oracle> crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.gsd
               OFFLINE OFFLINE      ocfsrac1
               OFFLINE OFFLINE      ocfsrac2
ora.net1.network
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.ons
               ONLINE  ONLINE       ocfsrac1
               ONLINE  ONLINE       ocfsrac2
ora.registry.acfs
               ONLINE  OFFLINE      ocfsrac1
               ONLINE  OFFLINE      ocfsrac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ocfsrac2
ora.cvu
      1        ONLINE  ONLINE       ocfsrac2
ora.oc4j
      1        ONLINE  ONLINE       ocfsrac1
ora.ocfc.db
      1        ONLINE  ONLINE       ocfsrac1                 Open
      2        ONLINE  ONLINE       ocfsrac2                 Open
ora.ocfsrac1.vip
      1        ONLINE  ONLINE       ocfsrac1
ora.ocfsrac2.vip
      1        ONLINE  ONLINE       ocfsrac2
ora.scan1.vip
      1        ONLINE  ONLINE       ocfsrac2
/home/oracle>
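A couple of follow-up checks (a sketch, assuming the database name ocfc implied by the ora.ocfc.db resource above):

srvctl status database -d ocfc   # both instances should be reported as running
srvctl config scan               # show the SCAN name and SCAN VIP configuration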