5. Install Database S/W & TEST Database

   

1) Configure ASM Disks

-Set the PATH for the grid and oracle accounts (.bash_profile)

[Grid]

export PATH

export ORACLE_BASE=/home/grid/app

export ORACLE_HOME=/home/grid/11.2.0/grid

export PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_SID=+ASM1

[Oracle]

export PATH

export ORACLE_BASE=/home/oracle/app

export ORACLE_HOME=/home/oracle/11.2.0/db

export PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_SID=EXAORCL1

export ORACLE_UNQNAME=EXAORCL
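To confirm the profiles are in effect, re-source them and echo the key variables (a quick sanity check, not part of the original steps; the echoed values follow from the exports above):

[oracle@exadb1 ~]$ . ~/.bash_profile
[oracle@exadb1 ~]$ echo $ORACLE_HOME $ORACLE_SID
/home/oracle/11.2.0/db EXAORCL1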

   

Run ASMCA

[grid@exadb1 ~]$ ls

11.2.0 app oradiag_grid setup

[grid@exadb1 ~]$

[grid@exadb1 ~]$ asmca

   

-Selected all of the disks for the ORADATA01 group and created it with External redundancy.

-If two or more storage cells are configured, Normal or High redundancy can be used as well.
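If you prefer the command line to the ASMCA GUI, the same disk group can be created from SQL*Plus on the ASM instance; a minimal sketch, assuming the cell IP (192.168.56.102) and grid disk names used throughout this guide:

[grid@exadb1 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP ORADATA01 EXTERNAL REDUNDANCY
  2  DISK 'o/192.168.56.102/DATA_CD_DISK*'
  3  ATTRIBUTE 'compatible.asm' = '11.2', 'au_size' = '4M';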

   

[Verify the added ASM disk group]

[grid@exadb1 bin]$ ./crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE exadb1

ora.MGMT.dg

ONLINE ONLINE exadb1

ora.ORADATA01.dg

ONLINE ONLINE exadb1

ora.asm

ONLINE ONLINE exadb1 Started

ora.gsd

OFFLINE OFFLINE exadb1

ora.net1.network

ONLINE ONLINE exadb1

ora.ons

ONLINE ONLINE exadb1

ora.registry.acfs

ONLINE ONLINE exadb1

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE exadb1

ora.cvu

1 ONLINE ONLINE exadb1

ora.exadb1.vip

1 ONLINE ONLINE exadb1

ora.oc4j

1 ONLINE ONLINE exadb1

ora.scan1.vip

1 ONLINE ONLINE exadb1

[grid@exadb1 bin]$
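The new disk group can also be listed from the grid account with asmcmd (an additional check, not in the original steps):

[grid@exadb1 ~]$ asmcmd lsdg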

   

2) Install DB S/W

Transfer the DB installation files and unzip them

   

Proceed with a software-only DB installation

[oracle@exadb1 database]$ ls
install readme.html response rpm runInstaller sshsetup stage welcome.html

[oracle@exadb1 database]$

[oracle@exadb1 database]$ ./runInstaller

Starting Oracle Universal Installer...

   

   

-Select "Install database software only"

   

   

-Set up SSH connectivity (oracle user)

   

   

-Ignore the error shown above and continue

   

   

[root@exadb1 ~]# /home/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh

Performing root user operation for Oracle 11g

   

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /home/oracle/app/oracle/product/11.2.0/dbhome_1

   

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

   

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@exadb1 ~]#

   

   

   

3) Create TEST DB

Create the database

[oracle@exadb1 ~]$

[oracle@exadb1 ~]$ dbca

   

   

-Select "Oracle Real Application Clusters (RAC) database"

   

   

-Specify the SID

   

   

-Select ASM storage, then choose the disk group

   

   

-Database creation can take quite a while, so just let it run (watch the alert log).
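To watch progress while DBCA runs, tail the instance alert log; a sketch assuming the default diag layout under ORACLE_BASE (adjust the path to your own environment):

[oracle@exadb1 ~]$ tail -f $ORACLE_BASE/diag/rdbms/exaorcl/EXAORCL1/trace/alert_EXAORCL1.log   # path assumes default diag layout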


4) Check Oracle Exadata Status

   

Verify the Oracle Exadata configuration

[grid@exadb1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE exadb1

ora.MGMT.dg

ONLINE ONLINE exadb1

ora.ORADATA1.dg

ONLINE ONLINE exadb1

ora.asm

ONLINE ONLINE exadb1 Started

ora.gsd

OFFLINE OFFLINE exadb1

ora.net1.network

ONLINE ONLINE exadb1

ora.ons

ONLINE ONLINE exadb1

ora.registry.acfs

ONLINE ONLINE exadb1

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE exadb1

ora.cvu

1 ONLINE ONLINE exadb1

ora.exadb1.vip

1 ONLINE ONLINE exadb1

ora.exaorcl.db

1 ONLINE ONLINE exadb1 Open

ora.oc4j

1 ONLINE ONLINE exadb1

ora.scan1.vip

1 ONLINE ONLINE exadb1
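The database resource can also be queried directly with srvctl (a sketch; EXAORCL is the ORACLE_UNQNAME set in the profile earlier):

[oracle@exadb1 ~]$ srvctl status database -d EXAORCL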

SQL> SELECT NAME, ALLOCATION_UNIT_SIZE AU, STATE, TYPE, TOTAL_MB, COMPATIBILITY FROM V$ASM_DISKGROUP;

   

NAME      AU         STATE      TYPE     TOTAL_MB  COMPATIBILITY
--------  ---------  ---------  ------  ---------  -------------
MGMT      4194304    MOUNTED    EXTERN        896  11.2.0.4.0
ORADATA1  4194304    CONNECTED  EXTERN       4480  11.2.0.4.0

   

   

SQL> SELECT LIBRARY, TOTAL_MB, NAME, FAILGROUP, LABEL FROM V$ASM_DISK ORDER BY FAILGROUP, NAME;

   

LIBRARY  TOTAL_MB  NAME                     FAILGROUP                LABEL
-------  --------  -----------------------  -----------------------  -----------------------
CELL          448  DATA_CD_DISK01_STOCELL1  DATA_CD_DISK01_STOCELL1  DATA_CD_DISK01_STOCELL1
CELL          448  DATA_CD_DISK02_STOCELL1  DATA_CD_DISK02_STOCELL1  DATA_CD_DISK02_STOCELL1
CELL          448  DATA_CD_DISK03_STOCELL1  DATA_CD_DISK03_STOCELL1  DATA_CD_DISK03_STOCELL1
CELL          448  DATA_CD_DISK04_STOCELL1  DATA_CD_DISK04_STOCELL1  DATA_CD_DISK04_STOCELL1
CELL          448  DATA_CD_DISK05_STOCELL1  DATA_CD_DISK05_STOCELL1  DATA_CD_DISK05_STOCELL1
CELL          448  DATA_CD_DISK06_STOCELL1  DATA_CD_DISK06_STOCELL1  DATA_CD_DISK06_STOCELL1
CELL          448  DATA_CD_DISK07_STOCELL1  DATA_CD_DISK07_STOCELL1  DATA_CD_DISK07_STOCELL1
CELL          448  DATA_CD_DISK08_STOCELL1  DATA_CD_DISK08_STOCELL1  DATA_CD_DISK08_STOCELL1
CELL          448  DATA_CD_DISK09_STOCELL1  DATA_CD_DISK09_STOCELL1  DATA_CD_DISK09_STOCELL1
CELL          448  DATA_CD_DISK10_STOCELL1  DATA_CD_DISK10_STOCELL1  DATA_CD_DISK10_STOCELL1
CELL          448  DATA_CD_DISK11_STOCELL1  DATA_CD_DISK11_STOCELL1  DATA_CD_DISK11_STOCELL1
CELL          448  DATA_CD_DISK12_STOCELL1  DATA_CD_DISK12_STOCELL1  DATA_CD_DISK12_STOCELL1

12 rows selected.

   

SQL> SELECT NAME, DATABASE_ROLE, PLATFORM_NAME FROM V$DATABASE;

   

NAME      DATABASE_ROLE     PLATFORM_NAME
--------  ----------------  ----------------
EXAORCL   PRIMARY           Linux x86 64-bit

   

   

SQL> SELECT INSTANCE_NAME, HOST_NAME, VERSION, DATABASE_STATUS, INSTANCE_ROLE FROM V$INSTANCE;

   

INSTANCE_NAME  HOST_NAME  VERSION     DATABASE_STATUS  INSTANCE_ROLE
-------------  ---------  ----------  ---------------  ----------------
EXAORCL1       exadb1     11.2.0.4.0  ACTIVE           PRIMARY_INSTANCE

[root@exadb1 bin]# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2932

Available space (kbytes) : 259188

ID : 411888517

Device/File Name : +MGMT

Device/File integrity check succeeded

   

Device/File not configured

   

Device/File not configured

   

Device/File not configured

   

Device/File not configured

   

Cluster registry integrity check succeeded

   

[root@exadb1 bin]# ./crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 8890e5fc853e4fa5bfa8c39090e6cf6d (o/192.168.56.102/DATA_CD_DISK01_stocell1) [MGMT]

Located 1 voting disk(s).

   

[oracle@exadb1 ~]$ free

total used free shared buffers cached

Mem: 2685952 2425212 260740 0 17192 796020

-/+ buffers/cache: 1612000 1073952

Swap: 4095992 112112 3983880

   

Oracle Exadata Smart Scan TEST

-Run the same query once with cell_offload_processing disabled and once with it enabled, then compare the plans: with offload disabled only a database-side filter() predicate appears, while with offload enabled an additional storage() predicate shows the filter being offloaded to the storage cell.

alter session set cell_offload_processing=false;

   

select /*+ GATHER_PLAN_STATISTICS WITH_SMART_SCAN */sum(sal) from scott.emp where deptno='20';

   

 

SELECT *

FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL, 'allstats last -rows +predicate'));

   

SQL_ID d8gtt1g1rufkx, child number 1

-------------------------------------

select /*+ GATHER_PLAN_STATISTICS WITH_SMART_SCAN */sum(sal) from

scott.emp where deptno='20'

 

Plan hash value: 2083865914

 

   

Predicate Information (identified by operation id):

---------------------------------------------------

 

2 - filter("DEPTNO"=20)

 

Note

-----

- dynamic sampling used for this statement (level=2)

 

alter session set cell_offload_processing=true;

   

select /*+ GATHER_PLAN_STATISTICS WITH_SMART_SCAN */sum(sal) from scott.emp where deptno='20';

   

 

SELECT *

FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL, 'allstats last -rows +predicate'));

   

SQL_ID d8gtt1g1rufkx, child number 0

-------------------------------------

select /*+ GATHER_PLAN_STATISTICS WITH_SMART_SCAN */sum(sal) from

scott.emp where deptno='20'

 

Plan hash value: 2083865914

 

 

Predicate Information (identified by operation id):

---------------------------------------------------

 

2 - storage("DEPTNO"=20)

filter("DEPTNO"=20)

 

Note

-----

- dynamic sampling used for this statement (level=2)
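Beyond the storage() predicate, the degree of offload can be checked from V$SQL; a minimal sketch using the SQL_ID from the plans above (these columns exist in 11.2):

SQL> SELECT SQL_ID, IO_CELL_OFFLOAD_ELIGIBLE_BYTES, IO_INTERCONNECT_BYTES
  2  FROM V$SQL WHERE SQL_ID = 'd8gtt1g1rufkx';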

 

   

   



4. Install Grid Infrastructure

   

1) Clone the VM from Storage Cell

Clone the stocell1 VM

   

Assign a name and reinitialize the MAC addresses of all network cards

   

Perform a full clone

 

   

Modify the network and disk configuration

-Add an internal network adapter to the VM cloned from stocell1

   

-Remove the disks inherited from stocell1 (detach the 500MB and 400MB disks, leaving only the 25GB base disk, then remove them at the OS level)

   

[Start the OS]

   

Remove the Storage Cell software

[root@stocell1 oracle]# rpm -evv cell-11.2.3.2.1_LINUX.X64_130109-1

[root@stocell1 ~]# rm -rf /opt/oracle/cell*

   

Remove and reconfigure the network

-Remove eth0.bak and eth1.bak

   

   

virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00

inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:19 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:3583 (3.4 KiB)

   

Configure the internal network using the IP range (192.168.122.*) assigned by default when the OS is installed as an OVM

   

[root@exadb1 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

   

#InfiniBand IPs

192.168.56.102 stocell1 stocell1.exatest

#192.168.10.21 stocell2 stocell2.exatest -- when using two storage cells

192.168.122.10 exadb1-ib exadb1-ib.exatest

#Public IPs

   

192.168.56.101 exadb1 exadb1.exatest

192.168.56.130 exadb1-vip exadb1-vip.exatest

192.168.56.100 exadb-cluster-scan exadb-cluster-scan.exatest

   

[root@stocell1 ~]# vi /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=yes

HOSTNAME=exadb1
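To apply the new hostname without a full reboot, it can also be set directly (a sketch; rebooting works just as well):

[root@stocell1 ~]# hostname exadb1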

   

2) Install Grid Infrastructure

   

Prepare the Grid installation file

-p13390677_112040_Linux-x86-64_3of7.zip

[/home/grid/setup]

[root@exadb1 ~]# unzip p13390677_112040_Linux-x86-64_3of7.zip

   

Remove the cell software users

[root@exadb1 ~]# userdel celladmin

[root@exadb1 ~]# userdel cellmonitor

   

Add and modify OS kernel parameters

[root@exadb1 ~]# cat /etc/sysctl.conf

   

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1375207424

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

   

   

net.core.rmem_default=262144

net.core.rmem_max=4194304

net.core.wmem_default=262144

net.core.wmem_max=2097152

   

--These kernel settings are critical; incorrect values cause the CRS-5019 loop described at the end of this section.

   

[root@exadb1 ~]# sysctl -p

[root@exadb1 ~]# cat /etc/security/limits.conf

grid soft nofile 131072

grid hard nofile 131072

grid soft nproc 131072

grid hard nproc 131072

grid soft core unlimited

grid hard core unlimited

grid soft memlock 50000000

grid hard memlock 50000000

   

   

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

   

Create groups and users

[root@exadb1 ~]# groupadd oper

[root@exadb1 ~]# groupadd dba

[root@exadb1 ~]# groupadd oinstall

[root@exadb1 ~]# groupadd asmadmin

[root@exadb1 ~]# useradd -g oinstall -G dba,oper,asmadmin grid
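The oracle software owner is created the same way; a sketch in which the group membership simply mirrors the grid user (an assumption, adjust to your own role separation):

[root@exadb1 ~]# useradd -g oinstall -G dba,oper,asmadmin oracle   # groups mirror the grid user (assumption)
[root@exadb1 ~]# passwd oracle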

   

Configure the network between the cell disks and the DB instance

[root@exadb1 ~]# mkdir -p /etc/oracle/cell/network-config

   

[root@exadb1 ~]# vi /etc/oracle/cell/network-config/cellinit.ora

ipaddress1 = 192.168.56.101/24 #InfiniBand IP/network of exadb1

   

[root@exadb1 ~]# vi /etc/oracle/cell/network-config/cellip.ora

cell = "192.168.56.102" #InfiniBand IP of stocell1

   

#cell = "192.168.56.103"    #InfiniBand IP of stocell2 -- allows disk expansion

   

[Assign permissions]

[root@exadb1 ~]# chown -R grid:oinstall /etc/oracle/cell/network-config

[root@exadb1 ~]# chmod -R 775 /etc/oracle/cell/network-config

   

Configure NTP

[root@exadb1 ~]#vi /etc/sysconfig/ntpd

# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid" (add -x)

   

#Set to 'yes' to sync hw clock after successful ntpdate

   

SYNC_HWCLOCK=no

NTPDATE_OPTIONS="-t 10"

chkconfig ntpd on

service ntpd start
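Once ntpd is running, peer synchronization can be verified (a standard check, not from the original steps):

[root@exadb1 ~]# ntpq -p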

   

Install Grid Infrastructure

-Before installing, verify that the disks are visible (kfod becomes available after unzipping the installation files)

[root@exadb1 bin]# export LD_LIBRARY_PATH=/home/grid/setup/grid/stage/ext/lib

[root@exadb1 bin]# ./kfod disks=all op=disks (/tmp/OraInstall2015-08-17_04-34-05PM/ext/bin/kfod)

--------------------------------------------------------------------------------

Disk Size Path User Group

=====================================================================

1: 448 Mb o/192.168.56.102/DATA_CD_DISK01_stocell1 <unknown> <unknown>

2: 448 Mb o/192.168.56.102/DATA_CD_DISK02_stocell1 <unknown> <unknown>

3: 448 Mb o/192.168.56.102/DATA_CD_DISK03_stocell1 <unknown> <unknown>

4: 448 Mb o/192.168.56.102/DATA_CD_DISK04_stocell1 <unknown> <unknown>

5: 448 Mb o/192.168.56.102/DATA_CD_DISK05_stocell1 <unknown> <unknown>

6: 448 Mb o/192.168.56.102/DATA_CD_DISK06_stocell1 <unknown> <unknown>

7: 448 Mb o/192.168.56.102/DATA_CD_DISK07_stocell1 <unknown> <unknown>

8: 448 Mb o/192.168.56.102/DATA_CD_DISK08_stocell1 <unknown> <unknown>

9: 448 Mb o/192.168.56.102/DATA_CD_DISK09_stocell1 <unknown> <unknown>

10: 448 Mb o/192.168.56.102/DATA_CD_DISK10_stocell1 <unknown> <unknown>

11: 448 Mb o/192.168.56.102/DATA_CD_DISK11_stocell1 <unknown> <unknown>

12: 448 Mb o/192.168.56.102/DATA_CD_DISK12_stocell1 <unknown> <unknown>

[root@exadb1 bin]#

   

-If running locally, allow X connections with xhost + and run the installer

[root@stocell1 media]# xhost +

[root@stocell1 media]# su - grid

[grid@stocell1 grid]$ ./runInstaller

   

-If going through Xmanager, start it in passive mode first, then:

[grid@exadb1 ~]$ export DISPLAY=192.168.56.1:0.0

[grid@stocell1 grid]$ ./runInstaller

   

   

   

   

-Select "Install and Configure Oracle Grid Infrastructure for a Cluster"

   

   

-Select "Advanced Installation"

   

   

-The SCAN IP entries in /etc/hosts are important here

   

   

-Select SSH Connectivity and run Setup (for the grid and oracle users)

   

   

-The public and private networks must use different IP ranges.

   

   

-Select Oracle ASM

   

   

-If the disk group candidates are not visible, search with o/*/* under Change Discovery Path (select AU size 4MB and External redundancy). The redundancy options are as follows (see the sizing sketch after this list):

1. Normal

- 2-way mirroring; requires twice the disk space (storing 100GB of data requires two 100GB disks)

2. High

- 3-way mirroring; requires three times the disk space

3. External

- No ASM mirroring; recommended only when the disks are already protected by hardware RAID
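A quick sizing check using the twelve 448MB grid disks built in "Storage Cell disk" (simple arithmetic, rounded):

Raw:      12 disks x 448MB = 5376MB
External: ~5376MB usable (no ASM mirroring)
Normal:   5376MB / 2 = 2688MB usable
High:     5376MB / 3 = 1792MB usable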

   

   

   

   

-Leave the defaults and click Next

   

   

   

   

-Install every RPM and apply every setting flagged by the prerequisite checks.

-However, the "Task resolv.conf Integrity" check can be ignored.

   

-Running root.sh completes the Grid configuration.

[root@exadb1 ~]# /home/grid/app/oraInventory/orainstRoot.sh

Changing permissions of /home/grid/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

   

Changing groupname of /home/grid/app/oraInventory to oinstall.

The execution of the script is complete.

   

[root@exadb1 ~]# /home/grid/11.2.0/grid/root.sh

Performing root user operation for Oracle 11g

   

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /home/grid/11.2.0/grid

   

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

   

   

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /home/grid/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'exadb1'

CRS-2676: Start of 'ora.mdnsd' on 'exadb1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'exadb1'

CRS-2676: Start of 'ora.gpnpd' on 'exadb1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exadb1'

CRS-2672: Attempting to start 'ora.gipcd' on 'exadb1'

CRS-2676: Start of 'ora.gipcd' on 'exadb1' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'exadb1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'exadb1'

CRS-2672: Attempting to start 'ora.diskmon' on 'exadb1'

CRS-2676: Start of 'ora.diskmon' on 'exadb1' succeeded

CRS-2676: Start of 'ora.cssd' on 'exadb1' succeeded

   

ASM created and started successfully.

   

Disk Group MGMT created successfully.

   

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 2d929fc5b7f24fd6bfc418ae56169886.

Successfully replaced voting disk group with +MGMT.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 2d929fc5b7f24fd6bfc418ae56169886 (o/192.168.56.102/DATA_CD_DISK01_stocell1) [MGMT]

Located 1 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'exadb1'

CRS-2676: Start of 'ora.asm' on 'exadb1' succeeded

CRS-2672: Attempting to start 'ora.MGMT.dg' on 'exadb1'

CRS-2676: Start of 'ora.MGMT.dg' on 'exadb1' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@exadb1 ~]#

   

   

-Ignore the error above and complete the installation. (It is worth reviewing the detailed log for that error.)

   

Verify the Grid configuration

[Check the cell alert log]

Tue Aug 18 20:31:42 2015

Heartbeat with diskmon started on exadb1

Tue Aug 18 20:31:49 2015

Resilvering incompatible ASM instance (exadb1, pid: 15062) connected. Any griddisks with resilvering tables will be dropped and recreated.

Tue Aug 18 20:32:39 2015

Heartbeat with diskmon stopped on exadb1

   

[Check the Grid resources]

[grid@exadb1 bin]$ ./crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE exadb1

ora.MGMT.dg

ONLINE ONLINE exadb1

ora.asm

ONLINE ONLINE exadb1 Started

ora.gsd

OFFLINE OFFLINE exadb1

ora.net1.network

ONLINE ONLINE exadb1

ora.ons

ONLINE ONLINE exadb1

ora.registry.acfs

ONLINE ONLINE exadb1

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE exadb1

ora.cvu

1 ONLINE ONLINE exadb1

ora.exadb1.vip

1 ONLINE ONLINE exadb1

ora.oc4j

1 ONLINE ONLINE exadb1

ora.scan1.vip

1 ONLINE ONLINE exadb1

[grid@exadb1 bin]$

  

   

Errors encountered during configuration

2015-08-18 20:28:56.305:

[/home/grid/11.2.0/grid/bin/oraagent.bin(13738)]CRS-5019:All OCR locations are on ASM disk groups [MGMT], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/home/grid/11.2.0/grid/log/exadb1/agent/ohasd/oraagent_grid/oraagent_grid.log".

2015-08-18 20:28:57.349:

[/home/grid/11.2.0/grid/bin/oraagent.bin(13738)]CRS-5019:All OCR locations are on ASM disk groups [MGMT], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/home/grid/11.2.0/grid/log/exadb1/agent/ohasd/oraagent_grid/oraagent_grid.log".

   

--If root.sh keeps emitting this error in the CRS alert log and never proceeds to the next step, verify that the values in /etc/sysctl.conf are set correctly.
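A quick way to confirm what is actually in effect is to reload and query the keys directly (compare against the values listed earlier in this section):

[root@exadb1 ~]# sysctl -p
[root@exadb1 ~]# sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max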

   



3. Storage Cell disk

  • Continue building on the stocell1 environment (2 CPUs, 2GB of memory, Ethernet adapters 1 and 2)

       

1) Install Oracle Linux 5

  • Prepare the Oracle Linux 5 installation file (V40139-01.iso). Other versions can cause issues, so Linux 5 is recommended.
  • Installing the VirtualBox Guest Additions makes tasks such as transferring installation files and setting up shared folders convenient. (On RHEL5, however, Guest Additions can break the OS, so it is fine to proceed without them.)

       

[root@stocell1 media]# cd VBOXADDITIONS_4.3.30_101610/

/media/VBOXADDITIONS_4.3.30_101610

[root@stocell1 VBOXADDITIONS_4.3.30_101610]# ls

32Bit autorun.sh runasroot.sh VBoxWindowsAdditions-amd64.exe

64Bit cert VBoxLinuxAdditions.run VBoxWindowsAdditions.exe

AUTORUN.INF OS2 VBoxSolarisAdditions.pkg VBoxWindowsAdditions-x86.exe

[root@stocell1 VBOXADDITIONS_4.3.30_101610]#

[root@stocell1 VBOXADDITIONS_4.3.30_101610]# ./VBoxLinuxAdditions.run

   

Create a new OVM

-Name the OVM and select Oracle Linux 64-bit

   

-Specify the disk file location and size (25GB is recommended; more is fine)

-Click Create

   

Attach the installation ISO

   

Install Oracle Linux 5

-Nothing special needs configuring; only the package selection below matters.

-Keep clicking Next to finish the installation.

   

-Software Development

-Virtualization

-Clustering

-Storage Clustering

   

   

-When installation finishes, the post-install setup runs; disable both the firewall and SELinux here.

   

-Installation complete

   

Check the network configuration

-Addresses are assigned automatically via DHCP

-Internet access goes through wireless Wi-Fi (the 192.168.0.* range).

   

   

-eth1 is assigned its IP through the VirtualBox Host-Only Ethernet Adapter

-It uses the 192.168.56.* range

   

   

[root@stocell1 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 stocell1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

[root@stocell1 VBOXADDITIONS_4.3.30_101610]# ifconfig -a

eth0 Link encap:Ethernet HWaddr 08:00:27:4A:70:23

inet addr:192.168.0.14 Bcast:192.168.0.255 Mask:255.255.255.0

inet6 addr: fe80::a00:27ff:fe4a:7023/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:20133 errors:0 dropped:0 overruns:0 frame:0

TX packets:37 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:1263813 (1.2 MiB) TX bytes:10165 (9.9 KiB)

   

eth1 Link encap:Ethernet HWaddr 08:00:27:C2:28:D1

inet addr:192.168.56.102 Bcast:192.168.56.255 Mask:255.255.255.0

inet6 addr: fe80::a00:27ff:fec2:28d1/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:338 errors:0 dropped:0 overruns:0 frame:0

TX packets:275 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:35142 (34.3 KiB) TX bytes:37744 (36.8 KiB)

   

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:9865 errors:0 dropped:0 overruns:0 frame:0

TX packets:9865 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:8315266 (7.9 MiB) TX bytes:8315266 (7.9 MiB)

   

   

Stop unnecessary services

-These can slow the OVM down, so stop them.

[root@stocell1 ~]# service yum-updatesd stop

[root@stocell1 ~]# chkconfig yum-updatesd off

   

2) Install Storage cell S/W

-The installation files can be transferred via a shared folder or by FTP.

   

Transfer the installation file

-Copy the Oracle Database Machine Exadata Storage Cell installation file (V36290-01.zip) into the OVM.

   

Unzip the file

[root@stocell1 ~]# unzip V36290-01.zip

[root@stocell1 ~]# cd dl180/boot/cellbits/

-rwxrwx--- 1 root root 245231205 Jan  9  2013 cell.bin

The file can also be copied elsewhere and unzipped there.

   

[root@stocell1 ~]# unzip cell.bin

Archive:  cell.bin

warning [cell.bin]:  6408 extra bytes at beginning or within zipfile

(attempting to process anyway)

inflating: cell-11.2.3.2.1_LINUX.X64_130109-1.x86_64.rpm

inflating: jdk-1_5_0_15-linux-amd64.rpm

Install jdk

[root@stocell1 ~]# rpm -ivh jdk-1_5_0_15-linux-amd64.rpm

Preparing…                ########################################### [100%]

1:jdk                    ########################################### [100%]

Preparing to install cell rpm (thanks to Lee ..)

   

[Set permissions]

[root@stocell1 ~]# mkdir /var/log/oracle

[root@stocell1 ~]# chmod 775 /var/log/oracle

(It will also be used by the celladmin user …)

Install cell sw

[root@stocell1 ~]# rpm -ivh cell-11.2.3.2.1_LINUX.X64_130109-1.x86_64.rpm

Preparing…                ########################################### [100%]

Pre Installation steps in progress …

1:cell                   ########################################### [100%]

Post Installation steps in progress …

Set cellusers group for /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log directory

Set 775 permissions for /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/cellsrv/deploy/log directory

/

/

   

Installation SUCCESSFUL.

   

-The RPM may have prerequisites; install any missing RPMs with yum and continue.

-Installing these two RPMs completes the step.

   

   

3) Add Virtual Disks

Add disks (create new disks)

   

   

   

   

-Add twelve 500MB disks (stocell_disk1~12) and six 400MB disks (fd_stocell_disk1~6)

   

4) Start Storage cell S/W

-After adding the disks, start the stocell1 OVM

-If an error appears at login, add export DISPLAY=:0 as below and log in again.

[root@stocell1]# vi /etc/bashrc

…..

export DISPLAY=:0

   

-After adding the line, log in again.

   

Add OS kernel parameters

/etc/sysctl.conf

   

fs.file-max = 65536

/etc/security/limits.conf

   

* soft nofile 65536

* hard nofile 65536

   

[root@stocell1 ~]# modprobe rds

[root@stocell1 ~]# modprobe rds_tcp

[root@stocell1 ~]# modprobe rds_rdma

[root@stocell1 ~]# vi /etc/modprobe.d/rds.conf

install rds /sbin/modprobe --ignore-install rds && /sbin/modprobe rds_tcp && /sbin/modprobe rds_rdma

[Insert the line above into the file]

[Verify the configuration]

[root@stocell1 ~]# lsmod | grep rds

rds_rdma 106561 0

rds_tcp 48097 0

rds 155561 2 rds_rdma,rds_tcp

rdma_cm 73429 2 rds_rdma,ib_iser

ib_core 108097 7 rds_rdma,ib_iser,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad

   

Add and configure cell disks

-Check the twelve 500MB and six 400MB disks

[root@stocell1 ~]# fdisk -l 2>/dev/null |grep "B,"

Disk /dev/sda: 26.8 GB, 26843545600 bytes

Disk /dev/sdb: 524 MB, 524288000 bytes

Disk /dev/sdc: 524 MB, 524288000 bytes

Disk /dev/sdd: 524 MB, 524288000 bytes

Disk /dev/sde: 524 MB, 524288000 bytes

Disk /dev/sdf: 524 MB, 524288000 bytes

Disk /dev/sdg: 524 MB, 524288000 bytes

Disk /dev/sdh: 524 MB, 524288000 bytes

Disk /dev/sdi: 524 MB, 524288000 bytes

Disk /dev/sdj: 524 MB, 524288000 bytes

Disk /dev/sdk: 524 MB, 524288000 bytes

Disk /dev/sdl: 524 MB, 524288000 bytes

Disk /dev/sdm: 524 MB, 524288000 bytes

Disk /dev/sdn: 419 MB, 419430400 bytes

Disk /dev/sdo: 419 MB, 419430400 bytes

Disk /dev/sdp: 419 MB, 419430400 bytes

Disk /dev/sdq: 419 MB, 419430400 bytes

Disk /dev/sdr: 419 MB, 419430400 bytes

Disk /dev/sds: 419 MB, 419430400 bytes

Disk /dev/dm-0: 22.5 GB, 22515023872 bytes

Disk /dev/dm-1: 4194 MB, 4194304000 bytes

(500MB data, 400MB flash)

[root@stocell1 unix]# echo $T_WORK

/opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks

cd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks

--Create the directory if it does not exist (mkdir disks/raw)

[Map the allocated disks]

cd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw

   

ln -s /dev/sdb stocell1_DISK01

ln -s /dev/sdc stocell1_DISK02

ln -s /dev/sdd stocell1_DISK03

ln -s /dev/sde stocell1_DISK04

ln -s /dev/sdf stocell1_DISK05

ln -s /dev/sdg stocell1_DISK06

ln -s /dev/sdh stocell1_DISK07

ln -s /dev/sdi stocell1_DISK08

ln -s /dev/sdj stocell1_DISK09

ln -s /dev/sdk stocell1_DISK10

ln -s /dev/sdl stocell1_DISK11

ln -s /dev/sdm stocell1_DISK12

ln -s /dev/sdn stocell1_FLASH01

ln -s /dev/sdo stocell1_FLASH02

ln -s /dev/sdp stocell1_FLASH03

ln -s /dev/sdq stocell1_FLASH04

ln -s /dev/sdr stocell1_FLASH05

ln -s /dev/sds stocell1_FLASH06
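The eighteen symlinks above can also be created in a loop; an equivalent sketch that assumes the /dev/sdb through /dev/sds ordering shown by fdisk:

cd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/disks/raw
i=1
for dev in /dev/sd[b-m]; do    # twelve 500MB data disks
    ln -s $dev stocell1_DISK$(printf "%02d" $i); i=$((i+1))
done
i=1
for dev in /dev/sd[n-s]; do    # six 400MB flash disks
    ln -s $dev stocell1_FLASH$(printf "%02d" $i); i=$((i+1))
done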

   

Configure the cell services

[root@stocell1 ~]# su - celladmin

[celladmin@stocell1 ~]$ cellcli -e alter cell restart services all

   

Stopping the RS, CELLSRV, and MS services…

CELL-01509: Restart Server (RS) not responding.

Starting the RS, CELLSRV, and MS services…

Getting the state of RS services… running

Starting CELLSRV services…

The STARTUP of CELLSRV services was not successful.

CELL-01547: CELLSRV startup failed due to unknown reasons.

Starting MS services…

The STARTUP of MS services was successful.

The error is not unknown (as stated) but well known and expected: CELLSRV cannot start until the cell itself has been created, which is the next step.

[Configure the InfiniBand storage network]

[celladmin@stocell1 ~]$ cellcli -e create cell stocell1 interconnect1=eth1

Cell stocell1 successfully created

Starting CELLSRV services…

The STARTUP of CELLSRV services was successful.

Flash cell disks, FlashCache, and FlashLog will be created…

CellDisk FD_00_stocell1 successfully created

CellDisk FD_01_stocell1 successfully created

CellDisk FD_02_stocell1 successfully created

CellDisk FD_03_stocell1 successfully created

CellDisk FD_04_stocell1 successfully created

CellDisk FD_05_stocell1 successfully created

Flash log stocell1_FLASHLOG successfully created

Flash cache stocell1_FLASHCACHE successfully created

[Create the cell disks]

[celladmin@stocell1 ~]$ cellcli -e create celldisk all

CellDisk CD_DISK01_stocell1 successfully created

CellDisk CD_DISK02_stocell1 successfully created

CellDisk CD_DISK03_stocell1 successfully created

CellDisk CD_DISK04_stocell1 successfully created

CellDisk CD_DISK05_stocell1 successfully created

CellDisk CD_DISK06_stocell1 successfully created

CellDisk CD_DISK07_stocell1 successfully created

CellDisk CD_DISK08_stocell1 successfully created

CellDisk CD_DISK09_stocell1 successfully created

CellDisk CD_DISK10_stocell1 successfully created

CellDisk CD_DISK11_stocell1 successfully created

CellDisk CD_DISK12_stocell1 successfully created

[Create the grid disks]

[celladmin@stocell1 ~]$ cellcli -e create griddisk all harddisk prefix=DATA

GridDisk DATA_CD_DISK01_stocell1 successfully created

GridDisk DATA_CD_DISK02_stocell1 successfully created

GridDisk DATA_CD_DISK03_stocell1 successfully created

GridDisk DATA_CD_DISK04_stocell1 successfully created

GridDisk DATA_CD_DISK05_stocell1 successfully created

GridDisk DATA_CD_DISK06_stocell1 successfully created

GridDisk DATA_CD_DISK07_stocell1 successfully created

GridDisk DATA_CD_DISK08_stocell1 successfully created

GridDisk DATA_CD_DISK09_stocell1 successfully created

GridDisk DATA_CD_DISK10_stocell1 successfully created

GridDisk DATA_CD_DISK11_stocell1 successfully created

GridDisk DATA_CD_DISK12_stocell1 successfully created

[celladmin@stocell1 ~]$ cellcli -e alter cell restart services all

   

Stopping the RS, CELLSRV, and MS services...

The SHUTDOWN of services was successful.

Starting the RS, CELLSRV, and MS services...

Getting the state of RS services... running

Starting CELLSRV services...

The STARTUP of CELLSRV services was successful.

Starting MS services...

The STARTUP of MS services was successful.

   

[cell disk alert log]

[root@stocell1 trace]# cd /opt/oracle/cell11.2.3.2.1_LINUX.X64_130109/log/diag/asm/cell/stocell1/trace

[root@stocell1 trace]# ls -al alert*

-rw-rw---- 1 root celladmin 156850 Aug 21 10:00 alert.log

..

..

[RS] Started Service MS with pid 5390

Fri Aug 21 10:00:51 2015

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (1842424628), size=80MB, cdisk=FD_05_stocell1

Fri Aug 21 10:00:51 2015

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (3996181628), size=80MB, cdisk=FD_04_stocell1

Fri Aug 21 10:00:51 2015

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (3466652180), size=80MB, cdisk=FD_03_stocell1

Fri Aug 21 10:00:51 2015

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (1448144612), size=80MB, cdisk=FD_02_stocell1

Fri Aug 21 10:00:51 2015

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (2992946252), size=80MB, cdisk=FD_00_stocell1

Smart Flash Logging enabled on FlashLog stocell1_FLASHLOG (1465279500), size=80MB, cdisk=FD_01_stocell1

FlashLog: initialization complete

Fri Aug 21 10:00:51 2015

CELLSRV Server startup complete

[RS] Started Service CELLSRV with pid 5397

Fri Aug 21 10:00:57 2015

Info: Assigning flash CD FD_00_stocell1 to group#1

Info: Assigning flash CD FD_01_stocell1 to group#2

Fri Aug 21 10:00:57 2015

Info: Assigning flash CD FD_02_stocell1 to group#3

Info: Assigning flash CD FD_03_stocell1 to group#0

Info: Assigning flash CD FD_04_stocell1 to group#1

Info: Assigning flash CD FD_05_stocell1 to group#2

   

-With stocell1 complete, clone it to build out the rest of the Exadata configuration.

-This stocell1 image can serve as the base for various layouts (one DB node + two cells, two DB nodes + one cell (RAC), and so on),

so it is worth backing it up with OVM export.

   

CellCLI commands

1. Check Cell Status

cellcli -e list cell

cellcli -e list cell detail

   

2. Physical Disk Information

cellcli -e list physicaldisk

cellcli -e list physicaldisk E15SBS detail

   

3. LUN Disks Detail

cellcli -e list lun

cellcli -e list lun 0_0 detail 

cellcli -e list lun 5_3 detail

   

4. Cell Disks Report

cellcli -e list celldisk

cellcli -e list celldisk CD_08_exadatalcel10 detail

cellcli -e list celldisk attributes name, devicePartition where size>200g;

cellcli -e list celldisk attributes name,status,size

   

5. Grid Disk Knowledge

cellcli -e list griddisk

cellcli -e list celldisk CD_02_exadatalcel10 detail

cellcli -e list griddisk DATA_DMORL_CD_02_exadatalcel10 detail

cellcli -e list griddisk DBFS_DG_CD_02_exadatalcel10 detail

cellcli -e list griddisk RECO_DMORL_CD_02_exadatalcel10 detail

   

6. Display Exadata Alerts

cellcli -e list alerthistory

cellcli -e list alerthistory 8_1 detail

cellcli -e  list alerthistory where severity like '[warning|critical]'

cellcli -e  list alertdefinition detail

   

7. Restart Cell Services

cellcli -e list cell detail

cellcli -e alter cell restart services rs

cellcli -e alter cell restart services ms

cellcli -e alter cell restart services cellsrv

cellcli -e alter cell restart services all

cellcli -e alter cell shutdown services rs

cellcli -e alter cell shutdown services ms

cellcli -e alter cell shutdown services cellsrv

   

Error during configuration (CELL-01518)

/opt/oracle/cell11.2.3.2.0_LINUX.X64_120713/cellsrv/deploy/config/cellinit.ora

   

[celladmin@cell ~]$ cellcli -e create cell cell1 interconnect1=eth1

CELL-01518: Stop CELLSRV. Create Cell cannot continue with CELLSRV running.

   

[celladmin@cell ~]$ cellcli

CellCLI: Release 11.2.3.2.0 - Production on Tue Jan 20 22:14:40 CST 2015

Copyright (c) 2007, 2012, Oracle.  All rights reserved.

Cell Efficiency Ratio: 1

   

CellCLI> alter cell shutdown services cellsrv

   

Stopping CELLSRV services... 

The SHUTDOWN of CELLSRV services was successful.

   

CellCLI> exit

quitting

   

[celladmin@cell ~]$ cellcli -e create cell cell1 interconnect1=eth1

Cell cell1 successfully created

   

   



2. OVM TEST Environment

   

1) Files to Prepare Before Installation

2) BASE Environment

  • The key elements of the OVM environment are CPU, memory, network, and disk. (For the disk layout, see "Storage Cell Disk".)
  • One OVM for the Oracle Grid & DB software and one OVM for the Storage Cell software
  • Stocell1 is built first and then cloned to create Exadata1, so concentrate on configuring stocell1 first. (Important)

       

       

       

    CPU settings

    - Exadata1 & Stocell1

       

    Memory settings (important)

    - Exadata1

     

    - Stocell1

       

    Network settings (important)

    - Exadata1 (3 adapters)

       

       

     

    - Stocell1 (2 adapters)

       

 

   

3) ETC

  • Stocell1 is configured first, then cloned and used as Exadata1. (Important)
  • For memory, about 2GB is enough for stocell1, but Exadata1 needs at least 3GB.
  • Only two network ranges (192.168.56.*, 192.168.122.*) are needed; obtaining an additional internet-facing IP (192.168.0.*) for yum makes RPM installation much easier. (IP assignment may differ from PC to PC.)
  • Disk setup happens after the OS installation, so it is covered in "Storage Cell Disk".

     
