Setting Up Oracle 11g RAC on VMware (2)

6. Configure IP, hosts, and hostname

(1) Configure IP
// The gateway here is determined by the VMware network settings; eth0 connects to the public network, eth1 carries the private interconnect (heartbeat).
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.248.101
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.109.101
PREFIX=24

// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.248.102
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.109.102
PREFIX=24
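The snippets above show only the lines that change. A fuller ifcfg-eth0 for rac1 might look like the following (DEVICE, TYPE, ONBOOT, and BOOTPROTO are assumed typical RHEL/CentOS 6 values, not from the original; keep any HWADDR/UUID lines your system generated), and networking must be restarted afterwards:

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.248.101
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac1 ~]# service network restart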

(2) Configure hostname
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1
GATEWAY=192.168.248.2
NOZEROCONF=yes

// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2
GATEWAY=192.168.248.2
NOZEROCONF=yes
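The HOSTNAME in /etc/sysconfig/network only takes effect at the next boot; to apply it to the running system immediately (assuming RHEL/CentOS 6), set it by hand as well:

[root@rac1 ~]# hostname rac1
[root@rac2 ~]# hostname rac2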

(3) Configure hosts
Add the following to /etc/hosts on both rac1 and rac2:
[root@rac1 ~]# vi /etc/hosts
192.168.248.101 rac1
192.168.248.201 rac1-vip
192.168.109.101 rac1-priv

192.168.248.102 rac2
192.168.248.202 rac2-vip
192.168.109.102 rac2-priv

192.168.248.110 scan-ip
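As a quick sanity check (my suggestion, not part of the original steps), confirm from each node that the public and private names resolve and answer. The VIPs and scan-ip will not respond until Grid Infrastructure is running, so only ping the fixed addresses here:

[root@rac1 ~]# ping -c 2 rac2
[root@rac1 ~]# ping -c 2 rac2-priv
[root@rac2 ~]# ping -c 2 rac1
[root@rac2 ~]# ping -c 2 rac1-priv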

7. Configure environment variables for the grid and oracle users

ORACLE_SID must be adjusted per node.
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1        # on rac2 use: export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

Note that ORACLE_UNQNAME is the database name: when the database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the name of the local instance.

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1        # on rac2 use: export ORACLE_SID=orcl2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

Run source .bash_profile (as each user, on each node) to make the settings take effect.
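To confirm the profiles loaded correctly (a suggested check, not in the original), print the key variables for each user:

[grid@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME
+ASM1 /u01/app/11.2.0/grid
[oracle@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME
orcl1 /u01/app/oracle/product/11.2.0/db_1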

8. Configure SSH mutual trust for the oracle user

This is a crucial step. Although the official documentation states that the OUI can configure SSH automatically while installing GI and RAC, setting up the trust manually is still preferable so that CVU can be used to check the configuration before installation.

// generate keys as oracle on BOTH nodes (leave the passphrase empty):
[oracle@rac1 ~]$ ssh-keygen -t rsa
[oracle@rac1 ~]$ ssh-keygen -t dsa
// collect all four public keys into one authorized_keys file on rac1:
[oracle@rac1 ~]$ cd ~/.ssh
[oracle@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
// copy the file to rac2 and fix its permissions on both nodes:
[oracle@rac1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[oracle@rac1 .ssh]$ chmod 600 authorized_keys
// verify passwordless access in every direction (run on both nodes):
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

Note: leave the passphrase empty when generating the keys, set the authorized_keys file permission to 600, and make sure each node has connected to the other over ssh at least once (to accept the host keys).
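To run the verification in one pass (a convenience sketch, not from the original), a small loop on each node does the job; every call must return a date without prompting for a password:

[oracle@rac1 ~]$ for h in rac1 rac2 rac1-priv rac2-priv; do ssh $h date; done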

9. Configure raw devices

ASM-managed storage needs raw/block devices; the shared disks were attached to both hosts earlier. There are three ways to configure them: (1) add them with oracleasm; (2) add them via the /etc/udev/rules.d/60-raw.rules config file (binds through udev as character devices); (3) add them with a scripted udev rule (binds as block devices, which is faster than the character method; this is the newest approach and the recommended one).
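Method (1) is not used in this walkthrough, but for reference a minimal oracleasm sketch looks roughly like this (it assumes the oracleasm kernel module and support packages are installed; the volume name VOL1 is a placeholder):

[root@rac1 ~]# oracleasm configure -i        // answer grid / asmadmin for the user and group
[root@rac1 ~]# oracleasm init
[root@rac1 ~]# oracleasm createdisk VOL1 /dev/sdb1
[root@rac1 ~]# oracleasm listdisks
// on the second node, just scan for the disks created on the first:
[root@rac2 ~]# oracleasm scandisks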

Before configuring the raw devices, partition the disks:

[root@rac1 ~]# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
...
Command (m for help): w    // save the changes

Repeat these steps for the other disks; you end up with the following partitions:
[root@rac1 ~]# ls /dev/sd*
/dev/sda   /dev/sda1  /dev/sda2  /dev/sdb   /dev/sdb1  /dev/sdc   /dev/sdc1
/dev/sdd   /dev/sdd1  /dev/sde   /dev/sde1  /dev/sdf   /dev/sdf1
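To avoid repeating the interactive session for every disk, the same single-partition layout can be scripted (a sketch that assumes each shared disk gets one primary partition covering the whole disk; review before running, since it writes partition tables unattended):

[root@rac1 ~]# for d in sdb sdc sdd sde sdf; do echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d; done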

Add the raw devices (method (2)):

[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"
KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"

[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk     162, 0 Apr 13 13:51 rawctl

Note that there must be no spaces around the == operators in the rules, otherwise udev reports an error. The resulting raw devices must be owned by grid:asmadmin.
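As an extra verification (my suggestion, not part of the original steps), the raw driver can be queried directly for its active bindings:

[root@rac1 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 33
/dev/raw/raw3:  bound to major 8, minor 49
/dev/raw/raw4:  bound to major 8, minor 65
/dev/raw/raw5:  bound to major 8, minor 81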

Method (3):

[root@rac1 ~]# for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]

[root@rac1 ~]# ll /dev/*asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 18:52 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 18:52 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 18:52 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 18:52 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 18:52 /dev/asm-diskf

When the disks are added this way, you must set the disk discovery path ("Change Discovery Path") to /dev/*asm* later when creating the ASM disk groups.
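If no /dev/asm-* devices appear on VMware, a common cause is that scsi_id returns an empty ID for virtual disks, so the RESULT match in the rule never succeeds. Check one disk directly; if it prints nothing, enabling disk UUIDs in the VM usually helps (disk.EnableUUID = "TRUE" in the .vmx file; verify against your VMware version):

[root@rac1 ~]# /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
// should print a distinct, non-empty ID for each of sdb through sdf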

10. Configure SSH mutual trust for the grid user

The procedure is the same as for the oracle user, but run as grid on both nodes:

[root@rac1 ~]# su - grid
// generate keys as grid on BOTH nodes:
[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa
[grid@rac1 ~]$ cd ~/.ssh
[grid@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
[grid@rac1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[grid@rac1 .ssh]$ chmod 600 authorized_keys

11. Mount the folder containing the installation software

Here the Windows host shares a folder, and the virtual machines simply mount it:
mkdir -p /home/grid/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/grid/db

mkdir -p /home/oracle/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/oracle/db
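These CIFS mounts do not survive a reboot. If persistence is wanted (an optional convenience, not in the original steps), matching /etc/fstab entries can be added on each node:

//192.168.248.1/DB  /home/grid/db    cifs  username=share,password=123456  0 0
//192.168.248.1/DB  /home/oracle/db  cifs  username=share,password=123456  0 0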

12. Install cvuqdisk for Linux

Install cvuqdisk on both Oracle RAC nodes; otherwise the Cluster Verification Utility (CVU) cannot discover the shared disks, and when it runs (manually, or automatically at the end of the Oracle Grid Infrastructure installation) it reports the error "Package cvuqdisk not installed".
Be sure to use the cvuqdisk RPM that matches your hardware architecture (x86_64 or i386).
The cvuqdisk RPM is located in the rpm directory on the grid installation media.
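A minimal installation sketch follows Oracle's documented procedure of exporting CVUQDISK_GRP (usually the oinstall group) before installing the RPM; the version string below is an example, so use whatever ships on your media:

[root@rac1 ~]# cd /home/grid/db/grid/rpm
[root@rac1 rpm]# export CVUQDISK_GRP=oinstall
[root@rac1 rpm]# rpm -iv cvuqdisk-1.0.9-1.rpm
// repeat on rac2 (copy the RPM over first if the share is not mounted there)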

13. Manually run CVU to verify the Oracle Clusterware requirements (checks all nodes)

On rac1, change to the grid software directory and run runcluvfy.sh:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd db/grid/
[grid@rac1 grid]$ ls
doc  install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

Review the CVU report and fix any errors.
Here every other check performed by CVU came back "passed"; only the following error appeared:

Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  failed
  rac1                                  failed
PRVF-5637 : DNS response time could not be checked on following nodes: rac2,rac1

File "/etc/resolv.conf" is not consistent across nodes

This error occurs because no DNS is configured, but it does not affect the installation. A resolv.conf warning will also appear later; since we use a static scan-ip, it can be ignored.
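If you would rather silence the consistency warning as well (optional, and my suggestion rather than part of the original), making /etc/resolv.conf identical on both nodes is enough:

[root@rac1 ~]# scp /etc/resolv.conf rac2:/etc/resolv.conf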
