Date: 2011-01-25 22:55:00  Source: reprinted from the web

Installing Oracle Clusterware 11g on Oracle VM: Installation and Configuration


  Many users want to learn Oracle RAC but lack the hardware to install and explore it. Here we use Oracle VM to perform the installation on Xen virtual machines.

  Oracle VM was officially released on November 12, 2007; the latest version at the time of writing is 2.1.1. It is a virtualization product based on the open-source Xen hypervisor and supports both Oracle and non-Oracle applications. The related resources can be downloaded free of charge from OTN. With OVM, users can quickly create virtual machines and virtual disks in several ways.

  1 Creating the virtual machines

  We create two virtual machines to serve as the two cluster nodes.

  · Create the virtual machines RAC1_13 and RAC2_13 from an Oracle Virtual Machine Template.

  · Each virtual machine needs at least 1 GB of memory.

  · Each machine should have two virtual NICs, as shown in the figure below:

  · The OS version on the RAC node VMs must be identical; here we use Oracle Enterprise Linux Release 4 Update 5 on both.

  · When creation is complete, "Power On" all nodes.

  2 Preparing to install Clusterware

  2.1 Checking the hardware (on all nodes)

  The system must meet at least the following hardware requirements:

  · 1 GB RAM

  # grep MemTotal /proc/meminfo

  · 1.5 GB swap

  # grep SwapTotal /proc/meminfo

  · More than 400 MB free in /tmp

  # df -k /tmp

  · 650 MB of disk space for the Oracle Clusterware home

  · 1 GB of disk space for the Oracle Clusterware files

  If redundancy is required, additional partitions are needed.

  · At least 4 GB of disk space for the Oracle Database home

  · If the virtual machine runs short of disk space, it can be extended by adding virtual disks.
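The checks above can be combined into one small script. This is a minimal sketch, assuming a Linux /proc/meminfo layout; the thresholds are the figures from the text, and `kb_value` is a hypothetical helper, not part of any Oracle tooling.

```shell
#!/bin/sh
# Sketch: check RAM, swap and free /tmp against the minimums in the text.

kb_value() {
    # Print the value (in kB) of a /proc/meminfo field, e.g. MemTotal.
    awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo
}

mem_kb=$(kb_value MemTotal)
swap_kb=$(kb_value SwapTotal)
tmp_kb=$(df -kP /tmp | awk 'NR==2 {print $4}')   # free space in /tmp, kB

[ "$mem_kb"  -ge 1048576 ] || echo "WARNING: less than 1 GB RAM ($mem_kb kB)"
[ "$swap_kb" -ge 1572864 ] || echo "WARNING: less than 1.5 GB swap ($swap_kb kB)"
[ "$tmp_kb"  -ge 409600 ]  || echo "WARNING: less than 400 MB free in /tmp ($tmp_kb kB)"
```

Run it as any user on each node; it prints a warning for each requirement that is not met.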

  2.2 Checking and configuring the software environment (on all nodes)

  Check whether the following packages are installed:

  binutils-2.15.92.0.2-18

  elfutils-libelf-0.97-5

  elfutils-libelf-devel-0.97-5

  glibc-2.3.4-2.19

  glibc-common-2.3.4-2.19

  glibc-devel-2.3.4-2.19

  gcc-3.4.5-2

  gcc-c++-3.4.5-2

  libaio-devel-0.3.105-2

  libaio-0.3.105-2

  libgcc-3.4.5

  libstdc++-3.4.5-2

  libstdc++-devel-3.4.5-2

  make-3.80-5

  A VM created from a template may not have all of the required packages installed.

  Before installing, check against the Oracle documentation that every required package is present.
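Rather than querying each package by hand, the list can be checked in a loop. A sketch with a hypothetical `check_pkgs` helper: on the nodes you would pass `rpm -q` as the query command. Version suffixes are omitted, so this confirms presence only, not versions.

```shell
#!/bin/sh
# Sketch: report which of a list of packages a query command cannot find.

check_pkgs() {
    # $1 = query command (e.g. "rpm -q"); remaining args = package names.
    qcmd=$1; shift
    missing=""
    for pkg in "$@"; do
        $qcmd "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
    done
    echo "$missing"
}

# On a RAC node one would run:
#   check_pkgs "rpm -q" binutils elfutils-libelf elfutils-libelf-devel \
#       glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio-devel \
#       libgcc libstdc++ libstdc++-devel make
```

An empty result means everything in the list was found.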

  2.3 Checking and configuring the network (on all nodes)

  RAC1_13 eth0 10.182.108.86 eth1 192.168.0.11

  RAC2_13 eth0 10.182.108.88 eth1 192.168.0.12

  · Edit /etc/hosts on each node:

  127.0.0.1 localhost.localdomain localhost

  10.182.108.86 rac1_13.cn.oracle.com rac1_13

  10.182.108.87 rac1_13-vip.cn.oracle.com rac1_13-vip

  192.168.0.11 rac1_13-priv.cn.oracle.com rac1_13-priv

  192.168.0.12 rac2_13-priv.cn.oracle.com rac2_13-priv

  10.182.108.88 rac2_13.cn.oracle.com rac2_13

  10.182.108.89 rac2_13-vip.cn.oracle.com rac2_13-vip

  · Set each node's hostname:

  vi /etc/sysconfig/network

  Set the hostnames to RAC1_13 and RAC2_13 respectively.
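A quick sanity check that /etc/hosts contains every name the cluster needs. `missing_hosts` is a hypothetical helper; pass it the hosts file to scan.

```shell
#!/bin/sh
# Sketch: list required host aliases that are absent from a hosts file.

required="rac1_13 rac1_13-vip rac1_13-priv rac2_13 rac2_13-vip rac2_13-priv"

missing_hosts() {
    # $1 = path to a hosts-format file; prints each required alias not found.
    for h in $required; do
        grep -qw "$h" "$1" || echo "$h"
    done
}

# On a node: missing_hosts /etc/hosts   (no output means all names present)
```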

  2.4 Configuring kernel parameters (on all nodes)

  Edit /etc/sysctl.conf:

  kernel.core_uses_pid = 1

  fs.file-max=327679

  kernel.msgmni=2878

  kernel.msgmax=8192

  kernel.msgmnb=65536

  kernel.sem=250 32000 100 142

  kernel.shmmni=4096

  kernel.shmall=3279547

  kernel.sysrq=1

  net.core.rmem_default=262144

  net.core.rmem_max=2097152

  net.core.wmem_default=262144

  net.core.wmem_max=262144

  fs.aio-max-nr=3145728

  net.ipv4.ip_local_port_range=1024 65000

  vm.lower_zone_protection=100

  kernel.shmmax=536934400
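The edited values take effect after running `sysctl -p` as root (or after a reboot). A hedged sketch for spot-checking a parameter through /proc; `check_param` is a hypothetical helper.

```shell
#!/bin/sh
# Sketch: compare a kernel parameter's current value against the target.

check_param() {
    # $1 = /proc/sys path, $2 = expected value; warn on mismatch.
    actual=$(cat "$1" 2>/dev/null)
    [ "$actual" = "$2" ] || echo "MISMATCH: $1 is '$actual', want '$2'"
}

# After `sysctl -p`, this should print nothing:
check_param /proc/sys/kernel/shmmax 536934400
```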

  2.5 Creating the installation user and groups (on all nodes)

  First check whether the oinstall and dba groups and the oracle user already exist:

  # id oracle

  If not, create them:

  # /usr/sbin/groupadd -g 501 oinstall

  # /usr/sbin/groupadd -g 502 dba

  # /usr/sbin/useradd -g oinstall -G dba oracle

  在每个节点上成立.ssh目录并生成RSA Key

  1) 以oracle用户登录

  2) 查抄在在/home/oracle/下能否已有.ssh目录

  假如没有.ssh目录,请成立该目录

  mkdir ~/.ssh

  成立后改正目录权限

  [oracle@rac1_13 ~]$ chmod 700 ~/.ssh

  3) 生成rsa key

  [oracle@rac1_13 ~]$ /usr/bin/ssh-keygen -t rsa

  Generating public/private rsa key pair.

  Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

  Enter passphrase (empty for no passphrase):

  Enter same passphrase again:

  Your identification has been saved in /home/oracle/.ssh/id_rsa.

  Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

  The key fingerprint is:

  3f:d2:e4:a3:ee:a1:58:e5:73:92:39:0d:8e:3f:9b:11 oracle@rac1_13

  4) 在每个节点上反复以上步骤

  Append every node's RSA key to the authorized_keys file:

  1) On node rac1_13, append its RSA key to authorized_keys:

  [oracle@rac1_13 ~]$ cd .ssh

  [oracle@rac1_13 .ssh]$ cat id_rsa.pub >> authorized_keys

  [oracle@rac1_13 .ssh]$ ls

  authorized_keys id_rsa id_rsa.pub

  2) Copy authorized_keys from node rac1_13 to node rac2_13:

  [oracle@rac1_13 .ssh]$ scp authorized_keys rac2_13:/home/oracle/.ssh/

  The authenticity of host 'rac2_13 (10.182.108.88)' can't be established.

  RSA key fingerprint is e6:dc:07:c3:d5:2a:45:43:66:72:d3:44:17:4d:54:42.

  Are you sure you want to continue connecting (yes/no)? yes

  Warning: Permanently added 'rac2_13,10.182.108.88' (RSA) to the list of known hosts.

  oracle@rac2_13's password:

  authorized_keys 100% 224 0.2KB/s 00:00

  3) On node rac2_13, append that node's RSA key to authorized_keys as well:

  [oracle@rac2_13 .ssh]$ cat id_rsa.pub >> authorized_keys

  4) Once every node's RSA key has been appended, copy the authorized_keys file to every node.
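Step 4 can be scripted. A sketch with a hypothetical `push_keys` helper that takes the copy command as its first argument; real use would be `push_keys scp rac1_13 rac2_13` (each node prompts for the oracle password once).

```shell
#!/bin/sh
# Sketch: copy ~/.ssh/authorized_keys to the same path on a list of nodes.

push_keys() {
    # $1 = copy command (scp in real use); remaining args = target nodes.
    copy=$1; shift
    for n in "$@"; do
        $copy ~/.ssh/authorized_keys "$n:/home/oracle/.ssh/authorized_keys"
    done
}

# Real use, as oracle, after all keys are gathered on one node:
#   push_keys scp rac1_13 rac2_13
```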

  Enable SSH user equivalence on the nodes:

  1) Run ssh <hostname> date on each node:

  [oracle@rac1_13 .ssh]$ ssh rac1_13 date

  The authenticity of host 'rac1_13 (10.182.108.86)' can't be established.

  RSA key fingerprint is e6:dc:07:c3:d5:2a:45:43:66:72:d3:44:17:4d:54:42.

  Are you sure you want to continue connecting (yes/no) yes

  Warning: Permanently added 'rac1_13,10.182.108.86' (RSA) to the list of known hosts.

  Enter passphrase for key '/home/oracle/.ssh/id_rsa':

  Sun Apr 20 23:31:06 EDT 2008

  [oracle@rac1_13 .ssh]$ ssh rac2_13 date

  …

  Repeat the steps above on node rac2_13.

  2) On each node, start the SSH agent and load the keys into memory:

  [oracle@rac1_13 .ssh]$ exec /usr/bin/ssh-agent $SHELL

  [oracle@rac1_13 .ssh]$ /usr/bin/ssh-add

  [oracle@rac2_13 ~]$ exec /usr/bin/ssh-agent $SHELL

  [oracle@rac2_13 ~]$ /usr/bin/ssh-add

  · Verify SSH:

  [oracle@rac1_13 .ssh]$ ssh rac1_13 date

  Sun Apr 20 23:40:04 EDT 2008

  [oracle@rac1_13 .ssh]$ ssh rac2_13 date

  Sun Apr 20 23:40:09 EDT 2008

  [oracle@rac1_13 .ssh]$ ssh rac2_13-priv date

  Sun Apr 20 23:41:20 EDT 2008

  …

  This completes the SSH trusted-access configuration.

  2.6.2 RSH

  · Check whether the packages required for rsh are installed:

  [root@rac1_13 rpm]# rpm -q rsh rsh-server

  rsh-0.17-25.4

  rsh-server-0.17-25.4

  Confirm that SELinux is disabled by running system-config-securitylevel.

  Edit /etc/xinetd.d/rsh and set the disable attribute to no.

  Run the following commands to reload xinetd:

  [root@rac1_13 rpm]# chkconfig rsh on

  [root@rac1_13 rpm]# chkconfig rlogin on

  [root@rac1_13 rpm]# service xinetd reload

  Reloading configuration: [ OK ]

  Create the /etc/hosts.equiv file and add the trusted-node entries:

  [root@rac1_13 rpm]# more /etc/hosts.equiv

  +rac1_13 oracle

  +rac1_13-priv oracle

  +rac2_13 oracle

  +rac2_13-priv oracle

  Set the ownership and permissions of /etc/hosts.equiv:

  [root@rac1_13 rpm]# chown root:root /etc/hosts.equiv

  [root@rac1_13 rpm]# chmod 775 /etc/hosts.equiv

  Adjust the rsh path (the Kerberos rsh shadows the standard one):

  [root@rac1_13 rpm]# which rsh

  /usr/kerberos/bin/rsh

  [root@rac1_13 rpm]# cd /usr/kerberos/bin

  [root@rac1_13 bin]# mv rsh rsh.original

  [root@rac1_13 bin]# which rsh

  /usr/bin/rsh

  Verify RSH as the oracle user:

  [oracle@rac1_13 ~]$ rsh rac1_13 date

  Wed Apr 16 22:13:32 EDT 2008

  [oracle@rac1_13 ~]$ rsh rac1_13-priv date

  Wed Apr 16 22:13:40 EDT 2008

  [oracle@rac1_13 ~]$ rsh rac2_13 date

  Wed Apr 16 22:13:48 EDT 2008

  [oracle@rac1_13 ~]$ rsh rac2_13-priv date

  Wed Apr 16 22:13:56 EDT 2008

  [oracle@rac2_13 ~]$ rsh rac1_13 date

  Wed Apr 16 22:14:33 EDT 2008

  [oracle@rac2_13 ~]$ rsh rac1_13-priv date

  Wed Apr 16 22:14:41 EDT 2008

  [oracle@rac2_13 ~]$ rsh rac2_13 date

  Wed Apr 16 22:14:47 EDT 2008

  [oracle@rac2_13 ~]$ rsh rac2_13-priv date

  Wed Apr 16 22:14:54 EDT 2008

  2.7 Configuring the user environment (on all nodes)

  root user

  Edit /etc/bashrc and add the following:

  if [ -t 0 ]; then

  stty intr ^C

  fi

  oracle user environment

  Edit /etc/security/limits.conf and add:

  oracle soft nproc 2047

  oracle hard nproc 16384

  oracle soft nofile 1024

  oracle hard nofile 65536

  Edit /etc/pam.d/login and add:

  session required pam_limits.so

  Edit /etc/profile and add:

  if [ $USER = "oracle" ]; then

  if [ $SHELL = "/bin/ksh" ]; then

  ulimit -u 16384

  ulimit -n 65536

  else

  ulimit -u 16384 -n 65536

  fi

  umask 022

  fi
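After the oracle user logs in again, the new limits should be in effect; they can be spot-checked with ulimit. A minimal sketch; the values mentioned in the comments assume the limits.conf entries above are active on the node.

```shell
#!/bin/sh
# Sketch: report the current per-shell limits for the logged-in user.
# With the limits.conf entries above active, an oracle login shell
# should report nofile 65536 (hard) and nproc 16384.

open_files=$(ulimit -n)
max_procs=$(ulimit -u)
echo "open files: $open_files"
echo "max user processes: $max_procs"
```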

  2.8 Setting up NFS

  We plan to put the Clusterware and RAC database files on NFS directories.

  NFS server-side setup

  1) 10.182.108.27 serves as the NFS server.

  2) Create the shared directories on the NFS server's local disk:

  /crs_13

  /racdb_13

  3) Edit /etc/exports:

  /crs_13 10.182.108.0/255.255.255.0(rw,sync,no_root_squash)

  /racdb_13 10.182.108.0/255.255.255.0(rw,sync,no_root_squash)
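It is worth confirming that /etc/exports really lists both shares before reloading the export table. `check_exports` is a hypothetical helper; the reload itself is done on the server with `exportfs -ra`.

```shell
#!/bin/sh
# Sketch: warn about directories missing from an exports file.

check_exports() {
    # $1 = exports file; remaining args = directories that must be exported.
    f=$1; shift
    for d in "$@"; do
        grep -q "^$d[[:space:]]" "$f" || echo "NOT EXPORTED: $d"
    done
}

# On the NFS server (as root):
#   check_exports /etc/exports /crs_13 /racdb_13
#   exportfs -ra      # re-export everything in /etc/exports
```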

  Create the mount directories on the RAC nodes:

  [root@rac1_13 etc]# mkdir /crs_13

  [root@rac1_13 etc]# chown -R root:oinstall /crs_13/

  [root@rac1_13 etc]# chmod -R 775 /crs_13/

  [root@rac1_13 etc]# mkdir /racdb_13

  [root@rac1_13 etc]# chown -R oracle:dba /racdb_13/

  [root@rac1_13 etc]# chmod -R 775 /racdb_13/

  [root@rac2_13 ~]# mkdir /crs_13

  [root@rac2_13 ~]# chown -R root:oinstall /crs_13/

  [root@rac2_13 ~]# chmod -R 775 /crs_13/

  [root@rac2_13 ~]# mkdir /racdb_13

  [root@rac2_13 ~]# chown -R oracle:dba /racdb_13/

  [root@rac2_13 ~]# chmod -R 775 /racdb_13/

  Configure the NFS client on the RAC nodes:

  Edit /etc/fstab and add the NFS mounts:

  10.182.108.27:/crs_13 /crs_13 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

  10.182.108.27:/racdb_13 /racdb_13 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
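The options `hard`, `nointr`, and `actimeo=0` matter for Oracle files on NFS, so a small check of the fstab entries can save debugging later. `check_fstab_nfs` is a hypothetical helper, not an Oracle utility.

```shell
#!/bin/sh
# Sketch: verify an NFS fstab entry exists and carries the critical options.

check_fstab_nfs() {
    # $1 = fstab file, $2 = mount point; warn if entry or options are missing.
    opts=$(awk -v mp="$2" '$2 == mp && $3 == "nfs" {print $4}' "$1")
    [ -n "$opts" ] || { echo "NO NFS ENTRY for $2"; return; }
    for opt in hard nointr actimeo=0; do
        case ",$opts," in
            *",$opt,"*) ;;                       # option present
            *) echo "$2 missing option: $opt" ;;
        esac
    done
}

# On a node: check_fstab_nfs /etc/fstab /crs_13
#            check_fstab_nfs /etc/fstab /racdb_13
```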

  Restart NFS on the server and on the clients:

  service nfs restart

  Use df -h to check that the NFS directories are mounted:

  [root@rac1_13 etc]# df -h

  Filesystem Size Used Avail Use% Mounted on

  /dev/mapper/VolGroup00-LogVol00

  3.9G 1.6G 2.1G 43% /

  /dev/hda1 99M 8.3M 86M 9% /boot

  none 513M 0 513M 0% /dev/shm

  10.182.108.27:/crs_13

  127G 7.8G 113G 7% /crs_13

  10.182.108.27:/racdb_13

  127G 7.8G 113G 7% /racdb_13

  [root@rac2_13 ~]# df -h

  Filesystem Size Used Avail Use% Mounted on

  /dev/mapper/VolGroup00-LogVol00

  3.9G 1.6G 2.1G 43% /

  /dev/hda1 99M 8.3M 86M 9% /boot

  none 513M 0 513M 0% /dev/shm

  10.182.108.27:/crs_13

  127G 7.8G 113G 7% /crs_13

  10.182.108.27:/racdb_13

  127G 7.8G 113G 7% /racdb_13

  2.9 Adding disks to the virtual machines

  A VM created from the template does not have enough disk space to install Clusterware and the database, so disk space must be added.

  Through the OVM Manager Console, add a disk named data, 5000 MB in size, to each node.

  Once the disk has been created, it can be seen on each node with fdisk -l:

  [root@rac1_13 ~]# fdisk -l

  Disk /dev/hda: 6442 MB, 6442450944 bytes

  255 heads, 63 sectors/track, 783 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot Start End Blocks Id System

  /dev/hda1 * 1 13 104391 83 Linux

  /dev/hda2 14 783 6185025 8e Linux LVM

  Disk /dev/hdb: 5242 MB, 5242880000 bytes

  255 heads, 63 sectors/track, 637 cylinders

  Units = cylinders of 16065 * 512 = 8225280 bytes

  Disk /dev/hdb doesn't contain a valid partition table

  The new disk appears as /dev/hdb, but it does not yet have a partition table.

  Create partition hdb1 on the new disk:

  # fdisk /dev/hdb

  # fdisk -l /dev/hdb

  Disk /dev/hdb: 5242 MB, 5242880000 bytes

  255 heads, 63 sectors/track, 637 cylinders

  Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot Start End Blocks Id System

  /dev/hdb1 1 633 5084541 83 Linux

  Format the /dev/hdb1 partition:

  # mkfs.ext3 -b 1024 -i 8192 /dev/hdb1

  Create the mount point:

  mkdir /data

  Mount the partition:

  mount /dev/hdb1 /data

  Add the mount to /etc/fstab:

  /dev/hdb1 /data ext3 defaults 0 0

  Create the directories needed for the installation:

  mkdir -p /data/crs

  chown -R oracle:oinstall /data/crs

  chmod -R 775 /data/crs

  2.10 Creating the OCR and voting files

  The OCR and voting files must reside on NFS. Creating them on one node is sufficient.

  [root@rac1_13 crs_13]# touch ocrfile

  [root@rac1_13 crs_13]# chown root:oinstall ocrfile

  [root@rac1_13 crs_13]# chmod 775 ocrfile

  [root@rac1_13 crs_13]# touch votingfile

  [root@rac1_13 crs_13]# chown oracle:dba votingfile

  [root@rac1_13 crs_13]# chmod 775 votingfile
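A quick way to confirm the permissions set above. `check_mode` is a hypothetical helper; it relies on GNU `stat -c`, as found on Oracle Enterprise Linux.

```shell
#!/bin/sh
# Sketch: compare a file's octal permission mode against an expected value.

check_mode() {
    # $1 = file, $2 = expected octal mode; warn on mismatch or missing file.
    actual=$(stat -c '%a' "$1" 2>/dev/null)
    [ "$actual" = "$2" ] || echo "$1: mode '$actual', expected $2"
}

# On the node where the files were created:
#   check_mode /crs_13/ocrfile 775
#   check_mode /crs_13/votingfile 775
```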

  3 Installing Clusterware

  Step 1 In the unpacked installation directory, run ./runInstaller

  Step 2 Specify the inventory directory and the install group

  Step 3 Specify the CRS home

  Step 4 Enter the cluster node information, consistent with /etc/hosts

  Step 5 Specify the public and private network interfaces for the nodes

  Step 6 Specify the path of the OCR file

  Step 7 Specify the location of the voting file

  Step 8 Start the installation

  Step 9 Run the scripts on each node in turn (note: all scripts must finish on one node before moving on to the next)

  [root@rac1_13 crs_13]# /data/crs/root.sh

  Checking to see if Oracle CRS stack is already configured

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1_13 rac1_13-priv rac1_13

  node 2: rac2_13 rac2_13-priv rac2_13

  Creating OCR keys for user 'root', privgrp 'root'..

  Operation successful.

  Now formatting voting device: /crs_13/votingfile

  Format of 1 voting devices complete.

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1_13

  Cluster Synchronization Services is inactive on these nodes.

  rac2_13

  Local node checking complete. Run root.sh on remaining nodes to start CRS daemons

  [root@rac2_13 crs]# sh root.sh

  Checking to see if Oracle CRS stack is already configured

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  clscfg: EXISTING configuration version 4 detected.

  clscfg: version 4 is 11 Release 1.

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1_13 rac1_13-priv rac1_13

  node 2: rac2_13 rac2_13-priv rac2_13

  clscfg: Arguments check out successfully.

  NO KEYS WERE WRITTEN. Supply -force parameter to override.

  -force is destructive and will destroy any previous cluster

  configuration.

  Oracle Cluster Registry for cluster has already been initialized

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1_13

  rac2_13

  Cluster Synchronization Services is active on all the nodes.

  Waiting for the Oracle CRSD and EVMD to start

  Oracle CRS stack installed and running under init(1M)

  Running vipca(silent) for configuring nodeapps

  Creating VIP application resource on (2) nodes...

  Creating GSD application resource on (2) nodes...

  Creating ONS application resource on (2) nodes...

  Starting VIP application resource on (2) nodes...

  Starting GSD application resource on (2) nodes...

  Starting ONS application resource on (2) nodes...

  Done.

  After the scripts have completed, continue to the next step.

  Step 10 Confirm the configuration

  Step 11 Finish the installation

  3.3 Checking the CRS status

  [root@rac1_13 bin]# ./crs_stat -t

  Name Type Target State Host

  ------------------------------------------------------------

  ora...._13.gsd application ONLINE ONLINE rac1_13

  ora...._13.ons application ONLINE ONLINE rac1_13

  ora...._13.vip application ONLINE ONLINE rac1_13

  ora...._13.gsd application ONLINE ONLINE rac2_13

  ora...._13.ons application ONLINE ONLINE rac2_13

  ora...._13.vip application ONLINE ONLINE rac2_13

  [root@rac2_13 bin]# ./crs_stat -t

  Name Type Target State Host

  ------------------------------------------------------------

  ora...._13.gsd application ONLINE ONLINE rac1_13

  ora...._13.ons application ONLINE ONLINE rac1_13

  ora...._13.vip application ONLINE ONLINE rac1_13

  ora...._13.gsd application ONLINE ONLINE rac2_13

  ora...._13.ons application ONLINE ONLINE rac2_13

  ora...._13.vip application ONLINE ONLINE rac2_13

  [root@rac1_13 bin]# ps -ef|grep d.bin

  oracle 20999 20998 0 06:45 00:00:00 /data/crs/bin/evmd.bin

  root 21105 20310 0 06:45 00:00:00 /data/crs/bin/crsd.bin reboot

  oracle 21654 21176 0 06:45 00:00:00 /data/crs/bin/ocssd.bin

  root 26087 5276 0 06:54 pts/0 00:00:00 grep d.bin

  This completes the installation of Oracle Clusterware.

  The installation of Oracle Real Application Clusters and the creation of the database are not covered here.
