Installation Notes: Installing Oracle RAC with Openfiler on ESXi 5.5


Oracle provides a very good guide ("Build Your Own Oracle RAC 11g Cluster on Oracle Linux and iSCSI") for installing two Oracle 11g R2 RAC nodes with Openfiler 2.3 as shared storage.  The guide is based on using cheap hardware to set up a lab environment.

Using VMware lowers the cost of the lab even further.  Since VMware allows VM cloning, I saved time setting up RAC node 2 by copying node 1.  The document below is my log of setting up the environment.
  1. Follow Oracle's instructions, except for the setup of node 2 (we will clone node 1 later).
    1. Create a VM with 3GB RAM and a 40GB hard disk (thin provisioned)
    2. Install Oracle Linux and all the required packages
    3. Change the NIC to the VMXNET adapter type and install VMware Tools.
    4. Reconfigure all network interfaces because of the NIC change (reboot to check whether the new NICs are activated automatically during boot; otherwise, configure them with system-config-network).
    5. For the Openfiler
      1. 1GB RAM, 4GB harddisk for the system
      2. I created another 20GB virtual hard disk for data.
      3. Configure all iSCSI stuff per instruction 
      4. If you cannot list the iSCSI targets from the RAC nodes, try these methods:
        1. Openfiler won't accept connections: check /etc/initiators.deny for an entry denying access from ALL. Just comment it out and save the file. Take care: Openfiler regenerates this file from time to time (reference)
        2. Otherwise, use the second method (see the linked reference).
      5. Openfiler should return only 3 iSCSI targets from the command  iscsiadm -m discovery -t st -p 192.168.2.x.  If iscsiadm adds 6 devices, add the following line to /etc/initiators.deny:
        • ALL 192.168.1.0/24
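The discovery check and the deny rule can be sketched as follows. This is a minimal sketch: the Openfiler storage IP (192.168.2.195) is an assumption from the guide's addressing scheme, and a scratch file stands in for /etc/initiators.deny so the pattern can be tried safely.

```shell
# On a RAC node, discovery against the Openfiler storage-network IP
# (192.168.2.195 is an assumed address; substitute your own) should
# list exactly 3 targets:
#   iscsiadm -m discovery -t st -p 192.168.2.195
#
# If 6 devices show up, the targets are also reachable over the public
# 192.168.1.0/24 LAN. Deny that subnet on the Openfiler side; a scratch
# file stands in for /etc/initiators.deny here:
deny_file=$(mktemp)
echo "ALL 192.168.1.0/24" >> "$deny_file"

# Openfiler regenerates /etc/initiators.deny from time to time, so
# re-check that the entry is still present after storage changes:
grep "ALL 192.168.1.0/24" "$deny_file"
```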
    6. Follow all steps in the instruction until step 15.
    7. There is a bug in the  /home/grid/.bash_profile provided.  Edit the below lines:
      • Comment out lines 24 and 27
    8. There is a bug in the  /home/oracle/.bash_profile provided.  Edit the below lines:
      • Comment out lines 23 and 26
    9. Skip step 16.
    10. In order to pass runcluvfy.sh, we need to add "dba" as a supplementary group of user "grid".
    11. In order to pass runcluvfy.sh, we need to enable ntpd.
      1. chkconfig ntpd on
      2. Edit /etc/sysconfig/ntpd and add the option "-x " before "-u".  Details here.
      3. service ntpd start
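The ntpd steps above can be sketched as below. The edit is made against a scratch copy of /etc/sysconfig/ntpd (whose stock options line is assumed here); the real edit and the chkconfig/service commands need root on each node.

```shell
# Scratch copy of /etc/sysconfig/ntpd with the assumed stock
# Oracle Linux options line:
conf=$(mktemp)
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"' > "$conf"

# Insert "-x " before "-u" so ntpd slews rather than steps the clock,
# which is what cluvfy's clock-synchronization check requires:
sed -i 's/-u/-x -u/' "$conf"
cat "$conf"

# On the real node (as root):
#   chkconfig ntpd on
#   sed -i 's/-u/-x -u/' /etc/sysconfig/ntpd
#   service ntpd start
```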
    12. Follow all steps in the instruction until step 18 (Install and Configure ASMLib 2.0)
  2. Shut down racnode1 and clone racnode1 to racnode2
    1. Clone racnode1 to racnode2.  You may use VMware Converter Standalone 5.5, which supports ESXi-to-ESXi VM cloning.
    2. We need to modify the below items on the clone
    3. Hostname and IP addresses (via system-config-network inside the GUI)
      1. Edit IP addresses of both network interface
      2. Delete old NIC profile
      3. Update the hostname
      4. Make sure the ethernet NICs use the same name (i.e. eth0 and eth1) across both nodes.
      5. A new unique name for the iSCSI initiator (the iSCSI client).  Edit the file:
        •  nano /etc/iscsi/initiatorname.iscsi
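A sketch of that edit, using a scratch file in place of /etc/iscsi/initiatorname.iscsi; the IQN shown is a made-up example, so follow whatever naming scheme your Openfiler targets expect.

```shell
# Stand-in for /etc/iscsi/initiatorname.iscsi on racnode2:
initiator_file=$(mktemp)

# Give the clone its own IQN so it no longer collides with the
# initiator name inherited from racnode1 (hypothetical example name):
echo "InitiatorName=iqn.1994-05.com.redhat:racnode2" > "$initiator_file"
cat "$initiator_file"
```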
    4. Edit line 29 of /home/grid/.bash_profile
      1. Change +ASM1 to +ASM2
    5. Edit line 28 of /home/oracle/.bash_profile
      1. Change racdb1 to racdb2
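Both profile edits can be scripted in one pass. The sketch below works on scratch copies and assumes the ORACLE_SID lines look like those in the guide's profiles; on racnode2 the real files are /home/grid/.bash_profile and /home/oracle/.bash_profile.

```shell
# Scratch stand-ins for the grid and oracle users' .bash_profile files,
# seeded with the assumed node-1 ORACLE_SID lines:
grid_profile=$(mktemp)
oracle_profile=$(mktemp)
echo 'export ORACLE_SID=+ASM1'  > "$grid_profile"
echo 'export ORACLE_SID=racdb1' > "$oracle_profile"

# The clone must use the node-2 instance names:
sed -i 's/+ASM1/+ASM2/'   "$grid_profile"
sed -i 's/racdb1/racdb2/' "$oracle_profile"

grep ORACLE_SID "$grid_profile" "$oracle_profile"
```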
  3. Using Xming to login to remote system
    1. In my case, I received an "Xlib: connection to "[my_windows_ip]:0.0" refused by server" error when trying to run xterm.
    2. For the connection to succeed, start Xming with the below command:
      • "C:\Program Files\Xming\Xming.exe" :0 -clipboard -multiwindow -ac
    3. The -ac flag disables access control, so Xming accepts connections from any IP.
  4. Generate keys for SSH passwordless login
    1. At racnode2, delete the ~/.ssh directory
    2. re-run /usr/bin/ssh-keygen -t dsa
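A sketch of the key regeneration, run against a temporary directory standing in for the user's home on racnode2 (do this for both the grid and oracle users). Note that recent OpenSSH builds reject DSA key generation, in which case RSA is the usual substitute; the fallback below is an assumption, not part of the original guide.

```shell
# Temporary stand-in for the user's home directory on racnode2:
home=$(mktemp -d)

# Remove the key pair inherited from racnode1, then generate a fresh
# one (the guide uses DSA; fall back to RSA where DSA is disabled):
rm -rf "$home/.ssh"
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
ssh-keygen -t dsa -N "" -q -f "$home/.ssh/id_dsa" 2>/dev/null \
  || ssh-keygen -t rsa -N "" -q -f "$home/.ssh/id_dsa"
ls "$home/.ssh"
```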
  5. For the DB creation
    1. Before DB creation, we need to reset the ASMSNMP password:
    2. su - grid
    3. sqlplus / as sysasm
    4. alter user asmsnmp identified by password;
  6. If you fail to properly set up the 3 DNS A records for the cluster SCAN name (racnode-cluster-scan) during the OUI installation, you will end up with only one SCAN listener.  To add back the other two SCAN listeners, we can follow the instructions here
