Oracle 19c RAC on VirtualBox – Silent Installation

The extra requirements include:

  • at least 2 separate networks: one public network (for client connections) and one private network (for the cluster interconnect)

VirtualBox provides NAT Networks, appropriate for both network types in a sandbox environment. I decided to use this configuration because it is “portable”: the virtual machines can communicate with each other on the host they run on, and they can access the external network (for example, to download package updates), but the configuration does not depend on the actual network the host machine is connected to.

  • shared storage, attached to all nodes, used for the ASM disks

A virtual disk in VirtualBox can easily be shared between multiple virtual machines.

  • at least 1 virtual IP address per node on the public network, used for the virtual IP addresses in the cluster

This is nothing special.

  • a SCAN name that resolves to at least 1, but typically 3, virtual IPs on the public network, used as the SCAN IP addresses

This requires* a DNS server. Dnsmasq is a lightweight DNS server that can be used for this task.
(*Resolving a single name to 1 SCAN IP can be done by using /etc/hosts; DNS is not required for that.)
In this walkthrough I will use 3 SCAN IPs, resolved by a local dnsmasq service on each node (set up later).

The planned addresses, hostnames and their purpose:

172.16.1.101    rac1       # public address of the first node
172.16.1.102    rac2       # public address of the second node
172.16.1.103    rac1-vip   # virtual address of the first node
172.16.1.104    rac2-vip   # virtual address of the second node
172.16.1.105    rac-scan   # SCAN address of the cluster
172.16.1.106    rac-scan   # SCAN address of the cluster
172.16.1.107    rac-scan   # SCAN address of the cluster
10.0.1.101      rac1-priv  # private address of the first node
10.0.1.102      rac2-priv  # private address of the second node

The specific software used for this setup:

  • Windows 10 Professional 64-bit on the host PC
  • VirtualBox 6.1.4
  • Oracle Linux 7.7 64-bit in the virtual machines
  • Oracle Grid Infrastructure and Database version 19.3 for Linux x86-64

The first step is creating the virtual networks for VirtualBox (on Windows, the default location of VBoxManage is %ProgramFiles%\Oracle\VirtualBox).

VBoxManage natnetwork add --netname rac_public --enable --network 172.16.1.0/24
VBoxManage natnetwork add --netname rac_private --enable --network 10.0.1.0/24

Obviously, this can be done in the GUI as well, but it is much simpler to document it this way.
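
The result can be verified from the command line as well:

VBoxManage natnetwork list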

Next, create the virtual machines.

VBoxManage createvm --name rac1 --ostype Oracle_64 --register --groups "/rac" --basefolder "C:\Users\balaz\VirtualBox VMs"
 
Virtual machine 'rac1' is created and registered.
UUID: e69baac6-a0b4-4e7e-8ce7-ff3ada7879f1
Settings file: 'C:\Users\balaz\VirtualBox VMs\rac\rac1\rac1.vbox'
 
VBoxManage createvm --name rac2 --ostype Oracle_64 --register --groups "/rac" --basefolder "C:\Users\balaz\VirtualBox VMs"
 
Virtual machine 'rac2' is created and registered.
UUID: b7a000cd-f92e-45aa-ae01-ceac567f2549
Settings file: 'C:\Users\balaz\VirtualBox VMs\rac\rac2\rac2.vbox'

With the above, a group called “rac” was created, and the virtual machines were created in “C:\Users\balaz\VirtualBox VMs”, in the group folder “rac”.

In 19c, the minimum memory requirement for Grid Infrastructure is 8 GB. Set the CPU (1 core), memory (10 GB), and network interfaces for the virtual machines:

VBoxManage modifyvm rac1 --cpus 1 --memory 10240 --nic1 natnetwork --nat-network1 rac_public --nic2 natnetwork --nat-network2 rac_private

VBoxManage modifyvm rac2 --cpus 1 --memory 10240 --nic1 natnetwork --nat-network1 rac_public --nic2 natnetwork --nat-network2 rac_private

Add a storage controller to each virtual machine, create and attach a 100 GB virtual disk for each virtual machine, and attach a virtual DVD drive as well (this is where the installer ISO of the operating system will be inserted):

VBoxManage storagectl rac1 --name rac1 --add sata
VBoxManage storagectl rac2 --name rac2 --add sata
 
VBoxManage createmedium --filename "C:\Users\balaz\VirtualBox VMs\rac\rac1\rac1.vdi" --size 102400 --variant Standard
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: dd7c41bc-c21c-463d-b04e-9fa5be294fbd
 
VBoxManage createmedium --filename "C:\Users\balaz\VirtualBox VMs\rac\rac2\rac2.vdi" --size 102400 --variant Standard
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: d0fbfeaf-58f0-41d2-83f9-f2d492339fb0
 
VBoxManage storageattach rac1 --storagectl rac1 --port 0 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac1\rac1.vdi"
VBoxManage storageattach rac2 --storagectl rac2 --port 0 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac2\rac2.vdi"
VBoxManage storageattach rac1 --storagectl rac1 --port 1 --type dvddrive --medium emptydrive
VBoxManage storageattach rac2 --storagectl rac2 --port 1 --type dvddrive --medium emptydrive

Finally, start the virtual machines and install the operating system on them. This is just a generic Linux installation; I will not post all the details here, just some notes:

  • Kdump reserves some memory (enabled by default), but it can be disabled to have more available memory in the virtual machine.
  • A minimal install is perfectly fine, we do not need GUI and other unnecessary packages on a server.
  • At network configuration, enable “Automatically connect to this network when it is available” (unchecked by default).
  • At network configuration, on the IPv4 settings tab, set the method to Manual, and set the addresses as below:
            | rac1 public  | rac2 public  | rac1 private | rac2 private
------------+--------------+--------------+--------------+--------------
Address     | 172.16.1.101 | 172.16.1.102 | 10.0.1.101   | 10.0.1.102
Netmask     | 24           | 24           | 24           | 24
Gateway     | 172.16.1.1   | 172.16.1.1   | empty        | empty
DNS Servers | 172.16.1.1   | 172.16.1.1   | empty        | empty

VirtualBox NAT Networks use the .1 address (*.*.*.1) of the network as the gateway and DNS server.

The minimum required swap is the same amount as the available physical memory, or 16 GB, whichever is lower. Configuring less swap than required results in a warning during the installation, which can be ignored though.
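
Inside the virtual machines, these settings can be quickly sanity-checked after the installation, for example:

# ip route | grep default     # the default route should point to 172.16.1.1
# free -h                     # check the memory and swap sizes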

Once the installation is done, in order to log in to the virtual machines through SSH, port-forwarding rules need to be set on the rac_public NAT Network:

VBoxManage natnetwork modify --netname rac_public --port-forward-4 "SSH - rac1:tcp:[]:60001:[172.16.1.101]:22"

VBoxManage natnetwork modify --netname rac_public --port-forward-4 "SSH - rac2:tcp:[]:60002:[172.16.1.102]:22"

By setting the above, we can log in through SSH to the rac1 and rac2 virtual machines from the host using the addresses 127.0.0.1:60001 and 127.0.0.1:60002.
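
For example, with an OpenSSH client on the host:

ssh -p 60001 root@127.0.0.1
ssh -p 60002 root@127.0.0.1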

Log in, make sure the configuration is correct and the virtual machines can communicate with each other on both interfaces.

rac1:

[root@rac1 ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  4.9G     0  4.9G   0% /dev
tmpfs                     4.9G     0  4.9G   0% /dev/shm
tmpfs                     4.9G  8.5M  4.9G   1% /run
tmpfs                     4.9G     0  4.9G   0% /sys/fs/cgroup
/dev/mapper/ol_rac1-root   89G  1.3G   88G   2% /
/dev/sda1                1014M  168M  847M  17% /boot
tmpfs                     999M     0  999M   0% /run/user/0
[root@rac1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:c4:04:29 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.101/24 brd 172.16.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fec4:429/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:c0:e8:d9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.101/24 brd 10.0.1.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::51a9:c343:16be:8a4d/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@rac1 ~]# ping -c 3 172.16.1.102
PING 172.16.1.102 (172.16.1.102) 56(84) bytes of data.
64 bytes from 172.16.1.102: icmp_seq=1 ttl=64 time=0.286 ms
64 bytes from 172.16.1.102: icmp_seq=2 ttl=64 time=0.177 ms
64 bytes from 172.16.1.102: icmp_seq=3 ttl=64 time=0.164 ms
 
--- 172.16.1.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2072ms
rtt min/avg/max/mdev = 0.164/0.209/0.286/0.054 ms
[root@rac1 ~]# ping -c 3 10.0.1.102
PING 10.0.1.102 (10.0.1.102) 56(84) bytes of data.
64 bytes from 10.0.1.102: icmp_seq=1 ttl=64 time=0.352 ms
64 bytes from 10.0.1.102: icmp_seq=2 ttl=64 time=0.143 ms
64 bytes from 10.0.1.102: icmp_seq=3 ttl=64 time=0.163 ms
 
--- 10.0.1.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2068ms
rtt min/avg/max/mdev = 0.143/0.219/0.352/0.094 ms
[root@rac1 ~]#

rac2:

[root@rac2 ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  4.9G     0  4.9G   0% /dev
tmpfs                     4.9G     0  4.9G   0% /dev/shm
tmpfs                     4.9G  8.5M  4.9G   1% /run
tmpfs                     4.9G     0  4.9G   0% /sys/fs/cgroup
/dev/mapper/ol_rac2-root   89G  1.3G   88G   2% /
/dev/sda1                1014M  168M  847M  17% /boot
tmpfs                     999M     0  999M   0% /run/user/0
[root@rac2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b6:7f:ef brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.102/24 brd 172.16.1.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::3d4a:de87:394e:eb52/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1d:78:05 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.102/24 brd 10.0.1.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::ada0:4000:64e9:d2c7/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@rac2 ~]# ping -c 3 172.16.1.101
PING 172.16.1.101 (172.16.1.101) 56(84) bytes of data.
64 bytes from 172.16.1.101: icmp_seq=1 ttl=64 time=0.166 ms
64 bytes from 172.16.1.101: icmp_seq=2 ttl=64 time=0.175 ms
64 bytes from 172.16.1.101: icmp_seq=3 ttl=64 time=0.161 ms
 
--- 172.16.1.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2032ms
rtt min/avg/max/mdev = 0.161/0.167/0.175/0.012 ms
[root@rac2 ~]# ping -c 3 10.0.1.101
PING 10.0.1.101 (10.0.1.101) 56(84) bytes of data.
64 bytes from 10.0.1.101: icmp_seq=1 ttl=64 time=0.244 ms
64 bytes from 10.0.1.101: icmp_seq=2 ttl=64 time=0.242 ms
64 bytes from 10.0.1.101: icmp_seq=3 ttl=64 time=0.171 ms
 
--- 10.0.1.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2050ms
rtt min/avg/max/mdev = 0.171/0.219/0.244/0.033 ms
[root@rac2 ~]#

Update the packages, then reboot the VMs if needed (in this case, a newer kernel was installed):

yum update -y
reboot

Oracle Database on Linux requires several packages to be installed, and various kernel parameters and resource limits to be set. Fortunately, on Oracle Linux, all of this can be taken care of by installing a single package.

yum install oracle-database-preinstall-19c.x86_64 -y

This installs the required packages, sets the required kernel parameters and resource limits, and creates the oracle user. Set a password for the oracle user:

# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)
# passwd oracle
Changing password for user oracle.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
#
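
The settings made by the preinstall package can be spot-checked, for example (the values come from the package; there is nothing to set manually):

# sysctl fs.aio-max-nr kernel.sem
# su - oracle -c "ulimit -Hn"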

VirtualBox shared folders will be used in this walkthrough to access the downloaded installers inside the VMs. Simply copying the zips with scp (WinSCP) is also fine; in that case, the following steps can be skipped.
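
If copying with scp instead, the port-forward rules created earlier can be reused from the host, for example (with the zips in the current directory):

scp -P 60001 V982063-01.zip V982068-01.zip oracle@127.0.0.1:/tmp/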
To install the VirtualBox Guest Additions, a few additional packages are needed first (gcc, bzip2, and the headers for the running UEK kernel, so that the kernel modules can be built):

yum install gcc bzip2 kernel-uek-devel-$(uname -r) -y

Insert the VirtualBox Guest Additions ISO into the virtual DVD drive of the virtual machines:

VBoxManage storageattach rac1 --storagectl rac1 --port 1 --medium additions
VBoxManage storageattach rac2 --storagectl rac2 --port 1 --medium additions

In the virtual machines, mount the disc and install VirtualBox Guest Additions:

# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
# /mnt/VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 6.1.4 Guest Additions for Linux........
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel
4.14.35-1902.300.11.el7uek.x86_64.
# umount /mnt
#

Stop the firewall and disable its automatic start:

# systemctl stop firewalld
# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Edit /etc/hosts and append the IP addresses and host names:

# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
  
172.16.1.101    rac1       # public address of the first node
172.16.1.102    rac2       # public address of the second node
172.16.1.103    rac1-vip   # virtual address of the first node
172.16.1.104    rac2-vip   # virtual address of the second node
172.16.1.105    rac-scan   # SCAN address of the cluster
172.16.1.106    rac-scan   # SCAN address of the cluster
172.16.1.107    rac-scan   # SCAN address of the cluster
10.0.1.101      rac1-priv  # private address of the first node
10.0.1.102      rac2-priv  # private address of the second node
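
A quick check of how these entries resolve (lookups through /etc/hosts return only the first match):

# getent hosts rac-scan
172.16.1.105    rac-scan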

Now the problem is that even though there are 3 different entries for “rac-scan”, hosts-file-based name resolution will always resolve the name to the first IP address in the list. To simulate proper DNS resolution, install dnsmasq on both virtual machines (if not already installed):

# yum install dnsmasq -y

Next, configure dnsmasq to use the current name resolution settings, and configure the virtual machines to use their local dnsmasq service for name resolution:

# cp /etc/resolv.conf /etc/dnsmasq-resolv.conf
# vi /etc/dnsmasq.conf
  
resolv-file=/etc/dnsmasq-resolv.conf

Start dnsmasq and enable its automatic startup:

# systemctl start dnsmasq
# systemctl enable dnsmasq
Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service.

Configure the virtual machines to use their local dnsmasq service, and test name resolution:

# vi /etc/resolv.conf
nameserver 127.0.0.1
  
# nslookup rac-scan
Server:         127.0.0.1
Address:        127.0.0.1#53
  
Name:   rac-scan
Address: 172.16.1.107
Name:   rac-scan
Address: 172.16.1.105
Name:   rac-scan
Address: 172.16.1.106
  
# nslookup wordpress.com
Server:         127.0.0.1
Address:        127.0.0.1#53
  
Non-authoritative answer:
Name:   wordpress.com
Address: 192.0.78.17
Name:   wordpress.com
Address: 192.0.78.9

This works for now, but with the current configuration, NetworkManager may overwrite /etc/resolv.conf. In order to prevent that, edit the interface configuration files: remove the DNS settings and add the line PEERDNS=no.

# sed -i '/DNS/d' /etc/sysconfig/network-scripts/ifcfg-enp0s3
# sed -i '/DNS/d' /etc/sysconfig/network-scripts/ifcfg-enp0s8
# echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-enp0s3
# echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-enp0s8
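
A quick way to confirm that the change sticks (NetworkManager should now leave the file alone):

# systemctl restart NetworkManager
# cat /etc/resolv.conf
nameserver 127.0.0.1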

Finally, add the shared storage. For this, shut down the virtual machines first, then create the virtual disks with a fixed size (a requirement for using the same VDI in multiple virtual machines at the same time), make them shareable, and attach them to the virtual machines. In 19c, the GIMR (Grid Infrastructure Management Repository) is an optional component, and we will not use it. The disks being added are: 100 GB for the DATA diskgroup and 32 GB for the FRA diskgroup.

VBoxManage createmedium disk --filename "C:\Users\balaz\VirtualBox VMs\rac\rac_DATA1.vdi" --format VDI --variant Fixed --size 102400
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 537130e2-061b-4dc6-9100-9ccff1b6d78f
 
VBoxManage createmedium disk --filename "C:\Users\balaz\VirtualBox VMs\rac\rac_FRA1.vdi" --format VDI --variant Fixed --size 32768
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: ee827a9b-a2b9-4bc6-b6b3-faca94e664f7
 
VBoxManage modifymedium disk "C:\Users\balaz\VirtualBox VMs\rac\rac_DATA1.vdi" --type shareable
VBoxManage modifymedium disk "C:\Users\balaz\VirtualBox VMs\rac\rac_FRA1.vdi" --type shareable
VBoxManage storageattach rac1 --storagectl rac1 --port 2 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac_DATA1.vdi"
VBoxManage storageattach rac1 --storagectl rac1 --port 3 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac_FRA1.vdi"
VBoxManage storageattach rac2 --storagectl rac2 --port 2 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac_DATA1.vdi"
VBoxManage storageattach rac2 --storagectl rac2 --port 3 --type hdd --medium "C:\Users\balaz\VirtualBox VMs\rac\rac_FRA1.vdi"

Make the installer software available to the virtual machines. I have the installers downloaded to I:\Oracle, so I create a shared folder:

VBoxManage sharedfolder add rac1 --name install --hostpath I:\Oracle
VBoxManage sharedfolder add rac2 --name install --hostpath I:\Oracle

After starting the virtual machines, the newly created and attached disks should show up:

# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                8:16   0  100G  0 disk
sr0               11:0    1 56.9M  0 rom
sdc                8:32   0   32G  0 disk
sda                8:0    0  100G  0 disk
├─sda2             8:2    0   99G  0 part
│ ├─ol_rac1-swap 252:1    0   10G  0 lvm  [SWAP]
│ └─ol_rac1-root 252:0    0   89G  0 lvm  /
└─sda1             8:1    0    1G  0 part /boot

Mounting the shared folder:

# mkdir /install
# mount -t vboxsf install /install -o uid=oracle,gid=oinstall
# ls -l /install/Database/19/grid/*zip /install/Database/19/db/*zip
-rwxrwxrwx. 1 oracle oinstall 3059705302 Apr 26  2019 /install/Database/19/db/V982063-01.zip
-rwxrwxrwx. 1 oracle oinstall 2889184573 Apr 26  2019 /install/Database/19/grid/V982068-01.zip

Installing Grid Infrastructure on a cluster requires passwordless SSH to be set up between the nodes for the installation user. So generate the keys (without a passphrase) on both nodes and copy them to the authorized keys:

[oracle@rac1 ~]$ ssh-keygen -t rsa
[oracle@rac1 ~]$ ssh-copy-id rac1
[oracle@rac1 ~]$ ssh-copy-id rac2
  
[oracle@rac2 ~]$ ssh-keygen -t rsa
[oracle@rac2 ~]$ ssh-copy-id rac1
[oracle@rac2 ~]$ ssh-copy-id rac2
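
A quick test that neither node prompts for a password or host key anymore (the installer performs a similar check):

[oracle@rac1 ~]$ ssh rac2 date
[oracle@rac2 ~]$ ssh rac1 date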

Since 12.2, installing the software is as simple as unzipping it to its final location. Create the directory structure and unzip the archive on the first node:

# mkdir /u01
# chown oracle:oinstall /u01
# chmod 775 /u01
  
[oracle@rac1 ~]$ mkdir -p /u01/app/19.0.0/grid
[oracle@rac1 ~]$ unzip -oq /install/Database/19/grid/V982068-01.zip -d /u01/app/19.0.0/grid

There are multiple ways of presenting the disks to ASM. In the 12.2 walkthrough I used ASM Filter Driver, but I prefer to keep it simple, without adding unnecessary layers, so below I use udev. Find the IDs of the disks:

[root@rac1 ~]# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                8:16   0  100G  0 disk
sr0               11:0    1 56.9M  0 rom
sdc                8:32   0   32G  0 disk
sda                8:0    0  100G  0 disk
├─sda2             8:2    0   99G  0 part
│ ├─ol_rac1-swap 252:1    0   10G  0 lvm  [SWAP]
│ └─ol_rac1-root 252:0    0   89G  0 lvm  /
└─sda1             8:1    0    1G  0 part /boot
[root@rac1 ~]# udevadm info --name sdb | grep "ID_SERIAL="
E: ID_SERIAL=VBOX_HARDDISK_VB537130e2-8fd7b6f1
[root@rac1 ~]# udevadm info --name sdc | grep "ID_SERIAL="
E: ID_SERIAL=VBOX_HARDDISK_VBee827a9b-f764e694

Using this, create the necessary udev rules (on both nodes):

# vi /etc/udev/rules.d/99-oracleasm.rules
# cat /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="sd*", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB537130e2-8fd7b6f1", SYMLINK+="asm_data1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd*", ENV{ID_SERIAL}=="VBOX_HARDDISK_VBee827a9b-f764e694", SYMLINK+="asm_fra1", OWNER="oracle", GROUP="dba", MODE="0660"

Next, apply the udev rules (on both nodes):

# udevadm trigger --name sdb
# udevadm trigger --name sdc
# ls -l /dev/asm_*
lrwxrwxrwx. 1 root root 3 Apr  5 17:37 /dev/asm_data1 -> sdb
lrwxrwxrwx. 1 root root 3 Apr  5 17:37 /dev/asm_fra1 -> sdc
# ls -lH /dev/asm_*
brw-rw----. 1 oracle dba 8, 16 Apr  5 17:37 /dev/asm_data1
brw-rw----. 1 oracle dba 8, 32 Apr  5 17:37 /dev/asm_fra1

Additionally, the cvuqdisk package should be installed on both nodes; the rpm is located in the grid home on the first node, so it has to be copied to a temporary location on the second node (see the scp example after the output below):

# export CVUQDISK_GRP=oinstall
# rpm -ivh /u01/app/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
#
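
Copying the rpm to the second node can be done with the passwordless SSH set up earlier, for example:

[oracle@rac1 ~]$ scp /u01/app/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm rac2:/tmp/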

With this, everything is ready for running the prerequisite checks before installation (this is completely optional).

[oracle@rac1 ~]$ /u01/app/19.0.0/grid/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -r 19 -osdba dba -orainv oinstall -asm -asmgrp dba -asmdev /dev/sdb -crshome /u01/app/19.0.0/grid -method root
Enter "ROOT" password:
 
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: rac2:/usr,rac2:/var,rac2:/etc,rac2:/u01/app/19.0.0/grid,rac2:/sbin,rac2:/tmp ...PASSED
Verifying Free Space: rac1:/usr,rac1:/var,rac1:/etc,rac1:/u01/app/19.0.0/grid,rac1:/sbin,rac1:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: kmod-20-21 (x86_64) ...PASSED
Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "10.0.1.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying Device Checks for ASM ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
  Verifying ASM device sharedness check ...
    Verifying Shared Storage Accessibility:/dev/sdb ...PASSED
  Verifying ASM device sharedness check ...PASSED
  Verifying Access Control List check ...PASSED
  Verifying I/O scheduler ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": oracle ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Time offset between nodes ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying Grid Infrastructure home path: /u01/app/19.0.0/grid ...
  Verifying '/u01/app/19.0.0/grid' ...PASSED
Verifying Grid Infrastructure home path: /u01/app/19.0.0/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying RPM Package Manager database ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying DefaultTasksMax parameter ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED
Verifying Systemd login manager IPC parameter ...PASSED
 
Pre-check for cluster services setup was successful.
 
CVU operation performed:      stage -pre crsinst
Date:                         Apr 5, 2020 5:48:13 PM
CVU home:                     /u01/app/19.0.0/grid/
User:                         oracle
[oracle@rac1 ~]$

Before starting the installation, create a response file:

[oracle@rac1 ~]$ cp /u01/app/19.0.0/grid/install/response/gridsetup.rsp .
[oracle@rac1 ~]$ vi gridsetup.rsp

Set the following parameters (in networkInterfaceList, the number after each interface is its usage type: 1 = PUBLIC, 5 = ASM & PRIVATE):

[oracle@rac1 ~]$ grep "=" gridsetup.rsp | grep -v "^#" | grep -v "=$"
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=rac-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.clusterName=rac
oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip
oracle.install.crs.config.networkInterfaceList=enp0s3:172.16.1.0:1,enp0s8:10.0.1.0:5
oracle.install.crs.configureGIMR=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/asm_data1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.asm.configureAFD=false
[oracle@rac1 ~]$

The installer should finish in a few minutes (unfortunately, without “-ignorePrereqFailure” the installer fails at the “RPM Package Manager database” check, because that check needs root privileges):

[oracle@rac1 ~]$ /u01/app/19.0.0/grid/gridSetup.sh -silent -responseFile /home/oracle/gridsetup.rsp -ignorePrereqFailure
Launching Oracle Grid Infrastructure Setup Wizard...
 
[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /tmp/GridSetupActions2020-04-05_06-00-12PM/gridSetupActions2020-04-05_06-00-12PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /tmp/GridSetupActions2020-04-05_06-00-12PM/gridSetupActions2020-04-05_06-00-12PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.0.0/grid/install/response/grid_2020-04-05_06-00-12PM.rsp
 
You can find the log of this install session at:
 /tmp/GridSetupActions2020-04-05_06-00-12PM/gridSetupActions2020-04-05_06-00-12PM.log
 
As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/19.0.0/grid/root.sh
 
Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac1, rac2]
Execute /u01/app/19.0.0/grid/root.sh on the following nodes:
[rac1, rac2]
 
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.
 
Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
        /u01/app/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /home/oracle/gridsetup.rsp [-silent]
 
 
Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2020-04-05_06-00-12PM
[oracle@rac1 ~]$

Running the root scripts:

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
 
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
 
[root@rac1 ~]# /u01/app/19.0.0/grid/root.sh
Check /u01/app/19.0.0/grid/install/root_rac1_2020-04-05_18-46-58-642523142.log for the output of root script
[root@rac1 ~]# cat /u01/app/19.0.0/grid/install/root_rac1_2020-04-05_18-46-58-642523142.log
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac1/crsconfig/rootcrs_rac1_2020-04-05_06-47-06PM.log
2020/04/05 18:47:12 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/04/05 18:47:12 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/04/05 18:47:13 CLSRSC-363: User ignored prerequisites during installation
2020/04/05 18:47:13 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/04/05 18:47:15 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/04/05 18:47:16 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/04/05 18:47:16 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/04/05 18:47:16 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/04/05 18:47:56 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/04/05 18:47:56 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/04/05 18:48:00 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/04/05 18:48:12 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/04/05 18:48:12 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/04/05 18:48:16 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/04/05 18:48:16 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/04/05 18:48:35 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/04/05 18:48:39 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/04/05 18:48:43 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/04/05 18:48:47 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
 
ASM has been created and started successfully.
 
[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-200405PM064920.log for details.
 
2020/04/05 18:50:08 CLSRSC-482: Running command: '/u01/app/19.0.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk fc725a8fc3324f75bf884cb442aaeff2.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   fc725a8fc3324f75bf884cb442aaeff2 (/dev/asm_data1) [DATA]
Located 1 voting disk(s).
2020/04/05 18:51:22 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/04/05 18:52:54 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/04/05 18:52:54 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/04/05 18:54:17 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/04/05 18:54:41 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#
 
[root@rac2 ~]# /u01/app/19.0.0/grid/root.sh
Check /u01/app/19.0.0/grid/install/root_rac2_2020-04-05_18-55-19-857098030.log for the output of root script
[root@rac2 ~]# cat /u01/app/19.0.0/grid/install/root_rac2_2020-04-05_18-55-19-857098030.log
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac2/crsconfig/rootcrs_rac2_2020-04-05_06-55-27PM.log
2020/04/05 18:55:31 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/04/05 18:55:31 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/04/05 18:55:31 CLSRSC-363: User ignored prerequisites during installation
2020/04/05 18:55:31 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/04/05 18:55:33 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/04/05 18:55:33 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/04/05 18:55:33 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/04/05 18:55:34 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/04/05 18:55:37 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/04/05 18:55:37 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/04/05 18:55:47 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/04/05 18:55:47 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/04/05 18:55:48 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/04/05 18:55:48 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/04/05 18:55:58 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/04/05 18:56:05 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/04/05 18:56:06 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/04/05 18:56:07 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/04/05 18:56:08 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2020/04/05 18:56:17 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/04/05 18:57:03 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/04/05 18:57:03 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/04/05 18:57:17 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/04/05 18:57:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#

Running the last script to finish the installation (this throws a warning, as the GIMR creation was skipped):

[oracle@rac1 ~]$ /u01/app/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /home/oracle/gridsetup.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...
 
You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2020-04-05_06-58-38PM
 
You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2020-04-05_06-58-38PM.log
Configuration failed.
[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@rac1 ~]$

Checking the status of the cluster and its resources:

[oracle@rac1 ~]$ /u01/app/19.0.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[oracle@rac1 ~]$

As a last action in this part, create the FRA diskgroup:

[oracle@rac1 ~]$ export ORACLE_HOME=/u01/app/19.0.0/grid
[oracle@rac1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac1 ~]$ export ORACLE_SID=+ASM1
[oracle@rac1 ~]$ sqlplus / as sysasm
 
SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 5 19:05:15 2020
Version 19.3.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
 
SQL> create diskgroup FRA external redundancy disk '/dev/asm_fra1' attribute 'au_size'='4M';
 
Diskgroup created.
 
SQL>
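
After exiting SQL*Plus, the diskgroups can also be checked with asmcmd in the same session; both DATA and FRA should show up as MOUNTED:

[oracle@rac1 ~]$ asmcmd lsdg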

In 19c, the DB installer behaves similarly to the GI software; we just need to unzip it to its final location:

[oracle@rac1 ~]$ mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1
[oracle@rac1 ~]$ unzip -oq /install/Database/19/db/V982063-01.zip -d /u01/app/oracle/product/19.0.0/dbhome_1

Next, prepare the response file:

[oracle@rac1 ~]$ cp /u01/app/oracle/product/19.0.0/dbhome_1/install/response/db_install.rsp db_install.rsp
[oracle@rac1 ~]$ vi db_install.rsp
[oracle@rac1 ~]$ grep "=" db_install.rsp | grep -v "^#" | grep -v "=$"
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=dba
oracle.install.db.OSBACKUPDBA_GROUP=dba
oracle.install.db.OSDGDBA_GROUP=dba
oracle.install.db.OSKMDBA_GROUP=dba
oracle.install.db.OSRACDBA_GROUP=dba
oracle.install.db.CLUSTER_NODES=rac1,rac2
[oracle@rac1 ~]$

Install the database software:

[oracle@rac1 ~]$ /u01/app/oracle/product/19.0.0/dbhome_1/runInstaller -silent -responsefile /home/oracle/db_install.rsp -waitforcompletion -ignorePrereqFailure
Launching Oracle Database Setup Wizard...
 
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/InstallActions2020-04-05_07-28-58PM/installActions2020-04-05_07-28-58PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/InstallActions2020-04-05_07-28-58PM/installActions2020-04-05_07-28-58PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/oracle/product/19.0.0/dbhome_1/install/response/db_2020-04-05_07-28-58PM.rsp
 
You can find the log of this install session at:
 /u01/app/oraInventory/logs/InstallActions2020-04-05_07-28-58PM/installActions2020-04-05_07-28-58PM.log
 
As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.0.0/dbhome_1/root.sh
 
Execute /u01/app/oracle/product/19.0.0/dbhome_1/root.sh on the following nodes:
[rac1, rac2]
 
 
Successfully Setup Software with warning(s).
[oracle@rac1 ~]$

Running the root script on both nodes:

[root@rac1 ~]# /u01/app/oracle/product/19.0.0/dbhome_1/root.sh
Check /u01/app/oracle/product/19.0.0/dbhome_1/install/root_rac1_2020-04-05_19-44-40-433417847.log for the output of root script
[root@rac1 ~]#
 
[root@rac2 ~]# /u01/app/oracle/product/19.0.0/dbhome_1/root.sh
Check /u01/app/oracle/product/19.0.0/dbhome_1/install/root_rac2_2020-04-05_19-45-14-369621583.log for the output of root script
[root@rac2 ~]#

Finally, the database can be created. While using the “General Purpose” template is generally not recommended, it is still the quickest method for creating a database, so for the sake of simplicity, the command below creates a database using the “General Purpose” template:

[oracle@rac1 ~]$ /u01/app/oracle/product/19.0.0/dbhome_1/bin/dbca -silent -ignorePrereqFailure -createDatabase -templateName General_Purpose.dbc -gdbName ORCL -characterSet AL32UTF8 -databaseConfigType RAC -datafileDestination +DATA -totalMemory 2048 -nationalCharacterSet AL16UTF16 -nodelist rac1,rac2 -adminManaged -recoveryAreaDestination +FRA -recoveryAreaSize 20480 -sid ORCL -sysPassword Oracle123 -systemPassword Oracle123 -useOMF true
Prepare for db operation
8% complete
Copying database files
33% complete
Creating and starting Oracle instance
34% complete
35% complete
39% complete
42% complete
45% complete
50% complete
Creating cluster database views
52% complete
67% complete
Completing Database Creation
71% complete
73% complete
75% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/ORCL.
Database Information:
Global Database Name:ORCL
System Identifier(SID) Prefix:ORCL
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCL/ORCL1.log" for further details.
[oracle@rac1 ~]$

Once it is complete, check the status:

[oracle@rac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
[oracle@rac1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac1 ~]$ export ORACLE_SID=ORCL1
[oracle@rac1 ~]$ srvctl status database -d orcl
Instance ORCL1 is running on node rac1
Instance ORCL2 is running on node rac2
[oracle@rac1 ~]$ lsnrctl status
 
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 05-APR-2020 20:58:16
 
Copyright (c) 1991, 2019, Oracle.  All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                05-APR-2020 20:46:58
Uptime                    0 days 0 hr. 11 min. 18 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.0.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/rac1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.101)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.103)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_FRA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "ORCL" has 1 instance(s).
  Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCLXDB" has 1 instance(s).
  Instance "ORCL1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 ~]$ sqlplus sys/Oracle123@\'rac-scan:1521/orcl\' as sysdba
 
SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 5 20:58:31 2020
Version 19.3.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
 
SQL>
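
As a final check from this session, both instances should be visible through the SCAN connection:

SQL> select inst_id, instance_name, status from gv$instance;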