
Introduction to Oracle Clusterware

    Oracle Clusterware is portable cluster software that provides comprehensive multi-tiered high availability and resource management for consolidated environments. It supports clustering of independent servers so that they cooperate as a single system.

    Oracle Clusterware is the integrated foundation for Oracle Real Application Clusters (Oracle RAC), and the high-availability and resource management framework for all applications on any major platform.

    Oracle Flex Clusters

    In Oracle Clusterware 12c release 2 (12.2), all clusters are configured as Oracle Flex Clusters, meaning that a cluster is configured with one or more Hub Nodes and can support a large number of Leaf Nodes. Clusters currently configured under older versions of Oracle Clusterware are converted in place as part of the upgrade process, including the activation of Oracle Flex ASM (which is a requirement for Oracle Flex Clusters).

    Figure 1-1 illustrates the Oracle Flex Cluster architecture.

    All nodes in an Oracle Flex Cluster belong to a single Oracle Grid Infrastructure cluster. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery.

    Oracle Flex Clusters contain two types of nodes arranged in a hub and spoke architecture:
    • Hub Nodes: An Oracle Flex Cluster must have at least one Hub Node and can have up to 64.
    • Leaf Nodes: An Oracle Flex Cluster can have many more Leaf Nodes than Hub Nodes.

    Hub Nodes and Leaf Nodes can host different types of applications.

    An Oracle Flex Cluster always operates with one or more Hub Nodes. Leaf Nodes are optional, and can exist only as members of a cluster that includes at least one Hub Node.

    The benefits of using a cluster include:

    • Scalability of applications (including Oracle RAC and Oracle RAC One Node databases)
    • Reduced total cost of ownership for the infrastructure, by providing a scalable system built from low-cost commodity hardware
    • Ability to fail over applications and databases to surviving nodes
    • Increased throughput on demand for cluster-aware applications, by adding servers to a cluster to increase cluster resources
    • Increased throughput for cluster-aware applications, by enabling the applications to run on all of the nodes in a cluster
    • Ability to start applications in a planned order that ensures dependent processes start in the correct sequence
    • Ability to monitor processes and restart them if they stop
    • Elimination of unplanned downtime due to hardware or software malfunctions
    • Reduction or elimination of planned downtime for software maintenance

    Oracle Clusterware has two stored components, besides the binaries: the voting files, which record node membership information, and the Oracle Cluster Registry (OCR), which records cluster configuration information. The voting files and the OCR must reside on shared storage available to all cluster member nodes.
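    The role of the voting files in membership decisions can be pictured with a small sketch (hypothetical illustration, not Oracle code): a node remains a cluster member only while it can access a majority of the voting files, which is why an odd number of files on physically separate storage is recommended.

    ```python
    # Hypothetical sketch of majority-based voting-file quorum (not Oracle code).
    def node_survives(voting_files_visible: int, voting_files_total: int) -> bool:
        """A node stays in the cluster only if it can access a strict
        majority of the configured voting files."""
        return voting_files_visible > voting_files_total // 2

    # With 3 voting files on separate storage, losing access to one is survivable:
    print(node_survives(2, 3))  # True
    # But a node that can see only 1 of 3 voting files is evicted:
    print(node_survives(1, 3))  # False
    ```

    The same majority rule explains the recommendation of three or five files: an even count tolerates no more failures than the next-lower odd count.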

    Clusterware Architectures

    Oracle Clusterware provides you with two different deployment architecture choices for new clusters during the installation process. You can choose either a standalone cluster or a cluster domain, which is used to host applications and databases.

    A standalone cluster is an Oracle Flex Cluster that has one or more Hub Nodes (for database instances) and zero or more Leaf Nodes. Shared storage is locally mounted on each of the Hub Nodes, and an Oracle ASM instance is available to all Hub Nodes. In addition, the management database is stored and accessed locally within the cluster. This deployment is also used for an upgraded, pre-existing cluster.

    A cluster domain groups multiple cluster configurations for management purposes and makes use of shared services available within that cluster domain. The cluster configurations within a cluster domain are:

    • Domain services cluster: A cluster that provides centralized services to other clusters within the Cluster Domain. Services can include a centralized Grid Infrastructure Management Repository (on which the management database for each of the clusters within the Cluster Domain resides), the trace file analyzer service, an optional Rapid Home Provisioning service, and, very likely, a consolidated Oracle ASM storage management service.
    • Database member cluster: A cluster that is intended to support Oracle RAC or Oracle RAC One database instances, the management database for which is off-loaded to the domain services cluster, and that can be configured with local Oracle ASM storage management or make use of the consolidated Oracle ASM storage management service offered by the domain services cluster.
    • Application member cluster: A cluster that is configured to support applications without the resources necessary to support Oracle RAC or Oracle RAC One database instances. This cluster type has no configured local shared storage but it is intended to provide a highly available, scalable platform for running application processes.

    Oracle Clusterware Software Concepts and Requirements

    Oracle Clusterware uses voting files to provide fencing and cluster node membership determination. The Oracle Cluster Registry (OCR) provides cluster configuration information. Collectively, voting files and OCR are referred to as Oracle Clusterware files.

    Oracle Clusterware files must be stored on Oracle ASM. If the underlying storage for the Oracle ASM disks is not protected by hardware redundancy, such as RAID, then Oracle recommends that you configure multiple locations for the OCR and voting files. The voting files and OCR are described as follows:

    • Voting files: Oracle Clusterware uses voting files to determine which nodes are members of a cluster. You can configure voting files on Oracle ASM, or on shared storage. If you configure voting files on Oracle ASM, then you do not need to manage them manually: an appropriate number of voting files is created automatically, depending on the redundancy of the disk group. If you do not configure voting files on Oracle ASM, then for high availability Oracle recommends a minimum of three voting files on physically separate storage, which avoids a single point of failure. If you configure a single voting file, then you must use external mirroring to provide redundancy. Oracle recommends that you use no more than five voting files, even though Oracle supports a maximum of 15.
    • Oracle Cluster Registry: Oracle Clusterware uses the Oracle Cluster Registry (OCR) to store and manage information about the components that it controls, such as Oracle RAC databases, listeners, virtual IP addresses (VIPs), services, and applications. OCR stores configuration information as a series of key-value pairs in a tree structure. To ensure cluster high availability, Oracle recommends that you define multiple OCR locations. In addition:
      • You can have up to five OCR locations
      • Each OCR location must reside on shared storage that is accessible by all of the nodes in the cluster
      • You can replace a failed OCR location online if it is not the only OCR location
      • You must update OCR through supported utilities such as Oracle Enterprise Manager, the Oracle Clusterware Control Utility (CRSCTL), the Server Control Utility (SRVCTL), the OCR configuration utility (OCRCONFIG), or the Database Configuration Assistant (DBCA)
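    The tree of key-value pairs that OCR maintains can be pictured as a nested namespace. The sketch below is purely illustrative (the keys and values shown are hypothetical, not real OCR contents):

    ```python
    # Hypothetical sketch of OCR-style key-value storage in a tree (illustrative
    # keys only; real OCR contents and key names differ).
    ocr = {
        "SYSTEM": {
            "css": {"misscount": 30},                 # example cluster-wide setting
        },
        "DATABASE": {
            "orcl": {"instances": ["orcl1", "orcl2"]},  # example resource data
        },
    }

    def ocr_get(tree: dict, key_path: str):
        """Resolve a dotted key path such as 'SYSTEM.css.misscount'
        by walking down the tree one level per path component."""
        node = tree
        for part in key_path.split("."):
            node = node[part]
        return node

    print(ocr_get(ocr, "SYSTEM.css.misscount"))  # 30
    ```

    In practice you never read or write this structure directly; you go through the supported utilities listed above, which validate changes and keep the redundant OCR locations consistent.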

    Oracle Clusterware Network Configuration Concepts

    Oracle Clusterware enables a dynamic Oracle Grid Infrastructure through the self-management of the network requirements for the cluster.

    Oracle Clusterware 12c supports the use of Dynamic Host Configuration Protocol (DHCP) or stateless address autoconfiguration for the VIP addresses and the Single Client Access Name (SCAN) address, but not for the public address. DHCP provides dynamic assignment of IPv4 VIP addresses, while stateless address autoconfiguration provides dynamic assignment of IPv6 VIP addresses.

    When you are using Oracle RAC, all of the clients must be able to reach the database, which means that the clients must resolve VIP and SCAN names to all of the VIP and SCAN addresses, respectively. This problem is solved by the addition of Grid Naming Service (GNS) to the cluster. GNS is linked to the corporate Domain Name Service (DNS) so that clients can resolve host names to these dynamic addresses and transparently connect to the cluster and the databases. Oracle supports using GNS without DHCP or zone delegation in Oracle Clusterware 12c (as with Oracle Flex ASM server clusters, which you can configure without zone delegation or dynamic networks).

    Single Client Access Name (SCAN)

    Oracle Clusterware can use the Single Client Access Name (SCAN) for dynamic VIP address configuration, removing the need to perform manual server configuration.

    The SCAN is a domain name registered to at least one and up to three IP addresses, either in DNS or GNS. When using GNS and DHCP, Oracle Clusterware configures the VIP addresses for the SCAN name that is provided during cluster configuration.

    The node VIP and the three SCAN VIPs are obtained from the DHCP server when using GNS. If a new server joins the cluster, then Oracle Clusterware dynamically obtains the required VIP address from the DHCP server, updates the cluster resource, and makes the server accessible through GNS.
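    The effect of registering one SCAN name against up to three addresses can be sketched as follows (the name and addresses here are hypothetical; real resolution is performed by DNS or GNS, not by application code):

    ```python
    import random

    # Hypothetical DNS/GNS zone: one SCAN name mapped to three VIP addresses.
    scan_zone = {
        "mycluster-scan.example.com": [
            "192.0.2.10", "192.0.2.11", "192.0.2.12",
        ],
    }

    def resolve_scan(name: str) -> list:
        """Return all addresses registered for the SCAN name. DNS typically
        rotates the order (round-robin), so clients spread their connection
        attempts across the SCAN listeners."""
        addrs = list(scan_zone[name])
        random.shuffle(addrs)  # stand-in for DNS round-robin ordering
        return addrs

    addrs = resolve_scan("mycluster-scan.example.com")
    # A client tries each address in turn until a SCAN listener answers.
    print(sorted(addrs))  # ['192.0.2.10', '192.0.2.11', '192.0.2.12']
    ```

    Because clients connect to the single SCAN name rather than to individual node addresses, nodes can be added or removed without reconfiguring the clients.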

    Manual Address Configuration

    You have the option to manually configure addresses, instead of using GNS and DHCP for dynamic configuration.

    In manual address configuration, you configure the following:

    • One public address and host name for each node.
    • One VIP address for each node. You must assign a VIP address to each node in the cluster. Each VIP address must be on the same subnet as the public IP address for the node and should be an address that is assigned a name in the DNS. Each VIP address must also be unused and unpingable from within the network before you install Oracle Clusterware.
    • Up to three SCAN addresses for the entire cluster. Note: The SCAN must resolve to at least one address on the public network. For high availability and scalability, Oracle recommends that you configure the SCAN to resolve to three addresses on the public network.

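    As an illustration, the manually configured addresses for a two-node cluster might look like the following /etc/hosts-style entries (all names and addresses here are hypothetical):

    ```
    # Public addresses (one per node)
    192.0.2.1    node1.example.com      node1
    192.0.2.2    node2.example.com      node2

    # VIP addresses (one per node, on the same subnet as the public
    # addresses, unused and unpingable before installation)
    192.0.2.101  node1-vip.example.com  node1-vip
    192.0.2.102  node2-vip.example.com  node2-vip
    ```

    The SCAN itself would be defined in DNS, resolving to up to three addresses on the public network, rather than in a hosts file.
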
    Cluster Ready Services Stack

    The Cluster Ready Services (CRS) technology stack leverages several processes to manage various services.

    The following list describes these processes:

    • Cluster Ready Services (CRS): The primary program for managing high availability operations in a cluster. The CRS daemon (CRSD) manages cluster resources based on the configuration information that is stored in OCR for each resource. This includes start, stop, monitor, and failover operations. The CRSD process generates events when the status of a resource changes. When you have Oracle RAC installed, the CRSD process monitors the Oracle database instance, listener, and so on, and automatically restarts these components when a failure occurs.
    • Cluster Synchronization Services (CSS): Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using certified third-party clusterware, then CSS processes interface with your clusterware to manage node membership information. The cssdagent process monitors the cluster and provides I/O fencing. This service was formerly provided by Oracle Process Monitor Daemon (oprocd), also known as OraFenceService on Windows. A cssdagent failure may result in Oracle Clusterware restarting the node.
    • Oracle ASM: Provides disk management for Oracle Clusterware and Oracle Database.
    • Cluster Time Synchronization Service (CTSS): Provides time management in a cluster for Oracle Clusterware.
    • Event Management (EVM): A background process that publishes events that Oracle Clusterware creates.
    • Grid Naming Service (GNS): Handles requests sent by external DNS servers, performing name resolution for names defined by the cluster.
    • Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. This process runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
    • Oracle Notification Service (ONS): A publish and subscribe service for communicating Fast Application Notification (FAN) events.
    • Oracle Root Agent (orarootagent): A specialized oraagent process that helps the CRSD manage resources owned by root, such as the network and the Grid virtual IP address.

    The Cluster Synchronization Services (CSS), Event Management (EVM), and Oracle Notification Service (ONS) components communicate with other cluster component layers on other nodes in the same cluster database environment. These components are also the main communication links between Oracle Database, applications, and the Oracle Clusterware high availability components. In addition, these background processes monitor and manage database operations.
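    The monitor-and-restart behavior that CRSD applies to registered resources can be sketched abstractly as a check/restart loop (a hypothetical resource model for illustration, not the actual CRSD implementation):

    ```python
    # Hypothetical sketch of a CRSD-style check/restart pass (not Oracle code).
    class Resource:
        def __init__(self, name: str):
            self.name = name
            self.running = True
            self.restart_count = 0

        def check(self) -> bool:
            """CRSD periodically runs a check action for each resource."""
            return self.running

        def start(self) -> None:
            """Start action invoked when a check fails."""
            self.running = True
            self.restart_count += 1

    def monitor(resources: list) -> list:
        """One monitoring pass: restart any resource whose check fails,
        and report which resources were restarted."""
        restarted = []
        for res in resources:
            if not res.check():
                res.start()
                restarted.append(res.name)
        return restarted

    listener = Resource("ora.LISTENER.lsnr")   # hypothetical resource name
    listener.running = False                   # simulate a listener failure
    print(monitor([listener]))                 # ['ora.LISTENER.lsnr']
    ```

    The real CRSD drives these check, start, stop, and failover actions from the per-resource configuration stored in OCR, and publishes events as resource status changes.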

    Oracle High Availability Services Technology Stack

    The Oracle High Availability Services technology stack uses several processes to provide Oracle Clusterware high availability.

    The following list describes the processes in the Oracle High Availability Services technology stack:

    • appagent: Protects any resources of the application resource type used in previous versions of Oracle Clusterware.
    • Cluster Logger Service (ologgerd): Receives information from all the nodes in the cluster and persists it in an Oracle Grid Infrastructure Management Repository-based database. This service runs on only two nodes in a cluster.
    • Grid Interprocess Communication (GIPC): A support daemon that enables Redundant Interconnect Usage.
    • Grid Plug and Play (GPNPD): Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.
    • Multicast Domain Name Service (mDNS): Used by Grid Plug and Play to locate profiles in the cluster, and by GNS to perform name resolution. The mDNS process is a background process on Linux and UNIX, and a service on Windows.
    • Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. This process manages daemons that run as the Oracle Clusterware owner, such as the GIPC and GPNPD daemons. Note: This process is distinctly different from the process of the same name that runs in the Cluster Ready Services technology stack.
    • Oracle Root Agent (orarootagent): A specialized oraagent process that helps the CRSD manage resources owned by root, such as the Cluster Health Monitor (CHM). Note: This process is distinctly different from the process of the same name that runs in the Cluster Ready Services technology stack.
    • scriptagent: Protects resources of resource types other than application when using shell or batch scripts to protect an application.
    • System Monitor Service (osysmond): The monitoring and operating system metric collection service that sends the data to the cluster logger service. This service runs on every node in a cluster.
    The following table lists the processes and services associated with each Oracle Clusterware component. Processes marked (r) run as the root user.

    Oracle Clusterware Component | Linux/UNIX Processes | Windows Processes
    CRS | crsd.bin (r) | crsd.exe
    CSS | ocssd.bin, cssdmonitor, cssdagent | ocssd.exe, cssdmonitor.exe, cssdagent.exe
    CTSS | octssd.bin (r) | octssd.exe
    EVM | evmd.bin, evmlogger.bin | evmd.exe
    GIPC | gipcd.bin | (none)
    GNS | gnsd (r) | gnsd.exe
    Grid Plug and Play | gpnpd.bin | gpnpd.exe
    LOGGER | ologgerd.bin (r) | ologgerd.exe
    Master Diskmon | diskmon.bin | (none)
    mDNS | mdnsd.bin | mDNSResponder.exe
    Oracle agent | oraagent.bin (Oracle Clusterware 12c release 1 (12.1) and 11g release 2 (11.2)), or racgmain and racgimon (Oracle Clusterware 11g release 1 (11.1)) | oraagent.exe
    Oracle High Availability Services | ohasd.bin (r) | ohasd.exe
    ONS | ons | ons.exe
    Oracle root agent | orarootagent (r) | orarootagent.exe
    SYSMON | osysmond.bin (r) | osysmond.exe

    Figure 1-2 illustrates cluster startup.

    See Also:

    • Oracle Cluster Registry
    • Voting Disk